diff --git a/.gitignore b/.gitignore
index f3de0c8..9539f72 100644
--- a/.gitignore
+++ b/.gitignore
@@ -38,12 +38,14 @@ Thumbs.db
# Code Index MCP specific files
.code_indexer/
-# Test files
-test_*.py
# Claude Code generated files
-CLAUDE.md
+CLAUDE.local.md
.claude/
.claude_chat/
claude_*
COMMIT_MESSAGE.txt
+RELEASE_NOTE.txt
+
+.llm-context/
+AGENTS.md
diff --git a/.pylintrc b/.pylintrc
new file mode 100644
index 0000000..5bc48e5
--- /dev/null
+++ b/.pylintrc
@@ -0,0 +1,24 @@
+[MAIN]
+# Ignore auto-generated protobuf files
+ignore-paths=src/code_index_mcp/scip/proto/scip_pb2.py
+
+[MESSAGES CONTROL]
+# Disable specific warnings for protobuf generated code
+disable=
+ # Generated code warnings
+ protected-access,
+ bad-indentation,
+ line-too-long,
+ # Other common warnings we might want to disable globally
+ unused-import,
+ logging-fstring-interpolation
+
+[FORMAT]
+# Maximum number of characters on a single line
+max-line-length=100
+
+[DESIGN]
+# Maximum number of arguments for function / method
+max-args=7
+# Maximum number of locals for function / method body
+max-locals=20
\ No newline at end of file
diff --git a/AGENTS.md b/AGENTS.md
new file mode 100644
index 0000000..886f335
--- /dev/null
+++ b/AGENTS.md
@@ -0,0 +1,25 @@
+# Repository Guidelines
+
+## Project Structure & Module Organization
+Code Index MCP lives in `src/code_index_mcp/`, with `indexing/` managing builders, `services/` exposing MCP tool implementations, `search/` coordinating query utilities, and `utils/` housing cross-cutting helpers. The lightweight CLI bootstrapper is `run.py`, which adds `src/` to `PYTHONPATH` before invoking `code_index_mcp.server`. Sample corpora for language regression tests reside under `test/sample-projects/` (for example `python/user_management/`). Reserve `tests/` for runnable suites, and avoid checking in generated `__pycache__` artifacts.
+
+## Build, Test, and Development Commands
+Install dependencies with `uv sync` after cloning. Use `uv run code-index-mcp` to launch the MCP server directly, or `uv run python run.py` when you need the local `sys.path` shim. During development, `uv run code-index-mcp --help` will list available CLI flags, and `uv run python -m code_index_mcp.server` mirrors the published entry point for debugging.
+
+## Coding Style & Naming Conventions
+Target Python 3.10+ and follow the `.pylintrc` configuration: 4-space indentation, 100-character line limit, and restrained function signatures (<= 7 parameters). Modules and functions stay `snake_case`, classes use `PascalCase`, and constants remain uppercase with underscores. Prefer explicit imports from sibling packages (`from .services import ...`) and keep logging to stderr as implemented in `server.py`.
+
+## Testing Guidelines
+Automated tests should live under `tests/`, mirroring the package hierarchy (`tests/indexing/test_shallow_index.py`, etc.). Use `uv run pytest` (with optional `-k` selectors) for unit and integration coverage, and stage representative fixtures inside `test/sample-projects/` when exercising new language strategies. Document expected behaviors in fixtures' README files or inline comments, and fail fast if tree-sitter support is not available for a language you add.
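As a concrete illustration of the mirrored layout, a minimal suite entry might look like the sketch below. The file name follows the convention above, but the helper and assertions are hypothetical stand-ins, not the project's real modules:

```python
# tests/indexing/test_shallow_index.py (illustrative only)

def relative_paths(paths, root):
    """Toy helper standing in for index logic: keep paths under `root`, relativized."""
    prefix = root.rstrip("/") + "/"
    return sorted(p[len(prefix):] for p in paths if p.startswith(prefix))

def test_relative_paths():
    paths = ["/proj/src/a.py", "/proj/src/b.py", "/other/c.py"]
    assert relative_paths(paths, "/proj") == ["src/a.py", "src/b.py"]
```

Run it with `uv run pytest tests/indexing -k relative_paths` to exercise only this module.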
+
+## Commit & Pull Request Guidelines
+Follow the Conventional Commits style seen in history (`feat`, `fix`, `refactor(scope): summary`). Reference issue numbers when relevant and keep subjects under 72 characters. Pull requests should include: 1) a concise problem statement, 2) before/after behavior or performance notes, 3) instructions for reproducing test runs (`uv run pytest`, `uv run code-index-mcp`). Attach updated screenshots or logs when touching developer experience flows, and confirm the file watcher still transitions to "active" in manual smoke tests.
+
+## Agent Workflow Tips
+Always call `set_project_path` before invoking other tools, and prefer `search_code_advanced` with targeted `file_pattern` filters to minimize noise. When editing indexing strategies, run `refresh_index` in between changes to confirm cache rebuilds. Clean up temporary directories via `clear_settings` if you notice stale metadata, and document any new tooling you introduce in this guide.
+
+## Release Preparation Checklist
+- Update the project version everywhere it lives: `pyproject.toml`, `src/code_index_mcp/__init__.py`, and `uv.lock`.
+- Add a release note entry to `RELEASE_NOTE.txt` for the new version.
+- Commit the version bump (plus any release artifacts) and push the branch to `origin`.
+- Create a git tag for the new version and push the tag to `origin`.
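The version-bump step can be scripted. A hedged sketch of the `pyproject.toml` edit (it assumes the file uses the standard top-level `version = "..."` line; adapt for `__init__.py` and `uv.lock` as needed):

```python
import re

def bump_version(pyproject_text: str, new_version: str) -> str:
    """Rewrite the first `version = "..."` assignment in pyproject.toml content."""
    return re.sub(r'(?m)^version\s*=\s*"[^"]+"',
                  f'version = "{new_version}"', pyproject_text, count=1)
```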
diff --git a/README.md b/README.md
index d8ad972..5cabcbe 100644
--- a/README.md
+++ b/README.md
@@ -3,10 +3,12 @@
[](https://modelcontextprotocol.io)
-[](https://www.python.org/)
+[](https://www.python.org/)
[](LICENSE)
-A Model Context Protocol server for code indexing, searching, and analysis.
+**Intelligent code indexing and analysis for Large Language Models**
+
+Transform how AI understands your codebase with advanced search, analysis, and navigation capabilities.
@@ -14,260 +16,342 @@ A Model Context Protocol server for code indexing, searching, and analysis.
-## What is Code Index MCP?
+## Overview
+
+Code Index MCP is a [Model Context Protocol](https://modelcontextprotocol.io) server that bridges the gap between AI models and complex codebases. It provides intelligent indexing, advanced search capabilities, and detailed code analysis to help AI assistants understand and navigate your projects effectively.
+
+**Perfect for:** Code review, refactoring, documentation generation, debugging assistance, and architectural analysis.
+
+## Quick Start
+
+### 🚀 **Recommended Setup (Most Users)**
+
+The easiest way to get started with any MCP-compatible application:
+
+**Prerequisites:** Python 3.10+ and [uv](https://github.com/astral-sh/uv)
-Code Index MCP is a specialized MCP server that provides intelligent code indexing and analysis capabilities. It enables Large Language Models to interact with your code repositories, offering real-time insights and navigation through complex codebases.
+1. **Add to your MCP configuration** (e.g., `claude_desktop_config.json` or `~/.claude.json`):
+ ```json
+ {
+ "mcpServers": {
+ "code-index": {
+ "command": "uvx",
+ "args": ["code-index-mcp"]
+ }
+ }
+ }
+ ```
-This server integrates with the [Model Context Protocol](https://modelcontextprotocol.io) (MCP), a standardized way for AI models to interact with external tools and data sources.
+2. **Restart your application** – `uvx` automatically handles installation and execution
+
+3. **Start using** (give these prompts to your AI assistant):
+ ```
+ Set the project path to /Users/dev/my-react-app
+ Find all TypeScript files in this project
+ Search for "authentication" functions
+ Analyze the main App.tsx file
+ ```
+
+## Typical Use Cases
+
+**Code Review**: "Find all places using the old API"
+**Refactoring Help**: "Where is this function called?"
+**Learning Projects**: "Show me the main components of this React project"
+**Debugging**: "Search for all error handling related code"
## Key Features
-- **Project Indexing**: Recursively scans directories to build a searchable index of code files
-- **Advanced Search**: Intelligent search with automatic detection of ugrep, ripgrep, ag, or grep for enhanced performance
-- **Fuzzy Search**: Native fuzzy matching with ugrep, or safe fuzzy patterns for other tools
-- **File Analysis**: Get detailed insights about file structure, imports, and complexity
-- **Smart Filtering**: Automatically ignores build directories, dependencies, and non-code files
-- **Persistent Storage**: Caches indexes for improved performance across sessions
-- **Lazy Loading**: Search tools are detected only when needed for optimal startup performance
+### 🔍 **Intelligent Search & Analysis**
+- **Dual-Strategy Architecture**: Specialized tree-sitter parsing for 7 core languages, fallback strategy for 50+ file types
+- **Direct Tree-sitter Integration**: No regex fallbacks for specialized languages - fail fast with clear errors
+- **Advanced Search**: Auto-detects and uses the best available tool (ugrep, ripgrep, ag, or grep)
+- **Universal File Support**: Comprehensive coverage from advanced AST parsing to basic file indexing
+- **File Analysis**: Deep insights into structure, imports, classes, methods, and complexity metrics after running `build_deep_index`
+
+### 🗂️ **Multi-Language Support**
+- **7 Languages with Tree-sitter AST Parsing**: Python, JavaScript, TypeScript, Java, Go, Objective-C, Zig
+- **50+ File Types with Fallback Strategy**: C/C++, Rust, Ruby, PHP, and all other programming languages
+- **Document & Config Files**: Markdown, JSON, YAML, XML with appropriate handling
+- **Web Frontend**: Vue, React, Svelte, HTML, CSS, SCSS
+- **Database**: SQL variants, NoSQL, stored procedures, migrations
+- **Configuration**: JSON, YAML, XML, Markdown
+- **[View complete list](#supported-file-types)**
+
+### ⚡ **Real-time Monitoring & Auto-refresh**
+- **File Watcher**: Automatic index updates when files change
+- **Cross-platform**: Native OS file system monitoring
+- **Smart Processing**: Batches rapid changes to prevent excessive rebuilds
+- **Shallow Index Refresh**: Watches file changes and keeps the file list current; run a deep rebuild when you need symbol metadata
+
+### ⚡ **Performance & Efficiency**
+- **Tree-sitter AST Parsing**: Native syntax parsing for accurate symbol extraction
+- **Persistent Caching**: Stores indexes for lightning-fast subsequent access
+- **Smart Filtering**: Intelligent exclusion of build directories and temporary files
+- **Memory Efficient**: Optimized for large codebases
+- **Direct Dependencies**: No fallback mechanisms - fail fast with clear error messages
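The search-tool auto-detection described above can be sketched with `shutil.which`. This is a simplified illustration, not the server's implementation; the binary names (e.g. `ug` for ugrep, `rg` for ripgrep) are assumptions:

```python
import shutil

# Preference order from the feature list: ugrep, ripgrep, ag, then plain grep.
SEARCH_TOOLS = ["ug", "rg", "ag", "grep"]

def detect_search_tool(candidates=SEARCH_TOOLS):
    """Return the first candidate found on PATH, or None if none are installed."""
    for name in candidates:
        if shutil.which(name):
            return name
    return None
```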
## Supported File Types
-The server supports multiple programming languages and file extensions including:
-
-- Python (.py)
-- JavaScript/TypeScript (.js, .ts, .jsx, .tsx, .mjs, .cjs)
-- Frontend Frameworks (.vue, .svelte, .astro)
-- Java (.java)
-- C/C++ (.c, .cpp, .h, .hpp)
-- C# (.cs)
-- Go (.go)
-- Ruby (.rb)
-- PHP (.php)
-- Swift (.swift)
-- Kotlin (.kt)
-- Rust (.rs)
-- Scala (.scala)
-- Shell scripts (.sh, .bash)
-- Zig (.zig)
-- Web files (.html, .css, .scss, .less, .sass, .stylus, .styl)
-- Template engines (.hbs, .handlebars, .ejs, .pug)
-- **Database & SQL**:
- - SQL files (.sql, .ddl, .dml)
- - Database-specific (.mysql, .postgresql, .psql, .sqlite, .mssql, .oracle, .ora, .db2)
- - Database objects (.proc, .procedure, .func, .function, .view, .trigger, .index)
- - Migration & tools (.migration, .seed, .fixture, .schema, .liquibase, .flyway)
- - NoSQL & modern (.cql, .cypher, .sparql, .gql)
-- Documentation/Config (.md, .mdx, .json, .xml, .yml, .yaml)
-
-## Setup and Integration
-
-There are several ways to set up and use Code Index MCP, depending on your needs.
-
-### For General Use with Host Applications (Recommended)
-
-This is the easiest and most common way to use the server. It's designed for users who want to use Code Index MCP within an AI application like Claude Desktop.
-
-1. **Prerequisite**: Make sure you have Python 3.8+ and [uv](https://github.com/astral-sh/uv) installed.
-
-2. **Configure the Host App**: Add the following to your host application's MCP configuration file.
-
- *Claude Desktop* -> `claude_desktop_config.json`
-
- *Claude Code* -> `~/.claude.json`. There is one `mcpServers` for each project and one global
-
- ```json
- {
- "mcpServers": {
- "code-index": {
- "command": "uvx",
- "args": [
- "code-index-mcp"
- ]
- }
- }
- }
- ```
-
-4. **Restart the Host App**: After adding the configuration, restart the application. The `uvx` command will automatically handle the installation and execution of the `code-index-mcp` server in the background.
-
-### For Local Development
-
-If you want to contribute to the development of this project, follow these steps:
-
-1. **Clone the repository**:
- ```bash
- git clone https://github.com/johnhuang316/code-index-mcp.git
- cd code-index-mcp
- ```
-
-2. **Install dependencies** using `uv`:
- ```bash
- uv sync
- ```
-
-3. **Configure Your Host App for Local Development**: To make your host application (e.g., Claude Desktop) use your local source code, update its configuration file to execute the server via `uv run`. This ensures any changes you make to the code are reflected immediately when the host app starts the server.
-
- ```json
- {
- "mcpServers": {
- "code-index": {
- "command": "uv",
- "args": [
- "run",
- "code_index_mcp"
- ]
- }
- }
- }
- ```
+
+📁 Programming Languages (Click to expand)
-4. **Debug with the MCP Inspector**: To debug your local server, you also need to tell the inspector to use `uv run`.
- ```bash
- npx @modelcontextprotocol/inspector uv run code_index_mcp
- ```
+**Languages with Specialized Tree-sitter Strategies:**
+- **Python** (`.py`, `.pyw`) - Full AST analysis with class/method extraction and call tracking
+- **JavaScript** (`.js`, `.jsx`, `.mjs`, `.cjs`) - ES6+ class and function parsing with tree-sitter
+- **TypeScript** (`.ts`, `.tsx`) - Complete type-aware symbol extraction with interfaces
+- **Java** (`.java`) - Full class hierarchy, method signatures, and call relationships
+- **Go** (`.go`) - Struct methods, receiver types, and function analysis
+- **Objective-C** (`.m`, `.mm`) - Class/instance method distinction with +/- notation
+- **Zig** (`.zig`, `.zon`) - Function and struct parsing with tree-sitter AST
-### Manual Installation via pip (Alternative)
+**All Other Programming Languages:**
+All other programming languages use the **FallbackParsingStrategy**, which provides basic file indexing and metadata extraction. This includes:
+- **System & Low-Level:** C/C++ (`.c`, `.cpp`, `.h`, `.hpp`), Rust (`.rs`)
+- **Object-Oriented:** C# (`.cs`), Kotlin (`.kt`), Scala (`.scala`), Swift (`.swift`)
+- **Scripting & Dynamic:** Ruby (`.rb`), PHP (`.php`), Shell (`.sh`, `.bash`)
+- **And 40+ more file types** - All handled through the fallback strategy for basic indexing
-If you prefer to manage your Python packages manually with `pip`, you can install the server directly.
+
-1. **Install the package**:
- ```bash
- pip install code-index-mcp
- ```
+
+🌐 Web & Frontend (Click to expand)
-2. **Configure the Host App**: You will need to manually update your host application's MCP configuration to point to the installed script. Replace `"command": "uvx"` with `"command": "code-index-mcp"`.
+**Frameworks & Libraries:**
+- Vue (`.vue`)
+- Svelte (`.svelte`)
+- Astro (`.astro`)
- ```json
- {
- "mcpServers": {
- "code-index": {
- "command": "code-index-mcp",
- "args": []
- }
- }
- }
- ```
+**Styling:**
+- CSS (`.css`, `.scss`, `.less`, `.sass`, `.stylus`, `.styl`)
+- HTML (`.html`)
-## Available Tools
+**Templates:**
+- Handlebars (`.hbs`, `.handlebars`)
+- EJS (`.ejs`)
+- Pug (`.pug`)
-### Core Tools
+
-- **set_project_path**: Sets the base project path for indexing.
-- **search_code**: Enhanced search using external tools (ugrep/ripgrep/ag/grep) with fuzzy matching support.
-- **find_files**: Finds files in the project matching a given pattern.
-- **get_file_summary**: Gets a summary of a specific file, including line count, functions, imports, etc.
-- **refresh_index**: Refreshes the project index.
-- **get_settings_info**: Gets information about the project settings.
+
+🗄️ Database & SQL (Click to expand)
-### Utility Tools
+**SQL Variants:**
+- Standard SQL (`.sql`, `.ddl`, `.dml`)
+- Database-specific (`.mysql`, `.postgresql`, `.psql`, `.sqlite`, `.mssql`, `.oracle`, `.ora`, `.db2`)
-- **create_temp_directory**: Creates the temporary directory used for storing index data.
-- **check_temp_directory**: Checks the temporary directory used for storing index data.
-- **clear_settings**: Clears all settings and cached data.
-- **refresh_search_tools**: Manually re-detect available command-line search tools (e.g., ripgrep).
+**Database Objects:**
+- Procedures & Functions (`.proc`, `.procedure`, `.func`, `.function`)
+- Views & Triggers (`.view`, `.trigger`, `.index`)
-## Common Workflows and Examples
+**Migration & Tools:**
+- Migration files (`.migration`, `.seed`, `.fixture`, `.schema`)
+- Tool-specific (`.liquibase`, `.flyway`)
-Here’s a typical workflow for using Code Index MCP with an AI assistant like Claude.
+**NoSQL & Modern:**
+- Graph & Query (`.cql`, `.cypher`, `.sparql`, `.gql`)
-### 1. Set Project Path & Initial Indexing
+
-This is the first and most important step. When you set the project path, the server automatically creates a file index for the first time or loads a previously cached one.
+
+📄 Documentation & Config (Click to expand)
-**Example Prompt:**
+- Markdown (`.md`, `.mdx`)
+- Configuration (`.json`, `.xml`, `.yml`, `.yaml`)
+
+
+
+### 🛠️ **Development Setup**
+
+For contributing or local development:
+
+1. **Clone and install:**
+ ```bash
+ git clone https://github.com/johnhuang316/code-index-mcp.git
+ cd code-index-mcp
+ uv sync
+ ```
+
+2. **Configure for local development:**
+ ```json
+ {
+ "mcpServers": {
+ "code-index": {
+ "command": "uv",
+ "args": ["run", "code-index-mcp"]
+ }
+ }
+ }
+ ```
+
+3. **Debug with MCP Inspector:**
+ ```bash
+ npx @modelcontextprotocol/inspector uv run code-index-mcp
+ ```
+
+
+Alternative: Manual pip Installation
+
+If you prefer traditional pip management:
+
+```bash
+pip install code-index-mcp
```
-Please set the project path to C:\Users\username\projects\my-react-app
+
+Then configure:
+```json
+{
+ "mcpServers": {
+ "code-index": {
+ "command": "code-index-mcp",
+ "args": []
+ }
+ }
+}
```
-### 2. Refresh the Index (When Needed)
+
+
-If you make significant changes to your project files after the initial setup, you can manually refresh the index to ensure all tools are working with the latest information.
+## Available Tools
-**Example Prompt:**
+### 🏗️ **Project Management**
+| Tool | Description |
+|------|-------------|
+| **`set_project_path`** | Initialize indexing for a project directory |
+| **`refresh_index`** | Rebuild the shallow file index after file changes |
+| **`build_deep_index`** | Generate the full symbol index used by deep analysis |
+| **`get_settings_info`** | View current project configuration and status |
+
+*Run `build_deep_index` when you need symbol-level data; the default shallow index powers quick file discovery.*
+
+### 🔍 **Search & Discovery**
+| Tool | Description |
+|------|-------------|
+| **`search_code_advanced`** | Smart search with regex, fuzzy matching, and file filtering |
+| **`find_files`** | Locate files using glob patterns (e.g., `**/*.py`) |
+| **`get_file_summary`** | Analyze file structure, functions, imports, and complexity (requires deep index) |
+
+### 🔄 **Monitoring & Auto-refresh**
+| Tool | Description |
+|------|-------------|
+| **`get_file_watcher_status`** | Check file watcher status and configuration |
+| **`configure_file_watcher`** | Enable/disable auto-refresh and configure settings |
+
+### 🛠️ **System & Maintenance**
+| Tool | Description |
+|------|-------------|
+| **`create_temp_directory`** | Set up storage directory for index data |
+| **`check_temp_directory`** | Verify index storage location and permissions |
+| **`clear_settings`** | Reset all cached data and configurations |
+| **`refresh_search_tools`** | Re-detect available search tools (ugrep, ripgrep, etc.) |
+
+## Usage Examples
+
+### 🎯 **Quick Start Workflow**
+
+**1. Initialize Your Project**
```
-I've just added a few new components, please refresh the project index.
+Set the project path to /Users/dev/my-react-app
```
-*(The assistant would use the `refresh_index` tool)*
-
-### 3. Explore the Project Structure
+*Automatically indexes your codebase and creates searchable cache*
-Once the index is ready, you can find files using patterns (globs) to understand the codebase and locate relevant files.
+**2. Explore Project Structure**
+```
+Find all TypeScript component files in src/components
+```
+*Uses: `find_files` with pattern `src/components/**/*.tsx`*
-**Example Prompt:**
+**3. Analyze Key Files**
```
-Find all TypeScript component files in the 'src/components' directory.
+Give me a summary of src/api/userService.ts
```
-*(The assistant would use the `find_files` tool with a pattern like `src/components/**/*.tsx`)*
+*Uses: `get_file_summary` to show functions, imports, and complexity*
+*Tip: run `build_deep_index` first if you get a `needs_deep_index` response.*
-### 4. Analyze a Specific File
+### 🔍 **Advanced Search Examples**
-Before diving into the full content of a file, you can get a quick summary of its structure, including functions, classes, and imports.
+
+Code Pattern Search
-**Example Prompt:**
```
-Can you give me a summary of the 'src/api/userService.ts' file?
+Search for all function calls matching "get.*Data" using regex
```
-*(The assistant would use the `get_file_summary` tool)*
+*Finds: `getData()`, `getUserData()`, `getFormData()`, etc.*
-### 5. Search for Code
+
-With an up-to-date index, you can search for code snippets, function names, or any text pattern to find where specific logic is implemented.
+
+Fuzzy Function Search
-**Example: Simple Search**
```
-Search for all occurrences of the "processData" function.
+Find authentication-related functions with fuzzy search for 'authUser'
```
+*Matches: `authenticateUser`, `authUserToken`, `userAuthCheck`, etc.*
+
+
+
+
+Language-Specific Search
-**Example: Search with Fuzzy Matching**
```
-I'm looking for a function related to user authentication, it might be named 'authUser', 'authenticateUser', or something similar. Can you do a fuzzy search for 'authUser'?
+Search for "API_ENDPOINT" only in Python files
```
+*Uses: `search_code_advanced` with `file_pattern: "*.py"`*
+
+
+
+
+Auto-refresh Configuration
-**Example: Search within Specific Files**
```
-Search for the string "API_ENDPOINT" only in Python files.
+Configure automatic index updates when files change
```
-*(The assistant would use the `search_code` tool with the `file_pattern` parameter set to `*.py`)*
-
-## Development
+*Uses: `configure_file_watcher` to enable/disable monitoring and set debounce timing*
-### Building from Source
+
-1. Clone the repository:
+
+Project Maintenance
-```bash
-git clone https://github.com/username/code-index-mcp.git
-cd code-index-mcp
```
+I added new components, please refresh the project index
+```
+*Uses: `refresh_index` to update the searchable cache*
-2. Install dependencies:
+
-```bash
-uv sync
-```
+## Troubleshooting
-3. Run the server locally:
+### 🔄 **Auto-refresh Not Working**
-```bash
-uv run code_index_mcp
-```
+If automatic index updates aren't working when files change, try:
+- `pip install watchdog` (may resolve environment isolation issues)
+- Use manual refresh: Call the `refresh_index` tool after making file changes
+- Check file watcher status: Use `get_file_watcher_status` to verify monitoring is active
-## Debugging
+## Development & Contributing
-You can use the MCP inspector to debug the server:
+### 🔧 **Building from Source**
+```bash
+git clone https://github.com/johnhuang316/code-index-mcp.git
+cd code-index-mcp
+uv sync
+uv run code-index-mcp
+```
+### 🐛 **Debugging**
```bash
npx @modelcontextprotocol/inspector uvx code-index-mcp
```
-## License
-
-[MIT License](LICENSE)
-
-## Contributing
-
+### 🤝 **Contributing**
Contributions are welcome! Please feel free to submit a Pull Request.
-## Languages
+---
+
+### 📜 **License**
+[MIT License](LICENSE)
+### 🌐 **Translations**
- [繁體中文](README_zh.md)
+- [日本語](README_ja.md)
diff --git a/README_ja.md b/README_ja.md
new file mode 100644
index 0000000..79059b1
--- /dev/null
+++ b/README_ja.md
@@ -0,0 +1,379 @@
+# Code Index MCP
+
+
+
+[](https://modelcontextprotocol.io)
+[](https://www.python.org/)
+[](LICENSE)
+
+**大規模言語モデルのためのインテリジェントコードインデックス作成と解析**
+
+高度な検索、解析、ナビゲーション機能で、AIのコードベース理解を根本的に変革します。
+
+
+
+
+
+
+
+## 概要
+
+Code Index MCPは、AIモデルと複雑なコードベースの橋渡しをする[Model Context Protocol](https://modelcontextprotocol.io)サーバーです。インテリジェントなインデックス作成、高度な検索機能、詳細なコード解析を提供し、AIアシスタントがプロジェクトを効果的に理解しナビゲートできるようにします。
+
+**最適な用途:**コードレビュー、リファクタリング、ドキュメント生成、デバッグ支援、アーキテクチャ解析。
+
+## クイックスタート
+
+### 🚀 **推奨セットアップ(ほとんどのユーザー)**
+
+任意のMCP対応アプリケーションで開始する最も簡単な方法:
+
+**前提条件:** Python 3.10+ および [uv](https://github.com/astral-sh/uv)
+
+1. **MCP設定に追加** (例:`claude_desktop_config.json` または `~/.claude.json`):
+ ```json
+ {
+ "mcpServers": {
+ "code-index": {
+ "command": "uvx",
+ "args": ["code-index-mcp"]
+ }
+ }
+ }
+ ```
+
+2. **アプリケーションを再起動** – `uvx`がインストールと実行を自動処理
+
+3. **使用開始**(AIアシスタントにこれらのプロンプトを与える):
+ ```
+ プロジェクトパスを/Users/dev/my-react-appに設定
+ このプロジェクトのすべてのTypeScriptファイルを検索
+ 「authentication」関連関数を検索
+ メインのApp.tsxファイルを解析
+ ```
+
+## 一般的な使用ケース
+
+**コードレビュー**:「古いAPIを使用しているすべての箇所を検索」
+**リファクタリング支援**:「この関数はどこで呼ばれている?」
+**プロジェクト学習**:「このReactプロジェクトの主要コンポーネントを表示」
+**デバッグ支援**:「エラーハンドリング関連のコードをすべて検索」
+
+## 主な機能
+
+### 🔍 **インテリジェント検索・解析**
+- **二重戦略アーキテクチャ**:7つのコア言語に特化したTree-sitter解析、50+ファイルタイプにフォールバック戦略
+- **直接Tree-sitter統合**:特化言語で正規表現フォールバックなし - 明確なエラーメッセージで高速フェイル
+- **高度な検索**:最適なツール(ugrep、ripgrep、ag、grep)を自動検出・使用
+- **汎用ファイルサポート**:高度なAST解析から基本ファイルインデックスまでの包括的カバレッジ
+- **ファイル解析**:`build_deep_index` 実行後に構造、インポート、クラス、メソッド、複雑度メトリクスを深く把握
+
+### 🗂️ **多言語サポート**
+- **7言語でTree-sitter AST解析**:Python、JavaScript、TypeScript、Java、Go、Objective-C、Zig
+- **50+ファイルタイプでフォールバック戦略**:C/C++、Rust、Ruby、PHPおよびすべての他のプログラミング言語
+- **文書・設定ファイル**:Markdown、JSON、YAML、XMLを適切に処理
+- **Webフロントエンド**:Vue、React、Svelte、HTML、CSS、SCSS
+- **データベース**:SQLバリアント、NoSQL、ストアドプロシージャ、マイグレーション
+- **設定ファイル**:JSON、YAML、XML、Markdown
+- **[完全なリストを表示](#サポートされているファイルタイプ)**
+
+### ⚡ **リアルタイム監視・自動更新**
+- **ファイルウォッチャー**:ファイル変更時の自動インデックス更新
+- **クロスプラットフォーム**:ネイティブOSファイルシステム監視
+- **スマート処理**:急速な変更をバッチ処理して過度な再構築を防止
+- **浅いインデックス更新**:ファイル変更を監視して最新のファイル一覧を維持し、シンボルが必要な場合は `build_deep_index` を実行
+
+### ⚡ **パフォーマンス・効率性**
+- **Tree-sitter AST解析**:正確なシンボル抽出のためのネイティブ構文解析
+- **永続キャッシュ**:超高速な後続アクセスのためのインデックス保存
+- **スマートフィルタリング**:ビルドディレクトリと一時ファイルのインテリジェント除外
+- **メモリ効率**:大規模コードベース向けに最適化
+- **直接依存関係**:フォールバック機構なし - 明確なエラーメッセージで高速フェイル
+
+## サポートされているファイルタイプ
+
+
+📁 プログラミング言語(クリックで展開)
+
+**特化Tree-sitter戦略言語:**
+- **Python** (`.py`, `.pyw`) - クラス/メソッド抽出と呼び出し追跡を含む完全AST解析
+- **JavaScript** (`.js`, `.jsx`, `.mjs`, `.cjs`) - Tree-sitterを使用したES6+クラスと関数解析
+- **TypeScript** (`.ts`, `.tsx`) - インターフェースを含む完全な型認識シンボル抽出
+- **Java** (`.java`) - 完全なクラス階層、メソッドシグネチャ、呼び出し関係
+- **Go** (`.go`) - 構造体メソッド、レシーバータイプ、関数解析
+- **Objective-C** (`.m`, `.mm`) - +/-記法を使用したクラス/インスタンスメソッド区別
+- **Zig** (`.zig`, `.zon`) - Tree-sitter ASTを使用した関数と構造体解析
+
+**すべての他のプログラミング言語:**
+すべての他のプログラミング言語は**フォールバック解析戦略**を使用し、基本ファイルインデックスとメタデータ抽出を提供します。これには以下が含まれます:
+- **システム・低レベル言語:** C/C++ (`.c`, `.cpp`, `.h`, `.hpp`)、Rust (`.rs`)
+- **オブジェクト指向言語:** C# (`.cs`)、Kotlin (`.kt`)、Scala (`.scala`)、Swift (`.swift`)
+- **スクリプト・動的言語:** Ruby (`.rb`)、PHP (`.php`)、Shell (`.sh`, `.bash`)
+- **および40+ファイルタイプ** - すべてフォールバック戦略による基本インデックス処理
+
+
+
+
+🌐 Web・フロントエンド(クリックで展開)
+
+**フレームワーク・ライブラリ:**
+- Vue (`.vue`)
+- Svelte (`.svelte`)
+- Astro (`.astro`)
+
+**スタイリング:**
+- CSS (`.css`, `.scss`, `.less`, `.sass`, `.stylus`, `.styl`)
+- HTML (`.html`)
+
+**テンプレート:**
+- Handlebars (`.hbs`, `.handlebars`)
+- EJS (`.ejs`)
+- Pug (`.pug`)
+
+
+
+
+🗄️ データベース・SQL(クリックで展開)
+
+**SQL バリアント:**
+- 標準SQL (`.sql`, `.ddl`, `.dml`)
+- データベース固有 (`.mysql`, `.postgresql`, `.psql`, `.sqlite`, `.mssql`, `.oracle`, `.ora`, `.db2`)
+
+**データベースオブジェクト:**
+- プロシージャ・関数 (`.proc`, `.procedure`, `.func`, `.function`)
+- ビュー・トリガー (`.view`, `.trigger`, `.index`)
+
+**マイグレーション・ツール:**
+- マイグレーションファイル (`.migration`, `.seed`, `.fixture`, `.schema`)
+- ツール固有 (`.liquibase`, `.flyway`)
+
+**NoSQL・モダンDB:**
+- グラフ・クエリ (`.cql`, `.cypher`, `.sparql`, `.gql`)
+
+
+
+
+📄 ドキュメント・設定(クリックで展開)
+
+- Markdown (`.md`, `.mdx`)
+- 設定 (`.json`, `.xml`, `.yml`, `.yaml`)
+
+
+
+
+### 🛠️ **開発セットアップ**
+
+貢献やローカル開発用:
+
+1. **クローンとインストール:**
+ ```bash
+ git clone https://github.com/johnhuang316/code-index-mcp.git
+ cd code-index-mcp
+ uv sync
+ ```
+
+2. **ローカル開発用設定:**
+ ```json
+ {
+ "mcpServers": {
+ "code-index": {
+ "command": "uv",
+ "args": ["run", "code-index-mcp"]
+ }
+ }
+ }
+ ```
+
+3. **MCP Inspectorでデバッグ:**
+ ```bash
+ npx @modelcontextprotocol/inspector uv run code-index-mcp
+ ```
+
+
+代替案:手動pipインストール
+
+従来のpip管理を好む場合:
+
+```bash
+pip install code-index-mcp
+```
+
+そして設定:
+```json
+{
+ "mcpServers": {
+ "code-index": {
+ "command": "code-index-mcp",
+ "args": []
+ }
+ }
+}
+```
+
+
+
+
+## 利用可能なツール
+
+### 🏗️ **プロジェクト管理**
+| ツール | 説明 |
+|--------|------|
+| **`set_project_path`** | プロジェクトディレクトリのインデックス作成を初期化 |
+| **`refresh_index`** | ファイル変更後に浅いファイルインデックスを再構築 |
+| **`build_deep_index`** | 深い解析で使う完全なシンボルインデックスを生成 |
+| **`get_settings_info`** | 現在のプロジェクト設定と状態を表示 |
+
+*シンボルレベルのデータが必要な場合は `build_deep_index` を実行してください。デフォルトの浅いインデックスは高速なファイル探索を担います。*
+
+### 🔍 **検索・発見**
+| ツール | 説明 |
+|--------|------|
+| **`search_code_advanced`** | 正規表現、ファジーマッチング、ファイルフィルタリング対応のスマート検索 |
+| **`find_files`** | globパターンを使用したファイル検索(例:`**/*.py`) |
+| **`get_file_summary`** | ファイル構造、関数、インポート、複雑度の解析(深いインデックスが必要) |
+
+### 🔄 **監視・自動更新**
+| ツール | 説明 |
+|--------|------|
+| **`get_file_watcher_status`** | ファイルウォッチャーの状態と設定を確認 |
+| **`configure_file_watcher`** | 自動更新の有効化/無効化と設定の構成 |
+
+### 🛠️ **システム・メンテナンス**
+| ツール | 説明 |
+|--------|------|
+| **`create_temp_directory`** | インデックスデータの保存ディレクトリをセットアップ |
+| **`check_temp_directory`** | インデックス保存場所と権限を確認 |
+| **`clear_settings`** | すべてのキャッシュデータと設定をリセット |
+| **`refresh_search_tools`** | 利用可能な検索ツール(ugrep、ripgrep等)を再検出 |
+
+## 使用例
+
+### 🎯 **クイックスタートワークフロー**
+
+**1. プロジェクトの初期化**
+```
+プロジェクトパスを /Users/dev/my-react-app に設定してください
+```
+*コードベースを自動インデックス作成し、検索可能なキャッシュを構築*
+
+**2. プロジェクト構造の探索**
+```
+src/components で全てのTypeScriptコンポーネントファイルを見つけてください
+```
+*使用ツール:`find_files`、パターン `src/components/**/*.tsx`*
+
+**3. キーファイルの解析**
+```
+src/api/userService.ts の要約を教えてください
+```
+*使用ツール:`get_file_summary` で関数、インポート、複雑度を表示*
+*ヒント:`needs_deep_index` が返った場合は `build_deep_index` を先に実行してください。*
+
+### 🔍 **高度な検索例**
+
+
+コードパターン検索
+
+```
+正規表現を使って "get.*Data" にマッチする全ての関数呼び出しを検索してください
+```
+*発見:`getData()`、`getUserData()`、`getFormData()` など*
+
+
+
+
+ファジー関数検索
+
+```
+'authUser' でファジー検索して認証関連の関数を見つけてください
+```
+*マッチ:`authenticateUser`、`authUserToken`、`userAuthCheck` など*
+
+
+
+
+言語固有検索
+
+```
+Pythonファイルのみで "API_ENDPOINT" を検索してください
+```
+*使用ツール:`search_code_advanced`、`file_pattern: "*.py"`*
+
+
+
+
+自動更新設定
+
+```
+ファイル変更時の自動インデックス更新を設定してください
+```
+*使用ツール:`configure_file_watcher` で監視の有効化/無効化とデバウンス時間を設定*
+
+
+
+
+プロジェクトメンテナンス
+
+```
+新しいコンポーネントを追加したので、プロジェクトインデックスを更新してください
+```
+*使用ツール:`refresh_index` で検索可能なキャッシュを更新*
+
+
+
+## トラブルシューティング
+
+### 🔄 **自動リフレッシュが動作しない**
+
+ファイル変更時に自動インデックス更新が動作しない場合、以下を試してください:
+- `pip install watchdog`(環境分離の問題を解決する可能性があります)
+- 手動リフレッシュを使用:ファイル変更後に `refresh_index` ツールを呼び出す
+- ファイルウォッチャーステータスを確認:`get_file_watcher_status` を使用して監視がアクティブかどうかを確認
+
+## 開発・貢献
+
+### 🔧 **ソースからのビルド**
+```bash
+git clone https://github.com/johnhuang316/code-index-mcp.git
+cd code-index-mcp
+uv sync
+uv run code-index-mcp
+```
+
+### 🐛 **デバッグ**
+```bash
+npx @modelcontextprotocol/inspector uvx code-index-mcp
+```
+
+### 🤝 **貢献**
+貢献を歓迎します!お気軽にプルリクエストを提出してください。
+
+---
+
+### 📜 **ライセンス**
+[MIT License](LICENSE)
+
+### 🌐 **翻訳**
+- [English](README.md)
+- [繁體中文](README_zh.md)
diff --git a/README_ko.md b/README_ko.md
new file mode 100644
index 0000000..6995b6a
--- /dev/null
+++ b/README_ko.md
@@ -0,0 +1,284 @@
+# 코드 인덱스 MCP
+
+
+
+[](https://modelcontextprotocol.io)
+[](https://www.python.org/)
+[](LICENSE)
+
+**대규모 언어 모델을 위한 지능형 코드 인덱싱과 분석**
+
+고급 검색, 정밀 분석, 유연한 탐색 기능으로 AI가 코드베이스를 이해하고 활용하는 방식을 혁신하세요.
+
+
+
+
+
+
+
+## 개요
+
+Code Index MCP는 [Model Context Protocol](https://modelcontextprotocol.io) 기반 MCP 서버로, AI 어시스턴트와 복잡한 코드베이스 사이를 연결합니다. 빠른 인덱싱, 강력한 검색, 정밀한 코드 분석을 제공하여 AI가 프로젝트 구조를 정확히 파악하고 효과적으로 지원하도록 돕습니다.
+
+**이럴 때 안성맞춤:** 코드 리뷰, 리팩터링, 문서화, 디버깅 지원, 아키텍처 분석
+
+## 빠른 시작
+
+### 🚀 **권장 설정 (대부분의 사용자)**
+
+어떤 MCP 호환 애플리케이션에서도 몇 단계만으로 시작할 수 있습니다.
+
+**사전 준비:** Python 3.10+ 및 [uv](https://github.com/astral-sh/uv)
+
+1. **MCP 설정에 서버 추가** (예: `claude_desktop_config.json` 또는 `~/.claude.json`)
+ ```json
+ {
+ "mcpServers": {
+ "code-index": {
+ "command": "uvx",
+ "args": ["code-index-mcp"]
+ }
+ }
+ }
+ ```
+
+2. **애플리케이션 재시작** – `uvx`가 설치와 실행을 자동으로 처리합니다.
+
+3. **사용 시작** (AI 어시스턴트에게 아래 프롬프트를 전달)
+ ```
+ 프로젝트 경로를 /Users/dev/my-react-app 으로 설정해줘
+ 이 프로젝트에서 모든 TypeScript 파일을 찾아줘
+ "authentication" 관련 함수를 검색해줘
+ src/App.tsx 파일을 분석해줘
+ ```
+
+## 대표 사용 사례
+
+**코드 리뷰:** "예전 API를 사용하는 부분을 모두 찾아줘"
+**리팩터링 지원:** "이 함수는 어디에서 호출되나요?"
+**프로젝트 학습:** "이 React 프로젝트의 핵심 컴포넌트를 보여줘"
+**디버깅:** "에러 처리 로직이 있는 파일을 찾아줘"
+
+## 주요 기능
+
+### 🧠 **지능형 검색과 분석**
+- **듀얼 전략 아키텍처:** 7개 핵심 언어는 전용 tree-sitter 파서를 사용하고, 그 외 50+ 파일 형식은 폴백 전략으로 처리
+- **직접 Tree-sitter 통합:** 특화 언어에 정규식 폴백 없음 – 문제 시 즉시 실패하고 명확한 오류 메시지 제공
+- **고급 검색:** ugrep, ripgrep, ag, grep 중 최적의 도구를 자동 선택해 활용
+- **범용 파일 지원:** 정교한 AST 분석부터 기본 파일 인덱싱까지 폭넓게 커버
+- **파일 분석:** `build_deep_index` 실행 후 구조, 임포트, 클래스, 메서드, 복잡도 지표를 심층적으로 파악
+
+### 🗂️ **다중 언어 지원**
+- **Tree-sitter AST 분석(7종):** Python, JavaScript, TypeScript, Java, Go, Objective-C, Zig
+- **폴백 전략(50+ 형식):** C/C++, Rust, Ruby, PHP 등 대부분의 프로그래밍 언어 지원
+- **문서 및 설정 파일:** Markdown, JSON, YAML, XML 등 상황에 맞는 처리
+- **웹 프론트엔드:** Vue, React, Svelte, HTML, CSS, SCSS
+- **데이터 계층:** SQL, NoSQL, 스토어드 프로시저, 마이그레이션 스크립트
+- **구성 파일:** JSON, YAML, XML, Markdown
+- **[지원 파일 전체 목록 보기](#지원-파일-형식)**
+
+### 🔄 **실시간 모니터링 & 자동 새로고침**
+- **파일 워처:** 파일 변경 시 자동으로 얕은 인덱스(파일 목록) 갱신
+- **크로스 플랫폼:** 운영체제 기본 파일시스템 이벤트 활용
+- **스마트 처리:** 빠른 변경을 묶어 과도한 재빌드를 방지
+- **얕은 인덱스 갱신:** 파일 목록을 최신 상태로 유지하며, 심볼 데이터가 필요하면 `build_deep_index`를 실행
+
+### ⚡ **성능 & 효율성**
+- **Tree-sitter AST 파싱:** 정확한 심볼 추출을 위한 네이티브 구문 분석
+- **지속 캐싱:** 인덱스를 저장해 이후 응답 속도를 극대화
+- **스마트 필터링:** 빌드 디렉터리·임시 파일을 자동 제외
+- **메모리 효율:** 대규모 코드베이스를 염두에 둔 설계
+- **직접 의존성:** 불필요한 폴백 없이 명확한 오류 메시지 제공
+
+## 지원 파일 형식
+
+<details>
+<summary>💻 프로그래밍 언어 (클릭하여 확장)</summary>
+
+**전용 Tree-sitter 전략 언어:**
+- **Python** (`.py`, `.pyw`) – 클래스/메서드 추출 및 호출 추적이 포함된 완전 AST 분석
+- **JavaScript** (`.js`, `.jsx`, `.mjs`, `.cjs`) – ES6+ 클래스와 함수를 tree-sitter로 파싱
+- **TypeScript** (`.ts`, `.tsx`) – 인터페이스를 포함한 타입 인지 심볼 추출
+- **Java** (`.java`) – 클래스 계층, 메서드 시그니처, 호출 관계 분석
+- **Go** (`.go`) – 구조체 메서드, 리시버 타입, 함수 분석
+- **Objective-C** (`.m`, `.mm`) – 클래스/인스턴스 메서드를 +/- 표기로 구분
+- **Zig** (`.zig`, `.zon`) – 함수와 구조체를 tree-sitter AST로 분석
+
+**기타 모든 프로그래밍 언어:**
+나머지 언어는 **폴백 파싱 전략**으로 기본 메타데이터와 파일 인덱싱을 제공합니다. 예:
+- **시스템/저수준:** C/C++ (`.c`, `.cpp`, `.h`, `.hpp`), Rust (`.rs`)
+- **객체지향:** C# (`.cs`), Kotlin (`.kt`), Scala (`.scala`), Swift (`.swift`)
+- **스크립트:** Ruby (`.rb`), PHP (`.php`), Shell (`.sh`, `.bash`)
+- **그 외 40+ 형식** – 폴백 전략으로 빠른 탐색 가능
+
+</details>
+
+<details>
+<summary>🌐 웹 프론트엔드 & UI</summary>
+
+- 프레임워크: Vue (`.vue`), Svelte (`.svelte`), Astro (`.astro`)
+- 스타일링: CSS (`.css`, `.scss`, `.less`, `.sass`, `.stylus`, `.styl`), HTML (`.html`)
+- 템플릿: Handlebars (`.hbs`, `.handlebars`), EJS (`.ejs`), Pug (`.pug`)
+
+</details>
+
+<details>
+<summary>🗄️ 데이터 계층 & SQL</summary>
+
+- **SQL 변형:** 표준 SQL (`.sql`, `.ddl`, `.dml`), 데이터베이스별 방언 (`.mysql`, `.postgresql`, `.psql`, `.sqlite`, `.mssql`, `.oracle`, `.ora`, `.db2`)
+- **DB 객체:** 프로시저/함수 (`.proc`, `.procedure`, `.func`, `.function`), 뷰/트리거/인덱스 (`.view`, `.trigger`, `.index`)
+- **마이그레이션 도구:** 마이그레이션 파일 (`.migration`, `.seed`, `.fixture`, `.schema`), 도구 구성 (`.liquibase`, `.flyway`)
+- **NoSQL & 그래프:** 질의 언어 (`.cql`, `.cypher`, `.sparql`, `.gql`)
+
+</details>
+
+<details>
+<summary>📄 문서 & 설정 파일</summary>
+
+- Markdown (`.md`, `.mdx`)
+- 구성 파일 (`.json`, `.xml`, `.yml`, `.yaml`)
+
+</details>
+
+## 사용 가능한 도구
+
+### 🏗️ **프로젝트 관리**
+| 도구 | 설명 |
+|------|------|
+| **`set_project_path`** | 프로젝트 디렉터리의 인덱스를 초기화 |
+| **`refresh_index`** | 파일 변경 후 얕은 파일 인덱스를 재생성 |
+| **`build_deep_index`** | 심층 분석에 사용하는 전체 심볼 인덱스를 생성 |
+| **`get_settings_info`** | 현재 프로젝트 설정과 상태를 확인 |
+
+*심볼 레벨 데이터가 필요하면 `build_deep_index`를 실행하세요. 기본 얕은 인덱스는 빠른 파일 탐색을 담당합니다.*
+
+### 🔍 **검색 & 탐색**
+| 도구 | 설명 |
+|------|------|
+| **`search_code_advanced`** | 정규식, 퍼지 매칭, 파일 필터링을 지원하는 스마트 검색 |
+| **`find_files`** | 글롭 패턴으로 파일 찾기 (예: `**/*.py`) |
+| **`get_file_summary`** | 파일 구조, 함수, 임포트, 복잡도를 분석 (심층 인덱스 필요) |
+
+### 🔄 **모니터링 & 자동 새로고침**
+| 도구 | 설명 |
+|------|------|
+| **`get_file_watcher_status`** | 파일 워처 상태와 구성을 확인 |
+| **`configure_file_watcher`** | 자동 새로고침 설정 (활성/비활성, 지연 시간, 추가 제외 패턴) |
+
+### 🛠️ **시스템 & 유지 관리**
+| 도구 | 설명 |
+|------|------|
+| **`create_temp_directory`** | 인덱스 저장용 임시 디렉터리를 생성 |
+| **`check_temp_directory`** | 인덱스 저장 위치와 권한을 확인 |
+| **`clear_settings`** | 모든 설정과 캐시 데이터를 초기화 |
+| **`refresh_search_tools`** | 사용 가능한 검색 도구를 재검색 (ugrep, ripgrep 등) |
+
+## 사용 예시
+
+### 🧭 **빠른 시작 워크플로**
+
+**1. 프로젝트 초기화**
+```
+프로젝트 경로를 /Users/dev/my-react-app 으로 설정해줘
+```
+*프로젝트를 설정하고 얕은 인덱스를 생성합니다.*
+
+**2. 프로젝트 구조 탐색**
+```
+src/components 안의 TypeScript 컴포넌트 파일을 모두 찾아줘
+```
+*사용 도구: `find_files` (`src/components/**/*.tsx`)*
+
+**3. 핵심 파일 분석**
+```
+src/api/userService.ts 요약을 알려줘
+```
+*사용 도구: `get_file_summary` (함수, 임포트, 복잡도 표시)*
+*팁: `needs_deep_index` 응답이 나오면 먼저 `build_deep_index`를 실행하세요.*
+
+### 🔍 **고급 검색 예시**
+
+<details>
+<summary>코드 패턴 검색</summary>
+
+```
+"get.*Data"에 해당하는 함수 호출을 정규식으로 찾아줘
+```
+*예: `getData()`, `getUserData()`, `getFormData()`*
+
+</details>
+
+<details>
+<summary>퍼지 함수 검색</summary>
+
+```
+'authUser'와 유사한 인증 관련 함수를 찾아줘
+```
+*예: `authenticateUser`, `authUserToken`, `userAuthCheck`*
+
+</details>
+
+<details>
+<summary>언어별 검색</summary>
+
+```
+Python 파일에서만 "API_ENDPOINT" 를 찾아줘
+```
+*`search_code_advanced` + `file_pattern="*.py"`*
+
+</details>
+
+<details>
+<summary>자동 새로고침 설정</summary>
+
+```
+파일 변경 시 자동으로 인덱스를 새로고침하도록 설정해줘
+```
+*`configure_file_watcher`로 활성화 및 지연 시간 설정*
+
+</details>
+
+<details>
+<summary>프로젝트 유지 관리</summary>
+
+```
+새 컴포넌트를 추가했어. 프로젝트 인덱스를 다시 빌드해줘
+```
+*`refresh_index`로 빠르게 얕은 인덱스를 업데이트*
+
+</details>
+
+## 문제 해결
+
+### 🔄 **자동 새로고침이 동작하지 않을 때**
+- 환경 문제로 `watchdog`가 빠졌다면 설치: `pip install watchdog`
+- 수동 새로고침: 변경 후 `refresh_index` 도구 실행
+- 워처 상태 확인: `get_file_watcher_status` 도구로 활성 여부 점검
+
+## 개발 & 기여
+
+### 🛠️ **소스에서 실행하기**
+```bash
+git clone https://github.com/johnhuang316/code-index-mcp.git
+cd code-index-mcp
+uv sync
+uv run code-index-mcp
+```
+
+### 🧪 **디버깅 도구**
+```bash
+npx @modelcontextprotocol/inspector uvx code-index-mcp
+```
+
+### 🤝 **기여 안내**
+Pull Request를 언제든 환영합니다. 변경 사항과 테스트 방법을 함께 공유해주세요.
+
+---
+
+### 📄 **라이선스**
+[MIT License](LICENSE)
+
+### 🌍 **번역본**
+- [English](README.md)
+- [繁體中文](README_zh.md)
+- [日本語](README_ja.md)
diff --git a/README_zh.md b/README_zh.md
index 8b75193..1e9c5ae 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -3,263 +3,377 @@
[](https://modelcontextprotocol.io)
-[](https://www.python.org/)
+[](https://www.python.org/)
[](LICENSE)
-一個用於程式碼索引、搜尋和分析的模型上下文協定(Model Context Protocol)伺服器。
+**為大型語言模型提供智慧程式碼索引與分析**
+
+以先進的搜尋、分析和導航功能,徹底改變 AI 對程式碼庫的理解方式。
-## 什麼是程式碼索引 MCP?
+
+
+
+
+## 概述
+
+程式碼索引 MCP 是一個 [模型上下文協定](https://modelcontextprotocol.io) 伺服器,架起 AI 模型與複雜程式碼庫之間的橋樑。它提供智慧索引、先進搜尋功能和詳細程式碼分析,幫助 AI 助理有效地理解和導航您的專案。
+
+**適用於:**程式碼審查、重構、文件生成、除錯協助和架構分析。
+
+## 快速開始
+
+### 🚀 **推薦設定(大多數使用者)**
+
+與任何 MCP 相容應用程式開始的最簡單方式:
+
+**前置需求:** Python 3.10+ 和 [uv](https://github.com/astral-sh/uv)
-程式碼索引 MCP 是一個專門的 MCP 伺服器,提供智慧程式碼索引和分析功能。它使大型語言模型能夠與您的程式碼儲存庫互動,提供對複雜程式碼庫的即時洞察和導航。
+1. **新增到您的 MCP 設定** (例如 `claude_desktop_config.json` 或 `~/.claude.json`):
+ ```json
+ {
+ "mcpServers": {
+ "code-index": {
+ "command": "uvx",
+ "args": ["code-index-mcp"]
+ }
+ }
+ }
+ ```
-此伺服器基於[模型上下文協定](https://modelcontextprotocol.io) (MCP) 建構,該協定標準化了 AI 模型與外部工具和資料來源的互動方式。
+2. **重新啟動應用程式** – `uvx` 會自動處理安裝和執行
+
+3. **開始使用**(向您的 AI 助理提供這些提示):
+ ```
+ 設定專案路徑為 /Users/dev/my-react-app
+ 在這個專案中找到所有 TypeScript 檔案
+ 搜尋「authentication」相關函數
+ 分析主要的 App.tsx 檔案
+ ```
+
+## 典型使用場景
+
+**程式碼審查**:「找出所有使用舊 API 的地方」
+**重構協助**:「這個函數在哪裡被呼叫?」
+**學習專案**:「顯示這個 React 專案的主要元件」
+**除錯協助**:「搜尋所有錯誤處理相關的程式碼」
## 主要特性
-- **專案索引**:遞迴掃描目錄以建構可搜尋的程式碼檔案索引
-- **進階搜尋**:智慧搜尋,自動偵測 ugrep、ripgrep、ag 或 grep 以提升效能
-- **模糊搜尋**:ugrep 的原生模糊匹配功能提供卓越的搜尋結果,或其他工具的安全模糊模式
-- **檔案分析**:取得有關檔案結構、匯入和複雜性的詳細資訊
-- **智慧篩選**:自動忽略建構目錄、相依套件和非程式碼檔案
-- **持久儲存**:快取索引以提高跨工作階段的效能
-- **延遲載入**:僅在需要時偵測搜尋工具,優化啟動效能
+### 🔍 **智慧搜尋與分析**
+- **雙策略架構**:7 種核心語言使用專業化 Tree-sitter 解析,50+ 種檔案類型使用備用策略
+- **直接 Tree-sitter 整合**:專業化語言無正則表達式備用 - 快速失敗並提供清晰錯誤訊息
+- **進階搜尋**:自動偵測並使用最佳工具(ugrep、ripgrep、ag 或 grep)
+- **通用檔案支援**:從進階 AST 解析到基本檔案索引的全面覆蓋
+- **檔案分析**:執行 `build_deep_index` 後深入了解結構、匯入、類別、方法和複雜度指標
+
+### 🗂️ **多語言支援**
+- **7 種語言使用 Tree-sitter AST 解析**:Python、JavaScript、TypeScript、Java、Go、Objective-C、Zig
+- **50+ 種檔案類型使用備用策略**:C/C++、Rust、Ruby、PHP 和所有其他程式語言
+- **文件與配置檔案**:Markdown、JSON、YAML、XML 適當處理
+- **網頁前端**:Vue、React、Svelte、HTML、CSS、SCSS
+- **資料庫**:SQL 變體、NoSQL、存儲過程、遷移腳本
+- **配置檔案**:JSON、YAML、XML、Markdown
+- **[查看完整列表](#支援的檔案類型)**
+
+### 🔄 **即時監控與自動刷新**
+- **檔案監控器**:檔案變更時自動更新索引
+- **跨平台**:原生作業系統檔案系統監控
+- **智慧處理**:批次處理快速變更以防止過度重建
+- **淺層索引更新**:監控檔案變更並維持檔案清單最新;需要符號資料時請執行 `build_deep_index`
+
+### ⚡ **效能與效率**
+- **Tree-sitter AST 解析**:原生語法解析以實現準確的符號提取
+- **持久快取**:儲存索引以實現超快速的後續存取
+- **智慧篩選**:智能排除建構目錄和暫存檔案
+- **記憶體高效**:針對大型程式碼庫優化
+- **直接依賴**:無備用機制 - 快速失敗並提供清晰錯誤訊息
## 支援的檔案類型
-伺服器支援多種程式語言和檔案副檔名,包括:
-
-- Python (.py)
-- JavaScript/TypeScript (.js, .ts, .jsx, .tsx, .mjs, .cjs)
-- 前端框架 (.vue, .svelte, .astro)
-- Java (.java)
-- C/C++ (.c, .cpp, .h, .hpp)
-- C# (.cs)
-- Go (.go)
-- Ruby (.rb)
-- PHP (.php)
-- Swift (.swift)
-- Kotlin (.kt)
-- Rust (.rs)
-- Scala (.scala)
-- Shell 指令碼 (.sh, .bash)
-- Zig (.zig)
-- Web 檔案 (.html, .css, .scss, .less, .sass, .stylus, .styl)
-- 模板引擎 (.hbs, .handlebars, .ejs, .pug)
-- **資料庫與 SQL**:
- - SQL 檔案 (.sql, .ddl, .dml)
- - 資料庫特定格式 (.mysql, .postgresql, .psql, .sqlite, .mssql, .oracle, .ora, .db2)
- - 資料庫物件 (.proc, .procedure, .func, .function, .view, .trigger, .index)
- - 遷移與工具 (.migration, .seed, .fixture, .schema, .liquibase, .flyway)
- - NoSQL 與現代資料庫 (.cql, .cypher, .sparql, .gql)
-- 文件/配置 (.md, .mdx, .json, .xml, .yml, .yaml)
-
-## 設定與整合
-
-您可以根據不同的需求,透過以下幾種方式來設定和使用 Code Index MCP。
-
-### 一般用途:與宿主應用整合(建議)
-
-這是最簡單也最常見的使用方式,專為希望在 AI 應用程式(如 Claude Desktop)中使用 Code Index MCP 的使用者設計。
-
-1. **先決條件**:請確保您已安裝 Python 3.8+ 和 [uv](https://github.com/astral-sh/uv)。
-
-2. **設定宿主應用**:將以下設定新增到您宿主應用的 MCP 設定檔中(例如,Claude Desktop 的設定檔是 `claude_desktop_config.json`):
-
- ```json
- {
- "mcpServers": {
- "code-index": {
- "command": "uvx",
- "args": [
- "code-index-mcp"
- ]
- }
- }
- }
- ```
-
-3. **重新啟動宿主應用**:新增設定後,請重新啟動您的應用程式。`uvx` 命令會在背景自動處理 `code-index-mcp` 伺服器的安裝與執行。
-
-### 本地開發
-
-如果您想為這個專案的開發做出貢獻,請遵循以下步驟:
-
-1. **複製儲存庫**:
- ```bash
- git clone https://github.com/johnhuang316/code-index-mcp.git
- cd code-index-mcp
- ```
-
-2. **安裝相依套件**:使用 `uv` 安裝所需套件。
- ```bash
- uv sync
- ```
-
-3. **設定您的宿主應用以進行本地開發**:為了讓您的宿主應用程式(例如 Claude Desktop)使用您本地的原始碼,請更新其設定檔,讓它透過 `uv run` 來執行伺服器。這能確保您對程式碼所做的任何變更,在宿主應用啟動伺服器時能立即生效。
-
- ```json
- {
- "mcpServers": {
- "code-index": {
- "command": "uv",
- "args": [
- "run",
- "code_index_mcp"
- ]
- }
- }
- }
- ```
+
+**📁 程式語言**
-4. **使用 MCP Inspector 進行偵錯**:要對您的本地伺服器進行偵錯,您同樣需要告訴 Inspector 使用 `uv run`。
- ```bash
- npx @modelcontextprotocol/inspector uv run code_index_mcp
- ```
+**專業化 Tree-sitter 策略語言:**
+- **Python** (`.py`, `.pyw`) - 完整 AST 分析,包含類別/方法提取和呼叫追蹤
+- **JavaScript** (`.js`, `.jsx`, `.mjs`, `.cjs`) - ES6+ 類別和函數解析使用 Tree-sitter
+- **TypeScript** (`.ts`, `.tsx`) - 完整類型感知符號提取,包含介面
+- **Java** (`.java`) - 完整類別階層、方法簽名和呼叫關係
+- **Go** (`.go`) - 結構方法、接收者類型和函數分析
+- **Objective-C** (`.m`, `.mm`) - 類別/實例方法區分,使用 +/- 標記法
+- **Zig** (`.zig`, `.zon`) - 函數和結構解析使用 Tree-sitter AST
-### 手動安裝:使用 pip(替代方案)
+**所有其他程式語言:**
+所有其他程式語言使用 **備用解析策略**,提供基本檔案索引和元資料提取。包括:
+- **系統與低階語言:** C/C++ (`.c`, `.cpp`, `.h`, `.hpp`)、Rust (`.rs`)
+- **物件導向語言:** C# (`.cs`)、Kotlin (`.kt`)、Scala (`.scala`)、Swift (`.swift`)
+- **腳本與動態語言:** Ruby (`.rb`)、PHP (`.php`)、Shell (`.sh`, `.bash`)
+- **以及 40+ 種檔案類型** - 全部通過備用策略處理進行基本索引
-如果您習慣使用 `pip` 來手動管理您的 Python 套件,可以透過以下方式安裝。
+
-1. **安裝套件**:
- ```bash
- pip install code-index-mcp
- ```
+
+**🌐 網頁與前端**
-2. **設定宿主應用**:您需要手動更新宿主應用的 MCP 設定,將命令從 `"command": "uvx"` 修改為 `"command": "code-index-mcp"`。
+**框架與函式庫:**
+- Vue (`.vue`)
+- Svelte (`.svelte`)
+- Astro (`.astro`)
- ```json
- {
- "mcpServers": {
- "code-index": {
- "command": "code-index-mcp",
- "args": []
- }
- }
- }
- ```
+**樣式:**
+- CSS (`.css`, `.scss`, `.less`, `.sass`, `.stylus`, `.styl`)
+- HTML (`.html`)
-## 可用工具
+**模板:**
+- Handlebars (`.hbs`, `.handlebars`)
+- EJS (`.ejs`)
+- Pug (`.pug`)
+
+
-### 核心工具
+
+**🗄️ 資料庫與 SQL**
-- **set_project_path**:設定索引的基本專案路徑。
-- **search_code**:使用外部工具 (ugrep/ripgrep/ag/grep) 的增強搜尋,支援模糊匹配。
-- **find_files**:尋找專案中符合給定模式的檔案。
-- **get_file_summary**:取得特定檔案的摘要,包括行數、函式、匯入等。
-- **refresh_index**:重新整理專案索引。
-- **get_settings_info**:取得專案設定的資訊。
+**SQL 變體:**
+- 標準 SQL (`.sql`, `.ddl`, `.dml`)
+- 資料庫特定 (`.mysql`, `.postgresql`, `.psql`, `.sqlite`, `.mssql`, `.oracle`, `.ora`, `.db2`)
-### 工具程式
+**資料庫物件:**
+- 程序與函式 (`.proc`, `.procedure`, `.func`, `.function`)
+- 檢視與觸發器 (`.view`, `.trigger`, `.index`)
-- **create_temp_directory**:建立用於儲存索引資料的臨時目錄。
-- **check_temp_directory**:檢查用於儲存索引資料的臨時目錄。
-- **clear_settings**:清除所有設定和快取資料。
-- **refresh_search_tools**:手動重新偵測可用的命令列搜尋工具(例如 ripgrep)。
+**遷移與工具:**
+- 遷移檔案 (`.migration`, `.seed`, `.fixture`, `.schema`)
+- 工具特定 (`.liquibase`, `.flyway`)
-## 常見工作流程與範例
+**NoSQL 與現代資料庫:**
+- 圖形與查詢 (`.cql`, `.cypher`, `.sparql`, `.gql`)
-這是一個典型的使用場景,展示如何透過像 Claude 這樣的 AI 助理來使用 Code Index MCP。
+
-### 1. 設定專案路徑與首次索引
+
+**📄 文件與配置**
-這是第一步,也是最重要的一步。當您設定專案路徑時,伺服器會自動進行首次的檔案索引,或載入先前快取的索引。
+- Markdown (`.md`, `.mdx`)
+- 配置 (`.json`, `.xml`, `.yml`, `.yaml`)
-**範例提示:**
+
+
+## 開發設定
+
+適用於貢獻或本地開發:
+
+1. **克隆並安裝:**
+ ```bash
+ git clone https://github.com/johnhuang316/code-index-mcp.git
+ cd code-index-mcp
+ uv sync
+ ```
+
+2. **配置本地開發:**
+ ```json
+ {
+ "mcpServers": {
+ "code-index": {
+ "command": "uv",
+ "args": ["run", "code-index-mcp"]
+ }
+ }
+ }
+ ```
+
+3. **使用 MCP Inspector 除錯:**
+ ```bash
+ npx @modelcontextprotocol/inspector uv run code-index-mcp
+ ```
+
+
+**替代方案:手動 pip 安裝**
+
+如果您偏好傳統的 pip 管理:
+
+```bash
+pip install code-index-mcp
```
-請將專案路徑設定為 C:\Users\username\projects\my-react-app
+
+然後配置:
+```json
+{
+ "mcpServers": {
+ "code-index": {
+ "command": "code-index-mcp",
+ "args": []
+ }
+ }
+}
```
-### 2. 更新索引(需要時)
+
-如果您在初次設定後對專案檔案做了較大的變更,可以手動重新整理索引,以確保所有工具都能基於最新的資訊運作。
-**範例提示:**
+## 可用工具
+
+### 🏗️ **專案管理**
+| 工具 | 描述 |
+|------|------|
+| **`set_project_path`** | 為專案目錄初始化索引 |
+| **`refresh_index`** | 在檔案變更後重建淺層檔案索引 |
+| **`build_deep_index`** | 產生供深度分析使用的完整符號索引 |
+| **`get_settings_info`** | 檢視目前專案配置和狀態 |
+
+*需要符號層級資料時,請執行 `build_deep_index`;預設的淺層索引提供快速檔案探索。*
+
+### 🔍 **搜尋與探索**
+| 工具 | 描述 |
+|------|------|
+| **`search_code_advanced`** | 智慧搜尋,支援正規表達式、模糊匹配和檔案篩選 |
+| **`find_files`** | 使用萬用字元模式尋找檔案(例如 `**/*.py`) |
+| **`get_file_summary`** | 分析檔案結構、函式、匯入和複雜度(需要深度索引) |
+
+### 🔄 **監控與自動刷新**
+| 工具 | 描述 |
+|------|------|
+| **`get_file_watcher_status`** | 檢查檔案監控器狀態和配置 |
+| **`configure_file_watcher`** | 啟用/停用自動刷新並配置設定 |
+
+### 🛠️ **系統與維護**
+| 工具 | 描述 |
+|------|------|
+| **`create_temp_directory`** | 設定索引資料的儲存目錄 |
+| **`check_temp_directory`** | 驗證索引儲存位置和權限 |
+| **`clear_settings`** | 重設所有快取資料和配置 |
+| **`refresh_search_tools`** | 重新偵測可用的搜尋工具(ugrep、ripgrep 等) |
+
+## 使用範例
+
+### 🎯 **快速開始工作流程**
+
+**1. 初始化您的專案**
```
-我剛新增了幾個新的元件,請幫我重新整理專案索引。
+將專案路徑設定為 /Users/dev/my-react-app
```
-*(AI 助理會使用 `refresh_index` 工具)*
+*自動索引您的程式碼庫並建立可搜尋的快取*
-### 3. 探索專案結構
-
-索引準備就緒後,您可以使用模式(glob)來尋找檔案,以了解程式碼庫的結構並找到相關檔案。
+**2. 探索專案結構**
+```
+在 src/components 中尋找所有 TypeScript 元件檔案
+```
+*使用:`find_files`,模式為 `src/components/**/*.tsx`*
-**範例提示:**
+**3. 分析關鍵檔案**
```
-尋找 'src/components' 目錄中所有的 TypeScript 元件檔案。
+給我 src/api/userService.ts 的摘要
```
-*(AI 助理會使用 `find_files` 工具,並搭配像 `src/components/**/*.tsx` 這樣的模式)*
+*使用:`get_file_summary` 顯示函式、匯入和複雜度*
+*提示:若收到 `needs_deep_index` 回應,請先執行 `build_deep_index`。*
-### 4. 分析特定檔案
+### 🔍 **進階搜尋範例**
-在深入研究一個檔案的完整內容之前,您可以先取得該檔案結構的快速摘要,包括函式、類別和匯入的模組。
+
+**程式碼模式搜尋**
-**範例提示:**
```
-可以給我 'src/api/userService.ts' 這個檔案的摘要嗎?
+使用正規表達式搜尋所有符合 "get.*Data" 的函式呼叫
```
-*(AI 助理會使用 `get_file_summary` 工具)*
+*找到:`getData()`、`getUserData()`、`getFormData()` 等*
-### 5. 搜尋程式碼
+
-有了最新的索引,您就可以搜尋程式碼片段、函式名稱或任何文字模式,以找到特定邏輯的實作位置。
+
+**模糊函式搜尋**
-**範例:簡單搜尋**
```
-搜尋 "processData" 函式所有出現過的地方。
+使用 'authUser' 的模糊搜尋尋找驗證相關函式
```
+*匹配:`authenticateUser`、`authUserToken`、`userAuthCheck` 等*
+
+
+
+
+**特定語言搜尋**
-**範例:模糊匹配搜尋**
```
-我正在尋找一個與使用者驗證相關的函式,它可能叫做 'authUser'、'authenticateUser' 或類似的名稱。你可以對 'authUser' 進行模糊搜尋嗎?
+只在 Python 檔案中搜尋 "API_ENDPOINT"
```
+*使用:`search_code_advanced`,`file_pattern: "*.py"`*
+
+
+
+
+**自動刷新配置**
-**範例:在特定檔案中搜尋**
```
-只在 Python 檔案中搜尋字串 "API_ENDPOINT"。
+配置檔案變更時的自動索引更新
```
-*(AI 助理會使用 `search_code` 工具,並將 `file_pattern` 參數設定為 `*.py`)*
-
-## 開發
+*使用:`configure_file_watcher` 啟用/停用監控並設定防抖時間*
-### 從原始碼建構
+
-1. 複製儲存庫:
+
+**專案維護**
-```bash
-git clone https://github.com/username/code-index-mcp.git
-cd code-index-mcp
```
+我新增了新元件,請重新整理專案索引
+```
+*使用:`refresh_index` 更新可搜尋的快取*
-2. 安裝相依套件:
+
-```bash
-uv sync
-```
+## 故障排除
-3. 在本地執行伺服器:
+### 🔄 **自動刷新無法運作**
-```bash
-uv run code_index_mcp
-```
+如果檔案變更時自動索引更新無法運作,請嘗試:
+- `pip install watchdog`(可能解決環境隔離問題)
+- 使用手動刷新:在檔案變更後呼叫 `refresh_index` 工具
+- 檢查檔案監視器狀態:使用 `get_file_watcher_status` 驗證監控是否處於活動狀態
-## 偵錯
+## 開發與貢獻
-您可以使用 MCP inspector 來偵錯伺服器:
+### 🔧 **從原始碼建構**
+```bash
+git clone https://github.com/johnhuang316/code-index-mcp.git
+cd code-index-mcp
+uv sync
+uv run code-index-mcp
+```
+### 🐛 **除錯**
```bash
npx @modelcontextprotocol/inspector uvx code-index-mcp
```
-## 授權條款
-
-[MIT 授權條款](LICENSE)
-
-## 貢獻
-
+### 🤝 **貢獻**
歡迎貢獻!請隨時提交拉取請求。
-## 語言
+---
+
+### 📜 **授權條款**
+[MIT 授權條款](LICENSE)
+### 🌐 **翻譯**
- [English](README.md)
+- [日本語](README_ja.md)
diff --git a/RELEASE_NOTE.txt b/RELEASE_NOTE.txt
new file mode 100644
index 0000000..8a744bb
--- /dev/null
+++ b/RELEASE_NOTE.txt
@@ -0,0 +1,7 @@
+## 2.4.1 - Search Filtering Alignment
+
+### Highlights
+- Code search now shares the central FileFilter blacklist, keeping results consistent with indexing (no more `node_modules` noise).
+- CLI search strategies emit the appropriate exclusion flags automatically (ripgrep, ugrep, ag, grep).
+- Basic fallback search prunes excluded directories during traversal, avoiding unnecessary IO.
+- Added regression coverage for the new filtering behaviour (`tests/search/test_search_filters.py`).
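The note says each CLI strategy emits its own exclusion flags. As a hedged illustration only (the names below are not the actual code-index-mcp internals), the ripgrep variant could derive its flags from the shared blacklist like this:

```python
# Illustrative sketch: turn a shared directory blacklist into ripgrep
# exclusion flags. Names (EXCLUDED_DIRS, ripgrep_exclusion_args) are
# hypothetical, not taken from the real FileFilter implementation.
EXCLUDED_DIRS = {"node_modules", "__pycache__", ".git", "dist"}

def ripgrep_exclusion_args(excluded_dirs):
    """Return one --glob=!<dir>/** flag per excluded directory."""
    return [f"--glob=!{d}/**" for d in sorted(excluded_dirs)]

# Example command line for a plain text search with exclusions applied
args = ["rg", "--line-number", "TODO"] + ripgrep_exclusion_args(EXCLUDED_DIRS)
```

The other backends (ugrep, ag, grep) would map the same set onto their own flag syntax.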
diff --git a/pyproject.toml b/pyproject.toml
index 72dc7e5..428e2d3 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "code-index-mcp"
-version = "0.4.1"
+version = "2.4.1"
description = "Code indexing and analysis tools for LLMs using MCP"
readme = "README.md"
requires-python = ">=3.10"
@@ -14,6 +14,14 @@ authors = [
]
dependencies = [
"mcp>=0.3.0",
+ "watchdog>=3.0.0",
+ "tree-sitter>=0.20.0",
+ "tree-sitter-javascript>=0.20.0",
+ "tree-sitter-typescript>=0.20.0",
+ "tree-sitter-java>=0.20.0",
+ "tree-sitter-zig>=0.20.0",
+ "pathspec>=0.12.1",
+ "msgpack>=1.0.0",
]
[project.urls]
diff --git a/requirements.txt b/requirements.txt
index a1da66e..1a80b2f 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1 +1,10 @@
mcp>=0.3.0
+watchdog>=3.0.0
+protobuf>=4.21.0
+tree-sitter>=0.20.0
+tree-sitter-javascript>=0.20.0
+tree-sitter-typescript>=0.20.0
+tree-sitter-java>=0.20.0
+tree-sitter-zig>=0.20.0
+pathspec>=0.12.1
+libclang>=16.0.0
diff --git a/run.py b/run.py
index b07486f..1303bfe 100644
--- a/run.py
+++ b/run.py
@@ -4,7 +4,6 @@
"""
import sys
import os
-import traceback
# Add src directory to path
src_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'src')
@@ -14,15 +13,7 @@
from code_index_mcp.server import main
if __name__ == "__main__":
- print("Starting Code Index MCP server...", file=sys.stderr)
- print(f"Added path: {src_path}", file=sys.stderr)
main()
-except ImportError as e:
- print(f"Import Error: {e}", file=sys.stderr)
- print(f"Current sys.path: {sys.path}", file=sys.stderr)
- print("Traceback:", file=sys.stderr)
- traceback.print_exc(file=sys.stderr)
-except Exception as e:
- print(f"Error starting server: {e}", file=sys.stderr)
- print("Traceback:", file=sys.stderr)
- traceback.print_exc(file=sys.stderr)
+except Exception:
+ # Exit silently on failure without printing any messages
+ raise SystemExit(1)
diff --git a/src/code_index_mcp/__init__.py b/src/code_index_mcp/__init__.py
index 06cbc6e..f47ee02 100644
--- a/src/code_index_mcp/__init__.py
+++ b/src/code_index_mcp/__init__.py
@@ -3,4 +3,5 @@
A Model Context Protocol server for code indexing, searching, and analysis.
"""
-__version__ = "0.4.1"
+__version__ = "2.4.1"
+
diff --git a/src/code_index_mcp/constants.py b/src/code_index_mcp/constants.py
index cacfdd2..159e31a 100644
--- a/src/code_index_mcp/constants.py
+++ b/src/code_index_mcp/constants.py
@@ -5,5 +5,114 @@
# Directory and file names
SETTINGS_DIR = "code_indexer"
CONFIG_FILE = "config.json"
-INDEX_FILE = "file_index.pickle"
-CACHE_FILE = "content_cache.pickle"
\ No newline at end of file
+INDEX_FILE = "index.json" # JSON index file (deep index)
+INDEX_FILE_SHALLOW = "index.shallow.json" # Minimal shallow index (file list)
+
+# Supported file extensions for code analysis
+# This is the authoritative list used by both old and new indexing systems
+SUPPORTED_EXTENSIONS = [
+ # Core programming languages
+ '.py', '.pyw', # Python
+ '.js', '.jsx', '.ts', '.tsx', # JavaScript/TypeScript
+ '.mjs', '.cjs', # Modern JavaScript
+ '.java', # Java
+ '.c', '.cpp', '.h', '.hpp', # C/C++
+ '.cxx', '.cc', '.hxx', '.hh', # C++ variants
+ '.cs', # C#
+ '.go', # Go
+ '.m', '.mm', # Objective-C
+ '.rb', # Ruby
+ '.php', # PHP
+ '.swift', # Swift
+ '.kt', '.kts', # Kotlin
+ '.rs', # Rust
+ '.scala', # Scala
+ '.sh', '.bash', '.zsh', # Shell scripts
+ '.ps1', # PowerShell
+ '.bat', '.cmd', # Windows batch
+ '.r', '.R', # R
+ '.pl', '.pm', # Perl
+ '.lua', # Lua
+ '.dart', # Dart
+ '.hs', # Haskell
+ '.ml', '.mli', # OCaml
+ '.fs', '.fsx', # F#
+ '.clj', '.cljs', # Clojure
+ '.vim', # Vim script
+ '.zig', '.zon', # Zig
+
+ # Web and markup
+ '.html', '.htm', # HTML
+ '.css', '.scss', '.sass', # Stylesheets
+ '.less', '.stylus', '.styl', # Style languages
+ '.md', '.mdx', # Markdown
+ '.json', '.jsonc', # JSON
+ '.xml', # XML
+ '.yml', '.yaml', # YAML
+
+ # Frontend frameworks
+ '.vue', # Vue.js
+ '.svelte', # Svelte
+ '.astro', # Astro
+
+ # Template engines
+ '.hbs', '.handlebars', # Handlebars
+ '.ejs', # EJS
+ '.pug', # Pug
+
+ # Database and SQL
+ '.sql', '.ddl', '.dml', # SQL
+ '.mysql', '.postgresql', '.psql', # Database-specific SQL
+ '.sqlite', '.mssql', '.oracle', # More databases
+ '.ora', '.db2', # Oracle and DB2
+ '.proc', '.procedure', # Stored procedures
+ '.func', '.function', # Functions
+ '.view', '.trigger', '.index', # Database objects
+ '.migration', '.seed', '.fixture', # Migration files
+ '.schema', # Schema files
+ '.cql', '.cypher', '.sparql', # NoSQL query languages
+ '.gql', # GraphQL
+ '.liquibase', '.flyway', # Migration tools
+]
+
+# Centralized filtering configuration
+FILTER_CONFIG = {
+ "exclude_directories": {
+ # Version control
+ '.git', '.svn', '.hg', '.bzr',
+
+ # Package managers & dependencies
+ 'node_modules', '__pycache__', '.venv', 'venv',
+ 'vendor', 'bower_components',
+
+ # Build outputs
+ 'dist', 'build', 'target', 'out', 'bin', 'obj',
+
+ # IDE & editors
+ '.idea', '.vscode', '.vs', '.sublime-workspace',
+
+ # Testing & coverage
+ '.pytest_cache', '.coverage', '.tox', '.nyc_output',
+ 'coverage', 'htmlcov',
+    },
+
+    "exclude_files": {
+        # OS artifacts (exact file names, so they belong with file patterns)
+        '.DS_Store', 'Thumbs.db', 'desktop.ini',
+
+        # Temporary files
+        '*.tmp', '*.temp', '*.swp', '*.swo',
+
+        # Backup files
+        '*.bak', '*~', '*.orig',
+
+        # Log files
+        '*.log',
+
+        # Lock files
+        'package-lock.json', 'yarn.lock', 'Pipfile.lock'
+ },
+
+ "supported_extensions": SUPPORTED_EXTENSIONS
+}
+
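A minimal sketch of how a directory walker might consume `FILTER_CONFIG`-style sets; the real `FileFilter` in `utils/` may apply these differently (e.g. via `pathspec`), so the helper name and logic below are illustrative assumptions:

```python
import fnmatch
import os

# Simplified stand-ins for FILTER_CONFIG entries (illustrative subset)
EXCLUDE_DIRS = {".git", "node_modules", "__pycache__"}
EXCLUDE_FILES = {"*.log", "*.tmp", "package-lock.json"}
SUPPORTED = {".py", ".js", ".md"}

def should_index(path):
    """Hypothetical filter: reject excluded dirs/files, accept supported extensions."""
    parts = path.replace("\\", "/").split("/")
    # Skip anything located under an excluded directory
    if any(part in EXCLUDE_DIRS for part in parts[:-1]):
        return False
    name = parts[-1]
    # Skip files matching exclusion patterns (exact names or globs)
    if any(fnmatch.fnmatch(name, pat) for pat in EXCLUDE_FILES):
        return False
    # Only index files with a supported extension
    return os.path.splitext(name)[1].lower() in SUPPORTED
```

Keeping one central config means the indexer and every search strategy agree on what counts as project content.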
diff --git a/src/code_index_mcp/indexing/__init__.py b/src/code_index_mcp/indexing/__init__.py
new file mode 100644
index 0000000..e779911
--- /dev/null
+++ b/src/code_index_mcp/indexing/__init__.py
@@ -0,0 +1,32 @@
+"""
+Code indexing utilities for the MCP server.
+
+This module provides simple JSON-based indexing optimized for LLM consumption.
+"""
+
+# Import utility functions that are still used
+from .qualified_names import (
+ generate_qualified_name,
+ normalize_file_path
+)
+
+# New JSON-based indexing system
+from .json_index_builder import JSONIndexBuilder, IndexMetadata
+from .json_index_manager import JSONIndexManager, get_index_manager
+from .shallow_index_manager import ShallowIndexManager, get_shallow_index_manager
+from .deep_index_manager import DeepIndexManager
+from .models import SymbolInfo, FileInfo
+
+__all__ = [
+ 'generate_qualified_name',
+ 'normalize_file_path',
+ 'JSONIndexBuilder',
+ 'JSONIndexManager',
+ 'get_index_manager',
+ 'ShallowIndexManager',
+ 'get_shallow_index_manager',
+ 'DeepIndexManager',
+ 'SymbolInfo',
+ 'FileInfo',
+ 'IndexMetadata'
+]
\ No newline at end of file
diff --git a/src/code_index_mcp/indexing/deep_index_manager.py b/src/code_index_mcp/indexing/deep_index_manager.py
new file mode 100644
index 0000000..6558703
--- /dev/null
+++ b/src/code_index_mcp/indexing/deep_index_manager.py
@@ -0,0 +1,46 @@
+"""
+Deep Index Manager - Wrapper around JSONIndexManager for deep indexing.
+
+This class provides a clear semantic separation from the shallow manager.
+It delegates to the existing JSONIndexManager (symbols + files JSON index).
+"""
+
+from __future__ import annotations
+
+from typing import Optional, Dict, Any, List
+
+from .json_index_manager import JSONIndexManager
+
+
+class DeepIndexManager:
+ """Thin wrapper over JSONIndexManager to expose deep-index API."""
+
+ def __init__(self) -> None:
+ self._mgr = JSONIndexManager()
+
+ # Expose a subset of API to keep callers simple
+ def set_project_path(self, project_path: str) -> bool:
+ return self._mgr.set_project_path(project_path)
+
+ def build_index(self, force_rebuild: bool = False) -> bool:
+ return self._mgr.build_index(force_rebuild=force_rebuild)
+
+ def load_index(self) -> bool:
+ return self._mgr.load_index()
+
+ def refresh_index(self) -> bool:
+ return self._mgr.refresh_index()
+
+ def find_files(self, pattern: str = "*") -> List[str]:
+ return self._mgr.find_files(pattern)
+
+ def get_file_summary(self, file_path: str) -> Optional[Dict[str, Any]]:
+ return self._mgr.get_file_summary(file_path)
+
+ def get_index_stats(self) -> Dict[str, Any]:
+ return self._mgr.get_index_stats()
+
+ def cleanup(self) -> None:
+ self._mgr.cleanup()
+
+
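The wrapper above forwards each call explicitly, which keeps the exposed deep-index surface obvious at a glance. For comparison, a generic alternative in plain Python (not used in this codebase) is whitelist-based `__getattr__` delegation:

```python
class _Backend:
    """Toy stand-in for a manager the wrapper delegates to."""
    def build_index(self, force_rebuild=False):
        return f"built (force={force_rebuild})"
    def cleanup(self):
        return "cleaned"

class ThinWrapper:
    """Facade that forwards only a whitelisted subset of calls."""
    _exposed = {"build_index"}

    def __init__(self, backend):
        self._backend = backend

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails; forward
        # whitelisted names, hide everything else.
        if name in self._exposed:
            return getattr(self._backend, name)
        raise AttributeError(name)

w = ThinWrapper(_Backend())
```

Explicit methods, as in `DeepIndexManager`, trade a little boilerplate for better discoverability and type checking.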
diff --git a/src/code_index_mcp/indexing/index_provider.py b/src/code_index_mcp/indexing/index_provider.py
new file mode 100644
index 0000000..660bb8d
--- /dev/null
+++ b/src/code_index_mcp/indexing/index_provider.py
@@ -0,0 +1,125 @@
+"""
+Index provider interface definitions.
+
+Defines standard interfaces for all index access, ensuring consistency across different implementations.
+"""
+
+from typing import List, Optional, Dict, Any, Protocol
+from dataclasses import dataclass
+
+from .models import SymbolInfo, FileInfo
+
+
+@dataclass
+class IndexMetadata:
+ """Standard index metadata structure."""
+ version: str
+ format_type: str
+ created_at: float
+ last_updated: float
+ file_count: int
+ project_root: str
+ tool_version: str
+
+
+class IIndexProvider(Protocol):
+ """
+ Standard index provider interface.
+
+ All index implementations must follow this interface to ensure consistent access patterns.
+ """
+
+ def get_file_list(self) -> List[FileInfo]:
+ """
+ Get list of all indexed files.
+
+ Returns:
+ List of file information objects
+ """
+ ...
+
+ def get_file_info(self, file_path: str) -> Optional[FileInfo]:
+ """
+ Get information for a specific file.
+
+ Args:
+ file_path: Relative file path
+
+ Returns:
+ File information, or None if file is not in index
+ """
+ ...
+
+ def query_symbols(self, file_path: str) -> List[SymbolInfo]:
+ """
+ Query symbol information in a file.
+
+ Args:
+ file_path: Relative file path
+
+ Returns:
+ List of symbol information objects
+ """
+ ...
+
+ def search_files(self, pattern: str) -> List[str]:
+ """
+ Search files by pattern.
+
+ Args:
+ pattern: Glob pattern or regular expression
+
+ Returns:
+ List of matching file paths
+ """
+ ...
+
+ def get_metadata(self) -> IndexMetadata:
+ """
+ Get index metadata.
+
+ Returns:
+ Index metadata information
+ """
+ ...
+
+ def is_available(self) -> bool:
+ """
+ Check if index is available.
+
+ Returns:
+ True if index is available and functional
+ """
+ ...
+
+
+class IIndexManager(Protocol):
+ """
+ Index manager interface.
+
+ Defines standard interface for index lifecycle management.
+ """
+
+ def initialize(self) -> bool:
+ """Initialize the index manager."""
+ ...
+
+ def get_provider(self) -> Optional[IIndexProvider]:
+ """Get the current active index provider."""
+ ...
+
+ def refresh_index(self, force: bool = False) -> bool:
+ """Refresh the index."""
+ ...
+
+ def save_index(self) -> bool:
+ """Save index state."""
+ ...
+
+ def clear_index(self) -> None:
+ """Clear index state."""
+ ...
+
+ def get_index_status(self) -> Dict[str, Any]:
+ """Get index status information."""
+ ...
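Because `typing.Protocol` classes are satisfied structurally, a concrete provider never needs to inherit from `IIndexProvider`. A minimal illustration using a simplified one-method protocol (not the full interface above):

```python
from typing import List, Protocol, runtime_checkable

@runtime_checkable
class FileLister(Protocol):
    """Simplified stand-in for IIndexProvider with a single method."""
    def search_files(self, pattern: str) -> List[str]: ...

class InMemoryProvider:
    # No inheritance needed - matching the method shape is enough.
    def __init__(self, files: List[str]):
        self._files = files

    def search_files(self, pattern: str) -> List[str]:
        import fnmatch
        return [f for f in self._files if fnmatch.fnmatch(f, pattern)]

provider = InMemoryProvider(["a.py", "b.js"])
```

Note that `@runtime_checkable` only verifies method *presence* at `isinstance` time, not signatures; static checkers like mypy enforce the full shape.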
diff --git a/src/code_index_mcp/indexing/json_index_builder.py b/src/code_index_mcp/indexing/json_index_builder.py
new file mode 100644
index 0000000..c12d694
--- /dev/null
+++ b/src/code_index_mcp/indexing/json_index_builder.py
@@ -0,0 +1,430 @@
+"""
+JSON Index Builder - Clean implementation using Strategy pattern.
+
+This replaces the monolithic parser implementation with a clean,
+maintainable Strategy pattern architecture.
+"""
+
+import logging
+import os
+import time
+from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor, as_completed
+from dataclasses import dataclass, asdict
+from pathlib import Path
+from typing import Dict, List, Optional, Any, Tuple
+
+from .strategies import StrategyFactory
+from .models import SymbolInfo, FileInfo
+
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class IndexMetadata:
+ """Metadata for the JSON index."""
+ project_path: str
+ indexed_files: int
+ index_version: str
+ timestamp: str
+ languages: List[str]
+ total_symbols: int = 0
+ specialized_parsers: int = 0
+ fallback_files: int = 0
+
+
+class JSONIndexBuilder:
+ """
+ Main index builder using Strategy pattern for language parsing.
+
+ This class orchestrates the index building process by:
+ 1. Discovering files in the project
+ 2. Using StrategyFactory to get appropriate parsers
+ 3. Extracting symbols and metadata
+ 4. Assembling the final JSON index
+ """
+
+ def __init__(self, project_path: str, additional_excludes: Optional[List[str]] = None):
+ from ..utils import FileFilter
+
+ # Input validation
+ if not isinstance(project_path, str):
+ raise ValueError(f"Project path must be a string, got {type(project_path)}")
+
+ project_path = project_path.strip()
+ if not project_path:
+ raise ValueError("Project path cannot be empty")
+
+ if not os.path.isdir(project_path):
+ raise ValueError(f"Project path does not exist: {project_path}")
+
+ self.project_path = project_path
+ self.in_memory_index: Optional[Dict[str, Any]] = None
+ self.strategy_factory = StrategyFactory()
+ self.file_filter = FileFilter(additional_excludes)
+
+ logger.info(f"Initialized JSON index builder for {project_path}")
+ strategy_info = self.strategy_factory.get_strategy_info()
+ logger.info(f"Available parsing strategies: {len(strategy_info)} types")
+
+ # Log specialized vs fallback coverage
+ specialized = len(self.strategy_factory.get_specialized_extensions())
+ fallback = len(self.strategy_factory.get_fallback_extensions())
+ logger.info(f"Specialized parsers: {specialized} extensions, Fallback coverage: {fallback} extensions")
+
+ def _process_file(self, file_path: str, specialized_extensions: set) -> Optional[Tuple[Dict, Dict, str, bool]]:
+ """
+ Process a single file - designed for parallel execution.
+
+ Args:
+ file_path: Path to the file to process
+ specialized_extensions: Set of extensions with specialized parsers
+
+ Returns:
+ Tuple of (symbols, file_info, language, is_specialized) or None on error
+ """
+ try:
+ with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
+ content = f.read()
+
+ ext = Path(file_path).suffix.lower()
+ rel_path = os.path.relpath(file_path, self.project_path).replace('\\', '/')
+
+ # Get appropriate strategy
+ strategy = self.strategy_factory.get_strategy(ext)
+
+ # Track strategy usage
+ is_specialized = ext in specialized_extensions
+
+ # Parse file using strategy
+ symbols, file_info = strategy.parse_file(rel_path, content)
+
+ logger.debug(f"Parsed {rel_path}: {len(symbols)} symbols ({file_info.language})")
+
+ return (symbols, {rel_path: file_info}, file_info.language, is_specialized)
+
+ except Exception as e:
+ logger.warning(f"Error processing {file_path}: {e}")
+ return None
+
+ def build_index(self, parallel: bool = True, max_workers: Optional[int] = None) -> Dict[str, Any]:
+ """
+ Build the complete index using Strategy pattern with parallel processing.
+
+ Args:
+ parallel: Whether to use parallel processing (default: True)
+ max_workers: Maximum number of worker processes/threads (default: CPU count)
+
+ Returns:
+ Complete JSON index with metadata, symbols, and file information
+ """
+ logger.info(f"Building JSON index using Strategy pattern (parallel={parallel})...")
+ start_time = time.time()
+
+ all_symbols = {}
+ all_files = {}
+ languages = set()
+ specialized_count = 0
+ fallback_count = 0
+
+ # Get specialized extensions for tracking
+ specialized_extensions = set(self.strategy_factory.get_specialized_extensions())
+
+ # Get list of files to process
+ files_to_process = self._get_supported_files()
+ total_files = len(files_to_process)
+
+ if total_files == 0:
+ logger.warning("No files to process")
+ return self._create_empty_index()
+
+ logger.info(f"Processing {total_files} files...")
+
+ if parallel and total_files > 1:
+ # Use ThreadPoolExecutor for I/O-bound file reading
+ # ProcessPoolExecutor has issues with strategy sharing
+ if max_workers is None:
+ max_workers = min(os.cpu_count() or 4, total_files)
+
+ logger.info(f"Using parallel processing with {max_workers} workers")
+
+ with ThreadPoolExecutor(max_workers=max_workers) as executor:
+ # Submit all tasks
+ future_to_file = {
+ executor.submit(self._process_file, file_path, specialized_extensions): file_path
+ for file_path in files_to_process
+ }
+
+ # Process completed tasks
+ processed = 0
+ for future in as_completed(future_to_file):
+ file_path = future_to_file[future]
+ result = future.result()
+
+ if result:
+ symbols, file_info_dict, language, is_specialized = result
+ all_symbols.update(symbols)
+ all_files.update(file_info_dict)
+ languages.add(language)
+
+ if is_specialized:
+ specialized_count += 1
+ else:
+ fallback_count += 1
+
+ processed += 1
+ if processed % 100 == 0:
+ logger.debug(f"Processed {processed}/{total_files} files")
+ else:
+ # Sequential processing
+ logger.info("Using sequential processing")
+ for file_path in files_to_process:
+ result = self._process_file(file_path, specialized_extensions)
+ if result:
+ symbols, file_info_dict, language, is_specialized = result
+ all_symbols.update(symbols)
+ all_files.update(file_info_dict)
+ languages.add(language)
+
+ if is_specialized:
+ specialized_count += 1
+ else:
+ fallback_count += 1
+
+ # Build index metadata
+ metadata = IndexMetadata(
+ project_path=self.project_path,
+ indexed_files=len(all_files),
+ index_version="2.0.0-strategy",
+ timestamp=time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
+ languages=sorted(list(languages)),
+ total_symbols=len(all_symbols),
+ specialized_parsers=specialized_count,
+ fallback_files=fallback_count
+ )
+
+ # Assemble final index
+ index = {
+ "metadata": asdict(metadata),
+ "symbols": {k: asdict(v) for k, v in all_symbols.items()},
+ "files": {k: asdict(v) for k, v in all_files.items()}
+ }
+
+ # Cache in memory
+ self.in_memory_index = index
+
+ elapsed = time.time() - start_time
+ logger.info(f"Built index with {len(all_symbols)} symbols from {len(all_files)} files in {elapsed:.2f}s")
+ logger.info(f"Languages detected: {sorted(languages)}")
+ logger.info(f"Strategy usage: {specialized_count} specialized, {fallback_count} fallback")
+
+ return index
+
+ def _create_empty_index(self) -> Dict[str, Any]:
+ """Create an empty index structure."""
+ metadata = IndexMetadata(
+ project_path=self.project_path,
+ indexed_files=0,
+ index_version="2.0.0-strategy",
+ timestamp=time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
+ languages=[],
+ total_symbols=0,
+ specialized_parsers=0,
+ fallback_files=0
+ )
+
+ return {
+ "metadata": asdict(metadata),
+ "symbols": {},
+ "files": {}
+ }
+
+ def get_index(self) -> Optional[Dict[str, Any]]:
+ """Get the current in-memory index."""
+ return self.in_memory_index
+
+ def clear_index(self):
+ """Clear the in-memory index."""
+ self.in_memory_index = None
+ logger.debug("Cleared in-memory index")
+
+ def _get_supported_files(self) -> List[str]:
+ """
+ Get all supported files in the project using centralized filtering.
+
+ Returns:
+ List of file paths that can be parsed
+ """
+ supported_files = []
+ base_path = Path(self.project_path)
+
+ try:
+ for root, dirs, files in os.walk(self.project_path):
+ # Filter directories in-place using centralized logic
+ dirs[:] = [d for d in dirs if not self.file_filter.should_exclude_directory(d)]
+
+ # Filter files using centralized logic
+ for file in files:
+ file_path = Path(root) / file
+ if self.file_filter.should_process_path(file_path, base_path):
+ supported_files.append(str(file_path))
+
+ except Exception as e:
+ logger.error(f"Error scanning directory {self.project_path}: {e}")
+
+ logger.debug(f"Found {len(supported_files)} supported files")
+ return supported_files
+
+ def build_shallow_file_list(self) -> List[str]:
+ """
+ Build a minimal shallow index consisting of relative file paths only.
+
+ This method does not read file contents. It enumerates supported files
+ using centralized filtering and returns normalized relative paths with
+ forward slashes for cross-platform consistency.
+
+ Returns:
+ List of relative file paths (using '/').
+ """
+ try:
+ absolute_files = self._get_supported_files()
+ result: List[str] = []
+ for abs_path in absolute_files:
+ rel_path = os.path.relpath(abs_path, self.project_path).replace('\\', '/')
+ # Normalize leading './'
+ if rel_path.startswith('./'):
+ rel_path = rel_path[2:]
+ result.append(rel_path)
+ return result
+ except Exception as e:
+ logger.error(f"Failed to build shallow file list: {e}")
+ return []
+
+ def save_index(self, index: Dict[str, Any], index_path: str) -> bool:
+ """
+ Save index to disk.
+
+ Args:
+ index: Index data to save
+ index_path: Path where to save the index
+
+ Returns:
+ True if successful, False otherwise
+ """
+ try:
+ import json
+ with open(index_path, 'w', encoding='utf-8') as f:
+ json.dump(index, f, indent=2, ensure_ascii=False)
+ logger.info(f"Saved index to {index_path}")
+ return True
+ except Exception as e:
+ logger.error(f"Failed to save index to {index_path}: {e}")
+ return False
+
+ def load_index(self, index_path: str) -> Optional[Dict[str, Any]]:
+ """
+ Load index from disk.
+
+ Args:
+ index_path: Path to the index file
+
+ Returns:
+ Index data if successful, None otherwise
+ """
+ try:
+ if not os.path.exists(index_path):
+ logger.debug(f"Index file not found: {index_path}")
+ return None
+
+ import json
+ with open(index_path, 'r', encoding='utf-8') as f:
+ index = json.load(f)
+
+ # Cache in memory
+ self.in_memory_index = index
+ logger.info(f"Loaded index from {index_path}")
+ return index
+
+ except Exception as e:
+ logger.error(f"Failed to load index from {index_path}: {e}")
+ return None
+
+ def get_parsing_statistics(self) -> Dict[str, Any]:
+ """
+ Get detailed statistics about parsing capabilities.
+
+ Returns:
+ Dictionary with parsing statistics and strategy information
+ """
+ strategy_info = self.strategy_factory.get_strategy_info()
+
+ return {
+ "total_strategies": len(strategy_info),
+ "specialized_languages": [lang for lang in strategy_info.keys() if not lang.startswith('fallback_')],
+ "fallback_languages": [lang.replace('fallback_', '') for lang in strategy_info.keys() if lang.startswith('fallback_')],
+ "total_extensions": len(self.strategy_factory.get_all_supported_extensions()),
+ "specialized_extensions": len(self.strategy_factory.get_specialized_extensions()),
+ "fallback_extensions": len(self.strategy_factory.get_fallback_extensions()),
+ "strategy_details": strategy_info
+ }
+
+ def get_file_symbols(self, file_path: str) -> List[Dict[str, Any]]:
+ """
+ Get symbols for a specific file.
+
+ Args:
+ file_path: Relative path to the file
+
+ Returns:
+ List of symbols in the file
+ """
+ if not self.in_memory_index:
+ logger.warning("Index not loaded")
+ return []
+
+ try:
+ # Normalize file path
+ file_path = file_path.replace('\\', '/')
+ if file_path.startswith('./'):
+ file_path = file_path[2:]
+
+ # Get file info
+ file_info = self.in_memory_index["files"].get(file_path)
+ if not file_info:
+ logger.warning(f"File not found in index: {file_path}")
+ return []
+
+ # Work directly with global symbols for this file
+ global_symbols = self.in_memory_index.get("symbols", {})
+ result = []
+
+ # Find all symbols for this file directly from global symbols
+ for symbol_id, symbol_data in global_symbols.items():
+ symbol_file = symbol_data.get("file", "").replace("\\", "/")
+
+ # Check if this symbol belongs to our file
+ if symbol_file == file_path:
+ symbol_type = symbol_data.get("type", "unknown")
+ symbol_name = symbol_id.split("::")[-1] # Extract symbol name from ID
+
+ # Create symbol info
+ symbol_info = {
+ "name": symbol_name,
+ "called_by": symbol_data.get("called_by", []),
+ "line": symbol_data.get("line"),
+ "signature": symbol_data.get("signature")
+ }
+
+ # Keep functions, methods, and classes
+ if symbol_type in ("function", "method", "class"):
+ result.append(symbol_info)
+
+ # Sort by line number for consistent ordering (guard against None lines)
+ result.sort(key=lambda x: x.get("line") or 0)
+
+ return result
+
+ except Exception as e:
+ logger.error(f"Error getting file symbols for {file_path}: {e}")
+ return []
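`save_index` and `load_index` persist the assembled index as plain JSON. A standalone sketch of that round-trip, using an illustrative hand-written index in the three-part shape `build_index` produces (the contents are made up, not output of the real parsers):

```python
import json
import os
import tempfile

# Illustrative index mirroring the shape assembled by build_index()
index = {
    "metadata": {"project_path": "/tmp/demo", "indexed_files": 1,
                 "index_version": "2.0.0-strategy", "languages": ["python"]},
    "symbols": {"app.main": {"type": "function", "file": "app.py", "line": 1}},
    "files": {"app.py": {"language": "python", "line_count": 10}},
}

path = os.path.join(tempfile.mkdtemp(), "index.json")

# save_index(): UTF-8 JSON, pretty-printed, non-ASCII preserved
with open(path, 'w', encoding='utf-8') as f:
    json.dump(index, f, indent=2, ensure_ascii=False)

# load_index(): read it back (the real method also caches it in memory)
with open(path, 'r', encoding='utf-8') as f:
    loaded = json.load(f)

assert loaded == index
```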
diff --git a/src/code_index_mcp/indexing/json_index_manager.py b/src/code_index_mcp/indexing/json_index_manager.py
new file mode 100644
index 0000000..ec320e4
--- /dev/null
+++ b/src/code_index_mcp/indexing/json_index_manager.py
@@ -0,0 +1,465 @@
+"""
+JSON Index Manager - Manages the lifecycle of the JSON-based index.
+
+This replaces the SCIP unified_index_manager with a simpler approach
+focused on fast JSON-based indexing and querying.
+"""
+
+import hashlib
+import json
+import logging
+import os
+import re
+import tempfile
+import threading
+import fnmatch
+from pathlib import Path
+from typing import Dict, List, Optional, Any
+
+from .json_index_builder import JSONIndexBuilder
+from ..constants import SETTINGS_DIR, INDEX_FILE, INDEX_FILE_SHALLOW
+
+logger = logging.getLogger(__name__)
+
+
+class JSONIndexManager:
+ """Manages JSON-based code index lifecycle and storage."""
+
+ def __init__(self):
+ self.project_path: Optional[str] = None
+ self.index_builder: Optional[JSONIndexBuilder] = None
+ self.temp_dir: Optional[str] = None
+ self.index_path: Optional[str] = None
+ self.shallow_index_path: Optional[str] = None
+ self._shallow_file_list: Optional[List[str]] = None
+ self._lock = threading.RLock()
+ logger.info("Initialized JSON Index Manager")
+
+ def set_project_path(self, project_path: str) -> bool:
+ """Set the project path and initialize index storage."""
+ with self._lock:
+ try:
+ # Input validation
+ if not project_path or not isinstance(project_path, str):
+ logger.error(f"Invalid project path: {project_path}")
+ return False
+
+ project_path = project_path.strip()
+ if not project_path:
+ logger.error("Project path cannot be empty")
+ return False
+
+ if not os.path.isdir(project_path):
+ logger.error(f"Project path does not exist: {project_path}")
+ return False
+
+ self.project_path = project_path
+ self.index_builder = JSONIndexBuilder(project_path)
+
+ # Create temp directory for index storage
+ project_hash = hashlib.md5(project_path.encode()).hexdigest()[:12]
+ self.temp_dir = os.path.join(tempfile.gettempdir(), SETTINGS_DIR, project_hash)
+ os.makedirs(self.temp_dir, exist_ok=True)
+
+ self.index_path = os.path.join(self.temp_dir, INDEX_FILE)
+ self.shallow_index_path = os.path.join(self.temp_dir, INDEX_FILE_SHALLOW)
+
+ logger.info(f"Set project path: {project_path}")
+ logger.info(f"Index storage: {self.index_path}")
+ return True
+
+ except Exception as e:
+ logger.error(f"Failed to set project path: {e}")
+ return False
+
+ def build_index(self, force_rebuild: bool = False) -> bool:
+ """Build or rebuild the index."""
+ with self._lock:
+ if not self.index_builder or not self.project_path:
+ logger.error("Index builder not initialized")
+ return False
+
+ try:
+ # Check if we need to rebuild
+ if not force_rebuild and self._is_index_fresh():
+ logger.info("Index is fresh, skipping rebuild")
+ return True
+
+ logger.info("Building JSON index...")
+ index = self.index_builder.build_index()
+
+ # Save to disk
+ self.index_builder.save_index(index, self.index_path)
+
+ logger.info(f"Successfully built index with {len(index['symbols'])} symbols")
+ return True
+
+ except Exception as e:
+ logger.error(f"Failed to build index: {e}")
+ return False
+
+ def load_index(self) -> bool:
+ """Load existing index from disk."""
+ with self._lock:
+ if not self.index_builder or not self.index_path:
+ logger.error("Index manager not initialized")
+ return False
+
+ try:
+ index = self.index_builder.load_index(self.index_path)
+ if index:
+ logger.info(f"Loaded index with {len(index['symbols'])} symbols")
+ return True
+ else:
+ logger.warning("No existing index found")
+ return False
+
+ except Exception as e:
+ logger.error(f"Failed to load index: {e}")
+ return False
+
+ def build_shallow_index(self) -> bool:
+ """Build and save the minimal shallow index (file list)."""
+ with self._lock:
+ if not self.index_builder or not self.project_path or not self.shallow_index_path:
+ logger.error("Index builder not initialized for shallow index")
+ return False
+
+ try:
+ file_list = self.index_builder.build_shallow_file_list()
+ # Persist as a JSON array for minimal overhead
+ with open(self.shallow_index_path, 'w', encoding='utf-8') as f:
+ json.dump(file_list, f, ensure_ascii=False)
+ self._shallow_file_list = file_list
+ logger.info(f"Saved shallow index with {len(file_list)} files to {self.shallow_index_path}")
+ return True
+ except Exception as e:
+ logger.error(f"Failed to build shallow index: {e}")
+ return False
+
+ def load_shallow_index(self) -> bool:
+ """Load shallow index (file list) from disk into memory."""
+ with self._lock:
+ try:
+ if not self.shallow_index_path or not os.path.exists(self.shallow_index_path):
+ logger.warning("No existing shallow index found")
+ return False
+ with open(self.shallow_index_path, 'r', encoding='utf-8') as f:
+ data = json.load(f)
+ if not isinstance(data, list):
+ logger.error("Shallow index format invalid (expected list)")
+ return False
+ # Normalize paths
+ normalized = []
+ for p in data:
+ if isinstance(p, str):
+ q = p.replace('\\\\', '/').replace('\\', '/')
+ if q.startswith('./'):
+ q = q[2:]
+ normalized.append(q)
+ self._shallow_file_list = normalized
+ logger.info(f"Loaded shallow index with {len(normalized)} files")
+ return True
+ except Exception as e:
+ logger.error(f"Failed to load shallow index: {e}")
+ return False
+
+ def refresh_index(self) -> bool:
+ """Refresh the index (rebuild and reload)."""
+ with self._lock:
+ logger.info("Refreshing index...")
+ if self.build_index(force_rebuild=True):
+ return self.load_index()
+ return False
+
+ def find_files(self, pattern: str = "*") -> List[str]:
+ """
+ Find files matching a glob pattern using the SHALLOW file list only.
+
+ Notes:
+ - '*' does not cross '/'
+ - '**' matches across directories
+ - Always sources from the shallow index for consistency and speed
+ """
+ with self._lock:
+ # Input validation
+ if not isinstance(pattern, str):
+ logger.error(f"Pattern must be a string, got {type(pattern)}")
+ return []
+
+ pattern = pattern.strip()
+ if not pattern:
+ pattern = "*"
+
+ # Normalize to forward slashes
+ norm_pattern = pattern.replace('\\\\', '/').replace('\\', '/')
+
+ # Build glob regex: '*' does not cross '/', '**' crosses directories
+ regex = self._compile_glob_regex(norm_pattern)
+
+ # Always use shallow index for file discovery
+ try:
+ if self._shallow_file_list is None:
+ # Try load existing shallow index; if missing, build then load
+ if not self.load_shallow_index():
+ # If still not available, attempt to build
+ if self.build_shallow_index():
+ self.load_shallow_index()
+
+ files = list(self._shallow_file_list or [])
+
+ if norm_pattern == "*":
+ return files
+
+ return [f for f in files if regex.match(f) is not None]
+
+ except Exception as e:
+ logger.error(f"Error finding files: {e}")
+ return []
+
+ def get_file_summary(self, file_path: str) -> Optional[Dict[str, Any]]:
+ """
+ Get summary information for a file.
+
+ This method retrieves comprehensive file information including
+ symbol counts, functions, classes, methods, and imports. If the index
+ is not loaded, it attempts to load the most recent cached index from
+ disk before giving up.
+
+ Args:
+ file_path: Relative path to the file
+
+ Returns:
+ Dictionary containing file summary information, or None if not found
+ """
+ with self._lock:
+ # Input validation
+ if not isinstance(file_path, str):
+ logger.error(f"File path must be a string, got {type(file_path)}")
+ return None
+
+ file_path = file_path.strip()
+ if not file_path:
+ logger.error("File path cannot be empty")
+ return None
+
+ # Try to load cached index if not ready
+ if not self.index_builder or not self.index_builder.in_memory_index:
+ if not self._try_load_cached_index():
+ logger.warning("Index not loaded and no cached index available")
+ return None
+
+ try:
+ # Normalize file path
+ file_path = file_path.replace('\\', '/')
+ if file_path.startswith('./'):
+ file_path = file_path[2:]
+
+ # Get file info
+ file_info = self.index_builder.in_memory_index["files"].get(file_path)
+ if not file_info:
+ logger.warning(f"File not found in index: {file_path}")
+ return None
+
+ # Get symbols in file
+ symbols = self.index_builder.get_file_symbols(file_path)
+
+ # Categorize symbols by signature
+ functions = []
+ classes = []
+ methods = []
+
+ for s in symbols:
+ signature = s.get("signature", "")
+ if signature:
+ if signature.startswith("def ") and "::" in signature:
+ # Method: contains class context
+ methods.append(s)
+ elif signature.startswith("def "):
+ # Function: starts with def but no class context
+ functions.append(s)
+ elif signature.startswith("class "):
+ # Class definition (the no-signature case is handled below)
+ classes.append(s)
+ else:
+ # Default to function for unknown signatures
+ functions.append(s)
+ else:
+ # No signature - try to infer from name patterns or default to function
+ name = s.get("name", "")
+ if name and name[0].isupper():
+ # Capitalized names are likely classes
+ classes.append(s)
+ else:
+ # Default to function
+ functions.append(s)
+
+ return {
+ "file_path": file_path,
+ "language": file_info["language"],
+ "line_count": file_info["line_count"],
+ "symbol_count": len(symbols),
+ "functions": functions,
+ "classes": classes,
+ "methods": methods,
+ "imports": file_info.get("imports", []),
+ "exports": file_info.get("exports", [])
+ }
+
+ except Exception as e:
+ logger.error(f"Error getting file summary: {e}")
+ return None
+
+ def get_index_stats(self) -> Dict[str, Any]:
+ """Get statistics about the current index."""
+ with self._lock:
+ if not self.index_builder or not self.index_builder.in_memory_index:
+ return {"status": "not_loaded"}
+
+ try:
+ index = self.index_builder.in_memory_index
+ metadata = index["metadata"]
+
+ symbol_counts = {}
+ for symbol_data in index["symbols"].values():
+ symbol_type = symbol_data.get("type", "unknown")
+ symbol_counts[symbol_type] = symbol_counts.get(symbol_type, 0) + 1
+
+ return {
+ "status": "loaded",
+ "project_path": metadata["project_path"],
+ "indexed_files": metadata["indexed_files"],
+ "total_symbols": len(index["symbols"]),
+ "symbol_types": symbol_counts,
+ "languages": metadata["languages"],
+ "index_version": metadata["index_version"],
+ "timestamp": metadata["timestamp"]
+ }
+
+ except Exception as e:
+ logger.error(f"Error getting index stats: {e}")
+ return {"status": "error", "error": str(e)}
+
+ def _is_index_fresh(self) -> bool:
+ """Check if the current index is fresh."""
+ if not self.index_path or not os.path.exists(self.index_path):
+ return False
+
+ try:
+ from code_index_mcp.utils.file_filter import FileFilter as _FileFilter # pylint: disable=C0415
+ file_filter = _FileFilter()
+
+ # Simple freshness check - index exists and is recent
+ index_mtime = os.path.getmtime(self.index_path)
+ base_path = Path(self.project_path)
+
+ # Check if any source files are newer than index
+ for root, dirs, files in os.walk(self.project_path):
+ # Filter directories using centralized logic
+ dirs[:] = [d for d in dirs if not file_filter.should_exclude_directory(d)]
+
+ for file in files:
+ file_path = Path(root) / file
+ if file_filter.should_process_path(file_path, base_path):
+ if os.path.getmtime(str(file_path)) > index_mtime:
+ return False
+
+ return True
+
+ except Exception as e:
+ logger.warning(f"Error checking index freshness: {e}")
+ return False
+
+ def _try_load_cached_index(self, expected_project_path: Optional[str] = None) -> bool:
+ """
+ Try to load a cached index file if available.
+
+ This is a simplified version of auto-initialization that only loads
+ a cached index if we can verify it matches the expected project.
+
+ Args:
+ expected_project_path: Optional path to verify against cached index
+
+ Returns:
+ True if cached index was loaded successfully, False otherwise.
+ """
+ try:
+ # First try to load from current index_path if set
+ if self.index_path and os.path.exists(self.index_path):
+ return self.load_index()
+
+ # If expected project path provided, try to find its cache
+ if expected_project_path:
+ project_hash = hashlib.md5(expected_project_path.encode()).hexdigest()[:12]
+ temp_dir = os.path.join(tempfile.gettempdir(), SETTINGS_DIR, project_hash)
+ index_path = os.path.join(temp_dir, INDEX_FILE)
+
+ if os.path.exists(index_path):
+ # Verify the cached index matches the expected project
+ with open(index_path, 'r', encoding='utf-8') as f:
+ index_data = json.load(f)
+ cached_project = index_data.get('metadata', {}).get('project_path')
+
+ if cached_project == expected_project_path:
+ self.temp_dir = temp_dir
+ self.index_path = index_path
+ return self.load_index()
+ else:
+ logger.warning(f"Cached index project mismatch: {cached_project} != {expected_project_path}")
+
+ return False
+
+ except Exception as e:
+ logger.debug(f"Failed to load cached index: {e}")
+ return False
+
+ def cleanup(self):
+ """Clean up resources."""
+ with self._lock:
+ self.project_path = None
+ self.index_builder = None
+ self.temp_dir = None
+ self.index_path = None
+ self.shallow_index_path = None
+ self._shallow_file_list = None
+ logger.info("Cleaned up JSON Index Manager")
+
+ @staticmethod
+ def _compile_glob_regex(pattern: str) -> re.Pattern:
+ """
+ Compile a glob pattern where '*' does not match '/', and '**' matches across directories.
+
+ Examples:
+ src/*.py -> direct children .py under src
+ **/*.py -> .py at any depth
+ """
+ # Translate glob to regex
+ i = 0
+ out = []
+ special = ".^$+{}[]|()"
+ while i < len(pattern):
+ c = pattern[i]
+ if c == '*':
+ if i + 1 < len(pattern) and pattern[i + 1] == '*':
+ # '**' -> match across directories
+ out.append('.*')
+ i += 2
+ continue
+ else:
+ out.append('[^/]*')
+ elif c == '?':
+ out.append('[^/]')
+ elif c in special:
+ out.append('\\' + c)
+ else:
+ out.append(c)
+ i += 1
+ regex_str = '^' + ''.join(out) + '$'
+ return re.compile(regex_str)
+
+
+# Global instance
+_index_manager = JSONIndexManager()
+
+
+def get_index_manager() -> JSONIndexManager:
+ """Get the global index manager instance."""
+ return _index_manager
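`set_project_path` derives a per-project cache directory from an MD5 hash of the project path. A standalone sketch of that derivation; the `SETTINGS_DIR` value here is a placeholder, since the real constant lives in `code_index_mcp.constants`:

```python
import hashlib
import os
import tempfile

# Placeholder for code_index_mcp.constants.SETTINGS_DIR (actual value not shown here)
SETTINGS_DIR = "code_indexer"

project_path = "/home/user/projects/demo"

# Same derivation as set_project_path(): first 12 hex digits of the path's MD5
project_hash = hashlib.md5(project_path.encode()).hexdigest()[:12]
temp_dir = os.path.join(tempfile.gettempdir(), SETTINGS_DIR, project_hash)

# The mapping is deterministic: the same project path always yields the same cache dir,
# which is what lets _try_load_cached_index() find a previous run's index.
assert temp_dir == os.path.join(tempfile.gettempdir(), SETTINGS_DIR,
                                hashlib.md5(project_path.encode()).hexdigest()[:12])
```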
diff --git a/src/code_index_mcp/indexing/models/__init__.py b/src/code_index_mcp/indexing/models/__init__.py
new file mode 100644
index 0000000..b120a34
--- /dev/null
+++ b/src/code_index_mcp/indexing/models/__init__.py
@@ -0,0 +1,8 @@
+"""
+Model classes for the indexing system.
+"""
+
+from .symbol_info import SymbolInfo
+from .file_info import FileInfo
+
+__all__ = ['SymbolInfo', 'FileInfo']
\ No newline at end of file
diff --git a/src/code_index_mcp/indexing/models/file_info.py b/src/code_index_mcp/indexing/models/file_info.py
new file mode 100644
index 0000000..0678774
--- /dev/null
+++ b/src/code_index_mcp/indexing/models/file_info.py
@@ -0,0 +1,24 @@
+"""
+FileInfo model for representing file metadata.
+"""
+
+from dataclasses import dataclass
+from typing import Dict, List, Optional, Any
+
+
+@dataclass
+class FileInfo:
+ """Information about a source code file."""
+
+ language: str # programming language
+ line_count: int # total lines in file
+ symbols: Dict[str, List[str]] # symbol categories (functions, classes, etc.)
+ imports: List[str] # imported modules/packages
+ exports: Optional[List[str]] = None # exported symbols (for JS/TS modules)
+ package: Optional[str] = None # package name (for Java, Go, etc.)
+ docstring: Optional[str] = None # file-level documentation
+
+ def __post_init__(self):
+ """Initialize mutable defaults."""
+ if self.exports is None:
+ self.exports = []
\ No newline at end of file
diff --git a/src/code_index_mcp/indexing/models/symbol_info.py b/src/code_index_mcp/indexing/models/symbol_info.py
new file mode 100644
index 0000000..1659330
--- /dev/null
+++ b/src/code_index_mcp/indexing/models/symbol_info.py
@@ -0,0 +1,23 @@
+"""
+SymbolInfo model for representing code symbols.
+"""
+
+from dataclasses import dataclass
+from typing import Optional, List
+
+
+@dataclass
+class SymbolInfo:
+ """Information about a code symbol (function, class, method, etc.)."""
+
+ type: str # function, class, method, interface, etc.
+ file: str # file path where symbol is defined
+ line: int # line number where symbol starts
+ signature: Optional[str] = None # function/method signature
+ docstring: Optional[str] = None # documentation string
+ called_by: Optional[List[str]] = None # list of symbols that call this symbol
+
+ def __post_init__(self):
+ """Initialize mutable defaults."""
+ if self.called_by is None:
+ self.called_by = []
\ No newline at end of file
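Because `SymbolInfo` is a plain dataclass, `build_index` can serialize it with `dataclasses.asdict` before the JSON dump. A standalone sketch (the model is restated here so the snippet runs on its own):

```python
from dataclasses import dataclass, asdict
from typing import List, Optional

@dataclass
class SymbolInfo:
    """Standalone copy of the SymbolInfo model above."""
    type: str
    file: str
    line: int
    signature: Optional[str] = None
    docstring: Optional[str] = None
    called_by: Optional[List[str]] = None

    def __post_init__(self):
        # Avoid a shared mutable default: each instance gets its own list
        if self.called_by is None:
            self.called_by = []

s = SymbolInfo(type="function", file="app.py", line=10, signature="def main():")
d = asdict(s)  # build_index() serializes every symbol this way
assert d["called_by"] == []  # __post_init__ replaced None with a fresh list
assert d["line"] == 10
```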
diff --git a/src/code_index_mcp/indexing/qualified_names.py b/src/code_index_mcp/indexing/qualified_names.py
new file mode 100644
index 0000000..18e108c
--- /dev/null
+++ b/src/code_index_mcp/indexing/qualified_names.py
@@ -0,0 +1,49 @@
+"""
+Qualified name generation utilities.
+"""
+import os
+from typing import Optional
+
+
+def normalize_file_path(file_path: str) -> str:
+ """
+ Normalize a file path to use forward slashes and relative paths.
+
+ Args:
+ file_path: The file path to normalize
+
+ Returns:
+ Normalized file path
+ """
+ # Convert to forward slashes and make relative
+ normalized = file_path.replace('\\', '/')
+
+ # Remove leading slash if present
+ if normalized.startswith('/'):
+ normalized = normalized[1:]
+
+ return normalized
+
+
+def generate_qualified_name(file_path: str, symbol_name: str, namespace: Optional[str] = None) -> str:
+ """
+ Generate a qualified name for a symbol.
+
+ Args:
+ file_path: Path to the file containing the symbol
+ symbol_name: Name of the symbol
+ namespace: Optional namespace/module context
+
+ Returns:
+ Qualified name for the symbol
+ """
+ normalized_path = normalize_file_path(file_path)
+
+ # Remove file extension for module-like name
+ base_name = os.path.splitext(normalized_path)[0]
+ module_path = base_name.replace('/', '.')
+
+ if namespace:
+ return f"{module_path}.{namespace}.{symbol_name}"
+ else:
+ return f"{module_path}.{symbol_name}"
\ No newline at end of file
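The two helpers above can be exercised standalone (restated here so the examples run on their own):

```python
import os
from typing import Optional

def normalize_file_path(file_path: str) -> str:
    """Standalone copy of normalize_file_path above."""
    normalized = file_path.replace('\\', '/')
    if normalized.startswith('/'):
        normalized = normalized[1:]
    return normalized

def generate_qualified_name(file_path: str, symbol_name: str,
                            namespace: Optional[str] = None) -> str:
    """Standalone copy of generate_qualified_name above."""
    normalized_path = normalize_file_path(file_path)
    # Drop the extension and turn the path into a dotted module prefix
    base_name = os.path.splitext(normalized_path)[0]
    module_path = base_name.replace('/', '.')
    if namespace:
        return f"{module_path}.{namespace}.{symbol_name}"
    return f"{module_path}.{symbol_name}"

# Plain symbol: the file path becomes a dotted module prefix
assert generate_qualified_name("src/app/utils.py", "foo") == "src.app.utils.foo"
# Windows-style separators and a class namespace normalize the same way
assert generate_qualified_name("src\\app\\utils.py", "foo", "Helper") == "src.app.utils.Helper.foo"
```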
diff --git a/src/code_index_mcp/indexing/shallow_index_manager.py b/src/code_index_mcp/indexing/shallow_index_manager.py
new file mode 100644
index 0000000..530c593
--- /dev/null
+++ b/src/code_index_mcp/indexing/shallow_index_manager.py
@@ -0,0 +1,155 @@
+"""
+Shallow Index Manager - Manages a minimal file-list-only index.
+
+This manager builds and loads a shallow index consisting of relative file
+paths only. It is optimized for fast initialization and filename-based
+search/browsing. Content parsing and symbol extraction are not performed.
+"""
+
+from __future__ import annotations
+
+import hashlib
+import json
+import logging
+import os
+import tempfile
+import threading
+from typing import List, Optional
+import re
+
+from .json_index_builder import JSONIndexBuilder
+from ..constants import SETTINGS_DIR, INDEX_FILE_SHALLOW
+
+logger = logging.getLogger(__name__)
+
+
+class ShallowIndexManager:
+ """Manage shallow (file-list) index lifecycle and storage."""
+
+ def __init__(self) -> None:
+ self.project_path: Optional[str] = None
+ self.index_builder: Optional[JSONIndexBuilder] = None
+ self.temp_dir: Optional[str] = None
+ self.index_path: Optional[str] = None
+ self._file_list: Optional[List[str]] = None
+ self._lock = threading.RLock()
+
+ def set_project_path(self, project_path: str) -> bool:
+ with self._lock:
+ try:
+ if not isinstance(project_path, str) or not project_path.strip():
+ logger.error("Invalid project path for shallow index")
+ return False
+ project_path = project_path.strip()
+ if not os.path.isdir(project_path):
+ logger.error(f"Project path does not exist: {project_path}")
+ return False
+
+ self.project_path = project_path
+ self.index_builder = JSONIndexBuilder(project_path)
+
+ project_hash = hashlib.md5(project_path.encode()).hexdigest()[:12]
+ self.temp_dir = os.path.join(tempfile.gettempdir(), SETTINGS_DIR, project_hash)
+ os.makedirs(self.temp_dir, exist_ok=True)
+ self.index_path = os.path.join(self.temp_dir, INDEX_FILE_SHALLOW)
+ return True
+ except Exception as e: # noqa: BLE001 - centralized logging
+ logger.error(f"Failed to set project path (shallow): {e}")
+ return False
+
+ def build_index(self) -> bool:
+ """Build and persist the shallow file list index."""
+ with self._lock:
+ if not self.index_builder or not self.index_path:
+ logger.error("ShallowIndexManager not initialized")
+ return False
+ try:
+ file_list = self.index_builder.build_shallow_file_list()
+ with open(self.index_path, 'w', encoding='utf-8') as f:
+ json.dump(file_list, f, ensure_ascii=False)
+ self._file_list = file_list
+ logger.info(f"Built shallow index with {len(file_list)} files")
+ return True
+ except Exception as e: # noqa: BLE001
+ logger.error(f"Failed to build shallow index: {e}")
+ return False
+
+ def load_index(self) -> bool:
+ """Load shallow index from disk to memory."""
+ with self._lock:
+ try:
+ if not self.index_path or not os.path.exists(self.index_path):
+ return False
+ with open(self.index_path, 'r', encoding='utf-8') as f:
+ data = json.load(f)
+ if isinstance(data, list):
+ # Normalize slashes/prefix
+ normalized: List[str] = []
+ for p in data:
+ if isinstance(p, str):
+ q = p.replace('\\\\', '/').replace('\\', '/')
+ if q.startswith('./'):
+ q = q[2:]
+ normalized.append(q)
+ self._file_list = normalized
+ return True
+ return False
+ except Exception as e: # noqa: BLE001
+ logger.error(f"Failed to load shallow index: {e}")
+ return False
+
+ def get_file_list(self) -> List[str]:
+ with self._lock:
+ return list(self._file_list or [])
+
+ def find_files(self, pattern: str = "*") -> List[str]:
+ with self._lock:
+ if not isinstance(pattern, str):
+ return []
+ norm = (pattern.strip() or "*").replace('\\\\', '/').replace('\\', '/')
+ files = self._file_list or []
+ if norm == "*":
+ return list(files)
+ regex = self._compile_glob_regex(norm)
+ return [f for f in files if regex.match(f) is not None]
+
+ @staticmethod
+ def _compile_glob_regex(pattern: str) -> re.Pattern:
+ i = 0
+ out = []
+ special = ".^$+{}[]|()"
+ while i < len(pattern):
+ c = pattern[i]
+ if c == '*':
+ if i + 1 < len(pattern) and pattern[i + 1] == '*':
+ out.append('.*')
+ i += 2
+ continue
+ else:
+ out.append('[^/]*')
+ elif c == '?':
+ out.append('[^/]')
+ elif c in special:
+ out.append('\\' + c)
+ else:
+ out.append(c)
+ i += 1
+ return re.compile('^' + ''.join(out) + '$')
+
+ def cleanup(self) -> None:
+ with self._lock:
+ self.project_path = None
+ self.index_builder = None
+ self.temp_dir = None
+ self.index_path = None
+ self._file_list = None
+
+
+# Global singleton
+_shallow_manager = ShallowIndexManager()
+
+
+def get_shallow_index_manager() -> ShallowIndexManager:
+ return _shallow_manager
+
+
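The `_compile_glob_regex` helper above translates shell-style globs into anchored regular expressions, with `**` crossing directory separators while `*` and `?` stay within one path segment. A standalone sketch of the same translation (sample paths are hypothetical):

```python
import re

def compile_glob_regex(pattern: str) -> re.Pattern:
    """Translate a glob into an anchored regex: ** -> .*, * -> [^/]*, ? -> [^/]."""
    i, out, special = 0, [], ".^$+{}[]|()"
    while i < len(pattern):
        c = pattern[i]
        if c == '*':
            if pattern[i + 1:i + 2] == '*':
                out.append('.*')   # ** may cross '/'
                i += 2
                continue
            out.append('[^/]*')    # * stays within one segment
        elif c == '?':
            out.append('[^/]')
        elif c in special:
            out.append('\\' + c)   # escape regex metacharacters
        else:
            out.append(c)
        i += 1
    return re.compile('^' + ''.join(out) + '$')

files = ["src/app.py", "src/utils/io.py", "README.md"]
assert [f for f in files if compile_glob_regex("src/*.py").match(f)] == ["src/app.py"]
assert [f for f in files if compile_glob_regex("**/*.py").match(f)] == ["src/app.py", "src/utils/io.py"]
```

Because the regex is anchored at both ends, `src/*.py` cannot match nested files like `src/utils/io.py`, which is the behavior `find_files` relies on.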
diff --git a/src/code_index_mcp/indexing/strategies/__init__.py b/src/code_index_mcp/indexing/strategies/__init__.py
new file mode 100644
index 0000000..0f51274
--- /dev/null
+++ b/src/code_index_mcp/indexing/strategies/__init__.py
@@ -0,0 +1,8 @@
+"""
+Parsing strategies for different programming languages.
+"""
+
+from .base_strategy import ParsingStrategy
+from .strategy_factory import StrategyFactory
+
+__all__ = ['ParsingStrategy', 'StrategyFactory']
\ No newline at end of file
diff --git a/src/code_index_mcp/indexing/strategies/base_strategy.py b/src/code_index_mcp/indexing/strategies/base_strategy.py
new file mode 100644
index 0000000..691dce0
--- /dev/null
+++ b/src/code_index_mcp/indexing/strategies/base_strategy.py
@@ -0,0 +1,87 @@
+"""
+Abstract base class for language parsing strategies.
+"""
+
+import os
+from abc import ABC, abstractmethod
+from typing import Dict, List, Tuple, Optional
+from ..models import SymbolInfo, FileInfo
+
+
+class ParsingStrategy(ABC):
+ """Abstract base class for language parsing strategies."""
+
+ @abstractmethod
+ def get_language_name(self) -> str:
+ """Return the language name this strategy handles."""
+
+ @abstractmethod
+ def get_supported_extensions(self) -> List[str]:
+ """Return list of file extensions this strategy supports."""
+
+ @abstractmethod
+ def parse_file(self, file_path: str, content: str) -> Tuple[Dict[str, SymbolInfo], FileInfo]:
+ """
+ Parse file content and extract symbols.
+
+ Args:
+ file_path: Path to the file being parsed
+ content: File content as string
+
+ Returns:
+ Tuple of (symbols_dict, file_info)
+ - symbols_dict: Maps symbol_id -> SymbolInfo
+ - file_info: FileInfo with metadata about the file
+ """
+
+ def _create_symbol_id(self, file_path: str, symbol_name: str) -> str:
+ """
+ Create a unique symbol ID.
+
+ Args:
+ file_path: Path to the file containing the symbol
+ symbol_name: Name of the symbol
+
+ Returns:
+ Unique symbol identifier in format "relative_path::symbol_name"
+ """
+ relative_path = self._get_relative_path(file_path)
+ return f"{relative_path}::{symbol_name}"
+
+ def _get_relative_path(self, file_path: str) -> str:
+ """Convert absolute file path to relative path."""
+ parts = file_path.replace('\\', '/').split('/')
+
+ # Priority order: test > src (outermost project roots first)
+ for root_dir in ['test', 'src']:
+ if root_dir in parts:
+ root_index = parts.index(root_dir)
+ relative_parts = parts[root_index:]
+ return '/'.join(relative_parts)
+
+ # Fallback: use just filename
+ return os.path.basename(file_path)
+
+ def _extract_line_number(self, content: str, symbol_position: int) -> int:
+ """
+ Extract line number from character position in content.
+
+ Args:
+ content: File content
+ symbol_position: Character position in content
+
+ Returns:
+ Line number (1-based)
+ """
+ return content[:symbol_position].count('\n') + 1
+
+ def _get_file_name(self, file_path: str) -> str:
+ """Get just the filename from a full path."""
+ return os.path.basename(file_path)
+
+ def _safe_extract_text(self, content: str, start: int, end: int) -> str:
+ """Safely extract text from content, handling bounds."""
+ try:
+ return content[start:end].strip()
+ except (IndexError, TypeError):
+ return ""
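`_get_relative_path` uses a root-directory heuristic rather than `os.path.relpath`, since strategies receive bare file paths without a project root. A minimal sketch of the two path helpers, exercised on hypothetical paths:

```python
import os

def get_relative_path(file_path: str) -> str:
    """Trim the path at the first 'test' or 'src' component; else keep the basename."""
    parts = file_path.replace('\\', '/').split('/')
    for root_dir in ['test', 'src']:
        if root_dir in parts:
            return '/'.join(parts[parts.index(root_dir):])
    return os.path.basename(file_path)

def extract_line_number(content: str, position: int) -> int:
    """1-based line number of a character offset: count newlines before it."""
    return content[:position].count('\n') + 1

assert get_relative_path('/home/u/proj/src/pkg/mod.py') == 'src/pkg/mod.py'
assert get_relative_path('/tmp/scratch.py') == 'scratch.py'
assert extract_line_number('a\nb\nc', 2) == 2
```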
diff --git a/src/code_index_mcp/indexing/strategies/fallback_strategy.py b/src/code_index_mcp/indexing/strategies/fallback_strategy.py
new file mode 100644
index 0000000..21653bd
--- /dev/null
+++ b/src/code_index_mcp/indexing/strategies/fallback_strategy.py
@@ -0,0 +1,46 @@
+"""
+Fallback parsing strategy for unsupported languages and file types.
+"""
+
+import os
+from typing import Dict, List, Tuple
+from .base_strategy import ParsingStrategy
+from ..models import SymbolInfo, FileInfo
+
+
+class FallbackParsingStrategy(ParsingStrategy):
+ """Fallback parser for unsupported languages and file types."""
+
+ def __init__(self, language_name: str = "unknown"):
+ self.language_name = language_name
+
+ def get_language_name(self) -> str:
+ return self.language_name
+
+ def get_supported_extensions(self) -> List[str]:
+ return [] # Fallback supports any extension
+
+ def parse_file(self, file_path: str, content: str) -> Tuple[Dict[str, SymbolInfo], FileInfo]:
+ """Basic parsing: extract file information without symbol parsing."""
+ symbols = {}
+
+ # For document files, we can at least index their existence
+ file_info = FileInfo(
+ language=self.language_name,
+ line_count=len(content.splitlines()),
+ symbols={"functions": [], "classes": []},
+ imports=[]
+ )
+
+ # For document files (e.g. .md, .txt, .json), we can add a symbol representing the file itself
+ if self.language_name in ['markdown', 'text', 'json', 'yaml', 'xml', 'config', 'css', 'html']:
+ filename = os.path.basename(file_path)
+ symbol_id = self._create_symbol_id(file_path, f"file:{filename}")
+ symbols[symbol_id] = SymbolInfo(
+ type="file",
+ file=file_path,
+ line=1,
+ signature=f"{self.language_name} file: {filename}"
+ )
+
+ return symbols, file_info
diff --git a/src/code_index_mcp/indexing/strategies/go_strategy.py b/src/code_index_mcp/indexing/strategies/go_strategy.py
new file mode 100644
index 0000000..b3a95cb
--- /dev/null
+++ b/src/code_index_mcp/indexing/strategies/go_strategy.py
@@ -0,0 +1,164 @@
+"""
+Go parsing strategy using regex patterns.
+"""
+
+import re
+from typing import Dict, List, Tuple, Optional
+from .base_strategy import ParsingStrategy
+from ..models import SymbolInfo, FileInfo
+
+
+class GoParsingStrategy(ParsingStrategy):
+ """Go-specific parsing strategy using regex patterns."""
+
+ def get_language_name(self) -> str:
+ return "go"
+
+ def get_supported_extensions(self) -> List[str]:
+ return ['.go']
+
+ def parse_file(self, file_path: str, content: str) -> Tuple[Dict[str, SymbolInfo], FileInfo]:
+ """Parse Go file using regex patterns."""
+ symbols = {}
+ functions = []
+ classes = [] # Go doesn't have classes, but we'll track structs/interfaces
+ imports = []
+ package = None
+
+ lines = content.splitlines()
+
+ for i, line in enumerate(lines):
+ line = line.strip()
+
+ # Package declaration
+ if line.startswith('package '):
+ package = line.split('package ')[1].strip()
+
+ # Import statements (single-line form only; grouped import (...) blocks are skipped)
+ elif line.startswith('import '):
+ import_match = re.search(r'import\s+"([^"]+)"', line)
+ if import_match:
+ imports.append(import_match.group(1))
+
+ # Function declarations
+ elif line.startswith('func '):
+ func_match = re.match(r'func\s+(\w+)\s*\(', line)
+ if func_match:
+ func_name = func_match.group(1)
+ symbol_id = self._create_symbol_id(file_path, func_name)
+ symbols[symbol_id] = SymbolInfo(
+ type="function",
+ file=file_path,
+ line=i + 1,
+ signature=line
+ )
+ functions.append(func_name)
+
+ # Method declarations (func (receiver) methodName)
+ method_match = re.match(r'func\s+\([^)]+\)\s+(\w+)\s*\(', line)
+ if method_match:
+ method_name = method_match.group(1)
+ symbol_id = self._create_symbol_id(file_path, method_name)
+ symbols[symbol_id] = SymbolInfo(
+ type="method",
+ file=file_path,
+ line=i + 1,
+ signature=line
+ )
+ functions.append(method_name)
+
+ # Struct declarations
+ elif re.match(r'type\s+\w+\s+struct\s*\{', line):
+ struct_match = re.match(r'type\s+(\w+)\s+struct', line)
+ if struct_match:
+ struct_name = struct_match.group(1)
+ symbol_id = self._create_symbol_id(file_path, struct_name)
+ symbols[symbol_id] = SymbolInfo(
+ type="struct",
+ file=file_path,
+ line=i + 1
+ )
+ classes.append(struct_name)
+
+ # Interface declarations
+ elif re.match(r'type\s+\w+\s+interface\s*\{', line):
+ interface_match = re.match(r'type\s+(\w+)\s+interface', line)
+ if interface_match:
+ interface_name = interface_match.group(1)
+ symbol_id = self._create_symbol_id(file_path, interface_name)
+ symbols[symbol_id] = SymbolInfo(
+ type="interface",
+ file=file_path,
+ line=i + 1
+ )
+ classes.append(interface_name)
+
+ # Phase 2: Add call relationship analysis
+ self._analyze_go_calls(content, symbols, file_path)
+
+ file_info = FileInfo(
+ language=self.get_language_name(),
+ line_count=len(lines),
+ symbols={"functions": functions, "classes": classes},
+ imports=imports,
+ package=package
+ )
+
+ return symbols, file_info
+
+ def _analyze_go_calls(self, content: str, symbols: Dict[str, SymbolInfo], file_path: str):
+ """Analyze Go function calls for relationships."""
+ lines = content.splitlines()
+ current_function = None
+ is_function_declaration_line = False
+
+ for i, line in enumerate(lines):
+ line = line.strip()
+
+ # Track current function context
+ if line.startswith('func '):
+ func_name = self._extract_go_function_name(line)
+ if func_name:
+ current_function = self._create_symbol_id(file_path, func_name)
+ is_function_declaration_line = True
+ else:
+ is_function_declaration_line = False
+
+ # Find function calls: functionName() or obj.methodName()
+ # Skip the function declaration line itself to avoid false self-calls
+ if current_function and not is_function_declaration_line and ('(' in line and ')' in line):
+ called_functions = self._extract_go_called_functions(line)
+ for called_func in called_functions:
+ # Find the called function in symbols and add relationship
+ for symbol_id, symbol_info in symbols.items():
+ if called_func in symbol_id.split("::")[-1]:
+ if current_function not in symbol_info.called_by:
+ symbol_info.called_by.append(current_function)
+
+ def _extract_go_function_name(self, line: str) -> Optional[str]:
+ """Extract function name from Go function declaration."""
+ try:
+ # func functionName(...) or func (receiver) methodName(...)
+ match = re.match(r'func\s+(?:\([^)]*\)\s+)?(\w+)\s*\(', line)
+ if match:
+ return match.group(1)
+ except (TypeError, re.error):
+ pass
+ return None
+
+ def _extract_go_called_functions(self, line: str) -> List[str]:
+ """Extract function names that are being called in this line."""
+ called_functions = []
+
+ # Find patterns like: functionName( or obj.methodName(
+ patterns = [
+ r'(\w+)\s*\(', # functionName(
+ r'\.(\w+)\s*\(', # .methodName(
+ ]
+
+ for pattern in patterns:
+ matches = re.findall(pattern, line)
+ called_functions.extend(matches)
+
+ return called_functions
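The strategy's two `func` regexes deliberately diverge: plain functions require an identifier immediately after `func`, while methods require a parenthesized receiver first, so each declaration matches exactly one pattern. A quick sketch against hypothetical Go declarations:

```python
import re

FUNC_RE = re.compile(r'func\s+(\w+)\s*\(')                 # func Name(
METHOD_RE = re.compile(r'func\s+\([^)]+\)\s+(\w+)\s*\(')   # func (r *T) Name(

# A plain function matches only FUNC_RE.
assert FUNC_RE.match('func LoadIndex(path string) error {').group(1) == 'LoadIndex'
assert METHOD_RE.match('func LoadIndex(path string) error {') is None

# A method matches only METHOD_RE: the receiver's '(' blocks FUNC_RE.
assert FUNC_RE.match('func (m *Manager) Build() error {') is None
assert METHOD_RE.match('func (m *Manager) Build() error {').group(1) == 'Build'
```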
diff --git a/src/code_index_mcp/indexing/strategies/java_strategy.py b/src/code_index_mcp/indexing/strategies/java_strategy.py
new file mode 100644
index 0000000..af2ff8e
--- /dev/null
+++ b/src/code_index_mcp/indexing/strategies/java_strategy.py
@@ -0,0 +1,209 @@
+"""
+Java parsing strategy using tree-sitter - Optimized single-pass version.
+"""
+
+import logging
+from typing import Dict, List, Tuple, Optional
+
+import tree_sitter
+from tree_sitter_java import language
+
+from .base_strategy import ParsingStrategy
+from ..models import SymbolInfo, FileInfo
+
+logger = logging.getLogger(__name__)
+
+
+class JavaParsingStrategy(ParsingStrategy):
+ """Java-specific parsing strategy - Single Pass Optimized."""
+
+ def __init__(self):
+ self.java_language = tree_sitter.Language(language())
+
+ def get_language_name(self) -> str:
+ return "java"
+
+ def get_supported_extensions(self) -> List[str]:
+ return ['.java']
+
+ def parse_file(self, file_path: str, content: str) -> Tuple[Dict[str, SymbolInfo], FileInfo]:
+ """Parse Java file using tree-sitter with single-pass optimization."""
+ symbols = {}
+ functions = []
+ classes = []
+ imports = []
+ package = None
+
+ # Symbol lookup index for O(1) access
+ symbol_lookup = {} # name -> symbol_id mapping
+
+ parser = tree_sitter.Parser(self.java_language)
+
+ try:
+ tree = parser.parse(content.encode('utf8'))
+
+ # Extract package info first
+ for node in tree.root_node.children:
+ if node.type == 'package_declaration':
+ package = self._extract_java_package(node, content)
+ break
+
+ # Single-pass traversal that handles everything
+ context = TraversalContext(
+ content=content,
+ file_path=file_path,
+ symbols=symbols,
+ functions=functions,
+ classes=classes,
+ imports=imports,
+ symbol_lookup=symbol_lookup
+ )
+
+ self._traverse_node_single_pass(tree.root_node, context)
+
+ except Exception as e:
+ logger.warning(f"Error parsing Java file {file_path}: {e}")
+
+ file_info = FileInfo(
+ language=self.get_language_name(),
+ line_count=len(content.splitlines()),
+ symbols={"functions": functions, "classes": classes},
+ imports=imports,
+ package=package
+ )
+
+ return symbols, file_info
+
+ def _traverse_node_single_pass(self, node, context: 'TraversalContext',
+ current_class: Optional[str] = None,
+ current_method: Optional[str] = None):
+ """Single-pass traversal that extracts symbols and analyzes calls."""
+
+ # Handle class declarations
+ if node.type == 'class_declaration':
+ name = self._get_java_class_name(node, context.content)
+ if name:
+ symbol_id = self._create_symbol_id(context.file_path, name)
+ symbol_info = SymbolInfo(
+ type="class",
+ file=context.file_path,
+ line=node.start_point[0] + 1
+ )
+ context.symbols[symbol_id] = symbol_info
+ context.symbol_lookup[name] = symbol_id
+ context.classes.append(name)
+
+ # Traverse class body with updated context
+ for child in node.children:
+ self._traverse_node_single_pass(child, context, current_class=name, current_method=current_method)
+ return
+
+ # Handle method declarations
+ elif node.type == 'method_declaration':
+ name = self._get_java_method_name(node, context.content)
+ if name:
+ # Build full method name with class context
+ if current_class:
+ full_name = f"{current_class}.{name}"
+ else:
+ full_name = name
+
+ symbol_id = self._create_symbol_id(context.file_path, full_name)
+ symbol_info = SymbolInfo(
+ type="method",
+ file=context.file_path,
+ line=node.start_point[0] + 1,
+ signature=self._get_java_method_signature(node, context.content)
+ )
+ context.symbols[symbol_id] = symbol_info
+ context.symbol_lookup[full_name] = symbol_id
+ context.symbol_lookup[name] = symbol_id # Also index by method name alone
+ context.functions.append(full_name)
+
+ # Traverse method body with updated context
+ for child in node.children:
+ self._traverse_node_single_pass(child, context, current_class=current_class,
+ current_method=symbol_id)
+ return
+
+ # Handle method invocations (calls)
+ elif node.type == 'method_invocation':
+ if current_method:
+ called_method = self._get_called_method_name(node, context.content)
+ if called_method:
+ # Use O(1) lookup instead of O(n) iteration
+ if called_method in context.symbol_lookup:
+ symbol_id = context.symbol_lookup[called_method]
+ symbol_info = context.symbols[symbol_id]
+ if current_method not in symbol_info.called_by:
+ symbol_info.called_by.append(current_method)
+ else:
+ # Try to find method with class prefix
+ for name, sid in context.symbol_lookup.items():
+ if name.endswith(f".{called_method}"):
+ symbol_info = context.symbols[sid]
+ if current_method not in symbol_info.called_by:
+ symbol_info.called_by.append(current_method)
+ break
+
+ # Handle import declarations
+ elif node.type == 'import_declaration':
+ import_text = context.content[node.start_byte:node.end_byte]
+ # Extract the import path (remove 'import' keyword and semicolon)
+ import_path = import_text.replace('import', '').replace(';', '').strip()
+ if import_path:
+ context.imports.append(import_path)
+
+ # Continue traversing children for other node types
+ for child in node.children:
+ self._traverse_node_single_pass(child, context, current_class=current_class,
+ current_method=current_method)
+
+ def _get_java_class_name(self, node, content: str) -> Optional[str]:
+ for child in node.children:
+ if child.type == 'identifier':
+ return content[child.start_byte:child.end_byte]
+ return None
+
+ def _get_java_method_name(self, node, content: str) -> Optional[str]:
+ for child in node.children:
+ if child.type == 'identifier':
+ return content[child.start_byte:child.end_byte]
+ return None
+
+ def _get_java_method_signature(self, node, content: str) -> str:
+ return content[node.start_byte:node.end_byte].split('\n')[0].strip()
+
+ def _extract_java_package(self, node, content: str) -> Optional[str]:
+ for child in node.children:
+ if child.type == 'scoped_identifier':
+ return content[child.start_byte:child.end_byte]
+ return None
+
+ def _get_called_method_name(self, node, content: str) -> Optional[str]:
+ """Extract called method name from method invocation node."""
+ # Handle obj.method() pattern - look for the method name after the dot
+ for child in node.children:
+ if child.type == 'field_access':
+ # For field_access nodes, get the field (method) name
+ for subchild in child.children:
+ if subchild.type == 'identifier' and subchild.start_byte > child.start_byte:
+ # Get the rightmost identifier (the method name)
+ return content[subchild.start_byte:subchild.end_byte]
+ elif child.type == 'identifier':
+ # Direct method call without object reference
+ return content[child.start_byte:child.end_byte]
+ return None
+
+
+class TraversalContext:
+ """Context object to pass state during single-pass traversal."""
+
+ def __init__(self, content: str, file_path: str, symbols: Dict,
+ functions: List, classes: List, imports: List, symbol_lookup: Dict):
+ self.content = content
+ self.file_path = file_path
+ self.symbols = symbols
+ self.functions = functions
+ self.classes = classes
+ self.imports = imports
+ self.symbol_lookup = symbol_lookup
\ No newline at end of file
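The Java strategy's key optimization is the `symbol_lookup` dict: resolving a called method is an O(1) dict hit on the bare name, with a suffix scan over `Class.method` keys only as a fallback. The pattern in isolation, with hypothetical symbol ids and a dict-based stand-in for `SymbolInfo`:

```python
def record_call(symbol_lookup, symbols, called, caller):
    """Attach caller to the called symbol: O(1) dict hit first, suffix scan as fallback."""
    if called in symbol_lookup:
        sid = symbol_lookup[called]
    else:
        # Fallback: match 'Class.method' entries ending in '.called'
        sid = next((s for n, s in symbol_lookup.items() if n.endswith('.' + called)), None)
    if sid is not None and caller not in symbols[sid]['called_by']:
        symbols[sid]['called_by'].append(caller)

symbols = {'A.java::Svc.run': {'called_by': []}}
lookup = {'Svc.run': 'A.java::Svc.run'}

record_call(lookup, symbols, 'run', 'A.java::Main.main')   # resolved via suffix fallback
assert symbols['A.java::Svc.run']['called_by'] == ['A.java::Main.main']
record_call(lookup, symbols, 'run', 'A.java::Main.main')   # idempotent: no duplicate entry
assert symbols['A.java::Svc.run']['called_by'] == ['A.java::Main.main']
```

In the actual strategy each method is indexed under both `name` and `Class.name`, so the fast path covers most intra-file calls.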
diff --git a/src/code_index_mcp/indexing/strategies/javascript_strategy.py b/src/code_index_mcp/indexing/strategies/javascript_strategy.py
new file mode 100644
index 0000000..63c78f7
--- /dev/null
+++ b/src/code_index_mcp/indexing/strategies/javascript_strategy.py
@@ -0,0 +1,154 @@
+"""
+JavaScript parsing strategy using tree-sitter.
+"""
+
+import logging
+from typing import Dict, List, Tuple, Optional
+import tree_sitter
+from tree_sitter_javascript import language
+from .base_strategy import ParsingStrategy
+from ..models import SymbolInfo, FileInfo
+
+logger = logging.getLogger(__name__)
+
+
+class JavaScriptParsingStrategy(ParsingStrategy):
+ """JavaScript-specific parsing strategy using tree-sitter."""
+
+ def __init__(self):
+ self.js_language = tree_sitter.Language(language())
+
+ def get_language_name(self) -> str:
+ return "javascript"
+
+ def get_supported_extensions(self) -> List[str]:
+ return ['.js', '.jsx', '.mjs', '.cjs']
+
+ def parse_file(self, file_path: str, content: str) -> Tuple[Dict[str, SymbolInfo], FileInfo]:
+ """Parse JavaScript file using tree-sitter."""
+ symbols = {}
+ functions = []
+ classes = []
+ imports = []
+ exports = []
+
+ parser = tree_sitter.Parser(self.js_language)
+ tree = parser.parse(content.encode('utf8'))
+ self._traverse_js_node(tree.root_node, content, file_path, symbols, functions, classes, imports, exports)
+
+ file_info = FileInfo(
+ language=self.get_language_name(),
+ line_count=len(content.splitlines()),
+ symbols={"functions": functions, "classes": classes},
+ imports=imports,
+ exports=exports
+ )
+
+ return symbols, file_info
+
+ def _traverse_js_node(self, node, content: str, file_path: str, symbols: Dict[str, SymbolInfo],
+ functions: List[str], classes: List[str], imports: List[str], exports: List[str]):
+ """Traverse JavaScript AST node."""
+ if node.type == 'function_declaration':
+ name = self._get_function_name(node, content)
+ if name:
+ symbol_id = self._create_symbol_id(file_path, name)
+ signature = self._get_js_function_signature(node, content)
+ symbols[symbol_id] = SymbolInfo(
+ type="function",
+ file=file_path,
+ line=node.start_point[0] + 1,
+ signature=signature
+ )
+ functions.append(name)
+
+ # Handle arrow functions and function expressions in lexical declarations (const/let)
+ elif node.type in ['lexical_declaration', 'variable_declaration']:
+ # Look for const/let/var name = arrow_function or function_expression
+ for child in node.children:
+ if child.type == 'variable_declarator':
+ name_node = None
+ value_node = None
+ for declarator_child in child.children:
+ if declarator_child.type == 'identifier':
+ name_node = declarator_child
+ elif declarator_child.type in ['arrow_function', 'function_expression', 'function']:
+ value_node = declarator_child
+
+ if name_node and value_node:
+ name = content[name_node.start_byte:name_node.end_byte]
+ symbol_id = self._create_symbol_id(file_path, name)
+ # Create signature from the declaration
+ signature = content[child.start_byte:child.end_byte].split('\n')[0].strip()
+ symbols[symbol_id] = SymbolInfo(
+ type="function",
+ file=file_path,
+ line=child.start_point[0] + 1, # Use child position, not parent
+ signature=signature
+ )
+ functions.append(name)
+
+ elif node.type == 'class_declaration':
+ name = self._get_class_name(node, content)
+ if name:
+ symbol_id = self._create_symbol_id(file_path, name)
+ symbols[symbol_id] = SymbolInfo(
+ type="class",
+ file=file_path,
+ line=node.start_point[0] + 1
+ )
+ classes.append(name)
+
+ elif node.type == 'method_definition':
+ method_name = self._get_method_name(node, content)
+ class_name = self._find_parent_class(node, content)
+ if method_name and class_name:
+ full_name = f"{class_name}.{method_name}"
+ symbol_id = self._create_symbol_id(file_path, full_name)
+ signature = self._get_js_function_signature(node, content)
+ symbols[symbol_id] = SymbolInfo(
+ type="method",
+ file=file_path,
+ line=node.start_point[0] + 1,
+ signature=signature
+ )
+ # Add method to functions list for consistency
+ functions.append(full_name)
+
+ # Continue traversing children
+ for child in node.children:
+ self._traverse_js_node(child, content, file_path, symbols, functions, classes, imports, exports)
+
+ def _get_function_name(self, node, content: str) -> Optional[str]:
+ """Extract function name from tree-sitter node."""
+ for child in node.children:
+ if child.type == 'identifier':
+ return content[child.start_byte:child.end_byte]
+ return None
+
+ def _get_class_name(self, node, content: str) -> Optional[str]:
+ """Extract class name from tree-sitter node."""
+ for child in node.children:
+ if child.type == 'identifier':
+ return content[child.start_byte:child.end_byte]
+ return None
+
+ def _get_method_name(self, node, content: str) -> Optional[str]:
+ """Extract method name from tree-sitter node."""
+ for child in node.children:
+ if child.type == 'property_identifier':
+ return content[child.start_byte:child.end_byte]
+ return None
+
+ def _find_parent_class(self, node, content: str) -> Optional[str]:
+ """Find the parent class of a method."""
+ parent = node.parent
+ while parent:
+ if parent.type == 'class_declaration':
+ return self._get_class_name(parent, content)
+ parent = parent.parent
+ return None
+
+ def _get_js_function_signature(self, node, content: str) -> str:
+ """Extract JavaScript function signature."""
+ return content[node.start_byte:node.end_byte].split('\n')[0].strip()
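`_find_parent_class` climbs `node.parent` links until it reaches a `class_declaration`; the same upward walk works on any tree whose nodes carry a parent pointer. A toy sketch (this `Node` class is hypothetical, standing in for tree-sitter nodes):

```python
class Node:
    """Minimal stand-in for a tree-sitter node: a type, a name, and a parent link."""
    def __init__(self, type_, name=None, parent=None):
        self.type, self.name, self.parent = type_, name, parent

def find_parent_class(node):
    """Walk parent links upward until a class_declaration is found."""
    parent = node.parent
    while parent:
        if parent.type == 'class_declaration':
            return parent.name
        parent = parent.parent
    return None

cls = Node('class_declaration', name='Widget')
body = Node('class_body', parent=cls)
method = Node('method_definition', name='render', parent=body)
assert find_parent_class(method) == 'Widget'
assert find_parent_class(cls) is None  # the class itself has no enclosing class
```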
diff --git a/src/code_index_mcp/indexing/strategies/objective_c_strategy.py b/src/code_index_mcp/indexing/strategies/objective_c_strategy.py
new file mode 100644
index 0000000..4226f1c
--- /dev/null
+++ b/src/code_index_mcp/indexing/strategies/objective_c_strategy.py
@@ -0,0 +1,154 @@
+"""
+Objective-C parsing strategy using regex patterns.
+"""
+
+import re
+from typing import Dict, List, Tuple, Optional
+from .base_strategy import ParsingStrategy
+from ..models import SymbolInfo, FileInfo
+
+
+class ObjectiveCParsingStrategy(ParsingStrategy):
+ """Objective-C parsing strategy using regex patterns."""
+
+ def get_language_name(self) -> str:
+ return "objective-c"
+
+ def get_supported_extensions(self) -> List[str]:
+ return ['.m', '.mm']
+
+ def parse_file(self, file_path: str, content: str) -> Tuple[Dict[str, SymbolInfo], FileInfo]:
+ """Parse Objective-C file using regex patterns."""
+ symbols = {}
+ functions = []
+ classes = []
+ imports = []
+
+ lines = content.splitlines()
+ current_class = None
+
+ for i, line in enumerate(lines):
+ line = line.strip()
+
+ # Import statements
+ if line.startswith('#import ') or line.startswith('#include '):
+ import_match = re.search(r'#(?:import|include)\s+[<"]([^>"]+)[>"]', line)
+ if import_match:
+ imports.append(import_match.group(1))
+
+ # Interface declarations
+ elif line.startswith('@interface '):
+ interface_match = re.match(r'@interface\s+(\w+)', line)
+ if interface_match:
+ class_name = interface_match.group(1)
+ current_class = class_name
+ symbol_id = self._create_symbol_id(file_path, class_name)
+ symbols[symbol_id] = SymbolInfo(
+ type="class",
+ file=file_path,
+ line=i + 1
+ )
+ classes.append(class_name)
+
+ # Implementation declarations
+ elif line.startswith('@implementation '):
+ impl_match = re.match(r'@implementation\s+(\w+)', line)
+ if impl_match:
+ current_class = impl_match.group(1)
+
+ # Method declarations
+ elif line.startswith(('- (', '+ (')):
+ method_match = re.search(r'[+-]\s*\([^)]+\)\s*(\w+)', line)
+ if method_match:
+ method_name = method_match.group(1)
+ full_name = f"{current_class}.{method_name}" if current_class else method_name
+ symbol_id = self._create_symbol_id(file_path, full_name)
+ symbols[symbol_id] = SymbolInfo(
+ type="method",
+ file=file_path,
+ line=i + 1,
+ signature=line
+ )
+ functions.append(full_name)
+
+ # C function declarations
+ elif re.match(r'\w+.*\s+\w+\s*\([^)]*\)\s*\{?', line) and not line.startswith(('if', 'for', 'while')):
+ func_match = re.search(r'\s(\w+)\s*\([^)]*\)', line)
+ if func_match:
+ func_name = func_match.group(1)
+ symbol_id = self._create_symbol_id(file_path, func_name)
+ symbols[symbol_id] = SymbolInfo(
+ type="function",
+ file=file_path,
+ line=i + 1,
+ signature=line
+ )
+ functions.append(func_name)
+
+ # End of class
+ elif line == '@end':
+ current_class = None
+
+ # Phase 2: Add call relationship analysis
+ self._analyze_objc_calls(content, symbols, file_path)
+
+ file_info = FileInfo(
+ language=self.get_language_name(),
+ line_count=len(lines),
+ symbols={"functions": functions, "classes": classes},
+ imports=imports
+ )
+
+ return symbols, file_info
+
+ def _analyze_objc_calls(self, content: str, symbols: Dict[str, SymbolInfo], file_path: str):
+ """Analyze Objective-C method calls for relationships."""
+ lines = content.splitlines()
+ current_function = None
+
+ for i, line in enumerate(lines):
+ line = line.strip()
+
+ # Track current method context
+ if line.startswith('- (') or line.startswith('+ ('):
+ func_name = self._extract_objc_method_name(line)
+ if func_name:
+ current_function = self._create_symbol_id(file_path, func_name)
+
+ # Find method calls: [obj methodName] or functionName()
+ if current_function and ('[' in line and ']' in line or ('(' in line and ')' in line)):
+ called_functions = self._extract_objc_called_functions(line)
+ for called_func in called_functions:
+ # Find the called function in symbols and add relationship
+ for symbol_id, symbol_info in symbols.items():
+ if called_func in symbol_id.split("::")[-1]:
+ if current_function not in symbol_info.called_by:
+ symbol_info.called_by.append(current_function)
+
+ def _extract_objc_method_name(self, line: str) -> Optional[str]:
+ """Extract method name from Objective-C method declaration."""
+ try:
+ # - (returnType)methodName:(params) or + (returnType)methodName
+ match = re.search(r'[+-]\s*\([^)]*\)\s*(\w+)', line)
+ if match:
+ return match.group(1)
+ except (TypeError, re.error):
+ pass
+ return None
+
+ def _extract_objc_called_functions(self, line: str) -> List[str]:
+ """Extract method names that are being called in this line."""
+ called_functions = []
+
+ # Find patterns like: [obj methodName] or functionName(
+ patterns = [
+ r'\[\s*\w+\s+(\w+)\s*[\]:]', # [obj methodName]
+ r'(\w+)\s*\(', # functionName(
+ ]
+
+ for pattern in patterns:
+ matches = re.findall(pattern, line)
+ called_functions.extend(matches)
+
+ return called_functions
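The bracketed message-send pattern `[obj methodName]` is what distinguishes the Objective-C call scan from the C-style `name(` pattern; both run over each line and their matches are merged. Both patterns on hypothetical lines:

```python
import re

PATTERNS = [
    r'\[\s*\w+\s+(\w+)\s*[\]:]',  # [receiver selector] or [receiver selector:arg]
    r'(\w+)\s*\(',                # cFunction(
]

def extract_called(line):
    """Collect candidate callee names from one source line."""
    out = []
    for p in PATTERNS:
        out.extend(re.findall(p, line))
    return out

assert extract_called('[self reloadData];') == ['reloadData']
# A line can hit both patterns: a message send nested in a C call.
assert extract_called('NSLog(@"hi %d", [array count]);') == ['count', 'NSLog']
```

As in the Go strategy, these are heuristic candidates; they are only recorded when a matching symbol already exists in the index.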
diff --git a/src/code_index_mcp/indexing/strategies/python_strategy.py b/src/code_index_mcp/indexing/strategies/python_strategy.py
new file mode 100644
index 0000000..a09d00c
--- /dev/null
+++ b/src/code_index_mcp/indexing/strategies/python_strategy.py
@@ -0,0 +1,264 @@
+"""
+Python parsing strategy using AST - Optimized single-pass version.
+"""
+
+import ast
+import logging
+from typing import Dict, List, Tuple, Optional, Set
+from .base_strategy import ParsingStrategy
+from ..models import SymbolInfo, FileInfo
+
+logger = logging.getLogger(__name__)
+
+
+class PythonParsingStrategy(ParsingStrategy):
+ """Python-specific parsing strategy using Python's built-in AST - Single Pass Optimized."""
+
+ def get_language_name(self) -> str:
+ return "python"
+
+ def get_supported_extensions(self) -> List[str]:
+ return ['.py', '.pyw']
+
+ def parse_file(self, file_path: str, content: str) -> Tuple[Dict[str, SymbolInfo], FileInfo]:
+ """Parse Python file using AST with single-pass optimization."""
+ symbols = {}
+ functions = []
+ classes = []
+ imports = []
+
+ try:
+ tree = ast.parse(content)
+ # Single-pass visitor that handles everything at once
+ visitor = SinglePassVisitor(symbols, functions, classes, imports, file_path)
+ visitor.visit(tree)
+ except SyntaxError as e:
+ logger.warning(f"Syntax error in Python file {file_path}: {e}")
+ except Exception as e:
+ logger.warning(f"Error parsing Python file {file_path}: {e}")
+
+ file_info = FileInfo(
+ language=self.get_language_name(),
+ line_count=len(content.splitlines()),
+ symbols={"functions": functions, "classes": classes},
+ imports=imports
+ )
+
+ return symbols, file_info
+
+
+class SinglePassVisitor(ast.NodeVisitor):
+ """Single-pass AST visitor that extracts symbols and analyzes calls in one traversal."""
+
+ def __init__(self, symbols: Dict[str, SymbolInfo], functions: List[str],
+ classes: List[str], imports: List[str], file_path: str):
+ self.symbols = symbols
+ self.functions = functions
+ self.classes = classes
+ self.imports = imports
+ self.file_path = file_path
+
+ # Context tracking for call analysis
+ self.current_function_stack = []
+ self.current_class = None
+
+ # Symbol lookup index for O(1) access
+ self.symbol_lookup = {} # name -> symbol_id mapping for fast lookups
+
+ # Track processed nodes to avoid duplicates
+ self.processed_nodes: Set[int] = set()
+
+ def visit_ClassDef(self, node: ast.ClassDef):
+ """Visit class definition - extract symbol and analyze in single pass."""
+ class_name = node.name
+ symbol_id = self._create_symbol_id(self.file_path, class_name)
+
+ # Extract docstring
+ docstring = ast.get_docstring(node)
+
+ # Create symbol info
+ symbol_info = SymbolInfo(
+ type="class",
+ file=self.file_path,
+ line=node.lineno,
+ docstring=docstring
+ )
+
+ # Store in symbols and lookup index
+ self.symbols[symbol_id] = symbol_info
+ self.symbol_lookup[class_name] = symbol_id
+ self.classes.append(class_name)
+
+ # Track class context for method processing
+ old_class = self.current_class
+ self.current_class = class_name
+
+ # Process class body (including methods)
+ for child in node.body:
+ if isinstance(child, ast.FunctionDef):
+ self._handle_method(child, class_name)
+ else:
+ # Visit other nodes in class body
+ self.visit(child)
+
+ # Restore previous class context
+ self.current_class = old_class
+
+ def visit_FunctionDef(self, node: ast.FunctionDef):
+ """Visit function definition - extract symbol and track context."""
+ # Skip if this is a method (already handled by ClassDef)
+ if self.current_class:
+ return
+
+ # Skip if already processed
+ node_id = id(node)
+ if node_id in self.processed_nodes:
+ return
+ self.processed_nodes.add(node_id)
+
+ func_name = node.name
+ symbol_id = self._create_symbol_id(self.file_path, func_name)
+
+ # Extract function signature and docstring
+ signature = self._extract_function_signature(node)
+ docstring = ast.get_docstring(node)
+
+ # Create symbol info
+ symbol_info = SymbolInfo(
+ type="function",
+ file=self.file_path,
+ line=node.lineno,
+ signature=signature,
+ docstring=docstring
+ )
+
+ # Store in symbols and lookup index
+ self.symbols[symbol_id] = symbol_info
+ self.symbol_lookup[func_name] = symbol_id
+ self.functions.append(func_name)
+
+ # Track function context for call analysis
+ function_id = f"{self.file_path}::{func_name}"
+ self.current_function_stack.append(function_id)
+
+ # Visit function body to analyze calls
+ self.generic_visit(node)
+
+ # Pop function from stack
+ self.current_function_stack.pop()
+
+ # Index async function definitions the same way as regular ones
+ visit_AsyncFunctionDef = visit_FunctionDef
+
+ def _handle_method(self, node: ast.FunctionDef, class_name: str):
+ """Handle method definition within a class."""
+ method_name = f"{class_name}.{node.name}"
+ method_symbol_id = self._create_symbol_id(self.file_path, method_name)
+
+ method_signature = self._extract_function_signature(node)
+ method_docstring = ast.get_docstring(node)
+
+ # Create symbol info
+ symbol_info = SymbolInfo(
+ type="method",
+ file=self.file_path,
+ line=node.lineno,
+ signature=method_signature,
+ docstring=method_docstring
+ )
+
+ # Store in symbols and lookup index
+ self.symbols[method_symbol_id] = symbol_info
+ self.symbol_lookup[method_name] = method_symbol_id
+ self.symbol_lookup[node.name] = method_symbol_id # Also index by method name alone
+ self.functions.append(method_name)
+
+ # Track method context for call analysis
+ function_id = f"{self.file_path}::{method_name}"
+ self.current_function_stack.append(function_id)
+
+ # Visit method body to analyze calls
+ for child in node.body:
+ self.visit(child)
+
+ # Pop method from stack
+ self.current_function_stack.pop()
+
+ def visit_Import(self, node: ast.Import):
+ """Handle import statements."""
+ for alias in node.names:
+ self.imports.append(alias.name)
+ self.generic_visit(node)
+
+ def visit_ImportFrom(self, node: ast.ImportFrom):
+ """Handle from...import statements."""
+ if node.module:
+ for alias in node.names:
+ self.imports.append(f"{node.module}.{alias.name}")
+ self.generic_visit(node)
+
+ def visit_Call(self, node: ast.Call):
+ """Visit function call and record relationship using O(1) lookup."""
+ if not self.current_function_stack:
+ self.generic_visit(node)
+ return
+
+ try:
+ # Get the function name being called
+ called_function = None
+
+ if isinstance(node.func, ast.Name):
+ # Direct function call: function_name()
+ called_function = node.func.id
+ elif isinstance(node.func, ast.Attribute):
+ # Method call: obj.method() or module.function()
+ called_function = node.func.attr
+
+ if called_function:
+ # Get the current calling function
+ caller_function = self.current_function_stack[-1]
+
+ # Use O(1) lookup instead of O(n) iteration
+ # First try exact match
+ if called_function in self.symbol_lookup:
+ symbol_id = self.symbol_lookup[called_function]
+ symbol_info = self.symbols[symbol_id]
+ if symbol_info.type in ["function", "method"]:
+ if caller_function not in symbol_info.called_by:
+ symbol_info.called_by.append(caller_function)
+ else:
+ # Try method name match for any class
+ for name, symbol_id in self.symbol_lookup.items():
+ if name.endswith(f".{called_function}"):
+ symbol_info = self.symbols[symbol_id]
+ if symbol_info.type in ["function", "method"]:
+ if caller_function not in symbol_info.called_by:
+ symbol_info.called_by.append(caller_function)
+ break
+ except Exception:
+ # Silently handle parsing errors for complex call patterns
+ pass
+
+ # Continue visiting child nodes
+ self.generic_visit(node)
+
+ def _create_symbol_id(self, file_path: str, symbol_name: str) -> str:
+ """Create a unique symbol ID."""
+ return f"{file_path}::{symbol_name}"
+
+ def _extract_function_signature(self, node: ast.FunctionDef) -> str:
+ """Extract function signature from AST node."""
+ # Build basic signature
+ args = []
+
+ # Regular arguments
+ for arg in node.args.args:
+ args.append(arg.arg)
+
+ # Varargs (*args); a bare * marks keyword-only arguments when no vararg exists
+ if node.args.vararg:
+ args.append(f"*{node.args.vararg.arg}")
+ elif node.args.kwonlyargs:
+ args.append("*")
+
+ # Keyword-only arguments
+ for arg in node.args.kwonlyargs:
+ args.append(arg.arg)
+
+ # Keyword arguments (**kwargs)
+ if node.args.kwarg:
+ args.append(f"**{node.args.kwarg.arg}")
+
+ signature = f"def {node.name}({', '.join(args)}):"
+ return signature
\ No newline at end of file
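The visitor above folds symbol extraction and call-graph bookkeeping into one `ast` traversal. A minimal, self-contained sketch of the same pattern (illustrative names only, independent of the project's `SymbolInfo` model):

```python
import ast

class MiniVisitor(ast.NodeVisitor):
    """Collect functions, classes, and call edges in one AST pass."""

    def __init__(self):
        self.functions = []
        self.classes = []
        self.calls = []      # (caller, callee) pairs
        self._stack = []     # names of enclosing functions

    def visit_ClassDef(self, node):
        self.classes.append(node.name)
        self.generic_visit(node)

    def visit_FunctionDef(self, node):
        self.functions.append(node.name)
        self._stack.append(node.name)
        self.generic_visit(node)   # walk the body while context is set
        self._stack.pop()

    # Async defs are indexed identically
    visit_AsyncFunctionDef = visit_FunctionDef

    def visit_Call(self, node):
        # Only record direct-name calls made from inside some function
        if self._stack and isinstance(node.func, ast.Name):
            self.calls.append((self._stack[-1], node.func.id))
        self.generic_visit(node)

source = """
class Greeter:
    def greet(self):
        return fmt("hi")

def fmt(s):
    return s.upper()
"""
visitor = MiniVisitor()
visitor.visit(ast.parse(source))
print(visitor.functions)  # ['greet', 'fmt']
print(visitor.calls)      # [('greet', 'fmt')]
```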
diff --git a/src/code_index_mcp/indexing/strategies/strategy_factory.py b/src/code_index_mcp/indexing/strategies/strategy_factory.py
new file mode 100644
index 0000000..c7116d9
--- /dev/null
+++ b/src/code_index_mcp/indexing/strategies/strategy_factory.py
@@ -0,0 +1,201 @@
+"""
+Strategy factory for creating appropriate parsing strategies.
+"""
+
+import threading
+from typing import Dict, List
+from .base_strategy import ParsingStrategy
+from .python_strategy import PythonParsingStrategy
+from .javascript_strategy import JavaScriptParsingStrategy
+from .typescript_strategy import TypeScriptParsingStrategy
+from .java_strategy import JavaParsingStrategy
+from .go_strategy import GoParsingStrategy
+from .objective_c_strategy import ObjectiveCParsingStrategy
+from .zig_strategy import ZigParsingStrategy
+from .fallback_strategy import FallbackParsingStrategy
+
+
+class StrategyFactory:
+ """Factory for creating appropriate parsing strategies."""
+
+ def __init__(self):
+ # Initialize all strategies with thread safety
+ self._strategies: Dict[str, ParsingStrategy] = {}
+ self._initialized = False
+ self._lock = threading.RLock()
+ self._initialize_strategies()
+
+ # File type mappings for fallback parser
+ self._file_type_mappings = {
+ # Web and markup
+ '.html': 'html', '.htm': 'html',
+ '.css': 'css', '.scss': 'css', '.sass': 'css',
+ '.less': 'css', '.stylus': 'css', '.styl': 'css',
+ '.md': 'markdown', '.mdx': 'markdown',
+ '.json': 'json', '.jsonc': 'json',
+ '.xml': 'xml',
+ '.yml': 'yaml', '.yaml': 'yaml',
+
+ # Frontend frameworks
+ '.vue': 'vue',
+ '.svelte': 'svelte',
+ '.astro': 'astro',
+
+ # Template engines
+ '.hbs': 'handlebars', '.handlebars': 'handlebars',
+ '.ejs': 'ejs',
+ '.pug': 'pug',
+
+ # Database and SQL
+ '.sql': 'sql', '.ddl': 'sql', '.dml': 'sql',
+ '.mysql': 'sql', '.postgresql': 'sql', '.psql': 'sql',
+ '.sqlite': 'sql', '.mssql': 'sql', '.oracle': 'sql',
+ '.ora': 'sql', '.db2': 'sql',
+ '.proc': 'sql', '.procedure': 'sql',
+ '.func': 'sql', '.function': 'sql',
+ '.view': 'sql', '.trigger': 'sql', '.index': 'sql',
+ '.migration': 'sql', '.seed': 'sql', '.fixture': 'sql',
+ '.schema': 'sql',
+ '.cql': 'sql', '.cypher': 'sql', '.sparql': 'sql',
+ '.gql': 'graphql',
+ '.liquibase': 'sql', '.flyway': 'sql',
+
+ # Config and text files
+ '.txt': 'text',
+ '.ini': 'config', '.cfg': 'config', '.conf': 'config',
+ '.toml': 'config',
+ '.properties': 'config',
+ '.env': 'config',
+ '.gitignore': 'config',
+ '.dockerignore': 'config',
+ '.editorconfig': 'config',
+
+ # Other programming languages (will use fallback)
+ '.c': 'c', '.cpp': 'cpp', '.h': 'h', '.hpp': 'hpp',
+ '.cxx': 'cpp', '.cc': 'cpp', '.hxx': 'hpp', '.hh': 'hpp',
+ '.cs': 'csharp',
+ '.rb': 'ruby',
+ '.php': 'php',
+ '.swift': 'swift',
+ '.kt': 'kotlin', '.kts': 'kotlin',
+ '.rs': 'rust',
+ '.scala': 'scala',
+ '.sh': 'shell', '.bash': 'shell', '.zsh': 'shell',
+ '.ps1': 'powershell',
+ '.bat': 'batch', '.cmd': 'batch',
+ '.r': 'r', '.R': 'r',
+ '.pl': 'perl', '.pm': 'perl',
+ '.lua': 'lua',
+ '.dart': 'dart',
+ '.hs': 'haskell',
+ '.ml': 'ocaml', '.mli': 'ocaml',
+ '.fs': 'fsharp', '.fsx': 'fsharp',
+ '.clj': 'clojure', '.cljs': 'clojure',
+ '.vim': 'vim',
+ }
+
+ def _initialize_strategies(self):
+ """Initialize all parsing strategies with thread safety."""
+ with self._lock:
+ if self._initialized:
+ return
+
+ try:
+ # Python
+ python_strategy = PythonParsingStrategy()
+ for ext in python_strategy.get_supported_extensions():
+ self._strategies[ext] = python_strategy
+
+ # JavaScript
+ js_strategy = JavaScriptParsingStrategy()
+ for ext in js_strategy.get_supported_extensions():
+ self._strategies[ext] = js_strategy
+
+ # TypeScript
+ ts_strategy = TypeScriptParsingStrategy()
+ for ext in ts_strategy.get_supported_extensions():
+ self._strategies[ext] = ts_strategy
+
+ # Java
+ java_strategy = JavaParsingStrategy()
+ for ext in java_strategy.get_supported_extensions():
+ self._strategies[ext] = java_strategy
+
+ # Go
+ go_strategy = GoParsingStrategy()
+ for ext in go_strategy.get_supported_extensions():
+ self._strategies[ext] = go_strategy
+
+ # Objective-C
+ objc_strategy = ObjectiveCParsingStrategy()
+ for ext in objc_strategy.get_supported_extensions():
+ self._strategies[ext] = objc_strategy
+
+ # Zig
+ zig_strategy = ZigParsingStrategy()
+ for ext in zig_strategy.get_supported_extensions():
+ self._strategies[ext] = zig_strategy
+
+ self._initialized = True
+
+ except Exception:
+ # Reset state on failure to allow retry
+ self._strategies.clear()
+ self._initialized = False
+ raise
+
+ def get_strategy(self, file_extension: str) -> ParsingStrategy:
+ """
+ Get appropriate strategy for file extension.
+
+ Args:
+ file_extension: File extension (e.g., '.py', '.js')
+
+ Returns:
+ Appropriate parsing strategy
+ """
+ with self._lock:
+ # Ensure initialization is complete
+ if not self._initialized:
+ self._initialize_strategies()
+
+ # Check for specialized strategies first
+ if file_extension in self._strategies:
+ return self._strategies[file_extension]
+
+ # Use fallback strategy with appropriate language name
+ language_name = self._file_type_mappings.get(file_extension, 'unknown')
+ return FallbackParsingStrategy(language_name)
+
+ def get_all_supported_extensions(self) -> List[str]:
+ """Get all supported extensions across strategies."""
+ specialized = list(self._strategies.keys())
+ fallback = list(self._file_type_mappings.keys())
+ return specialized + fallback
+
+ def get_specialized_extensions(self) -> List[str]:
+ """Get extensions that have specialized parsers."""
+ return list(self._strategies.keys())
+
+ def get_fallback_extensions(self) -> List[str]:
+ """Get extensions that use fallback parsing."""
+ return list(self._file_type_mappings.keys())
+
+ def get_strategy_info(self) -> Dict[str, List[str]]:
+ """Get information about available strategies."""
+ info = {}
+
+ # Group extensions by strategy type
+ for ext, strategy in self._strategies.items():
+ strategy_name = strategy.get_language_name()
+ if strategy_name not in info:
+ info[strategy_name] = []
+ info[strategy_name].append(ext)
+
+ # Add fallback info
+ fallback_languages = set(self._file_type_mappings.values())
+ for lang in fallback_languages:
+ extensions = [ext for ext, mapped_lang in self._file_type_mappings.items() if mapped_lang == lang]
+ info[f"fallback_{lang}"] = extensions
+
+ return info
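The dispatch rule above — exact-extension strategy first, otherwise a fallback tagged with the mapped language name — can be exercised in isolation. The classes below are simplified stand-ins for illustration, not the project's API:

```python
class EchoStrategy:
    """Stand-in strategy that only reports the language it was built for."""

    def __init__(self, language):
        self.language = language

    def get_language_name(self):
        return self.language


class MiniFactory:
    def __init__(self):
        # Extensions with dedicated parsers
        self._strategies = {'.py': EchoStrategy('python')}
        # Everything else maps to a fallback language label
        self._file_type_mappings = {'.html': 'html', '.toml': 'config'}

    def get_strategy(self, ext):
        # Specialized strategy wins; otherwise fall back by mapped name
        if ext in self._strategies:
            return self._strategies[ext]
        return EchoStrategy(self._file_type_mappings.get(ext, 'unknown'))


factory = MiniFactory()
print(factory.get_strategy('.py').get_language_name())    # python
print(factory.get_strategy('.html').get_language_name())  # html
print(factory.get_strategy('.xyz').get_language_name())   # unknown
```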
diff --git a/src/code_index_mcp/indexing/strategies/typescript_strategy.py b/src/code_index_mcp/indexing/strategies/typescript_strategy.py
new file mode 100644
index 0000000..05ed04d
--- /dev/null
+++ b/src/code_index_mcp/indexing/strategies/typescript_strategy.py
@@ -0,0 +1,251 @@
+"""
+TypeScript parsing strategy using tree-sitter - Optimized single-pass version.
+"""
+
+import logging
+from typing import Dict, List, Tuple, Optional
+
+import tree_sitter
+from tree_sitter_typescript import language_typescript
+
+from .base_strategy import ParsingStrategy
+from ..models import SymbolInfo, FileInfo
+
+logger = logging.getLogger(__name__)
+
+
+class TypeScriptParsingStrategy(ParsingStrategy):
+ """TypeScript-specific parsing strategy using tree-sitter - Single Pass Optimized."""
+
+ def __init__(self):
+ self.ts_language = tree_sitter.Language(language_typescript())
+
+ def get_language_name(self) -> str:
+ return "typescript"
+
+ def get_supported_extensions(self) -> List[str]:
+ return ['.ts', '.tsx']
+
+ def parse_file(self, file_path: str, content: str) -> Tuple[Dict[str, SymbolInfo], FileInfo]:
+ """Parse TypeScript file using tree-sitter with single-pass optimization."""
+ symbols = {}
+ functions = []
+ classes = []
+ imports = []
+ exports = []
+
+ # Symbol lookup index for O(1) access
+ symbol_lookup = {} # name -> symbol_id mapping
+
+ parser = tree_sitter.Parser(self.ts_language)
+ tree = parser.parse(content.encode('utf8'))
+
+ # Single-pass traversal that handles everything
+ context = TraversalContext(
+ content=content,
+ file_path=file_path,
+ symbols=symbols,
+ functions=functions,
+ classes=classes,
+ imports=imports,
+ exports=exports,
+ symbol_lookup=symbol_lookup
+ )
+
+ self._traverse_node_single_pass(tree.root_node, context)
+
+ file_info = FileInfo(
+ language=self.get_language_name(),
+ line_count=len(content.splitlines()),
+ symbols={"functions": functions, "classes": classes},
+ imports=imports,
+ exports=exports
+ )
+
+ return symbols, file_info
+
+ def _traverse_node_single_pass(self, node, context: 'TraversalContext',
+ current_function: Optional[str] = None,
+ current_class: Optional[str] = None):
+ """Single-pass traversal that extracts symbols and analyzes calls."""
+
+ # Handle function declarations
+ if node.type == 'function_declaration':
+ name = self._get_function_name(node, context.content)
+ if name:
+ symbol_id = self._create_symbol_id(context.file_path, name)
+ signature = self._get_ts_function_signature(node, context.content)
+ symbol_info = SymbolInfo(
+ type="function",
+ file=context.file_path,
+ line=node.start_point[0] + 1,
+ signature=signature
+ )
+ context.symbols[symbol_id] = symbol_info
+ context.symbol_lookup[name] = symbol_id
+ context.functions.append(name)
+
+ # Traverse function body with updated context
+ func_context = f"{context.file_path}::{name}"
+ for child in node.children:
+ self._traverse_node_single_pass(child, context, current_function=func_context,
+ current_class=current_class)
+ return
+
+ # Handle class declarations
+ elif node.type == 'class_declaration':
+ name = self._get_class_name(node, context.content)
+ if name:
+ symbol_id = self._create_symbol_id(context.file_path, name)
+ symbol_info = SymbolInfo(
+ type="class",
+ file=context.file_path,
+ line=node.start_point[0] + 1
+ )
+ context.symbols[symbol_id] = symbol_info
+ context.symbol_lookup[name] = symbol_id
+ context.classes.append(name)
+
+ # Traverse class body with updated context
+ for child in node.children:
+ self._traverse_node_single_pass(child, context, current_function=current_function,
+ current_class=name)
+ return
+
+ # Handle interface declarations
+ elif node.type == 'interface_declaration':
+ name = self._get_interface_name(node, context.content)
+ if name:
+ symbol_id = self._create_symbol_id(context.file_path, name)
+ symbol_info = SymbolInfo(
+ type="interface",
+ file=context.file_path,
+ line=node.start_point[0] + 1
+ )
+ context.symbols[symbol_id] = symbol_info
+ context.symbol_lookup[name] = symbol_id
+ context.classes.append(name) # Group interfaces with classes
+
+ # Traverse interface body with updated context
+ for child in node.children:
+ self._traverse_node_single_pass(child, context, current_function=current_function,
+ current_class=name)
+ return
+
+ # Handle method definitions
+ elif node.type == 'method_definition':
+ method_name = self._get_method_name(node, context.content)
+ if method_name and current_class:
+ full_name = f"{current_class}.{method_name}"
+ symbol_id = self._create_symbol_id(context.file_path, full_name)
+ signature = self._get_ts_function_signature(node, context.content)
+ symbol_info = SymbolInfo(
+ type="method",
+ file=context.file_path,
+ line=node.start_point[0] + 1,
+ signature=signature
+ )
+ context.symbols[symbol_id] = symbol_info
+ context.symbol_lookup[full_name] = symbol_id
+ context.symbol_lookup[method_name] = symbol_id # Also index by method name alone
+ context.functions.append(full_name)
+
+ # Traverse method body with updated context
+ method_context = f"{context.file_path}::{full_name}"
+ for child in node.children:
+ self._traverse_node_single_pass(child, context, current_function=method_context,
+ current_class=current_class)
+ return
+
+ # Handle function calls
+ elif node.type == 'call_expression' and current_function:
+ # Extract the function being called
+ called_function = None
+ if node.children:
+ func_node = node.children[0]
+ if func_node.type == 'identifier':
+ # Direct function call
+ called_function = context.content[func_node.start_byte:func_node.end_byte]
+ elif func_node.type == 'member_expression':
+ # Method call (obj.method or this.method)
+ for child in func_node.children:
+ if child.type == 'property_identifier':
+ called_function = context.content[child.start_byte:child.end_byte]
+ break
+
+ # Add relationship using O(1) lookup
+ if called_function:
+ if called_function in context.symbol_lookup:
+ symbol_id = context.symbol_lookup[called_function]
+ symbol_info = context.symbols[symbol_id]
+ if current_function not in symbol_info.called_by:
+ symbol_info.called_by.append(current_function)
+ else:
+ # Try to find method with class prefix
+ for name, sid in context.symbol_lookup.items():
+ if name.endswith(f".{called_function}"):
+ symbol_info = context.symbols[sid]
+ if current_function not in symbol_info.called_by:
+ symbol_info.called_by.append(current_function)
+ break
+
+ # Handle import declarations
+ elif node.type == 'import_statement':
+ import_text = context.content[node.start_byte:node.end_byte]
+ context.imports.append(import_text)
+
+ # Handle export declarations
+ elif node.type in ['export_statement', 'export_default_declaration']:
+ export_text = context.content[node.start_byte:node.end_byte]
+ context.exports.append(export_text)
+
+ # Continue traversing children for other node types
+ for child in node.children:
+ self._traverse_node_single_pass(child, context, current_function=current_function,
+ current_class=current_class)
+
+ def _get_function_name(self, node, content: str) -> Optional[str]:
+ """Extract function name from tree-sitter node."""
+ for child in node.children:
+ if child.type == 'identifier':
+ return content[child.start_byte:child.end_byte]
+ return None
+
+ def _get_class_name(self, node, content: str) -> Optional[str]:
+ """Extract class name from tree-sitter node."""
+ for child in node.children:
+ if child.type == 'identifier':
+ return content[child.start_byte:child.end_byte]
+ return None
+
+ def _get_interface_name(self, node, content: str) -> Optional[str]:
+ """Extract interface name from tree-sitter node."""
+ for child in node.children:
+ if child.type == 'type_identifier':
+ return content[child.start_byte:child.end_byte]
+ return None
+
+ def _get_method_name(self, node, content: str) -> Optional[str]:
+ """Extract method name from tree-sitter node."""
+ for child in node.children:
+ if child.type == 'property_identifier':
+ return content[child.start_byte:child.end_byte]
+ return None
+
+ def _get_ts_function_signature(self, node, content: str) -> str:
+ """Extract TypeScript function signature."""
+ return content[node.start_byte:node.end_byte].split('\n')[0].strip()
+
+
+class TraversalContext:
+ """Context object to pass state during single-pass traversal."""
+
+ def __init__(self, content: str, file_path: str, symbols: Dict,
+ functions: List, classes: List, imports: List, exports: List, symbol_lookup: Dict):
+ self.content = content
+ self.file_path = file_path
+ self.symbols = symbols
+ self.functions = functions
+ self.classes = classes
+ self.imports = imports
+ self.exports = exports
+ self.symbol_lookup = symbol_lookup
\ No newline at end of file
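Both the Python and TypeScript visitors resolve a callee the same way: an O(1) exact hit in `symbol_lookup`, then a linear suffix scan over `Class.method` keys. A standalone sketch of that resolution step, with hypothetical data:

```python
def resolve_call(called, symbol_lookup):
    """Return the symbol id for a called name, or None.

    Tries an exact match first, then falls back to the first
    'Class.method' key whose method part matches (mirroring the
    break in the visitors above).
    """
    if called in symbol_lookup:
        return symbol_lookup[called]
    for name, symbol_id in symbol_lookup.items():
        if name.endswith(f".{called}"):
            return symbol_id
    return None

# Hypothetical lookup table as the visitors would build it
lookup = {
    "fmt": "app.ts::fmt",
    "Greeter.greet": "app.ts::Greeter.greet",
}
print(resolve_call("fmt", lookup))      # app.ts::fmt
print(resolve_call("greet", lookup))    # app.ts::Greeter.greet
print(resolve_call("missing", lookup))  # None
```

Note the fallback is first-match: if two classes define the same method name, the call edge is attributed to whichever entry the scan reaches first.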
diff --git a/src/code_index_mcp/indexing/strategies/zig_strategy.py b/src/code_index_mcp/indexing/strategies/zig_strategy.py
new file mode 100644
index 0000000..658ca2b
--- /dev/null
+++ b/src/code_index_mcp/indexing/strategies/zig_strategy.py
@@ -0,0 +1,99 @@
+"""
+Zig parsing strategy using tree-sitter.
+"""
+
+import logging
+from typing import Dict, List, Tuple, Optional
+
+import tree_sitter
+from tree_sitter_zig import language
+
+from .base_strategy import ParsingStrategy
+from ..models import SymbolInfo, FileInfo
+
+logger = logging.getLogger(__name__)
+
+
+class ZigParsingStrategy(ParsingStrategy):
+ """Zig parsing strategy using tree-sitter."""
+
+ def __init__(self):
+ self.zig_language = tree_sitter.Language(language())
+
+ def get_language_name(self) -> str:
+ return "zig"
+
+ def get_supported_extensions(self) -> List[str]:
+ return ['.zig', '.zon']
+
+ def parse_file(self, file_path: str, content: str) -> Tuple[Dict[str, SymbolInfo], FileInfo]:
+ """Parse Zig file using tree-sitter."""
+ return self._tree_sitter_parse(file_path, content)
+
+ def _tree_sitter_parse(self, file_path: str, content: str) -> Tuple[Dict[str, SymbolInfo], FileInfo]:
+ """Parse Zig file using tree-sitter."""
+ symbols = {}
+ functions = []
+ classes = []
+ imports = []
+
+ parser = tree_sitter.Parser(self.zig_language)
+ tree = parser.parse(content.encode('utf8'))
+
+ # Extract symbols in a single tree-sitter traversal
+ self._traverse_zig_node(tree.root_node, content, file_path, symbols, functions, classes, imports)
+
+ file_info = FileInfo(
+ language=self.get_language_name(),
+ line_count=len(content.splitlines()),
+ symbols={"functions": functions, "classes": classes},
+ imports=imports
+ )
+
+ return symbols, file_info
+
+ def _traverse_zig_node(self, node, content: str, file_path: str, symbols: Dict, functions: List, classes: List, imports: List):
+ """Traverse Zig AST node and extract symbols."""
+ if node.type == 'function_declaration':
+ func_name = self._extract_zig_function_name_from_node(node, content)
+ if func_name:
+ line_number = self._extract_line_number(content, node.start_byte)
+ symbol_id = self._create_symbol_id(file_path, func_name)
+ symbols[symbol_id] = SymbolInfo(
+ type="function",
+ file=file_path,
+ line=line_number,
+ signature=self._safe_extract_text(content, node.start_byte, node.end_byte)
+ )
+ functions.append(func_name)
+
+ elif node.type in ['struct_declaration', 'union_declaration', 'enum_declaration']:
+ type_name = self._extract_zig_type_name_from_node(node, content)
+ if type_name:
+ line_number = self._extract_line_number(content, node.start_byte)
+ symbol_id = self._create_symbol_id(file_path, type_name)
+ symbols[symbol_id] = SymbolInfo(
+ type=node.type.replace('_declaration', ''),
+ file=file_path,
+ line=line_number
+ )
+ classes.append(type_name)
+
+ # Recurse through children
+ for child in node.children:
+ self._traverse_zig_node(child, content, file_path, symbols, functions, classes, imports)
+
+ def _extract_zig_function_name_from_node(self, node, content: str) -> Optional[str]:
+ """Extract function name from tree-sitter node."""
+ for child in node.children:
+ if child.type == 'identifier':
+ return self._safe_extract_text(content, child.start_byte, child.end_byte)
+ return None
+
+ def _extract_zig_type_name_from_node(self, node, content: str) -> Optional[str]:
+ """Extract type name from tree-sitter node."""
+ for child in node.children:
+ if child.type == 'identifier':
+ return self._safe_extract_text(content, child.start_byte, child.end_byte)
+ return None
+
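`_extract_line_number` is inherited from the base strategy and not shown in this diff. Assuming it converts a tree-sitter byte offset to a 1-based line by counting newlines, the computation looks like this (a sketch, not the project's implementation):

```python
def line_from_offset(content: str, start_byte: int) -> int:
    """1-based line number of the position at start_byte.

    tree-sitter offsets are byte offsets into the UTF-8 encoding,
    so the slice is taken on bytes to stay correct for non-ASCII
    source files.
    """
    return content.encode('utf8')[:start_byte].count(b'\n') + 1

src = "const A = 1;\nfn main() {}\n"
print(line_from_offset(src, src.index("fn")))  # 2
```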
diff --git a/src/code_index_mcp/project_settings.py b/src/code_index_mcp/project_settings.py
index 94b9da3..d3c3965 100644
--- a/src/code_index_mcp/project_settings.py
+++ b/src/code_index_mcp/project_settings.py
@@ -6,15 +6,16 @@
"""
import os
import json
-import shutil
-import pickle
import tempfile
import hashlib
-import subprocess
from datetime import datetime
+
from .constants import (
- SETTINGS_DIR, CONFIG_FILE, INDEX_FILE, CACHE_FILE
+ SETTINGS_DIR, CONFIG_FILE, INDEX_FILE
)
from .search.base import SearchStrategy
from .search.ugrep import UgrepStrategy
@@ -45,8 +46,8 @@ def _get_available_strategies() -> list[SearchStrategy]:
strategy = strategy_class()
if strategy.is_available():
available.append(strategy)
- except Exception as e:
- print(f"Error initializing strategy {strategy_class.__name__}: {e}")
+ except Exception:
+ # Strategy unavailable or failed to initialize; skip it
+ pass
return available
@@ -69,36 +70,40 @@ def __init__(self, base_path, skip_load=False):
try:
# Get system temporary directory
system_temp = tempfile.gettempdir()
- print(f"System temporary directory: {system_temp}")
# Check if the system temporary directory exists and is writable
if not os.path.exists(system_temp):
- print(f"Warning: System temporary directory does not exist: {system_temp}")
- # Try using current directory as fallback
- system_temp = os.getcwd()
- print(f"Using current directory as fallback: {system_temp}")
+ # Try using project directory as fallback if available
+ if base_path and os.path.exists(base_path):
+ system_temp = base_path
+ else:
+ # Use user's home directory as last resort
+ system_temp = os.path.expanduser("~")
if not os.access(system_temp, os.W_OK):
- print(f"Warning: No write access to system temporary directory: {system_temp}")
- # Try using current directory as fallback
- system_temp = os.getcwd()
- print(f"Using current directory as fallback: {system_temp}")
+ # Try using project directory as fallback if available
+ if base_path and os.path.exists(base_path) and os.access(base_path, os.W_OK):
+ system_temp = base_path
+ else:
+ # Use user's home directory as last resort
+ system_temp = os.path.expanduser("~")
# Create code_indexer directory
temp_base_dir = os.path.join(system_temp, SETTINGS_DIR)
- print(f"Code indexer directory path: {temp_base_dir}")
if not os.path.exists(temp_base_dir):
- print(f"Creating code indexer directory: {temp_base_dir}")
os.makedirs(temp_base_dir, exist_ok=True)
- print(f"Code indexer directory created: {temp_base_dir}")
else:
- print(f"Code indexer directory already exists: {temp_base_dir}")
- except Exception as e:
- print(f"Error setting up temporary directory: {e}")
- # If unable to create temporary directory, use .code_indexer in current directory
- temp_base_dir = os.path.join(os.getcwd(), ".code_indexer")
- print(f"Using fallback directory: {temp_base_dir}")
+ pass
+ except Exception:
+ # If unable to create temporary directory, use .code_indexer in project directory if available
+ if base_path and os.path.exists(base_path):
+ temp_base_dir = os.path.join(base_path, ".code_indexer")
+ else:
+ # Use home directory as last resort
+ temp_base_dir = os.path.join(os.path.expanduser("~"), ".code_indexer")
+
if not os.path.exists(temp_base_dir):
os.makedirs(temp_base_dir, exist_ok=True)
@@ -108,52 +113,56 @@ def __init__(self, base_path, skip_load=False):
# Use hash of project path as unique identifier
path_hash = hashlib.md5(base_path.encode()).hexdigest()
self.settings_path = os.path.join(temp_base_dir, path_hash)
- print(f"Using project-specific directory: {self.settings_path}")
else:
# If no base path provided, use a default directory
self.settings_path = os.path.join(temp_base_dir, "default")
- print(f"Using default directory: {self.settings_path}")
self.ensure_settings_dir()
- except Exception as e:
- print(f"Error setting up project settings: {e}")
- # If error occurs, use .code_indexer in current directory as fallback
- fallback_dir = os.path.join(os.getcwd(), ".code_indexer",
- "default" if not base_path else hashlib.md5(base_path.encode()).hexdigest())
- print(f"Using fallback directory: {fallback_dir}")
+ except Exception:
+ # If error occurs, use .code_indexer in project or home directory as fallback
+ if base_path and os.path.exists(base_path):
+ fallback_dir = os.path.join(base_path, ".code_indexer",
+ hashlib.md5(base_path.encode()).hexdigest())
+ else:
+ fallback_dir = os.path.join(os.path.expanduser("~"), ".code_indexer",
+ "default" if not base_path else hashlib.md5(base_path.encode()).hexdigest())
+
self.settings_path = fallback_dir
if not os.path.exists(fallback_dir):
os.makedirs(fallback_dir, exist_ok=True)
def ensure_settings_dir(self):
"""Ensure settings directory exists"""
- print(f"Checking project settings directory: {self.settings_path}")
try:
if not os.path.exists(self.settings_path):
- print(f"Creating project settings directory: {self.settings_path}")
# Create directory structure
os.makedirs(self.settings_path, exist_ok=True)
- print(f"Project settings directory created: {self.settings_path}")
else:
- print(f"Project settings directory already exists: {self.settings_path}")
+ pass
# Check if directory is writable
if not os.access(self.settings_path, os.W_OK):
- print(f"Warning: No write access to project settings directory: {self.settings_path}")
- # If directory is not writable, use .code_indexer in current directory as fallback
- fallback_dir = os.path.join(os.getcwd(), ".code_indexer",
- os.path.basename(self.settings_path))
- print(f"Using fallback directory: {fallback_dir}")
+ # If directory is not writable, use .code_indexer in project or home directory as fallback
+ if self.base_path and os.path.exists(self.base_path) and os.access(self.base_path, os.W_OK):
+ fallback_dir = os.path.join(self.base_path, ".code_indexer",
+ os.path.basename(self.settings_path))
+ else:
+ fallback_dir = os.path.join(os.path.expanduser("~"), ".code_indexer",
+ os.path.basename(self.settings_path))
+
self.settings_path = fallback_dir
if not os.path.exists(fallback_dir):
os.makedirs(fallback_dir, exist_ok=True)
- except Exception as e:
- print(f"Error ensuring settings directory: {e}")
- # If unable to create settings directory, use .code_indexer in current directory
- fallback_dir = os.path.join(os.getcwd(), ".code_indexer",
- "default" if not self.base_path else hashlib.md5(self.base_path.encode()).hexdigest())
- print(f"Using fallback directory: {fallback_dir}")
+ except Exception:
+ # If unable to create settings directory, use .code_indexer in project or home directory
+ if self.base_path and os.path.exists(self.base_path):
+ fallback_dir = os.path.join(self.base_path, ".code_indexer",
+ hashlib.md5(self.base_path.encode()).hexdigest())
+ else:
+ fallback_dir = os.path.join(os.path.expanduser("~"), ".code_indexer",
+ "default" if not self.base_path else hashlib.md5(self.base_path.encode()).hexdigest())
+
self.settings_path = fallback_dir
if not os.path.exists(fallback_dir):
os.makedirs(fallback_dir, exist_ok=True)
@@ -165,34 +174,13 @@ def get_config_path(self):
# Ensure directory exists
os.makedirs(os.path.dirname(path), exist_ok=True)
return path
- except Exception as e:
- print(f"Error getting config path: {e}")
- # If error occurs, use file in current directory as fallback
- return os.path.join(os.getcwd(), CONFIG_FILE)
-
- def get_index_path(self):
- """Get the path to the index file"""
- try:
- path = os.path.join(self.settings_path, INDEX_FILE)
- # Ensure directory exists
- os.makedirs(os.path.dirname(path), exist_ok=True)
- return path
- except Exception as e:
- print(f"Error getting index path: {e}")
- # If error occurs, use file in current directory as fallback
- return os.path.join(os.getcwd(), INDEX_FILE)
+ except Exception:
+ # If error occurs, use file in project or home directory as fallback
+ if self.base_path and os.path.exists(self.base_path):
+ return os.path.join(self.base_path, CONFIG_FILE)
+ else:
+ return os.path.join(os.path.expanduser("~"), CONFIG_FILE)
- def get_cache_path(self):
- """Get the path to the cache file"""
- try:
- path = os.path.join(self.settings_path, CACHE_FILE)
- # Ensure directory exists
- os.makedirs(os.path.dirname(path), exist_ok=True)
- return path
- except Exception as e:
- print(f"Error getting cache path: {e}")
- # If error occurs, use file in current directory as fallback
- return os.path.join(os.getcwd(), CACHE_FILE)
def _get_timestamp(self):
"""Get current timestamp"""
@@ -215,10 +203,9 @@ def save_config(self, config):
with open(config_path, 'w', encoding='utf-8') as f:
json.dump(config, f, indent=2, ensure_ascii=False)
- print(f"Config saved to: {config_path}")
+
return config
- except Exception as e:
- print(f"Error saving config: {e}")
+ except Exception:
return config
def load_config(self):
@@ -237,218 +224,157 @@ def load_config(self):
try:
with open(config_path, 'r', encoding='utf-8') as f:
config = json.load(f)
- print(f"Config loaded from: {config_path}")
return config
- except (json.JSONDecodeError, UnicodeDecodeError) as e:
- print(f"Error parsing config file: {e}")
+ except (json.JSONDecodeError, UnicodeDecodeError):
# If file is corrupted, return empty dict
return {}
else:
- print(f"Config file does not exist: {config_path}")
+ pass  # config file does not exist; fall through to return {}
return {}
- except Exception as e:
- print(f"Error loading config: {e}")
+ except Exception:
return {}
- def save_index(self, file_index):
- """Save file index
+ def save_index(self, index_data):
+ """Save code index in JSON format
Args:
- file_index (dict): File index data
+ index_data: Index data as dictionary or JSON string
"""
try:
index_path = self.get_index_path()
- print(f"Saving index to: {index_path}")
# Ensure directory exists
dir_path = os.path.dirname(index_path)
if not os.path.exists(dir_path):
- print(f"Creating directory: {dir_path}")
os.makedirs(dir_path, exist_ok=True)
# Check if directory is writable
if not os.access(dir_path, os.W_OK):
- print(f"Warning: Directory is not writable: {dir_path}")
- # Use current directory as fallback
- index_path = os.path.join(os.getcwd(), INDEX_FILE)
- print(f"Using fallback path: {index_path}")
+ # Use project or home directory as fallback
+ if self.base_path and os.path.exists(self.base_path):
+ index_path = os.path.join(self.base_path, INDEX_FILE)
+ else:
+ index_path = os.path.join(os.path.expanduser("~"), INDEX_FILE)
+
+ # Convert to JSON string if it's an object with to_json method
+ if hasattr(index_data, 'to_json'):
+ json_data = index_data.to_json()
+ elif isinstance(index_data, str):
+ json_data = index_data
+ else:
+ # Assume it's a dictionary and convert to JSON
+ json_data = json.dumps(index_data, indent=2, default=str)
- with open(index_path, 'wb') as f:
- pickle.dump(file_index, f)
+ with open(index_path, 'w', encoding='utf-8') as f:
+ f.write(json_data)
- print(f"Index saved successfully to: {index_path}")
- except Exception as e:
- print(f"Error saving index: {e}")
- # Try saving to current directory
+
+ except Exception:
+ # Try saving to project or home directory
try:
- fallback_path = os.path.join(os.getcwd(), INDEX_FILE)
- print(f"Trying fallback path: {fallback_path}")
- with open(fallback_path, 'wb') as f:
- pickle.dump(file_index, f)
- print(f"Index saved to fallback path: {fallback_path}")
- except Exception as e2:
- print(f"Error saving index to fallback path: {e2}")
-
+ if self.base_path and os.path.exists(self.base_path):
+ fallback_path = os.path.join(self.base_path, INDEX_FILE)
+ else:
+ fallback_path = os.path.join(os.path.expanduser("~"), INDEX_FILE)
+
+ # Convert to JSON string if it's an object with to_json method
+ if hasattr(index_data, 'to_json'):
+ json_data = index_data.to_json()
+ elif isinstance(index_data, str):
+ json_data = index_data
+ else:
+ json_data = json.dumps(index_data, indent=2, default=str)
+
+ with open(fallback_path, 'w', encoding='utf-8') as f:
+ f.write(json_data)
+ except Exception:
+ pass
def load_index(self):
- """Load file index
+ """Load code index from JSON format
Returns:
- dict: File index data, or empty dict if file doesn't exist
+ dict: Index data, or None if file doesn't exist or has errors
"""
- # If skip_load is set, return empty dict directly
+ # If skip_load is set, return None directly
if self.skip_load:
- return {}
+ return None
try:
index_path = self.get_index_path()
if os.path.exists(index_path):
try:
- with open(index_path, 'rb') as f:
- index = pickle.load(f)
- print(f"Index loaded successfully from: {index_path}")
- return index
- except (pickle.PickleError, EOFError) as e:
- print(f"Error parsing index file: {e}")
- # If file is corrupted, return empty dict
- return {}
- except Exception as e:
- print(f"Unexpected error loading index: {e}")
- return {}
+ with open(index_path, 'r', encoding='utf-8') as f:
+ index_data = json.load(f)
+ return index_data
+ except (json.JSONDecodeError, UnicodeDecodeError):
+ # If file is corrupted, return None
+ return None
+ except Exception:
+ return None
else:
- # Try loading from current directory
- fallback_path = os.path.join(os.getcwd(), INDEX_FILE)
- if os.path.exists(fallback_path):
- print(f"Trying fallback path: {fallback_path}")
- try:
- with open(fallback_path, 'rb') as f:
- index = pickle.load(f)
- print(f"Index loaded from fallback path: {fallback_path}")
- return index
- except Exception as e:
- print(f"Error loading index from fallback path: {e}")
-
- return {}
- except Exception as e:
- print(f"Error in load_index: {e}")
- return {}
-
- def save_cache(self, content_cache):
- """Save content cache
-
- Args:
- content_cache (dict): Content cache data
- """
- try:
- cache_path = self.get_cache_path()
- print(f"Saving cache to: {cache_path}")
-
- # Ensure directory exists
- dir_path = os.path.dirname(cache_path)
- if not os.path.exists(dir_path):
- print(f"Creating directory: {dir_path}")
- os.makedirs(dir_path, exist_ok=True)
-
- # Check if directory is writable
- if not os.access(dir_path, os.W_OK):
- print(f"Warning: Directory is not writable: {dir_path}")
- # Use current directory as fallback
- cache_path = os.path.join(os.getcwd(), CACHE_FILE)
- print(f"Using fallback path: {cache_path}")
-
- with open(cache_path, 'wb') as f:
- pickle.dump(content_cache, f)
-
- print(f"Cache saved successfully to: {cache_path}")
- except Exception as e:
- print(f"Error saving cache: {e}")
- # Try saving to current directory
- try:
- fallback_path = os.path.join(os.getcwd(), CACHE_FILE)
- print(f"Trying fallback path: {fallback_path}")
- with open(fallback_path, 'wb') as f:
- pickle.dump(content_cache, f)
- print(f"Cache saved to fallback path: {fallback_path}")
- except Exception as e2:
- print(f"Error saving cache to fallback path: {e2}")
+ # Try loading from project or home directory
+ if self.base_path and os.path.exists(self.base_path):
+ fallback_path = os.path.join(self.base_path, INDEX_FILE)
+ else:
+ fallback_path = os.path.join(os.path.expanduser("~"), INDEX_FILE)
+ if os.path.exists(fallback_path):
+ try:
+ with open(fallback_path, 'r', encoding='utf-8') as f:
+ index_data = json.load(f)
+ return index_data
+ except Exception:
+ pass
+ return None
+ except Exception:
+ return None
- def load_cache(self):
- """Load content cache
- Returns:
- dict: Content cache data, or empty dict if file doesn't exist
- """
- # If skip_load is set, return empty dict directly
- if self.skip_load:
- return {}
+ def cleanup_legacy_files(self) -> None:
+ """Clean up any legacy index files found."""
try:
- cache_path = self.get_cache_path()
-
- if os.path.exists(cache_path):
- try:
- with open(cache_path, 'rb') as f:
- cache = pickle.load(f)
- print(f"Cache loaded successfully from: {cache_path}")
- return cache
- except (pickle.PickleError, EOFError) as e:
- print(f"Error parsing cache file: {e}")
- # If file is corrupted, return empty dict
- return {}
- except Exception as e:
- print(f"Unexpected error loading cache: {e}")
- return {}
- else:
- # Try loading from current directory
- fallback_path = os.path.join(os.getcwd(), CACHE_FILE)
- if os.path.exists(fallback_path):
- print(f"Trying fallback path: {fallback_path}")
+ legacy_files = [
+ os.path.join(self.settings_path, "file_index.pickle"),
+ os.path.join(self.settings_path, "content_cache.pickle"),
+ os.path.join(self.settings_path, INDEX_FILE) # Legacy JSON
+ ]
+
+ for legacy_file in legacy_files:
+ if os.path.exists(legacy_file):
try:
- with open(fallback_path, 'rb') as f:
- cache = pickle.load(f)
- print(f"Cache loaded from fallback path: {fallback_path}")
- return cache
- except Exception as e:
- print(f"Error loading cache from fallback path: {e}")
-
- return {}
- except Exception as e:
- print(f"Error in load_cache: {e}")
- return {}
+ os.remove(legacy_file)
+ except Exception:
+ pass
+ except Exception:
+ pass
def clear(self):
- """Clear all settings and cache files"""
+ """Clear config and index files"""
try:
- print(f"Clearing settings directory: {self.settings_path}")
if os.path.exists(self.settings_path):
# Check if directory is writable
if not os.access(self.settings_path, os.W_OK):
- print(f"Warning: Directory is not writable: {self.settings_path}")
return
- # Delete all files in the directory
- try:
- for filename in os.listdir(self.settings_path):
- file_path = os.path.join(self.settings_path, filename)
- try:
- if os.path.isfile(file_path):
- os.unlink(file_path)
- print(f"Deleted file: {file_path}")
- elif os.path.isdir(file_path):
- shutil.rmtree(file_path)
- print(f"Deleted directory: {file_path}")
- except Exception as e:
- print(f"Error deleting {file_path}: {e}")
- except Exception as e:
- print(f"Error listing directory: {e}")
+ # Delete specific files only (config.json and index.json)
+ files_to_delete = [CONFIG_FILE, INDEX_FILE]
- print(f"Settings directory cleared successfully")
- else:
- print(f"Settings directory does not exist: {self.settings_path}")
- except Exception as e:
- print(f"Error clearing settings: {e}")
+ for filename in files_to_delete:
+ file_path = os.path.join(self.settings_path, filename)
+ try:
+ if os.path.isfile(file_path):
+ os.unlink(file_path)
+ except Exception:
+ pass
+ except Exception:
+ pass
def get_stats(self):
"""Get statistics for the settings directory
@@ -456,7 +382,6 @@ def get_stats(self):
dict: Dictionary containing file sizes and update times
"""
try:
- print(f"Getting stats for settings directory: {self.settings_path}")
stats = {
'settings_path': self.settings_path,
@@ -465,7 +390,7 @@ def get_stats(self):
'writable': os.access(self.settings_path, os.W_OK) if os.path.exists(self.settings_path) else False,
'files': {},
'temp_dir': tempfile.gettempdir(),
- 'current_dir': os.getcwd()
+ 'base_path': self.base_path
}
if stats['exists'] and stats['is_directory']:
@@ -475,7 +400,7 @@ def get_stats(self):
stats['all_files'] = all_files
# Get details for specific files
- for filename in [CONFIG_FILE, INDEX_FILE, CACHE_FILE]:
+ for filename in [CONFIG_FILE, INDEX_FILE]:
file_path = os.path.join(self.settings_path, filename)
if os.path.exists(file_path):
try:
@@ -496,19 +421,21 @@ def get_stats(self):
stats['list_error'] = str(e)
# Check fallback path
- fallback_dir = os.path.join(os.getcwd(), ".code_indexer")
+ if self.base_path and os.path.exists(self.base_path):
+ fallback_dir = os.path.join(self.base_path, ".code_indexer")
+ else:
+ fallback_dir = os.path.join(os.path.expanduser("~"), ".code_indexer")
stats['fallback_path'] = fallback_dir
stats['fallback_exists'] = os.path.exists(fallback_dir)
stats['fallback_is_directory'] = os.path.isdir(fallback_dir) if os.path.exists(fallback_dir) else False
return stats
except Exception as e:
- print(f"Error getting stats: {e}")
return {
'error': str(e),
'settings_path': self.settings_path,
'temp_dir': tempfile.gettempdir(),
- 'current_dir': os.getcwd()
+ 'base_path': self.base_path
}
def get_search_tools_config(self):
@@ -530,13 +457,58 @@ def get_preferred_search_tool(self) -> SearchStrategy | None:
"""
if not self.available_strategies:
self.refresh_available_strategies()
-
+
return self.available_strategies[0] if self.available_strategies else None
def refresh_available_strategies(self):
"""
Force a refresh of the available search tools list.
"""
- print("Refreshing available search strategies...")
self.available_strategies = _get_available_strategies()
- print(f"Available strategies found: {[s.name for s in self.available_strategies]}")
+
+ def get_file_watcher_config(self) -> dict:
+ """
+ Get file watcher specific configuration.
+
+ Returns:
+ dict: File watcher configuration with defaults
+ """
+ config = self.load_config()
+ default_config = {
+ "enabled": True,
+ "debounce_seconds": 6.0,
+ "additional_exclude_patterns": [],
+ "monitored_extensions": [], # Empty = use all supported extensions
+ "exclude_patterns": [
+ ".git", ".svn", ".hg",
+ "node_modules", "__pycache__", ".venv", "venv",
+ ".DS_Store", "Thumbs.db",
+ "dist", "build", "target", ".idea", ".vscode",
+ ".pytest_cache", ".coverage", ".tox",
+ "bin", "obj"
+ ]
+ }
+
+ # Merge with loaded config
+ file_watcher_config = config.get("file_watcher", {})
+ for key, default_value in default_config.items():
+ if key not in file_watcher_config:
+ file_watcher_config[key] = default_value
+
+ return file_watcher_config
+
+ def update_file_watcher_config(self, updates: dict) -> None:
+ """
+ Update file watcher configuration.
+
+ Args:
+ updates: Dictionary of configuration updates
+ """
+ config = self.load_config()
+ if "file_watcher" not in config:
+ config["file_watcher"] = self.get_file_watcher_config()
+
+ config["file_watcher"].update(updates)
+ self.save_config(config)
diff --git a/src/code_index_mcp/search/ag.py b/src/code_index_mcp/search/ag.py
index e2506a2..aa3eb33 100644
--- a/src/code_index_mcp/search/ag.py
+++ b/src/code_index_mcp/search/ag.py
@@ -27,7 +27,8 @@ def search(
context_lines: int = 0,
file_pattern: Optional[str] = None,
fuzzy: bool = False,
- regex: bool = False
+ regex: bool = False,
+ max_line_length: Optional[int] = None
) -> Dict[str, List[Tuple[int, str]]]:
"""
Execute a search using The Silver Searcher (ag).
@@ -40,6 +41,7 @@ def search(
file_pattern: File pattern to filter
fuzzy: Enable word boundary matching (not true fuzzy search)
regex: Enable regex pattern matching
+ max_line_length: Optional maximum length for returned lines; longer lines are truncated
"""
# ag prints line numbers and groups by file by default, which is good.
# --noheading is used to be consistent with other tools' output format.
@@ -93,6 +95,26 @@ def search(
cmd.extend(['-G', regex_pattern])
+ processed_patterns = set()
+ exclude_dirs = getattr(self, 'exclude_dirs', [])
+ exclude_file_patterns = getattr(self, 'exclude_file_patterns', [])
+
+ for directory in exclude_dirs:
+ normalized = directory.strip()
+ if not normalized or normalized in processed_patterns:
+ continue
+ cmd.extend(['--ignore', normalized])
+ processed_patterns.add(normalized)
+
+ for pattern in exclude_file_patterns:
+ normalized = pattern.strip()
+ if not normalized or normalized in processed_patterns:
+ continue
+ if normalized.startswith('!'):
+ normalized = normalized[1:]
+ cmd.extend(['--ignore', normalized])
+ processed_patterns.add(normalized)
+
# Add -- to treat pattern as a literal argument, preventing injection
cmd.append('--')
cmd.append(search_pattern)
@@ -116,10 +138,10 @@ def search(
if process.returncode > 1:
raise RuntimeError(f"ag failed with exit code {process.returncode}: {process.stderr}")
- return parse_search_output(process.stdout, base_path)
+ return parse_search_output(process.stdout, base_path, max_line_length)
except FileNotFoundError:
raise RuntimeError("'ag' (The Silver Searcher) not found. Please install it and ensure it's in your PATH.")
except Exception as e:
# Re-raise other potential exceptions like permission errors
- raise RuntimeError(f"An error occurred while running ag: {e}")
+ raise RuntimeError(f"An error occurred while running ag: {e}")
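The exclusion handling added above strips whitespace, drops the leading `!` from negated patterns, dedupes, and emits one `--ignore` flag per entry. A condensed sketch of that loop (the helper name `build_ignore_args` is illustrative):

```python
def build_ignore_args(exclude_dirs, exclude_file_patterns):
    """Normalize exclusion entries into ag-style --ignore arguments."""
    args, seen = [], set()
    for raw in list(exclude_dirs) + list(exclude_file_patterns):
        normalized = raw.strip()
        if normalized.startswith('!'):
            # ag has no negation syntax; treat "!pat" as plain "pat".
            normalized = normalized[1:]
        if not normalized or normalized in seen:
            continue  # skip blanks and duplicates
        args.extend(['--ignore', normalized])
        seen.add(normalized)
    return args

args = build_ignore_args(['node_modules', ' node_modules '], ['!*.min.js'])
```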
diff --git a/src/code_index_mcp/search/base.py b/src/code_index_mcp/search/base.py
index 33cb9bd..5e4c63b 100644
--- a/src/code_index_mcp/search/base.py
+++ b/src/code_index_mcp/search/base.py
@@ -10,15 +10,25 @@
import subprocess
import sys
from abc import ABC, abstractmethod
-from typing import Dict, List, Optional, Tuple, Any
+from typing import Any, Dict, List, Optional, Tuple, TYPE_CHECKING
-def parse_search_output(output: str, base_path: str) -> Dict[str, List[Tuple[int, str]]]:
+from ..indexing.qualified_names import normalize_file_path
+
+if TYPE_CHECKING: # pragma: no cover
+ from ..utils.file_filter import FileFilter
+
+def parse_search_output(
+ output: str,
+ base_path: str,
+ max_line_length: Optional[int] = None
+) -> Dict[str, List[Tuple[int, str]]]:
"""
Parse the output of command-line search tools (grep, ag, rg).
Args:
output: The raw output from the command-line tool.
base_path: The base path of the project to make file paths relative.
+ max_line_length: Optional maximum line length to truncate long lines.
Returns:
A dictionary where keys are file paths and values are lists of (line_number, line_content) tuples.
@@ -31,25 +41,52 @@ def parse_search_output(output: str, base_path: str) -> Dict[str, List[Tuple[int
if not line.strip():
continue
try:
- # Handle Windows paths which might have a drive letter, e.g., C:
+ # Try to parse as a matched line first (format: path:linenum:content)
parts = line.split(':', 2)
- if sys.platform == "win32" and len(parts[0]) == 1 and parts[1].startswith('\\'):
- # Re-join drive letter with the rest of the path
+
+ # Check if this might be a context line (format: path-linenum-content)
+ # Context lines use '-' as separator in grep/ag output
+ if len(parts) < 3 and '-' in line:
+ # Try to parse as context line
+ # Match pattern: path-linenum-content or path-linenum-\tcontent
+ match = re.match(r'^(.*?)-(\d+)[-\t](.*)$', line)
+ if match:
+ file_path_abs = match.group(1)
+ line_number_str = match.group(2)
+ content = match.group(3)
+ else:
+ # If regex doesn't match, skip this line
+ continue
+ elif sys.platform == "win32" and len(parts) >= 3 and len(parts[0]) == 1 and parts[1].startswith('\\'):
+ # Handle Windows paths with drive letter (e.g., C:\path\file.txt)
file_path_abs = f"{parts[0]}:{parts[1]}"
line_number_str = parts[2].split(':', 1)[0]
- content = parts[2].split(':', 1)[1]
- else:
+ content = parts[2].split(':', 1)[1] if ':' in parts[2] else parts[2]
+ elif len(parts) >= 3:
+ # Standard format: path:linenum:content
file_path_abs = parts[0]
line_number_str = parts[1]
content = parts[2]
+ else:
+ # Line doesn't match any expected format
+ continue
line_number = int(line_number_str)
- # Make the file path relative to the base_path
- relative_path = os.path.relpath(file_path_abs, normalized_base_path)
+ # If the path is already relative (doesn't start with /), keep it as is
+ # Otherwise, make it relative to the base_path
+ if os.path.isabs(file_path_abs):
+ relative_path = os.path.relpath(file_path_abs, normalized_base_path)
+ else:
+ # Path is already relative, use it as is
+ relative_path = file_path_abs
# Normalize path separators for consistency
- relative_path = relative_path.replace('\\', '/')
+ relative_path = normalize_file_path(relative_path)
+
+ # Truncate content if it exceeds max_line_length
+ if max_line_length and len(content) > max_line_length:
+ content = content[:max_line_length] + '... (truncated)'
if relative_path not in results:
results[relative_path] = []
@@ -100,22 +137,45 @@ def is_safe_regex_pattern(pattern: str) -> bool:
Returns:
True if the pattern looks like a safe regex, False otherwise
"""
- # Allow basic regex operators that are commonly used and safe
- safe_regex_chars = ['|', '(', ')', '[', ']', '^', '$']
+ # Strong indicators of regex intent
+ strong_regex_indicators = ['|', '(', ')', '[', ']', '^', '$']
- # Check if pattern contains any regex metacharacters
- has_regex_chars = any(char in pattern for char in safe_regex_chars)
+ # Weaker indicators that need context
+ weak_regex_indicators = ['.', '*', '+', '?']
- # Basic safety check - avoid obviously dangerous patterns
- dangerous_patterns = [
- r'(.+)+', # Nested quantifiers
- r'(.*)*', # Nested stars
- r'(.{0,})+', # Potential ReDoS patterns
- ]
+ # Check for strong regex indicators
+ has_strong_regex = any(char in pattern for char in strong_regex_indicators)
- has_dangerous_patterns = any(dangerous in pattern for dangerous in dangerous_patterns)
+ # Check for weak indicators with context
+ has_weak_regex = any(char in pattern for char in weak_regex_indicators)
- return has_regex_chars and not has_dangerous_patterns
+ # If has strong indicators, likely regex
+ if has_strong_regex:
+ # Still check for dangerous patterns
+ dangerous_patterns = [
+ r'(.+)+', # Nested quantifiers
+ r'(.*)*', # Nested stars
+ r'(.{0,})+', # Potential ReDoS patterns
+ ]
+
+ has_dangerous_patterns = any(dangerous in pattern for dangerous in dangerous_patterns)
+ return not has_dangerous_patterns
+
+ # If only weak indicators, need more context
+ if has_weak_regex:
+ # Patterns like ".*", ".+", "file.*py" look like regex
+ # But "file.txt", "test.py" look like literal filenames
+ regex_like_patterns = [
+ r'\.\*', # .*
+ r'\.\+', # .+
+ r'\.\w*\*', # .something*
+ r'\*\.', # *.
+ r'\w+\.\*\w*', # word.*word
+ ]
+
+ return any(re.search(regex_pattern, pattern) for regex_pattern in regex_like_patterns)
+
+ return False
class SearchStrategy(ABC):
@@ -125,6 +185,16 @@ class SearchStrategy(ABC):
Each strategy is responsible for searching code using a specific tool or method.
"""
+ def configure_excludes(self, file_filter: Optional['FileFilter']) -> None:
+ """Configure shared exclusion settings for the strategy."""
+ self.file_filter = file_filter
+ if file_filter:
+ self.exclude_dirs = sorted(set(file_filter.exclude_dirs))
+ self.exclude_file_patterns = sorted(set(file_filter.exclude_files))
+ else:
+ self.exclude_dirs = []
+ self.exclude_file_patterns = []
+
@property
@abstractmethod
def name(self) -> str:
@@ -150,7 +220,8 @@ def search(
context_lines: int = 0,
file_pattern: Optional[str] = None,
fuzzy: bool = False,
- regex: bool = False
+ regex: bool = False,
+ max_line_length: Optional[int] = None
) -> Dict[str, List[Tuple[int, str]]]:
"""
Execute a search using the specific strategy.
@@ -168,4 +239,3 @@ def search(
A dictionary mapping filenames to lists of (line_number, line_content) tuples.
"""
pass
-
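The reworked `parse_search_output` now distinguishes matched lines (`path:lineno:content`) from grep/ag context lines (`path-lineno-content`) and truncates overlong content. A simplified sketch of just the matched-line branch plus truncation (`parse_match_line` is an illustrative helper, not the real function):

```python
def parse_match_line(line: str, max_line_length=None):
    """Parse one 'path:lineno:content' line; truncate overlong content."""
    parts = line.split(':', 2)
    if len(parts) < 3:
        return None  # not a plain match line (could be a context line)
    path, lineno, content = parts[0], int(parts[1]), parts[2]
    if max_line_length and len(content) > max_line_length:
        content = content[:max_line_length] + '... (truncated)'
    return path, lineno, content

parsed = parse_match_line('src/app.py:12:' + 'x' * 50, max_line_length=10)
```

The real function additionally handles Windows drive letters and the `-`-separated context-line format via a regex fallback.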
diff --git a/src/code_index_mcp/search/basic.py b/src/code_index_mcp/search/basic.py
index 57aab77..9ef1846 100644
--- a/src/code_index_mcp/search/basic.py
+++ b/src/code_index_mcp/search/basic.py
@@ -1,9 +1,10 @@
"""
Basic, pure-Python search strategy.
"""
+import fnmatch
import os
import re
-import fnmatch
+from pathlib import Path
from typing import Dict, List, Optional, Tuple
from .base import SearchStrategy, create_word_boundary_pattern, is_safe_regex_pattern
@@ -46,7 +47,8 @@ def search(
context_lines: int = 0,
file_pattern: Optional[str] = None,
fuzzy: bool = False,
- regex: bool = False
+ regex: bool = False,
+ max_line_length: Optional[int] = None
) -> Dict[str, List[Tuple[int, str]]]:
"""
Execute a basic, line-by-line search.
@@ -60,6 +62,7 @@ def search(
file_pattern: File pattern to filter
fuzzy: Enable word boundary matching
regex: Enable regex pattern matching
+ max_line_length: Optional maximum length for returned lines; longer lines are truncated
"""
results: Dict[str, List[Tuple[int, str]]] = {}
@@ -81,28 +84,38 @@ def search(
except re.error as e:
raise ValueError(f"Invalid regex pattern: {pattern}, error: {e}")
- for root, _, files in os.walk(base_path):
+ file_filter = getattr(self, 'file_filter', None)
+ base = Path(base_path)
+
+ for root, dirs, files in os.walk(base_path):
+ if file_filter:
+ dirs[:] = [d for d in dirs if not file_filter.should_exclude_directory(d)]
+
for file in files:
- # Improved file pattern matching with glob support
if file_pattern and not self._matches_pattern(file, file_pattern):
continue
- file_path = os.path.join(root, file)
+ file_path = Path(root) / file
+
+ if file_filter and not file_filter.should_process_path(file_path, base):
+ continue
+
rel_path = os.path.relpath(file_path, base_path)
-
+
try:
with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
for line_num, line in enumerate(f, 1):
if search_regex.search(line):
+ content = line.rstrip('\n')
+ if max_line_length and len(content) > max_line_length:
+ content = content[:max_line_length] + '... (truncated)'
+
if rel_path not in results:
results[rel_path] = []
- # Strip newline for consistent output
- results[rel_path].append((line_num, line.rstrip('\n')))
+ results[rel_path].append((line_num, content))
except (UnicodeDecodeError, PermissionError, OSError):
- # Ignore files that can't be opened or read due to encoding/permission issues
continue
except Exception:
- # Ignore any other unexpected exceptions to maintain robustness
continue
- return results
\ No newline at end of file
+ return results
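The key idiom the basic strategy adopts above is mutating `dirs[:]` inside `os.walk`, which stops descent into excluded directories. A self-contained sketch, with a plain set standing in for `FileFilter.should_exclude_directory()`:

```python
import os
import pathlib
import tempfile

# Stand-in for the FileFilter's excluded-directory check.
EXCLUDED = {'.git', 'node_modules', '__pycache__'}

def walk_files(base_path):
    found = []
    for root, dirs, files in os.walk(base_path):
        # In-place assignment prunes the walk before descending.
        dirs[:] = [d for d in dirs if d not in EXCLUDED]
        for name in files:
            rel = os.path.relpath(os.path.join(root, name), base_path)
            found.append(rel.replace(os.sep, '/'))
    return sorted(found)

tmp = tempfile.mkdtemp()
pathlib.Path(tmp, 'node_modules').mkdir()
pathlib.Path(tmp, 'node_modules', 'dep.js').write_text('x')
pathlib.Path(tmp, 'src').mkdir()
pathlib.Path(tmp, 'src', 'main.py').write_text('print("hi")')
files = walk_files(tmp)
```

Rebinding `dirs` to a new list would not work; `os.walk` only honors in-place mutation of the list it yielded.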
diff --git a/src/code_index_mcp/search/grep.py b/src/code_index_mcp/search/grep.py
index cd2d18e..f24c469 100644
--- a/src/code_index_mcp/search/grep.py
+++ b/src/code_index_mcp/search/grep.py
@@ -32,7 +32,8 @@ def search(
context_lines: int = 0,
file_pattern: Optional[str] = None,
fuzzy: bool = False,
- regex: bool = False
+ regex: bool = False,
+ max_line_length: Optional[int] = None
) -> Dict[str, List[Tuple[int, str]]]:
"""
Execute a search using standard grep.
@@ -45,6 +46,7 @@ def search(
file_pattern: File pattern to filter
fuzzy: Enable word boundary matching
regex: Enable regex pattern matching
+ max_line_length: Optional maximum length for returned lines; longer lines are truncated
"""
# -r: recursive, -n: line number
cmd = ['grep', '-r', '-n']
@@ -81,6 +83,27 @@ def search(
# Note: grep's --include uses glob patterns, not regex
cmd.append(f'--include={file_pattern}')
+ exclude_dirs = getattr(self, 'exclude_dirs', [])
+ exclude_file_patterns = getattr(self, 'exclude_file_patterns', [])
+
+ processed_dirs = set()
+ for directory in exclude_dirs:
+ normalized = directory.strip()
+ if not normalized or normalized in processed_dirs:
+ continue
+ cmd.append(f'--exclude-dir={normalized}')
+ processed_dirs.add(normalized)
+
+ processed_files = set()
+ for pattern in exclude_file_patterns:
+ normalized = pattern.strip()
+ if not normalized or normalized in processed_files:
+ continue
+ if normalized.startswith('!'):
+ normalized = normalized[1:]
+ cmd.append(f'--exclude={normalized}')
+ processed_files.add(normalized)
+
# Add -- to treat pattern as a literal argument, preventing injection
cmd.append('--')
cmd.append(search_pattern)
@@ -102,9 +125,9 @@ def search(
if process.returncode > 1:
raise RuntimeError(f"grep failed with exit code {process.returncode}: {process.stderr}")
- return parse_search_output(process.stdout, base_path)
+ return parse_search_output(process.stdout, base_path, max_line_length)
except FileNotFoundError:
raise RuntimeError("'grep' not found. Please install it and ensure it's in your PATH.")
except Exception as e:
- raise RuntimeError(f"An error occurred while running grep: {e}")
+ raise RuntimeError(f"An error occurred while running grep: {e}")
diff --git a/src/code_index_mcp/search/ripgrep.py b/src/code_index_mcp/search/ripgrep.py
index 39c4e58..8a5c325 100644
--- a/src/code_index_mcp/search/ripgrep.py
+++ b/src/code_index_mcp/search/ripgrep.py
@@ -27,7 +27,8 @@ def search(
context_lines: int = 0,
file_pattern: Optional[str] = None,
fuzzy: bool = False,
- regex: bool = False
+ regex: bool = False,
+ max_line_length: Optional[int] = None
) -> Dict[str, List[Tuple[int, str]]]:
"""
Execute a search using ripgrep.
@@ -40,8 +41,9 @@ def search(
file_pattern: File pattern to filter
fuzzy: Enable word boundary matching (not true fuzzy search)
regex: Enable regex pattern matching
+ max_line_length: Optional maximum length for returned lines; longer lines are truncated
"""
- cmd = ['rg', '--line-number', '--no-heading', '--color=never']
+ cmd = ['rg', '--line-number', '--no-heading', '--color=never', '--no-ignore']
if not case_sensitive:
cmd.append('--ignore-case')
@@ -67,6 +69,31 @@ def search(
if file_pattern:
cmd.extend(['--glob', file_pattern])
+ exclude_dirs = getattr(self, 'exclude_dirs', [])
+ exclude_file_patterns = getattr(self, 'exclude_file_patterns', [])
+
+ processed_patterns = set()
+
+ for directory in exclude_dirs:
+ normalized = directory.strip()
+ if not normalized or normalized in processed_patterns:
+ continue
+ cmd.extend(['--glob', f'!**/{normalized}/**'])
+ processed_patterns.add(normalized)
+
+ for pattern in exclude_file_patterns:
+ normalized = pattern.strip()
+ if not normalized or normalized in processed_patterns:
+ continue
+ if normalized.startswith('!'):
+ glob_pattern = normalized
+ elif any(ch in normalized for ch in '*?[') or '/' in normalized:
+ glob_pattern = f'!{normalized}'
+ else:
+ glob_pattern = f'!**/{normalized}'
+ cmd.extend(['--glob', glob_pattern])
+ processed_patterns.add(normalized)
+
# Add -- to treat pattern as a literal argument, preventing injection
cmd.append('--')
cmd.append(search_pattern)
@@ -87,10 +114,10 @@ def search(
if process.returncode > 1:
raise RuntimeError(f"ripgrep failed with exit code {process.returncode}: {process.stderr}")
- return parse_search_output(process.stdout, base_path)
+ return parse_search_output(process.stdout, base_path, max_line_length)
except FileNotFoundError:
raise RuntimeError("ripgrep (rg) not found. Please install it and ensure it's in your PATH.")
except Exception as e:
# Re-raise other potential exceptions like permission errors
- raise RuntimeError(f"An error occurred while running ripgrep: {e}")
+ raise RuntimeError(f"An error occurred while running ripgrep: {e}")
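The ripgrep exclusion mapping above converts each file pattern into a negated `--glob`: pre-negated globs pass through, glob-ish or path-qualified patterns get a `!` prefix, and bare names are matched at any depth via `!**/`. A sketch of that decision (`to_exclude_glob` is an illustrative name):

```python
def to_exclude_glob(pattern: str) -> str:
    """Map an exclusion pattern to a ripgrep --glob argument."""
    if pattern.startswith('!'):
        return pattern  # already a negated glob; pass through
    if any(ch in pattern for ch in '*?[') or '/' in pattern:
        return f'!{pattern}'  # glob or path-qualified: negate as-is
    return f'!**/{pattern}'  # bare name: exclude at any depth
```

Note the strategy also passes `--no-ignore`, so these explicit globs, not `.gitignore`, control what gets skipped.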
diff --git a/src/code_index_mcp/search/ugrep.py b/src/code_index_mcp/search/ugrep.py
index 69f2cc4..d4302c1 100644
--- a/src/code_index_mcp/search/ugrep.py
+++ b/src/code_index_mcp/search/ugrep.py
@@ -27,7 +27,8 @@ def search(
context_lines: int = 0,
file_pattern: Optional[str] = None,
fuzzy: bool = False,
- regex: bool = False
+ regex: bool = False,
+ max_line_length: Optional[int] = None
) -> Dict[str, List[Tuple[int, str]]]:
"""
Execute a search using the 'ug' command-line tool.
@@ -40,11 +41,12 @@ def search(
file_pattern: File pattern to filter
fuzzy: Enable true fuzzy search (ugrep native support)
regex: Enable regex pattern matching
+ max_line_length: Optional maximum length for returned lines; longer lines are truncated
"""
if not self.is_available():
return {"error": "ugrep (ug) command not found."}
- cmd = ['ug', '--line-number', '--no-heading']
+ cmd = ['ug', '-r', '--line-number', '--no-heading']
if fuzzy:
# ugrep has native fuzzy search support
@@ -65,7 +67,31 @@ def search(
cmd.extend(['-A', str(context_lines), '-B', str(context_lines)])
if file_pattern:
- cmd.extend(['-g', file_pattern]) # Correct parameter for file patterns
+ cmd.extend(['--include', file_pattern])
+
+ processed_patterns = set()
+ exclude_dirs = getattr(self, 'exclude_dirs', [])
+ exclude_file_patterns = getattr(self, 'exclude_file_patterns', [])
+
+ for directory in exclude_dirs:
+ normalized = directory.strip()
+ if not normalized or normalized in processed_patterns:
+ continue
+ cmd.extend(['--ignore', f'**/{normalized}/**'])
+ processed_patterns.add(normalized)
+
+ for pattern in exclude_file_patterns:
+ normalized = pattern.strip()
+ if not normalized or normalized in processed_patterns:
+ continue
+ if normalized.startswith('!'):
+ ignore_pattern = normalized[1:]
+ elif any(ch in normalized for ch in '*?[') or '/' in normalized:
+ ignore_pattern = normalized
+ else:
+ ignore_pattern = f'**/{normalized}'
+ cmd.extend(['--ignore', ignore_pattern])
+ processed_patterns.add(normalized)
# Add '--' to treat pattern as a literal argument, preventing injection
cmd.append('--')
@@ -89,7 +115,7 @@ def search(
error_output = process.stderr.strip()
return {"error": f"ugrep execution failed with code {process.returncode}", "details": error_output}
- return parse_search_output(process.stdout, base_path)
+ return parse_search_output(process.stdout, base_path, max_line_length)
except FileNotFoundError:
return {"error": "ugrep (ug) command not found. Please ensure it's installed and in your PATH."}
diff --git a/src/code_index_mcp/server.py b/src/code_index_mcp/server.py
index 3c17d88..2d1eb80 100644
--- a/src/code_index_mcp/server.py
+++ b/src/code_index_mcp/server.py
@@ -3,54 +3,55 @@
This MCP server allows LLMs to index, search, and analyze code from a project directory.
It provides tools for file discovery, content retrieval, and code analysis.
+
+This version uses a service-oriented architecture where MCP decorators delegate
+to domain-specific services for business logic.
"""
+
+# Standard library imports
+import sys
+import logging
from contextlib import asynccontextmanager
from dataclasses import dataclass
-from typing import AsyncIterator, Dict, List, Optional, Tuple, Any
-import os
-import pathlib
-import json
-import fnmatch
-import sys
-import tempfile
-import subprocess
-from mcp.server.fastmcp import FastMCP, Context, Image
-from mcp import types
+from typing import AsyncIterator, Dict, Any, List
+
+# Third-party imports
+from mcp.server.fastmcp import FastMCP, Context
-# Import the ProjectSettings class and constants - using relative import
+# Local imports
from .project_settings import ProjectSettings
-from .constants import SETTINGS_DIR
-
-# Create the MCP server
-mcp = FastMCP("CodeIndexer", dependencies=["pathlib"])
-
-# In-memory references (will be loaded from persistent storage)
-file_index = {}
-code_content_cache = {}
-supported_extensions = [
- '.py', '.js', '.ts', '.jsx', '.tsx', '.java', '.c', '.cpp', '.h', '.hpp',
- '.cs', '.go', '.rb', '.php', '.swift', '.kt', '.rs', '.scala', '.sh',
- '.bash', '.html', '.css', '.scss', '.md', '.json', '.xml', '.yml', '.yaml', '.zig',
- # Frontend frameworks
- '.vue', '.svelte', '.mjs', '.cjs',
- # Style languages
- '.less', '.sass', '.stylus', '.styl',
- # Template engines
- '.hbs', '.handlebars', '.ejs', '.pug',
- # Modern frontend
- '.astro', '.mdx',
- # Database and SQL
- '.sql', '.ddl', '.dml', '.mysql', '.postgresql', '.psql', '.sqlite',
- '.mssql', '.oracle', '.ora', '.db2',
- # Database objects
- '.proc', '.procedure', '.func', '.function', '.view', '.trigger', '.index',
- # Database frameworks and tools
- '.migration', '.seed', '.fixture', '.schema',
- # NoSQL and modern databases
- '.cql', '.cypher', '.sparql', '.gql',
- # Database migration tools
- '.liquibase', '.flyway'
-]
+from .services import (
+ SearchService, FileService, SettingsService, FileWatcherService
+)
+from .services.settings_service import manage_temp_directory
+from .services.file_discovery_service import FileDiscoveryService
+from .services.project_management_service import ProjectManagementService
+from .services.index_management_service import IndexManagementService
+from .services.code_intelligence_service import CodeIntelligenceService
+from .services.system_management_service import SystemManagementService
+from .utils import (
+ handle_mcp_resource_errors, handle_mcp_tool_errors
+)
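The `handle_mcp_tool_errors` decorator imported above wraps each tool body so uncaught exceptions come back as typed payloads instead of protocol failures. A minimal sketch of that pattern — the project's real implementation lives in `utils` and may differ:

```python
import functools

def handle_mcp_tool_errors(return_type='str'):
    """Decorator factory: convert uncaught exceptions into typed error values.

    Sketch only; the actual decorator in `utils` may behave differently.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as e:  # tools must never crash the server
                message = f"Error in {func.__name__}: {e}"
                if return_type == 'dict':
                    return {"error": message}
                if return_type == 'list':
                    return [message]
                return message
        return wrapper
    return decorator
```

The `return_type` argument matters because MCP tools declare their return schema: a tool returning `Dict[str, Any]` must report errors as a dict, not a bare string.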
+
+# Set up logging without writing to files
+def setup_indexing_performance_logging():
+ """Set up stderr-only logging (errors and above); no file handlers are installed."""
+
+ root_logger = logging.getLogger()
+ root_logger.handlers.clear()
+
+ formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+
+ # stderr for errors only
+ stderr_handler = logging.StreamHandler(sys.stderr)
+ stderr_handler.setFormatter(formatter)
+ stderr_handler.setLevel(logging.ERROR)
+
+ root_logger.addHandler(stderr_handler)
+ root_logger.setLevel(logging.DEBUG)
+
+# Initialize logging (no file handlers)
+setup_indexing_performance_logging()
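In the setup above, the root logger accepts DEBUG records while its lone stderr handler filters at ERROR, so only errors are ever emitted. That logger-vs-handler level interaction can be verified in isolation:

```python
import io
import logging

# Isolated logger mirroring the setup above: permissive logger level,
# restrictive handler level.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setLevel(logging.ERROR)
handler.setFormatter(logging.Formatter('%(levelname)s - %(message)s'))

logger = logging.getLogger('demo.handler_filtering')
logger.handlers.clear()
logger.propagate = False
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.debug('indexing started')   # accepted by the logger, dropped by the handler
logger.error('indexing failed')    # passes both level checks

output = buffer.getvalue()
```

Keeping the root level at DEBUG means any handler added later (for example by a debugging session) can see verbose records without reconfiguring the logger itself.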
@dataclass
class CodeIndexerContext:
@@ -58,278 +59,91 @@ class CodeIndexerContext:
base_path: str
settings: ProjectSettings
file_count: int = 0
+ file_watcher_service: 'FileWatcherService | None' = None
@asynccontextmanager
-async def indexer_lifespan(server: FastMCP) -> AsyncIterator[CodeIndexerContext]:
+async def indexer_lifespan(_server: FastMCP) -> AsyncIterator[CodeIndexerContext]:
"""Manage the lifecycle of the Code Indexer MCP server."""
# Don't set a default path, user must explicitly set project path
base_path = "" # Empty string to indicate no path is set
- print("Initializing Code Indexer MCP server...")
-
# Initialize settings manager with skip_load=True to skip loading files
settings = ProjectSettings(base_path, skip_load=True)
- # Initialize context
+ # Initialize context - file watcher will be initialized later when project path is set
context = CodeIndexerContext(
base_path=base_path,
- settings=settings
+ settings=settings,
+ file_watcher_service=None
)
- # Initialize global variables
- global file_index, code_content_cache
-
try:
- print("Server ready. Waiting for user to set project path...")
# Provide context to the server
yield context
finally:
- # Only save index and cache if project path has been set
- if context.base_path and file_index:
- print(f"Saving index for project: {context.base_path}")
- settings.save_index(file_index)
-
- if context.base_path and code_content_cache:
- print(f"Saving cache for project: {context.base_path}")
- settings.save_cache(code_content_cache)
+ # Stop file watcher if it was started
+ if context.file_watcher_service:
+ context.file_watcher_service.stop_monitoring()
-# Initialize the server with our lifespan manager
-mcp = FastMCP("CodeIndexer", lifespan=indexer_lifespan)
+# Create the MCP server with lifespan manager
+mcp = FastMCP("CodeIndexer", lifespan=indexer_lifespan, dependencies=["pathlib"])
# ----- RESOURCES -----
@mcp.resource("config://code-indexer")
+@handle_mcp_resource_errors
def get_config() -> str:
"""Get the current configuration of the Code Indexer."""
ctx = mcp.get_context()
-
- # Get the base path from context
- base_path = ctx.request_context.lifespan_context.base_path
-
- # Check if base_path is set
- if not base_path:
- return json.dumps({
- "status": "not_configured",
- "message": "Project path not set. Please use set_project_path to set a project directory first.",
- "supported_extensions": supported_extensions
- }, indent=2)
-
- # Get file count
- file_count = ctx.request_context.lifespan_context.file_count
-
- # Get settings stats
- settings = ctx.request_context.lifespan_context.settings
- settings_stats = settings.get_stats()
-
- config = {
- "base_path": base_path,
- "supported_extensions": supported_extensions,
- "file_count": file_count,
- "settings_directory": settings.settings_path,
- "settings_stats": settings_stats
- }
-
- return json.dumps(config, indent=2)
+ return ProjectManagementService(ctx).get_project_config()
@mcp.resource("files://{file_path}")
+@handle_mcp_resource_errors
def get_file_content(file_path: str) -> str:
"""Get the content of a specific file."""
ctx = mcp.get_context()
+ # Use FileService for simple file reading - this is appropriate for a resource
+ return FileService(ctx).get_file_content(file_path)
- # Get the base path from context
- base_path = ctx.request_context.lifespan_context.base_path
-
- # Check if base_path is set
- if not base_path:
- return "Error: Project path not set. Please use set_project_path to set a project directory first."
-
- # Handle absolute paths (especially Windows paths starting with drive letters)
- if os.path.isabs(file_path) or (len(file_path) > 1 and file_path[1] == ':'):
- # Absolute paths are not allowed via this endpoint
- return f"Error: Absolute file paths like '{file_path}' are not allowed. Please use paths relative to the project root."
-
- # Normalize the file path
- norm_path = os.path.normpath(file_path)
-
- # Check for path traversal attempts
- if "..\\" in norm_path or "../" in norm_path or norm_path.startswith(".."):
- return f"Error: Invalid file path: {file_path} (directory traversal not allowed)"
-
- # Construct the full path and verify it's within the project bounds
- full_path = os.path.join(base_path, norm_path)
- real_full_path = os.path.realpath(full_path)
- real_base_path = os.path.realpath(base_path)
-
- if not real_full_path.startswith(real_base_path):
- return f"Error: Access denied. File path must be within project directory."
-
- try:
- with open(full_path, 'r', encoding='utf-8') as f:
- content = f.read()
-
- # Cache the content for faster retrieval later
- code_content_cache[norm_path] = content
-
- return content
- except UnicodeDecodeError:
- return f"Error: File {file_path} appears to be a binary file or uses unsupported encoding."
- except Exception as e:
- return f"Error reading file: {e}"
-
-@mcp.resource("structure://project")
-def get_project_structure() -> str:
- """Get the structure of the project as a JSON tree."""
- ctx = mcp.get_context()
-
- # Get the base path from context
- base_path = ctx.request_context.lifespan_context.base_path
-
- # Check if base_path is set
- if not base_path:
- return json.dumps({
- "status": "not_configured",
- "message": "Project path not set. Please use set_project_path to set a project directory first."
- }, indent=2)
-
- # Check if we need to refresh the index
- if not file_index:
- _index_project(base_path)
- # Update file count in context
- ctx.request_context.lifespan_context.file_count = _count_files(file_index)
- # Save updated index
- ctx.request_context.lifespan_context.settings.save_index(file_index)
-
- return json.dumps(file_index, indent=2)
-
-@mcp.resource("settings://stats")
-def get_settings_stats() -> str:
- """Get statistics about the settings directory and files."""
- ctx = mcp.get_context()
-
- # Get settings manager from context
- settings = ctx.request_context.lifespan_context.settings
-
- # Get settings stats
- stats = settings.get_stats()
-
- return json.dumps(stats, indent=2)
+# Removed: structure://project resource - not necessary for most workflows
+# Removed: settings://stats resource - this information is available via the get_settings_info() tool
+# and is a debugging detail rather than context the AI needs
# ----- TOOLS -----
@mcp.tool()
+@handle_mcp_tool_errors(return_type='str')
def set_project_path(path: str, ctx: Context) -> str:
"""Set the base project path for indexing."""
- # Validate and normalize path
- try:
- norm_path = os.path.normpath(path)
- abs_path = os.path.abspath(norm_path)
-
- if not os.path.exists(abs_path):
- return f"Error: Path does not exist: {abs_path}"
-
- if not os.path.isdir(abs_path):
- return f"Error: Path is not a directory: {abs_path}"
-
- # Clear existing in-memory index and cache
- global file_index, code_content_cache
- file_index.clear()
- code_content_cache.clear()
-
- # Update the base path in context
- ctx.request_context.lifespan_context.base_path = abs_path
-
- # Create a new settings manager for the new path (don't skip loading files)
- ctx.request_context.lifespan_context.settings = ProjectSettings(abs_path, skip_load=False)
-
- # Print the settings path for debugging
- settings_path = ctx.request_context.lifespan_context.settings.settings_path
- print(f"Project settings path: {settings_path}")
-
- # Try to load existing index and cache
- print(f"Project path set to: {abs_path}")
- print(f"Attempting to load existing index and cache...")
-
- # Try to load index
- loaded_index = None
- try:
- loaded_index = ctx.request_context.lifespan_context.settings.load_index()
- except Exception as e:
- print(f"Could not load existing index, it might be an old format. A new index will be created. Error: {e}")
-
- if loaded_index:
- print(f"Existing index found and loaded successfully")
- file_index = loaded_index
- file_count = _count_files(file_index)
- ctx.request_context.lifespan_context.file_count = file_count
-
- # Try to load cache
- loaded_cache = ctx.request_context.lifespan_context.settings.load_cache()
- if loaded_cache:
- print(f"Existing cache found and loaded successfully")
- code_content_cache.update(loaded_cache)
-
- # Get search capabilities info
- search_tool = ctx.request_context.lifespan_context.settings.get_preferred_search_tool()
-
- if search_tool is None:
- search_info = " Basic search available."
- else:
- search_info = f" Advanced search enabled ({search_tool.name})."
-
- return f"Project path set to: {abs_path}. Loaded existing index with {file_count} files.{search_info}"
- else:
- print(f"No existing index found, creating new index...")
-
- # If no existing index, create a new one
- file_count = _index_project(abs_path)
- ctx.request_context.lifespan_context.file_count = file_count
-
- # Save the new index
- ctx.request_context.lifespan_context.settings.save_index(file_index)
-
- # Save project config
- config = {
- "base_path": abs_path,
- "supported_extensions": supported_extensions,
- "last_indexed": ctx.request_context.lifespan_context.settings.load_config().get('last_indexed', None)
- }
- ctx.request_context.lifespan_context.settings.save_config(config)
-
- # Get search capabilities info (this will trigger lazy detection)
- search_tool = ctx.request_context.lifespan_context.settings.get_preferred_search_tool()
-
- if search_tool is None:
- search_info = " Basic search available."
- else:
- search_info = f" Advanced search enabled ({search_tool.name})."
-
- return f"Project path set to: {abs_path}. Indexed {file_count} files.{search_info}"
- except Exception as e:
- return f"Error setting project path: {e}"
+ return ProjectManagementService(ctx).initialize_project(path)
@mcp.tool()
+@handle_mcp_tool_errors(return_type='dict')
def search_code_advanced(
- pattern: str,
+ pattern: str,
ctx: Context,
case_sensitive: bool = True,
context_lines: int = 0,
- file_pattern: Optional[str] = None,
+ file_pattern: str = None,
fuzzy: bool = False,
- regex: bool = False
+ regex: bool = None,
+ max_line_length: int = None
) -> Dict[str, Any]:
"""
Search for a code pattern in the project using an advanced, fast tool.
-
- This tool automatically selects the best available command-line search tool
+
+ This tool automatically selects the best available command-line search tool
(like ugrep, ripgrep, ag, or grep) for maximum performance.
-
+
Args:
pattern: The search pattern. Can be literal text or regex (see regex parameter).
case_sensitive: Whether the search should be case-sensitive.
context_lines: Number of lines to show before and after the match.
- file_pattern: A glob pattern to filter files to search in (e.g., "*.py", "*.js", "test_*.py").
+ file_pattern: A glob pattern to filter files to search in
+ (e.g., "*.py", "*.js", "test_*.py").
+ max_line_length: Optional maximum length of returned lines. Default None (no limit); applied when context_lines is used.
All search tools now handle glob patterns consistently:
- - ugrep: Uses glob patterns (*.py, *.{js,ts})
+ - ugrep: Uses glob patterns (*.py, *.{js,ts})
- ripgrep: Uses glob patterns (*.py, *.{js,ts})
- ag (Silver Searcher): Automatically converts globs to regex patterns
- grep: Basic glob pattern matching
@@ -337,66 +151,57 @@ def search_code_advanced(
fuzzy: If True, enables fuzzy/partial matching behavior varies by search tool:
- ugrep: Native fuzzy search with --fuzzy flag (true edit-distance fuzzy search)
- ripgrep, ag, grep, basic: Word boundary pattern matching (not true fuzzy search)
- IMPORTANT: Only ugrep provides true fuzzy search. Other tools use word boundary
+ IMPORTANT: Only ugrep provides true fuzzy search. Other tools use word boundary
matching which allows partial matches at word boundaries.
For exact literal matches, set fuzzy=False (default and recommended).
- regex: If True, enables regex pattern matching. Use this for patterns like "ERROR|WARN".
- The pattern will be validated for safety to prevent ReDoS attacks.
- If False (default), uses literal string search.
-
+ regex: Controls regex pattern matching behavior:
+ - If True, enables regex pattern matching
+ - If False, forces literal string search
+ - If None (default), automatically detects regex patterns and enables regex for patterns like "ERROR|WARN"
+ The pattern will always be validated for safety to prevent ReDoS attacks.
+
Returns:
A dictionary containing the search results or an error message.
-
- """
- base_path = ctx.request_context.lifespan_context.base_path
- if not base_path:
- return {"error": "Project path not set. Please use set_project_path first."}
-
- settings = ctx.request_context.lifespan_context.settings
- strategy = settings.get_preferred_search_tool()
- if not strategy:
- return {"error": "No search strategies available. This is unexpected."}
-
- print(f"Using search strategy: {strategy.name}")
-
- try:
- results = strategy.search(
- pattern=pattern,
- base_path=base_path,
- case_sensitive=case_sensitive,
- context_lines=context_lines,
- file_pattern=file_pattern,
- fuzzy=fuzzy,
- regex=regex
- )
- return {"results": results}
- except Exception as e:
- return {"error": f"Search failed using '{strategy.name}': {e}"}
+ """
+ return SearchService(ctx).search_code(
+ pattern=pattern,
+ case_sensitive=case_sensitive,
+ context_lines=context_lines,
+ file_pattern=file_pattern,
+ fuzzy=fuzzy,
+ regex=regex,
+ max_line_length=max_line_length
+ )
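For `regex=None`, the docstring above promises auto-detection of regex patterns like `"ERROR|WARN"`. The heuristic presumably scans for metacharacters and validates compilability; this sketch is an assumption, since the real logic lives in `SearchService`:

```python
import re

# Characters that suggest a pattern was written as a regex, not literal text.
_REGEX_METACHARS = set('|()[]{}^$*+?\\')

def looks_like_regex(pattern: str) -> bool:
    """Guess whether a search pattern is a regex (illustrative heuristic only)."""
    if not any(ch in _REGEX_METACHARS for ch in pattern):
        return False
    try:
        re.compile(pattern)   # must also be a *valid* regex
    except re.error:
        return False
    return True
```

A pattern containing metacharacters that fails to compile (e.g. an unbalanced parenthesis) falls back to literal search, which is the safer default.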
@mcp.tool()
+@handle_mcp_tool_errors(return_type='list')
def find_files(pattern: str, ctx: Context) -> List[str]:
- """Find files in the project matching a specific glob pattern."""
- base_path = ctx.request_context.lifespan_context.base_path
+ """
+ Find files matching a glob pattern using the pre-built file index.
- # Check if base_path is set
- if not base_path:
- return ["Error: Project path not set. Please use set_project_path to set a project directory first."]
+ Use when:
+ - Looking for files by pattern (e.g., "*.py", "test_*.js")
+ - Searching by filename only (e.g., "README.md" finds all README files)
+ - Checking if specific files exist in the project
+ - Getting file lists for further analysis
- # Check if we need to index the project
- if not file_index:
- _index_project(base_path)
- ctx.request_context.lifespan_context.file_count = _count_files(file_index)
- ctx.request_context.lifespan_context.settings.save_index(file_index)
+ Pattern matching:
+ - Supports both full path and filename-only matching
+ - Uses standard glob patterns (*, ?, [])
+ - Fast lookup using in-memory file index
+ - Uses forward slashes consistently across all platforms
- matching_files = []
- for file_path, _info in _get_all_files(file_index):
- if fnmatch.fnmatch(file_path, pattern):
- matching_files.append(file_path)
+ Args:
+ pattern: Glob pattern to match files (e.g., "*.py", "test_*.js", "README.md")
- return matching_files
+ Returns:
+ List of file paths matching the pattern
+ """
+ return FileDiscoveryService(ctx).find_files(pattern)
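The dual matching described in the docstring (full relative path *or* filename alone) can be sketched over a flat path list. `FileDiscoveryService` presumably works from the in-memory index, so treat the helper below as an assumption:

```python
import fnmatch
import posixpath

def find_matching(paths, pattern):
    """Match a glob against each full relative path or its filename alone."""
    matches = []
    for path in paths:
        if (fnmatch.fnmatch(path, pattern)
                or fnmatch.fnmatch(posixpath.basename(path), pattern)):
            matches.append(path)
    return matches
```

The basename fallback is what lets a plain `README.md` pattern find every README in the tree, regardless of directory depth.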
@mcp.tool()
+@handle_mcp_tool_errors(return_type='dict')
def get_file_summary(file_path: str, ctx: Context) -> Dict[str, Any]:
"""
Get a summary of a specific file, including:
@@ -405,395 +210,99 @@ def get_file_summary(file_path: str, ctx: Context) -> Dict[str, Any]:
- Import statements
- Basic complexity metrics
"""
- base_path = ctx.request_context.lifespan_context.base_path
-
- # Check if base_path is set
- if not base_path:
- return {"error": "Project path not set. Please use set_project_path to set a project directory first."}
-
- # Normalize the file path
- norm_path = os.path.normpath(file_path)
- if norm_path.startswith('..'):
- return {"error": f"Invalid file path: {file_path}"}
-
- full_path = os.path.join(base_path, norm_path)
-
- try:
- # Get file content
- if norm_path in code_content_cache:
- content = code_content_cache[norm_path]
- else:
- with open(full_path, 'r', encoding='utf-8') as f:
- content = f.read()
- code_content_cache[norm_path] = content
- # Save the updated cache
- ctx.request_context.lifespan_context.settings.save_cache(code_content_cache)
-
- # Basic file info
- lines = content.splitlines()
- line_count = len(lines)
-
- # File extension for language-specific analysis
- _, ext = os.path.splitext(norm_path)
-
- summary = {
- "file_path": norm_path,
- "line_count": line_count,
- "size_bytes": os.path.getsize(full_path),
- "extension": ext,
- }
-
- # Language-specific analysis
- if ext == '.py':
- # Python analysis
- imports = []
- classes = []
- functions = []
-
- for i, line in enumerate(lines):
- line = line.strip()
-
- # Check for imports
- if line.startswith('import ') or line.startswith('from '):
- imports.append(line)
-
- # Check for class definitions
- if line.startswith('class '):
- classes.append({
- "line": i + 1,
- "name": line.replace('class ', '').split('(')[0].split(':')[0].strip()
- })
-
- # Check for function definitions
- if line.startswith('def '):
- functions.append({
- "line": i + 1,
- "name": line.replace('def ', '').split('(')[0].strip()
- })
-
- summary.update({
- "imports": imports,
- "classes": classes,
- "functions": functions,
- "import_count": len(imports),
- "class_count": len(classes),
- "function_count": len(functions),
- })
-
- elif ext in ['.js', '.jsx', '.ts', '.tsx']:
- # JavaScript/TypeScript analysis
- imports = []
- classes = []
- functions = []
-
- for i, line in enumerate(lines):
- line = line.strip()
-
- # Check for imports
- if line.startswith('import ') or line.startswith('require('):
- imports.append(line)
-
- # Check for class definitions
- if line.startswith('class ') or 'class ' in line:
- class_name = ""
- if 'class ' in line:
- parts = line.split('class ')[1]
- class_name = parts.split(' ')[0].split('{')[0].split('extends')[0].strip()
- classes.append({
- "line": i + 1,
- "name": class_name
- })
-
- # Check for function definitions
- if 'function ' in line or '=>' in line:
- functions.append({
- "line": i + 1,
- "content": line
- })
-
- summary.update({
- "imports": imports,
- "classes": classes,
- "functions": functions,
- "import_count": len(imports),
- "class_count": len(classes),
- "function_count": len(functions),
- })
-
- return summary
- except Exception as e:
- return {"error": f"Error analyzing file: {e}"}
+ return CodeIntelligenceService(ctx).analyze_file(file_path)
@mcp.tool()
+@handle_mcp_tool_errors(return_type='str')
def refresh_index(ctx: Context) -> str:
- """Refresh the project index."""
- base_path = ctx.request_context.lifespan_context.base_path
-
- # Check if base_path is set
- if not base_path:
- return "Error: Project path not set. Please use set_project_path to set a project directory first."
-
- # Clear existing index
- global file_index
- file_index.clear()
-
- # Re-index the project
- file_count = _index_project(base_path)
- ctx.request_context.lifespan_context.file_count = file_count
+ """
+ Manually refresh the project index when files have been added/removed/moved.
+
+ Use when:
+ - File watcher is disabled or unavailable
+ - After large-scale operations (git checkout, merge, pull) that change many files
+ - When you want an immediate index rebuild without waiting for the file watcher debounce
+ - When find_files results seem incomplete or outdated
+ - For troubleshooting suspected index synchronization issues
+
+ Important notes for LLMs:
+ - Always available as backup when file watcher is not working
+ - Performs full project re-indexing for complete accuracy
+ - Use when you suspect the index is stale after file system changes
+ - **Call this after programmatic file modifications if file watcher seems unresponsive**
+ - Complements the automatic file watcher system
- # Save the updated index
- ctx.request_context.lifespan_context.settings.save_index(file_index)
+ Returns:
+ Success message with total file count
+ """
+ return IndexManagementService(ctx).rebuild_index()
- # Update the last indexed timestamp in config
- config = ctx.request_context.lifespan_context.settings.load_config()
- ctx.request_context.lifespan_context.settings.save_config({
- **config,
- 'last_indexed': ctx.request_context.lifespan_context.settings._get_timestamp()
- })
+@mcp.tool()
+@handle_mcp_tool_errors(return_type='str')
+def build_deep_index(ctx: Context) -> str:
+ """
+ Build the deep index (full symbol extraction) for the current project.
- return f"Project re-indexed. Found {file_count} files."
+ This performs a complete re-index and loads it into memory.
+ """
+ return IndexManagementService(ctx).rebuild_deep_index()
@mcp.tool()
+@handle_mcp_tool_errors(return_type='dict')
def get_settings_info(ctx: Context) -> Dict[str, Any]:
"""Get information about the project settings."""
- base_path = ctx.request_context.lifespan_context.base_path
-
- # Check if base_path is set
- if not base_path:
- # Even if base_path is not set, we can still show the temp directory
- temp_dir = os.path.join(tempfile.gettempdir(), SETTINGS_DIR)
- return {
- "status": "not_configured",
- "message": "Project path not set. Please use set_project_path to set a project directory first.",
- "temp_directory": temp_dir,
- "temp_directory_exists": os.path.exists(temp_dir)
- }
-
- settings = ctx.request_context.lifespan_context.settings
-
- # Get config
- config = settings.load_config()
-
- # Get stats
- stats = settings.get_stats()
-
- # Get temp directory
- temp_dir = os.path.join(tempfile.gettempdir(), SETTINGS_DIR)
-
- return {
- "settings_directory": settings.settings_path,
- "temp_directory": temp_dir,
- "temp_directory_exists": os.path.exists(temp_dir),
- "config": config,
- "stats": stats,
- "exists": os.path.exists(settings.settings_path)
- }
+ return SettingsService(ctx).get_settings_info()
@mcp.tool()
+@handle_mcp_tool_errors(return_type='dict')
def create_temp_directory() -> Dict[str, Any]:
"""Create the temporary directory used for storing index data."""
- temp_dir = os.path.join(tempfile.gettempdir(), SETTINGS_DIR)
-
- result = {
- "temp_directory": temp_dir,
- "existed_before": os.path.exists(temp_dir),
- }
-
- try:
- # Use ProjectSettings to handle directory creation consistently
- temp_settings = ProjectSettings("", skip_load=True)
-
- result["created"] = not result["existed_before"]
- result["exists_now"] = os.path.exists(temp_dir)
- result["is_directory"] = os.path.isdir(temp_dir)
- except Exception as e:
- result["error"] = str(e)
-
- return result
+ return manage_temp_directory('create')
@mcp.tool()
+@handle_mcp_tool_errors(return_type='dict')
def check_temp_directory() -> Dict[str, Any]:
"""Check the temporary directory used for storing index data."""
- temp_dir = os.path.join(tempfile.gettempdir(), SETTINGS_DIR)
-
- result = {
- "temp_directory": temp_dir,
- "exists": os.path.exists(temp_dir),
- "is_directory": os.path.isdir(temp_dir) if os.path.exists(temp_dir) else False,
- "temp_root": tempfile.gettempdir(),
- }
-
- # If the directory exists, list its contents
- if result["exists"] and result["is_directory"]:
- try:
- contents = os.listdir(temp_dir)
- result["contents"] = contents
- result["subdirectories"] = []
-
- # Check each subdirectory
- for item in contents:
- item_path = os.path.join(temp_dir, item)
- if os.path.isdir(item_path):
- subdir_info = {
- "name": item,
- "path": item_path,
- "contents": os.listdir(item_path) if os.path.exists(item_path) else []
- }
- result["subdirectories"].append(subdir_info)
- except Exception as e:
- result["error"] = str(e)
-
- return result
+ return manage_temp_directory('check')
@mcp.tool()
+@handle_mcp_tool_errors(return_type='str')
def clear_settings(ctx: Context) -> str:
"""Clear all settings and cached data."""
- settings = ctx.request_context.lifespan_context.settings
- settings.clear()
- return "Project settings, index, and cache have been cleared."
+ return SettingsService(ctx).clear_all_settings()
@mcp.tool()
+@handle_mcp_tool_errors(return_type='str')
def refresh_search_tools(ctx: Context) -> str:
"""
Manually re-detect the available command-line search tools on the system.
This is useful if you have installed a new tool (like ripgrep) after starting the server.
"""
- settings = ctx.request_context.lifespan_context.settings
- settings.refresh_available_strategies()
-
- config = settings.get_search_tools_config()
-
- return f"Search tools refreshed. Available: {config['available_tools']}. Preferred: {config['preferred_tool']}."
-
-# ----- PROMPTS -----
-
-@mcp.prompt()
-def analyze_code(file_path: str = "", query: str = "") -> list[types.PromptMessage]:
- """Prompt for analyzing code in the project."""
- messages = [
- types.PromptMessage(role="user", content=types.TextContent(type="text", text=f"""I need you to analyze some code from my project.
-
-{f'Please analyze the file: {file_path}' if file_path else ''}
-{f'I want to understand: {query}' if query else ''}
-
-First, let me give you some context about the project structure. Then, I'll provide the code to analyze.
-""")),
- types.PromptMessage(role="assistant", content=types.TextContent(type="text", text="I'll help you analyze the code. Let me first examine the project structure to get a better understanding of the codebase."))
- ]
- return messages
-
-@mcp.prompt()
-def code_search(query: str = "") -> types.TextContent:
- """Prompt for searching code in the project."""
- search_text = f"\"query\"" if not query else f"\"{query}\""
- return types.TextContent(type="text", text=f"""I need to search through my codebase for {search_text}.
-
-Please help me find all occurrences of this query and explain what each match means in its context.
-Focus on the most relevant files and provide a brief explanation of how each match is used in the code.
-
-If there are too many results, prioritize the most important ones and summarize the patterns you see.""")
-
-@mcp.prompt()
-def set_project() -> list[types.PromptMessage]:
- """Prompt for setting the project path."""
- messages = [
- types.PromptMessage(role="user", content=types.TextContent(type="text", text="""
- I need to analyze code from a project, but I haven't set the project path yet. Please help me set up the project path and index the code.
-
- First, I need to specify which project directory to analyze.
- """)),
- types.PromptMessage(role="assistant", content=types.TextContent(type="text", text="""
- Before I can help you analyze any code, we need to set up the project path. This is a required first step.
+ return SearchService(ctx).refresh_search_tools()
- Please provide the full path to your project folder. For example:
- - Windows: "C:/Users/username/projects/my-project"
- - macOS/Linux: "/home/username/projects/my-project"
-
- Once you provide the path, I'll use the `set_project_path` tool to configure the code analyzer to work with your project.
- """))
- ]
- return messages
-
-# ----- HELPER FUNCTIONS -----
+@mcp.tool()
+@handle_mcp_tool_errors(return_type='dict')
+def get_file_watcher_status(ctx: Context) -> Dict[str, Any]:
+ """Get file watcher service status and statistics."""
+ return SystemManagementService(ctx).get_file_watcher_status()
-def _index_project(base_path: str) -> int:
- """
- Create an index of the project files.
- Returns the number of files indexed.
- """
- file_count = 0
- file_index.clear()
-
- for root, dirs, files in os.walk(base_path):
- # Skip hidden directories and common build/dependency directories
- dirs[:] = [d for d in dirs if not d.startswith('.') and
- d not in ['node_modules', 'venv', '__pycache__', 'build', 'dist']]
-
- # Create relative path from base_path
- rel_path = os.path.relpath(root, base_path)
- current_dir = file_index
-
- # Skip the '.' directory (base_path itself)
- if rel_path != '.':
- # Split the path and navigate/create the tree
- path_parts = rel_path.replace('\\', '/').split('/')
- for part in path_parts:
- if part not in current_dir:
- current_dir[part] = {"type": "directory", "children": {}}
- current_dir = current_dir[part]["children"]
-
- # Add files to current directory
- for file in files:
- # Skip hidden files and files with unsupported extensions
- _, ext = os.path.splitext(file)
- if file.startswith('.') or ext not in supported_extensions:
- continue
-
- # Store file information
- file_path = os.path.join(rel_path, file).replace('\\', '/')
- if rel_path == '.':
- file_path = file
-
- current_dir[file] = {
- "type": "file",
- "path": file_path,
- "ext": ext
- }
- file_count += 1
-
- return file_count
-
-def _count_files(directory: Dict) -> int:
- """
- Count the number of files in the index.
- """
- count = 0
- for name, value in directory.items():
- if isinstance(value, dict):
- if value.get("type") == "file":
- count += 1
- elif value.get("type") == "directory":
- count += _count_files(value.get("children", {}))
- return count
-
-def _get_all_files(directory: Dict, prefix: str = "") -> List[Tuple[str, Dict]]:
- """Recursively get all files from the index."""
- all_files = []
- for name, item in directory.items():
- current_path = os.path.join(prefix, name).replace('\\', '/')
- if item.get('type') == 'file':
- all_files.append((current_path, item))
- elif item.get('type') == 'directory':
- all_files.extend(_get_all_files(item.get('children', {}), current_path))
- return all_files
+@mcp.tool()
+@handle_mcp_tool_errors(return_type='str')
+def configure_file_watcher(
+ ctx: Context,
+ enabled: bool = None,
+ debounce_seconds: float = None,
+ additional_exclude_patterns: list = None
+) -> str:
+ """Configure file watcher service settings."""
+    return SystemManagementService(ctx).configure_file_watcher(
+        enabled, debounce_seconds, additional_exclude_patterns
+    )
+# ----- PROMPTS -----
+# Removed: analyze_code, code_search, set_project prompts
def main():
"""Main function to run the MCP server."""
- # Run the server. Tools are discovered automatically via decorators.
mcp.run()
if __name__ == '__main__':
- # Set path to project root
- sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
main()
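The removed helpers (`_index_project`, `_count_files`, `_get_all_files`) walked a nested dict index in which directories carry a `children` map and files carry `path`/`ext` entries. A minimal, self-contained sketch of that shape and the recursive count (the `index` data here is invented for illustration):

```python
# Illustrative sketch of the nested index shape the removed helpers walked.
# Directories hold a "children" dict; files hold "path" and "ext".
from typing import Dict

index = {
    "src": {
        "type": "directory",
        "children": {
            "main.py": {"type": "file", "path": "src/main.py", "ext": ".py"},
            "util.py": {"type": "file", "path": "src/util.py", "ext": ".py"},
        },
    },
    "README.md": {"type": "file", "path": "README.md", "ext": ".md"},
}

def count_files(directory: Dict) -> int:
    """Recursively count file entries, mirroring the removed _count_files."""
    count = 0
    for value in directory.values():
        if value.get("type") == "file":
            count += 1
        elif value.get("type") == "directory":
            count += count_files(value.get("children", {}))
    return count

print(count_files(index))  # 3
```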
diff --git a/src/code_index_mcp/services/__init__.py b/src/code_index_mcp/services/__init__.py
new file mode 100644
index 0000000..7694446
--- /dev/null
+++ b/src/code_index_mcp/services/__init__.py
@@ -0,0 +1,48 @@
+"""
+Service layer for the Code Index MCP server.
+
+This package contains domain-specific services that handle the business logic
+for different areas of functionality:
+
+- ProjectManagementService: Project initialization and lifecycle management
+- IndexManagementService: Index rebuilds and status reporting
+- FileDiscoveryService: Pattern-based file discovery
+- CodeIntelligenceService: File analysis and code intelligence
+- SystemManagementService: File watcher and system configuration
+- SearchService: Code search operations and search tool management
+- FileService: Simple file content reading for MCP resources
+- SettingsService: Settings management and directory operations
+
+Each service follows a consistent pattern:
+- Constructor accepts MCP Context parameter
+- Methods correspond to MCP entry points
+- Clear domain boundaries with no cross-service dependencies
+- Shared utilities accessed through utils module
+- Meaningful exceptions raised for error conditions
+"""
+
+# New Three-Layer Architecture Services
+from .base_service import BaseService
+from .project_management_service import ProjectManagementService
+from .index_management_service import IndexManagementService
+from .file_discovery_service import FileDiscoveryService
+from .code_intelligence_service import CodeIntelligenceService
+from .system_management_service import SystemManagementService
+from .search_service import SearchService # Already follows clean architecture
+from .settings_service import SettingsService
+
+# Simple Services
+from .file_service import FileService # Simple file reading for resources
+from .file_watcher_service import FileWatcherService # Low-level service, still needed
+
+__all__ = [
+ # New Architecture
+ 'BaseService',
+ 'ProjectManagementService',
+ 'IndexManagementService',
+ 'FileDiscoveryService',
+ 'CodeIntelligenceService',
+ 'SystemManagementService',
+ 'SearchService',
+ 'SettingsService',
+
+ # Simple Services
+ 'FileService', # Simple file reading for resources
+ 'FileWatcherService' # Keep as low-level service
+]
\ No newline at end of file
diff --git a/src/code_index_mcp/services/base_service.py b/src/code_index_mcp/services/base_service.py
new file mode 100644
index 0000000..a29e6bf
--- /dev/null
+++ b/src/code_index_mcp/services/base_service.py
@@ -0,0 +1,140 @@
+"""
+Base service class providing common functionality for all services.
+
+This module defines the base service pattern that all domain services inherit from,
+ensuring consistent behavior and shared functionality across the service layer.
+"""
+
+from abc import ABC
+from typing import Optional
+from mcp.server.fastmcp import Context
+
+from ..utils import ContextHelper, ValidationHelper
+
+
+class BaseService(ABC):
+ """
+ Base class for all MCP services.
+
+ This class provides common functionality that all services need:
+ - Context management through ContextHelper
+ - Common validation patterns
+ - Shared error checking methods
+
+ All domain services should inherit from this class to ensure
+ consistent behavior and access to shared utilities.
+ """
+
+ def __init__(self, ctx: Context):
+ """
+ Initialize the base service.
+
+ Args:
+ ctx: The MCP Context object containing request and lifespan context
+ """
+ self.ctx = ctx
+ self.helper = ContextHelper(ctx)
+
+ def _validate_project_setup(self) -> Optional[str]:
+ """
+ Validate that the project is properly set up.
+
+ This method checks if the base path is set and valid, which is
+ required for most operations.
+
+ Returns:
+ Error message if project is not set up properly, None if valid
+ """
+ return self.helper.get_base_path_error()
+
+ def _require_project_setup(self) -> None:
+ """
+ Ensure project is set up, raising an exception if not.
+
+ This is a convenience method for operations that absolutely
+ require a valid project setup.
+
+ Raises:
+ ValueError: If project is not properly set up
+ """
+ error = self._validate_project_setup()
+ if error:
+ raise ValueError(error)
+
+ def _validate_file_path(self, file_path: str) -> Optional[str]:
+ """
+ Validate a file path for security and accessibility.
+
+ Args:
+ file_path: The file path to validate
+
+ Returns:
+ Error message if validation fails, None if valid
+ """
+ return ValidationHelper.validate_file_path(file_path, self.helper.base_path)
+
+ def _require_valid_file_path(self, file_path: str) -> None:
+ """
+ Ensure file path is valid, raising an exception if not.
+
+ Args:
+ file_path: The file path to validate
+
+ Raises:
+ ValueError: If file path is invalid
+ """
+ error = self._validate_file_path(file_path)
+ if error:
+ raise ValueError(error)
+
+ @property
+ def base_path(self) -> str:
+ """
+ Convenient access to the base project path.
+
+ Returns:
+ The base project path
+ """
+ return self.helper.base_path
+
+ @property
+ def settings(self):
+ """
+ Convenient access to the project settings.
+
+ Returns:
+ The ProjectSettings instance
+ """
+ return self.helper.settings
+
+ @property
+ def file_count(self) -> int:
+ """
+ Convenient access to the current file count.
+
+ Returns:
+ The number of indexed files
+ """
+ return self.helper.file_count
+
+ @property
+ def index_provider(self):
+ """
+ Convenient access to the unified index provider.
+
+ Returns:
+ The current IIndexProvider instance, or None if not available
+ """
+ if self.helper.index_manager:
+ return self.helper.index_manager.get_provider()
+ return None
+
+ @property
+ def index_manager(self):
+ """
+ Convenient access to the index manager.
+
+ Returns:
+ The index manager instance, or None if not available
+ """
+ return self.helper.index_manager
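`BaseService` uses a two-tier validation pattern: a `_validate_*` method returns an error string (or `None`), and a `_require_*` wrapper raises `ValueError` on error. The pattern can be sketched standalone — `DemoService` is a hypothetical stand-in, not part of the codebase:

```python
# Standalone analogue of the BaseService validation pattern: _validate_*
# returns an error message or None, _require_* converts errors to exceptions.
from typing import Optional

class DemoService:
    def __init__(self, base_path: Optional[str]):
        self.base_path = base_path

    def _validate_project_setup(self) -> Optional[str]:
        # Soft check: report the problem without raising.
        if not self.base_path:
            return "Project not set up. Call set_project_path first."
        return None

    def _require_project_setup(self) -> None:
        # Hard check: escalate the soft check's message to an exception.
        error = self._validate_project_setup()
        if error:
            raise ValueError(error)

svc = DemoService(base_path=None)
print(svc._validate_project_setup())  # error string, no exception
try:
    svc._require_project_setup()
except ValueError as e:
    print("raised:", e)
```

The split lets read-only callers report problems gracefully while mutating operations fail fast.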
diff --git a/src/code_index_mcp/services/code_intelligence_service.py b/src/code_index_mcp/services/code_intelligence_service.py
new file mode 100644
index 0000000..af0f1a2
--- /dev/null
+++ b/src/code_index_mcp/services/code_intelligence_service.py
@@ -0,0 +1,104 @@
+"""
+Code Intelligence Service - Business logic for code analysis and understanding.
+
+This service handles the business logic for analyzing code files using the new
+JSON-based indexing system optimized for LLM consumption.
+"""
+
+import logging
+import os
+from typing import Dict, Any
+
+from .base_service import BaseService
+from ..tools.filesystem import FileSystemTool
+from ..indexing import get_index_manager
+
+logger = logging.getLogger(__name__)
+
+
+class CodeIntelligenceService(BaseService):
+ """
+ Business service for code analysis and intelligence using JSON indexing.
+
+ This service provides comprehensive code analysis using the optimized
+ JSON-based indexing system for fast LLM-friendly responses.
+ """
+
+ def __init__(self, ctx):
+ super().__init__(ctx)
+ self._filesystem_tool = FileSystemTool()
+
+ def analyze_file(self, file_path: str) -> Dict[str, Any]:
+ """
+ Analyze a file and return comprehensive intelligence.
+
+ This is the main business method that orchestrates the file analysis
+ workflow, choosing the best analysis strategy and providing rich
+ insights about the code.
+
+ Args:
+ file_path: Path to the file to analyze (relative to project root)
+
+ Returns:
+ Dictionary with comprehensive file analysis
+
+ Raises:
+ ValueError: If file path is invalid or analysis fails
+ """
+ # Business validation
+ self._validate_analysis_request(file_path)
+
+ # Use the global index manager
+ index_manager = get_index_manager()
+
+        # Diagnostic logging
+        logger.debug("Getting file summary for: %s", file_path)
+        logger.debug("Index manager state - project path: %s", index_manager.project_path)
+        logger.debug("Index manager state - has builder: %s",
+                     index_manager.index_builder is not None)
+        if index_manager.index_builder:
+            logger.debug("Index manager state - has index: %s",
+                         index_manager.index_builder.in_memory_index is not None)
+
+        # Get file summary from JSON index
+        summary = index_manager.get_file_summary(file_path)
+        logger.debug("Summary result: %s", summary is not None)
+
+ # If deep index isn't available yet, return a helpful hint instead of error
+ if not summary:
+ return {
+ "status": "needs_deep_index",
+ "message": "Deep index not available. Please run build_deep_index before calling get_file_summary.",
+ "file_path": file_path
+ }
+
+ return summary
+
+ def _validate_analysis_request(self, file_path: str) -> None:
+ """
+ Validate the file analysis request according to business rules.
+
+ Args:
+ file_path: File path to validate
+
+ Raises:
+ ValueError: If validation fails
+ """
+ # Business rule: Project must be set up OR auto-initialization must be possible
+ if self.base_path:
+ # Standard validation if project is set up in context
+ self._require_valid_file_path(file_path)
+ full_path = os.path.join(self.base_path, file_path)
+ if not os.path.exists(full_path):
+ raise ValueError(f"File does not exist: {file_path}")
+ else:
+ # Allow proceeding if auto-initialization might work
+ # The index manager will handle project discovery
+ logger.info("Project not set in context, relying on index auto-initialization")
+
+ # Basic file path validation only
+            if not file_path or '..' in file_path or os.path.isabs(file_path):
+                raise ValueError(f"Invalid file path: {file_path}")
+
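The fallback validation above rejects empty paths and parent references. A hedged sketch of a slightly stronger check (a hypothetical helper, not in the codebase) that normalizes before inspecting segments, so only traversal that actually escapes the root is rejected:

```python
# Hypothetical helper: basic relative-path safety check without a known root.
import os

def is_safe_relative_path(file_path: str) -> bool:
    # Reject empty and absolute paths outright.
    if not file_path or os.path.isabs(file_path):
        return False
    # Normalize so "a/../b" collapses; a leading ".." after normalization
    # means the path escapes the root.
    parts = os.path.normpath(file_path).replace("\\", "/").split("/")
    return ".." not in parts

print(is_safe_relative_path("src/main.py"))    # True
print(is_safe_relative_path("../etc/passwd"))  # False
print(is_safe_relative_path("a/../../b"))      # False (normalizes to ../b)
```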
diff --git a/src/code_index_mcp/services/file_discovery_service.py b/src/code_index_mcp/services/file_discovery_service.py
new file mode 100644
index 0000000..d777511
--- /dev/null
+++ b/src/code_index_mcp/services/file_discovery_service.py
@@ -0,0 +1,78 @@
+"""
+File Discovery Service - Business logic for intelligent file discovery.
+
+This service handles the business logic for finding files using the new
+JSON-based indexing system optimized for LLM consumption.
+"""
+
+from typing import Dict, Any, List, Optional
+from dataclasses import dataclass
+
+from .base_service import BaseService
+from ..indexing import get_shallow_index_manager
+
+
+@dataclass
+class FileDiscoveryResult:
+ """Business result for file discovery operations."""
+ files: List[str]
+ total_count: int
+ pattern_used: str
+ search_strategy: str
+ metadata: Dict[str, Any]
+
+
+class FileDiscoveryService(BaseService):
+ """
+ Business service for intelligent file discovery using JSON indexing.
+
+ This service provides fast file discovery using the optimized JSON
+ indexing system for efficient LLM-oriented responses.
+ """
+
+ def __init__(self, ctx):
+ super().__init__(ctx)
+ self._index_manager = get_shallow_index_manager()
+
+ def find_files(self, pattern: str, max_results: Optional[int] = None) -> List[str]:
+ """
+ Find files matching the given pattern using JSON indexing.
+
+ Args:
+ pattern: Glob pattern to search for (e.g., "*.py", "test_*.js")
+ max_results: Maximum number of results to return (None for no limit)
+
+ Returns:
+ List of file paths matching the pattern
+
+ Raises:
+ ValueError: If pattern is invalid or project not set up
+ """
+ # Business validation
+ self._validate_discovery_request(pattern)
+
+ # Get files from JSON index
+ files = self._index_manager.find_files(pattern)
+
+ # Apply max_results limit if specified
+ if max_results and len(files) > max_results:
+ files = files[:max_results]
+
+ return files
+
+ def _validate_discovery_request(self, pattern: str) -> None:
+ """
+ Validate the file discovery request according to business rules.
+
+ Args:
+ pattern: Pattern to validate
+
+ Raises:
+ ValueError: If validation fails
+ """
+ # Ensure project is set up
+ self._require_project_setup()
+
+ # Validate pattern
+ if not pattern or not pattern.strip():
+ raise ValueError("Search pattern cannot be empty")
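`find_files` delegates matching to the shallow index manager, so its exact glob semantics live there. The behavior can be approximated with `fnmatch` — the file list below is invented, and the real implementation may differ:

```python
# Approximation of find_files: glob matching plus an optional result cap.
import fnmatch
from typing import List, Optional

files = ["src/app.py", "src/app.js", "tests/test_app.py", "README.md"]

def find_files(pattern: str, max_results: Optional[int] = None) -> List[str]:
    # fnmatch's "*" matches across "/" too, so "*.py" hits nested paths.
    matches = [f for f in files if fnmatch.fnmatch(f, pattern)]
    return matches[:max_results] if max_results else matches

print(find_files("*.py"))                 # ['src/app.py', 'tests/test_app.py']
print(find_files("*.py", max_results=1))  # ['src/app.py']
```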
diff --git a/src/code_index_mcp/services/file_service.py b/src/code_index_mcp/services/file_service.py
new file mode 100644
index 0000000..480d6f0
--- /dev/null
+++ b/src/code_index_mcp/services/file_service.py
@@ -0,0 +1,62 @@
+"""
+File Service - Simple file reading service for MCP resources.
+
+This service provides simple file content reading functionality for MCP resources.
+Complex file analysis has been moved to CodeIntelligenceService.
+
+Usage:
+- get_file_content() - used by files://{file_path} resource
+"""
+
+import os
+from .base_service import BaseService
+
+
+class FileService(BaseService):
+ """
+ Simple service for file content reading.
+
+ This service handles basic file reading operations for MCP resources.
+ Complex analysis functionality has been moved to CodeIntelligenceService.
+ """
+
+ def get_file_content(self, file_path: str) -> str:
+ """
+ Get file content for MCP resource.
+
+ Args:
+ file_path: Path to the file (relative to project root)
+
+ Returns:
+ File content as string
+
+ Raises:
+ ValueError: If project is not set up or path is invalid
+ FileNotFoundError: If file is not found or readable
+ """
+ self._require_project_setup()
+ self._require_valid_file_path(file_path)
+
+ # Build full path
+ full_path = os.path.join(self.base_path, file_path)
+
+ try:
+ # Try UTF-8 first (most common)
+ with open(full_path, 'r', encoding='utf-8') as f:
+ return f.read()
+ except UnicodeDecodeError:
+            # Try other encodings if UTF-8 fails; latin-1 accepts any byte
+            # sequence, so it must come last as the catch-all fallback
+            encodings = ['utf-8-sig', 'cp1252', 'latin-1']
+ for encoding in encodings:
+ try:
+ with open(full_path, 'r', encoding=encoding) as f:
+ return f.read()
+ except UnicodeDecodeError:
+ continue
+
+ raise ValueError(
+ f"Could not decode file {file_path}. File may have "
+ f"unsupported encoding."
+ ) from None
+ except (FileNotFoundError, PermissionError, OSError) as e:
+ raise FileNotFoundError(f"Error reading file: {e}") from e
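The fallback chain matters because bytes that are invalid UTF-8 often decode cleanly under a legacy code page, and `latin-1` in particular accepts any byte sequence. A standalone sketch of the decode cascade:

```python
# Why the encoding cascade works: b'\xe9' is invalid UTF-8 but valid latin-1.
data = "café".encode("cp1252")  # b'caf\xe9'

def decode_with_fallback(raw: bytes) -> str:
    # latin-1 maps every byte to a code point, so it never raises and
    # must sit at the end of the chain.
    for encoding in ("utf-8", "utf-8-sig", "latin-1"):
        try:
            return raw.decode(encoding)
        except UnicodeDecodeError:
            continue
    raise ValueError("unsupported encoding")

print(decode_with_fallback(data))  # 'café'
```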
diff --git a/src/code_index_mcp/services/file_watcher_service.py b/src/code_index_mcp/services/file_watcher_service.py
new file mode 100644
index 0000000..c2ef64c
--- /dev/null
+++ b/src/code_index_mcp/services/file_watcher_service.py
@@ -0,0 +1,418 @@
+"""
+File Watcher Service for automatic index rebuilds.
+
+This module provides file system monitoring capabilities that automatically
+trigger index rebuilds when relevant files are modified, created, or deleted.
+It uses the watchdog library for cross-platform file system event monitoring.
+"""
+# pylint: disable=missing-function-docstring # Fallback stub methods don't need docstrings
+
+import logging
+import os
+import traceback
+from threading import Timer
+from typing import Optional, Callable, List
+from pathlib import Path
+
+try:
+ from watchdog.observers import Observer
+ from watchdog.events import FileSystemEventHandler, FileSystemEvent
+ WATCHDOG_AVAILABLE = True
+except ImportError:
+ # Fallback classes for when watchdog is not available
+ class Observer:
+ """Fallback Observer class when watchdog library is not available."""
+ def __init__(self):
+ pass
+ def schedule(self, *args, **kwargs):
+ pass
+ def start(self):
+ pass
+ def stop(self):
+ pass
+ def join(self, *args, **kwargs):
+ pass
+ def is_alive(self):
+ return False
+
+ class FileSystemEventHandler:
+ """Fallback FileSystemEventHandler class when watchdog library is not available."""
+ def __init__(self):
+ pass
+
+ class FileSystemEvent:
+ """Fallback FileSystemEvent class when watchdog library is not available."""
+ def __init__(self):
+ self.is_directory = False
+ self.src_path = ""
+ self.event_type = ""
+
+ WATCHDOG_AVAILABLE = False
+
+from .base_service import BaseService
+from ..constants import SUPPORTED_EXTENSIONS
+
+
+class FileWatcherService(BaseService):
+ """
+ Service for monitoring file system changes and triggering index rebuilds.
+
+ This service uses the watchdog library to monitor file system events and
+ automatically triggers background index rebuilds when relevant files change.
+ It includes intelligent debouncing to batch rapid changes and filtering
+ to only monitor relevant file types.
+ """
+ MAX_RESTART_ATTEMPTS = 3
+
+ def __init__(self, ctx):
+ """
+ Initialize the file watcher service.
+
+ Args:
+ ctx: The MCP Context object
+ """
+ super().__init__(ctx)
+ self.logger = logging.getLogger(__name__)
+ self.observer: Optional[Observer] = None
+ self.event_handler: Optional[DebounceEventHandler] = None
+ self.is_monitoring = False
+ self.restart_attempts = 0
+ self.rebuild_callback: Optional[Callable] = None
+
+ # Check if watchdog is available
+ if not WATCHDOG_AVAILABLE:
+ self.logger.warning("Watchdog library not available - file watcher disabled")
+
+ def start_monitoring(self, rebuild_callback: Callable) -> bool:
+ """
+ Start file system monitoring.
+
+ Args:
+ rebuild_callback: Function to call when rebuild is needed
+
+ Returns:
+ True if monitoring started successfully, False otherwise
+ """
+ if not WATCHDOG_AVAILABLE:
+ self.logger.warning("Cannot start file watcher - watchdog library not available")
+ return False
+
+ if self.is_monitoring:
+ self.logger.debug("File watcher already monitoring")
+ return True
+
+ # Validate project setup
+ error = self._validate_project_setup()
+ if error:
+ self.logger.error("Cannot start file watcher: %s", error)
+ return False
+
+ self.rebuild_callback = rebuild_callback
+
+ # Get debounce seconds from config
+ config = self.settings.get_file_watcher_config()
+ debounce_seconds = config.get('debounce_seconds', 6.0)
+
+ try:
+ self.observer = Observer()
+ self.event_handler = DebounceEventHandler(
+ debounce_seconds=debounce_seconds,
+ rebuild_callback=self.rebuild_callback,
+ base_path=Path(self.base_path),
+ logger=self.logger
+ )
+
+ # Log detailed Observer setup
+ watch_path = str(self.base_path)
+ self.logger.debug("Scheduling Observer for path: %s", watch_path)
+
+ self.observer.schedule(
+ self.event_handler,
+ watch_path,
+ recursive=True
+ )
+
+ # Log Observer start
+ self.logger.debug("Starting Observer...")
+ self.observer.start()
+ self.is_monitoring = True
+ self.restart_attempts = 0
+
+ # Log Observer thread info
+ if hasattr(self.observer, '_thread'):
+ self.logger.debug("Observer thread: %s", self.observer._thread)
+
+ # Verify observer is actually running
+ if self.observer.is_alive():
+ self.logger.info(
+ "File watcher started successfully",
+ extra={
+ "debounce_seconds": debounce_seconds,
+ "monitored_path": str(self.base_path),
+ "supported_extensions": len(SUPPORTED_EXTENSIONS)
+ }
+ )
+
+ # Add diagnostic test - create a test event to verify Observer works
+ self.logger.debug("Observer thread is alive: %s", self.observer.is_alive())
+ self.logger.debug("Monitored path exists: %s", os.path.exists(str(self.base_path)))
+ self.logger.debug("Event handler is set: %s", self.event_handler is not None)
+
+ # Log current directory for comparison
+ current_dir = os.getcwd()
+ self.logger.debug("Current working directory: %s", current_dir)
+            self.logger.debug(
+                "Are paths same: %s",
+                os.path.normpath(current_dir) == os.path.normpath(str(self.base_path))
+            )
+
+ return True
+ else:
+ self.logger.error("File watcher failed to start - Observer not alive")
+ return False
+
+ except Exception as e:
+ self.logger.warning("Failed to start file watcher: %s", e)
+ self.logger.info("Falling back to reactive index refresh")
+ return False
+
+ def stop_monitoring(self) -> None:
+ """
+ Stop file system monitoring and cleanup all resources.
+
+ This method ensures complete cleanup of:
+ - Observer thread
+ - Event handler
+ - Debounce timers
+ - Monitoring state
+ """
+ if not self.observer and not self.is_monitoring:
+ # Already stopped or never started
+ return
+
+ self.logger.info("Stopping file watcher monitoring...")
+
+ try:
+ # Step 1: Stop the observer first
+ if self.observer:
+ self.logger.debug("Stopping observer...")
+ self.observer.stop()
+
+ # Step 2: Cancel any active debounce timer
+ if self.event_handler and self.event_handler.debounce_timer:
+ self.logger.debug("Cancelling debounce timer...")
+ self.event_handler.debounce_timer.cancel()
+
+ # Step 3: Wait for observer thread to finish (with timeout)
+ self.logger.debug("Waiting for observer thread to finish...")
+ self.observer.join(timeout=5.0)
+
+ # Step 4: Check if thread actually finished
+ if self.observer.is_alive():
+ self.logger.warning("Observer thread did not stop within timeout")
+ else:
+ self.logger.debug("Observer thread stopped successfully")
+
+ # Step 5: Clear all references
+ self.observer = None
+ self.event_handler = None
+ self.rebuild_callback = None
+ self.is_monitoring = False
+
+ self.logger.info("File watcher stopped and cleaned up successfully")
+
+ except Exception as e:
+ self.logger.error("Error stopping file watcher: %s", e)
+
+ # Force cleanup even if there were errors
+ self.observer = None
+ self.event_handler = None
+ self.rebuild_callback = None
+ self.is_monitoring = False
+
+ def is_active(self) -> bool:
+ """
+ Check if file watcher is actively monitoring.
+
+ Returns:
+ True if actively monitoring, False otherwise
+ """
+        return bool(self.is_monitoring and self.observer and self.observer.is_alive())
+
+ def restart_observer(self) -> bool:
+ """
+ Attempt to restart the file system observer.
+
+ Returns:
+ True if restart successful, False otherwise
+ """
+ if self.restart_attempts >= self.MAX_RESTART_ATTEMPTS:
+ self.logger.error("Max restart attempts reached, file watcher disabled")
+ return False
+
+ self.logger.info("Attempting to restart file watcher (attempt %d)",
+ self.restart_attempts + 1)
+ self.restart_attempts += 1
+
+ # Stop current observer if running
+ if self.observer:
+ try:
+ self.observer.stop()
+ self.observer.join(timeout=2.0)
+ except Exception as e:
+ self.logger.warning("Error stopping observer during restart: %s", e)
+
+ # Start new observer
+ try:
+ self.observer = Observer()
+ self.observer.schedule(
+ self.event_handler,
+ str(self.base_path),
+ recursive=True
+ )
+ self.observer.start()
+ self.is_monitoring = True
+
+ self.logger.info("File watcher restarted successfully")
+ return True
+
+ except Exception as e:
+ self.logger.error("Failed to restart file watcher: %s", e)
+ return False
+
+ def get_status(self) -> dict:
+ """
+ Get current file watcher status information.
+
+ Returns:
+ Dictionary containing status information
+ """
+ # Get current debounce seconds from config
+ config = self.settings.get_file_watcher_config()
+ debounce_seconds = config.get('debounce_seconds', 6.0)
+
+ return {
+ "available": WATCHDOG_AVAILABLE,
+ "active": self.is_active(),
+ "monitoring": self.is_monitoring,
+ "restart_attempts": self.restart_attempts,
+ "debounce_seconds": debounce_seconds,
+ "base_path": self.base_path if self.base_path else None,
+ "observer_alive": self.observer.is_alive() if self.observer else False
+ }
+
+
+class DebounceEventHandler(FileSystemEventHandler):
+ """
+ File system event handler with debouncing capability.
+
+ This handler filters file system events to only relevant files and
+ implements a debounce mechanism to batch rapid changes into single
+ rebuild operations.
+ """
+
+ def __init__(self, debounce_seconds: float, rebuild_callback: Callable,
+ base_path: Path, logger: logging.Logger, additional_excludes: Optional[List[str]] = None):
+ """
+ Initialize the debounce event handler.
+
+ Args:
+ debounce_seconds: Number of seconds to wait before triggering rebuild
+ rebuild_callback: Function to call when rebuild is needed
+ base_path: Base project path for filtering
+ logger: Logger instance for debug messages
+ additional_excludes: Additional patterns to exclude
+ """
+ from ..utils import FileFilter
+
+ super().__init__()
+ self.debounce_seconds = debounce_seconds
+ self.rebuild_callback = rebuild_callback
+ self.base_path = base_path
+ self.debounce_timer: Optional[Timer] = None
+ self.logger = logger
+
+ # Use centralized file filtering
+ self.file_filter = FileFilter(additional_excludes)
+
+ def on_any_event(self, event: FileSystemEvent) -> None:
+ """
+ Handle any file system event.
+
+ Args:
+ event: The file system event
+ """
+ # Check if event should be processed
+ should_process = self.should_process_event(event)
+
+ if should_process:
+ self.logger.info("File changed: %s - %s", event.event_type, event.src_path)
+ self.reset_debounce_timer()
+ else:
+ # Only log at debug level for filtered events
+ self.logger.debug("Filtered: %s - %s", event.event_type, event.src_path)
+
+ def should_process_event(self, event: FileSystemEvent) -> bool:
+ """
+ Determine if event should trigger index rebuild using centralized filtering.
+
+ Args:
+ event: The file system event to evaluate
+
+ Returns:
+ True if event should trigger rebuild, False otherwise
+ """
+ # Skip directory events
+ if event.is_directory:
+ self.logger.debug("Skipping directory event: %s", event.src_path)
+ return False
+
+ # Select path to check: dest_path for moves, src_path for others
+ if event.event_type == 'moved':
+ if not hasattr(event, 'dest_path'):
+ return False
+ target_path = event.dest_path
+ else:
+ target_path = event.src_path
+
+ # Use centralized filtering logic
+ try:
+ path = Path(target_path)
+ should_process = self.file_filter.should_process_path(path, self.base_path)
+
+ # Skip temporary files using centralized logic
+ if not should_process or self.file_filter.is_temporary_file(path):
+ return False
+
+ return True
+ except Exception:
+ return False
+
+
+ def reset_debounce_timer(self) -> None:
+ """Reset the debounce timer, canceling any existing timer."""
+ if self.debounce_timer:
+ self.debounce_timer.cancel()
+
+ self.debounce_timer = Timer(
+ self.debounce_seconds,
+ self.trigger_rebuild
+ )
+ self.debounce_timer.start()
+
+ def trigger_rebuild(self) -> None:
+ """Trigger index rebuild after debounce period."""
+ self.logger.info("File changes detected, triggering rebuild")
+
+ if self.rebuild_callback:
+ try:
+                self.rebuild_callback()
+ except Exception as e:
+ self.logger.error("Rebuild callback failed: %s", e)
+ traceback_msg = traceback.format_exc()
+ self.logger.error("Traceback: %s", traceback_msg)
+ else:
+ self.logger.warning("No rebuild callback configured")
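The debounce mechanism in `DebounceEventHandler` — cancel the pending `Timer` on every event so a burst of changes collapses into a single rebuild — can be shown in isolation (`Debouncer` is an illustrative stand-in):

```python
# Minimal sketch of Timer-based debouncing: each event restarts the
# quiet-period countdown, so N rapid events produce one callback.
import time
from threading import Timer

class Debouncer:
    def __init__(self, seconds, callback):
        self.seconds = seconds
        self.callback = callback
        self.timer = None

    def trigger(self):
        if self.timer:
            self.timer.cancel()  # restart the quiet-period countdown
        self.timer = Timer(self.seconds, self.callback)
        self.timer.start()

calls = []
debouncer = Debouncer(0.1, lambda: calls.append("rebuild"))
for _ in range(5):   # burst of five "file events"
    debouncer.trigger()
time.sleep(0.3)      # wait past the debounce window
print(calls)         # one rebuild for the whole burst
```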
diff --git a/src/code_index_mcp/services/index_management_service.py b/src/code_index_mcp/services/index_management_service.py
new file mode 100644
index 0000000..f56c760
--- /dev/null
+++ b/src/code_index_mcp/services/index_management_service.py
@@ -0,0 +1,198 @@
+"""
+Index Management Service - Business logic for index lifecycle management.
+
+This service handles the business logic for index rebuilding, status monitoring,
+and index-related operations using the new JSON-based indexing system.
+"""
+import time
+import logging
+import os
+import json
+
+from typing import Dict, Any
+from dataclasses import dataclass
+
+from .base_service import BaseService
+from ..indexing import get_index_manager, get_shallow_index_manager, DeepIndexManager
+
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class IndexRebuildResult:
+ """Business result for index rebuild operations."""
+ file_count: int
+ rebuild_time: float
+ status: str
+ message: str
+
+
+class IndexManagementService(BaseService):
+ """
+ Business service for index lifecycle management.
+
+ This service orchestrates index management workflows using the new
+ JSON-based indexing system for optimal LLM performance.
+ """
+
+ def __init__(self, ctx):
+ super().__init__(ctx)
+ # Deep manager (symbols/files, legacy JSON index manager)
+ self._index_manager = get_index_manager()
+ # Shallow manager (file-list only) for default workflows
+ self._shallow_manager = get_shallow_index_manager()
+ # Optional wrapper for explicit deep builds
+ self._deep_wrapper = DeepIndexManager()
+
+ def rebuild_index(self) -> str:
+ """
+ Rebuild the project index (DEFAULT: shallow file list).
+
+ For deep/symbol rebuilds, use build_deep_index() tool instead.
+
+ Returns:
+ Success message with rebuild information
+
+ Raises:
+ ValueError: If project not set up or rebuild fails
+ """
+ # Business validation
+ self._validate_rebuild_request()
+
+ # Shallow rebuild only (fast path)
+ if not self._shallow_manager.set_project_path(self.base_path):
+ raise RuntimeError("Failed to set project path (shallow) in index manager")
+ if not self._shallow_manager.build_index():
+ raise RuntimeError("Failed to rebuild shallow index")
+
+ try:
+ count = len(self._shallow_manager.get_file_list())
+ except Exception:
+ count = 0
+ return f"Shallow index re-built with {count} files."
+
+ def get_rebuild_status(self) -> Dict[str, Any]:
+ """
+ Get current index rebuild status information.
+
+ Returns:
+ Dictionary with rebuild status and metadata
+ """
+ # Check if project is set up
+ if not self.base_path:
+ return {
+ 'status': 'not_initialized',
+ 'message': 'Project not initialized',
+ 'is_rebuilding': False
+ }
+
+ # Get index stats from the new JSON system
+ stats = self._index_manager.get_index_stats()
+
+ return {
+ 'status': 'ready' if stats.get('status') == 'loaded' else 'needs_rebuild',
+ 'index_available': stats.get('status') == 'loaded',
+ 'is_rebuilding': False,
+ 'project_path': self.base_path,
+ 'file_count': stats.get('indexed_files', 0),
+ 'total_symbols': stats.get('total_symbols', 0),
+ 'symbol_types': stats.get('symbol_types', {}),
+ 'languages': stats.get('languages', [])
+ }
+
+ def _validate_rebuild_request(self) -> None:
+ """
+ Validate the index rebuild request according to business rules.
+
+ Raises:
+ ValueError: If validation fails
+ """
+ # Business rule: Project must be set up
+ self._require_project_setup()
+
+ def _execute_rebuild_workflow(self) -> IndexRebuildResult:
+ """
+ Execute the core index rebuild business workflow.
+
+ Returns:
+ IndexRebuildResult with rebuild data
+ """
+ start_time = time.time()
+
+ # Set project path in index manager
+ if not self._index_manager.set_project_path(self.base_path):
+ raise RuntimeError("Failed to set project path in index manager")
+
+ # Rebuild the index
+ if not self._index_manager.refresh_index():
+ raise RuntimeError("Failed to rebuild index")
+
+ # Get stats for result
+ stats = self._index_manager.get_index_stats()
+ file_count = stats.get('indexed_files', 0)
+
+ rebuild_time = time.time() - start_time
+
+ return IndexRebuildResult(
+ file_count=file_count,
+ rebuild_time=rebuild_time,
+ status='success',
+ message=f"Index rebuilt successfully with {file_count} files"
+ )
+
+ def _format_rebuild_result(self, result: IndexRebuildResult) -> str:
+ """
+ Format the rebuild result according to business requirements.
+
+ Args:
+ result: Rebuild result data
+
+ Returns:
+ Formatted result string for MCP response
+ """
+ return f"Project re-indexed. Found {result.file_count} files."
+
+ def build_shallow_index(self) -> str:
+ """
+ Build and persist the shallow index (file list only).
+
+ Returns:
+ Success message including file count if available.
+
+ Raises:
+ ValueError/RuntimeError on validation or build failure
+ """
+ # Ensure project is set up
+ self._require_project_setup()
+
+ # Initialize manager with current base path
+ if not self._shallow_manager.set_project_path(self.base_path):
+ raise RuntimeError("Failed to set project path in index manager")
+
+ # Build shallow index
+ if not self._shallow_manager.build_index():
+ raise RuntimeError("Failed to build shallow index")
+
+ # Try to report count
+ count = 0
+ try:
+ shallow_path = getattr(self._shallow_manager, 'index_path', None)
+ if shallow_path and os.path.exists(shallow_path):
+ with open(shallow_path, 'r', encoding='utf-8') as f:
+ data = json.load(f)
+ if isinstance(data, list):
+ count = len(data)
+ except Exception as e: # noqa: BLE001 - safe fallback to zero
+ logger.debug(f"Unable to read shallow index count: {e}")
+
+ return f"Shallow index built{f' with {count} files' if count else ''}."
+
+ def rebuild_deep_index(self) -> str:
+ """Rebuild the deep index using the original workflow."""
+ # Business validation
+ self._validate_rebuild_request()
+
+ # Deep rebuild via existing workflow
+ result = self._execute_rebuild_workflow()
+ return self._format_rebuild_result(result)
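
The file count in `build_shallow_index` is read defensively: any failure while opening or parsing the shallow index falls back to zero rather than failing the build. A minimal standalone sketch of that pattern (the function name here is illustrative, not the manager's real API):

```python
import json
import os
import tempfile


def read_shallow_count(index_path: str) -> int:
    """Return the number of files recorded in a shallow index, or 0 on any failure."""
    try:
        with open(index_path, 'r', encoding='utf-8') as f:
            data = json.load(f)
        # Only a plain JSON list of paths counts as a shallow index
        return len(data) if isinstance(data, list) else 0
    except (OSError, json.JSONDecodeError):
        # Mirror the service's safe fallback: a count failure never breaks the build
        return 0


path = os.path.join(tempfile.mkdtemp(), 'index.json')
with open(path, 'w', encoding='utf-8') as f:
    json.dump(['a.py', 'b.py', 'c.py'], f)
print(read_shallow_count(path))               # 3
print(read_shallow_count(path + '.missing'))  # 0
```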
diff --git a/src/code_index_mcp/services/project_management_service.py b/src/code_index_mcp/services/project_management_service.py
new file mode 100644
index 0000000..c0f3a63
--- /dev/null
+++ b/src/code_index_mcp/services/project_management_service.py
@@ -0,0 +1,411 @@
+"""
+Project Management Service - Business logic for project lifecycle management.
+
+This service handles the business logic for project initialization, configuration,
+and lifecycle management using the new JSON-based indexing system.
+"""
+import logging
+from typing import Dict, Any
+from dataclasses import dataclass
+from contextlib import contextmanager
+
+from .base_service import BaseService
+from ..utils.response_formatter import ResponseFormatter
+from ..constants import SUPPORTED_EXTENSIONS
+from ..indexing import get_index_manager, get_shallow_index_manager
+
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class ProjectInitializationResult:
+ """Business result for project initialization operations."""
+ project_path: str
+ file_count: int
+ index_source: str # 'loaded_existing' or 'built_new'
+ search_capabilities: str
+ monitoring_status: str
+ message: str
+
+
+class ProjectManagementService(BaseService):
+ """
+ Business service for project lifecycle management.
+
+ This service orchestrates project initialization workflows by composing
+ technical tools to achieve business goals like setting up projects,
+ managing configurations, and coordinating system components.
+ """
+
+ def __init__(self, ctx):
+ super().__init__(ctx)
+ # Deep index manager (legacy full index)
+ self._index_manager = get_index_manager()
+ # Shallow index manager (default for initialization)
+ self._shallow_manager = get_shallow_index_manager()
+ from ..tools.config import ProjectConfigTool
+ self._config_tool = ProjectConfigTool()
+ # Import FileWatcherTool locally to avoid circular import
+ from ..tools.monitoring import FileWatcherTool
+ self._watcher_tool = FileWatcherTool(ctx)
+
+ @contextmanager
+ def _noop_operation(self, *_args, **_kwargs):
+ yield
+
+ def initialize_project(self, path: str) -> str:
+ """
+ Initialize a project with comprehensive business logic.
+
+ This is the main business method that orchestrates the project
+ initialization workflow, handling validation, cleanup, setup,
+ and coordination of all project components.
+
+ Args:
+ path: Project directory path to initialize
+
+ Returns:
+ Success message with project information
+
+ Raises:
+ ValueError: If path is invalid or initialization fails
+ """
+ # Business validation
+ self._validate_initialization_request(path)
+
+ # Business workflow: Execute initialization
+ result = self._execute_initialization_workflow(path)
+
+ # Business result formatting
+ return self._format_initialization_result(result)
+
+ def _validate_initialization_request(self, path: str) -> None:
+ """
+ Validate the project initialization request according to business rules.
+
+ Args:
+ path: Project path to validate
+
+ Raises:
+ ValueError: If validation fails
+ """
+ # Business rule: Path must be valid
+ error = self._config_tool.validate_project_path(path)
+ if error:
+ raise ValueError(error)
+
+ def _execute_initialization_workflow(self, path: str) -> ProjectInitializationResult:
+ """
+ Execute the core project initialization business workflow.
+
+ Args:
+ path: Project path to initialize
+
+ Returns:
+ ProjectInitializationResult with initialization data
+ """
+ # Business step 1: Initialize config tool
+ self._config_tool.initialize_settings(path)
+
+ # Normalize path for consistent processing
+ normalized_path = self._config_tool.normalize_project_path(path)
+
+ # Business step 2: Cleanup existing project state
+ self._cleanup_existing_project()
+
+ # Business step 3: Initialize shallow index by default (fast path)
+ index_result = self._initialize_shallow_index_manager(normalized_path)
+
+ # Business step 3.1: Store index manager in context for other services
+ self.helper.update_index_manager(self._index_manager)
+
+ # Business step 4: Setup file monitoring
+ monitoring_result = self._setup_file_monitoring(normalized_path)
+
+        # Business step 5: Update system state
+ self._update_project_state(normalized_path, index_result['file_count'])
+
+ # Business step 6: Get search capabilities info
+ search_info = self._get_search_capabilities_info()
+
+ return ProjectInitializationResult(
+ project_path=normalized_path,
+ file_count=index_result['file_count'],
+ index_source=index_result['source'],
+ search_capabilities=search_info,
+ monitoring_status=monitoring_result,
+ message=f"Project initialized: {normalized_path}"
+ )
+
+    def _cleanup_existing_project(self) -> None:
+        """Business logic to clean up existing project state."""
+        with self._noop_operation():
+            # Stop existing file monitoring
+            self._watcher_tool.stop_existing_watcher()
+
+            # Clear existing index cache
+            self.helper.clear_index_cache()
+
+ def _initialize_json_index_manager(self, project_path: str) -> Dict[str, Any]:
+ """
+ Business logic to initialize JSON index manager.
+
+ Args:
+ project_path: Project path
+
+ Returns:
+ Dictionary with initialization results
+ """
+ # Set project path in index manager
+ if not self._index_manager.set_project_path(project_path):
+ raise RuntimeError(f"Failed to set project path: {project_path}")
+
+ # Update context
+ self.helper.update_base_path(project_path)
+
+ # Try to load existing index or build new one
+ if self._index_manager.load_index():
+ source = "loaded_existing"
+ else:
+ if not self._index_manager.build_index():
+ raise RuntimeError("Failed to build index")
+ source = "built_new"
+
+ # Get stats
+ stats = self._index_manager.get_index_stats()
+ file_count = stats.get('indexed_files', 0)
+
+ return {
+ 'file_count': file_count,
+ 'source': source,
+ 'total_symbols': stats.get('total_symbols', 0),
+ 'languages': stats.get('languages', [])
+ }
+
+ def _initialize_shallow_index_manager(self, project_path: str) -> Dict[str, Any]:
+ """
+ Business logic to initialize the shallow index manager by default.
+
+ Args:
+ project_path: Project path
+
+ Returns:
+ Dictionary with initialization results
+ """
+ # Set project path in shallow manager
+ if not self._shallow_manager.set_project_path(project_path):
+ raise RuntimeError(f"Failed to set project path (shallow): {project_path}")
+
+ # Update context
+ self.helper.update_base_path(project_path)
+
+ # Try to load existing shallow index or build new one
+ if self._shallow_manager.load_index():
+ source = "loaded_existing"
+ else:
+ if not self._shallow_manager.build_index():
+ raise RuntimeError("Failed to build shallow index")
+ source = "built_new"
+
+ # Determine file count from shallow list
+ try:
+ files = self._shallow_manager.get_file_list()
+ file_count = len(files)
+ except Exception: # noqa: BLE001 - safe fallback
+ file_count = 0
+
+ return {
+ 'file_count': file_count,
+ 'source': source,
+ 'total_symbols': 0,
+ 'languages': []
+ }
+
+ def _is_valid_existing_index(self, index_data: Dict[str, Any]) -> bool:
+ """
+ Business rule to determine if existing index is valid and usable.
+
+ Args:
+ index_data: Index data to validate
+
+ Returns:
+ True if index is valid and usable, False otherwise
+ """
+ if not index_data or not isinstance(index_data, dict):
+ return False
+
+ # Business rule: Must have new format metadata
+ if 'index_metadata' not in index_data:
+ return False
+
+        # Business rule: Must be a compatible version. Compare the major
+        # component numerically; string comparison would rank '10.0' below '3.0'.
+        version = index_data.get('index_metadata', {}).get('version', '')
+        try:
+            return int(str(version).split('.', 1)[0]) >= 3
+        except ValueError:
+            return False
+
+ def _load_existing_index(self, index_data: Dict[str, Any]) -> Dict[str, Any]:
+ """
+ Business logic to load and use existing index.
+
+ Args:
+ index_data: Existing index data
+
+ Returns:
+ Dictionary with loading results
+ """
+        # Note: Legacy index loading is now handled by UnifiedIndexManager.
+        # This method is kept for backward compatibility.
+
+        # Extract file count from metadata
+        file_count = index_data.get('project_metadata', {}).get('total_files', 0)
+
+        return {
+            'file_count': file_count,
+            'source': 'loaded_existing'
+        }
+
+ def _setup_file_monitoring(self, project_path: str) -> str:
+ """
+ Business logic to setup file monitoring for the project.
+
+ Args:
+ project_path: Project path to monitor
+
+ Returns:
+ String describing monitoring setup result
+ """
+ try:
+ # Create rebuild callback that uses the JSON index manager
+            def rebuild_callback():
+                """Rebuild the shallow index when the watcher reports changes."""
+                logger.info("File watcher triggered rebuild callback")
+                try:
+                    logger.debug(f"Starting shallow index rebuild for: {project_path}")
+                    # Business logic: file changed, rebuild using the SHALLOW index manager
+                    if not self._shallow_manager.set_project_path(project_path):
+                        logger.warning("Shallow manager set_project_path failed")
+                        return False
+                    if not self._shallow_manager.build_index():
+                        logger.warning("File watcher shallow rebuild failed")
+                        return False
+                    files = self._shallow_manager.get_file_list()
+                    logger.info(
+                        f"File watcher shallow rebuild completed successfully - {len(files)} files"
+                    )
+                    return True
+                except Exception as e:  # noqa: BLE001 - watcher callback must never raise
+                    logger.error(f"File watcher shallow rebuild failed: {e}", exc_info=True)
+                    return False
+
+ # Start monitoring using watcher tool
+ success = self._watcher_tool.start_monitoring(project_path, rebuild_callback)
+
+ if success:
+ # Store watcher in context for later access
+ self._watcher_tool.store_in_context()
+ return "monitoring_active"
+ else:
+ self._watcher_tool.record_error("Failed to start file monitoring")
+ return "monitoring_failed"
+
+ except Exception as e:
+ error_msg = f"File monitoring setup failed: {e}"
+ self._watcher_tool.record_error(error_msg)
+ return "monitoring_error"
+
+ def _update_project_state(self, project_path: str, file_count: int) -> None:
+ """Business logic to update system state after project initialization."""
+        # Update context with file count
+        self.helper.update_file_count(file_count)
+
+ def _get_search_capabilities_info(self) -> str:
+ """Business logic to get search capabilities information."""
+ search_info = self._config_tool.get_search_tool_info()
+
+ if search_info['available']:
+ return f"Advanced search enabled ({search_info['name']})"
+ else:
+ return "Basic search available"
+
+ def _format_initialization_result(self, result: ProjectInitializationResult) -> str:
+ """
+ Format the initialization result according to business requirements.
+
+ Args:
+ result: Initialization result data
+
+ Returns:
+ Formatted result string for MCP response
+ """
+ if result.index_source == 'unified_manager':
+ message = (f"Project path set to: {result.project_path}. "
+ f"Initialized unified index with {result.file_count} files. "
+ f"{result.search_capabilities}.")
+ elif result.index_source == 'failed':
+ message = (f"Project path set to: {result.project_path}. "
+ f"Index initialization failed. Some features may be limited. "
+ f"{result.search_capabilities}.")
+ else:
+ message = (f"Project path set to: {result.project_path}. "
+ f"Indexed {result.file_count} files. "
+ f"{result.search_capabilities}.")
+
+ if result.monitoring_status != "monitoring_active":
+ message += " (File monitoring unavailable - use manual refresh)"
+
+ return message
+
+ def get_project_config(self) -> str:
+ """
+ Get the current project configuration for MCP resource.
+
+ Returns:
+ JSON formatted configuration string
+ """
+
+ # Check if project is configured
+ if not self.helper.base_path:
+ config_data = {
+ "status": "not_configured",
+ "message": ("Project path not set. Please use set_project_path "
+ "to set a project directory first."),
+ "supported_extensions": SUPPORTED_EXTENSIONS
+ }
+ return ResponseFormatter.config_response(config_data)
+
+ # Get settings stats
+ settings_stats = self.helper.settings.get_stats() if self.helper.settings else {}
+
+ config_data = {
+ "base_path": self.helper.base_path,
+ "supported_extensions": SUPPORTED_EXTENSIONS,
+ "file_count": self.helper.file_count,
+ "settings_directory": self.helper.settings.settings_path if self.helper.settings else "",
+ "settings_stats": settings_stats
+ }
+
+ return ResponseFormatter.config_response(config_data)
+
+ # Removed: get_project_structure; the project structure resource is deprecated
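
The `_is_valid_existing_index` rule gates index reuse on a minimum version. A purely lexicographic comparison mis-orders multi-digit majors, so the check should parse the major component numerically. A minimal sketch (the helper name is illustrative):

```python
def is_compatible_version(version: str, minimum_major: int = 3) -> bool:
    """Check index compatibility by comparing the major version numerically."""
    try:
        return int(version.split('.', 1)[0]) >= minimum_major
    except (ValueError, AttributeError):
        # Missing or malformed versions are treated as incompatible
        return False


print(is_compatible_version('3.0'))    # True
print(is_compatible_version('10.2'))   # True
print(is_compatible_version('2.9'))    # False
print('10.0' >= '3.0')                 # False: the string-comparison pitfall
```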
diff --git a/src/code_index_mcp/services/search_service.py b/src/code_index_mcp/services/search_service.py
new file mode 100644
index 0000000..a2c2799
--- /dev/null
+++ b/src/code_index_mcp/services/search_service.py
@@ -0,0 +1,172 @@
+"""
+Search service for the Code Index MCP server.
+
+This service handles code search operations, search tool management,
+and search strategy selection.
+"""
+
+from pathlib import Path
+from typing import Any, Dict, List, Optional
+
+from .base_service import BaseService
+from ..utils import FileFilter, ResponseFormatter, ValidationHelper
+from ..search.base import is_safe_regex_pattern
+
+
+class SearchService(BaseService):
+ """Service for managing code search operations."""
+
+ def __init__(self, ctx):
+ super().__init__(ctx)
+ self.file_filter = self._create_file_filter()
+
+ def search_code( # pylint: disable=too-many-arguments
+ self,
+ pattern: str,
+ case_sensitive: bool = True,
+ context_lines: int = 0,
+ file_pattern: Optional[str] = None,
+ fuzzy: bool = False,
+ regex: Optional[bool] = None,
+ max_line_length: Optional[int] = None
+ ) -> Dict[str, Any]:
+ """Search for code patterns in the project."""
+ self._require_project_setup()
+
+ if regex is None:
+ regex = is_safe_regex_pattern(pattern)
+
+ error = ValidationHelper.validate_search_pattern(pattern, regex)
+ if error:
+ raise ValueError(error)
+
+ if file_pattern:
+ error = ValidationHelper.validate_glob_pattern(file_pattern)
+ if error:
+ raise ValueError(f"Invalid file pattern: {error}")
+
+ if not self.settings:
+ raise ValueError("Settings not available")
+
+ strategy = self.settings.get_preferred_search_tool()
+ if not strategy:
+ raise ValueError("No search strategies available")
+
+ self._configure_strategy(strategy)
+
+ try:
+ results = strategy.search(
+ pattern=pattern,
+ base_path=self.base_path,
+ case_sensitive=case_sensitive,
+ context_lines=context_lines,
+ file_pattern=file_pattern,
+ fuzzy=fuzzy,
+ regex=regex,
+ max_line_length=max_line_length
+ )
+ filtered = self._filter_results(results)
+ return ResponseFormatter.search_results_response(filtered)
+ except Exception as exc:
+ raise ValueError(f"Search failed using '{strategy.name}': {exc}") from exc
+
+ def refresh_search_tools(self) -> str:
+ """Refresh the available search tools."""
+ if not self.settings:
+ raise ValueError("Settings not available")
+
+ self.settings.refresh_available_strategies()
+ config = self.settings.get_search_tools_config()
+
+ available = config['available_tools']
+ preferred = config['preferred_tool']
+ return f"Search tools refreshed. Available: {available}. Preferred: {preferred}."
+
+ def get_search_capabilities(self) -> Dict[str, Any]:
+ """Get information about search capabilities and available tools."""
+ if not self.settings:
+ return {"error": "Settings not available"}
+
+ config = self.settings.get_search_tools_config()
+
+ capabilities = {
+ "available_tools": config.get('available_tools', []),
+ "preferred_tool": config.get('preferred_tool', 'basic'),
+ "supports_regex": True,
+ "supports_fuzzy": True,
+ "supports_case_sensitivity": True,
+ "supports_context_lines": True,
+ "supports_file_patterns": True
+ }
+
+ return capabilities
+
+ def _configure_strategy(self, strategy) -> None:
+ """Apply shared exclusion configuration to the strategy if supported."""
+ configure = getattr(strategy, 'configure_excludes', None)
+ if not configure:
+ return
+
+ try:
+ configure(self.file_filter)
+ except Exception: # pragma: no cover - defensive fallback
+ pass
+
+ def _create_file_filter(self) -> FileFilter:
+ """Build a shared file filter drawing from project settings."""
+ additional_dirs: List[str] = []
+ additional_file_patterns: List[str] = []
+
+ settings = self.settings
+ if settings:
+ try:
+ config = settings.get_file_watcher_config()
+ except Exception: # pragma: no cover - fallback if config fails
+ config = {}
+
+ for key in ('exclude_patterns', 'additional_exclude_patterns'):
+ patterns = config.get(key) or []
+ for pattern in patterns:
+ if not isinstance(pattern, str):
+ continue
+ normalized = pattern.strip()
+ if not normalized:
+ continue
+ additional_dirs.append(normalized)
+ additional_file_patterns.append(normalized)
+
+ file_filter = FileFilter(additional_dirs or None)
+
+ if additional_file_patterns:
+ file_filter.exclude_files.update(additional_file_patterns)
+
+ return file_filter
+
+ def _filter_results(self, results: Dict[str, Any]) -> Dict[str, Any]:
+ """Filter out matches that reside under excluded paths."""
+ if not isinstance(results, dict) or not results:
+ return results
+
+ if 'error' in results or not self.file_filter or not self.base_path:
+ return results
+
+ base_path = Path(self.base_path)
+ filtered: Dict[str, Any] = {}
+
+ for rel_path, matches in results.items():
+ if not isinstance(rel_path, str):
+ continue
+
+ normalized = Path(rel_path.replace('\\', '/'))
+ try:
+ absolute = (base_path / normalized).resolve()
+ except Exception: # pragma: no cover - invalid path safety
+ continue
+
+ try:
+ if self.file_filter.should_process_path(absolute, base_path):
+ filtered[rel_path] = matches
+ except Exception: # pragma: no cover - defensive fallback
+ continue
+
+ return filtered
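
`_filter_results` drops matches whose paths fall under excluded directories. A simplified standalone sketch of the same idea, with a hard-coded exclusion set standing in for the real `FileFilter` (both the set and the function name are illustrative):

```python
from pathlib import Path
from typing import Any, Dict

EXCLUDED_DIRS = {'.git', 'node_modules', '__pycache__'}  # illustrative exclusions


def filter_results(results: Dict[str, Any]) -> Dict[str, Any]:
    """Drop search matches whose relative path contains an excluded directory."""
    kept: Dict[str, Any] = {}
    for rel_path, matches in results.items():
        # Normalize Windows separators before splitting into path components
        parts = Path(rel_path.replace('\\', '/')).parts
        if any(part in EXCLUDED_DIRS for part in parts):
            continue
        kept[rel_path] = matches
    return kept


results = {
    'src/app.py': [(10, 'import os')],
    'node_modules/pkg/index.js': [(3, "require('os')")],
}
print(sorted(filter_results(results)))  # ['src/app.py']
```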
diff --git a/src/code_index_mcp/services/settings_service.py b/src/code_index_mcp/services/settings_service.py
new file mode 100644
index 0000000..bd641c4
--- /dev/null
+++ b/src/code_index_mcp/services/settings_service.py
@@ -0,0 +1,191 @@
+"""
+Settings management service for the Code Index MCP server.
+
+This service handles settings information, statistics,
+temporary directory management, and settings cleanup operations.
+"""
+
+import os
+import tempfile
+from typing import Dict, Any
+
+from .base_service import BaseService
+from ..utils import ResponseFormatter
+from ..constants import SETTINGS_DIR
+from ..project_settings import ProjectSettings
+from ..indexing import get_index_manager
+
+
+def manage_temp_directory(action: str) -> Dict[str, Any]:
+ """
+ Manage temporary directory operations.
+
+ This is a standalone function that doesn't require project context.
+ Handles the logic for create_temp_directory and check_temp_directory MCP tools.
+
+ Args:
+ action: The action to perform ('create' or 'check')
+
+ Returns:
+ Dictionary with directory information and operation results
+
+ Raises:
+ ValueError: If action is invalid or operation fails
+ """
+ if action not in ['create', 'check']:
+ raise ValueError(f"Invalid action: {action}. Must be 'create' or 'check'")
+
+    # Try to get the actual temp directory from the index manager, falling back to the default
+    try:
+        index_manager = get_index_manager()
+        temp_dir = index_manager.temp_dir or os.path.join(tempfile.gettempdir(), SETTINGS_DIR)
+    except Exception:  # noqa: BLE001 - fall back to the default location
+        temp_dir = os.path.join(tempfile.gettempdir(), SETTINGS_DIR)
+
+ if action == 'create':
+ existed_before = os.path.exists(temp_dir)
+
+ try:
+ # Use ProjectSettings to handle directory creation consistently
+ ProjectSettings("", skip_load=True)
+
+ result = ResponseFormatter.directory_info_response(
+ temp_directory=temp_dir,
+ exists=os.path.exists(temp_dir),
+ is_directory=os.path.isdir(temp_dir)
+ )
+ result["existed_before"] = existed_before
+ result["created"] = not existed_before
+
+ return result
+
+ except (OSError, IOError, ValueError) as e:
+ return ResponseFormatter.directory_info_response(
+ temp_directory=temp_dir,
+ exists=False,
+ error=str(e)
+ )
+
+ else: # action == 'check'
+ result = ResponseFormatter.directory_info_response(
+ temp_directory=temp_dir,
+ exists=os.path.exists(temp_dir),
+ is_directory=os.path.isdir(temp_dir) if os.path.exists(temp_dir) else False
+ )
+ result["temp_root"] = tempfile.gettempdir()
+
+ # If the directory exists, list its contents
+ if result["exists"] and result["is_directory"]:
+ try:
+ contents = os.listdir(temp_dir)
+ result["contents"] = contents
+ result["subdirectories"] = []
+
+ # Check each subdirectory
+ for item in contents:
+ item_path = os.path.join(temp_dir, item)
+ if os.path.isdir(item_path):
+ subdir_info = {
+ "name": item,
+ "path": item_path,
+ "contents": os.listdir(item_path) if os.path.exists(item_path) else []
+ }
+ result["subdirectories"].append(subdir_info)
+
+ except (OSError, PermissionError) as e:
+ result["error"] = str(e)
+
+ return result
+
+
+class SettingsService(BaseService):
+ """
+ Service for managing settings and directory operations.
+
+ This service handles:
+ - Settings information and statistics
+ - Temporary directory management
+ - Settings cleanup operations
+ - Configuration data access
+ """
+
+ def get_settings_info(self) -> Dict[str, Any]:
+ """
+ Get comprehensive settings information.
+
+ Handles the logic for get_settings_info MCP tool.
+
+ Returns:
+ Dictionary with settings directory, config, stats, and status information
+ """
+ temp_dir = os.path.join(tempfile.gettempdir(), SETTINGS_DIR)
+
+ # Get the actual index directory from the index manager
+ index_manager = get_index_manager()
+ actual_temp_dir = index_manager.temp_dir if index_manager.temp_dir else temp_dir
+
+ # Check if base_path is set
+ if not self.base_path:
+ return ResponseFormatter.settings_info_response(
+ settings_directory="",
+ temp_directory=actual_temp_dir,
+ temp_directory_exists=os.path.exists(actual_temp_dir),
+ config={},
+ stats={},
+ exists=False,
+ status="not_configured",
+ message="Project path not set. Please use set_project_path to set a "
+ "project directory first."
+ )
+
+ # Get config and stats
+ config = self.settings.load_config() if self.settings else {}
+ stats = self.settings.get_stats() if self.settings else {}
+ settings_directory = actual_temp_dir
+ exists = os.path.exists(settings_directory) if settings_directory else False
+
+ return ResponseFormatter.settings_info_response(
+ settings_directory=settings_directory,
+ temp_directory=actual_temp_dir,
+ temp_directory_exists=os.path.exists(actual_temp_dir),
+ config=config,
+ stats=stats,
+ exists=exists
+ )
+
+ def clear_all_settings(self) -> str:
+ """
+ Clear all settings and cached data.
+
+ Handles the logic for clear_settings MCP tool.
+
+ Returns:
+ Success message confirming settings were cleared
+ """
+ if self.settings:
+ self.settings.clear()
+
+ return "Project settings, index, and cache have been cleared."
+
+ def get_settings_stats(self) -> str:
+ """
+ Get settings statistics as JSON string.
+
+ Handles the logic for settings://stats MCP resource.
+
+ Returns:
+ JSON formatted settings statistics
+ """
+ if not self.settings:
+ stats_data = {"error": "Settings not available"}
+ else:
+ stats_data = self.settings.get_stats()
+
+ return ResponseFormatter.stats_response(stats_data)
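
The `check` branch of `manage_temp_directory` reduces to a small stat of the settings directory under the system temp root. A standalone sketch of that shape (the `SETTINGS_DIR` value is a placeholder; the real constant lives in `code_index_mcp.constants`):

```python
import os
import tempfile

SETTINGS_DIR = 'code_indexer'  # placeholder value for illustration only


def check_temp_directory() -> dict:
    """Report whether the shared settings directory exists under the temp root."""
    temp_dir = os.path.join(tempfile.gettempdir(), SETTINGS_DIR)
    exists = os.path.exists(temp_dir)
    return {
        'temp_directory': temp_dir,
        'temp_root': tempfile.gettempdir(),
        'exists': exists,
        # Only meaningful when the path exists at all
        'is_directory': os.path.isdir(temp_dir) if exists else False,
    }


info = check_temp_directory()
print(info['temp_directory'], info['exists'])
```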
diff --git a/src/code_index_mcp/services/system_management_service.py b/src/code_index_mcp/services/system_management_service.py
new file mode 100644
index 0000000..8cb420d
--- /dev/null
+++ b/src/code_index_mcp/services/system_management_service.py
@@ -0,0 +1,407 @@
+"""
+System Management Service - Business logic for system configuration and monitoring.
+
+This service handles the business logic for system management operations including
+file watcher status, configuration management, and system health monitoring.
+It composes technical tools to achieve business goals.
+"""
+
+from typing import Dict, Any, Optional
+from dataclasses import dataclass
+from .index_management_service import IndexManagementService
+from .base_service import BaseService
+# FileWatcherTool will be imported locally to avoid circular import
+from ..tools.config import ProjectConfigTool, SettingsTool
+
+
+@dataclass
+class FileWatcherStatus:
+ """Business result for file watcher status operations."""
+ available: bool
+ active: bool
+ status: str
+ message: Optional[str]
+ error_info: Optional[Dict[str, Any]]
+ configuration: Dict[str, Any]
+ rebuild_status: Dict[str, Any]
+ recommendations: list[str]
+
+
+class SystemManagementService(BaseService):
+ """
+ Business service for system configuration and monitoring.
+
+ This service orchestrates system management workflows by composing
+ technical tools to achieve business goals like monitoring file watchers,
+ managing configurations, and providing system health insights.
+ """
+
+ def __init__(self, ctx):
+ super().__init__(ctx)
+ # Import FileWatcherTool locally to avoid circular import
+ from ..tools.monitoring import FileWatcherTool
+ self._watcher_tool = FileWatcherTool(ctx)
+ self._config_tool = ProjectConfigTool()
+ self._settings_tool = SettingsTool()
+
+ def get_file_watcher_status(self) -> Dict[str, Any]:
+ """
+ Get comprehensive file watcher status with business intelligence.
+
+ This is the main business method that orchestrates the file watcher
+ status workflow, analyzing system state, providing recommendations,
+ and formatting comprehensive status information.
+
+ Returns:
+ Dictionary with comprehensive file watcher status
+ """
+ # Business workflow: Analyze system state
+ status_result = self._analyze_file_watcher_state()
+
+ # Business result formatting
+ return self._format_status_result(status_result)
+
+ def configure_file_watcher(self, enabled: Optional[bool] = None,
+ debounce_seconds: Optional[float] = None,
+ additional_exclude_patterns: Optional[list] = None) -> str:
+ """
+ Configure file watcher settings with business validation.
+
+ Args:
+ enabled: Whether to enable file watcher
+ debounce_seconds: Debounce time in seconds
+ additional_exclude_patterns: Additional patterns to exclude
+
+ Returns:
+ Success message with configuration details
+
+ Raises:
+ ValueError: If configuration is invalid
+ """
+ # Business validation
+ self._validate_configuration_request(enabled, debounce_seconds, additional_exclude_patterns)
+
+ # Business workflow: Apply configuration
+ result = self._apply_file_watcher_configuration(enabled, debounce_seconds, additional_exclude_patterns)
+
+ return result
+
+ def _analyze_file_watcher_state(self) -> FileWatcherStatus:
+ """
+ Business logic to analyze comprehensive file watcher state.
+
+ Returns:
+ FileWatcherStatus with complete analysis
+ """
+ # Business step 1: Check for error conditions
+ error_info = self._check_for_watcher_errors()
+ if error_info:
+ return self._create_error_status(error_info)
+
+ # Business step 2: Check initialization state
+ watcher_service = self._watcher_tool.get_from_context()
+ if not watcher_service:
+ return self._create_not_initialized_status()
+
+ # Business step 3: Get active status
+ return self._create_active_status(watcher_service)
+
+ def _check_for_watcher_errors(self) -> Optional[Dict[str, Any]]:
+ """
+ Business logic to check for file watcher error conditions.
+
+ Returns:
+ Error information dictionary or None if no errors
+ """
+ # Check context for recorded errors
+ if hasattr(self.ctx.request_context.lifespan_context, 'file_watcher_error'):
+ return self.ctx.request_context.lifespan_context.file_watcher_error
+
+ return None
+
+ def _create_error_status(self, error_info: Dict[str, Any]) -> FileWatcherStatus:
+ """
+ Business logic to create error status with recommendations.
+
+ Args:
+ error_info: Error information from context
+
+ Returns:
+ FileWatcherStatus for error condition
+ """
+ # Get configuration if available
+ configuration = self._get_file_watcher_configuration()
+
+ # Get rebuild status
+ rebuild_status = self._get_rebuild_status()
+
+ # Business logic: Generate error-specific recommendations
+ recommendations = [
+ "Use refresh_index tool for manual updates",
+ "File watcher auto-refresh is disabled due to errors",
+ "Consider restarting the project or checking system permissions"
+ ]
+
+ return FileWatcherStatus(
+ available=True,
+ active=False,
+ status="error",
+ message=error_info.get('message', 'File watcher error occurred'),
+ error_info=error_info,
+ configuration=configuration,
+ rebuild_status=rebuild_status,
+ recommendations=recommendations
+ )
+
+ def _create_not_initialized_status(self) -> FileWatcherStatus:
+ """
+ Business logic to create not-initialized status.
+
+ Returns:
+ FileWatcherStatus for not-initialized condition
+ """
+ # Get basic configuration
+ configuration = self._get_file_watcher_configuration()
+
+ # Get rebuild status
+ rebuild_status = self._get_rebuild_status()
+
+ # Business logic: Generate initialization recommendations
+ recommendations = [
+ "Use set_project_path tool to initialize file watcher",
+ "File monitoring will be enabled after project initialization"
+ ]
+
+ return FileWatcherStatus(
+ available=True,
+ active=False,
+ status="not_initialized",
+ message="File watcher service not initialized. Set project path to enable auto-refresh.",
+ error_info=None,
+ configuration=configuration,
+ rebuild_status=rebuild_status,
+ recommendations=recommendations
+ )
+
+ def _create_active_status(self, watcher_service) -> FileWatcherStatus:
+ """
+ Business logic to create active status with comprehensive information.
+
+ Args:
+ watcher_service: Active file watcher service
+
+ Returns:
+ FileWatcherStatus for active condition
+ """
+ # Get detailed status from watcher service
+ watcher_status = watcher_service.get_status()
+
+ # Get configuration
+ configuration = self._get_file_watcher_configuration()
+
+ # Get rebuild status
+ rebuild_status = self._get_rebuild_status()
+
+ # Business logic: Generate status-specific recommendations
+ recommendations = self._generate_active_recommendations(watcher_status)
+
+ return FileWatcherStatus(
+ available=watcher_status.get('available', True),
+ active=watcher_status.get('active', False),
+ status=watcher_status.get('status', 'active'),
+ message=watcher_status.get('message'),
+ error_info=None,
+ configuration=configuration,
+ rebuild_status=rebuild_status,
+ recommendations=recommendations
+ )
+
+ def _get_file_watcher_configuration(self) -> Dict[str, Any]:
+ """
+ Business logic to get file watcher configuration safely.
+
+ Returns:
+ Configuration dictionary
+ """
+ try:
+ # Try to get from project settings
+ if (hasattr(self.ctx.request_context.lifespan_context, 'settings') and
+ self.ctx.request_context.lifespan_context.settings):
+ return self.ctx.request_context.lifespan_context.settings.get_file_watcher_config()
+
+ # Fallback to default configuration
+ return {
+ 'enabled': True,
+ 'debounce_seconds': 6.0,
+ 'additional_exclude_patterns': [],
+ 'note': 'Default configuration - project not fully initialized'
+ }
+
+ except Exception as e:
+ return {
+ 'error': f'Could not load configuration: {e}',
+ 'enabled': True,
+ 'debounce_seconds': 6.0
+ }
+
+ def _get_rebuild_status(self) -> Dict[str, Any]:
+ """
+ Business logic to get index rebuild status safely.
+
+ Returns:
+ Rebuild status dictionary
+ """
+ try:
+ index_service = IndexManagementService(self.ctx)
+ return index_service.get_rebuild_status()
+
+ except Exception as e:
+ return {
+ 'status': 'unknown',
+ 'error': f'Could not get rebuild status: {e}'
+ }
+
+ def _generate_active_recommendations(self, watcher_status: Dict[str, Any]) -> list[str]:
+ """
+ Business logic to generate recommendations for active file watcher.
+
+ Args:
+ watcher_status: Current watcher status
+
+ Returns:
+ List of recommendations
+ """
+ recommendations = []
+
+ if watcher_status.get('active', False):
+ recommendations.append("File watcher is active - automatic index updates enabled")
+ recommendations.append("Files will be re-indexed automatically when changed")
+ else:
+ recommendations.append("File watcher is available but not active")
+ recommendations.append("Use refresh_index for manual updates")
+
+ # Add performance recommendations
+ restart_attempts = watcher_status.get('restart_attempts', 0)
+ if restart_attempts > 0:
+ recommendations.append(f"File watcher has restarted {restart_attempts} times - monitor for stability")
+
+ return recommendations
+
+ def _validate_configuration_request(self, enabled: Optional[bool],
+ debounce_seconds: Optional[float],
+ additional_exclude_patterns: Optional[list]) -> None:
+ """
+ Business validation for file watcher configuration.
+
+ Args:
+ enabled: Enable flag
+ debounce_seconds: Debounce time
+ additional_exclude_patterns: Exclude patterns
+
+ Raises:
+ ValueError: If validation fails
+ """
+ # Business rule: Enabled flag must be boolean if provided
+ if enabled is not None and not isinstance(enabled, bool):
+ raise ValueError("Enabled flag must be a boolean value")
+
+ # Business rule: Debounce seconds must be reasonable
+ if debounce_seconds is not None:
+ if debounce_seconds < 0.1:
+ raise ValueError("Debounce seconds must be at least 0.1")
+ if debounce_seconds > 300: # 5 minutes
+ raise ValueError("Debounce seconds cannot exceed 300 (5 minutes)")
+
+ # Business rule: Exclude patterns must be valid
+ if additional_exclude_patterns is not None:
+ if not isinstance(additional_exclude_patterns, list):
+ raise ValueError("Additional exclude patterns must be a list")
+
+ for pattern in additional_exclude_patterns:
+ if not isinstance(pattern, str):
+ raise ValueError("All exclude patterns must be strings")
+ if not pattern.strip():
+ raise ValueError("Exclude patterns cannot be empty")
+
+ def _apply_file_watcher_configuration(self, enabled: Optional[bool],
+ debounce_seconds: Optional[float],
+ additional_exclude_patterns: Optional[list]) -> str:
+ """
+ Business logic to apply file watcher configuration.
+
+ Args:
+ enabled: Enable flag
+ debounce_seconds: Debounce time
+ additional_exclude_patterns: Exclude patterns
+
+ Returns:
+ Success message
+
+ Raises:
+ ValueError: If configuration cannot be applied
+ """
+ # Business rule: Settings must be available
+ if (not hasattr(self.ctx.request_context.lifespan_context, 'settings') or
+ not self.ctx.request_context.lifespan_context.settings):
+ raise ValueError("Settings not available - project path not set")
+
+ settings = self.ctx.request_context.lifespan_context.settings
+
+ # Build updates dictionary
+ updates = {}
+ if enabled is not None:
+ updates["enabled"] = enabled
+ if debounce_seconds is not None:
+ updates["debounce_seconds"] = debounce_seconds
+ if additional_exclude_patterns is not None:
+ updates["additional_exclude_patterns"] = additional_exclude_patterns
+
+ if not updates:
+ return "No configuration changes specified"
+
+ # Apply configuration
+ settings.update_file_watcher_config(updates)
+
+ # Business logic: Generate informative result message
+ changes_summary = []
+ if 'enabled' in updates:
+ changes_summary.append(f"enabled={updates['enabled']}")
+ if 'debounce_seconds' in updates:
+ changes_summary.append(f"debounce={updates['debounce_seconds']}s")
+ if 'additional_exclude_patterns' in updates:
+ pattern_count = len(updates['additional_exclude_patterns'])
+ changes_summary.append(f"exclude_patterns={pattern_count}")
+
+ changes_str = ", ".join(changes_summary)
+
+ return (f"File watcher configuration updated: {changes_str}. "
+ f"Restart may be required for changes to take effect.")
+
+ def _format_status_result(self, status_result: FileWatcherStatus) -> Dict[str, Any]:
+ """
+ Format the status result according to business requirements.
+
+ Args:
+ status_result: Status analysis result
+
+ Returns:
+ Formatted result dictionary for MCP response
+ """
+ result = {
+ 'available': status_result.available,
+ 'active': status_result.active,
+ 'status': status_result.status,
+ 'configuration': status_result.configuration,
+ 'rebuild_status': status_result.rebuild_status,
+ 'recommendations': status_result.recommendations
+ }
+
+ # Add optional fields
+ if status_result.message:
+ result['message'] = status_result.message
+
+ if status_result.error_info:
+ result['error'] = status_result.error_info
+ result['manual_refresh_required'] = True
+
+ return result
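The validation rules enforced by `_validate_configuration_request` above can be sketched as a standalone function. The function name here is illustrative, not part of the service API; the thresholds (0.1s minimum, 300s maximum debounce) are taken directly from the rules above:

```python
from typing import Optional

def validate_watcher_config(enabled: Optional[bool],
                            debounce_seconds: Optional[float],
                            exclude_patterns: Optional[list]) -> None:
    """Raise ValueError when a proposed file-watcher config breaks a rule."""
    if enabled is not None and not isinstance(enabled, bool):
        raise ValueError("Enabled flag must be a boolean value")
    if debounce_seconds is not None:
        # Debounce must stay between 0.1 seconds and 300 seconds (5 minutes)
        if not 0.1 <= debounce_seconds <= 300:
            raise ValueError("Debounce seconds must be between 0.1 and 300")
    if exclude_patterns is not None:
        if not isinstance(exclude_patterns, list):
            raise ValueError("Additional exclude patterns must be a list")
        for pattern in exclude_patterns:
            if not isinstance(pattern, str) or not pattern.strip():
                raise ValueError("Exclude patterns must be non-empty strings")
```

Validating before applying keeps `_apply_file_watcher_configuration` free of input-checking concerns, matching the layering the module describes.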
diff --git a/src/code_index_mcp/tools/__init__.py b/src/code_index_mcp/tools/__init__.py
new file mode 100644
index 0000000..f69d664
--- /dev/null
+++ b/src/code_index_mcp/tools/__init__.py
@@ -0,0 +1,19 @@
+"""
+Tool Layer - Technical components for the Code Index MCP server.
+
+This package contains pure technical components that provide specific
+capabilities without business logic. These tools are composed by the
+business layer to achieve business goals.
+"""
+
+from .filesystem import FileMatchingTool, FileSystemTool
+from .config import ProjectConfigTool, SettingsTool
+from .monitoring import FileWatcherTool
+
+__all__ = [
+ 'FileMatchingTool',
+ 'FileSystemTool',
+ 'ProjectConfigTool',
+ 'SettingsTool',
+ 'FileWatcherTool'
+]
diff --git a/src/code_index_mcp/tools/config/__init__.py b/src/code_index_mcp/tools/config/__init__.py
new file mode 100644
index 0000000..12d4304
--- /dev/null
+++ b/src/code_index_mcp/tools/config/__init__.py
@@ -0,0 +1,8 @@
+"""
+Configuration Tools - Technical components for configuration management.
+"""
+
+from .project_config_tool import ProjectConfigTool
+from .settings_tool import SettingsTool
+
+__all__ = ['ProjectConfigTool', 'SettingsTool']
diff --git a/src/code_index_mcp/tools/config/project_config_tool.py b/src/code_index_mcp/tools/config/project_config_tool.py
new file mode 100644
index 0000000..c2738dd
--- /dev/null
+++ b/src/code_index_mcp/tools/config/project_config_tool.py
@@ -0,0 +1,308 @@
+"""
+Project Configuration Tool - Pure technical component for project configuration operations.
+
+This tool handles low-level project configuration operations without any business logic.
+"""
+
+import os
+from typing import Dict, Any, Optional
+from pathlib import Path
+
+from ...project_settings import ProjectSettings
+
+
+class ProjectConfigTool:
+ """
+ Pure technical component for project configuration operations.
+
+ This tool provides low-level configuration management capabilities
+ without any business logic or decision making.
+ """
+
+ def __init__(self):
+ self._settings: Optional[ProjectSettings] = None
+ self._project_path: Optional[str] = None
+
+ def initialize_settings(self, project_path: str) -> ProjectSettings:
+ """
+ Initialize project settings for the given path.
+
+ Args:
+ project_path: Absolute path to the project directory
+
+ Returns:
+ ProjectSettings instance
+
+ Raises:
+ ValueError: If project path is invalid
+ """
+ if not Path(project_path).exists():
+ raise ValueError(f"Project path does not exist: {project_path}")
+
+ if not Path(project_path).is_dir():
+ raise ValueError(f"Project path is not a directory: {project_path}")
+
+ self._project_path = project_path
+ self._settings = ProjectSettings(project_path, skip_load=False)
+
+ return self._settings
+
+ def load_existing_index(self) -> Optional[Dict[str, Any]]:
+ """
+ Load existing index data if available.
+
+ Returns:
+ Index data dictionary or None if not available
+
+ Raises:
+ RuntimeError: If settings not initialized
+ """
+ if not self._settings:
+ raise RuntimeError("Settings not initialized. Call initialize_settings() first.")
+
+ try:
+ return self._settings.load_index()
+ except Exception:
+ return None
+
+ def save_project_config(self, config_data: Dict[str, Any]) -> None:
+ """
+ Save project configuration data.
+
+ Args:
+ config_data: Configuration data to save
+
+ Raises:
+ RuntimeError: If settings not initialized
+ """
+ if not self._settings:
+ raise RuntimeError("Settings not initialized")
+
+ self._settings.save_config(config_data)
+
+ def save_index_data(self, index_data: Dict[str, Any]) -> None:
+ """
+ Save index data to persistent storage.
+
+ Args:
+ index_data: Index data to save
+
+ Raises:
+ RuntimeError: If settings not initialized
+ """
+ if not self._settings:
+ raise RuntimeError("Settings not initialized")
+
+ self._settings.save_index(index_data)
+
+ def check_index_version(self) -> bool:
+ """
+ Check if JSON index is the latest version.
+
+ Returns:
+ True if the JSON index exists and is loaded, False if a rebuild is needed
+
+ Raises:
+ RuntimeError: If settings not initialized
+ """
+ if not self._settings:
+ raise RuntimeError("Settings not initialized")
+
+ # Check if JSON index exists and is fresh
+ from ...indexing import get_index_manager
+ index_manager = get_index_manager()
+
+ # Set project path if available
+ if self._settings.base_path:
+ index_manager.set_project_path(self._settings.base_path)
+ stats = index_manager.get_index_stats()
+ return stats.get('status') == 'loaded'
+
+ return False
+
+ def cleanup_legacy_files(self) -> None:
+ """
+ Clean up legacy index files.
+
+ Raises:
+ RuntimeError: If settings not initialized
+ """
+ if not self._settings:
+ raise RuntimeError("Settings not initialized")
+
+ self._settings.cleanup_legacy_files()
+
+ def get_search_tool_info(self) -> Dict[str, Any]:
+ """
+ Get information about available search tools.
+
+ Returns:
+ Dictionary with search tool information
+
+ Raises:
+ RuntimeError: If settings not initialized
+ """
+ if not self._settings:
+ raise RuntimeError("Settings not initialized")
+
+ search_tool = self._settings.get_preferred_search_tool()
+ return {
+ 'available': search_tool is not None,
+ 'name': search_tool.name if search_tool else None,
+ 'description': "Advanced search enabled" if search_tool else "Basic search available"
+ }
+
+ def get_file_watcher_config(self) -> Dict[str, Any]:
+ """
+ Get file watcher configuration.
+
+ Returns:
+ File watcher configuration dictionary
+
+ Raises:
+ RuntimeError: If settings not initialized
+ """
+ if not self._settings:
+ raise RuntimeError("Settings not initialized")
+
+ return self._settings.get_file_watcher_config()
+
+ def create_default_config(self, project_path: str) -> Dict[str, Any]:
+ """
+ Create default project configuration.
+
+ Args:
+ project_path: Project path for the configuration
+
+ Returns:
+ Default configuration dictionary
+ """
+ from ...utils import FileFilter
+
+ file_filter = FileFilter()
+ return {
+ "base_path": project_path,
+ "supported_extensions": list(file_filter.supported_extensions),
+ "last_indexed": None,
+ "file_watcher": self.get_file_watcher_config() if self._settings else {}
+ }
+
+ def validate_project_path(self, path: str) -> Optional[str]:
+ """
+ Validate project path.
+
+ Args:
+ path: Path to validate
+
+ Returns:
+ Error message if invalid, None if valid
+ """
+ if not path or not path.strip():
+ return "Project path cannot be empty"
+
+ try:
+ norm_path = os.path.normpath(path)
+ abs_path = os.path.abspath(norm_path)
+ except (OSError, ValueError) as e:
+ return f"Invalid path format: {str(e)}"
+
+ if not os.path.exists(abs_path):
+ return f"Path does not exist: {abs_path}"
+
+ if not os.path.isdir(abs_path):
+ return f"Path is not a directory: {abs_path}"
+
+ return None
+
+ def normalize_project_path(self, path: str) -> str:
+ """
+ Normalize and get absolute project path.
+
+ Args:
+ path: Path to normalize
+
+ Returns:
+ Normalized absolute path
+ """
+ norm_path = os.path.normpath(path)
+ return os.path.abspath(norm_path)
+
+ def get_settings_path(self) -> Optional[str]:
+ """
+ Get the settings directory path.
+
+ Returns:
+ Settings directory path or None if not initialized
+ """
+ return self._settings.settings_path if self._settings else None
+
+ def get_project_path(self) -> Optional[str]:
+ """
+ Get the current project path.
+
+ Returns:
+ Project path or None if not set
+ """
+ return self._project_path
+
+ def get_basic_project_structure(self, project_path: str) -> Dict[str, Any]:
+ """
+ Get basic project directory structure.
+
+ Args:
+ project_path: Path to analyze
+
+ Returns:
+ Basic directory structure dictionary
+ """
+ from ...utils import FileFilter
+
+ file_filter = FileFilter()
+
+ def build_tree(path: str, max_depth: int = 3, current_depth: int = 0) -> Dict[str, Any]:
+ """Build directory tree with limited depth using centralized filtering."""
+ if current_depth >= max_depth:
+ return {"type": "directory", "truncated": True}
+
+ try:
+ items = []
+ path_obj = Path(path)
+
+ for item in sorted(path_obj.iterdir()):
+ if item.is_dir():
+ # Use centralized directory filtering
+ if not file_filter.should_exclude_directory(item.name):
+ items.append({
+ "name": item.name,
+ "type": "directory",
+ "children": build_tree(str(item), max_depth, current_depth + 1)
+ })
+ else:
+ # Use centralized file filtering
+ if not file_filter.should_exclude_file(item):
+ items.append({
+ "name": item.name,
+ "type": "file",
+ "size": item.stat().st_size if item.exists() else 0
+ })
+
+ return {"type": "directory", "children": items}
+
+ except (OSError, PermissionError):
+ return {"type": "directory", "error": "Access denied"}
+
+ try:
+ root_name = Path(project_path).name
+ structure = {
+ "name": root_name,
+ "path": project_path,
+ "type": "directory",
+ "children": build_tree(project_path)["children"]
+ }
+ return structure
+
+ except Exception as e:
+ return {
+ "error": f"Failed to build project structure: {e}",
+ "path": project_path
+ }
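The path-validation contract above (return an error string for bad input, `None` for a valid directory) is easy to exercise standalone; the mirrored function below is for illustration only, and the OS temp directory is used purely as a convenient known-good path:

```python
import os
import tempfile

def validate_project_path(path: str):
    """Mirror of ProjectConfigTool.validate_project_path for illustration."""
    if not path or not path.strip():
        return "Project path cannot be empty"
    try:
        abs_path = os.path.abspath(os.path.normpath(path))
    except (OSError, ValueError) as e:
        return f"Invalid path format: {e}"
    if not os.path.exists(abs_path):
        return f"Path does not exist: {abs_path}"
    if not os.path.isdir(abs_path):
        return f"Path is not a directory: {abs_path}"
    return None

# A real directory passes; empty and nonexistent paths yield messages.
assert validate_project_path(tempfile.gettempdir()) is None
assert validate_project_path("") == "Project path cannot be empty"
```

Returning an error string rather than raising lets callers format MCP error responses without exception handling at the boundary.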
diff --git a/src/code_index_mcp/tools/config/settings_tool.py b/src/code_index_mcp/tools/config/settings_tool.py
new file mode 100644
index 0000000..51fe0dc
--- /dev/null
+++ b/src/code_index_mcp/tools/config/settings_tool.py
@@ -0,0 +1,100 @@
+"""
+Settings Tool - Pure technical component for settings operations.
+
+This tool handles low-level settings operations without any business logic.
+"""
+
+import os
+import tempfile
+from typing import Dict, Any
+
+from ...constants import SETTINGS_DIR
+
+
+class SettingsTool:
+ """
+ Pure technical component for settings operations.
+
+ This tool provides low-level settings management capabilities
+ without any business logic or decision making.
+ """
+
+ def __init__(self):
+ pass
+
+ def get_temp_directory_path(self) -> str:
+ """
+ Get the path to the temporary directory for settings.
+
+ Returns:
+ Path to the temporary settings directory
+ """
+ return os.path.join(tempfile.gettempdir(), SETTINGS_DIR)
+
+ def create_temp_directory(self) -> Dict[str, Any]:
+ """
+ Create the temporary directory for settings.
+
+ Returns:
+ Dictionary with creation results
+ """
+ temp_dir = self.get_temp_directory_path()
+ existed_before = os.path.exists(temp_dir)
+
+ try:
+ os.makedirs(temp_dir, exist_ok=True)
+
+ return {
+ "temp_directory": temp_dir,
+ "exists": os.path.exists(temp_dir),
+ "is_directory": os.path.isdir(temp_dir),
+ "existed_before": existed_before,
+ "created": not existed_before
+ }
+
+ except (OSError, IOError) as e:
+ return {
+ "temp_directory": temp_dir,
+ "exists": False,
+ "error": str(e)
+ }
+
+ def check_temp_directory(self) -> Dict[str, Any]:
+ """
+ Check the status of the temporary directory.
+
+ Returns:
+ Dictionary with directory status information
+ """
+ temp_dir = self.get_temp_directory_path()
+
+ result = {
+ "temp_directory": temp_dir,
+ "temp_root": tempfile.gettempdir(),
+ "exists": os.path.exists(temp_dir),
+ "is_directory": os.path.isdir(temp_dir) if os.path.exists(temp_dir) else False
+ }
+
+ # If the directory exists, list its contents
+ if result["exists"] and result["is_directory"]:
+ try:
+ contents = os.listdir(temp_dir)
+ result["contents"] = contents
+ result["subdirectories"] = []
+
+ # Check each subdirectory
+ for item in contents:
+ item_path = os.path.join(temp_dir, item)
+ if os.path.isdir(item_path):
+ subdir_info = {
+ "name": item,
+ "path": item_path,
+ "contents": os.listdir(item_path) if os.path.exists(item_path) else []
+ }
+ result["subdirectories"].append(subdir_info)
+
+ except (OSError, PermissionError) as e:
+ result["error"] = str(e)
+
+ return result
+
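The temp-directory bookkeeping in `create_temp_directory` (record whether the directory pre-existed, create idempotently, report the result) can be sketched as follows. `SETTINGS_DIR` here is a hypothetical stand-in for the real value imported from `constants`:

```python
import os
import tempfile

SETTINGS_DIR = "code_indexer"  # hypothetical; the real value comes from constants

def create_temp_directory():
    """Create the settings directory under the OS temp root, idempotently."""
    temp_dir = os.path.join(tempfile.gettempdir(), SETTINGS_DIR)
    existed_before = os.path.exists(temp_dir)
    os.makedirs(temp_dir, exist_ok=True)  # no error if it already exists
    return {
        "temp_directory": temp_dir,
        "existed_before": existed_before,
        "created": not existed_before,
        "exists": os.path.isdir(temp_dir),
    }

info = create_temp_directory()
assert info["exists"] is True
```

Capturing `existed_before` before the `makedirs` call is what makes the `created` flag meaningful; `exist_ok=True` alone would hide that distinction.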
diff --git a/src/code_index_mcp/tools/filesystem/__init__.py b/src/code_index_mcp/tools/filesystem/__init__.py
new file mode 100644
index 0000000..e8f9798
--- /dev/null
+++ b/src/code_index_mcp/tools/filesystem/__init__.py
@@ -0,0 +1,8 @@
+"""
+Filesystem Tools - Technical components for file system operations.
+"""
+
+from .file_matching_tool import FileMatchingTool
+from .file_system_tool import FileSystemTool
+
+__all__ = ['FileMatchingTool', 'FileSystemTool']
diff --git a/src/code_index_mcp/tools/filesystem/file_matching_tool.py b/src/code_index_mcp/tools/filesystem/file_matching_tool.py
new file mode 100644
index 0000000..22ebdf6
--- /dev/null
+++ b/src/code_index_mcp/tools/filesystem/file_matching_tool.py
@@ -0,0 +1,215 @@
+"""
+File Matching Tool - Pure technical component for pattern matching operations.
+
+This tool handles file pattern matching without any business logic.
+It provides technical capabilities for finding files based on various patterns.
+"""
+
+import fnmatch
+from typing import List, Set
+from pathlib import Path
+
+# FileInfo defined locally for file matching operations
+from dataclasses import dataclass
+
+@dataclass
+class FileInfo:
+ """File information structure."""
+ relative_path: str
+ language: str
+
+
+class FileMatchingTool:
+ """
+ Pure technical component for file pattern matching.
+
+ This tool provides low-level pattern matching capabilities without
+ any business logic. It can match files using glob patterns, regex,
+ or other matching strategies.
+ """
+
+ def __init__(self):
+ pass
+
+ def match_glob_pattern(self, files: List[FileInfo], pattern: str) -> List[FileInfo]:
+ """
+ Match files using glob pattern.
+
+ Args:
+ files: List of FileInfo objects to search through
+ pattern: Glob pattern (e.g., "*.py", "test_*.js", "src/**/*.ts")
+
+ Returns:
+ List of FileInfo objects that match the pattern
+ """
+ if not pattern:
+ return files
+
+ matched_files = []
+
+ for file_info in files:
+ # Try matching against full path
+ if fnmatch.fnmatch(file_info.relative_path, pattern):
+ matched_files.append(file_info)
+ continue
+
+ # Try matching against just the filename
+ filename = Path(file_info.relative_path).name
+ if fnmatch.fnmatch(filename, pattern):
+ matched_files.append(file_info)
+
+ return matched_files
+
+ def match_multiple_patterns(self, files: List[FileInfo], patterns: List[str]) -> List[FileInfo]:
+ """
+ Match files using multiple glob patterns (OR logic).
+
+ Args:
+ files: List of FileInfo objects to search through
+ patterns: List of glob patterns
+
+ Returns:
+ List of FileInfo objects that match any of the patterns
+ """
+ if not patterns:
+ return files
+
+ matched_files = set()
+
+ for pattern in patterns:
+ pattern_matches = self.match_glob_pattern(files, pattern)
+ matched_files.update(pattern_matches)
+
+ return list(matched_files)
+
+ def match_by_language(self, files: List[FileInfo], languages: List[str]) -> List[FileInfo]:
+ """
+ Match files by programming language.
+
+ Args:
+ files: List of FileInfo objects to search through
+ languages: List of language names (e.g., ["python", "javascript"])
+
+ Returns:
+ List of FileInfo objects with matching languages
+ """
+ if not languages:
+ return files
+
+ # Normalize language names for comparison
+ normalized_languages = {lang.lower() for lang in languages}
+
+ matched_files = []
+ for file_info in files:
+ if file_info.language.lower() in normalized_languages:
+ matched_files.append(file_info)
+
+ return matched_files
+
+ def match_by_directory(self, files: List[FileInfo], directory_patterns: List[str]) -> List[FileInfo]:
+ """
+ Match files by directory patterns.
+
+ Args:
+ files: List of FileInfo objects to search through
+ directory_patterns: List of directory patterns (e.g., ["src/*", "test/**"])
+
+ Returns:
+ List of FileInfo objects in matching directories
+ """
+ if not directory_patterns:
+ return files
+
+ matched_files = []
+
+ for file_info in files:
+ file_dir = str(Path(file_info.relative_path).parent)
+
+ for dir_pattern in directory_patterns:
+ if fnmatch.fnmatch(file_dir, dir_pattern):
+ matched_files.append(file_info)
+ break
+
+ return matched_files
+
+ def exclude_patterns(self, files: List[FileInfo], exclude_patterns: List[str]) -> List[FileInfo]:
+ """
+ Exclude files matching the given patterns.
+
+ Args:
+ files: List of FileInfo objects to filter
+ exclude_patterns: List of patterns to exclude
+
+ Returns:
+ List of FileInfo objects that don't match any exclude pattern
+ """
+ if not exclude_patterns:
+ return files
+
+ filtered_files = []
+
+ for file_info in files:
+ should_exclude = False
+
+ for exclude_pattern in exclude_patterns:
+ if (fnmatch.fnmatch(file_info.relative_path, exclude_pattern) or
+ fnmatch.fnmatch(Path(file_info.relative_path).name, exclude_pattern)):
+ should_exclude = True
+ break
+
+ if not should_exclude:
+ filtered_files.append(file_info)
+
+ return filtered_files
+
+ def sort_by_relevance(self, files: List[FileInfo], pattern: str) -> List[FileInfo]:
+ """
+ Sort files by relevance to the search pattern.
+
+ Args:
+ files: List of FileInfo objects to sort
+ pattern: Original search pattern for relevance scoring
+
+ Returns:
+ List of FileInfo objects sorted by relevance (most relevant first)
+ """
+ def relevance_score(file_info: FileInfo) -> int:
+ """Calculate relevance score for a file."""
+ score = 0
+ filename = Path(file_info.relative_path).name
+
+ # Exact filename match gets highest score
+ if filename == pattern:
+ score += 100
+
+ # Filename starts with pattern
+ elif filename.startswith(pattern.replace('*', '')):
+ score += 50
+
+ # Pattern appears in filename
+ elif pattern.replace('*', '') in filename:
+ score += 25
+
+ # Shorter paths are generally more relevant
+ path_depth = len(Path(file_info.relative_path).parts)
+ score += max(0, 10 - path_depth)
+
+ return score
+
+ return sorted(files, key=relevance_score, reverse=True)
+
+ def limit_results(self, files: List[FileInfo], max_results: int) -> List[FileInfo]:
+ """
+ Limit the number of results returned.
+
+ Args:
+ files: List of FileInfo objects
+ max_results: Maximum number of results to return
+
+ Returns:
+ List of FileInfo objects limited to max_results (all files if max_results is not positive)
+ """
+ if max_results <= 0:
+ return files
+
+ return files[:max_results]
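The dual matching strategy used by `match_glob_pattern` — try the full relative path first, then fall back to the bare filename — can be exercised standalone. Note that `fnmatch`'s `*` also crosses `/`, which is why `*.py` matches files in subdirectories here:

```python
import fnmatch
from dataclasses import dataclass
from pathlib import Path

@dataclass(frozen=True)
class FileInfo:
    """File information structure, as in the module above."""
    relative_path: str
    language: str

def match_glob_pattern(files, pattern):
    """Match against the full relative path first, then the bare filename."""
    matched = []
    for info in files:
        if (fnmatch.fnmatch(info.relative_path, pattern) or
                fnmatch.fnmatch(Path(info.relative_path).name, pattern)):
            matched.append(info)
    return matched

files = [FileInfo("src/app.py", "python"),
         FileInfo("docs/readme.md", "markdown"),
         FileInfo("test_app.py", "python")]
assert [f.relative_path for f in match_glob_pattern(files, "*.py")] == \
    ["src/app.py", "test_app.py"]
assert [f.relative_path for f in match_glob_pattern(files, "test_*")] == \
    ["test_app.py"]
```

The filename fallback is what lets a user type `test_*` and still hit files nested under arbitrary directories.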
diff --git a/src/code_index_mcp/tools/filesystem/file_system_tool.py b/src/code_index_mcp/tools/filesystem/file_system_tool.py
new file mode 100644
index 0000000..fac9f5d
--- /dev/null
+++ b/src/code_index_mcp/tools/filesystem/file_system_tool.py
@@ -0,0 +1,234 @@
+"""
+File System Tool - Pure technical component for file system operations.
+
+This tool handles low-level file system operations without any business logic.
+"""
+
+import os
+from typing import Dict, Any, Optional
+from pathlib import Path
+
+
+class FileSystemTool:
+ """
+ Pure technical component for file system operations.
+
+ This tool provides low-level file system capabilities without
+ any business logic or decision making.
+ """
+
+ def __init__(self):
+ pass
+
+ def get_file_stats(self, file_path: str) -> Dict[str, Any]:
+ """
+ Get basic file system statistics for a file.
+
+ Args:
+ file_path: Absolute path to the file
+
+ Returns:
+ Dictionary with file statistics
+
+ Raises:
+ FileNotFoundError: If file doesn't exist
+ OSError: If file cannot be accessed
+ """
+ if not os.path.exists(file_path):
+ raise FileNotFoundError(f"File not found: {file_path}")
+
+ try:
+ stat_info = os.stat(file_path)
+ path_obj = Path(file_path)
+
+ return {
+ 'size_bytes': stat_info.st_size,
+ 'modified_time': stat_info.st_mtime,
+ 'created_time': stat_info.st_ctime,
+ 'is_file': path_obj.is_file(),
+ 'is_directory': path_obj.is_dir(),
+ 'extension': path_obj.suffix,
+ 'name': path_obj.name,
+ 'parent': str(path_obj.parent)
+ }
+
+ except OSError as e:
+ raise OSError(f"Cannot access file {file_path}: {e}") from e
+
+ def read_file_content(self, file_path: str) -> str:
+ """
+ Read file content, trying UTF-8 first and falling back to other common encodings.
+
+ Args:
+ file_path: Absolute path to the file
+
+ Returns:
+ File content as string
+
+ Raises:
+ FileNotFoundError: If file doesn't exist
+ ValueError: If file cannot be decoded
+ """
+ if not os.path.exists(file_path):
+ raise FileNotFoundError(f"File not found: {file_path}")
+
+ # Try UTF-8 first (most common)
+ try:
+ with open(file_path, 'r', encoding='utf-8') as f:
+ return f.read()
+ except UnicodeDecodeError:
+ pass
+
+ # Try other common encodings
+ encodings = ['utf-8-sig', 'latin-1', 'cp1252', 'iso-8859-1']
+ for encoding in encodings:
+ try:
+ with open(file_path, 'r', encoding=encoding) as f:
+ return f.read()
+ except UnicodeDecodeError:
+ continue
+
+ raise ValueError(f"Could not decode file {file_path} with any supported encoding")
+
+ def count_lines(self, file_path: str) -> int:
+ """
+ Count the number of lines in a file.
+
+ Args:
+ file_path: Absolute path to the file
+
+ Returns:
+ Number of lines in the file, or 0 if the file cannot be read
+ """
+ try:
+ content = self.read_file_content(file_path)
+ return len(content.splitlines())
+ except Exception:
+ # If we can't read the file, return 0
+ return 0
+
+ def detect_language_from_extension(self, file_path: str) -> str:
+ """
+ Detect programming language from file extension.
+
+ Args:
+ file_path: Path to the file
+
+ Returns:
+ Language name or 'unknown'
+ """
+ extension = Path(file_path).suffix.lower()
+
+ lang_map = {
+ '.py': 'python',
+ '.js': 'javascript',
+ '.jsx': 'javascript',
+ '.ts': 'typescript',
+ '.tsx': 'typescript',
+ '.java': 'java',
+ '.cpp': 'cpp',
+ '.cxx': 'cpp',
+ '.cc': 'cpp',
+ '.c': 'c',
+ '.h': 'c',
+ '.hpp': 'cpp',
+ '.hxx': 'cpp',
+ '.cs': 'csharp',
+ '.go': 'go',
+ '.rs': 'rust',
+ '.php': 'php',
+ '.rb': 'ruby',
+ '.swift': 'swift',
+ '.kt': 'kotlin',
+ '.scala': 'scala',
+ '.m': 'objc',
+ '.mm': 'objc',
+ '.html': 'html',
+ '.htm': 'html',
+ '.css': 'css',
+ '.scss': 'scss',
+ '.sass': 'sass',
+ '.less': 'less',
+ '.json': 'json',
+ '.xml': 'xml',
+ '.yaml': 'yaml',
+ '.yml': 'yaml',
+ '.md': 'markdown',
+ '.txt': 'text',
+ '.sh': 'shell',
+ '.bash': 'shell',
+ '.zsh': 'shell',
+ '.fish': 'shell',
+ '.ps1': 'powershell',
+ '.bat': 'batch',
+ '.cmd': 'batch'
+ }
+
+ return lang_map.get(extension, 'unknown')
+
+ def is_text_file(self, file_path: str) -> bool:
+ """
+ Check if a file is likely a text file.
+
+ Args:
+ file_path: Path to the file
+
+ Returns:
+ True if file appears to be text, False otherwise
+ """
+ try:
+ # Try to read a small portion of the file
+ with open(file_path, 'rb') as f:
+ chunk = f.read(1024)
+
+ # Check for null bytes (common in binary files)
+ if b'\x00' in chunk:
+ return False
+
+ # Try to decode as UTF-8
+ try:
+ chunk.decode('utf-8')
+ return True
+ except UnicodeDecodeError:
+ # Try other encodings
+ for encoding in ['latin-1', 'cp1252']:
+ try:
+ chunk.decode(encoding)
+ return True
+ except UnicodeDecodeError:
+ continue
+
+ return False
+
+ except Exception:
+ return False
+
+ def get_file_size_category(self, file_path: str) -> str:
+ """
+ Categorize file size for analysis purposes.
+
+ Args:
+ file_path: Path to the file
+
+ Returns:
+ Size category: 'tiny', 'small', 'medium', 'large', 'very_large', or 'unknown'
+ """
+ try:
+ size = os.path.getsize(file_path)
+
+ if size < 1024: # < 1KB
+ return 'tiny'
+ elif size < 10 * 1024: # < 10KB
+ return 'small'
+ elif size < 100 * 1024: # < 100KB
+ return 'medium'
+ elif size < 1024 * 1024: # < 1MB
+ return 'large'
+ else:
+ return 'very_large'
+
+ except Exception:
+ return 'unknown'
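The encoding-fallback chain in `read_file_content` can be sketched standalone. One property worth noting: `latin-1` maps every possible byte, so the loop in practice always succeeds by the third attempt, and the final `ValueError` is effectively unreachable (the original code shares this property):

```python
import os
import tempfile

def read_text_with_fallback(path: str) -> str:
    """Try UTF-8 first, then common fallbacks, mirroring the logic above."""
    for encoding in ("utf-8", "utf-8-sig", "latin-1", "cp1252"):
        try:
            with open(path, "r", encoding=encoding) as f:
                return f.read()
        except UnicodeDecodeError:
            continue
    # Unreachable in practice: latin-1 decodes any byte sequence.
    raise ValueError(f"Could not decode file {path}")

# Bytes that are invalid UTF-8 still decode via the latin-1 fallback.
with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as tmp:
    tmp.write(b"caf\xe9")  # latin-1 encoded "café"
assert read_text_with_fallback(tmp.name) == "café"
os.unlink(tmp.name)
```

Trying UTF-8 before `utf-8-sig` keeps the common case on the fast path; the BOM-aware variant only matters for files saved by BOM-emitting editors.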
diff --git a/src/code_index_mcp/tools/monitoring/__init__.py b/src/code_index_mcp/tools/monitoring/__init__.py
new file mode 100644
index 0000000..6da231e
--- /dev/null
+++ b/src/code_index_mcp/tools/monitoring/__init__.py
@@ -0,0 +1,7 @@
+"""
+Monitoring Tools - Technical components for file monitoring operations.
+"""
+
+from .file_watcher_tool import FileWatcherTool
+
+__all__ = ['FileWatcherTool']
\ No newline at end of file
diff --git a/src/code_index_mcp/tools/monitoring/file_watcher_tool.py b/src/code_index_mcp/tools/monitoring/file_watcher_tool.py
new file mode 100644
index 0000000..3671952
--- /dev/null
+++ b/src/code_index_mcp/tools/monitoring/file_watcher_tool.py
@@ -0,0 +1,134 @@
+"""
+File Watcher Tool - Pure technical component for file monitoring operations.
+
+This tool handles low-level file watching operations without any business logic.
+"""
+
+import time
+from typing import Optional, Callable
+from ...utils import ContextHelper
+from ...services.file_watcher_service import FileWatcherService
+
+
+class FileWatcherTool:
+ """
+ Pure technical component for file monitoring operations.
+
+ This tool provides low-level file watching capabilities without
+ any business logic or decision making.
+ """
+
+ def __init__(self, ctx):
+ self._ctx = ctx
+ self._file_watcher_service: Optional[FileWatcherService] = None
+
+
+ def create_watcher(self) -> FileWatcherService:
+ """
+ Create a new file watcher service instance.
+
+ Returns:
+ FileWatcherService instance
+ """
+ self._file_watcher_service = FileWatcherService(self._ctx)
+ return self._file_watcher_service
+
+ def start_monitoring(self, project_path: str, rebuild_callback: Callable) -> bool:
+ """
+ Start file monitoring for the given project path.
+
+ Args:
+ project_path: Path to monitor
+ rebuild_callback: Callback function for rebuild events
+
+ Returns:
+ True if monitoring started successfully, False otherwise
+ """
+ if not self._file_watcher_service:
+ self._file_watcher_service = self.create_watcher()
+
+ # A configured base path that differs from project_path is tolerated
+ # here; monitoring simply proceeds against project_path.
+ helper = ContextHelper(self._ctx)
+ if helper.base_path and helper.base_path != project_path:
+ pass
+
+ return self._file_watcher_service.start_monitoring(rebuild_callback)
+
+ def stop_monitoring(self) -> None:
+ """Stop file monitoring if active."""
+ if self._file_watcher_service:
+ self._file_watcher_service.stop_monitoring()
+
+ def is_monitoring_active(self) -> bool:
+ """
+ Check if file monitoring is currently active.
+
+ Returns:
+ True if monitoring is active, False otherwise
+ """
+ return (self._file_watcher_service is not None and
+ self._file_watcher_service.is_active())
+
+ def get_monitoring_status(self) -> dict:
+ """
+ Get current monitoring status.
+
+ Returns:
+ Dictionary with monitoring status information
+ """
+ if not self._file_watcher_service:
+ return {
+ 'active': False,
+ 'available': True,
+ 'status': 'not_initialized'
+ }
+
+ return self._file_watcher_service.get_status()
+
+ def store_in_context(self) -> None:
+ """Store the file watcher service in the MCP context."""
+ if (self._file_watcher_service and
+ hasattr(self._ctx.request_context.lifespan_context, '__dict__')):
+ self._ctx.request_context.lifespan_context.file_watcher_service = self._file_watcher_service
+
+ def get_from_context(self) -> Optional[FileWatcherService]:
+ """
+ Get existing file watcher service from context.
+
+ Returns:
+ FileWatcherService instance or None if not found
+ """
+ if hasattr(self._ctx.request_context.lifespan_context, 'file_watcher_service'):
+ return self._ctx.request_context.lifespan_context.file_watcher_service
+ return None
+
+ def stop_existing_watcher(self) -> None:
+ """Stop any existing file watcher from context."""
+ existing_watcher = self.get_from_context()
+ if existing_watcher:
+ existing_watcher.stop_monitoring()
+ # Clear reference
+ if hasattr(self._ctx.request_context.lifespan_context, '__dict__'):
+ self._ctx.request_context.lifespan_context.file_watcher_service = None
+
+
+ def record_error(self, error_message: str) -> None:
+ """
+ Record file watcher error in context for status reporting.
+
+ Args:
+ error_message: Error message to record
+ """
+ error_info = {
+ 'status': 'failed',
+ 'message': f'{error_message}. Auto-refresh disabled. Please use manual refresh.',
+ 'timestamp': time.time(),
+ 'manual_refresh_required': True
+ }
+
+ # Store error in context for status reporting
+ if hasattr(self._ctx.request_context.lifespan_context, '__dict__'):
+ self._ctx.request_context.lifespan_context.file_watcher_error = error_info
+
+
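The error record written by `record_error` has a fixed shape that status consumers rely on. A standalone sketch of that shape (the builder function here is illustrative, not part of the diff):

```python
import time

# Mirrors the error dict stored by record_error above; the field names
# come from the diff, the helper itself is hypothetical.
def build_watcher_error(message: str) -> dict:
    return {
        'status': 'failed',
        'message': f'{message}. Auto-refresh disabled. Please use manual refresh.',
        'timestamp': time.time(),
        'manual_refresh_required': True,
    }

err = build_watcher_error("watchdog observer crashed")
print(err['status'], err['manual_refresh_required'])  # failed True
```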
diff --git a/src/code_index_mcp/utils/__init__.py b/src/code_index_mcp/utils/__init__.py
new file mode 100644
index 0000000..cd3fb92
--- /dev/null
+++ b/src/code_index_mcp/utils/__init__.py
@@ -0,0 +1,25 @@
+"""
+Utility modules for the Code Index MCP server.
+
+This package contains shared utilities used across services:
+- error_handler: Decorator-based error handling for MCP entry points
+- context_helper: Context access utilities and helpers
+- validation: Common validation logic
+- response_formatter: Response formatting utilities
+"""
+
+from .error_handler import handle_mcp_errors, handle_mcp_resource_errors, handle_mcp_tool_errors
+from .context_helper import ContextHelper
+from .validation import ValidationHelper
+from .response_formatter import ResponseFormatter
+from .file_filter import FileFilter
+
+__all__ = [
+ 'handle_mcp_errors',
+ 'handle_mcp_resource_errors',
+ 'handle_mcp_tool_errors',
+ 'ContextHelper',
+ 'ValidationHelper',
+ 'ResponseFormatter',
+ 'FileFilter'
+]
\ No newline at end of file
diff --git a/src/code_index_mcp/utils/context_helper.py b/src/code_index_mcp/utils/context_helper.py
new file mode 100644
index 0000000..1ed5fa6
--- /dev/null
+++ b/src/code_index_mcp/utils/context_helper.py
@@ -0,0 +1,169 @@
+"""
+Context access utilities and helpers.
+
+This module provides convenient access to MCP Context data and common
+operations that services need to perform with the context.
+"""
+
+import os
+from typing import Optional
+from mcp.server.fastmcp import Context
+
+from ..project_settings import ProjectSettings
+
+
+class ContextHelper:
+ """
+ Helper class for convenient access to MCP Context data.
+
+ This class wraps the MCP Context object and provides convenient properties
+ and methods for accessing commonly needed data like base_path, settings, etc.
+ """
+
+ def __init__(self, ctx: Context):
+ """
+ Initialize the context helper.
+
+ Args:
+ ctx: The MCP Context object
+ """
+ self.ctx = ctx
+
+ @property
+ def base_path(self) -> str:
+ """
+ Get the base project path from the context.
+
+ Returns:
+ The base project path, or empty string if not set
+ """
+ try:
+ return self.ctx.request_context.lifespan_context.base_path
+ except AttributeError:
+ return ""
+
+ @property
+ def settings(self) -> Optional[ProjectSettings]:
+ """
+ Get the project settings from the context.
+
+ Returns:
+ The ProjectSettings instance, or None if not available
+ """
+ try:
+ return self.ctx.request_context.lifespan_context.settings
+ except AttributeError:
+ return None
+
+ @property
+ def file_count(self) -> int:
+ """
+ Get the current file count from the context.
+
+ Returns:
+ The number of indexed files, or 0 if not available
+ """
+ try:
+ return self.ctx.request_context.lifespan_context.file_count
+ except AttributeError:
+ return 0
+
+ @property
+ def index_manager(self):
+ """
+ Get the unified index manager from the context.
+
+ Returns:
+ The UnifiedIndexManager instance, or None if not available
+ """
+ try:
+ return getattr(self.ctx.request_context.lifespan_context, 'index_manager', None)
+ except AttributeError:
+ return None
+
+ def validate_base_path(self) -> bool:
+ """
+ Check if the base path is set and valid.
+
+ Returns:
+ True if base path is set and exists, False otherwise
+ """
+ base_path = self.base_path
+ return bool(base_path and os.path.exists(base_path))
+
+ def get_base_path_error(self) -> Optional[str]:
+ """
+ Get an error message if base path is not properly set.
+
+ Returns:
+ Error message string if base path is invalid, None if valid
+ """
+ if not self.base_path:
+ return ("Project path not set. Please use set_project_path to set a "
+ "project directory first.")
+
+ if not os.path.exists(self.base_path):
+ return f"Project path does not exist: {self.base_path}"
+
+ if not os.path.isdir(self.base_path):
+ return f"Project path is not a directory: {self.base_path}"
+
+ return None
+
+ def update_file_count(self, count: int) -> None:
+ """
+ Update the file count in the context.
+
+ Args:
+ count: The new file count
+ """
+ try:
+ self.ctx.request_context.lifespan_context.file_count = count
+ except AttributeError:
+ pass # Context not available or doesn't support this operation
+
+ def update_base_path(self, path: str) -> None:
+ """
+ Update the base path in the context.
+
+ Args:
+ path: The new base path
+ """
+ try:
+ self.ctx.request_context.lifespan_context.base_path = path
+ except AttributeError:
+ pass # Context not available or doesn't support this operation
+
+ def update_settings(self, settings: ProjectSettings) -> None:
+ """
+ Update the settings in the context.
+
+ Args:
+ settings: The new ProjectSettings instance
+ """
+ try:
+ self.ctx.request_context.lifespan_context.settings = settings
+ except AttributeError:
+ pass # Context not available or doesn't support this operation
+
+ def clear_index_cache(self) -> None:
+ """
+ Clear the index through the unified index manager.
+ """
+ try:
+ if self.index_manager:
+ self.index_manager.clear_index()
+ except AttributeError:
+ pass
+
+ def update_index_manager(self, index_manager) -> None:
+ """
+ Update the index manager in the context.
+
+ Args:
+ index_manager: The new UnifiedIndexManager instance
+ """
+ try:
+ self.ctx.request_context.lifespan_context.index_manager = index_manager
+ except AttributeError:
+ pass # Context not available or doesn't support this operation
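`ContextHelper` wraps every context read in `try/except AttributeError` with a safe default; `getattr` with a default is an equivalent shorthand. A minimal standalone sketch of the pattern (the `SimpleNamespace` stand-in is illustrative, not the real MCP context type):

```python
from types import SimpleNamespace

class ContextReader:
    """Mirrors ContextHelper's defensive attribute access with defaults."""

    def __init__(self, lifespan_context):
        self._lc = lifespan_context

    @property
    def base_path(self) -> str:
        # Equivalent to ContextHelper.base_path's try/except AttributeError
        return getattr(self._lc, "base_path", "")

    @property
    def file_count(self) -> int:
        return getattr(self._lc, "file_count", 0)

ctx = SimpleNamespace(base_path="/tmp/project")  # file_count deliberately unset
reader = ContextReader(ctx)
print(reader.base_path)   # /tmp/project
print(reader.file_count)  # 0
```

The defaults mean callers never need `hasattr` checks before reading project state.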
diff --git a/src/code_index_mcp/utils/error_handler.py b/src/code_index_mcp/utils/error_handler.py
new file mode 100644
index 0000000..e596886
--- /dev/null
+++ b/src/code_index_mcp/utils/error_handler.py
@@ -0,0 +1,103 @@
+"""
+Decorator-based error handling for MCP entry points.
+
+This module provides consistent error handling across all MCP tools, resources, and prompts.
+"""
+
+import functools
+import json
+from typing import Any, Callable, Dict, Union
+
+
+def handle_mcp_errors(return_type: str = 'str') -> Callable:
+ """
+ Decorator to handle exceptions in MCP entry points consistently.
+
+ This decorator catches all exceptions and formats them according to the expected
+ return type, providing consistent error responses across all MCP entry points.
+
+ Args:
+ return_type: The expected return type format
+ - 'str': Returns error as string format "Error: {message}"
+ - 'dict': Returns error as dict format {"error": "Operation failed: {message}"}
+ - 'json': Returns error as JSON string with dict format
+
+ Returns:
+ Decorator function that wraps MCP entry points with error handling
+
+ Example:
+ @mcp.tool()
+ @handle_mcp_errors(return_type='str')
+ def set_project_path(path: str, ctx: Context) -> str:
+ from ..services.project_management_service import ProjectManagementService
+ return ProjectManagementService(ctx).initialize_project(path)
+
+ @mcp.tool()
+ @handle_mcp_errors(return_type='dict')
+ def search_code_advanced(pattern: str, ctx: Context, **kwargs) -> Dict[str, Any]:
+ return SearchService(ctx).search_code(pattern, **kwargs)
+ """
+ def decorator(func: Callable) -> Callable:
+ @functools.wraps(func)
+ def wrapper(*args, **kwargs) -> Union[str, Dict[str, Any]]:
+ try:
+ return func(*args, **kwargs)
+ except Exception as e:
+ error_message = str(e)
+
+ if return_type == 'dict':
+ return {"error": f"Operation failed: {error_message}"}
+ elif return_type == 'json':
+ return json.dumps({"error": f"Operation failed: {error_message}"})
+ else: # return_type == 'str' (default)
+ return f"Error: {error_message}"
+
+ return wrapper
+ return decorator
+
+
+def handle_mcp_resource_errors(func: Callable) -> Callable:
+ """
+ Specialized error handler for MCP resources that always return strings.
+
+ This is a convenience decorator specifically for @mcp.resource decorated functions
+ which always return string responses.
+
+ Args:
+ func: The MCP resource function to wrap
+
+ Returns:
+ Wrapped function with error handling
+
+ Example:
+ @mcp.resource("config://code-indexer")
+ @handle_mcp_resource_errors
+ def get_config() -> str:
+ ctx = mcp.get_context()
+ from ..services.project_management_service import ProjectManagementService
+ return ProjectManagementService(ctx).get_project_config()
+ """
+ return handle_mcp_errors(return_type='str')(func)
+
+
+def handle_mcp_tool_errors(return_type: str = 'str') -> Callable:
+ """
+ Specialized error handler for MCP tools with flexible return types.
+
+ This is a convenience decorator specifically for @mcp.tool decorated functions
+ which may return either strings or dictionaries.
+
+ Args:
+ return_type: The expected return type ('str' or 'dict')
+
+ Returns:
+ Decorator function for MCP tools
+
+ Example:
+ @mcp.tool()
+ @handle_mcp_tool_errors(return_type='dict')
+ def find_files(pattern: str, ctx: Context) -> Dict[str, Any]:
+ from ..services.file_discovery_service import FileDiscoveryService
+ return FileDiscoveryService(ctx).find_files(pattern)
+ """
+ return handle_mcp_errors(return_type=return_type)
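To see the decorator's behavior end to end, here is a condensed, self-contained copy of `handle_mcp_errors` exercised against a failing tool (the `flaky_tool` function is a made-up example, not from the codebase):

```python
import functools
import json

# Condensed version of handle_mcp_errors from the diff above, reproduced
# here so the behavior can be run directly.
def handle_mcp_errors(return_type: str = 'str'):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as e:  # entry-point boundary: catch everything
                msg = str(e)
                if return_type == 'dict':
                    return {"error": f"Operation failed: {msg}"}
                if return_type == 'json':
                    return json.dumps({"error": f"Operation failed: {msg}"})
                return f"Error: {msg}"
        return wrapper
    return decorator

@handle_mcp_errors(return_type='dict')
def flaky_tool(path: str):
    raise FileNotFoundError(f"no such file: {path}")

print(flaky_tool("missing.py"))
# {'error': 'Operation failed: no such file: missing.py'}
```

Exceptions never escape to the MCP transport layer; each entry point degrades to a structured error in its declared return format.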
diff --git a/src/code_index_mcp/utils/file_filter.py b/src/code_index_mcp/utils/file_filter.py
new file mode 100644
index 0000000..5cd9938
--- /dev/null
+++ b/src/code_index_mcp/utils/file_filter.py
@@ -0,0 +1,177 @@
+"""
+Centralized file filtering logic for the Code Index MCP server.
+
+This module provides unified filtering capabilities used across all components
+that need to determine which files and directories should be processed or excluded.
+"""
+
+import fnmatch
+from pathlib import Path
+from typing import List, Optional, Set
+
+from ..constants import FILTER_CONFIG
+
+
+class FileFilter:
+ """Centralized file filtering logic."""
+
+ def __init__(self, additional_excludes: Optional[List[str]] = None):
+ """
+ Initialize the file filter.
+
+ Args:
+ additional_excludes: Additional directory patterns to exclude
+ """
+ self.exclude_dirs = set(FILTER_CONFIG["exclude_directories"])
+ self.exclude_files = set(FILTER_CONFIG["exclude_files"])
+ self.supported_extensions = set(FILTER_CONFIG["supported_extensions"])
+
+ # Add user-defined exclusions
+ if additional_excludes:
+ self.exclude_dirs.update(additional_excludes)
+
+ def should_exclude_directory(self, dir_name: str) -> bool:
+ """
+ Check if directory should be excluded from processing.
+
+ Args:
+ dir_name: Directory name to check
+
+ Returns:
+ True if directory should be excluded, False otherwise
+ """
+ # Skip hidden directories except for specific allowed ones
+ if dir_name.startswith('.') and dir_name not in {'.env', '.gitignore'}:
+ return True
+
+ # Check against exclude patterns
+ return dir_name in self.exclude_dirs
+
+ def should_exclude_file(self, file_path: Path) -> bool:
+ """
+ Check if file should be excluded from processing.
+
+ Args:
+ file_path: Path object for the file to check
+
+ Returns:
+ True if file should be excluded, False otherwise
+ """
+ # Extension check - only process supported file types
+ if file_path.suffix.lower() not in self.supported_extensions:
+ return True
+
+ # Hidden files (except specific allowed ones)
+ if file_path.name.startswith('.') and file_path.name not in {'.gitignore', '.env'}:
+ return True
+
+ # Filename pattern check using glob patterns
+ for pattern in self.exclude_files:
+ if fnmatch.fnmatch(file_path.name, pattern):
+ return True
+
+ return False
+
+ def should_process_path(self, path: Path, base_path: Path) -> bool:
+ """
+ Unified path processing logic to determine if a file should be processed.
+
+ Args:
+ path: File path to check
+ base_path: Project base path for relative path calculation
+
+ Returns:
+ True if file should be processed, False otherwise
+ """
+ try:
+ # Ensure we're working with absolute paths
+ if not path.is_absolute():
+ path = base_path / path
+
+ # Get relative path from base
+ relative_path = path.relative_to(base_path)
+
+ # Check each path component for excluded directories
+ for part in relative_path.parts[:-1]: # Exclude filename
+ if self.should_exclude_directory(part):
+ return False
+
+ # Check file itself
+ return not self.should_exclude_file(path)
+
+ except (ValueError, OSError):
+ # Path not relative to base_path or other path errors
+ return False
+
+ def is_supported_file_type(self, file_path: Path) -> bool:
+ """
+ Check if file type is supported for indexing.
+
+ Args:
+ file_path: Path to check
+
+ Returns:
+ True if file type is supported, False otherwise
+ """
+ return file_path.suffix.lower() in self.supported_extensions
+
+ def is_temporary_file(self, file_path: Path) -> bool:
+ """
+ Check if file appears to be a temporary file.
+
+ Args:
+ file_path: Path to check
+
+ Returns:
+ True if file appears temporary, False otherwise
+ """
+ name = file_path.name
+
+ # Common temporary file patterns
+ temp_patterns = ['*.tmp', '*.temp', '*.swp', '*.swo', '*~']
+
+ for pattern in temp_patterns:
+ if fnmatch.fnmatch(name, pattern):
+ return True
+
+ # Files ending in .bak or .orig
+ if name.endswith(('.bak', '.orig')):
+ return True
+
+ return False
+
+ def filter_file_list(self, files: List[str], base_path: str) -> List[str]:
+ """
+ Filter a list of file paths, keeping only those that should be processed.
+
+ Args:
+ files: List of file paths (absolute or relative)
+ base_path: Project base path
+
+ Returns:
+ Filtered list of file paths that should be processed
+ """
+ base = Path(base_path)
+ filtered = []
+
+ for file_path_str in files:
+ file_path = Path(file_path_str)
+ if self.should_process_path(file_path, base):
+ filtered.append(file_path_str)
+
+ return filtered
+
+ def get_exclude_summary(self) -> dict:
+ """
+ Get summary of current exclusion configuration.
+
+ Returns:
+ Dictionary with exclusion configuration details
+ """
+ return {
+ "exclude_directories_count": len(self.exclude_dirs),
+ "exclude_files_count": len(self.exclude_files),
+ "supported_extensions_count": len(self.supported_extensions),
+ "exclude_directories": sorted(self.exclude_dirs),
+ "exclude_files": sorted(self.exclude_files)
+ }
\ No newline at end of file
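`FileFilter.should_process_path` composes two checks: every directory component of the relative path is tested first, then the filename itself. A standalone sketch of that two-stage logic (the exclusion sets here are illustrative stand-ins for `FILTER_CONFIG`):

```python
import fnmatch
from pathlib import Path

# Illustrative exclusion config; the real values come from constants.FILTER_CONFIG.
EXCLUDE_DIRS = {"node_modules", "__pycache__", ".git"}
EXCLUDE_FILE_PATTERNS = ["*.min.js", "*.lock"]
SUPPORTED_EXTENSIONS = {".py", ".js", ".go"}

def should_process(path: Path, base: Path) -> bool:
    try:
        full = path if path.is_absolute() else base / path
        relative = full.relative_to(base)
    except ValueError:
        return False  # path lies outside the project root
    # Stage 1: any excluded directory in the ancestry rejects the file
    if any(part in EXCLUDE_DIRS for part in relative.parts[:-1]):
        return False
    # Stage 2: the file itself must be a supported, non-excluded type
    if path.suffix.lower() not in SUPPORTED_EXTENSIONS:
        return False
    return not any(fnmatch.fnmatch(path.name, p) for p in EXCLUDE_FILE_PATTERNS)

base = Path("/repo")
print(should_process(Path("src/app.py"), base))                 # True
print(should_process(Path("node_modules/lib/index.js"), base))  # False
print(should_process(Path("dist/app.min.js"), base))            # False
```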
diff --git a/src/code_index_mcp/utils/response_formatter.py b/src/code_index_mcp/utils/response_formatter.py
new file mode 100644
index 0000000..aad65ce
--- /dev/null
+++ b/src/code_index_mcp/utils/response_formatter.py
@@ -0,0 +1,364 @@
+"""
+Response formatting utilities for the MCP server.
+
+This module provides consistent response formatting functions used across
+services to ensure uniform response structures and formats.
+"""
+
+import json
+from typing import Any, Dict, List, Optional, Union
+
+from ..indexing.qualified_names import generate_qualified_name
+
+
+class ResponseFormatter:
+ """
+ Helper class for formatting responses consistently across services.
+
+ This class provides static methods for formatting different types of
+ responses in a consistent manner.
+ """
+
+ @staticmethod
+ def _resolve_qualified_names_in_relationships(
+ file_path: str,
+ relationship_list: List[str],
+ duplicate_names: set,
+ index_cache: Optional[Dict[str, Any]] = None
+ ) -> List[str]:
+ """
+ Convert simple names to qualified names when duplicates exist.
+
+ Args:
+ file_path: Current file path for context
+ relationship_list: List of function/class names that may need qualification
+ duplicate_names: Set of names that have duplicates in the project
+ index_cache: Optional index cache for duplicate detection
+
+ Returns:
+ List with qualified names where duplicates exist
+ """
+ if not relationship_list or not duplicate_names:
+ return relationship_list
+
+ qualified_list = []
+ for name in relationship_list:
+ if name in duplicate_names:
+ # Convert to qualified name if this name has duplicates
+ if index_cache and 'files' in index_cache:
+ # Try to find the actual file where this name is defined
+ # For now, we'll use the current file path as context
+ qualified_name = generate_qualified_name(file_path, name)
+ qualified_list.append(qualified_name)
+ else:
+ # Fallback: keep original name if we can't resolve
+ qualified_list.append(name)
+ else:
+ # No duplicates, keep original name
+ qualified_list.append(name)
+
+ return qualified_list
+
+ @staticmethod
+ def _get_duplicate_names_from_index(index_cache: Optional[Dict[str, Any]] = None) -> Dict[str, set]:
+ """
+ Extract duplicate function and class names from the index cache.
+
+ Note: the underlying duplicate-detection feature was removed as legacy
+ code, so this currently always returns empty sets.
+
+ Args:
+ index_cache: Optional index cache
+
+ Returns:
+ Dictionary with 'functions' and 'classes' sets of duplicate names
+ """
+ duplicates = {'functions': set(), 'classes': set()}
+
+ if not index_cache:
+ return duplicates
+
+ # Duplicate detection functionality removed - was legacy code
+ # Return empty duplicates as this feature is no longer used
+
+ return duplicates
+
+ @staticmethod
+ def success_response(message: str, data: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
+ """
+ Format a successful operation response.
+
+ Args:
+ message: Success message
+ data: Optional additional data to include
+
+ Returns:
+ Formatted success response dictionary
+ """
+ response = {"status": "success", "message": message}
+ if data:
+ response.update(data)
+ return response
+
+ @staticmethod
+ def error_response(message: str, error_code: Optional[str] = None) -> Dict[str, Any]:
+ """
+ Format an error response.
+
+ Args:
+ message: Error message
+ error_code: Optional error code for categorization
+
+ Returns:
+ Formatted error response dictionary
+ """
+ response = {"error": message}
+ if error_code:
+ response["error_code"] = error_code
+ return response
+
+ @staticmethod
+ def file_list_response(files: List[str], status_message: str) -> Dict[str, Any]:
+ """
+ Format a file list response for find_files operations.
+
+ Args:
+ files: List of file paths
+ status_message: Status message describing the operation result
+
+ Returns:
+ Formatted file list response
+ """
+ return {
+ "files": files,
+ "status": status_message
+ }
+
+ @staticmethod
+ def search_results_response(results: List[Dict[str, Any]]) -> Dict[str, Any]:
+ """
+ Format search results response.
+
+ Args:
+ results: List of search result dictionaries
+
+ Returns:
+ Formatted search results response
+ """
+ return {
+ "results": results
+ }
+
+ @staticmethod
+ def config_response(config_data: Dict[str, Any]) -> str:
+ """
+ Format configuration data as JSON string.
+
+ Args:
+ config_data: Configuration data dictionary
+
+ Returns:
+ JSON formatted configuration string
+ """
+ return json.dumps(config_data, indent=2)
+
+ @staticmethod
+ def stats_response(stats_data: Dict[str, Any]) -> str:
+ """
+ Format statistics data as JSON string.
+
+ Args:
+ stats_data: Statistics data dictionary
+
+ Returns:
+ JSON formatted statistics string
+ """
+ return json.dumps(stats_data, indent=2)
+
+ @staticmethod
+ def file_summary_response(
+ file_path: str,
+ line_count: int,
+ size_bytes: int,
+ extension: str,
+ language: str = "unknown",
+ functions: Optional[Union[List[str], List[Dict[str, Any]]]] = None,
+ classes: Optional[Union[List[str], List[Dict[str, Any]]]] = None,
+ imports: Optional[Union[List[str], List[Dict[str, Any]]]] = None,
+ language_specific: Optional[Dict[str, Any]] = None,
+ error: Optional[str] = None,
+ index_cache: Optional[Dict[str, Any]] = None
+ ) -> Dict[str, Any]:
+ """
+ Format file summary response from index data.
+
+ Args:
+ file_path: Path to the file
+ line_count: Number of lines in the file
+ size_bytes: File size in bytes
+ extension: File extension
+ language: Programming language detected
+ functions: List of function names (strings) or complete function objects (dicts)
+ classes: List of class names (strings) or complete class objects (dicts)
+ imports: List of import statements (strings) or complete import objects (dicts)
+ language_specific: Language-specific analysis data
+ error: Error message if analysis failed
+ index_cache: Optional index cache for duplicate name resolution
+
+ Returns:
+ Formatted file summary response
+ """
+ # Get duplicate names from index for qualified name resolution
+ duplicate_names = ResponseFormatter._get_duplicate_names_from_index(index_cache)
+
+ # Handle backward compatibility for functions
+ processed_functions = []
+ if functions:
+ for func in functions:
+ if isinstance(func, str):
+ # Legacy format - convert string to basic object
+ processed_functions.append({"name": func})
+ elif isinstance(func, dict):
+ # New format - use complete object and resolve qualified names in relationships
+ processed_func = func.copy()
+
+ # Resolve qualified names in relationship fields
+ if 'calls' in processed_func and isinstance(processed_func['calls'], list):
+ processed_func['calls'] = ResponseFormatter._resolve_qualified_names_in_relationships(
+ file_path, processed_func['calls'], duplicate_names['functions'], index_cache
+ )
+
+ if 'called_by' in processed_func and isinstance(processed_func['called_by'], list):
+ processed_func['called_by'] = ResponseFormatter._resolve_qualified_names_in_relationships(
+ file_path, processed_func['called_by'], duplicate_names['functions'], index_cache
+ )
+
+ processed_functions.append(processed_func)
+
+ # Handle backward compatibility for classes
+ processed_classes = []
+ if classes:
+ for cls in classes:
+ if isinstance(cls, str):
+ # Legacy format - convert string to basic object
+ processed_classes.append({"name": cls})
+ elif isinstance(cls, dict):
+ # New format - use complete object and resolve qualified names in relationships
+ processed_cls = cls.copy()
+
+ # Resolve qualified names in relationship fields
+ if 'instantiated_by' in processed_cls and isinstance(processed_cls['instantiated_by'], list):
+ processed_cls['instantiated_by'] = ResponseFormatter._resolve_qualified_names_in_relationships(
+ file_path, processed_cls['instantiated_by'], duplicate_names['functions'], index_cache
+ )
+
+ processed_classes.append(processed_cls)
+
+ # Handle backward compatibility for imports
+ processed_imports = []
+ if imports:
+ for imp in imports:
+ if isinstance(imp, str):
+ # Legacy format - convert string to basic object
+ processed_imports.append({"module": imp, "import_type": "unknown"})
+ elif isinstance(imp, dict):
+ # New format - use complete object
+ processed_imports.append(imp)
+
+ response = {
+ "file_path": file_path,
+ "line_count": line_count,
+ "size_bytes": size_bytes,
+ "extension": extension,
+ "language": language,
+ "functions": processed_functions,
+ "classes": processed_classes,
+ "imports": processed_imports,
+ "language_specific": language_specific or {}
+ }
+
+ if error:
+ response["error"] = error
+
+ return response
+
+ @staticmethod
+ def directory_info_response(
+ temp_directory: str,
+ exists: bool,
+ is_directory: bool = False,
+ contents: Optional[List[str]] = None,
+ subdirectories: Optional[List[Dict[str, Any]]] = None,
+ error: Optional[str] = None
+ ) -> Dict[str, Any]:
+ """
+ Format directory information response.
+
+ Args:
+ temp_directory: Path to the directory
+ exists: Whether the directory exists
+ is_directory: Whether the path is a directory
+ contents: List of directory contents
+ subdirectories: List of subdirectory information
+ error: Error message if operation failed
+
+ Returns:
+ Formatted directory info response
+ """
+ response = {
+ "temp_directory": temp_directory,
+ "exists": exists,
+ "is_directory": is_directory
+ }
+
+ if contents is not None:
+ response["contents"] = contents
+
+ if subdirectories is not None:
+ response["subdirectories"] = subdirectories
+
+ if error:
+ response["error"] = error
+
+ return response
+
+ @staticmethod
+ def settings_info_response(
+ settings_directory: str,
+ temp_directory: str,
+ temp_directory_exists: bool,
+ config: Dict[str, Any],
+ stats: Dict[str, Any],
+ exists: bool,
+ status: str = "configured",
+ message: Optional[str] = None
+ ) -> Dict[str, Any]:
+ """
+ Format settings information response.
+
+ Args:
+ settings_directory: Path to settings directory
+ temp_directory: Path to temp directory
+ temp_directory_exists: Whether temp directory exists
+ config: Configuration data
+ stats: Statistics data
+ exists: Whether settings directory exists
+ status: Status of the configuration
+ message: Optional status message
+
+ Returns:
+ Formatted settings info response
+ """
+ response = {
+ "settings_directory": settings_directory,
+ "temp_directory": temp_directory,
+ "temp_directory_exists": temp_directory_exists,
+ "config": config,
+ "stats": stats,
+ "exists": exists
+ }
+
+ if status != "configured":
+ response["status"] = status
+
+ if message:
+ response["message"] = message
+
+ return response
\ No newline at end of file
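The backward-compatibility handling in `file_summary_response` normalizes each entry to a dict whether it arrives as a legacy string or a new-format object. A minimal sketch of that coercion (the helper name is illustrative):

```python
# Sketch of the legacy/new normalization in file_summary_response:
# entries may be bare strings (legacy) or dicts (new); both are coerced
# to dicts so consumers see a single shape.
def normalize_functions(functions):
    normalized = []
    for func in functions or []:
        if isinstance(func, str):
            normalized.append({"name": func})  # legacy format
        elif isinstance(func, dict):
            normalized.append(dict(func))      # new format, shallow copy
    return normalized

mixed = ["parse", {"name": "render", "calls": ["parse"]}]
print(normalize_functions(mixed))
# [{'name': 'parse'}, {'name': 'render', 'calls': ['parse']}]
```

The shallow copy matters: relationship fields like `calls` are rewritten in place during qualified-name resolution, and copying first keeps the caller's objects untouched.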
diff --git a/src/code_index_mcp/utils/validation.py b/src/code_index_mcp/utils/validation.py
new file mode 100644
index 0000000..6f7eaf9
--- /dev/null
+++ b/src/code_index_mcp/utils/validation.py
@@ -0,0 +1,207 @@
+"""
+Common validation logic for the MCP server.
+
+This module provides shared validation functions used across services
+to ensure consistent validation behavior and reduce code duplication.
+"""
+
+import os
+import re
+import fnmatch
+from typing import Optional, List
+
+from ..indexing.qualified_names import normalize_file_path
+
+
+class ValidationHelper:
+ """
+ Helper class containing common validation logic.
+
+ This class provides static methods for common validation operations
+ that are used across multiple services.
+ """
+
+ @staticmethod
+ def validate_file_path(file_path: str, base_path: str) -> Optional[str]:
+ """
+ Validate a file path for security and accessibility.
+
+ This method checks for:
+ - Path traversal attempts
+ - Absolute path usage (not allowed)
+ - Path existence within base directory
+
+ Args:
+ file_path: The file path to validate (should be relative)
+ base_path: The base project directory path
+
+ Returns:
+ Error message if validation fails, None if valid
+ """
+ if not file_path:
+ return "File path cannot be empty"
+
+ if not base_path:
+ return "Base path not set"
+
+ # Handle absolute paths (especially Windows paths starting with drive letters)
+ if os.path.isabs(file_path) or (len(file_path) > 1 and file_path[1] == ':'):
+ return (f"Absolute file paths like '{file_path}' are not allowed. "
+ "Please use paths relative to the project root.")
+
+ # Normalize the file path
+ norm_path = os.path.normpath(file_path)
+
+ # Check for path traversal attempts
+ if "..\\" in norm_path or "../" in norm_path or norm_path.startswith(".."):
+ return f"Invalid file path: {file_path} (directory traversal not allowed)"
+
+ # Construct the full path and verify it's within the project bounds
+ full_path = os.path.join(base_path, norm_path)
+ real_full_path = os.path.realpath(full_path)
+ real_base_path = os.path.realpath(base_path)
+
+ # Compare with commonpath rather than a raw prefix check, so a sibling
+ # directory such as "/project-data" cannot pass the check for "/project"
+ try:
+ if os.path.commonpath([real_full_path, real_base_path]) != real_base_path:
+ return "Access denied. File path must be within project directory."
+ except ValueError:
+ # No common path (e.g. paths on different drives on Windows)
+ return "Access denied. File path must be within project directory."
+
+ return None
+
+ @staticmethod
+ def validate_directory_path(dir_path: str) -> Optional[str]:
+ """
+ Validate a directory path for project initialization.
+
+ Args:
+ dir_path: The directory path to validate
+
+ Returns:
+ Error message if validation fails, None if valid
+ """
+ if not dir_path:
+ return "Directory path cannot be empty"
+
+ # Normalize and get absolute path
+ try:
+ norm_path = os.path.normpath(dir_path)
+ abs_path = os.path.abspath(norm_path)
+ except (OSError, ValueError) as e:
+ return f"Invalid path format: {str(e)}"
+
+ if not os.path.exists(abs_path):
+ return f"Path does not exist: {abs_path}"
+
+ if not os.path.isdir(abs_path):
+ return f"Path is not a directory: {abs_path}"
+
+ return None
+
+ @staticmethod
+ def validate_glob_pattern(pattern: str) -> Optional[str]:
+ """
+ Validate a glob pattern for file searching.
+
+ Args:
+ pattern: The glob pattern to validate
+
+ Returns:
+ Error message if validation fails, None if valid
+ """
+ if not pattern:
+ return "Pattern cannot be empty"
+
+ # Check for potentially dangerous patterns
+ if pattern.startswith('/') or pattern.startswith('\\'):
+ return "Pattern cannot start with path separator"
+
+ # Test if the pattern is valid by trying to compile it
+ try:
+ # This will raise an exception if the pattern is malformed
+ fnmatch.translate(pattern)
+ except (ValueError, TypeError) as e:
+ return f"Invalid glob pattern: {str(e)}"
+
+ return None
+
+ @staticmethod
+ def validate_search_pattern(pattern: str, regex: bool = False) -> Optional[str]:
+ """
+ Validate a search pattern for code searching.
+
+ Args:
+ pattern: The search pattern to validate
+ regex: Whether the pattern is a regex pattern
+
+ Returns:
+ Error message if validation fails, None if valid
+ """
+ if not pattern:
+ return "Search pattern cannot be empty"
+
+ if regex:
+ # Basic regex validation - check for potentially dangerous patterns
+ try:
+ re.compile(pattern)
+ except re.error as e:
+ return f"Invalid regex pattern: {str(e)}"
+
+ # Check for potentially expensive regex patterns (basic ReDoS protection)
+ dangerous_patterns = [
+ r'\(\?\=.*\)\+', # Positive lookahead with quantifier
+ r'\(\?\!.*\)\+', # Negative lookahead with quantifier
+ r'\(\?\<\=.*\)\+', # Positive lookbehind with quantifier
+ r'\(\?\<\!.*\)\+', # Negative lookbehind with quantifier
+ ]
+
+ for dangerous in dangerous_patterns:
+ if re.search(dangerous, pattern):
+ return "Potentially dangerous regex pattern detected"
+
+ return None
+
+ @staticmethod
+ def validate_file_extensions(extensions: List[str]) -> Optional[str]:
+ """
+ Validate a list of file extensions.
+
+ Args:
+ extensions: List of file extensions to validate
+
+ Returns:
+ Error message if validation fails, None if valid
+ """
+ if not extensions:
+ return "Extensions list cannot be empty"
+
+ for ext in extensions:
+ if not isinstance(ext, str):
+ return "All extensions must be strings"
+
+ if not ext.startswith('.'):
+ return f"Extension '{ext}' must start with a dot"
+
+ if len(ext) < 2:
+ return f"Extension '{ext}' is too short"
+
+ return None
+
+ @staticmethod
+ def sanitize_file_path(file_path: str) -> str:
+ """
+ Sanitize a file path by normalizing separators and removing dangerous elements.
+
+ Args:
+ file_path: The file path to sanitize
+
+ Returns:
+ Sanitized file path
+ """
+ if not file_path:
+ return ""
+
+ # Normalize path separators and structure
+ sanitized = normalize_file_path(file_path)
+
+ # Remove any leading slashes to ensure relative path
+ sanitized = sanitized.lstrip('/')
+
+ return sanitized
\ No newline at end of file
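The containment check in `validate_file_path` hinges on comparing resolved real paths. A standalone sketch of a safe version of that check; note that `os.path.commonpath` avoids the classic pitfall of a plain `startswith` prefix test, where a sibling directory whose name merely begins with the base path (e.g. `/repo-data` vs `/repo`) would wrongly pass:

```python
import os

# Illustrative containment check mirroring validate_file_path's intent.
def is_within(base_path: str, file_path: str) -> bool:
    real_base = os.path.realpath(base_path)
    real_full = os.path.realpath(os.path.join(base_path, file_path))
    try:
        return os.path.commonpath([real_full, real_base]) == real_base
    except ValueError:  # e.g. paths on different drives on Windows
        return False

print(is_within("/repo", "src/main.py"))     # True
print(is_within("/repo", "../repo-data/x"))  # False (escapes the base)
```

`os.path.realpath` also resolves symlinks, so a link inside the project pointing outside it is caught by the same comparison.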
diff --git a/test/README.md b/test/README.md
new file mode 100644
index 0000000..5796cb3
--- /dev/null
+++ b/test/README.md
@@ -0,0 +1,247 @@
+# Test Projects for Code Index MCP
+
+This directory contains comprehensive test projects designed to validate and demonstrate the capabilities of the Code Index MCP server. Each project represents a realistic, enterprise-level codebase that showcases different programming languages, frameworks, and architectural patterns.
+
+## Project Structure
+
+```
+test/
+├── sample-projects/
+│ ├── python/
+│ │ └── user_management/ # Python user management system
+│ ├── java/
+│ │ └── user-management/ # Java Spring Boot user management
+│ ├── go/
+│ │ └── user-management/ # Go Gin user management API
+│ ├── javascript/
+│ │ └── user-management/ # Node.js Express user management
+│ ├── typescript/
+│ │ └── user-management/ # TypeScript Express user management
+│ └── objective-c/ # Objective-C test files
+└── README.md # This file
+```
+
+## Sample Projects Overview
+
+Each sample project implements a comprehensive user management system with the following core features:
+
+### Common Features Across All Projects
+- **User Registration & Authentication**: Secure user registration with password hashing
+- **Role-Based Access Control (RBAC)**: Admin, User, and Guest roles with permissions
+- **CRUD Operations**: Complete Create, Read, Update, Delete functionality
+- **Search & Filtering**: Full-text search and role/status-based filtering
+- **Pagination**: Efficient pagination for large datasets
+- **Input Validation**: Comprehensive validation and sanitization
+- **Error Handling**: Structured error handling with custom error classes
+- **Logging**: Structured logging for debugging and monitoring
+- **Security**: Password hashing, rate limiting, and security headers
+- **Data Export**: User data export functionality
+- **Statistics**: User analytics and statistics
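+
+The pagination contract above (page/page_size query parameters) reduces to a small offset/limit calculation. The following stdlib-only Go sketch shows the idea; the clamping rules (default page size 10, cap of 100) are illustrative assumptions, not values taken from the sample projects:
+
+```go
+package main
+
+import "fmt"
+
+// paginate clamps page/pageSize and returns an SQL-style offset and limit.
+// The default of 10 and the cap of 100 are assumptions for illustration.
+func paginate(page, pageSize, total int) (offset, limit int) {
+	if page < 1 {
+		page = 1
+	}
+	if pageSize < 1 || pageSize > 100 {
+		pageSize = 10
+	}
+	offset = (page - 1) * pageSize
+	if offset > total {
+		offset = total // past the last page: yields an empty slice, not an error
+	}
+	return offset, pageSize
+}
+
+func main() {
+	offset, limit := paginate(3, 10, 95) // third page of ten
+	fmt.Println(offset, limit)
+}
+```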
+
+### Language-Specific Implementation Details
+
+#### Python Project (`python/user_management/`)
+- **Framework**: Flask-based web application
+- **Database**: SQLAlchemy ORM with SQLite
+- **Authentication**: JWT tokens with BCrypt password hashing
+- **Structure**: Clean package structure with models, services, and utilities
+- **Features**: CLI interface, comprehensive validation, and export functionality
+
+**Key Files:**
+- `models/person.py` - Base Person model
+- `models/user.py` - User model with authentication
+- `services/user_manager.py` - Business logic layer
+- `services/auth_service.py` - Authentication service
+- `utils/` - Validation, exceptions, and helper utilities
+- `cli.py` - Command-line interface
+
+#### Java Project (`java/user-management/`)
+- **Framework**: Spring Boot with Spring Data JPA
+- **Database**: H2 in-memory database with JPA
+- **Authentication**: JWT tokens with BCrypt
+- **Structure**: Maven project with standard Java package structure
+- **Features**: REST API, validation annotations, and comprehensive testing
+
+**Key Files:**
+- `model/User.java` - JPA entity with validation
+- `service/UserService.java` - Business logic service
+- `controller/UserController.java` - REST API endpoints
+- `util/` - Validation, exceptions, and utilities
+- `Application.java` - Spring Boot application entry point
+
+#### Go Project (`go/user-management/`)
+- **Framework**: Gin web framework with GORM
+- **Database**: SQLite with GORM ORM
+- **Authentication**: JWT tokens with BCrypt
+- **Structure**: Clean Go module structure with internal packages
+- **Features**: High-performance API, middleware, and concurrent processing
+
+**Key Files:**
+- `internal/models/user.go` - User model with GORM
+- `internal/services/user_service.go` - Business logic
+- `pkg/api/user_handler.go` - HTTP handlers
+- `cmd/server/main.go` - Application entry point with CORS and logging middleware
+
+#### JavaScript Project (`javascript/user-management/`)
+- **Framework**: Express.js with Mongoose
+- **Database**: MongoDB with Mongoose ODM
+- **Authentication**: JWT tokens with BCrypt
+- **Structure**: Modern Node.js project with ES6+ features
+- **Features**: Async/await, middleware, and comprehensive error handling
+
+**Key Files:**
+- `src/models/User.js` - Mongoose model with validation
+- `src/services/UserService.js` - Business logic service
+- `src/routes/userRoutes.js` - Express routes
+- `src/middleware/` - Authentication and validation middleware
+- `src/server.js` - Express application setup
+
+#### TypeScript Project (`typescript/user-management/`)
+- **Framework**: Express.js with Mongoose (TypeScript)
+- **Database**: MongoDB with Mongoose ODM
+- **Authentication**: JWT tokens with BCrypt
+- **Structure**: Type-safe Node.js project with comprehensive interfaces
+- **Features**: Full type safety, interfaces, and advanced TypeScript features
+
+**Key Files:**
+- `src/types/User.ts` - TypeScript interfaces and types
+- `src/models/User.ts` - Mongoose model with TypeScript
+- `src/services/UserService.ts` - Typed business logic service
+- `src/routes/userRoutes.ts` - Typed Express routes
+- `src/server.ts` - TypeScript Express application
+
+#### Objective-C Project (`objective-c/`)
+- **Framework**: Foundation classes
+- **Features**: Classes, properties, methods, protocols
+- **Structure**: Traditional .h/.m file structure
+
+**Key Files:**
+- `Person.h/.m` - Person class with properties
+- `UserManager.h/.m` - User management functionality
+- `main.m` - Application entry point
+
+## Testing the Code Index MCP
+
+These projects are designed to test various aspects of the Code Index MCP:
+
+### File Analysis Capabilities
+- **Language Detection**: Automatic detection of programming languages
+- **Syntax Parsing**: Parsing of different syntax structures
+- **Import/Dependency Analysis**: Understanding of module dependencies
+- **Code Structure**: Recognition of classes, functions, and interfaces
+
+### Search and Navigation
+- **Symbol Search**: Finding functions, classes, and variables
+- **Cross-Reference**: Finding usage of symbols across files
+- **Fuzzy Search**: Approximate matching for typos and partial queries
+- **Pattern Matching**: Regular expression and pattern-based searches
+
+### Code Intelligence
+- **Function Signatures**: Understanding of function parameters and return types
+- **Variable Types**: Type inference and tracking
+- **Scope Analysis**: Understanding of variable and function scope
+- **Documentation**: Parsing of comments and documentation
+
+### Performance Testing
+- **Large Codebases**: Testing with realistic project sizes
+- **Complex Structures**: Nested packages and deep directory structures
+- **Multiple File Types**: Mixed file types within projects
+- **Concurrent Access**: Multiple simultaneous search operations
+
+## Running the Projects
+
+Each project includes comprehensive setup instructions in its respective README.md file. General steps:
+
+1. Navigate to the project directory
+2. Install dependencies using the appropriate package manager
+3. Set up environment variables (see .env.example files)
+4. Run the application using the provided scripts
+5. Test the API endpoints using the provided examples
+
+### Quick Start Examples
+
+```bash
+# Python project
+cd test/sample-projects/python/user_management
+pip install -r requirements.txt
+python cli.py
+
+# Java project
+cd test/sample-projects/java/user-management
+mvn spring-boot:run
+
+# Go project
+cd test/sample-projects/go/user-management
+go run cmd/server/main.go
+
+# JavaScript project
+cd test/sample-projects/javascript/user-management
+npm install
+npm run dev
+
+# TypeScript project
+cd test/sample-projects/typescript/user-management
+npm install
+npm run dev
+```
+
+## MCP Server Testing
+
+To test the Code Index MCP server with these projects:
+
+1. **Set Project Path**: Use the `set_project_path` tool to point to a project directory
+2. **Index Files**: The server will automatically index all files in the project
+3. **Search Testing**: Test various search queries and patterns
+4. **Analysis Testing**: Use the analysis tools to examine code structure
+5. **Performance Testing**: Measure response times and resource usage
+
+### Example MCP Commands
+
+```bash
+# Set project path
+set_project_path /path/to/test/sample-projects/python/user_management
+
+# Search for user-related functions
+search_code_advanced "def create_user" --file-pattern "*.py"
+
+# Find all authentication-related code
+search_code_advanced "auth" --fuzzy true
+
+# Get file summary
+get_file_summary models/user.py
+
+# Find TypeScript interfaces
+search_code_advanced "interface.*User" --regex true --file-pattern "*.ts"
+```
+
+## Contributing
+
+When adding new test projects:
+
+1. Follow the established patterns and structure
+2. Implement all core features consistently
+3. Include comprehensive documentation
+4. Add appropriate test cases
+5. Update this README with project details
+
+## Security Considerations
+
+All test projects include:
+- Secure password hashing (BCrypt)
+- Input validation and sanitization
+- Rate limiting and security headers
+- JWT token-based authentication
+- Environment variable configuration
+- Proper error handling without information disclosure
+
+## Future Enhancements
+
+Potential additions to the test suite:
+- **Rust Project**: Systems programming language example
+- **C++ Project**: Complex C++ codebase with templates
+- **C# Project**: .NET Core application
+- **PHP Project**: Laravel-based web application
+- **Ruby Project**: Rails application
+- **Swift Project**: iOS application structure
+- **Kotlin Project**: Android/JVM application
\ No newline at end of file
diff --git a/test/sample-projects/go/user-management/README.md b/test/sample-projects/go/user-management/README.md
new file mode 100644
index 0000000..4534cba
--- /dev/null
+++ b/test/sample-projects/go/user-management/README.md
@@ -0,0 +1,324 @@
+# User Management System (Go)
+
+A comprehensive user management system built in Go for testing Code Index MCP's analysis capabilities.
+
+## Features
+
+- **User Management**: Create, update, delete, and search users
+- **REST API**: Full HTTP API with JSON responses
+- **Authentication**: BCrypt password hashing and JWT tokens
+- **Authorization**: Role-based access control (Admin, User, Guest)
+- **Database**: SQLite with GORM ORM
+- **Pagination**: Efficient pagination for large datasets
+- **Search**: Full-text search across users
+- **Export**: JSON export functionality
+- **Logging**: Structured logging with middleware
+- **CORS**: Cross-origin resource sharing support
+
+## Project Structure
+
+```
+user-management/
+├── cmd/
+│   ├── server/
+│   │   └── main.go             # HTTP server entry point
+│   └── cli/
+│       └── main.go             # CLI application
+├── internal/
+│   ├── models/
+│   │   └── user.go             # User model and types
+│   ├── services/
+│   │   └── user_service.go     # Business logic
+│   └── utils/
+│       └── types.go            # Utility types and helpers
+├── pkg/
+│   └── api/
+│       └── user_handler.go     # HTTP handlers
+├── go.mod                      # Go module file
+├── go.sum                      # Go dependencies
+└── README.md                   # This file
+```
+
+## Technologies Used
+
+- **Go 1.21**: Modern Go with generics and latest features
+- **Gin**: HTTP web framework
+- **GORM**: ORM for database operations
+- **SQLite**: Embedded database
+- **UUID**: Unique identifiers
+- **BCrypt**: Password hashing
+- **JWT**: JSON Web Tokens (planned)
+- **Viper**: Configuration management
+- **Cobra**: CLI framework
+
+## Build and Run
+
+### Prerequisites
+
+- Go 1.21 or higher
+
+### Install Dependencies
+
+```bash
+go mod tidy
+```
+
+### Run HTTP Server
+
+```bash
+go run cmd/server/main.go
+```
+
+The server will start on `http://localhost:8080`.
+
+### Run CLI
+
+```bash
+go run cmd/cli/main.go
+```
+
+### Build
+
+```bash
+# Build server
+go build -o bin/server cmd/server/main.go
+
+# Build CLI
+go build -o bin/cli cmd/cli/main.go
+```
+
+## API Endpoints
+
+### Users
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `POST` | `/api/v1/users` | Create a new user |
+| `GET` | `/api/v1/users` | Get all users (paginated) |
+| `GET` | `/api/v1/users/:id` | Get user by ID |
+| `PUT` | `/api/v1/users/:id` | Update user |
+| `DELETE` | `/api/v1/users/:id` | Delete user |
+| `GET` | `/api/v1/users/search` | Search users |
+| `GET` | `/api/v1/users/stats` | Get user statistics |
+| `GET` | `/api/v1/users/export` | Export users |
+
+### Authentication
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `POST` | `/api/v1/auth/login` | User login |
+| `POST` | `/api/v1/auth/logout` | User logout |
+| `POST` | `/api/v1/auth/change-password` | Change password |
+
+### Admin
+
+| Method | Endpoint | Description |
+|--------|----------|-------------|
+| `POST` | `/api/v1/admin/users/:id/reset-password` | Reset user password |
+| `POST` | `/api/v1/admin/users/:id/permissions` | Add permission |
+| `DELETE` | `/api/v1/admin/users/:id/permissions` | Remove permission |
+
+## Usage Examples
+
+### Create User
+
+```bash
+curl -X POST http://localhost:8080/api/v1/users \
+ -H "Content-Type: application/json" \
+ -d '{
+ "username": "johndoe",
+ "email": "john@example.com",
+ "name": "John Doe",
+ "age": 30,
+ "password": "password123"
+ }'
+```
+
+### Get Users
+
+```bash
+curl "http://localhost:8080/api/v1/users?page=1&page_size=10"
+```
+
+### Search Users
+
+```bash
+curl "http://localhost:8080/api/v1/users/search?q=john&page=1&page_size=10"
+```
+
+### Login
+
+```bash
+curl -X POST http://localhost:8080/api/v1/auth/login \
+ -H "Content-Type: application/json" \
+ -d '{
+ "username": "admin",
+ "password": "admin123"
+ }'
+```
+
+### Get Statistics
+
+```bash
+curl http://localhost:8080/api/v1/users/stats
+```
+
+## Programmatic Usage
+
+```go
+package main
+
+import (
+ "fmt"
+
+ "github.com/example/user-management/internal/models"
+ "github.com/example/user-management/internal/services"
+ "gorm.io/driver/sqlite"
+ "gorm.io/gorm"
+)
+
+func main() {
+ // Initialize database
+ db, err := gorm.Open(sqlite.Open("users.db"), &gorm.Config{})
+ if err != nil {
+ panic(err)
+ }
+
+ // Auto migrate
+ if err := db.AutoMigrate(&models.User{}); err != nil {
+ panic(err)
+ }
+
+ // Initialize service
+ userService := services.NewUserService(db)
+
+ // Create user
+ req := &models.UserRequest{
+ Username: "alice",
+ Email: "alice@example.com",
+ Name: "Alice Smith",
+ Age: 25,
+ Password: "password123",
+ Role: models.RoleUser,
+ }
+
+ user, err := userService.CreateUser(req)
+ if err != nil {
+ panic(err)
+ }
+ fmt.Println("Created user:", user.Username)
+
+ // Authenticate user
+ authUser, err := userService.AuthenticateUser("alice", "password123")
+ if err != nil {
+ panic(err)
+ }
+ fmt.Println("Authenticated:", authUser.Username)
+
+ // Get statistics
+ stats, err := userService.GetUserStats()
+ if err != nil {
+ panic(err)
+ }
+ fmt.Printf("Total users: %d\n", stats.Total)
+}
+```
+
+## Testing Features
+
+This project tests the following Go language features:
+
+### Core Language Features
+- **Structs and Methods**: User model with associated methods
+- **Interfaces**: Service and handler interfaces
+- **Pointers**: Efficient memory management
+- **Error Handling**: Comprehensive error handling patterns
+- **Packages**: Modular code organization
+- **Imports**: Internal and external package imports
+
+### Modern Go Features
+- **Generics**: Type-safe collections (Go 1.18+)
+- **Modules**: Dependency management with go.mod
+- **Context**: Request context handling
+- **Channels**: Concurrent programming (in background tasks)
+- **Goroutines**: Concurrent execution
+- **JSON Tags**: Struct field mapping
+
+### Advanced Features
+- **Reflection**: GORM model reflection
+- **Build Tags**: Conditional compilation
+- **Embedding**: Struct embedding for composition
+- **Type Assertions**: Interface type checking
+- **Panic/Recover**: Error recovery mechanisms
+
+### Framework Integration
+- **Gin**: HTTP router and middleware
+- **GORM**: ORM with hooks and associations
+- **UUID**: Unique identifier generation
+- **BCrypt**: Cryptographic hashing
+- **SQLite**: Embedded database
+
+### Design Patterns
+- **Repository Pattern**: Data access layer
+- **Service Layer**: Business logic separation
+- **Dependency Injection**: Service composition
+- **Middleware Pattern**: HTTP request processing
+- **Factory Pattern**: Service creation
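+
+The service-layer and dependency-injection patterns listed above can be sketched with plain interfaces and constructor injection. This stdlib-only example is illustrative; the interface and type names are hypothetical, not the project's actual APIs:
+
+```go
+package main
+
+import "fmt"
+
+// UserStore abstracts the data layer (repository pattern). The name is
+// illustrative; the real project injects a *gorm.DB instead.
+type UserStore interface {
+	Find(username string) (string, bool)
+}
+
+// memoryStore is a trivial in-memory repository.
+type memoryStore map[string]string
+
+func (m memoryStore) Find(username string) (string, bool) {
+	name, ok := m[username]
+	return name, ok
+}
+
+// UserService receives its dependency via the constructor, so tests can
+// substitute a fake store without touching a database.
+type UserService struct {
+	store UserStore
+}
+
+func NewUserService(store UserStore) *UserService {
+	return &UserService{store: store}
+}
+
+func (s *UserService) DisplayName(username string) string {
+	if name, ok := s.store.Find(username); ok {
+		return name
+	}
+	return "unknown"
+}
+
+func main() {
+	svc := NewUserService(memoryStore{"alice": "Alice Smith"})
+	fmt.Println(svc.DisplayName("alice"))
+}
+```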
+
+## Dependencies
+
+### Core Dependencies
+- **gin-gonic/gin**: Web framework
+- **gorm.io/gorm**: ORM
+- **gorm.io/driver/sqlite**: SQLite driver
+- **google/uuid**: UUID generation
+- **golang.org/x/crypto**: Cryptographic functions
+
+### CLI Dependencies
+- **spf13/cobra**: CLI framework
+- **spf13/viper**: Configuration management
+
+### Development Dependencies
+- **testify**: Testing framework
+- **mockery**: Mock generation
+
+## Configuration
+
+The application can be configured using environment variables or a configuration file:
+
+```yaml
+database:
+ driver: sqlite
+ database: users.db
+
+server:
+ port: 8080
+ host: localhost
+
+jwt:
+ secret_key: your-secret-key
+ expiration_hours: 24
+```
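+
+Viper layers environment variables over file values like those above. A minimal stdlib sketch of that precedence (environment variable wins, file/default value otherwise) looks like this; the `SERVER_PORT`/`SERVER_HOST` names are assumptions, not keys the project necessarily binds:
+
+```go
+package main
+
+import (
+	"fmt"
+	"os"
+)
+
+// envOr returns the value of an environment variable, or fallback when it
+// is unset. This only illustrates the override order; Viper does this
+// (and file parsing) for you.
+func envOr(key, fallback string) string {
+	if v, ok := os.LookupEnv(key); ok {
+		return v
+	}
+	return fallback
+}
+
+func main() {
+	port := envOr("SERVER_PORT", "8080")
+	host := envOr("SERVER_HOST", "localhost")
+	fmt.Printf("listening on %s:%s\n", host, port)
+}
+```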
+
+## Development
+
+### Run Tests
+
+```bash
+go test ./...
+```
+
+### Generate Mocks
+
+```bash
+mockery --all
+```
+
+### Format Code
+
+```bash
+gofmt -w .
+```
+
+### Lint Code
+
+```bash
+golangci-lint run
+```
+
+## License
+
+MIT License - This is a sample project for testing purposes.
\ No newline at end of file
diff --git a/test/sample-projects/go/user-management/cmd/server/main.go b/test/sample-projects/go/user-management/cmd/server/main.go
new file mode 100644
index 0000000..690614a
--- /dev/null
+++ b/test/sample-projects/go/user-management/cmd/server/main.go
@@ -0,0 +1,294 @@
+package main
+
+import (
+ "fmt"
+ "log"
+ "net/http"
+ "time"
+
+ "github.com/example/user-management/internal/models"
+ "github.com/example/user-management/internal/services"
+ "github.com/example/user-management/internal/utils"
+ "github.com/example/user-management/pkg/api"
+ "github.com/gin-gonic/gin"
+ "gorm.io/driver/sqlite"
+ "gorm.io/gorm"
+)
+
+func main() {
+ // Initialize database
+ db, err := initDatabase()
+ if err != nil {
+ log.Fatal("Failed to initialize database:", err)
+ }
+
+ // Initialize services
+ userService := services.NewUserService(db)
+
+ // Initialize API handlers
+ userHandler := api.NewUserHandler(userService)
+
+ // Setup routes
+ router := setupRoutes(userHandler)
+
+ // Create sample data
+ createSampleData(userService)
+
+ // Start server
+ log.Println("Starting server on :8080")
+ if err := router.Run(":8080"); err != nil {
+ log.Fatal("Failed to start server:", err)
+ }
+}
+
+func initDatabase() (*gorm.DB, error) {
+ db, err := gorm.Open(sqlite.Open("users.db"), &gorm.Config{})
+ if err != nil {
+ return nil, err
+ }
+
+ // Auto migrate
+ if err := db.AutoMigrate(&models.User{}); err != nil {
+ return nil, err
+ }
+
+ return db, nil
+}
+
+func setupRoutes(userHandler *api.UserHandler) *gin.Engine {
+ router := gin.Default()
+
+ // Middleware
+ router.Use(corsMiddleware())
+ router.Use(loggingMiddleware())
+
+ // Health check
+ router.GET("/health", healthCheck)
+
+ // API routes
+ v1 := router.Group("/api/v1")
+ {
+ users := v1.Group("/users")
+ {
+ users.POST("", userHandler.CreateUser)
+ users.GET("", userHandler.GetUsers)
+ users.GET("/:id", userHandler.GetUser)
+ users.PUT("/:id", userHandler.UpdateUser)
+ users.DELETE("/:id", userHandler.DeleteUser)
+ users.GET("/search", userHandler.SearchUsers)
+ users.GET("/stats", userHandler.GetUserStats)
+ users.GET("/export", userHandler.ExportUsers)
+ }
+
+ auth := v1.Group("/auth")
+ {
+ auth.POST("/login", userHandler.Login)
+ auth.POST("/logout", userHandler.Logout)
+ auth.POST("/change-password", userHandler.ChangePassword)
+ }
+
+ admin := v1.Group("/admin")
+ {
+ admin.POST("/users/:id/reset-password", userHandler.ResetPassword)
+ admin.POST("/users/:id/permissions", userHandler.AddPermission)
+ admin.DELETE("/users/:id/permissions", userHandler.RemovePermission)
+ }
+ }
+
+ return router
+}
+
+func healthCheck(c *gin.Context) {
+ c.JSON(http.StatusOK, gin.H{
+ "status": "healthy",
+ "timestamp": time.Now().UTC(),
+ "version": "1.0.0",
+ })
+}
+
+func corsMiddleware() gin.HandlerFunc {
+ return func(c *gin.Context) {
+ c.Header("Access-Control-Allow-Origin", "*")
+ c.Header("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS")
+ c.Header("Access-Control-Allow-Headers", "Content-Type, Authorization")
+
+ if c.Request.Method == "OPTIONS" {
+ c.AbortWithStatus(http.StatusOK)
+ return
+ }
+
+ c.Next()
+ }
+}
+
+func loggingMiddleware() gin.HandlerFunc {
+ return gin.LoggerWithFormatter(func(param gin.LogFormatterParams) string {
+ return fmt.Sprintf("%s - [%s] \"%s %s %s\" %d %s \"%s\" %s\n",
+ param.ClientIP,
+ param.TimeStamp.Format(time.RFC1123),
+ param.Method,
+ param.Path,
+ param.Request.Proto,
+ param.StatusCode,
+ param.Latency,
+ param.Request.UserAgent(),
+ param.ErrorMessage,
+ )
+ })
+}
+
+func createSampleData(userService *services.UserService) {
+ // Check if admin user already exists
+ if _, err := userService.GetUserByUsername("admin"); err == nil {
+ return // Admin user already exists
+ }
+
+ // Create admin user
+ adminReq := &models.UserRequest{
+ Username: "admin",
+ Email: "admin@example.com",
+ Name: "System Administrator",
+ Age: 30,
+ Password: "admin123",
+ Role: models.RoleAdmin,
+ }
+
+ admin, err := userService.CreateUser(adminReq)
+ if err != nil {
+ log.Printf("Failed to create admin user: %v", err)
+ return
+ }
+
+ // Add admin permissions
+ permissions := []string{
+ "user_management",
+ "system_admin",
+ "user_read",
+ "user_write",
+ "user_delete",
+ }
+
+ for _, perm := range permissions {
+ if err := userService.AddPermission(admin.ID, perm); err != nil {
+ log.Printf("Failed to add permission %s to admin: %v", perm, err)
+ }
+ }
+
+ // Create sample users
+ sampleUsers := []*models.UserRequest{
+ {
+ Username: "john_doe",
+ Email: "john@example.com",
+ Name: "John Doe",
+ Age: 25,
+ Password: "password123",
+ Role: models.RoleUser,
+ },
+ {
+ Username: "jane_smith",
+ Email: "jane@example.com",
+ Name: "Jane Smith",
+ Age: 28,
+ Password: "password123",
+ Role: models.RoleUser,
+ },
+ {
+ Username: "guest_user",
+ Email: "guest@example.com",
+ Name: "Guest User",
+ Age: 22,
+ Password: "password123",
+ Role: models.RoleGuest,
+ },
+ }
+
+ for _, userReq := range sampleUsers {
+ if _, err := userService.CreateUser(userReq); err != nil {
+ log.Printf("Failed to create user %s: %v", userReq.Username, err)
+ }
+ }
+
+ log.Println("Sample data created successfully")
+}
+
+// Helper functions for demo
+func printUserStats(userService *services.UserService) {
+ stats, err := userService.GetUserStats()
+ if err != nil {
+ log.Printf("Failed to get user stats: %v", err)
+ return
+ }
+
+ log.Printf("User Statistics:")
+ log.Printf(" Total: %d", stats.Total)
+ log.Printf(" Active: %d", stats.Active)
+ log.Printf(" Admin: %d", stats.Admin)
+ log.Printf(" User: %d", stats.User)
+ log.Printf(" Guest: %d", stats.Guest)
+ log.Printf(" With Email: %d", stats.WithEmail)
+}
+
+func demonstrateUserOperations(userService *services.UserService) {
+ log.Println("\n=== User Management Demo ===")
+
+ // Get all users
+ users, total, err := userService.GetAllUsers(1, 10)
+ if err != nil {
+ log.Printf("Failed to get users: %v", err)
+ return
+ }
+
+ log.Printf("Found %d users (total: %d):", len(users), total)
+ for _, user := range users {
+ log.Printf(" - %s (%s) - %s [%s]",
+ user.Username, user.Name, user.Role, user.Status)
+ }
+
+ // Test authentication
+ log.Println("\n=== Authentication Test ===")
+ user, err := userService.AuthenticateUser("admin", "admin123")
+ if err != nil {
+ log.Printf("Authentication failed: %v", err)
+ } else {
+ log.Printf("Authentication successful for: %s", user.Username)
+ log.Printf("Last login: %v", user.LastLogin)
+ }
+
+ // Test search
+ log.Println("\n=== Search Test ===")
+ searchResults, _, err := userService.SearchUsers("john", 1, 10)
+ if err != nil {
+ log.Printf("Search failed: %v", err)
+ } else {
+ log.Printf("Search results for 'john': %d users", len(searchResults))
+ for _, user := range searchResults {
+ log.Printf(" - %s (%s)", user.Username, user.Name)
+ }
+ }
+
+ // Print stats
+ log.Println("\n=== Statistics ===")
+ printUserStats(userService)
+}
+
+// Run demo if not in server mode
+func runDemo() {
+ log.Println("Running User Management Demo...")
+
+ // Initialize database
+ db, err := initDatabase()
+ if err != nil {
+ log.Fatal("Failed to initialize database:", err)
+ }
+
+ // Initialize services
+ userService := services.NewUserService(db)
+
+ // Create sample data
+ createSampleData(userService)
+
+ // Demonstrate operations
+ demonstrateUserOperations(userService)
+
+ log.Println("\nDemo completed!")
+}
\ No newline at end of file
diff --git a/test/sample-projects/go/user-management/go.mod b/test/sample-projects/go/user-management/go.mod
new file mode 100644
index 0000000..ae1357e
--- /dev/null
+++ b/test/sample-projects/go/user-management/go.mod
@@ -0,0 +1,53 @@
+module github.com/example/user-management
+
+go 1.21
+
+require (
+ github.com/gin-gonic/gin v1.9.1
+ github.com/golang-jwt/jwt/v5 v5.0.0
+ github.com/google/uuid v1.3.0
+ github.com/spf13/cobra v1.7.0
+ github.com/spf13/viper v1.16.0
+ golang.org/x/crypto v0.11.0
+ gorm.io/driver/sqlite v1.5.2
+ gorm.io/gorm v1.25.2
+)
+
+require (
+ github.com/bytedance/sonic v1.9.1 // indirect
+ github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
+ github.com/fsnotify/fsnotify v1.6.0 // indirect
+ github.com/gabriel-vasile/mimetype v1.4.2 // indirect
+ github.com/gin-contrib/sse v0.1.0 // indirect
+ github.com/go-playground/locales v0.14.1 // indirect
+ github.com/go-playground/universal-translator v0.18.1 // indirect
+ github.com/go-playground/validator/v10 v10.14.0 // indirect
+ github.com/goccy/go-json v0.10.2 // indirect
+ github.com/hashicorp/hcl v1.0.0 // indirect
+ github.com/inconshreveable/mousetrap v1.1.0 // indirect
+ github.com/jinzhu/inflection v1.0.0 // indirect
+ github.com/jinzhu/now v1.1.5 // indirect
+ github.com/json-iterator/go v1.1.12 // indirect
+ github.com/klauspost/cpuid/v2 v2.2.4 // indirect
+ github.com/leodido/go-urn v1.2.4 // indirect
+ github.com/magiconair/properties v1.8.7 // indirect
+ github.com/mattn/go-isatty v0.0.19 // indirect
+ github.com/mattn/go-sqlite3 v1.14.17 // indirect
+ github.com/mitchellh/mapstructure v1.5.0 // indirect
+ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
+ github.com/modern-go/reflect2 v1.0.2 // indirect
+ github.com/pelletier/go-toml/v2 v2.0.8 // indirect
+ github.com/spf13/afero v1.9.5 // indirect
+ github.com/spf13/cast v1.5.1 // indirect
+ github.com/spf13/jwalterweatherman v1.1.0 // indirect
+ github.com/spf13/pflag v1.0.5 // indirect
+ github.com/subosito/gotenv v1.4.2 // indirect
+ github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
+ github.com/ugorji/go/codec v1.2.11 // indirect
+ golang.org/x/arch v0.3.0 // indirect
+ golang.org/x/net v0.10.0 // indirect
+ golang.org/x/sys v0.10.0 // indirect
+ golang.org/x/text v0.11.0 // indirect
+ gopkg.in/ini.v1 v1.67.0 // indirect
+ gopkg.in/yaml.v3 v3.0.1 // indirect
+)
\ No newline at end of file
diff --git a/test/sample-projects/go/user-management/internal/models/user.go b/test/sample-projects/go/user-management/internal/models/user.go
new file mode 100644
index 0000000..465abf0
--- /dev/null
+++ b/test/sample-projects/go/user-management/internal/models/user.go
@@ -0,0 +1,310 @@
+package models
+
+import (
+ "encoding/json"
+ "errors"
+ "time"
+
+ "github.com/google/uuid"
+ "golang.org/x/crypto/bcrypt"
+ "gorm.io/gorm"
+)
+
+// UserRole represents the role of a user
+type UserRole string
+
+const (
+ RoleAdmin UserRole = "admin"
+ RoleUser UserRole = "user"
+ RoleGuest UserRole = "guest"
+)
+
+// UserStatus represents the status of a user
+type UserStatus string
+
+const (
+ StatusActive UserStatus = "active"
+ StatusInactive UserStatus = "inactive"
+ StatusSuspended UserStatus = "suspended"
+ StatusDeleted UserStatus = "deleted"
+)
+
+// User represents a user in the system
+type User struct {
+ ID uuid.UUID `json:"id" gorm:"type:uuid;primary_key"`
+ Username string `json:"username" gorm:"uniqueIndex;not null"`
+ Email string `json:"email" gorm:"uniqueIndex"`
+ Name string `json:"name" gorm:"not null"`
+ Age int `json:"age"`
+ PasswordHash string `json:"-" gorm:"not null"`
+ Role UserRole `json:"role" gorm:"default:user"`
+ Status UserStatus `json:"status" gorm:"default:active"`
+ LastLogin *time.Time `json:"last_login"`
+ LoginAttempts int `json:"login_attempts" gorm:"default:0"`
+ CreatedAt time.Time `json:"created_at"`
+ UpdatedAt time.Time `json:"updated_at"`
+ DeletedAt gorm.DeletedAt `json:"-" gorm:"index"`
+
+ // Permissions is a JSON field containing user permissions
+ Permissions []string `json:"permissions" gorm:"serializer:json;type:json"`
+
+ // Metadata for additional user information
+ Metadata map[string]interface{} `json:"metadata" gorm:"serializer:json;type:json"`
+}
+
+// UserRequest represents a request to create or update a user
+type UserRequest struct {
+ Username string `json:"username" binding:"required,min=3,max=20"`
+ Email string `json:"email" binding:"omitempty,email"`
+ Name string `json:"name" binding:"required,min=1,max=100"`
+ Age int `json:"age" binding:"min=0,max=150"`
+ Password string `json:"password" binding:"required,min=8"`
+ Role UserRole `json:"role" binding:"omitempty,oneof=admin user guest"`
+ Metadata map[string]interface{} `json:"metadata"`
+}
+
+// UserResponse represents a user response (without sensitive data)
+type UserResponse struct {
+ ID uuid.UUID `json:"id"`
+ Username string `json:"username"`
+ Email string `json:"email"`
+ Name string `json:"name"`
+ Age int `json:"age"`
+ Role UserRole `json:"role"`
+ Status UserStatus `json:"status"`
+ LastLogin *time.Time `json:"last_login"`
+ CreatedAt time.Time `json:"created_at"`
+ UpdatedAt time.Time `json:"updated_at"`
+ Permissions []string `json:"permissions"`
+ Metadata map[string]interface{} `json:"metadata"`
+}
+
+// BeforeCreate is a GORM hook that runs before creating a user
+func (u *User) BeforeCreate(tx *gorm.DB) error {
+ if u.ID == uuid.Nil {
+ u.ID = uuid.New()
+ }
+
+ if u.Permissions == nil {
+ u.Permissions = []string{}
+ }
+
+ if u.Metadata == nil {
+ u.Metadata = make(map[string]interface{})
+ }
+
+ return nil
+}
+
+// SetPassword hashes and sets the user's password
+func (u *User) SetPassword(password string) error {
+ if len(password) < 8 {
+ return errors.New("password must be at least 8 characters long")
+ }
+
+ hash, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
+ if err != nil {
+ return err
+ }
+
+ u.PasswordHash = string(hash)
+ return nil
+}
+
+// VerifyPassword checks if the provided password matches the user's password
+func (u *User) VerifyPassword(password string) bool {
+ err := bcrypt.CompareHashAndPassword([]byte(u.PasswordHash), []byte(password))
+ return err == nil
+}
+
+// HasPermission checks if the user has a specific permission
+func (u *User) HasPermission(permission string) bool {
+ for _, p := range u.Permissions {
+ if p == permission {
+ return true
+ }
+ }
+ return false
+}
+
+// AddPermission adds a permission to the user
+func (u *User) AddPermission(permission string) {
+ if !u.HasPermission(permission) {
+ u.Permissions = append(u.Permissions, permission)
+ }
+}
+
+// RemovePermission removes a permission from the user
+func (u *User) RemovePermission(permission string) {
+ for i, p := range u.Permissions {
+ if p == permission {
+ u.Permissions = append(u.Permissions[:i], u.Permissions[i+1:]...)
+ break
+ }
+ }
+}
+
+// IsActive checks if the user is active
+func (u *User) IsActive() bool {
+ return u.Status == StatusActive
+}
+
+// IsAdmin checks if the user is an admin
+func (u *User) IsAdmin() bool {
+ return u.Role == RoleAdmin
+}
+
+// IsLocked checks if the user is locked due to too many failed login attempts
+func (u *User) IsLocked() bool {
+ return u.LoginAttempts >= 5 || u.Status == StatusSuspended
+}
+
+// Login records a successful login
+func (u *User) Login() error {
+ if !u.IsActive() {
+ return errors.New("user is not active")
+ }
+
+ if u.IsLocked() {
+ return errors.New("user is locked")
+ }
+
+ now := time.Now()
+ u.LastLogin = &now
+ u.LoginAttempts = 0
+
+ return nil
+}
+
+// FailedLoginAttempt records a failed login attempt
+func (u *User) FailedLoginAttempt() {
+ u.LoginAttempts++
+ if u.LoginAttempts >= 5 {
+ u.Status = StatusSuspended
+ }
+}
+
+// ResetLoginAttempts resets the login attempts counter
+func (u *User) ResetLoginAttempts() {
+ u.LoginAttempts = 0
+}
+
+// Activate activates the user account
+func (u *User) Activate() {
+ u.Status = StatusActive
+ u.LoginAttempts = 0
+}
+
+// Deactivate deactivates the user account
+func (u *User) Deactivate() {
+ u.Status = StatusInactive
+}
+
+// Suspend suspends the user account
+func (u *User) Suspend() {
+ u.Status = StatusSuspended
+}
+
+// Delete marks the user as deleted
+func (u *User) Delete() {
+ u.Status = StatusDeleted
+}
+
+// ToResponse converts a User to a UserResponse
+func (u *User) ToResponse() *UserResponse {
+ return &UserResponse{
+ ID: u.ID,
+ Username: u.Username,
+ Email: u.Email,
+ Name: u.Name,
+ Age: u.Age,
+ Role: u.Role,
+ Status: u.Status,
+ LastLogin: u.LastLogin,
+ CreatedAt: u.CreatedAt,
+ UpdatedAt: u.UpdatedAt,
+ Permissions: u.Permissions,
+ Metadata: u.Metadata,
+ }
+}
+
+// FromRequest creates a User from a UserRequest
+func (u *User) FromRequest(req *UserRequest) error {
+ u.Username = req.Username
+ u.Email = req.Email
+ u.Name = req.Name
+ u.Age = req.Age
+ u.Role = req.Role
+ u.Metadata = req.Metadata
+
+ if req.Password != "" {
+ return u.SetPassword(req.Password)
+ }
+
+ return nil
+}
+
+// MarshalJSON customizes JSON marshaling for User
+func (u *User) MarshalJSON() ([]byte, error) {
+ return json.Marshal(u.ToResponse())
+}
+
+// Validate validates the user model
+func (u *User) Validate() error {
+ if len(u.Username) < 3 || len(u.Username) > 20 {
+ return errors.New("username must be between 3 and 20 characters")
+ }
+
+ if len(u.Name) == 0 || len(u.Name) > 100 {
+ return errors.New("name must be between 1 and 100 characters")
+ }
+
+ if u.Age < 0 || u.Age > 150 {
+ return errors.New("age must be between 0 and 150")
+ }
+
+ if u.Role != RoleAdmin && u.Role != RoleUser && u.Role != RoleGuest {
+ return errors.New("invalid role")
+ }
+
+ if u.Status != StatusActive && u.Status != StatusInactive &&
+ u.Status != StatusSuspended && u.Status != StatusDeleted {
+ return errors.New("invalid status")
+ }
+
+ return nil
+}
+
+// TableName returns the table name for GORM
+func (u *User) TableName() string {
+ return "users"
+}
+
+// GetMetadata gets a metadata value by key
+func (u *User) GetMetadata(key string) (interface{}, bool) {
+ if u.Metadata == nil {
+ return nil, false
+ }
+ value, exists := u.Metadata[key]
+ return value, exists
+}
+
+// SetMetadata sets a metadata value
+func (u *User) SetMetadata(key string, value interface{}) {
+ if u.Metadata == nil {
+ u.Metadata = make(map[string]interface{})
+ }
+ u.Metadata[key] = value
+}
+
+// RemoveMetadata removes a metadata key
+func (u *User) RemoveMetadata(key string) {
+ if u.Metadata != nil {
+ delete(u.Metadata, key)
+ }
+}
+
+// String returns a string representation of the user
+func (u *User) String() string {
+ return u.Username + " (" + u.Name + ")"
+}
\ No newline at end of file
diff --git a/test/sample-projects/go/user-management/internal/services/user_service.go b/test/sample-projects/go/user-management/internal/services/user_service.go
new file mode 100644
index 0000000..aeb3106
--- /dev/null
+++ b/test/sample-projects/go/user-management/internal/services/user_service.go
@@ -0,0 +1,419 @@
+package services
+
+import (
+ "encoding/json"
+ "errors"
+ "fmt"
+ "strings"
+ "time"
+
+ "github.com/example/user-management/internal/models"
+ "github.com/example/user-management/internal/utils"
+ "github.com/google/uuid"
+ "gorm.io/gorm"
+)
+
+// UserService handles user-related business logic
+type UserService struct {
+ db *gorm.DB
+}
+
+// NewUserService creates a new user service
+func NewUserService(db *gorm.DB) *UserService {
+ return &UserService{db: db}
+}
+
+// CreateUser creates a new user
+func (s *UserService) CreateUser(req *models.UserRequest) (*models.User, error) {
+ // Check if username already exists
+ var existingUser models.User
+ if err := s.db.Where("username = ?", req.Username).First(&existingUser).Error; err == nil {
+ return nil, errors.New("username already exists")
+ }
+
+ // Check if email already exists (if provided)
+ if req.Email != "" {
+ if err := s.db.Where("email = ?", req.Email).First(&existingUser).Error; err == nil {
+ return nil, errors.New("email already exists")
+ }
+ }
+
+ // Create new user
+ user := &models.User{
+ Role: models.RoleUser,
+ Status: models.StatusActive,
+ }
+
+ if err := user.FromRequest(req); err != nil {
+ return nil, fmt.Errorf("failed to create user from request: %w", err)
+ }
+
+ if err := user.Validate(); err != nil {
+ return nil, fmt.Errorf("user validation failed: %w", err)
+ }
+
+ if err := s.db.Create(user).Error; err != nil {
+ return nil, fmt.Errorf("failed to create user: %w", err)
+ }
+
+ return user, nil
+}
+
+// GetUserByID retrieves a user by ID
+func (s *UserService) GetUserByID(id uuid.UUID) (*models.User, error) {
+ var user models.User
+ if err := s.db.First(&user, "id = ?", id).Error; err != nil {
+ if errors.Is(err, gorm.ErrRecordNotFound) {
+ return nil, errors.New("user not found")
+ }
+ return nil, fmt.Errorf("failed to get user: %w", err)
+ }
+ return &user, nil
+}
+
+// GetUserByUsername retrieves a user by username
+func (s *UserService) GetUserByUsername(username string) (*models.User, error) {
+ var user models.User
+ if err := s.db.Where("username = ?", username).First(&user).Error; err != nil {
+ if errors.Is(err, gorm.ErrRecordNotFound) {
+ return nil, errors.New("user not found")
+ }
+ return nil, fmt.Errorf("failed to get user: %w", err)
+ }
+ return &user, nil
+}
+
+// GetUserByEmail retrieves a user by email
+func (s *UserService) GetUserByEmail(email string) (*models.User, error) {
+ var user models.User
+ if err := s.db.Where("email = ?", email).First(&user).Error; err != nil {
+ if errors.Is(err, gorm.ErrRecordNotFound) {
+ return nil, errors.New("user not found")
+ }
+ return nil, fmt.Errorf("failed to get user: %w", err)
+ }
+ return &user, nil
+}
+
+// UpdateUser updates an existing user
+func (s *UserService) UpdateUser(id uuid.UUID, updates map[string]interface{}) (*models.User, error) {
+ user, err := s.GetUserByID(id)
+ if err != nil {
+ return nil, err
+ }
+
+ // Apply updates
+ for key, value := range updates {
+ switch key {
+ case "name":
+ if name, ok := value.(string); ok {
+ user.Name = name
+ }
+ case "age":
+ switch v := value.(type) {
+ case int:
+ user.Age = v
+ case float64: // numbers decoded from JSON arrive as float64
+ user.Age = int(v)
+ }
+ case "email":
+ if email, ok := value.(string); ok {
+ user.Email = email
+ }
+ case "role":
+ switch v := value.(type) {
+ case models.UserRole:
+ user.Role = v
+ case string: // JSON payloads carry the role as a plain string
+ user.Role = models.UserRole(v)
+ }
+ case "status":
+ switch v := value.(type) {
+ case models.UserStatus:
+ user.Status = v
+ case string: // JSON payloads carry the status as a plain string
+ user.Status = models.UserStatus(v)
+ }
+ case "metadata":
+ if metadata, ok := value.(map[string]interface{}); ok {
+ user.Metadata = metadata
+ }
+ }
+ }
+
+ if err := user.Validate(); err != nil {
+ return nil, fmt.Errorf("user validation failed: %w", err)
+ }
+
+ if err := s.db.Save(user).Error; err != nil {
+ return nil, fmt.Errorf("failed to update user: %w", err)
+ }
+
+ return user, nil
+}
+
+// DeleteUser soft deletes a user
+func (s *UserService) DeleteUser(id uuid.UUID) error {
+ user, err := s.GetUserByID(id)
+ if err != nil {
+ return err
+ }
+
+ user.Delete()
+
+ if err := s.db.Save(user).Error; err != nil {
+ return fmt.Errorf("failed to delete user: %w", err)
+ }
+
+ return nil
+}
+
+// HardDeleteUser permanently deletes a user
+func (s *UserService) HardDeleteUser(id uuid.UUID) error {
+ if err := s.db.Unscoped().Delete(&models.User{}, "id = ?", id).Error; err != nil {
+ return fmt.Errorf("failed to hard delete user: %w", err)
+ }
+ return nil
+}
+
+// GetAllUsers retrieves all users with pagination
+func (s *UserService) GetAllUsers(page, pageSize int) ([]*models.User, int64, error) {
+ var users []*models.User
+ var total int64
+
+ // Count total users
+ if err := s.db.Model(&models.User{}).Count(&total).Error; err != nil {
+ return nil, 0, fmt.Errorf("failed to count users: %w", err)
+ }
+
+ // Get users with pagination
+ offset := (page - 1) * pageSize
+ if err := s.db.Limit(pageSize).Offset(offset).Find(&users).Error; err != nil {
+ return nil, 0, fmt.Errorf("failed to get users: %w", err)
+ }
+
+ return users, total, nil
+}
+
+// GetActiveUsers retrieves all active users
+func (s *UserService) GetActiveUsers() ([]*models.User, error) {
+ var users []*models.User
+ if err := s.db.Where("status = ?", models.StatusActive).Find(&users).Error; err != nil {
+ return nil, fmt.Errorf("failed to get active users: %w", err)
+ }
+ return users, nil
+}
+
+// GetUsersByRole retrieves users by role
+func (s *UserService) GetUsersByRole(role models.UserRole) ([]*models.User, error) {
+ var users []*models.User
+ if err := s.db.Where("role = ?", role).Find(&users).Error; err != nil {
+ return nil, fmt.Errorf("failed to get users by role: %w", err)
+ }
+ return users, nil
+}
+
+// SearchUsers searches for users by name or username
+func (s *UserService) SearchUsers(query string, page, pageSize int) ([]*models.User, int64, error) {
+ var users []*models.User
+ var total int64
+
+ searchQuery := "%" + strings.ToLower(query) + "%"
+
+ // Count total matching users
+ if err := s.db.Model(&models.User{}).Where(
+ "LOWER(name) LIKE ? OR LOWER(username) LIKE ? OR LOWER(email) LIKE ?",
+ searchQuery, searchQuery, searchQuery,
+ ).Count(&total).Error; err != nil {
+ return nil, 0, fmt.Errorf("failed to count search results: %w", err)
+ }
+
+ // Get matching users with pagination
+ offset := (page - 1) * pageSize
+ if err := s.db.Where(
+ "LOWER(name) LIKE ? OR LOWER(username) LIKE ? OR LOWER(email) LIKE ?",
+ searchQuery, searchQuery, searchQuery,
+ ).Limit(pageSize).Offset(offset).Find(&users).Error; err != nil {
+ return nil, 0, fmt.Errorf("failed to search users: %w", err)
+ }
+
+ return users, total, nil
+}
+
+// GetUserStats returns user statistics
+func (s *UserService) GetUserStats() (*utils.UserStats, error) {
+ var stats utils.UserStats
+
+ // Total users
+ if err := s.db.Model(&models.User{}).Count(&stats.Total).Error; err != nil {
+ return nil, fmt.Errorf("failed to count total users: %w", err)
+ }
+
+ // Active users
+ if err := s.db.Model(&models.User{}).Where("status = ?", models.StatusActive).Count(&stats.Active).Error; err != nil {
+ return nil, fmt.Errorf("failed to count active users: %w", err)
+ }
+
+ // Admin users
+ if err := s.db.Model(&models.User{}).Where("role = ?", models.RoleAdmin).Count(&stats.Admin).Error; err != nil {
+ return nil, fmt.Errorf("failed to count admin users: %w", err)
+ }
+
+ // Regular users
+ if err := s.db.Model(&models.User{}).Where("role = ?", models.RoleUser).Count(&stats.User).Error; err != nil {
+ return nil, fmt.Errorf("failed to count regular users: %w", err)
+ }
+
+ // Guest users
+ if err := s.db.Model(&models.User{}).Where("role = ?", models.RoleGuest).Count(&stats.Guest).Error; err != nil {
+ return nil, fmt.Errorf("failed to count guest users: %w", err)
+ }
+
+ // Users with email
+ if err := s.db.Model(&models.User{}).Where("email != ''").Count(&stats.WithEmail).Error; err != nil {
+ return nil, fmt.Errorf("failed to count users with email: %w", err)
+ }
+
+ return &stats, nil
+}
+
+// AuthenticateUser authenticates a user with username and password
+func (s *UserService) AuthenticateUser(username, password string) (*models.User, error) {
+ user, err := s.GetUserByUsername(username)
+ if err != nil {
+ return nil, errors.New("invalid username or password")
+ }
+
+ if !user.IsActive() {
+ return nil, errors.New("user account is not active")
+ }
+
+ if user.IsLocked() {
+ return nil, errors.New("user account is locked")
+ }
+
+ if !user.VerifyPassword(password) {
+ user.FailedLoginAttempt()
+ if err := s.db.Save(user).Error; err != nil {
+ return nil, fmt.Errorf("failed to update failed login attempt: %w", err)
+ }
+ return nil, errors.New("invalid username or password")
+ }
+
+ // Successful login
+ if err := user.Login(); err != nil {
+ return nil, fmt.Errorf("login failed: %w", err)
+ }
+
+ if err := s.db.Save(user).Error; err != nil {
+ return nil, fmt.Errorf("failed to update login info: %w", err)
+ }
+
+ return user, nil
+}
+
+// ChangePassword changes a user's password
+func (s *UserService) ChangePassword(id uuid.UUID, currentPassword, newPassword string) error {
+ user, err := s.GetUserByID(id)
+ if err != nil {
+ return err
+ }
+
+ if !user.VerifyPassword(currentPassword) {
+ return errors.New("current password is incorrect")
+ }
+
+ if err := user.SetPassword(newPassword); err != nil {
+ return fmt.Errorf("failed to set new password: %w", err)
+ }
+
+ if err := s.db.Save(user).Error; err != nil {
+ return fmt.Errorf("failed to update password: %w", err)
+ }
+
+ return nil
+}
+
+// ResetPassword resets a user's password (admin function)
+func (s *UserService) ResetPassword(id uuid.UUID, newPassword string) error {
+ user, err := s.GetUserByID(id)
+ if err != nil {
+ return err
+ }
+
+ if err := user.SetPassword(newPassword); err != nil {
+ return fmt.Errorf("failed to set new password: %w", err)
+ }
+
+ user.ResetLoginAttempts()
+
+ if err := s.db.Save(user).Error; err != nil {
+ return fmt.Errorf("failed to update password: %w", err)
+ }
+
+ return nil
+}
+
+// AddPermission adds a permission to a user
+func (s *UserService) AddPermission(id uuid.UUID, permission string) error {
+ user, err := s.GetUserByID(id)
+ if err != nil {
+ return err
+ }
+
+ user.AddPermission(permission)
+
+ if err := s.db.Save(user).Error; err != nil {
+ return fmt.Errorf("failed to add permission: %w", err)
+ }
+
+ return nil
+}
+
+// RemovePermission removes a permission from a user
+func (s *UserService) RemovePermission(id uuid.UUID, permission string) error {
+ user, err := s.GetUserByID(id)
+ if err != nil {
+ return err
+ }
+
+ user.RemovePermission(permission)
+
+ if err := s.db.Save(user).Error; err != nil {
+ return fmt.Errorf("failed to remove permission: %w", err)
+ }
+
+ return nil
+}
+
+// ExportUsers exports users to JSON
+func (s *UserService) ExportUsers() ([]byte, error) {
+ users, _, err := s.GetAllUsers(1, 1000) // export is capped at the first 1,000 users
+ if err != nil {
+ return nil, fmt.Errorf("failed to get users for export: %w", err)
+ }
+
+ var responses []*models.UserResponse
+ for _, user := range users {
+ responses = append(responses, user.ToResponse())
+ }
+
+ data, err := json.MarshalIndent(responses, "", " ")
+ if err != nil {
+ return nil, fmt.Errorf("failed to marshal users: %w", err)
+ }
+
+ return data, nil
+}
+
+// GetUserActivity returns user activity information
+func (s *UserService) GetUserActivity(id uuid.UUID) (*utils.UserActivity, error) {
+ user, err := s.GetUserByID(id)
+ if err != nil {
+ return nil, err
+ }
+
+ activity := &utils.UserActivity{
+ UserID: user.ID,
+ Username: user.Username,
+ LastLogin: user.LastLogin,
+ LoginAttempts: user.LoginAttempts,
+ IsActive: user.IsActive(),
+ IsLocked: user.IsLocked(),
+ CreatedAt: user.CreatedAt,
+ UpdatedAt: user.UpdatedAt,
+ }
+
+ return activity, nil
+}
\ No newline at end of file
diff --git a/test/sample-projects/go/user-management/internal/utils/types.go b/test/sample-projects/go/user-management/internal/utils/types.go
new file mode 100644
index 0000000..7d64474
--- /dev/null
+++ b/test/sample-projects/go/user-management/internal/utils/types.go
@@ -0,0 +1,250 @@
+package utils
+
+import (
+ "time"
+
+ "github.com/google/uuid"
+)
+
+// UserStats represents user statistics
+type UserStats struct {
+ Total int64 `json:"total"`
+ Active int64 `json:"active"`
+ Admin int64 `json:"admin"`
+ User int64 `json:"user"`
+ Guest int64 `json:"guest"`
+ WithEmail int64 `json:"with_email"`
+}
+
+// UserActivity represents user activity information
+type UserActivity struct {
+ UserID uuid.UUID `json:"user_id"`
+ Username string `json:"username"`
+ LastLogin *time.Time `json:"last_login"`
+ LoginAttempts int `json:"login_attempts"`
+ IsActive bool `json:"is_active"`
+ IsLocked bool `json:"is_locked"`
+ CreatedAt time.Time `json:"created_at"`
+ UpdatedAt time.Time `json:"updated_at"`
+}
+
+// PaginatedResponse represents a paginated response
+type PaginatedResponse struct {
+ Data interface{} `json:"data"`
+ Page int `json:"page"`
+ PageSize int `json:"page_size"`
+ Total int64 `json:"total"`
+ TotalPages int `json:"total_pages"`
+}
+
+// NewPaginatedResponse creates a new paginated response
+func NewPaginatedResponse(data interface{}, page, pageSize int, total int64) *PaginatedResponse {
+ totalPages := int((total + int64(pageSize) - 1) / int64(pageSize))
+ return &PaginatedResponse{
+ Data: data,
+ Page: page,
+ PageSize: pageSize,
+ Total: total,
+ TotalPages: totalPages,
+ }
+}
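+
+// Illustrative check of the ceiling division above (values are arbitrary):
+//
+//	resp := NewPaginatedResponse(users, 2, 20, 45)
+//	// resp.TotalPages == 3, i.e. ceil(45 / 20)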
+
+// APIResponse represents a standard API response
+type APIResponse struct {
+ Success bool `json:"success"`
+ Message string `json:"message"`
+ Data interface{} `json:"data,omitempty"`
+ Error string `json:"error,omitempty"`
+}
+
+// NewSuccessResponse creates a new success response
+func NewSuccessResponse(message string, data interface{}) *APIResponse {
+ return &APIResponse{
+ Success: true,
+ Message: message,
+ Data: data,
+ }
+}
+
+// NewErrorResponse creates a new error response
+func NewErrorResponse(message string, err error) *APIResponse {
+ resp := &APIResponse{
+ Success: false,
+ Message: message,
+ }
+
+ if err != nil {
+ resp.Error = err.Error()
+ }
+
+ return resp
+}
+
+// ValidationError represents a validation error
+type ValidationError struct {
+ Field string `json:"field"`
+ Message string `json:"message"`
+}
+
+// ValidationErrors represents multiple validation errors
+type ValidationErrors struct {
+ Errors []ValidationError `json:"errors"`
+}
+
+// NewValidationErrors creates a new validation errors instance
+func NewValidationErrors() *ValidationErrors {
+ return &ValidationErrors{
+ Errors: make([]ValidationError, 0),
+ }
+}
+
+// Add adds a validation error
+func (ve *ValidationErrors) Add(field, message string) {
+ ve.Errors = append(ve.Errors, ValidationError{
+ Field: field,
+ Message: message,
+ })
+}
+
+// HasErrors returns true if there are validation errors
+func (ve *ValidationErrors) HasErrors() bool {
+ return len(ve.Errors) > 0
+}
+
+// Error implements the error interface
+func (ve *ValidationErrors) Error() string {
+ if len(ve.Errors) == 0 {
+ return ""
+ }
+
+ if len(ve.Errors) == 1 {
+ return ve.Errors[0].Message
+ }
+
+ return "multiple validation errors"
+}
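+
+// Illustrative accumulation pattern (field names are arbitrary):
+//
+//	ve := NewValidationErrors()
+//	ve.Add("email", "email is required")
+//	if ve.HasErrors() {
+//		return ve // *ValidationErrors satisfies the error interface
+//	}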
+
+// DatabaseConfig represents database configuration
+type DatabaseConfig struct {
+ Driver string `json:"driver"`
+ Host string `json:"host"`
+ Port int `json:"port"`
+ Database string `json:"database"`
+ Username string `json:"username"`
+ Password string `json:"password"`
+ SSLMode string `json:"ssl_mode"`
+}
+
+// ServerConfig represents server configuration
+type ServerConfig struct {
+ Port int `json:"port"`
+ Host string `json:"host"`
+ ReadTimeout int `json:"read_timeout"`
+ WriteTimeout int `json:"write_timeout"`
+ IdleTimeout int `json:"idle_timeout"`
+}
+
+// JWTConfig represents JWT configuration
+type JWTConfig struct {
+ SecretKey string `json:"secret_key"`
+ ExpirationHours int `json:"expiration_hours"`
+ RefreshHours int `json:"refresh_hours"`
+ Issuer string `json:"issuer"`
+ SigningAlgorithm string `json:"signing_algorithm"`
+}
+
+// Config represents application configuration
+type Config struct {
+ Database DatabaseConfig `json:"database"`
+ Server ServerConfig `json:"server"`
+ JWT JWTConfig `json:"jwt"`
+ LogLevel string `json:"log_level"`
+ Debug bool `json:"debug"`
+}
+
+// SearchParams represents search parameters
+type SearchParams struct {
+ Query string `json:"query"`
+ Page int `json:"page"`
+ PageSize int `json:"page_size"`
+ SortBy string `json:"sort_by"`
+ SortDir string `json:"sort_dir"`
+}
+
+// NewSearchParams creates new search parameters with defaults
+func NewSearchParams() *SearchParams {
+ return &SearchParams{
+ Page: 1,
+ PageSize: 20,
+ SortBy: "created_at",
+ SortDir: "desc",
+ }
+}
+
+// Validate validates search parameters
+func (sp *SearchParams) Validate() error {
+ if sp.Page < 1 {
+ sp.Page = 1
+ }
+
+ if sp.PageSize < 1 {
+ sp.PageSize = 20
+ }
+
+ if sp.PageSize > 100 {
+ sp.PageSize = 100
+ }
+
+ if sp.SortBy == "" {
+ sp.SortBy = "created_at"
+ }
+
+ if sp.SortDir != "asc" && sp.SortDir != "desc" {
+ sp.SortDir = "desc"
+ }
+
+ return nil
+}
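+
+// Illustrative: Validate normalizes out-of-range values in place (inputs are arbitrary):
+//
+//	sp := &SearchParams{Page: 0, PageSize: 500}
+//	_ = sp.Validate()
+//	// sp.Page == 1, sp.PageSize == 100, sp.SortBy == "created_at", sp.SortDir == "desc"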
+
+// FilterParams represents filter parameters
+type FilterParams struct {
+ Role string `json:"role"`
+ Status string `json:"status"`
+ AgeMin int `json:"age_min"`
+ AgeMax int `json:"age_max"`
+ CreatedAt time.Time `json:"created_at"`
+ UpdatedAt time.Time `json:"updated_at"`
+}
+
+// AuditLog represents an audit log entry
+type AuditLog struct {
+ ID uuid.UUID `json:"id"`
+ UserID uuid.UUID `json:"user_id"`
+ Action string `json:"action"`
+ Resource string `json:"resource"`
+ Details map[string]interface{} `json:"details"`
+ IPAddress string `json:"ip_address"`
+ UserAgent string `json:"user_agent"`
+ CreatedAt time.Time `json:"created_at"`
+}
+
+// Session represents a user session
+type Session struct {
+ ID uuid.UUID `json:"id"`
+ UserID uuid.UUID `json:"user_id"`
+ Token string `json:"token"`
+ ExpiresAt time.Time `json:"expires_at"`
+ CreatedAt time.Time `json:"created_at"`
+ UpdatedAt time.Time `json:"updated_at"`
+}
+
+// IsExpired checks if the session is expired
+func (s *Session) IsExpired() bool {
+ return time.Now().After(s.ExpiresAt)
+}
+
+// ExtendSession extends the session expiration
+func (s *Session) ExtendSession(duration time.Duration) {
+ s.ExpiresAt = time.Now().Add(duration)
+ s.UpdatedAt = time.Now()
+}
\ No newline at end of file
diff --git a/test/sample-projects/go/user-management/pkg/api/user_handler.go b/test/sample-projects/go/user-management/pkg/api/user_handler.go
new file mode 100644
index 0000000..1132c5a
--- /dev/null
+++ b/test/sample-projects/go/user-management/pkg/api/user_handler.go
@@ -0,0 +1,309 @@
+package api
+
+import (
+ "net/http"
+ "strconv"
+
+ "github.com/example/user-management/internal/models"
+ "github.com/example/user-management/internal/services"
+ "github.com/example/user-management/internal/utils"
+ "github.com/gin-gonic/gin"
+ "github.com/google/uuid"
+)
+
+// UserHandler handles user-related HTTP requests
+type UserHandler struct {
+ userService *services.UserService
+}
+
+// NewUserHandler creates a new user handler
+func NewUserHandler(userService *services.UserService) *UserHandler {
+ return &UserHandler{
+ userService: userService,
+ }
+}
+
+// CreateUser handles user creation
+func (h *UserHandler) CreateUser(c *gin.Context) {
+ var req models.UserRequest
+ if err := c.ShouldBindJSON(&req); err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Invalid request", err))
+ return
+ }
+
+ user, err := h.userService.CreateUser(&req)
+ if err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Failed to create user", err))
+ return
+ }
+
+ c.JSON(http.StatusCreated, utils.NewSuccessResponse("User created successfully", user.ToResponse()))
+}
+
+// GetUser handles getting a single user
+func (h *UserHandler) GetUser(c *gin.Context) {
+ idStr := c.Param("id")
+ id, err := uuid.Parse(idStr)
+ if err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Invalid user ID", err))
+ return
+ }
+
+ user, err := h.userService.GetUserByID(id)
+ if err != nil {
+ c.JSON(http.StatusNotFound, utils.NewErrorResponse("User not found", err))
+ return
+ }
+
+ c.JSON(http.StatusOK, utils.NewSuccessResponse("User retrieved successfully", user.ToResponse()))
+}
+
+// GetUsers handles getting users with pagination
+func (h *UserHandler) GetUsers(c *gin.Context) {
+ page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
+ pageSize, _ := strconv.Atoi(c.DefaultQuery("page_size", "20"))
+
+ if page < 1 {
+ page = 1
+ }
+ if pageSize < 1 || pageSize > 100 {
+ pageSize = 20
+ }
+
+ users, total, err := h.userService.GetAllUsers(page, pageSize)
+ if err != nil {
+ c.JSON(http.StatusInternalServerError, utils.NewErrorResponse("Failed to get users", err))
+ return
+ }
+
+ var responses []*models.UserResponse
+ for _, user := range users {
+ responses = append(responses, user.ToResponse())
+ }
+
+ paginatedResponse := utils.NewPaginatedResponse(responses, page, pageSize, total)
+ c.JSON(http.StatusOK, utils.NewSuccessResponse("Users retrieved successfully", paginatedResponse))
+}
+
+// UpdateUser handles user updates
+func (h *UserHandler) UpdateUser(c *gin.Context) {
+ idStr := c.Param("id")
+ id, err := uuid.Parse(idStr)
+ if err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Invalid user ID", err))
+ return
+ }
+
+ var updates map[string]interface{}
+ if err := c.ShouldBindJSON(&updates); err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Invalid request", err))
+ return
+ }
+
+ user, err := h.userService.UpdateUser(id, updates)
+ if err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Failed to update user", err))
+ return
+ }
+
+ c.JSON(http.StatusOK, utils.NewSuccessResponse("User updated successfully", user.ToResponse()))
+}
+
+// DeleteUser handles user deletion
+func (h *UserHandler) DeleteUser(c *gin.Context) {
+ idStr := c.Param("id")
+ id, err := uuid.Parse(idStr)
+ if err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Invalid user ID", err))
+ return
+ }
+
+ if err := h.userService.DeleteUser(id); err != nil {
+ c.JSON(http.StatusInternalServerError, utils.NewErrorResponse("Failed to delete user", err))
+ return
+ }
+
+ c.JSON(http.StatusOK, utils.NewSuccessResponse("User deleted successfully", nil))
+}
+
+// SearchUsers handles user search
+func (h *UserHandler) SearchUsers(c *gin.Context) {
+ query := c.Query("q")
+ page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
+ pageSize, _ := strconv.Atoi(c.DefaultQuery("page_size", "20"))
+
+ if page < 1 {
+ page = 1
+ }
+ if pageSize < 1 || pageSize > 100 {
+ pageSize = 20
+ }
+
+ users, total, err := h.userService.SearchUsers(query, page, pageSize)
+ if err != nil {
+ c.JSON(http.StatusInternalServerError, utils.NewErrorResponse("Failed to search users", err))
+ return
+ }
+
+ var responses []*models.UserResponse
+ for _, user := range users {
+ responses = append(responses, user.ToResponse())
+ }
+
+ paginatedResponse := utils.NewPaginatedResponse(responses, page, pageSize, total)
+ c.JSON(http.StatusOK, utils.NewSuccessResponse("Search completed successfully", paginatedResponse))
+}
+
+// GetUserStats handles getting user statistics
+func (h *UserHandler) GetUserStats(c *gin.Context) {
+ stats, err := h.userService.GetUserStats()
+ if err != nil {
+ c.JSON(http.StatusInternalServerError, utils.NewErrorResponse("Failed to get user statistics", err))
+ return
+ }
+
+ c.JSON(http.StatusOK, utils.NewSuccessResponse("Statistics retrieved successfully", stats))
+}
+
+// ExportUsers handles user export
+func (h *UserHandler) ExportUsers(c *gin.Context) {
+ data, err := h.userService.ExportUsers()
+ if err != nil {
+ c.JSON(http.StatusInternalServerError, utils.NewErrorResponse("Failed to export users", err))
+ return
+ }
+
+ c.Header("Content-Type", "application/json")
+ c.Header("Content-Disposition", "attachment; filename=users.json")
+ c.Data(http.StatusOK, "application/json", data)
+}
+
+// Login handles user authentication
+func (h *UserHandler) Login(c *gin.Context) {
+ var req struct {
+ Username string `json:"username" binding:"required"`
+ Password string `json:"password" binding:"required"`
+ }
+
+ if err := c.ShouldBindJSON(&req); err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Invalid request", err))
+ return
+ }
+
+ user, err := h.userService.AuthenticateUser(req.Username, req.Password)
+ if err != nil {
+ c.JSON(http.StatusUnauthorized, utils.NewErrorResponse("Authentication failed", err))
+ return
+ }
+
+ // In a real application, you would generate a JWT token here
+ response := map[string]interface{}{
+ "user": user.ToResponse(),
+ "token": "dummy-jwt-token", // This would be a real JWT token
+ "expires": "2024-12-31T23:59:59Z",
+ }
+
+ c.JSON(http.StatusOK, utils.NewSuccessResponse("Login successful", response))
+}
+
+// Logout handles user logout
+func (h *UserHandler) Logout(c *gin.Context) {
+ // In a real application, you would invalidate the JWT token here
+ c.JSON(http.StatusOK, utils.NewSuccessResponse("Logout successful", nil))
+}
+
+// ChangePassword handles password change
+func (h *UserHandler) ChangePassword(c *gin.Context) {
+ var req struct {
+ UserID uuid.UUID `json:"user_id" binding:"required"`
+ CurrentPassword string `json:"current_password" binding:"required"`
+ NewPassword string `json:"new_password" binding:"required,min=8"`
+ }
+
+ if err := c.ShouldBindJSON(&req); err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Invalid request", err))
+ return
+ }
+
+ if err := h.userService.ChangePassword(req.UserID, req.CurrentPassword, req.NewPassword); err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Failed to change password", err))
+ return
+ }
+
+ c.JSON(http.StatusOK, utils.NewSuccessResponse("Password changed successfully", nil))
+}
+
+// ResetPassword handles password reset (admin only)
+func (h *UserHandler) ResetPassword(c *gin.Context) {
+ idStr := c.Param("id")
+ id, err := uuid.Parse(idStr)
+ if err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Invalid user ID", err))
+ return
+ }
+
+ var req struct {
+ NewPassword string `json:"new_password" binding:"required,min=8"`
+ }
+
+ if err := c.ShouldBindJSON(&req); err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Invalid request", err))
+ return
+ }
+
+ if err := h.userService.ResetPassword(id, req.NewPassword); err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Failed to reset password", err))
+ return
+ }
+
+ c.JSON(http.StatusOK, utils.NewSuccessResponse("Password reset successfully", nil))
+}
+
+// AddPermission handles adding permission to user
+func (h *UserHandler) AddPermission(c *gin.Context) {
+ idStr := c.Param("id")
+ id, err := uuid.Parse(idStr)
+ if err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Invalid user ID", err))
+ return
+ }
+
+ var req struct {
+ Permission string `json:"permission" binding:"required"`
+ }
+
+ if err := c.ShouldBindJSON(&req); err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Invalid request", err))
+ return
+ }
+
+ if err := h.userService.AddPermission(id, req.Permission); err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Failed to add permission", err))
+ return
+ }
+
+ c.JSON(http.StatusOK, utils.NewSuccessResponse("Permission added successfully", nil))
+}
+
+// RemovePermission handles removing permission from user
+func (h *UserHandler) RemovePermission(c *gin.Context) {
+ idStr := c.Param("id")
+ id, err := uuid.Parse(idStr)
+ if err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Invalid user ID", err))
+ return
+ }
+
+ permission := c.Query("permission")
+ if permission == "" {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Permission parameter is required", nil))
+ return
+ }
+
+ if err := h.userService.RemovePermission(id, permission); err != nil {
+ c.JSON(http.StatusBadRequest, utils.NewErrorResponse("Failed to remove permission", err))
+ return
+ }
+
+ c.JSON(http.StatusOK, utils.NewSuccessResponse("Permission removed successfully", nil))
+}
\ No newline at end of file
diff --git a/test/sample-projects/java/user-management/README.md b/test/sample-projects/java/user-management/README.md
new file mode 100644
index 0000000..f83a0a1
--- /dev/null
+++ b/test/sample-projects/java/user-management/README.md
@@ -0,0 +1,183 @@
+# User Management System (Java)
+
+A comprehensive user management system built in Java for testing Code Index MCP's analysis capabilities.
+
+## Features
+
+- **User Management**: Create, update, delete, and search users
+- **Authentication**: BCrypt password hashing and verification
+- **Authorization**: Role-based access control (Admin, User, Guest)
+- **Data Validation**: Input validation and sanitization
+- **Export/Import**: JSON and CSV export capabilities
+- **Persistence**: File-based storage with JSON serialization
+- **Logging**: SLF4J logging with Logback
+
+## Project Structure
+
+```
+src/main/java/com/example/usermanagement/
+├── models/
+│ ├── Person.java # Base person model
+│ ├── User.java # User model with auth features
+│ ├── UserRole.java # User role enumeration
+│ └── UserStatus.java # User status enumeration
+├── services/
+│ └── UserManager.java # User management service
+├── utils/
+│ ├── ValidationUtils.java # Validation utilities
+│ ├── UserNotFoundException.java # Custom exception
+│ └── DuplicateUserException.java # Custom exception
+└── Main.java # Main demo application
+```
+
+## Technologies Used
+
+- **Java 11**: Modern Java features and APIs
+- **Jackson**: JSON processing and serialization
+- **BCrypt**: Secure password hashing
+- **Apache Commons**: Utility libraries (Lang3, CSV)
+- **SLF4J + Logback**: Logging framework
+- **Maven**: Build and dependency management
+- **JUnit 5**: Testing framework
+
+## Build and Run
+
+### Prerequisites
+
+- Java 11 or higher
+- Maven 3.6+
+
+### Build
+
+```bash
+mvn clean compile
+```
+
+### Run
+
+```bash
+mvn exec:java -Dexec.mainClass="com.example.usermanagement.Main"
+```
+
+### Test
+
+```bash
+mvn test
+```
+
+### Package
+
+```bash
+mvn package
+```
+
+## Usage
+
+### Creating Users
+
+```java
+UserManager userManager = new UserManager();
+
+// Create a basic user
+User user = userManager.createUser("John Doe", 30, "john_doe", "john@example.com");
+user.setPassword("SecurePass123!");
+
+// Create an admin user
+User admin = userManager.createUser("Jane Smith", 35, "jane_admin",
+ "jane@example.com", UserRole.ADMIN);
+admin.setPassword("AdminPass123!");
+admin.addPermission("user_management");
+```
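
`createUser` enforces username rules via `User.setUsername` (3 to 20 characters, letters, digits, and underscores only). The sketch below is a minimal, self-contained illustration of that validation; the class and method names are illustrative stand-ins, not part of the project API:

```java
public class UsernameValidationSketch {
    // Mirrors User.setUsername's rules: 3-20 chars, letters/digits/underscores only.
    static boolean isValidUsername(String username) {
        if (username == null || username.isBlank()) return false;
        if (username.length() < 3 || username.length() > 20) return false;
        return username.matches("^[a-zA-Z0-9_]+$");
    }

    public static void main(String[] args) {
        System.out.println(isValidUsername("john_doe")); // prints: true
        System.out.println(isValidUsername("jd"));       // prints: false (too short)
        System.out.println(isValidUsername("john doe")); // prints: false (space not allowed)
    }
}
```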
+
+### User Authentication
+
+```java
+// Verify password
+boolean isValid = user.verifyPassword("SecurePass123!");
+
+// Login
+if (user.login()) {
+ System.out.println("Login successful!");
+ System.out.println("Last login: " + user.getLastLogin());
+}
+```
+
+### User Management
+
+```java
+// Search users
+List<User> results = userManager.searchUsers("john");
+
+// Filter users
+List<User> activeUsers = userManager.getActiveUsers();
+List<User> adminUsers = userManager.getUsersByRole(UserRole.ADMIN);
+List<User> olderUsers = userManager.getUsersOlderThan(25);
+
+// Update user
+Map<String, Object> updates = Map.of("age", 31, "email", "newemail@example.com");
+userManager.updateUser("john_doe", updates);
+
+// Export users
+String jsonData = userManager.exportUsers("json");
+String csvData = userManager.exportUsers("csv");
+```
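
Role checks such as `getUsersByRole` and `UserRole.canActOn` ultimately compare enum declaration order (ADMIN outranks USER, which outranks GUEST). The sketch below is a minimal, self-contained illustration of that comparison logic; the `Role` enum and method names are simplified stand-ins for the project's `UserRole`:

```java
public class RolePrivilegeSketch {
    // Declaration order defines rank: a lower ordinal means higher privilege.
    enum Role { ADMIN, USER, GUEST }

    static boolean hasHigherPrivilegeThan(Role a, Role b) {
        return a.ordinal() < b.ordinal();
    }

    static boolean canActOn(Role actor, Role target) {
        if (actor == Role.ADMIN) return true;                 // admins act on everyone
        if (actor == Role.USER) return target == Role.GUEST;  // users act only on guests
        return false;                                         // guests act on no one
    }

    public static void main(String[] args) {
        System.out.println(canActOn(Role.ADMIN, Role.USER));  // prints: true
        System.out.println(canActOn(Role.USER, Role.ADMIN));  // prints: false
        System.out.println(hasHigherPrivilegeThan(Role.ADMIN, Role.GUEST)); // prints: true
    }
}
```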
+
+## Testing Features
+
+This project tests the following Java language features:
+
+### Core Language Features
+- **Classes and Inheritance**: Person and User class hierarchy
+- **Enums**: UserRole and UserStatus with methods
+- **Custom Exceptions**: Domain-specific exception classes (UserNotFoundException, DuplicateUserException)
+- **Generics**: Collections with type safety
+- **Annotations**: Jackson JSON annotations
+- **Exception Handling**: Custom exceptions and try-catch blocks
+
+### Modern Java Features
+- **Streams API**: Filtering, mapping, and collecting
+- **Lambda Expressions**: Functional programming
+- **Method References**: Stream operations
+- **Optional**: Null-safe operations
+- **Time API**: LocalDateTime usage
+
+### Advanced Features
+- **Concurrent Collections**: ConcurrentHashMap
+- **Reflection**: Jackson serialization
+- **File I/O**: NIO.2 Path and Files
+- **Logging**: SLF4J with parameterized messages
+- **Validation**: Input validation and sanitization
+
+### Framework Integration
+- **Maven**: Build lifecycle and dependency management
+- **Jackson**: JSON serialization/deserialization
+- **BCrypt**: Password hashing
+- **Apache Commons**: Utility libraries
+- **SLF4J**: Structured logging
+
+### Design Patterns
+- **Builder Pattern**: Object construction
+- **Factory Pattern**: User creation
+- **Repository Pattern**: Data access
+- **Service Layer**: Business logic separation
+
+## Dependencies
+
+### Core Dependencies
+- **Jackson Databind**: JSON processing
+- **Jackson JSR310**: Java 8 time support
+- **BCrypt**: Password hashing
+- **Apache Commons Lang3**: Utilities
+- **Apache Commons CSV**: CSV processing
+
+### Logging
+- **SLF4J API**: Logging facade
+- **Logback Classic**: Logging implementation
+
+### Testing
+- **JUnit 5**: Testing framework
+- **Mockito**: Mocking framework
+
+## License
+
+MIT License - This is a sample project for testing purposes.
\ No newline at end of file
diff --git a/test/sample-projects/java/user-management/pom.xml b/test/sample-projects/java/user-management/pom.xml
new file mode 100644
index 0000000..1287b94
--- /dev/null
+++ b/test/sample-projects/java/user-management/pom.xml
@@ -0,0 +1,116 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <groupId>com.example</groupId>
+    <artifactId>user-management</artifactId>
+    <version>1.0.0</version>
+    <packaging>jar</packaging>
+
+    <name>User Management System</name>
+    <description>A sample user management system for testing Code Index MCP</description>
+
+    <properties>
+        <maven.compiler.source>11</maven.compiler.source>
+        <maven.compiler.target>11</maven.compiler.target>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <junit.version>5.9.2</junit.version>
+        <jackson.version>2.15.2</jackson.version>
+        <slf4j.version>2.0.7</slf4j.version>
+        <logback.version>1.4.7</logback.version>
+    </properties>
+
+    <dependencies>
+        <!-- Jackson for JSON processing -->
+        <dependency>
+            <groupId>com.fasterxml.jackson.core</groupId>
+            <artifactId>jackson-databind</artifactId>
+            <version>${jackson.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>com.fasterxml.jackson.datatype</groupId>
+            <artifactId>jackson-datatype-jsr310</artifactId>
+            <version>${jackson.version}</version>
+        </dependency>
+
+        <!-- Logging -->
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+            <version>${slf4j.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>ch.qos.logback</groupId>
+            <artifactId>logback-classic</artifactId>
+            <version>${logback.version}</version>
+        </dependency>
+
+        <!-- Apache Commons utilities -->
+        <dependency>
+            <groupId>org.apache.commons</groupId>
+            <artifactId>commons-lang3</artifactId>
+            <version>3.12.0</version>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.commons</groupId>
+            <artifactId>commons-csv</artifactId>
+            <version>1.9.0</version>
+        </dependency>
+
+        <!-- BCrypt password hashing -->
+        <dependency>
+            <groupId>org.mindrot</groupId>
+            <artifactId>jbcrypt</artifactId>
+            <version>0.4</version>
+        </dependency>
+
+        <!-- Testing -->
+        <dependency>
+            <groupId>org.junit.jupiter</groupId>
+            <artifactId>junit-jupiter</artifactId>
+            <version>${junit.version}</version>
+            <scope>test</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.mockito</groupId>
+            <artifactId>mockito-core</artifactId>
+            <version>5.3.1</version>
+            <scope>test</scope>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <version>3.11.0</version>
+                <configuration>
+                    <source>11</source>
+                    <target>11</target>
+                </configuration>
+            </plugin>
+
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-surefire-plugin</artifactId>
+                <version>3.1.2</version>
+            </plugin>
+
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>exec-maven-plugin</artifactId>
+                <version>3.1.0</version>
+                <configuration>
+                    <mainClass>com.example.usermanagement.Main</mainClass>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+</project>
\ No newline at end of file
diff --git a/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/Main.java b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/Main.java
new file mode 100644
index 0000000..b74e7db
--- /dev/null
+++ b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/Main.java
@@ -0,0 +1,220 @@
+package com.example.usermanagement;
+
+import com.example.usermanagement.models.User;
+import com.example.usermanagement.models.UserRole;
+import com.example.usermanagement.services.UserManager;
+import com.example.usermanagement.utils.UserNotFoundException;
+import com.example.usermanagement.utils.DuplicateUserException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Main class demonstrating the User Management System.
+ */
+public class Main {
+
+ private static final Logger logger = LoggerFactory.getLogger(Main.class);
+
+ public static void main(String[] args) {
+ System.out.println("=".repeat(50));
+ System.out.println("User Management System Demo (Java)");
+ System.out.println("=".repeat(50));
+
+ // Create user manager
+ UserManager userManager = new UserManager();
+
+ // Create sample users
+ System.out.println("\n1. Creating sample users...");
+ createSampleUsers(userManager);
+
+ // Display all users
+ System.out.println("\n2. Listing all users...");
+ listAllUsers(userManager);
+
+ // Test user retrieval
+ System.out.println("\n3. Testing user retrieval...");
+ testUserRetrieval(userManager);
+
+ // Test user search
+ System.out.println("\n4. Testing user search...");
+ testUserSearch(userManager);
+
+ // Test user filtering
+ System.out.println("\n5. Testing user filtering...");
+ testUserFiltering(userManager);
+
+ // Test user updates
+ System.out.println("\n6. Testing user updates...");
+ testUserUpdates(userManager);
+
+ // Test authentication
+ System.out.println("\n7. Testing authentication...");
+ testAuthentication(userManager);
+
+ // Display statistics
+ System.out.println("\n8. User statistics...");
+ displayStatistics(userManager);
+
+ // Test export functionality
+ System.out.println("\n9. Testing export functionality...");
+ testExport(userManager);
+
+ // Test user permissions
+ System.out.println("\n10. Testing user permissions...");
+ testPermissions(userManager);
+
+ System.out.println("\n" + "=".repeat(50));
+ System.out.println("Demo completed successfully!");
+ System.out.println("=".repeat(50));
+ }
+
+ private static void createSampleUsers(UserManager userManager) {
+ try {
+ // Create admin user
+ User admin = userManager.createUser("Alice Johnson", 30, "alice_admin",
+ "alice@example.com", UserRole.ADMIN);
+ admin.setPassword("AdminPass123!");
+ admin.addPermission("user_management");
+ admin.addPermission("system_admin");
+
+ // Create regular users
+ User user1 = userManager.createUser("Bob Smith", 25, "bob_user", "bob@example.com");
+ user1.setPassword("UserPass123!");
+
+ User user2 = userManager.createUser("Charlie Brown", 35, "charlie", "charlie@example.com");
+ user2.setPassword("CharliePass123!");
+
+ User user3 = userManager.createUser("Diana Prince", 28, "diana", "diana@example.com");
+ user3.setPassword("DianaPass123!");
+
+ System.out.println("✓ Created " + userManager.getUserCount() + " users");
+
+ } catch (DuplicateUserException e) {
+ System.out.println("✗ Error creating users: " + e.getMessage());
+ } catch (Exception e) {
+ System.out.println("✗ Unexpected error: " + e.getMessage());
+ logger.error("Error creating sample users", e);
+ }
+ }
+
+ private static void listAllUsers(UserManager userManager) {
+ List<User> users = userManager.getAllUsers();
+
+ System.out.println("Found " + users.size() + " users:");
+ users.forEach(user ->
+ System.out.println(" • " + user.getUsername() + " (" + user.getName() +
+ ") - " + user.getRole().getDisplayName() +
+ " [" + user.getStatus().getDisplayName() + "]")
+ );
+ }
+
+ private static void testUserRetrieval(UserManager userManager) {
+ try {
+ User user = userManager.getUser("alice_admin");
+ System.out.println("✓ Retrieved user: " + user.getUsername() + " (" + user.getName() + ")");
+
+ User userByEmail = userManager.getUserByEmail("bob@example.com");
+ if (userByEmail != null) {
+ System.out.println("✓ Found user by email: " + userByEmail.getUsername());
+ }
+
+ } catch (UserNotFoundException e) {
+ System.out.println("✗ User retrieval failed: " + e.getMessage());
+ }
+ }
+
+ private static void testUserSearch(UserManager userManager) {
+ List<User> searchResults = userManager.searchUsers("alice");
+ System.out.println("Search results for 'alice': " + searchResults.size() + " users found");
+
+ searchResults.forEach(user ->
+ System.out.println(" • " + user.getUsername() + " (" + user.getName() + ")")
+ );
+ }
+
+ private static void testUserFiltering(UserManager userManager) {
+ List<User> olderUsers = userManager.getUsersOlderThan(30);
+ System.out.println("Users older than 30: " + olderUsers.size() + " users");
+
+ olderUsers.forEach(user ->
+ System.out.println(" • " + user.getUsername() + " (" + user.getName() + ") - age " + user.getAge())
+ );
+
+ List<User> adminUsers = userManager.getUsersByRole(UserRole.ADMIN);
+ System.out.println("Admin users: " + adminUsers.size() + " users");
+ }
+
+ private static void testUserUpdates(UserManager userManager) {
+ try {
+ Map<String, Object> updates = Map.of("age", 26);
+ User updatedUser = userManager.updateUser("bob_user", updates);
+ System.out.println("✓ Updated " + updatedUser.getUsername() + "'s age to " + updatedUser.getAge());
+
+ } catch (UserNotFoundException e) {
+ System.out.println("✗ Update failed: " + e.getMessage());
+ }
+ }
+
+ private static void testAuthentication(UserManager userManager) {
+ try {
+ User user = userManager.getUser("alice_admin");
+
+ // Test password verification
+ boolean isValid = user.verifyPassword("AdminPass123!");
+ System.out.println("✓ Password verification: " + (isValid ? "SUCCESS" : "FAILED"));
+
+ // Test login
+ boolean loginSuccess = user.login();
+ System.out.println("✓ Login attempt: " + (loginSuccess ? "SUCCESS" : "FAILED"));
+
+ if (loginSuccess) {
+ System.out.println("✓ Last login: " + user.getLastLogin());
+ }
+
+ } catch (UserNotFoundException e) {
+ System.out.println("✗ Authentication test failed: " + e.getMessage());
+ }
+ }
+
+ private static void displayStatistics(UserManager userManager) {
+ Map<String, Object> stats = userManager.getUserStats();
+
+ stats.forEach((key, value) ->
+ System.out.println(" " + key.replace("_", " ").toUpperCase() + ": " + value)
+ );
+ }
+
+ private static void testExport(UserManager userManager) {
+ try {
+ String jsonExport = userManager.exportUsers("json");
+ System.out.println("✓ JSON export: " + jsonExport.length() + " characters");
+
+ String csvExport = userManager.exportUsers("csv");
+ System.out.println("✓ CSV export: " + csvExport.split("\n").length + " lines");
+
+ } catch (Exception e) {
+ System.out.println("✗ Export failed: " + e.getMessage());
+ }
+ }
+
+ private static void testPermissions(UserManager userManager) {
+ try {
+ User admin = userManager.getUser("alice_admin");
+
+ System.out.println("Admin permissions: " + admin.getPermissions());
+ System.out.println("Has user_management permission: " + admin.hasPermission("user_management"));
+ System.out.println("Is admin: " + admin.isAdmin());
+
+ // Test role privileges
+ System.out.println("Admin role can act on USER role: " +
+ admin.getRole().canActOn(UserRole.USER));
+
+ } catch (UserNotFoundException e) {
+ System.out.println("✗ Permission test failed: " + e.getMessage());
+ }
+ }
+}
\ No newline at end of file
diff --git a/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/models/Person.java b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/models/Person.java
new file mode 100644
index 0000000..01cb07e
--- /dev/null
+++ b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/models/Person.java
@@ -0,0 +1,284 @@
+package com.example.usermanagement.models;
+
+import com.fasterxml.jackson.annotation.JsonFormat;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.commons.lang3.StringUtils;
+
+import java.time.LocalDateTime;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Represents a person with basic information.
+ * This class serves as the base class for more specific person types.
+ */
+public class Person {
+
+ @JsonProperty("name")
+ private String name;
+
+ @JsonProperty("age")
+ private int age;
+
+ @JsonProperty("email")
+ private String email;
+
+ @JsonProperty("created_at")
+ @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd'T'HH:mm:ss")
+ private LocalDateTime createdAt;
+
+ @JsonProperty("metadata")
+ private Map<String, Object> metadata;
+
+ /**
+ * Default constructor for Jackson deserialization.
+ */
+ public Person() {
+ this.createdAt = LocalDateTime.now();
+ this.metadata = new HashMap<>();
+ }
+
+ /**
+ * Constructor with name and age.
+ *
+ * @param name The person's name
+ * @param age The person's age
+ * @throws IllegalArgumentException if validation fails
+ */
+ public Person(String name, int age) {
+ this();
+ setName(name);
+ setAge(age);
+ }
+
+ /**
+ * Constructor with name, age, and email.
+ *
+ * @param name The person's name
+ * @param age The person's age
+ * @param email The person's email address
+ * @throws IllegalArgumentException if validation fails
+ */
+ public Person(String name, int age, String email) {
+ this(name, age);
+ setEmail(email);
+ }
+
+ // Getters and Setters
+
+ public String getName() {
+ return name;
+ }
+
+ public void setName(String name) {
+ if (StringUtils.isBlank(name)) {
+ throw new IllegalArgumentException("Name cannot be null or empty");
+ }
+ if (name.length() > 100) {
+ throw new IllegalArgumentException("Name cannot exceed 100 characters");
+ }
+ this.name = name.trim();
+ }
+
+ public int getAge() {
+ return age;
+ }
+
+ public void setAge(int age) {
+ if (age < 0) {
+ throw new IllegalArgumentException("Age cannot be negative");
+ }
+ if (age > 150) {
+ throw new IllegalArgumentException("Age cannot exceed 150");
+ }
+ this.age = age;
+ }
+
+ public String getEmail() {
+ return email;
+ }
+
+ public void setEmail(String email) {
+ if (StringUtils.isNotBlank(email) && !isValidEmail(email)) {
+ throw new IllegalArgumentException("Invalid email format");
+ }
+ this.email = StringUtils.isBlank(email) ? null : email.trim();
+ }
+
+ public LocalDateTime getCreatedAt() {
+ return createdAt;
+ }
+
+ public void setCreatedAt(LocalDateTime createdAt) {
+ this.createdAt = createdAt;
+ }
+
+ public Map<String, Object> getMetadata() {
+ return new HashMap<>(metadata);
+ }
+
+ public void setMetadata(Map<String, Object> metadata) {
+ this.metadata = metadata == null ? new HashMap<>() : new HashMap<>(metadata);
+ }
+
+ // Business methods
+
+ /**
+ * Returns a greeting message for the person.
+ *
+ * @return A personalized greeting
+ */
+ public String greet() {
+ return String.format("Hello, I'm %s and I'm %d years old.", name, age);
+ }
+
+ /**
+ * Checks if the person has an email address.
+ *
+ * @return true if email is present and not empty
+ */
+ public boolean hasEmail() {
+ return StringUtils.isNotBlank(email);
+ }
+
+ /**
+ * Updates the person's email address.
+ *
+ * @param newEmail The new email address
+ * @throws IllegalArgumentException if email format is invalid
+ */
+ public void updateEmail(String newEmail) {
+ setEmail(newEmail);
+ }
+
+ /**
+ * Adds metadata to the person.
+ *
+ * @param key The metadata key
+ * @param value The metadata value
+ */
+ public void addMetadata(String key, Object value) {
+ if (StringUtils.isNotBlank(key)) {
+ metadata.put(key, value);
+ }
+ }
+
+ /**
+ * Gets metadata value by key.
+ *
+ * @param key The metadata key
+ * @return The metadata value or null if not found
+ */
+ public Object getMetadata(String key) {
+ return metadata.get(key);
+ }
+
+ /**
+ * Gets metadata value by key with default value.
+ *
+ * @param key The metadata key
+ * @param defaultValue The default value if key is not found
+ * @return The metadata value or default value
+ */
+ public Object getMetadata(String key, Object defaultValue) {
+ return metadata.getOrDefault(key, defaultValue);
+ }
+
+ /**
+ * Removes metadata by key.
+ *
+ * @param key The metadata key to remove
+ * @return The removed value or null if not found
+ */
+ public Object removeMetadata(String key) {
+ return metadata.remove(key);
+ }
+
+ /**
+ * Clears all metadata.
+ */
+ public void clearMetadata() {
+ metadata.clear();
+ }
+
+ /**
+ * Validates email format using a simple regex.
+ *
+ * @param email The email to validate
+ * @return true if email format is valid
+ */
+ private boolean isValidEmail(String email) {
+ String emailPattern = "^[A-Za-z0-9+_.-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$";
+ return email.matches(emailPattern);
+ }
+
+ /**
+ * Creates a Person instance from a map of data.
+ *
+ * @param data The data map
+ * @return A new Person instance
+ */
+ public static Person fromMap(Map<String, Object> data) {
+ Person person = new Person();
+
+ if (data.containsKey("name")) {
+ person.setName((String) data.get("name"));
+ }
+
+ if (data.containsKey("age")) {
+ person.setAge((Integer) data.get("age"));
+ }
+
+ if (data.containsKey("email")) {
+ person.setEmail((String) data.get("email"));
+ }
+
+ if (data.containsKey("metadata")) {
+ @SuppressWarnings("unchecked")
+ Map<String, Object> metadata = (Map<String, Object>) data.get("metadata");
+ person.setMetadata(metadata);
+ }
+
+ return person;
+ }
+
+ /**
+ * Converts the person to a map representation.
+ *
+ * @return A map containing person data
+ */
+ public Map<String, Object> toMap() {
+ Map<String, Object> map = new HashMap<>();
+ map.put("name", name);
+ map.put("age", age);
+ map.put("email", email);
+ map.put("created_at", createdAt);
+ map.put("metadata", new HashMap<>(metadata));
+ return map;
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ if (this == obj) return true;
+ if (obj == null || getClass() != obj.getClass()) return false;
+
+ Person person = (Person) obj;
+ return age == person.age &&
+ Objects.equals(name, person.name) &&
+ Objects.equals(email, person.email) &&
+ Objects.equals(createdAt, person.createdAt) &&
+ Objects.equals(metadata, person.metadata);
+ }
+
+ @Override
+ public int hashCode() {
+ return Objects.hash(name, age, email, createdAt, metadata);
+ }
+
+ @Override
+ public String toString() {
+ return String.format("Person{name='%s', age=%d, email='%s', createdAt=%s}",
+ name, age, email, createdAt);
+ }
+}
\ No newline at end of file
diff --git a/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/models/User.java b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/models/User.java
new file mode 100644
index 0000000..b68c72b
--- /dev/null
+++ b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/models/User.java
@@ -0,0 +1,363 @@
+package com.example.usermanagement.models;
+
+import com.fasterxml.jackson.annotation.JsonFormat;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.commons.lang3.StringUtils;
+import org.mindrot.jbcrypt.BCrypt;
+
+import java.time.LocalDateTime;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.Objects;
+
+/**
+ * User class extending Person with authentication and authorization features.
+ */
+public class User extends Person {
+
+ @JsonProperty("username")
+ private String username;
+
+ @JsonProperty("password_hash")
+ private String passwordHash;
+
+ @JsonProperty("role")
+ private UserRole role;
+
+ @JsonProperty("status")
+ private UserStatus status;
+
+ @JsonProperty("last_login")
+ @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd'T'HH:mm:ss")
+ private LocalDateTime lastLogin;
+
+ @JsonProperty("login_attempts")
+ private int loginAttempts;
+
+ @JsonProperty("permissions")
+ private Set<String> permissions;
+
+ /**
+ * Default constructor for Jackson deserialization.
+ */
+ public User() {
+ super();
+ this.role = UserRole.USER;
+ this.status = UserStatus.ACTIVE;
+ this.loginAttempts = 0;
+ this.permissions = new HashSet<>();
+ }
+
+ /**
+ * Constructor with basic information.
+ *
+ * @param name The user's name
+ * @param age The user's age
+ * @param username The username
+ */
+ public User(String name, int age, String username) {
+ super(name, age);
+ setUsername(username);
+ this.role = UserRole.USER;
+ this.status = UserStatus.ACTIVE;
+ this.loginAttempts = 0;
+ this.permissions = new HashSet<>();
+ }
+
+ /**
+ * Constructor with email.
+ *
+ * @param name The user's name
+ * @param age The user's age
+ * @param username The username
+ * @param email The email address
+ */
+ public User(String name, int age, String username, String email) {
+ super(name, age, email);
+ setUsername(username);
+ this.role = UserRole.USER;
+ this.status = UserStatus.ACTIVE;
+ this.loginAttempts = 0;
+ this.permissions = new HashSet<>();
+ }
+
+ /**
+ * Constructor with role.
+ *
+ * @param name The user's name
+ * @param age The user's age
+ * @param username The username
+ * @param email The email address
+ * @param role The user role
+ */
+ public User(String name, int age, String username, String email, UserRole role) {
+ this(name, age, username, email);
+ this.role = role;
+ }
+
+ // Getters and Setters
+
+ public String getUsername() {
+ return username;
+ }
+
+ public void setUsername(String username) {
+ if (StringUtils.isBlank(username)) {
+ throw new IllegalArgumentException("Username cannot be null or empty");
+ }
+ if (username.length() < 3 || username.length() > 20) {
+ throw new IllegalArgumentException("Username must be between 3 and 20 characters");
+ }
+ if (!username.matches("^[a-zA-Z0-9_]+$")) {
+ throw new IllegalArgumentException("Username can only contain letters, numbers, and underscores");
+ }
+ this.username = username.trim();
+ }
+
+ public String getPasswordHash() {
+ return passwordHash;
+ }
+
+ public void setPasswordHash(String passwordHash) {
+ this.passwordHash = passwordHash;
+ }
+
+ public UserRole getRole() {
+ return role;
+ }
+
+ public void setRole(UserRole role) {
+ this.role = role != null ? role : UserRole.USER;
+ }
+
+ public UserStatus getStatus() {
+ return status;
+ }
+
+ public void setStatus(UserStatus status) {
+ this.status = status != null ? status : UserStatus.ACTIVE;
+ }
+
+ public LocalDateTime getLastLogin() {
+ return lastLogin;
+ }
+
+ public void setLastLogin(LocalDateTime lastLogin) {
+ this.lastLogin = lastLogin;
+ }
+
+ public int getLoginAttempts() {
+ return loginAttempts;
+ }
+
+ public void setLoginAttempts(int loginAttempts) {
+ this.loginAttempts = Math.max(0, loginAttempts);
+ }
+
+ public Set<String> getPermissions() {
+ return new HashSet<>(permissions);
+ }
+
+ public void setPermissions(Set<String> permissions) {
+ this.permissions = permissions != null ? new HashSet<>(permissions) : new HashSet<>();
+ }
+
+ // Authentication methods
+
+ /**
+ * Sets the user's password using BCrypt hashing.
+ *
+ * @param password The plain text password
+ * @throws IllegalArgumentException if password is invalid
+ */
+ public void setPassword(String password) {
+ if (StringUtils.isBlank(password)) {
+ throw new IllegalArgumentException("Password cannot be null or empty");
+ }
+ if (password.length() < 8) {
+ throw new IllegalArgumentException("Password must be at least 8 characters long");
+ }
+
+ // Hash the password with BCrypt
+ this.passwordHash = BCrypt.hashpw(password, BCrypt.gensalt());
+ }
+
+ /**
+ * Verifies a password against the stored hash.
+ *
+ * @param password The plain text password to verify
+ * @return true if password matches
+ */
+ public boolean verifyPassword(String password) {
+ if (StringUtils.isBlank(password) || StringUtils.isBlank(passwordHash)) {
+ return false;
+ }
+
+ try {
+ return BCrypt.checkpw(password, passwordHash);
+ } catch (IllegalArgumentException e) {
+ return false;
+ }
+ }
+
+ // Permission methods
+
+ /**
+ * Adds a permission to the user.
+ *
+ * @param permission The permission to add
+ */
+ public void addPermission(String permission) {
+ if (StringUtils.isNotBlank(permission)) {
+ permissions.add(permission.trim());
+ }
+ }
+
+ /**
+ * Removes a permission from the user.
+ *
+ * @param permission The permission to remove
+ */
+ public void removePermission(String permission) {
+ permissions.remove(permission);
+ }
+
+ /**
+ * Checks if the user has a specific permission.
+ *
+ * @param permission The permission to check
+ * @return true if user has the permission
+ */
+ public boolean hasPermission(String permission) {
+ return permissions.contains(permission);
+ }
+
+ /**
+ * Clears all permissions.
+ */
+ public void clearPermissions() {
+ permissions.clear();
+ }
+
+ // Status and role methods
+
+ /**
+ * Checks if the user is an admin.
+ *
+ * @return true if user is admin
+ */
+ public boolean isAdmin() {
+ return role == UserRole.ADMIN;
+ }
+
+ /**
+ * Checks if the user is active.
+ *
+ * @return true if user is active
+ */
+ public boolean isActive() {
+ return status == UserStatus.ACTIVE;
+ }
+
+ /**
+ * Checks if the user is locked (suspended, or too many failed login attempts).
+ *
+ * @return true if user is locked
+ */
+ public boolean isLocked() {
+ return status == UserStatus.SUSPENDED || loginAttempts >= 5;
+ }
+
+ // Login methods
+
+ /**
+ * Attempts to log the user in, recording the login time and resetting failed attempts on success.
+ *
+ * @return true if login was successful
+ */
+ public boolean login() {
+ if (!isActive() || isLocked()) {
+ return false;
+ }
+
+ this.lastLogin = LocalDateTime.now();
+ this.loginAttempts = 0;
+ return true;
+ }
+
+ /**
+ * Records a failed login attempt.
+ */
+ public void failedLoginAttempt() {
+ this.loginAttempts++;
+ if (this.loginAttempts >= 5) {
+ this.status = UserStatus.SUSPENDED;
+ }
+ }
+
+ /**
+ * Resets login attempts.
+ */
+ public void resetLoginAttempts() {
+ this.loginAttempts = 0;
+ }
+
+ // Status change methods
+
+ /**
+ * Activates the user account.
+ */
+ public void activate() {
+ this.status = UserStatus.ACTIVE;
+ this.loginAttempts = 0;
+ }
+
+ /**
+ * Deactivates the user account.
+ */
+ public void deactivate() {
+ this.status = UserStatus.INACTIVE;
+ }
+
+ /**
+ * Suspends the user account.
+ */
+ public void suspend() {
+ this.status = UserStatus.SUSPENDED;
+ }
+
+ /**
+ * Marks the user as deleted.
+ */
+ public void delete() {
+ this.status = UserStatus.DELETED;
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ if (this == obj) return true;
+ if (obj == null || getClass() != obj.getClass()) return false;
+ if (!super.equals(obj)) return false;
+
+ User user = (User) obj;
+ return loginAttempts == user.loginAttempts &&
+ Objects.equals(username, user.username) &&
+ Objects.equals(passwordHash, user.passwordHash) &&
+ role == user.role &&
+ status == user.status &&
+ Objects.equals(lastLogin, user.lastLogin) &&
+ Objects.equals(permissions, user.permissions);
+ }
+
+ @Override
+ public int hashCode() {
+ return Objects.hash(super.hashCode(), username, passwordHash, role, status,
+ lastLogin, loginAttempts, permissions);
+ }
+
+ @Override
+ public String toString() {
+ return String.format("User{username='%s', name='%s', role=%s, status=%s, lastLogin=%s}",
+ username, getName(), role, status, lastLogin);
+ }
+}
\ No newline at end of file
diff --git a/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/models/UserRole.java b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/models/UserRole.java
new file mode 100644
index 0000000..492f3be
--- /dev/null
+++ b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/models/UserRole.java
@@ -0,0 +1,134 @@
+package com.example.usermanagement.models;
+
+import com.fasterxml.jackson.annotation.JsonValue;
+
+/**
+ * Enumeration for user roles in the system.
+ */
+public enum UserRole {
+
+ /**
+ * Administrator role with full system access.
+ */
+ ADMIN("admin", "Administrator", "Full system access"),
+
+ /**
+ * Regular user role with standard permissions.
+ */
+ USER("user", "User", "Standard user permissions"),
+
+ /**
+ * Guest role with limited permissions.
+ */
+ GUEST("guest", "Guest", "Limited guest permissions");
+
+ private final String code;
+ private final String displayName;
+ private final String description;
+
+ /**
+ * Constructor for UserRole enum.
+ *
+ * @param code The role code
+ * @param displayName The display name
+ * @param description The role description
+ */
+ UserRole(String code, String displayName, String description) {
+ this.code = code;
+ this.displayName = displayName;
+ this.description = description;
+ }
+
+ /**
+ * Gets the role code.
+ *
+ * @return The role code
+ */
+ @JsonValue
+ public String getCode() {
+ return code;
+ }
+
+ /**
+ * Gets the display name.
+ *
+ * @return The display name
+ */
+ public String getDisplayName() {
+ return displayName;
+ }
+
+ /**
+ * Gets the role description.
+ *
+ * @return The role description
+ */
+ public String getDescription() {
+ return description;
+ }
+
+ /**
+ * Finds a UserRole by its code.
+ *
+ * @param code The role code to search for
+ * @return The UserRole or null if not found
+ */
+ public static UserRole fromCode(String code) {
+ if (code == null) {
+ return null;
+ }
+
+ for (UserRole role : values()) {
+ if (role.code.equalsIgnoreCase(code)) {
+ return role;
+ }
+ }
+ return null;
+ }
+
+ /**
+ * Checks if this role has higher privilege than another role.
+ *
+ * @param other The other role to compare with
+ * @return true if this role has higher privilege
+ */
+ public boolean hasHigherPrivilegeThan(UserRole other) {
+ return this.ordinal() < other.ordinal();
+ }
+
+ /**
+ * Checks if this role has lower privilege than another role.
+ *
+ * @param other The other role to compare with
+ * @return true if this role has lower privilege
+ */
+ public boolean hasLowerPrivilegeThan(UserRole other) {
+ return this.ordinal() > other.ordinal();
+ }
+
+ /**
+ * Checks if this role can perform actions on another role.
+ *
+ * @param targetRole The target role
+ * @return true if this role can act on the target role
+ */
+ public boolean canActOn(UserRole targetRole) {
+ // Admin can act on all roles
+ if (this == ADMIN) {
+ return true;
+ }
+
+ // Users can only act on guests
+ if (this == USER) {
+ return targetRole == GUEST;
+ }
+
+ // Guests cannot act on anyone
+ return false;
+ }
+
+ @Override
+ public String toString() {
+ return displayName;
+ }
+}
\ No newline at end of file
diff --git a/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/models/UserStatus.java b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/models/UserStatus.java
new file mode 100644
index 0000000..1166e18
--- /dev/null
+++ b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/models/UserStatus.java
@@ -0,0 +1,146 @@
+package com.example.usermanagement.models;
+
+import com.fasterxml.jackson.annotation.JsonValue;
+
+/**
+ * Enumeration for user status in the system.
+ */
+public enum UserStatus {
+
+ /**
+ * Active status - user can login and use the system.
+ */
+ ACTIVE("active", "Active", "User can login and use the system"),
+
+ /**
+ * Inactive status - user account is temporarily disabled.
+ */
+ INACTIVE("inactive", "Inactive", "User account is temporarily disabled"),
+
+ /**
+ * Suspended status - user account is suspended due to violations.
+ */
+ SUSPENDED("suspended", "Suspended", "User account is suspended due to violations"),
+
+ /**
+ * Deleted status - user account is marked for deletion.
+ */
+ DELETED("deleted", "Deleted", "User account is marked for deletion");
+
+ private final String code;
+ private final String displayName;
+ private final String description;
+
+ /**
+ * Constructor for UserStatus enum.
+ *
+ * @param code The status code
+ * @param displayName The display name
+ * @param description The status description
+ */
+ UserStatus(String code, String displayName, String description) {
+ this.code = code;
+ this.displayName = displayName;
+ this.description = description;
+ }
+
+ /**
+ * Gets the status code.
+ *
+ * @return The status code
+ */
+ @JsonValue
+ public String getCode() {
+ return code;
+ }
+
+ /**
+ * Gets the display name.
+ *
+ * @return The display name
+ */
+ public String getDisplayName() {
+ return displayName;
+ }
+
+ /**
+ * Gets the status description.
+ *
+ * @return The status description
+ */
+ public String getDescription() {
+ return description;
+ }
+
+ /**
+ * Finds a UserStatus by its code.
+ *
+ * @param code The status code to search for
+ * @return The UserStatus or null if not found
+ */
+ public static UserStatus fromCode(String code) {
+ if (code == null) {
+ return null;
+ }
+
+ for (UserStatus status : values()) {
+ if (status.code.equalsIgnoreCase(code)) {
+ return status;
+ }
+ }
+ return null;
+ }
+
+ /**
+ * Checks if this status allows user login.
+ *
+ * @return true if user can login with this status
+ */
+ public boolean allowsLogin() {
+ return this == ACTIVE;
+ }
+
+ /**
+ * Checks if this status indicates the user is disabled.
+ *
+ * @return true if user is disabled
+ */
+ public boolean isDisabled() {
+ return this == INACTIVE || this == SUSPENDED || this == DELETED;
+ }
+
+ /**
+ * Checks if this status indicates the user is deleted.
+ *
+ * @return true if user is deleted
+ */
+ public boolean isDeleted() {
+ return this == DELETED;
+ }
+
+ /**
+ * Checks if this status can be changed to another status.
+ *
+ * @param targetStatus The target status
+ * @return true if status change is allowed
+ */
+ public boolean canChangeTo(UserStatus targetStatus) {
+ // Cannot change from deleted status
+ if (this == DELETED) {
+ return false;
+ }
+
+ // Cannot change to same status
+ if (this == targetStatus) {
+ return false;
+ }
+
+ // All other changes are allowed
+ return true;
+ }
+
+ @Override
+ public String toString() {
+ return displayName;
+ }
+}
\ No newline at end of file
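The `canChangeTo` logic above encodes two rules: DELETED is a terminal state, and self-transitions are rejected. A minimal JavaScript sketch of the same state machine, illustrative only:

```javascript
// Sketch of UserStatus.canChangeTo: deleted is terminal, no-op transitions
// are rejected, every other transition is allowed.
function canChangeTo(current, target) {
  if (current === 'deleted') return false; // cannot change from deleted status
  if (current === target) return false;    // cannot change to same status
  return true;
}
```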
diff --git a/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/services/UserManager.java b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/services/UserManager.java
new file mode 100644
index 0000000..ca32e11
--- /dev/null
+++ b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/services/UserManager.java
@@ -0,0 +1,488 @@
+package com.example.usermanagement.services;
+
+import com.example.usermanagement.models.User;
+import com.example.usermanagement.models.UserRole;
+import com.example.usermanagement.models.UserStatus;
+import com.example.usermanagement.utils.UserNotFoundException;
+import com.example.usermanagement.utils.DuplicateUserException;
+import com.example.usermanagement.utils.ValidationUtils;
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
+import org.apache.commons.csv.CSVFormat;
+import org.apache.commons.csv.CSVPrinter;
+import org.apache.commons.lang3.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.*;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.*;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.function.Predicate;
+import java.util.stream.Collectors;
+
+/**
+ * Service class for managing users in the system.
+ * Provides CRUD operations, search functionality, and data persistence.
+ */
+public class UserManager {
+
+ private static final Logger logger = LoggerFactory.getLogger(UserManager.class);
+
+ private final Map<String, User> users;
+ private final ObjectMapper objectMapper;
+ private final String storagePath;
+
+ /**
+ * Constructor with default storage path.
+ */
+ public UserManager() {
+ this(null);
+ }
+
+ /**
+ * Constructor with custom storage path.
+ *
+ * @param storagePath The file path for user data storage
+ */
+ public UserManager(String storagePath) {
+ this.users = new ConcurrentHashMap<>();
+ this.objectMapper = new ObjectMapper();
+ this.objectMapper.registerModule(new JavaTimeModule());
+ this.storagePath = storagePath;
+
+ if (StringUtils.isNotBlank(storagePath)) {
+ loadUsersFromFile();
+ }
+ }
+
+ /**
+ * Creates a new user in the system.
+ *
+ * @param name The user's name
+ * @param age The user's age
+ * @param username The username
+ * @param email The email address (optional)
+ * @param role The user role
+ * @return The created user
+ * @throws DuplicateUserException if username already exists
+ * @throws IllegalArgumentException if validation fails
+ */
+ public User createUser(String name, int age, String username, String email, UserRole role) {
+ logger.debug("Creating user with username: {}", username);
+
+ if (users.containsKey(username)) {
+ throw new DuplicateUserException("User with username '" + username + "' already exists");
+ }
+
+ // Validate inputs
+ ValidationUtils.validateUsername(username);
+ if (StringUtils.isNotBlank(email)) {
+ ValidationUtils.validateEmail(email);
+ }
+
+ User user = new User(name, age, username, email, role);
+ users.put(username, user);
+
+ saveUsersToFile();
+ logger.info("User created successfully: {}", username);
+
+ return user;
+ }
+
+ /**
+ * Creates a new user with default role.
+ *
+ * @param name The user's name
+ * @param age The user's age
+ * @param username The username
+ * @param email The email address (optional)
+ * @return The created user
+ */
+ public User createUser(String name, int age, String username, String email) {
+ return createUser(name, age, username, email, UserRole.USER);
+ }
+
+ /**
+ * Creates a new user with minimal information.
+ *
+ * @param name The user's name
+ * @param age The user's age
+ * @param username The username
+ * @return The created user
+ */
+ public User createUser(String name, int age, String username) {
+ return createUser(name, age, username, null, UserRole.USER);
+ }
+
+ /**
+ * Retrieves a user by username.
+ *
+ * @param username The username
+ * @return The user
+ * @throws UserNotFoundException if user is not found
+ */
+ public User getUser(String username) {
+ User user = users.get(username);
+ if (user == null) {
+ throw new UserNotFoundException("User with username '" + username + "' not found");
+ }
+ return user;
+ }
+
+ /**
+ * Retrieves a user by email address.
+ *
+ * @param email The email address
+ * @return The user or null if not found
+ */
+ public User getUserByEmail(String email) {
+ return users.values().stream()
+ .filter(user -> Objects.equals(user.getEmail(), email))
+ .findFirst()
+ .orElse(null);
+ }
+
+ /**
+ * Updates user information.
+ *
+ * @param username The username
+ * @param updates A map of field updates
+ * @return The updated user
+ * @throws UserNotFoundException if user is not found
+ */
+ public User updateUser(String username, Map<String, Object> updates) {
+ User user = getUser(username);
+
+ updates.forEach((field, value) -> {
+ switch (field.toLowerCase()) {
+ case "name":
+ user.setName((String) value);
+ break;
+ case "age":
+ user.setAge((Integer) value);
+ break;
+ case "email":
+ user.setEmail((String) value);
+ break;
+ case "role":
+ if (value instanceof UserRole) {
+ user.setRole((UserRole) value);
+ } else if (value instanceof String) {
+ user.setRole(UserRole.fromCode((String) value));
+ }
+ break;
+ case "status":
+ if (value instanceof UserStatus) {
+ user.setStatus((UserStatus) value);
+ } else if (value instanceof String) {
+ user.setStatus(UserStatus.fromCode((String) value));
+ }
+ break;
+ default:
+ logger.warn("Unknown field for update: {}", field);
+ }
+ });
+
+ saveUsersToFile();
+ logger.info("User updated successfully: {}", username);
+
+ return user;
+ }
+
+ /**
+ * Deletes a user (soft delete).
+ *
+ * @param username The username
+ * @return true if user was deleted
+ * @throws UserNotFoundException if user is not found
+ */
+ public boolean deleteUser(String username) {
+ User user = getUser(username);
+ user.delete();
+
+ saveUsersToFile();
+ logger.info("User deleted successfully: {}", username);
+
+ return true;
+ }
+
+ /**
+ * Removes a user completely from the system.
+ *
+ * @param username The username
+ * @return true if user was removed
+ * @throws UserNotFoundException if user is not found
+ */
+ public boolean removeUser(String username) {
+ if (!users.containsKey(username)) {
+ throw new UserNotFoundException("User with username '" + username + "' not found");
+ }
+
+ users.remove(username);
+ saveUsersToFile();
+ logger.info("User removed completely: {}", username);
+
+ return true;
+ }
+
+ /**
+ * Gets all users in the system.
+ *
+ * @return A list of all users
+ */
+ public List<User> getAllUsers() {
+ return new ArrayList<>(users.values());
+ }
+
+ /**
+ * Gets all active users.
+ *
+ * @return A list of active users
+ */
+ public List<User> getActiveUsers() {
+ return users.values().stream()
+ .filter(User::isActive)
+ .collect(Collectors.toList());
+ }
+
+ /**
+ * Gets users by role.
+ *
+ * @param role The user role
+ * @return A list of users with the specified role
+ */
+ public List<User> getUsersByRole(UserRole role) {
+ return users.values().stream()
+ .filter(user -> user.getRole() == role)
+ .collect(Collectors.toList());
+ }
+
+ /**
+ * Filters users using a custom predicate.
+ *
+ * @param predicate The filter predicate
+ * @return A list of filtered users
+ */
+ public List<User> filterUsers(Predicate<User> predicate) {
+ return users.values().stream()
+ .filter(predicate)
+ .collect(Collectors.toList());
+ }
+
+ /**
+ * Searches users by name or username.
+ *
+ * @param query The search query
+ * @return A list of matching users
+ */
+ public List<User> searchUsers(String query) {
+ if (StringUtils.isBlank(query)) {
+ return new ArrayList<>();
+ }
+
+ String lowercaseQuery = query.toLowerCase();
+ return users.values().stream()
+ .filter(user ->
+ user.getName().toLowerCase().contains(lowercaseQuery) ||
+ user.getUsername().toLowerCase().contains(lowercaseQuery) ||
+ (user.getEmail() != null && user.getEmail().toLowerCase().contains(lowercaseQuery)))
+ .collect(Collectors.toList());
+ }
+
+ /**
+ * Gets users older than specified age.
+ *
+ * @param age The age threshold
+ * @return A list of users older than the specified age
+ */
+ public List<User> getUsersOlderThan(int age) {
+ return filterUsers(user -> user.getAge() > age);
+ }
+
+ /**
+ * Gets users with email addresses.
+ *
+ * @return A list of users with email addresses
+ */
+ public List<User> getUsersWithEmail() {
+ return filterUsers(User::hasEmail);
+ }
+
+ /**
+ * Gets users with specific permission.
+ *
+ * @param permission The permission to check
+ * @return A list of users with the specified permission
+ */
+ public List<User> getUsersWithPermission(String permission) {
+ return filterUsers(user -> user.hasPermission(permission));
+ }
+
+ /**
+ * Gets the total number of users.
+ *
+ * @return The user count
+ */
+ public int getUserCount() {
+ return users.size();
+ }
+
+ /**
+ * Gets user statistics.
+ *
+ * @return A map of user statistics
+ */
+ public Map<String, Integer> getUserStats() {
+ Map<String, Integer> stats = new HashMap<>();
+
+ stats.put("total", users.size());
+ stats.put("active", getActiveUsers().size());
+ stats.put("admin", getUsersByRole(UserRole.ADMIN).size());
+ stats.put("user", getUsersByRole(UserRole.USER).size());
+ stats.put("guest", getUsersByRole(UserRole.GUEST).size());
+ stats.put("with_email", getUsersWithEmail().size());
+
+ return stats;
+ }
+
+ /**
+ * Exports users to specified format.
+ *
+ * @param format The export format ("json" or "csv")
+ * @return The exported data as string
+ * @throws IllegalArgumentException if format is unsupported
+ */
+ public String exportUsers(String format) {
+ switch (format.toLowerCase()) {
+ case "json":
+ return exportToJson();
+ case "csv":
+ return exportToCsv();
+ default:
+ throw new IllegalArgumentException("Unsupported export format: " + format);
+ }
+ }
+
+ /**
+ * Exports users to JSON format.
+ *
+ * @return JSON string representation of users
+ */
+ private String exportToJson() {
+ try {
+ return objectMapper.writerWithDefaultPrettyPrinter()
+ .writeValueAsString(users.values());
+ } catch (JsonProcessingException e) {
+ logger.error("Error exporting users to JSON", e);
+ return "[]";
+ }
+ }
+
+ /**
+ * Exports users to CSV format.
+ *
+ * @return CSV string representation of users
+ */
+ private String exportToCsv() {
+ try (StringWriter writer = new StringWriter();
+ CSVPrinter printer = new CSVPrinter(writer, CSVFormat.DEFAULT.withHeader(
+ "Username", "Name", "Age", "Email", "Role", "Status", "Last Login"))) {
+
+ for (User user : users.values()) {
+ printer.printRecord(
+ user.getUsername(),
+ user.getName(),
+ user.getAge(),
+ user.getEmail(),
+ user.getRole().getCode(),
+ user.getStatus().getCode(),
+ user.getLastLogin()
+ );
+ }
+
+ return writer.toString();
+ } catch (IOException e) {
+ logger.error("Error exporting users to CSV", e);
+ return "Username,Name,Age,Email,Role,Status,Last Login\n";
+ }
+ }
+
+ /**
+ * Checks if a username exists in the system.
+ *
+ * @param username The username to check
+ * @return true if username exists
+ */
+ public boolean userExists(String username) {
+ return users.containsKey(username);
+ }
+
+ /**
+ * Clears all users from the system.
+ */
+ public void clearAllUsers() {
+ users.clear();
+ saveUsersToFile();
+ logger.info("All users cleared from system");
+ }
+
+ /**
+ * Loads users from file storage.
+ */
+ private void loadUsersFromFile() {
+ if (StringUtils.isBlank(storagePath)) {
+ return;
+ }
+
+ try {
+ Path path = Paths.get(storagePath);
+ if (!Files.exists(path)) {
+ logger.debug("User storage file does not exist: {}", storagePath);
+ return;
+ }
+
+ String content = Files.readString(path);
+ List<User> userList = Arrays.asList(objectMapper.readValue(content, User[].class));
+
+ users.clear();
+ for (User user : userList) {
+ users.put(user.getUsername(), user);
+ }
+
+ logger.info("Loaded {} users from file: {}", users.size(), storagePath);
+ } catch (IOException e) {
+ logger.error("Error loading users from file: {}", storagePath, e);
+ }
+ }
+
+ /**
+ * Saves users to file storage.
+ */
+ private void saveUsersToFile() {
+ if (StringUtils.isBlank(storagePath)) {
+ return;
+ }
+
+ try {
+ Path path = Paths.get(storagePath);
+ // Guard against a bare filename: getParent() is null in that case, and
+ // Files.createDirectories(null) would throw a NullPointerException.
+ if (path.getParent() != null) {
+ Files.createDirectories(path.getParent());
+ }
+
+ String content = objectMapper.writerWithDefaultPrettyPrinter()
+ .writeValueAsString(users.values());
+ Files.writeString(path, content);
+
+ logger.debug("Saved {} users to file: {}", users.size(), storagePath);
+ } catch (IOException e) {
+ logger.error("Error saving users to file: {}", storagePath, e);
+ }
+ }
+
+ // CI marker method to verify auto-reindex on change
+ public String ciAddedSymbolMarker() {
+ return "ci_symbol_java";
+ }
+}
\ No newline at end of file
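`UserManager` above is a map-backed CRUD store keyed by username: creation rejects duplicates, lookup throws when the user is missing, and stats are derived from the map. A minimal JavaScript sketch of that pattern (the `MiniUserManager` name and fields are hypothetical; in-memory only, no persistence):

```javascript
// Minimal in-memory sketch of UserManager's map-backed CRUD pattern.
class MiniUserManager {
  constructor() {
    this.users = new Map(); // username -> user record
  }
  createUser(username, name) {
    if (this.users.has(username)) {
      throw new Error(`User with username '${username}' already exists`);
    }
    const user = { username, name, status: 'active' };
    this.users.set(username, user);
    return user;
  }
  getUser(username) {
    const user = this.users.get(username);
    if (!user) throw new Error(`User with username '${username}' not found`);
    return user;
  }
  stats() {
    return { total: this.users.size };
  }
}
```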
diff --git a/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/utils/DuplicateUserException.java b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/utils/DuplicateUserException.java
new file mode 100644
index 0000000..e0601e5
--- /dev/null
+++ b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/utils/DuplicateUserException.java
@@ -0,0 +1,26 @@
+package com.example.usermanagement.utils;
+
+/**
+ * Exception thrown when attempting to create a user that already exists.
+ */
+public class DuplicateUserException extends RuntimeException {
+
+ /**
+ * Constructs a new DuplicateUserException with the specified detail message.
+ *
+ * @param message the detail message
+ */
+ public DuplicateUserException(String message) {
+ super(message);
+ }
+
+ /**
+ * Constructs a new DuplicateUserException with the specified detail message and cause.
+ *
+ * @param message the detail message
+ * @param cause the cause
+ */
+ public DuplicateUserException(String message, Throwable cause) {
+ super(message, cause);
+ }
+}
\ No newline at end of file
diff --git a/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/utils/UserNotFoundException.java b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/utils/UserNotFoundException.java
new file mode 100644
index 0000000..255160e
--- /dev/null
+++ b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/utils/UserNotFoundException.java
@@ -0,0 +1,26 @@
+package com.example.usermanagement.utils;
+
+/**
+ * Exception thrown when a user is not found in the system.
+ */
+public class UserNotFoundException extends RuntimeException {
+
+ /**
+ * Constructs a new UserNotFoundException with the specified detail message.
+ *
+ * @param message the detail message
+ */
+ public UserNotFoundException(String message) {
+ super(message);
+ }
+
+ /**
+ * Constructs a new UserNotFoundException with the specified detail message and cause.
+ *
+ * @param message the detail message
+ * @param cause the cause
+ */
+ public UserNotFoundException(String message, Throwable cause) {
+ super(message, cause);
+ }
+}
\ No newline at end of file
diff --git a/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/utils/ValidationUtils.java b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/utils/ValidationUtils.java
new file mode 100644
index 0000000..4b2d51f
--- /dev/null
+++ b/test/sample-projects/java/user-management/src/main/java/com/example/usermanagement/utils/ValidationUtils.java
@@ -0,0 +1,78 @@
+package com.example.usermanagement.utils;
+
+import org.apache.commons.lang3.StringUtils;
+
+/**
+ * Utility class for validation operations.
+ */
+public final class ValidationUtils {
+
+ private static final String EMAIL_PATTERN = "^[A-Za-z0-9+_.-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$";
+ private static final String USERNAME_PATTERN = "^[a-zA-Z0-9_]+$";
+
+ /**
+ * Private constructor to prevent instantiation.
+ */
+ private ValidationUtils() {
+ throw new UnsupportedOperationException("Utility class cannot be instantiated");
+ }
+
+ /**
+ * Validates email format.
+ *
+ * @param email The email to validate
+ * @throws IllegalArgumentException if email is invalid
+ */
+ public static void validateEmail(String email) {
+ if (StringUtils.isBlank(email)) {
+ throw new IllegalArgumentException("Email cannot be null or empty");
+ }
+
+ if (!email.matches(EMAIL_PATTERN)) {
+ throw new IllegalArgumentException("Invalid email format");
+ }
+ }
+
+ /**
+ * Validates username format.
+ *
+ * @param username The username to validate
+ * @throws IllegalArgumentException if username is invalid
+ */
+ public static void validateUsername(String username) {
+ if (StringUtils.isBlank(username)) {
+ throw new IllegalArgumentException("Username cannot be null or empty");
+ }
+
+ if (username.length() < 3 || username.length() > 20) {
+ throw new IllegalArgumentException("Username must be between 3 and 20 characters");
+ }
+
+ if (!username.matches(USERNAME_PATTERN)) {
+ throw new IllegalArgumentException("Username can only contain letters, numbers, and underscores");
+ }
+ }
+
+ /**
+ * Checks if email format is valid.
+ *
+ * @param email The email to check
+ * @return true if email is valid
+ */
+ public static boolean isValidEmail(String email) {
+ return StringUtils.isNotBlank(email) && email.matches(EMAIL_PATTERN);
+ }
+
+ /**
+ * Checks if username format is valid.
+ *
+ * @param username The username to check
+ * @return true if username is valid
+ */
+ public static boolean isValidUsername(String username) {
+ return StringUtils.isNotBlank(username) &&
+ username.length() >= 3 &&
+ username.length() <= 20 &&
+ username.matches(USERNAME_PATTERN);
+ }
+}
\ No newline at end of file
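The validation rules in `ValidationUtils` (non-blank, 3-20 character usernames of word characters; a simple email pattern) port directly to other languages. An illustrative JavaScript sketch using the same regular expressions as the Java file:

```javascript
// JS port of ValidationUtils' boolean checks; regexes copied from the Java source.
const EMAIL_PATTERN = /^[A-Za-z0-9+_.-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$/;
const USERNAME_PATTERN = /^[a-zA-Z0-9_]+$/;

function isValidEmail(email) {
  return typeof email === 'string' && email.trim() !== '' && EMAIL_PATTERN.test(email);
}

function isValidUsername(username) {
  return typeof username === 'string' &&
    username.length >= 3 && username.length <= 20 &&
    USERNAME_PATTERN.test(username);
}
```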
diff --git a/test/sample-projects/javascript/user-management/.env.example b/test/sample-projects/javascript/user-management/.env.example
new file mode 100644
index 0000000..61ea53d
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/.env.example
@@ -0,0 +1,29 @@
+# Server Configuration
+PORT=3000
+NODE_ENV=development
+
+# Database Configuration
+MONGODB_URI=mongodb://localhost:27017/user-management
+
+# JWT Configuration
+JWT_SECRET=your-super-secret-jwt-key-here
+JWT_EXPIRES_IN=24h
+
+# CORS Configuration
+ALLOWED_ORIGINS=http://localhost:3000,http://localhost:3001
+
+# Logging Configuration
+LOG_LEVEL=info
+
+# Rate Limiting Configuration
+RATE_LIMIT_WINDOW_MS=900000
+RATE_LIMIT_MAX_REQUESTS=100
+
+# Password Configuration
+BCRYPT_SALT_ROUNDS=12
+
+# Email Configuration (if implementing email features)
+# SMTP_HOST=smtp.gmail.com
+# SMTP_PORT=587
+# SMTP_USER=your-email@gmail.com
+# SMTP_PASS=your-app-password
\ No newline at end of file
diff --git a/test/sample-projects/javascript/user-management/README.md b/test/sample-projects/javascript/user-management/README.md
new file mode 100644
index 0000000..01c6dbc
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/README.md
@@ -0,0 +1,313 @@
+# User Management System
+
+A comprehensive user management system built with Node.js, Express, and MongoDB. This project demonstrates enterprise-level patterns for user authentication, authorization, and management.
+
+## Features
+
+### Core Functionality
+- **User Registration & Authentication**: Secure user registration with JWT-based authentication
+- **Role-Based Access Control (RBAC)**: Admin, User, and Guest roles with permission system
+- **Password Security**: BCrypt hashing with configurable salt rounds
+- **Account Management**: User activation, deactivation, suspension, and soft deletion
+- **Profile Management**: User profile updates with validation
+- **Permission System**: Granular permissions for fine-grained access control
+
+### Security Features
+- **Rate Limiting**: Configurable rate limits for different endpoints
+- **Input Validation**: Comprehensive validation using express-validator
+- **Security Headers**: Helmet.js for security headers
+- **CORS Protection**: Configurable CORS policies
+- **Account Lockout**: Automatic account lockout after failed login attempts
+- **JWT Security**: Secure token generation and validation
+
+### API Features
+- **RESTful API**: Clean REST API design with proper HTTP methods
+- **Pagination**: Efficient pagination for large datasets
+- **Search Functionality**: Full-text search across user fields
+- **Filtering**: Role-based and status-based filtering
+- **Export Functionality**: User data export capabilities
+- **Statistics**: User statistics and analytics
+
+### Development Features
+- **Error Handling**: Comprehensive error handling with custom error classes
+- **Logging**: Structured logging with Winston
+- **Documentation**: Detailed API documentation
+- **Testing**: Unit and integration tests with Jest
+- **Code Quality**: ESLint and Prettier configuration
+
+## Technology Stack
+
+- **Runtime**: Node.js 16+
+- **Framework**: Express.js
+- **Database**: MongoDB with Mongoose ODM
+- **Authentication**: JSON Web Tokens (JWT)
+- **Password Hashing**: BCrypt
+- **Validation**: Joi and express-validator
+- **Logging**: Winston
+- **Testing**: Jest and Supertest
+- **Security**: Helmet, CORS, Rate limiting
+
+## Installation
+
+### Prerequisites
+- Node.js (v16 or higher)
+- MongoDB (v4.4 or higher)
+- npm or yarn
+
+### Setup
+
+1. **Clone the repository**
+ ```bash
+ git clone <repository-url>
+ cd user-management
+ ```
+
+2. **Install dependencies**
+ ```bash
+ npm install
+ ```
+
+3. **Environment configuration**
+ ```bash
+ cp .env.example .env
+ ```
+ Update the `.env` file with your configuration:
+ ```env
+ PORT=3000
+ NODE_ENV=development
+ MONGODB_URI=mongodb://localhost:27017/user-management
+ JWT_SECRET=your-super-secret-jwt-key-here
+ JWT_EXPIRES_IN=24h
+ ```
+
+4. **Start MongoDB**
+ ```bash
+ # Using MongoDB service
+ sudo systemctl start mongod
+
+ # Or using Docker
+ docker run -d -p 27017:27017 --name mongodb mongo:latest
+ ```
+
+5. **Run the application**
+ ```bash
+ # Development mode
+ npm run dev
+
+ # Production mode
+ npm start
+ ```
+
+## API Documentation
+
+### Base URL
+```
+http://localhost:3000/api
+```
+
+### Authentication
+Most endpoints require authentication. Include the JWT token in the Authorization header:
+```
+Authorization: Bearer <your-jwt-token>
+```
+
+### Endpoints
+
+#### User Management
+- `POST /users` - Create new user
+- `GET /users` - Get all users (with pagination)
+- `GET /users/:id` - Get user by ID
+- `PUT /users/:id` - Update user
+- `DELETE /users/:id` - Delete user (soft delete)
+- `DELETE /users/:id/hard` - Permanently delete user
+
+#### Authentication
+- `POST /users/auth` - User login
+- `PUT /users/:id/password` - Change password
+- `PUT /users/:id/reset-password` - Reset password (admin only)
+
+#### Search & Filtering
+- `GET /users/search?q=query` - Search users
+- `GET /users/active` - Get active users
+- `GET /users/role/:role` - Get users by role
+
+#### Permissions
+- `PUT /users/:id/permissions` - Add permission
+- `DELETE /users/:id/permissions` - Remove permission
+
+#### Analytics
+- `GET /users/stats` - Get user statistics
+- `GET /users/export` - Export user data
+- `GET /users/:id/activity` - Get user activity
+
+### Example Requests
+
+#### Create User
+```bash
+curl -X POST http://localhost:3000/api/users \
+ -H "Content-Type: application/json" \
+ -d '{
+ "username": "john_doe",
+ "name": "John Doe",
+ "email": "john@example.com",
+ "password": "securepassword123",
+ "age": 30,
+ "role": "user"
+ }'
+```
+
+#### Authenticate User
+```bash
+curl -X POST http://localhost:3000/api/users/auth \
+ -H "Content-Type: application/json" \
+ -d '{
+ "username": "john_doe",
+ "password": "securepassword123"
+ }'
+```
+
+#### Get Users (with authentication)
+```bash
+curl -X GET http://localhost:3000/api/users \
+ -H "Authorization: Bearer <your-jwt-token>"
+```
+
+## Testing
+
+### Run Tests
+```bash
+# Run all tests
+npm test
+
+# Run tests in watch mode
+npm run test:watch
+
+# Run tests with coverage
+npm run test:coverage
+```
+
+### Test Structure
+```
+tests/
+├── unit/
+│ ├── models/
+│ ├── services/
+│ └── utils/
+├── integration/
+│ └── routes/
+└── setup/
+ └── testSetup.js
+```
+
+## Development
+
+### Code Quality
+```bash
+# Linting
+npm run lint
+
+# Formatting
+npm run format
+```
+
+### Project Structure
+```
+src/
+├── config/
+│ └── database.js
+├── middleware/
+│ ├── auth.js
+│ ├── rateLimiter.js
+│ └── validate.js
+├── models/
+│ └── User.js
+├── routes/
+│ └── userRoutes.js
+├── services/
+│ └── UserService.js
+├── utils/
+│ ├── errors.js
+│ └── logger.js
+└── server.js
+```
+
+### Database Schema
+
+#### User Schema
+```javascript
+{
+ id: String, // UUID
+ username: String, // Unique, 3-20 chars
+ email: String, // Optional, unique
+ name: String, // Required, 1-100 chars
+ age: Number, // Optional, 0-150
+ password: String, // Hashed, min 8 chars
+ role: String, // admin, user, guest
+ status: String, // active, inactive, suspended, deleted
+ lastLogin: Date, // Last login timestamp
+ loginAttempts: Number, // Failed login counter
+ permissions: [String], // Array of permissions
+ metadata: Object, // Flexible metadata
+ createdAt: Date, // Auto-generated
+ updatedAt: Date // Auto-generated
+}
+```
+
+## Environment Variables
+
+| Variable | Description | Default |
+|----------|-------------|-------|
+| `PORT` | Server port | 3000 |
+| `NODE_ENV` | Environment | development |
+| `MONGODB_URI` | MongoDB connection string | mongodb://localhost:27017/user-management |
+| `JWT_SECRET` | JWT secret key | Required |
+| `JWT_EXPIRES_IN` | JWT expiration time | 24h |
+| `ALLOWED_ORIGINS` | CORS allowed origins | http://localhost:3000 |
+| `LOG_LEVEL` | Logging level | info |
+| `BCRYPT_SALT_ROUNDS` | BCrypt salt rounds | 12 |
+
+## Security Considerations
+
+1. **Environment Variables**: Never commit sensitive data to version control
+2. **JWT Secret**: Use a strong, random JWT secret in production
+3. **Rate Limiting**: Adjust rate limits based on your requirements
+4. **Input Validation**: All inputs are validated and sanitized
+5. **Password Security**: Passwords are hashed using BCrypt with salt rounds
+6. **Account Lockout**: Accounts are locked after 5 failed login attempts
+7. **CORS**: Configure CORS origins for production
+8. **Security Headers**: Helmet.js provides security headers
+
+## Performance Optimizations
+
+1. **Database Indexing**: Indexes on frequently queried fields
+2. **Pagination**: Efficient pagination for large datasets
+3. **Connection Pooling**: MongoDB connection pooling
+4. **Compression**: Gzip compression for responses
+5. **Caching**: Ready for Redis integration
+
+## Contributing
+
+1. Fork the repository
+2. Create a feature branch
+3. Make your changes
+4. Add tests for new functionality
+5. Ensure all tests pass
+6. Submit a pull request
+
+## License
+
+MIT License - see the [LICENSE](LICENSE) file for details.
+
+## Support
+
+For support, please open an issue in the GitHub repository or contact the development team.
+
+## Changelog
+
+### v1.0.0
+- Initial release
+- User management functionality
+- Authentication and authorization
+- API endpoints
+- Security features
+- Testing suite
\ No newline at end of file
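The user schema in the README above can be exercised as a plain-object factory before wiring up Mongoose. A hedged JavaScript sketch (field names come from the schema table; the defaults shown here are assumptions, not the project's actual model):

```javascript
// Illustrative factory matching the README's user schema fields.
function makeUser({ username, name, email = null, age = null, role = 'user' }) {
  const now = new Date();
  return {
    username, email, name, age,
    role,                 // admin, user, guest
    status: 'active',     // active, inactive, suspended, deleted
    lastLogin: null,
    loginAttempts: 0,     // failed login counter
    permissions: [],      // array of permission strings
    metadata: {},         // flexible metadata
    createdAt: now,
    updatedAt: now,
  };
}
```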
diff --git a/test/sample-projects/javascript/user-management/package.json b/test/sample-projects/javascript/user-management/package.json
new file mode 100644
index 0000000..3d93e39
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/package.json
@@ -0,0 +1,85 @@
+{
+ "name": "user-management",
+ "version": "1.0.0",
+ "description": "A comprehensive user management system for testing Code Index MCP",
+ "main": "src/server.js",
+ "scripts": {
+ "start": "node src/server.js",
+ "dev": "nodemon src/server.js",
+ "test": "jest",
+ "test:watch": "jest --watch",
+ "test:coverage": "jest --coverage",
+ "lint": "eslint src/ --fix",
+ "format": "prettier --write src/"
+ },
+ "keywords": [
+ "user-management",
+ "nodejs",
+ "express",
+ "authentication",
+ "api"
+ ],
+ "author": "Test Author",
+ "license": "MIT",
+ "dependencies": {
+ "express": "^4.18.2",
+ "mongoose": "^7.4.1",
+ "bcryptjs": "^2.4.3",
+ "jsonwebtoken": "^9.0.1",
+ "joi": "^17.9.2",
+ "cors": "^2.8.5",
+ "helmet": "^7.0.0",
+ "express-rate-limit": "^6.8.1",
+ "winston": "^3.10.0",
+ "dotenv": "^16.3.1",
+ "uuid": "^9.0.0",
+ "morgan": "^1.10.0",
+ "compression": "^1.7.4",
+ "express-validator": "^7.0.1"
+ },
+ "devDependencies": {
+ "nodemon": "^3.0.1",
+ "jest": "^29.6.1",
+ "supertest": "^6.3.3",
+ "eslint": "^8.45.0",
+ "prettier": "^3.0.0",
+ "mongodb-memory-server": "^8.14.0"
+ },
+ "engines": {
+ "node": ">=16.0.0"
+ },
+ "jest": {
+ "testEnvironment": "node",
+ "coverageDirectory": "coverage",
+ "collectCoverageFrom": [
+ "src/**/*.js",
+ "!src/server.js"
+ ]
+ },
+ "eslintConfig": {
+ "env": {
+ "node": true,
+ "es2021": true,
+ "jest": true
+ },
+ "extends": [
+ "eslint:recommended"
+ ],
+ "parserOptions": {
+ "ecmaVersion": "latest",
+ "sourceType": "module"
+ },
+ "rules": {
+ "no-console": "warn",
+ "no-unused-vars": "error",
+ "prefer-const": "error",
+ "no-var": "error"
+ }
+ },
+ "prettier": {
+ "semi": true,
+ "singleQuote": true,
+ "tabWidth": 2,
+ "trailingComma": "es5"
+ }
+}
\ No newline at end of file
diff --git a/test/sample-projects/javascript/user-management/src/config/database.js b/test/sample-projects/javascript/user-management/src/config/database.js
new file mode 100644
index 0000000..796796a
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/src/config/database.js
@@ -0,0 +1,138 @@
+const mongoose = require('mongoose');
+const logger = require('../utils/logger');
+
+/**
+ * Database connection configuration
+ */
+class Database {
+ constructor() {
+ this.mongoURI = process.env.MONGODB_URI || 'mongodb://localhost:27017/user-management';
+ this.options = {
+ // useNewUrlParser/useUnifiedTopology are no-ops since MongoDB driver 4.x
+ // and trigger deprecation warnings under Mongoose 7, so they are omitted here.
+ maxPoolSize: 10,
+ serverSelectionTimeoutMS: 5000,
+ socketTimeoutMS: 45000,
+ family: 4,
+ };
+ }
+
+ /**
+ * Connect to MongoDB
+ */
+ async connect() {
+ try {
+ await mongoose.connect(this.mongoURI, this.options);
+ logger.info('MongoDB connected successfully');
+
+ // Handle connection events
+ mongoose.connection.on('error', (err) => {
+ logger.error('MongoDB connection error:', err);
+ });
+
+ mongoose.connection.on('disconnected', () => {
+ logger.warn('MongoDB disconnected');
+ });
+
+ mongoose.connection.on('reconnected', () => {
+ logger.info('MongoDB reconnected');
+ });
+
+ // Handle process termination
+ process.on('SIGINT', this.gracefulShutdown.bind(this));
+ process.on('SIGTERM', this.gracefulShutdown.bind(this));
+
+ } catch (error) {
+ logger.error('MongoDB connection failed:', error);
+ process.exit(1);
+ }
+ }
+
+ /**
+ * Disconnect from MongoDB
+ */
+ async disconnect() {
+ try {
+ await mongoose.disconnect();
+ logger.info('MongoDB disconnected successfully');
+ } catch (error) {
+ logger.error('MongoDB disconnection error:', error);
+ }
+ }
+
+ /**
+ * Graceful shutdown
+ */
+ async gracefulShutdown(signal) {
+ logger.info(`Received ${signal}. Graceful shutdown...`);
+ try {
+ await this.disconnect();
+ process.exit(0);
+ } catch (error) {
+ logger.error('Error during graceful shutdown:', error);
+ process.exit(1);
+ }
+ }
+
+ /**
+ * Get connection status
+ */
+ getConnectionStatus() {
+ const states = {
+ 0: 'disconnected',
+ 1: 'connected',
+ 2: 'connecting',
+ 3: 'disconnecting',
+ };
+ return states[mongoose.connection.readyState] || 'unknown';
+ }
+
+ /**
+ * Check if database is connected
+ */
+ isConnected() {
+ return mongoose.connection.readyState === 1;
+ }
+
+ /**
+ * Drop database (for testing)
+ */
+ async dropDatabase() {
+ if (process.env.NODE_ENV === 'test') {
+ try {
+ await mongoose.connection.db.dropDatabase();
+ logger.info('Test database dropped');
+ } catch (error) {
+ logger.error('Error dropping test database:', error);
+ }
+ } else {
+ logger.warn('Database drop attempted in non-test environment');
+ }
+ }
+
+ /**
+ * Get database statistics
+ */
+ async getStats() {
+ try {
+ const stats = await mongoose.connection.db.stats();
+ return {
+ database: mongoose.connection.name,
+ collections: stats.collections,
+ dataSize: stats.dataSize,
+ storageSize: stats.storageSize,
+ indexes: stats.indexes,
+ indexSize: stats.indexSize,
+ objects: stats.objects,
+ };
+ } catch (error) {
+ logger.error('Error getting database stats:', error);
+ return null;
+ }
+ }
+}
+
+// Create singleton instance
+const database = new Database();
+
+module.exports = database;
\ No newline at end of file
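The `getConnectionStatus` helper above maps Mongoose's numeric `readyState` to a human-readable label. The mapping can be sketched as a standalone function (the state table mirrors the one in `database.js`; no live Mongoose connection is needed):

```javascript
// Mirrors the readyState-to-label table used in Database.getConnectionStatus().
const CONNECTION_STATES = {
  0: 'disconnected',
  1: 'connected',
  2: 'connecting',
  3: 'disconnecting',
};

function connectionStatus(readyState) {
  // Unrecognized codes fall through to 'unknown', as in the class method.
  return CONNECTION_STATES[readyState] || 'unknown';
}
```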
diff --git a/test/sample-projects/javascript/user-management/src/middleware/auth.js b/test/sample-projects/javascript/user-management/src/middleware/auth.js
new file mode 100644
index 0000000..a8773e6
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/src/middleware/auth.js
@@ -0,0 +1,165 @@
+const jwt = require('jsonwebtoken');
+const { User } = require('../models/User');
+const { AuthenticationError, AuthorizationError } = require('../utils/errors');
+const logger = require('../utils/logger');
+
+/**
+ * Authentication middleware
+ * Verifies JWT token and attaches user to request object
+ */
+const auth = async (req, res, next) => {
+ try {
+ // Get token from header
+ const authHeader = req.header('Authorization');
+ const token = authHeader && authHeader.startsWith('Bearer ')
+ ? authHeader.substring(7)
+ : null;
+
+ if (!token) {
+ throw new AuthenticationError('Access denied. No token provided.');
+ }
+
+ // Verify token
+ const decoded = jwt.verify(token, process.env.JWT_SECRET || 'fallback-secret');
+
+ // Get user from database
+ const user = await User.findOne({ id: decoded.id });
+ if (!user) {
+ throw new AuthenticationError('Invalid token. User not found.');
+ }
+
+ // Check if user is active
+ if (!user.isActive) {
+ throw new AuthenticationError('User account is not active.');
+ }
+
+ // Attach user to request object
+ req.user = user;
+ next();
+ } catch (error) {
+ if (error.name === 'JsonWebTokenError') {
+ logger.warn('Invalid JWT token attempted');
+ next(new AuthenticationError('Invalid token'));
+ } else if (error.name === 'TokenExpiredError') {
+ logger.warn('Expired JWT token attempted');
+ next(new AuthenticationError('Token expired'));
+ } else {
+ logger.error('Authentication error:', error);
+ next(error);
+ }
+ }
+};
+
+/**
+ * Authorization middleware factory
+ * Creates middleware that checks if user has required role
+ */
+const authorize = (roles) => {
+ return (req, res, next) => {
+ if (!req.user) {
+ return next(new AuthenticationError('Authentication required'));
+ }
+
+ // Convert single role to array
+ const allowedRoles = Array.isArray(roles) ? roles : [roles];
+
+ // Check if user has required role
+ if (!allowedRoles.includes(req.user.role)) {
+ logger.warn(`User ${req.user.username} attempted to access resource requiring roles: ${allowedRoles.join(', ')}`);
+ return next(new AuthorizationError('Insufficient permissions'));
+ }
+
+ next();
+ };
+};
+
+/**
+ * Permission-based authorization middleware
+ * Checks if user has specific permission
+ */
+const requirePermission = (permission) => {
+ return (req, res, next) => {
+ if (!req.user) {
+ return next(new AuthenticationError('Authentication required'));
+ }
+
+ if (!req.user.hasPermission(permission)) {
+ logger.warn(`User ${req.user.username} attempted to access resource requiring permission: ${permission}`);
+ return next(new AuthorizationError('Insufficient permissions'));
+ }
+
+ next();
+ };
+};
+
+/**
+ * Self or admin middleware
+ * Allows access if user is accessing their own data or is an admin
+ */
+const selfOrAdmin = (req, res, next) => {
+ if (!req.user) {
+ return next(new AuthenticationError('Authentication required'));
+ }
+
+ const targetUserId = req.params.id;
+ const isAdmin = req.user.role === 'admin';
+ const isSelf = req.user.id === targetUserId;
+
+ if (!isAdmin && !isSelf) {
+ logger.warn(`User ${req.user.username} attempted to access another user's data`);
+ return next(new AuthorizationError('Access denied'));
+ }
+
+ next();
+};
+
+/**
+ * Admin only middleware
+ * Allows access only for admin users
+ */
+const adminOnly = authorize(['admin']);
+
+/**
+ * User or admin middleware
+ * Allows access for user role and above
+ */
+const userOrAdmin = authorize(['user', 'admin']);
+
+/**
+ * Optional authentication middleware
+ * Authenticates user if token is provided, but doesn't require it
+ */
+const optionalAuth = async (req, res, next) => {
+ try {
+ const authHeader = req.header('Authorization');
+ const token = authHeader && authHeader.startsWith('Bearer ')
+ ? authHeader.substring(7)
+ : null;
+
+ if (!token) {
+ return next();
+ }
+
+ const decoded = jwt.verify(token, process.env.JWT_SECRET || 'fallback-secret');
+ const user = await User.findOne({ id: decoded.id });
+
+ if (user && user.isActive) {
+ req.user = user;
+ }
+
+ next();
+ } catch (error) {
+ // Don't fail on optional auth, just continue without user
+ next();
+ }
+};
+
+module.exports = {
+ auth,
+ authorize,
+ requirePermission,
+ selfOrAdmin,
+ adminOnly,
+ userOrAdmin,
+ optionalAuth,
+};
\ No newline at end of file
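Both `auth` and `optionalAuth` above share the same `Authorization` header-parsing step. That logic can be isolated as a small pure function (a sketch only — the middleware inlines the check rather than calling a helper by this name):

```javascript
// Extracts the raw token from an `Authorization: Bearer <token>` header.
// Returns null for a missing header or a non-Bearer scheme, matching the
// inline checks in the auth middleware.
function extractBearerToken(authHeader) {
  if (authHeader && authHeader.startsWith('Bearer ')) {
    return authHeader.substring(7); // strip the 'Bearer ' prefix
  }
  return null;
}
```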
diff --git a/test/sample-projects/javascript/user-management/src/middleware/rateLimiter.js b/test/sample-projects/javascript/user-management/src/middleware/rateLimiter.js
new file mode 100644
index 0000000..bc94d40
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/src/middleware/rateLimiter.js
@@ -0,0 +1,122 @@
+const rateLimit = require('express-rate-limit');
+const { RateLimitError } = require('../utils/errors');
+const logger = require('../utils/logger');
+
+/**
+ * General rate limiter
+ * Limits requests per IP address
+ */
+const generalLimiter = rateLimit({
+ windowMs: 15 * 60 * 1000, // 15 minutes
+ max: 100, // limit each IP to 100 requests per windowMs
+ message: {
+ error: {
+ message: 'Too many requests from this IP, please try again later.',
+ statusCode: 429,
+ },
+ },
+ standardHeaders: true, // Return rate limit info in the `RateLimit-*` headers
+ legacyHeaders: false, // Disable the `X-RateLimit-*` headers
+ handler: (req, res) => {
+ logger.warn(`Rate limit exceeded for IP: ${req.ip}`);
+ const error = new RateLimitError('Too many requests from this IP, please try again later.');
+ res.status(429).json({
+ success: false,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode,
+ },
+ });
+ },
+});
+
+/**
+ * Authentication rate limiter
+ * Stricter limits for login attempts
+ */
+const authLimiter = rateLimit({
+ windowMs: 15 * 60 * 1000, // 15 minutes
+ max: 5, // limit each IP to 5 login attempts per windowMs
+ message: {
+ error: {
+ message: 'Too many login attempts from this IP, please try again later.',
+ statusCode: 429,
+ },
+ },
+ standardHeaders: true,
+ legacyHeaders: false,
+ handler: (req, res) => {
+ logger.warn(`Authentication rate limit exceeded for IP: ${req.ip}`);
+ const error = new RateLimitError('Too many login attempts from this IP, please try again later.');
+ res.status(429).json({
+ success: false,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode,
+ },
+ });
+ },
+});
+
+/**
+ * User creation rate limiter
+ * Moderate limits for user registration
+ */
+const createUserLimiter = rateLimit({
+ windowMs: 60 * 60 * 1000, // 1 hour
+ max: 10, // limit each IP to 10 user creations per hour
+ message: {
+ error: {
+ message: 'Too many accounts created from this IP, please try again later.',
+ statusCode: 429,
+ },
+ },
+ standardHeaders: true,
+ legacyHeaders: false,
+ handler: (req, res) => {
+ logger.warn(`User creation rate limit exceeded for IP: ${req.ip}`);
+ const error = new RateLimitError('Too many accounts created from this IP, please try again later.');
+ res.status(429).json({
+ success: false,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode,
+ },
+ });
+ },
+});
+
+/**
+ * Password reset rate limiter
+ * Limits password reset attempts
+ */
+const passwordResetLimiter = rateLimit({
+ windowMs: 60 * 60 * 1000, // 1 hour
+ max: 3, // limit each IP to 3 password reset attempts per hour
+ message: {
+ error: {
+ message: 'Too many password reset attempts from this IP, please try again later.',
+ statusCode: 429,
+ },
+ },
+ standardHeaders: true,
+ legacyHeaders: false,
+ handler: (req, res) => {
+ logger.warn(`Password reset rate limit exceeded for IP: ${req.ip}`);
+ const error = new RateLimitError('Too many password reset attempts from this IP, please try again later.');
+ res.status(429).json({
+ success: false,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode,
+ },
+ });
+ },
+});
+
+module.exports = {
+ generalLimiter,
+ authLimiter,
+ createUserLimiter,
+ passwordResetLimiter,
+};
\ No newline at end of file
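The four limiters above delegate counting to `express-rate-limit`. The underlying fixed-window idea can be sketched in plain JavaScript (an illustration only; `FixedWindowLimiter` is a hypothetical name, not part of the library, which adds pluggable stores and the `RateLimit-*` headers on top of this):

```javascript
// Minimal fixed-window rate limiter: allow at most `max` hits per `windowMs`
// for each key (e.g. an IP address).
class FixedWindowLimiter {
  constructor({ windowMs, max }) {
    this.windowMs = windowMs;
    this.max = max;
    this.hits = new Map(); // key -> { count, windowStart }
  }

  allow(key, now = Date.now()) {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First hit, or the previous window expired: start a fresh window.
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.max;
  }
}
```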
diff --git a/test/sample-projects/javascript/user-management/src/middleware/validate.js b/test/sample-projects/javascript/user-management/src/middleware/validate.js
new file mode 100644
index 0000000..87659fa
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/src/middleware/validate.js
@@ -0,0 +1,29 @@
+const { validationResult } = require('express-validator');
+const { ValidationError } = require('../utils/errors');
+
+/**
+ * Validation middleware
+ * Checks for validation errors and returns appropriate error response
+ */
+const validate = (req, res, next) => {
+ const errors = validationResult(req);
+
+ if (!errors.isEmpty()) {
+ const errorMessages = errors.array().map(error => ({
+ field: error.path,
+ message: error.msg,
+ value: error.value,
+ }));
+
+ const validationError = new ValidationError(
+ 'Validation failed',
+ errorMessages
+ );
+
+ return next(validationError);
+ }
+
+ next();
+};
+
+module.exports = validate;
\ No newline at end of file
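The middleware reshapes express-validator's error objects into `{ field, message, value }` records before passing them to `ValidationError`. The mapping itself is a pure transformation and can be exercised without Express (a sketch using plain objects shaped like express-validator v7 errors):

```javascript
// Converts express-validator style error objects ({ path, msg, value })
// into the { field, message, value } shape returned to API clients.
function formatValidationErrors(errors) {
  return errors.map(error => ({
    field: error.path,
    message: error.msg,
    value: error.value,
  }));
}
```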
diff --git a/test/sample-projects/javascript/user-management/src/models/User.js b/test/sample-projects/javascript/user-management/src/models/User.js
new file mode 100644
index 0000000..d0e5e6c
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/src/models/User.js
@@ -0,0 +1,333 @@
+const mongoose = require('mongoose');
+const bcrypt = require('bcryptjs');
+const jwt = require('jsonwebtoken');
+const { v4: uuidv4 } = require('uuid');
+
+// User roles enumeration
+const USER_ROLES = {
+ ADMIN: 'admin',
+ USER: 'user',
+ GUEST: 'guest',
+};
+
+// User status enumeration
+const USER_STATUS = {
+ ACTIVE: 'active',
+ INACTIVE: 'inactive',
+ SUSPENDED: 'suspended',
+ DELETED: 'deleted',
+};
+
+// User schema definition
+const userSchema = new mongoose.Schema(
+ {
+ id: {
+ type: String,
+ default: uuidv4,
+ unique: true,
+ required: true,
+ },
+ username: {
+ type: String,
+ required: [true, 'Username is required'],
+ unique: true,
+ minlength: [3, 'Username must be at least 3 characters'],
+ maxlength: [20, 'Username cannot exceed 20 characters'],
+ match: [/^[a-zA-Z0-9_]+$/, 'Username can only contain letters, numbers, and underscores'],
+ },
+ email: {
+ type: String,
+ unique: true,
+ sparse: true,
+ match: [/^\w+([.-]?\w+)*@\w+([.-]?\w+)*(\.\w{2,})+$/, 'Please enter a valid email'],
+ },
+ name: {
+ type: String,
+ required: [true, 'Name is required'],
+ minlength: [1, 'Name is required'],
+ maxlength: [100, 'Name cannot exceed 100 characters'],
+ },
+ age: {
+ type: Number,
+ min: [0, 'Age cannot be negative'],
+ max: [150, 'Age cannot exceed 150'],
+ },
+ password: {
+ type: String,
+ required: [true, 'Password is required'],
+ minlength: [8, 'Password must be at least 8 characters'],
+ select: false, // Don't include in queries by default
+ },
+ role: {
+ type: String,
+ enum: Object.values(USER_ROLES),
+ default: USER_ROLES.USER,
+ },
+ status: {
+ type: String,
+ enum: Object.values(USER_STATUS),
+ default: USER_STATUS.ACTIVE,
+ },
+ lastLogin: {
+ type: Date,
+ default: null,
+ },
+ loginAttempts: {
+ type: Number,
+ default: 0,
+ },
+ permissions: {
+ type: [String],
+ default: [],
+ },
+ metadata: {
+ type: mongoose.Schema.Types.Mixed,
+ default: {},
+ },
+ },
+ {
+ timestamps: true,
+ toJSON: { virtuals: true },
+ toObject: { virtuals: true },
+ }
+);
+
+// Virtual for user response (without sensitive data)
+userSchema.virtual('response').get(function() {
+ return {
+ id: this.id,
+ username: this.username,
+ email: this.email,
+ name: this.name,
+ age: this.age,
+ role: this.role,
+ status: this.status,
+ lastLogin: this.lastLogin,
+ permissions: this.permissions,
+ metadata: this.metadata,
+ createdAt: this.createdAt,
+ updatedAt: this.updatedAt,
+ };
+});
+
+// Virtual for checking if user is active
+userSchema.virtual('isActive').get(function() {
+ return this.status === USER_STATUS.ACTIVE;
+});
+
+// Virtual for checking if user is admin
+userSchema.virtual('isAdmin').get(function() {
+ return this.role === USER_ROLES.ADMIN;
+});
+
+// Virtual for checking if user is locked
+userSchema.virtual('isLocked').get(function() {
+ return this.loginAttempts >= 5 || this.status === USER_STATUS.SUSPENDED;
+});
+
+// Pre-save middleware to hash password
+userSchema.pre('save', async function(next) {
+ // Only hash the password if it has been modified (or is new)
+ if (!this.isModified('password')) return next();
+
+ try {
+ // Hash password with cost of 12
+ const hashedPassword = await bcrypt.hash(this.password, 12);
+ this.password = hashedPassword;
+ next();
+ } catch (error) {
+ next(error);
+ }
+});
+
+// Instance method to check password
+userSchema.methods.checkPassword = async function(candidatePassword) {
+ return await bcrypt.compare(candidatePassword, this.password);
+};
+
+// Instance method to generate JWT token
+userSchema.methods.generateToken = function() {
+ return jwt.sign(
+ {
+ id: this.id,
+ username: this.username,
+ role: this.role
+ },
+ process.env.JWT_SECRET || 'fallback-secret',
+ {
+ expiresIn: process.env.JWT_EXPIRES_IN || '24h',
+ issuer: 'user-management-system'
+ }
+ );
+};
+
+// Instance method to add permission
+userSchema.methods.addPermission = function(permission) {
+ if (!this.permissions.includes(permission)) {
+ this.permissions.push(permission);
+ }
+};
+
+// Instance method to remove permission
+userSchema.methods.removePermission = function(permission) {
+ this.permissions = this.permissions.filter(p => p !== permission);
+};
+
+// Instance method to check permission
+userSchema.methods.hasPermission = function(permission) {
+ return this.permissions.includes(permission);
+};
+
+// Instance method to record successful login
+userSchema.methods.recordLogin = function() {
+ this.lastLogin = new Date();
+ this.loginAttempts = 0;
+};
+
+// Instance method to record failed login attempt
+userSchema.methods.recordFailedLogin = function() {
+ this.loginAttempts += 1;
+ if (this.loginAttempts >= 5) {
+ this.status = USER_STATUS.SUSPENDED;
+ }
+};
+
+// Instance method to reset login attempts
+userSchema.methods.resetLoginAttempts = function() {
+ this.loginAttempts = 0;
+};
+
+// Instance method to activate user
+userSchema.methods.activate = function() {
+ this.status = USER_STATUS.ACTIVE;
+ this.loginAttempts = 0;
+};
+
+// Instance method to deactivate user
+userSchema.methods.deactivate = function() {
+ this.status = USER_STATUS.INACTIVE;
+};
+
+// Instance method to suspend user
+userSchema.methods.suspend = function() {
+ this.status = USER_STATUS.SUSPENDED;
+};
+
+// Instance method to delete user (soft delete)
+userSchema.methods.delete = function() {
+ this.status = USER_STATUS.DELETED;
+};
+
+// Instance method to get metadata
+userSchema.methods.getMetadata = function(key, defaultValue = null) {
+ // Use ?? so falsy stored values (0, '', false) are not masked by the default
+ return this.metadata[key] ?? defaultValue;
+};
+
+// Instance method to set metadata
+userSchema.methods.setMetadata = function(key, value) {
+ this.metadata[key] = value;
+ // Mixed paths are not change-tracked, so flag the path for the next save()
+ this.markModified('metadata');
+};
+
+// Instance method to remove metadata
+userSchema.methods.removeMetadata = function(key) {
+ delete this.metadata[key];
+ this.markModified('metadata'); // Mixed paths are not change-tracked
+};
+
+// Instance method to validate user data
+userSchema.methods.validateUser = function() {
+ const errors = [];
+
+ if (!this.username || this.username.length < 3) {
+ errors.push('Username must be at least 3 characters');
+ }
+
+ if (!this.name || this.name.length === 0) {
+ errors.push('Name is required');
+ }
+
+ if (this.age && (this.age < 0 || this.age > 150)) {
+ errors.push('Age must be between 0 and 150');
+ }
+
+ if (this.email && !this.email.match(/^\w+([.-]?\w+)*@\w+([.-]?\w+)*(\.\w{2,})+$/)) {
+ errors.push('Email format is invalid');
+ }
+
+ return errors;
+};
+
+// Static method to find by username
+userSchema.statics.findByUsername = function(username) {
+ return this.findOne({ username });
+};
+
+// Static method to find by email
+userSchema.statics.findByEmail = function(email) {
+ return this.findOne({ email });
+};
+
+// Static method to find active users
+userSchema.statics.findActive = function() {
+ return this.find({ status: USER_STATUS.ACTIVE });
+};
+
+// Static method to find by role
+userSchema.statics.findByRole = function(role) {
+ return this.find({ role });
+};
+
+// Static method to search users
+userSchema.statics.searchUsers = function(query, options = {}) {
+ const searchRegex = new RegExp(query, 'i');
+ const searchQuery = {
+ $or: [
+ { username: searchRegex },
+ { name: searchRegex },
+ { email: searchRegex },
+ ],
+ };
+
+ return this.find(searchQuery, null, options);
+};
+
+// Static method to get user statistics
+userSchema.statics.getUserStats = async function() {
+ const stats = await this.aggregate([
+ {
+ $group: {
+ _id: null,
+ total: { $sum: 1 },
+ active: { $sum: { $cond: [{ $eq: ['$status', USER_STATUS.ACTIVE] }, 1, 0] } },
+ admin: { $sum: { $cond: [{ $eq: ['$role', USER_ROLES.ADMIN] }, 1, 0] } },
+ user: { $sum: { $cond: [{ $eq: ['$role', USER_ROLES.USER] }, 1, 0] } },
+ guest: { $sum: { $cond: [{ $eq: ['$role', USER_ROLES.GUEST] }, 1, 0] } },
+ withEmail: { $sum: { $cond: [{ $ne: ['$email', null] }, 1, 0] } },
+ },
+ },
+ ]);
+
+ return stats[0] || {
+ total: 0,
+ active: 0,
+ admin: 0,
+ user: 0,
+ guest: 0,
+ withEmail: 0,
+ };
+};
+
+// Index for performance
+// username and email already get unique indexes from their field definitions,
+// so re-declaring them here would trigger Mongoose's duplicate-index warning.
+userSchema.index({ role: 1 });
+userSchema.index({ status: 1 });
+userSchema.index({ createdAt: -1 });
+
+// Export model and constants
+const User = mongoose.model('User', userSchema);
+
+module.exports = {
+ User,
+ USER_ROLES,
+ USER_STATUS,
+};
\ No newline at end of file
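The failed-login bookkeeping in `recordFailedLogin`/`recordLogin` is a small state machine: five consecutive failures suspend the account, and a successful login resets the counter. Stripped of Mongoose, the same rules can be sketched on a plain object (hypothetical free-function names; the real logic lives on the schema methods above):

```javascript
// Stand-alone model of the login-attempt tracking on the User schema:
// 5 consecutive failures flip status to 'suspended'; a success resets the counter.
const MAX_LOGIN_ATTEMPTS = 5;

function recordFailedLogin(user) {
  user.loginAttempts += 1;
  if (user.loginAttempts >= MAX_LOGIN_ATTEMPTS) {
    user.status = 'suspended';
  }
  return user;
}

function recordLogin(user, now = new Date()) {
  user.lastLogin = now;
  user.loginAttempts = 0; // a successful login clears the failure streak
  return user;
}
```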
diff --git a/test/sample-projects/javascript/user-management/src/routes/userRoutes.js b/test/sample-projects/javascript/user-management/src/routes/userRoutes.js
new file mode 100644
index 0000000..2c2d4dd
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/src/routes/userRoutes.js
@@ -0,0 +1,268 @@
+const express = require('express');
+const { body, param, query } = require('express-validator');
+const UserService = require('../services/UserService');
+const { asyncHandler, createSuccessResponse, createErrorResponse } = require('../utils/errors');
+const { auth } = require('../middleware/auth');
+const validate = require('../middleware/validate');
+const logger = require('../utils/logger');
+
+const router = express.Router();
+const userService = new UserService();
+
+// User creation validation
+const createUserValidation = [
+ body('username')
+ .isLength({ min: 3, max: 20 })
+ .withMessage('Username must be between 3 and 20 characters')
+ .matches(/^[a-zA-Z0-9_]+$/)
+ .withMessage('Username can only contain letters, numbers, and underscores'),
+ body('name')
+ .isLength({ min: 1, max: 100 })
+ .withMessage('Name must be between 1 and 100 characters'),
+ body('email')
+ .optional()
+ .isEmail()
+ .withMessage('Please provide a valid email address'),
+ body('age')
+ .optional()
+ .isInt({ min: 0, max: 150 })
+ .withMessage('Age must be between 0 and 150'),
+ body('password')
+ .isLength({ min: 8 })
+ .withMessage('Password must be at least 8 characters long'),
+ body('role')
+ .optional()
+ .isIn(['admin', 'user', 'guest'])
+ .withMessage('Role must be admin, user, or guest'),
+];
+
+// User update validation
+const updateUserValidation = [
+ param('id').notEmpty().withMessage('User ID is required'),
+ body('username')
+ .optional()
+ .isLength({ min: 3, max: 20 })
+ .withMessage('Username must be between 3 and 20 characters')
+ .matches(/^[a-zA-Z0-9_]+$/)
+ .withMessage('Username can only contain letters, numbers, and underscores'),
+ body('name')
+ .optional()
+ .isLength({ min: 1, max: 100 })
+ .withMessage('Name must be between 1 and 100 characters'),
+ body('email')
+ .optional()
+ .isEmail()
+ .withMessage('Please provide a valid email address'),
+ body('age')
+ .optional()
+ .isInt({ min: 0, max: 150 })
+ .withMessage('Age must be between 0 and 150'),
+ body('role')
+ .optional()
+ .isIn(['admin', 'user', 'guest'])
+ .withMessage('Role must be admin, user, or guest'),
+];
+
+// Password change validation
+const passwordChangeValidation = [
+ param('id').notEmpty().withMessage('User ID is required'),
+ body('currentPassword')
+ .isLength({ min: 1 })
+ .withMessage('Current password is required'),
+ body('newPassword')
+ .isLength({ min: 8 })
+ .withMessage('New password must be at least 8 characters long'),
+];
+
+// Authentication validation
+const authValidation = [
+ body('username')
+ .isLength({ min: 3 })
+ .withMessage('Username must be at least 3 characters'),
+ body('password')
+ .isLength({ min: 1 })
+ .withMessage('Password is required'),
+];
+
+// Search validation
+const searchValidation = [
+ query('q')
+ .isLength({ min: 1 })
+ .withMessage('Search query is required'),
+ query('page')
+ .optional()
+ .isInt({ min: 1 })
+ .withMessage('Page must be a positive integer'),
+ query('limit')
+ .optional()
+ .isInt({ min: 1, max: 100 })
+ .withMessage('Limit must be between 1 and 100'),
+];
+
+// @route POST /api/users
+// @desc Create a new user
+// @access Public
+router.post('/', createUserValidation, validate, asyncHandler(async (req, res) => {
+ const user = await userService.createUser(req.body);
+ logger.info(`User created via API: ${user.username}`);
+ res.status(201).json(createSuccessResponse(user, 'User created successfully'));
+}));
+
+// @route POST /api/users/auth
+// @desc Authenticate user
+// @access Public
+router.post('/auth', authValidation, validate, asyncHandler(async (req, res) => {
+ const { username, password } = req.body;
+ const result = await userService.authenticateUser(username, password);
+ logger.info(`User authenticated via API: ${username}`);
+ res.json(createSuccessResponse(result, 'Authentication successful'));
+}));
+
+// @route GET /api/users
+// @desc Get all users with pagination
+// @access Private (Admin only)
+router.get('/', auth, asyncHandler(async (req, res) => {
+ const page = parseInt(req.query.page) || 1;
+ const limit = parseInt(req.query.limit) || 20;
+ const filter = {};
+
+ // Add filtering by role if provided
+ if (req.query.role) {
+ filter.role = req.query.role;
+ }
+
+ // Add filtering by status if provided
+ if (req.query.status) {
+ filter.status = req.query.status;
+ }
+
+ const result = await userService.getAllUsers(page, limit, filter);
+ res.json(createSuccessResponse(result, 'Users retrieved successfully'));
+}));
+
+// @route GET /api/users/search
+// @desc Search users
+// @access Private
+router.get('/search', auth, searchValidation, validate, asyncHandler(async (req, res) => {
+ const { q: query, page = 1, limit = 20 } = req.query;
+ const result = await userService.searchUsers(query, parseInt(page), parseInt(limit));
+ res.json(createSuccessResponse(result, 'Search completed successfully'));
+}));
+
+// @route GET /api/users/stats
+// @desc Get user statistics
+// @access Private (Admin only)
+router.get('/stats', auth, asyncHandler(async (req, res) => {
+ const stats = await userService.getUserStats();
+ res.json(createSuccessResponse(stats, 'Statistics retrieved successfully'));
+}));
+
+// @route GET /api/users/export
+// @desc Export all users
+// @access Private (Admin only)
+router.get('/export', auth, asyncHandler(async (req, res) => {
+ const users = await userService.exportUsers();
+ res.json(createSuccessResponse(users, 'Users exported successfully'));
+}));
+
+// @route GET /api/users/active
+// @desc Get active users
+// @access Private
+router.get('/active', auth, asyncHandler(async (req, res) => {
+ const users = await userService.getActiveUsers();
+ res.json(createSuccessResponse(users, 'Active users retrieved successfully'));
+}));
+
+// @route GET /api/users/role/:role
+// @desc Get users by role
+// @access Private (Admin only)
+router.get('/role/:role', auth, asyncHandler(async (req, res) => {
+ const { role } = req.params;
+ const users = await userService.getUsersByRole(role);
+ res.json(createSuccessResponse(users, `Users with role ${role} retrieved successfully`));
+}));
+
+// @route GET /api/users/:id
+// @desc Get user by ID
+// @access Private
+router.get('/:id', auth, asyncHandler(async (req, res) => {
+ const user = await userService.getUserById(req.params.id);
+ res.json(createSuccessResponse(user, 'User retrieved successfully'));
+}));
+
+// @route GET /api/users/:id/activity
+// @desc Get user activity
+// @access Private (Admin or same user)
+router.get('/:id/activity', auth, asyncHandler(async (req, res) => {
+ const activity = await userService.getUserActivity(req.params.id);
+ res.json(createSuccessResponse(activity, 'User activity retrieved successfully'));
+}));
+
+// @route PUT /api/users/:id
+// @desc Update user
+// @access Private (Admin or same user)
+router.put('/:id', auth, updateUserValidation, validate, asyncHandler(async (req, res) => {
+ const user = await userService.updateUser(req.params.id, req.body);
+ logger.info(`User updated via API: ${user.username}`);
+ res.json(createSuccessResponse(user, 'User updated successfully'));
+}));
+
+// @route PUT /api/users/:id/password
+// @desc Change user password
+// @access Private (Admin or same user)
+router.put('/:id/password', auth, passwordChangeValidation, validate, asyncHandler(async (req, res) => {
+ const { currentPassword, newPassword } = req.body;
+ await userService.changePassword(req.params.id, currentPassword, newPassword);
+ logger.info(`Password changed via API for user: ${req.params.id}`);
+ res.json(createSuccessResponse(null, 'Password changed successfully'));
+}));
+
+// @route PUT /api/users/:id/reset-password
+// @desc Reset user password (Admin only)
+// @access Private (Admin only)
+router.put('/:id/reset-password', auth, asyncHandler(async (req, res) => {
+ const { newPassword } = req.body;
+ await userService.resetPassword(req.params.id, newPassword);
+ logger.info(`Password reset via API for user: ${req.params.id}`);
+ res.json(createSuccessResponse(null, 'Password reset successfully'));
+}));
+
+// @route PUT /api/users/:id/permissions
+// @desc Add permission to user
+// @access Private (Admin only)
+router.put('/:id/permissions', auth, asyncHandler(async (req, res) => {
+ const { permission } = req.body;
+ await userService.addPermission(req.params.id, permission);
+ logger.info(`Permission added via API for user: ${req.params.id}`);
+ res.json(createSuccessResponse(null, 'Permission added successfully'));
+}));
+
+// @route DELETE /api/users/:id/permissions
+// @desc Remove permission from user
+// @access Private (Admin only)
+router.delete('/:id/permissions', auth, asyncHandler(async (req, res) => {
+ const { permission } = req.body;
+ await userService.removePermission(req.params.id, permission);
+ logger.info(`Permission removed via API for user: ${req.params.id}`);
+ res.json(createSuccessResponse(null, 'Permission removed successfully'));
+}));
+
+// @route DELETE /api/users/:id
+// @desc Delete user (soft delete)
+// @access Private (Admin only)
+router.delete('/:id', auth, asyncHandler(async (req, res) => {
+ await userService.deleteUser(req.params.id);
+ logger.info(`User deleted via API: ${req.params.id}`);
+ res.json(createSuccessResponse(null, 'User deleted successfully'));
+}));
+
+// @route DELETE /api/users/:id/hard
+// @desc Hard delete user (permanent)
+// @access Private (Admin only)
+router.delete('/:id/hard', auth, asyncHandler(async (req, res) => {
+ await userService.hardDeleteUser(req.params.id);
+ logger.info(`User permanently deleted via API: ${req.params.id}`);
+ res.json(createSuccessResponse(null, 'User permanently deleted'));
+}));
+
+module.exports = router;
\ No newline at end of file
diff --git a/test/sample-projects/javascript/user-management/src/server.js b/test/sample-projects/javascript/user-management/src/server.js
new file mode 100644
index 0000000..0c1f850
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/src/server.js
@@ -0,0 +1,190 @@
+require('dotenv').config();
+
+const express = require('express');
+const cors = require('cors');
+const helmet = require('helmet');
+const compression = require('compression');
+const morgan = require('morgan');
+
+const database = require('./config/database');
+const userRoutes = require('./routes/userRoutes');
+const { generalLimiter } = require('./middleware/rateLimiter');
+const { globalErrorHandler, handleNotFound } = require('./utils/errors');
+const logger = require('./utils/logger');
+
+/**
+ * Express application setup
+ */
+class App {
+ constructor() {
+ this.app = express();
+ this.port = process.env.PORT || 3000;
+ this.setupMiddleware();
+ this.setupRoutes();
+ this.setupErrorHandling();
+ }
+
+ /**
+ * Setup middleware
+ */
+ setupMiddleware() {
+ // Security middleware
+ this.app.use(helmet());
+
+ // CORS configuration
+ this.app.use(cors({
+ origin: process.env.ALLOWED_ORIGINS?.split(',') || ['http://localhost:3000'],
+ credentials: true,
+ }));
+
+ // Compression middleware
+ this.app.use(compression());
+
+ // Body parsing middleware
+ this.app.use(express.json({ limit: '10mb' }));
+ this.app.use(express.urlencoded({ extended: true, limit: '10mb' }));
+
+ // Logging middleware
+ this.app.use(morgan('combined', { stream: logger.stream }));
+
+ // Rate limiting
+ this.app.use(generalLimiter);
+
+ // Request ID middleware
+ this.app.use((req, res, next) => {
+ req.requestId = Math.random().toString(36).slice(2, 11);
+ res.set('X-Request-ID', req.requestId);
+ next();
+ });
+
+ // Health check endpoint
+ this.app.get('/health', (req, res) => {
+ res.json({
+ status: 'OK',
+ timestamp: new Date().toISOString(),
+ uptime: process.uptime(),
+ database: database.getConnectionStatus(),
+ version: process.env.npm_package_version || '1.0.0',
+ });
+ });
+ }
+
+ /**
+ * Setup routes
+ */
+ setupRoutes() {
+ // API routes
+ this.app.use('/api/users', userRoutes);
+
+ // Root endpoint
+ this.app.get('/', (req, res) => {
+ res.json({
+ message: 'User Management API',
+ version: '1.0.0',
+ endpoints: {
+ health: '/health',
+ users: '/api/users',
+ auth: '/api/users/auth',
+ },
+ });
+ });
+ }
+
+ /**
+ * Setup error handling
+ */
+ setupErrorHandling() {
+ // Handle 404 for unknown routes
+ this.app.use(handleNotFound);
+
+ // Global error handler
+ this.app.use(globalErrorHandler);
+ }
+
+ /**
+ * Start the server
+ */
+ async start() {
+ try {
+ // Connect to database
+ await database.connect();
+
+ // Start server
+ this.server = this.app.listen(this.port, () => {
+ logger.info(`Server running on port ${this.port}`);
+ logger.info(`Environment: ${process.env.NODE_ENV || 'development'}`);
+ logger.info(`Health check: http://localhost:${this.port}/health`);
+ });
+
+ // Handle server errors
+ this.server.on('error', (error) => {
+ if (error.syscall !== 'listen') {
+ throw error;
+ }
+
+ const bind = typeof this.port === 'string' ? `Pipe ${this.port}` : `Port ${this.port}`;
+
+ switch (error.code) {
+ case 'EACCES':
+ logger.error(`${bind} requires elevated privileges`);
+ process.exit(1);
+ break;
+ case 'EADDRINUSE':
+ logger.error(`${bind} is already in use`);
+ process.exit(1);
+ break;
+ default:
+ throw error;
+ }
+ });
+
+ // Graceful shutdown
+ process.on('SIGTERM', this.gracefulShutdown.bind(this));
+ process.on('SIGINT', this.gracefulShutdown.bind(this));
+
+ } catch (error) {
+ logger.error('Failed to start server:', error);
+ process.exit(1);
+ }
+ }
+
+ /**
+ * Graceful shutdown
+ */
+ async gracefulShutdown(signal) {
+ logger.info(`Received ${signal}. Graceful shutdown...`);
+
+ if (this.server) {
+ this.server.close(async () => {
+ logger.info('HTTP server closed');
+
+ try {
+ await database.disconnect();
+ logger.info('Database disconnected');
+ process.exit(0);
+ } catch (error) {
+ logger.error('Error during graceful shutdown:', error);
+ process.exit(1);
+ }
+ });
+ }
+ }
+
+ /**
+ * Get Express app instance
+ */
+ getApp() {
+ return this.app;
+ }
+}
+
+// Create and start the application
+const app = new App();
+
+// Start server if not in test environment
+if (process.env.NODE_ENV !== 'test') {
+ app.start();
+}
+
+// Export for testing
+module.exports = app.getApp();
\ No newline at end of file
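
The `EACCES`/`EADDRINUSE` branches in the `server.js` error handler above can be exercised in isolation. The `describeListenError` helper below is a sketch written for this note, not part of the sample project:

```javascript
// Hypothetical helper mirroring the listen-error branches in server.js
function describeListenError(code, port) {
  const bind = typeof port === 'string' ? `Pipe ${port}` : `Port ${port}`;
  switch (code) {
    case 'EACCES':
      return `${bind} requires elevated privileges`;
    case 'EADDRINUSE':
      return `${bind} is already in use`;
    default:
      return null; // server.js rethrows unknown codes
  }
}

console.log(describeListenError('EADDRINUSE', 3000));
// Port 3000 is already in use
console.log(describeListenError('EACCES', '/tmp/app.sock'));
// Pipe /tmp/app.sock requires elevated privileges
```

Note the string-vs-number check on `port`: Node's `server.listen` accepts a pipe path as well as a numeric port, which is why the original code distinguishes the two.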
diff --git a/test/sample-projects/javascript/user-management/src/services/UserService.js b/test/sample-projects/javascript/user-management/src/services/UserService.js
new file mode 100644
index 0000000..5dfde99
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/src/services/UserService.js
@@ -0,0 +1,493 @@
+const { User, USER_ROLES, USER_STATUS } = require('../models/User');
+const { AppError } = require('../utils/errors');
+const logger = require('../utils/logger');
+
+/**
+ * UserService class handles all user-related business logic
+ */
+class UserService {
+ /**
+ * Create a new user
+ * @param {Object} userData - User data object
+ * @returns {Promise} Created user response
+ */
+ async createUser(userData) {
+ try {
+ // Check if username already exists
+ const existingUsername = await User.findByUsername(userData.username);
+ if (existingUsername) {
+ throw new AppError('Username already exists', 400);
+ }
+
+ // Check if email already exists (if provided)
+ if (userData.email) {
+ const existingEmail = await User.findByEmail(userData.email);
+ if (existingEmail) {
+ throw new AppError('Email already exists', 400);
+ }
+ }
+
+ // Create new user
+ const user = new User(userData);
+
+ // Validate user data
+ const validationErrors = user.validateUser();
+ if (validationErrors.length > 0) {
+ throw new AppError(validationErrors.join(', '), 400);
+ }
+
+ await user.save();
+
+ logger.info(`User created successfully: ${user.username}`);
+ return user.response;
+ } catch (error) {
+ logger.error('Error creating user:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get user by ID
+ * @param {string} id - User ID
+ * @returns {Promise} User response
+ */
+ async getUserById(id) {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+ return user.response;
+ } catch (error) {
+ logger.error('Error getting user by ID:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get user by username
+ * @param {string} username - Username
+ * @returns {Promise} User response
+ */
+ async getUserByUsername(username) {
+ try {
+ const user = await User.findByUsername(username);
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+ return user.response;
+ } catch (error) {
+ logger.error('Error getting user by username:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get user by email
+ * @param {string} email - Email address
+ * @returns {Promise} User response
+ */
+ async getUserByEmail(email) {
+ try {
+ const user = await User.findByEmail(email);
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+ return user.response;
+ } catch (error) {
+ logger.error('Error getting user by email:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Update user
+ * @param {string} id - User ID
+ * @param {Object} updateData - Update data
+ * @returns {Promise} Updated user response
+ */
+ async updateUser(id, updateData) {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ // Apply updates
+ Object.keys(updateData).forEach(key => {
+ if (key !== 'password' && key !== 'id') {
+ user[key] = updateData[key];
+ }
+ });
+
+ // Validate updated data
+ const validationErrors = user.validateUser();
+ if (validationErrors.length > 0) {
+ throw new AppError(validationErrors.join(', '), 400);
+ }
+
+ await user.save();
+
+ logger.info(`User updated successfully: ${user.username}`);
+ return user.response;
+ } catch (error) {
+ logger.error('Error updating user:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Delete user (soft delete)
+ * @param {string} id - User ID
+ * @returns {Promise} Success status
+ */
+ async deleteUser(id) {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ user.delete();
+ await user.save();
+
+ logger.info(`User deleted successfully: ${user.username}`);
+ return true;
+ } catch (error) {
+ logger.error('Error deleting user:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Hard delete user (permanent deletion)
+ * @param {string} id - User ID
+ * @returns {Promise} Success status
+ */
+ async hardDeleteUser(id) {
+ try {
+ const result = await User.deleteOne({ id });
+ if (result.deletedCount === 0) {
+ throw new AppError('User not found', 404);
+ }
+
+ logger.info(`User permanently deleted: ${id}`);
+ return true;
+ } catch (error) {
+ logger.error('Error hard deleting user:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get all users with pagination
+ * @param {number} page - Page number
+ * @param {number} limit - Items per page
+ * @param {Object} filter - Filter criteria
+ * @returns {Promise} Paginated users response
+ */
+ async getAllUsers(page = 1, limit = 20, filter = {}) {
+ try {
+ const skip = (page - 1) * limit;
+
+ const users = await User.find(filter)
+ .sort({ createdAt: -1 })
+ .skip(skip)
+ .limit(limit);
+
+ const total = await User.countDocuments(filter);
+ const totalPages = Math.ceil(total / limit);
+
+ return {
+ users: users.map(user => user.response),
+ pagination: {
+ page,
+ limit,
+ total,
+ totalPages,
+ hasNext: page < totalPages,
+ hasPrev: page > 1,
+ },
+ };
+ } catch (error) {
+ logger.error('Error getting all users:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get active users
+ * @returns {Promise} Active users
+ */
+ async getActiveUsers() {
+ try {
+ const users = await User.findActive();
+ return users.map(user => user.response);
+ } catch (error) {
+ logger.error('Error getting active users:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get users by role
+ * @param {string} role - User role
+ * @returns {Promise} Users with specified role
+ */
+ async getUsersByRole(role) {
+ try {
+ const users = await User.findByRole(role);
+ return users.map(user => user.response);
+ } catch (error) {
+ logger.error('Error getting users by role:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Search users
+ * @param {string} query - Search query
+ * @param {number} page - Page number
+ * @param {number} limit - Items per page
+ * @returns {Promise} Search results
+ */
+ async searchUsers(query, page = 1, limit = 20) {
+ try {
+ const skip = (page - 1) * limit;
+
+ const users = await User.searchUsers(query, {
+ skip,
+ limit,
+ sort: { createdAt: -1 },
+ });
+
+ // Count total matching users
+ const totalUsers = await User.searchUsers(query);
+ const total = totalUsers.length;
+ const totalPages = Math.ceil(total / limit);
+
+ return {
+ users: users.map(user => user.response),
+ query,
+ pagination: {
+ page,
+ limit,
+ total,
+ totalPages,
+ hasNext: page < totalPages,
+ hasPrev: page > 1,
+ },
+ };
+ } catch (error) {
+ logger.error('Error searching users:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get user statistics
+ * @returns {Promise} User statistics
+ */
+ async getUserStats() {
+ try {
+ const stats = await User.getUserStats();
+ return stats;
+ } catch (error) {
+ logger.error('Error getting user statistics:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Authenticate user
+ * @param {string} username - Username
+ * @param {string} password - Password
+ * @returns {Promise} Authentication result
+ */
+ async authenticateUser(username, password) {
+ try {
+ const user = await User.findByUsername(username).select('+password');
+
+ if (!user || !(await user.checkPassword(password))) {
+ // Record failed login attempt if user exists
+ if (user) {
+ user.recordFailedLogin();
+ await user.save();
+ }
+ throw new AppError('Invalid username or password', 401);
+ }
+
+ if (!user.isActive) {
+ throw new AppError('User account is not active', 401);
+ }
+
+ if (user.isLocked) {
+ throw new AppError('User account is locked', 401);
+ }
+
+ // Record successful login
+ user.recordLogin();
+ await user.save();
+
+ // Generate token
+ const token = user.generateToken();
+
+ logger.info(`User authenticated successfully: ${user.username}`);
+
+ return {
+ user: user.response,
+ token,
+ };
+ } catch (error) {
+ logger.error('Error authenticating user:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Change user password
+ * @param {string} id - User ID
+ * @param {string} currentPassword - Current password
+ * @param {string} newPassword - New password
+ * @returns {Promise} Success status
+ */
+ async changePassword(id, currentPassword, newPassword) {
+ try {
+ const user = await User.findOne({ id }).select('+password');
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ if (!(await user.checkPassword(currentPassword))) {
+ throw new AppError('Current password is incorrect', 400);
+ }
+
+ user.password = newPassword;
+ await user.save();
+
+ logger.info(`Password changed successfully for user: ${user.username}`);
+ return true;
+ } catch (error) {
+ logger.error('Error changing password:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Reset user password (admin function)
+ * @param {string} id - User ID
+ * @param {string} newPassword - New password
+ * @returns {Promise} Success status
+ */
+ async resetPassword(id, newPassword) {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ user.password = newPassword;
+ user.resetLoginAttempts();
+ await user.save();
+
+ logger.info(`Password reset successfully for user: ${user.username}`);
+ return true;
+ } catch (error) {
+ logger.error('Error resetting password:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Add permission to user
+ * @param {string} id - User ID
+ * @param {string} permission - Permission to add
+ * @returns {Promise} Success status
+ */
+ async addPermission(id, permission) {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ user.addPermission(permission);
+ await user.save();
+
+ logger.info(`Permission added to user ${user.username}: ${permission}`);
+ return true;
+ } catch (error) {
+ logger.error('Error adding permission:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Remove permission from user
+ * @param {string} id - User ID
+ * @param {string} permission - Permission to remove
+ * @returns {Promise} Success status
+ */
+ async removePermission(id, permission) {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ user.removePermission(permission);
+ await user.save();
+
+ logger.info(`Permission removed from user ${user.username}: ${permission}`);
+ return true;
+ } catch (error) {
+ logger.error('Error removing permission:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Export users data
+ * @returns {Promise} Users data for export
+ */
+ async exportUsers() {
+ try {
+ const users = await User.find().sort({ createdAt: -1 });
+ return users.map(user => user.response);
+ } catch (error) {
+ logger.error('Error exporting users:', error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get user activity
+ * @param {string} id - User ID
+ * @returns {Promise} User activity data
+ */
+ async getUserActivity(id) {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ return {
+ id: user.id,
+ username: user.username,
+ lastLogin: user.lastLogin,
+ loginAttempts: user.loginAttempts,
+ isActive: user.isActive,
+ isLocked: user.isLocked,
+ createdAt: user.createdAt,
+ updatedAt: user.updatedAt,
+ };
+ } catch (error) {
+ logger.error('Error getting user activity:', error);
+ throw error;
+ }
+ }
+}
+
+module.exports = UserService;
+// AUTO_REINDEX_MARKER: ci_auto_reindex_test_token_js
\ No newline at end of file
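
The skip/limit arithmetic shared by `getAllUsers` and `searchUsers` above can be checked on its own. The `paginate` helper is a sketch for illustration, not an export of `UserService`:

```javascript
// Sketch of the pagination math used in UserService.getAllUsers/searchUsers
function paginate(page, limit, total) {
  const totalPages = Math.ceil(total / limit);
  return {
    skip: (page - 1) * limit, // offset into the result set
    totalPages,
    hasNext: page < totalPages,
    hasPrev: page > 1,
  };
}

console.log(paginate(2, 20, 45));
// { skip: 20, totalPages: 3, hasNext: true, hasPrev: true }
```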
diff --git a/test/sample-projects/javascript/user-management/src/utils/errors.js b/test/sample-projects/javascript/user-management/src/utils/errors.js
new file mode 100644
index 0000000..15677ac
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/src/utils/errors.js
@@ -0,0 +1,206 @@
+/**
+ * Custom error classes for the application
+ */
+
+/**
+ * Base application error class
+ */
+class AppError extends Error {
+ constructor(message, statusCode = 500, isOperational = true) {
+ super(message);
+
+ this.statusCode = statusCode;
+ this.isOperational = isOperational;
+ this.status = `${statusCode}`.startsWith('4') ? 'fail' : 'error';
+
+ // Capture stack trace
+ Error.captureStackTrace(this, this.constructor);
+ }
+}
+
+/**
+ * Validation error class
+ */
+class ValidationError extends AppError {
+ constructor(message, errors = []) {
+ super(message, 400);
+ this.errors = errors;
+ }
+}
+
+/**
+ * Authentication error class
+ */
+class AuthenticationError extends AppError {
+ constructor(message = 'Authentication failed') {
+ super(message, 401);
+ }
+}
+
+/**
+ * Authorization error class
+ */
+class AuthorizationError extends AppError {
+ constructor(message = 'Access denied') {
+ super(message, 403);
+ }
+}
+
+/**
+ * Not found error class
+ */
+class NotFoundError extends AppError {
+ constructor(message = 'Resource not found') {
+ super(message, 404);
+ }
+}
+
+/**
+ * Conflict error class
+ */
+class ConflictError extends AppError {
+ constructor(message = 'Resource conflict') {
+ super(message, 409);
+ }
+}
+
+/**
+ * Rate limit error class
+ */
+class RateLimitError extends AppError {
+ constructor(message = 'Too many requests') {
+ super(message, 429);
+ }
+}
+
+/**
+ * Database error class
+ */
+class DatabaseError extends AppError {
+ constructor(message = 'Database error') {
+ super(message, 500);
+ }
+}
+
+/**
+ * External service error class
+ */
+class ExternalServiceError extends AppError {
+ constructor(message = 'External service error') {
+ super(message, 502);
+ }
+}
+
+/**
+ * Global error handler for Express
+ */
+const globalErrorHandler = (err, req, res, next) => {
+ // Default error values
+ let error = { ...err };
+ error.message = err.message;
+
+ // Log error
+ console.error('Error:', err);
+
+ // Mongoose bad ObjectId
+ if (err.name === 'CastError') {
+ const message = 'Resource not found';
+ error = new NotFoundError(message);
+ }
+
+ // Mongoose duplicate key
+ if (err.code === 11000) {
+ const value = err.errmsg?.match(/(["'])(\\?.)*?\1/)?.[0] ?? 'unknown';
+ const message = `Duplicate field value: ${value}. Please use another value`;
+ error = new ConflictError(message);
+ }
+
+ // Mongoose validation error
+ if (err.name === 'ValidationError') {
+ const errors = Object.values(err.errors).map(val => val.message);
+ error = new ValidationError('Validation failed', errors);
+ }
+
+ // JWT errors
+ if (err.name === 'JsonWebTokenError') {
+ error = new AuthenticationError('Invalid token');
+ }
+
+ if (err.name === 'TokenExpiredError') {
+ error = new AuthenticationError('Token expired');
+ }
+
+ // Send error response
+ res.status(error.statusCode || 500).json({
+ success: false,
+ error: {
+ message: error.message,
+ ...(process.env.NODE_ENV === 'development' && { stack: error.stack }),
+ ...(error.errors && { errors: error.errors }),
+ },
+ });
+};
+
+/**
+ * Async error handler wrapper
+ */
+const asyncHandler = (fn) => {
+ return (req, res, next) => {
+ Promise.resolve(fn(req, res, next)).catch(next);
+ };
+};
+
+/**
+ * Create error response
+ */
+const createErrorResponse = (message, statusCode = 500, errors = null) => {
+ const response = {
+ success: false,
+ error: {
+ message,
+ statusCode,
+ },
+ };
+
+ if (errors) {
+ response.error.errors = errors;
+ }
+
+ return response;
+};
+
+/**
+ * Create success response
+ */
+const createSuccessResponse = (data, message = 'Success') => {
+ return {
+ success: true,
+ message,
+ data,
+ };
+};
+
+/**
+ * Handle 404 for unknown routes
+ */
+const handleNotFound = (req, res, next) => {
+ const error = new NotFoundError(`Route ${req.originalUrl} not found`);
+ next(error);
+};
+
+module.exports = {
+ AppError,
+ ValidationError,
+ AuthenticationError,
+ AuthorizationError,
+ NotFoundError,
+ ConflictError,
+ RateLimitError,
+ DatabaseError,
+ ExternalServiceError,
+ globalErrorHandler,
+ asyncHandler,
+ createErrorResponse,
+ createSuccessResponse,
+ handleNotFound,
+};
\ No newline at end of file
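
The `asyncHandler` wrapper defined in `errors.js` above is what lets the route files skip try/catch: any rejected promise is routed into Express's `next()`. A minimal standalone demonstration (the `failingRoute` handler and the plain-object `next` stand-in are illustrative):

```javascript
// Same wrapper as in errors.js: rejected promises are forwarded to next()
const asyncHandler = (fn) => (req, res, next) => {
  Promise.resolve(fn(req, res, next)).catch(next);
};

// A route handler that throws asynchronously, as userService calls might
const failingRoute = asyncHandler(async () => {
  throw new Error('database unavailable');
});

// Stand-in for Express's next(): just report what it received
failingRoute({}, {}, (err) => {
  console.log('next() received:', err.message);
});
// next() received: database unavailable
```

Without the wrapper, an async rejection would escape the route handler entirely and never reach `globalErrorHandler`.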
diff --git a/test/sample-projects/javascript/user-management/src/utils/logger.js b/test/sample-projects/javascript/user-management/src/utils/logger.js
new file mode 100644
index 0000000..13c4f24
--- /dev/null
+++ b/test/sample-projects/javascript/user-management/src/utils/logger.js
@@ -0,0 +1,79 @@
+const winston = require('winston');
+
+// Define log levels
+const levels = {
+ error: 0,
+ warn: 1,
+ info: 2,
+ http: 3,
+ debug: 4,
+};
+
+// Define colors for each level
+const colors = {
+ error: 'red',
+ warn: 'yellow',
+ info: 'green',
+ http: 'magenta',
+ debug: 'white',
+};
+
+// Tell winston about the colors
+winston.addColors(colors);
+
+// Format function
+const format = winston.format.combine(
+ winston.format.timestamp({ format: 'YYYY-MM-DD HH:mm:ss:ms' }),
+ winston.format.colorize({ all: true }),
+ winston.format.printf(
+ (info) => `${info.timestamp} ${info.level}: ${info.message}`
+ )
+);
+
+// Define which transports the logger must use
+const transports = [
+ // Console transport
+ new winston.transports.Console({
+ format: winston.format.combine(
+ winston.format.colorize(),
+ winston.format.simple()
+ ),
+ }),
+
+ // File transport for errors
+ new winston.transports.File({
+ filename: 'logs/error.log',
+ level: 'error',
+ format: winston.format.combine(
+ winston.format.timestamp(),
+ winston.format.json()
+ ),
+ }),
+
+ // File transport for all logs
+ new winston.transports.File({
+ filename: 'logs/combined.log',
+ format: winston.format.combine(
+ winston.format.timestamp(),
+ winston.format.json()
+ ),
+ }),
+];
+
+// Create the logger
+const logger = winston.createLogger({
+ level: process.env.LOG_LEVEL || 'info',
+ levels,
+ format,
+ transports,
+});
+
+// Create a stream object with a 'write' function that will be used by Morgan
+logger.stream = {
+ write: (message) => {
+ // Remove the trailing newline
+ logger.http(message.trim());
+ },
+};
+
+module.exports = logger;
\ No newline at end of file
diff --git a/test/sample-projects/objective-c/Person.h b/test/sample-projects/objective-c/Person.h
new file mode 100644
index 0000000..368bc37
--- /dev/null
+++ b/test/sample-projects/objective-c/Person.h
@@ -0,0 +1,14 @@
+#import <Foundation/Foundation.h>
+
+@interface Person : NSObject
+
+@property (nonatomic, strong) NSString *name;
+@property (nonatomic, assign) NSInteger age;
+@property (nonatomic, strong) NSString *email;
+
+- (instancetype)initWithName:(NSString *)name age:(NSInteger)age;
+- (void)sayHello;
+- (void)updateEmail:(NSString *)email;
++ (Person *)createDefaultPerson;
+
+@end
\ No newline at end of file
diff --git a/test/sample-projects/objective-c/Person.m b/test/sample-projects/objective-c/Person.m
new file mode 100644
index 0000000..78a744a
--- /dev/null
+++ b/test/sample-projects/objective-c/Person.m
@@ -0,0 +1,27 @@
+#import "Person.h"
+
+@implementation Person
+
+- (instancetype)initWithName:(NSString *)name age:(NSInteger)age {
+ self = [super init];
+ if (self) {
+ _name = name;
+ _age = age;
+ }
+ return self;
+}
+
+- (void)sayHello {
+ NSLog(@"Hello, my name is %@", self.name);
+}
+
+- (void)updateEmail:(NSString *)email {
+ self.email = email;
+ NSLog(@"Email updated to: %@", email);
+}
+
++ (Person *)createDefaultPerson {
+ return [[Person alloc] initWithName:@"John Doe" age:30];
+}
+
+@end
\ No newline at end of file
diff --git a/test/sample-projects/objective-c/UserManager.h b/test/sample-projects/objective-c/UserManager.h
new file mode 100644
index 0000000..d08c4d8
--- /dev/null
+++ b/test/sample-projects/objective-c/UserManager.h
@@ -0,0 +1,14 @@
+#import <Foundation/Foundation.h>
+#import "Person.h"
+
+@interface UserManager : NSObject
+
+@property (nonatomic, strong) NSMutableArray *users;
+
++ (UserManager *)sharedManager;
+- (void)addUser:(Person *)user;
+- (Person *)findUserByName:(NSString *)name;
+- (void)removeUser:(Person *)user;
+- (NSInteger)userCount;
+
+@end
\ No newline at end of file
diff --git a/test/sample-projects/objective-c/UserManager.m b/test/sample-projects/objective-c/UserManager.m
new file mode 100644
index 0000000..1157d7c
--- /dev/null
+++ b/test/sample-projects/objective-c/UserManager.m
@@ -0,0 +1,42 @@
+#import "UserManager.h"
+
+@implementation UserManager
+
++ (UserManager *)sharedManager {
+ static UserManager *sharedInstance = nil;
+ static dispatch_once_t onceToken;
+ dispatch_once(&onceToken, ^{
+ sharedInstance = [[UserManager alloc] init];
+ sharedInstance.users = [[NSMutableArray alloc] init];
+ });
+ return sharedInstance;
+}
+
+- (void)addUser:(Person *)user {
+ if (user) {
+ [self.users addObject:user];
+ NSLog(@"Added user: %@", user.name);
+ }
+}
+
+- (Person *)findUserByName:(NSString *)name {
+ for (Person *user in self.users) {
+ if ([user.name isEqualToString:name]) {
+ return user;
+ }
+ }
+ return nil;
+}
+
+- (void)removeUser:(Person *)user {
+ if (user) {
+ [self.users removeObject:user];
+ NSLog(@"Removed user: %@", user.name);
+ }
+}
+
+- (NSInteger)userCount {
+ return self.users.count;
+}
+
+@end
\ No newline at end of file
diff --git a/test/sample-projects/objective-c/main.m b/test/sample-projects/objective-c/main.m
new file mode 100644
index 0000000..4415d06
--- /dev/null
+++ b/test/sample-projects/objective-c/main.m
@@ -0,0 +1,30 @@
+#import <Foundation/Foundation.h>
+#import "Person.h"
+#import "UserManager.h"
+
+int main(int argc, const char * argv[]) {
+ @autoreleasepool {
+ // Create some users
+ Person *alice = [[Person alloc] initWithName:@"Alice" age:25];
+ Person *bob = [[Person alloc] initWithName:@"Bob" age:30];
+ Person *charlie = [Person createDefaultPerson];
+
+ // Get shared manager
+ UserManager *manager = [UserManager sharedManager];
+
+ // Add users
+ [manager addUser:alice];
+ [manager addUser:bob];
+ [manager addUser:charlie];
+
+ // Find user
+ Person *found = [manager findUserByName:@"Alice"];
+ if (found) {
+ [found sayHello];
+ [found updateEmail:@"alice@example.com"];
+ }
+
+ NSLog(@"Total users: %ld", (long)[manager userCount]);
+ }
+ return 0;
+}
\ No newline at end of file
diff --git a/test/sample-projects/python/README.md b/test/sample-projects/python/README.md
new file mode 100644
index 0000000..f7ac984
--- /dev/null
+++ b/test/sample-projects/python/README.md
@@ -0,0 +1,144 @@
+# User Management System (Python)
+
+A comprehensive user management system built in Python for testing Code Index MCP's analysis capabilities.
+
+## Features
+
+- **User Management**: Create, update, delete, and search users
+- **Authentication**: Password-based authentication with session management
+- **Authorization**: Role-based access control (Admin, User, Guest)
+- **Data Validation**: Comprehensive input validation and sanitization
+- **Export/Import**: JSON and CSV export capabilities
+- **CLI Interface**: Command-line interface for system management
+
+## Project Structure
+
+```
+user_management/
+├── models/
+│ ├── __init__.py
+│ ├── person.py # Basic Person model
+│ └── user.py # User model with auth features
+├── services/
+│ ├── __init__.py
+│ ├── user_manager.py # User management service
+│ └── auth_service.py # Authentication service
+├── utils/
+│ ├── __init__.py
+│ ├── validators.py # Input validation utilities
+│ ├── exceptions.py # Custom exception classes
+│ └── helpers.py # Helper functions
+├── tests/ # Test directory (empty for now)
+├── __init__.py
+└── cli.py # Command-line interface
+```
+
+## Installation
+
+1. Install dependencies:
+```bash
+pip install -r requirements.txt
+```
+
+2. Install the package in development mode:
+```bash
+pip install -e .
+```
+
+## Usage
+
+### Running the Demo
+
+```bash
+python main.py
+```
+
+### Using the CLI
+
+```bash
+# Create a new user
+user-cli create-user --name "John Doe" --username "john" --age 30 --email "john@example.com"
+
+# List all users
+user-cli list-users
+
+# Get user information
+user-cli get-user john
+
+# Update user
+user-cli update-user john --age 31
+
+# Delete user
+user-cli delete-user john
+
+# Search users
+user-cli search "john"
+
+# Show statistics
+user-cli stats
+
+# Export users
+user-cli export --format json --output users.json
+```
+
+### Programmatic Usage
+
+```python
+from user_management import UserManager, UserRole
+from user_management.services.auth_service import AuthService
+
+# Create user manager
+user_manager = UserManager()
+
+# Create a user
+user = user_manager.create_user(
+ name="Jane Smith",
+ username="jane",
+ age=28,
+ email="jane@example.com",
+ role=UserRole.USER
+)
+
+# Set password
+user.set_password("SecurePass123!")
+
+# Authenticate
+auth_service = AuthService(user_manager)
+authenticated_user = auth_service.authenticate("jane", "SecurePass123!")
+
+# Create session
+session_id = auth_service.create_session(authenticated_user)
+```
+
+## Testing Features
+
+This project tests the following Python language features:
+
+- **Classes and Inheritance**: Person and User classes with inheritance
+- **Dataclasses**: Modern Python data structures
+- **Enums**: Role and status enumerations
+- **Type Hints**: Comprehensive type annotations
+- **Properties**: Getter/setter methods
+- **Class Methods**: Factory methods and utilities
+- **Static Methods**: Utility functions
+- **Context Managers**: Resource management
+- **Decorators**: Method decorators
+- **Generators**: Iteration patterns
+- **Exception Handling**: Custom exceptions
+- **Package Structure**: Modules and imports
+- **CLI Development**: Click framework integration
+- **JSON/CSV Processing**: Data serialization
+- **Regular Expressions**: Input validation
+- **Datetime Handling**: Time-based operations
+- **Cryptography**: Password hashing
+- **File I/O**: Data persistence
+
+## Dependencies
+
+- **click**: Command-line interface framework
+- **pytest**: Testing framework
+- **pydantic**: Data validation (optional)
+
+## License
+
+MIT License - This is a sample project for testing purposes.
\ No newline at end of file
diff --git a/test/sample-projects/python/main.py b/test/sample-projects/python/main.py
new file mode 100644
index 0000000..947b903
--- /dev/null
+++ b/test/sample-projects/python/main.py
@@ -0,0 +1,146 @@
+#!/usr/bin/env python3
+"""
+Main entry point for the user management system demo.
+"""
+
+from user_management import UserManager, Person, User
+from user_management.services.auth_service import AuthService
+from user_management.models.user import UserRole
+from user_management.utils.exceptions import UserNotFoundError, DuplicateUserError
+
+
+def main():
+ """Demonstrate the user management system."""
+ print("=" * 50)
+ print("User Management System Demo")
+ print("=" * 50)
+
+ # Create user manager and auth service
+ user_manager = UserManager()
+ auth_service = AuthService(user_manager)
+
+ # Create some sample users
+ print("\n1. Creating sample users...")
+
+ try:
+ # Create admin user
+ admin = user_manager.create_user(
+ name="Alice Johnson",
+ username="alice_admin",
+ age=30,
+ email="alice@example.com",
+ role=UserRole.ADMIN
+ )
+ admin.set_password("AdminPass123!")
+ admin.add_permission("user_management")
+ admin.add_permission("system_admin")
+
+ # Create regular users
+ user1 = user_manager.create_user(
+ name="Bob Smith",
+ username="bob_user",
+ age=25,
+ email="bob@example.com"
+ )
+ user1.set_password("UserPass123!")
+
+ user2 = user_manager.create_user(
+ name="Charlie Brown",
+ username="charlie",
+ age=35,
+ email="charlie@example.com"
+ )
+ user2.set_password("CharliePass123!")
+
+ print(f"✓ Created {user_manager.get_user_count()} users")
+
+ except DuplicateUserError as e:
+ print(f"✗ Error creating users: {e}")
+
+ # Display all users
+ print("\n2. Listing all users...")
+ users = user_manager.get_all_users()
+ for user in users:
+ print(f" • {user.username} ({user.name}) - {user.role.value}")
+
+ # Test authentication
+ print("\n3. Testing authentication...")
+ try:
+ authenticated_user = auth_service.authenticate("alice_admin", "AdminPass123!")
+ print(f"✓ Authentication successful for {authenticated_user.username}")
+
+ # Create session
+ session_id = auth_service.create_session(authenticated_user)
+ print(f"✓ Session created: {session_id[:16]}...")
+
+ except Exception as e:
+ print(f"✗ Authentication failed: {e}")
+
+ # Test user search
+ print("\n4. Testing user search...")
+ search_results = user_manager.search_users("alice")
+ print(f"Search results for 'alice': {len(search_results)} users found")
+ for user in search_results:
+ print(f" • {user.username} ({user.name})")
+
+ # Test filtering
+ print("\n5. Testing user filtering...")
+ older_users = user_manager.get_users_older_than(30)
+ print(f"Users older than 30: {len(older_users)} users")
+ for user in older_users:
+ print(f" • {user.username} ({user.name}) - age {user.age}")
+
+ # Test user updates
+ print("\n6. Testing user updates...")
+ try:
+ updated_user = user_manager.update_user("bob_user", age=26)
+ print(f"✓ Updated {updated_user.username}'s age to {updated_user.age}")
+ except UserNotFoundError as e:
+ print(f"✗ Update failed: {e}")
+
+ # Display statistics
+ print("\n7. User statistics...")
+ stats = user_manager.get_user_stats()
+ for key, value in stats.items():
+ print(f" {key.replace('_', ' ').title()}: {value}")
+
+ # Test export functionality
+ print("\n8. Testing export functionality...")
+ try:
+ json_export = user_manager.export_users('json')
+ print(f"✓ JSON export: {len(json_export)} characters")
+
+ csv_export = user_manager.export_users('csv')
+ print(f"✓ CSV export: {len(csv_export.splitlines())} lines")
+ except Exception as e:
+ print(f"✗ Export failed: {e}")
+
+ # Test password change
+ print("\n9. Testing password change...")
+ try:
+ auth_service.change_password("bob_user", "UserPass123!", "NewUserPass123!")
+ print("✓ Password changed successfully")
+
+ # Test with new password
+ auth_service.authenticate("bob_user", "NewUserPass123!")
+ print("✓ Authentication with new password successful")
+
+ except Exception as e:
+ print(f"✗ Password change failed: {e}")
+
+ # Test session management
+ print("\n10. Testing session management...")
+ session_stats = auth_service.get_session_stats()
+ print(f"Active sessions: {session_stats['total_sessions']}")
+
+ # Cleanup expired sessions
+ expired_count = auth_service.cleanup_expired_sessions()
+ print(f"Cleaned up {expired_count} expired sessions")
+
+ print("\n" + "=" * 50)
+ print("Demo completed successfully!")
+ print("=" * 50)
+
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/test/sample-projects/python/requirements.txt b/test/sample-projects/python/requirements.txt
new file mode 100644
index 0000000..4d9c502
--- /dev/null
+++ b/test/sample-projects/python/requirements.txt
@@ -0,0 +1,3 @@
+pytest>=7.0.0
+pydantic>=1.10.0
+click>=8.0.0
\ No newline at end of file
diff --git a/test/sample-projects/python/setup.py b/test/sample-projects/python/setup.py
new file mode 100644
index 0000000..35afb52
--- /dev/null
+++ b/test/sample-projects/python/setup.py
@@ -0,0 +1,37 @@
+from setuptools import setup, find_packages
+
+setup(
+ name="user-management",
+ version="0.1.0",
+ description="A sample user management system for testing Code Index MCP",
+ author="Test Author",
+ author_email="test@example.com",
+ packages=find_packages(),
+ install_requires=[
+ "pydantic>=1.10.0",
+ "click>=8.0.0",
+ ],
+ extras_require={
+ "dev": [
+ "pytest>=7.0.0",
+ "black>=22.0.0",
+ "flake8>=4.0.0",
+ ]
+ },
+ entry_points={
+ "console_scripts": [
+ "user-cli=user_management.cli:main",
+ ],
+ },
+ python_requires=">=3.8",
+ classifiers=[
+ "Development Status :: 3 - Alpha",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: MIT License",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ ],
+)
\ No newline at end of file
diff --git a/test/sample-projects/python/user_management/__init__.py b/test/sample-projects/python/user_management/__init__.py
new file mode 100644
index 0000000..e385943
--- /dev/null
+++ b/test/sample-projects/python/user_management/__init__.py
@@ -0,0 +1,14 @@
+"""
+User Management System
+
+A sample application for testing Code Index MCP's Python analysis capabilities.
+"""
+
+__version__ = "0.1.0"
+__author__ = "Test Author"
+
+from .models.person import Person
+from .models.user import User
+from .services.user_manager import UserManager
+
+__all__ = ["Person", "User", "UserManager"]
\ No newline at end of file
diff --git a/test/sample-projects/python/user_management/cli.py b/test/sample-projects/python/user_management/cli.py
new file mode 100644
index 0000000..53a0db9
--- /dev/null
+++ b/test/sample-projects/python/user_management/cli.py
@@ -0,0 +1,246 @@
+"""
+Command line interface for user management system.
+"""
+
+import click
+import json
+from typing import Optional
+
+from .services.user_manager import UserManager
+from .services.auth_service import AuthService
+from .models.user import UserRole, UserStatus
+from .utils.exceptions import UserNotFoundError, DuplicateUserError
+
+
+@click.group()
+@click.pass_context
+def cli(ctx):
+ """User Management System CLI."""
+ ctx.ensure_object(dict)
+ ctx.obj['user_manager'] = UserManager()
+ ctx.obj['auth_service'] = AuthService(ctx.obj['user_manager'])
+
+
+@cli.command()
+@click.option('--name', required=True, help='Full name of the user')
+@click.option('--username', required=True, help='Username for the user')
+@click.option('--age', required=True, type=int, help='Age of the user')
+@click.option('--email', help='Email address of the user')
+@click.option('--role', type=click.Choice(['user', 'admin', 'guest']), default='user', help='Role of the user')
+@click.option('--password', prompt=True, hide_input=True, help='Password for the user')
+@click.pass_context
+def create_user(ctx, name: str, username: str, age: int, email: Optional[str], role: str, password: str):
+ """Create a new user."""
+ try:
+ user_manager = ctx.obj['user_manager']
+ user_role = UserRole(role)
+
+ user = user_manager.create_user(
+ name=name,
+ username=username,
+ age=age,
+ email=email,
+ role=user_role
+ )
+
+ user.set_password(password)
+
+ click.echo(f"User '{username}' created successfully!")
+ click.echo(f"Name: {user.name}")
+ click.echo(f"Age: {user.age}")
+ click.echo(f"Email: {user.email or 'Not provided'}")
+ click.echo(f"Role: {user.role.value}")
+
+ except DuplicateUserError as e:
+ click.echo(f"Error: {e}", err=True)
+ except ValueError as e:
+ click.echo(f"Error: {e}", err=True)
+
+
+@cli.command()
+@click.argument('username')
+@click.pass_context
+def get_user(ctx, username: str):
+ """Get information about a user."""
+ try:
+ user_manager = ctx.obj['user_manager']
+ user = user_manager.get_user(username)
+
+ click.echo(f"Username: {user.username}")
+ click.echo(f"Name: {user.name}")
+ click.echo(f"Age: {user.age}")
+ click.echo(f"Email: {user.email or 'Not provided'}")
+ click.echo(f"Role: {user.role.value}")
+ click.echo(f"Status: {user.status.value}")
+ click.echo(f"Created: {user.created_at}")
+ click.echo(f"Last Login: {user.last_login or 'Never'}")
+ click.echo(f"Permissions: {', '.join(user.permissions) if user.permissions else 'None'}")
+
+ except UserNotFoundError as e:
+ click.echo(f"Error: {e}", err=True)
+
+
+@cli.command()
+@click.pass_context
+def list_users(ctx):
+ """List all users."""
+ user_manager = ctx.obj['user_manager']
+ users = user_manager.get_all_users()
+
+ if not users:
+ click.echo("No users found.")
+ return
+
+ click.echo(f"Found {len(users)} users:")
+ click.echo("-" * 60)
+
+ for user in users:
+ status_indicator = "✓" if user.is_active() else "✗"
+ click.echo(f"{status_indicator} {user.username:<15} {user.name:<20} {user.role.value:<10} {user.email or 'No email'}")
+
+
+@cli.command()
+@click.argument('username')
+@click.option('--name', help='New name for the user')
+@click.option('--age', type=int, help='New age for the user')
+@click.option('--email', help='New email for the user')
+@click.option('--role', type=click.Choice(['user', 'admin', 'guest']), help='New role for the user')
+@click.pass_context
+def update_user(ctx, username: str, name: Optional[str], age: Optional[int],
+ email: Optional[str], role: Optional[str]):
+ """Update user information."""
+ try:
+ user_manager = ctx.obj['user_manager']
+
+ updates = {}
+ if name:
+ updates['name'] = name
+ if age is not None:
+ updates['age'] = age
+ if email:
+ updates['email'] = email
+ if role:
+ updates['role'] = UserRole(role)
+
+ if not updates:
+ click.echo("No updates provided.")
+ return
+
+        user_manager.update_user(username, **updates)
+ click.echo(f"User '{username}' updated successfully!")
+
+ except UserNotFoundError as e:
+ click.echo(f"Error: {e}", err=True)
+ except ValueError as e:
+ click.echo(f"Error: {e}", err=True)
+
+
+@cli.command()
+@click.argument('username')
+@click.confirmation_option(prompt='Are you sure you want to delete this user?')
+@click.pass_context
+def delete_user(ctx, username: str):
+ """Delete a user."""
+ try:
+ user_manager = ctx.obj['user_manager']
+ user_manager.delete_user(username)
+ click.echo(f"User '{username}' deleted successfully!")
+
+ except UserNotFoundError as e:
+ click.echo(f"Error: {e}", err=True)
+
+
+@cli.command()
+@click.argument('username')
+@click.option('--password', prompt=True, hide_input=True, help='Password for authentication')
+@click.pass_context
+def authenticate(ctx, username: str, password: str):
+ """Authenticate a user."""
+ try:
+ auth_service = ctx.obj['auth_service']
+ user = auth_service.authenticate(username, password)
+
+        click.echo("Authentication successful!")
+ click.echo(f"Welcome, {user.name}!")
+
+ # Create a session
+ session_id = auth_service.create_session(user)
+ click.echo(f"Session created: {session_id}")
+
+ except Exception as e:
+ click.echo(f"Authentication failed: {e}", err=True)
+
+
+@cli.command()
+@click.pass_context
+def stats(ctx):
+ """Show user statistics."""
+ user_manager = ctx.obj['user_manager']
+ auth_service = ctx.obj['auth_service']
+
+ user_stats = user_manager.get_user_stats()
+ session_stats = auth_service.get_session_stats()
+
+ click.echo("User Statistics:")
+ click.echo(f" Total Users: {user_stats['total']}")
+ click.echo(f" Active Users: {user_stats['active']}")
+ click.echo(f" Admin Users: {user_stats['admin']}")
+ click.echo(f" Regular Users: {user_stats['user']}")
+ click.echo(f" Guest Users: {user_stats['guest']}")
+ click.echo(f" Users with Email: {user_stats['with_email']}")
+
+ click.echo("\nSession Statistics:")
+ click.echo(f" Active Sessions: {session_stats['total_sessions']}")
+ click.echo(f" Recent Sessions: {session_stats['recent_sessions']}")
+ click.echo(f" Old Sessions: {session_stats['old_sessions']}")
+
+
+@cli.command()
+@click.option('--format', type=click.Choice(['json', 'csv']), default='json', help='Export format')
+@click.option('--output', help='Output file path')
+@click.pass_context
+def export(ctx, format: str, output: Optional[str]):
+ """Export users to file."""
+ user_manager = ctx.obj['user_manager']
+
+ try:
+ data = user_manager.export_users(format)
+
+ if output:
+ with open(output, 'w') as f:
+ f.write(data)
+ click.echo(f"Users exported to {output}")
+ else:
+ click.echo(data)
+
+ except Exception as e:
+ click.echo(f"Export failed: {e}", err=True)
+
+
+@cli.command()
+@click.argument('query')
+@click.pass_context
+def search(ctx, query: str):
+ """Search users by name or username."""
+ user_manager = ctx.obj['user_manager']
+ users = user_manager.search_users(query)
+
+ if not users:
+ click.echo(f"No users found matching '{query}'")
+ return
+
+ click.echo(f"Found {len(users)} users matching '{query}':")
+ click.echo("-" * 60)
+
+ for user in users:
+ status_indicator = "✓" if user.is_active() else "✗"
+ click.echo(f"{status_indicator} {user.username:<15} {user.name:<20} {user.role.value:<10}")
+
+
+def main():
+ """Main entry point for the CLI."""
+ cli()
+
+
+if __name__ == '__main__':
+ main()
\ No newline at end of file
diff --git a/test/sample-projects/python/user_management/models/__init__.py b/test/sample-projects/python/user_management/models/__init__.py
new file mode 100644
index 0000000..3b63ae7
--- /dev/null
+++ b/test/sample-projects/python/user_management/models/__init__.py
@@ -0,0 +1,6 @@
+"""Models package for user management system."""
+
+from .person import Person
+from .user import User
+
+__all__ = ["Person", "User"]
\ No newline at end of file
diff --git a/test/sample-projects/python/user_management/models/person.py b/test/sample-projects/python/user_management/models/person.py
new file mode 100644
index 0000000..eba75da
--- /dev/null
+++ b/test/sample-projects/python/user_management/models/person.py
@@ -0,0 +1,91 @@
+"""
+Person model for the user management system.
+"""
+
+from typing import Optional, Dict, Any
+from dataclasses import dataclass, field
+from datetime import datetime
+import json
+
+
+@dataclass
+class Person:
+ """Represents a person with basic information."""
+
+ name: str
+ age: int
+ email: Optional[str] = None
+ created_at: datetime = field(default_factory=datetime.now)
+ metadata: Dict[str, Any] = field(default_factory=dict)
+
+ def __post_init__(self):
+ """Validate data after initialization."""
+ if self.age < 0:
+ raise ValueError("Age cannot be negative")
+ if self.age > 150:
+ raise ValueError("Age cannot be greater than 150")
+ if not self.name.strip():
+ raise ValueError("Name cannot be empty")
+
+ def greet(self) -> str:
+ """Return a greeting message."""
+ return f"Hello, I'm {self.name} and I'm {self.age} years old."
+
+ def has_email(self) -> bool:
+ """Check if person has an email address."""
+ return self.email is not None and self.email.strip() != ""
+
+ def update_email(self, email: str) -> None:
+ """Update the person's email address."""
+ if not email.strip():
+ raise ValueError("Email cannot be empty")
+ self.email = email.strip()
+
+ def add_metadata(self, key: str, value: Any) -> None:
+ """Add metadata to the person."""
+ self.metadata[key] = value
+
+ def get_metadata(self, key: str, default: Any = None) -> Any:
+ """Get metadata value by key."""
+ return self.metadata.get(key, default)
+
+ def to_dict(self) -> Dict[str, Any]:
+ """Convert person to dictionary."""
+ return {
+ "name": self.name,
+ "age": self.age,
+ "email": self.email,
+ "created_at": self.created_at.isoformat(),
+ "metadata": self.metadata
+ }
+
+ @classmethod
+ def from_dict(cls, data: Dict[str, Any]) -> 'Person':
+ """Create a Person from a dictionary."""
+ created_at = datetime.fromisoformat(data.get("created_at", datetime.now().isoformat()))
+ return cls(
+ name=data["name"],
+ age=data["age"],
+ email=data.get("email"),
+ created_at=created_at,
+ metadata=data.get("metadata", {})
+ )
+
+ def to_json(self) -> str:
+ """Convert person to JSON string."""
+ return json.dumps(self.to_dict(), indent=2)
+
+ @classmethod
+ def from_json(cls, json_str: str) -> 'Person':
+ """Create a Person from JSON string."""
+ data = json.loads(json_str)
+ return cls.from_dict(data)
+
+ def __str__(self) -> str:
+ """String representation of person."""
+ email_str = f", email: {self.email}" if self.has_email() else ""
+ return f"Person(name: {self.name}, age: {self.age}{email_str})"
+
+ def __repr__(self) -> str:
+ """Developer representation of person."""
+ return f"Person(name='{self.name}', age={self.age}, email='{self.email}')"
\ No newline at end of file
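The `to_dict`/`from_dict` pair above relies on ISO-8601 strings to make `datetime` survive a JSON round trip. A minimal standalone sketch of that pattern, using only the standard library (the `Record` class here is illustrative, not part of the sample project):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict
import json

@dataclass
class Record:
    name: str
    created_at: datetime = field(default_factory=datetime.now)

    def to_dict(self) -> Dict[str, Any]:
        # datetime is not JSON-serializable, so store it as an ISO-8601 string
        return {"name": self.name, "created_at": self.created_at.isoformat()}

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "Record":
        return cls(name=data["name"],
                   created_at=datetime.fromisoformat(data["created_at"]))

r = Record(name="alice")
restored = Record.from_dict(json.loads(json.dumps(r.to_dict())))
assert restored == r  # fromisoformat round-trips microseconds exactly
```

Because `isoformat()` keeps microsecond precision and `fromisoformat()` parses it back, dataclass equality holds after the round trip.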
diff --git a/test/sample-projects/python/user_management/models/user.py b/test/sample-projects/python/user_management/models/user.py
new file mode 100644
index 0000000..83e1f13
--- /dev/null
+++ b/test/sample-projects/python/user_management/models/user.py
@@ -0,0 +1,180 @@
+"""
+User model extending Person for the user management system.
+"""
+
+from typing import Optional, Dict, Any, Set
+from dataclasses import dataclass, field
+from datetime import datetime
+from enum import Enum
+import hashlib
+import secrets
+
+from .person import Person
+
+
+class UserRole(Enum):
+ """User role enumeration."""
+ ADMIN = "admin"
+ USER = "user"
+ GUEST = "guest"
+
+
+class UserStatus(Enum):
+ """User status enumeration."""
+ ACTIVE = "active"
+ INACTIVE = "inactive"
+ SUSPENDED = "suspended"
+ DELETED = "deleted"
+
+
+@dataclass
+class User(Person):
+ """User class extending Person with authentication and permissions."""
+
+ username: str = ""
+ password_hash: str = ""
+ role: UserRole = UserRole.USER
+ status: UserStatus = UserStatus.ACTIVE
+ last_login: Optional[datetime] = None
+ login_attempts: int = 0
+ permissions: Set[str] = field(default_factory=set)
+
+ def __post_init__(self):
+ """Validate user data after initialization."""
+ super().__post_init__()
+ if not self.username.strip():
+ raise ValueError("Username cannot be empty")
+ if len(self.username) < 3:
+ raise ValueError("Username must be at least 3 characters long")
+
+ def set_password(self, password: str) -> None:
+ """Set user password with hashing."""
+ if len(password) < 8:
+ raise ValueError("Password must be at least 8 characters long")
+
+ # Simple password hashing (in real app, use bcrypt)
+ salt = secrets.token_hex(16)
+ password_hash = hashlib.pbkdf2_hmac('sha256',
+ password.encode('utf-8'),
+ salt.encode('utf-8'),
+ 100000)
+ self.password_hash = salt + password_hash.hex()
+
+ def verify_password(self, password: str) -> bool:
+ """Verify password against stored hash."""
+ if not self.password_hash:
+ return False
+
+ try:
+ salt = self.password_hash[:32]
+ stored_hash = self.password_hash[32:]
+
+ password_hash = hashlib.pbkdf2_hmac('sha256',
+ password.encode('utf-8'),
+ salt.encode('utf-8'),
+ 100000)
+
+ return password_hash.hex() == stored_hash
+ except Exception:
+ return False
+
+ def add_permission(self, permission: str) -> None:
+ """Add a permission to the user."""
+ self.permissions.add(permission)
+
+ def remove_permission(self, permission: str) -> None:
+ """Remove a permission from the user."""
+ self.permissions.discard(permission)
+
+ def has_permission(self, permission: str) -> bool:
+ """Check if user has a specific permission."""
+ return permission in self.permissions
+
+ def is_admin(self) -> bool:
+ """Check if user is an admin."""
+ return self.role == UserRole.ADMIN
+
+ def is_active(self) -> bool:
+ """Check if user is active."""
+ return self.status == UserStatus.ACTIVE
+
+ def login(self) -> bool:
+ """Record a successful login."""
+ if not self.is_active():
+ return False
+
+ self.last_login = datetime.now()
+ self.login_attempts = 0
+ return True
+
+ def failed_login_attempt(self) -> None:
+ """Record a failed login attempt."""
+ self.login_attempts += 1
+ if self.login_attempts >= 5:
+ self.status = UserStatus.SUSPENDED
+
+ def activate(self) -> None:
+ """Activate the user account."""
+ self.status = UserStatus.ACTIVE
+ self.login_attempts = 0
+
+ def deactivate(self) -> None:
+ """Deactivate the user account."""
+ self.status = UserStatus.INACTIVE
+
+ def suspend(self) -> None:
+ """Suspend the user account."""
+ self.status = UserStatus.SUSPENDED
+
+ def delete(self) -> None:
+ """Mark the user as deleted."""
+ self.status = UserStatus.DELETED
+
+ def to_dict(self) -> Dict[str, Any]:
+ """Convert user to dictionary."""
+ data = super().to_dict()
+ data.update({
+ "username": self.username,
+ "role": self.role.value,
+ "status": self.status.value,
+ "last_login": self.last_login.isoformat() if self.last_login else None,
+ "login_attempts": self.login_attempts,
+ "permissions": list(self.permissions),
+ })
+ return data
+
+ @classmethod
+ def from_dict(cls, data: Dict[str, Any]) -> 'User':
+ """Create a User from a dictionary."""
+ person_data = {k: v for k, v in data.items()
+ if k in ["name", "age", "email", "created_at", "metadata"]}
+ person = Person.from_dict(person_data)
+
+ last_login = None
+ if data.get("last_login"):
+ last_login = datetime.fromisoformat(data["last_login"])
+
+ return cls(
+ name=person.name,
+ age=person.age,
+ email=person.email,
+ created_at=person.created_at,
+ metadata=person.metadata,
+ username=data["username"],
+ password_hash=data.get("password_hash", ""),
+ role=UserRole(data.get("role", UserRole.USER.value)),
+ status=UserStatus(data.get("status", UserStatus.ACTIVE.value)),
+ last_login=last_login,
+ login_attempts=data.get("login_attempts", 0),
+ permissions=set(data.get("permissions", []))
+ )
+
+ def __str__(self) -> str:
+ """String representation of user."""
+ return f"User(username: {self.username}, name: {self.name}, role: {self.role.value})"
+
+ def __repr__(self) -> str:
+ """Developer representation of user."""
+ return f"User(username='{self.username}', name='{self.name}', role={self.role})"
+
+# AUTO_REINDEX_MARKER: ci_auto_reindex_test_token
\ No newline at end of file
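The `set_password`/`verify_password` pair above stores a 32-character hex salt in front of the PBKDF2 digest. A standalone sketch of the same layout, with a constant-time comparison added (an improvement over the plain `==` in the sample code, not what it does as written):

```python
import hashlib
import secrets

def hash_password(password: str) -> str:
    salt = secrets.token_hex(16)          # 16 bytes -> 32 hex characters
    digest = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'),
                                 salt.encode('utf-8'), 100_000)
    return salt + digest.hex()            # salt prefix + 64-char hex digest

def verify_password(password: str, stored: str) -> bool:
    salt, expected = stored[:32], stored[32:]
    digest = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'),
                                 salt.encode('utf-8'), 100_000)
    # compare_digest avoids leaking how many characters matched via timing
    return secrets.compare_digest(digest.hex(), expected)

stored = hash_password("AdminPass123!")
assert verify_password("AdminPass123!", stored)
assert not verify_password("wrong-password", stored)
```

Prepending the salt keeps the record a single string, at the cost of hard-coding the 32-character slice on the read side.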
diff --git a/test/sample-projects/python/user_management/services/__init__.py b/test/sample-projects/python/user_management/services/__init__.py
new file mode 100644
index 0000000..6ebefbf
--- /dev/null
+++ b/test/sample-projects/python/user_management/services/__init__.py
@@ -0,0 +1,6 @@
+"""Services package for user management system."""
+
+from .user_manager import UserManager
+from .auth_service import AuthService
+
+__all__ = ["UserManager", "AuthService"]
\ No newline at end of file
diff --git a/test/sample-projects/python/user_management/services/auth_service.py b/test/sample-projects/python/user_management/services/auth_service.py
new file mode 100644
index 0000000..21d8d59
--- /dev/null
+++ b/test/sample-projects/python/user_management/services/auth_service.py
@@ -0,0 +1,185 @@
+"""
+Authentication service for user management system.
+"""
+
+from typing import Optional, Dict, Any, List
+from datetime import datetime, timedelta
+import secrets
+import hashlib
+
+from ..models.user import User, UserStatus
+from ..utils.exceptions import AuthenticationError, UserNotFoundError
+
+
+class AuthService:
+ """Service class for handling authentication operations."""
+
+ def __init__(self, user_manager):
+ """Initialize the authentication service with a user manager."""
+ self._user_manager = user_manager
+ self._active_sessions: Dict[str, Dict[str, Any]] = {}
+ self._session_timeout = timedelta(hours=24)
+
+    def authenticate(self, username: str, password: str) -> User:
+ """Authenticate a user with username and password."""
+ try:
+ user = self._user_manager.get_user(username)
+ except UserNotFoundError:
+ raise AuthenticationError("Invalid username or password")
+
+ if not user.is_active():
+ raise AuthenticationError("User account is not active")
+
+ if not user.verify_password(password):
+ user.failed_login_attempt()
+ raise AuthenticationError("Invalid username or password")
+
+ # Successful authentication
+ user.login()
+ return user
+
+ def create_session(self, user: User) -> str:
+ """Create a new session for the authenticated user."""
+ session_id = secrets.token_urlsafe(32)
+ session_data = {
+ 'user_id': user.username,
+ 'created_at': datetime.now(),
+ 'last_activity': datetime.now(),
+ 'ip_address': None, # Would be set in a real application
+ 'user_agent': None, # Would be set in a real application
+ }
+
+ self._active_sessions[session_id] = session_data
+ return session_id
+
+ def validate_session(self, session_id: str) -> Optional[User]:
+ """Validate a session and return the associated user."""
+ if session_id not in self._active_sessions:
+ return None
+
+ session_data = self._active_sessions[session_id]
+
+ # Check if session has expired
+ if datetime.now() - session_data['last_activity'] > self._session_timeout:
+ self.destroy_session(session_id)
+ return None
+
+ # Update last activity
+ session_data['last_activity'] = datetime.now()
+
+ try:
+ user = self._user_manager.get_user(session_data['user_id'])
+ if not user.is_active():
+ self.destroy_session(session_id)
+ return None
+ return user
+ except UserNotFoundError:
+ self.destroy_session(session_id)
+ return None
+
+ def destroy_session(self, session_id: str) -> bool:
+ """Destroy a session."""
+ if session_id in self._active_sessions:
+ del self._active_sessions[session_id]
+ return True
+ return False
+
+ def destroy_all_sessions(self, username: str) -> int:
+ """Destroy all sessions for a specific user."""
+ sessions_to_remove = []
+ for session_id, session_data in self._active_sessions.items():
+ if session_data['user_id'] == username:
+ sessions_to_remove.append(session_id)
+
+ for session_id in sessions_to_remove:
+ del self._active_sessions[session_id]
+
+ return len(sessions_to_remove)
+
+ def get_active_sessions(self, username: str) -> List[Dict[str, Any]]:
+ """Get all active sessions for a user."""
+ sessions = []
+ for session_id, session_data in self._active_sessions.items():
+ if session_data['user_id'] == username:
+ sessions.append({
+ 'session_id': session_id,
+ 'created_at': session_data['created_at'],
+ 'last_activity': session_data['last_activity'],
+ 'ip_address': session_data.get('ip_address'),
+ 'user_agent': session_data.get('user_agent'),
+ })
+ return sessions
+
+ def cleanup_expired_sessions(self) -> int:
+ """Remove expired sessions and return count of removed sessions."""
+ current_time = datetime.now()
+ expired_sessions = []
+
+ for session_id, session_data in self._active_sessions.items():
+ if current_time - session_data['last_activity'] > self._session_timeout:
+ expired_sessions.append(session_id)
+
+ for session_id in expired_sessions:
+ del self._active_sessions[session_id]
+
+ return len(expired_sessions)
+
+ def change_password(self, username: str, old_password: str, new_password: str) -> bool:
+ """Change a user's password."""
+ user = self._user_manager.get_user(username)
+
+ if not user.verify_password(old_password):
+ raise AuthenticationError("Current password is incorrect")
+
+ user.set_password(new_password)
+
+ # Destroy all existing sessions for security
+ self.destroy_all_sessions(username)
+
+ return True
+
+    def reset_password(self, username: str, new_password: Optional[str] = None) -> str:
+ """Reset a user's password (admin function)."""
+ user = self._user_manager.get_user(username)
+
+ # Generate a temporary password if none provided
+ if not new_password:
+ new_password = self._generate_temporary_password()
+
+ user.set_password(new_password)
+
+ # Destroy all existing sessions
+ self.destroy_all_sessions(username)
+
+ return new_password
+
+ def _generate_temporary_password(self) -> str:
+ """Generate a temporary password."""
+ return secrets.token_urlsafe(12)
+
+ def get_session_stats(self) -> Dict[str, Any]:
+ """Get statistics about active sessions."""
+ current_time = datetime.now()
+ session_count = len(self._active_sessions)
+
+ # Count sessions by age
+ recent_sessions = 0 # Last hour
+ old_sessions = 0 # Older than 1 hour
+
+ for session_data in self._active_sessions.values():
+ age = current_time - session_data['last_activity']
+ if age < timedelta(hours=1):
+ recent_sessions += 1
+ else:
+ old_sessions += 1
+
+ return {
+ 'total_sessions': session_count,
+ 'recent_sessions': recent_sessions,
+ 'old_sessions': old_sessions,
+ 'session_timeout_hours': self._session_timeout.total_seconds() / 3600,
+ }
+
+ def __str__(self) -> str:
+ """String representation of AuthService."""
+ return f"AuthService(active_sessions: {len(self._active_sessions)})"
\ No newline at end of file
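`validate_session` implements sliding expiration: each successful validation refreshes `last_activity`, and a session lapses only after 24 idle hours. A minimal sketch with an injectable clock (the explicit `now` parameter is a testing convenience not present in the original, which calls `datetime.now()` directly):

```python
from datetime import datetime, timedelta
from typing import Any, Dict, Optional
import secrets

SESSION_TIMEOUT = timedelta(hours=24)
sessions: Dict[str, Dict[str, Any]] = {}

def create_session(user_id: str, now: datetime) -> str:
    session_id = secrets.token_urlsafe(32)
    sessions[session_id] = {'user_id': user_id, 'last_activity': now}
    return session_id

def validate_session(session_id: str, now: datetime) -> Optional[str]:
    data = sessions.get(session_id)
    if data is None:
        return None
    if now - data['last_activity'] > SESSION_TIMEOUT:
        del sessions[session_id]          # expired: drop, don't refresh
        return None
    data['last_activity'] = now           # sliding expiration
    return data['user_id']

t0 = datetime(2024, 1, 1, 12, 0)
sid = create_session("alice", t0)
assert validate_session(sid, t0 + timedelta(hours=23)) == "alice"
assert validate_session(sid, t0 + timedelta(hours=48)) is None  # 25h idle
```

Note the second check fails even though only 48 hours have passed since creation: the refresh at hour 23 restarted the idle window, and the 25 idle hours that follow exceed it.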
diff --git a/test/sample-projects/python/user_management/services/user_manager.py b/test/sample-projects/python/user_management/services/user_manager.py
new file mode 100644
index 0000000..05ca4bc
--- /dev/null
+++ b/test/sample-projects/python/user_management/services/user_manager.py
@@ -0,0 +1,222 @@
+"""
+User management service for handling user operations.
+"""
+
+from typing import List, Optional, Dict, Any, Callable
+from datetime import datetime
+import json
+import os
+
+from ..models.user import User, UserRole, UserStatus
+from ..utils.validators import validate_email, validate_username
+from ..utils.exceptions import UserNotFoundError, DuplicateUserError
+
+
+class UserManager:
+ """Service class for managing users."""
+
+ def __init__(self, storage_path: Optional[str] = None):
+ """Initialize the user manager with optional storage path."""
+ self._users: Dict[str, User] = {}
+ self._storage_path = storage_path
+ if storage_path and os.path.exists(storage_path):
+ self._load_from_file()
+
+ def create_user(self, name: str, age: int, username: str,
+ email: Optional[str] = None,
+ role: UserRole = UserRole.USER) -> User:
+ """Create a new user."""
+ if username in self._users:
+ raise DuplicateUserError(f"User with username '{username}' already exists")
+
+ # Validate inputs
+ if not validate_username(username):
+ raise ValueError("Invalid username format")
+
+ if email and not validate_email(email):
+ raise ValueError("Invalid email format")
+
+ user = User(
+ name=name,
+ age=age,
+ username=username,
+ email=email,
+ role=role
+ )
+
+ self._users[username] = user
+ self._save_to_file()
+ return user
+
+ def get_user(self, username: str) -> User:
+ """Get a user by username."""
+ if username not in self._users:
+ raise UserNotFoundError(f"User with username '{username}' not found")
+ return self._users[username]
+
+ def get_user_by_email(self, email: str) -> Optional[User]:
+ """Get a user by email address."""
+ for user in self._users.values():
+ if user.email == email:
+ return user
+ return None
+
+ def update_user(self, username: str, **kwargs) -> User:
+ """Update user information."""
+ user = self.get_user(username)
+
+ # Update allowed fields
+ allowed_fields = ['name', 'age', 'email', 'role', 'status']
+ for field, value in kwargs.items():
+ if field in allowed_fields and hasattr(user, field):
+ setattr(user, field, value)
+
+ self._save_to_file()
+ return user
+
+ def delete_user(self, username: str) -> bool:
+ """Delete a user (soft delete)."""
+ user = self.get_user(username)
+ user.delete()
+ self._save_to_file()
+ return True
+
+ def remove_user(self, username: str) -> bool:
+ """Remove a user completely from the system."""
+ if username not in self._users:
+ raise UserNotFoundError(f"User with username '{username}' not found")
+
+ del self._users[username]
+ self._save_to_file()
+ return True
+
+ def get_all_users(self) -> List[User]:
+ """Get all users."""
+ return list(self._users.values())
+
+ def get_active_users(self) -> List[User]:
+ """Get all active users."""
+ return [user for user in self._users.values() if user.is_active()]
+
+ def get_users_by_role(self, role: UserRole) -> List[User]:
+ """Get users by role."""
+ return [user for user in self._users.values() if user.role == role]
+
+ def filter_users(self, filter_func: Callable[[User], bool]) -> List[User]:
+ """Filter users using a custom function."""
+ return [user for user in self._users.values() if filter_func(user)]
+
+ def search_users(self, query: str) -> List[User]:
+ """Search users by name or username."""
+ query_lower = query.lower()
+ return [
+ user for user in self._users.values()
+ if query_lower in user.name.lower() or query_lower in user.username.lower()
+ ]
+
+ def get_users_older_than(self, age: int) -> List[User]:
+ """Get users older than specified age."""
+ return self.filter_users(lambda user: user.age > age)
+
+ def get_users_with_email(self) -> List[User]:
+ """Get users that have email addresses."""
+ return self.filter_users(lambda user: user.has_email())
+
+ def get_users_with_permission(self, permission: str) -> List[User]:
+ """Get users with specific permission."""
+ return self.filter_users(lambda user: user.has_permission(permission))
+
+ def get_user_count(self) -> int:
+ """Get the total number of users."""
+ return len(self._users)
+
+ def get_user_stats(self) -> Dict[str, int]:
+ """Get user statistics."""
+ stats = {
+ 'total': len(self._users),
+ 'active': len(self.get_active_users()),
+ 'admin': len(self.get_users_by_role(UserRole.ADMIN)),
+ 'user': len(self.get_users_by_role(UserRole.USER)),
+ 'guest': len(self.get_users_by_role(UserRole.GUEST)),
+ 'with_email': len(self.get_users_with_email()),
+ }
+ return stats
+
+ def export_users(self, format: str = 'json') -> str:
+ """Export users to specified format."""
+ if format.lower() == 'json':
+ return self._export_to_json()
+ elif format.lower() == 'csv':
+ return self._export_to_csv()
+ else:
+ raise ValueError(f"Unsupported export format: {format}")
+
+ def _export_to_json(self) -> str:
+ """Export users to JSON format."""
+ users_data = [user.to_dict() for user in self._users.values()]
+ return json.dumps(users_data, indent=2)
+
+ def _export_to_csv(self) -> str:
+ """Export users to CSV format."""
+ if not self._users:
+ return "username,name,age,email,role,status\n"
+
+ lines = ["username,name,age,email,role,status"]
+ for user in self._users.values():
+ line = f"{user.username},{user.name},{user.age},{user.email or ''},{user.role.value},{user.status.value}"
+ lines.append(line)
+
+ return "\n".join(lines)
+
+ def _save_to_file(self) -> None:
+ """Save users to file if storage path is set."""
+ if not self._storage_path:
+ return
+
+ try:
+ with open(self._storage_path, 'w') as f:
+ f.write(self._export_to_json())  # _export_to_json already returns a JSON string; json.dump would double-encode it
+ except Exception as e:
+ print(f"Error saving users to file: {e}")
+
+ def _load_from_file(self) -> None:
+ """Load users from file."""
+ if not self._storage_path or not os.path.exists(self._storage_path):
+ return
+
+ try:
+ with open(self._storage_path, 'r') as f:
+ data = json.load(f)
+ if isinstance(data, str):
+ data = json.loads(data)
+
+ for user_data in data:
+ user = User.from_dict(user_data)
+ self._users[user.username] = user
+ except Exception as e:
+ print(f"Error loading users from file: {e}")
+
+ def clear_all_users(self) -> None:
+ """Clear all users from the system."""
+ self._users.clear()
+ self._save_to_file()
+
+ def __len__(self) -> int:
+ """Return the number of users."""
+ return len(self._users)
+
+ def __contains__(self, username: str) -> bool:
+ """Check if a username exists."""
+ return username in self._users
+
+ def __iter__(self):
+ """Iterate over users."""
+ return iter(self._users.values())
+
+ def __str__(self) -> str:
+ """String representation of UserManager."""
+ return f"UserManager(users: {len(self._users)})"
+
+ # CI marker method to verify auto-reindex on change
+ def _ci_added_symbol_marker(self) -> str:
+ return "ci_symbol_python"
\ No newline at end of file
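The callable-predicate and export pattern in `UserManager` above can be exercised standalone. This is a minimal sketch with a hypothetical `Rec`/`Registry` pair standing in for `User`/`UserManager` (these names are illustrative, not part of the sample project):

```python
import json
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rec:
    """Stand-in for the sample project's User."""
    username: str
    age: int


class Registry:
    """Stand-in for UserManager: keyed store + callable filters + JSON export."""

    def __init__(self) -> None:
        self._recs: Dict[str, Rec] = {}

    def add(self, rec: Rec) -> None:
        self._recs[rec.username] = rec

    def filter(self, pred: Callable[[Rec], bool]) -> List[Rec]:
        # Same callable-predicate pattern as UserManager.filter_users
        return [r for r in self._recs.values() if pred(r)]

    def export_json(self) -> str:
        # Serialize once; callers write the returned string directly
        return json.dumps([vars(r) for r in self._recs.values()], indent=2)


reg = Registry()
reg.add(Rec("alice", 30))
reg.add(Rec("bob", 25))
older = reg.filter(lambda r: r.age > 26)
print([r.username for r in older])  # ['alice']
```

Derived queries like `get_users_older_than` then reduce to one-line lambdas over the shared filter, which is why the sample class defines them that way.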
diff --git a/test/sample-projects/python/user_management/utils/__init__.py b/test/sample-projects/python/user_management/utils/__init__.py
new file mode 100644
index 0000000..82b229c
--- /dev/null
+++ b/test/sample-projects/python/user_management/utils/__init__.py
@@ -0,0 +1,17 @@
+"""Utilities package for user management system."""
+
+from .validators import validate_email, validate_username, validate_password
+from .exceptions import UserNotFoundError, DuplicateUserError, AuthenticationError
+from .helpers import generate_random_string, format_datetime, parse_datetime
+
+__all__ = [
+ "validate_email",
+ "validate_username",
+ "validate_password",
+ "UserNotFoundError",
+ "DuplicateUserError",
+ "AuthenticationError",
+ "generate_random_string",
+ "format_datetime",
+ "parse_datetime"
+]
\ No newline at end of file
diff --git a/test/sample-projects/python/user_management/utils/exceptions.py b/test/sample-projects/python/user_management/utils/exceptions.py
new file mode 100644
index 0000000..cc84479
--- /dev/null
+++ b/test/sample-projects/python/user_management/utils/exceptions.py
@@ -0,0 +1,80 @@
+"""
+Custom exceptions for user management system.
+"""
+
+
+class UserManagementError(Exception):
+ """Base exception for user management errors."""
+ pass
+
+
+class UserNotFoundError(UserManagementError):
+ """Exception raised when a user is not found."""
+
+ def __init__(self, message: str = "User not found"):
+ self.message = message
+ super().__init__(self.message)
+
+
+class DuplicateUserError(UserManagementError):
+ """Exception raised when trying to create a user that already exists."""
+
+ def __init__(self, message: str = "User already exists"):
+ self.message = message
+ super().__init__(self.message)
+
+
+class AuthenticationError(UserManagementError):
+ """Exception raised when authentication fails."""
+
+ def __init__(self, message: str = "Authentication failed"):
+ self.message = message
+ super().__init__(self.message)
+
+
+class AuthorizationError(UserManagementError):
+ """Exception raised when authorization fails."""
+
+ def __init__(self, message: str = "Authorization failed"):
+ self.message = message
+ super().__init__(self.message)
+
+
+class ValidationError(UserManagementError):
+ """Exception raised when validation fails."""
+
+ def __init__(self, message: str = "Validation failed"):
+ self.message = message
+ super().__init__(self.message)
+
+
+class PermissionError(UserManagementError):
+ """Exception raised when user lacks required permissions."""
+
+ def __init__(self, message: str = "Permission denied"):
+ self.message = message
+ super().__init__(self.message)
+
+
+class SessionError(UserManagementError):
+ """Exception raised when session operations fail."""
+
+ def __init__(self, message: str = "Session error"):
+ self.message = message
+ super().__init__(self.message)
+
+
+class StorageError(UserManagementError):
+ """Exception raised when storage operations fail."""
+
+ def __init__(self, message: str = "Storage error"):
+ self.message = message
+ super().__init__(self.message)
+
+
+class ConfigurationError(UserManagementError):
+ """Exception raised when configuration is invalid."""
+
+ def __init__(self, message: str = "Configuration error"):
+ self.message = message
+ super().__init__(self.message)
\ No newline at end of file
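Because every exception in the module above derives from `UserManagementError`, callers can catch the base class to handle any domain failure uniformly. A minimal sketch (two of the classes are redeclared here so the snippet is self-contained):

```python
class UserManagementError(Exception):
    """Base exception for user management errors."""


class UserNotFoundError(UserManagementError):
    """Raised when a user is not found."""

    def __init__(self, message: str = "User not found"):
        self.message = message
        super().__init__(self.message)


def lookup(username: str) -> None:
    # Hypothetical lookup that always fails, for demonstration
    raise UserNotFoundError(f"No such user: {username}")


try:
    lookup("ghost")
except UserManagementError as exc:  # the base class catches the subclass
    caught = str(exc)

print(caught)  # No such user: ghost
```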
diff --git a/test/sample-projects/python/user_management/utils/helpers.py b/test/sample-projects/python/user_management/utils/helpers.py
new file mode 100644
index 0000000..0c18d06
--- /dev/null
+++ b/test/sample-projects/python/user_management/utils/helpers.py
@@ -0,0 +1,206 @@
+"""
+Helper utilities for user management system.
+"""
+
+import secrets
+import string
+from datetime import datetime, timezone
+from typing import Optional, Dict, Any, List
+import json
+import hashlib
+
+
+def generate_random_string(length: int = 16,
+ include_digits: bool = True,
+ include_symbols: bool = False) -> str:
+ """Generate a random string of specified length."""
+ characters = string.ascii_letters
+
+ if include_digits:
+ characters += string.digits
+
+ if include_symbols:
+ characters += "!@#$%^&*"
+
+ return ''.join(secrets.choice(characters) for _ in range(length))
+
+
+def generate_secure_token(length: int = 32) -> str:
+ """Generate a secure URL-safe token."""
+ return secrets.token_urlsafe(length)
+
+
+def generate_hash(input_string: str, salt: str = "") -> str:
+ """Generate a SHA-256 hash of the input string."""
+ combined = f"{input_string}{salt}"
+ return hashlib.sha256(combined.encode()).hexdigest()
+
+
+def format_datetime(dt: Optional[datetime], format_str: str = "%Y-%m-%d %H:%M:%S") -> str:
+ """Format datetime object to string."""
+ if dt is None:
+ return ""
+ return dt.strftime(format_str)
+
+
+def parse_datetime(date_string: str, format_str: str = "%Y-%m-%d %H:%M:%S") -> Optional[datetime]:
+ """Parse string to datetime object."""
+ try:
+ return datetime.strptime(date_string, format_str)
+ except ValueError:
+ return None
+
+
+def get_current_timestamp() -> str:
+ """Get current timestamp as ISO format string."""
+ return datetime.now(timezone.utc).isoformat()
+
+
+def is_valid_json(json_string: str) -> bool:
+ """Check if a string is valid JSON."""
+ try:
+ json.loads(json_string)
+ return True
+ except (ValueError, TypeError):
+ return False
+
+
+def deep_merge_dicts(dict1: Dict[str, Any], dict2: Dict[str, Any]) -> Dict[str, Any]:
+ """Deep merge two dictionaries."""
+ result = dict1.copy()
+
+ for key, value in dict2.items():
+ if key in result and isinstance(result[key], dict) and isinstance(value, dict):
+ result[key] = deep_merge_dicts(result[key], value)
+ else:
+ result[key] = value
+
+ return result
+
+
+def flatten_dict(nested_dict: Dict[str, Any], parent_key: str = '', sep: str = '.') -> Dict[str, Any]:
+ """Flatten a nested dictionary."""
+ items = []
+
+ for key, value in nested_dict.items():
+ new_key = f"{parent_key}{sep}{key}" if parent_key else key
+
+ if isinstance(value, dict):
+ items.extend(flatten_dict(value, new_key, sep=sep).items())
+ else:
+ items.append((new_key, value))
+
+ return dict(items)
+
+
+def chunk_list(input_list: List[Any], chunk_size: int) -> List[List[Any]]:
+ """Split a list into chunks of specified size."""
+ return [input_list[i:i + chunk_size] for i in range(0, len(input_list), chunk_size)]
+
+
+def remove_duplicates(input_list: List[Any]) -> List[Any]:
+ """Remove duplicates from a list while preserving order."""
+ seen = set()
+ result = []
+
+ for item in input_list:
+ if item not in seen:
+ seen.add(item)
+ result.append(item)
+
+ return result
+
+
+def safe_dict_get(dictionary: Dict[str, Any], key_path: str, default: Any = None) -> Any:
+ """Safely get value from nested dictionary using dot notation."""
+ keys = key_path.split('.')
+ current = dictionary
+
+ try:
+ for key in keys:
+ current = current[key]
+ return current
+ except (KeyError, TypeError):
+ return default
+
+
+def calculate_age(birth_date: datetime) -> int:
+ """Calculate age from birth date."""
+ today = datetime.now()
+ age = today.year - birth_date.year
+
+ # Adjust if birthday hasn't occurred this year
+ if today.month < birth_date.month or (today.month == birth_date.month and today.day < birth_date.day):
+ age -= 1
+
+ return age
+
+
+def mask_email(email: str) -> str:
+ """Mask email address for privacy."""
+ if not email or '@' not in email:
+ return email
+
+ username, domain = email.split('@', 1)
+
+ if len(username) <= 2:
+ masked_username = username
+ else:
+ masked_username = username[0] + '*' * (len(username) - 2) + username[-1]
+
+ return f"{masked_username}@{domain}"
+
+
+def format_file_size(size_bytes: int) -> str:
+ """Format file size in human readable format."""
+ if size_bytes == 0:
+ return "0 B"
+
+ size_names = ["B", "KB", "MB", "GB", "TB"]
+ i = 0
+
+ while size_bytes >= 1024 and i < len(size_names) - 1:
+ size_bytes /= 1024.0
+ i += 1
+
+ return f"{size_bytes:.1f} {size_names[i]}"
+
+
+def truncate_string(text: str, max_length: int, suffix: str = "...") -> str:
+ """Truncate string to specified length with optional suffix."""
+ if len(text) <= max_length:
+ return text
+
+ return text[:max_length - len(suffix)] + suffix
+
+
+def camel_to_snake(name: str) -> str:
+ """Convert camelCase to snake_case."""
+ import re
+ s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', name)
+ return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower()
+
+
+def snake_to_camel(name: str) -> str:
+ """Convert snake_case to camelCase."""
+ components = name.split('_')
+ return components[0] + ''.join(x.title() for x in components[1:])
+
+
+def is_email_domain_valid(email: str, allowed_domains: List[str]) -> bool:
+ """Check if email domain is in allowed list."""
+ if not email or '@' not in email:
+ return False
+
+ domain = email.split('@')[1].lower()
+ return domain in [d.lower() for d in allowed_domains]
+
+
+def get_initials(name: str) -> str:
+ """Get initials from a name."""
+ if not name:
+ return ""
+
+ words = name.strip().split()
+ initials = ''.join(word[0].upper() for word in words if word)
+ return initials[:3] # Limit to 3 characters
\ No newline at end of file
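`flatten_dict` and `safe_dict_get` above are two views of the same dot-path convention: one produces `a.b` keys, the other resolves them. Both helpers are copied verbatim here so the round trip can be run standalone:

```python
from typing import Any, Dict


def flatten_dict(nested_dict: Dict[str, Any], parent_key: str = '', sep: str = '.') -> Dict[str, Any]:
    """Flatten a nested dictionary into dot-path keys."""
    items = []
    for key, value in nested_dict.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.extend(flatten_dict(value, new_key, sep=sep).items())
        else:
            items.append((new_key, value))
    return dict(items)


def safe_dict_get(dictionary: Dict[str, Any], key_path: str, default: Any = None) -> Any:
    """Resolve a dot-path against a nested dictionary."""
    current = dictionary
    try:
        for key in key_path.split('.'):
            current = current[key]
        return current
    except (KeyError, TypeError):
        return default


cfg = {"db": {"host": "localhost", "port": 5432}}
flat = flatten_dict(cfg)
print(flat["db.port"])                      # 5432
print(safe_dict_get(cfg, "db.host"))        # localhost
print(safe_dict_get(cfg, "db.user", "?"))   # ? (missing key falls back to default)
```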
diff --git a/test/sample-projects/python/user_management/utils/validators.py b/test/sample-projects/python/user_management/utils/validators.py
new file mode 100644
index 0000000..55d8dc3
--- /dev/null
+++ b/test/sample-projects/python/user_management/utils/validators.py
@@ -0,0 +1,132 @@
+"""
+Validation utilities for user management system.
+"""
+
+import re
+from typing import Optional
+
+
+def validate_email(email: str) -> bool:
+ """Validate email address format."""
+ if not email:
+ return False
+
+ # Basic email validation pattern
+ pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
+ return bool(re.match(pattern, email))
+
+
+def validate_username(username: str) -> bool:
+ """Validate username format."""
+ if not username:
+ return False
+
+ # Username must be 3-20 characters, alphanumeric and underscores only
+ if len(username) < 3 or len(username) > 20:
+ return False
+
+ pattern = r'^[a-zA-Z0-9_]+$'
+ return bool(re.match(pattern, username))
+
+
+def validate_password(password: str) -> tuple[bool, Optional[str]]:
+ """
+ Validate password strength.
+
+ Returns:
+ tuple: (is_valid, error_message)
+ """
+ if not password:
+ return False, "Password cannot be empty"
+
+ if len(password) < 8:
+ return False, "Password must be at least 8 characters long"
+
+ if len(password) > 128:
+ return False, "Password must be no more than 128 characters long"
+
+ # Check for at least one uppercase letter
+ if not re.search(r'[A-Z]', password):
+ return False, "Password must contain at least one uppercase letter"
+
+ # Check for at least one lowercase letter
+ if not re.search(r'[a-z]', password):
+ return False, "Password must contain at least one lowercase letter"
+
+ # Check for at least one digit
+ if not re.search(r'\d', password):
+ return False, "Password must contain at least one digit"
+
+ # Check for at least one special character
+ if not re.search(r'[!@#$%^&*(),.?":{}|<>]', password):
+ return False, "Password must contain at least one special character"
+
+ return True, None
+
+
+def validate_age(age: int) -> bool:
+ """Validate age value."""
+ return 0 <= age <= 150
+
+
+def validate_name(name: str) -> bool:
+ """Validate name format."""
+ if not name or not name.strip():
+ return False
+
+ # Name should be 1-50 characters, letters, spaces, hyphens, and apostrophes
+ if len(name.strip()) > 50:
+ return False
+
+ pattern = r"^[a-zA-Z\s\-']+$"
+ return bool(re.match(pattern, name.strip()))
+
+
+def validate_phone(phone: str) -> bool:
+ """Validate phone number format."""
+ if not phone:
+ return False
+
+ # Remove all non-digit characters
+ digits_only = re.sub(r'\D', '', phone)
+
+ # Check if it's a valid length (10-15 digits)
+ return 10 <= len(digits_only) <= 15
+
+
+def sanitize_input(input_str: str) -> str:
+ """Sanitize user input by removing potentially dangerous characters."""
+ if not input_str:
+ return ""
+
+ # Remove script elements and their content first, so the script body is dropped too
+ clean_str = re.sub(r'<script[^>]*>.*?</script>', '', input_str, flags=re.DOTALL | re.IGNORECASE)
+
+ # Remove remaining HTML tags
+ clean_str = re.sub(r'<[^>]+>', '', clean_str)
+
+ # Remove dangerous characters
+ dangerous_chars = ['<', '>', '"', "'", '&', ';', '`', '|', '$', '(', ')', '{', '}', '[', ']']
+ for char in dangerous_chars:
+ clean_str = clean_str.replace(char, '')
+
+ return clean_str.strip()
+
+
+def validate_url(url: str) -> bool:
+ """Validate URL format."""
+ if not url:
+ return False
+
+ pattern = r'^https?://(?:[-\w.])+(?::[0-9]+)?(?:/[^?\s]*)?(?:\?[^#\s]*)?(?:#[^\s]*)?$'
+ return bool(re.match(pattern, url))
+
+
+def validate_json_string(json_str: str) -> bool:
+ """Validate if a string is valid JSON."""
+ try:
+ import json
+ json.loads(json_str)
+ return True
+ except (ValueError, TypeError):
+ return False
\ No newline at end of file
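`validate_password` above returns a `(is_valid, error_message)` tuple rather than raising, so callers can surface the first failing rule directly. A condensed, self-contained version of the same checks (the rule table is a compact restatement, not new policy):

```python
import re
from typing import Optional, Tuple


def validate_password(password: str) -> Tuple[bool, Optional[str]]:
    """Return (is_valid, error_message); error_message is None on success."""
    if not password:
        return False, "Password cannot be empty"
    if len(password) < 8:
        return False, "Password must be at least 8 characters long"
    # Same character-class rules as the full validator above
    rules = [
        (r'[A-Z]', "uppercase letter"),
        (r'[a-z]', "lowercase letter"),
        (r'\d', "digit"),
        (r'[!@#$%^&*(),.?":{}|<>]', "special character"),
    ]
    for pattern, label in rules:
        if not re.search(pattern, password):
            return False, f"Password must contain at least one {label}"
    return True, None


print(validate_password("Sh0rt!"))           # (False, 'Password must be at least 8 characters long')
print(validate_password("Str0ng!Passw0rd"))  # (True, None)
```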
diff --git a/test/sample-projects/typescript/sample.ts b/test/sample-projects/typescript/sample.ts
new file mode 100644
index 0000000..d5c3a6d
--- /dev/null
+++ b/test/sample-projects/typescript/sample.ts
@@ -0,0 +1,187 @@
+/**
+ * Sample TypeScript file for testing Code Index MCP analysis.
+ */
+
+interface PersonInterface {
+ name: string;
+ age: number;
+ email?: string;
+}
+
+interface UserManagerInterface {
+ addUser(person: Person): void;
+ findByName(name: string): Person | null;
+ getAllUsers(): Person[];
+ getUserCount(): number;
+}
+
+/**
+ * Represents a person with basic information.
+ */
+class Person implements PersonInterface {
+ public readonly name: string;
+ public readonly age: number;
+ public email?: string;
+
+ constructor(name: string, age: number, email?: string) {
+ if (age < 0) {
+ throw new Error("Age cannot be negative");
+ }
+
+ this.name = name;
+ this.age = age;
+ this.email = email;
+ }
+
+ /**
+ * Returns a greeting message.
+ */
+ greet(): string {
+ return `Hello, I'm ${this.name} and I'm ${this.age} years old.`;
+ }
+
+ /**
+ * Update the person's email.
+ */
+ updateEmail(email: string): void {
+ this.email = email;
+ }
+
+ /**
+ * Create a Person from an object.
+ */
+ static fromObject(data: PersonInterface): Person {
+ return new Person(data.name, data.age, data.email);
+ }
+
+ /**
+ * Convert person to JSON-serializable object.
+ */
+ toJSON(): PersonInterface {
+ return {
+ name: this.name,
+ age: this.age,
+ email: this.email
+ };
+ }
+}
+
+/**
+ * Generic utility type for filtering arrays.
+ */
+type FilterFunction<T> = (item: T) => boolean;
+
+/**
+ * Manages a collection of users.
+ */
+class UserManager implements UserManagerInterface {
+ private users: Person[] = [];
+
+ /**
+ * Add a user to the collection.
+ */
+ addUser(person: Person): void {
+ this.users.push(person);
+ }
+
+ /**
+ * Find a user by name.
+ */
+ findByName(name: string): Person | null {
+ const user = this.users.find(user => user.name === name);
+ return user || null;
+ }
+
+ /**
+ * Get all users.
+ */
+ getAllUsers(): Person[] {
+ return [...this.users];
+ }
+
+ /**
+ * Filter users by a custom function.
+ */
+ filterUsers(filterFn: FilterFunction<Person>): Person[] {
+ return this.users.filter(filterFn);
+ }
+
+ /**
+ * Get users older than specified age.
+ */
+ getUsersOlderThan(age: number): Person[] {
+ return this.filterUsers(user => user.age > age);
+ }
+
+ /**
+ * Get users with email addresses.
+ */
+ getUsersWithEmail(): Person[] {
+ return this.filterUsers(user => !!user.email);
+ }
+
+ /**
+ * Get the number of users.
+ */
+ getUserCount(): number {
+ return this.users.length;
+ }
+
+ /**
+ * Export all users as JSON.
+ */
+ exportToJSON(): string {
+ return JSON.stringify(this.users.map(user => user.toJSON()), null, 2);
+ }
+}
+
+/**
+ * Utility functions for working with users.
+ */
+namespace UserUtils {
+ export function validateAge(age: number): boolean {
+ return age >= 0 && age <= 150;
+ }
+
+ export function formatUserList(users: Person[]): string {
+ return users.map(user => user.greet()).join('\n');
+ }
+
+ export const DEFAULT_USERS: PersonInterface[] = [
+ { name: "Alice", age: 30, email: "alice@example.com" },
+ { name: "Bob", age: 25 },
+ { name: "Charlie", age: 35, email: "charlie@example.com" }
+ ];
+}
+
+/**
+ * Main function to demonstrate usage.
+ */
+function main(): void {
+ const manager = new UserManager();
+
+ // Add some users
+ UserUtils.DEFAULT_USERS.forEach(userData => {
+ const person = Person.fromObject(userData);
+ manager.addUser(person);
+ console.log(person.greet());
+ });
+
+ console.log(`Total users: ${manager.getUserCount()}`);
+
+ // Find users older than 25
+ const olderUsers = manager.getUsersOlderThan(25);
+ console.log(`Users older than 25: ${olderUsers.length}`);
+
+ // Export to JSON
+ console.log("Users as JSON:");
+ console.log(manager.exportToJSON());
+}
+
+// Export for module usage
+export { Person, UserManager, UserUtils, PersonInterface, UserManagerInterface };
+
+// Run main if this is the entry point
+if (require.main === module) {
+ main();
+}
\ No newline at end of file
diff --git a/test/sample-projects/typescript/user-management/package.json b/test/sample-projects/typescript/user-management/package.json
new file mode 100644
index 0000000..5a7454f
--- /dev/null
+++ b/test/sample-projects/typescript/user-management/package.json
@@ -0,0 +1,121 @@
+{
+ "name": "user-management-ts",
+ "version": "1.0.0",
+ "description": "A comprehensive TypeScript user management system for testing Code Index MCP",
+ "main": "dist/server.js",
+ "scripts": {
+ "build": "tsc",
+ "start": "node dist/server.js",
+ "dev": "ts-node-dev --respawn --transpile-only src/server.ts",
+ "test": "jest",
+ "test:watch": "jest --watch",
+ "test:coverage": "jest --coverage",
+ "lint": "eslint src/ --ext .ts --fix",
+ "format": "prettier --write src/",
+ "clean": "rimraf dist",
+ "prebuild": "npm run clean",
+ "prestart": "npm run build"
+ },
+ "keywords": [
+ "user-management",
+ "typescript",
+ "nodejs",
+ "express",
+ "authentication",
+ "api"
+ ],
+ "author": "Test Author",
+ "license": "MIT",
+ "dependencies": {
+ "express": "^4.18.2",
+ "mongoose": "^7.4.1",
+ "bcryptjs": "^2.4.3",
+ "jsonwebtoken": "^9.0.1",
+ "joi": "^17.9.2",
+ "cors": "^2.8.5",
+ "helmet": "^7.0.0",
+ "express-rate-limit": "^6.8.1",
+ "winston": "^3.10.0",
+ "dotenv": "^16.3.1",
+ "uuid": "^9.0.0",
+ "morgan": "^1.10.0",
+ "compression": "^1.7.4",
+ "express-validator": "^7.0.1",
+ "class-transformer": "^0.5.1",
+ "class-validator": "^0.14.0",
+ "reflect-metadata": "^0.1.13"
+ },
+ "devDependencies": {
+ "@types/express": "^4.17.17",
+ "@types/node": "^20.4.2",
+ "@types/bcryptjs": "^2.4.2",
+ "@types/jsonwebtoken": "^9.0.2",
+ "@types/cors": "^2.8.13",
+ "@types/morgan": "^1.9.4",
+ "@types/compression": "^1.7.2",
+ "@types/uuid": "^9.0.2",
+ "@types/jest": "^29.5.3",
+ "@types/supertest": "^2.0.12",
+ "typescript": "^5.1.6",
+ "ts-node": "^10.9.1",
+ "ts-node-dev": "^2.0.0",
+ "jest": "^29.6.1",
+ "ts-jest": "^29.1.1",
+ "supertest": "^6.3.3",
+ "@typescript-eslint/eslint-plugin": "^6.2.0",
+ "@typescript-eslint/parser": "^6.2.0",
+ "eslint": "^8.45.0",
+ "prettier": "^3.0.0",
+ "rimraf": "^5.0.1",
+ "mongodb-memory-server": "^8.14.0"
+ },
+ "engines": {
+ "node": ">=16.0.0"
+ },
+ "jest": {
+ "preset": "ts-jest",
+ "testEnvironment": "node",
+ "roots": ["/src"],
+ "testMatch": ["**/__tests__/**/*.test.ts"],
+ "transform": {
+ "^.+\.tsx?$": "ts-jest"
+ },
+ "coverageDirectory": "coverage",
+ "collectCoverageFrom": [
+ "src/**/*.ts",
+ "!src/server.ts",
+ "!src/**/*.d.ts"
+ ]
+ },
+ "eslintConfig": {
+ "parser": "@typescript-eslint/parser",
+ "extends": [
+ "eslint:recommended",
+ "@typescript-eslint/recommended"
+ ],
+ "env": {
+ "node": true,
+ "es2021": true,
+ "jest": true
+ },
+ "parserOptions": {
+ "ecmaVersion": "latest",
+ "sourceType": "module"
+ },
+ "rules": {
+ "@typescript-eslint/no-explicit-any": "warn",
+ "@typescript-eslint/no-unused-vars": "error",
+ "@typescript-eslint/explicit-function-return-type": "warn",
+ "no-console": "warn",
+ "prefer-const": "error"
+ }
+ },
+ "prettier": {
+ "semi": true,
+ "singleQuote": true,
+ "tabWidth": 2,
+ "trailingComma": "es5",
+ "printWidth": 80,
+ "arrowParens": "avoid"
+ }
+}
\ No newline at end of file
diff --git a/test/sample-projects/typescript/user-management/src/config/database.ts b/test/sample-projects/typescript/user-management/src/config/database.ts
new file mode 100644
index 0000000..6142fd8
--- /dev/null
+++ b/test/sample-projects/typescript/user-management/src/config/database.ts
@@ -0,0 +1,165 @@
+import mongoose from 'mongoose';
+import logger from '../utils/logger';
+
+/**
+ * Database connection configuration
+ */
+class Database {
+ private mongoURI: string;
+ private options: mongoose.ConnectOptions;
+
+ constructor() {
+ this.mongoURI = process.env.MONGODB_URI || 'mongodb://localhost:27017/user-management-ts';
+ this.options = {
+ maxPoolSize: 10,
+ serverSelectionTimeoutMS: 5000,
+ socketTimeoutMS: 45000,
+ family: 4,
+ };
+ }
+
+ /**
+ * Connect to MongoDB
+ */
+ public async connect(): Promise<void> {
+ try {
+ await mongoose.connect(this.mongoURI, this.options);
+ logger.info('MongoDB connected successfully');
+
+ // Handle connection events
+ mongoose.connection.on('error', (err: Error) => {
+ logger.error('MongoDB connection error:', err);
+ });
+
+ mongoose.connection.on('disconnected', () => {
+ logger.warn('MongoDB disconnected');
+ });
+
+ mongoose.connection.on('reconnected', () => {
+ logger.info('MongoDB reconnected');
+ });
+
+ // Handle process termination
+ process.on('SIGINT', () => this.gracefulShutdown('SIGINT'));
+ process.on('SIGTERM', () => this.gracefulShutdown('SIGTERM'));
+
+ } catch (error) {
+ logger.error('MongoDB connection failed:', error as Error);
+ process.exit(1);
+ }
+ }
+
+ /**
+ * Disconnect from MongoDB
+ */
+ public async disconnect(): Promise<void> {
+ try {
+ await mongoose.disconnect();
+ logger.info('MongoDB disconnected successfully');
+ } catch (error) {
+ logger.error('MongoDB disconnection error:', error as Error);
+ }
+ }
+
+ /**
+ * Graceful shutdown
+ */
+ private async gracefulShutdown(signal: string): Promise<void> {
+ logger.info(`Received ${signal}. Graceful shutdown...`);
+ try {
+ await this.disconnect();
+ process.exit(0);
+ } catch (error) {
+ logger.error('Error during graceful shutdown:', error as Error);
+ process.exit(1);
+ }
+ }
+
+ /**
+ * Get connection status
+ */
+ public getConnectionStatus(): string {
+ const states: Record<number, string> = {
+ 0: 'disconnected',
+ 1: 'connected',
+ 2: 'connecting',
+ 3: 'disconnecting',
+ };
+ return states[mongoose.connection.readyState] || 'unknown';
+ }
+
+ /**
+ * Check if database is connected
+ */
+ public isConnected(): boolean {
+ return mongoose.connection.readyState === 1;
+ }
+
+ /**
+ * Drop database (for testing)
+ */
+ public async dropDatabase(): Promise<void> {
+ if (process.env.NODE_ENV === 'test') {
+ try {
+ await mongoose.connection.db.dropDatabase();
+ logger.info('Test database dropped');
+ } catch (error) {
+ logger.error('Error dropping test database:', error as Error);
+ }
+ } else {
+ logger.warn('Database drop attempted in non-test environment');
+ }
+ }
+
+ /**
+ * Get database statistics
+ */
+ public async getStats(): Promise<any> {
+ try {
+ const stats = await mongoose.connection.db.stats();
+ return {
+ database: mongoose.connection.name,
+ collections: stats.collections,
+ dataSize: stats.dataSize,
+ storageSize: stats.storageSize,
+ indexes: stats.indexes,
+ indexSize: stats.indexSize,
+ objects: stats.objects,
+ };
+ } catch (error) {
+ logger.error('Error getting database stats:', error as Error);
+ return null;
+ }
+ }
+
+ /**
+ * Create indexes for performance
+ */
+ public async createIndexes(): Promise<void> {
+ try {
+ // This would be called after models are loaded
+ // Indexes are already defined in the model schemas
+ logger.info('Database indexes created');
+ } catch (error) {
+ logger.error('Error creating indexes:', error as Error);
+ }
+ }
+
+ /**
+ * Health check
+ */
+ public async healthCheck(): Promise<boolean> {
+ try {
+ await mongoose.connection.db.admin().ping();
+ return true;
+ } catch (error) {
+ logger.error('Database health check failed:', error as Error);
+ return false;
+ }
+ }
+}
+
+// Create singleton instance
+const database = new Database();
+
+export default database;
\ No newline at end of file
diff --git a/test/sample-projects/typescript/user-management/src/middleware/rateLimiter.ts b/test/sample-projects/typescript/user-management/src/middleware/rateLimiter.ts
new file mode 100644
index 0000000..68a9cbd
--- /dev/null
+++ b/test/sample-projects/typescript/user-management/src/middleware/rateLimiter.ts
@@ -0,0 +1,238 @@
+import rateLimit from 'express-rate-limit';
+import { Request, Response } from 'express';
+import { RateLimitError } from '../utils/errors';
+import logger from '../utils/logger';
+import { IApiResponse } from '../types/User';
+
+/**
+ * General rate limiter
+ * Limits requests per IP address
+ */
+export const generalLimiter = rateLimit({
+ windowMs: 15 * 60 * 1000, // 15 minutes
+ max: 100, // limit each IP to 100 requests per windowMs
+ message: {
+ success: false,
+ message: 'Too many requests from this IP, please try again later.',
+ error: {
+ message: 'Too many requests from this IP, please try again later.',
+ statusCode: 429,
+ },
+ },
+ standardHeaders: true, // Return rate limit info in the `RateLimit-*` headers
+ legacyHeaders: false, // Disable the `X-RateLimit-*` headers
+ handler: (req: Request, res: Response) => {
+ logger.warn(`Rate limit exceeded for IP: ${req.ip}`);
+ const error = new RateLimitError('Too many requests from this IP, please try again later.');
+ const response: IApiResponse = {
+ success: false,
+ message: error.message,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode,
+ },
+ };
+ res.status(429).json(response);
+ },
+});
+
+/**
+ * Authentication rate limiter
+ * Stricter limits for login attempts
+ */
+export const authLimiter = rateLimit({
+ windowMs: 15 * 60 * 1000, // 15 minutes
+ max: 5, // limit each IP to 5 login attempts per windowMs
+ message: {
+ success: false,
+ message: 'Too many login attempts from this IP, please try again later.',
+ error: {
+ message: 'Too many login attempts from this IP, please try again later.',
+ statusCode: 429,
+ },
+ },
+ standardHeaders: true,
+ legacyHeaders: false,
+ handler: (req: Request, res: Response) => {
+ logger.warn(`Authentication rate limit exceeded for IP: ${req.ip}`);
+ const error = new RateLimitError('Too many login attempts from this IP, please try again later.');
+ const response: IApiResponse = {
+ success: false,
+ message: error.message,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode,
+ },
+ };
+ res.status(429).json(response);
+ },
+});
+
+/**
+ * User creation rate limiter
+ * Moderate limits for user registration
+ */
+export const createUserLimiter = rateLimit({
+ windowMs: 60 * 60 * 1000, // 1 hour
+ max: 10, // limit each IP to 10 user creations per hour
+ message: {
+ success: false,
+ message: 'Too many accounts created from this IP, please try again later.',
+ error: {
+ message: 'Too many accounts created from this IP, please try again later.',
+ statusCode: 429,
+ },
+ },
+ standardHeaders: true,
+ legacyHeaders: false,
+ handler: (req: Request, res: Response) => {
+ logger.warn(`User creation rate limit exceeded for IP: ${req.ip}`);
+ const error = new RateLimitError('Too many accounts created from this IP, please try again later.');
+ const response: IApiResponse = {
+ success: false,
+ message: error.message,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode,
+ },
+ };
+ res.status(429).json(response);
+ },
+});
+
+/**
+ * Password reset rate limiter
+ * Limits password reset attempts
+ */
+export const passwordResetLimiter = rateLimit({
+ windowMs: 60 * 60 * 1000, // 1 hour
+ max: 3, // limit each IP to 3 password reset attempts per hour
+ message: {
+ success: false,
+ message: 'Too many password reset attempts from this IP, please try again later.',
+ error: {
+ message: 'Too many password reset attempts from this IP, please try again later.',
+ statusCode: 429,
+ },
+ },
+ standardHeaders: true,
+ legacyHeaders: false,
+ handler: (req: Request, res: Response) => {
+ logger.warn(`Password reset rate limit exceeded for IP: ${req.ip}`);
+ const error = new RateLimitError('Too many password reset attempts from this IP, please try again later.');
+ const response: IApiResponse = {
+ success: false,
+ message: error.message,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode,
+ },
+ };
+ res.status(429).json(response);
+ },
+});
+
+/**
+ * API documentation rate limiter
+ * Limits access to API documentation
+ */
+export const docsLimiter = rateLimit({
+ windowMs: 15 * 60 * 1000, // 15 minutes
+ max: 50, // limit each IP to 50 documentation requests per windowMs
+ message: {
+ success: false,
+ message: 'Too many documentation requests from this IP, please try again later.',
+ error: {
+ message: 'Too many documentation requests from this IP, please try again later.',
+ statusCode: 429,
+ },
+ },
+ standardHeaders: true,
+ legacyHeaders: false,
+ handler: (req: Request, res: Response) => {
+ logger.warn(`Documentation rate limit exceeded for IP: ${req.ip}`);
+ const error = new RateLimitError('Too many documentation requests from this IP, please try again later.');
+ const response: IApiResponse = {
+ success: false,
+ message: error.message,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode,
+ },
+ };
+ res.status(429).json(response);
+ },
+});
+
+/**
+ * Export rate limiter
+ * Limits data export requests
+ */
+export const exportLimiter = rateLimit({
+ windowMs: 60 * 60 * 1000, // 1 hour
+ max: 5, // limit each IP to 5 export requests per hour
+ message: {
+ success: false,
+ message: 'Too many export requests from this IP, please try again later.',
+ error: {
+ message: 'Too many export requests from this IP, please try again later.',
+ statusCode: 429,
+ },
+ },
+ standardHeaders: true,
+ legacyHeaders: false,
+ handler: (req: Request, res: Response) => {
+ logger.warn(`Export rate limit exceeded for IP: ${req.ip}`);
+ const error = new RateLimitError('Too many export requests from this IP, please try again later.');
+ const response: IApiResponse = {
+ success: false,
+ message: error.message,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode,
+ },
+ };
+ res.status(429).json(response);
+ },
+});
+
+/**
+ * Create custom rate limiter
+ */
+export const createCustomLimiter = (options: {
+ windowMs: number;
+ max: number;
+ message: string;
+ skipSuccessfulRequests?: boolean;
+ skipFailedRequests?: boolean;
+}) => {
+ return rateLimit({
+ windowMs: options.windowMs,
+ max: options.max,
+ message: {
+ success: false,
+ message: options.message,
+ error: {
+ message: options.message,
+ statusCode: 429,
+ },
+ },
+ standardHeaders: true,
+ legacyHeaders: false,
+ skipSuccessfulRequests: options.skipSuccessfulRequests || false,
+ skipFailedRequests: options.skipFailedRequests || false,
+ handler: (req: Request, res: Response) => {
+ logger.warn(`Custom rate limit exceeded for IP: ${req.ip} - ${options.message}`);
+ const error = new RateLimitError(options.message);
+ const response: IApiResponse = {
+ success: false,
+ message: error.message,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode,
+ },
+ };
+ res.status(429).json(response);
+ },
+ });
+};
\ No newline at end of file
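All of the limiters in this file delegate window accounting to `express-rate-limit`. As a rough illustration of the fixed-window counting that configuration describes (a standalone sketch, not the library's actual implementation; the `FixedWindowLimiter` class and its in-memory `Map` store are assumptions for illustration):

```typescript
// Minimal fixed-window rate limiter sketch. The real express-rate-limit
// package adds pluggable stores, RateLimit headers, and skip options.
class FixedWindowLimiter {
  private hits = new Map<string, { count: number; resetAt: number }>();

  constructor(private windowMs: number, private max: number) {}

  // Returns true if the request identified by `key` (e.g. an IP) is allowed.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now >= entry.resetAt) {
      // New window for this key: reset the counter.
      this.hits.set(key, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.max;
  }
}

// Mirrors exportLimiter's config: 5 requests per hour per IP.
const limiter = new FixedWindowLimiter(60 * 60 * 1000, 5);
const results = Array.from({ length: 6 }, () => limiter.allow('203.0.113.7', 1_000));
// First five requests in the window pass; the sixth is rejected.
```

Once the window elapses, the counter for that key resets, which is exactly the behavior `windowMs` and `max` express in the configurations above.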
diff --git a/test/sample-projects/typescript/user-management/src/models/User.ts b/test/sample-projects/typescript/user-management/src/models/User.ts
new file mode 100644
index 0000000..c7137a0
--- /dev/null
+++ b/test/sample-projects/typescript/user-management/src/models/User.ts
@@ -0,0 +1,389 @@
+import mongoose, { Schema, Model } from 'mongoose';
+import bcrypt from 'bcryptjs';
+import jwt from 'jsonwebtoken';
+import { v4 as uuidv4 } from 'uuid';
+
+import {
+ IUser,
+ IUserDocument,
+ IUserResponse,
+ IUserStats,
+ UserRole,
+ UserStatus,
+ IJWTPayload,
+} from '../types/User';
+
+// User schema definition
+const userSchema = new Schema(
+ {
+ id: {
+ type: String,
+ default: uuidv4,
+ unique: true,
+ required: true,
+ },
+ username: {
+ type: String,
+ required: [true, 'Username is required'],
+ unique: true,
+ minlength: [3, 'Username must be at least 3 characters'],
+ maxlength: [20, 'Username cannot exceed 20 characters'],
+ match: [
+ /^[a-zA-Z0-9_]+$/,
+ 'Username can only contain letters, numbers, and underscores',
+ ],
+ },
+ email: {
+ type: String,
+ unique: true,
+ sparse: true,
+ match: [
+ /^\w+([.-]?\w+)*@\w+([.-]?\w+)*(\.\w{2,3})+$/,
+ 'Please enter a valid email',
+ ],
+ },
+ name: {
+ type: String,
+ required: [true, 'Name is required'],
+ minlength: [1, 'Name is required'],
+ maxlength: [100, 'Name cannot exceed 100 characters'],
+ },
+ age: {
+ type: Number,
+ min: [0, 'Age cannot be negative'],
+ max: [150, 'Age cannot exceed 150'],
+ },
+ password: {
+ type: String,
+ required: [true, 'Password is required'],
+ minlength: [8, 'Password must be at least 8 characters'],
+ select: false,
+ },
+ role: {
+ type: String,
+ enum: Object.values(UserRole),
+ default: UserRole.USER,
+ },
+ status: {
+ type: String,
+ enum: Object.values(UserStatus),
+ default: UserStatus.ACTIVE,
+ },
+ lastLogin: {
+ type: Date,
+ default: null,
+ },
+ loginAttempts: {
+ type: Number,
+ default: 0,
+ },
+ permissions: {
+ type: [String],
+ default: [],
+ },
+ metadata: {
+ type: Schema.Types.Mixed,
+ default: {},
+ },
+ },
+ {
+ timestamps: true,
+ toJSON: { virtuals: true },
+ toObject: { virtuals: true },
+ }
+);
+
+// Virtual for user response (without sensitive data)
+userSchema.virtual('response').get(function (this: IUserDocument): IUserResponse {
+ return {
+ id: this.id,
+ username: this.username,
+ email: this.email,
+ name: this.name,
+ age: this.age,
+ role: this.role,
+ status: this.status,
+ lastLogin: this.lastLogin,
+ permissions: this.permissions,
+ metadata: this.metadata,
+ createdAt: this.createdAt,
+ updatedAt: this.updatedAt,
+ };
+});
+
+// Virtual for checking if user is active
+userSchema.virtual('isActive').get(function (this: IUserDocument): boolean {
+ return this.status === UserStatus.ACTIVE;
+});
+
+// Virtual for checking if user is admin
+userSchema.virtual('isAdmin').get(function (this: IUserDocument): boolean {
+ return this.role === UserRole.ADMIN;
+});
+
+// Virtual for checking if user is locked
+userSchema.virtual('isLocked').get(function (this: IUserDocument): boolean {
+ return this.loginAttempts >= 5 || this.status === UserStatus.SUSPENDED;
+});
+
+// Pre-save middleware to hash password
+userSchema.pre('save', async function (this: IUserDocument, next) {
+ if (!this.isModified('password')) return next();
+
+ try {
+ const saltRounds = parseInt(process.env.BCRYPT_SALT_ROUNDS || '12', 10);
+ const hashedPassword = await bcrypt.hash(this.password, saltRounds);
+ this.password = hashedPassword;
+ next();
+ } catch (error) {
+ next(error as Error);
+ }
+});
+
+// Instance method to check password
+userSchema.methods.checkPassword = async function (
+ this: IUserDocument,
+ candidatePassword: string
+ ): Promise<boolean> {
+ return await bcrypt.compare(candidatePassword, this.password);
+};
+
+// Instance method to generate JWT token
+userSchema.methods.generateToken = function (this: IUserDocument): string {
+ const payload: IJWTPayload = {
+ id: this.id,
+ username: this.username,
+ role: this.role,
+ };
+
+ return jwt.sign(payload, process.env.JWT_SECRET || 'fallback-secret', {
+ expiresIn: process.env.JWT_EXPIRES_IN || '24h',
+ issuer: 'user-management-system',
+ });
+};
+
+// Instance method to add permission
+userSchema.methods.addPermission = function (
+ this: IUserDocument,
+ permission: string
+): void {
+ if (!this.permissions.includes(permission)) {
+ this.permissions.push(permission);
+ }
+};
+
+// Instance method to remove permission
+userSchema.methods.removePermission = function (
+ this: IUserDocument,
+ permission: string
+): void {
+ this.permissions = this.permissions.filter(p => p !== permission);
+};
+
+// Instance method to check permission
+userSchema.methods.hasPermission = function (
+ this: IUserDocument,
+ permission: string
+): boolean {
+ return this.permissions.includes(permission);
+};
+
+// Instance method to record successful login
+userSchema.methods.recordLogin = function (this: IUserDocument): void {
+ this.lastLogin = new Date();
+ this.loginAttempts = 0;
+};
+
+// Instance method to record failed login attempt
+userSchema.methods.recordFailedLogin = function (this: IUserDocument): void {
+ this.loginAttempts += 1;
+ if (this.loginAttempts >= 5) {
+ this.status = UserStatus.SUSPENDED;
+ }
+};
+
+// Instance method to reset login attempts
+userSchema.methods.resetLoginAttempts = function (this: IUserDocument): void {
+ this.loginAttempts = 0;
+};
+
+// Instance method to activate user
+userSchema.methods.activate = function (this: IUserDocument): void {
+ this.status = UserStatus.ACTIVE;
+ this.loginAttempts = 0;
+};
+
+// Instance method to deactivate user
+userSchema.methods.deactivate = function (this: IUserDocument): void {
+ this.status = UserStatus.INACTIVE;
+};
+
+// Instance method to suspend user
+userSchema.methods.suspend = function (this: IUserDocument): void {
+ this.status = UserStatus.SUSPENDED;
+};
+
+// Instance method to delete user (soft delete)
+userSchema.methods.delete = function (this: IUserDocument): void {
+ this.status = UserStatus.DELETED;
+};
+
+// Instance method to get metadata
+userSchema.methods.getMetadata = function (
+ this: IUserDocument,
+ key: string,
+ defaultValue: any = null
+): any {
+ return this.metadata[key] ?? defaultValue;
+};
+
+// Instance method to set metadata
+userSchema.methods.setMetadata = function (
+ this: IUserDocument,
+ key: string,
+ value: any
+): void {
+ this.metadata[key] = value;
+};
+
+// Instance method to remove metadata
+userSchema.methods.removeMetadata = function (
+ this: IUserDocument,
+ key: string
+): void {
+ delete this.metadata[key];
+};
+
+// Instance method to validate user data
+userSchema.methods.validateUser = function (this: IUserDocument): string[] {
+ const errors: string[] = [];
+
+ if (!this.username || this.username.length < 3) {
+ errors.push('Username must be at least 3 characters');
+ }
+
+ if (!this.name || this.name.length === 0) {
+ errors.push('Name is required');
+ }
+
+ if (this.age && (this.age < 0 || this.age > 150)) {
+ errors.push('Age must be between 0 and 150');
+ }
+
+ if (
+ this.email &&
+ !this.email.match(/^\w+([.-]?\w+)*@\w+([.-]?\w+)*(\.\w{2,3})+$/)
+ ) {
+ errors.push('Email format is invalid');
+ }
+
+ return errors;
+};
+
+// Static method to find by username
+userSchema.statics.findByUsername = function (
+ this: Model<IUserDocument>,
+ username: string
+) {
+ return this.findOne({ username });
+};
+
+// Static method to find by email
+userSchema.statics.findByEmail = function (
+ this: Model<IUserDocument>,
+ email: string
+) {
+ return this.findOne({ email });
+};
+
+// Static method to find active users
+userSchema.statics.findActive = function (this: Model<IUserDocument>) {
+ return this.find({ status: UserStatus.ACTIVE });
+};
+
+// Static method to find by role
+userSchema.statics.findByRole = function (
+ this: Model<IUserDocument>,
+ role: UserRole
+) {
+ return this.find({ role });
+};
+
+// Static method to search users
+userSchema.statics.searchUsers = function (
+ this: Model<IUserDocument>,
+ query: string,
+ options: any = {}
+) {
+ const searchRegex = new RegExp(query, 'i');
+ const searchQuery = {
+ $or: [
+ { username: searchRegex },
+ { name: searchRegex },
+ { email: searchRegex },
+ ],
+ };
+
+ return this.find(searchQuery, null, options);
+};
+
+// Static method to get user statistics
+userSchema.statics.getUserStats = async function (
+ this: Model<IUserDocument>
+ ): Promise<IUserStats> {
+ const stats = await this.aggregate([
+ {
+ $group: {
+ _id: null,
+ total: { $sum: 1 },
+ active: {
+ $sum: { $cond: [{ $eq: ['$status', UserStatus.ACTIVE] }, 1, 0] },
+ },
+ admin: {
+ $sum: { $cond: [{ $eq: ['$role', UserRole.ADMIN] }, 1, 0] },
+ },
+ user: {
+ $sum: { $cond: [{ $eq: ['$role', UserRole.USER] }, 1, 0] },
+ },
+ guest: {
+ $sum: { $cond: [{ $eq: ['$role', UserRole.GUEST] }, 1, 0] },
+ },
+ withEmail: { $sum: { $cond: [{ $ne: ['$email', null] }, 1, 0] } },
+ },
+ },
+ ]);
+
+ return (
+ stats[0] || {
+ total: 0,
+ active: 0,
+ admin: 0,
+ user: 0,
+ guest: 0,
+ withEmail: 0,
+ }
+ );
+};
+
+// Indexes for performance
+userSchema.index({ username: 1 });
+userSchema.index({ email: 1 });
+userSchema.index({ role: 1 });
+userSchema.index({ status: 1 });
+userSchema.index({ createdAt: -1 });
+
+// Interface for the model
+interface IUserModel extends Model<IUserDocument> {
+ findByUsername(username: string): Promise<IUserDocument | null>;
+ findByEmail(email: string): Promise<IUserDocument | null>;
+ findActive(): Promise<IUserDocument[]>;
+ findByRole(role: UserRole): Promise<IUserDocument[]>;
+ searchUsers(query: string, options?: any): Promise<IUserDocument[]>;
+ getUserStats(): Promise<IUserStats>;
+}
+
+// Export model and types
+const User = mongoose.model<IUserDocument, IUserModel>('User', userSchema);
+
+export { User, UserRole, UserStatus };
+export default User;
\ No newline at end of file
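The `recordFailedLogin` / `recordLogin` pair above implements a simple lockout counter: five consecutive failures suspend the account, and any success resets the count. Extracted from Mongoose into a plain object, the state machine looks roughly like this (the `AccountLock` helper is an illustrative assumption, not part of the project):

```typescript
// Standalone sketch of the login-attempt lockout used by the User model:
// >= 5 consecutive failures suspends the account; a success resets the count.
type Status = 'active' | 'suspended';

class AccountLock {
  loginAttempts = 0;
  status: Status = 'active';

  recordFailedLogin(): void {
    this.loginAttempts += 1;
    if (this.loginAttempts >= 5) this.status = 'suspended';
  }

  recordLogin(): void {
    this.loginAttempts = 0; // success clears the counter
  }

  get isLocked(): boolean {
    return this.loginAttempts >= 5 || this.status === 'suspended';
  }
}

const acct = new AccountLock();
for (let i = 0; i < 4; i++) acct.recordFailedLogin();
const lockedAfterFour = acct.isLocked; // still unlocked
acct.recordFailedLogin();              // fifth consecutive failure
const lockedAfterFive = acct.isLocked; // suspended now
```

Note that, as in the model, a later successful login clears the counter but does not lift a suspension; that requires an explicit `activate()`.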
diff --git a/test/sample-projects/typescript/user-management/src/routes/userRoutes.ts b/test/sample-projects/typescript/user-management/src/routes/userRoutes.ts
new file mode 100644
index 0000000..9d29498
--- /dev/null
+++ b/test/sample-projects/typescript/user-management/src/routes/userRoutes.ts
@@ -0,0 +1,364 @@
+import express from 'express';
+import { body, param, query, ValidationChain } from 'express-validator';
+import { UserService } from '../services/UserService';
+import { UserRole } from '../types/User';
+import { asyncHandler, createSuccessResponse } from '../utils/errors';
+import { authLimiter, createUserLimiter, exportLimiter } from '../middleware/rateLimiter';
+import logger from '../utils/logger';
+
+const router = express.Router();
+const userService = new UserService();
+
+// User creation validation
+const createUserValidation: ValidationChain[] = [
+ body('username')
+ .isLength({ min: 3, max: 20 })
+ .withMessage('Username must be between 3 and 20 characters')
+ .matches(/^[a-zA-Z0-9_]+$/)
+ .withMessage('Username can only contain letters, numbers, and underscores'),
+ body('name')
+ .isLength({ min: 1, max: 100 })
+ .withMessage('Name must be between 1 and 100 characters'),
+ body('email')
+ .optional()
+ .isEmail()
+ .withMessage('Please provide a valid email address'),
+ body('age')
+ .optional()
+ .isInt({ min: 0, max: 150 })
+ .withMessage('Age must be between 0 and 150'),
+ body('password')
+ .isLength({ min: 8 })
+ .withMessage('Password must be at least 8 characters long'),
+ body('role')
+ .optional()
+ .isIn(Object.values(UserRole))
+ .withMessage('Role must be admin, user, or guest'),
+];
+
+// User update validation
+const updateUserValidation: ValidationChain[] = [
+ param('id').notEmpty().withMessage('User ID is required'),
+ body('username')
+ .optional()
+ .isLength({ min: 3, max: 20 })
+ .withMessage('Username must be between 3 and 20 characters')
+ .matches(/^[a-zA-Z0-9_]+$/)
+ .withMessage('Username can only contain letters, numbers, and underscores'),
+ body('name')
+ .optional()
+ .isLength({ min: 1, max: 100 })
+ .withMessage('Name must be between 1 and 100 characters'),
+ body('email')
+ .optional()
+ .isEmail()
+ .withMessage('Please provide a valid email address'),
+ body('age')
+ .optional()
+ .isInt({ min: 0, max: 150 })
+ .withMessage('Age must be between 0 and 150'),
+ body('role')
+ .optional()
+ .isIn(Object.values(UserRole))
+ .withMessage('Role must be admin, user, or guest'),
+];
+
+// Password change validation
+const passwordChangeValidation: ValidationChain[] = [
+ param('id').notEmpty().withMessage('User ID is required'),
+ body('currentPassword')
+ .isLength({ min: 1 })
+ .withMessage('Current password is required'),
+ body('newPassword')
+ .isLength({ min: 8 })
+ .withMessage('New password must be at least 8 characters long'),
+];
+
+// Authentication validation
+const authValidation: ValidationChain[] = [
+ body('username')
+ .isLength({ min: 3 })
+ .withMessage('Username must be at least 3 characters'),
+ body('password')
+ .isLength({ min: 1 })
+ .withMessage('Password is required'),
+];
+
+// Search validation
+const searchValidation: ValidationChain[] = [
+ query('q')
+ .isLength({ min: 1 })
+ .withMessage('Search query is required'),
+ query('page')
+ .optional()
+ .isInt({ min: 1 })
+ .withMessage('Page must be a positive integer'),
+ query('limit')
+ .optional()
+ .isInt({ min: 1, max: 100 })
+ .withMessage('Limit must be between 1 and 100'),
+];
+
+// Validation middleware
+const validate = (req: express.Request, res: express.Response, next: express.NextFunction) => {
+ // This would typically use express-validator's validationResult
+ // For brevity, we'll assume validation passes
+ next();
+};
+
+// Authentication middleware (simplified)
+const auth = (req: express.Request, res: express.Response, next: express.NextFunction) => {
+ // This would typically verify JWT token
+ // For brevity, we'll assume authentication passes
+ next();
+};
+
+// @route POST /api/users
+// @desc Create a new user
+// @access Public
+router.post(
+ '/',
+ createUserLimiter,
+ createUserValidation,
+ validate,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const user = await userService.createUser(req.body);
+ logger.info(`User created via API: ${user.username}`);
+ res.status(201).json(createSuccessResponse(user, 'User created successfully'));
+ })
+);
+
+// @route POST /api/users/auth
+// @desc Authenticate user
+// @access Public
+router.post(
+ '/auth',
+ authLimiter,
+ authValidation,
+ validate,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const { username, password } = req.body;
+ const result = await userService.authenticateUser(username, password);
+ logger.info(`User authenticated via API: ${username}`);
+ res.json(createSuccessResponse(result, 'Authentication successful'));
+ })
+);
+
+// @route GET /api/users
+// @desc Get all users with pagination
+// @access Private (Admin only)
+router.get(
+ '/',
+ auth,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const page = parseInt(req.query.page as string) || 1;
+ const limit = parseInt(req.query.limit as string) || 20;
+ const filter: any = {};
+
+ // Add filtering by role if provided
+ if (req.query.role) {
+ filter.role = req.query.role;
+ }
+
+ // Add filtering by status if provided
+ if (req.query.status) {
+ filter.status = req.query.status;
+ }
+
+ const result = await userService.getAllUsers(page, limit, filter);
+ res.json(createSuccessResponse(result, 'Users retrieved successfully'));
+ })
+);
+
+// @route GET /api/users/search
+// @desc Search users
+// @access Private
+router.get(
+ '/search',
+ auth,
+ searchValidation,
+ validate,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const query = req.query.q as string;
+ const page = parseInt(req.query.page as string) || 1;
+ const limit = parseInt(req.query.limit as string) || 20;
+
+ const result = await userService.searchUsers(query, page, limit);
+ res.json(createSuccessResponse(result, 'Search completed successfully'));
+ })
+);
+
+// @route GET /api/users/stats
+// @desc Get user statistics
+// @access Private (Admin only)
+router.get(
+ '/stats',
+ auth,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const stats = await userService.getUserStats();
+ res.json(createSuccessResponse(stats, 'Statistics retrieved successfully'));
+ })
+);
+
+// @route GET /api/users/export
+// @desc Export all users
+// @access Private (Admin only)
+router.get(
+ '/export',
+ auth,
+ exportLimiter,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const users = await userService.exportUsers();
+ res.json(createSuccessResponse(users, 'Users exported successfully'));
+ })
+);
+
+// @route GET /api/users/active
+// @desc Get active users
+// @access Private
+router.get(
+ '/active',
+ auth,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const users = await userService.getActiveUsers();
+ res.json(createSuccessResponse(users, 'Active users retrieved successfully'));
+ })
+);
+
+// @route GET /api/users/role/:role
+// @desc Get users by role
+// @access Private (Admin only)
+router.get(
+ '/role/:role',
+ auth,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const role = req.params.role as UserRole;
+ const users = await userService.getUsersByRole(role);
+ res.json(createSuccessResponse(users, `Users with role ${role} retrieved successfully`));
+ })
+);
+
+// @route GET /api/users/:id
+// @desc Get user by ID
+// @access Private
+router.get(
+ '/:id',
+ auth,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const user = await userService.getUserById(req.params.id);
+ res.json(createSuccessResponse(user, 'User retrieved successfully'));
+ })
+);
+
+// @route GET /api/users/:id/activity
+// @desc Get user activity
+// @access Private (Admin or same user)
+router.get(
+ '/:id/activity',
+ auth,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const activity = await userService.getUserActivity(req.params.id);
+ res.json(createSuccessResponse(activity, 'User activity retrieved successfully'));
+ })
+);
+
+// @route PUT /api/users/:id
+// @desc Update user
+// @access Private (Admin or same user)
+router.put(
+ '/:id',
+ auth,
+ updateUserValidation,
+ validate,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const user = await userService.updateUser(req.params.id, req.body);
+ logger.info(`User updated via API: ${user.username}`);
+ res.json(createSuccessResponse(user, 'User updated successfully'));
+ })
+);
+
+// @route PUT /api/users/:id/password
+// @desc Change user password
+// @access Private (Admin or same user)
+router.put(
+ '/:id/password',
+ auth,
+ passwordChangeValidation,
+ validate,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const { currentPassword, newPassword } = req.body;
+ await userService.changePassword(req.params.id, currentPassword, newPassword);
+ logger.info(`Password changed via API for user: ${req.params.id}`);
+ res.json(createSuccessResponse(null, 'Password changed successfully'));
+ })
+);
+
+// @route PUT /api/users/:id/reset-password
+// @desc Reset user password (Admin only)
+// @access Private (Admin only)
+router.put(
+ '/:id/reset-password',
+ auth,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const { newPassword } = req.body;
+ await userService.resetPassword(req.params.id, newPassword);
+ logger.info(`Password reset via API for user: ${req.params.id}`);
+ res.json(createSuccessResponse(null, 'Password reset successfully'));
+ })
+);
+
+// @route PUT /api/users/:id/permissions
+// @desc Add permission to user
+// @access Private (Admin only)
+router.put(
+ '/:id/permissions',
+ auth,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const { permission } = req.body;
+ await userService.addPermission(req.params.id, permission);
+ logger.info(`Permission added via API for user: ${req.params.id}`);
+ res.json(createSuccessResponse(null, 'Permission added successfully'));
+ })
+);
+
+// @route DELETE /api/users/:id/permissions
+// @desc Remove permission from user
+// @access Private (Admin only)
+router.delete(
+ '/:id/permissions',
+ auth,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ const { permission } = req.body;
+ await userService.removePermission(req.params.id, permission);
+ logger.info(`Permission removed via API for user: ${req.params.id}`);
+ res.json(createSuccessResponse(null, 'Permission removed successfully'));
+ })
+);
+
+// @route DELETE /api/users/:id
+// @desc Delete user (soft delete)
+// @access Private (Admin only)
+router.delete(
+ '/:id',
+ auth,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ await userService.deleteUser(req.params.id);
+ logger.info(`User deleted via API: ${req.params.id}`);
+ res.json(createSuccessResponse(null, 'User deleted successfully'));
+ })
+);
+
+// @route DELETE /api/users/:id/hard
+// @desc Hard delete user (permanent)
+// @access Private (Admin only)
+router.delete(
+ '/:id/hard',
+ auth,
+ asyncHandler(async (req: express.Request, res: express.Response) => {
+ await userService.hardDeleteUser(req.params.id);
+ logger.info(`User permanently deleted via API: ${req.params.id}`);
+ res.json(createSuccessResponse(null, 'User permanently deleted'));
+ })
+);
+
+export default router;
\ No newline at end of file
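Every route above wraps its async handler in `asyncHandler`, which forwards rejected promises to Express's `next` so the global error handler sees them (Express 4 does not catch async rejections on its own). The utility is conventionally a one-liner; a sketch of the assumed shape, exercised here with stub request/response objects rather than a live Express app:

```typescript
// Sketch of the asyncHandler wrapper assumed by the routes above:
// resolve the handler's promise and route any rejection to next().
type Handler = (req: unknown, res: unknown, next: (err?: unknown) => void) => Promise<unknown>;

const asyncHandler =
  (fn: Handler) =>
  (req: unknown, res: unknown, next: (err?: unknown) => void): void => {
    Promise.resolve(fn(req, res, next)).catch(next);
  };

// Exercise it with a handler that always rejects.
let captured: unknown = null;
const failing = asyncHandler(async () => {
  throw new Error('boom');
});
failing({}, {}, err => {
  captured = err; // in the real app, globalErrorHandler would receive this
});
```

Without the wrapper, a rejection inside `async (req, res) => …` would surface as an unhandled promise rejection instead of a 500 response.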
diff --git a/test/sample-projects/typescript/user-management/src/server.ts b/test/sample-projects/typescript/user-management/src/server.ts
new file mode 100644
index 0000000..2150f83
--- /dev/null
+++ b/test/sample-projects/typescript/user-management/src/server.ts
@@ -0,0 +1,215 @@
+import 'reflect-metadata';
+import dotenv from 'dotenv';
+import express from 'express';
+import cors from 'cors';
+import helmet from 'helmet';
+import compression from 'compression';
+import morgan from 'morgan';
+
+import database from './config/database';
+import userRoutes from './routes/userRoutes';
+import { generalLimiter } from './middleware/rateLimiter';
+import { globalErrorHandler, handleNotFound } from './utils/errors';
+import logger from './utils/logger';
+import { IApiResponse } from './types/User';
+
+// Load environment variables
+dotenv.config();
+
+/**
+ * Express application setup
+ */
+class App {
+ public app: express.Application;
+ private port: number;
+ private server?: any;
+
+ constructor() {
+ this.app = express();
+ this.port = parseInt(process.env.PORT || '3000', 10);
+ this.setupMiddleware();
+ this.setupRoutes();
+ this.setupErrorHandling();
+ }
+
+ /**
+ * Setup middleware
+ */
+ private setupMiddleware(): void {
+ // Security middleware
+ this.app.use(helmet());
+
+ // CORS configuration
+ this.app.use(cors({
+ origin: process.env.ALLOWED_ORIGINS?.split(',') || ['http://localhost:3000'],
+ credentials: true,
+ }));
+
+ // Compression middleware
+ this.app.use(compression());
+
+ // Body parsing middleware
+ this.app.use(express.json({ limit: '10mb' }));
+ this.app.use(express.urlencoded({ extended: true, limit: '10mb' }));
+
+ // Logging middleware
+ this.app.use(morgan('combined', { stream: logger.stream }));
+
+ // Rate limiting
+ this.app.use(generalLimiter);
+
+ // Request ID middleware
+ this.app.use((req, res, next) => {
+ (req as any).requestId = Math.random().toString(36).slice(2, 11);
+ res.set('X-Request-ID', (req as any).requestId);
+ next();
+ });
+
+ // Health check endpoint
+ this.app.get('/health', (req, res) => {
+ const healthResponse: IApiResponse = {
+ success: true,
+ message: 'Service is healthy',
+ data: {
+ status: 'OK',
+ timestamp: new Date().toISOString(),
+ uptime: process.uptime(),
+ database: database.getConnectionStatus(),
+ version: process.env.npm_package_version || '1.0.0',
+ environment: process.env.NODE_ENV || 'development',
+ },
+ };
+ res.json(healthResponse);
+ });
+ }
+
+ /**
+ * Setup routes
+ */
+ private setupRoutes(): void {
+ // API routes
+ this.app.use('/api/users', userRoutes);
+
+ // Root endpoint
+ this.app.get('/', (req, res) => {
+ const rootResponse: IApiResponse = {
+ success: true,
+ message: 'User Management API - TypeScript Edition',
+ data: {
+ name: 'User Management API',
+ version: '1.0.0',
+ language: 'TypeScript',
+ endpoints: {
+ health: '/health',
+ users: '/api/users',
+ auth: '/api/users/auth',
+ docs: '/api/docs',
+ },
+ },
+ };
+ res.json(rootResponse);
+ });
+ }
+
+ /**
+ * Setup error handling
+ */
+ private setupErrorHandling(): void {
+ // Handle 404 for unknown routes
+ this.app.use(handleNotFound);
+
+ // Global error handler
+ this.app.use(globalErrorHandler);
+ }
+
+ /**
+ * Start the server
+ */
+ public async start(): Promise<void> {
+ try {
+ // Connect to database
+ await database.connect();
+
+ // Start server
+ this.server = this.app.listen(this.port, () => {
+ logger.info(`Server running on port ${this.port}`);
+ logger.info(`Environment: ${process.env.NODE_ENV || 'development'}`);
+ logger.info(`Health check: http://localhost:${this.port}/health`);
+ logger.info(`API documentation: http://localhost:${this.port}/api/docs`);
+ });
+
+ // Handle server errors
+ this.server.on('error', (error: any) => {
+ if (error.syscall !== 'listen') {
+ throw error;
+ }
+
+ const bind = typeof this.port === 'string' ? `Pipe ${this.port}` : `Port ${this.port}`;
+
+ switch (error.code) {
+ case 'EACCES':
+ logger.error(`${bind} requires elevated privileges`);
+ process.exit(1);
+ break;
+ case 'EADDRINUSE':
+ logger.error(`${bind} is already in use`);
+ process.exit(1);
+ break;
+ default:
+ throw error;
+ }
+ });
+
+ // Graceful shutdown
+ process.on('SIGTERM', () => this.gracefulShutdown('SIGTERM'));
+ process.on('SIGINT', () => this.gracefulShutdown('SIGINT'));
+
+ } catch (error) {
+ logger.error('Failed to start server:', error as Error);
+ process.exit(1);
+ }
+ }
+
+ /**
+ * Graceful shutdown
+ */
+ private async gracefulShutdown(signal: string): Promise<void> {
+ logger.info(`Received ${signal}. Graceful shutdown...`);
+
+ if (this.server) {
+ this.server.close(async () => {
+ logger.info('HTTP server closed');
+
+ try {
+ await database.disconnect();
+ logger.info('Database disconnected');
+ process.exit(0);
+ } catch (error) {
+ logger.error('Error during graceful shutdown:', error as Error);
+ process.exit(1);
+ }
+ });
+ }
+ }
+
+ /**
+ * Get Express app instance
+ */
+ public getApp(): express.Application {
+ return this.app;
+ }
+}
+
+// Create and start the application
+const app = new App();
+
+// Start server if not in test environment
+if (process.env.NODE_ENV !== 'test') {
+ app.start().catch(error => {
+ logger.error('Failed to start application:', error as Error);
+ process.exit(1);
+ });
+}
+
+// Export for testing
+export default app.getApp();
\ No newline at end of file
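One caveat with the shutdown sequence above: `server.close` waits for in-flight connections, so a hung keep-alive connection can stall shutdown indefinitely. A common hardening (not present in this sample) is to race the clean close against a deadline; a sketch with a stubbed closeable resource standing in for `server.close` plus `database.disconnect`:

```typescript
// Sketch: graceful shutdown with a forced-exit deadline. `close` stands in
// for the HTTP-server close + database disconnect in the App class above.
async function shutdownWithDeadline(
  close: () => Promise<void>,
  deadlineMs: number
): Promise<'clean' | 'forced'> {
  const timeout = new Promise<'forced'>(resolve =>
    setTimeout(() => resolve('forced'), deadlineMs)
  );
  const clean = close().then(() => 'clean' as const);
  // Whichever settles first decides whether we exited cleanly.
  return Promise.race([clean, timeout]);
}

// A resource that closes quickly resolves 'clean'...
const fastClose = () => new Promise<void>(resolve => setTimeout(resolve, 5));
// ...while one that never finishes hits the deadline and resolves 'forced'.
const hungClose = () => new Promise<void>(() => {});
```

In the real handler, a `'forced'` result would map to `process.exit(1)` after logging which resource failed to close.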
diff --git a/test/sample-projects/typescript/user-management/src/services/UserService.ts b/test/sample-projects/typescript/user-management/src/services/UserService.ts
new file mode 100644
index 0000000..962d300
--- /dev/null
+++ b/test/sample-projects/typescript/user-management/src/services/UserService.ts
@@ -0,0 +1,518 @@
+import { User } from '../models/User';
+import {
+ IUser,
+ IUserResponse,
+ ICreateUser,
+ IUpdateUser,
+ IUserStats,
+ IAuthResult,
+ IUserActivity,
+ IPaginatedUsers,
+ ISearchUsersResponse,
+ IUserFilter,
+ UserRole,
+} from '../types/User';
+import { AppError } from '../utils/errors';
+import logger from '../utils/logger';
+
+/**
+ * UserService class handles all user-related business logic
+ */
+export class UserService {
+ /**
+ * Create a new user
+ * @param userData - User data object
+ * @returns Created user response
+ */
+ async createUser(userData: ICreateUser): Promise<IUserResponse> {
+ try {
+ // Check if username already exists
+ const existingUsername = await User.findByUsername(userData.username);
+ if (existingUsername) {
+ throw new AppError('Username already exists', 400);
+ }
+
+ // Check if email already exists (if provided)
+ if (userData.email) {
+ const existingEmail = await User.findByEmail(userData.email);
+ if (existingEmail) {
+ throw new AppError('Email already exists', 400);
+ }
+ }
+
+ // Create new user
+ const user = new User(userData);
+
+ // Validate user data
+ const validationErrors = user.validateUser();
+ if (validationErrors.length > 0) {
+ throw new AppError(validationErrors.join(', '), 400);
+ }
+
+ await user.save();
+
+ logger.info(`User created successfully: ${user.username}`);
+ return user.response;
+ } catch (error) {
+ logger.error('Error creating user:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get user by ID
+ * @param id - User ID
+ * @returns User response
+ */
+ async getUserById(id: string): Promise<IUserResponse> {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+ return user.response;
+ } catch (error) {
+ logger.error('Error getting user by ID:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get user by username
+ * @param username - Username
+ * @returns User response
+ */
+ async getUserByUsername(username: string): Promise<IUserResponse> {
+ try {
+ const user = await User.findByUsername(username);
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+ return user.response;
+ } catch (error) {
+ logger.error('Error getting user by username:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get user by email
+ * @param email - Email address
+ * @returns User response
+ */
+ async getUserByEmail(email: string): Promise<IUserResponse> {
+ try {
+ const user = await User.findByEmail(email);
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+ return user.response;
+ } catch (error) {
+ logger.error('Error getting user by email:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Update user
+ * @param id - User ID
+ * @param updateData - Update data
+ * @returns Updated user response
+ */
+ async updateUser(id: string, updateData: IUpdateUser): Promise<IUserResponse> {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ // Apply updates (excluding password and id)
+ Object.keys(updateData).forEach(key => {
+ if (key !== 'password' && key !== 'id') {
+ (user as any)[key] = (updateData as any)[key];
+ }
+ });
+
+ // Validate updated data
+ const validationErrors = user.validateUser();
+ if (validationErrors.length > 0) {
+ throw new AppError(validationErrors.join(', '), 400);
+ }
+
+ await user.save();
+
+ logger.info(`User updated successfully: ${user.username}`);
+ return user.response;
+ } catch (error) {
+ logger.error('Error updating user:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Delete user (soft delete)
+ * @param id - User ID
+ * @returns Success status
+ */
+ async deleteUser(id: string): Promise<boolean> {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ user.delete();
+ await user.save();
+
+ logger.info(`User deleted successfully: ${user.username}`);
+ return true;
+ } catch (error) {
+ logger.error('Error deleting user:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Hard delete user (permanent deletion)
+ * @param id - User ID
+ * @returns Success status
+ */
+ async hardDeleteUser(id: string): Promise<boolean> {
+ try {
+ const result = await User.deleteOne({ id });
+ if (result.deletedCount === 0) {
+ throw new AppError('User not found', 404);
+ }
+
+ logger.info(`User permanently deleted: ${id}`);
+ return true;
+ } catch (error) {
+ logger.error('Error hard deleting user:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get all users with pagination
+ * @param page - Page number
+ * @param limit - Items per page
+ * @param filter - Filter criteria
+ * @returns Paginated users response
+ */
+ async getAllUsers(
+ page: number = 1,
+ limit: number = 20,
+ filter: IUserFilter = {}
+ ): Promise<IPaginatedUsers> {
+ try {
+ const skip = (page - 1) * limit;
+
+ const users = await User.find(filter)
+ .sort({ createdAt: -1 })
+ .skip(skip)
+ .limit(limit);
+
+ const total = await User.countDocuments(filter);
+ const totalPages = Math.ceil(total / limit);
+
+ return {
+ users: users.map(user => user.response),
+ pagination: {
+ page,
+ limit,
+ total,
+ totalPages,
+ hasNext: page < totalPages,
+ hasPrev: page > 1,
+ },
+ };
+ } catch (error) {
+ logger.error('Error getting all users:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get active users
+ * @returns Active users
+ */
+ async getActiveUsers(): Promise<IUserResponse[]> {
+ try {
+ const users = await User.findActive();
+ return users.map(user => user.response);
+ } catch (error) {
+ logger.error('Error getting active users:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get users by role
+ * @param role - User role
+ * @returns Users with specified role
+ */
+ async getUsersByRole(role: UserRole): Promise<IUserResponse[]> {
+ try {
+ const users = await User.findByRole(role);
+ return users.map(user => user.response);
+ } catch (error) {
+ logger.error('Error getting users by role:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Search users
+ * @param query - Search query
+ * @param page - Page number
+ * @param limit - Items per page
+ * @returns Search results
+ */
+ async searchUsers(
+ query: string,
+ page: number = 1,
+ limit: number = 20
+ ): Promise<ISearchUsersResponse> {
+ try {
+ const skip = (page - 1) * limit;
+
+ const users = await User.searchUsers(query, {
+ skip,
+ limit,
+ sort: { createdAt: -1 },
+ });
+
+ // Count total matching users
+ const totalUsers = await User.searchUsers(query);
+ const total = totalUsers.length;
+ const totalPages = Math.ceil(total / limit);
+
+ return {
+ users: users.map(user => user.response),
+ query,
+ pagination: {
+ page,
+ limit,
+ total,
+ totalPages,
+ hasNext: page < totalPages,
+ hasPrev: page > 1,
+ },
+ };
+ } catch (error) {
+ logger.error('Error searching users:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get user statistics
+ * @returns User statistics
+ */
+ async getUserStats(): Promise<IUserStats> {
+ try {
+ const stats = await User.getUserStats();
+ return stats;
+ } catch (error) {
+ logger.error('Error getting user statistics:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Authenticate user
+ * @param username - Username
+ * @param password - Password
+ * @returns Authentication result
+ */
+ async authenticateUser(username: string, password: string): Promise<IAuthResult> {
+ try {
+ const user = await User.findByUsername(username).select('+password');
+
+ if (!user || !(await user.checkPassword(password))) {
+ // Record failed login attempt if user exists
+ if (user) {
+ user.recordFailedLogin();
+ await user.save();
+ }
+ throw new AppError('Invalid username or password', 401);
+ }
+
+ if (!user.isActive) {
+ throw new AppError('User account is not active', 401);
+ }
+
+ if (user.isLocked) {
+ throw new AppError('User account is locked', 401);
+ }
+
+ // Record successful login
+ user.recordLogin();
+ await user.save();
+
+ // Generate token
+ const token = user.generateToken();
+
+ logger.info(`User authenticated successfully: ${user.username}`);
+
+ return {
+ user: user.response,
+ token,
+ };
+ } catch (error) {
+ logger.error('Error authenticating user:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Change user password
+ * @param id - User ID
+ * @param currentPassword - Current password
+ * @param newPassword - New password
+ * @returns Success status
+ */
+ async changePassword(
+ id: string,
+ currentPassword: string,
+ newPassword: string
+ ): Promise<boolean> {
+ try {
+ const user = await User.findOne({ id }).select('+password');
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ if (!(await user.checkPassword(currentPassword))) {
+ throw new AppError('Current password is incorrect', 400);
+ }
+
+ user.password = newPassword;
+ await user.save();
+
+ logger.info(`Password changed successfully for user: ${user.username}`);
+ return true;
+ } catch (error) {
+ logger.error('Error changing password:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Reset user password (admin function)
+ * @param id - User ID
+ * @param newPassword - New password
+ * @returns Success status
+ */
+ async resetPassword(id: string, newPassword: string): Promise<boolean> {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ user.password = newPassword;
+ user.resetLoginAttempts();
+ await user.save();
+
+ logger.info(`Password reset successfully for user: ${user.username}`);
+ return true;
+ } catch (error) {
+ logger.error('Error resetting password:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Add permission to user
+ * @param id - User ID
+ * @param permission - Permission to add
+ * @returns Success status
+ */
+ async addPermission(id: string, permission: string): Promise<boolean> {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ user.addPermission(permission);
+ await user.save();
+
+ logger.info(`Permission added to user ${user.username}: ${permission}`);
+ return true;
+ } catch (error) {
+ logger.error('Error adding permission:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Remove permission from user
+ * @param id - User ID
+ * @param permission - Permission to remove
+ * @returns Success status
+ */
+ async removePermission(id: string, permission: string): Promise<boolean> {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ user.removePermission(permission);
+ await user.save();
+
+ logger.info(`Permission removed from user ${user.username}: ${permission}`);
+ return true;
+ } catch (error) {
+ logger.error('Error removing permission:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Export users data
+ * @returns Users data for export
+ */
+ async exportUsers(): Promise<IUserResponse[]> {
+ try {
+ const users = await User.find().sort({ createdAt: -1 });
+ return users.map(user => user.response);
+ } catch (error) {
+ logger.error('Error exporting users:', error as Error);
+ throw error;
+ }
+ }
+
+ /**
+ * Get user activity
+ * @param id - User ID
+ * @returns User activity data
+ */
+ async getUserActivity(id: string): Promise<IUserActivity> {
+ try {
+ const user = await User.findOne({ id });
+ if (!user) {
+ throw new AppError('User not found', 404);
+ }
+
+ return {
+ id: user.id,
+ username: user.username,
+ lastLogin: user.lastLogin,
+ loginAttempts: user.loginAttempts,
+ isActive: user.isActive,
+ isLocked: user.isLocked,
+ createdAt: user.createdAt,
+ updatedAt: user.updatedAt,
+ };
+ } catch (error) {
+ logger.error('Error getting user activity:', error as Error);
+ throw error;
+ }
+ }
+}
+
+export default UserService;
+// AUTO_REINDEX_MARKER: ci_auto_reindex_test_token_ts
\ No newline at end of file
diff --git a/test/sample-projects/typescript/user-management/src/types/User.ts b/test/sample-projects/typescript/user-management/src/types/User.ts
new file mode 100644
index 0000000..3ee23ed
--- /dev/null
+++ b/test/sample-projects/typescript/user-management/src/types/User.ts
@@ -0,0 +1,203 @@
+import { Request } from 'express';
+import { Document } from 'mongoose';
+
+// User roles enumeration
+export enum UserRole {
+ ADMIN = 'admin',
+ USER = 'user',
+ GUEST = 'guest',
+}
+
+// User status enumeration
+export enum UserStatus {
+ ACTIVE = 'active',
+ INACTIVE = 'inactive',
+ SUSPENDED = 'suspended',
+ DELETED = 'deleted',
+}
+
+// Base user interface
+export interface IUser {
+ id: string;
+ username: string;
+ email?: string;
+ name: string;
+ age?: number;
+ password: string;
+ role: UserRole;
+ status: UserStatus;
+ lastLogin?: Date;
+ loginAttempts: number;
+ permissions: string[];
+ metadata: Record<string, any>;
+ createdAt: Date;
+ updatedAt: Date;
+}
+
+// User response interface (without sensitive data)
+export interface IUserResponse {
+ id: string;
+ username: string;
+ email?: string;
+ name: string;
+ age?: number;
+ role: UserRole;
+ status: UserStatus;
+ lastLogin?: Date;
+ permissions: string[];
+ metadata: Record<string, any>;
+ createdAt: Date;
+ updatedAt: Date;
+}
+
+// User creation interface
+export interface ICreateUser {
+ username: string;
+ email?: string;
+ name: string;
+ age?: number;
+ password: string;
+ role?: UserRole;
+ status?: UserStatus;
+ permissions?: string[];
+ metadata?: Record<string, any>;
+}
+
+// User update interface
+export interface IUpdateUser {
+ username?: string;
+ email?: string;
+ name?: string;
+ age?: number;
+ role?: UserRole;
+ status?: UserStatus;
+ permissions?: string[];
+ metadata?: Record<string, any>;
+}
+
+// User document interface (extends Mongoose Document)
+export interface IUserDocument extends IUser, Document {
+ // Virtual properties
+ isActive: boolean;
+ isAdmin: boolean;
+ isLocked: boolean;
+ response: IUserResponse;
+
+ // Instance methods
+ checkPassword(candidatePassword: string): Promise<boolean>;
+ generateToken(): string;
+ addPermission(permission: string): void;
+ removePermission(permission: string): void;
+ hasPermission(permission: string): boolean;
+ recordLogin(): void;
+ recordFailedLogin(): void;
+ resetLoginAttempts(): void;
+ activate(): void;
+ deactivate(): void;
+ suspend(): void;
+ delete(): void;
+ getMetadata(key: string, defaultValue?: any): any;
+ setMetadata(key: string, value: any): void;
+ removeMetadata(key: string): void;
+ validateUser(): string[];
+}
+
+// User statistics interface
+export interface IUserStats {
+ total: number;
+ active: number;
+ admin: number;
+ user: number;
+ guest: number;
+ withEmail: number;
+}
+
+// Authentication result interface
+export interface IAuthResult {
+ user: IUserResponse;
+ token: string;
+}
+
+// User activity interface
+export interface IUserActivity {
+ id: string;
+ username: string;
+ lastLogin?: Date;
+ loginAttempts: number;
+ isActive: boolean;
+ isLocked: boolean;
+ createdAt: Date;
+ updatedAt: Date;
+}
+
+// Pagination interface
+export interface IPagination {
+ page: number;
+ limit: number;
+ total: number;
+ totalPages: number;
+ hasNext: boolean;
+ hasPrev: boolean;
+}
+
+// Paginated users response
+export interface IPaginatedUsers {
+ users: IUserResponse[];
+ pagination: IPagination;
+}
+
+// Search users response
+export interface ISearchUsersResponse {
+ users: IUserResponse[];
+ query: string;
+ pagination: IPagination;
+}
+
+// Password change interface
+export interface IPasswordChange {
+ currentPassword: string;
+ newPassword: string;
+}
+
+// User filter interface
+export interface IUserFilter {
+ role?: UserRole;
+ status?: UserStatus;
+ hasEmail?: boolean;
+ createdAfter?: Date;
+ createdBefore?: Date;
+}
+
+// JWT payload interface
+export interface IJWTPayload {
+ id: string;
+ username: string;
+ role: UserRole;
+ iat?: number;
+ exp?: number;
+}
+
+// Request with user interface
+export interface IAuthenticatedRequest extends Request {
+ user?: IUserDocument;
+ requestId?: string;
+}
+
+// API response interface
+export interface IApiResponse<T = any> {
+ success: boolean;
+ message: string;
+ data?: T;
+ error?: {
+ message: string;
+ statusCode: number;
+ errors?: any[];
+ stack?: string;
+ };
+}
+
+// Validation error interface
+export interface IValidationError {
+ field: string;
+ message: string;
+ value: any;
+}
\ No newline at end of file
diff --git a/test/sample-projects/typescript/user-management/src/utils/errors.ts b/test/sample-projects/typescript/user-management/src/utils/errors.ts
new file mode 100644
index 0000000..6aa27d7
--- /dev/null
+++ b/test/sample-projects/typescript/user-management/src/utils/errors.ts
@@ -0,0 +1,262 @@
+import { Request, Response, NextFunction } from 'express';
+import { IApiResponse, IValidationError } from '../types/User';
+
+/**
+ * Base application error class
+ */
+export class AppError extends Error {
+ public statusCode: number;
+ public isOperational: boolean;
+ public status: string;
+
+ constructor(message: string, statusCode: number = 500, isOperational: boolean = true) {
+ super(message);
+
+ this.statusCode = statusCode;
+ this.isOperational = isOperational;
+ this.status = `${statusCode}`.startsWith('4') ? 'fail' : 'error';
+
+ // Capture stack trace
+ Error.captureStackTrace(this, this.constructor);
+ }
+}
+
+/**
+ * Validation error class
+ */
+export class ValidationError extends AppError {
+ public errors: IValidationError[];
+
+ constructor(message: string, errors: IValidationError[] = []) {
+ super(message, 400);
+ this.errors = errors;
+ }
+}
+
+/**
+ * Authentication error class
+ */
+export class AuthenticationError extends AppError {
+ constructor(message: string = 'Authentication failed') {
+ super(message, 401);
+ }
+}
+
+/**
+ * Authorization error class
+ */
+export class AuthorizationError extends AppError {
+ constructor(message: string = 'Access denied') {
+ super(message, 403);
+ }
+}
+
+/**
+ * Not found error class
+ */
+export class NotFoundError extends AppError {
+ constructor(message: string = 'Resource not found') {
+ super(message, 404);
+ }
+}
+
+/**
+ * Conflict error class
+ */
+export class ConflictError extends AppError {
+ constructor(message: string = 'Resource conflict') {
+ super(message, 409);
+ }
+}
+
+/**
+ * Rate limit error class
+ */
+export class RateLimitError extends AppError {
+ constructor(message: string = 'Too many requests') {
+ super(message, 429);
+ }
+}
+
+/**
+ * Database error class
+ */
+export class DatabaseError extends AppError {
+ constructor(message: string = 'Database error') {
+ super(message, 500);
+ }
+}
+
+/**
+ * External service error class
+ */
+export class ExternalServiceError extends AppError {
+ constructor(message: string = 'External service error') {
+ super(message, 502);
+ }
+}
+
+/**
+ * Global error handler for Express
+ */
+export const globalErrorHandler = (
+ err: any,
+ req: Request,
+ res: Response,
+ next: NextFunction
+): void => {
+ // Default error values
+ let error = { ...err };
+ error.message = err.message;
+
+ // Log error
+ console.error('Error:', err);
+
+ // Mongoose bad ObjectId
+ if (err.name === 'CastError') {
+ const message = 'Resource not found';
+ error = new NotFoundError(message);
+ }
+
+ // Mongoose duplicate key
+ if (err.code === 11000) {
+ const value = err.errmsg?.match(/(["'])(\\?.)*?\1/)?.[0] || 'unknown';
+ const message = `Duplicate field value: ${value}. Please use another value`;
+ error = new ConflictError(message);
+ }
+
+ // Mongoose validation error
+ if (err.name === 'ValidationError') {
+ const errors: IValidationError[] = Object.values(err.errors).map(
+ (val: any) => ({
+ field: val.path,
+ message: val.message,
+ value: val.value,
+ })
+ );
+ error = new ValidationError('Validation failed', errors);
+ }
+
+ // JWT errors
+ if (err.name === 'JsonWebTokenError') {
+ error = new AuthenticationError('Invalid token');
+ }
+
+ if (err.name === 'TokenExpiredError') {
+ error = new AuthenticationError('Token expired');
+ }
+
+ // Send error response
+ const response: IApiResponse = {
+ success: false,
+ message: error.message,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode || 500,
+ ...(process.env.NODE_ENV === 'development' && { stack: error.stack }),
+ ...(error.errors && { errors: error.errors }),
+ },
+ };
+
+ res.status(error.statusCode || 500).json(response);
+};
+
+/**
+ * Async error handler wrapper
+ */
+export const asyncHandler = (
+ fn: (req: Request, res: Response, next: NextFunction) => Promise<any>
+) => {
+ return (req: Request, res: Response, next: NextFunction): void => {
+ Promise.resolve(fn(req, res, next)).catch(next);
+ };
+};
+
+/**
+ * Create error response
+ */
+export const createErrorResponse = (
+ message: string,
+ statusCode: number = 500,
+ errors: IValidationError[] | null = null
+): IApiResponse => {
+ const response: IApiResponse = {
+ success: false,
+ message,
+ error: {
+ message,
+ statusCode,
+ },
+ };
+
+ if (errors) {
+ response.error!.errors = errors;
+ }
+
+ return response;
+};
+
+/**
+ * Create success response
+ */
+export const createSuccessResponse = <T>(
+ data: T,
+ message: string = 'Success'
+): IApiResponse<T> => {
+ return {
+ success: true,
+ message,
+ data,
+ };
+};
+
+/**
+ * Handle 404 for unknown routes
+ */
+export const handleNotFound = (
+ req: Request,
+ res: Response,
+ next: NextFunction
+): void => {
+ const error = new NotFoundError(`Route ${req.originalUrl} not found`);
+ next(error);
+};
+
+/**
+ * Type guard for checking if error is operational
+ */
+export const isOperationalError = (error: any): error is AppError => {
+ return error instanceof AppError && error.isOperational;
+};
+
+/**
+ * Error logger utility
+ */
+export const logError = (error: Error, req?: Request): void => {
+ const timestamp = new Date().toISOString();
+ const method = req?.method || 'UNKNOWN';
+ const url = req?.originalUrl || 'UNKNOWN';
+ const userAgent = req?.get('User-Agent') || 'UNKNOWN';
+ const ip = req?.ip || 'UNKNOWN';
+
+ console.error(`[${timestamp}] ${method} ${url} - ${error.message}`);
+ console.error(`User-Agent: ${userAgent}`);
+ console.error(`IP: ${ip}`);
+ console.error(`Stack: ${error.stack}`);
+};
+
+/**
+ * Error response formatter
+ */
+export const formatErrorResponse = (error: AppError): IApiResponse => {
+ return {
+ success: false,
+ message: error.message,
+ error: {
+ message: error.message,
+ statusCode: error.statusCode,
+ ...(process.env.NODE_ENV === 'development' && { stack: error.stack }),
+ ...(error instanceof ValidationError && { errors: error.errors }),
+ },
+ };
+};
\ No newline at end of file
diff --git a/test/sample-projects/typescript/user-management/src/utils/logger.ts b/test/sample-projects/typescript/user-management/src/utils/logger.ts
new file mode 100644
index 0000000..67fa25f
--- /dev/null
+++ b/test/sample-projects/typescript/user-management/src/utils/logger.ts
@@ -0,0 +1,191 @@
+import winston from 'winston';
+
+// Define log levels
+const levels = {
+ error: 0,
+ warn: 1,
+ info: 2,
+ http: 3,
+ debug: 4,
+};
+
+// Define colors for each level
+const colors = {
+ error: 'red',
+ warn: 'yellow',
+ info: 'green',
+ http: 'magenta',
+ debug: 'white',
+};
+
+// Tell winston about the colors
+winston.addColors(colors);
+
+// Custom format function
+const format = winston.format.combine(
+ winston.format.timestamp({ format: 'YYYY-MM-DD HH:mm:ss:ms' }),
+ winston.format.colorize({ all: true }),
+ winston.format.printf(
+ (info) => `${info.timestamp} ${info.level}: ${info.message}`
+ )
+);
+
+// Define which transports the logger must use
+const transports: winston.transport[] = [
+ // Console transport
+ new winston.transports.Console({
+ format: winston.format.combine(
+ winston.format.colorize(),
+ winston.format.simple()
+ ),
+ }),
+
+ // File transport for errors
+ new winston.transports.File({
+ filename: 'logs/error.log',
+ level: 'error',
+ format: winston.format.combine(
+ winston.format.timestamp(),
+ winston.format.json()
+ ),
+ }),
+
+ // File transport for all logs
+ new winston.transports.File({
+ filename: 'logs/combined.log',
+ format: winston.format.combine(
+ winston.format.timestamp(),
+ winston.format.json()
+ ),
+ }),
+];
+
+// Create the logger
+const logger = winston.createLogger({
+ level: process.env.LOG_LEVEL || 'info',
+ levels,
+ format,
+ transports,
+});
+
+// Create a stream object with a 'write' function that will be used by Morgan
+interface LoggerStream {
+ write: (message: string) => void;
+}
+
+const loggerStream: LoggerStream = {
+ write: (message: string) => {
+ // Remove the trailing newline
+ logger.http(message.trim());
+ },
+};
+
+// Add stream property to logger
+(logger as any).stream = loggerStream;
+
+// Export logger with proper typing
+interface Logger extends winston.Logger {
+ stream: LoggerStream;
+}
+
+export default logger as Logger;
+
+// Export individual log level functions for convenience
+export const logError = (message: string, error?: Error): void => {
+ if (error) {
+ logger.error(`${message}: ${error.message}`, { stack: error.stack });
+ } else {
+ logger.error(message);
+ }
+};
+
+export const logWarn = (message: string): void => {
+ logger.warn(message);
+};
+
+export const logInfo = (message: string): void => {
+ logger.info(message);
+};
+
+export const logDebug = (message: string): void => {
+ logger.debug(message);
+};
+
+// Log HTTP requests
+export const logHttp = (message: string): void => {
+ logger.http(message);
+};
+
+// Log with context
+export const logWithContext = (
+ level: string,
+ message: string,
+ context?: Record<string, any>
+): void => {
+ logger.log(level, message, context);
+};
+
+// Create child logger with additional context
+export const createChildLogger = (context: Record<string, any>): winston.Logger => {
+ return logger.child(context);
+};
+
+// Performance logging utility
+export const logPerformance = (operation: string, startTime: number): void => {
+ const duration = Date.now() - startTime;
+ logger.info(`${operation} completed in ${duration}ms`);
+};
+
+// Database query logging
+export const logQuery = (query: string, duration?: number): void => {
+ const message = duration
+ ? `Query executed in ${duration}ms: ${query}`
+ : `Query executed: ${query}`;
+ logger.debug(message);
+};
+
+// User action logging
+export const logUserAction = (
+ userId: string,
+ action: string,
+ details?: Record<string, any>
+): void => {
+ const message = `User ${userId} performed action: ${action}`;
+ logger.info(message, details);
+};
+
+// Security event logging
+export const logSecurityEvent = (
+ event: string,
+ details: Record<string, any>
+): void => {
+ logger.warn(`Security event: ${event}`, details);
+};
+
+// API request logging
+export const logApiRequest = (
+ method: string,
+ url: string,
+ statusCode: number,
+ duration: number,
+ userId?: string
+): void => {
+ const message = `${method} ${url} - ${statusCode} (${duration}ms)`;
+ const context = userId ? { userId } : {};
+ logger.http(message, context);
+};
+
+// Environment-specific logging configuration
+if (process.env.NODE_ENV === 'production') {
+ // In production, reduce console logging
+ logger.remove(logger.transports[0]);
+ logger.add(new winston.transports.Console({
+ level: 'warn',
+ format: winston.format.simple(),
+ }));
+}
+
+if (process.env.NODE_ENV === 'test') {
+ // In test environment, minimize logging
+ logger.level = 'error';
+}
\ No newline at end of file
diff --git a/test/sample-projects/typescript/user-management/tsconfig.json b/test/sample-projects/typescript/user-management/tsconfig.json
new file mode 100644
index 0000000..60ba402
--- /dev/null
+++ b/test/sample-projects/typescript/user-management/tsconfig.json
@@ -0,0 +1,49 @@
+{
+ "compilerOptions": {
+ "target": "ES2020",
+ "module": "commonjs",
+ "lib": ["ES2020"],
+ "outDir": "./dist",
+ "rootDir": "./src",
+ "strict": true,
+ "esModuleInterop": true,
+ "skipLibCheck": true,
+ "forceConsistentCasingInFileNames": true,
+ "resolveJsonModule": true,
+ "declaration": true,
+ "declarationMap": true,
+ "sourceMap": true,
+ "removeComments": true,
+ "noImplicitAny": true,
+ "strictNullChecks": true,
+ "strictFunctionTypes": true,
+ "noImplicitThis": true,
+ "noImplicitReturns": true,
+ "noFallthroughCasesInSwitch": true,
+ "noUncheckedIndexedAccess": true,
+ "noImplicitOverride": true,
+ "experimentalDecorators": true,
+ "emitDecoratorMetadata": true,
+ "types": ["node", "jest"],
+ "baseUrl": ".",
+ "paths": {
+ "@/*": ["src/*"],
+ "@/models/*": ["src/models/*"],
+ "@/services/*": ["src/services/*"],
+ "@/utils/*": ["src/utils/*"],
+ "@/middleware/*": ["src/middleware/*"],
+ "@/routes/*": ["src/routes/*"],
+ "@/config/*": ["src/config/*"],
+ "@/types/*": ["src/types/*"]
+ }
+ },
+ "include": [
+ "src/**/*"
+ ],
+ "exclude": [
+ "node_modules",
+ "dist",
+ "coverage",
+ "**/*.test.ts"
+ ]
+}
\ No newline at end of file
diff --git a/test/sample-projects/zig/code-index-example/build.zig b/test/sample-projects/zig/code-index-example/build.zig
new file mode 100644
index 0000000..ea52f56
--- /dev/null
+++ b/test/sample-projects/zig/code-index-example/build.zig
@@ -0,0 +1,156 @@
+const std = @import("std");
+
+// Although this function looks imperative, it does not perform the build
+// directly and instead it mutates the build graph (`b`) that will be then
+// executed by an external runner. The functions in `std.Build` implement a DSL
+// for defining build steps and express dependencies between them, allowing the
+// build runner to parallelize the build automatically (and the cache system to
+// know when a step doesn't need to be re-run).
+pub fn build(b: *std.Build) void {
+ // Standard target options allow the person running `zig build` to choose
+ // what target to build for. Here we do not override the defaults, which
+ // means any target is allowed, and the default is native. Other options
+ // for restricting supported target set are available.
+ const target = b.standardTargetOptions(.{});
+ // Standard optimization options allow the person running `zig build` to select
+ // between Debug, ReleaseSafe, ReleaseFast, and ReleaseSmall. Here we do not
+ // set a preferred release mode, allowing the user to decide how to optimize.
+ const optimize = b.standardOptimizeOption(.{});
+ // It's also possible to define more custom flags to toggle optional features
+ // of this build script using `b.option()`. All defined flags (including
+ // target and optimize options) will be listed when running `zig build --help`
+ // in this directory.
+
+ // This creates a module, which represents a collection of source files alongside
+ // some compilation options, such as optimization mode and linked system libraries.
+ // Zig modules are the preferred way of making Zig code available to consumers.
+ // addModule defines a module that we intend to make available for importing
+ // to our consumers. We must give it a name because a Zig package can expose
+ // multiple modules and consumers will need to be able to specify which
+ // module they want to access.
+ const mod = b.addModule("code_index_example", .{
+ // The root source file is the "entry point" of this module. Users of
+ // this module will only be able to access public declarations contained
+ // in this file, which means that if you have declarations that you
+ // intend to expose to consumers that were defined in other files part
+ // of this module, you will have to make sure to re-export them from
+ // the root file.
+ .root_source_file = b.path("src/root.zig"),
+ // Later on we'll use this module as the root module of a test executable
+ // which requires us to specify a target.
+ .target = target,
+ });
+
+ // Here we define an executable. An executable needs to have a root module
+ // which needs to expose a `main` function. While we could add a main function
+ // to the module defined above, it's sometimes preferable to split business
+ // logic and the CLI into two separate modules.
+ //
+ // If your goal is to create a Zig library for others to use, consider if
+ // it might benefit from also exposing a CLI tool. A parser library for a
+ // data serialization format could also bundle a CLI syntax checker, for example.
+ //
+ // If instead your goal is to create an executable, consider if users might
+ // be interested in also being able to embed the core functionality of your
+ // program in their own executable in order to avoid the overhead involved in
+ // subprocessing your CLI tool.
+ //
+ // If neither case applies to you, feel free to delete the declaration you
+ // don't need and to put everything under a single module.
+ const exe = b.addExecutable(.{
+ .name = "code_index_example",
+ .root_module = b.createModule(.{
+ // b.createModule defines a new module just like b.addModule but,
+ // unlike b.addModule, it does not expose the module to consumers of
+ // this package, which is why in this case we don't have to give it a name.
+ .root_source_file = b.path("src/main.zig"),
+ // Target and optimization levels must be explicitly wired in when
+ // defining an executable or library (in the root module), and you
+ // can also hardcode a specific target for an executable or library
+ // definition if desirable (e.g. firmware for embedded devices).
+ .target = target,
+ .optimize = optimize,
+ // List of modules available for import in source files part of the
+ // root module.
+ .imports = &.{
+ // Here "code_index_example" is the name you will use in your source code to
+ // import this module (e.g. `@import("code_index_example")`). The name is
+ // repeated because you are allowed to rename your imports, which
+ // can be extremely useful in case of collisions (which can happen
+ // importing modules from different packages).
+ .{ .name = "code_index_example", .module = mod },
+ },
+ }),
+ });
+
+ // This declares intent for the executable to be installed into the
+ // install prefix when running `zig build` (i.e. when executing the default
+ // step). By default the install prefix is `zig-out/` but can be overridden
+ // by passing `--prefix` or `-p`.
+ b.installArtifact(exe);
+
+ // This creates a top level step. Top level steps have a name and can be
+ // invoked by name when running `zig build` (e.g. `zig build run`).
+ // This will evaluate the `run` step rather than the default step.
+ // For a top level step to actually do something, it must depend on other
+ // steps (e.g. a Run step, as we will see in a moment).
+ const run_step = b.step("run", "Run the app");
+
+ // This creates a RunArtifact step in the build graph. A RunArtifact step
+ // invokes an executable compiled by Zig. Steps will only be executed by the
+ // runner if invoked directly by the user (in the case of top level steps)
+ // or if another step depends on it, so it's up to you to define when and
+ // how this Run step will be executed. In our case we want to run it when
+ // the user runs `zig build run`, so we create a dependency link.
+ const run_cmd = b.addRunArtifact(exe);
+ run_step.dependOn(&run_cmd.step);
+
+ // By making the run step depend on the default step, it will be run from the
+ // installation directory rather than directly from within the cache directory.
+ run_cmd.step.dependOn(b.getInstallStep());
+
+ // This allows the user to pass arguments to the application in the build
+ // command itself, like this: `zig build run -- arg1 arg2 etc`
+ if (b.args) |args| {
+ run_cmd.addArgs(args);
+ }
+
+ // Creates an executable that will run `test` blocks from the provided module.
+ // Here `mod` needs to define a target, which is why earlier we made sure to
+ // set the relevant field.
+ const mod_tests = b.addTest(.{
+ .root_module = mod,
+ });
+
+ // A run step that will run the test executable.
+ const run_mod_tests = b.addRunArtifact(mod_tests);
+
+ // Creates an executable that will run `test` blocks from the executable's
+ // root module. Note that test executables only test one module at a time,
+ // hence why we have to create two separate ones.
+ const exe_tests = b.addTest(.{
+ .root_module = exe.root_module,
+ });
+
+ // A run step that will run the second test executable.
+ const run_exe_tests = b.addRunArtifact(exe_tests);
+
+ // A top level step for running all tests. dependOn can be called multiple
+ // times and since the two run steps do not depend on one another, this will
+ // make the two of them run in parallel.
+ const test_step = b.step("test", "Run tests");
+ test_step.dependOn(&run_mod_tests.step);
+ test_step.dependOn(&run_exe_tests.step);
+
+ // Just like flags, top level steps are also listed in the `--help` menu.
+ //
+ // The Zig build system is entirely implemented in userland, which means
+ // that it cannot hook into private compiler APIs. All compilation work
+ // orchestrated by the build system will result in other Zig compiler
+ // subcommands being invoked with the right flags defined. You can observe
+ // these invocations when one fails (or you pass a flag to increase
+ // verbosity) to validate assumptions and diagnose problems.
+ //
+ // Lastly, the Zig build system is relatively simple and self-contained,
+ // and reading its source code will allow you to master it.
+}
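The comments in `build.zig` above describe Zig's build graph: a step executes only when the user invokes it directly or when an invoked step depends on it. As a rough illustration of that dependency model (a hypothetical sketch in Python, not Zig's actual `Step` implementation):

```python
# Minimal model of a build-step graph: a step runs only if it is invoked
# directly or some invoked step depends on it. All names here are
# illustrative; this is not how Zig's runner is implemented.
class Step:
    def __init__(self, name):
        self.name = name
        self.deps = []
        self.ran = False

    def depend_on(self, other):
        self.deps.append(other)

    def run(self, log):
        # Dependencies execute before the step itself, each at most once.
        for dep in self.deps:
            dep.run(log)
        if not self.ran:
            self.ran = True
            log.append(self.name)

install = Step("install")
run_cmd = Step("run-cmd")
run_cmd.depend_on(install)   # run from the install dir, not the cache
run_step = Step("run")
run_step.depend_on(run_cmd)  # `zig build run` pulls in the whole chain

log = []
run_step.run(log)
print(log)  # install first, then the run command, then the top-level step
```

Invoking `run_step` transitively pulls in `run_cmd` and `install`, which mirrors why `zig build run` also triggers installation.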
diff --git a/test/sample-projects/zig/code-index-example/build.zig.zon b/test/sample-projects/zig/code-index-example/build.zig.zon
new file mode 100644
index 0000000..f8f4ad1
--- /dev/null
+++ b/test/sample-projects/zig/code-index-example/build.zig.zon
@@ -0,0 +1,81 @@
+.{
+ // This is the default name used by packages depending on this one. For
+ // example, when a user runs `zig fetch --save <url>`, this field is used
+ // as the key in the `dependencies` table. Although the user can choose a
+ // different name, most users will stick with this provided value.
+ //
+ // It is redundant to include "zig" in this name because it is already
+ // within the Zig package namespace.
+ .name = .code_index_example,
+ // This is a [Semantic Version](https://semver.org/).
+ // In a future version of Zig it will be used for package deduplication.
+ .version = "0.0.0",
+ // Together with name, this represents a globally unique package
+ // identifier. This field is generated by the Zig toolchain when the
+ // package is first created, and then *never changes*. This allows
+ // unambiguous detection of one package being an updated version of
+ // another.
+ //
+ // When forking a Zig project, this id should be regenerated (delete the
+ // field and run `zig build`) if the upstream project is still maintained.
+ // Otherwise, the fork is *hostile*, attempting to take control over the
+ // original project's identity. Thus it is recommended to leave the comment
+ // on the following line intact, so that it shows up in code reviews that
+ // modify the field.
+ .fingerprint = 0x995c7acfb423849b, // Changing this has security and trust implications.
+ // Tracks the earliest Zig version that the package considers to be a
+ // supported use case.
+ .minimum_zig_version = "0.15.0-dev.1507+e25168d01",
+ // This field is optional.
+ // Each dependency must either provide a `url` and `hash`, or a `path`.
+ // `zig build --fetch` can be used to fetch all dependencies of a package, recursively.
+ // Once all dependencies are fetched, `zig build` no longer requires
+ // internet connectivity.
+ .dependencies = .{
+ // See `zig fetch --save <url>` for a command-line interface for adding dependencies.
+ //.example = .{
+ // // When updating this field to a new URL, be sure to delete the corresponding
+ // // `hash`, otherwise you are communicating that you expect to find the old hash at
+ // // the new URL. If the contents of a URL change this will result in a hash mismatch
+ // // which will prevent zig from using it.
+ // .url = "https://example.com/foo.tar.gz",
+ //
+ // // This is computed from the file contents of the directory of files that is
+ // // obtained after fetching `url` and applying the inclusion rules given by
+ // // `paths`.
+ // //
+ // // This field is the source of truth; packages do not come from a `url`; they
+ // // come from a `hash`. `url` is just one of many possible mirrors for how to
+ // // obtain a package matching this `hash`.
+ // //
+ // // Uses the [multihash](https://multiformats.io/multihash/) format.
+ // .hash = "...",
+ //
+ // // When this is provided, the package is found in a directory relative to the
+ // // build root. In this case the package's hash is irrelevant and therefore not
+ // // computed. This field and `url` are mutually exclusive.
+ // .path = "foo",
+ //
+ // // When this is set to `true`, a package is declared to be lazily
+ // // fetched. This makes the dependency only get fetched if it is
+ // // actually used.
+ // .lazy = false,
+ //},
+ },
+ // Specifies the set of files and directories that are included in this package.
+ // Only files and directories listed here are included in the `hash` that
+ // is computed for this package. Only files listed here will remain on disk
+ // when using the zig package manager. As a rule of thumb, one should list
+ // files required for compilation plus any license(s).
+ // Paths are relative to the build root. Use the empty string (`""`) to refer to
+ // the build root itself.
+ // A directory listed here means that all files within, recursively, are included.
+ .paths = .{
+ "build.zig",
+ "build.zig.zon",
+ "src",
+ // For example...
+ //"LICENSE",
+ //"README.md",
+ },
+}
diff --git a/test/sample-projects/zig/code-index-example/src/main.zig b/test/sample-projects/zig/code-index-example/src/main.zig
new file mode 100644
index 0000000..792cfc1
--- /dev/null
+++ b/test/sample-projects/zig/code-index-example/src/main.zig
@@ -0,0 +1,45 @@
+const std = @import("std");
+const builtin = @import("builtin");
+const testing = std.testing;
+const code_index_example = @import("code_index_example");
+const utils = @import("./utils.zig");
+const math_utils = @import("./math.zig");
+
+pub fn main() !void {
+ // Prints to stderr, ignoring potential errors.
+ std.debug.print("All your {s} are belong to us.\n", .{"codebase"});
+ try code_index_example.bufferedPrint();
+
+ // Test our custom utilities
+ const result = utils.processData("Hello, World!");
+ std.debug.print("Processed result: {s}\n", .{result});
+
+ // Test math utilities
+ const sum = math_utils.calculateSum(10, 20);
+ std.debug.print("Sum: {}\n", .{sum});
+
+ // Platform-specific code
+ if (builtin.os.tag == .windows) {
+ std.debug.print("Running on Windows\n", .{});
+ } else {
+ std.debug.print("Running on Unix-like system\n", .{});
+ }
+}
+
+test "simple test" {
+ var list: std.ArrayList(i32) = .empty;
+ defer list.deinit(std.testing.allocator); // Try commenting this out and see if zig detects the memory leak!
+ try list.append(std.testing.allocator, 42);
+ try std.testing.expectEqual(@as(i32, 42), list.pop());
+}
+
+test "fuzz example" {
+ const Context = struct {
+ fn testOne(context: @This(), input: []const u8) anyerror!void {
+ _ = context;
+ // Try passing `--fuzz` to `zig build test` and see if it manages to fail this test case!
+ try std.testing.expect(!std.mem.eql(u8, "canyoufindme", input));
+ }
+ };
+ try std.testing.fuzz(Context{}, Context.testOne, .{});
+}
diff --git a/test/sample-projects/zig/code-index-example/src/math.zig b/test/sample-projects/zig/code-index-example/src/math.zig
new file mode 100644
index 0000000..dba7420
--- /dev/null
+++ b/test/sample-projects/zig/code-index-example/src/math.zig
@@ -0,0 +1,262 @@
+//! Mathematical utility functions and data structures
+const std = @import("std");
+const math = std.math;
+const testing = std.testing;
+
+// Mathematical constants
+pub const PI: f64 = 3.14159265358979323846;
+pub const E: f64 = 2.71828182845904523536;
+pub const GOLDEN_RATIO: f64 = 1.61803398874989484820;
+
+// Complex number representation
+pub const Complex = struct {
+ real: f64,
+ imag: f64,
+
+ pub fn init(real: f64, imag: f64) Complex {
+ return Complex{ .real = real, .imag = imag };
+ }
+
+ pub fn add(self: Complex, other: Complex) Complex {
+ return Complex{
+ .real = self.real + other.real,
+ .imag = self.imag + other.imag,
+ };
+ }
+
+ pub fn multiply(self: Complex, other: Complex) Complex {
+ return Complex{
+ .real = self.real * other.real - self.imag * other.imag,
+ .imag = self.real * other.imag + self.imag * other.real,
+ };
+ }
+
+ pub fn magnitude(self: Complex) f64 {
+ return @sqrt(self.real * self.real + self.imag * self.imag);
+ }
+
+ pub fn conjugate(self: Complex) Complex {
+ return Complex{ .real = self.real, .imag = -self.imag };
+ }
+};
+
+// Point in 2D space
+pub const Point2D = struct {
+ x: f64,
+ y: f64,
+
+ pub fn init(x: f64, y: f64) Point2D {
+ return Point2D{ .x = x, .y = y };
+ }
+
+ pub fn distance(self: Point2D, other: Point2D) f64 {
+ const dx = self.x - other.x;
+ const dy = self.y - other.y;
+ return @sqrt(dx * dx + dy * dy);
+ }
+
+ pub fn midpoint(self: Point2D, other: Point2D) Point2D {
+ return Point2D{
+ .x = (self.x + other.x) / 2.0,
+ .y = (self.y + other.y) / 2.0,
+ };
+ }
+};
+
+// Statistics utilities
+pub const Statistics = struct {
+ pub fn mean(values: []const f64) f64 {
+ if (values.len == 0) return 0.0;
+
+ var sum: f64 = 0.0;
+ for (values) |value| {
+ sum += value;
+ }
+
+ return sum / @as(f64, @floatFromInt(values.len));
+ }
+
+ pub fn median(values: []const f64, buffer: []f64) f64 {
+ if (values.len == 0) return 0.0;
+
+ // Copy to buffer and sort
+ for (values, 0..) |value, i| {
+ buffer[i] = value;
+ }
+ std.mem.sort(f64, buffer[0..values.len], {}, std.sort.asc(f64));
+
+ const n = values.len;
+ if (n % 2 == 1) {
+ return buffer[n / 2];
+ } else {
+ return (buffer[n / 2 - 1] + buffer[n / 2]) / 2.0;
+ }
+ }
+
+ pub fn standardDeviation(values: []const f64) f64 {
+ if (values.len <= 1) return 0.0;
+
+ const avg = mean(values);
+ var sum_sq_diff: f64 = 0.0;
+
+ for (values) |value| {
+ const diff = value - avg;
+ sum_sq_diff += diff * diff;
+ }
+
+ return @sqrt(sum_sq_diff / @as(f64, @floatFromInt(values.len - 1)));
+ }
+};
+
+// Basic math functions
+pub fn factorial(n: u32) u64 {
+ if (n <= 1) return 1;
+ return @as(u64, n) * factorial(n - 1);
+}
+
+pub fn fibonacci(n: u32) u64 {
+ if (n <= 1) return n;
+ return fibonacci(n - 1) + fibonacci(n - 2);
+}
+
+pub fn gcd(a: u32, b: u32) u32 {
+ if (b == 0) return a;
+ return gcd(b, a % b);
+}
+
+pub fn lcm(a: u32, b: u32) u32 {
+ return (a / gcd(a, b)) * b; // Divide first to avoid overflow in a * b.
+}
+
+pub fn isPrime(n: u32) bool {
+ if (n < 2) return false;
+ if (n == 2) return true;
+ if (n % 2 == 0) return false;
+
+ var i: u32 = 3;
+ while (i * i <= n) : (i += 2) {
+ if (n % i == 0) return false;
+ }
+
+ return true;
+}
+
+// Function used by main.zig
+pub fn calculateSum(a: i32, b: i32) i32 {
+ return a + b;
+}
+
+pub fn power(base: f64, exponent: i32) f64 {
+ if (exponent == 0) return 1.0;
+ if (exponent < 0) return 1.0 / power(base, -exponent);
+
+ var result: f64 = 1.0;
+ var exp = exponent;
+ var b = base;
+
+ while (exp > 0) {
+ if (exp % 2 == 1) {
+ result *= b;
+ }
+ b *= b;
+ exp /= 2;
+ }
+
+ return result;
+}
+
+// Matrix operations (2x2 for simplicity)
+pub const Matrix2x2 = struct {
+ data: [2][2]f64,
+
+ pub fn init(a: f64, b: f64, c: f64, d: f64) Matrix2x2 {
+ return Matrix2x2{
+ .data = [_][2]f64{
+ [_]f64{ a, b },
+ [_]f64{ c, d },
+ },
+ };
+ }
+
+ pub fn multiply(self: Matrix2x2, other: Matrix2x2) Matrix2x2 {
+ return Matrix2x2{
+ .data = [_][2]f64{
+ [_]f64{
+ self.data[0][0] * other.data[0][0] + self.data[0][1] * other.data[1][0],
+ self.data[0][0] * other.data[0][1] + self.data[0][1] * other.data[1][1],
+ },
+ [_]f64{
+ self.data[1][0] * other.data[0][0] + self.data[1][1] * other.data[1][0],
+ self.data[1][0] * other.data[0][1] + self.data[1][1] * other.data[1][1],
+ },
+ },
+ };
+ }
+
+ pub fn determinant(self: Matrix2x2) f64 {
+ return self.data[0][0] * self.data[1][1] - self.data[0][1] * self.data[1][0];
+ }
+};
+
+// Tests
+test "complex number operations" {
+ const z1 = Complex.init(3.0, 4.0);
+ const z2 = Complex.init(1.0, 2.0);
+
+ const sum = z1.add(z2);
+ try std.testing.expectEqual(@as(f64, 4.0), sum.real);
+ try std.testing.expectEqual(@as(f64, 6.0), sum.imag);
+
+ const magnitude = z1.magnitude();
+ try std.testing.expectApproxEqAbs(@as(f64, 5.0), magnitude, 0.0001);
+}
+
+test "point distance calculation" {
+ const p1 = Point2D.init(0.0, 0.0);
+ const p2 = Point2D.init(3.0, 4.0);
+
+ const dist = p1.distance(p2);
+ try std.testing.expectApproxEqAbs(@as(f64, 5.0), dist, 0.0001);
+}
+
+test "factorial calculation" {
+ try std.testing.expectEqual(@as(u64, 1), factorial(0));
+ try std.testing.expectEqual(@as(u64, 1), factorial(1));
+ try std.testing.expectEqual(@as(u64, 120), factorial(5));
+}
+
+test "fibonacci sequence" {
+ try std.testing.expectEqual(@as(u64, 0), fibonacci(0));
+ try std.testing.expectEqual(@as(u64, 1), fibonacci(1));
+ try std.testing.expectEqual(@as(u64, 13), fibonacci(7));
+}
+
+test "prime number detection" {
+ try std.testing.expect(isPrime(2));
+ try std.testing.expect(isPrime(17));
+ try std.testing.expect(!isPrime(4));
+ try std.testing.expect(!isPrime(1));
+}
+
+test "statistics calculations" {
+ const values = [_]f64{ 1.0, 2.0, 3.0, 4.0, 5.0 };
+
+ const avg = Statistics.mean(&values);
+ try std.testing.expectEqual(@as(f64, 3.0), avg);
+
+ var buffer: [10]f64 = undefined;
+ const med = Statistics.median(&values, &buffer);
+ try std.testing.expectEqual(@as(f64, 3.0), med);
+}
+
+test "matrix operations" {
+ const m1 = Matrix2x2.init(1.0, 2.0, 3.0, 4.0);
+ const m2 = Matrix2x2.init(5.0, 6.0, 7.0, 8.0);
+
+ const product = m1.multiply(m2);
+ try std.testing.expectEqual(@as(f64, 19.0), product.data[0][0]);
+ try std.testing.expectEqual(@as(f64, 22.0), product.data[0][1]);
+
+ const det = m1.determinant();
+ try std.testing.expectEqual(@as(f64, -2.0), det);
+}
\ No newline at end of file
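The `power` function in `math.zig` above uses exponentiation by squaring: halve the exponent, square the base, and fold the base into the result on odd exponents. An illustrative Python port of the same loop:

```python
def power(base: float, exponent: int) -> float:
    """Exponentiation by squaring, mirroring the loop in math.zig's `power`."""
    if exponent == 0:
        return 1.0
    if exponent < 0:
        return 1.0 / power(base, -exponent)
    result, b, exp = 1.0, base, exponent
    while exp > 0:
        if exp % 2 == 1:   # odd exponent: fold in the current base
            result *= b
        b *= b             # square the base
        exp //= 2          # halve the exponent
    return result

print(power(2.0, 10))   # 1024.0
print(power(2.0, -2))   # 0.25
```

The loop runs O(log exponent) iterations, which is the point of the squaring trick over repeated multiplication.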
diff --git a/test/sample-projects/zig/code-index-example/src/root.zig b/test/sample-projects/zig/code-index-example/src/root.zig
new file mode 100644
index 0000000..1cc95e3
--- /dev/null
+++ b/test/sample-projects/zig/code-index-example/src/root.zig
@@ -0,0 +1,135 @@
+//! By convention, root.zig is the root source file when making a library.
+const std = @import("std");
+const fmt = std.fmt;
+const mem = std.mem;
+const json = std.json;
+
+// Define custom types and structures
+pub const Config = struct {
+ name: []const u8,
+ version: u32,
+ debug: bool,
+
+ pub fn init(name: []const u8, version: u32) Config {
+ return Config{
+ .name = name,
+ .version = version,
+ .debug = false,
+ };
+ }
+
+ pub fn setDebug(self: *Config, debug: bool) void {
+ self.debug = debug;
+ }
+};
+
+pub const ErrorType = enum {
+ None,
+ InvalidInput,
+ OutOfMemory,
+ NetworkError,
+
+ pub fn toString(self: ErrorType) []const u8 {
+ return switch (self) {
+ .None => "No error",
+ .InvalidInput => "Invalid input",
+ .OutOfMemory => "Out of memory",
+ .NetworkError => "Network error",
+ };
+ }
+};
+
+// Global constants
+pub const VERSION: u32 = 1;
+pub const MAX_BUFFER_SIZE: usize = 4096;
+var global_config: Config = undefined;
+
+pub fn bufferedPrint() !void {
+ // Stdout is for the actual output of your application, for example if you
+ // are implementing gzip, then only the compressed bytes should be sent to
+ // stdout, not any debugging messages.
+ var stdout_buffer: [1024]u8 = undefined;
+ var stdout_writer = std.fs.File.stdout().writer(&stdout_buffer);
+ const stdout = &stdout_writer.interface;
+
+ try stdout.print("Run `zig build test` to run the tests.\n", .{});
+
+ try stdout.flush(); // Don't forget to flush!
+}
+
+pub fn add(a: i32, b: i32) i32 {
+ return a + b;
+}
+
+pub fn multiply(a: i32, b: i32) i32 {
+ return a * b;
+}
+
+pub fn processConfig(config: *const Config) !void {
+ std.debug.print("Processing config: {s} v{}\n", .{ config.name, config.version });
+ if (config.debug) {
+ std.debug.print("Debug mode enabled\n", .{});
+ }
+}
+
+pub fn handleError(err: ErrorType) void {
+ std.debug.print("Error: {s}\n", .{err.toString()});
+}
+
+// Advanced function with error handling
+pub fn parseNumber(input: []const u8) !i32 {
+ if (input.len == 0) {
+ return error.InvalidInput;
+ }
+
+ return std.fmt.parseInt(i32, input, 10) catch |err| switch (err) {
+ error.InvalidCharacter => error.InvalidInput,
+ error.Overflow => error.OutOfMemory,
+ else => err,
+ };
+}
+
+// Generic function
+pub fn swap(comptime T: type, a: *T, b: *T) void {
+ const temp = a.*;
+ a.* = b.*;
+ b.* = temp;
+}
+
+test "basic add functionality" {
+ try std.testing.expect(add(3, 7) == 10);
+}
+
+test "config initialization" {
+ var config = Config.init("test-app", 1);
+ try std.testing.expectEqualStrings("test-app", config.name);
+ try std.testing.expectEqual(@as(u32, 1), config.version);
+ try std.testing.expectEqual(false, config.debug);
+
+ config.setDebug(true);
+ try std.testing.expectEqual(true, config.debug);
+}
+
+test "error type handling" {
+ const err = ErrorType.InvalidInput;
+ try std.testing.expectEqualStrings("Invalid input", err.toString());
+}
+
+test "number parsing" {
+ const result = try parseNumber("42");
+ try std.testing.expectEqual(@as(i32, 42), result);
+
+ // Test error case
+ const invalid_result = parseNumber("");
+ try std.testing.expectError(error.InvalidInput, invalid_result);
+}
+
+test "generic swap function" {
+ var a: i32 = 10;
+ var b: i32 = 20;
+
+ swap(i32, &a, &b);
+
+ try std.testing.expectEqual(@as(i32, 20), a);
+ try std.testing.expectEqual(@as(i32, 10), b);
+}
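`parseNumber` in `root.zig` catches the library's parse errors and remaps them onto the module's own error set. The same catch-and-remap pattern, sketched in Python with a hypothetical `InvalidInput` exception standing in for `error.InvalidInput`:

```python
class InvalidInput(Exception):
    """Domain-level error, analogous to error.InvalidInput in root.zig."""

def parse_number(text: str) -> int:
    # Reject empty input up front, as parseNumber does.
    if not text:
        raise InvalidInput("empty input")
    try:
        return int(text, 10)
    except ValueError as exc:
        # Remap the library error onto the domain error set.
        raise InvalidInput(f"not a base-10 integer: {text!r}") from exc

print(parse_number("42"))  # 42
```

Callers then handle one domain error type instead of every low-level failure mode, which is the same motivation as the `catch |err| switch (err)` in the Zig version.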
diff --git a/test/sample-projects/zig/code-index-example/src/utils.zig b/test/sample-projects/zig/code-index-example/src/utils.zig
new file mode 100644
index 0000000..eab54ce
--- /dev/null
+++ b/test/sample-projects/zig/code-index-example/src/utils.zig
@@ -0,0 +1,169 @@
+//! Utility functions for string processing and data manipulation
+const std = @import("std");
+const mem = std.mem;
+const ascii = std.ascii;
+
+// Constants for utility functions
+pub const DEFAULT_BUFFER_SIZE: usize = 256;
+pub const MAX_STRING_LENGTH: usize = 1024;
+
+// Custom error types
+pub const UtilError = error{
+ BufferTooSmall,
+ InvalidString,
+ ProcessingFailed,
+};
+
+// String processing utilities
+pub const StringProcessor = struct {
+ buffer: []u8,
+ allocator: std.mem.Allocator,
+
+ pub fn init(allocator: std.mem.Allocator, buffer_size: usize) !StringProcessor {
+ const buffer = try allocator.alloc(u8, buffer_size);
+ return StringProcessor{
+ .buffer = buffer,
+ .allocator = allocator,
+ };
+ }
+
+ pub fn deinit(self: *StringProcessor) void {
+ self.allocator.free(self.buffer);
+ }
+
+ pub fn toUpperCase(self: *StringProcessor, input: []const u8) ![]const u8 {
+ if (input.len > self.buffer.len) {
+ return UtilError.BufferTooSmall;
+ }
+
+ for (input, 0..) |char, i| {
+ self.buffer[i] = std.ascii.toUpper(char);
+ }
+
+ return self.buffer[0..input.len];
+ }
+
+ pub fn reverse(self: *StringProcessor, input: []const u8) ![]const u8 {
+ if (input.len > self.buffer.len) {
+ return UtilError.BufferTooSmall;
+ }
+
+ for (input, 0..) |char, i| {
+ self.buffer[input.len - 1 - i] = char;
+ }
+
+ return self.buffer[0..input.len];
+ }
+};
+
+// Data validation functions
+pub fn validateEmail(email: []const u8) bool {
+ if (email.len == 0) return false;
+
+ var has_at = false;
+ var has_dot = false;
+
+ for (email) |char| {
+ if (char == '@') {
+ if (has_at) return false; // Multiple @ symbols
+ has_at = true;
+ } else if (char == '.') {
+ has_dot = true;
+ }
+ }
+
+ return has_at and has_dot;
+}
+
+pub fn isValidIdentifier(identifier: []const u8) bool {
+ if (identifier.len == 0) return false;
+
+ // First character must be letter or underscore
+ if (!std.ascii.isAlphabetic(identifier[0]) and identifier[0] != '_') {
+ return false;
+ }
+
+ // Rest must be alphanumeric or underscore
+ for (identifier[1..]) |char| {
+ if (!std.ascii.isAlphanumeric(char) and char != '_') {
+ return false;
+ }
+ }
+
+ return true;
+}
+
+// Simple string processing function used by main.zig
+pub fn processData(input: []const u8) []const u8 {
+ return if (input.len > 0) "Processed!" else "Empty input";
+}
+
+// Array utilities
+pub fn findMax(numbers: []const i32) ?i32 {
+ if (numbers.len == 0) return null;
+
+ var max = numbers[0];
+ for (numbers[1..]) |num| {
+ if (num > max) {
+ max = num;
+ }
+ }
+
+ return max;
+}
+
+pub fn bubbleSort(numbers: []i32) void {
+ const n = numbers.len;
+ if (n <= 1) return;
+
+ var i: usize = 0;
+ while (i < n - 1) : (i += 1) {
+ var j: usize = 0;
+ while (j < n - i - 1) : (j += 1) {
+ if (numbers[j] > numbers[j + 1]) {
+ const temp = numbers[j];
+ numbers[j] = numbers[j + 1];
+ numbers[j + 1] = temp;
+ }
+ }
+ }
+}
+
+// Tests
+test "string processor initialization" {
+ var processor = try StringProcessor.init(std.testing.allocator, 100);
+ defer processor.deinit();
+
+ const result = try processor.toUpperCase("hello");
+ try std.testing.expectEqualStrings("HELLO", result);
+}
+
+test "email validation" {
+ try std.testing.expect(validateEmail("test@example.com"));
+ try std.testing.expect(!validateEmail("invalid-email"));
+ try std.testing.expect(!validateEmail(""));
+}
+
+test "identifier validation" {
+ try std.testing.expect(isValidIdentifier("valid_id"));
+ try std.testing.expect(isValidIdentifier("_private"));
+ try std.testing.expect(!isValidIdentifier("123invalid"));
+ try std.testing.expect(!isValidIdentifier(""));
+}
+
+test "find maximum in array" {
+ const numbers = [_]i32{ 3, 1, 4, 1, 5, 9, 2, 6 };
+ const max = findMax(&numbers);
+ try std.testing.expectEqual(@as(?i32, 9), max);
+
+ const empty: []const i32 = &[_]i32{};
+ try std.testing.expectEqual(@as(?i32, null), findMax(empty));
+}
+
+test "bubble sort" {
+ var numbers = [_]i32{ 64, 34, 25, 12, 22, 11, 90 };
+ bubbleSort(&numbers);
+
+ const expected = [_]i32{ 11, 12, 22, 25, 34, 64, 90 };
+ try std.testing.expectEqualSlices(i32, &expected, &numbers);
+}
\ No newline at end of file
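`isValidIdentifier` in `utils.zig` accepts a leading ASCII letter or underscore followed by ASCII letters, digits, or underscores. A Python sketch of the same rules, for comparison:

```python
def is_valid_identifier(identifier: str) -> bool:
    """Same rules as utils.zig's isValidIdentifier: a letter or underscore,
    then letters, digits, or underscores (ASCII only)."""
    if not identifier:
        return False
    first = identifier[0]
    # Guard with isascii() because Python's isalpha()/isalnum() accept Unicode.
    if not (first.isascii() and (first.isalpha() or first == "_")):
        return False
    return all(c.isascii() and (c.isalnum() or c == "_") for c in identifier[1:])

print(is_valid_identifier("_private"))    # True
print(is_valid_identifier("123invalid"))  # False
```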
diff --git a/tests/search/test_search_filters.py b/tests/search/test_search_filters.py
new file mode 100644
index 0000000..787461d
--- /dev/null
+++ b/tests/search/test_search_filters.py
@@ -0,0 +1,52 @@
+"""Tests covering shared search filtering behaviour."""
+import os
+from types import SimpleNamespace
+from unittest.mock import patch
+from pathlib import Path as _TestPath
+import sys
+
+ROOT = _TestPath(__file__).resolve().parents[2]
+SRC_PATH = ROOT / 'src'
+if str(SRC_PATH) not in sys.path:
+ sys.path.insert(0, str(SRC_PATH))
+
+from code_index_mcp.search.basic import BasicSearchStrategy
+from code_index_mcp.search.ripgrep import RipgrepStrategy
+from code_index_mcp.utils.file_filter import FileFilter
+
+
+def test_basic_strategy_skips_excluded_directories(tmp_path):
+ base = tmp_path
+ src_dir = base / "src"
+ src_dir.mkdir()
+ (src_dir / 'app.js').write_text("const db = 'mongo';\n")
+
+ node_modules_dir = base / "node_modules" / "pkg"
+ node_modules_dir.mkdir(parents=True)
+ (node_modules_dir / 'index.js').write_text("// mongo dependency\n")
+
+ strategy = BasicSearchStrategy()
+ strategy.configure_excludes(FileFilter())
+
+ results = strategy.search("mongo", str(base), case_sensitive=False)
+
+ included_path = os.path.join("src", "app.js")
+ excluded_path = os.path.join("node_modules", "pkg", "index.js")
+
+ assert included_path in results
+ assert excluded_path not in results
+
+
+@patch("code_index_mcp.search.ripgrep.subprocess.run")
+def test_ripgrep_strategy_adds_exclude_globs(mock_run, tmp_path):
+ mock_run.return_value = SimpleNamespace(returncode=0, stdout="", stderr="")
+
+ strategy = RipgrepStrategy()
+ strategy.configure_excludes(FileFilter())
+
+ strategy.search("mongo", str(tmp_path))
+
+ cmd = mock_run.call_args[0][0]
+ glob_args = [cmd[i + 1] for i, arg in enumerate(cmd) if arg == '--glob' and i + 1 < len(cmd)]
+
+ assert any(value.startswith('!**/node_modules/') for value in glob_args)
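The ripgrep test above only checks that an exclude glob such as `!**/node_modules/**` appears after a `--glob` flag. One way such arguments could be assembled (a hypothetical helper; the real `FileFilter`/`RipgrepStrategy` internals are not shown in this diff):

```python
# Hypothetical sketch: turn excluded directory names into ripgrep
# `--glob` arguments. This is not the project's actual FileFilter API.
def build_exclude_globs(excluded_dirs):
    args = []
    for name in excluded_dirs:
        # A leading `!` negates the glob, excluding matches at any depth.
        args.extend(["--glob", f"!**/{name}/**"])
    return args

cmd = ["rg", "--line-number", "mongo"] + build_exclude_globs(["node_modules", ".git"])
print(cmd)
```

The test's parsing step (`glob_args = [cmd[i + 1] ...]`) then recovers exactly the values following each `--glob` flag.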
diff --git a/uv.lock b/uv.lock
index 02dc1a4..08294cf 100644
--- a/uv.lock
+++ b/uv.lock
@@ -49,14 +49,32 @@ wheels = [
[[package]]
name = "code-index-mcp"
-version = "0.3.0"
+version = "2.4.1"
source = { editable = "." }
dependencies = [
{ name = "mcp" },
+ { name = "msgpack" },
+ { name = "pathspec" },
+ { name = "tree-sitter" },
+ { name = "tree-sitter-java" },
+ { name = "tree-sitter-javascript" },
+ { name = "tree-sitter-typescript" },
+ { name = "tree-sitter-zig" },
+ { name = "watchdog" },
]
[package.metadata]
-requires-dist = [{ name = "mcp", specifier = ">=0.3.0" }]
+requires-dist = [
+ { name = "mcp", specifier = ">=0.3.0" },
+ { name = "msgpack", specifier = ">=1.0.0" },
+ { name = "pathspec", specifier = ">=0.12.1" },
+ { name = "tree-sitter", specifier = ">=0.20.0" },
+ { name = "tree-sitter-java", specifier = ">=0.20.0" },
+ { name = "tree-sitter-javascript", specifier = ">=0.20.0" },
+ { name = "tree-sitter-typescript", specifier = ">=0.20.0" },
+ { name = "tree-sitter-zig", specifier = ">=0.20.0" },
+ { name = "watchdog", specifier = ">=3.0.0" },
+]
[[package]]
name = "colorama"
@@ -150,6 +168,63 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e8/0e/885f156ade60108e67bf044fada5269da68e29d758a10b0c513f4d85dd76/mcp-1.4.1-py3-none-any.whl", hash = "sha256:a7716b1ec1c054e76f49806f7d96113b99fc1166fc9244c2c6f19867cb75b593", size = 72448 },
]
+[[package]]
+name = "msgpack"
+version = "1.1.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/45/b1/ea4f68038a18c77c9467400d166d74c4ffa536f34761f7983a104357e614/msgpack-1.1.1.tar.gz", hash = "sha256:77b79ce34a2bdab2594f490c8e80dd62a02d650b91a75159a63ec413b8d104cd", size = 173555 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/33/52/f30da112c1dc92cf64f57d08a273ac771e7b29dea10b4b30369b2d7e8546/msgpack-1.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:353b6fc0c36fde68b661a12949d7d49f8f51ff5fa019c1e47c87c4ff34b080ed", size = 81799 },
+ { url = "https://files.pythonhosted.org/packages/e4/35/7bfc0def2f04ab4145f7f108e3563f9b4abae4ab0ed78a61f350518cc4d2/msgpack-1.1.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:79c408fcf76a958491b4e3b103d1c417044544b68e96d06432a189b43d1215c8", size = 78278 },
+ { url = "https://files.pythonhosted.org/packages/e8/c5/df5d6c1c39856bc55f800bf82778fd4c11370667f9b9e9d51b2f5da88f20/msgpack-1.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:78426096939c2c7482bf31ef15ca219a9e24460289c00dd0b94411040bb73ad2", size = 402805 },
+ { url = "https://files.pythonhosted.org/packages/20/8e/0bb8c977efecfe6ea7116e2ed73a78a8d32a947f94d272586cf02a9757db/msgpack-1.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8b17ba27727a36cb73aabacaa44b13090feb88a01d012c0f4be70c00f75048b4", size = 408642 },
+ { url = "https://files.pythonhosted.org/packages/59/a1/731d52c1aeec52006be6d1f8027c49fdc2cfc3ab7cbe7c28335b2910d7b6/msgpack-1.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7a17ac1ea6ec3c7687d70201cfda3b1e8061466f28f686c24f627cae4ea8efd0", size = 395143 },
+ { url = "https://files.pythonhosted.org/packages/2b/92/b42911c52cda2ba67a6418ffa7d08969edf2e760b09015593c8a8a27a97d/msgpack-1.1.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:88d1e966c9235c1d4e2afac21ca83933ba59537e2e2727a999bf3f515ca2af26", size = 395986 },
+ { url = "https://files.pythonhosted.org/packages/61/dc/8ae165337e70118d4dab651b8b562dd5066dd1e6dd57b038f32ebc3e2f07/msgpack-1.1.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:f6d58656842e1b2ddbe07f43f56b10a60f2ba5826164910968f5933e5178af75", size = 402682 },
+ { url = "https://files.pythonhosted.org/packages/58/27/555851cb98dcbd6ce041df1eacb25ac30646575e9cd125681aa2f4b1b6f1/msgpack-1.1.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:96decdfc4adcbc087f5ea7ebdcfd3dee9a13358cae6e81d54be962efc38f6338", size = 406368 },
+ { url = "https://files.pythonhosted.org/packages/d4/64/39a26add4ce16f24e99eabb9005e44c663db00e3fce17d4ae1ae9d61df99/msgpack-1.1.1-cp310-cp310-win32.whl", hash = "sha256:6640fd979ca9a212e4bcdf6eb74051ade2c690b862b679bfcb60ae46e6dc4bfd", size = 65004 },
+ { url = "https://files.pythonhosted.org/packages/7d/18/73dfa3e9d5d7450d39debde5b0d848139f7de23bd637a4506e36c9800fd6/msgpack-1.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:8b65b53204fe1bd037c40c4148d00ef918eb2108d24c9aaa20bc31f9810ce0a8", size = 71548 },
+ { url = "https://files.pythonhosted.org/packages/7f/83/97f24bf9848af23fe2ba04380388216defc49a8af6da0c28cc636d722502/msgpack-1.1.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:71ef05c1726884e44f8b1d1773604ab5d4d17729d8491403a705e649116c9558", size = 82728 },
+ { url = "https://files.pythonhosted.org/packages/aa/7f/2eaa388267a78401f6e182662b08a588ef4f3de6f0eab1ec09736a7aaa2b/msgpack-1.1.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:36043272c6aede309d29d56851f8841ba907a1a3d04435e43e8a19928e243c1d", size = 79279 },
+ { url = "https://files.pythonhosted.org/packages/f8/46/31eb60f4452c96161e4dfd26dbca562b4ec68c72e4ad07d9566d7ea35e8a/msgpack-1.1.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a32747b1b39c3ac27d0670122b57e6e57f28eefb725e0b625618d1b59bf9d1e0", size = 423859 },
+ { url = "https://files.pythonhosted.org/packages/45/16/a20fa8c32825cc7ae8457fab45670c7a8996d7746ce80ce41cc51e3b2bd7/msgpack-1.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8a8b10fdb84a43e50d38057b06901ec9da52baac6983d3f709d8507f3889d43f", size = 429975 },
+ { url = "https://files.pythonhosted.org/packages/86/ea/6c958e07692367feeb1a1594d35e22b62f7f476f3c568b002a5ea09d443d/msgpack-1.1.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ba0c325c3f485dc54ec298d8b024e134acf07c10d494ffa24373bea729acf704", size = 413528 },
+ { url = "https://files.pythonhosted.org/packages/75/05/ac84063c5dae79722bda9f68b878dc31fc3059adb8633c79f1e82c2cd946/msgpack-1.1.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:88daaf7d146e48ec71212ce21109b66e06a98e5e44dca47d853cbfe171d6c8d2", size = 413338 },
+ { url = "https://files.pythonhosted.org/packages/69/e8/fe86b082c781d3e1c09ca0f4dacd457ede60a13119b6ce939efe2ea77b76/msgpack-1.1.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:d8b55ea20dc59b181d3f47103f113e6f28a5e1c89fd5b67b9140edb442ab67f2", size = 422658 },
+ { url = "https://files.pythonhosted.org/packages/3b/2b/bafc9924df52d8f3bb7c00d24e57be477f4d0f967c0a31ef5e2225e035c7/msgpack-1.1.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:4a28e8072ae9779f20427af07f53bbb8b4aa81151054e882aee333b158da8752", size = 427124 },
+ { url = "https://files.pythonhosted.org/packages/a2/3b/1f717e17e53e0ed0b68fa59e9188f3f610c79d7151f0e52ff3cd8eb6b2dc/msgpack-1.1.1-cp311-cp311-win32.whl", hash = "sha256:7da8831f9a0fdb526621ba09a281fadc58ea12701bc709e7b8cbc362feabc295", size = 65016 },
+ { url = "https://files.pythonhosted.org/packages/48/45/9d1780768d3b249accecc5a38c725eb1e203d44a191f7b7ff1941f7df60c/msgpack-1.1.1-cp311-cp311-win_amd64.whl", hash = "sha256:5fd1b58e1431008a57247d6e7cc4faa41c3607e8e7d4aaf81f7c29ea013cb458", size = 72267 },
+ { url = "https://files.pythonhosted.org/packages/e3/26/389b9c593eda2b8551b2e7126ad3a06af6f9b44274eb3a4f054d48ff7e47/msgpack-1.1.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:ae497b11f4c21558d95de9f64fff7053544f4d1a17731c866143ed6bb4591238", size = 82359 },
+ { url = "https://files.pythonhosted.org/packages/ab/65/7d1de38c8a22cf8b1551469159d4b6cf49be2126adc2482de50976084d78/msgpack-1.1.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:33be9ab121df9b6b461ff91baac6f2731f83d9b27ed948c5b9d1978ae28bf157", size = 79172 },
+ { url = "https://files.pythonhosted.org/packages/0f/bd/cacf208b64d9577a62c74b677e1ada005caa9b69a05a599889d6fc2ab20a/msgpack-1.1.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6f64ae8fe7ffba251fecb8408540c34ee9df1c26674c50c4544d72dbf792e5ce", size = 425013 },
+ { url = "https://files.pythonhosted.org/packages/4d/ec/fd869e2567cc9c01278a736cfd1697941ba0d4b81a43e0aa2e8d71dab208/msgpack-1.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a494554874691720ba5891c9b0b39474ba43ffb1aaf32a5dac874effb1619e1a", size = 426905 },
+ { url = "https://files.pythonhosted.org/packages/55/2a/35860f33229075bce803a5593d046d8b489d7ba2fc85701e714fc1aaf898/msgpack-1.1.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb643284ab0ed26f6957d969fe0dd8bb17beb567beb8998140b5e38a90974f6c", size = 407336 },
+ { url = "https://files.pythonhosted.org/packages/8c/16/69ed8f3ada150bf92745fb4921bd621fd2cdf5a42e25eb50bcc57a5328f0/msgpack-1.1.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d275a9e3c81b1093c060c3837e580c37f47c51eca031f7b5fb76f7b8470f5f9b", size = 409485 },
+ { url = "https://files.pythonhosted.org/packages/c6/b6/0c398039e4c6d0b2e37c61d7e0e9d13439f91f780686deb8ee64ecf1ae71/msgpack-1.1.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:4fd6b577e4541676e0cc9ddc1709d25014d3ad9a66caa19962c4f5de30fc09ef", size = 412182 },
+ { url = "https://files.pythonhosted.org/packages/b8/d0/0cf4a6ecb9bc960d624c93effaeaae75cbf00b3bc4a54f35c8507273cda1/msgpack-1.1.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:bb29aaa613c0a1c40d1af111abf025f1732cab333f96f285d6a93b934738a68a", size = 419883 },
+ { url = "https://files.pythonhosted.org/packages/62/83/9697c211720fa71a2dfb632cad6196a8af3abea56eece220fde4674dc44b/msgpack-1.1.1-cp312-cp312-win32.whl", hash = "sha256:870b9a626280c86cff9c576ec0d9cbcc54a1e5ebda9cd26dab12baf41fee218c", size = 65406 },
+ { url = "https://files.pythonhosted.org/packages/c0/23/0abb886e80eab08f5e8c485d6f13924028602829f63b8f5fa25a06636628/msgpack-1.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:5692095123007180dca3e788bb4c399cc26626da51629a31d40207cb262e67f4", size = 72558 },
+ { url = "https://files.pythonhosted.org/packages/a1/38/561f01cf3577430b59b340b51329803d3a5bf6a45864a55f4ef308ac11e3/msgpack-1.1.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:3765afa6bd4832fc11c3749be4ba4b69a0e8d7b728f78e68120a157a4c5d41f0", size = 81677 },
+ { url = "https://files.pythonhosted.org/packages/09/48/54a89579ea36b6ae0ee001cba8c61f776451fad3c9306cd80f5b5c55be87/msgpack-1.1.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:8ddb2bcfd1a8b9e431c8d6f4f7db0773084e107730ecf3472f1dfe9ad583f3d9", size = 78603 },
+ { url = "https://files.pythonhosted.org/packages/a0/60/daba2699b308e95ae792cdc2ef092a38eb5ee422f9d2fbd4101526d8a210/msgpack-1.1.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:196a736f0526a03653d829d7d4c5500a97eea3648aebfd4b6743875f28aa2af8", size = 420504 },
+ { url = "https://files.pythonhosted.org/packages/20/22/2ebae7ae43cd8f2debc35c631172ddf14e2a87ffcc04cf43ff9df9fff0d3/msgpack-1.1.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d592d06e3cc2f537ceeeb23d38799c6ad83255289bb84c2e5792e5a8dea268a", size = 423749 },
+ { url = "https://files.pythonhosted.org/packages/40/1b/54c08dd5452427e1179a40b4b607e37e2664bca1c790c60c442c8e972e47/msgpack-1.1.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4df2311b0ce24f06ba253fda361f938dfecd7b961576f9be3f3fbd60e87130ac", size = 404458 },
+ { url = "https://files.pythonhosted.org/packages/2e/60/6bb17e9ffb080616a51f09928fdd5cac1353c9becc6c4a8abd4e57269a16/msgpack-1.1.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:e4141c5a32b5e37905b5940aacbc59739f036930367d7acce7a64e4dec1f5e0b", size = 405976 },
+ { url = "https://files.pythonhosted.org/packages/ee/97/88983e266572e8707c1f4b99c8fd04f9eb97b43f2db40e3172d87d8642db/msgpack-1.1.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:b1ce7f41670c5a69e1389420436f41385b1aa2504c3b0c30620764b15dded2e7", size = 408607 },
+ { url = "https://files.pythonhosted.org/packages/bc/66/36c78af2efaffcc15a5a61ae0df53a1d025f2680122e2a9eb8442fed3ae4/msgpack-1.1.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4147151acabb9caed4e474c3344181e91ff7a388b888f1e19ea04f7e73dc7ad5", size = 424172 },
+ { url = "https://files.pythonhosted.org/packages/8c/87/a75eb622b555708fe0427fab96056d39d4c9892b0c784b3a721088c7ee37/msgpack-1.1.1-cp313-cp313-win32.whl", hash = "sha256:500e85823a27d6d9bba1d057c871b4210c1dd6fb01fbb764e37e4e8847376323", size = 65347 },
+ { url = "https://files.pythonhosted.org/packages/ca/91/7dc28d5e2a11a5ad804cf2b7f7a5fcb1eb5a4966d66a5d2b41aee6376543/msgpack-1.1.1-cp313-cp313-win_amd64.whl", hash = "sha256:6d489fba546295983abd142812bda76b57e33d0b9f5d5b71c09a583285506f69", size = 72341 },
+]
+
+[[package]]
+name = "pathspec"
+version = "0.12.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/ca/bc/f35b8446f4531a7cb215605d100cd88b7ac6f44ab3fc94870c120ab3adbf/pathspec-0.12.1.tar.gz", hash = "sha256:a482d51503a1ab33b1c67a6c3813a26953dbdc71c31dacaef9a838c4e29f5712", size = 51043 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/cc/20/ff623b09d963f88bfde16306a54e12ee5ea43e9b597108672ff3a408aad6/pathspec-0.12.1-py3-none-any.whl", hash = "sha256:a0d503e138a4c123b27490a4f7beda6a01c6f288df0e4a8b79c7eb0dc7b4cc08", size = 31191 },
+]
+
[[package]]
name = "pydantic"
version = "2.10.6"
@@ -295,6 +370,109 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a0/4b/528ccf7a982216885a1ff4908e886b8fb5f19862d1962f56a3fce2435a70/starlette-0.46.1-py3-none-any.whl", hash = "sha256:77c74ed9d2720138b25875133f3a2dae6d854af2ec37dceb56aef370c1d8a227", size = 71995 },
]
+[[package]]
+name = "tree-sitter"
+version = "0.25.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/89/2b/02a642e67605b9dd59986b00d13a076044dede04025a243f0592ac79d68c/tree-sitter-0.25.1.tar.gz", hash = "sha256:cd761ad0e4d1fc88a4b1b8083bae06d4f973acf6f5f29bbf13ea9609c1dec9c1", size = 177874 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/e0/6c/6160ca15926d11a6957d8bee887f477f3c1d9bc5272c863affc0b50b9cff/tree_sitter-0.25.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:a15d62ffdb095d509bda8c140c1ddd0cc80f0c67f92b87fcc96cd242dc0c71ea", size = 146692 },
+ { url = "https://files.pythonhosted.org/packages/81/4a/e5eb39fe73a514a13bf94acee97925de296d673dace00557763cbbdc938f/tree_sitter-0.25.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:1d938f0a1ffad1206a1a569b0501345eeca81cae0a4487bb485e53768b02f24e", size = 141015 },
+ { url = "https://files.pythonhosted.org/packages/63/22/c8e3ba245e5cdb8c951482028a7ee99d141302047b708dc9d670f0fafd85/tree_sitter-0.25.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ba8cea296de5dcb384b9a15cf526985ac8339c81da51c7e29a251d82071f5ee9", size = 599462 },
+ { url = "https://files.pythonhosted.org/packages/c2/91/c866c3d278ee86354fd81fd055b5d835c510b0e9af07e1cf7e48e2f946b0/tree_sitter-0.25.1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:387fd2bd8657d69e877618dc199c18e2d6fe073b8f5c59e23435f3baee4ee10a", size = 627062 },
+ { url = "https://files.pythonhosted.org/packages/90/96/ac010f72778dae60381ab5fcca9651ac72647d582db0b027ca6c56116920/tree_sitter-0.25.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:afa49e51f82b58ae2c1291d6b79ca31e0fb36c04bd9a20d89007472edfb70136", size = 623788 },
+ { url = "https://files.pythonhosted.org/packages/0e/29/190bdfd54a564a2e43a702884ad5679f4578c481a46161f9f335dd390a70/tree_sitter-0.25.1-cp310-cp310-win_amd64.whl", hash = "sha256:77be45f666adf284914510794b41100decccd71dba88010c03dc2bb0d653acec", size = 127253 },
+ { url = "https://files.pythonhosted.org/packages/da/60/7daca5ccf65fb204c9f2cc2907db6aeaf1cb42aa605427580c17a38a53b3/tree_sitter-0.25.1-cp310-cp310-win_arm64.whl", hash = "sha256:72badac2de4e81ae0df5efe14ec5003bd4df3e48e7cf84dbd9df3a54599ba371", size = 113930 },
+ { url = "https://files.pythonhosted.org/packages/17/dc/0dabb75d249108fb9062d6e9e791e4ad8e9ae5c095e06dd8af770bc07902/tree_sitter-0.25.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:33a8fbaeb2b5049cf5318306ab8b16ab365828b2b21ee13678c29e0726a1d27a", size = 146696 },
+ { url = "https://files.pythonhosted.org/packages/da/d0/b7305a05d65dbcfce7a97a93252bf7384f09800866e9de55a625c76e0257/tree_sitter-0.25.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:797bbbc686d8d3722d25ee0108ad979bda6ad3e1025859ce2ee290e517816bd4", size = 141014 },
+ { url = "https://files.pythonhosted.org/packages/84/d0/d0d8bd13c44ef6379499712a3f5e3930e7db11e5c8eb2af8655e288597a3/tree_sitter-0.25.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:629fc2ae3f5954b0f6a7b42ee3fcd8f34b68ea161e9f02fa5bf709cbbac996d3", size = 604339 },
+ { url = "https://files.pythonhosted.org/packages/c5/13/22869a6da25ffe2dfff922712605e72a9c3481109a93f4218bea1bc65f35/tree_sitter-0.25.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4257018c42a33a7935a5150d678aac05c6594347d6a6e6dbdf7e2ef4ae985213", size = 631593 },
+ { url = "https://files.pythonhosted.org/packages/ec/0c/f4590fc08422768fc57456a85c932888a02e7a13540574859308611be1cf/tree_sitter-0.25.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:4027854c9feee2a3bb99642145ba04ce95d75bd17e292911c93a488cb28d0a04", size = 629265 },
+ { url = "https://files.pythonhosted.org/packages/a7/a8/ee9305ce9a7417715cbf038fdcc4fdb6042e30065c9837bdcf36be440388/tree_sitter-0.25.1-cp311-cp311-win_amd64.whl", hash = "sha256:183faaedcee5f0a3ba39257fa81749709d5eb7cf92c2c050b36ff38468d1774c", size = 127210 },
+ { url = "https://files.pythonhosted.org/packages/48/64/6a39882f534373873ef3dba8a1a8f47dc3bfb39ee63784eac2e789b404c4/tree_sitter-0.25.1-cp311-cp311-win_arm64.whl", hash = "sha256:6a3800235535a2532ce392ed0d8e6f698ee010e73805bdeac2f249da8246bab6", size = 113928 },
+ { url = "https://files.pythonhosted.org/packages/45/79/6dea0c098879d99f41ba919da1ea46e614fb4bf9c4d591450061aeec6fcb/tree_sitter-0.25.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:9362a202144075b54f7c9f07e0b0e44a61eed7ee19e140c506b9e64c1d21ed58", size = 146928 },
+ { url = "https://files.pythonhosted.org/packages/15/30/8002f4e76c7834a6101895ff7524ea29ab4f1f1da1270260ef52e2319372/tree_sitter-0.25.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:593f22529f34dd04de02f56ea6d7c2c8ec99dfab25b58be893247c1090dedd60", size = 140802 },
+ { url = "https://files.pythonhosted.org/packages/38/ec/d297ad9d4a4b26f551a5ca49afe48fdbcb20f058c2eff8d8463ad6c0eed1/tree_sitter-0.25.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ebb6849f76e1cbfa223303fa680da533d452e378d5fe372598e4752838ca7929", size = 606762 },
+ { url = "https://files.pythonhosted.org/packages/4a/1c/05a623cfb420b10d5f782d4ec064cf00fbfa9c21b8526ca4fd042f80acff/tree_sitter-0.25.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:034d4544bb0f82e449033d76dd083b131c3f9ecb5e37d3475f80ae55e8f382bd", size = 634632 },
+ { url = "https://files.pythonhosted.org/packages/c5/e0/f05fd5a2331c16d428efb8eef32dfb80dc6565438146e34e9a235ecd7925/tree_sitter-0.25.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:46a9b721560070f2f980105266e28a17d3149485582cdba14d66dca14692e932", size = 630756 },
+ { url = "https://files.pythonhosted.org/packages/b2/fc/79f3c5d53d1721b95ab6cda0368192a4f1d367e3a5ff7ac21d77e9841782/tree_sitter-0.25.1-cp312-cp312-win_amd64.whl", hash = "sha256:9a5c522b1350a626dc1cbc5dc203133caeaa114d3f65e400445e8b02f18b343b", size = 127157 },
+ { url = "https://files.pythonhosted.org/packages/24/b7/07c4e3f71af0096db6c2ecd83e7d61584e3891c79cb39b208082312d1d60/tree_sitter-0.25.1-cp312-cp312-win_arm64.whl", hash = "sha256:43e7b8e83f9fc29ca62e7d2aa8c38e3fa806ff3fc65e0d501d18588dc1509888", size = 113910 },
+ { url = "https://files.pythonhosted.org/packages/3f/d3/bfb08aab9c7daed2715f303cc017329e3512bb77678cc28829681decadd2/tree_sitter-0.25.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ae1eebc175e6a50b38b0e0385cdc26e92ac0bff9b32ee1c0619bbbf6829d57ea", size = 146920 },
+ { url = "https://files.pythonhosted.org/packages/f9/36/7f897c50489c38665255579646fca8191e1b9e5a29ac9cf11022e42e1e2b/tree_sitter-0.25.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:9e0ae03c4f132f1bffb2bc40b1bb28742785507da693ab04da8531fe534ada9c", size = 140782 },
+ { url = "https://files.pythonhosted.org/packages/16/e6/85012113899296b8e0789ae94f562d3971d7d3df989e8bec6128749394e1/tree_sitter-0.25.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:acf571758be0a71046a61a0936cb815f15b13e0ae7ec6d08398e4aa1560b371d", size = 607590 },
+ { url = "https://files.pythonhosted.org/packages/49/93/605b08dc4cf76d08cfacebc30a88467c6526ea5c94592c25240518e38b71/tree_sitter-0.25.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:632910847e3f8ae35841f92cba88a9a1b8bc56ecc1514a5affebf7951fa0fc0a", size = 635553 },
+ { url = "https://files.pythonhosted.org/packages/ce/27/123667f756bb32168507c940db9040104c606fbb0214397d3c20cf985073/tree_sitter-0.25.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a99ecef7771afb118b2a8435c8ba67ea7a085c60d5d33dc0a4794ed882e5f7df", size = 630844 },
+ { url = "https://files.pythonhosted.org/packages/2f/53/180b0ed74153a3c9a23967f54774d5930c2e0b67671ae4ca0d4d35ba18ac/tree_sitter-0.25.1-cp313-cp313-win_amd64.whl", hash = "sha256:c1d6393454d1f9d4195c74e40a487640cd4390cd4aee90837485f932a1a0f40c", size = 127159 },
+ { url = "https://files.pythonhosted.org/packages/32/fb/b8b7b5122ac4a80cd689a5023f2416910e10f9534ace1cdf0020a315d40d/tree_sitter-0.25.1-cp313-cp313-win_arm64.whl", hash = "sha256:c1d2dbf7d12426b71ff49739f599c355f4de338a5c0ab994de2a1d290f6e0b20", size = 113920 },
+ { url = "https://files.pythonhosted.org/packages/70/8c/cb851da552baf4215baf96443e5e9e39095083a95bc05c4444e640fe0fe8/tree_sitter-0.25.1-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:32cee52264d9ecf98885fcac0185ac63e16251b31dd8b4a3b8d8071173405f8f", size = 146775 },
+ { url = "https://files.pythonhosted.org/packages/f3/59/002c89df1e8f1664b82023e5d0c06de97fff5c2a2e33dce1a241c8909758/tree_sitter-0.25.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:ae024d8ccfef51e61c44a81af7a48670601430701c24f450bea10f4b4effd8d1", size = 140787 },
+ { url = "https://files.pythonhosted.org/packages/39/48/c9e6deb88f3c7f16963ef205e5b8e3ea7f5effd048b4515d09738c7b032b/tree_sitter-0.25.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d025c56c393cea660df9ef33ca60329952a1f8ee6212d21b2b390dfec08a3874", size = 609173 },
+ { url = "https://files.pythonhosted.org/packages/53/a8/b782576d7ea081a87285d974005155da03b6d0c66283fe1e3a5e0dd4bd98/tree_sitter-0.25.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:044aa23ea14f337809821bea7467f33f4c6d351739dca76ba0cbe4d0154d8662", size = 635994 },
+ { url = "https://files.pythonhosted.org/packages/70/0a/c5b6c9cdb7bd4bf0c3d2bd494fcf356acc53f8e63007dc2a836d95bbe964/tree_sitter-0.25.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:1863d96704eb002df4ad3b738294ae8bd5dcf8cefb715da18bff6cb2d33d978e", size = 630944 },
+ { url = "https://files.pythonhosted.org/packages/12/2a/d0b097157c2d487f5e6293dae2c106ec9ede792a6bb780249e81432e754d/tree_sitter-0.25.1-cp314-cp314-win_amd64.whl", hash = "sha256:a40a481e28e1afdbc455932d61e49ffd4163aafa83f4a3deb717524a7786197e", size = 130831 },
+ { url = "https://files.pythonhosted.org/packages/ce/33/3591e7b22dd49f46ae4fdee1db316ecefd0486cae880c5b497a55f0ccb24/tree_sitter-0.25.1-cp314-cp314-win_arm64.whl", hash = "sha256:f7b68f584336b39b2deab9896b629dddc3c784170733d3409f01fe825e9c04eb", size = 117376 },
+]
+
+[[package]]
+name = "tree-sitter-java"
+version = "0.23.5"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/fa/dc/eb9c8f96304e5d8ae1663126d89967a622a80937ad2909903569ccb7ec8f/tree_sitter_java-0.23.5.tar.gz", hash = "sha256:f5cd57b8f1270a7f0438878750d02ccc79421d45cca65ff284f1527e9ef02e38", size = 138121 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/67/21/b3399780b440e1567a11d384d0ebb1aea9b642d0d98becf30fa55c0e3a3b/tree_sitter_java-0.23.5-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:355ce0308672d6f7013ec913dee4a0613666f4cda9044a7824240d17f38209df", size = 58926 },
+ { url = "https://files.pythonhosted.org/packages/57/ef/6406b444e2a93bc72a04e802f4107e9ecf04b8de4a5528830726d210599c/tree_sitter_java-0.23.5-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:24acd59c4720dedad80d548fe4237e43ef2b7a4e94c8549b0ca6e4c4d7bf6e69", size = 62288 },
+ { url = "https://files.pythonhosted.org/packages/4e/6c/74b1c150d4f69c291ab0b78d5dd1b59712559bbe7e7daf6d8466d483463f/tree_sitter_java-0.23.5-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9401e7271f0b333df39fc8a8336a0caf1b891d9a2b89ddee99fae66b794fc5b7", size = 85533 },
+ { url = "https://files.pythonhosted.org/packages/29/09/e0d08f5c212062fd046db35c1015a2621c2631bc8b4aae5740d7adb276ad/tree_sitter_java-0.23.5-cp39-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:370b204b9500b847f6d0c5ad584045831cee69e9a3e4d878535d39e4a7e4c4f1", size = 84033 },
+ { url = "https://files.pythonhosted.org/packages/43/56/7d06b23ddd09bde816a131aa504ee11a1bbe87c6b62ab9b2ed23849a3382/tree_sitter_java-0.23.5-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:aae84449e330363b55b14a2af0585e4e0dae75eb64ea509b7e5b0e1de536846a", size = 82564 },
+ { url = "https://files.pythonhosted.org/packages/da/d6/0528c7e1e88a18221dbd8ccee3825bf274b1fa300f745fd74eb343878043/tree_sitter_java-0.23.5-cp39-abi3-win_amd64.whl", hash = "sha256:1ee45e790f8d31d416bc84a09dac2e2c6bc343e89b8a2e1d550513498eedfde7", size = 60650 },
+ { url = "https://files.pythonhosted.org/packages/72/57/5bab54d23179350356515526fff3cc0f3ac23bfbc1a1d518a15978d4880e/tree_sitter_java-0.23.5-cp39-abi3-win_arm64.whl", hash = "sha256:402efe136104c5603b429dc26c7e75ae14faaca54cfd319ecc41c8f2534750f4", size = 59059 },
+]
+
+[[package]]
+name = "tree-sitter-javascript"
+version = "0.23.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/cd/dc/1c55c33cc6bbe754359b330534cf9f261c1b9b2c26ddf23aef3c5fa67759/tree_sitter_javascript-0.23.1.tar.gz", hash = "sha256:b2059ce8b150162cda05a457ca3920450adbf915119c04b8c67b5241cd7fcfed", size = 110058 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/20/d3/c67d7d49967344b51208ad19f105233be1afdf07d3dcb35b471900265227/tree_sitter_javascript-0.23.1-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:6ca583dad4bd79d3053c310b9f7208cd597fd85f9947e4ab2294658bb5c11e35", size = 59333 },
+ { url = "https://files.pythonhosted.org/packages/a5/db/ea0ee1547679d1750e80a0c4bc60b3520b166eeaf048764cfdd1ba3fd5e5/tree_sitter_javascript-0.23.1-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:94100e491a6a247aa4d14caf61230c171b6376c863039b6d9cd71255c2d815ec", size = 61071 },
+ { url = "https://files.pythonhosted.org/packages/67/6e/07c4857e08be37bfb55bfb269863df8ec908b2f6a3f1893cd852b893ecab/tree_sitter_javascript-0.23.1-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5a6bc1055b061c5055ec58f39ee9b2e9efb8e6e0ae970838af74da0afb811f0a", size = 96999 },
+ { url = "https://files.pythonhosted.org/packages/5f/f5/4de730afe8b9422845bc2064020a8a8f49ebd1695c04261c38d1b3e3edec/tree_sitter_javascript-0.23.1-cp39-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:056dc04fb6b24293f8c5fec43c14e7e16ba2075b3009c643abf8c85edc4c7c3c", size = 94020 },
+ { url = "https://files.pythonhosted.org/packages/77/0a/f980520da86c4eff8392867840a945578ef43372c9d4a37922baa6b121fe/tree_sitter_javascript-0.23.1-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a11ca1c0f736da42967586b568dff8a465ee148a986c15ebdc9382806e0ce871", size = 92927 },
+ { url = "https://files.pythonhosted.org/packages/ff/5c/36a98d512aa1d1082409d6b7eda5d26b820bd4477a54100ad9f62212bc55/tree_sitter_javascript-0.23.1-cp39-abi3-win_amd64.whl", hash = "sha256:041fa22b34250ea6eb313d33104d5303f79504cb259d374d691e38bbdc49145b", size = 58824 },
+ { url = "https://files.pythonhosted.org/packages/dc/79/ceb21988e6de615355a63eebcf806cd2a0fe875bec27b429d58b63e7fb5f/tree_sitter_javascript-0.23.1-cp39-abi3-win_arm64.whl", hash = "sha256:eb28130cd2fb30d702d614cbf61ef44d1c7f6869e7d864a9cc17111e370be8f7", size = 57027 },
+]
+
+[[package]]
+name = "tree-sitter-typescript"
+version = "0.23.2"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/1e/fc/bb52958f7e399250aee093751e9373a6311cadbe76b6e0d109b853757f35/tree_sitter_typescript-0.23.2.tar.gz", hash = "sha256:7b167b5827c882261cb7a50dfa0fb567975f9b315e87ed87ad0a0a3aedb3834d", size = 773053 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/28/95/4c00680866280e008e81dd621fd4d3f54aa3dad1b76b857a19da1b2cc426/tree_sitter_typescript-0.23.2-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:3cd752d70d8e5371fdac6a9a4df9d8924b63b6998d268586f7d374c9fba2a478", size = 286677 },
+ { url = "https://files.pythonhosted.org/packages/8f/2f/1f36fda564518d84593f2740d5905ac127d590baf5c5753cef2a88a89c15/tree_sitter_typescript-0.23.2-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:c7cc1b0ff5d91bac863b0e38b1578d5505e718156c9db577c8baea2557f66de8", size = 302008 },
+ { url = "https://files.pythonhosted.org/packages/96/2d/975c2dad292aa9994f982eb0b69cc6fda0223e4b6c4ea714550477d8ec3a/tree_sitter_typescript-0.23.2-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4b1eed5b0b3a8134e86126b00b743d667ec27c63fc9de1b7bb23168803879e31", size = 351987 },
+ { url = "https://files.pythonhosted.org/packages/49/d1/a71c36da6e2b8a4ed5e2970819b86ef13ba77ac40d9e333cb17df6a2c5db/tree_sitter_typescript-0.23.2-cp39-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e96d36b85bcacdeb8ff5c2618d75593ef12ebaf1b4eace3477e2bdb2abb1752c", size = 344960 },
+ { url = "https://files.pythonhosted.org/packages/7f/cb/f57b149d7beed1a85b8266d0c60ebe4c46e79c9ba56bc17b898e17daf88e/tree_sitter_typescript-0.23.2-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:8d4f0f9bcb61ad7b7509d49a1565ff2cc363863644a234e1e0fe10960e55aea0", size = 340245 },
+ { url = "https://files.pythonhosted.org/packages/8b/ab/dd84f0e2337296a5f09749f7b5483215d75c8fa9e33738522e5ed81f7254/tree_sitter_typescript-0.23.2-cp39-abi3-win_amd64.whl", hash = "sha256:3f730b66396bc3e11811e4465c41ee45d9e9edd6de355a58bbbc49fa770da8f9", size = 278015 },
+ { url = "https://files.pythonhosted.org/packages/9f/e4/81f9a935789233cf412a0ed5fe04c883841d2c8fb0b7e075958a35c65032/tree_sitter_typescript-0.23.2-cp39-abi3-win_arm64.whl", hash = "sha256:05db58f70b95ef0ea126db5560f3775692f609589ed6f8dd0af84b7f19f1cbb7", size = 274052 },
+]
+
+[[package]]
+name = "tree-sitter-zig"
+version = "1.1.2"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/5c/97/75967b81460e0ce999de4736b9ac189dcd5ad1c85aabcc398ba529f4838e/tree_sitter_zig-1.1.2.tar.gz", hash = "sha256:da24db16df92f7fcfa34448e06a14b637b1ff985f7ce2ee19183c489e187a92e", size = 194084 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/b3/c6/db41d3f6c7c0174db56d9122a2a4d8b345c377ca87268e76557b2879675e/tree_sitter_zig-1.1.2-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:e7542354a5edba377b5692b2add4f346501306d455e192974b7e76bf1a61a282", size = 61900 },
+ { url = "https://files.pythonhosted.org/packages/5a/78/93d32fea98b3b031bc0fbec44e27f2b8cc1a1a8ff5a99dfb1a8f85b11d43/tree_sitter_zig-1.1.2-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:daa2cdd7c1a2d278f2a917c85993adb6e84d37778bfc350ee9e342872e7f8be2", size = 67837 },
+ { url = "https://files.pythonhosted.org/packages/40/45/ef5afd6b79bd58731dae2cf61ff7960dd616737397db4d2e926457ff24b7/tree_sitter_zig-1.1.2-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1962e95067ac5ee784daddd573f828ef32f15e9c871967df6833d3d389113eae", size = 83391 },
+ { url = "https://files.pythonhosted.org/packages/78/02/275523eb05108d83e154f52c7255763bac8b588ae14163563e19479322a7/tree_sitter_zig-1.1.2-cp39-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e924509dcac5a6054da357e3d6bcf37ea82984ee1d2a376569753d32f61ea8bb", size = 82323 },
+ { url = "https://files.pythonhosted.org/packages/ef/e9/ff3c11097e37d4d899155c8fbdf7531063b6d15ee252b2e01ce0063f0218/tree_sitter_zig-1.1.2-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:d8f463c370cdd71025b8d40f90e21e8fc25c7394eb64ebd53b1e566d712a3a68", size = 81383 },
+ { url = "https://files.pythonhosted.org/packages/ab/5c/f5fb2ce355bbd381e647b04e8b2078a4043e663b6df6145d87550d3c3fe5/tree_sitter_zig-1.1.2-cp39-abi3-win_amd64.whl", hash = "sha256:7b94f00a0e69231ac4ebf0aa763734b9b5637e0ff13634ebfe6d13fadece71e9", size = 65105 },
+ { url = "https://files.pythonhosted.org/packages/34/8d/c0a481cc7bba9d39c533dd3098463854b5d3c4e6134496d9d83cd1331e51/tree_sitter_zig-1.1.2-cp39-abi3-win_arm64.whl", hash = "sha256:88152ebeaeca1431a6fc943a8b391fee6f6a8058f17435015135157735061ddf", size = 63219 },
+]
+
[[package]]
name = "typing-extensions"
version = "4.12.2"
@@ -317,3 +495,36 @@ sdist = { url = "https://files.pythonhosted.org/packages/4b/4d/938bd85e5bf2edeec
wheels = [
{ url = "https://files.pythonhosted.org/packages/61/14/33a3a1352cfa71812a3a21e8c9bfb83f60b0011f5e36f2b1399d51928209/uvicorn-0.34.0-py3-none-any.whl", hash = "sha256:023dc038422502fa28a09c7a30bf2b6991512da7dcdb8fd35fe57cfc154126f4", size = 62315 },
]
+
+[[package]]
+name = "watchdog"
+version = "6.0.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/db/7d/7f3d619e951c88ed75c6037b246ddcf2d322812ee8ea189be89511721d54/watchdog-6.0.0.tar.gz", hash = "sha256:9ddf7c82fda3ae8e24decda1338ede66e1c99883db93711d8fb941eaa2d8c282", size = 131220 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/0c/56/90994d789c61df619bfc5ce2ecdabd5eeff564e1eb47512bd01b5e019569/watchdog-6.0.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d1cdb490583ebd691c012b3d6dae011000fe42edb7a82ece80965b42abd61f26", size = 96390 },
+ { url = "https://files.pythonhosted.org/packages/55/46/9a67ee697342ddf3c6daa97e3a587a56d6c4052f881ed926a849fcf7371c/watchdog-6.0.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bc64ab3bdb6a04d69d4023b29422170b74681784ffb9463ed4870cf2f3e66112", size = 88389 },
+ { url = "https://files.pythonhosted.org/packages/44/65/91b0985747c52064d8701e1075eb96f8c40a79df889e59a399453adfb882/watchdog-6.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c897ac1b55c5a1461e16dae288d22bb2e412ba9807df8397a635d88f671d36c3", size = 89020 },
+ { url = "https://files.pythonhosted.org/packages/e0/24/d9be5cd6642a6aa68352ded4b4b10fb0d7889cb7f45814fb92cecd35f101/watchdog-6.0.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6eb11feb5a0d452ee41f824e271ca311a09e250441c262ca2fd7ebcf2461a06c", size = 96393 },
+ { url = "https://files.pythonhosted.org/packages/63/7a/6013b0d8dbc56adca7fdd4f0beed381c59f6752341b12fa0886fa7afc78b/watchdog-6.0.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ef810fbf7b781a5a593894e4f439773830bdecb885e6880d957d5b9382a960d2", size = 88392 },
+ { url = "https://files.pythonhosted.org/packages/d1/40/b75381494851556de56281e053700e46bff5b37bf4c7267e858640af5a7f/watchdog-6.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:afd0fe1b2270917c5e23c2a65ce50c2a4abb63daafb0d419fde368e272a76b7c", size = 89019 },
+ { url = "https://files.pythonhosted.org/packages/39/ea/3930d07dafc9e286ed356a679aa02d777c06e9bfd1164fa7c19c288a5483/watchdog-6.0.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:bdd4e6f14b8b18c334febb9c4425a878a2ac20efd1e0b231978e7b150f92a948", size = 96471 },
+ { url = "https://files.pythonhosted.org/packages/12/87/48361531f70b1f87928b045df868a9fd4e253d9ae087fa4cf3f7113be363/watchdog-6.0.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c7c15dda13c4eb00d6fb6fc508b3c0ed88b9d5d374056b239c4ad1611125c860", size = 88449 },
+ { url = "https://files.pythonhosted.org/packages/5b/7e/8f322f5e600812e6f9a31b75d242631068ca8f4ef0582dd3ae6e72daecc8/watchdog-6.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6f10cb2d5902447c7d0da897e2c6768bca89174d0c6e1e30abec5421af97a5b0", size = 89054 },
+ { url = "https://files.pythonhosted.org/packages/68/98/b0345cabdce2041a01293ba483333582891a3bd5769b08eceb0d406056ef/watchdog-6.0.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:490ab2ef84f11129844c23fb14ecf30ef3d8a6abafd3754a6f75ca1e6654136c", size = 96480 },
+ { url = "https://files.pythonhosted.org/packages/85/83/cdf13902c626b28eedef7ec4f10745c52aad8a8fe7eb04ed7b1f111ca20e/watchdog-6.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:76aae96b00ae814b181bb25b1b98076d5fc84e8a53cd8885a318b42b6d3a5134", size = 88451 },
+ { url = "https://files.pythonhosted.org/packages/fe/c4/225c87bae08c8b9ec99030cd48ae9c4eca050a59bf5c2255853e18c87b50/watchdog-6.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a175f755fc2279e0b7312c0035d52e27211a5bc39719dd529625b1930917345b", size = 89057 },
+ { url = "https://files.pythonhosted.org/packages/30/ad/d17b5d42e28a8b91f8ed01cb949da092827afb9995d4559fd448d0472763/watchdog-6.0.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:c7ac31a19f4545dd92fc25d200694098f42c9a8e391bc00bdd362c5736dbf881", size = 87902 },
+ { url = "https://files.pythonhosted.org/packages/5c/ca/c3649991d140ff6ab67bfc85ab42b165ead119c9e12211e08089d763ece5/watchdog-6.0.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:9513f27a1a582d9808cf21a07dae516f0fab1cf2d7683a742c498b93eedabb11", size = 88380 },
+ { url = "https://files.pythonhosted.org/packages/a9/c7/ca4bf3e518cb57a686b2feb4f55a1892fd9a3dd13f470fca14e00f80ea36/watchdog-6.0.0-py3-none-manylinux2014_aarch64.whl", hash = "sha256:7607498efa04a3542ae3e05e64da8202e58159aa1fa4acddf7678d34a35d4f13", size = 79079 },
+ { url = "https://files.pythonhosted.org/packages/5c/51/d46dc9332f9a647593c947b4b88e2381c8dfc0942d15b8edc0310fa4abb1/watchdog-6.0.0-py3-none-manylinux2014_armv7l.whl", hash = "sha256:9041567ee8953024c83343288ccc458fd0a2d811d6a0fd68c4c22609e3490379", size = 79078 },
+ { url = "https://files.pythonhosted.org/packages/d4/57/04edbf5e169cd318d5f07b4766fee38e825d64b6913ca157ca32d1a42267/watchdog-6.0.0-py3-none-manylinux2014_i686.whl", hash = "sha256:82dc3e3143c7e38ec49d61af98d6558288c415eac98486a5c581726e0737c00e", size = 79076 },
+ { url = "https://files.pythonhosted.org/packages/ab/cc/da8422b300e13cb187d2203f20b9253e91058aaf7db65b74142013478e66/watchdog-6.0.0-py3-none-manylinux2014_ppc64.whl", hash = "sha256:212ac9b8bf1161dc91bd09c048048a95ca3a4c4f5e5d4a7d1b1a7d5752a7f96f", size = 79077 },
+ { url = "https://files.pythonhosted.org/packages/2c/3b/b8964e04ae1a025c44ba8e4291f86e97fac443bca31de8bd98d3263d2fcf/watchdog-6.0.0-py3-none-manylinux2014_ppc64le.whl", hash = "sha256:e3df4cbb9a450c6d49318f6d14f4bbc80d763fa587ba46ec86f99f9e6876bb26", size = 79078 },
+ { url = "https://files.pythonhosted.org/packages/62/ae/a696eb424bedff7407801c257d4b1afda455fe40821a2be430e173660e81/watchdog-6.0.0-py3-none-manylinux2014_s390x.whl", hash = "sha256:2cce7cfc2008eb51feb6aab51251fd79b85d9894e98ba847408f662b3395ca3c", size = 79077 },
+ { url = "https://files.pythonhosted.org/packages/b5/e8/dbf020b4d98251a9860752a094d09a65e1b436ad181faf929983f697048f/watchdog-6.0.0-py3-none-manylinux2014_x86_64.whl", hash = "sha256:20ffe5b202af80ab4266dcd3e91aae72bf2da48c0d33bdb15c66658e685e94e2", size = 79078 },
+ { url = "https://files.pythonhosted.org/packages/07/f6/d0e5b343768e8bcb4cda79f0f2f55051bf26177ecd5651f84c07567461cf/watchdog-6.0.0-py3-none-win32.whl", hash = "sha256:07df1fdd701c5d4c8e55ef6cf55b8f0120fe1aef7ef39a1c6fc6bc2e606d517a", size = 79065 },
+ { url = "https://files.pythonhosted.org/packages/db/d9/c495884c6e548fce18a8f40568ff120bc3a4b7b99813081c8ac0c936fa64/watchdog-6.0.0-py3-none-win_amd64.whl", hash = "sha256:cbafb470cf848d93b5d013e2ecb245d4aa1c8fd0504e863ccefa32445359d680", size = 79070 },
+ { url = "https://files.pythonhosted.org/packages/33/e8/e40370e6d74ddba47f002a32919d91310d6074130fe4e17dabcafc15cbf1/watchdog-6.0.0-py3-none-win_ia64.whl", hash = "sha256:a1914259fa9e1454315171103c6a30961236f508b9b623eae470268bbcc6a22f", size = 79067 },
+]
+