# @aigentyc/mcp — full LLM context

This file is the deep reference for AI coding tools (Cursor, Claude Code, Windsurf, Claude Desktop). It is generated from this package's docs + registered MCP tools at build time. For a short overview, read `llms.txt`.

---

## README.md

# @aigentyc/mcp

Model Context Protocol server for [aiGentyc](https://aigentyc.com) — lets
Claude Code, Cursor, Windsurf, and any other MCP-compatible agent drive the
content/authoring side of your aiGentyc project (documents, crawling, data
stores, custom tools, config, backups, …) without clicking through the
dashboard.

Chat/search embedding is **not** part of this package — use the separate
`@aigentyc/chat-sdk` (React) for user-facing chat.

## Install

```bash
npx @aigentyc/mcp login \
  --api-key tyco_pk_XXXX \
  --project-id proj_XXXX
```

`login` verifies the key against `/api/auth/api-keys/verify` and writes
`~/.aigentyc/config.json` with `0600` perms.

**Dev-only flag**: pass `--allow-insecure` to permit plaintext HTTP against
non-loopback hosts (e.g. a staging box without TLS). Never use this against
production — all real traffic must be HTTPS.

Then wire it into your agent:

### Claude Desktop / Claude Code

`~/Library/Application Support/Claude/claude_desktop_config.json` (macOS):

```json
{
  "mcpServers": {
    "aigentyc": {
      "command": "npx",
      "args": ["-y", "@aigentyc/mcp"]
    }
  }
}
```

### Cursor

Settings → MCP → add server:

```json
{
  "aigentyc": {
    "command": "npx",
    "args": ["-y", "@aigentyc/mcp"]
  }
}
```

## Commands

```
aigentyc-mcp serve      Run the stdio MCP server (default when invoked with no args)
aigentyc-mcp login      Save + verify an API key profile
aigentyc-mcp logout     Remove a profile
aigentyc-mcp doctor     Verify config + dashboard reachability
```

## Tools (86 total, 20 domains)

End-to-end coverage. Highlights:

- **`aigentyc_get_started`** — call this first. Returns project status + a
  prioritised list of next steps the agent should walk the user through.
- **Embed the chat** — `chat_widget_setup` (paste snippet for existing app),
  `chat_widget_scaffold` (runs `npm create aigentyc-chat@latest` for a fresh
  starter), `chat_widget_get_snippet` (just the code).
- **Add content** — `files_upload`, `documents_create_from_text`,
  `extract_from_urls`, `link_sources_create`, `data_stores_*`.
- **Configure** — `config_update` (system prompt, model, …),
  `personas_upsert`, `tools_create` + `flows_create` (custom tool actions).
- **Operate** — `backups_*`, `analytics_*`, `jobs_status` / `jobs_wait`.

See `llms.txt` for the full tool inventory and recipes.

## Vibe-coder one-shot

```
You:    "Add my docs/ folder, set the system prompt, and scaffold a Next.js
         chat app at ./my-app."

Claude: aigentyc_get_started        → "kb empty, no system prompt"
        files_upload({ paths: [...] })
        config_update({ patch: { systemPrompt: "..." } })
        chat_widget_scaffold({ destination: "./my-app", template: "next",
                               confirm: true })
        → ✓ done. cd my-app && npm install && npm run dev
```

## Security

- API keys are project-scoped; a key for project A cannot read/write B.
- `~/.aigentyc/config.json` is written `0600`. The server refuses to start
  with wider perms.
- The HTTP client refuses plaintext HTTP to non-loopback hosts.
- Destructive operations (e.g. `documents_delete`) require `confirm: true`.
- Per-API-key rate limits: 300 reads/min, 60 writes/min (429 over limit).
- Every API-key-authed request is logged server-side
  (`api_key_audit_log` table) with `keyId`, `projectId`, `route`,
  `method`, `status`, and `X-Request-Id` for tracing.
- `files_upload` refuses paths that escape `$CWD` or `$HOME`, rejects
  non-regular files, and caps batches at 50MB/file, 500MB total.
- `extract_from_urls` prefilters RFC1918 / loopback / cloud-metadata URLs.

## Publishing

```
cd mcp-server
npm run build
npm run smoke           # stdio JSON-RPC smoke test
npm pack --dry-run      # inspect what would ship
npm publish --access public
```

## Deferred features

Tracked for v0.2+:

- **`backups_download_all` (ZIP) secret redaction** for API-key callers.
  Current implementation redacts JSON downloads but not the archived ZIP.
  Session-authenticated callers are unaffected. Recommendation: use session
  auth for ZIP downloads for now.
- **`/api/extract/*` dual-auth + binary-file uploads** (PDF/DOC/DOCX).
  The extract proxy currently has no auth guard — not exposed to MCP.
  `files_upload` is therefore restricted to UTF-8 text formats only.
- **Analytics sessions/comments write paths** — MCP is read-only by design.
- **Custom rate-limit overrides per-key** — one limit for all keys today.

---

## AGENTS.md

# AGENTS.md — @aigentyc/mcp

Guide for AI coding assistants (Claude Code, Cursor, Windsurf, Claude Desktop, Codex) using or contributing to this MCP server. Humans: read `README.md` first.

**Live docs (always current):** see `context7.json` in this repo for the URL Context7 mirrors.

---

## What this package is

A Model Context Protocol (MCP) server that lets an LLM agent drive the **content & authoring** side of an Aigentyc project: documents, files, link-source crawling, custom tools, data stores, project config, backups, analytics. It is NOT a chat client — for embedding chat into your own app use the sister package `@aigentyc/chat-sdk`.

- Package name: `@aigentyc/mcp`
- Binary: `aigentyc-mcp` (Node 20+, ESM)
- Transport: stdio (default), wire into Claude Desktop / Cursor MCP config
- Auth: project-scoped API keys (`tyco_pk_...`) verified by Aigentyc auth-service
- Storage: `~/.aigentyc/config.json` (0600, refuses to start otherwise)

---

## Canonical install + usage (paste-ready)

```bash
# 1. Save + verify an API key (one-time per profile)
npx @aigentyc/mcp login \
  --api-key tyco_pk_XXXX \
  --project-id proj_XXXX

# 2. Wire into Claude Desktop / Cursor (one-time)
#    See README for the JSON snippet. Restart the client.

# 3. Sanity check
npx @aigentyc/mcp doctor
```

For local dev against a self-hosted backend on a different port:

```bash
npx @aigentyc/mcp login --api-key ... --project-id ... \
  --base-url http://localhost:3000 \
  --allow-insecure
```

---

## Default conversation opener

**Always call `aigentyc_get_started` first** in any new conversation about an aiGentyc project. It returns project state + a prioritised `nextSteps` list that tells you which of the tools below to call next. Don't guess at the user's progress — ask the project.

## Decision tree: which tool to use

```
First contact in a conversation?                    → aigentyc_get_started

Need to add markdown / text content to the KB?       → files_upload (paths)
Need to add raw text without a file?                 → documents_create_from_text
Need to crawl a public website into the KB?         → link_sources_create + jobs_wait
Need to extract from a list of public URLs?         → extract_from_urls + jobs_wait
Need to add a structured table (rows of data)?      → data_stores_create + bulk_upsert
Need to scrape a website into a data store?         → data_stores_scan_website + jobs_wait
Need to expose a custom action to chat?             → tools_create (rest_api / custom_execute / hybrid)
Need to chain tools?                                → flows_create
Need to tweak system prompt or chat model?          → config_update
Need to back up before a risky change?              → backups_create + jobs_wait
Need to inspect what's already in the KB?           → kb_search (read-only, non-streaming)
Need to see analytics?                              → analytics_overview / sessions / comments

Ready to put chat on the user's site?
  - User has an existing app already?              → chat_widget_setup → paste snippet
  - User wants a fresh starter?                    → chat_widget_scaffold (runs npm create)
  - User just wants the code, no readiness check?  → chat_widget_get_snippet
```

## End-to-end flow (the "vibe coder one-shot")

This is the canonical journey from "I have docs" to "chat is live on my site":

```
1. aigentyc_get_started               → tells you what's missing
2. files_upload / documents_create_from_text / extract_from_urls
                                      → fill the KB
3. config_update                      → set systemPrompt + chatModel
4. tools_create (optional)            → expose REST/JS actions to chat
5. aigentyc_get_started               → confirm snippetReady=true
6. chat_widget_scaffold (greenfield)  → npm create aigentyc-chat@latest
   OR
   chat_widget_setup (existing app)   → returns paste-ready snippet
```

When the user asks "how do I get started" or "set this up for me", run the journey from the top. Don't try to skip steps — `aigentyc_get_started` will tell you which steps are already done.

---

## Important behaviors agents should know

### 1. Tool inputs are validated by Zod

Required string fields enforce `min(1)` server-side via the chat-tools layer. Don't try to call a tool with `""` to "see what happens" — Zod will 400 and the LLM will be told to ask the user for real values.

### 2. Async ops return jobIds

These tools are intentionally non-blocking:
`backups_create`, `backups_restore`, `extract_from_urls`, `data_stores_scan_website`, `page_questions_auto_generate`, `files_upload` (when `indexAsDocuments=true` and the batch is large).

Pattern:
```
const { jobId } = await tool({ ... })
const result = await jobs_wait({ jobId })
```

There's a per-project concurrency cap of 3 active jobs; exceeding it returns a 429.
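The pattern can be sketched as a standalone polling loop (illustrative: `jobs_wait` does this for you server-side, and the job shape below is an assumption):

```typescript
type JobStatus = "queued" | "running" | "completed" | "failed" | "cancelled";
interface Job { jobId: string; status: JobStatus; result?: unknown }

const TERMINAL: ReadonlySet<JobStatus> = new Set(["completed", "failed", "cancelled"]);

// Poll until the job reaches a terminal state, then return the final row.
async function jobsWait(
  fetchStatus: (jobId: string) => Promise<Job>,
  jobId: string,
  pollMs = 50,
): Promise<Job> {
  for (;;) {
    const job = await fetchStatus(jobId);
    if (TERMINAL.has(job.status)) return job;
    await new Promise((r) => setTimeout(r, pollMs));
  }
}

// Simulated backend: the job completes on the third poll.
let polls = 0;
const fakeFetch = async (jobId: string): Promise<Job> =>
  ++polls < 3
    ? { jobId, status: "running" }
    : { jobId, status: "completed", result: { docs: 14 } };
```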

### 3. Destructive ops require `confirm: true`

`documents_delete`, `files_delete`, `data_stores_delete`, `tools_delete`, `flows_delete`, `personas_delete`, `link_sources_delete`, `golden_answers_delete`, `backups_restore`, `data_stores_bulk_delete`. The agent should ALWAYS state what's about to be deleted and ask the user to approve before passing `confirm: true`.
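The short-circuit convention looks roughly like this (a hypothetical handler shape; the real tools go through `defineTool` + Zod):

```typescript
interface DeleteArgs { documentName: string; confirm?: boolean }

function documentsDelete(args: DeleteArgs): { ok: boolean; message: string } {
  if (!args.confirm) {
    // Short-circuit: state what WOULD be deleted so the agent can ask the user.
    return {
      ok: false,
      message: `Refusing to delete "${args.documentName}" without confirm: true`,
    };
  }
  return { ok: true, message: `Deleted ${args.documentName}` };
}
```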

### 4. Secrets are server-rejected, not just client-warned

Calling `config_update({ patch: { openaiApiKey: "..." } })` returns 403 from the server, not a client warning. Same for `customHttpHeaders` and `neonConnectionString`. To rotate secrets, use the dashboard.

### 5. Custom tool code can't reference `process.env`

`tools_create` / `tools_update` rejects `transformCode` / `executeCode` containing `process.env`. The right way to inject secrets is `apiConfig.headers` with template placeholders that the executor resolves.
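A minimal sketch of that guard (an assumption: the real check may be stricter, e.g. AST-based rather than a regex):

```typescript
// Reject tool code that reads process.env, including spaced variants.
function rejectsEnvAccess(code: string): boolean {
  return /process\s*\.\s*env/.test(code);
}
```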

### 6. URL inputs are SSRF-prefiltered

`extract_from_urls`, `link_sources_create` reject RFC1918 / loopback / cloud-metadata hosts both client-side AND server-side. Don't try `192.168.x.x` or `169.254.169.254` — they 400.
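An illustrative prefilter covering the blocklist the docs describe (RFC1918 ranges, loopback, the cloud metadata endpoint); the real implementation also resolves DNS server-side, whereas this only inspects literal hosts:

```typescript
function isBlockedHost(rawUrl: string): boolean {
  const host = new URL(rawUrl).hostname;
  if (host === "localhost" || host === "169.254.169.254") return true;
  if (/^127\./.test(host)) return true;                     // loopback
  if (/^10\./.test(host)) return true;                      // RFC1918 10/8
  if (/^192\.168\./.test(host)) return true;                // RFC1918 192.168/16
  if (/^172\.(1[6-9]|2\d|3[01])\./.test(host)) return true; // RFC1918 172.16/12
  return false;
}
```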

### 7. File uploads are FS-sandboxed

`files_upload` rejects paths outside `$CWD` and `$HOME`, refuses non-regular files, refuses symlinks that escape, caps 50MB/file and 500MB/batch. Text formats only (`.md .txt .json .csv .html .yaml .xml .rtf .log`). Binary formats (PDF/DOC/DOCX) need to go through the dashboard's file upload — out of scope for v0.1.
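The escape check can be sketched as a relative-path test (assuming "inside" means resolving under an allowed root with no `..` escape; symlink handling via `realpath` is omitted here):

```typescript
import { resolve, relative, isAbsolute } from "node:path";

// True when `candidate`, resolved against `root`, stays under `root`.
function isInsideRoot(root: string, candidate: string): boolean {
  const rel = relative(resolve(root), resolve(root, candidate));
  return rel === "" || (!rel.startsWith("..") && !isAbsolute(rel));
}
```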

---

## Common pitfalls

- **"Tool created but agent doesn't see it"** → fully quit + relaunch the MCP client. The MCP server registers tools at startup.

- **"Created a doc but it's not under /knowledge-base/text"** → check `sourceType`. `documents_create_from_text` always uses `manual`. URLs sourced docs need `extract_from_urls` (which links them to a `link_source`).

- **"Tool fires with empty fields"** → the tool's `inputSchema` declared a string param without `required: true`. Update the tool schema.

- **"Persona writes return 404"** → upgrade your client; older agents may hit endpoints that didn't have writes.

- **"401 from a tool that worked yesterday"** → the API key was probably revoked. Re-run `aigentyc-mcp login`.

- **"Tool calls work for me but not from my CI"** → `~/.aigentyc/config.json` doesn't exist in the CI environment. Either run `login` in a build step or set `AIGENTYC_CONFIG=/path/to/config.json` env var pointing at a CI-safe location.

---

## Repository layout

```
src/
├── cli.ts            # entry: serve | login | logout | doctor
├── server.ts         # stdio MCP bootstrap (uses @modelcontextprotocol/sdk)
├── client.ts         # HTTP client (Bearer + X-Request-Id + retries + error map)
├── config.ts         # ~/.aigentyc/config.json read/write (0600 enforced)
├── errors.ts         # AigentycError + httpStatusToCode
├── fs-safety.ts      # path resolve + glob + size cap (used by files_upload)
└── tools/            # one file per domain; each exports defineTool([])
    ├── documents.ts
    ├── files.ts
    ├── extract.ts
    ├── link_sources.ts
    ├── page_questions.ts
    ├── data_stores.ts
    ├── tools_crud.ts   # custom tools + flows
    ├── golden_answers.ts
    ├── corrections.ts
    ├── personas.ts
    ├── config.ts
    ├── backups.ts
    ├── analytics.ts
    ├── misc.ts         # branches, unified_links, audience, kb_search
    ├── jobs.ts         # status / list / wait
    └── helpers.ts      # pid() + projectIdArg
```

---

## Adding a new tool

1. Pick a domain file (or create a new one if it's a new domain).
2. Use `defineTool({ name, title, description, inputSchema, handler })` from `./types.js`.
3. The handler gets parsed args (Zod-validated by the SDK) and an injected `DashboardClient`. Call `client.get/post/patch/delete(path, body|query)`.
4. Register it in `src/tools/index.ts` if a new domain.
5. Run `npm run build && npm run smoke` — the smoke test asserts no duplicate names + tool count.

Conventions:
- Tool names: `domain_action` (snake_case). Example: `documents_create_from_text`.
- Title: short human label.
- Description: tells the LLM **when** to call the tool, not **how**. Cross-reference adjacent tools when ambiguity is likely (e.g. "for URL-sourced content use extract_from_urls instead").
- Always use `pid(client, args.projectId)` to default to the active profile's projectId.
- Destructive tools: include `confirm: z.boolean().default(false)` and short-circuit when `false`.
- For long-running ops: assume the backend returns `{ jobId, status }` and don't try to wait inline. The user can chain `jobs_wait`.

---

## Releasing a new version

```bash
node scripts/release.mjs            # patch bump (default)
node scripts/release.mjs minor      # minor bump
node scripts/release.mjs major      # major bump
node scripts/release.mjs 1.2.3      # explicit
node scripts/release.mjs --dry      # rehearsal (no publish)
```

The script:
1. Bumps `package.json` version
2. Rewrites `context7.json` URL to self-reference the new version on jsdelivr
3. Runs `npm run build` (TypeScript + llms-full.txt)
4. `npm publish --access public`
5. Polls jsdelivr until the published tarball is mirrored
6. Prints the Context7 URLs for re-registration if needed

Requires npm credentials with publish rights on `@aigentyc`.

---

## Where to ask for help

- Bugs / feature requests → file in this repo
- Questions about the **platform** (`app.aigentyc.com`) → main monorepo
- Questions about embedding a chat UI → `@aigentyc/chat-sdk`

---

## llms.txt (overview, recipes, troubleshooting)

# @aigentyc/mcp

> Model Context Protocol server for the Aigentyc platform. Lets Claude Code, Cursor, Windsurf, Claude Desktop, and any MCP-compatible agent drive an Aigentyc project end-to-end: add documents, configure the chat agent, build custom tools — and scaffold or embed the front-end chat widget (`@aigentyc/chat-sdk`) — all from one conversation. The output is a working chat experience on the user's site.

## Zero-to-chat in one conversation

```
You:    "Add my docs/ folder, set the system prompt to be terse and helpful,
         and scaffold a Next.js chat app at ./my-app."

Claude: aigentyc_get_started        → "kb empty, no system prompt"
        files_upload({ paths: [...] }) → 14 docs ingested
        config_update({ patch: { systemPrompt: "..." } })
        aigentyc_get_started        → "snippetReady: true"
        chat_widget_scaffold({ destination: "./my-app", template: "next" })
        → ✓ done. cd my-app && npm install && npm run dev
```

## End-to-end recipe

1. **Discover state** — `aigentyc_get_started` returns project status + prioritised `nextSteps`. Always call this first in a new conversation.
2. **Add content** — pick the right ingest tool: `files_upload` (local files), `documents_create_from_text` (raw text), `extract_from_urls` (batch URLs), `link_sources_create` (crawl a website).
3. **Tune the chat** — `config_update` for the system prompt + chat model, `personas_upsert` for audience-specific tone, `tools_create` for custom REST/JS actions exposed to chat.
4. **Embed** — three options:
   - **`chat_widget_setup`** — confirms readiness, returns paste-ready snippet + install commands. Use when user has an app already.
   - **`chat_widget_scaffold`** — runs `npm create aigentyc-chat@latest <dir>` to bootstrap a fresh Vite or Next.js starter wired to the project. For greenfield apps.
   - **`chat_widget_get_snippet`** — just the snippet, no readiness check. For when you've already verified setup.

## Install

```bash
npx @aigentyc/mcp login \
  --api-key tyco_pk_XXXX \
  --project-id proj_XXXX
```

`login` verifies the key against `/api/auth/api-keys/verify` and writes `~/.aigentyc/config.json` with `0600` perms. The server refuses to start if perms are wider.

**Dev-only flag**: pass `--allow-insecure` to permit plaintext HTTP against non-loopback hosts. Never use this against production.

## Wire into your agent

### Claude Desktop / Claude Code

`~/Library/Application Support/Claude/claude_desktop_config.json` (macOS):

```json
{
  "mcpServers": {
    "aigentyc": {
      "command": "npx",
      "args": ["-y", "@aigentyc/mcp"]
    }
  }
}
```

### Cursor

Settings → MCP → add server:

```json
{
  "aigentyc": {
    "command": "npx",
    "args": ["-y", "@aigentyc/mcp"]
  }
}
```

Restart the client. The new tools appear under the `aigentyc` namespace.

## Commands

```
aigentyc-mcp serve      Start the stdio MCP server (default)
aigentyc-mcp login      Save + verify an API key profile
aigentyc-mcp logout     Remove a profile
aigentyc-mcp doctor     Verify config + transport + clock skew + reachability
```

## Tools (86 total, 20 domains)

- **embed (E2E entry points)** — aigentyc_get_started, chat_widget_setup, chat_widget_get_snippet, chat_widget_scaffold
- **documents** — list, get, delete, create_from_text, merge, versions
- **files** — list, get, upload (path-based), delete, reprocess, versions, bulk_audience
- **extract** — from_urls (jobified for api-key)
- **link_sources** — list, get, create, update, delete, list_items, update_item, delete_item
- **page_questions** — list, set_manual, auto_generate (jobified), delete_page
- **data_stores** — full CRUD + bulk_upsert + import + export + scan_website (jobified)
- **tools** (custom-tool CRUD) — list, get, create, update, delete, execute_dry_run
- **flows** — list, get, create, update, delete
- **golden_answers** — list, create, update, delete + ai_improve + ai_generate_variations
- **corrections** — list, create, update_status
- **personas** — list, upsert, delete
- **config** — get (secrets redacted for api-key), update (secret writes rejected)
- **backups** — list, create (jobified), restore (jobified), download, import, delete
- **analytics** — overview, sessions, session_timeline, comments
- **branches** — list
- **search** — kb_search (read-only authoring aid)
- **jobs** — status, list, wait

## Async job pattern

Long-running ops (backup create/restore, extract, data-store scan, page-questions auto-generate) return `{ jobId, status: "queued" }`. Poll with `jobs_wait`:

```
extract_from_urls({ urls: [...] })
  → { jobId: "abc-123", status: "queued" }
jobs_wait({ jobId: "abc-123" })
  → blocks until terminal; returns { status: "completed", result: {...} }
```

Concurrency cap: 3 active jobs per project.

## Security

- **Project-scoped keys**: a key for project A cannot read/write project B (403).
- **Per-key rate limits**: 300 reads/min, 60 writes/min. 429 with `Retry-After` over the limit.
- **Secrets**: `config.get` redacts `openaiApiKey` + `customHttpHeaders` to `***` for api-key callers. `config.update` rejects writes to those fields with 403. Backup downloads (single + ZIP) deep-redact secrets.
- **Custom tool code**: `tools.create` rejects `transformCode` / `executeCode` containing `process.env`. Use `apiConfig.headers` with template placeholders for secrets.
- **Destructive ops** (`*.delete`, `backups.restore`): require `confirm: true`.
- **Path uploads**: `files.upload` refuses paths outside `$CWD` / `$HOME`, refuses non-regular files, caps 50MB/file, 500MB/batch.
- **URL inputs**: `extract.from_urls` + `link_sources.create` reject RFC1918 / loopback / cloud-metadata hosts.
- **Audit log**: every api-key request is logged server-side (`api_key_audit_log` table) with `keyId`, `projectId`, `route`, `method`, `status`, `latencyMs`, `requestId`. Reads sampled at 10%; writes 100%.
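Given those limits, a caller-side retry that honors `Retry-After` might look like this (a sketch assuming the header is expressed in seconds; the response shape is a stand-in, not the real client's):

```typescript
async function withRetry<T>(
  call: () => Promise<{ status: number; retryAfterSec?: number; body?: T }>,
  maxAttempts = 3,
  sleep: (ms: number) => Promise<void> =
    (ms) => new Promise<void>((r) => setTimeout(r, ms)),
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    const res = await call();
    if (res.status !== 429) return res.body as T;     // success (or non-rate-limit error)
    if (attempt >= maxAttempts) throw new Error("rate limited: giving up");
    await sleep((res.retryAfterSec ?? 1) * 1000);     // wait as instructed, then retry
  }
}

// Simulated server: 429 twice, then success.
let calls = 0;
const mockCall = async () =>
  ++calls < 3
    ? { status: 429, retryAfterSec: 0 }
    : { status: 200, body: "ok" };
```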

## Recipes

### Vibe-coder one-shot — docs to embedded chat in 60 seconds

```
1. aigentyc_get_started()
2. files_upload({ paths: ["docs/**/*.md"], indexAsDocuments: true })
3. config_update({ patch: { systemPrompt: "You are the support agent for ACME. Cite sources from docs." } })
4. chat_widget_scaffold({ destination: "./acme-chat", template: "next", confirm: true })
5. → cd acme-chat && npm install && npm run dev
```

### Add a markdown KB from local files

```
files_upload({ paths: ["docs/intro.md", "docs/api.md"], indexAsDocuments: true })
```

The MCP server reads each file (text formats only — `.md`, `.txt`, `.json`, `.csv`, `.html`, …), uploads metadata to `uploaded_files`, then chunks + embeds + indexes via `documents/create`. After ingest, `chat_widget_setup` returns a paste-ready snippet.

### Crawl a website + dedupe before adding more docs

```
1. link_sources_create({ sourceUrl: "https://example.com", sourceType: "website" })
2. jobs_wait({ jobId: ... }) — wait for crawl to finish
3. kb_search({ query: "billing FAQ" })  // confirm it indexed
```

### Build a custom tool end-to-end

```
1. tools_create({
     name: "submitForm",
     toolType: "rest_api",
     apiConfig: { method: "POST", url: "...", body: "name={{name}}" },
     inputSchema: [{ name: "name", type: "string", required: true }],
   })
2. tools_execute_dry_run({ toolId, input: { name: "Test" } })
   // verify output before exposing to chat
```

### Update system prompt

```
config_update({ patch: { systemPrompt: "You are the support agent for ACME..." } })
```

(Cannot update `openaiApiKey` over api-key — use the dashboard for secrets.)

## Troubleshooting

- **404 on login** — auth-service may be on a different origin in dev. `login` falls back to dashboard ping. Use `--base-url http://localhost:3000` for local Docker.
- **Tool calls hang** — confirm `aigentyc-mcp doctor` passes. Most failures are wrong base-url, expired key, or clock skew (>60s breaks auth).
- **Tool created but UI doesn't show it** — fully restart Claude Desktop / Cursor. Server-side caches invalidate on every write; client-side router caches don't.
- **Empty fields on tool call** — required string fields with `min(1)` get rejected by Zod, prompting the LLM to gather real values from the user. If a tool fires with empty inputs, the field wasn't marked required at creation.

## Versioning

Server (`@aigentyc/mcp` on npm) and platform (`app.aigentyc.com`) version independently. The platform maintains backward compatibility with older MCP releases — endpoints aren't broken without a deprecation period.

---

## Tool surface (live snapshot)

All MCP tools registered at server startup, grouped by domain prefix. Use these names verbatim when invoking via an MCP client.


_Total tools: **86**_


### `aigentyc`

| tool | description |
| ---- | ----------- |
| `aigentyc_get_started` | Read the current state of the active project (KB readiness, config, custom tools, embedding status) and return a prioritised list of next steps the agent should walk the user through. Call this FIRST in any new conversation about an aiGentyc project — it tells you whether to add content, set a system prompt, or skip straight to embedding chat. |

### `analytics`

| tool | description |
| ---- | ----------- |
| `analytics_audience_overview` | Aggregate metrics for audience segments (document counts, coverage). |
| `analytics_overview` | Aggregate KPIs (sessions, messages, cost, latency, feedback) for a time window. Defaults to the last 30 days. |
| `analytics_sessions` | Paginated list of chat sessions with summary data, optional text search and date window. |
| `analytics_session_timeline` | Full Q&A timeline (messages + search events) for a single session. |
| `analytics_comments` | Project-wide list of admin-authored feedback comments on chat messages, ordered newest first. |

### `backups`

| tool | description |
| ---- | ----------- |
| `backups_list` | List available project backups (Qdrant snapshots + MySQL exports). |
| `backups_create` | Snapshot the project's Qdrant collections + MySQL data. Returns a jobId — use jobs_wait to block until done; may take tens of seconds on large projects. |
| `backups_restore` | Restore the project's Qdrant + MySQL state from a backup. HIGHLY DESTRUCTIVE: overwrites current data. Returns a jobId (use jobs_wait) and requires confirm: true. |
| `backups_download` | Download a backup by index. type='mysql' returns JSON (secrets redacted for API-key callers); type='qdrant' returns an opaque Qdrant snapshot reference. |
| `backups_import` | Upload a previously-exported backup JSON blob. |
| `backups_delete` | Delete a backup by index. Destructive. |

### `branches`

| tool | description |
| ---- | ----------- |
| `branches_list` | List named branches of the knowledge base (draft vs published workspaces). |

### `chat`

| tool | description |
| ---- | ----------- |
| `chat_widget_get_snippet` | Return install command + framework-specific code snippet for embedding the aiGentyc chat widget into an existing app. For embeds on third-party domains, an apiKey is required (pass it via apiKey arg to inline it, or leave it blank to emit an env-var placeholder with setup instructions). For URL-sourced content + custom tools to actually do anything, the project must be configured (see aigentyc_get_started). |
| `chat_widget_setup` | One-shot setup for embedding chat: confirms the project is ready (KB has content, system prompt set), returns a paste-ready snippet, and warns about anything still missing. Use this AFTER adding content. For pure code-snippet generation without status checks, use chat_widget_get_snippet. Pass an `apiKey` if the user has one; the snippet otherwise uses an env-var placeholder and the notes explain where to generate a project-scoped key. |
| `chat_widget_scaffold` | Run `npm create aigentyc-chat@latest` in a chosen directory to scaffold a Vite or Next.js starter wired to this project. Refuses paths that escape $CWD or $HOME. Asks for confirmation. After scaffolding, the user runs `cd <dir> && npm install && npm run dev`. |

### `config`

| tool | description |
| ---- | ----------- |
| `config_get` | Fetch the full project configuration. When called via API key, secret fields (openaiApiKey, customHttpHeaders) are redacted to '***'. |
| `config_update` | Patch project config. Send only the fields you want to change. Secret fields (openaiApiKey, customHttpHeaders, neonConnectionString) cannot be updated via API key and will be rejected with 403. |

### `corrections`

| tool | description |
| ---- | ----------- |
| `corrections_list` | List user feedback / suggested edits against the knowledge base. Corrections are NOT auto-applied — review + mark status. |
| `corrections_create` | Record a suggested correction for a past Q&A. Status starts as 'pending'. |
| `corrections_update_status` | Set a correction's status to 'applied' or 'dismissed'. |

### `data`

| tool | description |
| ---- | ----------- |
| `data_stores_list` | List structured data tables (data stores) in the project. |
| `data_stores_create` | Create a new data store (structured table) with a schema. |
| `data_stores_get` | Fetch a data store's schema + metadata. |
| `data_stores_update` | Patch a data store's name, description, columns, or settings. |
| `data_stores_delete` | Delete a data store and all its rows. Destructive. |
| `data_stores_list_rows` | List rows in a data store with pagination + optional search. |
| `data_stores_create_row` | Insert one row into a data store. |
| `data_stores_update_row` | Update one row in a data store by id. |
| `data_stores_delete_row` | Delete one row by id. |
| `data_stores_bulk_upsert` | Insert many rows at once (max 10k). Does not dedupe by default. |
| `data_stores_bulk_delete` | Delete multiple rows by id in one call. Destructive. |
| `data_stores_import` | Bulk-import rows (≤10k) with optional column schema. `replaceExisting: true` wipes existing rows first. |
| `data_stores_export` | Export all rows of a data store as JSON. |
| `data_stores_scan_website` | Crawl a website and extract structured rows into the store. Returns a jobId; use jobs_wait to block until done. |

### `documents`

| tool | description |
| ---- | ----------- |
| `documents_list` | List documents in a project with pagination, search, and filtering. Returns document summaries (name, chunk count, source). |
| `documents_get` | Get all chunks of a single document by name, including enrichment metadata. |
| `documents_delete` | Delete all chunks of a document (including non-current versions) from MySQL and Qdrant. Destructive. |
| `documents_create_from_text` | Create a new manual-text document in the knowledge base. Chunks, embeds, and indexes synchronously. Shows up under the dashboard's 'Text' tab. For URL-sourced content use extract_from_urls instead (it goes through the crawler and links to a link_source). For file uploads use files_upload. After adding content, run aigentyc_get_started — once snippetReady is true, embed chat with chat_widget_setup. |
| `documents_merge` | Merge two consecutive chunks of the same document into one. Re-embeds the merged content and updates Qdrant. Provide the merged content verbatim. |
| `documents_versions` | action=list → return version history for a specific chunk (documentName + chunkIndex). action=create → create a new version of a chunk (documentId + newContent + changeReason). |

### `extract`

| tool | description |
| ---- | ----------- |
| `extract_from_urls` | Batch-extract documents from a list of URLs into the project's knowledge base. Returns synchronously when fast; for large batches the backend may return a jobId (then use jobs_wait). |

### `files`

| tool | description |
| ---- | ----------- |
| `files_list` | List files uploaded into the project's ingestion pipeline. |
| `files_get` | Get metadata + extraction status for a single uploaded file. |
| `files_versions` | List all versions of a file (same sourceUrl / family). |
| `files_delete` | Delete an uploaded file and (optionally) its derived documents. Destructive. |
| `files_reprocess` | Re-run embedding for all current document chunks derived from this file. Use after changing embedding model or chunk settings. |
| `files_upload` | Read text-based files (.md, .txt, .json, .csv, .html, …) from the user's local filesystem and ingest them into the project. Binary formats (pdf/doc/docx) are not supported in v0.1. Paths must be absolute or relative to CWD, cannot escape $HOME, and are capped at 50MB/file and 500MB/batch. After ingesting, run aigentyc_get_started — once the KB is ready, embed chat with chat_widget_setup or chat_widget_scaffold. |
| `files_bulk_audience` | Assign an audience tag to multiple files at once. Audiences gate document visibility in search. |

### `flows`

| tool | description |
| ---- | ----------- |
| `flows_list` | List multi-step tool flows (orchestrations) in the project. |
| `flows_get` | Fetch a single tool flow. |
| `flows_create` | Create a multi-step tool flow. |
| `flows_update` | Patch a flow's fields. |
| `flows_delete` | Delete a flow. Destructive. |

### `golden`

| tool | description |
| ---- | ----------- |
| `golden_answers_list` | List high-priority Q&A pairs used for search ranking and fallback responses. |
| `golden_answers_create` | Create a golden Q&A pair. |
| `golden_answers_update` | Patch a golden answer by id. |
| `golden_answers_delete` | Delete a golden answer by id. Destructive. |
| `golden_answers_ai_improve` | Ask the AI to rewrite a response for a query in a chosen tone. Returns `improvedResponse` — you decide whether to save. |
| `golden_answers_ai_generate_variations` | Generate paraphrased variations of a question to improve retrieval. Returns `variations: [{question, context?}]`. |

### `jobs`

| tool | description |
| ---- | ----------- |
| `jobs_status` | Fetch the current status of a long-running job created by another MCP tool. |
| `jobs_list` | List recent long-running jobs for the project. Useful for debugging. |
| `jobs_wait` | Poll a job until it reaches a terminal state (completed, failed, cancelled). Returns the final row. |
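
A generic sketch of the pattern `jobs_wait` implements, assuming the terminal states listed above; the fetcher, the non-terminal states, and the poll interval are assumptions for illustration:

```typescript
type JobStatus = "pending" | "running" | "completed" | "failed" | "cancelled";
interface Job { id: string; status: JobStatus }

const TERMINAL: ReadonlySet<JobStatus> = new Set(["completed", "failed", "cancelled"]);

// Poll fetchStatus until the job reaches a terminal state,
// then return the final job row (as jobs_wait does).
async function waitForJob(
  fetchStatus: (id: string) => Promise<Job>,
  id: string,
  intervalMs = 1000,
): Promise<Job> {
  for (;;) {
    const job = await fetchStatus(id);
    if (TERMINAL.has(job.status)) return job;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```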

### `kb`

| tool | description |
| ---- | ----------- |
| `kb_search` | Hybrid / vector search over the project KB. Non-streaming. Use BEFORE creating documents to check for overlap/duplicates. |

### `link`

| tool | description |
| ---- | ----------- |
| `link_sources_list` | List website/pdf link sources configured for the project. Each source tracks discovered URLs + processing state. |
| `link_sources_get` | Fetch a single link source with its status + discovered item counts. |
| `link_sources_create` | Register a new website or PDF source and kick off discovery. Returns the source row. |
| `link_sources_update` | Update audience tags or crawl parameters on an existing source. |
| `link_sources_delete` | Delete a link source and its discovered items. Does not delete derived documents. Destructive. |
| `link_sources_list_items` | Paginated list of discovered URLs for a given link source. Filter by isProcessed / isSkipped. |
| `link_sources_update_item` | Mark a discovered item as processed/skipped. Used to override crawler decisions. |
| `link_sources_delete_item` | Remove one URL from a link source's discovered set. |
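
Draining a paginated listing such as `link_sources_list_items` follows the usual cursor loop. The cursor-based page shape below is an assumption for illustration; the real tool may paginate with page/offset parameters instead:

```typescript
interface Page<T> { items: T[]; nextCursor?: string }

// Collect every item from a paginated listing by following
// nextCursor until the server stops returning one.
async function listAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor !== undefined);
  return all;
}
```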

### `page`

| tool | description |
| ---- | ----------- |
| `page_questions_list` | List per-page suggested questions pinned to specific URLs on the user's own site. |
| `page_questions_set_manual` | Set the suggested questions + welcome text shown for a specific page URL. Overwrites existing. |
| `page_questions_auto_generate` | Auto-generate Q&A for a page URL (runs AI over scanned document content) and persist as manual questions. Returns a jobId — poll with jobs_wait. |
| `page_questions_delete_page` | Remove the page questions entry for a specific URL. |

### `personas`

| tool | description |
| ---- | ----------- |
| `personas_list` | List the chat personas configured for the project. Personas are used by the AudienceSelector and define tone/style/custom instructions for different audience segments. |
| `personas_upsert` | Upsert a chat persona identified by `key`. If a persona with that key exists it is patched, otherwise a new one is created. The persona defines tone, response length, language style, and custom instructions for the chat agent when this persona is active. |
| `personas_delete` | Delete a persona by key. Destructive. |

### `tools`

| tool | description |
| ---- | ----------- |
| `tools_list` | List every custom tool in the project (rest_api, custom_execute, hybrid, data_store, web_search). |
| `tools_get` | Fetch one custom tool with its full config. |
| `tools_create` | Create a custom tool the chat agent can call. See the notes below this table for display types, form tools, and security constraints. |
| `tools_update` | Patch selected fields of an existing custom tool. `displayType` is normalized the same way as in `tools_create`. |
| `tools_delete` | Delete a custom tool. Destructive. |
| `tools_execute_dry_run` | Invoke a custom tool in test mode with sample input. Safe for iteration; does not count against usage. |

**Display Type** — prefer passing `displayType` (`preset` | `custom_html` | `json_render` | `visualization`) over the low-level `uiRenderMode`. The MCP normalizes it automatically:

- `preset` — pre-built component (combine with `uiComponent`)
- `custom_html` — HTML/CSS template (requires `customHtml`)
- `json_render` — declarative spec (requires `jsonRenderSpec`)
- `visualization` — recharts; the tool's output must include the chart spec

**Forms** (email or custom endpoint) are built as `toolType: "data_store"` + `displayType: "json_render"` + a `jsonRenderSpec` containing `_formConfig` nested at the spec root. Example `jsonRenderSpec`:

```json
{
  "root": "form",
  "elements": { "form": { "type": "Stack", "children": [...] }, ... },
  "state": { "form": { "name": "", "email": "" } },
  "_formConfig": {
    "mode": "email",
    "to": "you@co.com",
    "subject": "New lead",
    "successMessage": "Thanks!"
  }
}
```

**Security**: tool code referencing `process.env` is rejected server-side — put secrets in `apiConfig.headers` with template placeholders.

### `unified`

| tool | description |
| ---- | ----------- |
| `unified_links_list` | Get the aggregated catalog of every URL known to the project (files + link sources + manual). |
