# Rerank Prompt
Evaluates relevance scores for retrieved document chunks.

## Task
Evaluate the relevance of each document chunk to the given query. Return scores in JSON format.

## Input
Query: {query}

Documents:
{documents}

## Scoring Scale
Use a normalized score from 0 to 1:
- 1.0: Perfectly relevant, directly answers the query with specific details
- 0.7-0.9: Highly relevant, contains important information related to the query
- 0.4-0.6: Moderately relevant, may contain some useful information
- 0.1-0.3: Low relevance, marginally related
- 0.0: Completely irrelevant

## Weight Application (External)
This prompt returns raw relevance scores in [0, 1]. Weighting is applied externally, in the hybrid search layer:
- Final score = raw_score × vector_weight (e.g., 0.7)
- This matches Memory's hybrid search formula: vector score × 0.7 (vectorWeight) + text score × 0.3 (textWeight)
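The external weighting step above can be sketched as follows. This is a minimal illustration, not Memory's actual implementation; the names `hybridScore`, `rawRerankScore`, and `textMatchScore` are assumptions, while the 0.7/0.3 weights come from the formula above.

```typescript
// Illustrative sketch of the hybrid search layer's weighting.
// The weights come from the prompt's formula; the function and
// parameter names are hypothetical.
const vectorWeight = 0.7;
const textWeight = 0.3;

function hybridScore(rawRerankScore: number, textMatchScore: number): number {
  // Both inputs are normalized to [0, 1], and the weights sum to 1,
  // so the combined score also stays within [0, 1].
  return rawRerankScore * vectorWeight + textMatchScore * textWeight;
}

console.log(hybridScore(0.85, 0.6)); // ≈ 0.775
```

Because the weights sum to 1, a document that scores 1.0 on both signals still caps out at a final score of 1.0.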

## Output Format
Return ONLY a JSON object:

```json
{
  "scores": [
    { "index": 0, "score": 0.85, "reason": "Directly addresses the query with specific details" },
    { "index": 1, "score": 0.60, "reason": "Contains related information" }
  ],
  "summary": "Brief explanation of scoring pattern"
}
```

## Rules
- Each entry MUST include the `index`, `score`, and `reason` fields
- Scores must be between 0 and 1
- Output ONLY valid JSON, nothing else
- If relevance cannot be determined, assign a score of 0.5
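The rules above can be enforced by the caller when parsing the model's response. A minimal sketch, assuming the JSON output format defined earlier; the function name `parseRerankOutput` and the clamping/fallback behavior are illustrative choices, not a prescribed implementation.

```typescript
// Hypothetical parser that enforces the output rules: every entry
// has index/score/reason, scores stay in [0, 1], and a missing or
// non-numeric score falls back to the neutral 0.5.
interface ScoreEntry {
  index: number;
  score: number;
  reason: string;
}

interface RerankOutput {
  scores: ScoreEntry[];
  summary: string;
}

function parseRerankOutput(raw: string, docCount: number): RerankOutput {
  const parsed = JSON.parse(raw) as RerankOutput;
  const scores = (parsed.scores ?? []).map((entry) => ({
    index: entry.index,
    // Clamp out-of-range scores into [0, 1]; use 0.5 when the
    // score is absent or not a number, per the rules above.
    score:
      typeof entry.score === "number"
        ? Math.min(1, Math.max(0, entry.score))
        : 0.5,
    reason: entry.reason ?? "",
  }));
  if (scores.length !== docCount) {
    throw new Error(`expected ${docCount} scores, got ${scores.length}`);
  }
  return { scores, summary: parsed.summary ?? "" };
}
```

Rejecting a response whose `scores` array does not cover every input document forces a retry rather than silently dropping chunks from the ranking.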