Queries¶
Data models for generated queries and related types.
Overview¶
The queries module contains models for representing generated queries, their source tuples, and associated metadata.
GeneratedQuery¶
A natural language query with full traceability.
GeneratedQuery¶
Bases: BaseModel
Natural language query with full traceability back to its tuple.
Constructor¶
```python
GeneratedQuery(
    query: str,
    source_tuple: GeneratedTuple,
    metadata: QueryMetadata = QueryMetadata(),
)
```
Fields:

| Field | Type | Description |
|---|---|---|
| `query` | `str` | The generated natural language query |
| `source_tuple` | `GeneratedTuple` | The tuple that produced this query |
| `metadata` | `QueryMetadata` | Associated metadata |
Example:
```python
from evaluateur import GeneratedQuery, GeneratedTuple
from evaluateur.queries import QueryMetadata

query = GeneratedQuery(
    query="What's the prior auth process for specialty procedures?",
    source_tuple=GeneratedTuple({
        "payer": "Cigna",
        "procedure": "specialty",
    }),
    metadata=QueryMetadata(goal_guided=True),
)

print(query.query)
print(query.source_tuple.model_dump())
print(query.metadata.goal_guided)
```
GeneratedTuple¶
A concrete combination of dimension values. Implemented as a Pydantic RootModel with dict-like access.
GeneratedTuple¶
Bases: RootModel[dict[str, ScalarValue]]
A concrete combination of dimension values.
Dimension key-value pairs are stored directly (no wrapper). Supports dict-like access:

```python
t["payer"]             # item access
t.get("payer", "n/a")  # safe access with default
t.items()              # iterate key-value pairs
"payer" in t           # membership test
```
Constructor¶
Type alias: `ScalarValue = str | int | float | bool`

Dict-like access: supports `t["key"]`, `t.get("key", default)`, `t.items()`, `t.keys()`, `"key" in t`, `len(t)`, `bool(t)`.
Example:
```python
from evaluateur import GeneratedTuple

t = GeneratedTuple({
    "payer": "Cigna",
    "age_group": "adult",
    "complexity": "high",
    "geography": "Texas",
})

print(t["payer"])      # "Cigna"
print(t.model_dump())  # full dict
```
QueryMetadata¶
Metadata associated with a generated query.
QueryMetadata¶
Bases: BaseModel
Metadata associated with a generated query.
This includes both:

- run-level metadata injected by the evaluator (`mode`, `goal_guided`, `query_goals`)
- per-query metadata set by generators (free-form keys)
Extra keys are allowed for experimentation and backend-specific tracing.
Constructor¶
```python
QueryMetadata(
    goal_guided: bool = False,
    query_goals: GoalSpec | None = None,
    goal_mode: GoalMode | None = None,
    goal_focus: str | None = None,
    goal_category: str | None = None,
    **extra_fields,  # extra fields allowed
)
```
Fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `goal_guided` | `bool` | `False` | Whether goals were used |
| `query_goals` | `GoalSpec \| None` | `None` | The goal spec used |
| `goal_mode` | `GoalMode \| None` | `None` | `"sample"`, `"cycle"`, or `"full"` |
| `goal_focus` | `str \| None` | `None` | Name of the focused goal (sample/cycle mode) |
| `goal_category` | `str \| None` | `None` | Category of the focused goal |
Extra fields are allowed for custom metadata.
Example:
```python
from evaluateur.queries import QueryMetadata

meta = QueryMetadata(
    goal_guided=True,
    goal_mode="sample",
    goal_focus="freshness checks",
    goal_category="components",
    custom_field="custom_value",  # extra fields allowed
)

print(meta.goal_guided)    # True
print(meta.goal_focus)     # "freshness checks"
print(meta.goal_category)  # "components"
print(meta.model_dump())   # includes custom_field
```
merge_query_metadata()¶
Standalone function for merging query metadata.
```python
from evaluateur.queries import merge_query_metadata

merged = merge_query_metadata(
    run_metadata=run_meta,
    per_query_metadata=query_meta,
)
```
This function implements the merge policy: run metadata provides defaults, and per-query metadata wins on conflicts.
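The stated merge policy can be illustrated with plain dicts (a sketch of the semantics only, not the library's internals; `merge_sketch` is a hypothetical name):

```python
def merge_sketch(run_metadata: dict, per_query_metadata: dict) -> dict:
    # Run metadata provides the defaults; per-query keys win on conflict.
    return {**run_metadata, **per_query_metadata}


run_meta = {"goal_guided": True, "goal_mode": "sample", "goal_focus": "run-level"}
query_meta = {"goal_focus": "freshness checks", "builder_used": True}

merged = merge_sketch(run_meta, query_meta)
print(merged["goal_focus"])  # per-query wins: "freshness checks"
print(merged["goal_mode"])   # run-level default survives: "sample"
```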
QueryMode¶
Enum for query generator selection.
```python
from evaluateur import QueryMode

QueryMode.INSTRUCTOR  # default: use Instructor for structured generation
```

| Value | Description |
|---|---|
| `INSTRUCTOR` | Generate queries using Instructor |
ContextBuilder Protocol¶
Protocol for per-tuple context variation, importable from `evaluateur.queries`. Its shape:

```python
from typing import Protocol

from evaluateur.queries import GeneratedTuple

class ContextBuilder(Protocol):
    def __call__(self, tuple: GeneratedTuple) -> tuple[str, dict]:
        """Return (context_string, metadata_dict)."""
        ...
```
Example:
```python
from evaluateur.queries import GeneratedTuple

def my_builder(t: GeneratedTuple) -> tuple[str, dict]:
    context = f"Focus on {t.get('topic')}"
    metadata = {"builder_used": True}
    return context, metadata
```
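Because `GeneratedTuple` exposes dict-style `.get`, a plain dict can duck-type it when unit-testing a builder in isolation (a sketch; the dimension values below are invented):

```python
def my_builder(t) -> tuple[str, dict]:
    # Same builder as above, with the type hint dropped so a plain dict can stand in.
    context = f"Focus on {t.get('topic')}"
    metadata = {"builder_used": True}
    return context, metadata


ctx, meta = my_builder({"topic": "claims", "level": "basic"})
print(ctx)   # Focus on claims
print(meta)  # {'builder_used': True}
```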
Complete Example¶
```python
import asyncio

from pydantic import BaseModel, Field

from evaluateur import Evaluator, GeneratedQuery

class Query(BaseModel):
    topic: str = Field(..., description="subject")
    level: str = Field(..., description="difficulty")

async def main() -> None:
    evaluator = Evaluator(Query)
    async for q in evaluator.run(
        tuple_count=5,
        goals="- Test edge cases\n- Verify citations",
    ):
        # Access query text
        print(f"Query: {q.query}")

        # Access source tuple
        print(f"Topic: {q.source_tuple['topic']}")
        print(f"Level: {q.source_tuple['level']}")

        # Access metadata
        print(f"Goal-guided: {q.metadata.goal_guided}")
        if q.metadata.goal_focus:
            print(f"Focus: {q.metadata.goal_focus}")
        if q.metadata.goal_category:
            print(f"Category: {q.metadata.goal_category}")

        print("---")

asyncio.run(main())
```
See Also¶
- Evaluator - Query generation methods
- Goals - Goal specification
- Tuples - Tuple generation
- Context Builders - Advanced customization