Evaluations ("evals") measure an agent's performance by scoring its execution trajectory, i.e., the sequence of messages and tool calls it produces. Unlike integration tests, which verify basic correctness, evals grade the agent's behavior against a reference or rubric, which makes them useful for catching regressions when you change prompts, tools, or models.

An evaluator is a function that takes the agent's outputs (and, optionally, reference outputs) and returns a score:
def evaluator(*, outputs: dict, reference_outputs: dict):
    output_messages = outputs["messages"]
    reference_messages = reference_outputs["messages"]
    # compare_messages is a placeholder for whatever scoring logic fits your use case.
    score = compare_messages(output_messages, reference_messages)
    return {"key": "evaluator_score", "score": score}
The agentevals package provides prebuilt evaluators for agent trajectories. You can evaluate with trajectory matching (a deterministic comparison) or with an LLM-as-judge (a qualitative assessment):
| Approach | When to use |
| --- | --- |
| Trajectory match | You know the expected tool calls and want a fast, deterministic, zero-cost check |
| LLM-as-judge | You want to assess overall quality and reasoning without rigid expectations |

Install AgentEvals

pip install agentevals
Alternatively, clone the AgentEvals repository directly.
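For example (the URL below assumes the repository lives under the langchain-ai GitHub organization; use the address from the package's documentation if it differs):

git clone https://github.com/langchain-ai/agentevals.git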

Trajectory match evaluators

AgentEvals provides the create_trajectory_match_evaluator function for matching your agent's trajectory against a reference. There are four modes:
| Mode | Description | Use case |
| --- | --- | --- |
| strict | Messages and tool calls match the reference exactly and in the same order (message content may differ) | Testing for a specific sequence (e.g., a policy lookup before authorization) |
| unordered | Same message structure and tool calls as the reference, but tool calls may occur in any order | Verifying that information was retrieved when order doesn't matter |
| subset | The agent calls only tools that appear in the reference (no extra tools) | Ensuring the agent does not exceed the expected scope |
| superset | The agent calls at least the tools in the reference (extra tools are allowed) | Verifying that the required actions were performed |
The examples below share a common setup: an agent with a get_weather tool:
from langchain.agents import create_agent
from langchain.tools import tool
from langchain.messages import HumanMessage, AIMessage, ToolMessage
from agentevals.trajectory.match import create_trajectory_match_evaluator


@tool
def get_weather(city: str):
    """Get weather information for a city."""
    return f"It's 75 degrees and sunny in {city}."

agent = create_agent("claude-sonnet-4-6", tools=[get_weather])
The strict mode ensures the trajectory contains identical messages and identical tool calls in the same order, although it allows differences in message content. This is useful when you need to enforce a specific sequence of actions, such as requiring a policy lookup before an authorization step.
evaluator = create_trajectory_match_evaluator(
    trajectory_match_mode="strict",
)

def test_weather_tool_called_strict():
    result = agent.invoke({
        "messages": [HumanMessage(content="What's the weather in San Francisco?")]
    })

    reference_trajectory = [
        HumanMessage(content="What's the weather in San Francisco?"),
        AIMessage(content="", tool_calls=[
            {"id": "call_1", "name": "get_weather", "args": {"city": "San Francisco"}}
        ]),
        ToolMessage(content="It's 75 degrees and sunny in San Francisco.", tool_call_id="call_1"),
        AIMessage(content="The weather in San Francisco is 75 degrees and sunny."),
    ]

    evaluation = evaluator(
        outputs=result["messages"],
        reference_outputs=reference_trajectory
    )
    # {
    #     'key': 'trajectory_strict_match',
    #     'score': True,
    #     'comment': None,
    # }
    assert evaluation["score"] is True
The unordered mode allows the same tool calls to occur in any order. This is helpful when you want to verify that certain information was retrieved but don't care about the order, for example an agent that checks a city's weather and events through separate tool calls.
@tool
def get_events(city: str):
    """Get events happening in a city."""
    return f"Concert at the park in {city} tonight."

agent = create_agent("claude-sonnet-4-6", tools=[get_weather, get_events])

evaluator = create_trajectory_match_evaluator(
    trajectory_match_mode="unordered",
)

def test_multiple_tools_any_order():
    result = agent.invoke({
        "messages": [HumanMessage(content="What's happening in SF today?")]
    })

    reference_trajectory = [
        HumanMessage(content="What's happening in SF today?"),
        AIMessage(content="", tool_calls=[
            {"id": "call_1", "name": "get_events", "args": {"city": "SF"}},
            {"id": "call_2", "name": "get_weather", "args": {"city": "SF"}},
        ]),
        ToolMessage(content="Concert at the park in SF tonight.", tool_call_id="call_1"),
        ToolMessage(content="It's 75 degrees and sunny in SF.", tool_call_id="call_2"),
        AIMessage(content="Today in SF: 75 degrees and sunny with a concert at the park tonight."),
    ]

    evaluation = evaluator(
        outputs=result["messages"],
        reference_outputs=reference_trajectory,
    )
    assert evaluation["score"] is True
The superset and subset modes match partial trajectories. The superset mode verifies that the agent called at least the tools in the reference trajectory, allowing additional tool calls. The subset mode ensures the agent called no tools beyond those in the reference; a sketch of the subset variant follows the superset example below.
@tool
def get_detailed_forecast(city: str):
    """Get detailed weather forecast for a city."""
    return f"Detailed forecast for {city}: sunny all week."

agent = create_agent("claude-sonnet-4-6", tools=[get_weather, get_detailed_forecast])

evaluator = create_trajectory_match_evaluator(
    trajectory_match_mode="superset",
)

def test_agent_calls_required_tools_plus_extra():
    result = agent.invoke({
        "messages": [HumanMessage(content="What's the weather in Boston?")]
    })

    # Reference only requires get_weather, but agent may call additional tools
    reference_trajectory = [
        HumanMessage(content="What's the weather in Boston?"),
        AIMessage(content="", tool_calls=[
            {"id": "call_1", "name": "get_weather", "args": {"city": "Boston"}},
        ]),
        ToolMessage(content="It's 75 degrees and sunny in Boston.", tool_call_id="call_1"),
        AIMessage(content="The weather in Boston is 75 degrees and sunny."),
    ]

    evaluation = evaluator(
        outputs=result["messages"],
        reference_outputs=reference_trajectory,
    )
    assert evaluation["score"] is True
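The subset mode uses the same setup with the comparison inverted. Here is a minimal sketch reusing the agent and tools defined above: the reference lists both tools, so the agent passes as long as it calls nothing outside that set.

evaluator = create_trajectory_match_evaluator(
    trajectory_match_mode="subset",
)

def test_agent_calls_no_unexpected_tools():
    result = agent.invoke({
        "messages": [HumanMessage(content="What's the weather in Boston?")]
    })

    # The reference permits get_weather and get_detailed_forecast; any tool
    # call outside this set would fail the check.
    reference_trajectory = [
        HumanMessage(content="What's the weather in Boston?"),
        AIMessage(content="", tool_calls=[
            {"id": "call_1", "name": "get_weather", "args": {"city": "Boston"}},
            {"id": "call_2", "name": "get_detailed_forecast", "args": {"city": "Boston"}},
        ]),
        ToolMessage(content="It's 75 degrees and sunny in Boston.", tool_call_id="call_1"),
        ToolMessage(content="Detailed forecast for Boston: sunny all week.", tool_call_id="call_2"),
        AIMessage(content="The weather in Boston is 75 degrees and sunny."),
    ]

    evaluation = evaluator(
        outputs=result["messages"],
        reference_outputs=reference_trajectory,
    )
    assert evaluation["score"] is True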
You can also set the tool_args_match_mode property and/or tool_args_match_overrides to customize how the evaluator decides whether tool calls in the actual trajectory are equal to those in the reference. By default, only calls to the same tool with identical arguments are considered equal. See the repository for more details.
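As a rough sketch, the customization might look like the following. The "exact" mode string and the two-argument comparison callable are assumptions about the agentevals API; confirm them against the repository before relying on them.

evaluator = create_trajectory_match_evaluator(
    trajectory_match_mode="strict",
    # Assumed default: tool-call arguments must match exactly to be equal.
    tool_args_match_mode="exact",
    # Assumed override: for get_weather, treat city names as equal
    # regardless of capitalization.
    tool_args_match_overrides={
        "get_weather": lambda actual_args, reference_args: (
            actual_args["city"].lower() == reference_args["city"].lower()
        ),
    },
)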

LLM-as-judge evaluator

You can use the create_trajectory_llm_as_judge function to evaluate the agent's execution path with an LLM. Unlike the trajectory match evaluators, it does not require a reference trajectory, though you can provide one if available.
from agentevals.trajectory.llm import create_trajectory_llm_as_judge, TRAJECTORY_ACCURACY_PROMPT

evaluator = create_trajectory_llm_as_judge(
    model="openai:o3-mini",
    prompt=TRAJECTORY_ACCURACY_PROMPT,
)

def test_trajectory_quality():
    result = agent.invoke({
        "messages": [HumanMessage(content="What's the weather in Seattle?")]
    })

    evaluation = evaluator(
        outputs=result["messages"],
    )
    assert evaluation["score"] is True
If you have a reference trajectory, use the prebuilt TRAJECTORY_ACCURACY_PROMPT_WITH_REFERENCE prompt:
from agentevals.trajectory.llm import create_trajectory_llm_as_judge, TRAJECTORY_ACCURACY_PROMPT_WITH_REFERENCE

evaluator = create_trajectory_llm_as_judge(
    model="openai:o3-mini",
    prompt=TRAJECTORY_ACCURACY_PROMPT_WITH_REFERENCE,
)
evaluation = evaluator(
    outputs=result["messages"],
    reference_outputs=reference_trajectory,
)
For more details on how to configure the LLM that judges trajectories, see the repository.

Async support

All agentevals evaluators support Python asyncio. Async versions are available by inserting async after create_ in the function name.
from agentevals.trajectory.llm import create_async_trajectory_llm_as_judge, TRAJECTORY_ACCURACY_PROMPT
from agentevals.trajectory.match import create_async_trajectory_match_evaluator

async_judge = create_async_trajectory_llm_as_judge(
    model="openai:o3-mini",
    prompt=TRAJECTORY_ACCURACY_PROMPT,
)

async_evaluator = create_async_trajectory_match_evaluator(
    trajectory_match_mode="strict",
)

async def test_async_evaluation():
    result = await agent.ainvoke({
        "messages": [HumanMessage(content="What's the weather?")]
    })

    evaluation = await async_judge(outputs=result["messages"])
    assert evaluation["score"] is True

Running evals in LangSmith

To track experiments over time, log evaluator results to LangSmith. First, set the required environment variables:
export LANGSMITH_API_KEY="your_langsmith_api_key"
export LANGSMITH_TRACING="true"
LangSmith offers two main ways to run evals: the pytest integration and the evaluate function. With the pytest integration:
import pytest
from langsmith import testing as t
from agentevals.trajectory.llm import create_trajectory_llm_as_judge, TRAJECTORY_ACCURACY_PROMPT

trajectory_evaluator = create_trajectory_llm_as_judge(
    model="openai:o3-mini",
    prompt=TRAJECTORY_ACCURACY_PROMPT,
)

@pytest.mark.langsmith
def test_trajectory_accuracy():
    result = agent.invoke({
        "messages": [HumanMessage(content="What's the weather in SF?")]
    })

    reference_trajectory = [
        HumanMessage(content="What's the weather in SF?"),
        AIMessage(content="", tool_calls=[
            {"id": "call_1", "name": "get_weather", "args": {"city": "SF"}},
        ]),
        ToolMessage(content="It's 75 degrees and sunny in SF.", tool_call_id="call_1"),
        AIMessage(content="The weather in SF is 75 degrees and sunny."),
    ]

    t.log_inputs({})
    t.log_outputs({"messages": result["messages"]})
    t.log_reference_outputs({"messages": reference_trajectory})

    trajectory_evaluator(
        outputs=result["messages"],
        reference_outputs=reference_trajectory
    )
Run the evals with pytest:
pytest test_trajectory.py --langsmith-output
Alternatively, create a LangSmith dataset and use the evaluate function. The dataset must have the following schema:
  • input: {"messages": [...]} containing the input messages used to invoke the agent.
  • output: {"messages": [...]} containing the expected message history of the agent's output. For trajectory evaluation, you can choose to keep only the assistant messages.
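As an illustration, a dataset matching this schema could be created roughly as follows; the dataset name and example content are placeholders, and the create_dataset/create_examples calls are one way to do it with the LangSmith Python SDK.

from langsmith import Client

client = Client()

# Hypothetical dataset name; reuse it as the `data` argument of client.evaluate below.
dataset = client.create_dataset(dataset_name="your_dataset_name")

# Each example pairs the input messages sent to the agent with the expected
# message history (the content here is purely illustrative).
client.create_examples(
    dataset_id=dataset.id,
    inputs=[{"messages": [{"role": "user", "content": "What's the weather in SF?"}]}],
    outputs=[{"messages": [
        {"role": "assistant", "content": "The weather in SF is 75 degrees and sunny."},
    ]}],
)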
from langsmith import Client
from agentevals.trajectory.llm import create_trajectory_llm_as_judge, TRAJECTORY_ACCURACY_PROMPT

client = Client()

trajectory_evaluator = create_trajectory_llm_as_judge(
    model="openai:o3-mini",
    prompt=TRAJECTORY_ACCURACY_PROMPT,
)

def run_agent(inputs):
    return agent.invoke(inputs)["messages"]

experiment_results = client.evaluate(
    run_agent,
    data="your_dataset_name",
    evaluators=[trajectory_evaluator]
)
To learn more about evaluating agents, see the LangSmith documentation.