Friday, February 6, 2026

Reduce Mean Time to Resolution with an observability agent


Customers of all sizes have been successfully using Amazon OpenSearch Service to power their observability workflows and gain visibility into their applications and infrastructure. During incident investigation, Site Reliability Engineers (SREs) and operations center personnel rely on OpenSearch Service to query logs, examine visualizations, analyze patterns, and correlate traces to find the root cause of the incident and reduce Mean Time to Resolution (MTTR). When an incident occurs that triggers alerts, SREs often jump between multiple dashboards, write specific queries, check recent deployments, and correlate between logs and traces to piece together a timeline of events. Not only is this process largely manual, but it also creates a cognitive load on these personnel, even when all the data is readily available. This is where agentic AI can help, by acting as an intelligent assistant that can understand a question, interpret various telemetry signals, and systematically investigate an incident.

In this post, we present an observability agent using OpenSearch Service and Amazon Bedrock AgentCore that can help surface root causes and insights faster, handle multiple query-correlation cycles, and ultimately reduce MTTR even further.

Solution overview

The following diagram shows the overall architecture for the observability agent.

Applications and infrastructure emit telemetry signals in the form of logs, traces, and metrics. These signals are gathered by the OpenTelemetry Collector (Step 1) and exported to Amazon OpenSearch Ingestion using individual pipelines for each signal: logs, traces, and metrics (Step 2). These pipelines deliver the signal data to an OpenSearch Service domain and Amazon Managed Service for Prometheus (Step 3).

OpenTelemetry is the standard for instrumentation, and provides vendor-neutral data collection across a broad range of languages and frameworks. Enterprises of various sizes are adopting this architecture pattern using OpenTelemetry for their observability needs, especially those committed to open source tools. More notably, this architecture builds on open source foundations, helping enterprises avoid vendor lock-in, benefit from the open source community, and implement it across on-premises and various cloud environments.

For this post, we use the OpenTelemetry Demo application to demonstrate our observability use case. This is an ecommerce application powered by about 20 different microservices, and it generates realistic telemetry data along with feature sets to generate load and simulate failures.

Model Context Protocol servers for observability signal data

The Model Context Protocol (MCP) provides a standardized mechanism to connect agents to external data sources and tools. In this solution, we built three distinct MCP servers, one for each type of signal.

The Logs MCP server exposes tool functions for searching, filtering, and selecting log data that is stored in an OpenSearch Service domain. This enables the agent to query the logs using various criteria like simple keyword matching, service name filters, log levels, or time ranges. This mimics the typical queries you would run during an investigation. The following snippet shows pseudocode of what the tool functions can look like:

# Logs MCP Server - Key Functions
search_otel_logs(
    query: string,           # Text search query for log messages
    service: string,         # Service name to filter logs
    severity: string,        # Log level (INFO, WARN, ERROR)
    startTime: string,       # Start time (ISO format or relative, e.g., 'now-1h')
    endTime: string,         # End time (ISO format or relative, e.g., 'now')
    size: number             # Number of results to return
)
get_logs_by_trace_id(
    traceId: string,         # Trace ID to retrieve all correlated logs
    size: number             # Maximum number of logs to return
)
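To make this concrete, the following is a minimal sketch of how a tool function like search_otel_logs could be implemented with the MCP Python SDK's FastMCP helper and the opensearch-py client. The endpoint, index pattern (otel-logs-*), and field names (body, serviceName, severityText, time) are assumptions for illustration and may differ from your log schema:

# Minimal sketch of a Logs MCP server tool (endpoint, index, and fields are assumed)
from mcp.server.fastmcp import FastMCP
from opensearchpy import OpenSearch

mcp = FastMCP("otel-logs")
client = OpenSearch(hosts=["https://my-domain.es.amazonaws.com"])  # hypothetical endpoint

@mcp.tool()
def search_otel_logs(query: str = "", service: str = "", severity: str = "",
                     startTime: str = "now-1h", endTime: str = "now",
                     size: int = 20) -> list:
    """Search OTel logs by keyword, service, severity, and time range."""
    must = [{"range": {"time": {"gte": startTime, "lte": endTime}}}]
    if query:
        must.append({"match": {"body": query}})
    if service:
        must.append({"term": {"serviceName": service}})
    if severity:
        must.append({"term": {"severityText": severity}})
    resp = client.search(index="otel-logs-*",
                         body={"query": {"bool": {"must": must}}, "size": size})
    return [hit["_source"] for hit in resp["hits"]["hits"]]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; this solution serves MCP over HTTP/SSE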

The Traces MCP server exposes tool functions for searching and retrieving information about distributed traces. These functions can help look up traces by trace ID and find traces for a specific service, the spans belonging to a trace, the service map information built from the spans, and the rate, error, and duration (also known as RED) metrics. This enables the agent to follow a request's path across the services and pinpoint where failures occurred or latency originated.

# Traces MCP Server - Key Functions
get_otel_spans(
    serviceName: string,     # Service name to filter spans
    traceId: string,         # Trace ID to filter spans
    spanId: string,          # Span ID to retrieve a specific span
    operationName: string,   # Operation/span name to filter
    startTime: string,       # Start time (ISO format or relative)
    endTime: string,         # End time (ISO format or relative)
    size: number             # Number of results to return
)
get_spans_by_trace_id(
    traceId: string,         # Trace ID to retrieve all spans for
    size: number             # Maximum number of spans to return
)
get_otel_service_map(
    serviceName: string,     # Service name to filter service map
    startTime: string,       # Start time
    endTime: string,         # End time
    size: number             # Number of results to return
)
get_otel_rate_error_duration_metrics(
    startTime: string,       # Start time (default: 'now-5m')
    endTime: string          # End time (default: 'now')
)
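As an illustration of how one of these functions can be backed by the span store, the following sketch computes RED metrics per service with a single OpenSearch aggregation. It assumes Data Prepper-style span fields (serviceName, status.code, durationInNanos) and the otel-v1-apm-span-* index pattern, which may differ in your setup:

# Sketch: rate, error, and duration per service from stored spans (assumed schema)
from opensearchpy import OpenSearch

client = OpenSearch(hosts=["https://my-domain.es.amazonaws.com"])  # hypothetical endpoint

def get_otel_rate_error_duration_metrics(startTime="now-5m", endTime="now"):
    body = {
        "size": 0,
        "query": {"range": {"startTime": {"gte": startTime, "lte": endTime}}},
        "aggs": {
            "per_service": {
                "terms": {"field": "serviceName", "size": 50},
                "aggs": {
                    "errors": {"filter": {"term": {"status.code": 2}}},  # 2 = ERROR in OTel
                    "latency": {"percentiles": {"field": "durationInNanos",
                                                "percents": [50, 95, 99]}},
                },
            }
        },
    }
    resp = client.search(index="otel-v1-apm-span-*", body=body)
    return [{
        "service": b["key"],
        "request_count": b["doc_count"],
        "error_rate": b["errors"]["doc_count"] / max(b["doc_count"], 1),
        "duration_percentiles_ns": b["latency"]["values"],
    } for b in resp["aggregations"]["per_service"]["buckets"]]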

The Metrics MCP server exposes tool functions for querying time series metrics. The agent can use these functions to check error rate percentiles and resource utilization, which are key signals for understanding the overall health of the system and identifying anomalous behavior.

# Metrics MCP Server - Key Functions
query_instant(
    query: string,           # PromQL query expression
    time: string,            # Evaluation timestamp (optional)
    timeout: string          # Evaluation timeout (optional)
)
query_range(
    query: string,           # PromQL query expression
    start: string,           # Start timestamp
    end: string,             # End timestamp
    step: string,            # Query resolution step (e.g., '15s', '1m')
    timeout: string          # Evaluation timeout (optional)
)
get_timeseries(
    metric: string,          # Metric name or PromQL expression
    duration: string,        # Time period to look back (e.g., '1h', '6h')
    step: string             # Step size (optional)
)
search_metrics(
    pattern: string          # Search pattern (supports regex, e.g., 'http.*')
)
explore_metric(
    metric: string           # Metric name to explore (metadata + samples)
)
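These functions can map directly onto the standard Prometheus HTTP API that Amazon Managed Service for Prometheus exposes. The following is a hedged sketch of query_range; the workspace endpoint is a placeholder, the metric and label names are illustrative, and production calls to Managed Service for Prometheus additionally require SigV4-signed requests, omitted here for brevity:

# Sketch: query_range via the Prometheus HTTP API (placeholder endpoint, no SigV4)
import requests

PROM_URL = "https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1"

def query_range(query: str, start: str, end: str, step: str = "1m", timeout: str = "30s"):
    resp = requests.get(f"{PROM_URL}/query_range",
                        params={"query": query, "start": start, "end": end,
                                "step": step, "timeout": timeout})
    resp.raise_for_status()
    return resp.json()["data"]["result"]

# Example: 5xx error rate per service over an hour (illustrative metric/label names)
series = query_range(
    'sum by (service_name) (rate(http_server_request_duration_seconds_count'
    '{http_response_status_code=~"5.."}[5m]))',
    start="2026-02-06T09:00:00Z", end="2026-02-06T10:00:00Z", step="1m")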

These three MCP servers span the different types of data used by engineers during investigations, providing a complete working set for an agent to conduct investigations with autonomous correlation across logs, traces, and metrics to determine the possible root causes of an issue. Additionally, a custom MCP server exposes tool functions over business data on revenue, sales, and other business metrics. For the OpenTelemetry Demo application, you can develop synthetic data to help provide context for impact and other business-level metrics. For brevity, we don't show that server as part of this architecture.

Observability agent

The observability agent is central to the solution. It is built to assist with incident investigation. Traditional automations and manual runbooks often follow predefined operating procedures, but with an observability agent, you don't need to define them. The agent can analyze, reason based on the data available to it, and adapt its strategy based on what it discovers. It correlates findings across logs, traces, and metrics to arrive at a root cause.

The observability agent is built with the Strands Agents SDK, an open source framework that simplifies development of AI agents. The SDK provides a model-driven approach with the flexibility to handle the underlying orchestration and reasoning (the agent loop) by invoking exposed tools and maintaining coherent, turn-based interactions. This implementation also discovers tools dynamically, so if there is a change in the capabilities, the agent can make decisions based on up-to-date information.
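A minimal sketch of this wiring with the Strands SDK follows. The MCP endpoint URL is a placeholder and the model ID should be checked against the current Amazon Bedrock model catalog; note that the tool list is discovered at session start rather than hard-coded:

# Sketch: building the observability agent with the Strands Agents SDK
from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

logs_mcp = MCPClient(lambda: streamablehttp_client("https://example.com/logs/mcp"))  # placeholder

with logs_mcp:
    tools = logs_mcp.list_tools_sync()  # dynamic discovery of current capabilities
    agent = Agent(
        model="anthropic.claude-sonnet-4-5-20250929-v1:0",  # verify current model ID
        tools=tools,
        system_prompt="You are an experienced SRE investigating production incidents.",
    )
    result = agent("Are there any errors in my application in the last 5 minutes?")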

The agent runs on Amazon Bedrock AgentCore Runtime, which provides fully managed infrastructure for hosting and running agents. The runtime supports popular agent frameworks, including Strands, LangGraph, and CrewAI. The runtime also provides the scaling, availability, and compute that many enterprises require to run production-grade agents.
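Hosting the agent on AgentCore Runtime amounts to wrapping it in an entrypoint. The following minimal sketch uses the bedrock-agentcore Python SDK and assumes the agent object from the previous snippet:

# Sketch: exposing the agent as an AgentCore Runtime entrypoint
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()

@app.entrypoint
def invoke(payload):
    """Handle one investigation turn routed by AgentCore Runtime."""
    question = payload.get("prompt", "")
    result = agent(question)  # 'agent' as constructed in the earlier sketch
    return {"result": str(result)}

if __name__ == "__main__":
    app.run()  # local development server; AgentCore Runtime hosts this in production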

We use Amazon Bedrock AgentCore Gateway to connect to all three MCP servers. When deploying agents at scale, gateways are indispensable components for reducing management tasks like custom code development, infrastructure provisioning, comprehensive ingress and egress security, and unified access. These are essential enterprise functions needed when bringing a workload to production. In this application, we create gateways that connect all three MCP servers as targets using server-sent events. Gateways work alongside Amazon Bedrock AgentCore Identity to provide secure credentials management and secure identity propagation from the user to the communicating entities. The sample application uses AWS Identity and Access Management (IAM) for identity management and propagation.
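Because the sample application uses IAM, calls from the agent to the gateway's MCP endpoint carry SigV4 signatures. The following sketch signs an MCP tools/list request with botocore; the gateway URL is a placeholder and the signing service name is an assumption:

# Sketch: SigV4-signing a request to an AgentCore Gateway MCP endpoint
import boto3, requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

GATEWAY_URL = "https://example-gw.gateway.bedrock-agentcore.us-east-1.amazonaws.com/mcp"  # placeholder

creds = boto3.Session().get_credentials()
req = AWSRequest(method="POST", url=GATEWAY_URL,
                 data=b'{"jsonrpc":"2.0","id":1,"method":"tools/list"}',
                 headers={"Content-Type": "application/json"})
SigV4Auth(creds, "bedrock-agentcore", "us-east-1").add_auth(req)  # service name assumed
resp = requests.post(GATEWAY_URL, data=req.data, headers=dict(req.headers))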

Incident investigation is often a multi-step process. It involves iterative hypothesis testing, multiple rounds of querying, and building context over time. We use Amazon Bedrock AgentCore Memory for this purpose. In this solution, we use session-based namespaces to maintain separate conversation threads for different investigations. For example, when a user asks "What about the payment service?" during an investigation, the agent retrieves recent conversation history from memory to maintain awareness of prior findings. We store both user questions and agent responses with timestamps to help the agent reconstruct the conversation chronologically and reason about already completed findings.
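A sketch of this memory flow with the bedrock-agentcore SDK's MemoryClient is shown below; the memory ID, actor ID, and session naming are hypothetical:

# Sketch: persisting and recalling investigation turns with AgentCore Memory
from bedrock_agentcore.memory import MemoryClient

memory = MemoryClient(region_name="us-east-1")
MEMORY_ID = "observability-agent-memory-EXAMPLE"  # hypothetical

# Store a user question and the agent's answer for this investigation session
memory.create_event(
    memory_id=MEMORY_ID,
    actor_id="sre-1",
    session_id="incident-2026-02-06",
    messages=[
        ("What about the payment service?", "USER"),
        ("The payment service is unreachable; checking its logs next.", "ASSISTANT"),
    ],
)

# On the next turn, reload recent history so prior findings stay in context
recent = memory.list_events(memory_id=MEMORY_ID, actor_id="sre-1",
                            session_id="incident-2026-02-06", max_results=20)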

We configured the observability agent to use Anthropic's Claude Sonnet 4.5 in Amazon Bedrock for reasoning. The model interprets questions, decides which MCP tool to invoke, analyzes the results, and formulates the next set of questions or conclusions. We use a system prompt to instruct the model to think like an experienced SRE or an operations center engineer: start with a high-level check, narrow down the affected components, correlate across telemetry signal types, and derive conclusions with substantiation. We also ask the model to suggest logical next steps, such as performing a drill-down to investigate inter-service dependencies. This makes the agent versatile enough to investigate and reason about common types of incident investigations.
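In code, that instruction amounts to a system prompt along these lines (paraphrased from the description above, not the verbatim prompt):

# Paraphrased system prompt; not the exact prompt used in this solution
SYSTEM_PROMPT = """You are an experienced SRE investigating production incidents.
Start with a high-level health check, then narrow down the affected components.
Correlate findings across logs, traces, and metrics before drawing conclusions,
and substantiate each conclusion with the evidence you gathered.
After each answer, suggest logical next steps, such as drilling down into
inter-service dependencies."""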

Observability agent in action

We built real-time RED (rate, errors, duration) metrics dashboards for the entire application, as shown in the following figure.

To establish a baseline, we asked the agent the following question: "Are there any errors in my application in the last 5 minutes?" The agent queries the traces and metrics, analyzes the results, and responds saying there are no errors in the system. It notes that all the services are active, traces are healthy, and the system is processing requests normally. The agent also proactively suggests next steps that might be useful for further investigation.

Introducing failures

The OpenTelemetry Demo application has a feature flag that we can use to introduce deliberate failures in the system. It also includes load generation so these errors surface prominently. We use these features to introduce a few failures in the payment service. The real-time RED metrics dashboards in the earlier figure reflect the impact and show the error rates climbing.

Investigation and root cause analysis

Now that we're generating errors, we engage the agent again. This is often the start of the investigation session. There are also workflows, like alarms triggering or pages going out, that can kick off an investigation.

We ask the question: "Users are complaining that it's taking a long time to buy items. Can you check to see what's going on?"

The agent retrieves the conversation history from memory (if there is any), invokes tools to query RED metrics across services, and analyzes the results. It identifies a critical purchase flow performance issue: the payment service is in a connectivity crisis and completely unavailable, with high latency observed in the fraud detection, ad, and recommendation services. The agent provides immediate action recommendations (restore payment service connectivity as the top priority) and suggests next steps, including investigating the payment service logs.

Following the agent's recommendation, we ask it to investigate the logs: "Investigate payment service logs to understand the connectivity issue."

The agent searches logs for the checkout and payment services, correlates them with trace data, and analyzes service dependencies from the service map. It confirms that although the cart, product catalog, and currency services are healthy, the payment service is completely unreachable, successfully identifying the root cause of our deliberately introduced failure.

Beyond root cause: Analyzing business impact

As mentioned earlier, we have synthetic business sales and revenue data in a separate MCP server, so when the user asks the agent "Analyze the business impact of the checkout and payment service failures," the agent uses this business data, examines the transaction data from traces, calculates estimated revenue impact, and assesses customer abandonment rates due to checkout failures. This shows how the agent can go beyond identifying the root cause and provide help with operational actions like creating a runbook for future issue resolution, which could be the first step toward providing automated remediation without involving SREs.

Benefits and outcomes

Although the failure scenario in this post is simplified for illustration, it highlights several key benefits that directly contribute to reducing MTTR.

Accelerated investigation cycles

Traditional workflows for troubleshooting involve multiple iterations of hypotheses, verification, querying, and data analysis at each step, requiring context switching and consuming hours of effort. The observability agent reduces these drastically to a few minutes through autonomous reasoning, correlation, and action, which in turn reduces MTTR.

Handling complex workflows

Real-world production scenarios often involve cascading failures and multiple system failures. The observability agent's capabilities can extend to these scenarios by using historical data and pattern recognition. For instance, it can distinguish related issues from false positives using temporal or identity-based correlation, dependency graphs, and other techniques, helping SREs avoid wasted investigation effort on unrelated anomalies.

Rather than provide a single answer, the agent can present a probability distribution across potential root causes, helping SREs prioritize remediation strategies; for example:

  • Payment service network connectivity issue: 75%
  • Downstream payment gateway timeout: 15%
  • Database connection pool exhaustion: 8%
  • Other/Unknown: 2%

The agent can compare current symptoms against past incidents, identifying whether similar patterns have occurred before, thereby evolving from a reactive query tool into a proactive diagnostic assistant.

Conclusion

Incident investigation remains largely manual. SREs juggle dashboards, craft queries, and correlate signals under pressure, even when all the data is readily available. In this post, we showed how an observability agent built with Amazon Bedrock AgentCore and OpenSearch Service can alleviate this cognitive burden by autonomously querying logs, traces, and metrics; correlating findings; and guiding SREs toward root cause faster. Although this pattern represents one approach, the flexibility of Amazon Bedrock AgentCore combined with the search and analytics capabilities of OpenSearch Service allows agents to be designed and deployed in numerous ways (at different stages of the incident lifecycle, with varying levels of autonomy, or focused on specific investigation tasks) to suit your organization's unique operational needs. Agentic AI doesn't replace existing observability investments, but amplifies them by providing an effective way to use your data during incident investigations.


About the authors

Muthu Pitchaimani

Muthu is a Search Specialist with Amazon OpenSearch Service. He builds large-scale search applications and solutions. Muthu is interested in the topics of networking and security, and is based out of Austin, Texas.

Jon Handler

Jon is Director of Solutions Architecture for Search Services at AWS. Based in Palo Alto, CA, Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have generative AI, search, and log analytics workloads for OpenSearch.
