April 9, 2025

Elastic Announces General Availability of LLM Observability for Google Cloud’s Vertex AI

SAN FRANCISCO, April 9, 2025 — Elastic today announced the general availability of the Elastic Google Cloud Vertex AI platform integration in Elastic Observability.

This integration offers large language model (LLM) observability support for models hosted in Google Cloud’s Vertex AI platform, providing insights into costs, token usage, errors, prompts, responses, and performance. Site reliability engineers (SREs) can now optimize resource usage, identify and resolve performance bottlenecks, and enhance model efficiency and accuracy.
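To give a sense of what this kind of LLM telemetry can look like once it is stored in Elasticsearch, the following minimal sketch aggregates token usage per model with the official Elasticsearch Python client. The index pattern and field names used here (metrics-*, gcp.vertexai.model_id, gcp.vertexai.token_count) are illustrative assumptions, not the integration’s documented schema.

```python
# Hypothetical sketch: total token usage per Vertex AI model over the last 24 hours,
# read from metrics stored in Elasticsearch. Index pattern and field names are
# assumptions for illustration only, not the integration's documented schema.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # point at your own cluster

resp = es.search(
    index="metrics-*",  # assumed index pattern
    size=0,
    query={"range": {"@timestamp": {"gte": "now-24h"}}},
    aggs={
        "per_model": {
            "terms": {"field": "gcp.vertexai.model_id"},  # assumed field name
            "aggs": {
                "total_tokens": {
                    "sum": {"field": "gcp.vertexai.token_count"}  # assumed field name
                }
            },
        }
    },
)

# Print each model's total token count for the period.
for bucket in resp["aggregations"]["per_model"]["buckets"]:
    print(bucket["key"], int(bucket["total_tokens"]["value"]))
```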

“Comprehensive visibility into LLM performance is crucial for SREs and DevOps teams to ensure that their AI-powered applications are optimized,” said Santosh Krishnan, general manager of Observability and Security at Elastic. “Google Cloud’s Vertex AI platform integration provides users robust LLM observability and detection of performance anomalies in real-time, giving them critical insights into model performance that help with bottleneck identification and reliability improvements.”

Availability

The Elastic Google Cloud Vertex AI platform integration is generally available today.

About Elastic

Elastic (NYSE: ESTC), the Search AI Company, enables everyone to find the answers they need in real-time using all their data, at scale. Elastic’s solutions for search, observability, and security are built on the Elastic Search AI Platform, the development platform used by thousands of companies, including more than 50% of the Fortune 500. Learn more at elastic.co.


Source: Elastic
