
Making Waves: Dynatrace Perform 2024 Ushers in New Era of Observability

Observability, automation, and sustainability took center stage, with key announcements around reducing carbon, ensuring AI reliability, and maximizing engineering efficiency.

By Tom Smith · Feb. 01, 24 · Analysis

Dynatrace welcomed thousands of in-person and virtual attendees to its annual Perform conference in Las Vegas this week. The overarching theme, “Make Waves,” conveyed both the tectonic shifts happening across industries and the opportunities for organizations to drive transformational impact.

True to its cutting-edge reputation, Dynatrace made several major announcements that will help enterprises tackle some of today’s most pressing challenges around cloud complexity, AI adoption, security threats, and sustainability commitments. Let’s dive into the key developments.

Reducing the IT Carbon Footprint

With climate change accelerating, reducing carbon emissions has become a business imperative. However, IT infrastructures are extremely complex, making it difficult for enterprises to quantify and optimize their footprints at scale.

Dynatrace Carbon Impact is purpose-built to address this challenge. It translates highly granular observability data, such as compute utilization metrics, into accurate sustainability impacts per data center, cloud provider region, host cluster, and even individual workload.

Teams can instantly identify “hot spots” representing the highest energy waste and emissions for focused efficiency gains. For example, Carbon Impact may reveal an overload of duplicate microservices dragging down utilization rates across critical application resources.

It also suggests precise optimization actions based on cloud architectures and dependencies, like eliminating grossly underutilized instances. Moreover, its continuous monitoring tracks sustainability KPIs over time after measures such as rightsizing initiatives or green coding enhancements are taken.
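
To make the idea concrete, here is a minimal, hypothetical sketch (not Dynatrace's implementation) of how utilization metrics might be translated into carbon estimates and ranked into hot spots. The linear power model, PUE value, and grid intensity figures are illustrative assumptions.

```python
# Hypothetical sketch: turning utilization metrics into carbon estimates
# and ranking "hot spots". Not Dynatrace's implementation.
from dataclasses import dataclass

@dataclass
class HostSample:
    name: str
    region: str
    avg_cpu_utilization: float   # 0.0 - 1.0 over the sample window
    idle_watts: float            # power draw at idle
    max_watts: float             # power draw at full load
    hours: float                 # length of the sample window

# Assumed inputs: data-center PUE and regional grid intensity (kgCO2e per kWh).
PUE = 1.4
GRID_INTENSITY = {"us-east": 0.38, "eu-west": 0.23}

def estimate_kg_co2e(h: HostSample) -> float:
    # Linear power model between idle and max draw, scaled by PUE and grid intensity.
    watts = h.idle_watts + (h.max_watts - h.idle_watts) * h.avg_cpu_utilization
    kwh = watts * h.hours / 1000 * PUE
    return kwh * GRID_INTENSITY[h.region]

def hot_spots(samples: list[HostSample], top_n: int = 3) -> list[tuple[str, float]]:
    # Rank hosts by estimated emissions so teams know where optimization pays off most.
    ranked = sorted(((h.name, estimate_kg_co2e(h)) for h in samples),
                    key=lambda x: x[1], reverse=True)
    return ranked[:top_n]

hosts = [HostSample("web-1", "us-east", 0.12, 90, 300, 24),
         HostSample("batch-7", "eu-west", 0.85, 90, 300, 24)]
print(hot_spots(hosts, top_n=1))
```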

According to Dynatrace customer Lloyds Banking Group, which aims to cut operational carbon emissions by 75% by 2030, these capabilities create “the visibility and impact across IT ecosystems needed to optimize infrastructure efficiency.”

As businesses pursue environmental goals amidst cloud scale and complexity, Carbon Impact makes observability the key enabler to reaching those targets.

Making Observability Work for AI

Artificial intelligence holds tremendous promise, but as the adoption of complex technologies like large language models and generative AI accelerates, new observability challenges arise.

These modern AI workloads can behave unexpectedly, carry proprietary IP within their models (hampering visibility), and operate as black boxes in which failures are difficult to trace. Their on-demand consumption models also make resource usage hard to predict and control.

Dynatrace AI Observability is purpose-built to overcome these hurdles. It instruments the entire AI stack, including infrastructure like GPU clusters, ML pipelines, model governance systems, and AI apps.

This full-stack observability, combined with explanatory models from Davis AI, delivers precise insights into the provenance and behavior of AI systems. Teams can pinpoint the root causes of model degradation and quantify model accuracy.

For large language models like GPT in particular, Dynatrace traces query patterns and token consumption to prevent overages. As models iteratively learn from new data, it monitors for harmful drift. This governance ensures models operate reliably and cost-effectively at enterprise scale.
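
As an illustration of the kind of guardrail this enables, here is a minimal, hypothetical sketch (not Dynatrace's API) of tracking LLM token consumption against a budget. The model name, limit, and threshold are assumptions for the example.

```python
# Hypothetical sketch, not Dynatrace's API: a minimal tracker for LLM token
# consumption that flags usage approaching a cost budget.
from collections import defaultdict

class TokenBudget:
    def __init__(self, monthly_token_limit: int):
        self.limit = monthly_token_limit
        self.used = defaultdict(int)   # tokens consumed per model

    def record(self, model: str, prompt_tokens: int, completion_tokens: int) -> None:
        # Most LLM APIs report token counts in the response's usage metadata;
        # here we simply accumulate them per model.
        self.used[model] += prompt_tokens + completion_tokens

    def over_budget(self, model: str, threshold: float = 0.9) -> bool:
        # Alert once consumption crosses a fraction of the monthly limit.
        return self.used[model] >= self.limit * threshold

budget = TokenBudget(monthly_token_limit=5_000_000)
budget.record("gpt-4", prompt_tokens=1_200, completion_tokens=350)
if budget.over_budget("gpt-4"):
    print("Warning: token consumption approaching the monthly budget")
```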

In an environment demanding responsible and secure AI rollouts across industries, observability is no longer optional. Dynatrace equips businesses to drive generative AI and ML innovation with confidence.

Driving Analytics and Automation at Scale

Modern cloud-native environments generate massive data streams that are difficult for enterprises to smoothly manage, let alone extract value from. Constrained bandwidth and storage compound the issue, while ad hoc observability pipelines and data quality defects create headaches for practitioners.

Dynatrace OpenPipeline elegantly solves these challenges. It offers a single, high-powered route to funnel all observability, security, and business telemetry pouring from dynamic cloud workloads into value-driving analytics and automation platforms like Dynatrace.

Leveraging patent-pending accelerated processing algorithms combined with instant query capabilities, OpenPipeline can evaluate staggering data volumes in flight, up to five to ten times faster than alternatives, unlocking real-time analytics use cases that were previously unachievable. There is no need for clumsy sampling approximations.

It also enriches telemetry with full topology context for precise answers while allowing teams to seamlessly filter, route, and transform data on ingest based on specific analytics or compliance needs. OpenPipeline even helps reduce duplicate streams by up to 30% to minimize bandwidth demands and required data warehouse storage capacity.
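
The snippet below is a toy illustration of the filter/transform/route pattern described above; it is not OpenPipeline's actual configuration or interface, and the stage names and record fields are invented for the example.

```python
# Hypothetical sketch: a toy ingest pipeline that filters, transforms, and
# routes telemetry records as they arrive. Not OpenPipeline's interface.
from typing import Callable, Iterable, Optional

Record = dict
Stage = Callable[[Record], Optional[Record]]

def drop_debug_logs(rec: Record) -> Optional[Record]:
    # Filter: discard low-value records before they consume storage.
    return None if rec.get("level") == "DEBUG" else rec

def redact_pii(rec: Record) -> Record:
    # Transform: mask sensitive fields on ingest for compliance.
    if "user_email" in rec:
        rec["user_email"] = "***redacted***"
    return rec

def route(rec: Record) -> str:
    # Route: send audit events to one bucket, everything else to analytics.
    return "security" if rec.get("type") == "audit" else "analytics"

def ingest(records: Iterable[Record], stages: list[Stage]) -> dict[str, list[Record]]:
    buckets: dict[str, list[Record]] = {"security": [], "analytics": []}
    for rec in records:
        for stage in stages:
            rec = stage(rec)
            if rec is None:
                break          # record filtered out
        else:
            buckets[route(rec)].append(rec)
    return buckets

buckets = ingest(
    [{"level": "INFO", "type": "audit", "user_email": "a@b.com"},
     {"level": "DEBUG", "type": "metric"}],
    stages=[drop_debug_logs, redact_pii],
)
```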

For developer, SRE, and data engineering teams struggling to build custom pipelines that handle massive and varied data sources across today's heterogeneous enterprise stacks, OpenPipeline brings simplicity and performance, allowing more focus on extracting insights.

Ensuring Analytics and Automation Quality

Making decisions or triggering critical workflows based on bad data can spell disaster for organizations. But maintaining flawless data quality gets exponentially harder as cloud scale and complexity mushroom.

Luckily for Dynatrace platform users, Data Observability helps eliminate these worries. It leverages Davis AI and other Dynatrace modules to automatically track key telemetry health metrics on ingest, including freshness, volume patterns, distribution outliers, and even schema changes.

Any anomaly threatening the fidelity of downstream analytics and automation triggers an alert for investigation, and lineage tracking makes it easy to pinpoint the root source even across interconnected data pipelines. Teams save countless hours otherwise spent manually piecing together where data defects originated.
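
For a sense of what such checks look like in principle, here is a minimal, hypothetical sketch (not Dynatrace's implementation) of freshness, volume, and schema checks on an incoming feed. Thresholds and field names are assumptions.

```python
# Hypothetical sketch: simple health checks for an incoming telemetry feed
# covering freshness, volume anomalies, and schema changes.
import statistics
import time

def is_stale(last_record_ts: float, max_age_seconds: float = 300) -> bool:
    # Freshness: flag the feed if no record has arrived within the expected window.
    return time.time() - last_record_ts > max_age_seconds

def volume_anomaly(counts_per_minute: list[int], z_threshold: float = 3.0) -> bool:
    # Volume: flag the latest minute if it deviates strongly from recent history.
    history, latest = counts_per_minute[:-1], counts_per_minute[-1]
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0
    return abs(latest - mean) / stdev > z_threshold

def schema_changed(expected: set[str], observed: set[str]) -> bool:
    # Schema: flag missing or unexpected fields before they break downstream queries.
    return expected != observed

assert schema_changed({"ts", "host", "cpu"}, {"ts", "host", "cpu", "gpu"})
```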

But beyond reactive governance, Dynatrace Data Observability also proactively optimizes analytics by continually assessing the relevance and utilization of data feeds. Teams can confidently retire unused streams wasting resources or identify new sources to incorporate for better insights and models.

For developers building custom data integrations and architects managing business-critical analytics, worry-free data means more efficient delivery of value and innovation for the business. Data Observability grants peace of mind that both the historical and real-time data fueling crucial automation is fully trustworthy.

The Path to Software Perfection

Across the board, Dynatrace Perform 2024 indicated how AI and automation will reshape performance engineering. Founder and CTO Bernd Greifeneder summarized it perfectly: “We built Dynatrace to help customers automate because that is how you get to software perfection. These advances give teams the answers and governance to prevent problems automatically versus manual fixes.”

Dynatrace Perform attendees are clearly excited for observability’s next paradigm shift. 


Opinions expressed by DZone contributors are their own.
