
OpenTelemetry and Apache Big Data, United by mishmash io

This repository contains code that receives and adapts OpenTelemetry signals - like logs, metrics, traces and profiles - to Open Source projects of the Apache analytics ecosystem.

Blend and bundle them to build your own Observability analytics backends:

  • for batch processing with Apache Spark or Hive
  • for real-time analytics with Apache Druid and Apache Superset
  • for Machine Learning and AI

You will also find additional tools, examples and demos that might be of service on your own OpenTelemetry journey.

Tip

This is a public release of code we have accumulated internally over time and so far contains only a limited subset of what we intend to share.

Examples of internal software that will be published here in the near future include:

  • A small OTLP server based on Apache BookKeeper for improved data ingestion reliability, even across node failures
  • OpenTelemetry Data Sources for Apache Pulsar, for when more complex preprocessing is needed
  • Our Testcontainers implementations that you can use to ensure your apps always produce the necessary telemetry, or to track performance across releases

Watch this repository for updates.


Why you should switch to OpenTelemetry

If you are new to OpenTelemetry, you might be wondering how it is better than the multitude of existing telemetry implementations, many of which are already well established within popular runtimes like Kubernetes.

There are a number of advantages that OpenTelemetry offers compared to earlier telemetry systems:

  • All signal types (logs, metrics, traces and profiles) are correlatable:

    For example - you can explore only the logs emitted inside a given (potentially failing) span.

    To see how telemetry signal correlation works - refer to the OpenTelemetry for Developers, Data Engineers and Data Scientists examples section below.

  • More precise timing:

    Unlike other telemetries, OpenTelemetry does not pull data, it pushes it. By avoiding the extra request needed to pull data - OpenTelemetry reports much more accurate timestamps of when logs, spans and other events were emitted, or when metric values were updated.

  • Zero-code telemetry:

    You can add telemetry to your existing apps without any code modifications. If you're using popular frameworks - they already have OpenTelemetry instrumentation that will just work out of the box. See the OpenTelemetry docs for your programming language.

    Also, you do not need to implement special endpoints and request handlers to supply telemetry.

  • No CPU overhead if telemetry is not emitted:

    When code instrumented with OpenTelemetry runs without a configured signals exporter (that is, when telemetry is disabled) - all OpenTelemetry API methods are effectively empty.

    They do not perform any operations and therefore consume no CPU.

  • Major companies already support OpenTelemetry:

    Large infrastructure providers - public clouds like Azure, AWS and GCP - already seamlessly integrate their monitoring and observability services with OpenTelemetry.

    Instrumenting your code with OpenTelemetry means it can be monitored on any of them, without code changes.
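The correlation point above can be illustrated with a few lines of plain Python: every OpenTelemetry log record carries the trace and span identifiers of the span that was active when it was emitted, so selecting the logs of one (potentially failing) span is a simple filter. The records below are hypothetical, simplified stand-ins for real OTLP data, not the actual message schema:

```python
# Hypothetical, simplified log records - real OTLP log records carry
# trace_id/span_id fields precisely to enable this kind of correlation.
logs = [
    {"trace_id": "a1", "span_id": "s1", "body": "starting checkout"},
    {"trace_id": "a1", "span_id": "s2", "body": "payment declined"},
    {"trace_id": "b2", "span_id": "s9", "body": "health check ok"},
]

def logs_for_span(records, trace_id, span_id):
    """Return only the log records emitted inside the given span."""
    return [r for r in records
            if r["trace_id"] == trace_id and r["span_id"] == span_id]

# Explore only the logs emitted inside the (potentially failing) span "s2":
print(logs_for_span(logs, "a1", "s2"))
```

The same filter works across signal types: metrics exemplars and profiles reference the same identifiers, which is what makes all four signals correlatable.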

If the above sounds convincing - keep reading through this document and explore the links in it.

OpenTelemetry for Developers, Data Engineers and Data Scientists

We have prepared a few Jupyter notebooks that visually explore OpenTelemetry data that we collected from a demo Astronomy webshop app using the Apache Parquet Stand-alone server contained in this repository.

Tip

If you are the sort of person who prefers to learn by looking at actual data - start with the OpenTelemetry Basics Notebook.

When and where should you use the software in this repository

We, at mishmash io, have been using OpenTelemetry for quite some time - recording telemetry from experiments, unit and integration tests - to ensure every new release of the software we develop performs better than the last, and within reasonable computing-resource usage. (More on this here.)

Tip

OpenTelemetry is great for monitoring software in production, but we believe you should adopt it within your software development process too.

Having been through that journey ourselves, we've realised that success depends on strong analytics. OpenTelemetry provides a number of tools to instrument your code to emit signals and to compose data-transmission pipelines for these signals, but leaves it to you to decide what you ultimately want to do with them: where you store your signals depends on how you will work with them.

You can compose such pipelines for signal transmission using the OpenTelemetry Collector, which in turn uses a network protocol called OTLP. At the end - you have to terminate the pipelines into an observability (or OTLP) backend.
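As a sketch of what terminating such a pipeline looks like, here is a minimal OpenTelemetry Collector configuration that receives OTLP over gRPC and forwards traces to an OTLP backend (the endpoint is a placeholder you would replace with your own):

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlp:
    endpoint: my-backend:4317   # placeholder - your OTLP backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```

The servers in this repository can sit at the receiving end of such an `otlp` exporter.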

As a network protocol, OTLP is great at reducing the number of bytes transmitted, keeping the throughput high with minimum overhead. It does this by heavily nesting its messages - to avoid data duplication and take maximum advantage of dictionary encodings and data compression.

On the analytics side though - heavily nested structures are not optimal. A simple count(*) or sum() query, done over millions of OTLP messages, will have to unnest each one of them. Every time you run that query.

And this is the second reason why we believe you might find the software here useful:

Tip

When doing analytics on your observability data - you need a suitable data schema.

The tools in this repository convert OTLP messages into a 'flatter' schema that is more suitable for analytics.

They perform these transformations only once - on OTLP packet reception - to minimize the overhead that would otherwise be incurred every time you run an analytics job or query.
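The flattening described above can be sketched in a few lines of Python. The nested structure below mimics, in a heavily simplified way, how OTLP groups metric data points under a resource and a scope; all field names here are illustrative, not the actual schema used by these tools. Once flattened on reception, a later count or sum never has to unnest anything:

```python
# A heavily simplified stand-in for a nested OTLP metrics message:
# resource -> scopes -> metrics -> data points.
otlp_message = {
    "resource": {"service.name": "checkout"},
    "scopes": [
        {"name": "http", "metrics": [
            {"name": "requests", "points": [{"value": 3}, {"value": 5}]},
        ]},
    ],
}

def flatten(message):
    """Unnest once, on reception: one flat row per data point."""
    rows = []
    for scope in message["scopes"]:
        for metric in scope["metrics"]:
            for point in metric["points"]:
                rows.append({
                    "service.name": message["resource"]["service.name"],
                    "scope": scope["name"],
                    "metric": metric["name"],
                    "value": point["value"],
                })
    return rows

rows = flatten(otlp_message)
# A count(*) or sum() now scans flat rows instead of unnesting every message:
print(len(rows), sum(r["value"] for r in rows))
```

Each flat row repeats the resource and scope attributes, trading a little storage for queries that scan a simple columnar table.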

Below are quick introductions to the individual software packages, with pointers to more information.

Tip

If you're wondering how to get your first OpenTelemetry data sets - check out our fork of OpenTelemetry's Demo app.

In there you will find complete deployments that will generate signals, save them and let you play with the data - by writing your own notebooks or creating Apache Superset dashboards.

Artifacts

Embeddable collectors

The base artifact, collector-embedded, contains classes that handle the OTLP protocol (over both gRPC and HTTP).

Apache Parquet Stand-alone server

This artifact contains a runnable OTLP-protocol server that receives signals from OpenTelemetry and saves them into Apache Parquet files.

It is not intended for production use, but rather as a quick tool to save and explore OpenTelemetry data locally. The Basics Jupyter Notebook explores Parquet files as saved by this Stand-alone server.

Apache Druid OTLP Input Format

Use this artifact when ingesting OpenTelemetry signals into Apache Druid, in combination with an Input Source (such as Apache Kafka or another streaming source).

Apache Druid is a high performance, real-time analytics database that delivers sub-second queries on streaming and batch data at scale and under load. This makes it perfect for OpenTelemetry data analytics.

With this OTLP Input Format you can build OpenTelemetry ingestion pipelines into Apache Druid.

Find out more about the OTLP Input Format for Apache Druid in its own documentation.

Apache Superset charts and dashboards

superset-dashboard

Apache Superset is an open-source modern data exploration and visualization platform.

You can use its rich visualizations, no-code viz builder and its powerful SQL IDE to build your own OpenTelemetry analytics.

To get you started, we're publishing data sources and visualizations that you can import into Apache Superset.

OpenTelemetry at mishmash io

OpenTelemetry's main intent is the observability of production environments, but at mishmash io it is also part of our software development process. By saving telemetry from experiments and tests of our own algorithms, we continuously verify the performance and resource usage of our distributed database, across releases.

We believe that adopting OpenTelemetry as a software development tool might be useful to you too, which is why we decided to open-source the tools we've built.

Learn more about the broader set of OpenTelemetry-related activities at mishmash io and follow our GitHub profile for updates and new releases.