
Grafana Alloy: The Essential Connector Powering the Next-Gen LGTM Stack


The name Alloy comes from materials science, where an alloy is a mixture of chemical elements of which at least one is a metal, although the term is also sometimes used for other mixtures of elements (according to Wikipedia).

In the Grafana ecosystem, the Alloy service acts as a unified collector for the three major pillars of observability: logs, traces, and metrics.

This saves a lot of time and complexity in telemetry data collection, transformation, and storage, and also sets a standard for all observability telemetry data collection.

Managing the LGTM stack as a whole while using different collector tools for logs (Promtail), traces (OpenTelemetry Collector), and metrics (Prometheus exporters) can be a lot of work.

From finding ways to manage individual components, each with its own possibility of failure, to optimizing every tool's configuration, the overhead quickly adds up.

Introducing Grafana Alloy

Alloy is an all-encompassing collector for logs, metrics, and traces: it collects, enriches, and forwards telemetry data into storage engines such as Prometheus, Loki, and Pyroscope. With Alloy, there is no need to configure Promtail for logging and a Prometheus exporter for metrics, which reduces the operational complexity of managing several telemetry collection services. You can configure all the telemetry collection you need, or just what your application and infrastructure require.

Alloy is built with the same technology used to build Promtail: Go. It also uses a configuration format similar to Promtail's, with slight changes to the parameters.

Apart from adding a Loki endpoint to send logs to, the configuration has options for Prometheus-compatible backends (Mimir, Thanos) and Tempo for traces. The configuration below is an example of Grafana Alloy components for collecting and shipping logs.

Components serve as the foundational elements of Alloy, with each one dedicated to a specific function, such as gathering Prometheus metrics or fetching secrets. Grafana Alloy provides a comprehensive set of components designed to collect, process, and export telemetry data, enabling users to build robust observability pipelines.

When choosing a component, it’s essential to consider the type of data you need to collect (e.g., metrics, logs, or traces) and the source of that data (e.g., Prometheus, OpenTelemetry, or custom applications).

Each component is tailored for specific use cases, such as scraping metrics, receiving logs, or forwarding traces, ensuring flexibility and efficiency in configuring your observability stack.

Users can seamlessly integrate Alloy into their existing infrastructure by selecting the appropriate components, enhancing their ability to monitor and analyze system performance. For detailed guidance, refer to the official Grafana Alloy documentation to explore the full range of components and their configurations.

// Component 1
loki.write "default" {
  endpoint {
    url = "http://localhost:3100/loki/api/v1/push"
  }

  endpoint {
    url = "https://logs-us-central1.grafana.net/loki/api/v1/push"

    // Basic authentication for the Grafana Cloud endpoint; replace the
    // placeholders with your credentials (or read them from environment variables).
    basic_auth {
      username = "<USERNAME>"
      password = "<PASSWORD>"
    }
  }
}

// Component 2
loki.source.file "example" {
  // Collect logs from these local files.
  targets = [
    {__path__ = "/tmp/foo.txt", "color" = "pink"},
    {__path__ = "/tmp/bar.txt", "color" = "blue"},
    {__path__ = "/tmp/baz.txt", "color" = "grey"},
  ]

  forward_to = [loki.write.default.receiver]
}

The sample configuration above is made up of two components. Components reference one another using dot notation, as in the second component, which refers to the first as loki.write.default.receiver. The task of the first component is to send logs to two Loki endpoints: a local Loki instance and a Grafana Cloud instance, with basic auth authorizing writes to the latter. The second component collects logs from specific files on the local machine and forwards them to the first component, which handles delivery. The flow of data through this set of references between components forms a pipeline.
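A pipeline can also be extended by chaining extra components between the source and the writer. The snippet below is a minimal sketch of that idea using the loki.process component and its stage.static_labels stage; the component name add_labels and the env label are purely illustrative.

loki.process "add_labels" {
  // Attach a static label to every log entry passing through this component.
  stage.static_labels {
    values = {
      env = "dev",
    }
  }

  // Hand the processed entries on to the writer defined in Component 1.
  forward_to = [loki.write.default.receiver]
}

The file source would then set forward_to = [loki.process.add_labels.receiver] instead of pointing straight at the writer.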

Installing Grafana Alloy

There are various options for installing Grafana Alloy: Docker, Linux, Kubernetes, macOS, Windows, OpenShift, Ansible, Chef, and Puppet. In this article I will cover the Linux and Kubernetes installation options.

Setting up Alloy in a Linux Virtual Machine

There are various Linux distributions, each with a slightly different way to install applications. For this example, I will use Ubuntu 22.04.

Step 1 (Optional):

If you do not have GPG installed:

sudo apt install gpg

Step 2:

Add the Grafana GPG key and APT package repository

sudo mkdir -p /etc/apt/keyrings/

wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null

echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list

Step 3:

Update the repositories

sudo apt-get update

Step 4:

Install Grafana Alloy

sudo apt-get install alloy

Grafana Alloy will be installed on the virtual machine and run as a systemd service.
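The package registers a systemd unit named alloy, so the service can be enabled and checked with the usual systemd commands:

sudo systemctl enable --now alloy

sudo systemctl status alloy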


To start collecting telemetry data, Alloy needs to be configured properly. In this setup, the configuration file is located at /etc/alloy/config.alloy. Opening that file reveals a default configuration that exposes host metrics through the built-in unix exporter and scrapes them together with Alloy's own metrics.

// Sample config for Alloy.
//
// For a full configuration reference, see https://grafana.com/docs/alloy

logging {
  level = "warn"
}

prometheus.exporter.unix "default" {
  include_exporter_metrics = true
  disable_collectors       = ["mdadm"]
}

prometheus.scrape "default" {
  targets = array.concat(
    prometheus.exporter.unix.default.targets,
    [{
      // Self-collect metrics
      job         = "alloy",
      __address__ = "127.0.0.1:12345",
    }],
  )

  forward_to = [
    // TODO: components to forward metrics to (like prometheus.remote_write or
    // prometheus.relabel).
  ]
}

This configuration is not complete because the collected metrics are not sent anywhere. Let's complete it by adding another component that writes the metrics to a Prometheus-compatible system.

// Sample config for Alloy.
//
// For a full configuration reference, see https://grafana.com/docs/alloy

logging {
  level = "warn"
}

prometheus.exporter.unix "default" {
  include_exporter_metrics = true
  disable_collectors       = ["mdadm"]
}

prometheus.remote_write "default" {
  endpoint {
    url = "http://localhost:9090/api/v1/write"
  }
}

prometheus.scrape "default" {
  targets = array.concat(
    prometheus.exporter.unix.default.targets,
    [{
      // Self-collect metrics
      job         = "alloy",
      __address__ = "127.0.0.1:12345",
    }],
  )

  forward_to = [prometheus.remote_write.default.receiver]
}

NB: Ensure Prometheus is running locally, because the configuration points to localhost. If Prometheus is running somewhere else, update the localhost address in the configuration to the correct location of Prometheus.
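Note also that a plain Prometheus server only accepts data on /api/v1/write when its remote write receiver is enabled. Assuming a reasonably recent Prometheus release (2.33 or later), that is a startup flag:

prometheus --config.file=prometheus.yml --web.enable-remote-write-receiver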

Deploy Alloy in Kubernetes

With Helm charts, deploying Alloy in Kubernetes is quite easy and straightforward. You can get the details from the official Helm chart repository, then use the following steps to deploy Grafana Alloy in your Kubernetes cluster. Whether the cluster runs in the cloud or on-premises, Alloy can run on both. Before you proceed, ensure you have Helm installed, or use ArgoCD for this deployment.

Step 1:

Add Grafana Helm repo

helm repo add grafana https://grafana.github.io/helm-charts

Step 2:

Update the repo list

helm repo update

Step 3:

Create a namespace for Alloy to isolate it

kubectl create ns galloy

Step 4:

Install Alloy via the Helm Chart

helm install --namespace <NAMESPACE> <RELEASE_NAME> grafana/alloy
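For example, using the galloy namespace created in step 3 and an illustrative release name of alloy:

helm install --namespace galloy alloy grafana/alloy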

The last step is optional and simply confirms that Alloy is running properly.

Step 5 (Optional):

kubectl get pods -n galloy

With these steps, Grafana Alloy is installed and running in the Kubernetes cluster. That does not mean it will automatically start collecting telemetry data; it still needs to be configured, as explained earlier in this article where components were introduced. The following sample values.yaml file can be used to configure Alloy for Kubernetes.

alloy:
  mounts:
    varlog: true
  configMap:
    content: |
      logging {
        level  = "info"
        format = "logfmt"
      }

      discovery.kubernetes "pods" {
        role = "pod"
      }

      loki.source.kubernetes "pods" {
        targets    = discovery.kubernetes.pods.targets
        forward_to = [loki.write.endpoint.receiver]
      }

      loki.write "endpoint" {
        endpoint {
          url       = "http://loki-gateway.default.svc.cluster.local:80/loki/api/v1/push"
          tenant_id = "local"
        }
      }
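Assuming the snippet above is saved as values.yaml, it can be applied to the release from step 4 (again using the illustrative galloy namespace and alloy release name):

helm upgrade --namespace galloy alloy grafana/alloy -f values.yaml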

Conclusion 

Using Grafana Alloy as a single collection tool saves the time and effort involved in managing different collectors in your observability setup.

It ensures that all telemetry collection has a single source of truth, which reduces complexities and enhances the collection system of the observability pipeline.

With its wide range of components and the ability to create custom ones, the capabilities of Grafana Alloy are enormous.

Once collected, telemetry data needs to be visualized on a dashboard. Learn more about Grafana dashboards.

