2022-01-20

Shipping Filesystem Metrics to Axiom

Tola Ore-Aruwaji
@thecraftman_

Metricbeat collects metrics from sources such as the file system, network, CPU, processes, memory, and services. Metricbeat processes these metrics into a specified format and ships them to a destination like Axiom. Using Metricbeat, you can collect metrics from your CPU, memory, Redis, and Nginx and ship them directly to Axiom.

Shipping data from your system's CPU and memory to Axiom lets you see relevant application exceptions and errors across your environments.

Axiom transforms the metrics collected from your systems and services into insights. Using Axiom Data Explorer, you can build metric visualizations and quickly isolate and troubleshoot issues.

In this tutorial, I will show you how to ship Metricbeat data from your applications directly to Axiom.

Currently, we support all versions of Metricbeat.

Prerequisites

Let's get started 💡

  1. Firstly, visit our docs to copy, edit, and configure your module and metricsets.

  2. Create your dataset for your Metricbeat logs by selecting Settings → Datasets.


  3. Generate your API token:
  • In the Axiom UI, click Settings, then select API Tokens.
  • Select Add ingest token.
  • Enter a name and description, and select the permissions you want to give your token. You can choose Ingest and Query, Ingest, or Query permissions for your token.
  • Copy the generated token to your clipboard. Once you navigate away from the page, you can view the token again by selecting API Tokens.


  4. Next, update your configuration with the $DATASET_NAME you created in step 2 and the api_token you generated in step 3.
setup.ilm.enabled: false
metricbeat.modules:
  - module: system
    metricsets:
      - filesystem
output.elasticsearch:
  hosts: ["https://cloud.axiom.co:443/api/v1/datasets/$DATASET_NAME/elastic"]
  # token should be an ingest token
  api_key: 'axiom:$TOKEN'
  5. When you are done with your configuration, run Metricbeat to ship your statistics and metrics from your filesystem, network, and memory directly to Axiom.
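For example, a typical invocation on Linux or macOS might look like the following (assuming your updated metricbeat.yml is in the current directory):

```shell
# Run Metricbeat in the foreground (-e logs to stderr),
# using the configuration file that points at Axiom
./metricbeat -e -c metricbeat.yml
```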


  6. Visit your dataset in your Axiom Cloud console to see the statistics and metrics from your services directly in the dataset view.


  7. You can analyze and run queries on your dataset using different aggregations. This lets you visualize your metrics and segment your statistics across all or a subset of events in your Metricbeat dataset.
  • In this chart, we want the mean value of our available filesystem space, the minimum value of our system load, and the maximum value of our filesystem packets.

When you select your field, the output is a chart showing the minimum, average, and maximum values for each time duration, with a table below the chart.
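As a sketch, a Data Explorer query combining these three aggregations might look like the following. The field names system.filesystem.available, system.load.1, and system.network.in.packets are standard Metricbeat system-module fields, and metricbeat-dataset is a placeholder; substitute the names in your own dataset.

```apl
['metricbeat-dataset']
| summarize avg(['system.filesystem.available']),
            min(['system.load.1']),
            max(['system.network.in.packets']) by bin_auto(_time)
```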


  • You can also run other aggregations like topk() to find the "top 4" or "top 10" (where 4 and 10 are the k in topk) values for one or more fields in your Metricbeat dataset.
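For example, assuming a hypothetical dataset named metricbeat-dataset, a topk() query for the top 4 host names could be sketched as:

```apl
['metricbeat-dataset']
| summarize topk(['agent.hostname'], 4)
```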


  8. You can also run queries, monitor your datasets, and get insights into your resources using Axiom Data Explorer.

In the Axiom UI, select the fourth icon on the pane, which is the Data Explorer icon.


  9. Start by typing your Metricbeat dataset name, then your query.
  • A query consists of a sequence of query statements, with at least one statement being a tabular expression statement. The query's tabular expression statements produce the result of the query.

  • The syntax of a tabular expression flows from one query operator to another: it starts with your Metricbeat dataset name and then flows through a set of data operators and functions bound together with the pipe (|) delimiter.
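A minimal sketch of this flow (with metricbeat-dataset as a placeholder dataset name) pipes the dataset into a filter and then into an aggregation:

```apl
['metricbeat-dataset']
| where ['event.module'] == "system"
| summarize count() by bin_auto(_time)
```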

  10. On your Metricbeat dataset, you can start running queries using our operators and functions. This gives you direct insight into your filesystem, load, memory, and network metrics, and into how they are performing and behaving, especially when downtime and errors occur.
  • Project operator: the project operator selects the fields to include and can insert and embed new columns. The following query returns the fields agent.hostname, agent.version, ecs.version, and event.dataset as columns.
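Assuming a dataset named metricbeat-dataset, the project query described above can be sketched as:

```apl
['metricbeat-dataset']
| project ['agent.hostname'], ['agent.version'], ['ecs.version'], ['event.dataset']
```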


  11. Sort the rows of your metrics by one or more columns.

The following query sorts the data in descending order by agent.hostname and in ascending order by agent.version.
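A sketch of that sort query, again assuming a dataset named metricbeat-dataset:

```apl
['metricbeat-dataset']
| sort by ['agent.hostname'] desc, ['agent.version'] asc
```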


  12. Produce a table that aggregates the content of your dataset using the summarize operator.

The summarize operator groups together rows that have the same values in the by clause. In the query below, each group produces one row with four columns: agent.type, event.duration, event.module, and count_.
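That grouping can be sketched as the following query (metricbeat-dataset is a placeholder name):

```apl
['metricbeat-dataset']
| summarize count() by ['agent.type'], ['event.duration'], ['event.module']
```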



Whew! I love learning, don't you? 😌

We've got our Sandbox for you to play with different datasets, like the sample-http-logs dataset, the hackernews dataset, and the GitHub dataset (fork, issues, pull-request, and push events).

Explore other content on our blog.

Good luck!

Join us in changing how developers think about data