2021-10-27

Monitoring using Axiom's Data Explorer

Tola Ore-Aruwaji
@thecraftman_

As the transforming power of big data expands across business environments, your data streams grow to new magnitudes, and the work of tracking performance and identifying problems and failures grows with them. To discover valuable business insights, answer critical questions, and make data-driven decisions on the go, massive amounts of your data need to be stored, analyzed, and monitored in seconds.

Axiom Data Explorer is a high-performance analytics platform that gives every business the power to explore and monitor that ocean of data, featuring the intuitive Axiom Processing Language and powerful ingestion and storage capabilities. It is the perfect service for analyzing and monitoring high volumes of fresh and historical structured data, getting insights into how your backend and applications are performing, and running super-fast queries in seconds.

In this tutorial, I will show you how to monitor your applications using Data Explorer.

Prerequisites

  • Access to Axiom Cloud.
  • A dataset and ingest token created on the Axiom dashboard.
  • Data ingested into your dataset.

Let's roll 😤

  1. Create your dataset by selecting Settings → Datasets.
  • Ingest data into your dataset by using any of the data shippers we support.


  1. Generate your ingest token:
  • In the Axiom UI, click Settings, then select Ingest Tokens.
  • Select Add ingest token.
  • Enter a name and description and select ADD.
  • Copy the generated token to your clipboard. If you navigate away from the page, you can view the token again by selecting Ingest Tokens.


You can also run queries on Data Explorer using our Sandbox.

  1. On the Axiom dashboard, select the fourth (4th) icon on the pane, which is the Data Explorer dashboard.


  1. You can start running queries, monitor your datasets, and get insights into your resources.
  • Start by typing a dataset name, followed by your query.


  1. Enter the dataset you created in step one; Data Explorer will autocomplete it for you. The dataset I created in step one was http-logs, so I will type that into the Data Explorer.


  1. Start typing and running your queries. You can run queries using our Tabular operators, Scalar functions, Scalar operators, and Aggregation functions.
  • A query consists of a sequence of query statements, with at least one statement being a tabular expression statement. The query's tabular expression statements produce the result of the query.

  • The syntax of a tabular expression flows from one query operator to another: it starts with your dataset's name and then flows through a set of data operators and functions that are bound together with the pipe (|) delimiter.

  • For example, the following query has a single statement, which is a tabular expression statement. The statement starts with a reference to your dataset, http-logs. The data (rows) in the dataset are then filtered by the value of the method field. The query then returns the count of rows.

  • Link to Query on Play: https://play.axiom.co/axiom-play-qf1k/explorer?qid=1K15F9L8rtp-r1brbd
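
Assuming the http-logs dataset with a method field (as in the sample data), the query described above can be sketched in APL as:

```
['http-logs']
| where method == "GET"
| count
```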


  1. On your dataset, you can start running queries using our operators and functions. With these queries, you can monitor, explore, and analyze your data across all events in your dataset, visualize the output, and gain direct insights into how your database and backend are performing and behaving.

  2. take operator: The take operator returns up to the specified number of rows of data. The following query returns 100 rows from the http-logs dataset.
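
In APL, that take query looks like:

```
['http-logs']
| take 100
```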


  1. project operator: Use the project operator to select the fields to include in the output and to insert new computed columns. The following query returns a specific set of fields as columns.


Link to query: https://play.axiom.co/axiom-play-qf1k/explorer?qid=UfCajk2lzGN-r1bsa0
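
As a sketch, assuming the sample http-logs fields method, status, and uri, a project query could look like:

```
['http-logs']
| project method, status, uri
```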

  1. where: Filters a dataset down to the subset of rows that satisfy a predicate.

The following query filters the data by method and status.

Link to run query: https://play.axiom.co/axiom-play-qf1k/explorer?qid=dpPlhI0lFvR-r1btvl
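
One possible shape of that filter, assuming the sample data stores status as a string:

```
['http-logs']
| where method == "GET" and status == "200"
```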


  1. sort: Sorts the rows of the input dataset into order by one or more fields.

The following query sorts the data in descending order by ['id'].

Link to run query: https://play.axiom.co/axiom-play-qf1k/explorer?qid=fkR8JnuJjDD-r1buay
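
In APL, that sort reads:

```
['http-logs']
| sort by ['id'] desc
```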


  1. top: Returns the first N records sorted by the specified fields.

Link to run query: https://play.axiom.co/axiom-play-qf1k/explorer?qid=MZcxN2Yo3Vv-r1ce8s
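
A sketch of a top query; the field to rank by, req_duration_ms, is an assumption here:

```
['http-logs']
| top 5 by req_duration_ms
```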


  1. summarize operator: Produces a table that aggregates the content of the dataset.

The summarize operator groups together rows that have the same values in the by clause. In the query below, the aggregation functions (count and topk) combine each group into four (4) rows. In this case, there is a row for each group of content_type and geo.country values, and a column for each aggregation in the operator.

Link to run query: https://play.axiom.co/axiom-play-qf1k/explorer?qid=wGYlIb0NPNc-r1c545
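
One possible shape of the query described above; the exact field names and the topk limit of 4 are assumptions based on the description:

```
['http-logs']
| summarize count(), topk(content_type, 4) by ['geo.country']
```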


  1. You can run queries using our String Functions. Here, I want to encode a string field as a base64 string.

Link to run query: https://play.axiom.co/axiom-play-qf1k/explorer?qid=KoTz9Rgsa5F-r1c8ww
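
A sketch of such a query, assuming we encode the method field with APL's base64_encode_tostring string function:

```
['http-logs']
| project encoded = base64_encode_tostring(method)
```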


  1. split: Here, I want to split a given string according to a given delimiter and return a string array with the contained substrings. In this case, "GET" is the source string and the method field is the delimiter.

Link to run query: https://play.axiom.co/axiom-play-qf1k/explorer?qid=sHTDQ9gKjhp-r1c9qi
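
Taking the description literally, a sketch of the query would be (treating the literal "GET" as the source and the method field's value as the delimiter):

```
['http-logs']
| project split("GET", method)
```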


  1. strlen: Use the strlen string function to return the length, in characters, of the input string.


Link to run query: https://play.axiom.co/axiom-play-qf1k/explorer?qid=kBX7TFTPmyo-r1ca6f
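
A sketch, assuming we measure the length of the method field (the field choice is an assumption):

```
['http-logs']
| extend method_length = strlen(method)
```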

  1. toupper: Use the toupper string function to convert a string to uppercase.


Link to run query: https://play.axiom.co/axiom-play-qf1k/explorer?qid=z4HXCeoyfR3-r1cal3
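
A sketch, again using the method field as an example:

```
['http-logs']
| extend upper_method = toupper(method)
```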

  1. You can also combine different operators and functions.


Link to run query: https://play.axiom.co/axiom-play-qf1k/explorer?qid=rCLOpKWsr3r-r1cbbx
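
As a hypothetical combination (the field names and values are assumptions), such a query might read:

```
['http-logs']
| where status == "200"
| summarize request_count = count() by method
| sort by request_count desc
| take 5
```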

Help me!

Don’t worry my little developer lamb, I’ve got you. ❤️

We've got our Sandbox for you to play with different datasets, like the sample-http-logs dataset, the hackernews dataset, and the GitHub datasets (fork, issues, pull-request, and push-event).

References for the road

Until next time!

Join us in changing how developers think about data