Staying on top of network performance is a great way to catch errors between your apps and services before they cause problems. However, monitoring network traffic is out of reach for many developers: so much data is produced that ingesting and storing it all drives costs up sharply.
Thankfully, with Axiom, you can monitor as much as you want without worrying about huge bills at the end of the month or unexpected sampling of your data. Even better, Axiom's storage is so cheap that you can keep more of your data for longer, which enables you to compare data across a year or more as easily as over the last hour.
In this post, I will show you how to capture and analyze Redis network performance using Packetbeat's Redis protocol support and Axiom.
- An Axiom dataset & token
- Access to an Axiom deployment
- Packetbeat installed on your machine
- The Redis protocol configured in Packetbeat
Let's get started 🌟
- Visit our docs to copy the sample Packetbeat configuration below, then edit it to set your Redis protocol port (the ports shown are the standard defaults, and the snaplen is Packetbeat's usual maximum packet size):

```yaml
# Disable index lifecycle management (ILM)
setup.ilm.enabled: false

# Network device to capture traffic from
packetbeat.interfaces.device: en0

# Configure the maximum size of the packets to capture
packetbeat.interfaces.snaplen: 65535

# Configure sniffing & traffic-capturing options
packetbeat.interfaces.type: pcap

# Configure the maximum size of the shared memory buffer to use
packetbeat.interfaces.buffer_size_mb: 400
packetbeat.interfaces.auto_promisc_mode: true

packetbeat.flows:
  timeout: 30s
  period: 10s

protocols:
  dns:
    ports: [53]
    include_authorities: true
    include_additionals: true
  redis:
    ports: [6379]

output.elasticsearch:
  hosts: ["$YOUR_AXIOM_URL:443/api/v1/datasets/<dataset>/elastic"]
  # api_key can be your ingest or personal token
  api_key: "user:token"
```
- Generate your ingest token:
- In the Axiom UI, click Settings, then select Ingest Tokens.
- Select Add ingest token.
- Enter a name and description and select Add.
- Copy the generated token to your clipboard. If you navigate away from the page, you can view the token again by selecting Ingest Tokens.
- Create your dataset for your Redis protocol by selecting Settings → Datasets.
Update your configuration file with the new host URL and dataset name so you can ingest Redis events into Axiom.
In the Axiom UI, you can analyze your data by running queries with different aggregations. Aggregations group and extract statistics and insights from the events in your Redis dataset. Below is a list of aggregations you can run on your Redis dataset.
- count(): The count aggregation counts all matching events in your Redis dataset, bucketed by a time duration. A line chart is then rendered from the returned data, showing the number of events in each time bucket.
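As a rough mental model (not Axiom's actual implementation), here is what count() bucketed by a one-minute duration computes; the timestamps below are made up:

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical event timestamps; in practice these come from ingested
# Packetbeat events in your Redis dataset.
events = [
    datetime(2022, 5, 1, 12, 0, 5),
    datetime(2022, 5, 1, 12, 0, 40),
    datetime(2022, 5, 1, 12, 1, 10),
]

def count_by_bucket(timestamps, bucket=timedelta(minutes=1)):
    """Count events per fixed-width time bucket, like count() over a duration."""
    secs = bucket.total_seconds()
    return Counter(int(t.timestamp() // secs) for t in timestamps)

counts = count_by_bucket(events)  # two events share the 12:00 bucket
```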
- sum(): The sum aggregation calculates the total value for a numeric field across the query time range. Select the field you want to run the sum aggregation on.
A chart is rendered that shows the sum in each bucket of time, and the overall total is available in the table beneath the chart.
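Conceptually, sum() behaves like the sketch below: a per-bucket total plus an overall total. The time buckets and the bytes_out field name are made up for illustration:

```python
from collections import defaultdict

# Hypothetical (time_bucket, bytes_out) pairs from Redis events.
rows = [("12:00", 512), ("12:00", 256), ("12:01", 1024)]

sums = defaultdict(int)
for bucket, bytes_out in rows:
    sums[bucket] += bytes_out        # sum() within each time bucket

overall_total = sum(sums.values())   # the total shown beneath the chart
```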
- distinct(): The distinct aggregation calculates the number of unique values for a field in each time duration in your Redis dataset. Specify distinct($fieldName) to chart the values of the Redis field you selected. The table beneath the chart shows the total number of distinct values for the entire time period selected.
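A small sketch of the idea behind distinct(): count unique values per bucket, and note that the overall distinct count is not simply the sum of the per-bucket counts. The client IPs here are invented:

```python
from collections import defaultdict

# Hypothetical (time_bucket, client_ip) pairs, e.g. distinct(client.ip).
rows = [("12:00", "10.0.0.1"), ("12:00", "10.0.0.1"),
        ("12:00", "10.0.0.2"), ("12:01", "10.0.0.1")]

seen = defaultdict(set)
for bucket, ip in rows:
    seen[bucket].add(ip)

distinct_per_bucket = {b: len(ips) for b, ips in seen.items()}
# Total distinct values over the whole period, as shown in the table.
overall_distinct = len(set.union(*seen.values()))
```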
- average(): You can calculate the mean value for a numeric field in each time duration of your Redis dataset. The table beneath the chart shows the overall average across the entire time period of the query.
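The average aggregation reduces to a mean per bucket plus an overall mean, roughly like this (response times and buckets are made up):

```python
from statistics import mean

# Hypothetical response times (ms), already grouped by time bucket.
by_bucket = {"12:00": [2.0, 4.0], "12:01": [6.0]}

avg_per_bucket = {b: mean(vals) for b, vals in by_bucket.items()}
overall_avg = mean(v for vals in by_bucket.values() for v in vals)
```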
- min(): The min aggregation returns the minimum value for a numeric field in your Redis dataset. When you have selected your field, it renders a chart of the minimum value for each time duration, with the values listed in the table below the chart.
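In sketch form, min() is simply the smallest value per bucket (the latency numbers below are illustrative):

```python
# Hypothetical latency values (ms) per time bucket.
by_bucket = {"12:00": [35, 12, 48], "12:01": [9, 20]}

min_per_bucket = {b: min(vals) for b, vals in by_bucket.items()}
overall_min = min(min_per_bucket.values())  # minimum over the whole range
```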
- topk(): Using the topk aggregation, you can obtain the "top 4" or "top 10" (where 4 and 10 are the k in topk) values for one or more fields in your Redis dataset.
The topk aggregation takes two arguments: the field to aggregate and how many results to return. For example, you can get the top agent versions or the top event kinds. You can also combine multiple fields in a single topk aggregation.
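A quick sketch of what topk(field, k) means, using Python's Counter; the agent version strings are invented:

```python
from collections import Counter

# Hypothetical agent.version values from Redis events.
versions = ["7.17.0", "7.17.0", "8.1.0", "7.16.2", "8.1.0", "7.17.0"]

def topk(values, k):
    """Return the k most frequent values with their counts, like topk(field, k)."""
    return Counter(values).most_common(k)

top2 = topk(versions, 2)
```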
- histogram(): You can get the distribution of values in a Redis event field across the time range of your query using the histogram aggregation. It produces a plot that makes it easy to see variation and how your data is distributed.
The Histogram aggregation takes two arguments:
- The field you want to aggregate
- The number of buckets to split the values into (y-axis).
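Those two arguments map directly onto a simple equal-width binning routine. This is only a conceptual sketch of a histogram, not Axiom's implementation, and the latency values are made up:

```python
def histogram(values, nbuckets):
    """Split numeric values into nbuckets equal-width bins and count each."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / nbuckets or 1   # avoid zero width for constant data
    counts = [0] * nbuckets
    for v in values:
        i = min(int((v - lo) / width), nbuckets - 1)  # clamp max into last bin
        counts[i] += 1
    return counts

latencies = [1, 2, 2, 3, 8, 9, 10]   # hypothetical latencies (ms)
bins = histogram(latencies, 3)
```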
- percentile(): With the percentile aggregation, you can calculate the value at or below which a given percentage of the results for a field in your Redis data fall. You can specify multiple percentiles to easily visualize the distribution of your values. When you add a percentile aggregation, fill in the 95, 99, and 99.9 percentile values; you can adjust these whenever you want.
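To make the definition concrete, here is a nearest-rank percentile sketch (real query engines often use approximate estimators, so exact numbers may differ); the latency samples are synthetic:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: the value at or below which pct% of results fall."""
    ordered = sorted(values)
    rank = max(math.ceil(pct / 100 * len(ordered)), 1)
    return ordered[rank - 1]

samples = list(range(1, 101))   # hypothetical latencies 1..100 ms
p95, p99, p999 = (percentile(samples, p) for p in (95, 99, 99.9))
```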
- Pairing aggregations with Group By: Many of the aggregations above can be paired with a Group By clause in your query. This lets you segment your data and get a comprehensive view of how each slice of your Redis network activity is behaving.
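As a sketch, pairing count() with a Group By on a hypothetical redis.method field amounts to one aggregate per group:

```python
from collections import Counter

# Hypothetical Redis command names from captured events;
# count() grouped by redis.method segments traffic per command.
methods = ["GET", "SET", "GET", "DEL", "GET"]
per_method = Counter(methods)   # {"GET": 3, "SET": 1, "DEL": 1}
```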
- Pairing aggregations with Against: You can measure the number of queries over a period of time (30 minutes, 1 hour, 3 hours, 7 days, and so on). This lets you see what events or groups of activity happened in the past.
You can also compare events from different periods to check whether your current network activity looks like its performance 30 minutes or 2 days ago.
By using the Against query option, your query is plotted against data from an earlier point in time, so you can easily spot problems and anomalies.
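The intuition behind Against can be sketched as comparing the current window against an aligned earlier window; the per-minute counts below are invented, with a deliberate spike in the third bucket:

```python
# Hypothetical per-minute event counts: the current window and the same
# window one day earlier, aligned bucket by bucket.
current = [120, 130, 500, 125]
day_ago = [118, 126, 131, 122]

# Ratio per bucket; a spike stands out against the earlier baseline.
ratios = [round(c / p, 2) for c, p in zip(current, day_ago)]
anomalies = [i for i, r in enumerate(ratios) if r > 1.5]
```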
Can I do this myself? 😇
Heck yeah you can!