Sensu’s observability pipeline includes resources for collecting, filtering, transforming, and processing observability data: checks, event filters, mutators, and handlers. These resources and Sensu’s observability pipeline concept are mature and widely used at thousands of companies. However, configuration can be unintuitive, especially for new users. For example, event filters, mutators, and handlers are distinct resource types, yet users must specify event filters and mutators inside handler definitions.
As of Sensu Go 6.5, the new first-class pipeline resource makes configuring event processing much more straightforward and flexible. Pipelines are composed of workflows that can include filters, mutators, and handlers. Instead of listing event filters and mutators in handler definitions, you now reference all three in a single workflow inside of a pipeline. With pipelines, you can configure your event filters, mutators, and handlers as distinct resources, then use and reuse them as the building blocks for pipeline resources. In other words, the pipeline resource converts the Sensu observability pipeline from abstraction to practice.
Pipeline resources offer another important benefit for Sensu users: two new high-performance handler types, the SumoLogicMetricsHandler and the TCPStreamHandler.
Traditional Sensu handlers have some scaling limitations. Because they are implemented as separate programs, they start a new UNIX process and require a new connection to send every event they receive. As traditional handlers receive more events, the rate at which they can transmit event data decreases because of the overhead of repeatedly starting a new process and opening a new connection. The new handler types are built directly into the Sensu backend service, which eliminates this overhead. They distribute the event processing workload across a pool of persistent connections, enabling high-throughput streaming data transmission.
However, due to their design, the new streaming handlers can only be used within pipeline workflows. They can’t be assigned as a handler in a check definition, and they don’t include fields for event filters and mutators.
Let’s dig into Sensu’s new pipeline resource and take a look at how to configure a pipeline using one of the new handler types, the Sumo Logic Metrics Handler.
NOTE: To use pipelines, upgrade to Sensu Go 6.5. Sensu agents that are not upgraded to 6.5 will not run pipelines. Read the full upgrade guide in our documentation and download the latest version.
Use a pipeline to capture event history in Sumo Logic
The Sumo Logic Metrics Handler, new in Sensu Go 6.5, is a high-performance streaming handler designed to be used in pipeline workflows. It maintains a persistent connection pool to send Sensu events directly to a Sumo Logic HTTP Logs and Metrics Source, which is a Sumo Logic endpoint for receiving log and metric data.
There are a few preliminary steps before you get started. You’ll need a Sumo Logic account, an HTTP Logs and Metrics Source to receive your observability data, and a Sumo Logic dashboard to display your observability data. Complete these tasks before you begin:
- Sign up for a Sumo Logic account.
- Follow our step-by-step instructions to set up a Sumo Logic HTTP Logs and Metrics Source.
- Configure a Sumo Logic dashboard for your new HTTP Logs and Metrics Source. If you need a starting point, follow the dashboard setup instructions in our Sensu Plus guide.
Once you have completed these steps, you’re ready to start building a Sensu pipeline resource.
Configure the handler
Start by creating a Sumo Logic Metrics Handler that will send Sensu observability data to Sumo Logic. To configure this handler, you’ll need the URL for your HTTP Logs and Metrics Source. Your Sensu handler definition should look similar to this example:
---
type: SumoLogicMetricsHandler
api_version: pipeline/v1
metadata:
  name: sumo_logic_http_metrics
spec:
  url: "https://collectors.sumologic.com/receiver/v1/http/xxxxxxxx"
  max_connections: 10
  timeout: 10s
You probably noticed that this configuration exposes your Sumo Logic HTTP Logs and Metrics Source URL. To avoid exposing your URL, configure it as a secret with Sensu’s built-in env secrets provider. This example shows the same definition with the URL referenced as a secret instead:
---
type: SumoLogicMetricsHandler
api_version: pipeline/v1
metadata:
  name: sumo_logic_http_metrics
spec:
  url: $SUMO_LOGIC_SOURCE_URL
  secrets:
  - name: SUMO_LOGIC_SOURCE_URL
    secret: sumo_logic_metrics_url
  max_connections: 10
  timeout: 10s
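For the secret reference to resolve, the named secret must exist. With Sensu’s built-in env secrets provider, you define a Secret resource that maps the secret name to an environment variable exported in the Sensu backend’s environment. Here’s a sketch, assuming you have exported SUMO_LOGIC_SOURCE_URL on the backend host:

```yaml
---
type: Secret
api_version: secrets/v1
metadata:
  name: sumo_logic_metrics_url
spec:
  # Environment variable on the Sensu backend that holds the URL
  id: SUMO_LOGIC_SOURCE_URL
  provider: env
```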
Next up: event filters.
Use event filters in pipelines
In most cases, your pipeline workflows will use event filters. The filters you apply will depend on the needs of your organization. For example, if you want long-term storage of all the observability data your Sensu checks collect, you may not need a filter at all: just configure a pipeline workflow that includes only your sumo_logic_http_metrics handler.
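A minimal pipeline for that store-everything case might look like this sketch; the pipeline and workflow names are hypothetical, and the workflow references only a handler, with no filters or mutator:

```yaml
---
type: Pipeline
api_version: core/v2
metadata:
  name: store_all_metrics
spec:
  workflows:
  - name: store_everything
    # No filters: every event the check sends reaches the handler
    handler:
      name: sumo_logic_http_metrics
      type: SumoLogicMetricsHandler
      api_version: pipeline/v1
```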
On the other hand, suppose you want to create an incident history, storing only events that are in a warning or critical status. Applying event filters will make sure your pipeline only sends the events you want to capture. If you have existing event filters that you want to use, no problem! You can use your existing event filters in pipelines. You can also reference Sensu’s built-in event filters in pipelines without creating new filter definitions.
In this case, let’s assume that you only want to store metrics in Sumo Logic. You can add the built-in has_metrics filter to ensure that your workflow only processes events that contain metrics.
Workflows can include more than one event filter. If a workflow has more than one filter, Sensu applies the filters in a series, starting with the filter that is listed first. In this example, you’ll also add the built-in not_silenced event filter to prevent your pipeline from handling events that are silenced.
Create a pipeline resource
With your handler definition configured and your choice of the built-in has_metrics and not_silenced event filters decided, you’re ready to create a pipeline resource with a metrics workflow that references both, along with your sumo_logic_http_metrics handler. Here’s the example pipeline definition:
---
type: Pipeline
api_version: core/v2
metadata:
  name: sumo_logic_pipeline
spec:
  workflows:
  - name: metrics_workflow
    filters:
    - name: not_silenced
      type: EventFilter
      api_version: core/v2
    - name: has_metrics
      type: EventFilter
      api_version: core/v2
    handler:
      name: sumo_logic_http_metrics
      type: SumoLogicMetricsHandler
      api_version: pipeline/v1
This pipeline does not include a mutator, but you can add one if you like. Use the same process as for the handler and event filter: create the mutator resource first, and then include it by reference in the pipeline workflow. Our pipeline reference doc shows a pipeline example that includes a mutator in a workflow.
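As an illustration, a JavaScript mutator that labels every event before handling might look something like this sketch. The mutator name and label are hypothetical, and the eval expression must return the mutated event as a JSON string; check the reference docs for the exact format:

```yaml
---
type: Mutator
api_version: core/v2
metadata:
  name: add_pipeline_label
spec:
  type: javascript
  # Copy the event, add a label, and return the result as a JSON string
  eval: >-
    data = JSON.parse(JSON.stringify(event));
    data.check.labels = data.check.labels || {};
    data.check.labels["processed_by"] = "sumo_logic_pipeline";
    JSON.stringify(data)
```

You would then reference it in a workflow with a mutator attribute, alongside the filters and handler attributes, using the same name/type/api_version pattern.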
Add the pipeline to a check
Your pipeline resource is now properly configured, but it’s not processing any events because no checks are sending events to it. To get your Sensu observability data flowing through the new pipeline, add a reference to it in at least one Sensu check.
This example shows a check definition that uses the Sensu System Check dynamic runtime asset to collect baseline system metrics in Prometheus format and sends events to the sumo_logic_pipeline resource.
NOTE: If you haven’t already added the Sensu System Check dynamic runtime asset, use the following command to add it:
sensuctl asset add sensu/system-check:0.1.1 -r system-check
---
type: CheckConfig
api_version: core/v2
metadata:
  name: collect_system_metrics
spec:
  command: system-check
  interval: 10
  pipelines:
  - type: Pipeline
    api_version: core/v2
    name: sumo_logic_pipeline
  publish: true
  runtime_assets:
  - system-check
  subscriptions:
  - system
  output_metric_format: prometheus_text
  output_metric_tags:
  - name: entity
    value: "{{ .name }}"
  - name: namespace
    value: "{{ .namespace }}"
  - name: os
    value: "{{ .system.os }}"
  - name: platform
    value: "{{ .system.platform }}"
Now you have a Sensu check that sends metrics data to your pipeline, which filters and processes the metrics and sends them on to a Sumo Logic HTTP Logs and Metrics Source!
If you followed the dashboard setup instructions in our Sensu Plus guide, after a few moments, you should see your observability data in Sumo Logic dashboards named Sensu Overview and Sensu Entity Details.
Add a workflow to capture incident history
As is, your pipeline includes a single workflow that filters and handles metrics from your collect_system_metrics check. But suppose you want to create a second Sumo Logic HTTP Logs and Metrics Source endpoint to store incident history. Pipelines can have more than one workflow, so you can add an incident history workflow to your existing pipeline.
Start by setting up a Sumo Logic HTTP Logs and Metrics Source endpoint for your incident history. Make a note of the URL for your new HTTP Logs and Metrics Source, which you’ll need for your incident history handler.
Create another Sumo Logic Handler, this one dedicated to incidents. The SumoLogicMetricsHandler resource only handles metrics events, so you’ll need to create a handler that uses the Sensu Sumo Logic Handler dynamic runtime asset instead.
NOTE: Use the following command to add the Sensu Sumo Logic Handler asset:
sensuctl asset add sensu/sensu-sumologic-handler:0.2.0 -r sensu-sumologic-handler
Create the handler definition, similar to this example:
---
type: Handler
api_version: core/v2
metadata:
  name: sumo_logic_http_incidents
spec:
  command: >-
    sensu-sumologic-handler
    --send-log
    --source-host "{{ .Entity.Name }}"
    --source-name "{{ .Check.Name }}"
    --source-category "sensu-events"
    --url "https://collectors.sumologic.com/receiver/v1/http/xxxxxxxx"
  type: pipe
  runtime_assets:
  - sensu-sumologic-handler
Here’s how to configure the handler with your URL saved as a secret with Sensu’s built-in env secrets provider instead:
---
type: Handler
api_version: core/v2
metadata:
  name: sumo_logic_http_incidents
spec:
  command: >-
    sensu-sumologic-handler
    --send-log
    --source-host "{{ .Entity.Name }}"
    --source-name "{{ .Check.Name }}"
    --source-category "sensu-events"
    --url $SUMO_LOGIC_SOURCE_URL
  type: pipe
  runtime_assets:
  - sensu-sumologic-handler
  secrets:
  - name: SUMO_LOGIC_SOURCE_URL
    secret: sumo_logic_incidents_url
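For this reference to resolve, the sumo_logic_incidents_url secret must be defined. With the built-in env secrets provider, that means a Secret resource pointing at an environment variable exported on the backend; the variable name below is an assumption:

```yaml
---
type: Secret
api_version: secrets/v1
metadata:
  name: sumo_logic_incidents_url
spec:
  # Backend environment variable holding the incidents source URL
  id: SUMO_LOGIC_INCIDENTS_URL
  provider: env
```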
To continue this example, assume you want to send only incidents that represent a state change (from OK status to warning, warning to critical, warning to OK, and so on). Let’s create a custom event filter that only allows state change events through:
---
type: EventFilter
api_version: core/v2
metadata:
  name: state_change_only
spec:
  action: allow
  expressions:
  - event.check.occurrences == 1
We’ll also add the built-in not_silenced event filter to the incident workflow. You can update your existing sumo_logic_pipeline resource to add a new workflow named incidents_workflow. The updated pipeline will look like this:
---
type: Pipeline
api_version: core/v2
metadata:
  name: sumo_logic_pipeline
spec:
  workflows:
  - name: metrics_workflow
    filters:
    - name: not_silenced
      type: EventFilter
      api_version: core/v2
    - name: has_metrics
      type: EventFilter
      api_version: core/v2
    handler:
      name: sumo_logic_http_metrics
      type: SumoLogicMetricsHandler
      api_version: pipeline/v1
  - name: incidents_workflow
    filters:
    - name: not_silenced
      type: EventFilter
      api_version: core/v2
    - name: state_change_only
      type: EventFilter
      api_version: core/v2
    handler:
      name: sumo_logic_http_incidents
      type: Handler
      api_version: core/v2
Finally, add the sumo_logic_pipeline pipeline reference to a Sensu status check. This example shows the pipeline added to the check_cpu check from our guide for monitoring server resources.
NOTE: If you haven’t already configured this check, you will need to add the runtime asset with the following command:
sensuctl asset add sensu/check-cpu-usage:0.2.2 -r check-cpu-usage
Read Monitor server resources with checks for details and instructions for configuring the check_cpu check.
---
type: CheckConfig
api_version: core/v2
metadata:
  name: check_cpu
spec:
  command: check-cpu-usage -w 75 -c 90
  interval: 60
  pipelines:
  - type: Pipeline
    api_version: core/v2
    name: sumo_logic_pipeline
  publish: true
  runtime_assets:
  - check-cpu-usage
  subscriptions:
  - system
Now you have a single dedicated pipeline for Sumo Logic, with separate workflows for metrics and incidents! Add workflows for every scenario where you want to capture, view, and analyze observability data in Sumo Logic.
Next steps
Now that you’re better acquainted with Sensu’s new pipeline resource, you might want to add a mutator to your sumo_logic_pipeline workflows. JavaScript mutators, also new in Sensu Go 6.5, are a flexible, efficient way to transform Sensu observability event data. For example, you can use JavaScript mutators to add attributes to events or combine event attributes before handling. Learn more about JavaScript mutators in the reference docs.
Pipelines are also useful for contact routing: alerting the right people via the right service to reduce response and recovery time. Read Route alerts with event filters for a practical step-by-step exercise that shows you how to build a pipeline with multiple workflows that can send alerts to different teams, plus a fallback option for incidents with no team assignment.
And don’t forget to join our community, where you can learn from and share with other Sensu users and stay updated on the latest Sensu product releases.