
How Alerts work

Last updated: Fri May 01 17:47:29 GMT 2020

The ThousandEyes platform lets customers configure highly customizable Alert Rules and assign them to tests in order to highlight, or be notified of, events of interest. For customers who want simplicity in alert configuration and management, the platform ships with default Alert Rules configured and enabled for each test.

Notifications

Alert notifications are delivered via email, via webhooks, or via a third-party integration such as PagerDuty, Slack, or ServiceNow. Recipients are configured in the Alert Rule's Notifications tab. Alerts remain active in the ThousandEyes platform as long as your Alert Rule conditions are met, but notification of the alert becoming active occurs only at the start of the active period. Alerts can optionally be configured to send a notification once the alert is no longer active.

For email notifications, when multiple alerts are raised simultaneously, their data will be grouped into a single email notification. 

Webhooks integration permits users to send JSON-formatted alert data to a webhooks-enabled server via HTTP. The information can then be programmatically processed and subsequent actions taken automatically. For more information on configuring ThousandEyes Alert Rules with webhooks, refer to the ThousandEyes Knowledge Base article Using Webhooks.
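As a sketch of the receiving side, a webhooks-enabled server only needs to accept an HTTP POST and parse the JSON body. The field names used below (ruleName, testName, active) are illustrative assumptions; consult the Using Webhooks article for the actual payload schema.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def summarize_alert(payload: dict) -> str:
    """Reduce an alert notification to a one-line summary.
    The keys used here are hypothetical -- check the real webhook schema."""
    state = "ACTIVE" if payload.get("active") else "CLEARED"
    return f"[{state}] rule={payload.get('ruleName')} test={payload.get('testName')}"

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body that the alert notification delivers.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print(summarize_alert(payload))  # hand off to your own processing here
        self.send_response(200)          # acknowledge receipt
        self.end_headers()

# To run the receiver:
# HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```

Any automation (ticket creation, paging, remediation scripts) would hang off the point where the payload is parsed.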

PagerDuty integration allows you to use an Escalation Policy (which defines rules for notification destinations, repeat notifications and other actions) in your PagerDuty service to receive notifications from ThousandEyes. For more information on configuring ThousandEyes Alert Rules with PagerDuty, refer to the ThousandEyes Knowledge Base article PagerDuty Integration.

The Slack integration allows alert data to be sent to a chat/instant-messaging application. Users can send notifications to the Slack channel of their choice. For more information on configuring ThousandEyes Alert Rules with Slack, refer to the ThousandEyes Knowledge Base article Slack Integration.

ServiceNow integration delivers notifications directly into a ServiceNow account so they may be processed and acted upon based on workflows defined within that system. For more information on configuring Alert Rules to send notifications directly into the ServiceNow platform, refer to the ThousandEyes Knowledge Base article ServiceNow Integration.

Viewing alerts

Current and past alerts can be viewed on the Alerts page.  The Alerts page has two tabs:

  • Active Alerts: List of alerts currently active for any test within your Account Group.
  • Alerts History: List of alerts no longer active from tests in your Account Group, shown chronologically on a timeline, and with a table whose entries contain details on each alert.

Active Alerts

Active_Alerts.png

The Active Alerts tab shows all alerts currently active in your Account Group. The Active Alerts tab will auto-refresh every two minutes.

  1. Search: Search for alerts based on the following criteria: Alert ID, Alert Rule Name, Alert Type, Test ID, Test Name, Test Type or Status. Entering text followed by the return/enter key will execute a search and display results in the table below. To filter events by more than one criterion, click either the All or Any link to specify whether the table rows must match all (AND) or any (OR) of the selected criteria.
  2. Alert Status:
    • A red box indicates that the Alert Rule is currently active for that Test.
    • A green box indicates that the Alert was recently cleared for the Test. A cleared Alert is shown under the Alerts History tab.
    • A grey box indicates that the Alert Rule was disabled for that Test.
  3. Alert Rule Name: Name of the Alert Rule currently active. Expand an Alert Rule for more detailed information by Agent, BGP monitor, Start/End Time, Metrics at Alert Start, Metrics at Alert End and Duration for which the Alert was active.
  4. Test Name: Name of the Test for which the Alert Rule is currently active.
  5. Alert ID: When gathering details for an Alert via ThousandEyes API, use the Alert ID to reference a particular Alert.
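For example, a request for one alert's details can be keyed by this Alert ID. The endpoint path and bearer-token auth scheme below are assumptions based on the v6 API; verify both against the current ThousandEyes API documentation for your account.

```python
from urllib.request import Request

# Assumed v6 API base URL -- confirm against the ThousandEyes API docs.
API_BASE = "https://api.thousandeyes.com/v6"

def alert_detail_request(alert_id: int, token: str) -> Request:
    """Build (but do not send) a GET request for a single alert's details,
    referenced by the Alert ID shown in the Active Alerts table."""
    url = f"{API_BASE}/alerts/{alert_id}.json"
    return Request(url, headers={"Authorization": f"Bearer {token}"})

req = alert_detail_request(1234567, "YOUR_TOKEN")
print(req.full_url)  # https://api.thousandeyes.com/v6/alerts/1234567.json
```

Sending the request (e.g. with `urllib.request.urlopen`) returns the alert's JSON record.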

Alert History

Alert_History.png

The Alerts History tab tabulates triggered Alerts which are currently in the "cleared" or "inactive" state, or which are "disabled". To interact with the Alerts History page:

  1. Date and Time slider: Input the date endpoints to view Alerts active during that timespan. Click on either the start or end bar and drag it to the desired date. Your selection will update the From and To date and time fields automatically.
  2. Date and Time selector: The From and To fields allow manual input of the date and time endpoints to display Alerts active at that time.  Clicking in the date field will both allow manual entry of dates and display a clickable calendar to select a date. Click on the calendar arrows to navigate in the current view (default is the month view). To change to a view of months in a year or a range of years, click the current title (month, year or year range) at the top-middle of the calendar.  The view will cycle to the next timeframe: month -> months -> years.
  3. Search for alerts: By entering text into the search box, you will search for alerts matching the following criteria: Alert ID, Alert Rule Name, Alert Type, Test ID, Test Name, Test Type or Status. Entering text followed by the return/enter key will execute a search and refine the table below. To filter events by more than one criterion, click either the All or Any link to specify whether the table rows must match all (AND) or any (OR) of the selected criteria.
  4. Alert Rule Name: Expand an Alert Rule for more detailed information by Agent, BGP monitor, Start/End Time, Metrics at Alert Start, Metrics at Alert End and Duration for which the Alert was active.
  5. Test Name: Name of the Test for which the Alert Rule was triggered.
  6. Duration: Length of time for which the Alert Rule was active for that test.
  7. Alert ID: When gathering details for an Alert via ThousandEyes API, use the Alert ID to reference a particular Alert.

Assignment to tests

Once you have created an Alert Rule, it can be assigned to any test which has the Enable box checked on the test configuration page. By default, each test has the rule "Default <test type> Rule" assigned to it, with your account's email address configured as the recipient for email notification. To add or remove rules, click the pull-down menu below the Enable box and select or deselect rules. To create a new rule, click the Edit Alert Rules link to access the Add New Alert Rules page and create your rule. You will then return to the test configuration page, where you can use the pull-down menu to assign your new rule to the test.

Rule configuration

Each rule has a name, a series of tests against which it is enabled, a scope of locations to which the alert rule applies, Boolean criteria defining the alert conditions, and the number of locations from which the alert conditions must be met in order to trigger an alert. The rule also can include a notification mechanism, such as a list of email recipients (recipients need not be users of ThousandEyes in order to receive email notifications), a PagerDuty Service or one or more Webhooks.

The image below displays the configuration options of a new alert rule: 

User-added image

  1. Alert Type Layer: test layers available to your organization.
  2. Alert Type: available alert types for the selected test layer.
  3. Rule Name: An alphanumeric string naming this Alert Rule.
  4. Compatible Test Types: Test types to which this alert rule can be assigned.

Settings tab

  1. Tests: Select tests to which this alert rule is assigned. You may choose to configure no tests with this alert rule, and assign it to tests at a later time.
  2. Monitors, Countries, Agents: This selector will display either "Monitors" for a Routing Layer alert rule, "Countries" for a DNS+ Layer alert rule or "Agents" for all other alert rules. The selector has one of three values:
    • All: This alert rule applies to all agents or monitors for a test to which this alert rule is assigned.
    • All except: This alert rule applies to all agents or monitors for a test to which this alert rule is assigned, except for the Agents specified in the selector that will appear when "All except" value is chosen.
    • Specific: This alert rule applies only to specific agents or monitors for a test to which this alert rule is assigned. The Agents or Monitors are specified in the selector that will appear when "Specific" value is chosen.
The image below displays the rest of the configuration options of a new alert rule:
User-added image
  1. Specify the number of agents, all/any of the following alerting conditions, and the number of test rounds the conditions must be met before alerting.
  2. Sticky Agents: Select “any of” if you want an alert sent when any set of agents meets the alert condition(s) in consecutive rounds.
    For example, suppose an alert rule is configured to fire if the same agent trips a specified threshold in three consecutive rounds. The Atlanta Cloud Agent trips the rule in round one, the Ashburn Cloud Agent trips it in round two, and the San Francisco Cloud Agent trips it in round three.
    In this scenario, the alert rule would not trigger when using Sticky Agents: one of Atlanta, Ashburn, or San Francisco would need to trip the rule in three consecutive rounds to trigger the alert.
    Note: Sticky Agents are currently only available for Cloud and Enterprise Agent alerts, with the exception of DNS+.
  3. Threshold: Specify the threshold value for locations (agents, monitors, or countries, depending on rule type) that must meet the alert conditions in order to trigger this alert rule. This value will be either a number of agents/monitors/countries, or a percentage of agents/monitors/countries, as specified in the next setting.

    NOTE: When a percentage of agents, monitors, or countries is used, and the percentage results in a non-whole number threshold value of actual agents, monitors, or countries, the fractional part of the value is significant. For example, when an alert rule with a threshold of 25% of all agents is applied to 13 agents, the threshold is 3.25 agents. This threshold will require 4 agents to meet the alert criteria in order to trigger the alert rule.
  4. Threshold units: Select either agent, monitor, or country, or percentage of agents, monitors, or countries.
  5. Rounds (met): Select the number of test rounds that the following alert condition(s) must be met out of a total number of rounds in order to trigger the alert rule. See the Rounds (total) entry below.
  6. Rounds (total): Select the total number of test rounds in which the Rounds (met) selection is evaluated. For example, if Rounds (met) = 2 and Rounds (total) = 3 then for every three rounds, the alert rule will trigger if the condition(s) were met twice.
  7. Metric: Select a test metric for this condition.
  8. Operators: The following operators are available:
    • >, <, ≥, ≤ : Numerical comparisons for greater than, less than, greater than or equal to, less than or equal to. Available for all numerical (decimal and integer) metrics, such as packet loss percentage (decimal) of Network Layer tests, or Error Count (integer) of a Page load test.
    • is, is not: Numeric comparison for values which are not continuous ranges (e.g. HTTP status codes) or to a fixed string value, such as the Error Type (e.g. "DNS", "Connect", "SSL").
    • is in, is not in: Numeric or string comparison to a list of values. For example, a BGP Routing rule compares a test metric's AS number (integer) to a list of one or more AS numbers to determine if the test metric is found or not found in the list.
    • is empty, is not empty: Determines whether a metric has a value or has no value.
    • is incomplete: Determines whether a test completed the operations for a given metric. For example, a Path Trace alert rule is used to determine whether the path trace reached its destination, or a Page Load test fully loaded a page.
    • is present: Triggered when an error condition is present.
    • matches, does not match: Determines whether the POSIX regular expression in the Alert Rule is found within the string produced by the test metric (i.e. a substring will produce a match). For example, an Alert Rule for the Error metric of an HTTP Server test with the alert condition:

      User-added image

      will alert when the test's Error Details text is "SSL certificate problem: certificate has expired":

      User-added image

      because the regular expression "certificate\s*\w*:" matches the sub-string "certificate problem:".

    The operators available per type of Alert Rule are also shown in the table below.

  9. Threshold: The value that the Metric setting will be compared against, using the chosen operator.  Note that some operators do not have a Value field.
  10. Add/Delete: Click the + or - icon to add or delete alert criteria to this Alert Rule.  Criteria can be nested for some types of Alert Rule.
  11. Compatible Test Types: Test types to which this Alert Rule can be assigned.
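The percentage-threshold rounding described in the note above amounts to a ceiling operation. A minimal sketch:

```python
import math

def agents_required(percent_threshold: float, total_agents: int) -> int:
    """Agents that must meet the alert conditions when the threshold is a
    percentage: any fractional part of the computed value rounds up."""
    return math.ceil(percent_threshold / 100 * total_agents)

# A 25% threshold applied to 13 agents computes to 3.25,
# so 4 agents must meet the alert criteria:
print(agents_required(25, 13))  # 4
```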
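The substring semantics of the matches operator described above can be reproduced with a regular-expression search; Python's `re.search` approximates the POSIX ERE behavior for this pattern.

```python
import re

# "matches" looks for the expression anywhere within the metric's string,
# so a substring hit is enough to satisfy the condition.
pattern = r"certificate\s*\w*:"
error_text = "SSL certificate problem: certificate has expired"

match = re.search(pattern, error_text)
print(bool(match))     # True
print(match.group(0))  # certificate problem:
```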

DNS Server Alert Rules

DNS Server Tests differ from other ThousandEyes tests in that multiple servers can be explicitly targeted in a single test.  As a result, DNS Server Alert Rules are evaluated on a per-server basis; each server in the DNS Servers field of the test configuration will have the Alert Conditions evaluated separately from all other servers in the DNS Servers field. For example, consider an Alert Rule that has the following Alert Conditions:

dns-server-alert

When assigned to a DNS Server test with two servers configured as the targets, each server will be evaluated separately against the above Alert Condition. To trigger the Alert Rule, at least four Agents must receive an Error against the same DNS server. The Alert Rule would not be triggered if, for example, three Agents received an Error when testing the first DNS server and a fourth Agent received an Error when testing the second DNS server.
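The per-server evaluation can be sketched as grouping error counts by server and checking each group against the threshold independently (the server names here are illustrative):

```python
from collections import Counter

THRESHOLD = 4  # at least four Agents must see an Error on the *same* server

# (server, result) pairs from one round; hypothetical server names.
observations = [
    ("ns1.example.com", "Error"),
    ("ns1.example.com", "Error"),
    ("ns1.example.com", "Error"),
    ("ns2.example.com", "Error"),  # a fourth Error, but on a different server
]

errors_per_server = Counter(s for s, result in observations if result == "Error")
triggered = [s for s, n in errors_per_server.items() if n >= THRESHOLD]
print(triggered)  # [] -- 3 + 1 errors split across servers never reach 4 on one
```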

BGP Alert Rules

A BGP Alert Rule can be applied to a Routing Layer BGP test, or to a different Layer type that provides the BGP Route Visualization View. It is important to note that some Alert Rule conditions can be applied differently depending on which type of test the rule is assigned to.  For example, a BGP test has only a single target prefix which will be evaluated against the Alert Conditions.  If the "Covered Prefixes" box is checked, any covered prefixes found are not evaluated against the Alert Conditions except the explicit "Covered Prefix" condition.

In contrast, a non-BGP test type can have one or more targets. DNS Server tests can explicitly test multiple DNS servers. An Agent to Server target's domain name can resolve to multiple server IP addresses. When creating the BGP Path Visualization, the Prefix selector will show these multiple target prefixes, and each prefix is evaluated against any BGP Alert Rules assigned to the test. Thus, prefixes which would be considered covered prefixes under a BGP test, and not evaluated by the Alert Rule (unless via a "Covered Prefix" condition), are evaluated when the rule is assigned to a non-BGP test. Similarly, the "Covered Prefix" condition has no relevance when assigned to a non-BGP test.

The BGP Alert Rules have a parameter named "Prefix Length", which is used to determine the length of prefixes evaluated by the rule. The "Prefix Length" can be individually configured for IPv4 and IPv6 protocols.

Notifications tab

In addition to presenting the Alert in the app.thousandeyes.com UI, the ThousandEyes platform can deliver notifications of alerts through a number of services. The image below displays the Notifications configuration options of a new Alert Rule.

notifications-tab

  1. Send emails to: A list of addresses to which an alert email will be sent when the Alert Rule is first triggered.  Addressees need not be users of the ThousandEyes platform.
  2. Edit emails: Click this link to add email addresses to the Notifications address book.
  3. Send an email: Check this box to send an email when the Alert Rule is no longer active.
  4. Add/Remove Message: Enter text to be added to the body of the Alert Rule's email notification. To prevent code injection, custom messages cannot contain words or phrases wrapped in angle brackets "<like this>".
  5. Webhooks: Webhooks-enabled web services that receive the alert notification.
  6. Edit webhooks: create or edit webhooks which can then be added to the Webhooks Send Notifications to field.
  7. Integrations: integrations that should receive the Alert Notification. 
  8. Edit Integrations: create or edit an integration which can then be added to the Integrations Send Notifications to field. Currently ThousandEyes offers integrations for PagerDuty, Slack, and ServiceNow.

Note: Alerts will be active as long as your Alert Rule criteria are met, but any configured email notification will only occur at the beginning of the alert.

Available Operators, Metrics and units

The following table lists the test types available in the ThousandEyes platform, along with the metrics, operators, and units available for alerting on each.

Test Layer | Alert Type | Metric | Operators | Units
Network | End-to-End (Server), End-to-End (Agent) | Packet loss | ≤, ≥ | %
Network | End-to-End (Server), End-to-End (Agent) | Latency¹ | ≤, ≥ | ms
Network | End-to-End (Server), End-to-End (Agent) | Jitter | ≤, ≥ | ms
Network | End-to-End (Server), End-to-End (Agent) | Error | is present, matches, does not match | n/a
Network | End-to-End (Agent) | Throughput | ≤, ≥ | Kbps
Network | End-to-End (Server) | Available Bandwidth | ≤, ≥ | Mbps
Network | End-to-End (Server) | Capacity | ≤, ≥ | Mbps
Network | Path Trace | Delay | ≤, ≥ | ms
Network | Path Trace | IP Address² | in, not in | IP address or prefix
Network | Path Trace | ASN² | in, not in | list of ASNs
Network | Path Trace | rDNS² | in, not in | exact hostname or wildcard-based match to domain
Network | Path Trace | MPLS Label² | is empty, is not empty | n/a
Network | Path Trace | DSCP² | is, is not | DSCP value selected from list
Network | Path Trace | Server IP | in, not in | IP address or prefix
Network | Path Trace | Server MSS | <, > | bytes
Network | Path Trace | Path MTU | <, > | bytes
Network | Path Trace | Path Length | <, > | hops
Network | Path Trace | Trace | is incomplete | n/a
DNS | Server, Trace, DNSSEC | Error | is present, matches, does not match | n/a
DNS | Server | Resolution time | ≤, ≥ | ms
DNS | Server, Trace | Mapping | is not in | quoted <comma-separated list of mappings>
DNS+ | Server Latency, Domain | Resolution Time | ≤, ≥ | ms
DNS+ | Domain | Availability | ≤, ≥ | %
DNS+ | Domain | Mapping | is not in | quoted <comma-separated list of mappings>
Web | HTTP Server | Response code | is | any error (≥ http/400 or no response), ok (http/200), redirect (http/300)
Web | HTTP Server | Response Header | matches, does not match | POSIX Extended Regular Expression syntax
Web | HTTP Server | DNS time | ≤, ≥ | ms
Web | HTTP Server | Connect time | ≤, ≥ | ms
Web | HTTP Server | SSL negotiation time | ≤, ≥ | ms
Web | HTTP Server | Wait time | ≤, ≥ | ms
Web | HTTP Server | Receive time | ≤, ≥ | ms
Web | HTTP Server | Response time¹ | ≤, ≥ | ms
Web | HTTP Server | Total Fetch Time | ≤, ≥ | ms
Web | HTTP Server | Throughput | ≤, ≥ | kBps
Web | HTTP Server | Error | is present, matches, does not match | n/a
Web | HTTP Server | Error type | is, is not | DNS, Connect, SSL, Send, Receive, Content, HTTP, Any
Web | HTTP Server | Client SSL Alert Code | is, is not | SSL error type, e.g. Unexpected Message (10), Bad Certificate (42)
Web | HTTP Server | Server SSL Alert Code | is, is not | SSL error type, e.g. Unexpected Message (10), Bad Certificate (42)
Web | Page Load | Page load | is incomplete | n/a
Web | Page Load | Response time | ≤, ≥ | ms
Web | Page Load | DOM load time | ≤, ≥ | ms
Web | Page Load | Page load time¹ | ≤, ≥ | ms
Web | Page Load | Error Count | ≤, ≥ | #
Web | Page Load | Domain Name³ | is in, is not in | quoted <comma-separated list of mappings>
Web | Page Load | Total Fetch Time³ | ≤, ≥ | ms
Web | Page Load | Blocked Time³ | ≤, ≥ | ms
Web | Page Load | DNS Time³ | ≤, ≥ | ms
Web | Page Load | Connect Time³ | ≤, ≥ | ms
Web | Page Load | Send Time³ | ≤, ≥ | ms
Web | Page Load | Wait Time³ | ≤, ≥ | ms
Web | Page Load | Receive Time³ | ≤, ≥ | ms
Web | Page Load | SSL Negotiation Time³ | ≤, ≥ | ms
Web | Page Load | Component Load³ | is incomplete | n/a
Web | Transaction (Classic) | Error | is present | n/a
Web | Transaction (Classic) | Transaction Time | ≤, ≥ | ms
Web | Transaction (Classic) | Completion | ≤, ≥ | %
Web | Transaction (Classic) | Steps Completed | ≤, ≥, is | #
Web | Transaction (Classic) | Any Step meets | any, all | of the following conditions: Step Duration
Web | Transaction (Classic) | Step # meets | any, all | of the following conditions: Step Duration
Web | Transaction (Classic) | Any Page meets | any, all | of the following conditions: Page Duration
Web | Transaction (Classic) | Page # meets | any, all | of the following conditions: Page Duration
Web | Transaction | Marker | is/is not present, duration ≤, ≥ | ms
Web | Transaction | Assert Error | does not match, is present, matches | value
Routing | BGP | Reachability | <, > | %
Routing | BGP | Path Changes | <, > | n/a
Routing | BGP | Origin ASN | is in, is not in | comma-separated list of ASNs
Routing | BGP | Next Hop ASN | is in, is not in | comma-separated list of ASNs
Routing | BGP | Prefix | is in, is not in | comma-separated list of covered prefixes
Routing | BGP | Covered Prefix⁴ | exists, is in, is not in | comma-separated list of sub-prefixes
Voice | RTP Stream | Error | is present, matches, does not match | n/a
Voice | RTP Stream | MOS | ≤, ≥ | #
Voice | RTP Stream | Packet loss | ≤, ≥ | %
Voice | RTP Stream | Discards | ≤, ≥ | %
Voice | RTP Stream | DSCP | is, is not | DSCP values, e.g. Best Effort (0), Expedited Forwarding (46)
Voice | RTP Stream | Latency | ≤, ≥ | ms
Voice | RTP Stream | Packet Delay Variation | ≤, ≥ | ms
  1. For some metrics, dynamic baselines can be configured. See Dynamic Baselines for more information.
  2. These metrics are configurable under the "Any Hop", "Last Hop", or "Hop #" entries in Path Trace alert rules. Select "Any" or "All" for multiple sub-conditions.
  3. These metrics are accessed under the "Any Component" alert condition in Page Load tests. Select "Any" or "All" for multiple sub-conditions.
  4. Only BGP Routing tests provide Covered Prefix data. Do not assign a BGP Alert Rule with a Covered Prefix metric to a non-BGP test type that has BGP Path Visualization measurements enabled. For non-BGP test types, use an Alert Rule that does not include the Covered Prefix metric, and if needed create a separate BGP test and a separate Alert Rule with the Covered Prefix metric.

Each metric from the table above is defined in the ThousandEyes Knowledge Base article ThousandMetrics: what do your results mean?


Default alerting rules

Default Alert Rules are defined according to the following list. Within an Account Group, Default Alert Rules can be changed by any user having a role with the View alert rules and Edit alert rules permissions, such as the built-in Account Admin or Organization Admin roles. Each test type can have zero or more alert rules configured as its defaults.

Name | Criteria | Minimum Locations
Default Network Alert Rule | Packet loss ≥ 20% | 2 locations
Default DNS Trace Alert Rule | Error is present | 2 locations
Default DNS Server Alert Rule | Error is present | 2 locations
Default DNSSEC Alert Rule | Error is present | 2 locations
Default DNS+ Domain Alert Rule | Availability ≤ 90% and Reference Availability ≥ 90% | 2 countries
Default DNS+ Server Alert Rule | Resolution time ≥ 100 ms | 1 country
Default HTTP Alert Rule | Error type is any | 2 locations
Default Page Load Alert Rule | Page load is incomplete | 2 locations
Default Transaction Alert Rule | Error is present | 2 locations
Default BGP Alert Rule | Reachability < 100% | 2 locations
Default Voice Alert Rule | Error is present | 1 location

Dynamic Baselines

Dynamic baselines allow users to create alerts that more accurately reflect the natural variance in test data. Using standard deviation, percentage change, or absolute values, users can configure alerts that dynamically determine whether to fire or not, based on historical data within a sliding time window.

Note: Dynamic baselines are currently only available for Cloud and Enterprise Agent alerts.

Let's imagine a scenario where an HTTP server test runs every fifteen minutes. Over the course of the first hour, four rounds are run by an agent in New York, gathering response times of 510ms, 490ms, 550ms, and 450ms, for an average of 500ms. So far, the alert has not fired.

The alert uses a dynamic baseline, and has a two-hour window. Based on the four results so far, whether it will fire or not for the next test depends on whether it was configured using standard deviation, percentage change, or an absolute value:

  • The standard deviation (STDEV) for these results is 36. Using the default multiplier of 2, the alert would fire if the next test returned a response time greater than 500 + (36 × 2) = 572ms.
  • The percentage change would need to be at least 10% to have avoided firing until now. With an average of 500ms, the alert would now fire if the next test returned a response time greater than 500 + 10% = 550ms.
  • The absolute value needs to be at least 50ms for the alert to have not fired (the third value, 550, is 50 more than the average of the first two results). The alert would therefore fire only if the next test returned a response time greater than 500 + 50 = 550ms.

In this example, alert rules using the percentage change and absolute values would fire at the same point (551ms or longer), while alert rules using standard deviation would not fire until 573ms.

Now let’s add two more results: 482ms and 464ms. All six results are within the two-hour window, which changes the average (baseline) to 491ms, as well as changing when the alert fires:
  • The STDEV for the six results is 32.5, meaning that the alert would fire if the next test response time was greater than 491+(32.5*2) = 556ms.
  • The percentage change remains 10%, meaning that the alert would fire if the next test response time was greater than 491+10% = 540ms.
  • The absolute value remains 50ms, meaning that the alert would fire if the next test response time was greater than 491+50 = 541ms.
The different options allow users to adapt their alerting framework to better reflect the fluctuation in test results, and ensure that their system isn’t overwhelmed with alerts because of static metric baselines.
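The standard-deviation arithmetic above can be reproduced directly. The document's figures match the population standard deviation (an assumption worth noting, since sample standard deviation would give different numbers):

```python
from statistics import mean, pstdev

def stdev_threshold(samples, multiplier=2):
    """Fire when the next value exceeds mean + multiplier * stdev.
    pstdev (population stdev) reproduces the figures in the example above."""
    return mean(samples) + multiplier * pstdev(samples)

first_hour = [510, 490, 550, 450]          # mean 500ms, stdev ~36
print(round(stdev_threshold(first_hour)))  # 572

two_hours = first_hour + [482, 464]        # mean 491ms, stdev ~32.5
print(round(stdev_threshold(two_hours)))   # 556
```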

The following metrics currently support dynamic baselines:
  • Web / HTTP server / Response Time
  • Web / Page Load / Page Load Time
  • Network / End to End (Server) / Latency
The image below shows an example alert configuration using a dynamic baseline. The alert condition states that if the response time exceeds two standard deviations above the average value over the last four hours, the alert will fire.

User-added image
Important Note: The time window for the alert must be at least three times the length of the interval of any tests it is attached to, in order to fire. For example, if a test runs every five minutes, the time window for the alert must be at least fifteen minutes in order to gather the three data points required.

Additional Information

Cloud Agents displaying a Local Problems message on a test results page are excluded from alert calculations:

User-added image

This is the equivalent of having the Alert Rule's Agents field set to "All agents except" the Cloud Agent with the Local Problems message.