The following section describes the terms and concepts used within Change Auditor Threat Detection to help you understand how risk is assessed and alerts are determined.
Change Auditor Threat Detection applies machine learning to build behavioral features and a multi-dimensional baseline of typical behavior for each user in your environment. The baseline comprises a unique set of identifiers to ensure that only abnormal behaviors are flagged. For example, the baseline can include information about when a user typically logs on, which workstation they use, whether they tend to log on from remote locations, which files they typically access, and so on.
As the baselines are refined over time, the Threat Detection server makes informed predictions about what to expect, which minimizes false alarms caused by normal variations in activity. Change Auditor Threat Detection requires 30 days of audit history to establish the initial user behavior baselines.
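To make the baselining idea concrete, the following is a minimal sketch of how per-user behavioral baselines could be aggregated from an audit history window. The event fields (`user`, `time`, `workstation`) and function names are illustrative assumptions, not Change Auditor's actual schema or implementation:

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical raw audit events; field names are illustrative only.
events = [
    {"user": "alice", "time": "2024-01-02T09:05:00", "workstation": "WS-01"},
    {"user": "alice", "time": "2024-01-03T09:40:00", "workstation": "WS-01"},
    {"user": "alice", "time": "2024-01-04T22:15:00", "workstation": "WS-07"},
]

def build_baselines(events):
    """Aggregate per-user frequency counts over the audit history window.

    Each user's baseline records how often they log on at each hour of the
    day and how often they use each workstation; rare values stand out later.
    """
    baselines = defaultdict(lambda: {"hours": Counter(), "workstations": Counter()})
    for e in events:
        hour = datetime.fromisoformat(e["time"]).hour
        b = baselines[e["user"]]
        b["hours"][hour] += 1
        b["workstations"][e["workstation"]] += 1
    return baselines

baselines = build_baselines(events)
print(baselines["alice"]["workstations"].most_common(1))
```

In a real system the baseline would cover many more dimensions (remote logons, file access patterns, directory changes) and would be refreshed continuously, but the principle is the same: count what is typical so deviations can be measured against it.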
Indicators define risky activity, such as suspicious user logons, brute-force password attacks, unusual Active Directory changes, and abnormal file access. However, threat indicators are not constrained to a specific raw event; instead, they use machine learning to identify patterns of events that together could indicate a threat.
Specifically, as raw events stream in, the Threat Detection server analyzes human actors, accounts, locations and operations to identify behavior that deviates from established baselines.
Abnormal and risky behaviors are evaluated to produce threat indicators. These indicators are based on present and historical patterns, as well as specifically defined risky object attributes. An indicator consolidates all activities that are detected as abnormal.
Anomalous behavior that corresponds with a threat indicator is identified based on the event’s rarity and criticality. This strategy ensures that only behavioral changes that are important and potentially indicative of a suspicious activity are highlighted out of the raw events.
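The rarity-and-criticality idea above can be sketched as a simple scoring function. This is an illustrative model only; the weights, field names, and formula are assumptions, not Change Auditor's actual scoring logic:

```python
def rarity(count, total):
    """How rare an observed value is relative to the user's baseline.

    0.0 means the value is what the user always does; values approaching
    1.0 mean it has rarely or never been seen for this user.
    """
    return 1.0 - (count / total) if total else 1.0

# Illustrative criticality weights per operation type (assumed, not official).
CRITICALITY = {"logon": 0.3, "file_delete": 0.8, "ad_group_change": 0.9}

def indicator_score(value_count, total_count, operation):
    """Combine rarity with operation criticality into an indicator score."""
    return rarity(value_count, total_count) * CRITICALITY.get(operation, 0.5)

# A never-before-seen context (0 of 30 baseline days) for a critical
# Active Directory change scores high...
high = indicator_score(0, 30, "ad_group_change")
# ...while a routine logon (seen 25 of 30 days) scores low.
low = indicator_score(25, 30, "logon")
```

The point of weighting rarity by criticality is that only changes which are both unusual for the user and security-relevant are surfaced, which is what keeps raw-event noise out of the indicator stream.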
Threat indicators are the basis for the formation of alerts. Alerts are sorted by severity to reflect their security importance, and are managed by analysts, who investigate them and provide feedback.
SMART (Significant Multidimensional Anomaly Reduction Technology) is a correlation technology that provides prioritized results for dynamic and frequently changing behaviors. The technology uses statistical and machine learning algorithms to identify unique connections between anomalies, thereby reducing false positives and helping to spot threats.
SMART prioritizes and consolidates threats that reflect a meaningful deviation in user behavior. As a result, while millions of raw events might yield discovery of thousands of threat indicators, only patterns of truly suspicious behavior are scored. This means that fewer alerts are raised in the Threat Detection dashboard, and fewer false positives are identified. Like baselines, SMART alerts improve over time as more log data is processed, so they deliver increasingly accurate user threat detection.
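The consolidation step can be illustrated with a short sketch: group indicators by user and time window, combine their scores, and raise a single alert only when the combined score crosses a threshold. The grouping key, combination formula, and threshold here are assumptions made for illustration, not SMART's actual algorithm:

```python
from collections import defaultdict

def smart_alerts(indicators, threshold=0.8):
    """Consolidate per-user indicators into prioritized alerts.

    Indicators sharing a (user, day) window are combined; only windows
    whose combined score exceeds the threshold become alerts, so many
    weak anomalies are suppressed while correlated ones surface.
    """
    windows = defaultdict(list)
    for ind in indicators:
        windows[(ind["user"], ind["day"])].append(ind["score"])
    alerts = []
    for (user, day), scores in windows.items():
        # Probability-style combination: several moderate anomalies in the
        # same window reinforce each other.
        survive = 1.0
        for s in scores:
            survive *= (1.0 - s)
        combined = 1.0 - survive
        if combined >= threshold:
            alerts.append({"user": user, "day": day, "severity": round(combined, 2)})
    return sorted(alerts, key=lambda a: -a["severity"])

# Hypothetical indicators: two moderate anomalies for alice correlate into
# one alert, while bob's single weak anomaly is suppressed.
indicators = [
    {"user": "alice", "day": "2024-02-01", "score": 0.6},
    {"user": "alice", "day": "2024-02-01", "score": 0.7},
    {"user": "bob", "day": "2024-02-01", "score": 0.3},
]
alerts = smart_alerts(indicators)
```

This mirrors the behavior described above: thousands of indicators can exist, but only windows of genuinely correlated, suspicious behavior are scored highly enough to appear as alerts in the dashboard.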