
Foglight Experience Monitor 5.8.1 - Installation and Administration Guide


Deleting High Availability configurations

2. Click Delete to remove the configuration.

Aggregation of metrics and configuration

Multi-appliance configurations are, for the most part, transparent to the end user. When viewing and building report sets, end users are never exposed to the concept of portals or probes. Any exceptions are outlined in Storing report sets. Appliance administrators should have an understanding of how multiple appliances share metric data and configuration settings, and in what ways the multi-appliance concept is hidden from end users.


Aggregating metrics

In a multi-appliance cluster, all metrics collected by probes are pushed to the portal at the end of each five-minute interval. The portal then aggregates the data in real time and stores it in its database. This architecture guarantees that there is always a single consolidated data set representing all of the traffic collected by the probes in the cluster.
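The portal-side consolidation step described above can be sketched as follows. This is an illustrative model only, not the product's actual implementation; the payload shape, metric names, and `probe_id` field are assumptions.

```python
from collections import defaultdict

def aggregate_interval(probe_payloads):
    """Merge the five-minute payloads from every probe into one
    consolidated metric set for the interval (hypothetical schema)."""
    totals = defaultdict(float)
    for payload in probe_payloads:
        for metric, value in payload["metrics"].items():
            totals[metric] += value
    return dict(totals)

# Example: two probes report traffic for the same five-minute interval.
payloads = [
    {"probe_id": "probe-1", "metrics": {"page_views": 120, "errors": 3}},
    {"probe_id": "probe-2", "metrics": {"page_views": 80, "errors": 1}},
]
consolidated = aggregate_interval(payloads)
# consolidated now holds one record covering all probes' traffic
```

Because aggregation happens once, centrally, end users always query a single consolidated data set rather than per-probe fragments.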

Metrics are aggregated in a multi-appliance cluster using the following procedure:

Configuration settings

End users who browse the web console of the portal see a unified set of data representing the traffic collected by all of the probes in the cluster. This notion of the central data portal for end users does not always apply to the configuration settings for the portal and probes in the cluster.

While most of the configuration settings are shared by all appliances in the cluster, there are some settings that are specific to each probe:

For all other configuration settings, administrators can connect to any appliance, including the portal, to make changes to those settings.

All configuration settings, whether shared or probe-specific, are stored in a MySQL® database on the portal that is accessed by the probes using the MySQL protocol.

No matter where configuration settings are made, they are shared across the cluster. As shown in the diagram above, probes poll the configuration database on the portal every five minutes to receive any changes that have been made. Combined with the five-minute metric collection interval, this means there can be up to a ten-minute delay before configuration changes are reflected in the collected metrics.

If the portal goes offline, the probes use a local cached copy of the configuration database that reflects the last known state of the centralized configuration database on the portal. The probes can even be rebooted while the portal is down and continue to use their respective local copies of the configuration.
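The polling-with-fallback behavior described above can be sketched as a minimal model. The class, the `fetch_from_portal` callable, and the settings shown are hypothetical names for illustration; the product itself reads the configuration over the MySQL protocol.

```python
class ProbeConfig:
    """Sketch of a probe's configuration cache (assumed design, not the
    actual product code): poll the portal periodically, and keep the last
    known good settings if the portal is unreachable."""

    def __init__(self, fetch_from_portal):
        self._fetch = fetch_from_portal   # callable returning a settings dict
        self._cache = {}                  # last known good configuration

    def poll(self):
        """Refresh settings from the portal; fall back to the cache on failure."""
        try:
            self._cache = self._fetch()
        except ConnectionError:
            pass  # portal offline: keep using the cached copy
        return dict(self._cache)

def portal_ok():
    return {"monitored_urls": ["/checkout"], "poll_interval_minutes": 5}

def portal_down():
    raise ConnectionError("portal unreachable")

probe = ProbeConfig(portal_ok)
probe.poll()                  # cache now holds the portal's settings
probe._fetch = portal_down    # simulate a portal outage
settings = probe.poll()       # still returns the last known configuration
```

Persisting the cache to local disk (rather than memory, as in this sketch) is what lets a probe reboot during a portal outage and still come back up with a working configuration.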
