
Foglight Agent Manager 5.9.5 - Foglight Agent Manager Guide


Example: Running multiple instances in a cluster environment

One example use of this functionality is running the Agent Manager in cluster environments: when cluster failover occurs, it allows the next assigned host in the cluster to relaunch an Agent Manager instance, along with the specific agents it manages.

In this type of installation, there are multiple physical installations of the Agent Manager on different failover nodes. When one node fails and shuts down, the next one starts and its Agent Manager instance accesses the latest changes stored in its state directory on a shared drive.

The process of running multiple Agent Manager instances in a cluster environment follows the outline presented below.

Begin by installing the Agent Manager on each node in your cluster. See Installing the Agent Manager for installation instructions.

When the Agent Manager installation is available to the nodes in the cluster, the next step is to initialize a state directory for an instance on the shared drive that the cluster uses. When setting the state location locally from one of the nodes, you must define the full path to the remotely mounted state directory.

In the following command, <state_dir> is the path to a state directory on a shared network server that is accessible locally from all machines. For example, on Windows clients, <state_dir> can be f:\cluster_shared_dir\fglam_states\STATENAME_A, while on UNIX® clients, it is /mnt/cluster_shared_dir/fglam_states/STATENAME_A.

fglam --create-state --location <state_dir>
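
For instance, using the UNIX path from the example above, the command would be:

fglam --create-state --location /mnt/cluster_shared_dir/fglam_states/STATENAME_A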

Run the Agent Manager from the active node and provide the full path to this instance’s state directory on the shared drive. For example:

fglam --location <state_dir>
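
Using the Windows path from the example above, the equivalent command would be:

fglam --location f:\cluster_shared_dir\fglam_states\STATENAME_A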

The files related to the Agent Manager instance’s run-time state—for example, configuration and log files—are stored under its remote state directory on the shared drive.

When the Agent Manager is running, you can deploy agents to it and create agent instances. Files related to the run-time state for these agents (including log files) are stored under the remote state directory for this Agent Manager instance. Using the example above, they are stored in <state_dir>.
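
To confirm that the instance is writing to the shared location, you can list the state directory from any node that mounts the share. Using the UNIX path from the example above:

ls /mnt/cluster_shared_dir/fglam_states/STATENAME_A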

Ensure that only one Agent Manager instance uses a particular state directory at a time. Do not run two instances of the Agent Manager on separate machines (or separate active nodes in the cluster) that use the same shared state directory simultaneously.
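
If your cluster software does not already guarantee this, a guard in the node's start script can help enforce it. The following is a minimal sketch under assumed paths, and is not part of the Agent Manager; the lock-file name is hypothetical, and a production cluster should rely on its resource manager rather than on a script like this:

#!/bin/sh
# Hypothetical start wrapper; not part of the Agent Manager distribution.
# Refuses to start if another node already holds the lock file for this
# shared state directory.
STATE_DIR=/mnt/cluster_shared_dir/fglam_states/STATENAME_A   # assumed path
LOCK_FILE="$STATE_DIR/fglam.lock"                            # hypothetical lock file

# Create the lock atomically (noclobber makes the redirect fail if the
# file already exists).
if ! ( set -C; echo "$(hostname) $$" > "$LOCK_FILE" ) 2>/dev/null; then
    echo "Another node appears to be running an Agent Manager against $STATE_DIR" >&2
    exit 1
fi
trap 'rm -f "$LOCK_FILE"' EXIT

# Run the Agent Manager against the shared state directory.
fglam --location "$STATE_DIR"

Note that a crashed node leaves the lock file behind until it is removed manually, which is one reason the cluster's own resource manager is the preferred way to enforce this rule.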

Controlling the polling rate

The FglAMAdapter, a component included with the Management Server, controls how often the connected downstream Agent Managers and agent instances connect and poll for messages. In general, the more hosts that are connected to the server, the less often they should be instructed to poll. The properties included with the FglAMAdapter control this polling behavior, and can be found on the Agent Properties dashboard. In most cases, changes to these properties are not required; change them only when instructed to do so by Quest Support.

The polling rate is controlled by the following properties:

Minimum Polling Interval (seconds): The shortest interval, in seconds, at which connected hosts are instructed to poll.
Maximum Polling Interval (seconds): The longest interval, in seconds, at which connected hosts are instructed to poll.
Polling Timeout (seconds): A grace period, in seconds, that the FglAMAdapter waits for a host to respond before considering it disconnected. This accounts for clock skews and the changes in timing typically seen on heavily loaded VMware images.
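
To make the interplay between these three properties concrete, the following sketch shows one plausible way a per-host interval could be derived from the number of connected hosts and then clamped to the configured bounds. This is an illustration only, not the FglAMAdapter's actual algorithm; the property values and the scaling rule are assumptions:

#!/bin/sh
# Illustrative only: derive a polling interval that grows with the number
# of connected hosts, clamped between the configured minimum and maximum.
MIN_POLL=60       # Minimum Polling Interval (seconds) -- assumed value
MAX_POLL=300      # Maximum Polling Interval (seconds) -- assumed value
HOSTS=${1:-100}   # number of connected hosts, passed as an argument

interval=$(( HOSTS / 10 ))   # assumed scaling: more hosts, poll less often
[ "$interval" -lt "$MIN_POLL" ] && interval=$MIN_POLL
[ "$interval" -gt "$MAX_POLL" ] && interval=$MAX_POLL

echo "poll every ${interval} seconds"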

For more information about the Agent Properties dashboard, see the Administration and Configuration Help.

Configuring the Agent Manager to work in HA mode

High Availability (HA) mode is a configuration in which multiple Agent Managers work together in an HA Partition, where one Agent Manager is the primary host (HA Primary), and the others are standby hosts (HA Peers). When configured, agent instances whose types are configured as HA Aware and that belong to the same HA Partition are managed by the HA Primary host. If that Agent Manager stops responding or goes offline, its agent instances fail over to another Agent Manager in the partition.

Within a configured HA Partition, a common deployment set of agent types is kept in sync across all HA Peers. Agent packages deployed to that HA Partition are checked for any HA Aware agent types, and any detected HA Aware types are automatically deployed to all other HA Peers.

The FglAM Adapter monitors the deployments of each Agent Manager host within the named HA Partition. The HA Primary is considered the master in terms of the deployment set, and automatically deploys (or undeploys) HA Aware cartridges to each HA Peer. This also happens during cartridge upgrades, when the Adapter automatically pushes the updates out to all of the HA Peers in that HA Partition.

Assigning Agent Managers to HA Partitions

HA mode is configured through the FglAM Adapter agent properties. You can use these properties to assign Agent Managers to HA Partitions, and to define the priorities for promoting HA Peers to HA Primary hosts.

Start by navigating to the Agent Properties dashboard, the FglAM namespace, and the FglAMAdapter properties. From there, you can edit the High Availability Host Config list-based property to assign an Agent Manager to HA Partitions and define their eligibility for becoming HA Primary hosts.

3. On the navigation panel, under Dashboards, navigate to Administration > Agents > Agent Properties.
4. On the Agent Properties dashboard that appears in the display area, in the Namespace/Type view, expand the FglAM node, and click the FglAMAdapter node.
5. Assign the Agent Manager to a desired HA Partition by editing its entry in the High Availability Host Config list. This list contains all Agent Managers that are currently connected to the Adapter, and is accessible through the High Availability Host Config property. The list also identifies the names of their respective HA Partitions, and the priorities for considering Agent Managers as potential HA Primary hosts.
   a. Get started with editing the High Availability Host Config list-based property.
   b. In the Properties view, under High Availability, on the right of the High Availability Host Config property, click Edit.
   c. In the dialog box that appears, locate the Agent Manager entry that you want to add to an HA Partition, and in its HA Partition Name column, type the name of that HA Partition. If the Agent Manager that you want to assign to an HA Partition is not listed, do not add it manually; Agent Managers that connect to this Management Server are added to the list automatically, and manually adding Agent Managers is not supported.
   d. Click Save Changes.
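
For illustration only, an edited list might pair each connected Agent Manager with a partition name and a promotion priority. The host names and values below are hypothetical, and the exact columns depend on your version of the dashboard:

Agent Manager Host        HA Partition Name   Priority
host01.example.com        PartitionA          1
host02.example.com        PartitionA          2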

To verify the resulting HA configuration, use the JMX-Console: click FglAM:name=HAManager and invoke the diagnosticSnapshotAsString() method. The resulting output lists each of the known Agent Managers, which HA Partition (if any) they are assigned to, what deployment set they have, which host is the HA Primary, and what HA State each is in.
