
Foglight Agent Manager 5.9.2 - Foglight Agent Manager Release Notes

Resolved issues and enhancements

The following is a list of issues addressed and enhancements implemented in this Foglight Agent Manager release.

Defect ID    Resolved Issue
FAM-7254 An agent on an external FglAM stopped collecting data, without logging any errors, a few hours after the Agent Manager started.
FAM-7241 A "Too many open files" error was reported by many FglAMs running on Solaris.
FAM-7239 Fixed an issue where the FglAM picked up an IPv6 address as the host name.
FAM-7214 Fixed an issue where some FglAM commands caused the FglAM to crash.
FAM-7207 The Agent Manager dashboard loaded slowly when displaying more than 800 FglAMs.
FAM-7177 An unexpected NullPointerException resulted in broken agents.
FAM-7175 Fixed an issue where some FglAMs were incorrectly shown as not upgradable.
FAM-7173 Added support for "diffie-hellman-group-exchange-sha256" in SSH connections.
FAM-7145 Added support for more MAC algorithms, including hmac-sha2-512, hmac-sha2-256, and hmac-ripemd160.
FAM-7134 An inconsistent agentPackage version resulted in broken peers.
FAM-7086 The output of "fglam --status" did not match the documentation.
FAM-7079 Fixed an issue where the file log scanner stopped scanning logs.
FAM-6996 Fixed the "Too many open files" error that occurred when stopping and starting data collection for the JMX agent.
FAM-6977 A System Lockbox failure could occur when moving agents.
FAM-6968 Foglight Agent Manager generated a NumberFormatException while executing a WindowsEventLog activity.
FAM-6930 Fixed an issue where the Agent Manager encountered a "Permission denied" error when a user attempted to upgrade the JRE file.

 


Known issues

The following is a list of issues known to exist at the time of this release. 

Defect ID    Known Issue

FAM-7210 The Agent Manager version number is incorrect on the Script Console dashboard after an upgrade from 5.8.5.5.1 to 5.8.5.5.2.
FAM-7209 FglAM-Adapter-Devkit-6.1.3 is missing after the Agent Manager is upgraded from 5.8.5.4.1 to 5.8.5.4.2.
FAM-7193 A FglAM running on Linux/Solaris fails to connect to the target host through WMI over a link-local IPv6 address, if multiple network interfaces exist on the target host.
Workaround:
1. Log in to the target host.
2. Run the ipconfig /all command to check how many tunnel interfaces have an IPv6 address.
3. If only the Tunnel adapter Teredo Tunneling Pseudo-Interface and the Ethernet adapter Local Area Connection exist on the target host, uninstall or disable the Teredo Tunneling Pseudo-Interface, and then check whether the WMI connection can be established (a command-line sketch follows below).
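The following is a minimal command-line sketch of steps 2 and 3, run on the target Windows host. The netsh command is one common way to disable the Teredo interface and is an assumption here, not part of the original workaround, so verify it is appropriate for your environment before applying it:

    rem List all adapters and check which tunnel interfaces carry IPv6 addresses
    ipconfig /all

    rem Disable the Teredo Tunneling Pseudo-Interface, then retry the WMI connection
    netsh interface teredo set state disabled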
FAM-7183 The adapter does not alert upon FglAM failover or when cluster members are in the BROKEN state.
FAM-7169 WinRMCommandShell returns empty output when using BAT scripts.
FAM-7116 WMI connection cannot be established successfully when using the IPv6 address to monitor the target host.
 Workaround:
1. Set the agent's target host address and credential resource mapping to the long form, for example: fc00:0:0:0:0:0:a1e:9804.
2. Check if the WMI connection can be established. If not, change the agent's target host address from long form to compressed form (fc00::a1e:9804). If this solution still does not work, check the workaround in FAM-7193.
FAM-7072 The installer fails to exit, and the FglAM cannot report to the FMS.
FAM-7023 Agents are broken (the FMS deletes them) after the FglAM host name, IP address, or display name is changed. The agents are recreated when the FglAM starts with the original FglAM display name.
FAM-6998 Cannot deploy multiple HA-aware gars to a standby host at one time.
Workaround:
  1. Log in to http://<foglight_home>:8080/jmx-console, and then click name = HAManager.
  2. In the page that opens, click Invoke for the diagnosticSnapshotAsString() API.
  3. Find the appropriate primary HA host (mState = PRIMARY).
  4. Log in to http://<foglight_home>:8080/, and then go to Dashboards > Administration > Agents > Agent Managers.
  5. Select the primary HA host found in step 3, and click Deploy Agent Package.
  6. In the dialog box that appears, select multiple agent packages to deploy.
  7. Click Next, and then click Finish to start the deploy task.
    When the deploy task completes, the agent packages deployed to the primary host will be automatically deployed to the standby host in a few minutes.
FAM-6942 HA failover fails and the value of mState on standby peers changes to MISSING_LOCKBOX in JMX console.
Workaround: If the HA failover failure is caused by MISSING_LOCKBOX (check the HA partition information by navigating to JMX console > HAManager > diagnosticSnapshotAsString()), perform either of the following options to resolve this issue:
1. On the navigation panel, under Dashboards, click Administration > Credentials > Manage Lockboxes. In the Manage Lockboxes dashboard, release the missing lockboxes to the MISSING_LOCKBOX FglAM client. If you cannot find the missing lockboxes, perform option 2.
2. On the FglAM client that has the MISSING_LOCKBOX status, navigate to <fglam_home>\state\<state_name>\credentials. Remove all files under this directory, and then restart the FglAM (a shell sketch follows below). After the restart completes, release all lockboxes as Primary in the Administration > Credentials > Manage Lockboxes dashboard.
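The following is a minimal shell sketch of option 2 for a Unix FglAM client (on Windows, delete the files through Explorer or with del). The placeholders are the same ones used in the workaround text, and the FglAM should be stopped before the files are removed:

    # Remove the cached credential files for the affected FglAM state
    rm -f <fglam_home>/state/<state_name>/credentials/*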
FAM-6940 There are duplicate IP addresses for different FglAM instances (running on Linux) in the Agent Managers dashboard, if the FglAM servers are cloned virtual machines.
Workaround 1:
  1. Go to the monitored host, and then execute the following commands:
    su root
    rm -f /etc/machine-id
    systemd-machine-id-setup
  2. Restart the FglAM. Restart the operating system if this issue still exists after restarting the FglAM.
Workaround 2:
  1. Set system.id.enabled = false for all FglAM instances in the <fglam_home>/state/<state_name>/config/client.config file.
  2. Change the display name in the <fglam_home>/state/<state_name>/config/fglam.config.xml file, and then restart the FglAM.

Workaround 3: If you have cloned the servers, do not start the FglAM. Set system.id.enabled = false for all FglAM instances in the <fglam_home>/state/<state_name>/config/client.config file (a sketch of this entry follows below), and then start the FglAM for the first time.
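The following is a minimal sketch of the client.config entry referenced in Workarounds 2 and 3, assuming the same key = value; syntax used by the other FglAM configuration options quoted in these notes:

    system.id.enabled = false;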

FAM-6934 An agent is not listed under the "agents" property of the FglAMClientInstance or Host object after it is moved.

FAM-6838 The Agent Manager fails to connect to the Management Server while using the default HTTPS SSL certificate.

Workaround: Install a newer certificate on the Management Server.

FAM-6688 Support is required for installing and running the Agent Manager on a system with SELinux enabled.
FAM-6439 Upgrades from 5.7.4 to 5.8.1 or 5.8.2 may result in an error during the upgrade process.

Workaround: If you are currently running a 5.7.4 installation and you want to upgrade, upgrade to version 5.8.5 or later.

FAM-6228 The Agent Manager fails to start up after an upgrade.

FAM-5854 Foglight Log Monitor does not support UNC (Universal Naming Convention) paths.

Workaround: The following workaround applies when monitoring local log files. Monitoring remote log files is not supported.

  1. Map the UNC path to a drive letter on the local machine running the Agent Manager.

  2. If any monitored log files reside in the UNC path, point the Log Monitor Agent to this location using the mapped drive letter, not the UNC path.

FAM-5832 Installing and configuring multiple Foglight Agent Manager instances on a single physical host can cause some topology churn in the Host model representing the Agent Manager.

Workaround: By default, the Agent Manager submits performance metrics about itself. If multiple Agent Manager instances are running on the same physical host, disable the performance monitoring self-metric submission for each Agent Manager Instance by completing the following steps:

  1.  Open the $FGLAM_STATE/config/baseline.jvmargs.config file for editing.

  2. Add the following entry to the vmparameter properties section of the file:

    vmparameter.<X> = "-Dquest.glue.disable.performancemonitor=true";

    Important: You must replace <X> with the next numeric sequence number. A sketch of a completed section follows below.

  3. Save the file.

  4. Restart the Agent Manager.
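For example, if the section already contains entries numbered 0 and 1 (the values shown for them here are hypothetical), the new entry takes the next number in the sequence:

    vmparameter.0 = "-Xmx512m";
    vmparameter.1 = "-Dfile.encoding=UTF-8";
    vmparameter.2 = "-Dquest.glue.disable.performancemonitor=true";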

Additionally, an agent that submits a Host object representing the monitoring host (for example, the Agent Manager instance that is running and hosting the agent instance) as part of its monitored collection causes Host topology churn when agents of that type are deployed to additional Agent Manager instances running on the same physical host.
If the monitoring agent's type supports it, adjust its configuration to prevent the submission of the monitoring host's system ID. For example, this configuration is available for the Foglight for Infrastructure agents and can be disabled by setting the agent instances' Collect System ID property to false.

FAM-5600 The Agent Manager vm.config file migration fails under multi-state installs.

Workaround: The legacy vm.config file is replaced with two new configuration files: client.config and baseline.jvmargs.config. Locate these files within the upgraded Agent Manager state instance. Because they may already contain values transferred from the legacy vm.config, review each of the settings in both files to ensure that the configuration options apply to the Agent Manager state instance they are being copied into.

  1. Locate the vm.config file within the configuration state directory instance of the Agent Manager. The bottom of the file contains a section defining vmparameter.x = ""; values. Copy these settings from vm.config into the baseline.jvmargs.config file.

  2. Review all of the options declared in vm.config against those in client.config. The client.config file is a superset of the properties in vm.config (with the exception of vmparameter values, which are no longer defined there), so each property that exists in vm.config should also exist in client.config. Ensure that each of the common configuration values in client.config matches the value in vm.config, and make any updates, if required.

  3. If the java.vm configuration parameter was set in vm.config, update this option in the new client.config file. When transferring this value over, ensure that the path value is quoted and backslashes are escaped. For example:

    Windows: java.vm = "C:\\shared_java_vms\\1.5\\jre";
    Unix: java.vm = "/opt/shared_java_vms/1.5/jre";

  4. After validating that all of the configuration settings are in their new locations, delete the vm.config file and restart the Agent Manager process.

FAM-5355 OutOfMemoryError: the Agent Manager cannot create a new native thread and shuts down when the open file descriptor limit is too low.
Workaround: When creating a large number of agents on a single Agent Manager instance, ensure that the maximum number of open file descriptors (displayed by the ulimit -n command) is set high enough. A minimum of 256 is suggested for an Agent Manager installation, 512 or more is recommended for an Agent Manager hosting up to 15 agents, and 1024 is recommended for an Agent Manager hosting more than 15 agents. This value may need to be set even higher if more agents are created on a single Agent Manager installation. A shell sketch follows below.
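The following is a minimal shell sketch for checking the limit and raising it for the current session on a Unix host, using 1024 as an example target; making the change persistent depends on the platform (for example, through /etc/security/limits.conf on Linux):

    # Check the current open file descriptor limit
    ulimit -n

    # Raise the limit for this shell session, then start the Agent Manager from this shell
    ulimit -n 1024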

FAM-5264 The Agent Manager running on Solaris may fail to run local or external commands during startup or through the SSHLocalConnectionImpl class.

FAM-4955 Parentheses can cause a command execution to fail while using a LocalWindowsCommandShell connection.

Workaround: If parentheses are used for grouping commands (and not in an echo context), use spaces to separate them from the other tokens in the command. For example, instead of this command:

if 3 gtr 2 (echo "3>2") else (echo "3 leq 2")

Use the following:

if 3 gtr 2 ( echo "3>2" ) else ( echo "3 leq 2" )

FAM-2850 Slave processes exist for all installed out-of-process (OOP) agent packages, even if no agent instances are running.

Workaround: OOP packages that are not running any agent instances may be undeployed from the Agent Manager. Currently, this is only possible using a manual procedure.

FAM-1972 The deployed agent scratch directory created for JFogbank-type agents is not deleted during an upgrade.

Workaround: The orphaned directory is benign, and can be manually deleted after the upgrade is complete.

 


Third party known issues

This release of the Foglight Agent Manager does not include any third party known issues.

 


Upgrade and compatibility

The 5.9.2 Foglight Agent Manager cartridge requires Foglight Management Server 5.7.5.5 or later. The cartridge is compatible with all previously released versions of the Agent Manager client application.

Agent Manager upgrades from a 5.5.4.x legacy release require an intermediary upgrade to 5.6.7 prior to upgrading to 5.8.5 or later. To complete this intermediary upgrade, install one or more of the Agent Manager 5.6.7 platform-specific cartridges (as required), and upgrade the legacy hosts to this release before deploying the 5.9.2 Agent Manager cartridge or upgrading the Foglight Management Server to version 5.7.5.5 or later. After all of the legacy hosts are running version 5.6.7, and the Foglight Management Server is upgraded to version 5.7.5.5 or later, you can start upgrading your hosts to version 5.9.2.

 

The following is a list of Foglight product versions and platforms compatible with this release.

Product Name                               Product Version     Platform
Foglight Management Server                 5.7.5.5 and later   All platforms supported by these versions of the Foglight Management Server
Foglight Agent Manager Development Kit     5.9.2               All platforms supported by these versions of the Foglight Agent Manager Development Kit

 

For more information about upgrading the Management Server and the Agent Manager, see the Foglight Upgrade Guide.

 

