Navigation: Modifying Content Matrix Configuration and Settings > Client Side Configuration > Configuration Settings for Migrations Using the Import Pipeline
Metalogix Content Matrix has various XML properties that can be modified to fine-tune or help troubleshoot your Import Pipeline migrations. These properties can be found in the EnvironmentSettings.xml file.
NOTE: You can also contact Quest Support for assistance in identifying and resolving issues with migrations using the Import Pipeline. For a list of supported migration actions, see Objects and Actions Supported for Using the Import Pipeline.
The following XML properties can optionally be modified. However, it is generally recommended that you not modify these values unless you are encountering issues when migrating using the Import Pipeline.
UploadManagerBatchSizeItemThreshold
This key controls the maximum number of items per batch, including all folders and documents, used when Content Matrix submits batches according to item count. The default value is 200, and the value must be a positive integer.
<XmlableEntry>
<Key>UploadManagerBatchSizeItemThreshold</Key>
<Value>200</Value>
</XmlableEntry>
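For example, if large batches appear to be timing out, you might lower the threshold so that each batch contains fewer items. The value of 100 below is illustrative only and not a recommended setting:
<!-- Illustrative only: halves the default number of items per batch -->
<XmlableEntry>
<Key>UploadManagerBatchSizeItemThreshold</Key>
<Value>100</Value>
</XmlableEntry>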
UploadManagerBatchSizeMBSizeThreshold
This key controls the maximum batch size, in megabytes, used when Content Matrix submits batches according to size. The default value is 1000, and the value must be a positive integer. Set this value carefully, taking into consideration factors such as total upload bandwidth, the speed of data retrieval from the source system, and so on.
<XmlableEntry>
<Key>UploadManagerBatchSizeMBSizeThreshold</Key>
<Value>1000</Value>
</XmlableEntry>
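For example, in an environment with limited upload bandwidth, you might reduce the threshold so that each batch transfers less data. The value of 500 below is illustrative only and not a recommended setting:
<!-- Illustrative only: halves the default batch size in megabytes -->
<XmlableEntry>
<Key>UploadManagerBatchSizeMBSizeThreshold</Key>
<Value>500</Value>
</XmlableEntry>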
MaxAzureBatchRetryCount
This key controls the maximum number of times Content Matrix will resubmit the batch until it is successfully migrated. (The default value is 5.)
<XmlableEntry>
<Key>MaxAzureBatchRetryCount</Key>
<Value>5</Value>
</XmlableEntry>
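For example, if batches in your environment frequently fail for transient reasons but succeed on a later attempt, you might allow additional resubmission attempts. The value of 10 below is illustrative only and not a recommended setting:
<!-- Illustrative only: doubles the default number of resubmission attempts -->
<XmlableEntry>
<Key>MaxAzureBatchRetryCount</Key>
<Value>10</Value>
</XmlableEntry>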
ErrorMessagesForRetryAzureBatch
This key specifies error conditions for which you want Content Matrix to resubmit a batch, when it normally would not.
NOTE: Separate multiple error messages with a pipe character (|), as shown in the example below.
<XmlableEntry>
<Key>ErrorMessagesForRetryAzureBatch</Key>
<Value>Item does not exist|Object Reference Not Set</Value>
</XmlableEntry>
RetryBatchForCustomListWithVersions
By default, if you are migrating a batch that includes a list with a base type of CustomList (such as Announcements) and versioning is enabled, these lists are excluded from the resubmission, because duplicate items may be migrated to the target if the batch is resubmitted. You can choose to include these types of lists in resubmissions, however, by changing the value of the key RetryBatchForCustomListWithVersions from False to True.
NOTE: If custom lists are being excluded from batch resubmissions and not all items are successfully migrated, you can migrate any outstanding items using incremental migration.
<XmlableEntry>
<Key>RetryBatchForCustomListWithVersions</Key>
<Value>False</Value>
</XmlableEntry>
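For example, to include versioned custom lists in batch resubmissions as described above, set the value to True:
<XmlableEntry>
<Key>RetryBatchForCustomListWithVersions</Key>
<Value>True</Value>
</XmlableEntry>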
Performance Settings
BufferSizeForPipelineMigrationInMb
This key controls the buffer size used when uploading files to Azure Storage Account Containers using the Import Pipeline. The default value is 64, which means, for example, that a 128 MB file will be uploaded in two parts, 64 MB at a time. The lower the buffer size, the more quickly the computer's processor handles the information. Keep in mind that the higher the value, the more system resources will be consumed.
<XmlableEntry>
<Key>BufferSizeForPipelineMigrationInMb</Key>
<Value>64</Value>
</XmlableEntry>
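For example, on a machine with limited memory you might reduce the buffer size, so that the same 128 MB file is uploaded in four 32 MB parts rather than two 64 MB parts. The value of 32 below is illustrative only and not a recommended setting:
<!-- Illustrative only: a 128 MB file would upload in four 32 MB parts -->
<XmlableEntry>
<Key>BufferSizeForPipelineMigrationInMb</Key>
<Value>32</Value>
</XmlableEntry>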
MaxParallelUploadFilesInPipeline
This key controls the number of files uploaded in parallel to Azure Storage Account Containers when using the Import Pipeline. The default value is 2, which means a maximum of two files can be uploaded in parallel. Keep in mind that the higher the value, the more system resources will be consumed.
<XmlableEntry>
<Key>MaxParallelUploadFilesInPipeline</Key>
<Value>2</Value>
</XmlableEntry>
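For example, on a machine with ample system resources and bandwidth, you might raise the limit so that up to four files can upload in parallel. The value of 4 below is illustrative only and not a recommended setting:
<!-- Illustrative only: allows up to four files to upload in parallel -->
<XmlableEntry>
<Key>MaxParallelUploadFilesInPipeline</Key>
<Value>4</Value>
</XmlableEntry>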
UploadManagerMaxRetryCountThresholdForJobResubmission
WARNING: This value should not be changed unless absolutely necessary.
This key controls the amount of time to wait for a response from the reporting queue before re-requesting a migration job. This value is specified in multiples of 15 seconds, meaning that the default value of 960 corresponds to 4 hours and the minimum value of 120 corresponds to 30 minutes. This value must be a positive integer greater than or equal to 120.
<XmlableEntry>
<Key>UploadManagerMaxRetryCountThresholdForJobResubmission</Key>
<Value>960</Value>
</XmlableEntry>
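To translate a desired wait time into a value for this key, divide the wait time in seconds by 15. For example, a 2-hour wait is 7200 seconds, and 7200 ÷ 15 = 480. The entry below is illustrative only; as noted in the warning above, this value should not normally be changed:
<!-- Illustrative only: 480 x 15 seconds = 2 hours -->
<XmlableEntry>
<Key>UploadManagerMaxRetryCountThresholdForJobResubmission</Key>
<Value>480</Value>
</XmlableEntry>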
Temporary Storage Location
UploadManagerLocalTemporaryStorageLocation
This key indicates the directory in which the temporary binary files and manifest XML files for each batch are saved. If no filepath is specified, the default file path is used.
<XmlableEntry>
<Key>UploadManagerLocalTemporaryStorageLocation</Key>
<Value>C:\ProgramData\Metalogix\Temp folder sample</Value>
</XmlableEntry>
Navigation: Modifying Content Matrix Configuration and Settings > Changing Resource Utilization Settings
The number of threads that can be used simultaneously per job in Metalogix Content Matrix is controlled by the Edit Resource Utilization Settings option, which is accessible from the Settings ribbon at the top of the Console UI.
Clicking the option opens a dialog containing a slider that controls the number of threads available within a single action. Moving the slider all the way to the left turns off multithreading and allows only a single thread to be used during the action. (For more information on threading, refer to the Microsoft article About Processes and Threads.)
NOTE: Turning off multithreading in this way can be a useful method for determining whether multithreading is causing issues.
If the slider is left in the middle, twice as many threads as processor cores can be created per action. For example, if the machine running Metalogix Content Matrix has a two-core CPU, four threads can be used when the slider is in the middle position.
Moving the slider farther to the right allows more threads to be used, but can potentially overwhelm system resources, leading to errors if those resources cannot handle the data being migrated. If the setting is too high, you may even see a slowdown in the overall migration, because the migration is trying to run actions faster than the available resources allow.
While this value can be set through the Content Matrix Console, you can also set it through the back end, if the UI setting does not seem to be working for you. Please contact Quest Support for more information on this back end setting.
You can use Distributed Migration to significantly improve the time it takes to complete large migration jobs by distributing the workload efficiently across the resource pool. The distributed model enables parallel processing of migration jobs that reduces migration time, and enables higher utilization, better workload throughput and higher productivity from deployed resources.
The feature consists of a central Distributed Database and one or more Distributed Migration agents.
Distributed Migration Components
Distributed Migration relies on the following components:
Distributed Database
This is a SQL Server database that contains the repository, or queue, of migration job definitions that the agents can run. All Distributed Migration agents share the same Distributed Database.
NOTE: The Distributed Database cannot be a SQL CE database.
Agents
Agents are physical or virtual machines on which you can run migrations remotely. Installed on each agent are the Content Matrix Console and the Metalogix Agent Service, which handles job queuing and processing. Any available agent can pull a migration job that has been set to Run Remotely directly from the Distributed Database.
Any logging information is then sent to the Distributed Database.
When an agent is running a migration job, any interaction with the agent, such as changing a configuration setting, is not recommended.
If Distributed Migration was configured prior to version 9.2:
To provide more efficient resource utilization, the Distributed Migration model has been re-architected to eliminate the use of a Controller to push jobs to agents. Instead, Distributed Migration uses a Windows Service that allows any available agent to pull a migration job that has been set to Run Remotely directly from the Distributed Database.
If you configured Distributed Migration prior to version 9.2, you will need to reconfigure each agent in order to continue using Distributed Migration and to run any migration jobs remotely.
Refer to the Reconfiguring Distributed Migration After an Upgrade guide for more information.
NOTE: You will notice after an upgrade that the Configure Distributed Migration and Configure Self-Service options no longer display in the Content Matrix Console ribbon (Self-Service Migration has been removed as of version 9.2) and the Manage Agents dialog is empty.
Important Note About Global Mappings and Environmental Settings
The first time that a connection is made to the Distributed Database, Global (Domain, Url, Guid, and User) Mappings and environmental settings are copied from the connecting user's local machine and used by all agents. With subsequent connections to the Distributed Database, a pop-up displays with the option to overwrite Mappings and local settings with those used by the currently-connecting user's machine.
Global Mappings and settings can be copied from any machine that connects to the Distributed Database, even if it is not configured as an agent.
Navigation: Configuring Content Matrix for Distributed Migration > Distributed Migration System Requirements
Requirements for the Distributed Database
·The Distributed Database must use a Microsoft-supported version of SQL Server.
·The Distributed Database can reside on any machine in the network, provided that all agents have access to that machine.
·The Distributed Database should be created from the Metalogix Content Matrix Console.
NOTE: It is recommended that SQL Server Authentication be used to connect to the Distributed Database, as it will allow for cross-domain connections.
Requirements for an Agent Machine
·An agent machine or workstation should have 16 GB of free RAM.
NOTE: An agent machine can be a physical or virtual machine.
·Microsoft .NET Framework 4.7.2 must be installed on the machine.
·The agent machine must meet all the prerequisites as specified in the Metalogix Content Matrix Console Advanced Installation Guide.
·If a migration job uses a SharePoint 2013 or later database as a source connection, jobs can only be run remotely on an agent that meets the connection requirements for the same database version as the machine from which the migration is configured. This means that the agent machines must either:
§Have the same version of SharePoint installed, or be a SharePoint WFE for a compatible version.
OR
§Be a 64-bit machine that has no version of SharePoint installed on it.
·Local Object Model (OM) connections to SharePoint are not supported when using Remote Agents. A Local OM connection to SharePoint can only be made when Metalogix Content Matrix is installed on a machine that is running SharePoint (a SharePoint server or WFE). Because this type of connection can be made on the Host machine but cannot be guaranteed to also be available on an agent machine, it is not a supported connection adapter for running migrations using remote Agents. For remote Object Model connections, make sure the Metalogix Extension Web Service (MEWS) is installed on each agent machine.