This article contains frequently asked questions about performance tuning for Secure Copy Version 7.0 and higher.
Performance tuning of Quest Secure Copy depends on the environment in which it runs, as a number of factors can affect its performance. Network bandwidth and disk throughput are the largest factors in determining the performance of data migrations.
What variables affect the performance of Secure Copy?
Some variables that can affect the performance of Secure Copy are as follows, not necessarily in the order presented:
- Group membership of the user executing the copy (Administrators and Backup Operators)
- Hard disk read/write performance of the Source and Target
- Total count and average size of the data files to be copied
- RAM/CPU speed and availability
- Location of the Console
- Thread count
- Batch count
- Batch size
- Performance and other Copy Job options
These variables work together to determine performance when moving data. In the following sections, each variable is examined and explained so that performance can be tuned for a specific task.
What group membership does the user need to have for full access to files and folders?
Executing as a member of the Administrators and Backup Operators groups
Secure Copy runs under the context of the logged-on user when using the Console, or under the alternate credentials provided in a scheduled task when running a scheduled Job. For full access to all files and folders, the user must be a member of the local Administrators and Backup Operators groups on the Source. Members of the Administrators group are by default members of the Backup Operators group unless manually changed.
Note that in a Windows-to-Windows data copy, the user need only be a member of the Administrators group to effectively grant the Copy Job the Backup Operators privileges needed to utilize the "Override Security on Access Denied" feature located in the Copy Job's Synchronization options category. However, when copying to or from certain network storage devices (such as a NetApp Filer), the "Override Security on Access Denied" option may not function properly. In these cases, it is necessary to manually add the user to the Backup Operators group to gain access. If there is any question as to whether a machine supports this option, ensure the user is a member of the Backup Operators group to prevent being denied access to files or folders. Access Denied errors are displayed in the Job Progress dialog and are also written to the Copy Job's log.
What performance can be expected during the copy procedure?
Hard Disk read/write performance of Source and Target
The hard disk performance of the machines where the data is currently stored (Source) and where the data will be copied to (Target) can greatly affect performance and is the most common source of performance bottlenecks. Machines typically have disk I/O bottlenecks due to disk utilization, hardware age, and disk fragmentation. Secure Copy's default Copy Job Performance settings are ideal for a Copy Job containing a mix of small, medium, and large files running from a multi-core CPU Member Server with a single 5400rpm SATA hard disk Source to a similar Target machine over an unsaturated 100Mb Ethernet network. With better-performing hardware such as a RAID array, high-performance RAID controller, Gigabit network, high-performance SCSI disks, or Solid State Drives, consider increasing the default Copy Job Performance settings to fit the environment.
What are the recommendations for the number of threads and batch settings?
Total count and average size of the data files to be copied
Depending on the number and size of the data files to be copied, the settings for the number of threads, batch size, and batch count can be modified. General information and recommendations for the thread and batch settings based on what type of data is being migrated will be detailed further in this document.
Will RAM and CPU affect the copy performance?
RAM/CPU speed and availability
RAM will be consumed on the Source and Target machines; however, the majority of RAM will be consumed on the machine running the Console. The RAM used by the Console is for the configuration and execution of Copy Jobs. In the process of executing Copy Jobs, the copy engine generates lists of files that are portioned out amongst the available threads and batches. Maximizing the available RAM and CPU cycles on the Console machine allows Secure Copy to perform its tasks without caching to disk due to lack of memory or becoming bogged down due to lack of available CPU. Typical memory utilization of a single Copy Job with default Performance settings should range between 100 and 200 megabytes during execution. CPU speed and availability also affect copy performance, and by default Secure Copy's engine process threads are executed with Normal CPU priority. The CPU utilization percentage for the Secure Copy engine process is determined entirely by the Operating System.
Where should I install the Secure Copy Console?
Location of Console
The Console can be installed on the Source, the Target, or a third machine. The Source machine is the ideal location for the Console, so that lookups and enumeration of files and folders in the source data are performed most quickly. The Console needs to stream RPC commands to the Source and Target. Having the Console on a third machine causes additional network connections to be opened to both the Source and the Target and thus consumes more network resources. Network latency and bandwidth between the Console, Source, and Target machines also come into play, so involving two machines in the process instead of three is usually optimal.
What are the recommended Thread settings?
Thread count
The number of copy threads is equivalent to the number of copy operations that can be performed simultaneously. The range is 2-250 threads. As a general rule, "small" files are measured in bytes or kilobytes, "medium" files in megabytes, and "large" files in gigabytes or terabytes. If the Job is moving primarily small and medium-sized files, consider increasing the thread count. The result is more threads per Job moving the maximum number of files in the shortest period of time (assuming a bottleneck is not reached). When moving primarily larger files, fewer threads are more efficient because each thread can focus on moving a few large files in a single batch, rather than switching control between multiple threads that are competing for the resources the large files need to complete. Changing the thread count offers the most "bang for your buck" when trying to maximize copy performance.
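The guidance above can be sketched as a simple heuristic. This is an illustrative example only, not Secure Copy logic: the size categories come from the article, but the multipliers and the default of 8 threads are assumptions chosen for demonstration.

```python
def suggest_thread_count(avg_file_bytes, default=8, max_threads=250, min_threads=2):
    """Illustrative heuristic: more threads for small files, fewer for large.
    The default of 8 and the multipliers are assumptions, not product values."""
    MB, GB = 1024 ** 2, 1024 ** 3
    if avg_file_bytes < MB:          # "small": measured in bytes or kilobytes
        return min(default * 4, max_threads)
    if avg_file_bytes < GB:          # "medium": measured in megabytes
        return min(default * 2, max_threads)
    return max(min_threads, default // 2)  # "large": gigabytes or terabytes

print(suggest_thread_count(50 * 1024))      # small files -> 32
print(suggest_thread_count(2 * 1024 ** 3))  # large files -> 4
```

The thresholds stay within the documented 2-250 range; tune them against the actual hardware and network rather than treating them as fixed values.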
What is the recommended Batch Count setting?
Batch count
Batch count is a limiter based on the number of files an individual thread can process at a time. The range is 25-1000 files. When a job is being processed, a single thread will copy either the maximum number of files in the batch count or the maximum amount of data determined by the batch size; whichever value is reached first is the threshold. When processing primarily small files, it is preferable to set the batch count to the highest level for efficiency. When processing large files the batch count is not as relevant because the batch size threshold will most likely be met prior to meeting the batch count limit. In most cases the default value is optimal. Changing the batch count can lead to performance increase or decrease depending on the data being migrated but generally adjusting this setting is only effective when copying small files.
What is the recommended Batch Size setting?
Batch size
Batch size is a limiter based on the size of the files an individual thread can process at a time. The range is 1-100MB of data. When a job is being processed, a single thread will copy either the maximum number of files in the batch count or the maximum amount of data determined by the batch size; whichever value is reached first is the threshold. When processing primarily small files, it is preferable to set the batch size to the highest level for efficiency. In most cases, the default values are optimal. When processing medium or large files the batch size threshold will most likely be met before the batch count threshold. Changing the batch size can lead to performance increase or decrease depending on the data being migrated, but generally adjusting this setting is only effective when copying small files.
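The "whichever threshold is reached first" rule for batch count and batch size can be sketched in a few lines. This is a simplified illustration of the rule as described above, not Secure Copy's actual engine; the default values of 100 files and 10 MB are assumptions for the example.

```python
def make_batches(file_sizes, batch_count=100, batch_size=10 * 1024 ** 2):
    """Close a batch when it reaches batch_count files OR batch_size bytes,
    whichever threshold is hit first. Parameters are illustrative defaults."""
    batches, current, current_bytes = [], [], 0
    for size in file_sizes:
        current.append(size)
        current_bytes += size
        if len(current) >= batch_count or current_bytes >= batch_size:
            batches.append(current)
            current, current_bytes = [], 0
    if current:
        batches.append(current)  # flush the final, partially filled batch
    return batches

# 300 small 1 KB files: the file-count limit (100) closes each batch
print(len(make_batches([1024] * 300)))                     # -> 3
# 4 large 8 MB files: the byte limit (10 MB) closes batches early
print([len(b) for b in make_batches([8 * 1024 ** 2] * 4)])  # -> [2, 2]
```

The two runs show why batch count matters mainly for small files: with large files, the size limit closes every batch long before the file count is reached.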
What other Job options can affect copy performance?
Performance and other Copy Job options
Some Job options can cause more processing overhead. The overhead is minimal in most cases but using a combination of these options may have a cumulative negative effect.
Bandwidth Throttling
Bandwidth Throttling is part of the Performance Job options category. By default, the Inter-Packet Gap is set to zero. The Inter-Packet Gap (IPG), also referred to as an Interframe Gap (IFG), slows down the copy process, which reduces bandwidth usage over the network. A file is copied 64 kilobytes at a time, and the Inter-Packet Gap is a time span in milliseconds (ms) to wait before sending the next 64 kilobytes. Even a 100ms Inter-Packet Gap value can dramatically reduce resource utilization. Due to factors such as the volume of other traffic on the network, it may be necessary to experiment with the time span to achieve a desired bandwidth.
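The effect of the gap on a single thread can be estimated from the 64 KB chunk size described above: each chunk takes its wire time plus the gap before the next chunk is sent. This is a back-of-the-envelope model, ignoring protocol overhead and disk time.

```python
def max_throughput_bytes_per_sec(gap_ms, link_bytes_per_sec, chunk=64 * 1024):
    """Rough upper bound on per-thread throughput with an Inter-Packet Gap:
    each 64 KB chunk costs its transfer time plus the configured gap."""
    wire_time = chunk / link_bytes_per_sec
    return chunk / (wire_time + gap_ms / 1000.0)

# 100 Mb/s link is roughly 12.5 MB/s; a 100 ms gap caps one thread
# near 64 KB per ~0.105 s, i.e. around 0.6 MB/s
rate = max_throughput_bytes_per_sec(100, 12.5 * 1024 ** 2)
print(round(rate / 1024), "KB/s")
```

This illustrates why even a modest gap value throttles so effectively: at 100 ms the wait dwarfs the few milliseconds the chunk itself spends on the wire.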
Specify Files to Purge
Purging is part of the Synchronization Job options category. To purge files, the Source and Target paths must be compared and then purged accordingly. Because it requires the comparison of the Source and Target directories and files, this option will impact the overall duration of the copy process.
Verification of file copy
Verification is part of the Performance Job options category. Verifying the file copy is a time, I/O, and processor intensive activity because it compares the CRC32 checksums of the Source and Target files. Because it requires the comparison of the Source and Target files, this option will impact the overall duration and resource utilization of the copy process. On the Performance page for a copy job, there is now an option to include a CRC32 checksum verification on skipped files, as well as copied files.
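The CRC32 comparison described above can be sketched as follows. This is a generic illustration of checksum verification using Python's standard `zlib.crc32`, not Secure Copy's internal code; the 64 KB read size mirrors the chunking mentioned elsewhere in this article.

```python
import zlib

def crc32_of(path, chunk=64 * 1024):
    """Stream the file in chunks and fold each into a running CRC32."""
    crc = 0
    with open(path, "rb") as f:
        while block := f.read(chunk):
            crc = zlib.crc32(block, crc)
    return crc & 0xFFFFFFFF  # normalize to an unsigned 32-bit value

def verify_copy(source_path, target_path):
    """True when the Source and Target checksums match."""
    return crc32_of(source_path) == crc32_of(target_path)
```

Because both files must be read in full, verification roughly doubles the I/O per file, which is why the article flags it as time, I/O, and processor intensive.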
Test copy job
A test copy job tests network performance with live data to help estimate migration job time. Test data on the target server is automatically removed upon test completion.
Analyze copy jobs prior to migration
Before running a migration copy job, it may be beneficial to run the new Analyze feature to enumerate the number of folders and files to be migrated. Select a copy job, and click Analyze. The File System Statistics Analyzer runs in a command window and writes the data to a .csv file in the Secure Copy 7/Logs folder. The File System Statistics Analyzer can also be run from the command line using FileSystemStatistics.exe, which is located in the Secure Copy 7 installation folder.
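The kind of pre-migration summary the analyzer produces can be approximated with a short script. This is a rough stand-in, not FileSystemStatistics.exe: the column names and single-row summary format are assumptions for illustration.

```python
import csv
import os

def analyze_tree(root, out_csv):
    """Count folders, files, and total bytes under root, then write
    one summary row to a .csv (an illustrative format, not the tool's)."""
    folders = files = total_bytes = 0
    for dirpath, dirnames, filenames in os.walk(root):
        folders += len(dirnames)
        files += len(filenames)
        for name in filenames:
            try:
                total_bytes += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # skip files that vanish or deny access mid-scan
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Root", "Folders", "Files", "TotalBytes"])
        writer.writerow([root, folders, files, total_bytes])
    return folders, files, total_bytes
```

Knowing the file count and average size up front makes it much easier to pick sensible thread and batch settings before the real job runs.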
Compression
Compression is part of the Other File Options Job options category. When this option is used, the compression is not performed by Secure Copy; it is performed by the Operating System. Because the compression function is performed at the time of the file write operation on the Target, this option will impact the overall duration of the copy process.
Migration of Groups and Users
Local Group and Users settings have their own Job option category. When a local group or user is migrated, Secure Copy sends an RPC call to an available Domain Controller to validate the user or group and its members. In a standard single Console migration, this option will impact the overall duration of the copy process.
In a non-standard multiple Console migration, where each Console is migrating local groups and users, and a large total number are being migrated, you could run into a problem. Secure Copy can potentially flood a Domain Controller with RPC requests attempting to validate numerous users and groups from multiple Consoles, which can generate RPC timeout errors on the Domain Controller. To avoid this problem, stagger the start of multiple Copy Jobs or limit the number of threads being used across all Consoles which are migrating local groups and users. Another suggestion would be to use the Job option “Copy only local groups and users, not files” to migrate the groups and users manually before running the migration.
Copy Job options are not global settings but are Job specific. This enables the user to create Copy Jobs optimized for small and large files as separate Jobs. Each Job with its own specific configuration of thread and batch settings can produce a completely optimized data migration. To configure these settings globally and automatically for all newly created Copy Jobs, change the global defaults under the TOOLS | NEW JOB OPTIONS menu.
© 2024 Quest Software Inc. ALL RIGHTS RESERVED.