Affects users whose repositories have high data fragmentation. In this case the export process is complicated because data blocks are stored non-linearly (scattered across the repository).
For instance, when creating an export to a Hyper-V host, the host requests streams of data from the Core. If the data is stored non-contiguously, the Core sends one block per request, so too much time is spent transferring data to the hypervisor on a one-block-per-request basis. What is needed is the ability to send cumulative blocks in a single request, which would increase the export rate for users with highly fragmented repositories.
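As an illustration of the requested behavior, the sketch below coalesces contiguous block indices into runs so that one request carries many blocks instead of one. The read_blocks and send_to_hypervisor helpers are hypothetical stand-ins for the Core's repository reader and the hypervisor transfer call; names, signatures, and the run-length limit are illustrative assumptions, not the product's actual API.

```python
# Minimal sketch of cumulative-block requests, assuming hypothetical
# read_blocks() and send_to_hypervisor() helpers (not the real Core API).

def coalesce_block_requests(block_indices, max_run_length=256):
    """Group sorted block indices into contiguous runs so each run can be
    served by one request instead of one request per block."""
    if not block_indices:
        return []
    runs = []
    start = prev = block_indices[0]
    for index in block_indices[1:]:
        if index == prev + 1 and (index - start) < max_run_length:
            prev = index            # extend the current contiguous run
            continue
        runs.append((start, prev - start + 1))  # (first block, block count)
        start = prev = index        # start a new run
    runs.append((start, prev - start + 1))
    return runs


def export_recovery_point(block_indices, read_blocks, send_to_hypervisor):
    """Send cumulative runs of blocks instead of one block per request."""
    for first_block, count in coalesce_block_requests(sorted(block_indices)):
        data = read_blocks(first_block, count)   # one repository read per run
        send_to_hypervisor(first_block, data)    # one transfer request per run
```

With highly fragmented data the runs are shorter, but any adjacent blocks still collapse into a single request, which is the intended gain over the current one-block-per-request behavior.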
Actual result:
Slow export rate for recovery points from repositories with high fragmentation.
Expected result:
Export speed remains consistent regardless of the level of data fragmentation in the repository.