The repository optimization job performs re-deduplication of data in the repository to reclaim storage space. The job forces a comparison of the data in your snapshots against the information in the deduplication cache. If any repeated strings of data are found in the repository, each repeat is replaced with a reference to the existing copy, which reclaims storage space in the repository.
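To make the mechanism concrete, the following is a minimal sketch of block-level deduplication against a hash cache, in Python. The fixed block size, the SHA-256 digest, and the deduplicate function are illustrative assumptions for this sketch, not the product's actual on-disk format or API; storing a reference instead of a second copy of a block is what reclaims the space.

    # Minimal sketch of deduplication against a hash cache.
    # Block size, digest choice, and layout format are assumptions
    # for illustration, not the product's actual implementation.
    import hashlib

    BLOCK_SIZE = 8192  # assumed block size for this sketch

    def deduplicate(blocks, dedupe_cache):
        """Replace repeated blocks with references to the first copy.

        dedupe_cache maps a block's SHA-256 digest to the index
        where that block's data is actually stored.
        """
        store = []   # unique block data actually kept in the repository
        layout = []  # per-block entries: ("data", index) or ("ref", index)
        for block in blocks:
            digest = hashlib.sha256(block).hexdigest()
            if digest in dedupe_cache:
                # Duplicate found: keep only a reference, reclaiming space.
                layout.append(("ref", dedupe_cache[digest]))
            else:
                dedupe_cache[digest] = len(store)
                store.append(block)
                layout.append(("data", len(store) - 1))
        return store, layout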
The repository optimization job reclaims space only if the deduplication cache size has been increased and the data in the repository was not deduplicated well originally. If the data was already deduplicated properly (because the deduplication cache was sized correctly when the core was installed), running the repository optimization job has little to no impact on the storage space used.
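A rough sketch of why cache size matters, assuming the cache evicts its least-recently-used entries once full (the eviction policy and the class below are hypothetical illustrations): once digests of older blocks are evicted, later duplicates of those blocks go unrecognized and are stored again, so the original deduplication pass is incomplete. Enlarging the cache and re-running optimization lets those duplicates be found.

    # Sketch of a size-bounded dedupe cache with LRU eviction.
    # The eviction policy is an assumption for illustration only.
    from collections import OrderedDict

    class BoundedDedupeCache:
        def __init__(self, max_entries):
            self.max_entries = max_entries
            self._entries = OrderedDict()  # digest -> stored block index

        def lookup(self, digest):
            if digest in self._entries:
                self._entries.move_to_end(digest)  # refresh LRU position
                return self._entries[digest]
            return None  # miss: a duplicate of an evicted block is re-stored

        def insert(self, digest, index):
            self._entries[digest] = index
            if len(self._entries) > self.max_entries:
                self._entries.popitem(last=False)  # evict least recently used

With a cache sized below the number of unique blocks in the repository, duplicates that recur after eviction are missed; that missed space is what a larger cache plus the optimization job can recover.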
The optimization process is processor- and storage-intensive. How long the job takes depends on several factors: the size of the repository, the amount of data in the repository, available network bandwidth, and the existing I/O load on your system. The more data in your repository, the longer the job runs.
The repository optimization job cannot run on a repository that has no free space: the job needs room to write new references while the old data is being processed.
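As a precaution, free space could be verified before starting the job. The following is a minimal sketch assuming the repository lives on a local filesystem path; the has_room_for_optimization helper and the 10% threshold are illustrative assumptions, not documented requirements.

    # Sketch of a free-space pre-check; the threshold is an assumption.
    import shutil

    def has_room_for_optimization(repo_path, min_free_fraction=0.10):
        """Return True if the volume holding repo_path has enough free space."""
        usage = shutil.disk_usage(repo_path)
        return usage.free / usage.total >= min_free_fraction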
Recommended steps for repository optimization:
The following jobs are blocked during a repository optimization job:
All other jobs should run normally.