ApexSQL Recover has been investigating log files for 72 hours. Is this normal?
Description
When investigating the ApexSQL Recover log files in such cases, the following warnings are shown:

2022-07-21 20:49:40.058 9356 16840 WARN : Allocation unit 72058146359410688 is not available in the database
2022-07-21 20:49:40.059 9356 16840 WARN : Allocation unit 72058146359607296 is not available in the database
…
2022-07-21 20:49:40.093 9356 14180 WARN : Skipped operations at 000EDEA2:00028CE5:0001 since they don't belong to 000EE0FA:00000010:0001
2022-07-21 20:49:40.095 9356 14180 WARN : Skipped operations at 000EDEA2:00028E08:0001 since they don't belong to 000EE0FA:00000010:0001
…
2022-07-21 20:49:47.725 9356 14180 WARN : Gap in log source detected between 000EE0AF:000371EC:0021 and 000EE0B4:00000010:0001
2022-07-21 20:49:47.726 9356 14180 WARN : Gap in log source detected between 000EE0BE:000371F8:000C and 000EE0C3:00000010:0001
…
These lines of the ApexSQL Recover log file make it obvious that only the SQL Server transaction log file is being used for the investigation, so it makes sense that there are many gaps in the data, causing ApexSQL Recover to struggle and hang while trying to read the insufficient data.
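The "Gap in log source" warnings correspond to breaks in the LSN chain of the backups that were added, and these can also be confirmed outside of ApexSQL Recover. As a minimal sketch, assuming the backups were taken on the same SQL Server instance and their history is still recorded in msdb (YourDatabase is a placeholder name, and LAG requires SQL Server 2012 or later), the following query flags breaks in the LSN chain of the transaction log backups:

-- Minimal sketch: list log backups for a database and flag breaks in the LSN chain.
-- In an unbroken chain, each backup's first_lsn equals the previous backup's last_lsn.
;WITH LogChain AS
(
    SELECT  backup_start_date,
            first_lsn,
            last_lsn,
            LAG(last_lsn) OVER (ORDER BY backup_start_date) AS prev_last_lsn
    FROM    msdb.dbo.backupset
    WHERE   database_name = N'YourDatabase'   -- placeholder database name
      AND   type = 'L'                        -- 'L' = transaction log backup
)
SELECT  backup_start_date,
        first_lsn,
        last_lsn,
        CASE WHEN prev_last_lsn IS NOT NULL
              AND prev_last_lsn <> first_lsn
             THEN 'GAP - chain broken here'
             ELSE 'OK'
        END AS chain_status
FROM    LogChain
ORDER BY backup_start_date;

Any row flagged as a gap means at least one log backup between the two LSNs is missing from the sequence.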
Cause
If the transaction log backups are not sequential, ApexSQL Recover will not have sufficient information to perform a full recovery and recreate the full row history, because without the complete chain of transaction log backups some of the information will be missing. The full t-log chain must be added to the investigation in order to gain full details from transaction log backup auditing.
Resolution
No, this isn't normal behavior, since there isn't any size limitation when loading large backup files. However, the larger the database and transaction logs, the more time it will take ApexSQL Recover to finish loading and, later, recovering those files. For instance, if the .ldf file is 100+ GB, such a large amount of data requires time to be processed. Additionally, compression greatly affects recovery performance: reading from compressed backups is slower compared to working with non-compressed backups, especially when restoring a table from large, compressed backups. Also, please consider that you will need a larger amount of free RAM on your machine for things to run more smoothly.
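To check whether the backups being read are in fact compressed, the backup history can be consulted. This is a sketch against msdb.dbo.backupset, assuming the backup history is still available on the server (YourDatabase is again a placeholder); a compressed_backup_size noticeably smaller than backup_size indicates a compressed backup:

-- Minimal sketch: compare raw and compressed sizes of recorded backups.
SELECT  database_name,
        type,                       -- 'D' = full backup, 'L' = transaction log backup
        backup_start_date,
        backup_size,
        compressed_backup_size
FROM    msdb.dbo.backupset
WHERE   database_name = N'YourDatabase'   -- placeholder database name
ORDER BY backup_start_date;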
The solution here is to add the full transaction log backup chain when reading t-log files. To do so, please consult the following article for more info: Create and maintain full transaction log backups
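For illustration only, a minimal sketch of such a chain in T-SQL follows; the database name and file paths are placeholders, and the database must use the FULL (or BULK_LOGGED) recovery model for log backups to be possible:

-- The full backup starts the chain.
BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_Full.bak'
WITH INIT;

-- Every subsequent log backup continues the chain. All of these files
-- must be added to the ApexSQL Recover investigation, in order and
-- without gaps, for the full row history to be reconstructed.
BACKUP LOG [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_Log_01.trn'
WITH INIT;

BACKUP LOG [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_Log_02.trn'
WITH INIT;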