Basically, there is no way to do this without some manual scheduling. Files are replicated and deleted in order via a journal.
If certain files are more important than others, we would suggest placing those files in a separate container and replicating that container separately.
Something similar to the following:
1. Setup a separate container.
2. Put the backups that need to go first in that container.
3. Schedule replication on that container to go first. If it generally takes 4 hours to replicate that data, give that container the replication pipe to itself for those 4 hours, with no other container replicating.
Container 1 (first/priority replication) – set for, say, 8 pm to 8 am
Container 2 (slower replication) – set for, say, 12 am to 8 am
The first container now holds the files that need to get offsite the fastest and has 4 hours of exclusive replication time. If more data still needs to go, it will share the pipe with the second container, which starts at 12 am (midnight) and runs in parallel for 8 hours.
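The staggered-window idea above can be sketched in a few lines. This is only an illustration, not the product's scheduler: the container names and window times are hypothetical placeholders matching the example schedule (priority container alone from 8 pm to midnight, then both containers in parallel from midnight to 8 am).

```python
from datetime import time

# Hypothetical schedule windows mirroring the example above.
# The priority container owns the pipe from 8 pm to 12 am;
# both containers then replicate in parallel from 12 am to 8 am.
SCHEDULES = {
    "priority-backups": (time(20, 0), time(8, 0)),   # 8 pm -> 8 am
    "everything-else":  (time(0, 0),  time(8, 0)),   # 12 am -> 8 am
}

def in_window(now, start, end):
    """True if `now` falls inside a window that may wrap past midnight."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # window wraps around midnight

def active_containers(now):
    """Return which containers are allowed to replicate at time `now`."""
    return [name for name, (s, e) in SCHEDULES.items()
            if in_window(now, s, e)]
```

For example, at 9 pm only the priority container replicates; at 2 am both share the pipe; at noon neither runs.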
Let me know if this provides enough clarity for the request.