
Quadrotech Archive Shuttle 10.3 - Administration Guide

Tags

Tags are an organizational mechanism that can be used on any source or target container to identify a special subset of the containers within a migration.

Tags can be used to:

·Flag users under legal consideration

·Group users into migration waves

·Identify problematic containers

·Identify VIPs that require special consideration

·Identify containers that are out of scope

 

Tags can be created, modified, and assigned, and grids can then be filtered by tag. Tags can be created and applied on any of these pages:

·User Information

·Bulk Mapping

·Existing Mapping

·Stage 1

·Stage 2


NOTE: Tags can be used with any type of container (including ownerless ones). Additionally, tags can be assigned to both a mapping’s source and target containers.

When you select objects and click the Tag Assignment button, you can create, remove, or assign tags in the window that appears.


 

Tag Management Page

The Configuration > Tag Management page shows a list of current tags and the number of containers assigned to each tag. A tag can be expanded to review the containers that carry it.

You can create, edit, or delete tags and tag assignments on this page.

An imported CSV file can be used to assign one or more tags to user or group containers.

The supported fields for matching the containers are:

·UserSid

·SaMAccountName

·Container Name

The format of the CSV file should be one of the following:

·SID,tagname

·samAccountName,tagname

·ContainerName,tagname

Multiple tags can be assigned to a container, for example:

·samAccountName,tag1,tag2,tag3
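As a sketch of how such a file is interpreted, the following snippet (using made-up identifiers and tag names) parses rows where the first column matches a container and any remaining columns are tag names:

```python
import csv
import io

# Hypothetical sample rows: identifier (SID, samAccountName, or container
# name) followed by one or more tag names.
sample = """jsmith,Wave1,VIP
adoe,Wave2
S-1-5-21-1111-2222-3333-1001,LegalHold,Wave1
"""

assignments = {}
for row in csv.reader(io.StringIO(sample)):
    identifier, *tags = [cell.strip() for cell in row]
    assignments.setdefault(identifier, []).extend(tags)

print(assignments["jsmith"])  # ['Wave1', 'VIP']
```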

You can assign a tag to all members of a group using the Assign User Groups button. This allows you to select a group and assign a specific tag to all containers in that group. This can be useful when you’re transitioning from using groups to using tags.

System Configuration

The System Configuration page in the Archive Shuttle Admin Interface contains many settings that can be used to customize the migration and environment. Some of these settings will affect the workflow of the migration while others will affect throughput and performance.

Changing these configuration options should be done after careful analysis of the current workflow, environment and throughput.

Changes to system settings take place after a few minutes when the appropriate module checks in for new work items.

 

Schedule Settings

On the System Configuration page different settings/configurations can be applied according to a schedule.

When you first enter the System Configuration, the default schedule is shown. Configuring specific schedules consists of the following three steps:

1. Create and name a new schedule.

2. Make the required configuration changes for this schedule; values can be set higher or lower than those in the default schedule.

3. Select the times of day, or days of week that this schedule applies.

Date/Time is based on the date and time of the Core.

 

Example: More export parallelism on Saturday/Sunday

In this example we will configure higher Enterprise Vault export parallelism for Saturday and Sunday. The steps to follow are:

1. Navigate to the System Configuration page

2. Review the current configuration; this is the default configuration that applies when no other schedule is set.

3. Add a new schedule via the button on the toolbar. Give the schedule a name (e.g. Weekend) and choose a colour from the available list.

4. Change the EV Export module parallelism settings to the new, higher values that you wish to use. When changes are made, the entry will become bold and a checkbox will appear underneath ‘Custom’.

5. Click on ‘Save’, in the toolbar, to commit those changes.

6. From the toolbar click on ‘Time Assignment’ in order to review the current scheduled configurations.

7. Click on the new schedule on the right-hand side, and then highlight all of Saturday and all of Sunday.

8. Click on ‘Save’ on the schedule view in order to commit those changes.

 

General Settings

This section contains general settings relating to Archive Shuttle:

Item

Description

General


Do not re-export on File Not Found

Normally, if an import module finds that a required file is not present on the staging area, it reports this to the Archive Shuttle Core and the Core instructs the export module to re-export the item. If this behavior is not required, select this checkbox.

Turn off the post-processing of secondary archives

This prevents the post-processing modules from performing actions against an Exchange (or O365) Personal Archive.

Stage 2 Default Priority

The normal priority which will be applied to users when they enter stage 2

Autoenable Stage 2

With this option enabled, when a container reaches 100% exported and 100% imported it will automatically move into Stage 2.

Without this option set, containers will remain in Stage 1 until Stage 2 is enabled for them manually.

Delete Shortcut Directly after Import

With this option selected, items successfully ingested into Exchange or Office 365 will have their shortcuts removed from the target container straight away. Without this option selected, the shortcuts will only be deleted once the Stage 2 command is executed.

This option can greatly enhance the user experience when migrating data back to existing containers (e.g., the primary mailbox).

Archive Migration Strategy

When a batch of users has been selected for migration and given a specific priority, within that, a migration strategy can be used to also govern the order that the migrations will take place. The options here are:

·Smallest archive first (based on archive size)

·Largest archive first (based on archive size)

·Oldest first (based on the archived date/time of items)

·Youngest first (based on the archived date/time of items)

The default strategy equates to a random selection within the chosen batch.

Do not transmit non-essential Active Directory Data

Stops the Active Directory Collector module from returning metadata about users which is not required for the product to function.

Use HOTS format

Instructs the Core and Modules to expect and use HOTS format for items on the staging area.

Clear staging area files older than [hours]

When the staging area cleanup process runs it will only remove files older than this number of hours.

Disable automatic staging area cleanup

Stops the automatic cleanup of the staging area.
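The age cut-off used by the cleanup can be pictured with a small sketch; this hypothetical helper operates on (path, mtime) pairs rather than the real staging area, where a real cleanup would walk the directory and delete each match:

```python
import time

def files_older_than(paths_with_mtimes, max_age_hours, now=None):
    """Return the (path, mtime) entries older than max_age_hours.

    mtime values are seconds since the epoch; only entries whose age
    exceeds the cut-off are candidates for removal.
    """
    now = time.time() if now is None else now
    cutoff = now - max_age_hours * 3600
    return [(p, m) for p, m in paths_with_mtimes if m < cutoff]

now = 1_000_000
entries = [("a.eml", now - 10 * 3600),  # 10 hours old
           ("b.eml", now - 1 * 3600)]   # 1 hour old
print(files_older_than(entries, 5, now=now))  # only a.eml qualifies
```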

Enable Auto Restart of Modules

Causes modules to automatically restart

Number of retries for Auto Restart of Modules

Specify the number of times modules attempt to restart

Logging


Log communication between modules and Core

Logs the sent command, and received results in a separate XML file on the Archive Shuttle Core. Note: Should only be used on advisement from Quadrotech Support

Log SQL Queries

Logs the internal Archive Shuttle SQL queries and timing information into the Archive Shuttle Core Log file. Note: Should only be used on advisement from Quadrotech Support

Default Core Log Level

Allows the logging level for the Archive Shuttle Core to be changed.

Send errors to Quadrotech

This option sends application exceptions to a service which Quadrotech can use to help track the cause of unexpected application exceptions.

Delete ItemRoutingErrorLogs after successful Export/Import

This option allows the system to remove any reference to issues/errors during export or import once an individual has been successfully exported or imported.

Clear Module Performance Logs

By default module performance logs will be kept for 30 days, but this can be changed in this setting.

Item Database


Default size [MiB]

Shows the size at which new item databases will be created.

Default log file size [MiB]

Shows the default size of new item database log files.

Item database update retry timeout [hours]

Shows the number of hours that will elapse before upgrades of the item databases will be retried.

User Interface


Global Timezone

By default, the system operates in UTC.

My Timezone Override

Allows a user specific timezone to be specified

Item size scale

Controls whether item sizes in the user interface are displayed in bytes, MiB, GiB, or TiB, or are scaled automatically.
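The scales behave as in this illustrative formatter (the rounding and the cut-offs used for the automatic scale are assumptions, not Archive Shuttle's exact rules):

```python
def format_size(num_bytes, scale="auto"):
    """Render an item size in a fixed scale, or pick one automatically.

    Illustrative sketch of the "Item size scale" setting: "auto" chooses
    the largest binary unit the value reaches, otherwise the named unit
    is applied directly.
    """
    units = {"B": 1, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}
    if scale == "auto":
        scale = next((u for u in ("TiB", "GiB", "MiB") if num_bytes >= units[u]), "B")
    return f"{num_bytes / units[scale]:.2f} {scale}"

print(format_size(3 * 2**30))          # 3.00 GiB (automatic scale)
print(format_size(3 * 2**30, "MiB"))   # 3072.00 MiB (fixed scale)
```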

Options ‘All’ on grids will be disabled when this threshold is exceeded

If a value is specified here, Archive Shuttle will not offer the ‘All’ page-size option on a grid when there are more items to display than this threshold.

Delete mapping with all related data

Normally a container mapping cannot be deleted if data has been ingested into the target container, or Stage 2 has been enabled. If this option is selected then the mapping and related Archive Shuttle SQL data will be deleted.

The Administrator of the target environment will need to remove the target container.

The Administrator of the Archive Shuttle environment will need to remove the data from the Staging Area

Lock active mapping deletion

Prevents deletion of mappings if they are actively migrating.

Folder


Folder name for folder-less items

Specify a name to use for items which do not have a folder.

Treat all messages as folderless

If enabled, this will ignore the folder that was obtained during the export of an item and ingest all items into the folder specified as the ‘folderless items’ folder.

Split folder parallelism

Maximum number of containers to process in parallel for folder splitting.

Split threshold for large folders (except journal / folderless)

If specified, it indicates the maximum number of items in each folder before a new folder is created.

Split threshold folderless items

If specified it indicates the maximum number of items in the folderless folder before a new folder is created.

Journal split base folder name

Specifies the first part of the folder name that is used to store data migrated from a journal archive.

Journal split threshold

If specified it indicates the maximum number of items in each folder before a new folder is created.
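The split behaviour of these thresholds can be sketched as follows; the numbered-folder naming used by `split_folder_name` is a hypothetical illustration (the guide does not document the exact scheme), but the threshold logic matches the settings above — a new folder starts each time the item count fills up:

```python
def split_folder_name(base_name, item_index, threshold):
    """Map an item's ordinal position (0-based) to a numbered split folder.

    Hypothetical naming sketch: once `threshold` items have been placed
    in a folder, subsequent items go into the next numbered folder.
    """
    folder_number = item_index // threshold + 1
    return f"{base_name} {folder_number}"

# With a threshold of 5000 items per folder:
print(split_folder_name("Journal", 0, 5000))      # Journal 1
print(split_folder_name("Journal", 4999, 5000))   # Journal 1 (last to fit)
print(split_folder_name("Journal", 5000, 5000))   # Journal 2 (new folder)
```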

Calendar Folder Names

A list of folder names which are translated to Calendar, so that the folder will get the correct icon when it is created.

Task Folder Names

A list of folder names which are translated to Task, so that the folder will get the correct icon when it is created.

Contact Folder Names

A list of folder names which are translated to Contact, so that the folder will get the correct icon when it is created.

Chain of Custody


Re-export on Chain of Custody error

Instructs the import module to report a Chain of Custody error back to the Core, and for the Core to then queue work for the export module to re-export the item.

Enable Chain of Custody for Extraction

Causes the export modules to generate a hash of items (to be stored in the Item Databases) when items are exported

Enable Chain of Custody for Ingestion

Causes the import modules to validate the Chain of Custody information when items are ingested. If an item fails this check it will not be ingested.
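Conceptually, the export-side hashing and the ingest-side validation fit together as in this sketch. The hashing algorithm is an assumption (SHA-256 here; the guide does not document which digest the modules use):

```python
import hashlib

def custody_hash(item_bytes):
    """Compute a digest of an exported item's raw bytes.

    Assumed algorithm: SHA-256. The digest is recorded at export time
    (stored in the Item Databases) and re-checked at ingest time.
    """
    return hashlib.sha256(item_bytes).hexdigest()

exported = b"raw MSG bytes"
recorded = custody_hash(exported)  # stored in the Item Database at export

# At ingest, the hash is recomputed; an item failing the check is not
# ingested (the "Enable Chain of Custody for Ingestion" behaviour).
assert custody_hash(exported) == recorded       # unchanged item passes
assert custody_hash(b"tampered") != recorded    # altered item fails
```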

Reliability


Allow import of messages with empty recipient e-mail

If a message is found without recipient information, it will normally fail the ingestion process. This can be overridden with this option.

Allow import of corrupt messages

Items which fail the MSG file specification may not be ingested into the target container. Some of these failures can be overridden with this option checked.

Enable item level watermarks

Archive Shuttle will stamp imported items with a watermark specifying details about the item, such as Source Environment, Source Id, Archive Shuttle Version, ItemRoutingId, and LinkId.

Fix Missing Timestamps

If the message delivery time is missing on a message, the messaging library will generate it from other properties already present on the message.

Missing timestamp date fallback

The date to be used in case of a missing timestamp

 

SMTP Settings

This section contains settings relating to the SMTP configuration (for sending out reports):

Item

Description

SMTP Server

The FQDN of an SMTP server which can be used to send emails

SMTP Port

The port to be used when connecting to the SMTP server

Use SSL

A flag to indicate that SSL should be used to connect

Use Authentication

A flag to indicate whether the SMTP server supports anonymous connections, or requires authentication

SMTP Username

The username to be used for authentication

SMTP Password

The password to be used for authentication

SMTP From Email Address

The ‘from’ address to be used in the outbound mail

SMTP From Name

The display name to be used in the outbound mail
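These settings map onto a standard SMTP client in the obvious way. The sketch below uses Python's smtplib with hypothetical values for each setting; it is a generic illustration, not Archive Shuttle's internal mailer:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical values mirroring the SMTP settings above.
cfg = {
    "server": "smtp.example.com",      # SMTP Server
    "port": 587,                       # SMTP Port
    "use_ssl": False,                  # Use SSL
    "use_authentication": True,        # Use Authentication
    "username": "reports",             # SMTP Username
    "password": "secret",              # SMTP Password
    "from_email": "archiveshuttle@example.com",  # SMTP From Email Address
    "from_name": "Archive Shuttle Reports",      # SMTP From Name
}

def build_report(cfg, subject, body, to_addr):
    """Assemble a report message using the From name/address settings."""
    msg = EmailMessage()
    msg["From"] = f'{cfg["from_name"]} <{cfg["from_email"]}>'
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_report(cfg, msg):
    """Connect per the server/port/SSL settings and authenticate if required."""
    smtp_class = smtplib.SMTP_SSL if cfg["use_ssl"] else smtplib.SMTP
    with smtp_class(cfg["server"], cfg["port"]) as smtp:
        if cfg["use_authentication"]:
            smtp.login(cfg["username"], cfg["password"])
        smtp.send_message(msg)

msg = build_report(cfg, "Migration report", "Stage 1 complete.", "admin@example.com")
```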

 

FTP/SFTP Settings

This section contains settings relating to the FTP or SFTP configuration:

Item

Description

URL

The URL or IP address to use to connect to the server

Username

The username to be used when connecting to the server

Password

The password to be used when connecting to the server

Use SFTP

If ticked, the module connects using SFTP; otherwise it uses FTP.

 

Journal Archive Migration Settings

This section contains settings relating to journal archive migrations.

Item

Description

Import root folder

Specifies the folder where imported items will be placed.

Delete items from staging area after initial import

If enabled, items are removed from the staging area after the initial import.

Delete items without journal archive migration import routings

Items without journal archive migration import routings will be deleted from the staging area. It is recommended to enable this if all user mappings are already enabled for import.

 

EV Collector Module

This section contains settings relating to the EV Collector Module:

Item

Description

Collector Parallelism

Defines how many archives will be collected in parallel

Collect Extended Metadata

Reads legal hold and path of items using the Enterprise Vault API for filtering purposes.

It is recommended that this is only enabled when filtering by folder

Use the BillingOwner on Archives which would be otherwise ownerless

This uses the owner set as “Billing Owner” in Enterprise Vault as the owner of the archive, instead of trying to use the entries in the Exchange Mailbox Entry table. This is useful where the Active Directory account relating to an archive has been deleted, for example for an employee who has left the company. Such an archive would normally show as Ownerless in Archive Shuttle, but with this switch enabled the Enterprise Vault Collector module will attempt to use the “Billing Owner” instead.

Ignore LegacyExchangeDN when matching EV users

With this option enabled the ownership detection for EV archives is modified so that the LegacyExchangeDN is not used.

Limit stored results

Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.

Threshold database limit

The number of items in the backlog before limiting takes place

 

EV Export Module

This section contains settings relating to the EV Export Module:

Item

Description

General


EV Export Archive Parallelism

Defines how many archives will be exported in parallel. Total thread count = EV Export Archive Parallelism multiplied by EV Export Item Parallelism.

EV Export Item Parallelism

Defines how many items should be exported in parallel per archive. Total thread count = EV Export Archive Parallelism multiplied by EV Export Item Parallelism.

EV Export Storage

If you are using Azure for the Staging Area storage (or you are migrating a journal archive from an older version of EV where Archive Shuttle performs the envelope reconstruction), ensure this option is set to Memory; otherwise, File System or Memory with File System Fallback can be selected. In either of the described situations, if ‘File’ is chosen, an error will be reported in the export module log file and the export will not proceed.

This setting can be adjusted if there are problems exporting very large items.

Export Provider Priority

Specifies the order in which EV export mechanisms will be tried.

Limit stored results

Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.

Threshold database limit

The number of items in the backlog before limiting takes place

Failures handling


Export Messages with invalid Message ID from Journal

When enabled, items from a journal that require envelope reconstruction will still be processed (and a P1 message generated) even if the Original Message ID attribute cannot be found in the item that was retrieved from EV (meaning that an EV Index lookup cannot be performed).

Note: This setting may mean that BCC information is not added to the P1 envelope (since it cannot be obtained from EV)

Prevent exporting of items if envelope reconstruction fails

This will prevent Archive Shuttle from providing an item from EV if the envelope reconstruction fails. With this setting disabled some items may be provided without an appropriate P1 (Envelope) message.

Fail items permanently on specified errors

Indicates whether Archive Shuttle should mark certain items as permanently failed, even on the first failure.

Error message(s) to permanently fail items on

A list of error messages which will cause items to be marked as permanently failed (if the previous setting is enabled)

 

EV Import Module

This section contains settings relating to the EV Import Module:

Item

Description

Offline Scan Parallelism

Indicates the number of threads that should be used when scanning offline media

EV Default Retention Category ID

When a non-Enterprise Vault source is used, and the target of the migration is an Enterprise Vault environment, indicate here the retention category ID to apply to the data which is ingested

EV Import Archive Parallelism

Defines how many archives will be imported in parallel.

Total thread count = EV Import Archive Parallelism multiplied by EV Import Item Parallelism.

EV Import Item Parallelism

Defines how many items should be imported in parallel per archive.

Total thread count = EV Import Archive Parallelism multiplied by EV Import Item Parallelism.

Import Journal Archive Through Exchange

Imports a journal archive through Exchange instead of through the Enterprise Vault API. Elements from the staging area will be added to Exchange (for an appropriate Enterprise Vault task to process) rather than directly into Enterprise Vault.

Journal Mailbox Threshold

If using the ‘Import Journal Archive Through Exchange’ option, this setting limits the backlog: once the threshold is reached, ingest is paused while the appropriate task processing the mailbox catches up.

Suspend imports while EV is archiving

Disables import module while Enterprise Vault is in its archiving schedule.

Ingest Provider Priority

Indicate the type of ingest provider to use.

Read file to memory

Allow reading of files to system memory before ingestion

Read file to memory threshold (bytes)

Items below this size will be read into memory (to speed up ingestion), whereas items above this size won’t be read into memory

Fail items permanently on specified errors

If enabled items which encounter one of the specified errors below will be marked as permanently failed, regardless of the failed item threshold.

Error message(s) to permanently fail items on

A list of errors

Limit stored results

Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.

Threshold database limit

The number of items in the backlog before limiting takes place

 

EV Provisioning Module

This section contains settings relating to the EV Provisioning Module:

Item

Description

Shortcut Process Parallelism

Defines how many archives will be post processed in parallel.

Total thread count = EV Post Process Parallelism multiplied by EV Post Process Item Parallelism.

Shortcut Process Item Parallelism

Defines how many items will be post processed in parallel per archive.

Total thread count = EV Post Process Parallelism multiplied by EV Post Process Item Parallelism.

Delete shortcuts not related to migrated items

When shortcut deletion is in progress, foreign shortcuts will also be deleted.

Delete messages with EV properties but without proper shortcut message class

If this option is enabled, items that are found to have EV attributes (such as Archive ID) but lack a proper shortcut message class will be deleted by the shortcut process module.

Use EWS for Processing

Enables the post processing to use EWS rather than MAPI for processing. This is enabled by default.

Shortcut deletion maximum batch count

Shortcuts will be grouped into batches of 100 items. This number indicates the number of those batches to be processed in parallel.

Limit stored results

Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.

Threshold database limit

The number of items in the backlog before limiting takes place

Collect shortcuts for both primary and secondary mailboxes

If enabled this will enable collection of shortcut information from both the primary mailbox and secondary mailbox (personal archive). This is the preferred method of shortcut collection.

Convert EV foreign shortcuts to regular items

If enabled shortcuts belonging to other archives will be retrieved and added to the target container, and the shortcut will be removed during stage 2.

EWS is the preferred method of shortcut processing and should be used wherever reasonable to support.

 

Exchange Import Module

This section contains settings relating to the Exchange Import Module:

Item

Description

General


Use per server EWS Url for Exchange import

If this is enabled, the import module will use the EWS URL configured in Active Directory on the Exchange Server object, rather than a general URL for all ingest requests.

Import Root Folder

When ingesting data into Exchange mailboxes or personal archives, it is sometimes required to ingest the archived data into a top-level subfolder (and then the archive folders beneath that). Specify the name of that top-level folder here.

Maximum Batch Size Bytes

The maximum size that a batch of items can be, which is then sent in one go to Exchange

Maximum Batch Size Items

Maximum number of items in a batch
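The two batch ceilings interact as in this sketch: a batch is flushed whenever adding the next item would exceed either the byte limit or the item-count limit. The `make_batches` helper is hypothetical; ordering and oversize-item handling are assumptions:

```python
def make_batches(items, max_bytes, max_items):
    """Group (item_id, size_bytes) pairs into batches respecting both
    the maximum batch size in bytes and the maximum item count."""
    batches, current, current_bytes = [], [], 0
    for item_id, size in items:
        # Start a new batch if this item would breach either ceiling.
        if current and (current_bytes + size > max_bytes or len(current) >= max_items):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(item_id)
        current_bytes += size
    if current:
        batches.append(current)
    return batches

items = [("a", 40), ("b", 40), ("c", 40), ("d", 10)]
print(make_batches(items, max_bytes=100, max_items=2))  # [['a', 'b'], ['c', 'd']]
```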

Exchange Timeout Seconds

Timeout in seconds until Archive Shuttle aborts the ingest (i.e. upload/processing)

Disable reminders for appointments in the past

This will remove the MAPI properties relating to whether a reminder has been sent/fired or not as the item is ingested into the target. If this is not enabled reminders may appear for long overdue archived items.

Mark migrated items as read

If this is enabled, all migrated items will be marked as read by the import module.

Disable migration on specified error(s)

If enabled items which encounter one of the specified errors below cause the mapping to be disabled.

Error message(s) to disable migration on

List of errors

Fail items permanently on specified errors

If enabled items which encounter one of the specified errors below will be marked as permanently failed, regardless of the failed item threshold.

Error message(s) to permanently fail items on

A list of errors

Limit stored results

Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.

Threshold database limit

The number of items in the backlog before limiting takes place

Threading/Parallelism


Offline Scan Parallelism

Number of threads that will be used for scanning offline media

Exchange Mailbox Parallelism

Defines how many Exchange mailboxes will be ingested to in parallel.

Exchange Batch Parallelism

Defines how many batches will be ingested per mailbox in parallel

Exchange Item Parallelism

Defines how many items will be ingested per mailbox in parallel

Connectivity


Exchange Version

Specify the version of Exchange which is in use.

Disable Certificate check

Disable the certificate validity check when connecting to Exchange

Exchange Connection URL

Specify an Autodiscover URL if the default one does not work

Use Service Credentials for Logon

Authenticate to Exchange with the credentials which the Exchange Import Module Server is running as.

 

Journal Transformation settings

This section contains settings relating to journal transformation:

Setting

Description

Notes

Delete items from staging area after initial export

When enabled, items are deleted from the staging area after export.

It’s recommended to enable this if there is a small subset of users to be migrated. It reduces the requirements on the size of the staging area.

Delete items without journal explosion import mappings

When enabled items that do not have import routings will be deleted from the staging area.

It’s recommended to turn this on when all user mappings are enabled for import.

Time to expire inactive items (hours)

Default: 0, which means this setting is inactive.

After an item has been imported but is still needed in the future, an inactivity timer starts (from the time the import of the item began). If the number of hours specified here elapses and the item is still inactive, it is deleted from the staging area.


Mailbox Quota Exceeding Threshold (%)

Default: 95%. Ingesting items will stop if the mailbox size exceeds this percentage of its quota.

Ingest will resume if the quota is increased, or the mailbox size is reduced.
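The threshold amounts to a simple percentage comparison against the mailbox quota; `ingest_allowed` below is a hypothetical illustration of the described behaviour, not the module's code:

```python
def ingest_allowed(mailbox_size_bytes, quota_bytes, threshold_pct=95):
    """Return True while the mailbox is under the threshold percentage
    of its quota; ingest pauses once the threshold is exceeded."""
    return mailbox_size_bytes < quota_bytes * threshold_pct / 100

quota = 50 * 2**30  # e.g. a hypothetical 50 GiB mailbox quota
print(ingest_allowed(40 * 2**30, quota))  # True  (80% used)
print(ingest_allowed(48 * 2**30, quota))  # False (96% used, over 95%)
```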

Ingest users with most items first

Default: Enabled. Start import to users which have most items to process.


Skip items which do not belong to user mapping

Default: Enabled. When enabled items which are older than the user will not be ingested into the target.

This setting does not apply to leavers, re-created users and manually mapped users.

Enable All SMTP Addresses Discovered Upon Exploding

Default: Enabled. When enabled, all SMTP addresses discovered during the explosion process will be automatically enabled for further processing. This setting is ignored if a list of domains is specified.


List of Accepted Domains

Only the list of domains specified here will be automatically enabled for further processing. Specify one domain per line.

If this list is altered during a migration, it will only affect future explosion of items, not the existing items.

 

Native Import Module

This section contains settings relating to the Native Import Module:

Item

Description

Stamps a header to imported messages for ProofPoint to identify message source

When enabled, a message header is added to each item as it is added to a PST file. The header is called x-proofpoint-source-id and has the item id/item routing id as the value. For example:

x-proofpoint-source-id: 91016abe-51e3-bdd6-132f-fb6763ecc751/2865103
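The header can be reproduced with Python's standard email library; the construction below is only an illustration, reusing the example value from the guide:

```python
from email.message import EmailMessage

# Values taken from the example above: item id and item routing id.
item_id = "91016abe-51e3-bdd6-132f-fb6763ecc751"
routing_id = 2865103

msg = EmailMessage()
msg["Subject"] = "Archived item"  # hypothetical message content
msg["x-proofpoint-source-id"] = f"{item_id}/{routing_id}"

print(msg["x-proofpoint-source-id"])
```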

Native Import File Parallelism

Defines how many PST files will be imported to in parallel.

Native Import Item Parallelism

Defines how many items will be ingested in parallel per PST file.

Finalize finished PSTs in Stage 1

With this option enabled, finished/full PST files will be moved to the output area while the mapping is still in Stage 1. This only happens for PSTs that are complete, i.e. those that have split at the predefined threshold. For migrations smaller than the threshold, which therefore have just a single PST, this PST will not be moved in Stage 1.

Fail items permanently on specified errors

If enabled items which encounter one of the specified errors below will be marked as permanently failed, regardless of the failed item threshold.

Error message(s) to permanently fail items on

A list of errors.

Limit stored results

Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.

Threshold database limit

The number of items in the backlog before limiting takes place

 

Office 365 Module

This section contains settings relating to the Office 365 Module:

Item

Description

General


Number of fastest servers to use

Determines how many servers from the list of fastest servers are used. They will be picked randomly by the module.

Import Root Folder

When ingesting data into Office 365 mailboxes or personal archives, it is sometimes required to ingest the archived data into a top-level subfolder (and then the archive folders beneath that). Specify the name of that top-level folder here.

Ingest Provider Priority

Determines which ingestion methods will be used to ingest data into Office 365 and in which order

Office 365 Batch Size Bytes

The maximum size that a batch of items can be, which is then sent in one go to Office 365

Office 365 Batch Size Items

Maximum number of items in a batch

Office 365 Timeout Seconds

Timeout in seconds until Archive Shuttle aborts the ingest (i.e. upload/processing)

Disable reminders for appointments in the past

This will remove the MAPI properties relating to whether a reminder has been sent/fired or not as the item is ingested into the target. If this is not enabled reminders may appear for long overdue archived items.

Mark migrated items as read

If this is enabled, all migrated items will be marked as read by the import module.

Convert journal messages to O365 journaling format

If this option is enabled, information in the P1 envelope is added to an attribute called GERP and added to the message as it is ingested into Office 365. This makes those items Office 365 journal-format messages.

Limit stored results

Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.

Threshold database limit

The number of items in the backlog before limiting takes place

Allow synchronization of un-associated mailboxes

Mailboxes with no association to an AD user present in the Archive Shuttle database will also be synchronized and stored (matching is based on UPN/email address).

Enable auto-recreate deleted O365 Users as Leavers

Deleted O365 users which are marked as Enabled to Recreate will be recreated as leavers once the ‘RecreateDeletedO365Users’ task runs.

Allow synchronisation of unassociated mailboxes

Allow synchronisation of mailboxes without an owner.

Collect retention tags

Collect retention tags from O365.

Allow ingest of items without O365 retention mapping

If the collection of retention tags is active but an explicit mapping of retention tags hasn’t been performed, some items may not ingest into the target. If this option is enabled and a mapping doesn’t exist, the item will still be ingested.

Preferred domain for ambiguous user match

Specify a domain (with or without the @) to use when trying to match users with ambiguous names.

Disable migration on specified error(s)

If enabled items which encounter one of the specified errors below cause the mapping to be disabled.

Error message(s) to disable migration on

List of errors

Fail items permanently on specified errors

If enabled items which encounter one of the specified errors below will be marked as permanently failed, regardless of the failed item threshold.

Error message(s) to permanently fail items on

A list of errors.

Disable ingest account on specified error

If enabled items which encounter one of the specified errors below will result in the ingest account being disabled from further ingest requests.

Error message(s) to set ingest account as disabled

A list of errors.

Virtual Journal


Virtual Journal Item Count Limit

The maximum number of items in a virtual journal mapping before a new mapping will be created.

Virtual Journal Size Limit

The maximum size of a virtual journal mapping before a new mapping will be created.

Threading/Parallelism


Offline Scan Parallelism

Number of threads that will be used for scanning offline media

Office 365 Mailbox Parallelism

Defines how many mailboxes will be ingested to in parallel.

Office 365 Item Parallelism

Defines how many items will be ingested per mailbox in parallel.

Office 365 Batch Parallelism

Defines how many batches will be ingested per mailbox in parallel

Connectivity


Use faster server (round-trip)

If enabled, the Office 365 module will periodically get a list of servers responding to Office 365 ingest requests and use only those for ingestion.

Office 365 Exchange Version

Specify the Office 365 Exchange version

Disable certificate check

Disable the certificate validity check when connecting to Office 365

Use Multiple IP from DNS

When Office 365 returns multiple IP address entries for its ingestion service, this setting allows the ingest module to communicate with all of those IP addresses instead of just one. For this to work, the ‘Disable certificate check’ option must be enabled.
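
To see what "multiple IP address entries" means in practice, a DNS lookup for a service name can return several A records. A minimal sketch using the Python standard library (the function name is illustrative; Archive Shuttle's own resolver is not exposed):

```python
import socket

def resolve_all_ips(host, port=443):
    # Collect every unique IPv4 address DNS returns for the host.
    # A module spreading load across endpoints would iterate this list
    # instead of always connecting to the first entry.
    infos = socket.getaddrinfo(host, port, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})
```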

Exchange Connection URL

Specify an Autodiscover URL if the default one does not work.

Use modern authentication (OAuth)

Select this option to ingest to Exchange Online using modern Azure Active Directory (OAuth) authentication.

OAuth concurrent batches

By default, 5 batches are used in parallel. This value can be adjusted.
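
Conceptually, "concurrent batches" is a bound on how many batch ingests run at once. A minimal sketch of that pattern using a thread pool (the helper name and callback are hypothetical, not Archive Shuttle APIs):

```python
from concurrent.futures import ThreadPoolExecutor

def ingest_batches(batches, ingest_one, max_parallel=5):
    # Run at most max_parallel batch ingests concurrently; the default
    # of 5 mirrors the module's default described above. ingest_one is
    # a placeholder for whatever processes a single batch.
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(ingest_one, batches))
```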

 

PST Export Module

This section contains settings relating to the PST Export Module:

Item

Description

General


File Parallelism

The number of PST files to ingest from simultaneously

PST item collection file parallelism

The number of PST files to scan simultaneously

Limit stored results

When the module is working, limits the number of items it is tracking whose results have not yet been sent to the Core.

Threshold database limit

The number of items allowed to be stored locally, and not sent to the Core, before the module stops requesting more work.
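
These two settings together describe a simple backpressure scheme: the module keeps counting unsent results and stops requesting new work once that backlog crosses the threshold. A hypothetical sketch of the mechanism (class and method names are illustrative only):

```python
import threading

class WorkThrottle:
    """Backpressure sketch: stop requesting new work while the local
    backlog of results not yet sent to the Core exceeds a threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.backlog = 0
        self.lock = threading.Lock()

    def record_result(self):
        # Called when an item finishes locally but its result is unsent.
        with self.lock:
            self.backlog += 1

    def results_sent(self, n):
        # Called when n results have been transmitted to the Core.
        with self.lock:
            self.backlog = max(0, self.backlog - n)

    def may_request_work(self):
        # The module only asks for more work below the threshold.
        with self.lock:
            return self.backlog < self.threshold
```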

Journal Archive Migration


Process messages without P1 header

If enabled, items will still be processed for journal archive migrations even if they are missing a P1 header.

Process distribution lists from messages without P1

If enabled, distribution lists from messages without a P1 header will still be processed for journal archive migrations.

 

Metalogix Export Module

This section contains settings relating to the Metalogix Export Module:

Item

Description

Metalogix Archive Parallelism

The number of archives to process in parallel

Metalogix Item Parallelism

The number of items to process in parallel per archive

Threshold Database Limit

The number of items in the backlog before limiting takes place

Limit Stored Results

Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.

Allow import of corrupted messages

If selected, import of corrupted messages is allowed

 

EAS Zantaz Module

This section contains settings relating to the EAS Zantaz Module:

Item

Description

EAS Archive Parallelism

The number of archives to process in parallel

EAS Item Parallelism

The number of items to process in parallel per archive

Limit stored results

Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.

Threshold database limit

The number of items in the backlog before limiting takes place

 

Sherpa Module

This section contains settings relating to the Sherpa MailAttender Module:

Item

Description

Sherpa Archive Parallelism

The number of archives to process in parallel

Sherpa Item Parallelism

The number of items to process per mailbox in parallel

Limit stored results

Stops the module from trying to get more work if there is a backlog of transmissions to be sent to the Core.

Threshold database limit

The number of items in the backlog before limiting takes place

 

SourceOne Module

This section contains settings relating to the SourceOne Module:

Item

Description

SourceOne Container Parallelism

The number of archives to process in parallel

SourceOne Collect Items Container Parallelism

The number of items to process per archive in parallel

SourceOne Sync Archives Batch Size

Defines how many archives will be synced in one batch.

Ignore Missing Transport Headers

Ignores missing transport headers when exporting journal data.

Email address used in case ‘sender is null’ error

Specify an email address to be used if the module reports a ‘sender is null’ error

Limit Stored Results

Select this option to limit stored results.

Threshold Database Limit

Limit of records in the local database; when reached, the module stops asking for work items.

 

Dell Archive Manager Module

This section contains settings relating to the Dell Archive Manager Module:

Item

Description

Container Parallelism

Defines how many archives will be exported in parallel.

Collect Item Parallelism

Defines how many items should be collected in parallel per archive.

Limit Stored Results

Select this option to limit stored results.

Threshold Database Limit

Limit of records in the local database; when reached, the module stops asking for work items.

Collect size of archives

If enabled the module will also collect the overall size of archives

 

PowerShell Script Execution Module

This section contains settings relating to the PowerShell Script Execution Module:

Item

Description

Limit stored results

If enabled, helps protect the module from failures where a backlog of results can’t be sent to the Core.

Threshold database limit

The number of results that will be stored locally before the module stops processing more work (combined with the previous setting).

Day to Day Administration of Migration Activity

The tasks that need to be performed from day to day during an Archive Shuttle based migration vary from project to project; however, aspects that should always be performed are:

Monitoring Performance

Monitoring Workflows

Monitoring Module-Level Activity

Adjusting Priority of Migrations

Monitoring Migrations

Pausing, Resuming, and Skipping Workflow Steps

Changing Workflows

Setting Users Up for Migration

Configuring, Loading, and Saving Filters

Monitoring Performance

Performance of the migration can be viewed in various ways within the Archive Shuttle Web Interface and can be exported to a variety of formats. It is recommended to review the following:

Progress & Performance

Over a period of time, the number of containers (users) processed should start to increase. In fact, all the statistics should rise towards 100% during the migration. It is also possible to view the information displayed on this page both for the overall progress of the migration and an individual link level. To get a fine level of detail on the progress of a migration, it is also possible to show the graphs on the page as items/second rather than items/hour.

The data which is displayed and the order it is displayed in can be customized to meet the needs of the particular migration. These views can be stored, and can be switched between when required. Each chart or data grid is a widget that can be moved around the screen, or removed by clicking on the small ‘x’ at the top right of the widget. Once removed, widgets can be re-added if required.

Link Statistics

Ensure each link is progressing towards 100% over time

Performance

Once steady-state migration has been reached the number of items being exported and imported per hour should be fairly consistent.

System Health

Ensure that the Free Space amounts do not reach a warning or critical level.

Also ensure that modules are not deactivated or otherwise not running.

Finally ensure that the link ‘used’ space is within normal parameters.

Monitoring Workflows

Monitoring workflows involves reviewing the Stage 1 Status screen for hangs and failures. By default, the number of export errors and the number of import errors are shown as data columns on the screen. Additional data columns can be added (by clicking on Columns in the Actions Bar and dragging additional columns on to the data grid).

The Stage 1 Status screen can also be filtered and sorted to show (for example) all containers with less than 100% imported. This can be done as follows:

1.Navigate to the Stage 1 (Sync Data) screen.

2.Click Reset on the Actions Bar to return the page to the default view.

3.In the text box under “Imported Percentage” enter 100.

4.Click the small filter icon to the right of the text box, and choose “is less than” from the pop-up list.

5.Click the Apply button at the right of the filter row.

The page will now show those containers that have been enabled for Stage 1 that have not reached the 100% import stage.

Monitoring Module-level Activity

Valuable information about module-level responsiveness can be gained from the Module Dashboard in Archive Shuttle. Using the dashboard it is possible to see detailed information relating to the operations being performed by particular modules on particular servers involved in the migration. For example, if new mappings have just been created, and yet one or more Enterprise Vault Export modules are showing no items being exported, it may indicate an area that should be investigated further.

Adjusting Priority of Migrations

When reviewing the data synchronization on Stage 1, it might be necessary from time to time to adjust the priority order in which mappings run. A good way to do this is to ensure that all new mappings are created with a priority of 10; other priorities can then be assigned on the Stage 1 screen to raise or lower the priority of particular mappings.

info

NOTE: The lower the priority number, the higher the priority.

Monitoring Migrations

Monitoring migrations involves reviewing the Stage 2 Status screen for failures. Each command, performed by the appropriate module, will report back results to the Archive Shuttle Core.

This screen can also be filtered to show (for example) all containers that have not yet finished the migration completely. This can be done as follows:

1.Navigate to the Stage 2 (Switch User) screen.

2.Click Reset on the Actions Bar to return the page to the default view.

3.In the drop-down list under “Finished”, select No.

4.Click the Apply button at the right of the filter row.

The page will now show those containers that have been enabled for Stage 2, but have not yet completed the migration.

Pausing, Resuming, and Skipping Workflow Steps

Particular steps in a Workflow can be paused, resumed or even skipped. This can be done as follows:

1.Navigate to the Stage 2 Status screen.

2.Click Reset on the Actions Bar to return the page to the default view.

3.Select one or more containers.

4.Click the appropriate action button, e.g., Pause, Resume, or Skip.

Changing Workflows

At any time a particular container mapping can have the workflow changed to a new one. In order to do this, perform the following:

1.Navigate to the Stage 2 (Switch User) screen.

2.Click Change Workflow Policy on the Actions Bar.

3.Select a new workflow for this mapping.

4.Click [Save] to commit the change.

Once a new workflow has been selected, all previous workflow steps will be removed for the mapping and the new workflow will begin with the first command in that workflow.

This can also be used to re-run a chosen workflow associated with a container mapping.

Setting Users Up for Migration

Most archive migrations are performed by selecting groups or batches of user archives (containers) to process through Stage 1 and Stage 2. The mapping and selection is performed on the Bulk Mapping screen in the Archive Shuttle Web Interface. This screen can be filtered and sorted with the current list of data columns, and additional data columns can be added to help facilitate the selection of containers.

There are many ways that the selection can be defined. Below is an example of selecting users based on the source Vault Store:

1.Navigate to the Map Containers screen.

2.Click Reset on the Actions Bar to return the page to the default view.

3.In the text box under Link Name, enter the name of one of the source Vault Stores, and press Enter or click into a different text box.

4.Click the Apply button at the right of the filter row.

The page will now refresh to show all archives (containers) in that particular Vault Store (link).

This selection can be further refined, before selecting some or all of the containers and performing the Add Mapping function from the Actions Bar.

Configuring, Loading and Saving Filters

Both the Stage 1 (Sync Data) and the Stage 2 (Switch User) pages in the Admin Interface allow for additional, useful data columns to be added to the screen. These can be re-arranged and sorted as required.

Both of these pages also allow for these filters, and column configurations to be saved under a friendly name. Previously saved filters can be loaded, allowing an administrator to jump between slightly different views of the migration.

info

NOTE: The filters are saved per user and can be used on any machine that accesses the Admin Interface from the same Windows account.

Logging

Archive Shuttle performs logging for activities in two locations, both of which can be useful for troubleshooting purposes. Logging can be configured to record data at a number of different levels: Info, Debug, Trace, Warn, Error, and Fatal.

Module-Level Logging

Each module records log information at the INFO level, though that can be overridden if required. When the modules are installed, the location of these log files can be chosen. By default, the location is a subfolder called Logs inside the program files folder.

info

NOTE: When the module logs (or default core log from System Configuration) for particular modules are set to a different logging level than 'Info', the logging level is set back to 'Info' after a specific period of time (default 12 hours).
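
The automatic fall-back to 'Info' is a common safeguard against verbose logging being left on indefinitely. A minimal sketch of the pattern with the Python standard library (the helper name and the short timeout are illustrative; the product's default is 12 hours):

```python
import logging
import threading

def set_level_with_revert(logger, level, revert_after_seconds):
    """Raise the logging level temporarily, then drop back to INFO
    after a timeout -- mirroring the automatic revert described above."""
    logger.setLevel(level)
    timer = threading.Timer(revert_after_seconds,
                            lambda: logger.setLevel(logging.INFO))
    timer.daemon = True  # don't keep the process alive for the revert
    timer.start()
    return timer
```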

The current location for the log files can be seen on the Modules page in the Archive Shuttle Admin Interface.

In addition to logging data locally, each module also transmits this log information to the Archive Shuttle Core, using a Web Service.

Each module can log extended information about item-level migration details; for example, on successful migration of an item an EV Import module may log the Archive Shuttle reference for an item, the source item ID and the target item ID.

Core-Level Logging

The Archive Shuttle Core logs information as well as the modules. The location where the log files will be written can be configured during the installation of the Archive Shuttle Core. In addition to the core operations, the Archive Shuttle Core will also log user interface actions to a file on the Archive Shuttle Core.

The Archive Shuttle Core also receives logging from each module. These files have .Client in their filename. This means it is not normally necessary to get log files from the servers where modules are installed.
