Best practices for Capture/Replay in an Oracle RAC environment:
1. FGA capture records activity on any valid table in the specified schema, so the user may first need to create custom tables in that schema.
2. On the “Select Capture Method” page, the user can switch the tablespace used for the FGA capture tables. If the connection user's default tablespace does not have enough space, the user can select a larger tablespace to store the capture tables generated by the FGA capture.
3. Explanations of the server directory and console directory are provided on the “Server Directory” and “Console Directory” pages.
4. On the “Capture and Export Scope” page, the user can specify any schema to which the custom tables belong. For example, to capture activity on tables in schema X, specify schema X.
5. On the “Filter Settings” page, the user can define filters; any activity matching a filter will not be captured.
6. After the capture is submitted on the “Finish” wizard page, it starts automatically, and all activity on tables in the specified schema is captured.
7. After the capture finishes, the FGA capture automatically generates a scenario XML file, which the user can then use to replay the capture.
Note: All audit trail records, including SQL text and SQL binds, are written to the SYS.FGA_LOG$ table in the database. Because SYS.FGA_LOG$ resides in the SYSTEM tablespace, the user may need to consider increasing the size of the SYSTEM tablespace.
In a RAC environment, the server directory on all nodes should be set to the same path, or set to a shared cluster file system (OCFS) path.
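To gauge whether the SYSTEM tablespace needs more room before a capture, the remaining free space and the current size of SYS.FGA_LOG$ can be checked through the standard data dictionary views. This is a minimal sketch, assuming a connection with DBA privileges:

```sql
-- Free space remaining in the SYSTEM tablespace
SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
FROM   dba_free_space
WHERE  tablespace_name = 'SYSTEM'
GROUP  BY tablespace_name;

-- Current size of the FGA audit trail table
SELECT segment_name, bytes/1024/1024 AS size_mb
FROM   dba_segments
WHERE  owner = 'SYS' AND segment_name = 'FGA_LOG$';
```

Comparing these two numbers before and after a trial capture gives a rough estimate of how much SYSTEM tablespace growth to expect per capture window.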
8. Before starting the capture, define the application you want to capture. Do not perform a database-wide capture: depending on the transaction rate of the database, it can generate very large amounts of data and include SQL that is unnecessary for a successful replay, such as background database activity. Defining the application/schema up front captures only the most important workload and keeps the amount of captured data to the minimum required.
9. Start with a shorter capture time. The default value of 30 minutes is a good starting point, and you can go higher if necessary. Keeping the capture window small initially lets you determine how much data is being generated and what the system overhead will be. The overhead is difficult to predict, since it depends on the overall transaction rate, the number and size of the tables being recorded, and the capture scope.
10. Monitor the database being captured during the first few captures to verify the impact of the capture (overhead and amount of data). If the numbers are not satisfactory, stop the capture and adjust it.
11. Make sure that you can restore the database on the target replay system to the same state it was in before the capture started. You can do this either through BMF or using your own backup method. This is important; otherwise, data integrity errors may occur during replay.
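When monitoring the first few captures (steps 9 and 10), one way to see how much data is accumulating, and which schemas are producing it, is to count the FGA audit records written during the capture window. A hedged sketch using the standard DBA_FGA_AUDIT_TRAIL dictionary view:

```sql
-- FGA audit records captured per schema over the last hour
SELECT object_schema, COUNT(*) AS captured_stmts
FROM   dba_fga_audit_trail
WHERE  timestamp > SYSDATE - 1/24
GROUP  BY object_schema
ORDER  BY captured_stmts DESC;
```

If a schema outside the intended capture scope shows a large row count, that is a sign the filters or the schema selection should be tightened before running a longer capture.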