How to Install

 -- (c) Kyle Hailey 2007

 GOAL
 ----------------------------------------------------------
   Collect active session history on a TARGET machine 
   Save data on REPOSITORY machine

 INSTRUCTIONS
 ----------------------------------------------------------


 1) Edit <a href=http://www.perfvision.com/ftp/sashpack151107.sh>sashpack.sh</a>
    Modify:

      # TARGET - install script is run on this machine
        SCHEMA=sys
        PW="sys as sysdba"

      # REPOSITORY - need schema/password and HOST and SID
        REPO_SCHEMA=sash
        REPO_PW=sash
        REPO_SID=cdb
        REPO_HOST=cont01
        REPO_PORT=1521

    Where PW is the sys password on the TARGET machine.
    The REPO_ variables are the connect information for the REPOSITORY machine.
    The repository machine is meant to be a remote machine,
    but it can also be the same machine as the target.

    The ash repository user needs to be created with something like <a href=/sql/ash/ashcru.sql>create sash user</a>

    NOTE: you need the partitioning option on the REPOSITORY.
    If you don't have it, there are two options:
    a) just take the partition part out of the v$ash_all create statement
       and don't run the partition management job
       (not sure how you'll be able to efficiently purge in this case)
    b) edit ashpack.sh to use poor man's partitions
       I did this but don't have time to maintain two versions of ashpack.sh.
       Basically I created 7 tables called v$ash_1 to v$ash_7,
       created v$ash_all as a UNION ALL of them,
       then insert into one of them depending on the day of the week,
       and truncate the oldest one when the week rolls over.
       See <a href=/sql/ash/ashpack_poor.sh>poor man's ashpack</a> for ideas only (way out of date)
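    The poor man's partitioning scheme described above can be sketched roughly as
    follows. This is an illustration only, not the actual ashpack script: the
    column list is abbreviated and the table/view names simply follow the
    description above.

```sql
-- Sketch only: one table per weekday plus a UNION ALL view over them.
-- The real v$ash tables carry many more columns than shown here.
create table v$ash_1 (sample_time date, session_id number, sql_id varchar2(13));
-- ...repeat the same create for v$ash_2 through v$ash_7...

create or replace view v$ash_all as
  select * from v$ash_1 union all
  select * from v$ash_2 union all
  -- ...v$ash_3 through v$ash_6...
  select * from v$ash_7;

-- The collector would insert into the table for the current day of week,
-- e.g. using to_char(sysdate,'D') to pick a value 1..7, and at rollover
-- truncate the table about to be reused:
truncate table v$ash_1;
```

    The UNION ALL view gives queries a single name (v$ash_all) to read from,
    while purging becomes a cheap TRUNCATE instead of a DELETE.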

 2)  On each TARGET machine 
   
     run the install script
    
     $ sh ashpack.sh
  
     The script will give lots of errors about table creation if this is not the first target.
     You can ignore these errors.
     The important part is whether the package is created at the end, and whether the 3 calls
     to the package are successful. These are the last 3 commands of the script.
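     A quick way to check that the install succeeded on the TARGET is to look the
     package up in the dictionary (the package name ASH follows from the
     ash.collect / ash.get_all calls used by the jobs; DBA_OBJECTS requires DBA
     privileges, otherwise use USER_OBJECTS as the install schema):

```sql
-- Verify the collection package and its body compiled cleanly on the TARGET:
select object_name, object_type, status
  from dba_objects
 where object_name = 'ASH'
   and object_type like 'PACKAGE%';
-- STATUS should be VALID for both PACKAGE and PACKAGE BODY.
```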

 3)  Then set up a job to run the collection on the TARGET machine
     On each target machine, run script
        <a href=ashstart.sql>ashstart.sql</a>
     to kill all previous ash jobs and restart them, or by hand do:

       variable job number
       -- collect ASH data
       begin
         dbms_job.submit( job       => :job
                        ,what      => 'ash.collect(3,1200);'
                        ,next_date => sysdate
                        ,interval  => 'trunc(sysdate+(1/24),''HH'')'
                        );
       end;
 /
       -- collect related data like SQL text, SQL stats, usernames, object names
       begin
        dbms_job.submit(:job,
                        'ash.get_all;',
                        sysdate,
                        'trunc(sysdate+(1/24),''HH'')'
                        );
       end;
 /
     commit;
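     To confirm the two jobs were scheduled, the job queue can be queried
     (DBA_JOBS needs DBA privileges; USER_JOBS shows the same rows for the
     owning schema):

```sql
-- Check that both collection jobs are queued and when they next run.
-- BROKEN should be 'N' for a healthy job.
select job, what, next_date, next_sec, broken
  from dba_jobs
 where what like 'ash.%';
```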

 4)  On the REPOSITORY machine (i.e. this only needs to be run once)
    set up the partition management job,
    which creates a new partition for each day
    and drops partitions over 7 days old
    (otherwise the collection will error out after a day
     because no new partitions have been created,
     and of course the old ones can't be dropped)
    run
     <a href=ashstart_mng.sql>ashstart_mng</a>
    or by hand do:

     var job number;

     begin
       sys.dbms_job.submit
         ( job       => :job
          ,what      => 'ashpkg.part_management(''V$ASH_ALL'',7);'
          ,next_date => sysdate
          ,interval  => 'trunc(sysdate)+1'
          ,no_parse  => TRUE
         );
     end;
 /
     commit;
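     What the management job does each day amounts to DDL along these lines,
     assuming v$ash_all is a range-partitioned table as the install note in
     step 1 suggests (partition names and date ranges here are purely
     illustrative):

```sql
-- Pre-create tomorrow's partition so inserts don't fail at midnight
-- (names and high values are illustrative):
alter table v$ash_all
  add partition ash_20240102
  values less than (to_date('2024-01-03','YYYY-MM-DD'));

-- Purge by dropping the partition holding data more than 7 days old:
alter table v$ash_all drop partition ash_20231226;
```

     Dropping a whole partition is what makes the 7-day purge cheap; without
     partitioning you would be deleting rows instead.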


      
 Explanation
 ----------------------------------------------
 The ash.collect job samples active session information every 3 seconds
 by default and stores it in V$ASH_ALL on the REPO machine.
 The REPO machine's job ashpkg.part_management creates new partitions
 once a day for the next day and purges data over 7 days old.
 Go to the REPO machine to get performance data.
 See <a href=/sql/ash/note.html>analysis</a> for ideas on
 mining the data
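 As a starting point for mining, a query along these lines gives the top wait
 events over the last hour (the column names SAMPLE_TIME and EVENT are
 assumptions about the v$ash_all layout, modeled on Oracle's own ASH views;
 adjust to the actual columns the install script creates):

```sql
-- Top wait events sampled in the last hour, busiest first.
select event, count(*) as sample_count
  from v$ash_all
 where sample_time > sysdate - 1/24
 group by event
 order by count(*) desc;
```

 Because each row is a ~3-second sample, sample_count is a rough proxy for
 time spent: more samples on an event means more time waiting on it.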