Oracle Database 12c: Administration Workshop


Friday, December 20, 2019



The Oracle Database 12c: Administration Workshop is designed to provide you with a firm foundation in administration of an Oracle Database. You will learn about the Oracle Database architecture, and you will be introduced to Oracle Database Cloud Service.

Disclaimer

This document contains proprietary information and is protected by copyright and other intellectual property laws. You may copy and print this document solely for your own use in an Oracle training course. The document may not be modified or altered in any way. Except where your use constitutes "fair use" under copyright law, you may not use, share, download, upload, copy, print, display, perform, reproduce, publish, license, post, transmit, or distribute this document in whole or in part without the express authorization of Oracle.

The information contained in this document is subject to change without notice. If you find any problems in the document, please report them in writing to Oracle. This document is not warranted to be error-free. Other names may be trademarks of their respective owners.

Technical Contributors and Reviewers: James L. Spiller, Keesling, Darryl Balaski, Rainer Bien, Maria Billings, Andy Fortunak, Gwen Lazenby


Some administrators choose not to back up tablespaces containing data or objects that can be easily re-created, such as indexes or data that is frequently batch-loaded. Click the Policy tab to configure the backup retention policy. RMAN automatically deletes old backups to make room for new ones if the retention policy allows it. The retention policy can be specified as a number of backups (redundancy) or a number of days (a recovery window). You can also indicate here whether you intend to create incremental backups.
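For reference, the retention policy described above maps to RMAN CONFIGURE commands; a minimal sketch (the values are illustrative):

```
RMAN> -- Keep enough backups to recover to any point in the last 7 days
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

RMAN> -- Or: keep at least 2 backups of each data file
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
```

Only one form of the policy is in effect at a time; configuring one replaces the other.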

This setting enables you to skip backing up read-only and offline data files. With the Oracle-suggested strategy, the recovery time could reach back as far as 48 hours, because just before a backup is taken on a given day, the on-disk copy still reflects the state as of the previous day's backup. The Oracle-suggested strategy uses the incremental backup and incrementally updated backup features.

Because these backups on disk are retained, Enterprise Manager Cloud Control makes it easy for you to set up an Oracle-suggested backup strategy that protects your data and provides efficient recoverability to any point in the preceding 24 hours. Because the first backup is a whole database backup, it may take considerable time. Both strategies enable you to set up encrypted backups. By clicking Schedule Customized Backup, you can tailor the backup to your own requirements.

Select the objects that you want to back up: the whole database (the default) or individual tablespaces. The Oracle-suggested strategy takes a full database copy as the first backup; after that, incremental backups are taken. The Backup Strategies section enables you to select from the Oracle-suggested and Customized backup strategies. To establish an Oracle-suggested strategy, use Enterprise Manager Cloud Control. You should also plan for the loss of all control files. Although it is very unlikely that a properly configured database, with multiple copies of the control file placed on separate disks and separate controllers, would lose all control files at the same time, it can happen.

Control files have an additional backup option in Enterprise Manager Database Express: a control file trace backup. A control file trace backup contains the SQL statement that is required to re-create the control files in the event that all control files are lost.
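The trace backup described above is taken with a SQL command; a minimal sketch (the file path is illustrative):

```sql
-- Write the CREATE CONTROLFILE script to a location you choose
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/recreate_cf.sql';

-- Or write it to the trace directory (default behavior)
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```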

Deleting obsolete backups removes backups older than the retention policy. The catalog task identifies files that RMAN does not know about and adds them to the catalog; this is needed, for example, if you delete a backup without using RMAN.

At the top of the Manage Current Backups page, you can perform backup maintenance tasks. RMAN can automatically delete obsolete backups. Deleting expired backups removes the catalog listing for any backups that are not found when the cross-check is performed, as described previously. Using RMAN to make an image copy of all the database files simply requires mounting or opening the database; start RMAN in a terminal session and connect to the target. The DELETE INPUT option causes RMAN to remove the archive log files after backing them up. A whole database backup is a copy of all data files and the control file.
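The maintenance tasks above correspond to RMAN commands; a sketch (the catalog path is illustrative):

```
RMAN> CROSSCHECK BACKUP;                 -- mark missing backup pieces as EXPIRED
RMAN> DELETE EXPIRED BACKUP;             -- remove catalog entries for missing pieces
RMAN> DELETE OBSOLETE;                   -- remove backups outside the retention policy
RMAN> CATALOG START WITH '/u01/backup/'; -- add files RMAN does not know about
```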

In that case, execute the configuration commands in a terminal session. You can also create a backup (either a backup set or image copies) of previous image copies of all data files and control files in the database; this is especially useful if you are not using a fast recovery area. At startup, the database checks whether all redo log groups known to the control file have at least one member present.
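The whole-database copy and backup-of-copies operations mentioned above can be sketched with RMAN commands like these:

```
RMAN> BACKUP AS COPY DATABASE;              -- image copies of all data files
RMAN> BACKUP AS BACKUPSET COPY OF DATABASE; -- backup set built from existing image copies
```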

As a database moves from the shutdown stage to being fully open, it performs consistency checks at each stage:

- At mount, the instance reads the control files. If even one control file is missing or corrupt, the instance reports an error and cannot mount the database.
- At open, the instance verifies that all data files known to the control file are present, unless they have been taken offline. Offline files are not checked until the administrator tries to bring them online. If any files are missing, an error is returned and the database cannot be opened.
- The instance also checks whether all redo log groups known to the control file have at least one member present. Any missing members are noted in the alert log. Even if an inactive redo log group is lost, the instance can continue to operate.

To find all files that need recovery, the instance verifies that all data files that are not offline or read-only are synchronized with the control file. Files that are present but require media recovery are listed, and if any files require media recovery, the database cannot be opened until that recovery is performed.

After a database is open, the loss of certain files causes the instance to fail. Recovering from these types of media failure must be done with the database down. The loss of data files belonging to other tablespaces does not cause instance failure; as a result, those files can be recovered with the database open. Data failures can be very serious. If you are in a Data Guard configuration, fail over to a standby database; this allows users to come back online as soon as possible.

The Data Recovery Advisor automatically gathers data failure information when an error is encountered. In addition, it can proactively check for failures. Note that repairs are always under human control. Some data failures, like block corruptions in data files, are not catastrophic because they do not take the database down or prevent you from opening the Oracle database. The Data Recovery Advisor handles both cases. The preferred way to address serious data failures is to repair the primary cause of the data failure.

When failures exist, the Data Recovery Advisor lists them and recommends repairs. Supported database configurations: in the current release, the Data Recovery Advisor cannot use blocks or files transferred from a standby database to repair failures on a primary database. The options for recovery from the loss of a control file depend on the storage configuration of the control files and on whether at least one control file remains or all have been lost. After the control file is successfully restored, you can open the database.

Oracle Database 12c: Administration Workshop I or II

If you are using ASM storage, the control files are managed within the ASM disk group. If the media failure is due to the loss of a disk drive or controller, restore the lost copy to a different location. Remember that Oracle recommends having at least two control files at all times. If your control files are stored as regular file system files and at least one control file copy remains, you can recover by copying the surviving control file over the missing one. Recovering from the loss of all control files is covered in the follow-on Oracle Database 12c administration course. Recovering from the loss of a single redo log group member should not affect the running instance; you can perform this recovery by using SQL commands.

Restore the missing file by first dropping the lost redo log member and then adding a new member to the group. If you are using Oracle Managed Files (OMF) for your redo log files and you want to ensure that the new redo log member is an OMF file, use the OMF-aware syntax when adding the member. Alternatively, you can clear the affected group manually.
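The drop-and-re-add sequence for a lost member might be sketched as follows (group numbers and file names are illustrative):

```sql
-- Drop the lost member, then add a replacement in the same group
ALTER DATABASE DROP LOGFILE MEMBER '/u01/oradata/orcl/redo01b.log';
ALTER DATABASE ADD LOGFILE MEMBER '/u01/oradata/orcl/redo01b.log' TO GROUP 1;

-- Alternatively, clear the affected group (re-initializes all of its members)
ALTER DATABASE CLEAR LOGFILE GROUP 1;
```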

Determine whether there is a missing log file by examining the alert log. Enterprise Manager does not allow you to clear a log group that has not been archived, because doing so breaks the chain of redo information. If you clear an unarchived log group, take a full backup immediately afterward; failure to do so may result in a loss of data if another failure occurs.

If you must clear an unarchived log group, do so from SQL*Plus. To restore the entire database after a complete loss:
1. Shut down the instance if it is not already down.
2. Restore the entire database, including all data files and control files. In Enterprise Manager, select Whole Database as the type of recovery.
3. Open the database.
4. Have users re-enter all changes that were made since the last backup.
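If an unarchived log group must be cleared, the typical SQL*Plus syntax looks like this (the group number is illustrative; take a full backup immediately afterward):

```sql
-- Clear a group that has not yet been archived
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
```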

To perform this type of recovery by using Enterprise Manager Cloud Control, select the recovery options; users must then re-enter all changes made since that backup. Click Recover to begin the guided restore and recovery process.

To restore and recover the missing data file by using Enterprise Manager Cloud Control, click Add to select the data files to restore and recover, and then submit the RMAN job to restore and recover the missing files. The rest of the database remains available for users to continue work.

In the User Directed Recovery section, specify whether you want to restore the files to the default location or (if a disk or controller is missing) to a new location. A loss of one of these files requires the database to be restored from the MOUNT state, unlike other data files that may be restored with the database open. If the instance is not already shut down, shut it down, and then mount the database. To perform this recovery by using Enterprise Manager Cloud Control, click Add to select all data files that need recovery.

The instance may or may not shut down automatically; if it does not, shut it down manually. Users are not required to re-enter data, because the recovery is up to the time of the last commit.

In the User Directed Recovery section, restore and recover the missing data file. Quiz options:
a. Loss of a control file, if there is a remaining multiplexed control file
b. Loss of one redo log member, if there is a remaining multiplexed redo log member from the same group as the lost member
d. Loss of the active undo tablespace

Overview: this practice covers recovering from the loss of the files discussed in this lesson.

In the Oracle Data Pump architecture, the master table is used by worker processes for all metadata unloading and loading. Data Pump has been integrated with external tables. Among its major functional components, Oracle Database supports a Direct Path API interface that minimizes data conversion and parsing at both unload and load time.

All stopped Data Pump jobs can be restarted without loss of data as long as the metainformation remains undisturbed.

The ability to detach from and re-attach to long-running jobs without affecting the job itself enables you to monitor jobs from multiple locations while they are running. As a server-based facility for high-speed data and metadata movement, Oracle Database provides the Data Pump tools.

Oracle Data Pump enables very high-speed loading and unloading of data and metadata for Oracle databases. In some cases, Data Pump uses external tables rather than direct path to move the data. All stopped jobs can be restarted; it does not matter whether a job is stopped voluntarily or involuntarily due to a crash. Data Pump offers many benefits and many features. You can specify the database version for objects to be moved, using the VERSION parameter, to create a dump file set that is compatible with a previous release of Oracle Database that supports Data Pump.

If you compress the metadata, the dump file set is smaller. In addition, you can specify a percentage of data to be sampled and unloaded from the source database when performing a Data Pump export.

Network mode enables you to export from a remote database directly to a dump file set; this is done by using a database link to the source system. You can specify a particular encryption algorithm to use during an export. This matters when the dump file set will be imported into a different or remote database and must remain secure in transit.

Or perhaps the dump file set will be imported on-site using the Oracle Encryption Wallet, but it may also need to be imported off-site where the Oracle Encryption Wallet is not available.

You can choose to encrypt both data and metadata. You can specify the type of security to use for performing encryption and decryption during an export.

Directory objects provide greater flexibility for file management. They contain the location of a specific operating system directory and are owned by the SYS user.

To edit or delete a directory object, you need the appropriate privileges. Directory objects are required when you specify file locations for Data Pump, because it accesses files on the server rather than on the client. The directory object name can be used in Enterprise Manager so that you do not need to hard-code directory path specifications.

Directory names are unique across the database because all the directories are located in a single namespace (that is, SYS). On the Directory Objects page, enter the name of the directory object and the OS path to which it maps. You can manage user privileges on individual directory objects. Permissions for directory objects are not the same as OS permissions on the physical directory on the server file system. OS directories should be created before they are used.
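Creating a directory object and granting access to it might be sketched as follows (the object name, path, and grantee are illustrative; the OS directory must already exist on the server):

```sql
-- As a privileged user
CREATE DIRECTORY dp_dir AS '/u01/app/oracle/dpdump';

-- Directory object privileges are separate from OS permissions
GRANT READ, WRITE ON DIRECTORY dp_dir TO hr;
```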

You can test this by clicking Test File System. Click Show SQL to view the underlying statements.

Click OK to create the object, and click Return when finished. On the Privileges page, you can grant access to individual users; this increases the level of security and gives you granular control over these objects.

At the center of every Data Pump operation is the master table (MT). The MT maintains all aspects of the job and is dropped when the Data Pump job finishes normally. Data Pump Export is a utility for unloading data and metadata into a set of operating system files called dump file sets. Network mode is particularly useful to export data from a read-only source database.

Export over a database link is known as network mode. The MT is built during a file-based export job and is written to the dump file set as the last step. Data Pump Import is used to load metadata and data stored in an export dump file set into a target system. These utilities can also be used to export from a remote database directly to a dump file set.
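Typical expdp invocations for file-based and network-mode exports look like the following sketch (the directory object, file names, schema, and database link are illustrative):

```shell
# Schema-mode export into the dp_dir directory object
expdp hr DIRECTORY=dp_dir DUMPFILE=hr.dmp LOGFILE=hr_exp.log SCHEMAS=hr

# Network-mode export from a remote source via a database link
expdp system DIRECTORY=dp_dir DUMPFILE=remote.dmp NETWORK_LINK=src_db SCHEMAS=hr
```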

Interactive-command mode is also enabled when you attach to an executing or stopped job.


You can interact with Data Pump Export and Import by using one of the following interfaces: the command line; a parameter file, which enables you to specify all command-line parameters in a file; and interactive-command mode, which stops logging to the terminal and displays the export or import prompts. Data Pump Export and Import provide different modes for unloading and loading different portions of the database. The mode is specified on the command line by using the appropriate parameter.

The available modes are listed in the slide and are the same as in the original export and import utilities. Additional options are available if a privileged user is connected to the database instance. From the Database home page, Enterprise Manager Cloud Control provides a wizard to guide you through the process of performing a Data Pump export or import.

The next window that appears enables the selection of the export type. The example in the slide shows a Data Pump export; another example illustrates a Data Pump import using the impdp utility. Oracle Database also includes Data Pump command-line clients for import and export operations, and some parameters are available only through the command-line interface; for a complete list of options, see the utilities documentation. Data Pump Import supports several transformations: because object metadata is stored as XML in the dump file set, you can remap schemas, tablespaces, and data files during import.
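The remapping transformations mentioned above use the REMAP_* parameters of impdp; a sketch (the schema and tablespace names are illustrative):

```shell
# Remap the HR schema to DEV and the USERS tablespace to DEV_TBS during import
impdp system DIRECTORY=dp_dir DUMPFILE=hr.dmp \
      REMAP_SCHEMA=hr:dev REMAP_TABLESPACE=users:dev_tbs
```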

Global options are set at the top of the control file. SQL*Loader has a powerful data parsing engine that puts little limitation on the format of the data in the data file; a particular data file can be in fixed record format, among others.

Control file: although not precisely defined by format, it describes the load. If no record format is specified, the default is used. Input data files: contain the data to be loaded. Discard file: created only when it is needed, and only if you have specified that a discard file should be enabled.

Bad file: contains records that are rejected. If SQL*Loader cannot create a log file, the load terminates. The control file contains blocks of information about each table, such as the table name and the columns of the table into which the data is to be loaded.

If the Oracle database determines that the row is valid, it is inserted; if the row is determined to be invalid, it is written to the bad file. The discard file contains records that are filtered out of the load because they do not match any record-selection criteria specified in the control file. Log file: contains a detailed summary of the load. Comments can appear anywhere in the command section of the control file.

Precede any comment with two hyphens; all text to the right of the double hyphen is ignored until the end of the line. The control file is also used if you are continuing a load that has been interrupted in progress. The remainder of the control file contains the field list.

In this example, the decimal point delimits dollars and cents in the SAL field and causes records to be rejected if SAL has no value. The control file defines the relationship between records in the data file and tables in the database.
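A hypothetical control file showing the pieces discussed above (table, field list, bad and discard files; all names are illustrative):

```
-- Loads comma-separated rows into the EMP table
LOAD DATA
INFILE 'emp.dat'
BADFILE 'emp.bad'        -- rejected records
DISCARDFILE 'emp.dsc'    -- records matching no selection criteria
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, sal)
```

It would be invoked with something like `sqlldr hr CONTROL=emp.ctl LOG=emp.log`.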

Each conventional data load may involve several transactions. A direct path load eliminates much of the Oracle database overhead by formatting Oracle data blocks and writing the data blocks directly to the database files. A direct load does not compete with other users for database resources. Direct path loads use data saves to write blocks of data to Oracle data files.

This is why direct path loads are faster than conventional ones. You can load data by using Enterprise Manager Cloud Control: in the Schema menu, select the data loading option, and click Continue to invoke the wizard. On the first page, you indicate whether you are using an existing control file or want a new control file to be generated. You can override most of the defaults by specifying additional parameters on the command line.

The names of the log files come from the name of the table by default.

The table HR.TEST was created with three columns: C1, C2, and C3. The generated control file records the insert option in effect for this table and ends with "End of generated control file for possible reuse." There are two access drivers used with external tables. You can connect to the database and create metadata for the external table using DDL.

The DDL for an external table consists of two parts: the column definitions and the access parameters that describe the external source. An external table does not describe any data that is stored in the database, nor does it describe how data is stored in the external source. The binary files have the same format as the files used by the Data Pump Import and Export utilities and can be interchanged with them. It is the responsibility of the access driver and the external table layer to do the necessary transformations required on the data in the external file so that it matches the external table definition.

External tables access data in external sources as if it were in a table in the database. External tables are read-only. One access driver uses text files as the data source; the other uses binary files as the external files. External data can be queried and joined directly, in parallel, with tables residing in the database.
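The two DDL parts described above (column definitions plus access parameters) might be sketched as follows for the text-file access driver (the directory object, file name, and columns are illustrative):

```sql
CREATE TABLE ext_emp (
  empno  NUMBER(4),
  ename  VARCHAR2(20),
  sal    NUMBER(8,2)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER          -- text-file access driver
  DEFAULT DIRECTORY dp_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.dat')
)
REJECT LIMIT UNLIMITED;

-- Queried just like an internal table
SELECT ename, sal FROM ext_emp WHERE sal > 1000;
```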


The data files created for the external table can be moved and used as the data files for another external table in the same database or in a different database.

The only restriction is that the metadata for all the external tables must be exactly the same. The metadata for an external table is created by using the SQL language in the database.

Best-practice tip: if you have a lot of data to load, consider loading it in parallel. This command does not create the external text files. Any extra files that are specified are ignored.

The external table is read-only after it has been populated. There is never more than one parallel execution server writing into one file at a particular point in time. Because the external table can be large, parallel access is valuable, but a parallel populate operation behaves differently from a parallel query from an external table. External tables are queried just like internal database tables.


The results of such a query are not stored in the database. The third example in the slide illustrates the direct appending of internal table data with a query that loads data from an external table. The data dictionary views in the slide provide information about external tables, including descriptions of the columns of the tables.

Other views provide descriptions of the directory objects in the database. Alerts notify you of potential problems and often also provide recommendations on how to resolve the reported problem.

The Oracle database server performs maintenance checks at regular intervals. Reactive database maintenance includes critical errors and conditions discovered by database health checkers. Proactive database maintenance is made easy by the sophisticated infrastructure of the Oracle database.

The captured data can be analyzed by you. An alert indicates a potential problem. The Alert History page displays a chart that shows the alert history of the current database in segments of time that you designate. Click the metric name listed on the Alert History page to get detailed statistics. Optimizer statistics are used by the query optimizer to choose the best execution plan for each SQL statement.

AWR snapshots include database statistics and metrics. The data-gathering infrastructure collects data that provides database and object detail:
- Optimizer statistics: used by the query optimizer
- Database statistics: provide information for performance monitoring
The statistics calculation for the default baseline is scheduled for every Saturday at midnight by default, and the baseline becomes valid after sufficient data has been collected and the statistics calculation occurs. The baseline is used for performance comparison. The System Moving Window baseline is a changing set of snapshots that includes the last eight days of snapshots by default. Statistics are collections of data that provide more details about the database and the objects in it.

The System Moving Window baseline is collected by default. Direct data manipulation language (DML) commands against the repository tables are not supported. You can modify both the snapshot interval and the retention interval. The AWR contains hundreds of tables; you can view it as a data warehouse for database statistics. Snapshots are taken every 60 minutes by default.

Statistics are stored in persistent storage for several reasons, such as surviving instance restarts and enabling historical comparison. AWR snapshots are accessible through data dictionary views and Enterprise Manager.

The memory version of the statistics is transferred to disk on a regular basis by the MMON background process; the transfer also occurs when old statistics are replaced by new ones because of memory shortage. The AWR infrastructure has two major parts: the in-memory statistics collection and the snapshots in persistent storage.

A snapshot is a set of performance statistics captured at a certain time and stored in the AWR. Because the database advisors rely on these snapshots, they are generated automatically. Taking manual snapshots is supported in conjunction with the automatic snapshots that the system generates; manual snapshots are expected to be used when you want to capture the system behavior at two specific points in time that do not coincide with the automatic schedule. Statspack snapshots can be marked for retention, but Statspack is not accessible through Enterprise Manager.

The Statspack utility does provide scripts for setup. You can also set up threshold-based alerts by using baselines from Enterprise Manager. A baseline identifier that is unique for the life of a database is assigned to the newly created baseline. A baseline is usually a set of snapshot data for an important period that you tag and retain in the AWR. The default expiration is NULL, meaning the baseline is kept indefinitely.

A baseline is defined on a pair of snapshots; each snapshot set has starting and ending snapshots and includes all the snapshots in between. A baseline is identified by a user-supplied name. You can set the expiration time, in number of days, with the expiration parameter of the baseline creation procedure.
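Manual snapshots and baselines are created with the DBMS_WORKLOAD_REPOSITORY package; a sketch (the snapshot IDs and baseline name are illustrative):

```sql
-- Take a manual snapshot
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Create a baseline on a pair of snapshots
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
    start_snap_id => 100,
    end_snap_id   => 110,
    baseline_name => 'batch_window_bl',
    expiration    => 30);   -- retain for 30 days; NULL means keep indefinitely
END;
/
```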

You can specify an expiration value to indicate the number of days that the snapshot set will be retained. Note: snapshot sets are used to retain snapshot data. On the Automatic Workload Repository page, you can change these settings. Remember that decreasing any of these settings affects the functionality of components that depend on the AWR. Increasing the settings can provide improved advisor recommendations, but at the cost of the space that is required to store the snapshots and the performance expended in collecting the snapshot information.

Consider setting the statistics collection level to ALL when tuning a new application; when tuning is complete, return it to the default. AWR settings include the retention period, the snapshot interval, and the collection level.
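The retention period and snapshot interval mentioned above can be changed programmatically; a sketch (both values are in minutes and are illustrative):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 11520,   -- 8 days
    interval  => 30);     -- snapshot every 30 minutes
END;
/
```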

The default retention is eight days; consider your storage needs when changing it. These defaults represent what is typically needed to monitor Oracle database behavior. The automatic maintenance tasks include gathering optimizer statistics; this automatic gathering reduces the likelihood of poorly performing SQL statements due to stale or invalid statistics. As hardware resources increase, more maintenance work can be completed within the maintenance windows.

If your database supports a data warehouse application that has large tables and indexes, a larger block size may be beneficial.
