Oracle DBA Exam #1Z0-023: 8i Architecture and Administration
9 questions. Answers can be found at the end of the pop quiz.
courtesy of MeasureUp
Questions
1) Oracle8i uses something called a fast COMMIT to guarantee that data can be recovered in the event of a catastrophic failure. Which of the following properties do NOT describe a fast COMMIT (choose all that apply)?
a. Dirty data buffers are flushed to disk.
b. The User Process is informed that the COMMIT has finished.
c. The LGWR process writes all redo log buffers up to the point of the commit to disk.
d. The LGWR process places a commit record (SCN - system change number) in the redo log buffer.
2) A table named SCOTT.CUSTOMER_CONTACTS has been exhibiting poor performance, and you think that the table may be suffering from row migration. Which of the following ANALYZE commands will help you get the most conclusive information to help you investigate this theory?
a. ANALYZE TABLE SCOTT.CUSTOMER_CONTACTS ESTIMATE STATISTICS;
b. ANALYZE TABLE SCOTT.CUSTOMER_CONTACTS COMPUTE STATISTICS;
c. ANALYZE TABLE SCOTT.CUSTOMER_CONTACTS VALIDATE STRUCTURE;
d. ANALYZE TABLE SCOTT.CUSTOMER_CONTACTS ANALYZE ALL;
3) You have determined that data access to your CUSTOMERS table would benefit from having a bitmap index created on the REGION column. Which line in the following CREATE INDEX statement must be changed in order to create this index?
CREATE BITMAP INDEX CUSTOMERS_REGION_IDX
ON CUSTOMERS(REGION) REVERSE
PCTFREE 30
STORAGE (INITIAL 100K NEXT 100K
PCTINCREASE 0 MAXEXTENTS 100)
TABLESPACE INDX
a. TABLESPACE INDX
b. ON CUSTOMERS(REGION) REVERSE
c. CREATE BITMAP INDEX CUSTOMERS_REGION_IDX
d. PCTFREE 30
4) You have created a profile called CLERKS and you want these policies to apply to the user BLAKE. Which of the following commands should you issue to accomplish this?
a. ALTER USER BLAKE PROFILE CLERKS;
b. ALTER USER BLAKE SET PROFILE = CLERKS;
c. ALTER PROFILE CLERKS ADD USER (BLAKE);
d. GRANT CLERKS TO BLAKE;
5) Which of the following views will show you the NLS settings for your current session? (choose all correct answers)
a. NLS_INSTANCE_PARAMETERS
b. NLS_SESSION_PARAMETERS
c. V$NLS_PARAMETERS
d. NLS_DATABASE_PARAMETERS
6) You are preparing to create a new Oracle8i database. In order to create your new database, you need to complete the following steps. In what order must these steps be completed?
A) CREATE DATABASE
B) cp init.ora initswamp.ora
C) @catproc.sql
D) db_name=swamp
a. B, D, A, C
b. C, D, B, A
c. D, B, C, A
d. B, A, D, C
7) When you issue the STARTUP OPEN command, where does the server find information about the names and locations of data files, log files and checkpoint information?
a. Control file
b. SYSTEM tablespace
c. Parameter file
d. Data dictionary
8) As a consultant, a new client asks you to provide an assessment of their backup strategy. In gathering your research, you execute the ARCHIVE LOG LIST command. What information does this command give you?
a. A list of archived redo log files.
b. The current status of automatic archiving.
c. What log file members are associated with which log groups.
d. A listing of redo log files that have been archived since the last backup and are still stored in the control file.
9) You want to build a report that will show the current usage of every block in the USERS tablespace. The report should allow a reader to tell whether or not a block is allocated to a segment, and if so, what the name of that segment is. This can be done by querying which two of the following data dictionary views? (choose two answers)
a. DBA_OBJECTS
b. DBA_TABLESPACES
c. DBA_SEGMENTS
d. DBA_FREE_SPACE
e. DBA_EXTENTS
Answers
1) a, d are correct. This question tests your understanding of how COMMITs are processed. The Oracle8i Server uses a fast COMMIT to ensure that committed data changes can be recovered after a serious failure. The fast COMMIT is also designed to be time-independent: the amount of time needed to execute a COMMIT does not depend on the size of the transaction. This is very important for database performance.
The first step in the execution phase of a COMMIT is for the Server Process to write a SCN (system change number) to the redo log buffers indicating the event. The SCN is essentially a sequential timestamp that is unique to the server environment. The purpose of the SCN is to make it possible for the LGWR process to identify all the changes up to and including the COMMIT record. Note that it is the Server Process that writes this SCN record to the redo log buffer and NOT the LGWR process.
The next step in processing a COMMIT is for the LGWR to physically write all of the redo log buffer entries up to the point of the COMMIT to disk. After this is completed, Oracle8i can guarantee that these changes can be recovered in the event of a failure. This write operation is very fast sequential I/O, as opposed to the slow random I/O that is required to write the dirty data buffer cache blocks to disk. The reason why this operation is not dependent upon the size of the transaction is that the LGWR process MAY flush the redo log buffers to disk BEFORE the COMMIT takes place. In that case, the only thing that must be written to disk is the SCN record indicating a COMMIT took place.
Dirty data buffers are flushed to disk asynchronously, independent of the COMMIT process. They may be written to disk either before or after a COMMIT. Because the data blocks are scattered randomly on disk, and because whole blocks must be written rather than only the data that changed, this operation is typically much slower than writing redo log buffers to disk.
Thus, the correct answers for this question are:
"Dirty data buffers are flushed to disk."
"The LGWR process places a commit record in the redo log buffer."
2) b is correct. A chained row is a row that is too large to fit into any database block, even an empty one. Oracle stores such large rows by placing a piece of the row in one database block. This first block will also contain a pointer to another database block which contains the second piece of the row. The second database block can point to a third, which can point to a fourth, and so on, until Oracle has successfully stored the large row as a "chain" of row pieces.
Migrated rows are also stored in more than one Oracle block, but for a different reason. If a row is being updated such that it is going to grow, there is a chance that the new version of the row will not fit in the original Oracle block. This is typically caused by setting PCTFREE too low, thus not leaving enough room inside the database block to accommodate row updates. When this happens, Oracle moves the entire row to a new block, but leaves a pointer to the new location inside the old block. This is done so that the Row ID of the data does not change. If the Row ID were to change, all indexes referencing that row would need to be updated as well.
Whether a row is chained or migrated, retrieving such a row requires more than one I/O, since the data resides in more than one Oracle block. While chained rows cannot be avoided (except by converting to a larger DB_BLOCK_SIZE, which means rebuilding the entire database from scratch), migrated rows are the result of setting PCTFREE too low. Also, Oracle never creates a migrated row as the result of an INSERT; they are always the result of an UPDATE. This means that migrated rows can be eliminated by exporting, deleting, and importing the data. So how do you detect chained and migrated rows?
The ANALYZE TABLE command is used to validate structural integrity and to gather useful statistics for tables. In this case we want to gather statistics, including the number of chained and migrated rows. ANALYZE TABLE can put a count or estimate of the number of chained and migrated rows into DBA_TABLES.CHAIN_CNT. To do this, you run either of the following commands:
ANALYZE TABLE table_name ESTIMATE STATISTICS;
or
ANALYZE TABLE table_name COMPUTE STATISTICS;
Either of these commands will fill in the CHAIN_CNT column of DBA_TABLES for the row matching that particular table. However, as the syntax implies, ANALYZE TABLE ... ESTIMATE STATISTICS does not always look at every row of a table. By default, ANALYZE will sample 1064 rows if the ESTIMATE option is used. Because of this, if thorough or conclusive statistics are needed (as in this situation), you will want to use the COMPUTE option. ANALYZE TABLE ... COMPUTE STATISTICS will examine every row in the table when gathering statistics. Of course, this may take substantially longer on a large table.
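For example, after running the COMPUTE command, you could check the results with a query like the following (the owner and table name are taken from the question):
SELECT NUM_ROWS, CHAIN_CNT
FROM DBA_TABLES
WHERE OWNER = 'SCOTT'
AND TABLE_NAME = 'CUSTOMER_CONTACTS';
A CHAIN_CNT that is a large fraction of NUM_ROWS suggests the table would benefit from reorganization.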
There are a couple of things you should keep in mind regarding the statistics gathered by ANALYZE TABLE ... [ESTIMATE | COMPUTE] STATISTICS. First, the resulting CHAIN_CNT column contains a count of both chained and migrated rows, so if you really need to know how many of the rows are migrated (not chained), you may need to resort to other means to investigate which rows are chained and which are migrated. Fortunately, Oracle provides another command, ANALYZE TABLE ... LIST CHAINED ROWS. This command will put the Row ID of each chained or migrated row into a specified table (which can be created using the supplied UTLCHAIN1.SQL script), so that they can be investigated easily. The second thing to remember about the ANALYZE TABLE ... [ESTIMATE | COMPUTE] STATISTICS command is that the statistics it gathers are useful to the Oracle optimizer. For this reason, you should endeavor to maintain accurate and up-to-date statistics for your database tables, especially those that are large and frequently accessed.
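As a sketch, the row-by-row investigation with LIST CHAINED ROWS might look like this, assuming the output table has been created by the supplied script under its default name of CHAINED_ROWS:
ANALYZE TABLE SCOTT.CUSTOMER_CONTACTS
LIST CHAINED ROWS INTO CHAINED_ROWS;
SELECT HEAD_ROWID
FROM CHAINED_ROWS
WHERE TABLE_NAME = 'CUSTOMER_CONTACTS';
The Row IDs returned identify the individual chained or migrated rows so they can be examined directly.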
ANALYZE TABLE ... VALIDATE STRUCTURE is used to validate structural integrity, not to gather table statistics. ANALYZE TABLE ... ANALYZE ALL is not a valid ANALYZE command.
3) b is correct. CREATE BITMAP INDEX CUSTOMERS_REGION_IDX is valid. It declares the SQL statement being used (CREATE INDEX), the name of the index being created (CUSTOMERS_REGION_IDX), and the type of index being created (BITMAP).
ON CUSTOMERS(REGION) tells Oracle that the table being indexed is CUSTOMERS and that the column being indexed is REGION. If more than one column were being indexed, this clause would be in the format ON OWNER.TABLE(COLUMN1, COLUMN2, ...). However, the keyword REVERSE is invalid when building a bitmap index. Bitmap index keys may not be reversed. Since the question states that a bitmap index is to be built, this keyword will have to be removed for the CREATE INDEX statement to succeed.
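With the REVERSE keyword removed, the corrected statement would be:
CREATE BITMAP INDEX CUSTOMERS_REGION_IDX
ON CUSTOMERS(REGION)
PCTFREE 30
STORAGE (INITIAL 100K NEXT 100K
PCTINCREASE 0 MAXEXTENTS 100)
TABLESPACE INDX;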
PCTFREE 30 is valid. PCTFREE 30 causes the index to be built with leaf-level blocks that are only 70 percent full. PCTUSED may not be specified for index creation. More discussion on this topic can be found later in this explanation.
STORAGE (INITIAL 100K NEXT 100K PCTINCREASE 0 MAXEXTENTS 100) specifies that the first extent created will be 100 KB in size and the NEXT setting (from which Oracle reads the size of each subsequent extent) will be set to 100 KB. PCTINCREASE 0 specifies that the value for NEXT will not grow each time a new extent is allocated, and MAXEXTENTS 100 means that this index will only be allowed to allocate 100 extents.
TABLESPACE INDX specifies the name of the tablespace the index should be built in.
Let's discuss PCTFREE and PCTUSED in a little more detail. PCTUSED is not a valid option for index creation. This is because index blocks cannot be made unavailable for inserts, and so it does not make any sense to specify a PCTUSED threshold (which would tell when to make them available for inserts again). To understand this better, it helps to think about the use of PCTFREE with indexes, and what happens when data is inserted into an indexed table.
When PCTFREE is specified for an index creation, it only affects the initial building of the index. For example, if PCTFREE 30 is specified, the index will be built with leaf-level blocks that are only 70 percent full. It is important to specify PCTFREE for any indexes where you expect inserts or row-lengthening updates within the range of initially indexed keys. The default of PCTFREE 0 means that each index block will be entirely filled when the index is built. In this case, inserting a row inside the range of existing index keys will cause the index block to split, sending half of the information on it to a block at a new location. This is called an index block split. Since this newly-acquired block will break the previously contiguous chain of index blocks, an index block split incurs a significant I/O penalty when the index is later used to retrieve data.
However, after the index is built, PCTFREE has no effect on the subsequent usage of the index blocks. For example, if the EMP table has an index on the ENAME column and a row is inserted into EMP with an ENAME of 'SMITHE', then the location of the index entry is not negotiable - it must logically go into the appropriate index location - for example, between 'SMITH' and 'SMITT'. If this part of the index has enough room for the new entry (because PCTFREE was properly specified, or because rows have been deleted), then the new index entry is made in the existing index block. But if the appropriate block of the index for the new entry is already full, an index block split will occur. Making an index block unavailable for inserts would cause it to split prematurely, thus hindering the effectiveness of the index since the leaf-level nodes will no longer be sequentially scannable.
4) a is correct. The ALTER USER statement is used to associate users with profiles. The correct syntax is:
ALTER USER user_name PROFILE profile_name;
The keyword SET is not used with the ALTER USER command, and ALTER USER does not use the equal sign when assigning user settings.
ALTER PROFILE is used to modify the limits and policies that the profile establishes. It is not used to associate a user with a profile.
GRANT is used to give roles and users permission to use roles, system privileges and object privileges. A profile is not a role or a privilege, but rather a set of limitations and policies.
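To illustrate the whole sequence, a CLERKS profile might be created and assigned like this (the specific limits shown are hypothetical examples, not taken from the question):
CREATE PROFILE CLERKS LIMIT
SESSIONS_PER_USER 2
IDLE_TIME 30
FAILED_LOGIN_ATTEMPTS 3;
ALTER USER BLAKE PROFILE CLERKS;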
5) b, c are both correct. NLS_DATABASE_PARAMETERS shows all of the NLS parameters that are a permanent part of the database. Prominent among these would be NLS_CHARACTERSET (the database character set) and NLS_NCHAR_CHARACTERSET (the national character set). However, most of these parameters may be overridden in your session, so you would not rely on this information to find out your current session's NLS parameters.
NLS_INSTANCE_PARAMETERS shows all of the NLS parameters that are associated with the instance. This would include all parameters the instance inherits from its INIT.ORA or from operating system defaults. While none of these parameters are modifiable at the system level (using ALTER SYSTEM), they are all modifiable at the session level (using ALTER SESSION). Therefore, you would not rely on this view to discover your current session's NLS parameters, either.
NLS_SESSION_PARAMETERS shows the NLS parameters in effect for the current session. It shows only the parameters that can be modified at the session level. The dynamic performance view V$NLS_PARAMETERS shows the NLS parameters in effect for the current session, whether or not they are modifiable at the system level. This means that NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET are not displayed in NLS_SESSION_PARAMETERS, but they are displayed in V$NLS_PARAMETERS.
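For example, to compare the two session-level views, you could run:
SELECT PARAMETER, VALUE FROM NLS_SESSION_PARAMETERS;
SELECT PARAMETER, VALUE FROM V$NLS_PARAMETERS;
The second query should return a few extra rows, including NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET.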
6) a is correct. This question tests whether you understand the order in which specific tasks must be carried out when creating a database. The first step is to plan your environment and prepare your operating system. This includes determining where your files are going to be located and setting environment variables. Part of this step is deciding on the name of your new database, and placing that name in the ORACLE_SID variable.
Next, you need to create and edit the parameter file. The most common way to create the parameter file is to copy another parameter file and then modify the relevant parameters in the file. You can start out by copying the parameter file that is installed in the default database when you first install Oracle8i. If you are on a UNIX system, you would create the parameter file using something like:
cp init.ora initswamp.ora
Next, after creating the parameter file, you need to modify it. One of the key parameters you need to change is the database name. Usually, you should set this to the same name you are using in your ORACLE_SID, although that is not required. There are other settings you should review and change as appropriate, such as DB_BLOCK_SIZE, CONTROL_FILES, BACKGROUND_DUMP_DEST, and USER_DUMP_DEST. There are over 220 parameters that can be placed in the parameter file, but only a handful are actually required.
The next step is to start the instance, create the control file, and create the database (CREATE DATABASE). Finally, you need to compile the data dictionary views. These views are created by running the catalog.sql and catproc.sql scripts.
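Putting the steps together on a UNIX system might look like the following sketch. First, at the operating system level:
ORACLE_SID=swamp; export ORACLE_SID
cp init.ora initswamp.ora
After editing initswamp.ora so that db_name=swamp, start the instance and create the database from Server Manager, then run the dictionary scripts. The file names, sizes, and paths below are illustrative, not prescribed by the question:
STARTUP NOMOUNT PFILE=initswamp.ora
CREATE DATABASE swamp
DATAFILE '/u01/oradata/swamp/system01.dbf' SIZE 100M
LOGFILE GROUP 1 ('/u01/oradata/swamp/redo01.log') SIZE 1M,
GROUP 2 ('/u01/oradata/swamp/redo02.log') SIZE 1M;
@catalog.sql
@catproc.sql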
7) a is correct. The control file is an integral part of the database architecture, and is considered to be part of the database (the "stuff" on disk). Whereas the parameter file contains startup settings and is not used once the database is opened, the control file contains the names and locations of data files, log files, and checkpoint information. Without this information, the server would not be able to identify the files that are part of the database.
The data dictionary contains metadata about the database: names of tables, tablespaces, users, permissions, and so on. It even contains segment allocation information and information about the data files and log files. But the data dictionary itself is stored in segments in one or more data files that are members of the SYSTEM tablespace. So how would the server know which data files make up the SYSTEM tablespace when it starts? That is where the control file comes into play. The control file is sort of like the "data dictionary to the data dictionary."
The control file contains:
- Data file and redo log names and locations
- Redo log archive history
- Checkpoint information necessary for database recovery
- Log sequence number for managing log switches
- Tablespace names
- Name and location of backups
The parameter file is not the correct choice because it contains startup parameters. It specifies the names and locations of the control files, but beyond that it does not provide any information about the database itself. The data dictionary is a logical structure contained in the SYSTEM tablespace data files. The server needs to obtain the names and locations of these data files from the control file before it can open the data dictionary.
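Once the database is mounted, you can confirm this chain of information with a few queries: the parameter file points to the control files, and the control files in turn name every data file and log file:
SELECT NAME FROM V$CONTROLFILE;
SELECT NAME FROM V$DATAFILE;
SELECT MEMBER FROM V$LOGFILE;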
8) b is correct. Oracle8i guarantees transactional integrity by using the redo log files. Whenever a transaction is committed, the redo records describing its changes are written to the physical redo log files before the transaction is reported successful. Writing to the redo log files is a sequential process, and therefore is much faster than the random writes to the data files. The redo log files are read only during database recovery, if a disaster strikes.
Redo log files are used in a circular fashion. That is, Oracle8i writes to only one log group at a time. Once that log group is full, a log-switch event occurs, and the server starts writing to the next log group. When that log group is full, it goes to the next one, or back to the first one if there are only two log groups. You must have a minimum of two log groups in order for this circular log writing to work. Oracle8i will not work with a single log group. It is often recommended to have at least three log groups defined. That way, if you lose one log group, you can take that group offline and still bring the database back up quickly, since you still have the minimum of two log groups remaining.
Each log group has at least one member, but it may have several members. Each member in a log group is a physical file that holds the log information. When the server is writing to a particular log group, the members in that group are multiplexed. That is, they contain identical information, and are written to at the same time. As such, each member should be placed on a separate disk. That way, if the disk containing one member fails, the other members will still be online and database operations will remain uninterrupted.
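As a sketch, adding a second member to an existing log group on a separate disk might look like this (the file name and group number are illustrative):
ALTER DATABASE ADD LOGFILE MEMBER
'/u02/oradata/swamp/redo01b.log' TO GROUP 1;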
You can gather information regarding redo log files and the current archiving status through a variety of sources. Following are the more common ones:
- ARCHIVE LOG LIST - this command provides the current state of affairs with regards to archive log mode. The results show whether the database is in archive mode or not, whether automatic archiving is enabled, and the last log sequence number.
- V$DATABASE / V$INSTANCE - these dynamic performance views provide additional information regarding the current log archive mode.
- V$THREAD - provides information about how many redo log file groups are being used, and which group is currently active. The last log sequence number is also shown here.
- V$LOG / V$LOGFILE - these views give detailed information about the log file groups and members. Information such as the status of specific groups, the size of each group, and the number of members in each group is provided in V$LOG. V$LOGFILE provides information about the individual members in each group.
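For reference, the ARCHIVE LOG LIST command produces output along these lines (the values shown are illustrative):
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/oradata/swamp/arch
Oldest online log sequence     112
Next log sequence to archive   114
Current log sequence           114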
The correct answer is: "The current status of automatic archiving." A list of archived redo log files, as long as they are still stored in the control file, can be obtained from the V$LOG_HISTORY view, but not from the ARCHIVE LOG LIST command. If the information has already been overwritten in the control file, you need to refer to the RMAN repository, if it exists, to get the information. The association of log file members with log groups can be seen in the V$LOGFILE view.
9) d and e are correct. To see which Oracle blocks are in use by extents that have been allocated to database objects, you should query the data dictionary view DBA_EXTENTS. This view contains one row for each allocated extent in the database, and it lists the starting Block ID and the size of the extent in blocks.
To see which Oracle blocks currently are not allocated to any object, you should use the data dictionary view DBA_FREE_SPACE. This view contains one row for each unallocated extent of space in the database, along with the Block ID where the free extent starts and the size of the extent in blocks.
It is possible to write a single UNION query based on these two views to present the information required. Here is one way this could be done:
SELECT * FROM
( SELECT OWNER, SEGMENT_NAME, BLOCK_ID, BLOCKS
FROM DBA_EXTENTS
WHERE TABLESPACE_NAME = 'USERS'
UNION
SELECT 'N/A', '*** FREE SPACE ***', BLOCK_ID, BLOCKS
FROM DBA_FREE_SPACE
WHERE TABLESPACE_NAME = 'USERS'
)
ORDER BY 3;
Output from this query would show each allocated and unallocated extent in the USERS tablespace, along with the object that owned each allocated extent. The report would be sorted by the starting Block ID of the extent and would include the number of blocks in the extent, so that a reader could account for the current use of each and every block in the USERS tablespace.
- DBA_OBJECTS includes one row for each object in the database. However, since each object may have several extents allocated to it, there is no extent-level allocation information in this view.
- DBA_SEGMENTS includes one row for each segment in the database. However, since each segment may have several extents allocated to it, there is no extent-level allocation information in this view.
- DBA_TABLESPACES includes one row for each tablespace in the database. However, since each tablespace may have several extents within it, there is no extent-level allocation information in this view.
These questions and answers are provided by MeasureUp, Inc. An EarthWeb Company. Order the full version of this exam, plus other Cisco, CompTIA, Microsoft, Novell, Oracle and Prosoft simulations online at www.measureup.com, phone 678-356-5000, fax 770-777-0732, or mail to MeasureUp, Inc. An EarthWeb Company, 2325 Lakeview Parkway, Suite 175, Alpharetta, Georgia 30004.