Microsoft Exam #70-228: Installing, Configuring, and Administering SQL Server 2000
6 questions. Answers can be found at the end of the exam.
courtesy of Transcender LLC
Questions
1. You administer a Windows NT 4.0 network. When you installed SQL Server 2000, you configured the MSSQLServer service and the SQLServerAgent service to run under different domain user accounts. You have configured a mail profile for the SQLServerAgent service, and designated operators have reported that they receive e-mail notifications when certain tasks, such as full database backups, have successfully completed. The manager of the Sales department, Judy, wants to be notified when changes are made to the Customers table. You create a trigger that will send an e-mail message to her when INSERT, DELETE, or UPDATE statements are executed against the Customers table. The trigger uses the xp_sendmail extended stored procedure. You configure Judy's user account as a designated operator, and you test the trigger by making a change to the Customers table. Judy does not receive an e-mail message. You want to enable the trigger to work properly by using the least administrative effort. Which of the following actions should you perform? (Choose two.)
a. Manually revoke the Log on as a service right for the domain user accounts under which the MSSQLServer and SQLServerAgent services run.
b. Configure the MSSQLServer and SQLServerAgent services to run under the local System account.
c. Configure the MSSQLServer service to run under the same account as the SQLServerAgent service.
d. Configure the domain where the SQL Server computer resides to trust the domain where the Exchange Server computer resides.
e. Configure the MSSQLServer service to use the same mail profile as the SQLServerAgent service.
2. You administer a SQL Server 2000 database that resides on a Windows 2000 Server computer. Each quarter, you must transfer a large amount of data from this database to several other instances of SQL Server 2000. You decide to use snapshot replication to accomplish this task. You configure the source instance of SQL Server as the Publisher/Distributor, and you configure the target instances as Subscribers. Because the amount of data is very large, you want to ease the load on the Publisher/Distributor computer. Which of the following actions should you perform?
a. Set the min server memory configuration option to 10 MB.
b. Set the @keep_partition_changes parameter to true.
c. Create two snapshot folders. Specify the default location for one snapshot folder; specify a network share location for a second snapshot folder.
d. Specify a single location for the snapshot folder on a network share.
3. A number of users connect to your company's SQL Server 2000 database by using a custom application. A few months ago, your company was acquired by another corporation. Many of your company's existing business practices were changed to reflect policies that are enforced by the new management. As a result, the custom application that implements those business practices must be changed. The application is based on stored procedures, which have been changed substantially in order to comply with the new business rules. Now, users report that the application is much slower. In addition to using the custom application to access the database, some of the employees are authorized to access the database directly. Normally, these employees use Query Analyzer to execute queries. You open System Monitor and select several counters in order to determine why the application's performance has degraded. Two days later, you review the results. The Avg. Disk Queue Length counter of the PhysicalDisk object has a value of approximately 1. The Buffer cache hit ratio counter of the SQLServer:Buffer Manager object has a value of approximately 99 percent. The Lock Blocks counter of the SQLServer:Memory Manager object in the context of the Products database has a value of approximately 15. The Adhoc Sql Plans instance of the Cache Hit Ratio counter that belongs to the SQLServer:Cache Manager object has a value that is consistently at or above 95 percent. The Procedure Plans instance of the Cache Hit Ratio counter that belongs to the SQLServer:Cache Manager object has a value of approximately 1. Which of the following actions is most likely to improve performance?
a. adding another CPU
b. adding another disk
c. configuring all applications that access this database to operate in implicit transactions mode
d. removing the WITH RECOMPILE option from the definition of the stored procedures that the application uses
4. You are a database application developer. You maintain a lab in which SQL Server 6.5 and SQL Server 7.0 reside on a single computer. You have been using the version switch that is provided with SQL Server 7.0 to switch between SQL Server 6.5 and SQL Server 7.0. You maintain copies of several production databases in SQL Server 6.5. You have been developing new applications that are based on SQL Server 7.0 while you continue to support the older applications that are based on SQL Server 6.5. The SQL Server 7.0 databases store different portions of your company's business information than the SQL Server 6.5 databases store. Although your company uses newer applications that have been developed for SQL Server 7.0, the company continues to use its existing SQL Server 6.5 applications. With the release of SQL Server 2000, you have been instructed to upgrade all SQL Server 6.5 and SQL Server 7.0 databases to SQL Server 2000. However, you must retain all three versions of SQL Server on your network. All databases that currently exist in SQL Server 6.5 and SQL Server 7.0 must be upgraded to SQL Server 2000, and the existing original databases must be retained in their respective versions of SQL Server. How should you install SQL Server 2000?
a. by installing SQL Server 2000 as a named instance on the same computer
b. by installing SQL Server 2000 as a default instance on another computer
c. by installing SQL Server 2000 as a default instance on the same computer
d. by installing SQL Server 2000 as a named instance on another computer
5. You are a systems engineer who has recently been hired to administer a network that runs only Windows 2000. You must implement a new SQL Server 2000 database on Computer A, which is a member server that runs the Windows 2000 Server operating system. Computer A contains three disks, which have already been partitioned. The operating system and program files reside on drive C, which is located on Disk 0. Drive D and drive E are located on Disk 1; drive F, drive G and drive H are located on Disk 2. Disk 1 and Disk 2 are completely empty. Drive D and drive E each have 5 GB of available space; drive F, drive G and drive H each have 4 GB of available space. The data files for the new database will not require more than 8 GB, and the transaction log file will not require more than 3 GB. You have been instructed to provide the best write performance while ensuring that administrative overhead is kept to a minimum. Which of the following strategies should you implement?
a. Create one data file in the Primary filegroup on drive D and another data file in a user-defined filegroup on drive E. Create three transaction log files; one transaction log file will reside on drive F, one transaction log file will reside on drive G, and one transaction log file will reside on drive H.
b. Create two data files in the Primary filegroup; one data file will reside on drive D, and one data file will reside on drive E. Create one transaction log file on drive F.
c. Create two data files in the Primary filegroup; one data file will reside on drive D, and one data file will reside on drive F. Create one transaction log file on drive G.
d. Create two data files in the Primary filegroup; one data file will reside on drive D, and one data file will reside on drive F. Create two transaction log files; one transaction log file will reside on drive E and one transaction log file will reside on drive G.
6. Your company is using a SQL Server 2000 database named Sales to store its business transactions. Your backup strategy consists of full database backups on Sunday evenings and differential database backups on Tuesday and Thursday evenings. On Thursday morning, you run a DBCC CHECKDB statement and notice several new consistency errors in a table named Products. No allocation errors have been detected. You must make the database consistent with little or no data loss. You must accomplish this task in time to perform Thursday's differential database backup. Which of the following should you do?
a. Restore the database from the latest database backups.
b. Execute the DBCC CHECKTABLE (Products, REPAIR_REBUILD) statement.
c. Execute the DBCC CHECKALLOC (Products, REPAIR_REBUILD) statement.
d. Execute the DBCC CHECKCATALOG (Products) statement.
ANSWERS:
1) Answer: c and e. SQL Server 2000 provides two independent mechanisms for communicating by using e-mail messages: SQL Mail and SQLServerAgent Mail. SQL Mail handles all stored procedures that are related to e-mail, such as the xp_sendmail extended stored procedure. SQLServerAgent Mail operates independently of SQL Mail and sends e-mail messages in response to alerts or to the completion or failure of specified tasks. The trigger that you configured to use xp_sendmail failed to notify Judy when changes to the Customers table occurred because xp_sendmail is part of the SQL Mail mechanism, and no mail profile has been specified for SQL Mail. Because you have already configured a mail profile for SQLServerAgent Mail, you can configure SQL Mail to use the same profile. To accomplish this, the MSSQLServer and SQLServerAgent services must be configured to use the same user account. In this scenario, you should ensure that the MSSQLServer service is configured to use the same user account that the SQLServerAgent service has been configured to use. If you were instead to configure the SQLServerAgent service to use the same account as the MSSQLServer service, then the existing mail profile for SQLServerAgent would become invalid, and you would be required to create another profile for both services. Alternatively, you could create a separate mailbox for the domain user account that is used to run MSSQLServer and configure the appropriate mail profile for SQL Mail. Initially, when the operating system is installed, no default mail profiles exist. You can create a new mail profile or copy an existing mail profile by logging on with the appropriate domain user account, clicking the Mail icon in Control Panel and specifying the necessary settings. After that, you should configure SQL Mail to use that profile. Because the local System account does not have network access rights in Windows NT 4.0 domains, configuring the MSSQLServer service and the SQLServerAgent service to run under this account would prevent you from performing certain tasks, such as backing up the databases to a network drive or using the SQL Server mail services. The domain user accounts to which the MSSQLServer and SQLServerAgent services are assigned are automatically granted the Log on as a service right. If you manually revoked this right, then neither service would be able to start the next time it was restarted, and SQL Server would become inaccessible. If the SQL Server computer and the Exchange Server computer reside in different domains, then the domain where the Exchange Server computer resides must trust the domains where the user accounts assigned to the MSSQLServer and SQLServerAgent services reside. Otherwise, those accounts would not be visible on the Exchange Server computer. No other trust relationships are required for using Exchange mail profiles. Reference: SQLSBO, Contents, "Administering SQL Server," "Managing Servers," "SQL Mail." SQLSBO, Contents, "Administering SQL Server," "Managing Servers," "SQL Mail," "Configuring SQL Mail." SQLSBO, Contents, "Administering SQL Server," "Managing Servers," "SQL Mail," "Configuring Mail Profiles." SQLSBO, Contents, "Installing SQL Server," "Overview of Installing SQL Server 2000," "Setting up Windows Services Accounts."
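For illustration, a minimal sketch of the kind of trigger described in this scenario is shown below; the trigger name, recipient address and message text are placeholders, and the call assumes that SQL Mail has been started with a valid mail profile.

    -- Hedged sketch: notify an operator by e-mail whenever the Customers table changes.
    -- xp_sendmail belongs to SQL Mail, so the account that runs the MSSQLServer service
    -- must have a working mail profile; the recipient address below is a placeholder.
    CREATE TRIGGER trg_Customers_Notify
    ON Customers
    AFTER INSERT, UPDATE, DELETE
    AS
    EXEC master.dbo.xp_sendmail
        @recipients = 'judy@example.com',
        @subject = 'Customers table changed',
        @message = 'An INSERT, UPDATE or DELETE statement was executed against the Customers table.'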
2) Answer: d. With snapshot replication, schema, data and database objects, such as constraints and indexes, are sent to the designated Subscribers in a replication scheme. A snapshot image resides in a snapshot folder. By default, the snapshot folder is created on the computer that is configured as the Distributor. During snapshot replication, a copy of the data to be replicated is written to the snapshot folder. Placing the snapshot folder in an alternate location, such as a network share, reduces the resources that the Distributor needs to function properly. Specifying a single location for the snapshot folder on a network share will improve snapshot replication performance. Greater processor power would be required if you configured a separate location for the snapshot folder in addition to the default location, and snapshot replication would also require more time to complete if there were multiple snapshot folders. To further reduce the overhead on the computer that is designated as the Distributor, you can configure pull subscriptions. The replication of data for a pull subscription is performed by the Distribution Agent running on a Subscriber, whereas the replication of data for a push subscription is performed by the Distribution Agent running on the Distributor. Because the source instance is configured as the Publisher/Distributor, the min server memory configuration option should be set to at least 16 MB, so setting it to 10 MB would be inappropriate. Although you can manually set the min server memory and max server memory configuration options, it is generally considered appropriate to allow SQL Server to adjust its memory needs dynamically based on the computer's available memory. Setting the @keep_partition_changes parameter to true when creating a merge publication by calling the sp_addmergepublication system stored procedure can reduce the amount of time that the Merge Agent requires to determine whether particular rows that were recently changed should be sent to Subscribers; however, merge replication is irrelevant to this scenario. Reference: SQLSBO, Contents, "Replication," "Implementing Replication," "Applying the Initial Snapshot," "Generating the Initial Snapshot." SQLSBO, Contents, "Replication," "Implementing Replication," "Applying the Initial Snapshot," "Alternate Snapshot Locations." SQLSBO, Contents, "Replication," "Enhancing Replication Performance." SQLSBO, Contents, "Replication," "Enhancing Replication Performance," "Enhancing Snapshot Replication Performance." SQLSBO, Contents, "Administering SQL Server," "Managing Servers," "Setting Configuration Options," "Server Memory Options." SQLSBO, Contents, "Replication," "Replication Options," "Optimizing Synchronization."
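As an illustration only, the snapshot working directory can be pointed at a network share when the Publisher is registered at the Distributor; the server name, distribution database name and UNC path below are hypothetical, and the same setting can also be made through the Configure Publishing and Distribution Wizard in Enterprise Manager.

    -- Hedged sketch, run at the Distributor: register the Publisher and place the
    -- snapshot folder (working directory) on a network share rather than in the
    -- local default location. Server and share names are placeholders.
    EXEC sp_adddistpublisher
        @publisher = 'SOURCESERVER',
        @distribution_db = 'distribution',
        @working_directory = '\\FILESERVER\ReplData'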
3) Answer: d. The high value displayed for the Adhoc Sql Plans instance signifies that SQL Server has allocated enough memory for the procedure cache in which execution plans for queries are stored. The low value displayed for the Procedure Plans instance indicates that stored procedure execution plans are reused infrequently. Normally, SQL Server 2000 compiles an execution plan for a stored procedure when that procedure is initially run. The execution plan is stored in the procedure cache in memory. The next time that the stored procedure is run, SQL Server should detect that an execution plan already exists for the procedure and reuse the existing plan. A value as low as the one observed for the Procedure Plans instance in this scenario indicates that SQL Server is ignoring the cached execution plans when it executes stored procedures. The most likely reason is that the WITH RECOMPILE option has been specified in the definition of the majority of stored procedures that are currently in use; this option could have been added to certain stored procedures when they were rewritten to comply with the new business practices. In this scenario, you have not selected any counters that monitor processor usage, so you cannot accurately assess the need for an additional processor. If the Avg. Disk Queue Length counter of the PhysicalDisk object sustained a value greater than 2, then you should consider adding another disk to improve performance. The Buffer cache hit ratio counter of the SQLServer:Buffer Manager object should normally display a high value, which indicates that SQL Server is performing reads from the cache rather than from the disk; if this counter sustained a low value, then adding more RAM might improve performance. The Lock Blocks counter of the SQLServer:Memory Manager object indicates the number of locks that are in use, and a value of 15 does not indicate a problem caused by excessive contention for resources. Configuring all applications to operate in implicit transactions mode might cause the applications to hold locks for longer periods of time, which could further degrade the performance of the custom application. Reference: SQLSBO, Contents, "SQL Server Architecture," "Database Architecture," "Logical Database Components," "SQL Stored Procedures." SQLSBO, Contents, "SQL Server Architecture," "Relational Database Engine Architecture," "Query Processor Architecture," "Stored Procedure and Trigger Execution." SQLSBO, Contents, "SQL Server Architecture," "Relational Database Engine Architecture," "Query Processor Architecture," "Execution Plan Caching and Reuse." SQLSBO, Search, "CREATE PROCEDURE."
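For illustration, the difference is simply whether WITH RECOMPILE appears in the procedure definition; the procedure name, parameter and column names below are placeholders, not part of the scenario.

    -- Hedged sketch: a procedure defined WITH RECOMPILE is recompiled on every
    -- execution, so its plan is never reused from the procedure cache.
    CREATE PROCEDURE dbo.usp_GetProductsByCategory
        @CategoryID int
    WITH RECOMPILE
    AS
        SELECT ProductID, ProductName FROM Products WHERE CategoryID = @CategoryID
    GO

    -- Redefining the procedure without WITH RECOMPILE allows SQL Server to cache
    -- the execution plan and reuse it on subsequent executions.
    ALTER PROCEDURE dbo.usp_GetProductsByCategory
        @CategoryID int
    AS
        SELECT ProductID, ProductName FROM Products WHERE CategoryID = @CategoryID
    GO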
4) Answer: b. SQL Server 6.5 can coexist with SQL Server 7.0 or SQL Server 2000 on the same computer. However, only one version can be active at any given moment. You can switch between SQL Server 6.5 and SQL Server 7.0, or between SQL Server 6.5 and SQL Server 2000, by using vswitch, which is a version-switch utility. In this scenario, SQL Server 6.5 and SQL Server 7.0 reside on the same computer, and you can switch between those versions. The currently active version of SQL Server is considered the default instance. You could install SQL Server 2000 as a named instance on the same computer that hosts SQL Server 6.5 and SQL Server 7.0. If you did so, then you could simultaneously use two instances: either SQL Server 6.5 and SQL Server 2000 or SQL Server 7.0 and SQL Server 2000. Because SQL Server 7.0 supports the upgrading of its databases to a named instance of SQL Server 2000 on the same computer or another computer, you could upgrade the SQL Server 7.0 databases in this scenario to SQL Server 2000. However, the SQL Server Upgrade Wizard cannot be used to upgrade SQL Server 6.5 databases. SQL Server 6.5 databases can be upgraded to a default instance of SQL Server 2000 but not to a named instance. In order to meet the requirements of this scenario, you should install SQL Server 2000 as a default instance on another computer. If you wanted to upgrade the SQL Server 6.5 databases to SQL Server 2000 on the lab computer after SQL Server 2000 had been installed and configured as a named instance, then you would be required to perform the following steps. You would have to uninstall SQL Server 2000, use the version switch utility to switch to SQL Server 7.0 and run the installation again as a default instance, which would overwrite the instance of SQL Server 7.0. SQL Server 6.5 and SQL Server 2000 would then reside on the same computer. You would be able to switch between them, and you would be able to upgrade the SQL Server 6.5 databases to SQL Server 2000. However, performing these steps would violate the requirements of this scenario, which stipulate that an upgrade of the SQL Server 6.5 and SQL Server 7.0 databases to SQL Server 2000 be performed and that all three versions of SQL Server be retained. Reference: SQLSBO, Contents, "Installing SQL Server," "Working with Named and Multiple Instances of SQL Server 2000." SQLSBO, Contents, "Installing SQL Server," "Working with Instances and Versions of SQL Server," "Working with Three Versions of SQL Server." SQLSBO, Contents, "Installing SQL Server," "Upgrading to SQL Server 2000 Overview," "Upgrading Databases from SQL Server 6.5 (Upgrade Wizard)." SQLSBO, Contents, "Installing SQL Server," "Upgrading to SQL Server 2000 Overview," "Upgrading Databases from SQL Server 6.5 (Upgrade Wizard)," "Upgrading Using One or Two Computers (Logon Screen)."
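As a quick, hedged check after the new installation, you can confirm which version and instance a connection is using; SERVERPROPERTY is available in SQL Server 2000 and returns NULL for InstanceName when you are connected to a default instance.

    -- Hedged sketch: run against the new SQL Server 2000 installation. InstanceName
    -- is NULL on a default instance and returns the instance name chosen during
    -- Setup on a named instance.
    SELECT
        SERVERPROPERTY('ProductVersion') AS ProductVersion,
        SERVERPROPERTY('InstanceName') AS InstanceName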
5) Answer: b. You can provide the best write performance for a database by placing the transaction log file on a dedicated physical disk. Doing so also protects the database in cases of disk failure or data file corruption. Log performance is degraded when log files are placed on the same physical disk as a data file or any program files. The transaction log file is written to sequentially; placing it on a separate disk minimizes radial movements of the read/write heads, thus improving performance. To provide the best read performance for a database, you could create two data files and place them on different physical disks. However, you cannot achieve both goals simultaneously in this scenario because only two physical disks are available on Computer A for hosting the database; to achieve the best performance for both the transaction log file and the data files, you would require at least three physical disks. Therefore, the best solution in this scenario is to place the transaction log file on one disk and to create two data files on the other disk. In this manner, you achieve the best possible performance. To minimize the administrative overhead, the two data files should be created in the same filegroup. Files that belong to the same filegroup are filled relatively evenly; thus, you will not be required to monitor either drive's space usage. Because the transaction log file is written to sequentially, creating several log files would not provide a performance benefit; a second transaction log file would not be used until the first log file had been filled. In this scenario, the log file will not exceed 3 GB, and 4 GB of space is available on drive F, so a single transaction log file on drive F is sufficient. Furthermore, if you created two data files that belong to different filegroups, then you would be required to monitor the space taken by each file. Because it is unlikely that you would know how much space each particular object would require, you would need to regularly observe the amount of available space on drive D and drive E to ensure that neither drive filled completely. This would require greater administrative effort. You should create data files in additional filegroups only when you need to place specific objects on specific filegroups. Distributing the two data files between two physical disks in this scenario would require that one of the data files reside on the same disk as the transaction log file, and contention between the log file and the data file on the same physical disk would offset any performance gain achieved by distributing data files across separate physical disks. Reference: SQLSBO, Contents, "SQL Server Architecture," "Database Architecture," "Physical Database Architecture," "Physical Database Files and Filegroups." SQLSBO, Contents, "SQL Server Architecture," "Database Architecture," "Physical Database Architecture," "Space Allocation and Reuse," "Managing Space Used by Objects." SQLSBO, Contents, "SQL Server Architecture," "Database Architecture," "Physical Database Architecture," "Transaction Log Architecture," "Transaction Log Physical Architecture." SQLSBO, Contents, "Optimizing Database Performance," "Database Design," "Physical Database Design," "Optimizing Transaction Log Performance."
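A hedged sketch of the layout in the correct answer is shown below; the database name, file names and directory paths are placeholders, and the sizes are simply chosen to fit within the space described in the scenario.

    -- Hedged sketch: two data files in the PRIMARY filegroup on drives D and E
    -- (Disk 1) and a single transaction log file on drive F (Disk 2).
    CREATE DATABASE NewSalesDB
    ON PRIMARY
        (NAME = NewSalesDB_Data1, FILENAME = 'D:\SQLData\NewSalesDB_Data1.mdf', SIZE = 4096MB),
        (NAME = NewSalesDB_Data2, FILENAME = 'E:\SQLData\NewSalesDB_Data2.ndf', SIZE = 4096MB)
    LOG ON
        (NAME = NewSalesDB_Log, FILENAME = 'F:\SQLLog\NewSalesDB_Log.ldf', SIZE = 3072MB)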
6) Answer: b. To correct the consistency errors, you should run the DBCC CHECKTABLE (Products, REPAIR_REBUILD) statement, which performs the most thorough repairs possible without losing data. In addition to the REPAIR_REBUILD repair option, the DBCC CHECKTABLE statement can be run with the REPAIR_FAST and REPAIR_ALLOW_DATA_LOSS repair options. The REPAIR_FAST option performs only minor, quick repairs that do not incur data loss. The REPAIR_ALLOW_DATA_LOSS option repairs all errors that can be fixed automatically, but some data loss may occur. In order for the DBCC CHECKTABLE statement to be run with repair options, the database must be in single-user mode. The DBCC CHECKDB statement can be run with the same repair options as the DBCC CHECKTABLE statement; however, the DBCC CHECKDB statement is executed against the entire database and would probably take more time to complete than the DBCC CHECKTABLE statement. The latest database backups do not include any transactions that occurred after the completion of the most recent differential backup on Tuesday; restoring from backups would cause you to lose all changes to the database that have occurred since that time, and the scenario stipulates that data loss should be minimized or avoided. The DBCC CHECKALLOC statement checks and repairs only allocation errors, which are not present in this scenario; in addition, it is executed against a database rather than against an individual table. The DBCC CHECKCATALOG statement checks for consistency errors in the system tables of a specified database; this statement cannot be executed against a user table. Reference: SQLSBO, Search, "DBCC CHECKDB." SQLSBO, Search, "DBCC CHECKTABLE." SQLSBO, Search, "DBCC CHECKALLOC." SQLSBO, Search, "DBCC CHECKCATALOG."
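A hedged sketch of the repair sequence follows; the database and table names come from the scenario, and ALTER DATABASE ... SET SINGLE_USER is one way (sp_dboption is another) to satisfy the single-user requirement.

    -- Hedged sketch: repair options require the database to be in single-user mode.
    ALTER DATABASE Sales SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    GO
    USE Sales
    GO
    -- Repair the consistency errors in the Products table without losing data.
    DBCC CHECKTABLE ('Products', REPAIR_REBUILD)
    GO
    -- Return the database to normal multi-user operation before the differential backup.
    ALTER DATABASE Sales SET MULTI_USER
    GO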
These questions and answers are provided by Transcender LLC. Order the full version of this exam simulation online at www.transcender.com, phone 615-726-8779, 8 a.m. - 6 p.m., (CST), M - F, fax 615-726-8884, or mail to Transcender LLC, 565 Marriott Drive, Suite 300, Nashville, TN 37214.