
Archive for the ‘Performance Tuning’ Category

http://www.mssqltips.com/sql-server-tip-category/9/performance-tuning/

http://www.sql-server-performance.com/2007/gen-tips/

http://www.codeproject.com/Articles/22853/SQL-Server-Performance-Tips-and-Guidelines

http://social.technet.microsoft.com/wiki/contents/articles/5957.sql-server-performance-survival-guide.aspx


This post is taken from: http://www.teratrax.com/articles/sql_server_blocking_deadlocks.html

SQL Server blocking

SQL Server blocking occurs when one connection (user process or application process) places a lock on a table (or a number of rows) and a second connection attempts to read or modify the data under the lock. Depending on the type of the lock, this can cause the second connection to wait until the first connection releases its lock. A blocked connection waits indefinitely for the blocking connection to release its lock.

The more blocking that happens on the server, the less concurrency the system achieves. A certain amount of blocking is unavoidable, but too many blocks held for long periods of time can degrade the performance of SQL Server.

SQL Server deadlocks

The combination of two blocked connections, where the first is blocking the second and the second is blocking the first, is called a deadlock. Since deadlocks are not naturally resolved with time, SQL Server automatically kills one of the connections (the deadlock victim) once it detects a deadlock. This allows the other connection to continue with its transaction.

Although deadlocks can be caused by two short blocks (a fraction of a second), it is often the long blocks that increase the chance of a deadlock occurring.

Blocks escalating to deadlocks

The following diagram shows the sequence of events leading to a deadlock. Consider two applications (A1, A2) accessing two different tables (T1, T2):

Event 1: A1 places a lock on T1 inside its transaction and continues to execute other statements.

Event 2: A2 places a lock on T2 inside its transaction and continues to execute other statements.

Event 3: A1 attempts to place a lock on T2 (it needs to access T2 before it can finish its transaction) but has to wait for A2 to release its lock.

At this point a block exists: A2 is blocking A1.

Event 4: While A1 is waiting, A2 attempts to place a lock on T1 (it needs to access T1 before it can finish its own transaction).

A deadlock is created since two connections have blocked one another. SQL Server automatically resolves the deadlock by choosing one of the connections as a deadlock victim and killing it.
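The four events above can be reproduced from two query windows. This is only an illustrative sketch – the tables (T1, T2) and columns (Id, Val) are hypothetical placeholders:

```sql
-- Session 1 (A1) – run in one query window
BEGIN TRAN;
UPDATE T1 SET Val = 1 WHERE Id = 1;  -- Event 1: lock placed on T1
-- ... other statements ...
UPDATE T2 SET Val = 1 WHERE Id = 1;  -- Event 3: waits for A2's lock on T2
COMMIT;

-- Session 2 (A2) – run in a second window, between Events 1 and 3
BEGIN TRAN;
UPDATE T2 SET Val = 2 WHERE Id = 1;  -- Event 2: lock placed on T2
UPDATE T1 SET Val = 2 WHERE Id = 1;  -- Event 4: deadlock
COMMIT;
```

One of the two sessions will receive error 1205 ("chosen as the deadlock victim") and its transaction is rolled back, while the other completes normally.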


9 Best Practices
  1. Auto Shrink runs at a regular interval (every 30 minutes) in the background of SQL Server and causes performance-degrading fragmentation. It should be turned off.
  2. All statistics should be updated at regular intervals.
  3. Compressed backups should be used instead of regular backups, as they improve backup speed and performance. Restoring a compressed backup is also faster than a regular restore.
  4. Always take full database backups along with differential and log backups. Log backups are essential for point-in-time recovery.
  5. Pay attention to the Activity Monitor, which displays current activity for a) % Processor Time, b) Waiting Tasks, c) Database I/O, d) Batch Requests/sec.
  6. Control configurations and enforce policies using Policy-Based Management.
  7. Use the new DATE and TIME data types instead of the conventional DATETIME where appropriate.
  8. Keep the server updated with the latest service packs and cumulative updates.
  9. Use Dynamic Management Views to identify the longest-running and most frequently executed queries.
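Items 3 and 4 can be illustrated in T-SQL. The database name and backup paths below are placeholders:

```sql
-- Full backup with compression (available from SQL Server 2008 onwards)
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backup\AdventureWorks_full.bak'
WITH COMPRESSION, CHECKSUM;

-- Transaction log backup, the basis for point-in-time recovery
BACKUP LOG AdventureWorks
TO DISK = 'D:\Backup\AdventureWorks_log.trn';
```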

 


Performance Tuning

There are several strategies for tuning the performance in SQL Server:

Problem: A low-memory condition is detected, and a low cache hit rate results in page faults.
Solution: Increase the server's RAM to improve the cache hit rate, that is, the number of data pages served from memory.

Problem: A full table scan is being used to find only specific rows.
Solution: Create and maintain appropriate indexes.

Problem: Excessive paging is detected.
Solution: Add a RAID I/O subsystem or faster disk drives to your server.

Problem: Slow query execution because the tables are very large.
Solution: Partition large data sets and create indexes. This reduces I/O contention and improves parallel operations.

Problem: Slow query execution in general.
Solution: Tune the SQL queries and T-SQL programs using the techniques explained in the article below.

For more details, visit:

http://www.c-sharpcorner.com/UploadFile/john_charles/QueryoptimizationinSQLServer200512112007154303PM/QueryoptimizationinSQLServer2005.aspx


Simple tricks which a DBA can apply for an immediate performance gain.

  • Every table should have a primary key
  • Every table should have a clustered index
  • Tables should have an appropriate number of non-clustered indexes
  • Non-clustered indexes should be created on columns based on the queries that run against the table
  • The following priority order should be followed when creating any index: a) WHERE clause, b) JOIN clause, c) ORDER BY clause, d) SELECT clause
  • Avoid views where possible; replace them with the original source tables
  • Avoid triggers if possible; incorporate the trigger's logic into a stored procedure
  • Replace ad hoc queries with stored procedures
  • Check that at least 30% of the hard disk is empty – it improves performance a bit
  • If possible, move the logic of UDFs into stored procedures as well
  • Remove * from SELECT and list only the columns the code actually needs
  • Remove any unnecessary joins
  • If a query uses a cursor, see if there is a way to avoid it (for example with SELECT … INTO or INSERT … SELECT)
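As an illustration of the last point, a row-by-row cursor loop can often be collapsed into a single set-based statement. The table and column names here are hypothetical:

```sql
-- Instead of opening a cursor over dbo.Orders and inserting one row at a
-- time into the archive, do it in one set-based statement:
INSERT INTO dbo.OrderArchive (OrderID, OrderDate, Amount)
SELECT OrderID, OrderDate, Amount
FROM dbo.Orders
WHERE OrderDate < '20080101';
```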

A few hardware upgrades can be considered as well, such as placing indexes on a separate disk drive or moving tempdb to another drive. However, I am not suggesting them here, as they are not quick ways to improve query performance.

Reference : Pinal Dave (http://blog.SQLAuthority.com)


When we talk about statistics in SQL Server, we’re not discussing a database to store your baseball card collection. No, statistics in SQL Server refers specifically to information that the server collects about the distribution of data in columns and indexes. This data in turn is used by the query optimizer to determine the plan of attack for returning results when you run a query. While in the majority of cases SQL Server takes care of this statistical recordkeeping for you automatically, it’s useful to have some understanding of what’s going on, and to know when you might need to intervene.

Automatic Statistics

By default, SQL Server databases automatically create and update statistics. Although you can disable this (by setting a property of the database), you should almost certainly leave it on. SQL Server needs these statistics to do a good job with query processing, and it’s likely to be smarter than you about what and when to update. The information that gets stored includes:

  • The number of rows and pages occupied by a table’s data
  • The time that statistics were last updated
  • The average length of keys in a column
  • Histograms showing the distribution of data in a column
  • String summaries that are used when performing LIKE queries on character data

SQL Server maintains statistics on index and key columns of all of your tables – in other words, the columns that the query processor is able to use in various ways in queries depending on the data that they contain.

Viewing Statistics

You can get an idea of the statistics that SQL Server is maintaining on a table from either T-SQL or the SQL Server Management Studio user interface. Let’s look at the T-SQL solution first. The place to begin is with the sp_helpstats system stored procedure, which will list all of the statistics being maintained on a table. For example, in the AdventureWorks sample database, you can find the statistics on the Sales.SalesOrderDetail table this way:

sp_helpstats 'Sales.SalesOrderDetail', 'ALL'
 
statistics_name                                      statistics_keys
---------------------------------------------------- ---------------------------------
_WA_Sys_00000006_245D67DE                            SpecialOfferID
_WA_Sys_0000000B_245D67DE                            ModifiedDate
AK_SalesOrderDetail_rowguid                          rowguid
IX_SalesOrderDetail_ProductID                        ProductID
PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID  SalesOrderID, SalesOrderDetailID

Given an individual statistic, you can use DBCC SHOW_STATISTICS to retrieve detailed information:

 
DBCC SHOW_STATISTICS('Sales.SalesOrderDetail', IX_SalesOrderDetail_ProductID)
 
Name                           Updated              Rows                 
------------------------------ -------------------- -------------------- 
IX_SalesOrderDetail_ProductID  Oct 14 2005  1:59AM  121317               
 
Rows Sampled         Steps  Density       Average key length String Index
-------------------- ------ ------------- ------------------ ------------
121317               197    0.007550886   12                 NO 
 
(1 row(s) affected)
 
All density   Average Length Columns
------------- -------------- -------------------------------------------
0.003759399   4              ProductID
8.242868E-06  8              ProductID, SalesOrderID
8.242868E-06  12             ProductID, SalesOrderID, SalesOrderDetailID
 
(3 row(s) affected)
 
RANGE_HI_KEY RANGE_ROWS    EQ_ROWS       DISTINCT_RANGE_ROWS  AVG_RANGE_ROWS
------------ ------------- ------------- -------------------- --------------
707          0             3083          0                    1
708          0             3007          0                    1
709          0             188           0                    1
710          0             44            0                    1
711          0             3090          0                    1
(additional rows removed)

The first section of the DBCC SHOW_STATISTICS output gives you general information on the statistic, including how many rows went into building the statistic (SQL Server may use only a sample of the data if the statistic is on a non-indexed column). The second section shows the sets of columns that this statistic can be used for, and statistical measures of how well this statistic will help a query locate data in those columns. The final section is the histogram showing the distribution of data. The columns in this section show the key value that demarcates the range, the number of rows that are in the range but not equal to the upper bound, the number of rows in the range that are equal to the upper bound, the number of distinct values in the range (not counting the upper bound), and the average number of duplicate values in the table for this range (which helps measure the selectivity of the statistic). Even without understanding the math behind query optimization in detail, it should be obvious that this is useful information.

Of course, these days there are many developers and administrators who prefer to work with the GUI administration tools rather than raw SQL. Fortunately, all of this information on statistics is also available in SQL Server Management Studio. Connect to a database and expand the Object Explorer tree until you find the table of interest, then drill into the Statistics folder underneath the table. There you’ll find a graphical representation of all of the statistics on the table, as shown in Figure 1.


If you right-click on one of these statistics and select Properties, you’ll see the general information on the statistic, as shown in Figure 2.

Selecting the Details page shows detailed information on the statistic, as shown in Figure 3. You may recognize this format – the SQL Server Management Studio developers seem to have simply piped the DBCC SHOW_STATISTICS output to the user interface.

Updating Statistics

Just as the data in your database isn’t static, statistics aren’t either. As you add, delete, and update rows, SQL Server needs to adjust the statistics so that they still provide good guidance for the query optimizer. When you run a query, SQL Server checks statistics used in that query’s plan to see whether they’re outdated. It uses these heuristics to determine if a statistic is outdated:

  • If the rowcount in the table has gone from zero to any nonzero value, statistics are outdated.
  • If the number of rows in the table was under 500, and at least 500 rows have been modified (added/deleted/edited), statistics are outdated.
  • If the number of rows in the table was over 500, and at least 500 plus 20% of the rowcount rows have been modified (added/deleted/edited), statistics are outdated.

Normally statistics are updated as soon as it’s discovered that they should be, which can hold up the execution of a query. You can set the AUTO_UPDATE_STATISTICS_ASYNC database property to have the update performed asynchronously instead, which may result in less-accurate query plans but faster start of execution.
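The asynchronous behaviour is a database-level option. Using the AdventureWorks sample database as an example, it can be switched on like this (AUTO_UPDATE_STATISTICS itself must also remain on for the asynchronous variant to have any effect):

```sql
-- Queries no longer wait for a statistics refresh before compiling a plan
ALTER DATABASE AdventureWorks
SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```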

Sometimes, you may find that the heuristics aren’t good enough for telling when statistics should be updated. For example, if you’re adding a lot of rows at the end of the range of a column (think about adding autonumbers, or timestamps), the histogram for that column will become outdated rapidly. You can force an immediate update from T-SQL by executing the UPDATE STATISTICS statement:

 

UPDATE STATISTICS table | view
    [
        { index | statistics_name }
      | ( { index | statistics_name } [ ,...n ] )
    ]
    [ WITH
        [
            FULLSCAN
          | SAMPLE number { PERCENT | ROWS }
          | RESAMPLE
          | <update_stats_stream_option> [ ,...n ]
        ]
        [ [ , ] [ ALL | COLUMNS | INDEX ]
        [ [ , ] NORECOMPUTE ]
    ] ;

<update_stats_stream_option> ::=
    [ STATS_STREAM = stats_stream ]
    [ ROWCOUNT = numeric_constant ]
    [ PAGECOUNT = numeric_constant ]

As you can see, you can update all of the statistics for a table or view, or pick a particular statistic to update by index name or statistic name. You can also specify some update options:

  • FULLSCAN forces a complete read of every row of data.
  • SAMPLE lets you specify how much of the data SQL Server should sample in building statistics. By default, it will figure out its own sampling coverage.
  • RESAMPLE uses the same sampling rate as the last time the statistic was gathered.
  • ALL, COLUMNS, or INDEX specify whether the update should be to all statistics, column statistics, or index statistics. The default is all statistics.
  • NORECOMPUTE turns off automatic statistics gathering for this table or statistic.
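For example, against the table used earlier:

```sql
-- Update every statistic on the table using default sampling
UPDATE STATISTICS Sales.SalesOrderDetail;

-- Rebuild a single statistic from a complete read of the data
UPDATE STATISTICS Sales.SalesOrderDetail IX_SalesOrderDetail_ProductID
WITH FULLSCAN;

-- Sample half of the rows when updating all statistics
UPDATE STATISTICS Sales.SalesOrderDetail
WITH SAMPLE 50 PERCENT;
```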

The Undocumented Options

If you check Books Online, you’ll find the update_stats_stream_option documented as “This syntax is for internal use only and is not supported. Microsoft reserves the right to change this syntax at any time.”

The STATS_STREAM option, so far as I know, remains undocumented. But we now know what the other two options do, thanks to the new blogging spirit of openness and a blog posting from the SQL Server Query Optimization Team. It turns out that ROWCOUNT and PAGECOUNT can be used to trick the query optimizer by letting you fake the internal metadata about a particular table – specifically the count of rows and count of pages that the table occupies.

If you’re going to experiment with this, please do not do so in a production database!

For example, you can vastly increase the apparent rowcount and pagecount of the Sales.SalesOrderDetail table by issuing this command:

 

UPDATE STATISTICS Sales.SalesOrderDetail
WITH ROWCOUNT = 5000000, PAGECOUNT = 5000000

Why would you want to do this? SQL Server’s query optimizer does things differently on small tables and on large tables. During development, you likely don’t have a whole lot of data in your tables. Rather than spend time generating huge amounts of sample data, you can use the undocumented options of UPDATE STATISTICS to see how the query optimizer will treat your tables after they’ve been in production for a while. This can be a quick and dirty method to uncover potential performance issues before they arise, and perhaps point out the need for additional indexes or other schema work. I’d recommend confirming any such changes by retesting with realistic amounts of sample data, though.

Using Statistics Wisely

In most cases, SQL Server 2005 is so well tuned in this area that you only need to remember a single best practice: leave the statistics set to automatically update and forget about them. If you’re having a problem with particular queries, there are a few things you can do to make statistics more accurate. These include running an update with FULLSCAN, creating multicolumn statistics if you have multiple-column WHERE clauses that aren’t covered by existing statistics, and updating statistics more frequently if you’re filling a table with ascending data. Armed with the ability to evaluate the quality of statistics, you should be able to decide whether you need to undertake any of these steps in your own databases.

 

 


12 Essential Steps After Installing SQL Server

Rolling out SQL Servers is a key task for DBAs. While for some DBAs this can be a frequent exercise, others will have to face it at one time or another during their career in an organisation.

Successfully planning an installation requires a fair bit of knowledge about SQL Server and a basic understanding of the operating system. Once the installation is complete, the system usually needs some changes to default configuration before production databases can be rolled out. In this article I list some common steps that I take after installing a SQL Server. This can be used as a “checklist” to ensure consistency for all the SQL Servers that you roll out.

Step 1: Install the Service Pack, Hotfixes and Cumulative Updates

This is the first step you should take after a successful installation. You should always install the latest service pack available unless there is a clear business reason not to do so. At the time of this writing, SQL Server 2005 is at Service Pack 3 and SQL Server 2008 at Service Pack 1. Service packs are freely downloadable from the Microsoft web site. If there are hotfixes or cumulative updates available after the last service pack was released, you should apply them as well. This ensures that your server is running the latest version of the database software. Once all the patches are installed, make a backup of your system databases and reboot the server.

Step 2: Configure SQL Services

Although your planning phase should determine what SQL services you are going to install, there may be occasions when a service is rolled out but not needed immediately. Perhaps your business requires you to install a SQL Server and although reporting is not going to be part of it, management insists you install Reporting Services – “just in case”. If a service is not going to be needed immediately (or needed at all), disable it.

One example of a service that you can disable is the Browser service. This service is installed by default. If you are not running named instances or planning to install multiple instances in one machine, you can disable this service.  You can disable the Browser service from the SQL Server Configuration Manager.

Another service that you can disable is the SQL Server VSS Writer. Unless you are using applications that make use of the Windows Volume Shadow Copy infrastructure to backup SQL databases, you can disable this service safely. Note that SQL VSS Writer is not available from the Configuration Manager. You can access it from the Control Panel > Services applet (see below).

While non-essential services should be disabled, other services like the Agent are supposed to be running continuously. This is because scheduled backups, maintenance plans, replication and jobs are dependent on the Agent service. Similarly, if your server is going to execute Integration Service packages, ensure the SSIS service starts automatically every time the server boots up. You can configure the start-up property of a service from the SQL Server Configuration Manager.

One thing that is not available in the Configuration Manager is the recovery behaviour of SQL Services – i.e. what should the services do if they fail unexpectedly. To access this behaviour, you can bring up the service’s property from the Control Panel’s “Services” applet. The “Recovery” tab of the service property allows you to define what the computer should do in case the service fails once, twice or subsequent times. From here, you can specify that the service should start automatically if it fails.

The same type of behaviour can be set in the SQL Server Agent’s property page. Here, you can specify that SQL or Agent service should restart automatically if they stop unexpectedly.

Step 3: Configure Default Directories

Everything has its place in this world and when it comes to files and folders, SQL Server is no exception. A number of directories are created during installation – you have one directory for storing data and log files, another directory for replication, another for full text indexes, and yet another for backup files.

Where SQL places its files is determined mostly during the installation process. This happens because either you provide this information explicitly or SQL uses a default set of locations. The directory path information you provide during the setup should really be coming from your planning phase. Once the installation is complete, it is still possible to fine-tune some of these component locations.

If you have followed the best practice, your data files should be residing in a separate physical drive and directory.  The transaction log files of user databases should also be in a separate drive and folder. You can configure both the locations from the server’s properties:

Similar to the data and log directories, you can specify the default locations for:

a)    Backup files

b)    Replication files

c)    Full text index files

d)    Job output files

SQL Server creates the folders for these components in the same location where data files are placed. To ensure data files are given the maximum possible dedicated space as far as disk activity is concerned, I try to move these folders to a separate drive.

Unfortunately, you cannot use a GUI front-end to configure these alternate locations. The path information is saved as registry values and that means you either have to manually change registry entries or use an extended stored procedure like xp_regwrite.  Also, the location of the registry keys will be different between named instances. They will also vary depending on your version of SQL Server (2005 or 2008). Generally, you will be looking for a registry key called “Working Directory”.

The script below shows how I changed the default locations for a default instance of SQL Server 2005 installed in a Windows Server 2003 system:

Configuring_Default_Directories.sql
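The attached script is not reproduced here, but a sketch of the approach might look like the following. The registry path assumes a default SQL Server 2005 instance (MSSQL.1) – named instances and other versions use different keys – and the backup folder is a placeholder that must already exist:

```sql
-- Point the default backup directory at a dedicated drive.
-- xp_regwrite is undocumented; use with care on production servers.
EXEC master..xp_regwrite
    @rootkey    = 'HKEY_LOCAL_MACHINE',
    @key        = 'SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer',
    @value_name = 'BackupDirectory',
    @type       = 'REG_SZ',
    @value      = 'E:\SQLBackup';
```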

And of course, the paths you specify as registry key values must exist – that means you need to create (or move) the folders.

To ensure the system is aware of your changes, restart SQL services after making the changes.

Step 4: Configure Default Database Properties

We all know it – model database works as the “template” for new databases. If you want to set some common properties across all your user databases, you can set them in the model. A number of properties can be set here to take advantage of this approach:

a)    Recovery Model: perhaps you are installing SQL Server for a data warehouse that will host a number of data marts that would not require full recovery model. You can set the recovery model of the model database to “simple” mode in such cases.

b)    File Properties: by default, the data files are configured to grow by 1 MB and the log files by 10% of their size. This means that a large transaction log file will grow by a large amount when it needs more space. For example, every time a 6 GB log file needs to grow, it will take up another 600 MB of disk space in one gallop. For a write-intensive database, the data file’s incremental growth of 1 MB can result in fragmentation. As a general rule of thumb, the log files should be a quarter to half the size of the data files. If you have an idea of how big your data and log files are going to be and how they should grow, you can set up these properties in the model database. I generally set the growth increments in megabytes rather than leaving them as a percentage.

Another area where you can make use of the model database is filegroups. If you want your user tables to be created on secondary data files and leave the primary filegroups with system objects only, you can create secondary data files in the model database and assign them to a user defined filegroup and make that filegroup default.

c)    Database Options: depending on your requirements you can set up other database options in the model database. Examples of such options could be torn page detection, ANSI defaults etc.

d)    Custom Objects: if you have custom objects that need to be present in each user database, you can put them in the model database.
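As a sketch, points (a) and (b) could be applied to the model database like this. The sizes are examples only; model's logical file names are modeldev and modellog:

```sql
-- New databases will inherit simple recovery and these file settings
ALTER DATABASE model SET RECOVERY SIMPLE;

ALTER DATABASE model
MODIFY FILE (NAME = modeldev, SIZE = 100MB, FILEGROWTH = 50MB);

ALTER DATABASE model
MODIFY FILE (NAME = modellog, SIZE = 50MB, FILEGROWTH = 25MB);
```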

Step 5: Configure tempdb Database

The temporary database is used heavily in SQL 2005 and later versions. Best practice suggests that for optimal performance, SQL Server’s tempdb database files should be placed on a separate dedicated disk drive. This should be followed wherever possible, but even if your server’s current disk configuration does not allow this, you can still make some changes to ensure tempdb is optimally tuned. Part of this tuning means you should allocate a healthy amount of space to this database.

The number of data files you should allocate in the tempdb database will depend on the number of CPUs (logical or physical) present in the machine. When SQL Server accesses a database, it starts a scheduler thread for each data file present. So if you have eight CPUs in your physical server, the tempdb database should have eight data files to “load balance” the threads.

The easiest way you can check the number of CPUs present in the machine is by looking at the Windows Task Manager’s Performance tab.

Also, data files in a database are used in a “proportional fill” algorithm. This means that if you have two data files in a database – one 4 GB and another 8 GB, SQL will try to stripe the data across the two files in a 1:2 ratio. For each stripe of data written to the 4 GB file, SQL will write two stripes in the 8GB file – hence the term “proportional fill”.

Now if you configure tempdb to have the same number of data files as the number of CPUs present and specify each data file to have the same size and growth, you are effectively ensuring the CPU load is evenly distributed across tempdb.

The following script shows one such configuration done from the query window.

Configuring_tempdb_Data_Files.sql
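The attached script is not included here, but for a four-CPU machine it would look something like the following. The drive letter, file sizes and growth values are placeholders:

```sql
USE master;

-- Resize the existing file (tempdev) so all four files match
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

-- Add three more data files with identical size and growth
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
```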

As you can see from the code, we are adding three new data files to tempdb (since there are four CPUs present and there is already one data file) and ensuring that all four files have the same size and growth characteristics.

Once you have completed the tempdb configuration, you will need to restart the SQL instance for the changes to take effect.

Step 6: Configure Server Properties and Features

Of all the SQL Server configurations, this one is probably the most important and has a direct effect on how the server behaves. What properties you configure and what values you set here will depend on your organisation’s business needs, SOEs etc. There are a number of properties you can set for the database server engine and there is more than one way you can set some of them. I like to divide these properties into three categories:

Configuring Features:

If it is SQL 2005, use the Surface Area Configuration tool for enabling some of the SQL Server features. For SQL 2008, use the Surface Area Configuration Facet from the Server properties. Both these versions list the same features and by default all of these features are disabled.

What you enable here will depend on what you want your server to do. You may want to enable Database Mail (discussed later). If your stored procedures are going to use the xp_cmdshell utility, you will need to switch it on here. If you think there will be managed code in your database application, you need to enable CLR.

Configuring Parameter Values:

a)    Memory: Set the minimum and maximum server memory parameters. You can do so from the server properties dialogue box or using the sp_configure system stored procedure. If AWE is required, enable that too.

b)    Fill factor: Set a default value for index fill factor. This can save page splitting in indexes in future.

c)    Processor: Configure max. degree of parallelism, cost threshold for parallelism as necessary.

d)    Network connections: Configure remote query timeouts if you want to override the default behaviour of 600 seconds.

e)    Security: Although you can change the authentication mode (Trusted or Mixed) at this stage – this is a choice you probably have made during the installation. You can also enable C2 audit mode as part of security configuration.

f)     Trace Flags: If you are installing SQL Server 2005, you may wish to enable trace flags as start-up parameters for detecting and reporting deadlocks. The trace flag 1204 can be used for such purposes.
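Until the flag is added as a -T1204 startup parameter, it can also be enabled for all sessions from a query window:

```sql
-- Write deadlock participant information to the error log;
-- -1 applies the flag globally until the next service restart
DBCC TRACEON (1204, -1);
```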

Configuring SQL Server Network:

This is where you define how SQL will use the network to talk to other computers. The first thing you need to do here is enabling or disabling the network protocols SQL will use to listen for incoming traffic. You set this from the Configuration Manager. The picture below shows how I have disabled Named Pipes.

Next, you need to see if your applications require server aliases. Some of the older apps may require you to define a server alias for a named instance.

Finally, you need to decide if you will assign non-default TCP port number for your instance. The default instance uses a default port number of 1433 and the named instances are usually assigned dynamic port numbers at start up. If you want to assign a non-default port number to SQL Server for TCP connections, you can set it here.

Most of the server property changes would require restarting the service.

Step 7: Configure Security

When it comes down to SQL Server security, companies usually take one of two approaches – some follow a rigorous data security policy, others just leave it all to defaults (and to potential attacks). If you are working for a company like the former, you are in luck – you will know what to do. If your company does not have any specific procedures for implementing SQL Server security (written or unwritten), it is up to you how you define it. Once you have completed this part though, you will have something to refer back to in future.

First and foremost, ensure there are as few members in the sysadmin server role as possible. The Local Administrators Windows group is a member of the sysadmin role by default. Unless you want to have your Windows Administrators to have access to the SQL Server, you should get rid of this group from the sysadmin fixed server role. The next step would be to revoke the login for this group.

If you have a pre-defined account that you use for database administration, add that to the sysadmin server role.

You can also create Credentials and Proxy accounts at this stage if you want your jobs to run SSIS packages or operating system commands under those accounts.
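A minimal sketch of a credential and a matching Agent proxy, assuming a hypothetical Windows service account `CORP\svc_jobrunner`:

```sql
-- Credential storing the Windows account the job steps should run as
CREATE CREDENTIAL JobRunnerCred
    WITH IDENTITY = N'CORP\svc_jobrunner', SECRET = N'<password>';

-- Agent proxy mapped to the credential, granted the CmdExec subsystem
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'CmdExecProxy',
    @credential_name = N'JobRunnerCred',
    @enabled = 1;

EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'CmdExecProxy',
    @subsystem_name = N'CmdExec';
```

Job steps of type "Operating system (CmdExec)" can then select `CmdExecProxy` in the Run as drop-down instead of running under the Agent service account.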

Step 8: Configure Error Logs

SQL Server error log is the first port of call for DBAs when they troubleshoot server related issues. By default, the server keeps six logs as archive and one log as current. Every time SQL service starts, the system begins a new log file. The currently active log file name is ERRORLOG. The log file in use before the restart becomes archived as ERRORLOG.1, the previous ERRORLOG.1 becomes ERRORLOG.2, and so on.

One of the annoying things with the default error log behaviour is that any one file can contain several days' worth of entries. When you try to view a file, it can take some time to load, and if you are interested in a particular day's events, you will need to sift through the entries or use a filter.

Also, log files are overwritten in a rolling fashion once six archives exist. That means you won't have any history older than the oldest log file (ERRORLOG.6).

What you can do here is to configure SQL Server to keep a predefined number of log files. You can also create a scheduled job to “recycle” the error logs regularly. Recycling means that SQL Server will start a new log file without restarting the service.

To keep a pre-defined number of archived logs, you will need to configure the SQL error log properties.

As you can see, I would recommend keeping at least the last thirty log files.

Once the number of error log files is configured, you can create a scheduled job to run every day to reinitialise the log. The job will have only one step that will call the sp_cycle_errorlog system stored procedure. This procedure starts a new error log file without restarting the SQL service. I usually schedule it to run at 12:00 AM every morning.
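Both changes can be scripted. The registry write below is what the SSMS error log properties dialogue does behind the scenes; `xp_instance_regwrite` is an undocumented procedure, so treat this as a sketch and prefer the dialogue if in doubt:

```sql
-- Keep 30 archived error logs (same setting as the SSMS error log properties)
EXEC xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs', REG_DWORD, 30;

-- Start a new error log without restarting the service
-- (this is the single step the daily scheduled job would run)
EXEC sp_cycle_errorlog;
```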

The combined effect of these two changes is that you will have one new log file created every day at midnight. If your server does not restart in between, you will have the last month's log entries on disk, with each day's log in a separate file.

The same kind of job can be defined for SQL Agent error log files. The latest Agent error log is named SQLAGENT.OUT. Executing the msdb database’s sp_cycle_agent_errorlog procedure from a job will create a new Agent log file and archive the previous one. Unlike SQL Server error logs though, you cannot use a dialogue box to specify the number of archived Agent error logs to keep.
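The Agent job step for this is a one-liner:

```sql
-- Archive the current SQLAGENT.OUT and start a new Agent error log
EXEC msdb.dbo.sp_cycle_agent_errorlog;
```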

Step 9: Configure Database Mail

This step is optional. If your databases are going to send e-mails, you will first need to enable database mail from the Surface Area Configuration tool (SQL 2005) or the Server Properties’ Surface Area Configuration Facet (for SQL 2008).

Once enabled, you can easily configure it using a wizard in the Management Studio.
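As an alternative to the GUI switch, Database Mail can be enabled and tested from T-SQL. In this sketch the profile name `DBA Mail` and the recipient address are hypothetical; the profile itself still needs to be created (via the wizard or the `sysmail_*` procedures) before the test message will send:

```sql
-- Enable Database Mail XPs (equivalent to the Surface Area Configuration switch)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;

-- Send a test message through an existing profile
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBA Mail',
    @recipients   = N'dba-team@example.com',
    @subject      = N'Database Mail test',
    @body         = N'Database Mail is configured on this instance.';
```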

You can also enable the mail profile for SQL Agent service.

If your apps are not going to use the mail feature – don’t enable or configure it.

Step 10: Configure SQL Agent & Operator

We are almost at the end of our configuration process. It is now worth having a look at the SQL Agent properties.  The three most important things that can be done here are:

a)    Configuring SQL and Agent services to restart when they stop unexpectedly (discussed before)

b)    Enabling the mail profile for SQL Agent. You can do it from the SQL Agent’s Alert System properties

c)    Increasing the size of the job history log in the msdb database

Changing the size of the job history log in the msdb database is usually a good idea if your server is going to host a large number of scheduled jobs.
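The job history caps can also be raised in script. `sp_set_sqlagent_properties` is undocumented (it is what the Agent properties dialogue calls), and the numbers below are illustrative rather than recommendations:

```sql
-- Raise the overall and per-job history row limits for SQL Agent
EXEC msdb.dbo.sp_set_sqlagent_properties
    @jobhistory_max_rows = 10000,
    @jobhistory_max_rows_per_job = 500;
```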

Define DBA Operators:

Every database server should have one standard DBA operator defined. This operator will be notified via e-mail when jobs fail or alerts fire. Do not make yourself this operator – if you leave the company, your e-mail account is likely to be deleted, and those who come after you will need to spend time reconfiguring everything when the notifications start to fail. That's why you should ask your system administrators to create a standard DBA account and assign a mailbox to it.
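Creating the operator itself is straightforward (the name and shared mailbox address below are hypothetical examples):

```sql
-- Standard DBA operator tied to a shared team mailbox
EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',
    @enabled = 1,
    @email_address = N'dba-team@example.com';
```

Job and alert notifications can then be pointed at the `DBA Team` operator rather than at any individual.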

Step 11: Configure Operating System

You can also make some OS-level changes to give SQL Server some extra power. This works if you have administrative privileges on the local Windows server.

If the database application is going to participate in distributed transactions through linked servers, ensure that Network DTC is enabled.

To reach this screen in a Windows Server 2003 system,

1.    Start the Component Service applet (Start > Administrative Tools > Component Services)

2.    Expand Component Services node and then Computers node in the left side pane

3.    Right Click on My Computer and choose Properties

4.    Choose the MS DTC tab

5.    Click the Security Configuration button

The steps are similar for Windows Server 2008

You should also give your SQL Server service account the right to "Lock pages in memory". This ensures that Windows will not swap SQL Server pages from memory onto the paging file on disk. From version 2005, SQL Server can detect memory pressure from OS signals and dynamically allocate and de-allocate pages itself.

To configure this property, assign the SQL service account to this privilege from the Local Security Policy.


Step 12: Set up Maintenance Plans for System Databases

Your server is now ready for production database rollouts. To wrap it all up, take a moment to create a maintenance plan for the system databases. This maintenance plan should reorganise the indexes, run DBCC CHECKDB and update the statistics. It should also back up the master, model and msdb databases. As a best practice, the plan should back up the system databases once every day. When you have created the maintenance plan, run each of the tasks manually to verify that they succeed.
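The backup and check portions of that plan boil down to statements like these (the backup path is an assumption; point it at your own backup drive or share):

```sql
-- Nightly full backups of the system databases
BACKUP DATABASE master TO DISK = N'D:\Backups\master.bak' WITH INIT;
BACKUP DATABASE model  TO DISK = N'D:\Backups\model.bak'  WITH INIT;
BACKUP DATABASE msdb   TO DISK = N'D:\Backups\msdb.bak'   WITH INIT;

-- Integrity checks the plan should also run
DBCC CHECKDB (master);
DBCC CHECKDB (model);
DBCC CHECKDB (msdb);
```

Note that tempdb is rebuilt at every service start-up and cannot be backed up, which is why it is absent from the list.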

Conclusion

Although it is not possible to describe every possible installation scenario, the steps discussed here should be common in every installation. You should customise this list to incorporate any extra configuration necessary. For example, you may need to configure Reporting Services after SQL configuration is complete. Perhaps there are custom jobs or DBA stored procedures that you roll out in every SQL Server in the company. In each case, these extra steps will form parts of your post-installation configuration.

