Archive for the ‘Interview Questions’ Category

        Reasons to Backup Master Database – Why Should Master Database Be Backed Up (Pinal Dave): http://blog.sqlauthority.com/2009/02/15/sql-server-reasons-to-backup-master-database-why-should-master-database-backedup/

1. Back up the master database.

2. For testing purposes, intentionally delete a login from SQL Server.

3. Verify that the login has been deleted from SQL Server.
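The setup above can be sketched in T-SQL. This is only an illustration: the backup path and the login name TestLogin are assumptions, not part of the original post.

```sql
-- Back up the master database before the test
BACKUP DATABASE master TO DISK = 'C:\Backup\master.bak';

-- For testing, intentionally drop a login (TestLogin is a hypothetical login)
DROP LOGIN TestLogin;
```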

Now we will restore the master database from the backup file:

1. Open SQL Server Configuration Manager.

2. Stop the SQL Server service.

3. Add the -m parameter to the SQL Server startup parameters to start SQL Server in single-user mode.

4. Start the SQL Server service.

5. Open a command prompt and restore the master database.

6. Open SQL Server using SSMS and check that the deleted login has been restored.
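Step 5 can be sketched as follows, assuming a default local instance and the backup path used earlier (both are assumptions for illustration):

```
sqlcmd -S . -E -Q "RESTORE DATABASE master FROM DISK = 'C:\Backup\master.bak' WITH REPLACE;"
```

SQL Server shuts itself down after master is restored, so remove the -m parameter before restarting the service normally.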


Read Full Post »

I got this question in an interview: if you run the TRUNCATE command on a log shipping primary database, what impact does it have on the secondary database?

The impact of the TRUNCATE command is forwarded to the secondary database. Another question may arise in your mind: since TRUNCATE is a minimally logged command, how can it have an impact on the secondary database?

The TRUNCATE TABLE command does not delete the data row by row. Instead, SQL Server simply deallocates the table's pages, and this deallocation operation is captured in the transaction log. When that log backup is restored on the secondary database, the same deallocation happens there.

DBCC Log (<database name>,4)
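You can see the deallocation records yourself with the undocumented fn_dblog function. A sketch, where dbo.T1 is a hypothetical table:

```sql
TRUNCATE TABLE dbo.T1;

-- Page/extent deallocations for the table appear as log records
SELECT [Operation], [Context], AllocUnitName
FROM fn_dblog(NULL, NULL)
WHERE AllocUnitName LIKE '%T1%';
```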

Read Full Post »

This post is about log shipping configuration: if we provide the same location for the backup job and the copy job, what impact does that have on log shipping?

Log shipping will work fine without any impact. The copy job is not really needed when we use the same location for backup and copy, because the restore job already finds the log backups in that folder.

Backup Settings (screenshot)

Secondary Database Settings (screenshot)

History of Copy Job (screenshot)

Read Full Post »

This post was taken from: http://www.teratrax.com/articles/sql_server_blocking_deadlocks.html

SQL Server blocking

SQL Server blocking occurs when one connection (user process or application process) places a lock on a table (or a number of rows) and a second connection attempts to read or modify the data under the lock. Depending on the type of the lock, this can cause the second connection to wait until the first connection releases its lock. A blocked connection waits indefinitely for the blocking connection to release its lock.

The more blocking happens on the server the less concurrency the system achieves. A certain amount of blocking is unavoidable but too many blocks for longer periods of time can degrade the performance of SQL Server.

SQL Server deadlocks

The combination of two blocked connections where the first is blocking the second and the second is blocking the first is called a deadlock. Since deadlocks are not naturally resolved with time, SQL Server automatically kills one of the connections (the deadlock victim) once it detects a deadlock. This allows the other connection to continue with its transaction.

Although deadlocks can be caused by two short blocks (Fraction of a second), it is often the long blocks that increase the chances of a deadlock to happen.

Blocks escalating to deadlocks

The following sequence of events leads to a deadlock. Consider two applications (A1, A2) accessing two different tables (T1, T2):

Event 1: A1 places a lock on T1 inside its transaction and continues to execute other statements

Event 2: A2 places a lock on T2 inside its transaction and continues to execute other statements

Event 3: A1 attempts to place a lock on T2 (Needs to access T2 before it can finish the transaction) but has to wait for A2 to release its lock

At this point, a block is created since A2 is blocking A1

Event 4: While A1 is waiting, A2 attempts to place a lock on T1 (Needs to access T1 before it can finish its own transaction)

A deadlock is created since two connections have blocked one another. SQL Server automatically resolves the deadlock by choosing one of the connections as a deadlock victim and killing it.
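The four events above can be reproduced with two query windows. A sketch, where T1, T2, and column c are hypothetical names:

```sql
-- Session A1 (run first)
BEGIN TRAN;
UPDATE T1 SET c = c;  -- Event 1: A1 locks T1

-- Session A2 (run in a second window)
BEGIN TRAN;
UPDATE T2 SET c = c;  -- Event 2: A2 locks T2

-- Session A1
UPDATE T2 SET c = c;  -- Event 3: A1 is blocked, waiting on A2

-- Session A2
UPDATE T1 SET c = c;  -- Event 4: deadlock; one session is chosen
                      -- as the victim and receives error 1205
```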

Read Full Post »

This post was taken from :- http://thakurvinay.wordpress.com/2011/06/20/difference-between-checkpoint-and-lazywritter/


Checkpoint vs. Lazy Writer

| Checkpoint | Lazy Writer |
|---|---|
| 1. Flushes dirty pages to disk. | 1. Flushes dirty pages to disk. |
| 2. Flushes only data pages to disk. | 2. Checks available memory and removes items from the buffer pool (execution plans, compiled plans, data pages, memory objects). |
| 3. By default, occurs approximately every 1 minute. | 3. Occurs depending on memory pressure and resource availability. |
| 4. Can be managed with the sp_configure 'recovery interval' option. | 4. It is lazy; SQL Server manages it on its own. |
| 5. Does not check memory pressure. | 5. Monitors memory pressure and tries to maintain available free memory. |
| 6. Makes crash recovery faster, since the data files are up to date with the log. | 6. No role in recovery. |
| 7. Occurs for DDL statements. | 7. Occurs as required. |
| 8. Occurs before BACKUP/DETACH commands. | 8. Occurs as required. |
| 9. Depends on configuration settings, so we can control it. | 9. Works on least recently used pages and removes unused plans first; no user control. |
| 10. In the simple recovery model, triggers log truncation when the log is 70% full. | 10. No effect on the recovery model. |
| 11. Can be run manually with the CHECKPOINT command. | 11. No command for the lazy writer. |
| 12. Very small performance impact. | 12. No performance impact. |


A checkpoint occurs at the database level.

To find when a checkpoint occurred, use the undocumented function fn_dblog:

select * from ::fn_dblog(null, null) WHERE [Operation] like '%CKPT'

Or, on SQL Server 2000:

select top 10 [Operation], [checkpoint begin], [checkpoint end] from ::fn_dblog(null, null) WHERE [Operation] in ('LOP_BEGIN_CKPT', 'LOP_END_CKPT', 'LOP_XACT_CKPT')

Also, enabling a trace flag will write information to the error log about when a checkpoint started and in which database:

DBCC TRACEON(3502, -1)

Checkpoints have a performance impact (very low I/O) on heavy systems, so it is even possible to disable automatic checkpoints using trace flag 3505 (never do this in practice).

Please correct me if you have any comments on this.




The lazy writer operates at the server level. To check when the lazy writer runs, use the performance counter:

SQL Server:Buffer Manager\Lazy writes/sec
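The same counter can also be read from a DMV. A sketch (the instance-name portion of object_name varies, hence the LIKE):

```sql
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Lazy writes/sec';
```

Note that cntr_value is cumulative since the server started; sample it twice and take the difference to get a per-second rate.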

Read Full Post »

1. If we delete a column in a publisher database table, that column will also be deleted in the subscriber database table.

2. If we add a new column to a publisher database table, the newly added column will be added to the subscriber database table according to the criteria below.

In Publication Properties –> Subscription Options –> Replicate schema changes: if the value is True, the new column is added to the subscriber database table; if False, no change is made.
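The same option can also be set with T-SQL via sp_changepublication (the publication name here is an assumption for illustration):

```sql
EXEC sp_changepublication
    @publication = 'MyPublication',
    @property = 'replicate_ddl',
    @value = 1;  -- 1 = replicate schema changes (True), 0 = do not (False)
```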

Read Full Post »

Database users that are not associated with SQL Server logins are called orphaned users.
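A quick way to list orphaned users in the current database, as a sketch: the classic (now deprecated) report procedure, plus a SID comparison between database users and server logins.

```sql
-- Classic report of orphaned users (deprecated but widely used)
EXEC sp_change_users_login 'Report';

-- SID comparison: database users with no matching server login
SELECT dp.name AS orphaned_user
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
WHERE sp.sid IS NULL
  AND dp.type = 'S'  -- SQL users
  AND dp.name NOT IN ('guest', 'dbo', 'INFORMATION_SCHEMA', 'sys');
```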

Read Full Post »

Older Posts »