
Archive for the ‘HA’ Category


I got a question in an interview: if we run the TRUNCATE command on the log shipping primary database, what impact will it have on the secondary database?

The impact of the TRUNCATE command will be forwarded to the secondary database. Another question may come to mind: since TRUNCATE is often described as a non-logged command, how can it affect the secondary database?

TRUNCATE TABLE does not delete the data row by row. Instead, SQL Server simply deallocates the pages belonging to the table, and those deallocations are captured in the transaction log. When that log backup is restored to the secondary database, the same page deallocations are replayed there.

DBCC Log (<database name>,4)
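As a rough way to see this for yourself, here is a minimal sketch (assuming a hypothetical database LogShipDemo and table dbo.SalesHistory) that truncates a table and then dumps the log records with the undocumented DBCC LOG command shown above; the page/extent deallocation entries are what get shipped to the secondary.

USE LogShipDemo;
GO
TRUNCATE TABLE dbo.SalesHistory;
GO
-- Output level 4 includes full row details; look for the page/extent deallocation records
DBCC LOG (LogShipDemo, 4);
GO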

Read Full Post »

This post is about what happens if we provide the same location for the backup folder and the copy destination folder while configuring log shipping.

Log shipping will keep working fine without any impact. The copy job is effectively not needed when we use the same location for both the backup and copy folders, because the log backup files are already in the folder that the restore job reads from.
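A minimal sketch of the relevant configuration step on the secondary server, assuming a hypothetical share \\PRIMARYSRV\LogShipBackup and database SalesDB; when the copy destination is the same path as the backup source, the copy job has nothing to move.

DECLARE @LS_Secondary__CopyJobId uniqueidentifier, @LS_Secondary__RestoreJobId uniqueidentifier;
EXEC master.dbo.sp_add_log_shipping_secondary_primary
     @primary_server = N'PRIMARYSRV'
    ,@primary_database = N'SalesDB'
    ,@backup_source_directory = N'\\PRIMARYSRV\LogShipBackup'
    ,@backup_destination_directory = N'\\PRIMARYSRV\LogShipBackup'  -- same folder as the backup share
    ,@copy_job_name = N'LSCopy_SalesDB'
    ,@restore_job_name = N'LSRestore_SalesDB'
    ,@file_retention_period = 4320
    ,@copy_job_id = @LS_Secondary__CopyJobId OUTPUT
    ,@restore_job_id = @LS_Secondary__RestoreJobId OUTPUT;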

(Screenshots: Backup Settings, Secondary Database Settings, History of Copy Job)

Read Full Post »

1.  If we drop a column from a table in the publisher database, that column will also be dropped from the corresponding table in the subscriber database.

2.  If we add a new column to a table in the publisher database, that newly added column will be added to the subscriber database table, based on the criterion below.

In Publication Properties –> Subscription Options, if the “Replicate schema changes” value is True, the newly added column will be added to the subscriber database table; if it is False, no change is made.
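The same option can be read or changed with T-SQL; a minimal sketch, assuming a hypothetical publication named SalesPub (1 corresponds to True in the GUI, 0 to False):

EXEC sp_changepublication
     @publication = N'SalesPub'
    ,@property = N'replicate_ddl'
    ,@value = 1;  -- 1 = replicate schema changes, 0 = do not replicate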

Read Full Post »

Prerequisites for Transactional Replication

Primary Key:

This is a basic rule: every article must have a primary key to be a candidate table for transactional replication. Primary keys are used to maintain uniqueness of records and to maintain referential integrity between tables, and that is why every published article is required to have a primary key.
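For example (with a hypothetical table dbo.Orders that lacks a key), adding a primary key makes it a valid candidate article:

-- OrderID must be NOT NULL to take part in the primary key
ALTER TABLE dbo.Orders
    ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID);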

Securing snapshot folder:

Schedule:

Network bandwidth:

Enough disk space for database being published:

We need to make sure that we have ample space available for the transaction log of the published database, as it will continue to grow, and SQL Server will not truncate the log records until they are moved to the distribution database. Please note that even in the simple recovery model, the log file can grow large if replication breaks. That is why it is recommended to set the T-log’s auto grow option to “true”. We should also make sure that the distribution database is available and the Log Reader Agent is running.
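A minimal sketch of the two checks described above, assuming a hypothetical published database SalesDB whose log file has the logical name SalesDB_log; a log_reuse_wait_desc of REPLICATION means the log is waiting for records to be moved to distribution.

-- Confirm why the log cannot be truncated (REPLICATION = waiting on the Log Reader Agent)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'SalesDB';

-- Make sure the log file can auto-grow rather than filling up if replication breaks
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_log, FILEGROWTH = 512MB);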

Enough disk space for distribution database:

It is necessary to have enough disk space allocated to the distribution database. This is because the distribution database will store the transactions marked for replication until they are applied to the subscriber database within the distribution retention period (which is 72 hours by default), or until the Snapshot Agent re-runs and creates a new snapshot.
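The retention period can be checked or adjusted on the distributor; a minimal sketch, assuming the default distribution database name “distribution” (72 hours is the default maximum retention):

EXEC sp_changedistributiondb
     @database = N'distribution'
    ,@property = N'max_distretention'
    ,@value = N'72';  -- hours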

Use domain account as service account:

We should always use a domain account as the service account, so that when the agents access the shared folder of snapshot files they won’t run into problems; the built-in accounts are local to the system and do not have permission to access a network share. When specifying the service account, we are asked to choose between the built-in accounts (Local System, Network Service) and “This account”, where we specify the domain account under which the service will run.

Read Full Post »

Error :-

Check that the replication accounts have the appropriate permissions.

Example :-

In this case the database did not have a valid owner, so the error occurred. Once you set the owner of the database, the error will go away.
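A minimal sketch of the fix, assuming a hypothetical database SalesDB and using sa purely for illustration; assigning a valid owner gives the replication agents the rights they were missing.

ALTER AUTHORIZATION ON DATABASE::SalesDB TO sa;
-- Older equivalent, run inside the database: EXEC sp_changedbowner 'sa';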

 

Read Full Post »

  1. Both the source and target SQL Server instances used for log shipping must be running under a domain user account.
  2. Both the source and target instances must be running SQL Server 2005 Enterprise, Workgroup or Standard Edition, with all service packs and patches applied.
  3. Account(s) running SQL Server service on source and destination instance must have access to the network folder where you will store transaction log backup files.
  4. Computers hosting primary and standby SQL Servers must be connected to a network.
  5. SQL Server Agent service must be running on both source and destination servers. Log shipping is implemented as a collection of jobs. If SQL Server Agent service isn’t running, none of the jobs will execute.
  6. Log shipping must be set up at the database level. The database must use either the full or bulk-logged recovery model. The simple recovery model truncates the transaction log at each checkpoint and does not support transaction log backups, so databases that use the simple recovery model cannot participate in log shipping (see the quick check after this list). If you want to provide high availability for multiple databases on a single instance of SQL Server, you must set up log shipping separately for each database.
  7. Although not a prerequisite for log shipping configuration, it is essential to realize that in case of failover the target server becomes your primary database server. If you want to provide an acceptable level of service to your user community in case of the primary server’s failure, ensure that the standby (target) server has the same hardware configuration as the primary (source) server.
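A quick check of the recovery-model prerequisite from item 6, assuming a hypothetical database SalesDB:

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'SalesDB';

-- Switch to FULL recovery if the database is currently in SIMPLE
ALTER DATABASE SalesDB SET RECOVERY FULL;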

For More Details :- http://sqlserverpedia.com/wiki/Log_Shipping

Read Full Post »

Older Posts »