Transactional replication is a flexible data replication technology that operates at the level of individual transactions. Specific transactions are published from a primary (publisher) database and replicated to one or more secondary (subscriber) databases: each data modification (insert, update, delete) is captured and delivered to the subscribers in near real time.
The publisher and subscribers are therefore kept closely synchronized, which makes transactional replication suitable for scenarios that need near real-time data.
It supports bi-directional replication, allowing changes to flow both from the publisher to the subscribers and vice versa. However, bidirectional replication can be complex to manage and maintain.
Step-by-step configuration process:
1. Configure Distributor:
1.1. In SQL Server Management Studio, expand Object Explorer, right-click Replication, and select Configure Distribution. Choose which server will act as the Distributor, then click Next.
1.2. Specify a network path for the snapshot folder, then click Next.
1.3. Accept the default distribution database name and locations (or change them), then click Next.
1.4. Select the server(s) to enable as Publishers, then click Next.
1.5. Check 'Configure distribution', then click Next.
1.6. Review the summary, then click Finish.
1.7. Once every action shows a Success status, close the wizard.
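The wizard steps above can also be scripted; a minimal T-SQL sketch of the same distributor setup, where the server name MYSERVER, the distribution database name, and the snapshot share are all placeholder assumptions:

```sql
-- Run on the instance that will act as the Distributor.
USE master;

-- Register this instance as its own Distributor (assumed name MYSERVER).
EXEC sp_adddistributor
    @distributor = N'MYSERVER',
    @password    = N'StrongDistPassword1!';   -- distributor_admin password

-- Create the distribution database using Windows authentication.
EXEC sp_adddistributiondb
    @database      = N'distribution',
    @security_mode = 1;

-- Enable the Publisher and point it at the snapshot folder (assumed UNC path).
EXEC sp_adddistpublisher
    @publisher         = N'MYSERVER',
    @distribution_db   = N'distribution',
    @working_directory = N'\\MYSERVER\ReplSnapshots';
```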
2. Configure Publisher
2.1. Expand Replication, right-click 'Local Publications', and click 'New Publication'. In the wizard, select the publication database, then click Next.
2.2. Select 'Transactional publication' as the publication type, then click Next.
2.3. Select the tables and other objects (articles) to publish, then click Next.
2.4. Optionally add row filters, then click Next.
2.5. Select the first option, 'Create a snapshot immediately', then click Next.
2.6. Click 'Security Settings'. In the dialog, select 'Run under the SQL Server Agent service account' and, in the connect-to-publisher section, 'By impersonating the process account'. Click 'OK', then click Next.
2.7. Check 'Create the publication', then click Next.
2.8. Enter a publication name, review the configuration, then click 'Finish'.
2.9. Once every action shows a Success status, close the wizard.
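The publication steps can likewise be scripted; a hedged T-SQL sketch, where the database MyPublisherDB, the publication MyTranPub, and the table Orders are all placeholder names:

```sql
-- Run at the Publisher.
USE master;
EXEC sp_replicationdboption
    @dbname  = N'MyPublisherDB',
    @optname = N'publish',
    @value   = N'true';                 -- enable the database for publishing

USE MyPublisherDB;
EXEC sp_addpublication
    @publication = N'MyTranPub',
    @status      = N'active';

EXEC sp_addpublication_snapshot
    @publication = N'MyTranPub';        -- creates the Snapshot Agent job

EXEC sp_addarticle
    @publication   = N'MyTranPub',
    @article       = N'Orders',
    @source_object = N'Orders';         -- table to publish
```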
3. Configure Subscribers
3.1. Right-click the publication and click 'New Subscriptions'. In the wizard, click Next.
3.2. Select 'Run all agents at the Distributor' (a push subscription), then click Next.
3.3. Click 'Add Subscriber -> Add SQL Server Subscriber', connect to the subscriber server, then click Next.
3.4. Select an existing subscription database, or choose the 'New database' option.
3.5. If you chose 'New database', enter a name and any other settings, then click 'OK'. Back in the wizard, click Next.
3.6. Click the ellipsis (...) button to configure 'Distribution Agent Security'. Select 'Run under the SQL Server Agent service account', 'By impersonating the process account' for the Distributor connection, and 'Using the following SQL Server login' for the Subscriber connection; provide the login credentials, then click 'OK'.
3.7. Select 'Run continuously' for the synchronization schedule, then click Next.
3.8. Check 'Initialize', then click Next.
3.9. Check 'Create the subscription(s)', then click Next.
3.10. Review the configuration and click 'Finish'.
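For completeness, a push subscription like the one configured above can also be created in T-SQL; SUBSERVER and MySubscriberDB are placeholder names:

```sql
-- Run at the Publisher, in the publication database.
USE MyPublisherDB;

-- Register a push subscription for the Subscriber.
EXEC sp_addsubscription
    @publication       = N'MyTranPub',
    @subscriber        = N'SUBSERVER',
    @destination_db    = N'MySubscriberDB',
    @subscription_type = N'Push';

-- Create the Distribution Agent job; NULL login/password means it
-- runs under the SQL Server Agent service account.
EXEC sp_addpushsubscription_agent
    @publication   = N'MyTranPub',
    @subscriber    = N'SUBSERVER',
    @subscriber_db = N'MySubscriberDB',
    @job_login     = NULL,
    @job_password  = NULL;
```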
Now insert, update, or delete data in the primary database and verify that the changes appear in the secondary database.
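A quick smoke test, assuming a published table dbo.Orders (a hypothetical name) exists on both sides:

```sql
-- On the publisher:
INSERT INTO dbo.Orders (OrderID, Customer)
VALUES (1001, N'Alice');

-- A few seconds later, on the subscriber, the row should be visible:
SELECT OrderID, Customer
FROM dbo.Orders
WHERE OrderID = 1001;
```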
Log shipping is a database-level disaster recovery solution that involves copying and restoring transaction log backups from a primary (source) database to one or more secondary (destination) databases. It works by taking regular transaction log backups from the primary database and then copying and restoring those backups on the secondary server(s).
Log shipping is designed for warm standby scenarios, where the secondary database is in a constant state of recovery and can be brought online relatively quickly in case of a primary database failure.
It provides a one-way data replication, where changes are applied from the primary to the secondary database(s) but not vice versa. As a result, log shipping is not intended for read scalability or load balancing purposes.
Step-by-step setup process:
1. Right-click the primary database -> Tasks -> Ship Transaction Logs.
In the dialog, check 'Enable this as a primary database in a log shipping configuration', then click 'Backup Settings'.
2. Set the network path for the backup folder, then click 'Schedule'.
3. Set the desired backup schedule, then click 'OK'.
4. Back in the first window, click the 'Add' button to configure a secondary database. In the dialog that appears, click 'Connect' and connect to the desired secondary server.
5. In the 'Copy Files' tab, set the 'Destination folder for copied files', then click the 'Schedule' button. Set the copy schedule, then click 'OK'.
6. In the 'Restore Transaction Log' tab, select 'Standby mode' and check 'Disconnect users in the database when restoring backups', then click the 'Schedule' button. Set the restore schedule, then click 'OK'.
7. After completing the setup, insert, update, or delete data in the primary database, then wait for the next backup/copy/restore cycle and verify the changes on the secondary database. For example, delete a couple of rows in the primary database, wait for the restore job to run, and confirm they are gone from the secondary database.
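Under the hood, the log shipping jobs perform a plain backup/copy/restore cycle; a minimal manual T-SQL sketch of what each cycle does, with placeholder database and path names:

```sql
-- On the primary: back up the transaction log to the shared folder.
BACKUP LOG MyPrimaryDB
    TO DISK = N'\\FILESHARE\LogShip\MyPrimaryDB.trn';

-- On the secondary: restore that backup, keeping the database readable
-- between restores (standby mode, as selected in the wizard).
RESTORE LOG MyPrimaryDB
    FROM DISK = N'\\FILESHARE\LogShip\MyPrimaryDB.trn'
    WITH STANDBY = N'C:\LogShip\MyPrimaryDB_undo.tuf';
```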
- Use Cases: Log shipping is primarily used for disaster recovery, providing a warm standby database for failover. Transactional replication is used for real-time data synchronization, reporting, and distributing data across multiple servers for read scalability and load balancing.
- Granularity: Log shipping operates at the database level, replicating entire transaction log backups. Transactional replication operates at the transaction level, replicating individual data modifications.
- Failover: Log shipping involves a manual failover process, while transactional replication can support automatic failover with some additional configurations.
- Direction: Log shipping is unidirectional, only applying changes from the primary to the secondary. Transactional replication can be bidirectional, supporting changes in both directions.
In summary, log shipping is a simpler and one-way disaster recovery solution, whereas transactional replication is a more advanced and flexible data replication technology designed for real-time synchronization and read scalability. Your choice between the two depends on the specific requirements of your environment, such as recovery time objectives, data synchronization needs, and read scalability requirements.