SQL Server Replication: Everything You Need To Know

Luke Smith
Enterprise Solutions Architect
June 20, 2023
Matt Tanner
Developer Relations Lead
June 20, 2023

In the universe of data management, replication is a pivotal principle that extends many benefits to enterprises. In short, these benefits include load balancing, data recovery, and ensuring that localized data is available across distributed systems. In today's data-driven era, businesses that rely heavily on data to power their operations need to understand replication and the advantages it can bring to their organizations. This is especially true in the context of SQL Server, one of the most widely used databases in existence, where replication is key to harnessing SQL Server-hosted data effectively. With a myriad of applications, the potential of replication ranges from aiding data-driven decision-making and predicting market trends to powering robust AI models.

In this comprehensive blog post, we will navigate the different aspects of data replication, exploring what it is and how SQL Server replication operates. We will unravel the different types of SQL Server replication, and the different SQL Server replication components. Lastly, we will look at the process of setting up SQL Server replication with Arcion and the advantages of using Arcion. Let’s start by taking a deep dive into what replication is and how it can be used.

What is Data Replication?

Data Replication, at its core, is a technology process that involves copying and distributing data and database objects from one database to another. Once the initial data is loaded, the replication process then synchronizes these databases to maintain data consistency. In its simplest form, replication keeps multiple databases in sync with one another. In real life, let's consider a high-volume e-commerce platform as an example. The e-commerce company may have several databases that store a broad range of data from customer information, order details, product listings, reviews, and more. These databases may be located across different regions around the world, allowing them to cater to the company's vast global operations.

Now, imagine if the e-commerce platform's databases didn't synchronize, and the data remained isolated in each region's database. A customer from Europe visiting the United States could face issues while trying to place orders from the U.S. since their customer information resides in the European database. Replication mitigates such issues by ensuring that a change in one database – say, the European customer's order placed in the U.S. – is reflected across all of the platform's databases worldwide. This means that local data can be made available globally by replicating it where it is needed.

Replication is a foundational strategy for managing data across distinct databases and data warehouses that are often sprawled across multiple geographical locations and organizational units. For instance, a multinational bank may have several branches spread across multiple countries. Each branch may have a local database to quickly process customer transactions. However, having a customer's data siloed in one branch would be inefficient. If the bank were to use replication, the customer's data can be replicated across all the bank's databases. This helps to provide seamless banking services to the customer, no matter the branch they visit.

Replicating data essentially allows data to be duplicated on a secondary system. The process of duplicating data ensures speedy data access for users, boosts data availability, and reinforces the reliability and resilience of a data management system. Consider the example of a streaming service that operates in numerous countries and has millions of subscribers globally. Replication ensures that the streaming service’s media content is available to all users, regardless of their location. By replicating and distributing its media databases worldwide, the service can ensure low latency, providing a smooth and enjoyable viewing experience for its users.

Replication can also act as a guard against data losses resulting from hardware failure, data corruption, and other disasters which may impact data availability. For instance, if a natural disaster were to damage the servers of a company, without replication, this could potentially lead to catastrophic data loss. Since the data is only available on the damaged servers, there is a high risk of losing the data. If the company has a replication process in place, even if one server fails or gets damaged, the data can still be accessed from other replicated databases. This can help to prevent any significant data loss and also helps to maintain business continuity since up-to-date data is still available in a secondary location.

Another major area of impact for replication is as part of a real-time analytics pipeline. Going back to our multinational bank example, the bank may want to analyze transactions in real-time to detect fraudulent activities. With replication, as soon as a transaction occurs at a branch, the data could be replicated to an analytics database that is set up to detect fraudulent patterns. The bank can then use real-time analytics to monitor transactions as they occur and promptly detect and respond to any potential fraudulent activities. This is tough to implement in a primary database since many are not built with powerful analytics in mind, the kind needed to detect fraud. By moving the analytics into a specialized platform, there is also the benefit of taking the load off of the primary application database so it can still run at peak performance, without overloading it.

Similarly, in the case of our previous streaming service example, replication can also enable real-time viewer analytics. By replicating viewer data such as play history, search queries, and ratings, the streaming service could analyze this data to understand viewer behavior, preferences, and trends. These insights can help to provide personalized recommendations to help improve viewer engagement and satisfaction. In an environment with real-time replication and analytics, these insights can be dynamic and instant, enabling the streaming provider to react quickly to changing viewer trends.

In all these scenarios, replication is not merely a data management strategy; it's a crucial enabler of data-driven decision-making. Replication, when implemented correctly, can make a massive contribution to an organization's success.

What is SQL Server Replication?

Within SQL Server, there is a set of tools and solutions engineered by Microsoft that empowers users to implement replication. These solutions let you copy and distribute data from one database to another and synchronize data across databases to uphold consistency and improve availability. SQL Server replication tools automate the process of data distribution and synchronization, providing the architecture to design solutions for a wide range of business use cases.

By leveraging replication within SQL Server, data scientists and developers can enhance application performance, segregate workloads, and ensure data availability. SQL Server replication is frequently used to consolidate data from different sites into a central database or, conversely, to distribute a central database to various remote locations. For instance, a multinational business with offices in New York, London, and Tokyo may use SQL Server replication to replicate its central database, located in New York, to local servers in London and Tokyo. This way, employees in each office can access up-to-date data quickly, without being hindered by the network latency of reaching the New York server from halfway around the globe. All of this can be implemented with the native tools supplied within SQL Server.

How Does SQL Server Replication Work?

SQL Server replication can be described as a publishing framework, and within that framework are three components: a Publisher, a Distributor, and a Subscriber. Most software and data engineers are familiar with the “Publisher-Subscriber” design pattern on which the replication process is built. Each of these high-level components plays a crucial role in the process of replication within SQL Server. Let's look at each component in more detail and consider a few real-world scenarios to better understand how each component within the replication process works.

Publisher

The Publisher component is the source database where the original data resides. This could be a central application database where all the primary data entry occurs from an application. For example, imagine a global retail company that operates several stores around the globe. The retail company's central database server, located at its headquarters, collects sales data from each transaction that has taken place within its stores. In this scenario, the central database server acts as the Publisher and makes its data available to be replicated into another database. 

Distributor

The Distributor component manages the distribution database and acts like a courier service that ensures that data is delivered to its final destination, the Subscriber component mentioned below. The Distributor stores all the transactions or changes that need to be propagated to the subscriber(s). In some cases, the Distributor may be located on the same server as the Publisher or on a different server, depending on infrastructure requirements. In the retail company example from earlier, the Distributor would record all the sales transaction data from the central database and prepare the data for distribution to the Subscribers.

Subscriber

The last component is the Subscriber, the destination for the data. The Subscriber is the database that will receive replicated data. It can be thought of as the end-user of the data, constantly waiting for the data to arrive from the Distributor. Depending on the type of replication, it could receive a read-only version of the data or a read-write version. For our retail company example, each regional office might have its database server acting as a Subscriber, waiting for the latest data from across the company. These regional servers would receive the replicated sales and operational data, which would allow the company’s regional managers and other employees to access global sales data in real time.

In a typical replication process, the Publisher captures transactional or snapshot data and passes it to the Distributor. The Distributor then sends these transactions or snapshots to the Subscriber(s). The precise flow of data and the roles of the Publisher, Distributor, and Subscriber can vary depending on the type of replication in use – Snapshot, Transactional, Merge, or Peer-to-peer. We will cover more on these different types of SQL Server replication in the next section.
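As a concrete sketch, the Publisher → Distributor → Subscriber flow for transactional replication can be wired up with SQL Server's replication stored procedures. The server, database, publication, and table names below are hypothetical placeholders, not values from this article:

```sql
-- On the Distributor server: register it and create the distribution database
EXEC sp_adddistributor @distributor = N'DIST-SRV', @password = N'StrongPassw0rd!';
EXEC sp_adddistributiondb @database = N'distribution';

-- On the Publisher: enable the source database for transactional publishing
EXEC sp_replicationdboption @dbname = N'Sales', @optname = N'publish', @value = N'true';

-- Create a publication and add a table to it as an article
EXEC sp_addpublication @publication = N'SalesPub', @repl_freq = N'continuous';
EXEC sp_addarticle @publication = N'SalesPub', @article = N'Orders',
     @source_object = N'Orders';

-- Register a Subscriber database to receive the publication (push subscription)
EXEC sp_addsubscription @publication = N'SalesPub', @subscriber = N'REGION-SRV',
     @destination_db = N'SalesReplica', @subscription_type = N'Push';
```

In practice you would also create the agent jobs and secure the replication logins; the exact procedures vary with the replication type chosen, as described below.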

What Are The 4 Types of Replication in SQL Server?

SQL Server offers four primary types of replication: Snapshot, Transactional, Peer-to-peer, and Merge Replication. Each of these replication types serves a unique purpose, can be tailored for specific use cases, and operates differently based on data replication and synchronization needs. Let’s take a look at each type in more detail.

Snapshot Replication

Snapshot Replication involves creating a complete copy, or 'snapshot', of the entire database, or of selected database objects, and then replicating this snapshot to the Subscriber or target database. This type of replication is suitable for relatively small amounts of data where changes are infrequent. When a snapshot is applied to the target database, it generally overwrites the entire existing dataset.

The advantage of Snapshot Replication lies in its simplicity and reliability. It is essentially a one-time copy-and-paste process. However, it may not be the best fit for larger databases or environments with frequent data changes, because the network load and potential performance impact of the snapshot generation process grow quickly as data volume increases.

A real-world example is a retail business that wants to distribute a catalog with a list of products to its multiple branches. Given that the catalog might only change occasionally (like every quarter or year), Snapshot Replication could be a straightforward and effective way to replicate the product database from a central server out to the others.
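To make the catalog scenario concrete, a snapshot publication differs from a transactional one mainly in its @repl_freq setting. The sketch below assumes a hypothetical Catalog database already enabled for publishing; all names are illustrative:

```sql
-- Create a snapshot publication: the whole article set is copied on each run
EXEC sp_addpublication @publication = N'ProductCatalogPub', @repl_freq = N'snapshot';

-- Publish the product list as an article
EXEC sp_addarticle @publication = N'ProductCatalogPub', @article = N'Products',
     @source_object = N'Products';

-- Create the Snapshot Agent job that generates the snapshot files
EXEC sp_addpublication_snapshot @publication = N'ProductCatalogPub';
```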

Transactional Replication

Unlike Snapshot Replication, Transactional Replication begins with an initial snapshot, after which individual transactions are replicated as they occur. This ensures that the Subscriber has a near real-time copy of the Publisher's data.

Transactional Replication is beneficial in environments where maintaining data consistency and currency across databases is crucial. It's also more efficient than Snapshot Replication for larger databases or databases with frequent changes, as it only transmits the modified data instead of replicating the entire database. Of course, the initial data load will still use Snapshot Replication but since it is a one-time operation, any network or system load generated is a one-off occurrence.

A great example of when to use Transactional Replication would be in a large e-commerce business with a lot of daily transactions. The business might need to replicate transactions in real time from the primary server (Publisher) to a secondary server (Subscriber) for real-time analytics, reporting, or backup purposes. In this scenario, Transactional Replication would be a suitable choice that could ensure data is always up-to-date with minimal load on the source system.

Peer-to-Peer Replication

Peer-to-Peer Replication is a variant of Transactional Replication that allows multiple servers to function as Publishers and Subscribers simultaneously, each maintaining a copy of the replicated data. Changes made in any of the databases are propagated to all other databases, making it ideal for load balancing and providing high availability. This is more easily thought of as bi-directional replication where all servers are constantly up to date.

This type of replication is suitable for applications where real-time, distributed access to the most current data is necessary. The challenge is that this type of replication also requires careful conflict detection and resolution strategies since changes can be made to the same data in multiple locations. This could lead to data reliability and duplication issues if not handled correctly.

A good example of Peer-to-Peer Replication would be in a financial services company with offices in New York, London, and Hong Kong. Each of these offices might require access to the same set of financial data in real time for trading activities. Since each of the offices may have trading activity going on, all servers need to be in sync. In this case, Peer-to-Peer Replication can be used to replicate data across servers located in these offices to reduce data latency and enable faster and more informed decision-making.
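For illustration (all names hypothetical), a peer-to-peer topology is built from transactional publications created with the @enabled_for_p2p option, with the same publication and articles configured on every participating node:

```sql
-- On each node: create a transactional publication flagged for peer-to-peer
EXEC sp_addpublication @publication = N'TradingPub',
     @repl_freq = N'continuous',
     @enabled_for_p2p = N'true';

-- Publish the shared table on every node so all peers replicate the same data
EXEC sp_addarticle @publication = N'TradingPub', @article = N'Trades',
     @source_object = N'Trades';
```

Conflict detection for peer-to-peer topologies can then be enabled with sp_configure_peerconflictdetection; all peers must share a matching schema for the merged changes to apply cleanly.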

Merge Replication

The last type of replication we will cover is Merge Replication. This type of replication blends the initial snapshot replication with subsequent data changes and conflict resolution. It allows changes to be made at both the Publisher and Subscriber, with changes merged and any conflicts resolved when the Subscriber synchronizes with the Publisher.

Merge Replication is particularly handy in environments where connectivity between the Publisher and Subscriber is intermittent or unreliable, or where changes need to be made at the Subscriber level. It also handles conflict resolution well, allowing changes to occur at multiple sites while still being able to accurately merge them back into the Publisher.

A good example of where merge replication can be used is when a sales team might need to access and update customer data from their mobile devices while on the move. Due to mobile connectivity issues, at some points, users might be without a consistent internet connection. Merge Replication would allow them to work with a local copy of the data stored on the device, which could then be synchronized and merged with the central database when they're back online.
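As a sketch with hypothetical names, a merge publication uses the merge-specific stored procedures once the database is enabled for merge publishing:

```sql
-- Enable the database for merge publishing
EXEC sp_replicationdboption @dbname = N'CRM', @optname = N'merge publish',
     @value = N'true';

-- Create the merge publication
EXEC sp_addmergepublication @publication = N'CustomerMergePub';

-- Add the customer table as a merge article; changes made at Subscribers
-- are merged back, with conflicts resolved at synchronization time
EXEC sp_addmergearticle @publication = N'CustomerMergePub', @article = N'Customers',
     @source_object = N'Customers';
```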

What are SQL Server Replication Components?

When looking even closer at native SQL Server replication processes, various components make replication within SQL Server possible beyond the high-level Publisher, Distributor, and Subscriber components we already discussed. Each component plays a distinct role in ensuring that data replication occurs seamlessly and efficiently within the SQL Server replication ecosystem. To better understand the role of each of these components, let's look at them in more detail.

Articles

Articles are essentially the individual database objects that are chosen to be replicated. These can include tables, views, stored procedures, and other database objects. For instance, in a hospital management system, the 'Patient' table containing patient details, the 'Appointments' table storing appointment data, or a stored procedure that calculates the total bill for a patient, could all be defined as articles for replication.

Publications

A publication is a collection of articles from a single database that will be replicated. For example, in a multi-branch banking system, a publication could comprise articles such as the 'Customer' table, 'Transactions' table, and 'Account' table, which collectively represent all the necessary data related to banking operations.

Publishers

The Publisher, as we've previously talked about, is the source database. This is where the original data resides and from where it is made available for replication. In the context of a global e-commerce platform, the central server that collects all transactions and user data could be used as a Publisher.

Distributors

Another component we already chatted about was the Distributor. This component is the manager of data flow, acting as the intermediary that stores metadata related to replication and controls how data is cascaded from the Publisher to the Subscribers. The Distributor can be the same as the Publisher, known as a local Distributor. This type of Distributor functions within the same server as the Publisher. Another configuration is to have the Distributor on another server, known as a remote Distributor. This setup can be useful for load balancing in high-transaction environments to keep performance high.

Subscribers

Another component we already discussed was the Subscriber. The Subscriber is the destination database that receives and applies the replicated data. This could be a regional server in a globally-distributed company that needs up-to-date access to the latest data from a central data server.

Subscriptions

A subscription refers to the agreement by a Subscriber to receive a publication. There are two types of subscriptions. In a push-based subscription, the Distributor proactively pushes data to the Subscriber whenever there's new data available. In contrast, with a pull-based subscription, the Subscriber requests updates from the Distributor, usually by constantly polling or running queries to see if there is any new data to retrieve. Choosing between these two types of subscription depends on factors such as data volume, network bandwidth, and desired degree of control over the replication schedule. For real-time replication, a push-based approach is usually the best.
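The two subscription types map to different stored procedures. As a sketch with hypothetical server and publication names: a push subscription is registered entirely at the Publisher, while a pull subscription is registered at both ends:

```sql
-- Push subscription: run at the Publisher; the Distribution Agent at the
-- Distributor delivers changes to the Subscriber
EXEC sp_addsubscription @publication = N'SalesPub', @subscriber = N'REGION-SRV',
     @destination_db = N'SalesReplica', @subscription_type = N'Push';

-- Pull subscription: first register it at the Publisher with type 'Pull' ...
EXEC sp_addsubscription @publication = N'SalesPub', @subscriber = N'REGION-SRV',
     @destination_db = N'SalesReplica', @subscription_type = N'Pull';

-- ... and then, at the Subscriber, create the pull subscription itself
EXEC sp_addpullsubscription @publisher = N'PUB-SRV', @publisher_db = N'Sales',
     @publication = N'SalesPub';
```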

Agents

The last component in the replication architecture is Agents. Agents are essentially the 'workers', or jobs, within the SQL Server replication process. These jobs are responsible for copying and distributing data among Publishers and Subscribers. Several agents are involved in the SQL Server replication process, each with a specific role: the Snapshot Agent, which generates snapshot data; the Log Reader Agent, which collects transaction logs; the Distribution Agent, which delivers snapshot data and transactions to the Subscribers; and the Merge Agent, which reconciles changes for Merge Replication.

Understanding each of these components and their roles in the replication process is critical to successfully implementing, managing, and troubleshooting SQL Server Replication. Being familiar with these terms can help to better understand documentation and examples showing how to configure, monitor, and maintain a replication environment with SQL Server.

Setting Up SQL Server Replication with Arcion

Now that we’ve explored the ins and outs of native SQL Server replication, let’s look at a more automated and real-time approach to moving data from SQL Server. To achieve this, we will use Arcion. Arcion is a real-time, in-memory Change Data Capture (CDC) solution trusted by some of the largest organizations in the world. Highlights of Arcion's capabilities include massive data scalability and data consistency at all times. When it comes to applicable use cases, Arcion is suitable for migrating and replicating data with ease since it integrates with various enterprise databases and data warehouses, including SQL Server. Throughout this section, we will explore how to connect Microsoft SQL Server as a source (a Publisher, in the terminology of the previous sections) to Arcion’s CLI tool, known as Replicant.

Step 1: Download And Install Arcion Self-hosted

The first thing we need to do is download and install the self-hosted version of Arcion. To gain access, follow the steps outlined on the Arcion Self-hosted webpage. Once you have downloaded Arcion’s CLI tool, Replicant, you will need to create a home directory for it. By default, the home directory is where Replicant was downloaded; from now on, it will be referred to as $REPLICANT_HOME in the examples below. The last step is adding your credentials to your Arcion instance, which is covered in the quick start guide.

Having downloaded and installed Arcion Self-hosted, we can move on to the next steps, where we will configure and enable Arcion to connect with SQL Server as the source and, later, a target such as Databricks.

Step 2: Set Up Connection Configuration for SQL Server

The next step will be for us to set up the connection configuration file for SQL Server. To do this, we will open the sample connection configuration file, sqlserver.yaml. The file can be found in the $REPLICANT_HOME directory under the conf/conn directory. Below is an example command to open up the file with Vi.

vi conf/conn/sqlserver.yaml

Then, we will go ahead and add our SQL Server credentials to give Arcion access to the instance. If your connection credentials are stored in AWS Secrets Manager, you can retrieve them using Replicant. Alternatively, we can add them directly to the sqlserver.yaml file we opened with the previous command. Some example inputs can be seen below.

extractor: {CHANGE|LOG}
host: localhost
port: 1433
username: 'USERNAME'
password: 'PASSWORD'
database: 'tpcc'
is_azure: true
#ssl:
#  enable: true
#  hostname-verification: false

From the configuration above, you can set the following values:

Extractor: For SQL Server as a data source, two CDC Extractors are available: CHANGE and LOG. You can select the one you want to use by setting the extractor parameter in the connection configuration file. CHANGE is the default value; with it, SQL Server Change Tracking is used for real-time replication. If you choose the LOG extractor, the Replicant SQL Server Agent will be used as the CDC Extractor.

host: This refers to the hostname of the SQL Server instance.

port: This is the port number of the SQL Server instance.

username: This refers to the username of the DATABASE user.

password: This is the password associated with the USERNAME.

database: This is where you input the name of the SQL Server database to connect to.

max_number_of_connections: This is used to specify the maximum number of connections Replicant would use to fetch data from the source.

is_azure: If you are hosting SQL Server on Azure, the is_azure parameter must also be set to true. In the connection configuration file, that will look like this:

is_azure: true

After you are done setting the configuration, save the file and exit the text editor.

Step 3: Set up Extractor Configuration for SQL Server

Now that our connector configuration file is set, we need to set our extractor configuration. To set this, from the $REPLICANT_HOME directory open the Extractor configuration file located at conf/src/sqlserver.yaml. The command below shows an example of how to open up the file with Vi:

vi conf/src/sqlserver.yaml

Once the file is opened, you will need to decide which replication mode you want to use. Arcion fully supports realtime, delta-snapshot, and snapshot modes for SQL Server. In this example, we will use realtime mode, and our focus will be on full-mode replication. For this to work, we will also need to create a heartbeat table. It is also important in full-mode replication that all tables to be replicated have a primary key on them.

First, let’s create the heartbeat table. Below is an example of a SQL command you can run on your SQL Server instance to create a heartbeat table.

CREATE TABLE "tpcc"."dbo"."replicate_io_cdc_heartbeat"("timestamp" BIGINT NOT NULL, PRIMARY KEY("timestamp"))

After creating the heartbeat table, the next thing to do is to add the realtime configuration to the Extractor configuration file. In the extractor configuration file opened earlier, let's add a section for realtime mode (if it’s not already present). The configuration below shows an example of how to set up Replicant to operate in realtime mode:

realtime:
  threads: 4
  fetch-size-rows: 10000
  fetch-duration-per-extractor-slot-s: 3
  heartbeat:
    enable: true
    catalog: "tpcc"
    schema: "dbo"
    interval-ms: 10000

You can consult Arcion's documentation for further reading and more information about realtime mode configuration and the parameters you can set within Arcion.

Optional Step: Enable Change Tracking for Realtime Mode

For SQL Server Change Tracking to be implemented in realtime mode, change tracking must be enabled on all the databases and tables. This step is only needed if your extractor setting was set to CHANGE in your Connection configuration file as seen below.

extractor: CHANGE

To enable Change Tracking in the SQL Server instance, execute the following example SQL command with your parameters subbed in. The database_name variable in the statement should be replaced with the name of the database you want Change Tracking enabled on. If you have multiple databases to enable, you can run the statement once for each of them.

ALTER DATABASE database_name SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)

Alternatively, Change Tracking can also be enabled on a specific table only. The example SQL command below enables Change Tracking on a per-table basis. The table_name variable should be replaced with the name of the table you want Change Tracking enabled on. If you want to enable this on multiple tables, run the statement individually for each table.

ALTER TABLE table_name ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON)

Step 4: Configure The Target Connector and Run Replicant

The next step in the process is to connect Arcion to the target database, or Subscriber. Within Arcion, you can set up many different targets, including SQL Server, Databricks, Snowflake, and many others. Setting up the Target connector also involves configuring the Applier and Mapper configs.

Once the Target connector is configured, the last step is to run Replicant to begin the CDC replication process. To run Replicant, check out the Running Replicant docs page that outlines all of the necessary commands.

Advantages of Using Arcion

Migrating and replicating data from SQL Server to various destinations can be done in a few simple steps. For example, you can set up a streaming pipeline in Arcion using Change Data Capture to replicate changes from SQL Server to any of the other supported destination databases, like Snowflake, Oracle, DB2, SAP (ASE, HANA, IQ), SingleStore, and many more. Below are some highlights of using Arcion for data migrations and Change Data Capture.

Sub-Second Latency 

Many other existing CDC solutions don’t scale for high-volume, high-velocity data, resulting in slow pipelines and slow delivery to the target systems. Arcion is the only distributed, end-to-end multi-threaded CDC solution that auto-scales vertically and horizontally. Any process that runs on Source and Target is parallelized using patent-pending techniques to achieve maximum throughput. There isn’t a single step within the pipeline that is single-threaded. This means Arcion users get ultra-low-latency CDC replication and can always keep up with the ever-increasing data volume on Source.

100% Agentless Change Data Capture

Arcion is the only CDC vendor in the market that offers 100% agentless CDC to all its supported enterprise connectors. The agentless CDC applies to all complex enterprise databases including SQL Server. Arcion reads directly from the database transaction log, never reading from the database itself. Previously, data teams faced administrative nightmares and security risks associated with running agent-based software in production environments. You can now replicate data in real-time, at scale, with guaranteed delivery — but without the inherent performance issues or the security concerns.

Data Consistency Guaranteed

Arcion provides transactional integrity and data consistency through its CDC technology. To further this effort, Arcion also has built-in data validation support that works automatically and efficiently to ensure data integrity is always maintained. It offers a solution for both scalable data migration and replication while making sure that zero data loss has occurred.

Automatic Schema Conversion & Schema Evolution Support

Arcion handles schema changes out of the box, requiring no user intervention. This helps mitigate data loss and eliminate downtime caused by pipeline-breaking schema changes by intercepting changes in the source database and propagating them while ensuring compatibility with the target's schema evolution. Other solutions would reload (re-do the snapshot of) the data whenever there is a schema change in the source database, causing pipeline downtime and consuming a lot of compute resources, which can get expensive!


To conclude our journey, we’ve seen that SQL Server replication is an essential technology that equips organizations with the capacity to distribute data across multiple databases accurately and consistently. It's a vital tool for enterprise data management that enhances data access speed, availability, resilience, and load balancing. We also saw that from a disaster recovery perspective, replication also serves as a strong defense against potential data loss.

We examined how SQL Server replication operates through the publishing framework, including an in-depth overview of key components such as Publisher, Distributor, and Subscriber. Following that, we explored the intricacies of the four types of replication offered by SQL Server. This included a deep dive into Snapshot, Transactional, Peer-to-peer, and Merge Replication and looking at how each type caters to different scenarios and requirements.

In addition to comprehending the core concepts and types of SQL Server replication, we also looked at the low-level implementation components including Articles, Publications, Publishers, Distributors, Subscribers, Subscriptions, and various Agents. In that review, we looked at how each of these components works in harmony to create a smooth and efficient replication process.

Lastly, we introduced Arcion, a powerful platform offering significant advantages for data migrations and Change Data Capture (CDC). Arcion delivers on sub-second latency for high-volume data, 100% agentless CDC, guaranteed data consistency, and automatic schema conversion and evolution support. Its capabilities to support SQL Server, among other significant databases, make it an ideal solution for real-time, scalable, and reliable data replication.

If you're ready to get started with Arcion and implement a robust solution for replicating SQL Server data, contact the Arcion team today! Let our team of SQL Server replication and migration experts assist you in making your projects and initiatives a success.

Matt is a developer at heart with a passion for data, software architecture, and writing technical content. In the past, Matt worked at some of the largest finance and insurance companies in Canada before pivoting to working for fast-growing startups.
Luke has two decades of experience working with database technologies and has worked for companies like Oracle, AWS, and MariaDB. He is experienced in C++, Python, and JavaScript. He now works at Arcion as an Enterprise Solutions Architect to help companies simplify their data replication process.