One of the greatest challenges in scaling out SQL Server is choosing the right approach before a database actually needs to scale. When a database or a set of databases behind an application becomes more heavily used, database architects and developers must be prepared well in advance so that performance doesn't suffer.

As a database gets bigger and busier, you'll likely run into performance problems when mixing a transactional load, such as data updates, with your decision support load (reports, building online analytical processing cubes). As you start mixing these two types of loads, queries scanning the ever-growing historical data slow down the Web applications because the database server gets very busy.

A simple solution is to restore a backup of the database on a reporting server and use it as a back end for history, data mining and other decision support loads. This is easy to implement and, if needed, you can restore on multiple servers. In my experience, you can restore a 500 GB database in less than 30 minutes on good hardware, especially if you use the built-in backup compression. If there is no requirement that your reporting database be available 24/7, you can simply restore a new backup over the existing database. If you need round-the-clock availability, you can keep two copies of the database, restore into the second copy and then "swap" the two databases by renaming them.

You can use this reporting database not only for reports or data mining; you can also access it from a front-end application, provided you do not need the most recent data. For example, if you refresh this database once a day, you could code your application to update data in the main database in real time, read data less than 24 hours old from it, and read older data from the restored database. The beauty of this solution is that it is very simple to implement, yet it gives you a lot of options and can greatly offload your main database server.
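Here is a minimal T-SQL sketch of the restore-and-swap approach. The database names (Sales, Sales_Staging, Sales_Reporting), logical file names and file paths are hypothetical placeholders; substitute the ones from your own environment.

-- On the production server: take a compressed full backup
-- (assumes a hypothetical Sales database and backup share).
BACKUP DATABASE Sales
TO DISK = N'\\backupshare\Sales_full.bak'
WITH COMPRESSION, INIT;

-- On the reporting server: restore into a staging copy while the
-- current reporting copy stays online for users.
RESTORE DATABASE Sales_Staging
FROM DISK = N'\\backupshare\Sales_full.bak'
WITH MOVE N'Sales'     TO N'D:\Data\Sales_Staging.mdf',
     MOVE N'Sales_log' TO N'L:\Log\Sales_Staging.ldf',
     REPLACE, RECOVERY;

-- Swap: rename the stale reporting copy out of the way and
-- promote the freshly restored copy under the reporting name.
ALTER DATABASE Sales_Reporting SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE Sales_Reporting MODIFY NAME = Sales_Reporting_Old;
ALTER DATABASE Sales_Staging   MODIFY NAME = Sales_Reporting;
ALTER DATABASE Sales_Reporting SET MULTI_USER;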
Another option is to make data from a production database easily available on another server using a combination of database mirroring and database snapshots. When you mirror a database to a different server, the mirror copy is not accessible because it is in the same state as a restored database that has not been "recovered," or brought online. However, if you build a database snapshot on top of the mirrored database, you do get a read-only copy of the data. It is essentially the same as building a snapshot on top of a regular database: you end up with a read-only, point-in-time copy. This lets you use the mirror server to offload the primary server, just as in the backup-and-restore scenario above, for reporting, data mining and so on. The underlying mirrored database will continue receiving updates, but your snapshot will remain frozen at the point it was created. Once your threshold for data staleness is reached, you can delete the snapshot and create a new one. The downside of this implementation is that database snapshots require the Enterprise Edition of SQL Server.
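The following is a hedged sketch of the snapshot-refresh step on the mirror server, assuming a database named Sales is already being mirrored there. The snapshot name, sparse-file path and logical file name are illustrative; the logical name must match the source database, and one sparse file is needed per data file.

-- Drop yesterday's snapshot once its data is considered stale
-- (hypothetical name Sales_Snapshot).
IF DB_ID(N'Sales_Snapshot') IS NOT NULL
    DROP DATABASE Sales_Snapshot;

-- Create a fresh read-only, point-in-time snapshot on top of the mirror.
CREATE DATABASE Sales_Snapshot
ON ( NAME = Sales,  -- logical data file name of the mirrored database
     FILENAME = N'D:\Snapshots\Sales_Snapshot.ss' )
AS SNAPSHOT OF Sales;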
The third and final option for easily scaling out the primary database is the new AlwaysOn technology in SQL Server 2012. It is a high-availability feature that ships data in near real time to up to four readable replicas on other servers. Just as in the two previous implementations, you can use these replicas to reduce the load on the primary server. AlwaysOn has one advantage over the previous two methods: the databases on the replicas are updated almost in real time, and if there are no network issues, the data on the replicas should be only seconds behind the primary database. This means that in addition to reporting, you can use the replicas from your front-end applications, as long as you only read data from them (only the primary database is read/write, while the replicas are read-only). You should, however, expect small network hiccups and not assume that the replicas will always be up to date. You could code your application to connect to the primary database to update data and retrieve the most recent data, and then retrieve any data older than, say, 10 to 15 minutes from a replica.

AlwaysOn also provides automatic redirection. For the main database, if the primary node dies, one of the replicas becomes the new primary. For replicas, if your primary server is Server A, the replicas are Server B and Server C, and you use Server B for reporting, AlwaysOn can automatically redirect your reporting applications to Server C when Server B becomes unavailable. The downside is that AlwaysOn not only requires the Enterprise Edition of SQL Server, but you also need to install and set up Windows Server Failover Clustering on each node.
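As a rough sketch of how that redirection is configured, the T-SQL below enables read-intent connections on two secondary replicas and sets up a read-only routing list on the primary. The availability group name (AG_Sales), server names and routing URLs are assumptions for illustration only.

-- Allow read-intent connections on each secondary replica.
ALTER AVAILABILITY GROUP AG_Sales
MODIFY REPLICA ON N'ServerB'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY,
                      READ_ONLY_ROUTING_URL = N'TCP://ServerB.contoso.com:1433'));

ALTER AVAILABILITY GROUP AG_Sales
MODIFY REPLICA ON N'ServerC'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY,
                      READ_ONLY_ROUTING_URL = N'TCP://ServerC.contoso.com:1433'));

-- When ServerA is primary, route read-intent connections to ServerB first,
-- then ServerC if ServerB is unavailable.
ALTER AVAILABILITY GROUP AG_Sales
MODIFY REPLICA ON N'ServerA'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'ServerB', N'ServerC')));

-- A reporting application would then connect through the availability group
-- listener with ApplicationIntent=ReadOnly in its connection string to be
-- routed to a readable replica.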
You have now seen three ways to easily spread data from a primary database across multiple servers: backup and restore, a database snapshot over a mirrored database and, finally, AlwaysOn replicas. AlwaysOn offers the most options and flexibility for maintaining replicas of a primary database on other servers, but it is also the most complex and costly to implement.
This was first published in January 2013