Database Administrators Stack Exchange is a question and answer site for database professionals who wish to improve their database skills and learn from others in the community.

We have a database of approximately 100-120 GB on a server with 128 GB of RAM; the database sits on a 256 GB SSD drive.

The bulk of the database space is spent on the tables below:

Page_contents (page_content_id, member_id, page_id, added_on, status)
Custom_fields (page_content_id, custom_field_key, custom_field_name)

and statistics tables (messages, visits, clicks) where we store the raw data (page_content_id, timestamp, page_campaign_id), one table per event type: visits in the visits table, messages in the messages table, and so on.

Those tables take up roughly 90% of the database space.
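For concreteness, the tables described above might look like the following DDL. This is a hypothetical reconstruction from the column lists in the question; the data types, constraints, and `dbo` schema are assumptions, not from the original post.

```sql
-- Hypothetical sketch of the schema as described; types are assumed.
CREATE TABLE dbo.Page_contents (
    page_content_id BIGINT IDENTITY PRIMARY KEY,
    member_id       INT       NOT NULL,
    page_id         INT       NOT NULL,
    added_on        DATETIME2 NOT NULL,
    status          TINYINT   NOT NULL
);

CREATE TABLE dbo.Custom_fields (
    page_content_id   BIGINT        NOT NULL
        REFERENCES dbo.Page_contents (page_content_id),
    custom_field_key  NVARCHAR(100) NOT NULL,
    custom_field_name NVARCHAR(200) NOT NULL
);

-- One narrow table per event type (visits, messages, clicks):
CREATE TABLE dbo.Visits (
    page_content_id  BIGINT    NOT NULL,
    [timestamp]      DATETIME2 NOT NULL,
    page_campaign_id INT       NOT NULL
);
```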

We recently had a production outage due to a RAID controller failure, and we are investigating whether the database can be replicated to a second, equivalent server.

Questions:

  • What's the best way to do that on a SQL Server 2012 database?
  • Should we expect performance degradation?
  • Do you think the setup described above is adequate for our size? We expect the database to keep growing.
  • Can you think of any way to optimize the schema above? I understand you have little information to go on.

EDIT: To answer some of the questions in the comments

The SQL Server is 2012 Enterprise edition. I am in control of what I can do and how I use the two boxes. I can decide to drop one box and double the power on the other one. It's really up to me.

I would like to leave questions 3 & 4 in, in case someone can think of optimizations to the existing schema. If there are questions I can answer to clarify things, I'll happily do so.

My requirement is for the database to be accessible at all times. I don't mind losing up to an hour of analytics data, but I can't sustain more than 5 minutes of data loss for the other two tables. If I can improve performance as well as prevent downtime given the requirements above, I would prefer that option for obvious reasons.


migrated from stackoverflow.com Jun 25 at 17:04

This question came from our site for professional and enthusiast programmers.

The bulleted questions #3 and #4 seem too broad or unanswerable given the information you've given us -- I recommend editing them out of the question. –  George Stocker Jun 25 at 17:06
Also, what edition of SQL Server 2012 are you using? Enterprise? Standard? Web? –  George Stocker Jun 25 at 17:06
What exactly are your requirements? What do you have available to you? How much data loss can you sustain? –  Thomas Stringer Jun 25 at 17:08
Edited my question to clarify –  Yannis Jun 25 at 19:45

1 Answer

If I were you I'd be looking at AlwaysOn Availability Groups with a synchronous replica. It'll add a couple of milliseconds of latency to the SQL commands when they are run, but it'll give you a second server that's available in the event of a server failure without any loss of data. It requires no shared storage so you can keep using your local SSD drives.

It does require setting up Windows clustering but the SQL Server instances don't need to be clustered.
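The synchronous setup the answer recommends can be sketched in T-SQL. This is a minimal illustration, assuming two replicas named SQL1 and SQL2, a database named MyDb, and database mirroring endpoints already created on port 5022; all of those names are assumptions, and prerequisites (full recovery model, a full backup, Windows Server Failover Clustering) are omitted.

```sql
-- Minimal sketch: a two-replica synchronous availability group
-- with automatic failover (SQL Server 2012 Enterprise).
CREATE AVAILABILITY GROUP AG_Main
FOR DATABASE MyDb
REPLICA ON
    N'SQL1' WITH (
        ENDPOINT_URL      = N'TCP://SQL1.domain.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC
    ),
    N'SQL2' WITH (
        ENDPOINT_URL      = N'TCP://SQL2.domain.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC
    );
```

SYNCHRONOUS_COMMIT is what makes the secondary lossless: a transaction is not acknowledged to the client until its log records have been hardened on the replica, which is also the source of the few milliseconds of added latency mentioned above.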

Given that the two boxes are in different data centers, doesn't this mean that the added latency will be significantly higher? –  Yannis Jun 26 at 11:52
If they will be in two different data centers then you'll want to use asynchronous data movement which means you won't have any automatic failover. Because of this you'll want to have a local synchronous copy that you can use for automatic failover. Failover to an async copy is a manual failover, and doesn't allow for failback as the LSN chain will be broken. –  mrdenny Jun 26 at 18:17
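The asynchronous configuration described in the comment above can be sketched as follows. The replica name SQL3 and group name AG_Main are placeholders, and the forced-failover statement is run on the disaster-recovery replica itself.

```sql
-- Sketch: a remote replica in asynchronous-commit mode (manual failover only).
ALTER AVAILABILITY GROUP AG_Main
MODIFY REPLICA ON N'SQL3' WITH (
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE     = MANUAL
);

-- In a disaster, failover to the async copy is forced and may lose
-- unshipped transactions; run this on the DR replica:
ALTER AVAILABILITY GROUP AG_Main FORCE_FAILOVER_ALLOW_DATA_LOSS;
```

A forced failover accepts possible data loss and, as the comment notes, breaks the log sequence number (LSN) chain, so the old primary cannot simply resume as a replica without being resynchronized.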
