Making the move to Azure SQL Database or to Amazon’s RDS for SQL Server can be hair-raising. I’m not sure why it’s so difficult. We’re in the process of moving to SQL Server as a service, and we’ve been working to understand the pros and cons of the two major service providers.
It’s not as easy as it might seem it should be. “SQL Server here, SQL Server there… let’s see who’s doing what better…”
I have a preference for implementation technologies and approaches, but when it comes to the basics… sizing, pricing, all of the things that end up being the starting point for comparisons… good grief. It’s just very difficult to figure out what you need to be doing.
Sizing, for example. Run the sizing scripts. Run them again. Lather, rinse, repeat. They come back with different recommendations, as you might expect, based on loading. I get that. But even when the loading is the same, the recommendations are wildly different. Orders of magnitude different. And they swing both up and down, so you can’t do much resource leveling to guess where you should start.
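For what it’s worth, the only way I’ve found to get anything usable out of those swings is to treat the recommendations as samples rather than answers and look at the spread. Here’s a rough sketch of that idea in Python; the file name and the recommended_vcores column are placeholders for whatever your sizing scripts actually produce, not a real tool.

```python
# Rough sketch: treat repeated sizing-script runs as samples and aggregate them,
# instead of trusting any single recommendation. The CSV file and the
# "recommended_vcores" column are hypothetical placeholders.
import csv
import statistics


def summarize_recommendations(path="sizing_runs.csv"):
    vcores = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            vcores.append(float(row["recommended_vcores"]))

    if not vcores:
        raise ValueError("no sizing runs found")

    # Rough 90th percentile; fall back to the single value if we only have one run.
    p90 = statistics.quantiles(vcores, n=10)[-1] if len(vcores) > 1 else vcores[0]

    return {
        "runs": len(vcores),
        "min": min(vcores),
        "max": max(vcores),
        "median": statistics.median(vcores),
        "p90": p90,
    }


if __name__ == "__main__":
    s = summarize_recommendations()
    print(f"{s['runs']} runs, spread {s['min']}-{s['max']} vCores")
    print(f"start near the median ({s['median']}), budget for the p90 ({s['p90']})")
```

The point is just to pick a starting point from the middle of the distribution and know how far the high end sits from it before you commit to a tier.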
I was reading up on how to size instances and resources and actually saw a recommendation to start somewhere in the middle, then size up or down based on what you end up seeing, because the processing approach is different enough that even your current workload may not be indicative of the resources you’ll need. I found myself nodding, then realized I wanted to scream a bit.
Add to this the fact that the sizing information and approach you do get back is not applicable between services. Nope, no apples to apples here; it’s more like apples to alien life forms. The guidance doesn’t come close to translating between the environments, so you’re left to do the sizing dartboard approach for both, then test and see what works in each one independently.
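Which is where we’ve landed: run the same representative workload against each candidate environment and compare what we actually measure. Something along these lines, with the caveat that the connection strings, the sample query, and the use of pyodbc are all assumptions about a setup like ours, not a recommendation:

```python
# Sketch of the "test each environment independently" approach: run the same
# representative query against each candidate endpoint and compare latency
# percentiles. Connection strings and the sample query are hypothetical.
import statistics
import time

import pyodbc  # third-party: pip install pyodbc; assumes the MS ODBC driver is installed

TARGETS = {
    "azure_sql": "Driver={ODBC Driver 18 for SQL Server};"
                 "Server=tcp:example.database.windows.net;Database=AppDb;"
                 "UID=user;PWD=...;Encrypt=yes",
    "rds_sqlserver": "Driver={ODBC Driver 18 for SQL Server};"
                     "Server=example.rds.amazonaws.com;Database=AppDb;"
                     "UID=user;PWD=...;Encrypt=yes",
}
SAMPLE_QUERY = "SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= DATEADD(day, -7, GETUTCDATE());"


def measure(conn_str, query, iterations=50):
    """Run the query repeatedly and return per-iteration latencies in milliseconds."""
    latencies = []
    conn = pyodbc.connect(conn_str, timeout=30)
    try:
        cursor = conn.cursor()
        for _ in range(iterations):
            start = time.perf_counter()
            cursor.execute(query).fetchall()
            latencies.append((time.perf_counter() - start) * 1000)
    finally:
        conn.close()
    return latencies


if __name__ == "__main__":
    for name, conn_str in TARGETS.items():
        ms = measure(conn_str, SAMPLE_QUERY)
        p95 = statistics.quantiles(ms, n=20)[-1]  # rough 95th percentile
        print(f"{name}: p50={statistics.median(ms):.1f} ms, p95={p95:.1f} ms")
```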
It’s a little frustrating.
It’s not like the risk of screwing it up is low, either. If you size wrong and requests get queued or just flat-out time out, your application may fail, performance may go south (or really, really south), or any number of other things may happen, depending on how you’ve configured things and how your application recovers from timeout failures and/or refused connections. It’s not an “oh, things are a bit slow, let’s crank up the power” type of situation.
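Which is why we’ve been paying as much attention to how the application handles those failures as to the sizing itself. Here’s a bare-bones sketch of the kind of retry-with-backoff behavior I mean, again in Python with pyodbc; the connection string, retry counts, and delays are placeholders you’d tune for your own environment:

```python
# Minimal sketch of retrying transient connection failures (timeouts, refused
# connections) with exponential backoff, so an undersized tier degrades
# instead of hard-failing. Connection string and retry limits are placeholders.
import random
import time

import pyodbc  # third-party: pip install pyodbc; assumes the MS ODBC driver is installed

CONN_STR = ("Driver={ODBC Driver 18 for SQL Server};"
            "Server=tcp:example.database.windows.net;Database=AppDb;"
            "UID=user;PWD=...;Encrypt=yes")


def connect_with_retry(conn_str, attempts=5, base_delay=1.0):
    """Try to connect, backing off exponentially (with jitter) between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return pyodbc.connect(conn_str, timeout=15)
        except pyodbc.Error as exc:
            if attempt == attempts:
                raise  # out of retries; let the caller decide what failure means
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.5)
            print(f"connect attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)


if __name__ == "__main__":
    conn = connect_with_retry(CONN_STR)
    print("connected:", conn.cursor().execute("SELECT @@SERVERNAME;").fetchval())
```

The exact code doesn’t matter; what matters is that an undersized tier turns into retries and visible slowdowns rather than hard failures you only hear about from users.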
It’s a little nuts.
But we’ll keep after it. We’ll figure it out. Or, we’ll get a really big dart board and close our eyes and throw darts at sizing schemes. Then we’ll hold our breath as we size up and down with test loads, then the same with production loading… What could possibly go wrong for the users? Our applications? Our environment?
I’ll keep you posted on what we find.