The future is Power(ful)

I am finally getting around to reading up on SQL Server 2017 (vNext), which will hopefully be released sometime this year.

Of course, we have all been spinning up Red Hat boxes so we can play with SQL Server on Linux (by the way, this does not excite me), but something else caught my eye that made me step back, scratch my head, and say “uh, excuse me?”

I found this statement to be very troubling.

  • SQL Server Reporting Services is no longer available to install through SQL Server’s setup as of CTP 2.1.

(https://docs.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2017)

SSRS is no longer available?  Say WHAT?

Reading on, I discovered that it is now “Power BI Report Server”, available in the cloud or as a downloadable on-premises version with a companion desktop application. (https://powerbi.microsoft.com/en-us/documentation/reportserver-get-started/)

Truth be told, I have not experimented with anything “Power” since MS PowerPivot and Excel.  (Hey, I’ve been busy with admin stuff.  Yes, I know I need to get into the current decade.)  This looks to be a promising direction MSFT is taking SQL reporting.  From what I have seen of Power BI through blogs, SQL Saturdays, and other presentations, I have been impressed with its capabilities.

I am just concerned about converting all those 2008–2016 SSRS reports to Power BI!  Well, at least it is job security!

Until next time…

 

Another change in jobs, and upcoming events

Recently I was forced to make a job change. 

I am now working for a reputable local bank as a DBA and am no longer a consultant.  I enjoyed my time as a consultant, even though I never could get used to the idea of being “billable”.  And putting the fate of your job in the hands of other individuals (the sales department) was nerve-wracking, to say the least.  But I learned valuable skills and information from my time as a consultant.

I hope to continue participating in the SQL community.  In fact, I am presenting my newest presentation “You want me to Isolate What? An Introduction into Data Integrity and Isolation Levels of SQL Server” THIS Saturday at SQL Saturday Pensacola #617.  Of course I will upload my slide deck and demos after this weekend.

And you can catch me in two weeks at SQL Saturday Houston #651.

Hope to see you there!

Expanding a DBA’s Role

I have never really considered myself a System Administrator, but I am familiar enough with the Windows operating system to consider myself a little dangerous. However, with more and more of our clients utilizing the power of Microsoft Azure VMs, I see my role as a SQL consultant starting to encompass System Administrator duties.

Recently a client found this article on “Best Practices for SQL Server in Azure Virtual Machines” and wanted to re-provision his volumes to adhere to them.

Now, my first thought was: wait, I’m a DBA, not a System Admin; that’s not my role! But thinking more about it, I realized the client views this as a SQL Server issue, I am the SQL Server consultant, and it is my job to remedy the problem.

Not being 100% confident in Azure, I spun up a SQL Server VM and attempted to add some volumes.  To my surprise, this was way too easy.

After selecting the server I wanted to manage…

Step 1: Select Disks on the left menu and then click “+ Add Data Disk”

AddingDisks1

Step 2: Configure Disk

  • Provide a name, or use the pre-populated one
  • Select the Source Type; in this case I left it at the default, “New (empty disk)”
  • Select the Account Type; again, I left it at the default, “Premium (SSD)”
  • Choose the size: from 0 to 1023 GB, which is up to a terabyte of space
  • Select the Storage Container. This will open another tab; just pick the one you want or create a new one
  • Storage blob name – once again, you can use the pre-populated name

AddingDisks2

Step 3: Select a Container

This is where the actual VHD file will reside. You can use an existing container or create a new one for these files.

AddingDisks3

 

That’s it!  That is all that is needed to mount a new volume to an existing SQL Server VM. The disk automatically appears in the OS Disk Management console, where you will need to create a “Simple Volume” (or however you would like to do it) and format the drive (remember, MSFT recommends a 64K allocation unit size for data and log drives).

You are now ready to go.
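If you want to sanity-check the result from inside SQL Server, a quick query against sys.dm_os_volume_stats will show the volumes your database files live on, along with free space. A minimal sketch (note it only reports volumes that already host a database file):

-- list the volumes hosting database files, with capacity and free space
select distinct
	vs.volume_mount_point
	,vs.logical_volume_name
	,vs.total_bytes/1024/1024/1024 [Total GB]
	,vs.available_bytes/1024/1024/1024 [Free GB]
from sys.master_files mf
cross apply sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs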

And removing an existing drive is even easier.  At the top of the Disk panel is the word “Edit”; click there and you are given the option to delete. It will automatically remove the disk from your server, no reboot, no warning!  So be careful.

Heck, maybe being a System Administrator in Azure isn’t so tough after all! 

Isolation Levels 101: Concurrency Issues

The other day I had a client complaining about slowness, queries erroring out, blocking and processes stopping; you know the usual complaints. So, like any other DBA I pulled out my handy dandy toolbox of queries to determine exactly what was going on with the server.

Upon reviewing their current running sessions, I noticed that the Isolation Level for a simple UPDATE transaction was “Serializable” and all other transactions were “Read Committed”. And this Serializable transaction was blocking all other transactions.
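For what it is worth, here is roughly the kind of query that lives in that toolbox: a minimal sketch using sys.dm_exec_requests and sys.dm_exec_sql_text to show who is blocking whom right now.

-- show currently blocked requests and what they are waiting on
select
	r.session_id
	,r.blocking_session_id
	,r.wait_type
	,r.wait_time
	,r.command
	,t.text [Query Text]
from sys.dm_exec_requests r
cross apply sys.dm_exec_sql_text(r.sql_handle) t
where r.blocking_session_id <> 0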

Now, let me preface the rest of this blog series with this: in all my SQL experience, I have never had to change the isolation level at the transaction or database level. Which means I have never studied this topic in depth. So this got me thinking: what are the different isolation levels, and why would anyone ever change from the default isolation level, which is “Read Committed”?

Typically, when I attempt to learn something, I like to write it down. Because this is a lot of information, I am going to present what I learn in three parts: concurrency issues and how SQL Server works, isolation levels and how they help resolve concurrency issues, and some helpful scripts to test and figure all this out.

So, let’s start at the beginning. The purpose of an isolation level is to provide SQL Server with a mechanism to protect data against specific concurrency problems when multiple CRUD (create, read, update, delete) operations occur on the same data.

Concurrency Problems

Data is intended to be read, modified, created, or deleted over time. Most databases have multiple connections doing all of these things at the same time. To manage all these concurrent connections, SQL Server must decide which statements get precedence. SQL Server typically operates on a “first-in, first-out” basis when handling queries, but what those queries do can determine the fate of the queries in line behind them. And these decisions can cause data concurrency issues. The following concurrency “phenomena” are identified in the ANSI/ISO SQL-92 standard as possible results of concurrent transactions.

Lost Updates

It is possible for two or more transactions to modify the same row. Each transaction reads the data, then the 1st transaction commits its change. The subsequent transaction then makes its own change based on the stale value it read and commits, thereby overwriting the change committed by the 1st transaction. In this scenario, the last transaction always wins.
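To make this concrete, here is a rough two-session sketch, assuming a hypothetical Inventory table with ItemID and Quantity columns; run the numbered steps in order across two query windows.

-- Session 1, step 1: read the current quantity (say, 10)
select Quantity from Inventory where ItemID = 1

-- Session 2, step 2: read the same quantity (also 10)
select Quantity from Inventory where ItemID = 1

-- Session 1, step 3: add 5 to the value it read and commit (quantity is now 15)
update Inventory set Quantity = 15 where ItemID = 1

-- Session 2, step 4: add 3 to the value *it* read and commit (quantity is now 13)
-- Session 1's update has been lost
update Inventory set Quantity = 13 where ItemID = 1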

Dirty Read

A dirty read is when a SELECT statement reads from a table that is currently being modified, before the change has been committed to the database. By default, SQL Server protects against this at all isolation levels except Read Uncommitted, or when you use the NOLOCK hint in your queries.
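A quick sketch of a dirty read, again using the hypothetical Inventory table and two query windows:

-- Session 1: modify a row but do not commit yet
begin transaction
update Inventory set Quantity = 99 where ItemID = 1
-- (no commit yet)

-- Session 2: NOLOCK reads the uncommitted value of 99
select Quantity from Inventory with (nolock) where ItemID = 1

-- Session 1: change its mind
rollback transaction
-- Session 2 has now read a value that never officially existed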

Non-Repeatable Read

This form of concurrency problem can happen when a specific dataset (using the same WHERE clause) is accessed multiple times within the same transaction, and some of the rows in the dataset are updated between the 1st and 2nd access. This will produce two different result sets.
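A minimal sketch of a non-repeatable read under the default Read Committed level, same hypothetical table:

-- Session 1: read the row twice within one transaction
begin transaction
select Quantity from Inventory where ItemID = 1   -- returns 10

-- Session 2: sneak in an update between the two reads
update Inventory set Quantity = 20 where ItemID = 1

-- Session 1: the same query now returns 20, a non-repeatable read
select Quantity from Inventory where ItemID = 1
commit transaction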

Phantom Reads

Very similar to non-repeatable reads, a phantom read occurs when two queries in the same transaction, against the same table and with the same WHERE clause, produce two different result sets because another session has inserted or deleted rows matching that WHERE clause between the first and second SELECT statements.
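And a sketch of a phantom read; the difference from above is that rows appear in (or vanish from) the range, rather than existing rows changing:

-- Session 1: count rows in a range twice within one transaction
begin transaction
select count(*) from Inventory where ItemID between 1 and 100   -- say, returns 50

-- Session 2: insert a new row that falls inside that range
insert into Inventory (ItemID, Quantity) values (42, 10)

-- Session 1: the same query now returns 51; a "phantom" row has appeared
select count(*) from Inventory where ItemID between 1 and 100
commit transaction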

How to combat these Concurrency Problems?

For most DBAs, the integrity of their data is, or should be, the most important aspect of their job. It quite literally can be a “job killer”. Fortunately for us, Microsoft has built SQL Server with several levels to isolate the data during CRUD operations and keep these concurrency problems in check. These isolation levels are called Read Uncommitted, Read Committed, Repeatable Read, Serializable, and Snapshot.
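Switching levels is straightforward; as a quick sketch, the level is set per session before the transaction begins (same hypothetical Inventory table as above):

-- set the isolation level for the current session/connection
set transaction isolation level serializable

begin transaction
	select Quantity from Inventory where ItemID = 1
commit transaction

-- switch back to the default when finished
set transaction isolation level read committed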

The following script, when executed, will display all current sessions and their isolation levels. It is always good to know what your connections are doing to SQL Server.

-- list current user sessions and their transaction isolation levels
select
	s.session_id
	,s.status
	,db_name(s.database_id) [Database]
	,case 
	     when s.transaction_isolation_level = 1 then 'Read Uncommitted'
	     when s.transaction_isolation_level = 2 then 'Read Committed'
	     when s.transaction_isolation_level = 3 then 'Repeatable Read'
	     when s.transaction_isolation_level = 4 then 'Serializable'
	     when s.transaction_isolation_level = 5 then 'Snapshot'
	     else 'Unspecified'   -- level 0
	end [Transaction Isolation Level]
	,s.host_name
	,s.cpu_time
	,s.deadlock_priority
from sys.dm_exec_sessions s
where s.session_id > 50   -- filter out most system sessions

In the case of my client, there was one UPDATE transaction using the Serializable isolation level, and it was this session that was BLOCKING all other transactions from completing. But why was the transaction Serializable, and why was it blocking for such a long period?

Not knowing the details behind Serializable Isolation Level, it was time for some more learning. I will discuss the different Isolation levels in my next post. Until then…

A New Mini-Adventure with Idera Software

Yesterday, Idera Software (@Idera_Software) announced their slate of 2017 Idera ACEs (Advisors and Community Educators).

I am honored to be included as an Idera ACE for 2017. Part of my responsibilities as an ACE is to represent Idera at local SQL Saturdays, which means I get to travel more, meet more people, and present more!

One of the reasons I enjoy working as a consultant with Sparkhound is the concept of “knowledge transfer”.  We believe in not only helping clients with their SQL Server problems, but also teaching them how to do it for themselves.

Being an Idera ACE seems to fall right in line with that philosophy!  Joining me in this endeavor are the following members of the SQL Family.  I hope to meet each and every one of you over the course of the next year!

Rie Irish – Atlanta, GA

Mindy Curnutt – Dallas, TX

Maximo Trinidad – Port St. Lucie, FL

Lance Tidwell – Austin, TX

Brandon Preece – Salt Lake City, UT

Sebastian Meine – Philadelphia, PA

I will also do my best to dedicate more time to blogging and answering community forum questions, and I will see you at SQL Saturdays!

A Day in the life of SQL Mirroring

This past week I have been working with a client to set up mirroring on a SQL 2014 instance.  I realize mirroring is a deprecated feature, but many DBAs and companies still use it as a fail-safe line of defense. And given that mirroring has been a “deprecated feature” since SQL 2012 and is still available in SQL 2016, I think it might be around a little longer.

I am writing about this not to call out any client for using mirroring, nor to promote or condone it, but to remind myself of the things to look for when setting up any mirroring environment.

The Task at Hand

My client was upgrading from SQL 2012 to 2014.  The current SQL 2012 instance was mirrored to a second server.  I was to reproduce the mirroring setup on the 2014 instance in preparation for the rollout to production.

This particular setup was problematic for me because of Windows Firewall rules and having multiple instances (heck, multiple versions) of SQL Server installed on the same server.

Server Setup

The following versions of SQL Server were installed on the same server:

  • SQL 2008 R2 (default instance)
  • SQL 2012 (named instance), mirrored with a second server
  • SQL 2014 (named instance) – this was the instance I needed to set up with mirroring to reproduce the SQL 2012 environment
  • Windows Firewall was configured to basically close everything unless a rule specified otherwise

This list of things to check may not apply to all configurations and all mirroring setups.  It is just the set of things that prevented this setup from succeeding.

  1. With multiple instances and connections via Named Pipes, it is very important to have the “Named Pipes” protocol enabled in SQL Server Configuration Manager.
  2. With multiple instances, the SQL Browser service becomes very important, especially if you do not want to use port numbers in your connection strings.  Here is the important part: check the Windows Firewall rules to ensure UDP port 1434 has both an incoming and an outgoing rule.
  3. Mirroring requires its own port for communication as well.  Incoming and outgoing Windows Firewall rules will have to be created to ensure communication between the two servers.  By default this is TCP port 5022.  However, because the SQL 2012 instance was already using that port for mirroring, I had to configure SQL 2014 with port 5023.  Again, another firewall rule; actually, I just added the new port to the existing rule.
  4. The last piece that was problematic was permissions.  Because the principal and the mirror each had their own domain service account, each account needed to be added to the opposite server with the correct permissions (a sketch of the endpoint and permission setup follows this list).
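For reference, here is a rough T-SQL sketch of those last two pieces. The endpoint name, port, and service account below are made up for illustration; substitute your own.

-- on each server: create the mirroring endpoint on the non-default port
create endpoint Mirroring5023
	state = started
	as tcp (listener_port = 5023)
	for database_mirroring (role = partner)

-- on each server: let the other server's service account connect
create login [MYDOMAIN\sqlsvc_partner] from windows
grant connect on endpoint::Mirroring5023 to [MYDOMAIN\sqlsvc_partner]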

Once all of this was figured out and configured correctly, the mirroring session started up with no problems!