Hostile Takeover

Last night, I had the privilege of presenting to the Baton Rouge SQL Server User Group (@BRSSUG).  This was my 2nd time presenting to the group, and I hope it was as informative for them as it was enjoyable for me.

The Nuts and Bolts of the presentation

The whole premise of the presentation was to outline what to do when you are handed a SQL Server to manage.  In some cases you may “discover” a new SQL instance on your network and not have access to it.  I presented one method of gaining access through what I call a “hidden door”.
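
For anyone who missed the demo, here is a minimal sketch of one well-known “hidden door”: restarting the instance in single-user mode so that a local administrator can grant themselves sysadmin rights. I will not spoil whether this is the exact door from the talk, and the account and server names below are placeholders:

-- A general technique, not necessarily the exact "door" from the talk.
-- From an elevated command prompt on the server (you must be a local admin):
--   NET STOP MSSQLSERVER
--   NET START MSSQLSERVER /m"SQLCMD"   -- /m"SQLCMD" is 2008+; plain -m works on 2005
--   SQLCMD -S YourServer -E            -- local admins connect as sysadmin in single-user mode
-- Then grant yourself real rights (account name is a placeholder):
CREATE LOGIN [DOMAIN\YourAccount] FROM WINDOWS;
EXEC sp_addsrvrolemember 'DOMAIN\YourAccount', 'sysadmin';
GO
-- Finally, restart the service normally: NET STOP MSSQLSERVER, then NET START MSSQLSERVER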

I am a big supporter (even though I don’t do it as often as I should) of documentation!  Knowledge is Power, and as our friendly neighborhood Spiderman would say, “With great power comes great responsibility”.

What type of information do you need to collect and document?  I covered several bits of information that I gather on a regular basis, gave several tips, and demonstrated some wonderful scripts from people much smarter than I am for collecting it.  I have been using these scripts in “My SQL Toolbox” for a long time.
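
To give a flavor of what lives in that toolbox, here is a trivial inventory query of my own (not one of the linked scripts) that captures the basics of an instance:

-- Basic instance facts worth recording for every server you take over
SELECT SERVERPROPERTY('MachineName')    AS MachineName,
       SERVERPROPERTY('InstanceName')   AS InstanceName,   -- NULL for a default instance
       SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ServicePackLevel;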

Here is the slide deck I used last night, with all the links to the various tools of “people smarter than me” that I use on a regular basis.

Hostile Takeover

On September 10th, I will be presenting at the Baton Rouge SQL Server Users Group (@BRSSUG) on a topic that is near and dear to me.

Since becoming the DBA at Woman’s Hospital a year ago, I have used these techniques and processes almost every day in taking “ownership” of over 90 SQL Server instances. After a year, I still have some instances I have to “deal with”; but I am using these steps to implement my “Hostile takeover”!

I will be discussing my tips, processes, and the SQL tools I use to gather as much information as possible about the SQL Server instances of which I have now become the administrator.

I will upload my slide deck after the presentation and will give a debriefing here.

BTW, this is my 2nd time presenting to the User Group, maybe next time I will update my SQL Jeopardy and we can play again!

Well, it is official!

I have been asked, I agreed, and my topic has been accepted.

I am the September Presenter at the Baton Rouge SQL Server User Group meeting!

Hostile Takeover

–A SQL Server instance was just handed to you to manage.  OK, great, now what?  What are the logical steps and/or best practices for taking over the management of a SQL Server instance?

I will provide more info on my presentation after I present it.

Thank the SQL Gods!

Microsoft has listened to the outcry of SQL DBAs everywhere!

New service packs will be released for SQL Server 2008 and SQL Server 2008 R2, extending their lifecycles for another 12 months!

I am not sure about you, but I for one am happy.  That gives me more time to plan!

SQL 2008 Service Pack Announcement

SQL CoPilot–A Software Review

As the only DBA in a 24×7, mission-critical shop (I work in a hospital) managing over 60 instances of SQL Server ranging from version 8 to 11 (SQL 2000 to SQL 2012), I am ALWAYS on the lookout for software that will help me manage instances.  Heck, who am I kidding.  I want a cheap piece of software (since I have NO software budget to speak of) that tells me what I need to know when I need to know it, and that lets me present it to the “powers that be” so I can argue for more RAM, more CPU cycles, and more SAN space.  And that is exactly what SQL CoPilot does for me, among many other things.

SQL CoPilot is not really a “program” in the traditional sense of the word.  It does not get installed, it does not run a Windows service, and it doesn’t require a bunch of resources on target servers.  So how does it work?  To be perfectly honest, I am not sure.  But because it is not compatible with SQL 2000, I can only assume it uses DMVs and DMFs (dynamic management views and functions) to gather all the information it needs. Because SQL 2005 and later are designed to return DMV and DMF results quickly, the response time for CoPilot is amazing, even on heavily used machines.
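
As an illustration of why that design is fast, here is the sort of lightweight DMV query I assume a tool like this runs under the covers (purely my guess, not anything from the vendor):

-- Top waits since the last restart; cheap to run even on a busy box
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;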

So what can the program do for you?  Just read the “About Page” to see all the information presented to you in a quick and efficient manner.  Below is a snapshot of the main page.  Its interface is VERY intuitive and simple to understand.  All of the “snapshot” squares found in the “Big Picture” navigate to a page with detailed information.

[Screenshot: SQL CoPilot main page]

The information presented for databases is very detailed and very informative.

[Screenshot: the databases gallery]

And the best part?  If you just want a snapshot or a quick glance at your instances, then try out the SQL CoPilot Free Edition.  But I promise you, after a week of using SQL CoPilot you will want to buy a full license for the additional features: unused indexes, duplicate indexes, index fragmentation, and all sorts of other “goodies”.  I know I did!  And at only $120 per licensed USER (that’s right, USER, not instance) it is the best bargain out there.  Much of the 3rd-party software that provides this type of information charges hundreds of dollars per instance monitored.

As much as I have praised SQL CoPilot, for me there are some downsides.  It doesn’t work with SQL 2000.  But hey, I may be the only schmuck who is still running SQL 2000 in production.  I doubt it, but at least, thanks to SQL CoPilot, I have fewer instances I have to dig up the details on by hand.

SQL CoPilot does not have a method of “capturing” the data it collects for baseline measurements or historical information.  This is not a monitoring tool with a historical repository, but it is so close to one that it makes me want more.  Maybe just a simple button that says “snapshot it” and records all the information for that view in a pre-determined repository database somewhere.

All in all, this product is well worth the $120 purchase price!  Give the Free Version a try and I would be willing to bet you a cup of coffee that within a month you will pay for the full version!

Merry Christmas

A little personal post this morning.   I wish everyone, those I know and those I don’t, a blessed Merry Christmas and a Happy New Year!

[Image: Yoda Santa]

8 Weeks of Indexes: Types of Indexes

This is the 3rd post in a series entitled “8 Weeks of Indexes”.  The purpose of this series is as much for me to learn more about indexes as anything else.

There are many types of indexes used in Microsoft SQL Server. I could easily use up the remainder of my weekly posts discussing each one in depth, but in order to actually get to the meat and bones of how to use indexes, this post is an overview of the more commonly used ones.

Each version of MS SQL from 2005 onward has continued to add index types to the engine. SQL 2005 had a total of 7 types of indexes, SQL 2008 added 1 additional type, SQL 2012 has a total of 10 types, and SQL 2014 lists 12 different types.

My personal experience with index usage has unfortunately been limited to numbers 1 through 6 for SQL 2005. I currently do not manage any data store with spatial data, so I haven’t had a need to use that one.  I am really excited about experimenting with Filtered Indexes, and since I do not (yet) have a SQL 2012 server in production, I have not looked at the two newly added indexes.

Here is the breakdown of each type of index by SQL engine version. Indexes new to a specific version are the ones with descriptions below, and a short T-SQL sketch follows each version’s list.

SQL 2005

  1. Clustered – based on one or more columns, this index determines the physical sort order in which the data rows are written to the database.  A clustered index for a phone book would be on “Last Name”.
  2. Non-clustered – can be created on one or more columns in addition to a clustered index. This index creates a “pointer” to the row where the data is located. If you wanted to find all the “Robert”s in a phone book, you could create a non-clustered index on the First Name column.
  3. Unique – ensures that a clustered or non-clustered index’s key does not contain duplicate values.
  4. Index with included columns – extends the functionality of non-clustered indexes to carry “non-key” columns in the index.
  5. Indexed Views – a unique way to present structured, indexed data in a view rather than a table. Non-clustered indexes can be used as well, but only after a clustered index has been created.
  6. Full-text – a special type of index used exclusively with the Microsoft Full-Text Engine to facilitate word searches within data columns.
  7. XML – a way to index XML data type columns.
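
To make the phone book analogy concrete, here is a minimal T-SQL sketch of types 1, 2, and 4 (the table and index names are my own made-up example):

-- The phone book, as a table (a made-up example)
CREATE TABLE dbo.PhoneBook (
    LastName  varchar(50) NOT NULL,
    FirstName varchar(50) NOT NULL,
    Phone     varchar(20) NOT NULL
);

-- 1. Clustered: the physical sort order, like the book's last-name ordering
CREATE CLUSTERED INDEX CIX_PhoneBook_LastName
    ON dbo.PhoneBook (LastName);

-- 2. Non-clustered: a pointer for finding all the "Robert"s
CREATE NONCLUSTERED INDEX IX_PhoneBook_FirstName
    ON dbo.PhoneBook (FirstName);

-- 4. Included columns: carry Phone in the index so the lookup never touches the base table
CREATE NONCLUSTERED INDEX IX_PhoneBook_FirstName_Phone
    ON dbo.PhoneBook (FirstName) INCLUDE (Phone);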

SQL 2008 and 2008 R2

  1. Clustered
  2. Non-Clustered
  3. Unique
  4. Index with included columns
  5. Full-Text
  6. Spatial – provides the ability to index “spatial data” in a column of geometry or geography data type
  7. Filtered – provides a way to filter out unnecessary rows of a commonly used subset of data
  8. XML
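
Filtered indexes are the ones I am most excited about, so here is a quick sketch of the idea (table, column, and index names are placeholders of mine):

-- Index only the rows a common query actually touches
CREATE NONCLUSTERED INDEX IX_Orders_Open
    ON dbo.Orders (OrderDate)
    WHERE Status = 'Open';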

SQL 2012

  1. Clustered
  2. Non-Clustered
  3. Unique
  4. Columnstore – based on vertical partitioning of the data by columns
  5. Index with included columns
  6. Index on computed columns – an index on a column whose value is derived from one or more other columns
  7. Filtered
  8. Spatial
  9. XML
  10. Full-Text
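
For the curious, the new columnstore syntax is short enough to sketch here (made-up names; note that in SQL 2012 the index is non-clustered only and makes the table read-only while it exists):

-- Columnstore: store the listed columns vertically for fast scans/aggregates
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Sales
    ON dbo.Sales (SaleDate, ProductId, Quantity, Amount);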

SQL 2014

  1. Hash – an index that locates data through an in-memory hash table
  2. Memory-optimized non-clustered – used with the new In-Memory OLTP feature of “memory-optimized tables”
  3. Clustered
  4. Non-clustered
  5. Unique
  6. Column store
  7. Index with Included columns
  8. Index on computed columns
  9. Filtered
  10. Spatial
  11. XML
  12. Full-Text
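
And since hash indexes exist only on memory-optimized tables, here is a sketch of the two new types together (my own example; the database first needs a MEMORY_OPTIMIZED_DATA filegroup, and the bucket count is just a guess):

-- SQL 2014: a memory-optimized table with an inline hash index as its primary key
CREATE TABLE dbo.SessionCache (
    SessionId int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload nvarchar(4000) NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);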

8 Weeks of Indexes: What is an Index?

First and foremost, I am a slacker!  It takes discipline to write a weekly blog; I am not sure how some people can do it daily!  However, I hope to get back on track with this series.

Merriam-Webster defines index as:

a list … arranged usually in alphabetical order of some specified datum

One of the most common real-world examples of an index is your telephone book. The book stores information (name, address, and phone number) sorted alphabetically by last name. The pages are written in such a way that A comes before B and B comes before C, etc. If one knows the alphabet, then any name can easily be looked up.  Typically the first “key” to finding a name is at the top of the page, which tells you what section of the book you are in.  If you were to locate my entry in the phone book, you would quickly scan through the keys until you found the letter B at the top of a page.  Then you would continue to scan until you found the group of entries for BISHOP, and of course then locate which entry matched my name, BISHOP, ROBERT. If there were no key at the top of the page, you would have to seek through all the pages one at a time until you got to the B section.  Another excellent real-world example of an index system is the Dewey Decimal System; libraries have been indexing their books with a numbering system for years.

So, how does this all relate to SQL Server?  Several key words above translate directly to SQL Server terms, and SQL works much the same way as a phone book.  To fully understand how SQL indexes work, one really needs to know how SQL stores data. We know SQL has the .mdf files that actually store all your data.  However, the data file is made up of pages that are 8 KB in size.  At the top of each page is a “page header” used to store system information about that page.  There are many different types of pages that store different things, but the two specific types I want to talk about are “data” pages and “index” pages.

A “data page” is where your actual data (based on data types) is stored, and as you guessed, the index page stores index information. The “key” to proper storage of data is a clustered index.  A clustered index physically writes and stores the rows of data in pages by the selected column(s) and sort order.  So a clustered index on a user table could be on the column “Last Name”, just like a phone book. This ensures that the data rows are written in alphabetical order on each page, and in turn each page is in alphabetical order as well; very efficient.  The SQL engine can then seek through the index to determine exactly which page the “B” last names are located on.  If a table does not have a clustered index, the data is stored in a “first come, first served” fashion.  In this scenario, the SQL engine has to scan the entire page, or multiple pages, to find your entry; very inefficient.  Imagine how inefficient a phone book would be if the publisher just kept adding rows to the end of the book every year without sorting by name.  How long would it take you to find my name then?
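
If you want to see those pages for yourself, this standard DMF reports how many pages each index on a table uses (the table name is a placeholder):

-- Page counts and fragmentation for every index on a given table
SELECT index_id,
       index_type_desc,
       page_count,
       avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
        DB_ID(), OBJECT_ID('dbo.Users'), NULL, NULL, 'LIMITED');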

So, the key to storing data in SQL is to have a pre-determined way you want the data rows saved to the page. Ideally this would be the most-used method of finding a row, i.e. by “Last Name”.

Next week…..Types of Indexes

Software Review: Idera’s SQL Elements

Over the last few weeks, I have had the opportunity to beta test Idera’s newest product, SQL Elements.  My initial response?  This is both a “fantastic” product and a “tease” of a product; I’ll explain later. Idera has definitely done their homework.

Initial install is easy; all you need is a web server, a SQL Server for the data repository, and either a domain service account with SysAdmin privileges on the SQL Servers you would like to monitor or a SQL account with SA privileges. My environment uses a domain security group called SQL-DBA that includes both my account and all the service accounts needing SysAdmin rights, so I just used one of those service accounts.

One of the better aspects of SQL Elements (SE) is the “auto discover” feature; it finds the servers for you.  It basically scours your network for every SQL Server instance it can find, including Express Editions. Even if the service account does not have SysAdmin rights, SE will find the server and list it, which can help you realize which SQL Servers you, as the DBA, do not have access to.  It sometimes has problems determining the instance name (if you are using named instances, for example “Server\SQLExpress”), but who really uses SQL Express Edition for production anyway? You can always manually add the named instance to avoid that issue. The feature also periodically scans the network for any new instances and automatically adds them to the “Discovered Instances” list, so you know when someone else installs SQL Server.  NICE FEATURE!

SQL Elements uses the concept of “health checks” to determine the status of your SQL Server.  These health checks include: DBCC CHECKDB consistency, current backup checks, auto-shrink enabled, and “optimize for ad hoc workloads” just to name a few.  Many of the more critical checks have email alerts associated with them to let you know when a database is filling up or when a drive is running out of room.

Once you log in to the website, the Dashboard for SE provides a brilliant snapshot of your environment. First and foremost, at the top are the “Health Check Recommendations” that SE has found in your environment. Each health check is given a “Level” based on the severity of the problem.  Idera provides a brief explanation of why each recommendation is made and a link to a more detailed explanation. Once you review the recommendations, you have the choice of dismissing the alert or refreshing it once you have (supposedly) resolved the issue behind the recommendation.

Below the recommendations are two simple graphs listing “Top Databases by Size” and “Top Databases by Activity”.  Personally, I wish I could hide this module of the dashboard and move the “Instances” grid up in its place. I haven’t found much use for these two graphs, but maybe that is just me.  The grid of instances is very user-friendly; it’s a simple list of which instances are being monitored, their monitoring status, response time, version, number of databases, and total size of databases. Each column is sortable, and the grid shows 20 listings per page, which is a very reasonable size to work with.

On the right side menu is a simple “My Environment” section, which allows you to manage the SQL Servers in your environment.  The ability to classify the servers by “tags” is nice, especially if you want to look at just your “Critical 24×7” servers or just your “Test” servers. I really like the concept of “labeling” SQL Servers with a category so I can prioritize the server health check recommendations. I only wish that when selecting a “tag”, the resulting page showed the “Health Checks” for those specific servers rather than the “Explorer” tab.

Clicking on the instance name brings you to a very valuable “Instance Details” page. Again, at the top is the list of Health Check Recommendations for this particular instance.  Below that is a grid listing all the databases found on the instance, as well as their status, recommendations, type, data size, log size, and activity. I would love this grid to include the compatibility level of each database, because many times developers will restore, move, or copy a database from one server to a higher-version server and not change the compatibility level.  On the right side menu, you have simple information pertaining to this particular instance.  Clicking on the server name, however, will bring you to “Hardware Details”. There is also a link to view the SQL configuration settings.
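
Until that column shows up, here is the standard catalog query I run by hand to catch those mismatches:

-- Spot databases whose compatibility level lags the instance they now live on
SELECT name, compatibility_level
FROM sys.databases
ORDER BY compatibility_level, name;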

There are more sources of information found in SQL Elements. I won’t go into all of them here, but they include an “Explorer” tab, which allows you to explore your environment by filters and tags; it can be helpful when trying to locate a specific server in a larger environment. As my environment is not that large, I really haven’t used it much.

So, after this “novel” of a review, here is what I think of SQL Elements:

First and foremost, the application is well written, has smooth transitions between pages, and has yet to throw any type of exception error at me. The ability to classify instances is a wonderful concept, and I use it every day.  One of the additional features that I truly enjoy is the ability to assign an owner and a location to an instance. I assign the “end user” as the owner, and either “Data Center” (for physical servers) or “VM Ware” (for, well, VM instances) as the location. That way, I quickly know whether I am dealing with a physical server or not.  Monitored instances have to be SQL 2005 SP1 or above, which leaves me a little frustrated since, unfortunately, we are still running a dozen or so SQL 2000 instances. But it does let me know what SQL 2000 servers I have out there, so I am able to start my migration plans now!

The only major “flaw” I found with SQL Elements is the lack of reporting on the data it collects. Many times, managers and directors require “physical proof” of why I am asking for another terabyte of drive space for a SQL Server. The “powers that be” like pretty graphs and trends. If a drive is running out of space, we need to be able to show them the trend of drive usage so we can justify that new 1.5 terabyte hard drive. Having participated in the beta forums for SQL Elements, I have faith that Idera will listen to the masses and provide some sort of reporting feature in the future.

I mentioned earlier that this was a “tease” of a product.  The DBA who uses this product needs to remember that it is not a monitoring tool.  If you are hoping it will provide full SQL monitoring, then you will be disappointed; for that, I would recommend Idera SQL Diagnostic Manager. However, if you want a way to know what SQL instances are in your environment and get a quick overview of each server, then SQL Elements is for you. This is an “inventory tool” with some basic monitoring of the most fundamental aspects of a SQL Server: drive size, data integrity, backups, etc. Things that could and will cause major problems if not checked regularly.

This is a very valuable tool for new DBAs or IT groups that have no clue what they have on their network (which describes my group; we never had a DBA for our 100-plus instances of SQL before me).  I would definitely recommend this product!  I only hope I can convince my “powers that be” to get it for me!

Vendors – Trust, but Verify

Well, between yesterday and today, I have learned a very valuable lesson when dealing with Vendor MS SQL installations.  And that is, “Trust, but Verify”.

I won’t call out the vendor; I am not that mean.

They set up 2 SQL boxes with replication between them and a push snapshot subscription to 12 different desktops running SQL Express.  At first I thought, nice system: all data changed in a central database and then pushed out daily to the clients.

However, the vendor neglected to set up ANY maintenance plans, and that causes problems.  I found myself dealing with an out-of-control transaction log, bloated by replication, that had completely filled up a 100 GB drive.

Here’s how I solved it.

  1. Because this was a VM server, I asked networking if I could get another drive to use for now.
  2. I added another log file to the database and located it on the new drive.  Now that the log was able to expand, I could fix the problem.
  3. I set up the maintenance plans and executed the FULL backup immediately.
  4. I executed the log backup.
  5. Using SHRINKFILE, I “released unused space” from the main log file:
  6. DBCC SHRINKFILE (N'DBname_log', 0, TRUNCATEONLY)
  7. Executed another FULL backup.
  8. Executed another log backup.  Now my logs were mostly empty, and having returned almost 80 GB of drive space back to the OS, I needed to undo the changes I had made.
  9. I repeatedly tried to drop the 2nd log file and repeatedly got a “file not empty” message.  I then noticed a little setting in the Shrink File screen. [Screenshot: the Shrink File dialog] Scripting it out, I discovered the statement below.  Wow, you learn something new every day.  Executing it moved what few transactions were left in the 2nd log file to the main log file, so I could remove the file without any problems.

DBCC SHRINKFILE (N'DBname_log', EMPTYFILE)
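
For my own notes, here is the whole sequence roughly scripted end to end (a sketch only; database, file, and path names are placeholders, and your sizes will differ):

USE master;
-- Step 2: add a second log file on the emergency drive
ALTER DATABASE DBname ADD LOG FILE
    (NAME = N'DBname_log2', FILENAME = N'E:\SQLLogs\DBname_log2.ldf', SIZE = 10GB);

-- Steps 3 and 4: full backup, then log backup
BACKUP DATABASE DBname TO DISK = N'F:\Backups\DBname_full.bak';
BACKUP LOG DBname TO DISK = N'F:\Backups\DBname_log.trn';

-- Steps 5 and 6: release the unused space in the main log file
USE DBname;
DBCC SHRINKFILE (N'DBname_log', 0, TRUNCATEONLY);

-- Steps 7 and 8: back up again so the logs end up mostly empty
BACKUP DATABASE DBname TO DISK = N'F:\Backups\DBname_full2.bak';
BACKUP LOG DBname TO DISK = N'F:\Backups\DBname_log2.trn';

-- Step 9: empty the second log file into the first, then drop it
DBCC SHRINKFILE (N'DBname_log2', EMPTYFILE);
ALTER DATABASE DBname REMOVE FILE DBname_log2;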

I am not 100% sure this was the “best” method of doing it, but it worked.  And it taught me a valuable lesson in working with vendors: “Trust, but Verify”.

Time to go back and double-check all the other 50 or so SQL Servers that our wonderful vendors set up!
