Microsoft has listened to the outcry of SQL DBAs everywhere!
New service packs will be released for SQL Server 2008 and SQL Server 2008 R2, extending their lifecycles for another 12 months!
I am not sure about you, but I for one am happy. That gives me more time to plan!
As the only DBA in a 24×7, mission-critical shop (I work in a hospital), managing over 60 instances of SQL Server ranging from version 8 to 11, I am ALWAYS on the lookout for software that will help me manage instances. Heck, who am I kidding? I want a cheap piece of software (since I have NO software budget to speak of) that tells me what I need to know when I need to know it, and lets me present it to the “powers that be” so I can argue for more RAM, more CPU cycles, and more SAN space. And that is exactly what SQL CoPilot does for me, among many other things.
SQL CoPilot is not really a “program” in the traditional sense of the word. It does not get installed, it does not run a Windows service, and it doesn’t require a bunch of resources on target servers. So how does it work? To be perfectly honest, I am not sure. But because it is not compatible with SQL 2000, I can only assume it uses DMVs and DMFs (dynamic management views and functions) to gather all the information it needs. Because SQL 2005 and later are designed to return DMV and DMF results quickly, the response time for CoPilot is amazing, even on heavily used machines.
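To give you a feel for what I mean, here is the kind of query the DMVs make possible. This is purely my guess at what a tool like this might run under the hood, not anything taken from CoPilot itself:

```sql
-- Top waits since the instance last restarted - a cheap, read-only
-- peek at where the server spends its time. (Illustrative only;
-- I have no idea what CoPilot actually runs.)
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_sec,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
ORDER BY wait_time_ms DESC;
```

Queries like this touch only in-memory counters, which is exactly why a DMV-based tool can feel so fast even on a busy box.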
So what can the program do for you? Just read the “About Page” to see what information is presented to you in a quick and efficient manner. This is a snapshot of the main page. Its interface is VERY intuitive and simple to understand. All of the “snapshot” squares found in the “Big Picture” navigate to a page with detailed information.
And the best part? If you just want a snapshot or a quick glance at your instances, then try out the SQL CoPilot Free Edition. But I promise you, after a week of using SQL CoPilot you will want to buy a full license. You will want to see the additional features: unused indexes, duplicate indexes, index fragmentation, and all sorts of other “goodies”. I know I did! And at only $120 per licensed USER (that’s right, USER, not instance) it is the best bargain out there. Many third-party tools that provide this type of information charge hundreds of dollars per instance monitored.
As much as I have praised SQL CoPilot, for me there are some downsides. It doesn’t work with SQL 2000. But hey, I may be the only schmuck still using it in production. I doubt it, but at least, thanks to SQL CoPilot, I have fewer instances left to dig up the details on by hand.
SQL CoPilot does not have a method of “capturing” the data it collects for baseline measurements or historical information. This is not a monitoring tool with a historical repository, but it is so close to one that it makes me want more. Maybe just a simple button that says “snapshot it” and records all the information for that view in a pre-determined repository database somewhere.
All in all, this product is well worth the $120 purchase price! Give the Free Version a try and I would be willing to bet you a cup of coffee that within a month you will pay for the full version!
This is the 3rd post in a series entitled “8 Weeks of Indexes”. The purpose of this series is mostly for me to learn more about indexes.
There are many types of indexes in Microsoft SQL Server. I could easily use up the remainder of my weekly posts discussing each one in depth, but in order to actually get to the meat and bones of how to use indexes, this post is an overview of the more commonly used ones.
Each version of SQL Server from 2005 on continues to add index types to the engine. SQL 2005 had a total of 7 types of indexes, SQL 2008 added 1 more, SQL 2012 has a total of 10, and SQL 2014 lists 12 different types.
My personal experience with index usage has unfortunately been limited to the first six types listed for SQL 2005. I currently do not manage any data store with spatial data, so I haven’t had a need for that one. I am really excited about experimenting with Filtered Indexes, and I do not (yet) have a SQL 2012 server in production, so I have not looked at the newly added indexes.
Here is the breakdown of each type of index by SQL engine version. Highlighted indexes are new to that specific version.
SQL 2005
- Clustered – based on one or more columns, this index determines the order in which the data rows are physically written to the database. A clustered index for a phone book would be on “Last Name”.
- Non-clustered – can be created on one or more columns in addition to a clustered index. This index creates a “pointer” to the row where the data is located. If you wanted to find all the “Robert”s in a phone book, you could create a non-clustered index on the First Name column.
- Unique – ensures that a clustered or non-clustered index’s key does not have duplicate values.
- Index with included columns – extends the functionality of non-clustered indexes by including “non-key” columns in the index.
- Indexed Views – a way to materialize and index structured data in a view rather than a table. A view can have non-clustered indexes as well, but only after a unique clustered index has been created on it.
- Full-text – a special type of index used exclusively with the Microsoft Full-Text Engine to facilitate word searches within data columns.
- XML – a way to index XML data type columns
SQL 2008 and 2008 R2
- Index with included columns
- Spatial – provides the ability to index “spatial data” stored in a column of the geometry or geography data type.
- Filtered – provides a way to index a commonly used subset of rows, filtering out the unnecessary ones.
SQL 2012
- Columnstore – based on vertical partitioning of the data by columns.
- Index with included columns
- Index on computed columns – an index on a column whose value is derived from one or more other columns.
SQL 2014
- Hash – an index that accesses data through an in-memory hash table.
- Memory-optimized non-clustered – used with the new “memory-optimized tables” feature.
- Column store
- Index with Included columns
- Index on computed columns
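To put a few of the more common types above into T-SQL terms, here is a quick sketch. The table and column names are made up for illustration, not from any real system:

```sql
-- Hypothetical Customers table used only for illustration.
CREATE TABLE dbo.Customers (
    CustomerID int IDENTITY(1,1) NOT NULL,
    LastName   varchar(50) NOT NULL,
    FirstName  varchar(50) NOT NULL,
    Phone      varchar(20) NULL,
    IsActive   bit NOT NULL
);

-- Clustered: the physical sort order, like the phone book's "Last Name".
CREATE CLUSTERED INDEX CIX_Customers_LastName
    ON dbo.Customers (LastName, FirstName);

-- Non-clustered with included columns: the "non-key" Phone column
-- rides along so a lookup can be answered from the index alone.
CREATE NONCLUSTERED INDEX IX_Customers_FirstName
    ON dbo.Customers (FirstName)
    INCLUDE (Phone);

-- Filtered (SQL 2008 and up): indexes only a subset of the rows.
CREATE NONCLUSTERED INDEX IX_Customers_Active
    ON dbo.Customers (LastName)
    WHERE IsActive = 1;
```

Unique indexes just add the UNIQUE keyword to either of the CREATE INDEX statements above.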
First and foremost, I am a slacker! It takes discipline to write a weekly blog; I am not sure how some people can do it daily! However, I hope to get back on track with this series.
Merriam-Webster defines index as:
a list … arranged usually in alphabetical order of some specified datum
One of the most common real-world examples of an index is your telephone book. The book stores information (name, address, and phone number) sorted alphabetically by last name. The pages are written in such a way that A comes before B and B comes before C, etc. If one knows the alphabet, then any name can be easily looked up. Typically the first “key” to finding a name is at the top of the page, which tells you what section of the book you are in. If you were to locate my entry in the phone book, you would quickly scan through the keys until you found the letter B at the top of a page. Then you would continue to scan until you found the group of entries for BISHOP. And of course, then locate which entry matched my name, BISHOP, ROBERT. If there were no key at the top of the page, you would have to seek through all the pages one at a time until you got to the B section. Another excellent real-world example of an index system is the Dewey Decimal System; libraries have been indexing their books with a numbering system for years.
So, how does this all relate to SQL Server? There are several bold-print words above that translate to SQL Server terms and show how SQL works the same way as a phone book. To fully understand how SQL indexes work, one really needs to know how SQL stores data. We know SQL has the .mdf files that actually store all your data. However, the data file is made up of pages that are 8 KB in size. At the top of each page is a “page header” used to store system information about that page. There are many different types of pages that store different things, but two specific types I want to talk about are “data” pages and “index” pages.
A “data page” is where your actual data (based on data types) is stored, and as you guessed, the index page stores index information. The “key” to proper storage of data is a clustered index. A clustered index physically writes and stores the rows of data in pages, ordered by the selected column(s) and sort order. So a clustered index on a user table could be on the column “Last Name”, just like a phone book. This ensures that the data rows are written in alphabetical order on each page, and in turn each page is in alphabetical order as well. Very efficient. The SQL engine “seeks” the index to determine exactly which page the “B” last names are located on. If a table does not have a clustered index, the data is stored in a “first come-first served” fashion. In this scenario, the SQL engine has to scan the entire page, or multiple pages, to find your entry. Very inefficient. Imagine how inefficient a phone book would be if the publisher just kept adding rows to the end of the book every year without sorting them by name. How long would it take you to find my name then?
So, the key to storing data in SQL is to have a pre-determined way you want the data rows saved to the page. Ideally this would be the most-used method of finding a row, i.e. by “Last Name”.
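A quick way to see which of your tables already have that pre-determined order is the catalog views. This is just a sketch of the idea: in sys.indexes, an index_id of 0 means the table is a heap and 1 means it has a clustered index.

```sql
-- One row per table: is it a heap ("first come-first served")
-- or does it have a clustered index (pre-determined order)?
SELECT t.name AS table_name,
       CASE i.index_id
            WHEN 0 THEN 'HEAP (unordered)'
            ELSE 'CLUSTERED (' + i.name + ')'
       END AS storage
FROM sys.tables  AS t
JOIN sys.indexes AS i
    ON i.object_id = t.object_id
   AND i.index_id IN (0, 1)
ORDER BY t.name;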
Next week…..Types of Indexes
Over the last few weeks, I have had the opportunity to beta test Idera’s newest product, SQL Elements. My initial response? This is both a “fantastic” product and a “tease” of a product; I’ll explain later. Idera has definitely done their homework.
Initial install is easy; all you need is a web server, a SQL Server instance for the data repository, and a domain service account with SysAdmin privileges on the SQL Servers you would like to monitor (or a SQL account with SysAdmin privileges). My environment uses a Domain Security Group called SQL-DBA, which includes both my account and all the service accounts that need SysAdmin rights, so I just used one of those service accounts.
One of the better aspects of SQL Elements (SE) is the “auto discover” feature; it finds the servers for you. It basically scours your network for every SQL Server instance it can find, including Express Editions. Even if the service account does not have SysAdmin rights, SE will find the server and list it, which can help you realize which SQL Servers you, as a DBA, do not have access to. It sometimes has problems determining the instance name of named instances (for example, “Server\SQLExpress”), but who really uses SQL Express Edition for production anyway? You can always manually add the named instance to avoid that issue. This feature also periodically scans the network for any new instances and automatically adds them to the “Discovered Instances” list, so you know when someone else installs SQL Server. NICE FEATURE!
SQL Elements uses the concept of “health checks” to determine the status of your SQL Server. These health checks include: DBCC CHECKDB consistency, current backup checks, auto-shrink enabled, and “optimize for ad hoc workloads” just to name a few. Many of the more critical checks have email alerts associated with them to let you know when a database is filling up or when a drive is running out of room.
Once you log in to the website, the SE Dashboard provides a brilliant snapshot of your environment. First and foremost, at the top are the “Health Check Recommendations” that SE has found in your environment. Each health check is given a “Level” based on the severity of the problem. Idera provides a brief explanation of why each recommendation is made and a link to a more detailed explanation. Once you review the recommendations, you have the choice of dismissing the alert, or refreshing it after you have resolved the underlying issue.
Below the recommendations are two simple graphs listing “Top Databases by Size” and “Top Databases by Activity”. Personally, I wish I could hide this module of the dashboard and move the “Instances” grid up below the recommendations. I haven’t found much use for these two graphs, but maybe that is just me. The grid of instances is very user-friendly; it’s a simple list of which instances are being monitored, their monitoring status, response time, version, number of databases, and total size of databases. Each column is sortable, and the grid shows 20 listings per page, which is a very reasonable size to work with.
On the right-side menu is a simple “My Environment” section, which allows you to manage the SQL Servers in your environment. The ability to classify the servers by “tags” is nice, especially if you want to look at just your “Critical 24×7” servers or just your “Test” servers. I really like the concept of “labeling” SQL Servers with a category so I can prioritize the server health check recommendations. I only wish that when selecting a “tag”, the resulting page showed the “Health Checks” for those specific servers, not the “Explorer” tab.
Clicking on the instance name brings you to a very valuable “Instance Details” page. Again, at the top is the list of Health Check Recommendations for this particular instance. Below that is a grid listing all the databases found on the instance as well as their status, recommendations, type, data size, log size, and activity. I would love this grid to include the compatibility level of each database, because many times developers will restore, move, or copy a database from one server to a newer-version server and not change the compatibility level. On the right-side menu, you have simple information pertaining to this particular instance. Clicking on the server name, however, will bring you to “Hardware Details”. There is also a link to view the SQL configuration settings.
There are more sources of information in SQL Elements. I won’t go into all of them here, but they include an “Explorer” tab, which allows you to explore your environment by filters and tags and can be helpful when trying to locate a specific server in a larger environment. As my environment is not that large, I really haven’t used it much.
So, after this “novel” of a review, here is what I think of SQL Elements:
First and foremost, the application is well written, has smooth transitions between pages, and has yet to throw any type of exception error at me. The ability to classify instances is a wonderful concept, and I use it every day. One of the additional features that I truly enjoy is the ability to assign an owner and location to an instance. I assign the “end user” as the owner and either “Data Center” (for physical servers) or “VM Ware” (for, well, VM instances) as the location. That way, I quickly know whether I am dealing with a physical server or not. Monitored instances have to be SQL 2005 SP1 or above, which leaves me a little frustrated since, unfortunately, we are still running a dozen or so SQL 2000 instances. But it does let me know which SQL 2000 servers are out there, so I can start my migration plans now!
The only major “flaw” I found with SQL Elements is the lack of reporting based on the data collected. Many times, managers and directors require “physical proof” of why I am asking for another terabyte of drive space for a SQL Server. The “powers that be” like pretty graphs and trends. If a drive is running out of space, we need to be able to show them the trend of drive usage so we can justify that new 1.5 terabyte hard drive. Having participated in the beta forums for SQL Elements, I have faith that Idera will listen to the masses and provide some sort of reporting feature in the future.
I mentioned earlier that this was a “tease” of a product. The DBA who uses this product needs to remember that this is not a monitoring tool. If you are hoping it will provide full SQL monitoring, then you will be disappointed; for that, I would recommend Idera SQL Diagnostic Manager. However, if you want a way to know what SQL instances are in your environment and get a quick overview of each server, then SQL Elements is for you. It is an “inventory tool” with some basic monitoring of the most fundamental aspects of a SQL Server: drive size, data integrity, backups, etc. Things that could and will cause major problems if not checked regularly.
This is a very valuable tool for starting DBAs or IT groups that have no clue what they have on their network (which describes my group, because we never had a DBA for our 100-plus instances of SQL before me). I would definitely recommend this product! I only hope I can convince my “powers that be” to get it for me!
Well, between yesterday and today, I have learned a very valuable lesson when dealing with Vendor MS SQL installations. And that is, “Trust, but Verify”.
I won’t call out the vendor; I am not that mean.
They set up 2 SQL boxes with replication between them and a Push Snapshot subscription to 12 different desktops running SQL Express. At first I thought, nice system: all data changed in a central database and then pushed out daily to the clients.
However, the vendor neglected to set up ANY maintenance plans, and that caused problems. I found myself dealing with an out-of-control, replication-driven transaction log that completely filled up a 100 GB drive.
Here’s how I solved it.
- Because this was a VM server, I asked networking if I could get another drive to use for now.
- I added another log file to the database and located it on the new drive. Now that the log was able to expand, I could fix the problem.
- I set up the maintenance plans and executed a FULL backup immediately.
- I executed the log backup.
- Using SHRINKFILE, I chose “Release unused space” for the main log file:
- DBCC SHRINKFILE (N'DBname_log', 0, TRUNCATEONLY)
- Executed another FULL backup
- Executed another log backup. Now my log was mostly empty, and having returned almost 80 GB of drive space to the OS, I needed to undo the changes I had made.
- I repeatedly tried to drop the 2nd log file and repeatedly got a “file not empty” message. I then noticed a little setting in the SHRINKFILE screen and, scripting it out, discovered the statement below. Wow, you learn something new every day. Executing it moved what few transactions were left in the 2nd log file to the main log file, so I could remove the file without any problems.
DBCC SHRINKFILE (N'DBname_log', EMPTYFILE)
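Putting the steps above together, the whole fix sketches out something like this. The database, file, and path names are placeholders, not my real server’s, so treat it as an outline and test on a non-production copy first:

```sql
-- Consolidated sketch of the steps above (all names are placeholders).
USE master;

-- Add a second log file on the new drive so the log can grow again.
ALTER DATABASE DBname
    ADD LOG FILE (NAME = DBname_log2,
                  FILENAME = 'E:\SQLLogs\DBname_log2.ldf',
                  SIZE = 10GB);

-- Full backup, then log backup, to free space inside the log.
BACKUP DATABASE DBname TO DISK = 'E:\Backups\DBname.bak';
BACKUP LOG DBname TO DISK = 'E:\Backups\DBname_log.trn';

USE DBname;

-- Release the now-unused space in the main log file back to the OS.
DBCC SHRINKFILE (N'DBname_log', 0, TRUNCATEONLY);

-- (Repeat the full + log backups here, as in the steps above.)

-- Move any remaining log records out of the second file, then drop it.
DBCC SHRINKFILE (N'DBname_log2', EMPTYFILE);
ALTER DATABASE DBname REMOVE FILE DBname_log2;
```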
I am not 100% sure if this was the “best” method of doing this. But it worked. And it taught me a valuable lesson in working with Vendors, “Trust, but Verify”.
Time to go back and double-check all the other 50 or so SQL Servers that our wonderful vendors set up!
Last night, I got to “present” for the first time at the Baton Rouge SQL Server User Group (@BRSSUG). In a not-so-normal fashion, we played an electronic version of SQL Jeopardy that I created.
I’m not trying to be “vain”, but I think a lot of people had fun. A couple of people learned some things, and I, most of all, learned very valuable lessons.
- Even though this was not a “real” presentation, I still needed to spell check!
- The animations and audio built into the program were a hit! It seems the simplest things make people smile.
- Be sure to double check my facts before including them in the game.
Of course, I also learned a lot of SQL information putting the game together. I had fun, and hopefully it can become a recurring event at user group meetings.
I hope to create a new version with an Administration section so one can manage the categories, questions, and answers. I would also like to figure out a way to keep score.
Until Next time…
Tonight, I am doing my first presentation at a SQL user group, the Baton Rouge SQL Server User Group (@BRSSUG). But in my quirky, unusual fashion, I am not “presenting” a topic, and this is not your typical presentation.
Following the idea of Jeremy Kadlec, I am hosting SQL Jeopardy!
Step 1: Create a “game board”, so to speak. I had to present this in a fashion that would be memorable and easy to work with. Drawing on my experience as a .NET programmer, I developed an interactive Jeopardy game board, complete with sounds. Here’s a sneak peek:
Step 2: With the help of our local user group president, William Assef (@william_a_dba), we developed the 51 questions needed for the game. I chose to store them in a local SDF database for portability. I tried my best to fashion the answers/questions in the same manner as Jeopardy, i.e. providing the answer so the player is required to state their response in the form of a question. It was hard, and some of the questions I have can’t be done that way; version 2.0 will hopefully correct this.
Step 3: Test, test, test. I am not the most efficient typist, and if it wasn’t for spell check, I would probably come across as an idiot, so I had to review, review, and review the entries to be sure they were all correctly spelled and the answers/questions were all correct and factual.
Step 4: GAME ON!
Tomorrow I will let you all know how it went and how it was received. My future plans are to create the “admin” side of the game board so the questions and answers can be edited, changed, and tailored to an individual group’s needs, and to make the categories editable as well. Because of time constraints, all the categories and answers/questions are currently hard-coded in either the forms or the SDF database.
WISH ME LUCK!
Over the next 8 weeks, I hope to discuss the ever confusing world of indexes. I know this topic has been written about and covered in depth by smarter people than me, but this is my attempt at trying to discuss a topic that is as elusive as quantum mechanics. But more importantly, I am also using this exercise as a learning tool for me. As an accidental DBA turned “official” DBA, I want to learn as much as I can about a very powerful tool that helps SQL run efficiently.
Why 8 weeks? I’m being realistic. I am new to blogging, and especially new to technical blogging, so I am trying to pace myself so I actually present correct information. Here is my outline:
- Introduction (this post)
- What is an index?
- Types of indexes
- Structure of an index
- Determining what indexes your tables have now
- Are they effective indexes? What makes effective indexes?
- Management of indexes? They are not “set it and forget it”!
- What I’ve learned