This is post #5 of my 8 Weeks of Indexes. These are not posted weekly, but I am getting faster at posting them. (Side note: I am making it a personal effort to use time at work [hey, it's learning, right?] to write my blogs.)
When you are given access to a database and told to manage it, what is the best way to figure out what indexes exist, whether they are being used, and how big they are?
Let us figure this out step by step.
What Indexes do I have?
Any time I have to determine what indexes a database has, I always use the following script. It is quick and easy, and it returns most of the information you need to identify an index and its configuration.
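The script itself did not survive the move to this page, but a minimal sketch of the kind of query I mean, built on sys.indexes and sys.dm_db_partition_stats (the exact column list here is my own reconstruction, not the original), looks like this:

```sql
-- One row per index: table, name, type, uniqueness, and approximate size in MB
SELECT  OBJECT_NAME(i.object_id)  AS TableName,
        i.name                    AS IndexName,
        i.type_desc               AS IndexType,
        i.is_unique,
        i.is_primary_key,
        SUM(ps.used_page_count) * 8 / 1024.0 AS SizeMB   -- pages are 8 KB
FROM    sys.indexes AS i
JOIN    sys.dm_db_partition_stats AS ps
          ON ps.object_id = i.object_id
         AND ps.index_id  = i.index_id
WHERE   OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
GROUP BY i.object_id, i.name, i.type_desc, i.is_unique, i.is_primary_key
ORDER BY TableName, IndexName;
```

Run it in the context of the database you are investigating; it skips system objects and rolls partitions up so each index appears once.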
The below picture is a sample of the results for the above query.
Next, I usually want to know IF the indexes are actually being used. Believe it or not, indexes sometimes get placed on databases and never get used, and that is just a waste of I/O and disk space.
This query provides the information you need to determine how often the indexes are used and whether the SQL Engine is using SEEK, SCAN or LOOKUP to find data (this will be discussed in my next article). If an index is not listed in these results, it has not been used since the last index rebuild or SQL Engine restart.
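Again, the original script is not shown here; a sketch built on sys.dm_db_index_usage_stats, the DMV that tracks seeks, scans and lookups, would look something like:

```sql
-- Seeks, scans, and lookups per index since the stats were last reset;
-- an index missing from these results simply has not been touched
SELECT  OBJECT_NAME(us.object_id) AS TableName,
        i.name                    AS IndexName,
        us.user_seeks,
        us.user_scans,
        us.user_lookups,
        us.user_updates,          -- write cost of maintaining the index
        us.last_user_seek
FROM    sys.dm_db_index_usage_stats AS us
JOIN    sys.indexes AS i
          ON i.object_id = us.object_id
         AND i.index_id  = us.index_id
WHERE   us.database_id = DB_ID()
ORDER BY (us.user_seeks + us.user_scans + us.user_lookups) DESC;
```

An index with high user_updates but few or no seeks, scans, or lookups is a good candidate for that "waste of I/O and disk space" conversation.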
And that is how I determine what indexes a server has, how they are being used and how big they are.
Next week (hopefully), I will go into how to determine if the indexes are effective!
Late last night, I got the call that no DBA wants to get.
“Our Data Center has crashed”
It appears both controllers in the VMware SAN crashed at the same time, bringing every VM server down. Once the controllers were replaced, servers were brought back online. But that's not the real point of this article; it is just the background of what happened.
Typically when I install a SQL server, I use a domain Security Group as the sysadmin. Members of this group include myself, my monitoring service account and domain admins. Because I am a member of this group, I typically remove my individual login. This had not been an issue until last night; well, really, this morning.
SQL Agent Jobs
I had SQL Agent jobs failing all over the network once my servers were back up and running from the earlier crash, and I couldn't figure out why.
The job failed. The owner () of job SystemDB.Backup-Full does not have server access.
The owner()…does not have server access. Why did it not list the owner? Is there an owner?
I understand the importance of having non-domain accounts "own" SQL Agent jobs, now more than ever. As much as I try to replace myself as job owner when I edit a job, it sometimes slips my mind.
Every job owned by my network account was failing across the company. While researching, I tried to get a listing of all jobs and their owners using the script below, but not all the jobs were listed. Why weren't all jobs being listed?
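My script joined msdb.dbo.sysjobs to syslogins on the owner SID; a sketch of that idea is below. Note the join type: a plain INNER JOIN is exactly what hides jobs whose owner no longer exists, while a LEFT JOIN shows them with a NULL owner.

```sql
-- Jobs and their owners; an INNER JOIN to syslogins silently drops any job
-- whose owner SID no longer exists, so use a LEFT JOIN instead
SELECT  j.name AS JobName,
        l.name AS OwnerLogin      -- NULL here means the owner is orphaned
FROM    msdb.dbo.sysjobs AS j
LEFT JOIN master.dbo.syslogins AS l
          ON l.sid = j.owner_sid
ORDER BY j.name;
```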
Digging deeper, I remembered that syslogins is actually a view over sys.server_principals. Reading the entries there, my network login was not a principal, which made sense because I had removed my network login principal. But my login was still a job owner; apparently SQL will use the currently logged-in user as the new owner even if it is not a principal. I tested this by removing my login from a test server and creating a new job, and there she blows!
I needed a quick way to change ownership of all SQL Agent jobs, so I developed the following script.
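My actual script is not reproduced here, but a sketch that does the same job, walking the jobs with a cursor and calling msdb's sp_update_job for each one (the old owner login name is a hypothetical placeholder), looks like this:

```sql
-- Reassign every job owned by a given login (or by an orphaned SID) to sa
DECLARE @OldOwner sysname = N'DOMAIN\MyLogin';   -- hypothetical login name
DECLARE @JobName  sysname;

DECLARE JobCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM   msdb.dbo.sysjobs
    WHERE  owner_sid = SUSER_SID(@OldOwner)
       OR  owner_sid NOT IN (SELECT sid FROM sys.server_principals);

OPEN JobCursor;
FETCH NEXT FROM JobCursor INTO @JobName;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_update_job
         @job_name         = @JobName,
         @owner_login_name = N'sa';
    FETCH NEXT FROM JobCursor INTO @JobName;
END;
CLOSE JobCursor;
DEALLOCATE JobCursor;
```

The OR clause catches jobs whose owner SID no longer matches any server principal at all, which was exactly the situation after I removed my login.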
This script will be added to my "toolbox" for sure. I may set this up as a SQL Agent job itself, randomly checking, or maybe an SSIS package using @Variables for server names, but for now I will manually run it on servers that need it.
This is the 4th post in my series of 8 Weeks of Indexes
In this installment, I hope to describe what makes up an index, how it is created and some of the more common options.
The following table will be used to demonstrate my examples.
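The original table definition was an image that did not survive; this is a hypothetical reconstruction whose column names are my guesses from the index examples discussed later in the post:

```sql
-- Hypothetical Patients table; columns reconstructed from the examples below
CREATE TABLE dbo.Patients
(
    ID        INT IDENTITY(1,1) NOT NULL,
    FirstName VARCHAR(50)       NOT NULL,
    LastName  VARCHAR(50)       NOT NULL,
    City      VARCHAR(50)       NULL,
    CONSTRAINT PK_Patients PRIMARY KEY CLUSTERED (ID ASC)
) ON [PRIMARY];
```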
If you look at the structure of the table above, I already have a unique clustered index built in because of the CONSTRAINT/PRIMARY KEY clause. So my table will physically be stored in numerical order on the ID column. This type of structure and index is useful for pulling individual records, if you know the ID of the row.
My first example will be of a non-clustered index. You can have up to 249 non-clustered indexes per table in SQL 2005 and SQL 2008 increased that limit to 999.
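The example statement itself is missing from this copy of the post; based on the description that follows, it would be something like:

```sql
-- Simple nonclustered index on LastName, stored on the PRIMARY filegroup
CREATE NONCLUSTERED INDEX IX_LastName
    ON dbo.Patients (LastName ASC)
    ON [PRIMARY];
```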
This is a simple index on the Patients table so you can query by last name. It tells me very plainly that I want to create an index called "IX_LastName" on the Patients table, using the column "LastName" in ascending alphabetical order, and store that index on the [PRIMARY] filegroup. There are different options (MSDN) when creating a nonclustered index. The more common options are (all quotes are from MSDN):
- INCLUDE – “specifies the non-key columns to be added to the leaf level of the nonclustered indexes”. Usually these are other columns in your SELECT statements
- WHERE – the ability to filter the index by certain criteria, limiting the number of rows stored in the index
- WITH – this option has many different settings that can be applied to the index: these include the more common options PAD_INDEX, FILLFACTOR, SORT_IN_TEMPDB, DROP_EXISTING and ONLINE (see MSDN article for complete list)
The last clause, ON, is a very cool feature. It tells SQL where you want to physically store the index, based on filegroups. On very busy systems, it may be advantageous to store indexes in a separate filegroup on a separate drive.
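The code block being discussed in the next paragraph did not survive the migration; based on the options described, it would look something like this (the index name and filter value are reconstructions):

```sql
CREATE NONCLUSTERED INDEX IX_LastName_Filtered
    ON dbo.Patients (LastName ASC)
    INCLUDE (FirstName, City)                  -- the two non-key columns
    WHERE City = 'Baton Rouge'                 -- filtered index
    WITH (FILLFACTOR     = 80,
          PAD_INDEX      = ON,
          ONLINE         = ON,                 -- Enterprise/Developer only
          SORT_IN_TEMPDB = ON)
    ON [PRIMARY];
```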
The above code uses 3 common options and 4 common parameters for the WITH options.
INCLUDE – adds the two columns to the leaf level so the query can return those results from the index alone
WHERE – this will effectively return only the rows that are included in this filter. Only patients in Baton Rouge will be included in the index.
WITH – the index options applied are as follows. FILLFACTOR = 80 tells SQL to fill each leaf-level index page only 80% full, leaving room for new rows. It is recommended to use PAD_INDEX = ON with FILLFACTOR because it applies that same free-space percentage to the intermediate-level pages of the index as well. ONLINE = ON is only available in Enterprise and Developer editions, but it reduces the locks on the table so the data stays available while the index is built. And the last option is SORT_IN_TEMPDB = ON. This does exactly what it says: it performs the intermediate sort in tempdb, then writes the index back to its original location. This feature is VERY helpful if your tempdb is on a separate drive, because SQL can READ/WRITE to two separate drives at the same time, but not to the same drive.
Well, that is enough for today. My mind can only take so much. Next week I will be out of the office and away from computers, so there is no chance to get a post out (not that I have been doing them every week anyway).
But I will endeavor to have Post #5 of my 8 Weeks of Indexes out the week of Aug 10th.
Being a SQL DBA and part-time SSRS developer, I sometimes get very confused and lost about which version of BIDS or SSDT (SQL Server Data Tools) I need for which version of SQL. And now it seems SSDT is different from SSDT-BI (SQL Server Data Tools for Business Intelligence). So, I am writing this blog as a reminder and starting point, so I can tell colleagues what to download when setting up a new developer's desktop with the right tools for the right job. I also want to know exactly what will be installed on the machine for the developers.
My inventory of SQL Servers ranges from 2000 (yes, I still have 2 instances of SQL 2000 in production, but that's for another blog post) through 2014! My "shop" primarily uses SSIS and SSRS with our SQL Servers, so that is all I am confident with.
The below information assumes you do not have any retail version of Visual Studio installed on your computer. With that assumption, all of the downloads will install a Visual Studio Shell IDE for use in your development.
Database Development (with scripting, deployment and Source Control)
SQL 2005 & 2008 — SQL Server Data Tools (SSDT) for Visual Studio 2010
SQL 2012 & 2014 — Visual Studio 2012 & 2013 (Built in support)
SSRS 2005 — BIDS 2005 (Visual Studio 8)
SSRS 2008 — BIDS 2008 (Visual Studio 9)
SSRS 2008 R2 — SQL Server Data Tools for Visual Studio 2010
SSIS 2005 — BIDS 2005 (Visual Studio 8)
SSIS 2008 — BIDS 2008 (Visual Studio 9)
SSIS 2008 R2 — SQL Server Data Tools for Visual Studio 2010
Well, I hope I didn’t mess those up. I sure wish Microsoft would get SQL Server, SSDT and SSDT-BI all on the same numbering system! I can never remember which version of Visual Studio to launch for which version of SQL; maybe this reference will help!
By definition, a "deprecated" feature is a feature or specific function in a piece of software that will be replaced by a newer function/feature, or will simply no longer be available.
Microsoft is notorious for changing and/or removing features in every version of SQL! SQL 2014 has an extensive list of deprecated features: https://msdn.microsoft.com/en-us/ms143729.aspx.
The one I found out about today is SET FMTONLY (format only). This particular setting allows a stored procedure to return the format of its result set without actually returning any rows. The feature has been available since SQL 2005 and apparently will be removed in a future version, to be determined later.
In my line of work, I do a lot of SSIS packages that export information to flat files, and my developers use temp tables religiously. The problem with this is that SSIS packages typically have trouble resolving the columns of the final result set from a stored procedure when temp tables are used. Using SET FMTONLY OFF lets the SSIS package resolve the stored procedure without attempting to get the column definitions first, so the statements run to completion, which in turn presents the SSIS data source with the final column definition.
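As a sketch of the pattern (the procedure and table names here are hypothetical), it looks like this:

```sql
-- Hypothetical export procedure: SET FMTONLY OFF up front forces the
-- batch to actually run during SSIS metadata discovery, so the columns
-- of the temp-table result set can be resolved
CREATE PROCEDURE dbo.ExportPatients
AS
BEGIN
    SET FMTONLY OFF;
    SET NOCOUNT ON;

    SELECT LastName, FirstName
    INTO   #Results
    FROM   dbo.Patients;

    SELECT LastName, FirstName
    FROM   #Results;
END
```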
Beginning in SQL 2012, two new features debuted to replace the FMTONLY setting:
- sp_describe_first_result_set – "Returns the metadata for the first possible result set of the Transact-SQL batch."
- sys.dm_exec_describe_first_result_set – this dynamic management function takes a Transact-SQL statement as a parameter and describes the metadata of the first result set for the statement
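A quick sketch of both forms, using a hypothetical Patients query as the batch:

```sql
-- The procedure version: hand it a batch, get the column metadata back
EXEC sp_describe_first_result_set
     @tsql = N'SELECT LastName, FirstName FROM dbo.Patients;';

-- The DMF version does the same thing but can be queried and filtered
SELECT name, system_type_name, is_nullable
FROM   sys.dm_exec_describe_first_result_set(
           N'SELECT LastName, FirstName FROM dbo.Patients;', NULL, 0);
```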
Why two, you ask? I have no clue. They seem to do the same thing; I am not sure why there are two different ways to do it, but I digress…
The point of this article is that now, when I create a stored procedure with temp tables for an SSIS package, I have to pass the T-SQL statements as a parameter into one of these functions first, then execute the T-SQL statements.
Maybe I am too much of a novice, but taking a stored procedure with several hundred lines of code, turning it into a variable @tsql, passing that variable into a function just to get the column definitions, and then running EXEC on @tsql sounds more difficult than simply typing SET FMTONLY OFF. Apparently smarter people than me have decided this is the best way to do it.
Of course, by accident, I discovered another, almost as simple, solution for SSIS packages and temp tables. Instead of returning results from a temp table, define a table variable at the beginning of the stored procedure. This @TableVariable will be your final result set; insert records into it. Using this method, your stored procedure can still use temp tables internally and have a defined result set for the SSIS data source.
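A sketch of that rewrite (again with hypothetical names), keeping the temp table for the heavy lifting:

```sql
-- Hypothetical rewrite of the export procedure: temp tables still do the
-- work, but the final SELECT comes from a declared table variable, so the
-- SSIS data source always sees a fixed column definition
CREATE PROCEDURE dbo.ExportPatients_TableVar
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Results TABLE
    (
        LastName  VARCHAR(50),
        FirstName VARCHAR(50)
    );

    SELECT LastName, FirstName
    INTO   #Work
    FROM   dbo.Patients;              -- temp tables are fine internally

    INSERT INTO @Results (LastName, FirstName)
    SELECT LastName, FirstName
    FROM   #Work;

    SELECT LastName, FirstName
    FROM   @Results;                  -- the defined result set
END
```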
So, to answer the original question: Is a deprecated feature making my job more difficult?
Well, yes and no. Yes, because I believe the replacement feature is more difficult to execute and write code for, especially when your stored procedure has multiple statements, and adding more steps to an already lengthy process is never a good thing.
And no, because I found an equally easy, usable solution with table variables. I just switch # for @ and define my columns. Easy as 1…2…3!
After a couple of tries (finding time to study is harder than I thought), I have finally passed my 70-432 to earn my Microsoft Certified Solutions Associate SQL 2008! Having previously earned my MCITP: SQL Server 2008 Business Intelligence, I only needed this one test.
Now, I must crack down! I will study, break, fix and torture a SQL Development server for my 70-457 and 70-458 to transition to SQL 2012!
I know some people in the SQL world do not see certifications as all that necessary and definitely not completely indicative of real world experiences; but this was a personal/professional goal of mine to earn my MCSA SQL 2012. I am now 2 tests away!
Who knows, I may go for my MCSE!
So, I was working on a project for my employer trying to determine the best course of action for licensing a new instance of SQL Server (i.e. what is the cheapest way to get it legally). And I came across something I did not know.
Found deep inside the bowels of a 40 something page document, I learned something new from the Microsoft SQL Server 2014 Licensing Guide.
SQL Server Developer Edition
SQL Server 2014 Developer Edition is a full-function version of SQL Server software—including all of the features and capabilities of Enterprise Edition—licensed under the Developer Tools model, which is a “per user” model. One license is required for each person that accesses or uses the software.
"Per user" is what I didn't realize. If I read this correctly, I purchase one Developer license and install it on a machine, and only I can use it as a development box. To open it up to a team of developers, each member of the team must have a Developer license. If I have multiple development servers for a multi-person team, this could get expensive. But hey, it is still cheaper than licensing a server by cores and most likely cheaper than the Server+CAL method.
Chalk this one up as my "learning for the day"!
Earlier last month, a contest was brewing to be mentored by Paul Randal (b|t). Yes, THAT Paul Randal of SQLSkills. Despite knowing my SQL Server level of proficiency, I figured what the heck, it can’t hurt to try and the worst that could happen is I wouldn’t win. But in Paul’s quirky fashion, everyone won! That’s right, I am extremely fortunate to be included in the Mentoring Class of 2015 by Paul Randal. I hope to document this experience here.
Now, that all being said, one of the first requirements beyond the obligatory "hellos" was to complete my homework Work Items. (Egads, homework? I haven't heard that word in a long time, not counting my kiddos of course.)
They were simple: #1) Who are you? and #2) What do you want help with? Are they really that simple, though? They both seem simple enough. All I have to do is provide basic info about me, what I like to do, and simple stuff.
It was the second "Work Item" that really got me thinking. What do I want help with? Really? I could write for days about where I need help: professionally, physically, mentally, personally. But I didn't want to scare Paul away, so I kept my answer simple.
I want to learn! Knowledge is Power!
Reflecting back on my experiences and interacting with others in the SQL community has taught me a lesson: I am but a "notch up" from a novice. There is so much I should learn about SQL. I always feel like I am a step behind on everything. I want to catch up. I want to earn my MCSE SQL 2012. I want to be able to defend my recommendations with facts and documentation.
So Paul, if you are reading this, strap your boots on, because this is going to be one hell of a ride. I am dedicating the next year to learning as much as I can about SQL Server administration. And hopefully with your guidance, after this year I can say I am a "real" Database Administrator!
I have been working with SQL long enough that I have built my personal “toolbox” for SQL related tools that I use on a regular basis. They range from FREE to minimal cost (by that I mean mostly under $100; yeah, I know I am cheap).
You can find my tools here, Padre’s Toolbox and I hope to keep it updated as I test drive and use more and more different SQL programs. For now, I will briefly introduce my favorite tools (in no particular order). In the future, I hope to write a post on how and why I am using these tools.
Since my initial review of this software, Richard Fryar has even further enhanced his product to include Query Plan analysis.
This is from Brent Ozar Unlimited. Enough said! You not only get their excellent diagnostic scripts, but e-books, whitepapers, and posters, oh my!
DMV Diagnostic Queries
A collection of free SQL tools; I find myself using these more and more for "spot checking" issues or problems.
Not technically a SQL tool, but as DBAs we sometimes have to manage the OS itself. This tool is crucial for me to be able to remote into multiple servers without having to minimize an RDP session to switch between servers.
The newest tool in my arsenal is one that quite frankly scared the hell out of me at first, just like cached plans in general. But I am muddling through the learning process and coming to rely more and more on this program. I haven't purchased the "Pro" version yet, so I am still evaluating its usefulness to see if I can justify the cost to my "powers that be".
We are all aware of the "standard practice" of enabling the security policy "Lock Pages in Memory" for your SQL Server service account. Just about everywhere you read, it is pretty much standard practice, and so like any red-blooded "Accidental DBA", I have always used it.
Until today…, well at least for one particular machine!
Let me preface this blog post by saying: I can't prove, nor do I know how to prove, that my solution was the answer to the problem; but it hasn't been a problem since the change.
I have a physical 32-core, 448 GB RAM, SQL Server 2012 Enterprise Edition server, hereafter referred to as "The Beast"! Unfortunately, the Beast got its legs chopped off at the knees. Some time ago, my company bought a new SAN for this server and other shared servers as well. This new SAN would be 10K drives with loads of storage; the Beast was used to 15K drives, which would hide a myriad of query problems. The new SAN also had what the vendor called "tiered storage": an SSD refreshed every night with ~100 GB of the most-read files from the SAN over the previous 24 hours, to allow quick access. I personally don't believe this is a wise configuration for a SQL server with multiple MDF files greater than 400 GB, just sayin'! We have disabled this feature for now.
Anyway, since the new SAN went online, every so often, especially on larger READ queries, the SQL Server would crash and burn. OS RAM was maxed out to the point that RDP was not accessible, SQL Server would create a mini dump referencing page latch issues, and the only way to revive it was to execute a remote SHUTDOWN /r command. This went on for a couple of months.
Then I read Jonathan Kehayias' article Great SQL Server Debates: Lock Pages In Memory. It is a little old, but it is still a good read. This got me thinking that quite possibly this policy may not be right for every server. So I researched and researched the whole concept of storing data in memory, dirty pages, and everything I could about how the SQL engine manages memory, drives and data.
So, working from the theory that my SQL engine was strangling the Beast out of any life, taking ever more memory from the OS and adding data pages to the buffer without releasing older ones, I disabled the Lock Pages in Memory policy.
Since then, my Beast has been happy. I have noticed more writes to the MDF files in the last three weeks; not a steady increase, just bursts here and there. I am thinking that is SQL writing dirty pages back to disk, which wasn't happening before. The buffer cache is now swapping out old data for data being used at a more consistent rate.
At least that’s my story and I’m sticking to it!