Over the last few weeks, I have had the opportunity to beta test Idera’s newest product, SQL Elements. My initial response? This is both a “fantastic” product and a “tease” of a product; I’ll explain why later. Idera has definitely done their homework.
Initial install is easy; all you need is a web server, a SQL Server instance for the data repository, and either a Domain Service Account with SysAdmin privileges on the SQL Servers you would like to monitor or a SQL account with SA privileges. My environment uses a Domain Security Group called SQL-DBA, which includes both my account and all the service accounts that need SysAdmin rights, so I just used one of those service accounts.
One of the better aspects of SQL Elements (SE) is the “auto discover” feature; it finds the servers for you. It basically scours your network for every SQL Server instance it can find, including Express Editions. Even if the service account does not have SysAdmin rights, SE will still find the server and list it, which can help you realize which SQL Servers you, as a DBA, do not have access to. It sometimes has problems determining the instance name of named instances (for example, “Server\SQLExpress”), but who really uses SQL Express Edition for production anyway? You can always manually add the named instance to avoid that issue. This feature also periodically scans the network for any new instances and automatically adds them to the “Discovered Instances” list, so you know when someone else installs SQL Server. NICE FEATURE!
SQL Elements uses the concept of “health checks” to determine the status of your SQL Server. These health checks include: DBCC CHECKDB consistency, current backup checks, auto-shrink enabled, and “optimize for ad hoc workloads” just to name a few. Many of the more critical checks have email alerts associated with them to let you know when a database is filling up or when a drive is running out of room.
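To give a feel for what these health checks cover, the “current backup” check can be approximated by hand with a query against msdb. This is only a rough sketch of the idea, not Idera’s actual implementation, and the 24-hour freshness threshold is my own assumption:

```sql
-- Sketch: flag databases with no full backup in the last 24 hours
-- (the threshold is my assumption, not necessarily the one SE uses).
SELECT d.name AS [Database],
       MAX(b.backup_finish_date) AS [LastFullBackup]
FROM master.sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'          -- 'D' = full database backups only
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());
```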
Once you log in to the website, the SE Dashboard provides a brilliant snapshot of your environment. First and foremost, at the top are the “Health Check Recommendations” that SE has found in your environment. Each Health Check is given a “Level” based on the severity of the problem. Idera provides a brief explanation of why each recommendation is made and a link to a more detailed explanation. Once you review the recommendations, you can either dismiss the alert or refresh it, presumably after you have resolved the issue behind the recommendation.
Below the recommendations are two simple graphs listing “Top Databases by Size” and “Top Databases by Activity”. Personally, I wish I could hide this module of the dashboard and move the “Instances” grid up in its place. I haven’t found much use for these two graphs, but maybe that is just me. The Instances grid is very user-friendly; it’s a simple list of which instances are being monitored, their monitoring status, response time, version, number of databases, and total database size. Each column is sortable, and the grid shows 20 listings per page, which is a very reasonable size to work with.
On the right-side menu is a simple “My Environment” section, which allows you to manage the SQL Servers in your environment. The ability to classify servers by “tags” is nice, especially if you want to look at just your “Critical 24×7” servers or just your “Test” servers. I really like the concept of “labeling” SQL Servers with a category so I can prioritize the server health check recommendations. I only wish that when selecting a “tag”, the resulting page showed the “Health Checks” for those specific servers rather than the “Explorer” tab.
Clicking on the instance name brings you to a very valuable “Instance Details” page. Again, at the top is the list of Health Check Recommendations for this particular instance. Below that is a grid listing all the databases found on the instance, as well as their status, recommendations, type, data size, log size, and activity. I would love this grid to include the compatibility level of each database, because many times developers will restore, move, or copy a database from one server to a newer-version server and not change the compatibility level. On the right-side menu, you have simple information pertaining to this particular instance. Clicking on the server name, however, will bring you to “Hardware Details”. There is also a link to view the SQL configuration settings.
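In the meantime, the compatibility-level check I wish that grid included is a quick query against sys.databases. This sketch flags databases whose compatibility level lags behind the server’s (using master as the baseline):

```sql
-- Sketch: find databases restored/copied from an older server
-- whose compatibility level was never raised.
SELECT name, compatibility_level
FROM sys.databases
WHERE compatibility_level < (SELECT compatibility_level
                             FROM sys.databases
                             WHERE name = 'master');
```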
There are more sources of information found in SQL Elements. I won’t go into those here, but they include an “Explorer” tab, which allows you to explore your environment by filters and tags and can be helpful when trying to locate a specific server in a larger environment. As my environment is not that large, I really haven’t used it much.
So, after this “novel” of a review, here is what I think of SQL Elements:
First and foremost, the application is well written, has smooth transitions between pages, and has yet to throw any type of exception with me. The ability to classify instances is a wonderful concept, and I use it every day. One of the additional features I truly enjoy is the ability to assign an owner and location to an instance. I assign the “end user” as the owner and either “Data Center” (for physical servers) or “VM Ware” (for, well, VM instances) as the location. That way, I quickly know whether I am dealing with a physical server or not. Monitored instances have to be SQL 2005 SP1 or above, which in my environment leaves me a little frustrated since, unfortunately, we are still running a dozen or so SQL 2000 instances. But it does let me know what SQL 2000 servers I have out there, so I’m able to start my migration plans now!
The only major “flaw” I found with SQL Elements is the inability to produce reports based on the data collected. Many times, managers and directors require “physical proof” of why I am asking for another terabyte of drive space for a SQL Server. The “powers that be” like pretty graphs and trends. If a drive is running out of space, we need to be able to show them the trend of drive usage so we can justify that new 1.5 terabyte hard drive. Having participated in the beta forums for SQL Elements, I have faith that Idera will listen to the masses and in the future provide some sort of reporting feature.
I mentioned earlier that this was a “tease” of a product. The DBA who uses this product needs to remember that it is not a monitoring tool. If you are hoping it will provide full SQL monitoring, then you will be disappointed. For that, I would recommend Idera SQL Diagnostic Manager. However, if you want a way to know what SQL instances are in your environment and get a quick overview of each server, then SQL Elements is for you. This is an “Inventory Tool” with some basic monitoring of the most fundamental aspects of a SQL Server: drive size, data integrity, backups, etc. These are things that could, and will, cause major problems if not checked regularly.
This is a very valuable tool for starting DBAs or for IT groups that have no clue what is on their network (which describes my group: we never had a DBA for our 100-plus SQL instances before me). I would definitely recommend this product! I only hope I can convince my “powers that be” to get it for me!
Well, between yesterday and today, I have learned a very valuable lesson when dealing with Vendor MS SQL installations. And that is, “Trust, but Verify”.
I won’t call out the Vendor, I am not that mean.
They set up two SQL boxes with replication between them and a Push Snapshot subscription to 12 different desktops running SQL Express. At first I thought: nice system. All data is changed in a central database and then pushed out daily to the clients.
However, the vendor neglected to set up ANY maintenance plans, and that causes problems. I found myself dealing with an out-of-control replication-driven transaction log that had completely filled a 100 GB drive.
Here’s how I solved it.
- Because this was a VM server, I asked networking if I could get another drive to use for now.
- I added another log file to the database and located it on the new drive. Now that the log was able to expand, I could fix the problem.
- I set up the maintenance plans and executed a FULL backup immediately.
- I executed a log backup.
- Using SHRINKFILE, I “Released unused space” from the main log file:
- DBCC SHRINKFILE (N'DBname_log', 0, TRUNCATEONLY)
- I executed another FULL backup.
- I executed another log backup. Now my logs were mostly empty, and having returned almost 80 GB of drive space to the OS, I needed to undo the changes I had made.
- I repeatedly tried to drop the second log file and repeatedly got a “file not empty” message. I then noticed a little setting in the Shrink File screen and, scripting it out, discovered the EMPTYFILE option. Wow, you learn something new every day. Executing this statement moved what few transactions were left in the second log file to the main log file so I could remove the file without any problems:
DBCC SHRINKFILE (N'DBname_log2', EMPTYFILE)
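The steps above can be sketched end-to-end roughly as follows. This is only an outline of what I did; DBname, the log file names, and all drive paths are placeholders, not the real ones from my environment:

```sql
-- Rough sketch of the recovery steps; all names and paths are placeholders.
USE [master];

-- 1. Add a temporary second log file on the newly provisioned drive
ALTER DATABASE [DBname]
ADD LOG FILE (NAME = N'DBname_log2',
              FILENAME = N'X:\SQLLogs\DBname_log2.ldf',
              SIZE = 10GB);

-- 2. Full backup, then log backup, so the log records can be truncated
BACKUP DATABASE [DBname] TO DISK = N'X:\Backups\DBname_full.bak';
BACKUP LOG [DBname] TO DISK = N'X:\Backups\DBname_log.trn';

-- 3. Shrink the main log file, releasing unused space back to the OS
USE [DBname];
DBCC SHRINKFILE (N'DBname_log', 0, TRUNCATEONLY);

-- 4. Empty the temporary log file (moves remaining records to the main
--    log) so it can then be dropped
DBCC SHRINKFILE (N'DBname_log2', EMPTYFILE);
ALTER DATABASE [DBname] REMOVE FILE [DBname_log2];
```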
I am not 100% sure if this was the “best” method of doing this. But it worked. And it taught me a valuable lesson in working with Vendors, “Trust, but Verify”.
Time to go back and double check all the other 50 or so SQL Servers that our wonderful vendors setup!
Last night, I got to “present” for the first time at the Baton Rouge SQL Server User Group (@BRSSUG). In a not-so-normal fashion, we played an electronic version of SQL Jeopardy that I created.
Not trying to be “vain,” but I think a lot of people had fun. A couple of people learned some things, and I, most of all, learned some very valuable lessons.
- Even though this was not a “real” presentation, I still needed to spell check!
- The animations and audio built into the program were a hit! It seems the simplest things make people smile.
- I need to double check my facts before including them in the game.
Of course, I also learned a lot of SQL information putting the game together. I had fun, and hopefully it can become a recurring event at User Group meetings.
I hope to create a next version with an Administration section so one can manage the categories, questions, and answers. I would also like to figure out a way to keep score.
Until Next time…
Tonight, I am doing my first presentation at a SQL User Group, the Baton Rouge SQL Server User Group (@BRSSUG). But in my quirky, unusual fashion, I am not “presenting” a topic, and this is not your typical presentation.
Following the idea of Jeremy Kadlec, I am hosting SQL Jeopardy!
Step 1: Create a “game board” of sorts. I had to present this in a fashion that would be memorable and easy to work with. Drawing on my experience as a .NET programmer, I developed an interactive Jeopardy game board, complete with sounds. Here’s a sneak peek:
Step 2: With the help of our local User Group President, William Assef (@william_a_dba), we developed the 51 questions needed for the game. I chose to store them in a local SDF database for portability. I tried my best to fashion the Answers/Questions in the same manner as Jeopardy, i.e., providing the answer so the player was required to state their response in the form of a question. It was hard, and some of the questions I have can’t be done that way; version 2.0 will hopefully correct this.
Step 3: Test, Test, Test. I am not the most efficient typist, and if it weren’t for spell check, I would probably come across as an idiot, so I had to review, review, and review the entries to be sure they were all spelled correctly and the Answers/Questions were all correct and factual.
Step 4: GAME ON!
Tomorrow I will let you all know how it went and how it was received. My future plan is to create the “admin” side of the game board so the questions and answers can be edited, changed, and tailored to an individual group’s needs, as well as making the categories editable. Because of time constraints, all the categories and Answers/Questions are currently hard-coded in either the forms or the SDF database.
WISH ME LUCK!
Over the next 8 weeks, I hope to discuss the ever confusing world of indexes. I know this topic has been written about and covered in depth by smarter people than me, but this is my attempt at trying to discuss a topic that is as elusive as quantum mechanics. But more importantly, I am also using this exercise as a learning tool for me. As an accidental DBA turned “official” DBA, I want to learn as much as I can about a very powerful tool that helps SQL run efficiently.
Why 8 weeks? I’m being realistic: I am new to blogging, and especially new to technical blogging, so I am trying to pace myself so I actually present correct information. Here is my outline:
- Introduction (this post)
- What is an index?
- Types of indexes
- Structure of an index
- Determining what indexes your tables have now
- Are they effective indexes? What makes effective indexes?
- Management of indexes? They are not “set it and forget it”!
- What I’ve learned
In the last few months, I have been panicking because the MS SQL Server 2008 certifications were set to retire on July 31, 2013. At least, that is what I kept reading. Yes, I know SQL 2012 is “standard issue” now, but most of us are still using SQL 2008/2008 R2 and possibly even 2005! It would be nice to get our certifications in the systems that we primarily use.
For those of us that are “true” procrastinators, Microsoft apparently has given us a reprieve on our MS SQL Server Certifications, at least for DBAs.
If you scroll down and expand the “Microsoft Certified IT Professional (MCITP)” accordion, you will notice that the MCITP for Administration retirement date has been pushed back to January 1, 2014. That’s 6 more months you (and I) have to complete it (if you want).
The bad news: the BI and Developer MCITP certifications will still expire on July 31, 2013. Luckily, I already have my MCITP in BI.
Now the real question, can I squeeze 70-448 and 70-452 (Dev tests) into 29 days? Only time will tell!
I have been a fan of Telerik WinForms and ASP.NET controls since 2007, so when Telerik decided to enter Business Intelligence (BI) development, I was ecstatic. I immediately downloaded and began using Telerik Reporting v1. I must confess, I was a little lost for words. The great Telerik had finally let me down. The first edition of Telerik Reporting seemed to be nothing more than a “revamp” of Microsoft Visual Studio’s embedded reports. So, unfortunately, I sort of pushed it aside and continued using Visual Studio for my reporting needs.
Fast forward to 2012…
Today I downloaded the new Telerik Reporting and was VERY impressed and pleased with the newest rendition of the product. The integrated client install is an easy way to “pick and choose” what you would like to “try”, and it actually gives you access to licenses as well.
I am very familiar with the Visual Studio integration for WinForms or Web applications, but because I currently work with an existing Microsoft SSRS 2008 server and over 100 reports across several report projects, I wanted to know explicitly how it could help me manage existing SSRS-hosted reports. So I installed it with Visual Studio 2008 support and immediately opened a new SSRS report project. Bummer; it seems Telerik and SSRS projects are still very separate entities, which does not surprise me. I was hoping for the ability to edit deployed SSRS report projects and for some more “integration” between the two, especially since I am now primarily a BI developer.
Now, on the flip side, the ability to integrate reports into your application, either Windows or Web, is made easier with Telerik Reporting. The controls are very straightforward and easy to understand. The availability of Silverlight controls is also an added plus for those application developers.
I also do a few “side” development jobs, one of which required an embedded SSRS report. Having Telerik could have made it a more enjoyable experience, quite possibly with better rendering.
As a BI developer using SSRS, my “customers” are always wanting unusual, challenging, and even downright (what I thought were) impossible reports. Because of these challenges, and the somewhat short-falls of SSRS 2008, I may actually consider proposing a “reporting” ASP.NET website with Telerik Reports and show my bosses the POWER of TELERIK!
I have been tasked with “spec’ing” out the requirements for a new OLTP SQL Server, and I would like some feedback on this as well.
Here are my initial requirements. We have to use SQL 2008 R2 because the application that will be using these databases has not been certified for SQL 2012 yet (boo).
Some background: the current server hosts over 1 TB of databases and is highly active, with over 100 concurrent connections. These connections process UPDATE, INSERT, and DELETE statements like they were going out of style. And one of these databases is a repository for data used by SSRS reports. Not an ideal situation, but I have to follow the application manufacturer’s guidelines.
Here’s what I have:
- OS: Windows Server 2008 R2 x64
- SQL: SQL Server 2008 R2 Enterprise Edition x64
- Processors: 2 Quad core processors
- Memory: as much as can be budgeted, but minimum of 64 GB
- Local Drives:
- 1 - System drive for OS & applications; the minimum free space needs to equal the amount of RAM installed
- 1 - for TempDB, Full-Text indexes, other resources, and system databases
- Database drives: SAN connections, 2 drives: one drive for MDF files and one drive for LDF files
Please let me know if I missed something or if I am TOTALLY off my rocker with this setup.
After reading Pinal Dave’s blog post on Best Practices to Store the SQL Server Backups, I noticed he mentioned an “easy to use” third-party SQL backup software that allows network storage and even FTP transport.
So, I thought I would give SQLBackupAndFTP a try. The download has a relatively small footprint, and installation was a breeze; however, I wish there were a 64-bit version. I know 64-bit machines will run the software, but it is my personal “best practice” not to use 32-bit software on 64-bit machines.
Dave was right; this is most definitely “easy to use” software. I can see it coming in real handy with applications that require SQL Express. With the database sizes I work with (almost 1 TB of data on one server), remote storage and FTP are not a viable option. Setting up the network storage was a breeze.
I was a little worried when the error message popped up concerning Remote SQL Server backups, especially since it mentioned “beta”.
I believe this would be an excellent backup solution for development/test environments because it can be installed and managed on each server. However, in a production environment with over 90 SQL Servers, a remote backup solution would be ideal, and I am afraid this would not be that solution. I could be wrong, but I am a little fearful of placing it into production right now.
That all being said, I will definitely be putting it to use on my development servers, workstations, and test servers.
This past Friday evening, a simple data import process (one that has been working for several months) decided not to work correctly. Because there was no “early warning” notification system on the SQL 2005 server, the SQL Agent job was still running when Monday morning rolled around.
Using Idera’s SQL Check, I quickly noticed there was a process holding a lock on a table that was causing some MAJOR problems. Four different SQL processes were waiting on this lock. And of course, my supervisor was getting calls left and right from end users because their applications were not working.
I had to figure out what was going on. I was able to get a list of processes, and which one was causing all the problems, with the sp_who2 command.
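For reference, sp_who2 can be run on its own or limited to active sessions via its optional parameter:

```sql
-- List all sessions, or only those currently doing work
EXEC sp_who2;            -- everything, including idle sessions
EXEC sp_who2 'active';   -- skip sleeping/awaiting-command sessions
```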
However, this returned a bogus HOST NAME, and I am not really sure why. I confirmed with networking that the returned hostname and IP address do not exist on our network, but that is a mystery for another day.
I needed an accurate way to get all the information required at the moment a job runs long. The code below probably provides way more information than I really need, but sometimes more is better. Even though host_name returns a “bogus” host name, client_net_address does not! That valid IP address is very important for locating the culprit machine.
SELECT C.session_id
      ,C.client_net_address
      ,S.host_name
      ,S.client_version
      ,S.client_interface_name
      ,S.login_name
      ,S.original_login_name
      ,DB_NAME(R.database_id) AS [DB Name]
      ,R.status
      ,R.command
      ,R.blocking_session_id
      ,R.wait_type
      ,R.percent_complete
      ,R.wait_resource
      ,C.net_transport
      ,C.protocol_type
      ,C.auth_scheme
      ,C.last_read
      ,C.last_write
      ,S.cpu_time
      ,S.memory_usage
      ,(S.total_elapsed_time / 1000) AS [Total Sec]
      ,S.lock_timeout
FROM sys.dm_exec_connections AS C
JOIN sys.dm_exec_requests AS R ON C.session_id = R.session_id
JOIN sys.dm_exec_sessions AS S ON C.session_id = S.session_id
ORDER BY C.session_id
Using Thomas LaRock’s HOW TO: Find Currently Running Long SQL Agent Jobs, I set up a simple two-step SQL Agent job to check for 1) long-running jobs and 2) session information (using the code above). A simple “save results to txt file” step lets me export the results and study them later.
USE [msdb]
GO
/****** Object: Job [job_WH_MonitorDatabase] Script Date: 10/30/2012 13:57:18 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [xx] Script Date: 10/30/2012 13:57:18 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name = N'xxx' AND category_class = 1)
BEGIN
    EXEC @ReturnCode = msdb.dbo.sp_add_category @class = N'JOB', @type = N'LOCAL', @name = N'xxx'
    IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job
    @job_name = N'job_MonitorDatabase',
    @enabled = 1,
    @notify_level_eventlog = 0,
    @notify_level_email = 2,
    @notify_level_netsend = 0,
    @notify_level_page = 0,
    @delete_level = 0,
    @description = N'',
    @category_name = N'xx',
    @owner_login_name = N'INFOSYS\xx',
    @notify_email_operator_name = N'xx',
    @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/****** Object: Step [Monitor] Script Date: 10/30/2012 13:57:18 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep
    @job_id = @jobId,
    @step_name = N'Monitor',
    @step_id = 1,
    @cmdexec_success_code = 0,
    @on_success_action = 3,
    @on_success_step_id = 0,
    @on_fail_action = 2,
    @on_fail_step_id = 0,
    @retry_attempts = 0,
    @retry_interval = 0,
    @os_run_priority = 0,
    @subsystem = N'TSQL',
    @command = N'exec usp_LongRunningJobs',
    @database_name = N'master',
    @flags = 0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/****** Object: Step [RecordProcesses] Script Date: 10/30/2012 13:57:18 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep
    @job_id = @jobId,
    @step_name = N'RecordProcesses',
    @step_id = 2,
    @cmdexec_success_code = 0,
    @on_success_action = 1,
    @on_success_step_id = 0,
    @on_fail_action = 2,
    @on_fail_step_id = 0,
    @retry_attempts = 0,
    @retry_interval = 0,
    @os_run_priority = 0,
    @subsystem = N'TSQL',
    @command = N'SELECT sysp.spid AS [SPID]
      ,sysp.status AS [Status]
      ,sysp.hostname AS [Host Name]
      ,cn.client_net_address AS [Client IP]
      ,db_name(sysp.dbid) AS [DB Name]
      ,convert(sysname, rtrim(sysp.loginame)) AS [Login]
      ,sysp.program_name AS [Program]
      ,sysp.cmd AS [CMD Status]
      ,sysp.cpu AS [CPU usage]
      ,sysp.physical_io AS [IO]
      ,sysp.blocked AS [Blocked By]
      ,sysp.waitresource AS [Wait Resource]
FROM master.dbo.sysprocesses AS sysp
INNER JOIN sys.dm_exec_connections AS cn ON sysp.spid = cn.session_id
WHERE sysp.dbid > 4
  AND sysp.cmd <> ''AWAITING COMMAND''
ORDER BY sysp.hostname',
    @database_name = N'master',
    @output_file_name = N'F:\SQLAgent_Logs\RunningProcesses.txt',
    @flags = 2
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule
    @job_id = @jobId,
    @name = N'Daily',
    @enabled = 1,
    @freq_type = 4,
    @freq_interval = 1,
    @freq_subday_type = 4,
    @freq_subday_interval = 10,
    @freq_relative_interval = 0,
    @freq_recurrence_factor = 0,
    @active_start_date = 20121030,
    @active_end_date = 99991231,
    @active_start_time = 0,
    @active_end_time = 235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
Thanks for your time! I will let you know how it turns out.