After talking about Ola’s MaintenanceSolution for SQL Server, I wanted those new/accidental DBAs out there to make sure they add a very helpful (and widely used) stored procedure to their toolbox.
Created by Adam Machanic, sp_whoisactive shows the user all of the active connections running at the time of execution. This can be very useful when trying to identify what may be inflicting pain on your SQL Server or, in some cases, help you figure out who keeps executing that pesky “SELECT * FROM”. It is easy to use and well documented. You can read the documentation here or on Adam’s GitHub page. He has recently moved his updates to GitHub, so if you want the most up-to-date version of sp_whoisactive, go there.
As always, I do want to point out a useful parameter that pulls in the sleeping SPIDs as well, so you can see the connections that are still open but just waiting for the “next task.” This is an overgeneralization, but the idea is that these could still impact your SQL Server’s performance, even if they are not currently active.
EXEC sp_WhoIsActive @show_sleeping_spids = 2;
We can see that session_id 54 is in a “sleeping” status. While it is not currently querying our database server, at any time this user could execute a new query. This is helpful for identifying a user, or even an application, with an open connection to the server that “recently” executed a query (details in the “sql_text” column of the results). We can see system data on the performance impact the query had and, to some extent, what to expect the next time they run it. I am overgeneralizing here, and I am sure some of you are screaming at the screen that I am forgetting about X detail, Y context, etc. But for those out there not “classically” trained in database administration, this is what they need.
Finally, my last tip: make sure to install the procedure in the same database (whether that is your master database or a user database created for your tools) as the rest of the stored procedures you use for the maintenance of your servers. Hopefully this quick post helped someone out there. Thanks again for reading.
I mean it this time when I say this will be a quick one. For most DBAs, this information is already a part of your bread and butter. But for some accidental DBAs, this information may be useful.
One of the first things that was drilled into my brain was to ensure that we had a solid maintenance plan for our SQL Servers, complete with a consistent backup schedule and troubleshooting tools to assist when a problem arose. Luckily, the SQL community has developed many free, open-source tools that are widely used across many industries. Today, we are looking at one of the most important tools of them all.
This is the first thing I deploy to my new servers after SQL Server has been installed. It sets up various jobs (backups, integrity checks, reindexing, etc.) that you can easily schedule to your heart’s content. The parameters are WELL documented, and the community can assist with anything you run into if you find an issue installing the plan. Out of the box it is solid, but the customization is endless. Below are the general parameters that you SHOULD look at when installing the plan, as your needs will be different than mine.
USE [master] -- Specify the database in which the objects will be created.
1) I use a separate database, like a “DBADMIN”, to store all my tools in rather than master. But if you use master, just keep it set to this.
DECLARE @CreateJobs nvarchar(max) = 'Y' -- Specify whether jobs should be created.
2) If this is your first install, leave this to ‘Y’. If you are updating your plan, due to an updated version pushed out by Ola, make sure to switch this to ‘N’.
DECLARE @BackupDirectory nvarchar(max) = NULL -- Specify the backup root directory. If no directory is specified, the default backup directory is used.
3) I strongly suggest backing up your databases to another server, so put the network path here. This will set the default backup path for your jobs, so change NULL to that path.
DECLARE @CleanupTime int = NULL -- Time in hours, after which backup files are deleted. If no time is specified, then no backup files are deleted.
4) For me, 72 hours works here. But that does not mean the need of your client/your customer will be the same. Make sure to ask them! And if they don’t tell you, then maybe it is worth the effort to sit down with them to discover your RTO/RPO. (More on that in another post?)
DECLARE @OutputFileDirectory nvarchar(max) = NULL -- Specify the output file directory. If no directory is specified, then the SQL Server error log directory is used.
5) Find a location on your server to output a separate log of this plan. You want this, trust me. With the path, you can always take a look at the log to see why a job failed in an easier-to-read format 🙂
DECLARE @LogToTable nvarchar(max) = 'Y' -- Log commands to a table.
6) I always leave this as “Y” as there are times where I need to see what commands were passed (in case I screwed something up) in a previous job execution. The table will be created under whatever database you have installed the plan in.
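Putting the six tweaks above together, a hypothetical install might set the parameters like this (the DBADMIN database, network path, and log directory are placeholder names; substitute your own):

```sql
USE [DBADMIN];  -- 1) a dedicated tools database instead of master (hypothetical name)

DECLARE @CreateJobs nvarchar(max) = 'Y';             -- 2) 'Y' on first install, 'N' when updating
DECLARE @BackupDirectory nvarchar(max) = N'\\BACKUPSERVER\SQLBackups';  -- 3) hypothetical network path
DECLARE @CleanupTime int = 72;                       -- 4) keep 72 hours of backups (ask your customer!)
DECLARE @OutputFileDirectory nvarchar(max) = N'D:\Logs\Maintenance';    -- 5) hypothetical log directory
DECLARE @LogToTable nvarchar(max) = 'Y';             -- 6) log commands to the CommandLog table
```

These declarations are the ones you would edit inside Ola’s MaintenanceSolution.sql script before running it; the rest of the script can usually stay untouched.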
That Wasn’t So HARD!?
After that, go ahead and execute the plan and now you have a solid maintenance plan! Go ahead and adjust the schedules to your liking, and make sure to include your operator in those notifications if the job fails!
In the last week of February, I think I will try to do some smaller posts every week with something new I learned or found useful while conducting my job. The goal will be to submit small tidbits of info that will provide some sort of value for those seeking it! This week, it is going to be about changing your default backup path for your database maintenance jobs!
There will come a time when you might need to change the default path for your backup jobs. For some of you, this might be a pretty straightforward task on a handful of servers. For the rest of us, it is not ideal to hop from one instance to the next, updating the default paths manually across several servers. Thankfully for everyone, our community has a solution (most of the time) for our daily struggles. And this situation is no different!
Identifying WHERE your backups are going
I know you are probably thinking, “How could ANYONE not know where their backups are going?” I will simply respond by saying please see any sysadmin who was dubbed an accidental DBA by their company and told to “just figure it out.” Plus, maybe you are someone new to the gig and really don’t have any guidance on where anything is! These things happen, people!
A quick solution would be to connect to your database servers and run the following command:
USE msdb;

SELECT j.job_id
    , s.srvname
    , j.name AS JobName
    , js.step_id
    , js.command
FROM dbo.sysjobs j
JOIN dbo.sysjobsteps js
    ON js.job_id = j.job_id
JOIN master.dbo.sysservers s
    ON s.srvid = j.originating_server_id
WHERE js.command LIKE N'%BACKUP%'
This will look through the msdb system database, identify job-step commands where a backup is mentioned, and output the values to you. From this info, you can identify the job id, the server name, the name of the job, which step the backup command is in, and the command itself. Unfortunately, I do not have a test server to show you the output of the command, so you will need to just trust me (or run it on your server!).
Change the backup path!
Now, the next step, with the use of the following script, makes changing default backup paths an absolute BREEZE.
USE msdb;
GO

DECLARE @Find nvarchar(max);
DECLARE @Replace nvarchar(max);
DECLARE @DebugOnly bit;

SET @Find = N'\\SERVERA\Backups';
SET @Replace = N'\\SERVERB\Backups';
SET @DebugOnly = 1;

IF OBJECT_ID(N'tempdb..#excludeJobs', N'U') IS NOT NULL
    DROP TABLE #excludeJobs;

CREATE TABLE #excludeJobs
(
    JobName sysname NOT NULL
        PRIMARY KEY CLUSTERED
);

INSERT INTO #excludeJobs (JobName)
VALUES ('The Name of a job you want to skip');

IF OBJECT_ID(N'tempdb..#deets', N'U') IS NOT NULL
    DROP TABLE #deets;

CREATE TABLE #deets
(
    JobName sysname NOT NULL
    , StepName sysname NOT NULL
    , OldCommand nvarchar(max) NOT NULL
    , NewCommand nvarchar(max) NOT NULL
    , PRIMARY KEY (JobName, StepName)
);

DECLARE @JobName sysname;
DECLARE @StepName sysname;
DECLARE @StepID int;
DECLARE @Command nvarchar(max);
DECLARE @NewCommand nvarchar(max);

DECLARE cur CURSOR LOCAL FORWARD_ONLY STATIC READ_ONLY
FOR
SELECT sj.name, sjs.step_name, sjs.step_id, sjs.command
FROM dbo.sysjobsteps sjs
INNER JOIN dbo.sysjobs sj ON sjs.job_id = sj.job_id
WHERE sjs.command LIKE N'%' + @Find + N'%' ESCAPE N'|' COLLATE SQL_Latin1_General_CP1_CI_AS
    AND sj.enabled = 1
    AND NOT EXISTS (
        SELECT 1
        FROM #excludeJobs ej
        WHERE ej.JobName = sj.name
    )
ORDER BY sj.name;

OPEN cur;
FETCH NEXT FROM cur INTO @JobName, @StepName, @StepID, @Command;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @NewCommand = REPLACE(@Command, @Find, @Replace) COLLATE SQL_Latin1_General_CP1_CI_AS;

    INSERT INTO #deets (JobName, StepName, OldCommand, NewCommand)
    SELECT JobName = @JobName
        , StepName = @StepName
        , OldCommand = @Command
        , NewCommand = @NewCommand;

    IF @DebugOnly = 0
    BEGIN
        EXEC dbo.sp_update_jobstep @job_name = @JobName, @step_id = @StepID, @command = @NewCommand;
        PRINT N'Updated ' + @JobName;
    END;

    FETCH NEXT FROM cur INTO @JobName, @StepName, @StepID, @Command;
END;

CLOSE cur;
DEALLOCATE cur;

SELECT JobName, StepName, OldCommand, NewCommand
FROM #deets;

-- Safety net: if anything left a transaction open, roll it back.
IF @@TRANCOUNT > 0
BEGIN
    ROLLBACK TRANSACTION;
    PRINT N'Transaction rolled back';
END;
Let’s focus on the parameters you can modify, which make this script super helpful. In “SET @Find”, insert the backup path you have already identified, either on your own or via the script I provided before. Next, in “SET @Replace”, add the NEW default path you would like to apply to all of your backup jobs. Finally, and this is my favorite part, there is “SET @DebugOnly”. By default, I leave this set to 1, as it will show you the output of running the script without actually changing the default path. Once you have set your parameters, run the script and take a look at the details provided. In the results, you can see the JobName, StepName, OldCommand, and NewCommand. By running the script in debug mode, you can verify that your new paths are correct. Once you are confident in the new path, set debug mode to 0 and execute the script. I recommend running the script twice, as the second execution should come up blank, signifying that your change was made successfully.
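If you want to sanity-check the find/replace logic before letting it anywhere near your jobs, you can run REPLACE against a single sample command first (the server names and command below are hypothetical):

```sql
DECLARE @cmd nvarchar(max) =
    N'BACKUP DATABASE [SomeDB] TO DISK = N''\\SERVERA\Backups\SomeDB.bak''';

-- Preview what the job-step command would look like after the swap.
SELECT REPLACE(@cmd, N'\\SERVERA\Backups', N'\\SERVERB\Backups') AS NewCommand;
```

If the preview looks right, the same @Find/@Replace pair should behave the same way inside the full script.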
Do the backup jobs run successfully with the new path?
Since I cannot account for every situation, I will speak plainly: I strongly encourage you to run a system database backup just to verify that the backups can write to the new path. The last thing you want to do is change the backup paths and then fail to verify that the database server can even write to the new location. My guess is you will receive an unwanted call when the database server starts sending out emails saying backups are failing….
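A minimal way to test write access, assuming a hypothetical new path, is a copy-only backup of a system database. COPY_ONLY keeps this test backup from interfering with your regular backup chain:

```sql
-- Test that SQL Server can write to the new path (path is hypothetical).
BACKUP DATABASE [master]
TO DISK = N'\\SERVERB\Backups\master_pathtest.bak'
WITH COPY_ONLY, INIT;
```

If this succeeds, your scheduled jobs should be able to write there too; if it fails, check the share permissions for the SQL Server service account before anything else.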
To be fair, the script I provided above was not my own; you can find the author’s write-up here.
I just wanted to post this as a “lessons learned” for me, a very green DBA, as I wish I would have known this before I ran into an issue in production. Yes, production. I hear all the people screaming “You Monster! How could you!” Believe me, I was in the same boat. Thankfully, many other people have done the same!
So, let us get to the point. There will come a time when someone runs a query that bloats your tempdb (and any of its separate data files) and you begin to receive alerts. For some people, the first response should ALWAYS be to just extend the drive, then look at why your tempdb expanded to fill it.
However, some of us may not be as lucky and have to find a way to make space. In this case, we shrink-responsibly!
My first move was to make sure I could even shrink the files.
DBCC SHRINKFILE (Tempdev, 20480); -- remember this number is in megabytes. Make sure to be exact.
DBCC SHRINKFILE (Temp02, 20480);
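Before running the shrinks, it can help to confirm the target size is even feasible by checking how much space each tempdb file is actually using. A quick sketch (sizes are reported in 8 KB pages, hence the conversion):

```sql
USE tempdb;

-- Current size vs. actual space used, in MB, for each tempdb file.
SELECT name
    , size * 8 / 1024 AS size_mb
    , FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files;
```

If used_mb is already above your shrink target, the shrink will only get down to the used size, so check this first.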
If all goes well, you should be good, right? Some people suggest that you do not need to restart the SQL Server service. However, in my case I needed to, as I had another piece of the problem I have not touched on yet. Regardless, I would strongly suggest that if you have already scheduled the downtime to shrink the tempdb files, it would not hurt to restart.
BUT! My problem was not a rogue transaction that expanded the tempdb files to max out the drive. No, the problem was my stupidity during a previous scheduled downtime, when I added more tempdb files to match the rest of our database servers. In doing so, I added TOO MANY. Yes…TOO many. So I was going back and fixing my mistake. In this case, the SQL gods were making it a point to teach me a lesson, and here is why.
I had to shrink all of the files down and delete the extra one. So I shrank the files and went to delete the extra tempdb file. A file needs to be “empty” in order to be removed. So….
DBCC SHRINKFILE (LogicalFilename, EMPTYFILE);
And then the problem hit me:
Msg 2555, Level 16, State 1, Line 1
DBCC SHRINKFILE: Page 3:44101 could not be moved because it is a work table page.
There are work tables present. Of course!
Now, you could try restarting the instance to see if the work tables magically go away, but I wanted to reduce the number of times we restart the service. Therefore, I took to the community and came across some suggestions about clearing various caches. Let’s give it a shot!
DBCC FREESYSTEMCACHE ( 'ALL');
But life isn’t easy, right? Why would it be! I tried again to empty the file, and it did not work. Thankfully, I have a FANTASTIC mentor, who suggested, “Why don’t you just try to shrink it down to 1MB, do a restart, and delete it?” At this point, I was happy for suggestions. So….
DBCC SHRINKFILE (Temp05, 1);
Then came the restart of the service. So far so good, as the file DID shrink to close to 1MB. Let’s try to delete it via SSMS…and SUCCESS! I could not believe it. I spent two nights, during off hours, trying to fix this, and the answer was as simple as shrinking the file to 1MB, restarting, and deleting it.
I would like to link two wonderful articles that I found in the process of battling the tempdb, and you should definitely follow both Erin and Andy. They are fantastic people, and smart. Very smart.
Moral of the story: Don’t shrink, add space. Because, should you ever really shrink your SQL Server database? Andy has the answer for you
One of the things I have been focusing on lately is improving the processes that are currently in place and to automate tasks to give myself (and my colleague) more time focusing on larger projects. Most recently, I began writing some knowledge base for our help desk that would walk agents through basic troubleshooting steps. The hope was to have them gather some information up front (active connections, blocking sessions etc) to help speed up the resolution time for us. Since the agents are not system admins, they lack the necessary rights to gather certain data…or do they?
The following articles (Brent Ozar and Adam Machanic) walk through how to create “certificates” that, through the steps below, will allow non-sysadmins the ability to execute specific stored procedures. In my example, I will use sp_whoisactive (a stored procedure for activity monitoring), but ANY stored procedure will work.
Creating A Certificate
CREATE CERTIFICATE sp_whoisactiveCERT
ENCRYPTION BY PASSWORD = 'EnterYourPasswordHERE'
WITH SUBJECT = 'Who is Active',
--START_DATE = 'If you want the cert to activate at a certain time, enter it here'
EXPIRY_DATE = '9999-12-31';
Great, now we have the certificate created, we can move on to creating the user from the certificate.
Creating The User
CREATE LOGIN sp_whoisactive_login
FROM CERTIFICATE sp_whoisactiveCERT;
Simple enough, right? The next bit is to grant permissions, and since I want this to be as “tied down” as possible, we are going to grant the login VIEW SERVER STATE. This grants the ability to return server state information from Dynamic Management Views and Functions, which can be used to monitor the health of a server and diagnose issues. I do not want the agents to have the ability to do anything more than that. More on Dynamic Management Views and Functions can be found here.
GRANT VIEW SERVER STATE TO sp_whoisactive_login;
With the necessary rights applied, we are ready to move to the final step. We will need to add the signature to the actual stored procedure (sp_whoisactive in this case).
Adding A John Hancock
ADD SIGNATURE TO sp_WhoIsActive
BY CERTIFICATE sp_whoisactiveCERT
WITH PASSWORD = 'This is the same password you used for the certificate creation';
I should mention that it helps to have sp_whoisactive installed before you execute the last step. But after that, you are done!
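To verify the whole chain works, you can impersonate a non-sysadmin login and run the procedure yourself. The login name below is hypothetical, and this assumes that login has already been granted EXECUTE on sp_WhoIsActive:

```sql
-- Impersonate a hypothetical low-privilege login that has EXECUTE on the proc.
EXECUTE AS LOGIN = 'helpdesk_agent';
EXEC dbo.sp_WhoIsActive;
REVERT;
```

If the call succeeds under impersonation, your help desk agents should be able to run it too, without holding VIEW SERVER STATE themselves.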
Huge shout out to Brent and Adam for providing the great details on how to utilize this “feature” in SQL Server and making it easy to implement. As I said earlier, you can really implement this with any stored procedure. In a future post, I will be showing off a simple PowerShell script that will help you keep track of updating any of your “third-party” stored procedures across multiple servers. Until next time, thanks for reading, leave a comment below, or reach out to me if you have any questions!
For my first real post, I wanted to share my experience with Microsoft SQL Server’s feature “Log Shipping.” This was one of my first major implementations and I am pretty proud of what it has become. I will not explain log shipping in this post, as that may be for another time. If you would like to learn more about log shipping, and how it works, check out the link below!
My implementation was to provide a solution that would give users the ability to generate reports off a database that was refreshed with data more frequently than 24 hours. And that is all I can really tell you to avoid losing my job. So, here are the details (from a high level, removing sensitive information) on my setup.
Primary Server (also acting as the monitoring server)
- Database: theRetroDBAdb
- LS_Alert job set to 120 min
- LS_Backup job set to every 30 min

Secondary Server
- Database: theRetroDBAdbLS
- Standby Mode enabled with “Disconnect Users When Restoring” active
- LS_Copy job runs every 30 min
- LS_Restore job runs every hour, on the hour

Backup Server (accessed through the network)
- Holds all backups created through the maintenance plan
- Under the “theRetroDBAdb” folder, two folders were created (Copy, Backup) and the files go into their respective folders
Remove the LS database from any backup maintenance plan; this includes logs, diffs, and fulls
This was something that I did not anticipate being a problem, but after a colleague mentioned removing the log-shipped db from the existing maintenance plan, it resolved one of our issues with the log shipping chain breaking. I had been told that the log-shipped dbs were ignored as long as the primary db was backed up. Well, by removing the log-shipped db from the maintenance plan, we no longer ran into issues…until VEEAM backups were executed.
I decided to reach out to the community to shed more light on the question of excluding the LS db from maintenance plans. You can follow the responses in this Stack Exchange question.
If you have VEEAM backups, make sure application-aware processing handles logs (transaction logs) with COPY ONLY.
Another obscure issue that we ran into was due to VEEAM backups taking place and breaking the log chain. Luckily, my Senior DBA had read something prior to my pilot of the “Application-Aware Processing” feature in VEEAM. Under this feature was a setting that handled the processing of the transaction logs on db servers. The setting was set to “Process transaction logs with this job”.
What we needed was “Perform copy only,” which allows SQL Server to manage the transaction logs. Now, if we did not have log shipping on this server, I am not sure it would have been an issue. However, once we changed this setting, we no longer saw log chain breakage over the weekend when the VEEAM backups took place.
This is optional, but I noticed that some servers would kick off the alert because there was not a valid log backup to apply, even though the chain was not broken. If I let the process continue, the alerts would eventually stop once a valid log backup was found. This caused a few panics, followed by the quick realization that nothing was technically wrong with log shipping; it was just doing its job, informing me that no restore had taken place. Since our restore jobs were set to 1 hour, I adjusted the alert to 2 hours. This way, if something was actually wrong and the alert was triggered, I probably needed to look at it!
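If you want to make the same adjustment in T-SQL rather than through the GUI, one way is the log shipping configuration procedure on the secondary; this is a sketch using my database name, and you should verify the parameters against the documentation for your SQL Server version:

```sql
-- Run on the secondary server: raise the restore alert threshold to 2 hours.
EXEC msdb.dbo.sp_change_log_shipping_secondary_database
    @secondary_database = N'theRetroDBAdbLS',
    @restore_threshold = 120;  -- minutes without a restore before the alert fires
```

The threshold should always be comfortably longer than your restore job interval, or you will alert on normal behavior.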
Separate your data and log files (optional)
I chose to have the LS copy go to separate drives from the default data file path, as I did with the log.
Again, this was more specific to our infrastructure, but I can see the benefit of having the log-shipped copy and log file on separate drives from the secondary’s default location. For my company, it made sense due to space allocation restraints and the ability to delete/remove the disk drive log shipping was shipping to in order to save space for the host in an emergency. Since our log-shipped dbs are used for reporting, they can be removed in the case of a storage emergency. However, I think it is overall a great practice to separate, if possible, your log shipping copies from the other databases housed on your secondary server.
Entertain various restore time lines (30min, 1 hour, 12 hours, etc)
We were not given a specific increment for these restore jobs. The current solution we had in place, which utilized an SSIS package, created full backups and restored the database on the secondary server every 24 hours. Anything better than 24 hours was a win, so we started with 30 min. The 30-min restore ran into some contention, as other backup jobs were writing to the backup server that the log shipping copy and restore jobs were pulling from. So, we moved it up to 45 min and then to an hour, making it easier for our reporting team to relay to our users the downtime (for restoring) of the data source. Even though the restore process takes literal seconds, as it is just restoring a log backup, it was necessary to inform the users that a report run at the top of the hour would more than likely fail.
And that does it for this post. Have suggestions of your own? Or, do you have any good log shipping stories you would like to share? Post a comment below!