Checking Who Be Active

After talking about Ola’s Maintenance Solution for SQL Server, I wanted to make sure those new/accidental DBAs out there add another very helpful (and widely used) stored procedure to their toolbox.

sp_whoisactive

Created by Adam Machanic, sp_whoisactive shows you all of the active connections on the instance at the time of execution. This can be very useful when trying to identify what may be inflicting pain on your SQL Server or, in some cases, help you figure out who keeps executing that pesky “SELECT * FROM”. It is easy to use and well documented. You can read the documentation here or on Adam’s GitHub page. He has recently moved his updates to GitHub, so if you want the most up-to-date version of sp_whoisactive, go there.

As always, I do want to point out a useful parameter that pulls the sleeping SPIDs out as well, so you can see the connections that are still open but just waiting for their “next task.” This is an overgeneralization, but the idea is that these connections can still impact your SQL Server’s performance, even if they are not active at the moment.

EXEC sp_WhoIsActive @show_sleeping_spids = 2;

In the results, we can see that session_id 54 is in a “sleeping” status. While it is not currently querying our database server, at any time this user could execute a new query. This is helpful for identifying a user, or even an application, with an open connection to the server that “recently” executed a query (the details are in the “sql_text” column of the results). We can see system data about the performance impact the query had and, to some extent, what to expect the next time they run it. I am overgeneralizing here, and I am sure some of you are screaming at the screen that I am forgetting about X detail, Y context, etc. But for those out there not “classically” trained in database administration, this is what they need.
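
Beyond @show_sleeping_spids, a couple of other parameters I find myself reaching for are sketched below. Treat this as a starting point, not an exhaustive list; the documentation linked at the end of this post covers every option.

-- Pull the sleeping SPIDs and include the query plan for each session
EXEC sp_WhoIsActive
    @show_sleeping_spids = 2,
    @get_plans = 1;

-- Hunting for blocking: flag the sessions at the head of a blocking chain
-- and sort them to the top of the results
EXEC sp_WhoIsActive
    @find_block_leaders = 1,
    @sort_order = '[blocked_session_count] DESC';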

Last Bit

Finally, my last tip: make sure to install the procedure in the same database (whether that is your master database or a user database created for your tools) as the rest of the stored procedures you use for the maintenance of your servers. Hopefully this quick post helped someone out there. Thanks again for reading.
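
If you do keep it in a tools database rather than master, just remember to reference it with a three-part name (or switch your database context first). A quick hypothetical example, assuming a tools database named DBADMIN:

EXEC DBADMIN.dbo.sp_WhoIsActive @show_sleeping_spids = 2;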

http://whoisactive.com/

https://github.com/amachanic/sp_whoisactive/releases

So You Need A Maintenance Plan?


I mean it this time when I say this will be a quick one. For most DBAs, this information is already a part of your bread and butter. But for some accidental DBAs, this information may be useful.

One of the first things that was drilled into my brain was to ensure that we had a solid maintenance plan for our SQL Servers, complete with a consistent backup schedule and troubleshooting tools to assist when a problem arose. Luckily, the SQL Server community has developed many free, open-source tools that are widely used across many industries. Today, we are looking at one of the most important tools of them all.

Ola Hallengren’s Maintenance Solution

https://ola.hallengren.com/

This is the first thing I deploy to my new servers after SQL Server has been installed. It sets up various jobs (backups, integrity checks, index optimization, etc.) that you can easily schedule to your heart’s content. The parameters are WELL documented, and the community can assist with anything you run into while installing the solution. Out of the box, this is solid, but the customization is endless. Below are the general parameters that you SHOULD look at when installing the solution, as your needs will be different than mine.

USE [master] -- Specify the database in which the objects will be created.

1) I use a separate database, like a “DBADMIN” database, to store all my tools rather than master. But if you use master, just keep it set to this.

DECLARE @CreateJobs nvarchar(max)          = 'Y'         -- Specify whether jobs should be created.

2) If this is your first install, leave this set to ‘Y’. If you are updating the solution because Ola has pushed out a new version, make sure to switch this to ‘N’.

DECLARE @BackupDirectory nvarchar(max)     = NULL        -- Specify the backup root directory. If no directory is specified, the default backup directory is used.

3) I strongly suggest backing up your databases to another server; put the network path here. This sets the default backup path for your jobs, so change NULL to that path.

DECLARE @CleanupTime int                   = NULL        -- Time in hours, after which backup files are deleted. If no time is specified, then no backup files are deleted.

4) For me, 72 hours works here. But that does not mean the needs of your client/customer will be the same. Make sure to ask them! And if they don’t tell you, then maybe it is worth the effort to sit down with them and work out your RTO/RPO. (More on that in another post?)

DECLARE @OutputFileDirectory nvarchar(max) = NULL        -- Specify the output file directory. If no directory is specified, then the SQL Server error log directory is used.

5) Find a location on your server to output a separate log of these jobs. You want this, trust me. With the path set, you can always take a look at the log to see why a job failed in an easier-to-read format 🙂

DECLARE @LogToTable nvarchar(max)          = 'Y'         -- Log commands to a table.

6) I always leave this as ‘Y’, as there are times when I need to see what commands were passed (in case I screwed something up) in a previous job execution. The table will be created in whatever database you installed the solution in.
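
Putting those choices together, the top of MaintenanceSolution.sql might look something like the sketch below. The database name, backup share, and output path are placeholders for my environment; substitute your own values.

USE [DBADMIN] -- Tools database instead of master (create it first if it does not exist).

DECLARE @CreateJobs nvarchar(max)          = 'Y'                              -- First install; switch to 'N' when re-running for an update.
DECLARE @BackupDirectory nvarchar(max)     = N'\\BACKUPSERVER\SQLBackups'     -- Network share on another server.
DECLARE @CleanupTime int                   = 72                               -- Keep 72 hours of backups; confirm against your RTO/RPO.
DECLARE @OutputFileDirectory nvarchar(max) = N'D:\SQLLogs\MaintenanceSolution' -- Separate, easy-to-read job output files.
DECLARE @LogToTable nvarchar(max)          = 'Y'                              -- Log commands to the CommandLog table.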

That Wasn’t So HARD!?

After that, go ahead and execute the script, and now you have a solid maintenance plan! Go ahead and adjust the schedules to your liking, and make sure to include your operator in the notifications if a job fails!
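
If you prefer to script those last two steps as well, something like the following works against the jobs Ola’s script creates. The job name, schedule, and operator name below are assumptions for illustration; adjust them to match your environment.

-- Schedule the nightly full backup of user databases for 10:00 PM
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'DatabaseBackup - USER_DATABASES - FULL',
    @name = N'Nightly full backup',
    @freq_type = 4,              -- daily
    @freq_interval = 1,          -- every 1 day
    @active_start_time = 220000; -- 22:00:00

-- Email an existing operator if the job fails
EXEC msdb.dbo.sp_update_job
    @job_name = N'DatabaseBackup - USER_DATABASES - FULL',
    @notify_level_email = 2,     -- 2 = notify on failure
    @notify_email_operator_name = N'DBA Team';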

Changing Backup Paths, Better

Starting in the last week of February, I am going to try doing smaller posts every week covering something new I learned or found useful while doing my job. The goal is to share small tidbits of info that provide some sort of value for those seeking it! This week, it is about changing the default backup path for your database maintenance jobs!

There will come a time when you might need to change the default path for your backup jobs. For some of you, this might be a pretty straightforward task on a handful of servers. For the rest of us, it is not ideal to hop from one instance to the next, updating the default paths manually across several servers. Thankfully for everyone, our community has a solution (most of the time) for our daily struggles. And this situation is no different!

Identifying WHERE your backups are going


I know you are probably thinking, “How could ANYONE not know where their backups are going?” I will simply respond by saying: please see any sysadmin who was dubbed an accidental DBA by their company and told to “just figure it out.” Plus, maybe you are someone new to the gig and really don’t have any guidance on where anything is! These things happen, people!

A quick solution would be to connect to your database servers and run the following command:

USE msdb;

-- List SQL Server Agent jobs whose step commands contain the word BACKUP,
-- along with the originating server, step, enabled flag, and full command text
SELECT  j.job_id,
    s.srvname,
    j.name,
    js.step_id,
    js.command,
    j.enabled
FROM    dbo.sysjobs j
JOIN    dbo.sysjobsteps js
    ON  js.job_id = j.job_id
JOIN    master.dbo.sysservers s
    ON  s.srvid = j.originating_server_id
WHERE   js.command LIKE N'%BACKUP%';

This query looks through the SQL Server Agent tables in msdb, identifies job step commands that mention BACKUP, and returns them to you. From that output you can identify the job ID, server name, job name, which step the backup command is in, and the command itself (including the backup path). Unfortunately, I do not have a test server to show you the output of the command, so you will need to just trust me (or run it on your server!).

Change the backup path!

Now for the next step: the following script makes changing default backup paths an absolute BREEZE.

USE msdb;
DECLARE @Find nvarchar(max);
DECLARE @Replace nvarchar(max);
DECLARE @DebugOnly bit;
SET @Find = N'\\SERVERA\Backups';
SET @Replace = N'\\SERVERB\Backups';
SET @DebugOnly = 1;
IF OBJECT_ID(N'tempdb..#excludeJobs', N'U') IS NOT NULL
BEGIN
    DROP TABLE #excludeJobs;
END
CREATE TABLE #excludeJobs
(
    JobName sysname NOT NULL
        PRIMARY KEY CLUSTERED
);

INSERT INTO #excludeJobs (JobName)
VALUES ('The Name of a job you want to skip');

IF OBJECT_ID(N'tempdb..#deets', N'U') IS NOT NULL
DROP TABLE #deets;
CREATE TABLE #deets 
(
    JobName sysname NOT NULL
    , StepName sysname NOT NULL
    , OldCommand nvarchar(max) NOT NULL
    , NewCommand nvarchar(max) NOT NULL
    , PRIMARY KEY (JobName, StepName)
);
DECLARE @JobName sysname;
DECLARE @StepName sysname;
DECLARE @StepID int;
DECLARE @Command nvarchar(max);
DECLARE @NewCommand nvarchar(max);

BEGIN TRY
    BEGIN TRANSACTION;

    DECLARE cur CURSOR LOCAL FORWARD_ONLY STATIC READ_ONLY 
    FOR
    SELECT sj.name
        , sjs.step_name
        , sjs.step_id
        , sjs.command
    FROM dbo.sysjobsteps sjs
        INNER JOIN dbo.sysjobs sj ON sjs.job_id = sj.job_id
    WHERE sjs.command LIKE N'%' + @Find + N'%' ESCAPE N'|' COLLATE SQL_Latin1_General_CP1_CI_AS
        AND sj.enabled = 1
        AND NOT EXISTS (
            SELECT 1
            FROM #excludeJobs ej
            WHERE ej.JobName = sj.name
            )
    ORDER BY sj.name
        , sjs.step_name;

    OPEN cur;
    FETCH NEXT FROM cur INTO @JobName
        , @StepName
        , @StepID
        , @Command;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @NewCommand = REPLACE(@Command, @Find, @Replace) COLLATE SQL_Latin1_General_CP1_CI_AS;
        INSERT INTO #deets (JobName, StepName, OldCommand, NewCommand)
        SELECT JobName = @JobName
            , StepName = @StepName
            , PriorCommand = @Command
            , NewCommand = @NewCommand;

        IF @DebugOnly = 0
        BEGIN
            EXEC dbo.sp_update_jobstep @job_name = @JobName, @step_id = @StepID, @command = @NewCommand;
            PRINT N'Updated ' + @JobName;
        END
    
        FETCH NEXT FROM cur INTO @JobName
            , @StepName
            , @StepID
            , @Command;
    END
    CLOSE cur;
    DEALLOCATE cur;

    SELECT *
    FROM #deets;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
    BEGIN
        ROLLBACK TRANSACTION;
        PRINT N'Transaction rolled back';
    END
    PRINT ERROR_MESSAGE();
    PRINT ERROR_LINE();
END CATCH

Let’s focus on the variables you can modify, which make this script super helpful. In “SET @Find”, insert the backup path you have already identified, either on your own or via the script I provided before. Next, in “SET @Replace”, add the NEW default path you would like applied to all of your backup jobs. Finally, and this is my favorite part, there is “SET @DebugOnly”. By default, I leave this set to 1, as it shows you what the script would change without actually touching the jobs.

Once you have set your variables, run the script and take a look at the details it returns: the JobName, StepName, OldCommand, and NewCommand for every job step that matched. By running the script in debug mode, you can verify that your new paths are correct. Once you are confident in the new path, set @DebugOnly to 0 and execute the script. I recommend running the script twice; the second execution should come up blank, signifying that your change was made successfully.

Do the backup jobs run successfully with the new path?

Since I cannot account for every situation, I will speak plainly: I strongly encourage you to run a system database backup just to verify that backups can be written to the new path. The last thing you want is to change the backup paths and then fail to verify that the database server can even write to the new location. My guess is you will receive an unwanted call when the server starts sending out emails saying backups are failing…
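
A quick way to test is a COPY_ONLY backup of a small system database straight to the new share. This is just a sketch; the UNC path below is the example @Replace value from the script above, so swap in your own:

-- Throwaway COPY_ONLY backup to confirm SQL Server can write to the new path
BACKUP DATABASE model
TO DISK = N'\\SERVERB\Backups\model_PathTest.bak'
WITH COPY_ONLY, CHECKSUM, INIT;

If that succeeds, delete the test file and let the regular backup jobs run on their normal schedule.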

To be fair, the script I provided above is not my own; you can find the author’s write-up here:

Find-and-Replace for SQL Server Agent Jobs – SQL Server Science

Until next week! 

Non-Sysadmins can run SPs Too!

One of the things I have been focusing on lately is improving the processes currently in place and automating tasks to give myself (and my colleague) more time to focus on larger projects. Most recently, I began writing some knowledge base articles for our help desk that would walk agents through basic troubleshooting steps. The hope was to have them gather some information up front (active connections, blocking sessions, etc.) to help speed up the resolution time for us. Since the agents are not system admins, they lack the necessary rights to gather certain data… or do they?

The following articles (Brent Ozar and Adam Machanic) walk through how to use certificates to give non-sysadmins the ability to execute specific stored procedures. In my example, I will use sp_whoisactive (the activity monitoring procedure covered earlier), but ANY stored procedure will work.

Creating A Certificate

USE master 
GO

CREATE CERTIFICATE sp_whoisactiveCERT
ENCRYPTION BY PASSWORD = 'EnterYourPasswordHERE' 
WITH SUBJECT = 'Who is Active', 
--START_DATE = 'If you want the cert to activate at a certain time, enter it here'
EXPIRY_DATE = '9999-12-31' 
GO

Great, now that we have the certificate created, we can move on to creating the login from the certificate.

Creating The Login

CREATE LOGIN sp_whoisactive_login
FROM CERTIFICATE sp_whoisactiveCERT
GO

Simple enough, right? The next bit is to grant permissions, and since I want this to be as “tied down” as possible, we are going to grant the login VIEW SERVER STATE. This grants the ability to return server state information from Dynamic Management Views and Functions, which can be used to monitor the health of a server and diagnose issues. I do not want the agents to have the ability to do anything more than that. More on Dynamic Management Views and Functions can be found here.

Granting Permissions

GRANT VIEW SERVER STATE 
TO sp_whoisactive_login
GO

With the necessary rights applied, we are ready for the final step: adding a signature to the actual stored procedure (sp_whoisactive in this case).

Adding A John Hancock

ADD SIGNATURE TO sp_WhoIsActive 
BY CERTIFICATE sp_whoisactiveCERT
WITH PASSWORD = 'This is the same password you used for the certificate creation' 
GO

I should mention that you will need sp_whoisactive installed before you execute that last step. But after that, you are done!
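
One more thing worth checking: depending on how your environment is set up, the help desk logins may also need EXECUTE permission on the procedure itself. A hypothetical sketch, assuming the agents map to a database user named HelpDeskAgent in the database where sp_whoisactive lives (master in this example):

USE master
GO

-- Let the help desk user call the procedure; the signature supplies VIEW SERVER STATE at run time
GRANT EXECUTE ON dbo.sp_WhoIsActive TO HelpDeskAgent
GO

With that in place, the agents can run sp_whoisactive without ever being granted elevated rights directly.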

Final Remarks

Huge shout out to Brent and Adam for providing the great details on how to utilize this “feature” in SQL Server and for making it easy to implement. As I said earlier, you can implement this with just about any stored procedure. In a future post, I will show off a simple PowerShell script that will help you keep your “third-party” stored procedures up to date across multiple servers. Until next time, thanks for reading, leave a comment below, or reach out to me if you have any questions!

Learning How to Log Ship

For my first real post, I wanted to share my experience with Microsoft SQL Server’s “Log Shipping” feature. This was one of my first major implementations, and I am pretty proud of what it has become. I will not explain log shipping itself in this post, as that may be for another time. If you would like to learn more about log shipping and how it works, check out the links below!

Books Online


SQLShack

My goal was to provide a solution that would give users the ability to generate reports off a database refreshed with data more frequently than every 24 hours. And that is all I can really tell you, to avoid losing my job. So here are the details of my setup (at a high level, with sensitive information removed).

Primary Server (also acting as the monitoring server)
Database: theRetroDBAdb
LS_Alert job set to 120 min
LS_Backup job set to every 30 min

Secondary Server
Database: theRetroDBAdbLS
Standby Mode Enabled with “Disconnect Users When Restoring” active
LS_Copy job runs every 30 min
LS_Restore job runs every hour, on the hour

Backup Server (accessed through the network)
Holds all backups created through maintenance plan
Under the “theRetroDBAdb” folder, two folders were created (Copy, Backup) and the files would go into their respective folders. 

Lessons Learned

  1. Remove the log shipped database from any backup maintenance plan; this includes log, diff, and full backups
    • This was something that I did not anticipate being a problem, but after a colleague suggested removing the log shipped database from the existing maintenance plan, it resolved one of our issues with the log shipping chain breaking. I had been told that the log shipped databases were ignored as long as the primary database was backed up. Well, by removing the log shipped database from the maintenance plan, we no longer ran into issues… until the VEEAM backups were executed
    • I decided to reach out to the community to shed more light on the question of excluding the log shipped database from maintenance plans. You can follow the responses in this Stack Exchange question.
  2. If you have VEEAM backups, make sure application-aware processing handles logs (transaction logs) with COPY ONLY.
    • Another obscure issue we ran into was VEEAM backups taking place and breaking the log chain. Luckily, my senior DBA had read about VEEAM’s “Application-Aware Processing” feature prior to my pilot. Under this feature is a setting that controls how transaction logs are processed on database servers, and it was set to “Process transaction logs with this job”.
    • What we needed was “Perform copy only,” which allows SQL Server to manage the transaction logs. Now, if we did not have log shipping on this server, I am not sure it would have been an issue. However, once we changed this setting, we no longer saw log chain breakage over the weekend when the VEEAM backups took place.
  3. Set the alert to twice the restore timetable 
    • This is optional, but I noticed that some servers would fire the alert because there was not a valid log backup to apply, even though the chain was not broken. If I let the process continue, the alerts would eventually stop once a valid log backup was found. This caused a few panics, followed by the quick realization that nothing was technically wrong with log shipping; it was just doing its job and informing me that no restore had taken place. Since our restore jobs were set to 1 hour, I adjusted the alert to 2 hours. This way, if something was actually wrong and the alert was triggered, I probably needed to look at it! (See the monitoring sketch after this list for a quick way to check the last backup, copy, and restore times yourself.)
  4. Separate your data and log files (optional)
    • I chose to have the log shipped copy go to different drives than the default data file path, and did the same with the log file.
    • Again, this was more specific to our infrastructure, but I can see the benefit of having the log shipped copy and log file on separate drives from the secondary’s default location. For my company, it made sense due to space allocation constraints and the ability to delete/remove the disk drive log shipping was writing to in order to save space for the host in an emergency. Since our log shipped databases are used for reporting, they can be removed in the case of a storage emergency. However, I think it is overall a great practice to separate, if possible, your log shipping copies from the other databases housed on your secondary server.
  5. Entertain various restore timelines (30 min, 1 hour, 12 hours, etc.)
    • We were not given a specific increment for these restore jobs. The existing solution, which used an SSIS package, created full backups and restored the database on the secondary server every 24 hours, so anything better than 24 hours was a win, and we started with 30 minutes. The 30-minute restore ran into contention, as other backup jobs were writing to the backup server that the log shipping copy and restore jobs were pulling from. So we moved it to 45 minutes and then to an hour, which made it easier for our reporting team to relay the restore downtime for the data source to our users. Even though the restore process takes literal seconds, as it is just restoring a log backup, it was necessary to inform users that a report run at the top of the hour would more than likely fail.
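
As a quick sanity check on that alert-threshold lesson, the log shipping monitor tables in msdb show the last backup, copy, and restore times for each log shipped database. A rough sketch: run the first query on the primary and the second on the secondary (or both on the monitor server).

-- Primary side: last log backup and the backup alert threshold (in minutes)
SELECT primary_database,
       last_backup_file,
       last_backup_date,
       backup_threshold
FROM msdb.dbo.log_shipping_monitor_primary;

-- Secondary side: last copy/restore and the restore alert threshold (in minutes)
SELECT secondary_database,
       last_copied_date,
       last_restored_date,
       last_restored_file,
       restore_threshold
FROM msdb.dbo.log_shipping_monitor_secondary;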

And that does it for this post. Have suggestions of your own? Or, do you have any good log shipping stories you would like to share? Post a comment below!