Multi server script to automatically monitor SQL Server availability

Introduction

As database administrators, we obsess over a few things, things we don’t compromise on. One of them is SQL Server availability. We may use several tools, scripts, and/or configurations, in addition to constant oversight, to ensure that the database is always available. We sometimes even go that extra mile to ensure availability, because we know it is easier to keep a database available than to deal with the consequences of unavailability. In this post, let us look at some of the easiest ways to ensure availability.

August 29, 2017

Creating SharePoint farm documentation

Installing SharePoint farms can range from “click-click-next” to a full day’s work, depending on the configuration level of the server farm. For administrators who are new to a farm, it can be quite troublesome to learn the farm’s configuration. In those cases, it is important to have farm documentation with all the settings information. SharePoint documentation is also important for recreating sets of configurations when failures occur.

August 25, 2017

How to create and use CRUD stored procedures in SQL Server

Working with a database, at some point, usually means working with CRUD operations. Although not complex, CRUD operations are frequently used in SQL, so it is of great importance for developers to learn how to create them efficiently and easily.

August 25, 2017

How to prevent accidental data loss from executing a query in SQL Server aka “Practicing safe coding”

We may sometimes find ourselves in a stressful situation at work where, for example, we need to update or delete some records in our database. We’ve all been there. Right after we click that “Execute” button, we realize we forgot to include a WHERE clause, and the entire table is wiped instead of only one row. Although things like this can happen to the best of us, we can plan ahead and take preventative measures to make sure we don’t get negatively impacted by the consequences of such a mistake.

August 17, 2017

How to identify and solve SQL Server index scan problems

Introduction

Once you have a SQL Server query working correctly – that is, returning the correct results or properly updating a table with update, insert or delete operations, the next thing you usually want to look at is how well the query performs. There are simple things that you can do to improve the performance of a critical query; often those improvements can be quite dramatic!

In this article, we’ll look at one of the most-frequently-seen performance killers: SQL Server index scans. Starting with a simple example, we’ll look at what SQL Server does to figure out how to return the requested result, at least from a high level. That will allow us to zero-in on any bottlenecks and look at strategies to resolve them.

August 10, 2017

How to automatically pull SQL Server database objects from a shared script folder to a local database

Challenge

As explained in the article How to automatically compare and synchronize SQL Server database objects with a shared script folder, this article will explain the solution for the reverse process, when changes need to be pulled from a shared script folder to a local database. This might be helpful if a developer returns from vacation and wants to catch up with the team on all changes, or if a build has been tweaked as part of a recent test/delivery and the latest version needs to be re-propagated directly to all developers via their local development databases.

July 4, 2017

How to automatically compare and synchronize SQL Server database objects with a shared script folder

Challenge

In some cases, source control systems are not an option for a particular SQL developer team, due to cost concerns, lack of approval, etc., but the requirement for such a system, or a close approximation, for managing changes across the developer team can still be a priority.

In such cases, the team needs to think of another way of “uploading” their database changes to one place, comparing and even synchronizing them. One approach is to create a folder on a shared network location, to which all developers have access, and essentially use it as a “poor man’s source control” repository.

In the following team example, everyone works with their own local copy of a database for development purposes but they will write all changes to a shared, central file folder:

This shared folder will contain scripts of all database objects e.g. the whole database schema.

The challenge now is to keep the shared folder up to date with changes made by all developers.

If any developer makes a change to their local database, they’ll need a tool that will compare the current state of their local database against the shared script folder, update the shared folder with those changes, and also update their local development database with changes from all of the other developers on the team, via the shared folder.

Once differences are reviewed, developers should be able to select the specific objects (or all of them) that they want to synchronize to the shared script folder. Additionally, the whole process should be able to run unattended, whether developers want to perform the synchronization with a single click or on a schedule. In that case, it would be useful to have date stamped comparison reports and output files that contain all information about the changes.

Solution

In this article, ApexSQL Diff will be shown as a tool that is up for this challenge, as it offers comparison of databases against script folders. It can be automated using its CLI and scheduled to run unattended at a specific time/date.

In the example described in this article, the comparison will be scheduled to run every 30 minutes. If there are any differences between the shared script folder and a database, a synchronization will be executed to update the shared folder, this “poor man’s” source control repository, with changes from one of the developers’ local databases. Additionally, a text file will be created that contains the server and database names for all developers, and the script will iterate through all of them and conduct the comparison and synchronization process for each developer database.

Along with the performed synchronization, ApexSQL Diff will create date stamped HTML comparison reports of the changes, along with output summary files.

Installation topography

The installation setup can be done in two different ways:

  1. If one installation of ApexSQL Diff will control all synchronizations to the shared script folder, the whole setup will be:

    • This single instance of ApexSQL Diff will need to be able to see, on the network, all of the local SQL Servers used by developers

    • A login to each individual developer database is required. Windows authentication is used in our example

    • Compare the current local database against the shared script folder
      • If there are differences, perform the synchronization process

    • Move to the next developer database
    • If a new developer is added or one leaves the team, the list that contains server and database names can be easily edited to add/remove a server/database

    In the example below, code is used to iterate through a file that holds the list of local databases by SQL Server name.

  2. If each developer has ApexSQL Diff installed on his own machine, then each developer needs to set this process to run on a schedule, at a specific time that is slightly different from the other developers’ times, in order to avoid collisions. For example, if one developer sets it to run every day at 3 PM, the other developers can pick different times

Set up and how it works

Before setting up the process, if a shared script folder has not yet been created, check out the article on exporting SQL data sources, so that the whole database is exported into one script folder.

The whole process can first be set up from the application’s GUI in the following steps:

  1. Run ApexSQL Diff
  2. Select a database as a source and script folder as a destination in the New project window:

    Quick tip:

    If a database was exported to a script folder, or if a database was compared and synchronized against an empty script folder, the SQL Server version will be loaded automatically; but if this is the first comparison and synchronization, you should specify the same version as the version of the compared database

  3. Click the Options tab; the following options in the Synchronization options section will ensure that the synchronization process is error free:

  4. Once everything is set, click the Compare button from the bottom-right corner of the New project window to initiate the comparison process

  5. After the comparison process is done, in the Results grid all compared objects will be shown by default:
  6. Additional filtering of compared objects can be done from the Object filter panel on the left side of the main window:

    In this case, the added and equal filters are unchecked, in order not to delete any objects that exist only in the script folder (e.g. another developer might have added these objects and we don’t want to remove them) and not to show equal objects.

  7. Check all desired objects for the synchronization process and, from the Home tab, click the Save button, so that the whole setup can be saved to a project file that will be used for automating the process:

    The same project file with its settings can be used to process all databases, as the only things that change are the server/database pairs read from the text file.

  8. Once the project file is saved, click the Synchronize button from the Home tab to start the Synchronization wizard:


  9. Once it’s started, the first step will show the synchronization direction; by default, it will synchronize from source to destination

  10. In the next step, any potential dependencies and dependent objects will be analyzed, and if any dependent objects are found, they will be shown:

  11. In the Output options step, select the Synchronize to script folder action from the drop-down list:

    Additionally, check the options to create a snapshot file and a backup of the script folder before the synchronization process starts, so that, if needed, the script folder can be rolled back to its previous state.

  12. In the last step of the Synchronization wizard, actions and potential warnings can be reviewed before the synchronization process starts:

  13. If everything is in order, click the Synchronize button in the bottom-right corner of the Synchronization wizard; once the synchronization process is finished, the Results window will be shown:

Automation

Now that the first synchronization to the shared script folder has finished successfully and the project file containing all needed settings has been created, the whole process can be automated with a PowerShell script.

In our example, Windows authentication was used to connect to the databases, but if you choose SQL Server authentication, your password will be encrypted in the previously saved project file. To learn more about handling login credentials, check out the article about ways of handling database/login credentials.

We’ll show only the important parts of the PowerShell script here; the whole script can be downloaded below and used for your own purposes. If you want to learn how to automatically create folders for storing all outputs and set up their locations, along with the root folder, check out Appendix A.

Let’s define the location of ApexSQL Diff and of the text file that contains the server and database names that will be processed:

#locations of ApexSQL Diff and of the text file with the server and database names
$diffLocation   = "ApexSQLDiff"
$serverDbsLocation = "servers_databases.txt"

Now, let’s define ApexSQL Diff’s parameters, along with the date stamp variable; the tool’s return code will be captured after each run, inside the loop below:

#application's parameters and the date stamp variable
$stampDate = (Get-Date -Format "MMddyyyy_HHmmss")
$diffParameters = "/pr:""SFSync.axds"" /ots:m d /ot:html /hro:s d t is /on:""$repLocation\ReportSchema_$stampDate.html"" /out:""$outLocation\OutputSchema_$stampDate.txt"" /sync /v /f"

The last important part of the PowerShell script is the loop that goes through each server/database pair from the text file and calls the ApexSQL Diff application with its parameters:

#go through each database and execute ApexSQL Diff with its parameters
foreach($line in [System.IO.File]::ReadAllLines($serverDbsLocation))
{
    #parse the "server,database" pair from the current line
    $server   = ($line -split ",")[0]
    $database = ($line -split ",")[1]

    #call ApexSQL Diff to run the schema comparison and synchronization process
    (Invoke-Expression ("& `"" + $diffLocation + "`" " + $diffParameters))

    #capture the application's return code for this run
    $returnCode = $LASTEXITCODE
}
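
The exact format of the servers_databases.txt file is implied by the comma split above: one server,database pair per line. As a purely illustrative sketch (the instance and database names are hypothetical), the file could look like, or be generated with, the following:

#hypothetical example content of servers_databases.txt: one "server,database" pair per line
@"
DEVPC01\SQL2016,AdventureWorks_Dev1
DEVPC02\SQL2016,AdventureWorks_Dev2
DEVPC03\SQL2016,AdventureWorks_Dev3
"@ | Set-Content "servers_databases.txt"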

Additionally, all potential outcomes can be defined and each one can be processed in a specific way. If you’re interested in defining these potential outcomes, learn more about it from the article on Utilizing the “no differences detected” return code.
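
As a minimal sketch of such processing, the snippet below simply logs the outcome of each run; it would sit inside the foreach loop, right after the return code is captured. The specific non-zero values are tool-defined (see the linked article), and the SyncLog.txt file name is a hypothetical choice:

#a minimal, assumed logging scheme: distinguish success from failure for each database
if ($returnCode -eq 0)
{
    "$(Get-Date -Format G) - $server.$database synchronized successfully" | Add-Content "$outLocation\SyncLog.txt"
}
else
{
    "$(Get-Date -Format G) - $server.$database failed with return code $returnCode" | Add-Content "$outLocation\SyncLog.txt"
}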

E-mail system

In addition to the previous automation of the process, an e-mail system can be set up to inform you about any changes or errors. To learn more about it, check out the article on How to setup an e-mail alert system for ApexSQL tools.
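
As an illustration only, a bare-bones alert could be sent from inside the loop with PowerShell’s built-in Send-MailMessage cmdlet; the SMTP server and addresses below are placeholder assumptions, and the linked article describes the complete alert system:

#a minimal e-mail alert sketch (hypothetical SMTP server and addresses)
if ($returnCode -ne 0)
{
    Send-MailMessage -SmtpServer "smtp.example.com" `
        -From "apexsql-alerts@example.com" -To "dba-team@example.com" `
        -Subject "ApexSQL Diff synchronization error" `
        -Body "Synchronization of $database on $server failed with return code $returnCode" `
        -Attachments "$outLocation\OutputSchema_$stampDate.txt"
}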

Scheduling

Since the whole process is now automated with a PowerShell script, it can easily be scheduled in a couple of ways. Learn more about the ways of scheduling ApexSQL tools.
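
For example, assuming the script is saved as C:\Scripts\SFSync.ps1 (a hypothetical path and task name), the built-in Windows Task Scheduler could run it at the 30-minute interval mentioned earlier:

#create a scheduled task that runs the script every 30 minutes
schtasks /create /tn "ApexSQLDiff_SFSync" /sc minute /mo 30 `
    /tr "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\SFSync.ps1"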

Reviewing outputs

Once the whole system has been up and running for a while, all created outputs can be reviewed at any time by all developers, since the folder that contains the HTML reports and output summaries is located on a shared network location:

If there is a need to review a specific HTML report, it can be easily identified, as all of them are date stamped, and by opening it, all comparison differences can be reviewed:

If an e-mail system was set up and an e-mail was received with the subject “ApexSQL Diff synchronization error”, the latest output summary will be attached to the received e-mail and can be analyzed to see what went wrong. Once the attached output summary is opened, the following is shown:

An issue occurred during application execution at 05312017_214538.
Return code: 2
Error description: Switch ‘of’ is not recognized

With a quick check of the common return error codes in the article General usage and the common Command Line Interface (CLI) switches for ApexSQL tools, it can be concluded that /of is an invalid switch and it doesn’t exist.

By identifying who ran the last synchronization, checking the CLI switch used as a parameter and comparing it with the CLI switches in the article ApexSQL Diff Command Line Interface (CLI) switches, we can quickly identify that /of is not a valid switch and that the /on switch should be used instead.

Downloads

Please download the script(s) associated with this article on our GitHub repository.

Please contact us for any problems or questions with the scripts.

Appendix A

In addition to the explained automation process, we can also create a function that will check for and create the folders needed for all outputs:

#check for the Reports and Outputs folders and create them if missing
function CheckAndCreateFolder
{
    param 
    (
        [string] $rootFolder, 
        [switch] $reports, 
        [switch] $outputs
    )

    $location = $rootFolder

    #set the location based on the used switch
    if ($reports -eq $true)
    {
        $location += "\Reports"
    }
    if ($outputs -eq $true)
    {
        $location += "\Outputs"
    }
    #create the folder if it doesn't exist and return its path
    if (-not (Test-Path $location))
    { 
        mkdir $location -Force:$true -Confirm:$false | Out-Null 
    }
    return $location
}

The next thing is to define the root folder and locations of the outputs folders:

#root folder
$rootFolder = "\\vmware-host\Shared\AutoSF"

Quick tip:

In this case, the root folder should be located on a shared network location (next to the shared script folder), so that all developers can easily review all outputs

#location for HTML reports 
$repLocation = CheckAndCreateFolder $rootFolder -Reports

#location for schema output summaries 
$outLocation = CheckAndCreateFolder $rootFolder -Outputs

June 5, 2017

How to automate and schedule SQL Server index defragmentation

Introduction

SQL Server maintenance is not a one-time event, but rather a part of a continuous process. Apart from regular backups and integrity checks, performance improvements can be achieved with index maintenance. If done at regular intervals, it can free the server to focus on other requests rather than losing time scanning for fragmented indexes.

June 5, 2017

How to backup multiple SQL Server databases automatically

In situations with few databases, maintaining a regular backup routine can be achieved easily, either with the help of a few simple scripts or by configuring a SQL Server Agent job that will perform the backup automatically. However, if there are hundreds of databases to manage, backing up each database manually can prove to be quite a time-consuming task. In this case, it would be useful to create a solution that would back up all, or multiple selected, SQL Server databases automatically, on a regular basis. Furthermore, the solution must not impact the server performance or cause any downtime.

June 1, 2017

How to create and manage database backup chains in SQL Server

Each event that causes data loss or disruption of regular daily operations on a SQL Server can be defined as a “disastrous” event. These events include power outages, hardware failure, virus attacks, various types of file corruption, human error, natural disasters, etc. Although there are many methods that are focused on preventing these events, they still occur from time to time, and therefore require proper measures to be addressed. One of the most effective methods for this purpose is the creation of a suitable disaster recovery plan.

May 15, 2017

Two ways to rename SQL Server database objects

From time to time, a database object may need to be renamed for various reasons. When that happens, native features for renaming SQL Server database objects can be very useful. But there are big differences between simply renaming SQL Server database objects in SQL Server Management Studio and safely renaming them with ApexSQL Refactor.

This article will explain the differences between renaming database objects with SSMS and the ApexSQL Refactor’s Safe rename feature.

April 27, 2017

How to automatically compare and synchronize multiple databases on different SQL Server instances

Challenge

It’s often quite a challenge to keep SQL databases located on different SQL Servers in sync. As time goes by, a lot of schema and data changes are made on QA databases on a daily basis, and these need to be kept in sync with the Production databases.

To keep everything in sync, there should be a system that is either triggered or scheduled to run the comparison of all SQL databases and synchronize the ones where changes are detected. This system should also be aware of any dependencies during the synchronization, in order to preserve SQL database integrity.

April 5, 2017

How to set up email notifications for backup jobs in SQL Server

Introduction

For a SQL Server DBA handling multiple databases at any given time, knowing how to set up regular backup schedules, backups with unique names on a daily basis, backup mirrors for redundancy, and cleanup of old backup files is important. Equally important is automatic confirmation, via an email notification, that the backups have been successfully created for the databases. There are a couple of different ways to set up email notifications, which can be done from Microsoft’s SQL Server Management Studio or from a third party application for managing MS SQL Server backups like ApexSQL Backup.

April 3, 2017

How to detect whether index fragmentation affects SQL Server performance 

Background

After initial index creation in a SQL Server database, everything is properly ordered, which means that the logical index page order perfectly matches the physical index page order within the datafile. This is the ideal scenario and it allows for maximum query performance. If the table contains data that never changes, the index will remain perfectly ordered.

March 27, 2017

How to automate SQL Server defragmentation using policies

Introduction

Among numerous factors, poor index maintenance can be a reason for decreased SQL Server performance. If a database contains tables with numerous entries that get updated frequently, it is most likely that high index fragmentation will occur. For smaller indexes, high fragmentation does not necessarily degrade the performance of the queries that are run on a table. But for larger tables, with indexes that consist of 1,000 pages or more, fragmentation could cause noticeable performance issues. Luckily, performing index maintenance tasks on a regular basis can significantly reduce the risk of degraded performance. The most effective ways of treating index fragmentation are the reorganize and rebuild index operations.

March 9, 2017