SQL Server performance monitoring – identifying problems using wait statistics and query wait analysis

In aggregate, SQL Server wait statistics are complex, comprising hundreds of wait types that are registered with every query execution. To better understand how and why wait statistics are, in general, the most important means of detecting and troubleshooting SQL Server performance issues, it is important to understand the mechanism of how a query executes in SQL Server, since performance problems are the ones that most often affect end users.

October 6, 2017

How to export SQL Server data to a CSV file

This article explains different ways of exporting data from SQL Server to a CSV file, covering the following methods:

  1. Export SQL Server data to CSV by using the SQL Server export wizard
  2. Export SQL Server data to CSV by using the bcp Utility
  3. Export SQL Server data to CSV by using SQL Server Reporting Services (SSRS) in SQL Server Data Tools (SSDT) within Visual Studio
  4. Export SQL Server data to CSV by using the ApexSQL Complete Copy results as CSV option
October 4, 2017

Multi server PowerShell script to backup SQL Server databases automatically

Eighty-two percent of enterprises expect the number of their databases to increase over the next twelve months. An increase in data volumes can have negative effects on the performance of databases, so it pays to think about the storage requirements and a backup strategy that meets the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO), two of the most important parameters of a disaster recovery or data protection plan.

September 27, 2017

How to use source control with SQL Server Management Studio (SSMS)


The growing adoption of the Continuous Integration development practice implies that developers have to work in a collaborative manner by storing and sharing their source code in a central repository. For SQL Server Business Intelligence (SQL Server BI) developers, the collaboration is usually made possible by the Team Explorer plug-in for Visual Studio, which provides an interface for integrating SSRS, SSIS and SSAS solutions into source control environments such as Team Foundation Version Control (TFVC) or Git. However, several aspects of SQL Server BI solutions involve T-SQL development (e.g. defining SSRS datasets, SSAS data source views, as well as SSIS source data flow tasks).

September 25, 2017

Creating and automating Tableau Site documentation

One of many Tableau administrator responsibilities is to be familiar with the server configuration, managing users, automated server job process management, knowing Tableau metadata and being familiar with migration and scheduling. To stay updated and keep everything under control, good Tableau documentation can come in handy. Such documentation can be created using ApexSQL Doc, a tool for creating SQL Server, BI, SharePoint as well as Tableau site documentation, which will be covered in this article.

September 22, 2017

How to create and use DML triggers to audit data changes

A DML trigger is a special kind of procedural code programmed to fire when a data manipulation language (DML) event occurs in the database server. DML events include UPDATE, INSERT, or DELETE statements issued against a table or view. SQL Server has three types of triggers: DML triggers, DDL triggers, and logon triggers. For the purpose of this article, only DML triggers will be used to demonstrate and achieve a specific goal.

September 21, 2017

How to customize policies for automatically defragmenting SQL Server indexes

This article explains how to create customized policies for index defragmentation jobs for SQL Server


When creating indexes, database administrators should look for the best settings to ensure minimal performance impact and degradation. However, over time, indexes will get fragmented, which can severely impact server performance.

September 18, 2017

How to Import and Export SQL Server data to an Excel file

There are several ways to export/import SQL Server data to and from an Excel file. In this article, a few of them will be explained:

  • Export SQL Server data to an Excel file using:
    • the SQL Server Import and Export Wizard
    • T-SQL code
    • the Export to Excel option in ApexSQL Complete
  • Import SQL Server data into Excel using the Data Connection Wizard
September 11, 2017

SQL query performance tuning with I/O statistics and execution plans


In a busy system, many things can cause a SQL query to run more slowly than expected or desired. Sometimes, getting a handle on the root cause can involve digging into wait statistics, CPU, memory or network pressure, ferreting out problems in the I/O subsystem, looking for issues in a virtual machine configuration and many other things. Often though, the problem comes down to a simple symptom: the query is doing too much I/O.

August 31, 2017

Multi server script to automatically monitor SQL Server availability


As database administrators, we obsess over a few things, things we don’t compromise on. One of them is SQL Server availability. We may use several tools, scripts, and/or configurations, in addition to constant oversight, to ensure that the database is always available. We sometimes even go that extra mile to ensure availability, because we know it is easier to ensure a database is available than deal with the outcome of unavailability. In this post, let us look at some of the easiest ways to ensure availability.

August 29, 2017

Creating SharePoint farm documentation

Installing SharePoint farms can range from “click-click-next” to a full day’s work, depending on the configuration level of the server farm. For administrators who are new to a farm, it can be quite troublesome to learn the farm’s configuration. In those cases, it is important to have farm documentation with all the settings information. SharePoint documentation is also important for recreating sets of configurations when failures occur.

August 25, 2017

How to create and use CRUD stored procedures in SQL Server

Working with a database, at some point, usually means working with CRUD operations. Although not complex, CRUD operations are often used with SQL, so it is of great importance for developers to learn about how to create them efficiently and easily.

August 25, 2017

How to prevent accidental data loss from executing a query in SQL Server aka “Practicing safe coding”

We sometimes may find ourselves in a stressful situation at work where, for example, we need to update or delete some records in our database. We’ve all been there. Right after we click that “Execute” button, we realize we forgot to include a WHERE clause, and the entire table is wiped instead of only one row. Although things like this can happen to the best of us, we can plan ahead and take preventative measures to make sure we don’t get negatively impacted by the consequences of such a mistake.

August 17, 2017

How to identify and solve SQL Server index scan problems


Once you have a SQL Server query working correctly – that is, returning the correct results or properly updating a table with update, insert or delete operations, the next thing you usually want to look at is how well the query performs. There are simple things that you can do to improve the performance of a critical query; often those improvements can be quite dramatic!

In this article, we’ll look at one of the most-frequently-seen performance killers: SQL Server index scans. Starting with a simple example, we’ll look at what SQL Server does to figure out how to return the requested result, at least from a high level. That will allow us to zero-in on any bottlenecks and look at strategies to resolve them.

August 10, 2017

How to automatically pull SQL Server database objects from a shared script folder to a local database


As explained in the article How to automatically compare and synchronize SQL Server database objects with a shared script folder, this article covers the reverse process, where changes need to be pulled from a shared script folder to a local database. This might be helpful if a developer returns from vacation and wants to catch up with all of the team’s changes, or if a build has been tweaked as part of a recent test/delivery and the latest version needs to be re-propagated directly to all developers via their local development databases.

July 4, 2017

How to automatically compare and synchronize SQL Server database objects with a shared script folder


In some cases, source control systems are not an option for a particular SQL developer team, due to cost concerns, lack of approval etc., but the requirements for such a system, or close approximation, for managing changes across the developer team can still be a priority.

In such cases, the team needs to think of another way of “uploading” their database changes to one place, comparing and even synchronizing them. One approach is to create a folder on a shared network location, to which all developers have access, and essentially use it as a “poor man’s source control repository”.

In the following team example, everyone works with their own local copy of a database for development purposes but they will write all changes to a shared, central file folder:

This shared folder will contain scripts of all database objects e.g. the whole database schema.

The challenge now is to keep the shared folder up to date with the changes made by all developers.

If any developer makes any change to their local database, they’ll need a tool that will compare the current state of their local database against the shared script folder to update those changes on the shared folder, and to also update their local development database with changes from all of the other developers on their team, via the shared folder.

Once differences are reviewed, developers should be able to select the specific objects (or all of them) that they want to synchronize to the shared script folder. Additionally, the whole process should be able to run unattended, whether the developers perform the synchronization on a click or schedule it. In that case, it would be useful to have date stamped comparison reports and output files that contain all information about the changes.


In this article, ApexSQL Diff will be shown as a tool that is up to this challenge, as it offers comparison of databases against script folders. It can be automated using its CLI and scheduled to run unattended at a specific time/date.

In the example described in this article, the comparison will be scheduled to run every 30 minutes; if there are any differences between the shared script folder and a database, a synchronization will be executed to update the shared folder, this “poor man’s” source control repository, with the changes from one of the developers’ local databases. Additionally, a text file will contain the server and database names for all developers, and the created script will iterate through all of them and conduct the comparison and synchronization process for each developer database.

Along with the performed synchronization, ApexSQL Diff will create date stamped HTML comparison reports of the changes and the output summary files.

Installation topography

The installation setup can be done in two different ways:

  1. If one installation of ApexSQL Diff will control all synchronizations to a shared script folder, the whole setup will be:

    • This single instance of ApexSQL Diff will need to be able to see all of the local SQL Servers used by developers, on the network

    • Login to each individual developer database is required. Windows authentication is used in our example

    • Compare the current local database against the shared script folder
      • If there are differences, perform the synchronization process

    • Move to the next developer database
    • If a new developer is added or one leaves the team, the list that contains server and database names can be easily edited to add/remove a server/database
    • In the example below, code is used to iterate through a file which holds the list of local databases by SQL Server name
  2. If each developer has ApexSQL Diff installed on their own machine, then each developer needs to set this process to run on a schedule, at a specific time that is slightly different from the other developers’, to avoid collisions. For example, if one developer sets it to run every day at 3 PM, the others can pick different times

Set up and how it works

Before setting up the process, if a shared script folder is not yet created, check out the article on exporting SQL data sources, so that the whole database is exported to one script folder.

The whole process can first be set up from the application’s GUI in the following steps:

  1. Run ApexSQL Diff
  2. Select a database as a source and script folder as a destination in the New project window:


    Quick tip:

If a database was previously exported to a script folder, or if a database was compared and synchronized against an empty script folder, the SQL Server version will be loaded automatically; but if this is the first comparison and synchronization, you should specify the same version as that of the compared database

  3. Click the Options tab; the following options in the Synchronization options section will ensure that the synchronization process is error free:

  4. Once everything is set, click the Compare button from the bottom-right corner of the New project window to initiate the comparison process

  5. After the comparison process is done, in the Results grid all compared objects will be shown by default:
  6. Additional filtering of compared objects can be done from the Object filter panel on the left side of the main window:

    In this case, added and equal are unchecked, in order to not delete any objects that exist only in the script folder, e.g. another developer might have added these objects and we don’t want to remove them, and to not show equal objects.

  7. Check all desired objects for the synchronization process and from the Home tab, click the Save button, so that the whole setup so far can be saved to a project file that will be used for automating the process:

    The same project file with its settings can be used to process all databases, as the only thing that will be changed are the databases from the text file.

  8. Once the project file is saved, click the Synchronize button from the Home tab to start the Synchronization wizard:

  9. Once it’s started, the first step will show the synchronization direction and by default it will synchronize from source to destination

  10. In the next step, potential dependencies and dependent objects will be analyzed, and any dependent objects that are found will be shown:

  11. In the Output options step, select the Synchronize to script folder action from the drop-down list:

    Additionally, check to create a snapshot file and backup of the script folder before the synchronization process starts, so if needed the previous state of the script folder can be rolled back.

  12. In the last step of the Synchronization wizard, actions and potential warnings can be reviewed before the synchronization process starts:

  13. If everything is in order, click the Synchronize button in the bottom-right corner of the Synchronization wizard; once the synchronization process is finished, the Results window will be shown:


Now that the first synchronization to the shared script folder has finished successfully, and the project file containing all needed settings has been created, the whole process can be automated with a PowerShell script.

In our example Windows authentication was used to connect to a database, but if you choose SQL Server authentication your password will be encrypted in the previously saved project file. To learn more about handling login credentials, check out the article about ways of handling database/login credentials.

We’ll now show only the important parts of the PowerShell script, while the whole script can be downloaded below and used for your own purposes. If you want to learn how to automatically create folders for storing all outputs and set up their locations, along with the root folder, check out Appendix A.

Let’s define the locations of ApexSQL Diff and of the text file that contains the server and database names to be processed:

#locations of ApexSQL Diff and of the text file with server and database names
$diffLocation   = "ApexSQLDiff"
$serverDbsLocation = "servers_databases.txt"
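For reference, the servers_databases.txt file is assumed to contain one comma-separated server,database pair per line, matching the split logic used later in the script; the server and database names below are hypothetical:

```
SQLSERVER01,DevDatabase_Mike
SQLSERVER01,DevDatabase_Anna
DEVBOX02\SQLEXPRESS,DevDatabase_Tom
```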

Now, let’s define ApexSQL Diff’s parameters, along with the date stamp variable (note that minutes are the lowercase “mm” in the format string; the uppercase “MM” would insert the month instead):

#application's parameters and the date stamp variable
$stampDate = (Get-Date -Format "MMddyyyy_HHmmss")
$diffParameters = "/pr:""SFSync.axds"" /ots:m d /ot:html /hro:s d t is /on:""$repLocation\ReportSchema_$stampDate.html"" /out:""$outLocation\OutputSchema_$stampDate.txt"" /sync /v /f"

The last important part of the PowerShell script is the loop that goes through each server/database pair from the text file and calls the ApexSQL Diff application with the defined parameters:

#go through each server/database pair and execute ApexSQL Diff with its parameters
foreach ($line in [System.IO.File]::ReadAllLines($serverDbsLocation))
{
    $server   = ($line -split ",")[0]
    $database = ($line -split ",")[1]

    #calling ApexSQL Diff to run the schema comparison and synchronization process
    (Invoke-Expression ("& `"" + $diffLocation + "`" " + $diffParameters))

    #read the return code right after the tool has run
    $returnCode = $LASTEXITCODE
}


Additionally, all potential outcomes can be defined, and each one can be processed in a specific way. If you’re interested in defining these potential outcomes, learn more from the article Utilizing the “no differences detected” return code.
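As a rough sketch of such outcome handling, the return code captured after each run can be examined inside the loop; note that the specific non-zero value for “no differences detected” shown below is an assumption, so check the linked article for the actual codes:

```powershell
#process the return code captured after each ApexSQL Diff run
#0 (differences synchronized) and 102 (no differences detected) are assumed values
switch ($returnCode)
{
    0       { Write-Output "$server.$database - differences synchronized to the shared folder" }
    102     { Write-Output "$server.$database - no differences detected" }
    default { Write-Output "$server.$database - an error occurred (return code: $returnCode)" }
}
```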

E-mail system

In addition to the previous automation of the process, an e-mail system can be set up to inform you about any changes or errors. To learn more about it, check out the article How to setup an e-mail alert system for ApexSQL tools.


Since the whole process is now automated with a PowerShell script, it can easily be scheduled in a couple of ways. Learn more about the ways of scheduling ApexSQL tools.
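For example, on a Windows machine the saved PowerShell script can be scheduled with the built-in Task Scheduler from the command line; the task name and script path below are hypothetical:

```
schtasks /create /tn "ApexSQLDiffSharedFolderSync" /sc minute /mo 30 ^
  /tr "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\SyncScriptFolder.ps1"
```

This creates a task that runs the script every 30 minutes, matching the comparison interval described earlier in the article.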

Reviewing outputs

Once the whole system is up and running for a while, all created outputs can be reviewed anytime by all developers, since the folder that contains HTML reports and output summaries is located on a shared network location:

If there is a need to review a specific HTML report, it can be easily identified as all of them are date stamped and by opening it, all comparison differences can be reviewed:

If an e-mail system was set up and an e-mail was received with the subject “ApexSQL Diff synchronization error”, the latest output summary will be attached to the received e-mail and can be analyzed to see what went wrong. Once the attached output summary is opened, the following is shown:

An issue occurred during application execution at 05312017_214538.
Return code: 2
Error description: Switch ‘of’ is not recognized

With a quick check of the common return error codes in the article General usage and the common Command Line Interface (CLI) switches for ApexSQL tools, it can be concluded that /of is an invalid switch that doesn’t exist.

By identifying who ran the last synchronization, checking the CLI switch used as a parameter and comparing it with the CLI switches in the article ApexSQL Diff Command Line Interface (CLI) switches, we can quickly establish that the /on switch should be used instead.


Please download the script(s) associated with this article on our GitHub repository.

Please contact us for any problems or questions with the scripts.

Appendix A

In addition to the automation process explained above, we can also create a function that will check for and create the needed folders for all outputs:

#check for the existence of the Reports and Outputs folders and create them if needed
function CheckAndCreateFolder
{
    param
    (
        [string] $rootFolder,
        [switch] $reports,
        [switch] $outputs
    )

    $location = $rootFolder

    #set the location based on the used switch
    if ($reports -eq $true)
    {
        $location += "\Reports"
    }
    if ($outputs -eq $true)
    {
        $location += "\Outputs"
    }

    #create the folder if it doesn't exist and return its path
    if (-not (Test-Path $location))
    {
        mkdir $location -Force:$true -Confirm:$false | Out-Null
    }
    return $location
}

The next thing is to define the root folder and locations of the outputs folders:

#root folder
$rootFolder = "\\vmware-host\Shared\AutoSF"


Quick tip:

In this case, the root folder should be located on a shared network location (next to the shared script folder), so that all developers can easily review all outputs

#location for HTML reports 
$repLocation = CheckAndCreateFolder $rootFolder -Reports

#location for schema output summaries
$outLocation = CheckAndCreateFolder $rootFolder -Outputs

June 5, 2017

How to automate and schedule SQL Server index defragmentation


SQL Server maintenance is not a one-time event, but rather a part of a continuous process. Apart from regular backups and integrity checks, performance improvements can be achieved with index maintenance. If done at regular intervals, it can free the server to focus on other requests rather than losing time scanning for fragmented indexes.

June 5, 2017

How to backup multiple SQL Server databases automatically

In situations with few databases, maintaining a regular backup routine can be achieved easily, either with the help of a few simple scripts or by configuring a SQL Server Agent job that will perform the backup automatically. However, if there are hundreds of databases to manage, backing up each database manually can prove to be a quite time-consuming task. In this case, it would be useful to create a solution that would back up all, or multiple selected, SQL Server databases automatically, on a regular basis. Furthermore, the solution must not impact server performance or cause any downtime.

June 1, 2017

How to create and manage database backup chains in SQL Server

Each event that causes data loss or disruption of regular daily operations on a SQL Server can be defined as a “disastrous” event. These events include power outages, hardware failure, virus attacks, various types of file corruption, human error, natural disasters, etc. Although there are many methods that are focused on preventing these events, they still occur from time to time, and therefore require proper measures to be addressed. One of the most effective methods for this purpose is the creation of a suitable disaster recovery plan.

May 15, 2017