SQL Server transaction log reader use cases

One of the most common uses of ApexSQL Log is as part of a post-incident investigation, usually involving inadvertently or maliciously changed or damaged data, to determine who made the changes, what exactly was changed, and when and how. With ApexSQL Log, users can dig into the transaction log files and see the exact values before and after a change occurred, as well as review the complete history of changes on the affected rows.

Table of contents:

- Forensic auditing
  - DDL auditing
  - Before and after auditing
  - Continuous auditing
  - Operational reporting
- Restores and recovery
  - Disaster recovery
  - Row-level recovery
  - Point in time recovery
- SQL injection attacks
- Load testing
- Transactional replication
  - Peer to peer, bi-directional replication
  - Reporting servers
  - Cross-platform replication
  - Consolidated reporting
- Migrations and restores
  - No downtime/data loss migrations
  - Zero data loss restores

Forensic auditing


DDL auditing

Instead of tracking DDL changes via database triggers, which can be performance-intensive and intrusive, you can use ApexSQL Log to filter, examine, report on, and export all schema changes made to a database. You can also track the user, machine, and application that made each change, and even isolate and roll back problematic and/or inadvertent changes.


Before and after auditing

Before-and-after auditing can be vital for drilling down to isolate the transaction responsible for “breaking changes”, missing or damaged data, or even maliciously changed data. ApexSQL Log allows you to drill down to the table and individual row associated with a change, based on powerful filter criteria including date, user, object, operation type, etc., and see the values before and after the change. This analysis can even be performed on data changes made before the tool was installed.


Continuous auditing

Imagine if you could store every transaction ever made in your database in a repository without incurring any performance impact, or query the transaction log information just like any other database table.

All of the data that can be read, filtered, viewed, reported on, and exported via the ApexSQL Log graphical user interface can also be continuously streamed directly to a SQL Server database repository, in a process we refer to as “Continuous auditing”.

Achieving continuous auditing is an easy task for ApexSQL Log. By continuously reading transaction log files (and backups), ApexSQL Log creates an uninterrupted audit trail of all transactions executed on the audited databases, ensuring that no transactions are missed or duplicated at any point.

With continuous auditing, ApexSQL Log allows users to have easy access to the transaction log information at all times; it can be stored directly in a SQL Server database for auditing purposes and then accessed or queried on demand just like any other data.
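The “query it like any other table” idea can be sketched with a tiny in-memory repository. The table and column names here (AuditLog, OperationTime, LoginName, TableName, Operation) are hypothetical stand-ins, not ApexSQL Log's actual repository schema, and SQLite stands in for SQL Server only to keep the sketch self-contained:

```python
import sqlite3

# Hypothetical audit-repository table; not ApexSQL Log's real schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE AuditLog (
        OperationTime TEXT,
        LoginName     TEXT,
        TableName     TEXT,
        Operation     TEXT   -- INSERT / UPDATE / DELETE
    )
""")
conn.executemany(
    "INSERT INTO AuditLog VALUES (?, ?, ?, ?)",
    [
        ("2018-08-20 09:15:00", "sa",      "Customers", "UPDATE"),
        ("2018-08-20 09:16:30", "app_svc", "Orders",    "INSERT"),
        ("2018-08-21 23:59:10", "sa",      "Customers", "DELETE"),
    ],
)

# Query audited changes like any other table, e.g. everything a given
# login changed on a given table, in chronological order:
rows = conn.execute(
    """SELECT OperationTime, Operation
       FROM AuditLog
       WHERE LoginName = ? AND TableName = ?
       ORDER BY OperationTime""",
    ("sa", "Customers"),
).fetchall()
print(rows)
```

On a real repository the same kind of filtering would be plain T-SQL against the SQL Server database that ApexSQL Log streams into.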




There is a wealth of information about transaction activity, volume and type in the SQL Server transaction log. This information can be extremely useful in isolating problems and assessing any impacts to the overall system.

With ApexSQL Log and the continuous auditing feature, you can constantly monitor transaction volume to identify suspicious spikes or other anomalies, and create reports on critical metrics like transaction volume by server, database, table, user, etc.

Even better, if incidents are detected, the offending transactions can be isolated and rolled back, quickly and easily.


Operational reporting

By continuously archiving all transactions in a repository, reports can be configured to produce a wealth of valuable aggregate information, such as transaction volumes by day, user, table, application, machine, etc. Reports can be created for top instances, databases, users, tables, etc. by transaction volume, or for transactions by type, e.g., inserts, updates, and deletes, or DML vs. DDL.
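Aggregate reports like these reduce to simple GROUP BY queries over the archived transactions. A minimal sketch, again against a hypothetical AuditLog table (SQLite standing in for the SQL Server repository):

```python
import sqlite3

# Hypothetical archived-transactions table; not the tool's real schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE AuditLog (LoginName TEXT, TableName TEXT, Operation TEXT)"
)
conn.executemany("INSERT INTO AuditLog VALUES (?, ?, ?)", [
    ("sa",      "Orders",    "INSERT"),
    ("sa",      "Orders",    "UPDATE"),
    ("sa",      "Customers", "DELETE"),
    ("app_svc", "Customers", "INSERT"),
])

# Transaction volume by user, highest first -- one of the "top users"
# style reports mentioned above.
by_user = conn.execute(
    """SELECT LoginName, COUNT(*) AS tx_count
       FROM AuditLog
       GROUP BY LoginName
       ORDER BY tx_count DESC"""
).fetchall()
print(by_user)
```

Swapping the GROUP BY column (TableName, Operation, a date bucket) yields the other aggregates: volume by table, by operation type, or by day.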


Restores and recovery

Disaster recovery

After identifying who made the inadvertent changes, and where and when, as described above, one of the critical post-disaster tasks is to roll back those changes and revert both structure and data to their original, pre-disaster state. With ApexSQL Log, you can easily recover from rogue changes by isolating the offending transactions and reversing them with undo scripts created down to the transaction, object, and even row level.


Row-level recovery

Recovering an entire database, or even a table, is not always a viable solution, because any data changes made since the incident may be lost. By isolating specific transactions, ApexSQL Log can identify, isolate, and repair data at the row level with surgical precision, leaving later changes untouched and avoiding the collateral data loss typically associated with a full, or even object-level, restore.
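Conceptually, a row-level undo script reverses each audited operation with its inverse DML, using the before/after values recorded in the transaction log. A minimal sketch of that idea (the function, table, and column names are illustrative; this is not ApexSQL Log's actual script generator, and real tools must also handle quoting and NULLs):

```python
def undo_statement(op, table, key_col, before, after):
    """Return a SQL statement that reverses one audited row operation."""
    if op == "INSERT":       # undo an insert by deleting the new row
        return f"DELETE FROM {table} WHERE {key_col} = {after[key_col]!r};"
    if op == "DELETE":       # undo a delete by re-inserting the old row
        cols = ", ".join(before)
        vals = ", ".join(repr(v) for v in before.values())
        return f"INSERT INTO {table} ({cols}) VALUES ({vals});"
    if op == "UPDATE":       # undo an update by restoring the old values
        sets = ", ".join(
            f"{c} = {v!r}" for c, v in before.items() if c != key_col
        )
        return f"UPDATE {table} SET {sets} WHERE {key_col} = {before[key_col]!r};"
    raise ValueError(f"unknown operation: {op}")

stmt = undo_statement(
    "UPDATE", "Customers", "Id",
    before={"Id": 7, "City": "Paris"},
    after={"Id": 7, "City": "Oslo"},
)
print(stmt)  # UPDATE Customers SET City = 'Paris' WHERE Id = 7;
```

Because each statement targets only the affected row, replaying the undo script leaves all later, unrelated changes in place.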


Point in time recovery

With undo scripts created by ApexSQL Log, you can restore specific database tables back to a point in time, without affecting other tables that do not require a rollback; the scripts undo all specified operations as if they had never occurred.


SQL injection attacks

Quickly detect SQL injection attacks by identifying spikes in transactions or auditing for suspicious activity, pinpoint the exact areas that were affected or damaged, and identify the changed data all the way down to the row level. Create rollback scripts and reverse the changes made to the data to completely repair the damage caused by the incident.
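The spike-detection idea can be sketched as a simple baseline comparison over per-hour transaction counts. The threshold factor and warm-up length below are illustrative assumptions, not anything ApexSQL Log prescribes:

```python
def find_spikes(hourly_counts, factor=3.0, warmup=3):
    """Flag hours whose transaction count far exceeds the running average.

    A naive anomaly check: an hour is suspicious if its count is more
    than `factor` times the mean of all preceding hours. `warmup` hours
    are skipped so the baseline has something to average over.
    """
    spikes = []
    for i in range(warmup, len(hourly_counts)):
        baseline = sum(hourly_counts[:i]) / i
        if hourly_counts[i] > factor * baseline:
            spikes.append(i)
    return spikes

counts = [120, 130, 110, 125, 900, 118]   # hour 4 is anomalous
print(find_spikes(counts))  # [4]
```

Once a suspicious window is flagged, the corresponding transactions can be filtered out of the audit trail, inspected row by row, and rolled back as described above.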



Load testing

ApexSQL Log can record the load on a production server for a particular period of time and persist it to disk, so it can be replayed on one or more staging servers to replicate the activity and transactional throughput.

This system can be integrated into a continuous integration process to replay the production load on a staging server each time it is rebuilt.



Transactional replication

ApexSQL Log can read all of the transactions on a Publisher database and write them directly to one or more Subscribers, or persist them to a file so they can be shipped and executed against Subscribers at a remote location.

ApexSQL Log can manage schema changes as well, keeping Subscribers synchronized with the Publisher schema by replicating DDL transactions.


Peer to peer, bi-directional replication

As easily as ApexSQL Log can perform uni-directional replication, it can replicate bi-directionally between peers as well. The uni-directional process is repeated for each peer, while filtering out transactions executed by ApexSQL Log itself or, if done remotely, by a user or machine designated to execute the changes.


Reporting servers

Maintain a reporting version of a production database by offloading performance-intensive queries to it. Once the reporting database has been created and configured, it can be kept up to date with uni-directional replication from the Publisher database.


Cross-platform replication

SQL Server still doesn’t support cross-platform replication on Linux, but with ApexSQL Log, you won’t have to wait. Replicate from SQL Server on a Windows machine to Linux, or any other combination.


Consolidated reporting

Consolidated reporting servers optimize the performance of the reporting system: they avoid using linked servers or other means of joining the systems, reduce the load on the transactional databases, remove the need to grant reporting users access to every system, and allow the reporting database to be optimized for handling queries rather than transactional loads.

ApexSQL Log can be set up to replicate multiple Publishers into a single Subscriber, consolidating data across disparate databases into a single, centralized reporting database.


Migrations and restores

No downtime/data loss migrations

Migration processes can take a while, especially when working with large databases or in performance-intensive environments. Shutting the production system down for the duration creates a maintenance window which may be unacceptable for system users, while keeping production online means that some transactions are bound to be missing from the migrated database once the migration ends. With ApexSQL Log, those missing transactions can be read from the online transaction log file, from the exact point when the migration began, and replayed on the migrated database, ensuring that no data is lost and eliminating the need for a maintenance window.


Zero data loss restores

While a database restore returns the database to an exact point in time, it comes with one major flaw: any data or structure changes made after the database backup was created are lost. ApexSQL Log comes to the rescue by capturing the missing transactions from the ongoing transaction log file and replaying them after the database restore job, ensuring that no data (or structure) is lost due to the restore.


August 24, 2018