Sometimes you may need your backup files to be smaller. For example:


  • You are running out of space on your backup disk drive.
  • You need to copy backup files to another location over a low-bandwidth connection.
  • Your backup disk drive is slow.


You can find several tools on the market that offer reduced backup sizes, with inherent advantages such as less time to complete backup/restore operations and less space needed to store your backups.

The main disadvantage of these tools is that you have to pay for them. This does not mean they are not worth the money, but sometimes it is possible to satisfy a need with fewer resources. However, if you can afford one of these tools, I would say you can stop reading this note.


A few weeks ago, I was looking for a solution to reduce the size of my backup files and found these four approaches (in addition to commercial tools, of course):

1. NTFS compression




Yes, I'm talking about NTFS compression. I know most DBAs advise against using NTFS compressed folders for backup and restore operations, but there are cases where this solution fits perfectly.


I'll give you my personal feedback on this: on my laptop I have a demo application that uses a SQL Server database. I also have a backup of this database, and after every demo I need to restore the database from the original backup. This way, I can repeat the same demo the next day with a different client.

Well, the size of the backup file is 2.5 GB, but when the file is stored in an NTFS compressed folder it takes only 800 MB. That is nearly 70% of the space saved just by ticking a check box in the advanced properties dialog of the backup folder. There should be a very good reason not to use this feature in this scenario, shouldn't there?

2. Compressing the backup file with a compression tool




With the previous technique I get more free space on my hard disk, but what happens if I need to copy the backup file to another location, or send it by FTP or e-mail? The original 2.5 GB will be copied, since the compressed version of the file only "lives" inside the compressed folder. Outside that folder, the file takes its original size.

To solve this, we typically back up to a normal folder and then compress the backup file with a compression tool to get a "real" compressed file.


Of course, if we are looking for cheap solutions, we will use free compressors. For clarity, the examples here use the gzip tool, but feel free to use your preferred one:

  • Step one: perform a normal backup to disk
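
The statement itself is missing from this copy of the post; judging from the file and database names that appear in the gzip commands further down (database demodb, file C:\BACKUPS\DEMODB.BKP), it would be something like:

```sql
-- Names assumed from the gzip commands later in this note
BACKUP DATABASE demodb
TO DISK = 'C:\BACKUPS\DEMODB.BKP'
```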




  • Step two: compress the backup file. At the command prompt, using gzip, type
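
The literal command is missing from this copy; on Windows it would be along the lines of `gzip -9 C:\BACKUPS\DEMODB.BKP` (the path is assumed from the commands later in the note). A portable sketch of the same step, using a scratch file as a stand-in for the backup:

```shell
# Scratch file standing in for C:\BACKUPS\DEMODB.BKP; a run of zeros
# compresses extremely well, much like many database backups.
tmp=$(mktemp -d)
head -c 1000000 /dev/zero > "$tmp/DEMODB.BKP"
# -9 requests maximum compression; gzip replaces the file with DEMODB.BKP.gz
gzip -9 "$tmp/DEMODB.BKP"
```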




To restore the database from a compressed backup file, you would complete the same steps in reverse order:


  • Step one: decompress the compressed backup file. At the command prompt, using gunzip, type
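
Again, the literal command is missing; it would be something like `gunzip C:\BACKUPS\DEMODB.BKP.gz`, which restores DEMODB.BKP and removes the .gz file. A portable round-trip sketch with placeholder contents:

```shell
tmp=$(mktemp -d)
printf 'fake backup contents' > "$tmp/DEMODB.BKP"
gzip -9 "$tmp/DEMODB.BKP"        # produces DEMODB.BKP.gz, removes the original
gunzip "$tmp/DEMODB.BKP.gz"      # restores DEMODB.BKP, removes the .gz
cat "$tmp/DEMODB.BKP"            # prints the original contents
```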




  • Step two: perform a restore operation from disk
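
The statement is missing from this copy; using the database name and backup path that appear in the gzip commands, it would be something like:

```sql
-- Names assumed from the gzip commands in this note
RESTORE DATABASE demodb
FROM DISK = 'C:\BACKUPS\DEMODB.BKP'
```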




By the way, with this technique I got a compressed demo backup file of only 225 MB, and I successfully sent it to a colleague who had accidentally lost his backup file.

3. On-the-fly compression with named pipes





With the previous technique, we can generate a compressed backup file ready to be archived, copied or sent elsewhere, using less space and bandwidth. However, this technique does not reduce the total amount of disk space you need on your backup drive. In fact, it needs more: during the compression process you need room for both the normal backup file and the compressed file (of course, you can delete the normal backup once the compressed file has been generated, but while compressing you need space for both).


The solution to this issue comes from the ability to generate the compressed backup file directly as the backup is performed, that is, on-the-fly compression. Can we do this? The answer is yes, because:

 Note for SQL Server 2005: SQL Server 2005 does not support named pipes for backup/restore operations, so the method described in this section is not valid for SQL Server 2005. You can still use the method proposed in section 4.


  • SQL Server can back up/restore databases to/from named pipes.
  • gzip and gunzip (like most free compression tools) can redirect their input/output through pipes.
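
As a quick sanity check of that second point, here is a lossless round trip where gzip and gunzip talk only through pipes, with no intermediate file:

```shell
# gzip compresses stdin to stdout (-c), gunzip decompresses it back:
# the whole round trip happens through pipes.
result=$(printf 'hello backup' | gzip -c -9 | gunzip -c)
echo "$result"
```

This prints the original text, confirming nothing was lost on the way through the pipes.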


Putting it all together, you can:


  • Execute this statement in the Query Analyzer tool
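
The statement is missing from this copy; given the pipe name \\.\pipe\demodb used by the gzip command just below, SQL Server 2000's BACKUP ... TO PIPE syntax would give something like:

```sql
-- Pipe name taken from the gzip command that follows
BACKUP DATABASE demodb
TO PIPE = '\\.\pipe\demodb'
```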




  • Immediately after starting the previous statement in Query Analyzer, execute this command at the command prompt


gzip -c -9 <\\.\pipe\demodb > C:\BACKUPS\DEMODB.BKP.gz


What is happening? SQL Server is writing the backup to a named pipe instead of to a standard file (we can imagine a pipe as a chunk of memory SQL Server writes to), while gzip reads the information to be compressed from that named pipe (instead of from a standard file) and writes the compressed data to the compressed file.

In this way, no extra disk space is needed since backup and compression are performed in memory.


Conversely, if you need to execute a restore operation directly from a compressed backup file without decompressing it first, you can:


  • Execute this statement in the Query Analyzer tool
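
The statement is missing from this copy; given the pipe name used by the gunzip command just below, SQL Server 2000's RESTORE ... FROM PIPE syntax would give something like:

```sql
-- Pipe name taken from the gunzip command that follows
RESTORE DATABASE demodb
FROM PIPE = '\\.\pipe\demodb'
```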




  • Immediately after starting the previous statement in Query Analyzer, execute this command at the command prompt


gunzip -c -d  C:\BACKUPS\DEMODB.BKP.gz >\\.\pipe\demodb


What is happening? SQL Server is restoring the database, reading the information from a pipe instead of from a standard file, while gunzip writes the decompressed information to that pipe instead of to a standard file.


I'm sure you know at least one way to chain both actions (the SQL Server command that performs the backup/restore operation and the OS command that performs the compression/decompression).


How did I use this technique? Remember the demo application on my laptop? A week ago I needed to extend the demo to show clients historical data, so I needed to install the historical data database, which is 80 GB, on my laptop. Oops: 80 GB for the data file plus 80 GB for the backup file is 160 GB! Too much even for my new laptop. Fortunately, the DBAs told me that a compressed version of the full backup was generated weekly for other purposes. That was all I needed to hear: I copied the compressed backup file (which is only 8 GB) to my laptop and restored the historical data database directly from it, at a total cost of 88 GB instead of 160 GB. Now, when after several demos my copy of the historical data becomes too "dirty", I refresh its contents from the latest compressed backup file.

4. Virtual Device Interface





With the previous technique, we can generate a compressed backup file while the backup operation is being performed, with no need for extra disk space. However, if you plan to develop a professional tool to manage SQL Server backups in your own way (for example, compressing backups with your own compression algorithm, encrypting them with your own encryption algorithm, and so on), then the recommended interface to interact with SQL Server is the Virtual Device Interface.


I'm neither a highly skilled programmer nor trying to develop a professional backup tool (they already exist on the market), but my curiosity was high and I had the feeling that all the necessary pieces for a proof of concept were at hand:


  • On the SQL Server installation CD you will find several samples. Some of them show how to use Virtual Devices to perform backup/restore operations. The simplest one, called "simple", shows in only 350 lines (including comments and blank lines 😉) how to use Virtual Devices for backup/restore operations.
  • Visit the zlib home page to get everything you need from an API that allows any compression/decompression operation you could dream of (in fact, to complete this sample you only need the zlib.h, zlib.lib, zconf.h and zlib1.dll files).

  • Put it all together by:
    • Adding one line to the simple.cpp file


#include "zlib.h"                     // zlib library



    • Modifying nine lines of the performTransfer function in the simple.cpp file. Below you can see how the function looks after the changes: each replaced line is kept as a comment with its number (//1 to //9), immediately followed by its new version.


void performTransfer (
    IClientVirtualDevice*   vd,
    int                     backup )
{
    //1 FILE *      fh;
    gzFile          fgzh;
    //2 char*       fname = "superbak.dmp";
    char*           fgzname = "superbak.dmp.gz";
    VDC_Command *   cmd;
    DWORD           completionCode;
    DWORD           bytesTransferred;
    HRESULT         hr;

    //3 fh = fopen (fname, (backup)? "wb" : "rb");
    fgzh = gzopen (fgzname, (backup)? "wb6" : "rb");
    //4 if (fh == NULL )
    if (fgzh == NULL )
    {
        //5 printf ("Failed to open: %s\n", fname);
        printf ("Failed to open: %s\n", fgzname);
        return;
    }

    while (SUCCEEDED (hr=vd->GetCommand (INFINITE, &cmd)))
    {
        bytesTransferred = 0;
        switch (cmd->commandCode)
        {
            case VDC_Read:
                //6 bytesTransferred = fread (cmd->buffer, 1, cmd->size, fh);
                bytesTransferred = gzread (fgzh, cmd->buffer, cmd->size);
                if (bytesTransferred == cmd->size)
                    completionCode = ERROR_SUCCESS;
                else
                    // assume failure is eof
                    completionCode = ERROR_HANDLE_EOF;
                break;

            case VDC_Write:
                //7 bytesTransferred = fwrite (cmd->buffer, 1, cmd->size, fh);
                bytesTransferred = gzwrite (fgzh, cmd->buffer, cmd->size);
                if (bytesTransferred == cmd->size )
                {
                    completionCode = ERROR_SUCCESS;
                }
                else
                {
                    // assume failure is disk full
                    completionCode = ERROR_DISK_FULL;
                }
                break;

            case VDC_Flush:
                //8 fflush (fh);
                gzflush (fgzh, Z_SYNC_FLUSH);
                completionCode = ERROR_SUCCESS;
                break;

            case VDC_ClearError:
                completionCode = ERROR_SUCCESS;
                break;

            default:
                // If command is unknown…
                completionCode = ERROR_NOT_SUPPORTED;
        }

        hr = vd->CompleteCommand (cmd, completionCode, bytesTransferred, 0);
        if (!SUCCEEDED (hr))
        {
            printf ("Completion Failed: x%X\n", hr);
            break;
        }
    }

    if (hr != VD_E_CLOSE)
    {
        printf ("Unexpected termination: x%X\n", hr);
    }
    else
    {
        // As far as the data transfer is concerned, no
        // errors occurred.  The code which issues the SQL
        // must determine if the backup/restore was
        // really successful.
        printf ("Successfully completed data transfer.\n");
    }

    //9 fclose (fh);
    gzclose (fgzh);
}



And that's all. Just compile the simple sample, link it with zlib.lib, and you will get an .exe tool that backs up to and restores from compressed files directly, using Virtual Devices.


Did I use this technique? Well, not much. In fact, taking as a starting point another sample also available on the SQL Server installation CD (specifically the mthread sample, which adds multi-stream support), I extended it with command-line parameters for the database name, the compression level and the full path of the compressed backup file. I compiled it and got an interesting prototype. Now the new developments department is evaluating the prototype and making a decision: is it better to develop our own backup tool or to buy one?
