
(II) ON THE FLY COMPRESSION IS EASY ON UNIX, BUT IT IS ALSO EASY ON WINDOWS

 

2. – ZipPipe tool

The ZipPipe tool creates a Windows named pipe, listens on it, and compresses whatever another process writes to it, using the zlib library at a given compression level.

 

Usage:  ZipPipe in_pipe_name compressed_out_file compressionlevel

 

Here is the source code for this tool. To build it, copy the code into a file called ZipPipe.c in the same directory where you have a copy of zlib1.dll, zlib.h, zconf.h and zlib.lib (you can get them from http://www.zlib.net/zlib123-dll.zip ) and invoke the Microsoft C/C++ compiler by typing

 

cl ZipPipe.c /link zlib.lib

 

//start of ZipPipe.c file

// to build the sample type cl ZipPipe.c /link zlib.lib

 

/*

Copyright notice

================

 (C) 1995-2005

 

  This software is provided 'as-is', without any express or implied

  warranty.  In no event will the author be held liable for any damages

  arising from the use of this software.

 

  Permission is granted to anyone to use this software for any purpose,

  including commercial applications, and to alter it and redistribute it

  freely, subject to the following restrictions:

 

  1. The origin of this software must not be misrepresented; you must not

     claim that you wrote the original software. If you use this software

     in a product, an acknowledgment in the product documentation would be

     appreciated but is not required.

  2. Altered source versions must be plainly marked as such, and must not be

     misrepresented as being the original software.

  3. This notice may not be removed or altered from any source distribution.

 

*/

#include <windows.h>

#include <string.h>

#include <stdio.h>

#include "zlib.h"                  // zlib library

 

enum {nBytesToRead = 8192};         // read/write in 8 KB chunks
const int InBufferSize  = 131072;   // 128 KB input buffer
const int OutBufferSize = 131072;   // 128 KB output buffer

 

int main(int argc, char *argv[])

{

    HANDLE        hPipe;
    char          inBuffer[nBytesToRead];
    char          spipearg[50], spipe[300];
    char          sfile[300];
    char          scompress[5];
    unsigned long nBytesRead, bytesTransferred;
    int           bResult, ncompression, badParm = TRUE, completionCode;
    gzFile        fgzh;

 

    if (argc == 4)

    {

        sscanf (argv[1], "%s", spipearg);
        sscanf (argv[2], "%s", sfile);
        sscanf (argv[3], "%d", &ncompression);
        badParm = FALSE;

    }

    if (badParm)
    {
        printf ("usage: ZipPipe in_pipe_name compressed_out_file compressionlevel\n"
            "Creates a pipe, listens on it and compresses what another process writes to it.\n");
        return (1);
    }
    sprintf(scompress, "wb%d", ncompression);   // gzopen mode string, e.g. "wb9"

    strcpy(spipe,"\\\\.\\pipe\\");

    strcat(spipe,spipearg);

    hPipe = CreateNamedPipe(spipe,
          PIPE_ACCESS_DUPLEX, PIPE_WAIT, PIPE_UNLIMITED_INSTANCES,
          OutBufferSize, InBufferSize, 1000, NULL);
    if (hPipe == INVALID_HANDLE_VALUE)
    {
        printf ("Failed to create pipe: %s\n", spipe);
        return (1);
    }

 

    printf("\nListening on pipe %s and writing to file %s with compression level %d...\n", spipe, sfile, ncompression);

    ConnectNamedPipe(hPipe, NULL);

    fgzh = gzopen (sfile, scompress);

 

    if (fgzh == NULL )

    {

        printf ("Failed to open: %s\n", sfile);

        return (1);

    }

 

    completionCode=0;

    bResult=1;

    nBytesRead=1;

    while (bResult && nBytesRead != 0 && completionCode == 0)
    {
        bResult = ReadFile(hPipe, inBuffer, nBytesToRead, &nBytesRead, NULL);
        bytesTransferred = gzwrite (fgzh, inBuffer, nBytesRead);
        if (bytesTransferred != nBytesRead)
        {
            completionCode = 1;
        }
    }

    gzflush (fgzh,Z_SYNC_FLUSH);

    gzclose (fgzh);

    DisconnectNamedPipe(hPipe);

    CloseHandle(hPipe);

    return(0);

}

//end  of ZipPipe.c file

 

 

3. – UnZipPipe tool

The UnZipPipe tool reads and uncompresses the content of a gz-compressed file using the zlib library and writes it to a Windows named pipe.

 

Usage:  UnZipPipe out_pipe_name compressed_in_file

Here is the source code for this tool. To build it, copy the code into a file called UnZipPipe.c in the same directory where you have a copy of zlib1.dll, zlib.h, zconf.h and zlib.lib (you can get them from http://www.zlib.net/zlib123-dll.zip ) and invoke the Microsoft C/C++ compiler by typing

 

cl UnZipPipe.c /link zlib.lib

 

 

//start of UnZipPipe.c file

// to build the sample type cl UnZipPipe.c /link zlib.lib

 

/*

Copyright notice

================

 (C) 1995-2005

 

  This software is provided 'as-is', without any express or implied

  warranty.  In no event will the author be held liable for any damages

  arising from the use of this software.

 

  Permission is granted to anyone to use this software for any purpose,

  including commercial applications, and to alter it and redistribute it

  freely, subject to the following restrictions:

 

  1. The origin of this software must not be misrepresented; you must not

     claim that you wrote the original software. If you use this software

     in a product, an acknowledgment in the product documentation would be

     appreciated but is not required.

  2. Altered source versions must be plainly marked as such, and must not be

     misrepresented as being the original software.

  3. This notice may not be removed or altered from any source distribution.

 

*/

 

#include <windows.h>

#include <string.h>

#include <stdio.h>

#include "zlib.h"                  // zlib library

 

enum {nBytesToRead = 8192};         // read/write in 8 KB chunks
const int InBufferSize  = 131072;   // 128 KB input buffer
const int OutBufferSize = 131072;   // 128 KB output buffer

int main(int argc, char *argv[])

{

    HANDLE        hPipe;
    char          inBuffer[nBytesToRead];
    char          spipearg[50], spipe[300];
    char          sfile[300];
    unsigned long nBytesRead, bytesTransferred;
    int           bResult, badParm = TRUE, completionCode;
    gzFile        fgzh;

 

    if (argc == 3)

    {

        sscanf (argv[1], "%s", spipearg);
        sscanf (argv[2], "%s", sfile);
        badParm = FALSE;

    }

 

    if (badParm)

    {

        printf ("usage: UnZipPipe out_pipe_name compressed_in_file\n"
            "Reads and uncompresses the content of a compressed file and writes it to a pipe.\n");

        return (1);

    }

    strcpy(spipe,"\\\\.\\pipe\\");

    strcat(spipe,spipearg);

    hPipe = CreateNamedPipe(spipe,
          PIPE_ACCESS_DUPLEX, PIPE_WAIT, PIPE_UNLIMITED_INSTANCES,
          OutBufferSize, InBufferSize, 1000, NULL);
    if (hPipe == INVALID_HANDLE_VALUE)
    {
        printf ("Failed to create pipe: %s\n", spipe);
        return (1);
    }

 

    printf("\nReading from file %s and writing to pipe %s...\n", sfile, spipe);

    fgzh = gzopen (sfile, "rb");

 

    if (fgzh == NULL )

    {

        printf ("Failed to open: %s\n", sfile);

        return (1);

    }

 

    ConnectNamedPipe(hPipe, NULL);

 

    completionCode=0;

    bResult=1;

    nBytesRead=1;

    while (bResult && nBytesRead != 0 && completionCode == 0)
    {
        nBytesRead = gzread(fgzh, inBuffer, nBytesToRead);
        WriteFile (hPipe, inBuffer, nBytesRead, &bytesTransferred, NULL);
        if (nBytesRead == 0)
        {
            bResult = 0;
        }
    }

    gzclose (fgzh);

    FlushFileBuffers(hPipe);

    DisconnectNamedPipe(hPipe);

    CloseHandle(hPipe);

    return(0);

}

//end of UnZipPipe.c file

 

 


ON THE FLY COMPRESSION/UNCOMPRESSION IS EASY ON UNIX, BUT ALSO ON WINDOWS

By jcarlossaez1@hotmail.com

 


 

There are a number of situations where the output of one program becomes the input of another (for example, you may want to compress your backup file with the gzip tool and then encrypt the compressed file with the Rijndael algorithm using the GNU aes tool).

 

When both programs support stdin and stdout as a mechanism for input and output, you can easily pipe the output of the first program to the input of the second at the command line. For example:

 

gzip -c mybackupfile.bkp | aes -e -p mypass -o mybackupfile.bkp.gz.enc

 

Unfortunately, this is not always the case, and some programs don't accept stdin and stdout for data input/output (as is the case with the Oracle import/export tools and the Microsoft bcp tool).

 

On Unix, these cases have typically been solved by using the mknod tool to create a named pipe (FIFO) in the file system. Once the pipe is created, the first program can write its output to the pipe as if it were a normal file, and the second program can read its data from that pipe as if it were a normal file.

 

As an example, here is a script widely used by Oracle DBAs to perform on the fly compression of an export operation:

 

# Make a pipe
mknod expdat.dmp p
# Start compressing from the pipe in the background
gzip -c < expdat.dmp > expdat.dmp.gz &
# Wait a moment before starting the export
sleep 5
# Start the export
exp scott/tiger file=expdat.dmp

 

As far as I know, there is no similar native way to perform this operation on the Microsoft Windows operating system.

 

I started thinking about it and finally arrived at a simple solution using Microsoft Windows named pipes, the zlib library (http://www.zlib.net/zlib123-dll.zip ) and a couple of small tools I wrote (less than 100 lines of code each): ZipPipe.exe and UnZipPipe.exe.

 

In section 1, I will show several uses of these tools, basically how to perform on the fly compression with the Oracle imp and exp tools.
To build the necessary binaries (ZipPipe.exe, UnZipPipe.exe and zlib1.dll), I suggest reading sections 2 and 3 of this document, but if you have any problem obtaining these files, just drop me an e-mail at jcarlossaez1@hotmail.com
Note: You can obtain a compiled version of these tools from http://cid-b3378f057444b65c.skydrive.live.com/self.aspx/P%c3%bablico/ZipPipe/zippipe.zip You don't need anything more than these to run the tools.

 

1 HOW TO USE ZipPipe AND UnZipPipe TOOLS

 

1.1 On the Fly compression with Oracle Exp and Imp tools

 

On Unix, scripts allowing on the fly compression of the dump file generated by the exp utility have long been in widespread use.
In the same way, on the fly decompression can be used to perform import operations reading directly from a compressed file.

 

A typical script to perform on the fly compression of the data generated by the exp utility is:

 

# Make a pipe
mknod expdat.dmp p
# Start compressing from the pipe in the background
gzip -c < expdat.dmp > expdat.dmp.gz &
# Wait a moment before starting the export
sleep 5
# Start the export
exp scott/tiger file=expdat.dmp

 

A typical script to execute an import operation reading directly from a compressed file is 

 

# Make a pipe
mknod expdat.dmp p
# Start decompressing to the pipe in the background
gzip -d -c < expdat.dmp.gz > expdat.dmp &
# Wait a moment before starting the import
sleep 5
# Start the import
imp scott/tiger file=expdat.dmp

 

There is no way to accomplish this on Windows platforms in the same manner. When exporting, you first export to a normal file and only then can you compress it (short of using the NTFS built-in compression capability, but that is not what we are looking for).
When importing, you first need to decompress the file, and only then can you import it.

 

However, with the ZipPipe and UnZipPipe tools, you can achieve the same behaviour as you have on Unix.

 

How to perform on the fly compression while exporting on Windows platforms?

 

Until now, your .bat script probably looked something like this:

 

exp scott/tiger file=expdat.dmp
gzip expdat.dmp

 

Only when the exp tool finishes its job can you start compressing the file. This approach needs more disk space and, in most cases, more time.

 

Here is how you can export and compress without any intermediate file:

 

start /MIN ZipPipe EXPPIPE expdat.dmp.gz 9
exp scott/tiger file=\\.\pipe\EXPPIPE

 

The first line starts our "compressor engine", which listens on the named pipe \\.\pipe\EXPPIPE and writes the compressed data to the file expdat.dmp.gz with a compression level of 9 (the level can range from 1 to 9).
When the export tool completes the export operation, the ZipPipe process detects it and exits.

 

How to perform on the fly decompression while importing on Windows platforms?

 

Until now, your .bat script probably looked something like this:

 

gzip -d expdat.dmp.gz
imp scott/tiger file=expdat.dmp

 

Only when the decompression tool finishes its job can you start importing the file. This approach needs more disk space and, in most cases, more time.

 

Here is how you can decompress and import without any intermediate file:

 

start /MIN UnZipPipe IMPPIPE expdat.dmp.gz
imp scott/tiger file=\\.\pipe\IMPPIPE

 

The first line starts our "decompressor engine", which reads from the compressed file expdat.dmp.gz and writes the decompressed data to the named pipe \\.\pipe\IMPPIPE.
When the import tool completes the import operation, the UnZipPipe process detects it and exits.

 

You can think of ZipPipe and UnZipPipe as the equivalent of mknod plus gzip in the Unix environment.
Of course, you can raise many objections to this solution, but it gives you the same functionality you have on Unix, saving a lot of disk space and, most of the time, reducing import/export duration.

 

One more thing: it is a pity these tools don't work with the new expdp and impdp tools available in Oracle 10g. But don't blame Microsoft Windows or these tools themselves: you cannot perform on the fly compression with the new utilities on Unix either, due to a change in their design. (And don't be misled by the COMPRESS parameter of these new tools; it only compresses metadata.)

1.2 On the Fly compression with the Microsoft bcp tool

What a pity! I have only been able to use these tools to compress, on the fly, the output of bcp in native format.
I could not use them for on the fly decompression when importing with bcp, or even when exporting data in non-native format.
Perhaps someone can make those cases work.

 

How do you use bcp to export the pubs..authors table to an uncompressed file and then compress it?

 

Typically, at the command prompt in the source SQL Server you only need to type:

 

            bcp pubs..authors out authors.txt -T -n
            gzip authors.txt

 

The first command exports the data and the second compresses the generated file using the gzip tool.

 

Note that during the process you need enough space to store authors.txt and authors.txt.gz simultaneously, even though you can delete the uncompressed file at the end.

 

How can you use bcp and ZipPipe to export pubs..authors table directly to a compressed file?

 

At the command prompt in the source SQL Server you only need to type:

 

            start /MIN ZipPipe authors_pipe authors.txt.gz 9

            bcp  pubs..authors out  \\.\pipe\authors_pipe -T -n

 

The first command starts our compressor tool (you can think of this step as creating the pipe and starting the background compressor, as in the Unix environment, all in one step).
Second, you only need to start the bcp tool, giving the pipe \\.\pipe\authors_pipe as the file name bcp has to write to.

 

A background process is launched: our compressor tool, which creates and listens on the named pipe \\.\pipe\authors_pipe and saves the compressed data in the file authors.txt.gz. This process ends automatically when bcp completes the export operation.

 

And you can see that authors.txt.gz is the only file generated, all in one step.

 

The rest of the article, where you can find how to build these tools, is available at
 http://spaces.msn.com/members/jcarlossaez/Blog/cns!1phQKLZOcIUsN9Tj5QObzgdw!112.entry

 


FOUR CHEAP WAYS TO GET YOUR SQL SERVER BACKUP FILES SMALLER

 

By jcarlossaez1@hotmail.com


 

Sometimes you may need your backup files to be smaller. For example:

 

  • You are running out of space on your backup disk drive.
  • You need to copy backup files to another location over a low-bandwidth connection.
  • Your backup disk drive is slow.

 

You can find several tools on the market that offer reduced backup sizes, with attendant advantages such as less time to complete backup/restore operations and less space needed to store your backups.
The main disadvantage of these tools is that you have to pay for them. This does not mean they are not worth the money, but sometimes it is possible to satisfy a need with fewer resources. That said, if you can afford one of these tools, you can stop reading this note now.

 

Some weeks ago, I was looking for a solution to reduce the size of my backup files and found these four approaches (in addition to commercial tools, of course):

 

1. – USE NTFS BUILT-IN COMPRESSION CAPABILITY

 

Yes, I'm talking about NTFS compression. I know most DBAs disagree with using NTFS compressed folders for backup and restore operations, but there are some cases where this solution fits perfectly.

 

I'll give you my personal feedback on this: on my laptop, I have a demo application which uses a SQL Server database. I also have a backup of this database, and after every demo I need to restore the database from the original backup. This way, I can repeat the same demo the next day with a different client.
Well, the backup file is 2.5 GB, while stored in an NTFS compressed folder it uses only 800 MB; that is, nearly 70% of the space is reclaimed just by clicking a check box in the advanced properties dialog of the backup folder. There would have to be a very good reason not to use this feature in this scenario, wouldn't there?

 

2. – USE EXTERNAL COMPRESSORS

 

With the previous technique I get more free space on my hard disk, but what happens if I need to copy the backup file to another location, or send it by FTP or e-mail? The original 2.5 GB will be copied, since the compressed version of the file only "lives" inside the compressed folder; outside that folder, the file takes its original size.
To solve this, we typically back up to a normal folder and then compress the backup file with a compression tool to get a "real" compressed file.

 

Of course, since we are looking for cheap solutions, we will use free compressors. For clarity, here is an example using gzip, but feel free to use your preferred tool:

  • Step one: perform a normal backup to disk

 

BACKUP DATABASE DEMODB TO DISK='C:\BACKUPS\DEMODB.BKP'

 

  • Step two: compress the backup file. At the command prompt, using gzip, type

 

gzip -c -9 C:\BACKUPS\DEMODB.BKP > C:\BACKUPS\DEMODB.BKP.gz

 

To restore the database from a compressed backup file, you complete the inverse steps in reverse order:

 

  • Step one: decompress the compressed backup file. At command prompt and using gunzip, type

 

gunzip -c -d C:\BACKUPS\DEMODB.BKP.gz > C:\BACKUPS\DEMODB.BKP

 

  • Step two: perform a restore operation from disk

 

RESTORE DATABASE DEMODB FROM DISK='C:\BACKUPS\DEMODB.BKP'

 

By the way, with this technique I got a compressed demo backup file of only 225 MB, and I successfully sent it to my colleague who had accidentally lost his backup file.

 

 

3. – PERFORM ON THE FLY COMPRESSION USING PIPES

 

With the previous technique, we can generate a compressed backup file ready to be archived, copied or sent elsewhere, using less space and bandwidth. However, this technique does not reduce the total amount of disk space needed on your backup drive. In fact, it needs more, because you must hold the normal backup plus the compressed backup file at the same time (you can delete the normal backup once the compressed file is generated, but during the compression process you need space for both).

 

The solution to this issue comes from the ability to generate the compressed backup file directly as the backup is performed, that is, on the fly compression. Can we do this? The answer is yes, because:

Note for SQL Server 2005: SQL Server 2005 does not support named pipes for backup/restore operations, so the method described in this section is not valid for SQL Server 2005. You can still use the method proposed in section 4, and you can also read http://jcarlossaez.blogspot.com/2007/01/backup-and-restore-tool-for-sql-server.html

 

  • SQL Server has the ability to back up/restore databases to/from pipes.
  • gzip and gunzip (like most free compression tools) have the ability to redirect their input/output to/from pipes.

 

Putting all together you can:

 

  • Execute this statement in the Query Analyzer tool

 

BACKUP DATABASE DEMODB TO PIPE='\\.\pipe\demodb'

 

  • Immediately after starting the previous statement in Query Analyzer, execute this command at the command prompt

 

gzip -c -9 < \\.\pipe\demodb > C:\BACKUPS\DEMODB.BKP.gz

 

What is happening? SQL Server is writing the content of the backup to a named pipe instead of to a standard file (you can imagine the pipe as a chunk of memory SQL Server writes to), while gzip reads the data to be compressed from that named pipe (instead of from a standard file) and writes the compressed result to the compressed file.
In this way, no extra disk space is needed, since the hand-off between backup and compression happens in memory.

 

Conversely, if you need to execute a restore operation directly from a compressed backup file, without decompressing that file first, you can:

 

  • Execute this statement in the Query Analyzer tool

 

RESTORE DATABASE DEMODB FROM PIPE='\\.\pipe\demodb'

 

  • Immediately after starting the previous statement in Query Analyzer, execute this command at the command prompt

 

gunzip -c C:\BACKUPS\DEMODB.BKP.gz > \\.\pipe\demodb

 

What is happening? SQL Server is restoring the database reading the information from a pipe instead of from a standard file, while gunzip writes the decompressed information to that pipe instead of to a standard file.

 

I'm sure you know at least one way to chain both actions (the SQL Server statement that performs the backup/restore operation and the OS command that performs the compression/decompression).

 

How did I use this technique? Remember the demo application on my laptop? A week ago I needed to extend the demo to show clients historical data, so I had to install the historical data database on my laptop, and that database is 80 GB. Oops: 80 GB for the datafile plus 80 GB for the backup file makes 160 GB! Too much even for my new laptop. Fortunately, the DBAs told me that a compressed version of the full backup was generated weekly for other purposes. That was all I needed to hear: I copied the compressed backup file (only 8 GB) to my laptop and restored the historical data database directly from it, at a total cost of 88 GB instead of 160 GB. Now, when after several demos my copy of the historical data becomes too dirty, I refresh its content from the latest compressed backup file.

 

 

4. – PERFORM ON THE FLY COMPRESSION USING VIRTUAL DEVICES

 

With the previous technique, we can generate a compressed backup file as the backup operation is performed, with no need for extra disk space. However, if you plan to develop a professional tool to manage SQL Server backups in your own way (for example, compressing those backups with your own compression algorithm, encrypting them with your own encryption algorithms and so on), then the recommended interface for interacting with SQL Server is the Virtual Device Interface (VDI).

 

I'm neither a highly skilled programmer nor trying to develop a professional backup tool (those already exist on the market), but my curiosity was high and I had the feeling that all the pieces needed for a proof of concept were at hand:

 

  • On the SQL Server installation CD, you have several samples. Some of these samples show you how to manage virtual devices to perform backup/restore operations. The simplest of them, called "simple", shows in only 350 lines (including comments and blank lines 😉) how to use virtual devices to perform backup/restore operations.
  • Visit www.zlib.net to get everything you need: an API that supports any compression/decompression operation you could dream of (in fact, to complete this sample you only need the zlib.h, zlib.lib, zconf.h and zlib1.dll files).

  • Put all together by:
    • Adding one line to the simple.cpp file

 

#include "zlib.h"                     // zlib library

 

 

    • Modifying these 9 lines of the performTransfer function in the simple.cpp file. Below you can see what the function looks like after the old lines have been suppressed and the new ones added (each old line is kept as a comment numbered //1 to //9, immediately followed by its replacement).

 

void performTransfer (

    IClientVirtualDevice*   vd,

    int                     backup )

{

    //1 FILE *      fh;

    gzFile          fgzh;

    //2 char*       fname = "superbak.dmp";

    char*           fgzname = "superbak.dmp.gz";

    VDC_Command *   cmd;

    DWORD           completionCode;

    DWORD           bytesTransferred;

    HRESULT         hr;

 

    //3 fh = fopen (fname, (backup)? "wb" : "rb");

    fgzh = gzopen (fgzname, (backup)? "wb6" : "rb");

    //4 if (fh == NULL )

    if (fgzh == NULL )

    {

        //5 printf ("Failed to open: %s\n", fname);

        printf ("Failed to open: %s\n", fgzname);

        return;

    }

 

    while (SUCCEEDED (hr=vd->GetCommand (INFINITE, &cmd)))

    {

        bytesTransferred = 0;

        switch (cmd->commandCode)

        {

            case VDC_Read:

                //6 bytesTransferred = fread (cmd->buffer, 1, cmd->size, fh);

                bytesTransferred = gzread (fgzh,cmd->buffer, cmd->size);

                if (bytesTransferred == cmd->size)

                    completionCode = ERROR_SUCCESS;

                else

                    // assume failure is eof

                    completionCode = ERROR_HANDLE_EOF;

                break;

 

            case VDC_Write:

                //7 bytesTransferred = fwrite (cmd->buffer, 1, cmd->size, fh);

                bytesTransferred = gzwrite (fgzh,cmd->buffer,cmd->size);

                if (bytesTransferred == cmd->size )

                {

                    completionCode = ERROR_SUCCESS;

                }

                else

                    // assume failure is disk full

                    completionCode = ERROR_DISK_FULL;

                break;

 

            case VDC_Flush:

                //8 fflush (fh);

                gzflush (fgzh,Z_SYNC_FLUSH);

                completionCode = ERROR_SUCCESS;

                break;

   

            case VDC_ClearError:

                completionCode = ERROR_SUCCESS;

                break;

 

            default:

                // If command is unknown…

                completionCode = ERROR_NOT_SUPPORTED;

        }

 

        hr = vd->CompleteCommand (cmd, completionCode, bytesTransferred, 0);

        if (!SUCCEEDED (hr))

        {

            printf ("Completion Failed: x%X\n", hr);

            break;

        }

    }

 

    if (hr != VD_E_CLOSE)

    {

        printf ("Unexpected termination: x%X\n", hr);

    }

    else

    {

        // As far as the data transfer is concerned, no

        // errors occurred.  The code which issues the SQL

        // must determine if the backup/restore was

        // really successful.

        //

        printf ("Successfully completed data transfer.\n");

    }

 

    //9 fclose (fh);

    gzclose (fgzh);

}

 

And that's all. Just compile the simple sample, link it with zlib.lib, and you will get an .exe tool that backs up directly to, and restores directly from, compressed files using virtual devices.

 

Did I use this technique? Well, not much. In fact, taking as a starting point another sample from the SQL Server installation CD (the mthread sample, which adds multi-stream support), I extended it with command line parameters for the database name, the compression level and the full path of the compressed backup file. I compiled it and got an interesting prototype. Now the new developments department is evaluating the prototype and making a decision: is it better to develop our own backup tool or buy one?
