Bash Script: Incremental Encrypted Backups with Duplicity (Amazon S3)

Update (5/6/12): I have not been actively developing this script lately. Zertrin has stepped up to take over the reins and offers an up-to-date and modified version with even more capabilities. Check it out over at github.

This bash script was designed to automate and simplify the remote backup process of duplicity on Amazon S3. Once the script is configured, you can easily back up, restore, verify and clean your data (either via cron or manually) without having to remember lots of different command options and passphrases.

Most importantly, you can easily back up the script and your gpg key in a convenient passphrase-encrypted file. This comes in handy if/when your machine ever goes belly up. Code is hosted at github.

How to use

To get the latest code you can download a zip copy of the source or clone the git repository like so:

  • git clone git://github.com/thornomad/dt-s3-backup.git

You’ll also need to have a number of things in place in order to use this script, specifically: gpg, duplicity, an Amazon S3 account, and (optionally) s3cmd. If you need help getting all of these in order, I wrote another post about putting it all together. It’s not all that difficult, but there are a few pieces of the puzzle to put in place.

Once you have the script, you will need to fill out the foobar variables with your own specific information. I suggest testing the script first on a small directory of files with a local directory as your destination, to make sure everything is working.
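
For example, a throwaway test configuration might look something like this (hypothetical paths; adjust them to your own machine):

    ROOT="/home/user"
    INCLIST=( "/home/user/small-test-dir" )
    DEST="file:///tmp/dt-s3-backup-test/"

Once a local test run works, you can switch DEST back to your s3+http:// bucket.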

Usage

From the README file:

COMMON USAGE EXAMPLES
=====================

* View help:
    $ dt-s3-backup.sh

* Run an incremental backup:
    $ dt-s3-backup.sh --backup

* Force a one-off full backup:
    $ dt-s3-backup.sh --full

* Restore your entire backup:
    $ dt-s3-backup.sh --restore
    You will be prompted for a restore directory

    $ dt-s3-backup.sh --restore /home/user/restore-folder
    You can also provide a restore folder on the command line.

* Restore a specific file in the backup:
    $ dt-s3-backup.sh --restore-file
    You will be prompted for a file to restore to the current directory

    $ dt-s3-backup.sh --restore-file img/mom.jpg
    Restores the file img/mom.jpg to the current directory

    $ dt-s3-backup.sh --restore-file img/mom.jpg /home/user/i-love-mom.jpg
    Restores the file img/mom.jpg to /home/user/i-love-mom.jpg

* List files in the remote archive:
    $ dt-s3-backup.sh --list-current-files

* Verify the backup:
    $ dt-s3-backup.sh --verify

* Backup the script and gpg key (for safekeeping):
    $ dt-s3-backup.sh --backup-script
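
Since the script is designed to run unattended, a common setup is to call it from cron. A minimal crontab entry might look something like this (hypothetical install path; adjust the schedule to taste):

    # run an incremental backup every night at 2:00 am
    0 2 * * * /usr/local/bin/dt-s3-backup.sh --backup > /dev/null 2>&1

The script writes its own logfile (per the LOGDIR and LOG_FILE settings), so the cron output itself can be discarded. If you do run it from cron, keep the exported PASSPHRASE line in the configuration so duplicity does not prompt for it.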

Changes

You can view the changelog at github.

185 Comments (newest first)

  1. Marc says:

    So how to get this gpg key going. I’ve jumped through all the hoops to create those darn things and now I’m finding I still can’t backup because the key is not trusted. What’s worse apparently, other people need to “trust” my key, which suggests I have to start begging around “please trust my key” to be able to get this backup thing going.

    I must be wrong (I hope) so some hints would be welcome on how to get this working!

    Thx, Marc

  2. Joey says:

    I seem to be getting this error ->

    ./dt-s3-backup.sh 
    ./dt-s3-backup.sh: line 268: syntax error near unexpected token `('
    ./dt-s3-backup.sh: line 268: `  echo ">> To restore these files, run the following (remember your password):"'

    any ideas?

    • Damon says:

      I am not sure – it sounds like maybe the file got corrupted somehow (perhaps an unclosed parenthesis or something) – did you modify the file in any way after you got the source? Was it working before and then just stopped magically? What system are you running it on? (unix, linux, mac, etc)

  3. Bagodonuts says:

    Hi, Thanks for this script! I am excited to get it working.

    I am currently receiving this error ->

    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
        An unexpected error has occurred.
      Please report the following lines to:
       s3tools-bugs@lists.sourceforge.net
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    
    Problem: AttributeError: 'S3Error' object has no attribute 'Code'
    S3cmd:   0.9.9.91
    
    Traceback (most recent call last):
      File "/usr/bin/s3cmd", line 1736, in 
        main()
      File "/usr/bin/s3cmd", line 1681, in main
        cmd_func(args)
      File "/usr/bin/s3cmd", line 44, in cmd_du
        subcmd_bucket_usage(s3, uri)
      File "/usr/bin/s3cmd", line 70, in subcmd_bucket_usage
        if S3.codes.has_key(e.Code):
    AttributeError: 'S3Error' object has no attribute 'Code'
    
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
        An unexpected error has occurred.
        Please report the above lines to:
       s3tools-bugs@lists.sourceforge.net
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    here are a few sections of the log ->

    -----------[ Duplicity Cleanup ]-----------
    /usr/bin/duplicity remove-all-but-n-full 2 --force --encrypt-key=mykey --sign-key=mykey s3://path/to/
    
    ---------[ Source File Size Information ]---------
    /home/user   180K
    /home/user    80K
    /home/user    36K
    
    ------[ Destination File Size Information ]------
    Current Remote Backup File Size:

    • Damon says:

      The error should have nothing to do with the script, but with s3cmd – I am not sure what’s going on. You should test your s3cmd install and see if it is throwing that error on every command you run against it or just the ones in the script.

  4. peter host says:

    Hi !

    Every once in a while, you stumble on a script which is so well thought out, written (and useful) and solves so many problems you just can’t believe it. Example: I was in the process of writing a little script to backup the script and GPG info, then re-read the docs, and bang: dt-s3-backup.sh --backup-script
    You thought of everything.

    I’ve been using dt-s3-backup for less than 2 days and I just can’t believe I lived without it.

    Best S3 backuper script around (and I’ve tested a few!)

    Thank you so much

    • Damon says:

      Hey Peter –

      Thanks so much for the kind words; glad you found it useful.

      If you have any new ideas or ways to improve on it or want to contribute in any way, please fork it on github and I would love to include improvements.

      Damon

  5. […] backups I recommend use duplicity and DT-S3-Backup bash […]

  6. […] http://blog.damontimm.com/bash-script-incremental-encrypted-backups-duplicity-amazon-s3/ February 7, 2010 9:57 am Jeremy Bouse S3 with Duplicity is what I use to backup my Linode VPS. February 7, 2010 5:18 am Anirvan Linode runs its own internal managed backup program, which should work great for your needs. Check it out at http://www.linode.com/backups/ July 21, 2010 1:53 am Seth I agree that you’re best served by using the backup service provided by Linode though if you want to do it yourself, Amazon’s S3 is really cheap and simple to use. […]

  7. aleks says:

    Hi,
    now it has happened :-)
    It is the first time I am restoring a real file – not for a drill, a real real file.

    I noticed that I cannot use dt-s3-backup.sh to get an older version of the lost file – I have to use duplicity itself for it.
    Am I right with this?
    Found no update for http://github.com/thornomad/dt-s3-backup/issues#issue/2

    Nice script, anyway – but it would be very nice to use “-t D3” or something like that directly with dt-s3-backup.sh

    Aleks

    • Damon says:

      Hey! Yes you are right: unfortunately you will have to dig out old fashioned duplicity to restore something from a specific time period. We have not moved those changes into the script just yet … you can restore using the script but it will just take your last upload.

      Good luck!

  8. Eli says:

    Hello,

    Is there any way to tell if it is actually working or not?

    After typing and confirming my GPG passphrase, I get no further output. Looking in the logs, I see:

    Failed to create bucket (attempt #4) 'backups' failed (reason: S3CreateError: S3CreateError: 409 Conflict
    
    BucketAlreadyExistsThe requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.backups502A0BC09A77D71DBIgzasr762L8wynkk1WWQu5FSECrHsNcjlCR8A6/cGX7IDmjKG5gDrzAce4JN7RM)

    If I leave it running this error repeats hundreds of times. Looking at my bandwidth usage on my router, it doesn’t appear that there is any significant data being sent.

    • Eli says:

      It looks like the XML got stripped by this comment form — the error is actually a short XML document.

    • Damon says:

      Hi Eli –

      To answer your question, you can check if it worked a few different ways. You could: [1] view your Amazon S3 buckets (using a program like Cyberduck) and see if the bucket was created and your duplicity backups exist; [2] restore your backup (using the script) to a different location and see if it worked; or [3] trust the log output.

      Your log output seems to be saying that the bucket name you chose is already in use. I would pick a new (unique) name for your bucket and try again.
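
      If you are comfortable on the command line, duplicity itself can also tell you what it thinks is in the bucket; a quick sanity check might look something like this (hypothetical bucket name, and it assumes your AWS keys and PASSPHRASE are exported the same way the script exports them):

      duplicity collection-status s3+http://my-unique-bucket-name/

      That will list whatever full and incremental backup sets it finds at the destination.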

  9. Gordon Schulz says:

    Seriously nice script. Thanks for sharing!

  10. Mehul says:

    Anyone getting the following error

    Oops!! dt-s3-backup.sh was unable to run!
    We are missing one or more important variables at the top of the script.
    Check your configuration because it appears that something has not been set yet.

    Please check your INCLIST. It cannot be empty. Correcting this will fix the error.

    • Damon says:

      Well noted – I created a bug/ticket. Hopefully I can get around to fixing this in the next few weeks. Thanks.

      • Mehul says:

        It seems there’s some problem with this approach after the latest updates.
        I have set my home directory as the ROOT and also the INCLIST. Earlier it was working fine, but now it doesn’t seem to descend into sub-directories; it just looks at the top-level directories. This happened after I upgraded to boto-2a, since the newer version of duplicity is broken with the version of boto that CentOS ships with.

        • Damon says:

          Are you getting any errors?

          You can troubleshoot the output command of the script by removing the comment hash (#) on line 109:

          ECHO=$(which echo)

          This will cause the script to echo to the terminal what it is attempting to run.

          You can tell if the error is with the script forming a bad command, or with your install of duplicity itself.

          Good luck.

          • Mehul says:

            I am not able to find anything useful

            Here are the steps I performed.
            1. Create directories test1 and test2
            2. Do a backup
            3. Add 4 new files – testfile1 and testfile2 each in both the above directories
            4. Run the script again.

            Here’s what I got

            -------- START DT-S3-BACKUP SCRIPT --------

            Local and Remote metadata are synchronized, no sync needed.
            Last full backup date: Thu Nov 18 17:39:41 2010

            After that I enabled the ECHO variable
            Here’s the output I got

            TEST RUN ONLY: Check the logfile for command output.

            and the log file

            -------- START DT-S3-BACKUP SCRIPT --------

            /usr/bin/duplicity -v3 --full-if-older-than 14D --s3-use-new-style --encrypt-key=KEY --sign-key=MYKEY --include=/home/mehul/Documents/ --exclude=** /home/mehul/Documents s3+http://BUCKET_NAME
            -----------[ Duplicity Cleanup ]-----------
            /usr/bin/duplicity remove-all-but-n-full 2 --force --encrypt-key=KEY --sign-key=MYKEY s3+http://bucket_name

            ---------[ Source File Size Information ]---------
            /home/mehul/Documents/ 540K

            ------[ Destination File Size Information ]------
            Current Remote Backup File Size: 481k

            -------- END DT-S3-BACKUP SCRIPT --------

            • Damon says:

              Have you tried to actually restore your backup after you have added new files to see if they are being restored? I am not sure why, but the reporting in duplicity sometimes (seems to me) is flakey.

              Also – are they actual files or just empty placeholders you touch?

              • Mehul says:

                I will try the restore and see how it goes.
                The files in my test directories are just placeholders. But, the behaviour is the same in the test backups as well the live backups.

  11. Arend says:

    Thank you for providing us with this excellent script. I use for quite some time now and it rocks!

  12. dark-saber says:

    Amazing script, thanks for sharing it. But I have problems when I add paths with spaces into INCLIST. I tried to use backslashes in that variable but that didn’t solve the problem.

  13. […] Bash Script: Incremental Encrypted Backups with Duplicity (Amazon S3) « damontimm.com (tags: s3 aws duplicity backup incremental sysadmin) […]

  14. Patrick says:

    Great script, thank you very much! I’m happily running my daily backups with it.
    Since the backup is incremental: Is it possible to restore a certain version of a file (e.g. not the latest version, but the version it was a few days/weeks ago)?

    • Damon says:

      Hey Patrick – right now, we don’t have an option to restore from a specific time period built in yet … you can, of course, just run a normal duplicity command to get the file … that would be a nice feature, actually. I will open a ticket and see if we can get to it at some point. If you want to contribute the code, I am happy to include it. –Damon
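
      Until that option makes it into the script, a plain duplicity command to pull an older version of a single file might look something like this (hypothetical bucket and file names; -t takes the age of the backup you want to restore from):

      duplicity -t 12D --file-to-restore img/mom.jpg s3+http://my-bucket/ /home/user/mom-12-days-ago.jpg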

      • Peter says:

        This would be a really useful addition to the code, and would allow me to forget the Duplicity commands a bit more ;)

  15. Samuel says:

    Hi!

    I’m trying to use this script, it seems great… but I get this error:

    dt-s3-backup.sh: 63: Syntax error: "(" unexpected

    That line is the INCLIST:

    INCLIST=( "/home/myuser/" )

    If I remove the ‘(‘ and ‘)’ then I get the same error but with the EXCLIST :(

    I’m using Ubuntu 10.04 and latest duplicity.

    Thank you in advance.

    • Damon says:

      Without seeing how you set everything up, it is hard to guess — but, if I had to, I would venture that maybe you have a simple syntax error somewhere earlier in your code … consider trying again with a fresh copy of the script and replace the foobar variables with your own (careful of quotes, etc).

      Looking at the source code, I see that line 63 is a comment in the original, which means you may have moved some stuff around …

      Just a guess: syntax problem.

      Hope you find it!

      • Samuel says:

        Thank you very much for your fast reply Damon ;)

        Well, I tried with a fresh new copy like you suggested, and the previous problem disappeared. But now I have another error message, this one:

        Specified archive directory '/home/samuel/.cache/duplicity/daedb76a93a9a5c65793185c65874bc3' does not exist, or is not a directory
        Specified archive directory '/home/samuel/.cache/duplicity/daedb76a93a9a5c65793185c65874bc3' does not exist, or is not a directory

        I can’t figure out why it is telling me this, I tried to exclude that directory from the backup but it says the same even when doing that :(

        This is my script configuration:

        # AMAZON S3 INFORMATION
        export AWS_ACCESS_KEY_ID="my_access_key"
        export AWS_SECRET_ACCESS_KEY="my_secret_key"
        
        # If you aren't running this from a cron, comment this line out
        # and duplicity should prompt you for your password.
        export PASSPHRASE="my_passphrase"
        
        # Specify which GPG key you would like to use (even if you have only one).
        GPG_KEY="my_key"
        
        # The ROOT of your backup (where you want the backup to start);
        # This can be / or somwhere else -- I use /home/ because all the 
        # directories start with /home/ that I want to backup.
        ROOT="/home/samuel/backups-mysql"
        
        # BACKUP DESTINATION INFORMATION
        # In my case, I use Amazon S3 use this - so I made up a unique
        # bucket name (you don't have to have one created, it will do it
        # for you).  If you don't want to use Amazon S3, you can backup 
        # to a file or any of duplicity's supported outputs.
        #
        # NOTE: You do need to keep the "s3+http:///" format
        # even though duplicity supports "s3:///".
        #DEST="s3+http://backup-bucket/backup-folder/"
        DEST="file:///home/samuel/new-backup-test/"
        
        # INCLUDE LIST OF DIRECTORIES
        # Here is a list of directories to include; if you want to include 
        # everything that is in root, you could leave this list empty (I think).
        #INCLIST=( "/home/*/Documents" \ 
        #    	  "/home/*/Projects" \
        #	      "/home/*/logs" \
        #	      "/home/www/mysql-backups" \
        #        ) 
        
        INCLIST=( "/home/samuel/backups-mysql" ) # small dir for testing
        
        # EXCLUDE LIST OF DIRECTORIES
        # Even though I am being specific about what I want to include, 
        # there is still a lot of stuff I don't need.           
        EXCLIST=( "/home/*/Trash" \
        	      "/home/*/Projects/Completed" \
        	      "/**.DS_Store" "/**Icon?" "/**.AppleDouble" "/home/samuel/.cache" "/home/samuel/.*" \ 
                   ) 
        
        # STATIC BACKUP OPTIONS
        # Here you can define the static backup options that you want to run with
        # duplicity.  I use both the `--full-if-older-than` option plus the
        # `--s3-use-new-style` option (for European buckets).  Be sure to separate your
        # options with appropriate spacing.
        STATIC_OPTIONS="--full-if-older-than 14D --s3-use-new-style"
        
        # FULL BACKUP & REMOVE OLDER THAN SETTINGS
        # Because duplicity will continue to add to each backup as you go,
        # it will eventually create a very large set of files.  Also, incremental 
        # backups leave room for problems in the chain, so doing a "full"
        # backup every so often isn't not a bad idea.
        #
        # You can either remove older than a specific time period:
        #CLEAN_UP_TYPE="remove-older-than"
        #CLEAN_UP_VARIABLE="31D"
        
        # Or, If you would rather keep a certain (n) number of full backups (rather 
        # than removing the files based on their age), you can use what I use:
        CLEAN_UP_TYPE="remove-all-but-n-full"
        CLEAN_UP_VARIABLE="2"
        
        # LOGFILE INFORMATION DIRECTORY
        # Provide directory for logfile, ownership of logfile, and verbosity level.
        # I run this script as root, but save the log files under my user name -- 
        # just makes it easier for me to read them and delete them as needed. 
        
        LOGDIR="/home/samuel/logs/test2/"
        LOG_FILE="duplicity-`date +%Y-%m-%d-%M`.txt"
        LOG_FILE_OWNER="samuel:samuel"
        VERBOSITY="-v3"
        
        • Samuel says:

          Forget about my latest comment, I found it was a permission problem, so sorry ;)

          Finally I get it working in the local test.

          Thank you very much for this great script and your kindly support :)

          • Damon says:

            Glad you got it working – you can always troubleshoot your script by un-commenting the ECHO portion … and reading the output.

  16. Alex says:

    Hi,

    Tried to use your script and have got the following error:

    ./dt-s3-backup.sh --full

    Use of new-style (subdomain) S3 bucket addressing was requested, but does not seem to be supported by the boto library. Either you need to upgrade your boto library or duplicity has failed to correctly detect the appropriate support.

    I have the latest duplicity and boto library available for CentOS via yum. Any idea what the problem might be please?

    I tried to create a small script myself and it did work so now I am trying to figure out what I did wrong with this script…

    Regards
    Alex

    • Damon says:

      Well, if you are not in Europe you can remove the --s3-use-new-style from the STATIC_OPTIONS configuration … otherwise, sounds like you need to upgrade your py-boto packages.
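
      For example, the relevant line in the configuration would then just be something like:

      STATIC_OPTIONS="--full-if-older-than 14D"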

      • Alex says:

        Thanks Damon, I am in Europe (UK) and I have python-boto 1.0a (1.el5 release) version. Unfortunately, I could not find a newer version via yum. Do you think I need to find a later RPM and install it? (Boto has 1.9b)

        • Damon says:

          To be honest, I am not really sure. I don’t use the --s3-use-new-style flag and I am only running version 0.9d of python-boto … I would post the question to the duplicity mailing list … Good luck!

  17. Kevin says:

    If INCLIST requires full paths, then what’s the point of the ROOT parameter?

    • Damon says:

      Duplicity’s most basic command line structure is:

      • duplicity src dest

      In our case, ROOT is our src … you do not have to have an include or an exclude list (those are optional) but you do need a src and a dest … (at least, as far as I know).

  18. Kevin says:

    Well, I’m not getting the same errors now that I’ve upgraded to version 0.6.08b, but after an hour of backing up to S3, I get this. Any ideas?

    # /usr/local/sbin/dt-s3-backup.sh --backup
    Traceback (most recent call last):
      File "/usr/bin/duplicity", line 1239, in ?
        with_tempdir(main)
      File "/usr/bin/duplicity", line 1232, in with_tempdir
        fn()
      File "/usr/bin/duplicity", line 1205, in main
        full_backup(col_stats)
      File "/usr/bin/duplicity", line 416, in full_backup
        globals.backend)
      File "/usr/bin/duplicity", line 294, in write_multivol
        globals.gpg_profile, globals.volsize)
      File "/usr/lib/python2.4/site-packages/duplicity/gpg.py", line 279, in GPGWriteFile
        data = block_iter.next(min(block_size, bytes_to_go)).data
      File "/usr/lib/python2.4/site-packages/duplicity/diffdir.py", line 505, in next
        result = self.process(self.input_iter.next(), size)
      File "/usr/lib/python2.4/site-packages/duplicity/diffdir.py", line 631, in process
        data, last_block = self.get_data_block(fp, size - 512)
      File "/usr/lib/python2.4/site-packages/duplicity/diffdir.py", line 658, in get_data_block
        buf = fp.read(read_size)
      File "/usr/lib/python2.4/site-packages/duplicity/diffdir.py", line 415, in read
        buf = self.infile.read(length)
      File "/usr/lib/python2.4/site-packages/duplicity/diffdir.py", line 384, in read
        buf = self.infile.read(length)
    IOError: [Errno 22] Invalid argument
    
    • Damon says:

      If you are generating an appropriate command line argument I would imagine it is a duplicity issue — I’m not active in duplicity’s development … probably a good question for launchpad.

      Maybe I should add a way to generate the script “output command” so it’s easier to debug …

    • Kevin says:

      In case anyone runs across the same issue, it appears the offending directory is /home/virtfs. If you are running a WHM/CPanel setup and want to use duplicity to backup your server, then you’ll want to exclude /home/virtfs from your backups.
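
      With this script that just means adding the path to the exclude array, for example (your other entries will differ):

      EXCLIST=( "/home/virtfs" "/home/*/Trash" )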

  19. Agricola says:

    There is a bug/question reported on launchpad — it sounds like a duplicity issue … hopefully they can fix it for us soon.

  20. Brian says:

    I’ve been using this script for a while, thanks.

    Just updated to 0.6.07 and now getting an error:

    Traceback (most recent call last):
      File "/usr/bin/duplicity", line 1236, in 
        with_tempdir(main)
      File "/usr/bin/duplicity", line 1229, in with_tempdir
        fn()
      File "/usr/bin/duplicity", line 1115, in main
        action = commandline.ProcessCommandLine(sys.argv[1:])
      File "/usr/lib/python2.6/dist-packages/duplicity/commandline.py", line 876, in ProcessCommandLine
        args = parse_cmdline_options(cmdline_list)
      File "/usr/lib/python2.6/dist-packages/duplicity/commandline.py", line 450, in parse_cmdline_options
        (options, args) = parser.parse_args()
      File "/usr/lib/python2.6/optparse.py", line 1394, in parse_args
        stop = self._process_args(largs, rargs, values)
      File "/usr/lib/python2.6/optparse.py", line 1434, in _process_args
        self._process_long_opt(rargs, values)
      File "/usr/lib/python2.6/optparse.py", line 1509, in _process_long_opt
        option.process(opt, value, values, self)
      File "/usr/lib/python2.6/optparse.py", line 782, in process
        value = self.convert_value(opt, value)
      File "/usr/lib/python2.6/optparse.py", line 774, in convert_value
        return self.check_value(opt, value)
      File "/usr/lib/python2.6/optparse.py", line 769, in check_value
        return checker(self, opt, value)
      File "/usr/lib/python2.6/dist-packages/duplicity/commandline.py", line 110, in check_time
        return dup_time.genstrtotime(value)
      File "/usr/lib/python2.6/dist-packages/duplicity/dup_time.py", line 271, in genstrtotime
        return override_curtime - intstringtoseconds(timestr)
    TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'

    • Damon says:

      Hmm – what are your remove-older-than variables? Looks like there is an error passing the time variables on the command line …

      Also, as a side note, we have been working on a new version of the script available at: http://github.com/thornomad/dt-s3-backup.

      I haven’t updated this blog yet but will do so soon.

      • Patrick says:

        I am also getting this issue. I am not using the ‘remove-older-than’ cleanup type; I am using ‘remove-all-but-n-full’ (as suggested by the comments).

        Here is the relevant snippet (I’m pretty sure this is how I downloaded it; I don’t think I modified these lines at all):

        # FULL BACKUP & REMOVE OLDER THAN SETTINGS
        # Because duplicity will continue to add to each backup as you go,
        # it will eventually create a very large set of files.  Also, incremental
        # backups leave room for problems in the chain, so doing a "full"
        # backup every so often isn't not a bad idea.
        #
        # You can either remove older than a specific time period:
        #CLEAN_UP_TYPE="remove-older-than"
        #CLEAN_UP_VARIABLE="31D"
        
        # Or, If you would rather keep a certain (n) number of full backups (rather
        # than removing the files based on their age), you can use what I use:
        CLEAN_UP_TYPE="remove-all-but-n-full"
        CLEAN_UP_VARIABLE="2"

        Any ideas?

        • Damon says:

          I just took a quick look on launchpad.net and it appears as if this is a bug with version 0.6.07 (not the script). It looks the same to me, anyhow, and they have said a fix is coming (released tomorrow).

          PS – are you using the latest code from github ? We have some new features there.

          • Patrick says:

            Okay, thanks, I will try to get the new version tomorrow.

            Yes, I am using the latest from github, as of earlier today.

          • Patrick says:

            Well, I just got the latest duplicity, and now I am having a new problem:

            # ./dt-s3-backup.sh --backup
            Traceback (most recent call last):
              File "/usr/bin/duplicity", line 1239, in ?
                with_tempdir(main)
              File "/usr/bin/duplicity", line 1232, in with_tempdir
                fn()
              File "/usr/bin/duplicity", line 1205, in main
                full_backup(col_stats)
              File "/usr/bin/duplicity", line 416, in full_backup
                globals.backend)
              File "/usr/bin/duplicity", line 294, in write_multivol
                globals.gpg_profile, globals.volsize)
              File "/usr/lib/python2.4/site-packages/duplicity/gpg.py", line 272, in GPGWriteFile
                file = GPGFile(True, path.Path(filename), profile)
              File "/usr/lib/python2.4/site-packages/duplicity/gpg.py", line 105, in __init__
                if globals.gpg_options:
            AttributeError: 'module' object has no attribute 'gpg_options'

            I have set the GPG_KEY variable in your script.

            By any chance, would you suggest reverting duplicity back to a version that is known to work with your script? (Of course, if you have any idea how to solve this problem, and use the latest/greatest version, that would be ideal).

            Thanks!

            • Damon says:

              Hey there – hmm, not sure about the second error. I need to install the latest version on a different machine to test it … because I am still using the 5.x series. I would like to see if this is a problem with the script or with duplicity itself (as the last one).

              One thing you can test is to go to line 210 and add an “echo” before the backup command then see what is being run in your log file.

              duplicity_backup()
              {
                echo ${DUPLICITY} ${OPTION} ${VERBOSITY} ${STATIC_OPTIONS} \
                --encrypt-key=${GPG_KEY} \
                --sign-key=${GPG_KEY} \
                ${EXCLUDE} \
                ${INCLUDE} \
                ${EXCLUDEROOT} \
                ${ROOT} ${DEST} \
                >> ${LOGFILE}
              }

              This will output the full command being run rather than execute it — you could see if it looks suspicious …

              • Patrick says:

                Here is the command that would be executed:

                /usr/bin/duplicity -v3 --full-if-older-than 14D --encrypt-key=B8C500C9 --sign-key=B8C500C9 --exclude /share/osowski --exclude /share/michalson --include=/share/kaeding --include=/home/jkaeding --include=/home/malvarado --exclude=** / s3+http://mybucket/marshall
                

                Does that look like it should work? Nothing jumps out at me as being amiss.

            • Patrick says:

              My very limited knowledge of python has led me to this cached blog entry (the original was offline), which makes me think that maybe I am missing some python module for gpg. I do have the python-gnupginterface package installed (on Ubuntu), so I don’t know what else it could be.

            • Kevin says:

              I’m getting the same error. I’m using duplicity 0.6.08a that was released only a few hours ago, including a bugfix from 0.6.08.

              Has anyone figured out the problem yet?

            • Patrick says:

              So, this problem seemed to have been caused by something in the Duplicity update. Upgrading to version 0.6.08b fixed the issue for me.

              Thanks to Damon and also to the folks over on the Duplicity Launchpad site for all the help tracking this down!

              • Kevin says:

                No kidding; a big thanks, indeed! Duplicity has been such a wonderful thing for our server backups, I can’t live without it!

                Still in the middle of a new backup, but there aren’t any errors yet. That’s a good sign.

  21. […] allo stesso tempo li criptasse. Per semplificarne l’uso ho aggiunto delle modifiche ad uno script bash per la gestione di duplicity sviluppato inizialmente da Damon Timm, sto aspettando che porti il suo script su github.com e che pubblichi la nuova versione dello […]

  22. Thank you for your script.

    I found some bugs in this script and I resolved them.
    I want to improve the script and have made some small changes.

    Would you like to contact me by private email?

  23. Vlad says:

    How do I restore a backup that came from one machine, on another?
    I backed up from one machine, and now I want to get that backup on another machine, in case the first one completely goes belly up. So far I get this message when I run ./DT-S3-Backup-v3.sh --restore

    ===== Begin GnuPG log =====
    gpg: encrypted with ELG-E key, ID 16C3509D
    gpg: decryption failed: secret key not available
    ===== End GnuPG log =====

    One way is to replace the entire .gnupg directory, which would contain the right key then…
    Thoughts?

    • Damon says:

      Hi – sorry for the delay (email notification was being treated as spam) … It looks like you don’t have the GPG key installed on the second machine … If you run the script with the --backup-this-script option you can then add your key to the second machine’s key ring. I need to add clearer instructions for that but haven’t gotten to it yet.

      I can’t think of the gpg command off hand but it is something like --add-private-key
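
      For reference, the usual approach is to export the secret key on the original machine and import it on the new one; something along these lines should do it (hypothetical key ID):

      gpg --export-secret-keys --armor MYKEYID > my-backup-key.asc
      gpg --import my-backup-key.asc   # run this one on the second machine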

  24. i couldn’t get this to work, so i made something myself. this php script only backs up tables that have changed since the last backup. each table is backed up into a gz archive. i dump these archives straight into my web folder, which i backup with duplicity regularly. this is semi-incremental: the largest tables in my database hardly ever change, and this way, i don’t create new archives for those tables, so i also don’t have to push them to s3 every time.

    Details and download:
    http://www.netpresent.net/CMS/Products/Products/#mysqlincbak

    [hmmm, is this comment system even working? trying again. i hope i’m not posting multiple times, but i’m not seeing any result, nor an error message]

  25. Hi, firstly excellent script – it really is incredibly useful. However, I seem to be hitting intermittent issues (once every couple of days). The error is “duplicity.backends.BackendException: Error downloading”. It attempts 5 times, but then crashes out. Looking at my s3 repository, the file duplicity errors on is definitely there, so i’m not sure what could be the reason. has anyone else come across this?

    • Damon says:

      Hi Benjamin – I haven’t seen that error myself. Hmm – it is strange. Seems to be an error with the S3 functionality. Sorry I don’t have much input, however, on how to go about fixing this! If you do find out the reason, let us know.

      • Neil says:

        Looks like this is quite old, but I’m running into the same issue when rolling data from s3 to glacier. The sigtar’s are unavailable from s3 now and I can’t force a full backup without downloading the sigtar’s first. How can I REALLY force a full backup?

        Thanks

  26. Sam S says:

    Hey, nice script … so im running into an issue with the script not wanting to delete the old backup sets. I keep getting the following error:

    “Which can’t be deleted because newer sets depend on them”

    My variables are …

    OLDER_THAN="14D"
    FULL="7D"

    I’ve been logging into S3 using the S3 plugin for Firefox and deleting the old backups that way. Any ideas as to why it keeps failing?

    TIA

    • Damon says:

      How long will it go before it finally deletes a backup? I had/have a similar issue but I think it has to do with the timing/overlap of the full backup and the remove_older_than variable. That is, it seems to go one more “full backup” than I would expect. But they do get removed.

  27. Matthew says:

    Any idea what can be causing this? Gentoo, duplicity 0.4.11.

    Thanks.

    mail DT-S3-Backup-v3 # sh DT-S3-Backup-v3.sh       
    Traceback (most recent call last):
      File "/usr/bin/duplicity", line 463, in 
        with_tempdir(main)
      File "/usr/bin/duplicity", line 458, in with_tempdir
        fn()
      File "/usr/bin/duplicity", line 444, in main
        full_backup(col_stats)
      File "/usr/bin/duplicity", line 155, in full_backup
        bytes_written = write_multivol("full", tarblock_iter, globals.backend)
      File "/usr/bin/duplicity", line 87, in write_multivol
        globals.gpg_profile,globals.volsize)
      File "//usr/lib/python2.5/site-packages/duplicity/gpg.py", line 217, in GPGWriteFile
        file.write(data)
      File "//usr/lib/python2.5/site-packages/duplicity/gpg.py", line 125, in write
        return self.gpg_input.write(buf)
    IOError: [Errno 32] Broken pipe
    close failed in file object destructor:
    IOError: [Errno 32] Broken pipe
    No old backup sets found, nothing deleted.

  28. Andreas says:

    Thanks Damon, great script and instructions.

    For any users on Mac Leopard: for the sed variant included in 10.5, line 253 needs to be like so:

    sed -i "" '/-------------------------------------------------/d' ${LOGFILE}

    Without the double quotes, you get an error.

  29. Dalby says:

    ./backup.sh --full
    Command line error: Too many arguments
    See the duplicity manual page for instructions
    Command line error: Too many arguments
    See the duplicity manual page for instructions

    anybody an idea?

    (with and without --full)
    trying to ftp btw

    • Damon says:

      Hi – I would guess that one of the settings you are using at the top of the script is incorrect or not parsing quite right. I have never tried this over ftp — perhaps one of the settings is S3 specific. If you want, post your settings and I can try and take a look. -D

  30. Alex says:

    Many thanks for your comments. To be honest, before Amazon S3 I was never familiar with duplicity, always a good old rsync…

  31. Alex says:

    Hi

    Excellent script, however, my question is if it is possible to resume the initial backup if it has been interrupted? In my case the script said “Warning found incomplete backup set, probably left from aborted session” and started from the very beginning. I have too many Gbs to backup monthly and I do not want to start it all from the beginning if the script is aborted due to network problems. Any ideas?

    • Damon says:

      That’s a very good question and I don’t have the answer to it — I think, though, the answer probably lies outside of the script and with duplicity itself. I would check their mailing list or just post a question there … not sure, actually! Sorry!

      • Alex says:

        Thanks Damon. Also another good question is if it is possible to verify the integrity of the backup without downloading it to the local machine? Is it possible to restore just a specific file/folder instead of restoring the whole backup?

        • Damon says:

          I think duplicity needs to download the data in order to decode it, and then verify its integrity. However, using the --archive-dir option may fix that … I haven’t added it yet to the script but it’s an easy addition. See this comment.

          If you want to restore a specific file or folder, you can do that using traditional duplicity parlance — you may try to add it to the script, as well … check out this comment.

  32. AskApache says:

    Love the script, very nice! I’m recommending it to all my readers.

  33. sbeam says:

    Running this now and seems to be working nicely. Your hints on how to use s3cmd and gpg were very helpful.

    I will probably make some small changes to allow for more command line options – so I can split my stuff into different segments with different backup rules. For instance, I have sensitive personal/work data I’d like backed up daily, and encrypted (2Gb total). But my personal photos, videos and music collections can probably go in the clear and only once a week or so (80Gb total – lots of overhead to encrypt that…)

    thanks a lot for this and kudos.

  34. Steve says:

    Hi the script seems to be working for me except i’m getting the following errors:

    du: illegal option -- -
    usage: du [-H | -L | -P] [-a | -s | -d depth] [-c] [-h | -k | -m] [-n] [-x] [-I mask] [file ...]
    du: illegal option -- -
    usage: du [-H | -L | -P] [-a | -s | -d depth] [-c] [-h | -k | -m] [-n] [-x] [-I mask] [file ...]
    sed: 1: "/usr/local/www/apache22 ...": extra characters at the end of l command

    FYI I’m running on FreeBSD 7.0

    Any ideas? I’m pretty new to this sort of stuff. Thanks :)

    • Damon says:

      Your version of du might not take all the options I am giving it — check out line 189 … I would guess the “--exclude=” part is what is throwing it off (probably won’t work on the Mac either) … you might try erasing that part and seeing what happens … although, it won’t necessarily give you an accurate reading (depending on what you are excluding and including).

    • T D says:

      To make it work in FreeBSD, change line 194 from

      du -hs --exclude-from="-" ${include} | \

      to

      du -hs ${include} | \

      I guess the du syntax is a little different.

  35. lee says:

    I love the script, thanks. I was just wondering if there is a way to restore backups from say, 12 days ago. If I have a month of backups in s3, how do I restore a specific day? Thanks again.

    • Damon says:

      Hi Lee — there isn’t any way to do that yet without altering the script … duplicity does support this feature, however, so to make the change isn’t that hard … you could edit the script around line 321:

      elif [ "$1" = "--restore" ]; then
        ROOT=$DEST
        DEST=$RESTORE
        FULL_IF_OLDER_THAN=
        OPTION=

      Go ahead and put in an option … like this: OPTION="-t 12D" … that should work (though I haven’t tested it). Let me know if that solves the problem.

  36. Hey Damon – first, thx a ton for writing this. Very handy!

    One issue I managed to track down: If you define a DEST such as…

    DEST="s3+http://test.com/backuptest/"

    You’ll get an error which boils down to

    Problem: AttributeErr: S3Error instance has no attribute 'Code'

    What’s going on is after the backup, the script tries to run s3cmd du, but around line 185 the DEST variable gets piped through sed which strips off the slashes, so what gets run is actually

    s3cmd du -H s3://test.combackuptest

    …which generates the nonsensical error about ‘Code’…

    Not sure if it’s best to just document (If I missed docs, sorry) or have the code check the variable…

    John
    (btw would be nice to have comment markup help here)

    • Damon says:

      Hi John – Oh! I see, I guess I didn’t take into account that someone would have a folder within a bucket … yea, that would cause a problem with no forward slashes. Smile. Thanks for pointing that out.

      Obviously, if you use DEST="s3+http://single-bucket-name/" it works without the error … because the slashes can all be stripped.

      Good idea about adding guidelines for comment markup … you can use most html stuff like: em, strong, code, … the only fancy one is that you can use ” pre lang=’bash’ ” to mark your code snippets so that it looks something like this:

        elif [ `echo ${DEST} | cut -c 1,2` = "s3" ] && [ -x "$S3CMD" ]; then
            TMPDEST=`echo ${DEST} | cut -c 10- | sed s:/::g`   
            SIZE=`s3cmd du -H s3://${TMPDEST} | awk '{print $1}'`
        fi

      Which shows the problem of stripping all the slashes … I’ll have to work on that!

      Thanks.

    • Damon says:

      I think I fixed it in Version 3 … let me know if it works for you.

  37. T D says:

    Thanks for the script. Took some fiddling to make it work with fbsd but now I’m golden. Finally, the backup solution I’ve been looking for.

    • Damon says:

      Hi TD – glad you got it to work. I’ve never used FreeBSD before … what kind of changes did you have to make?

      • T D says:

        I replied to Steve’s post below with the changes.

        I noticed you changed CLEAN_UP_VARIABLE to 31D from 14D. I made that same change a few weeks ago (without seeing your updated post) because my s3 bill was triple what it should have been due to outbound data transfer. Outbound?? Turns out, duplicity would download my data to check if newer sets depended on them. Telling it to delete items more than 14D old causes this because of the maintain-two-full-copies directive. So for the first two weeks everything is peachy…

        [user@machine /home/user/s3logs]$ cat duplicity-2009-01-24-06.txt 
        --------------[ Backup Statistics ]--------------
        StartTime 1232788342.33 (Sat Jan 24 01:12:22 2009)
        EndTime 1232788563.32 (Sat Jan 24 01:16:03 2009)
        ElapsedTime 220.99 (3 minutes 40.99 seconds)
        SourceFiles 9112
        SourceFileSize 4821175397 (4.49 GB)
        NewFiles 0
        NewFileSize 0 (0 bytes)
        DeletedFiles 0
        ChangedFiles 0
        ChangedFileSize 0 (0 bytes)
        ChangedDeltaSize 0 (0 bytes)
        DeltaEntries 0
        RawDeltaSize 9148781 (8.72 MB)
        TotalDestinationSizeChange 2105762 (2.01 MB)
        Errors 0
        -------------------------------------------------
         
        -----------[ Duplicity Cleanup ]-----------
        No old backup sets found, nothing deleted.

        Then it starts:

         
        [user@machine /home/cronuser/s3logs]$ cat duplicity-2009-02-22-06.txt 
        --------------[ Backup Statistics ]--------------
        StartTime 1235293877.43 (Sun Feb 22 01:11:17 2009)
        EndTime 1235294117.27 (Sun Feb 22 01:15:17 2009)
        ElapsedTime 239.84 (3 minutes 59.84 seconds)
        SourceFiles 9254
        SourceFileSize 4946798976 (4.61 GB)
        NewFiles 0
        NewFileSize 0 (0 bytes)
        DeletedFiles 0
        ChangedFiles 0
        ChangedFileSize 0 (0 bytes)
        ChangedDeltaSize 0 (0 bytes)
        DeltaEntries 0
        RawDeltaSize 9149797 (8.73 MB)
        TotalDestinationSizeChange 2030732 (1.94 MB)
        Errors 0
        -------------------------------------------------
         
        -----------[ Duplicity Cleanup ]-----------
        There are backup set(s) at time(s):
        Wed Jan 21 09:49:21 2009
        Wed Jan 21 10:26:44 2009
        Thu Jan 22 01:06:35 2009
        Which can't be deleted because newer sets depend on them.
        No old backup sets found, nothing deleted.
         
        ---------[ Source File Size Information ]---------
        /usr/home/xxx/s3xxx    4.0G
        /usr/local/xxx/xxx.net   1.7G
        /usr/local/xxx/xxxxx.com        303M
         
        ------[ Destination File Size Information ]------
        Current Remote Backup File Size: 2G

        And it goes on until there are 14 entries in that list, then they all get wiped and the process restarts.

        • Damon says:

          Hi – so, how did you change your variables to solve this? Currently, I am using (at home):

          FULL_IF_OLDER_THAN="14D"
          CLEAN_UP_TYPE="remove-older-than"
          CLEAN_UP_VARIABLE="28D"

          I think this should keep two full backups — however, less than 28 days ago I switched to a new home server so I didn’t keep the old logs … will have to wait until I hit 28 days to see how it works. Am curious how you remedied this and what you would recommend.

          Thanks,
          Damon

          • T D says:

            I chose 31D because it was much longer than 14D. I don’t know if the ideal number is 28D, 31D, or anything else for that matter. I’m still waiting to see if the behavior appears again with the new variable.

            That said, I think the better solution here is to use the --archive-dir option so that duplicity need not download the remote files to compute hashes regardless. I just made that modification:
            line 228, before --encrypt-key:
            --archive-dir=${LOCAL_ARCHIVE_DIR} \
            line 238 (239 after the above addition), before --encrypt-key:
            --archive-dir=${LOCAL_ARCHIVE_DIR} \

            Finally, add this in your header…

            # Provide an optional local archive directory to store copies of your
            # backups. Duplicity uses these local copies in preference to remote ones
            # when calculating hashes. This is useful for minimizing your Amazon S3
            # outgoing bandwidth bill. This archive may be deleted or discarded at any
            # time without affecting your remote backups - it's only used for the sake of
            # minimizing data transfer.

            LOCAL_ARCHIVE_DIR="/usr/home/bos/s3archive"

            Just tested and it seems to work, although you’ll need some time to see the benefits of it (old files on the server do not get automatically mirrored, just new ones are copied over as they’re made)

            • T D says:

              I can confirm that using the --archive-dir option has solved the issue with duplicity downloading old chunks to compare / compute hashes. Now, the only outbound data transfer from S3 is the verification download duplicity does after uploading a new diff — meaning I transfer in and out about the same amount of data, and that amount is just the diff.

  38. John says:

    Hi Damon,

    I get the same errors without the --full option. Any idea on when you will have a version available that would display the commands? I may put some echoes in there myself.

    Oh yeah, I am trying to run this on CentOS 5.2. Any issues with that OS and this script?

    Thanks,

    • Damon says:

      Well – you could try to bump up the verbosity in the script to -v9 and see if that has any more details … something odd is happening because you are getting errors both from s3cmd and duplicity … I would typically expect you to have one or the other. Have you experimented with either command line utility outside of using it within the script? Like, just done some test runs yourself before plugging in the variables?

      I don’t have CentOS and haven’t tried it; I’ve only run it on Mac OS X and Ubuntu. My guess is maybe something is going on non-related to the script itself … of course, I could be wrong. What version of duplicity are you using?

      • John says:

        Hey Damon,

        the script works fine when backing up to a local directory. I haven’t tried the commands alone yet, I will try this tonight. I will also try the -v9.

        Thanks,

        • Damon says:

          My guess is that the S3 info isn’t correct — because you are getting errors both from s3cmd and duplicity … I would double check your key pairs and make sure you are selecting a very random bucket name (or it may already be taken).

          • John says:

            For the bucket name, does it have to have http://, like DEST="s3+http://group-backup-01/",
            or would DEST="s+.images" work for this script?

            thanks,

            • Damon says:

              Hi John – sorry, your comment got held for moderation … I don’t know why … maybe because it thinks the http:// stuff is you spamming me with links. Heh. Anyway – the duplicity man page gives two options:

              s3://host/bucket_name[/prefix]
              s3+http://bucket_name[/prefix]

              I have to assume either one will work … not sure where you are headed with the .images bit … it needs to be the name of your bucket.

  39. John says:

    Hey,

    anyone happen to know what is going on with the below?

    ./DT-S3-Backup-v2.sh --full
    Traceback (most recent call last):
      File "/usr/bin/duplicity", line 463, in ?
        with_tempdir(main)
      File "/usr/bin/duplicity", line 458, in with_tempdir
        fn()
      File "/usr/bin/duplicity", line 444, in main
        full_backup(col_stats)
      File "/usr/bin/duplicity", line 155, in full_backup
        bytes_written = write_multivol("full", tarblock_iter, globals.backend)
      File "/usr/bin/duplicity", line 87, in write_multivol
        globals.gpg_profile,globals.volsize)
      File "/usr/lib64/python2.4/site-packages/duplicity/gpg.py", line 213, in GPGWriteFile
        data = block_iter.next(bytes_to_go).data
      File "/usr/lib64/python2.4/site-packages/duplicity/diffdir.py", line 407, in next
        result = self.process(self.input_iter.next(), size)
      File "/usr/lib64/python2.4/site-packages/duplicity/diffdir.py", line 284, in get_delta_iter_w_sig
        sigTarFile.close()
      File "/usr/lib64/python2.4/site-packages/duplicity/tarfile.py", line 508, in close
        self.fileobj.write("" * (RECORDSIZE - remainder))
      File "/usr/lib64/python2.4/site-packages/duplicity/dup_temp.py", line 101, in write
        return self.fileobj.write(buf)
      File "/usr/lib64/python2.4/site-packages/duplicity/gpg.py", line 125, in write
        return self.gpg_input.write(buf)
    IOError: [Errno 32] Broken pipe
    close failed: [Errno 32] Broken pipe
    No old backup sets found, nothing deleted.
     
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
        An unexpected error has occurred.
      Please report the following lines to:
      s3tools-general@lists.sourceforge.net
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
     
    S3cmd:  0.9.8.4
    Python: 2.4.3 (#1, May 24 2008, 13:57:05)  [GCC 4.1.2 20070626 (Red Hat 4.1.2-14)]
     
    Traceback (most recent call last):
      File "/usr/bin/s3cmd", line 1070, in ?
        main()
      File "/usr/bin/s3cmd", line 1049, in main
        cmd_func(args)
      File "/usr/bin/s3cmd", line 47, in cmd_du
        subcmd_bucket_usage(s3, uri)
      File "/usr/bin/s3cmd", line 73, in subcmd_bucket_usage
        if S3.codes.has_key(e.Code):
    AttributeError: S3Error instance has no attribute 'Code'
     
     
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
        An unexpected error has occurred.
        Please report the above lines to:
      s3tools-general@lists.sourceforge.net
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    Thanks,

    • Damon says:

      Hi John – are you able to successfully run the script without using the --full option? I am not on my home machine so can’t double check, but maybe there is an issue when --full is run without any previous “normal” backups? Not sure, to be honest …

      One thing I would like to add is the ability to capture, via standard out, the exact command used to run duplicity via the script … that way we can see if something funny got mixed in there … and also, then the real duplicity folks could help troubleshoot it.

  40. Damon says:

    @ Alvaro: that’s an interesting suggestion … I, obviously, didn’t know that! I guess, in part, my thinking on this script was that if someone is on my box (physically or has hacked into the system) they have pretty direct access to all my files anyway!

    But, maybe if someone were being more clever than me, they could hide the files and these unset variables would tip off a would-be (or already-be) intruder. Anyway, I will change that on the script and when I get some more fixes, will include it in version 3. Thanks!

  41. Alvaro says:

    Hey, i gotta say it’s a very nice script.

    But just for the record, i usually do “unset var” instead of “export var=”. You may ask what’s the difference.

    (Maybe there are associated problems; in my environment it works great)

    If anyone hacks into your system (or it’s a multiuser box), someone could see you have FOO, BAR and FOOBAR vars empty. Then they could check what time you use them, what values they have, etc. In this case, it seems pretty valuable info there, so (me, personally) I prefer to be paranoid.

    Anyway, keep up the good work!

  42. Charlie says:

    If I understand it right, the hash data is stored locally (rather than remotely) to speed-up the process?

    It stores it locally *and* remotely, but it tries to retrieve it locally first. If it’s not found, then yes, it will go out and get the one in the repository. Now, if they somehow get out of sync, I’m not sure what would happen.

    So far my “implementation” is just commenting out the GPG lines and hard-coding in the additional options inside duplicity_backup(). When I get it working, I’ll try to clean it up.

    The real challenge I’m having is trying to get duplicity to follow symlinks when backing up.

    What I’m trying to implement is a usb thumbdrive backup that I can run from whichever PC I happen to be plugged into. The challenge is that the drive mappings change. So I added a drive letter parameter so that the script can remove and create the proper link, say from /usb -> /cygdrive/f to /usb -> /cygdrive/h

    but… it’s not working. it’s possible that it’s just not going to work unfortunately.

  43. Damon says:

    @ Charlie: would be interested to see your implementation if you wanted to share it … I looked at the --archive-dir option on the man page and it states:

    When backing up or restoring, specify the local archive directory. This option is not necessary, but if hash data is found locally in path it will be used in preference to the remote hash data. Use of this option does not imply that the archive data is no longer stored in the backup destination, nor that the local archive directory need be kept safe. The local archive directory is a performance optimization only, and may safely be discarded at any time.

    If I understand it right, the hash data is stored locally (rather than remotely) to speed-up the process ? Seems though (as would make sense) that if no local version were found it would revert to the remote copy? Maybe?

  44. Charlie says:

    Very nice script. A few of the modifications that I’m working on/would like to see:

    1) Ability to use straight pass-phrase (symmetric key) rather than GPG
    2) Use of the --archive-dir option to help with performance
    3) Use of --time-separator=. which is required for cygwin use

    Still, very clean, nice script!

  45. Ian Ward says:

    Hi Damon, thanks for adding the license and the updates. If I ever end up making any significant changes for my own need I’ll let you know what I do.

    cheers,
    Ian

  46. Damon says:

    Just uploaded an updated version of the script … added the GPL license and some other stuff.

  47. Damon says:

    @ Ian: you know, I hadn’t really thought about a license, but now that you brought it up, maybe I should go ahead and apply one. Have a preference?

    When I started the script, it seemed so small that a license didn’t seem necessary — I actually was thinking of posting a question to slashdot: when should you apply a license to a script?

    I didn’t know when it became necessary. I wanted to add a couple changes to the script (especially as to how it handles the “full backup” schedule as well as the “cleaning” schedule) … maybe I will do that tonight or tomorrow and add the license then.

    Thanks for bringing it up — that will get me to do it.

  48. Ian Ward says:

    Nice script. I was just wondering if there is any kind of license on the script. Thanks,

    Ian

  49. Damon says:

    @ Bertrand: You know, that is a good question. I would try to just run it again and see if it recovers; if not, it’s sure to throw an error or say “switching to full” — in which case it may make sense to start over with a full backup.

  50. Bertrand says:

    Very useful script, thank you!
    One question I have: I have launched a first (full backup) of 3 GB to Amazon S3 and at around 80% done, it stopped because of a problem on my server. Do you know if I have to redo the whole full backup, or can I just restart duplicity to finish it?