Hi,
My working data is in a directory we can refer to as A. A is on a
removable flash store. "du -hs /home/me/A" reports 3.0G. I want a
reliable backup of most files A/*.
I created a directory "Backup" on the HDD and apply this shell
function whenever motivated.
Backup() {
    # Mirror most of /home/me/A/* into a destination directory, /home/me/Backup by default.
    if [ $# -gt 1 ]; then
        echo "Too many arguments.";
    else
        echo "0 or 1 arguments are OK.";
        source="/home/me/A/*";
        echo "source is $source.";
        if [ $# -eq 0 ]; then
            echo "0 arguments is OK.";
            destination=/home/me/Backup;
            echo "destination is $destination.";
        else
            echo "1 argument is OK.";
            destination=/home/me/$1;
            echo "destination is $destination.";
        fi;
        echo "Executing sync and rsync.";
        sync;
        rsync \
            --exclude='Trap*' \
            --exclude='*.mp3' \
            --exclude='*.mp4' \
            -auv $source $destination ;
        # Quick sanity checks on the result.
        /bin/ls -ld $destination/MailMessages;
        printf "du -hs $destination => ";
        du -hs $destination;
    fi;
}
If the flash store fails, work done since the last execution of Backup can be lost.
In case the Backup directory on the HDD is lost, or I want to see an old file no longer current in A, I want backups of Backup. This function is applied every week or two to write Backup to a DVD.
FilesToDVD () {
    # Append the current state of Backup as a new session on a multi-session DVD.
    printf "Insert open or new DVD-R.";
    read t;
    startPath=$PWD;
    echo "startPath is $startPath";
    source=/home/me/Backup;
    echo "source is $source";
    cd $source;
    xorriso -for_backup -dev /dev/sr0 \
        -update_r . / \
        -commit \
        -toc -check_md5 failure -- \
        -eject all ;
    cd $startPath ;
    # Reminders for listing sessions and mounting one of them later.
    echo " xorriso -dev /dev/sr0 -toc ";
    echo " mount -o sbsector=nnnnnn /dev/sr0 /mnt/iso ";
}
Finding a file as it existed months or years ago can be tedious; for example, finding A/MailMessages as it was on 2023.02.07. Otherwise the backup system works well.
Now I have a pair of 500 GB external USB drives, large compared to my working data of ~3 GB. Please suggest improvements to my backup system by exploiting these drives. I can imagine a complete copy of A onto an external drive for each backup, but with most files in A not changing during the backup interval, that is inefficient.
Thanks, ... Peter E.
Hi,
peter@easthope.ca wrote:
This function is applied every week or two to write to a DVD.
xorriso -for_backup -dev /dev/sr0 \
-update_r . / \
-commit \
-toc -check_md5 failure -- \
-eject all ;
Finding a file as it existed months or years ago can be tedious.
You could give the backups volume ids which tell the date.
-volid BOB_"$(date '+%Y_%m_%d_%H%M%S')"
(BOB = Backup Of Backup :))
This would also make it possible to verify that the medium is either an appendable BOB or blank. Before -dev you would insert:
-assert_volid 'BOB_*' fatal
When I boot the file server (possibly today but definitely tomorrow) I'll post my backup script.
You could give the backups volume ids which tell the date.
Thanks. I should have added that when you mentioned it a few years ago.
-assert_volid 'BOB_*' fatal
Prevents me from appending to a DVD from another backup system. But I
need to add it when beginning a blank DVD.
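Put together, the xorriso call in FilesToDVD might then look like this (only a sketch assembled from the suggestions above; whether -volid belongs in every run or only when starting a blank DVD is a separate decision):

    # Refuse media whose loaded image is not a BOB, and date-stamp the new session.
    xorriso -for_backup \
        -assert_volid 'BOB_*' fatal \
        -dev /dev/sr0 \
        -volid BOB_"$(date '+%Y_%m_%d_%H%M%S')" \
        -update_r . / \
        -commit \
        -toc -check_md5 failure -- \
        -eject all ;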
I am working on a solution for your non-unique volume id situation
by optionally referring to modification timestamps.
A new command -toc_info_type can switch -toc away from showing volume ids:
$ xorriso -indev /dev/sr0 -toc_info_type mtime -toc
xorriso 1.5.7 : RockRidge filesystem manipulator, libburnia project.
...
Media current: DVD+RW
...
Volume id : 'HOME_Z_2024_06_27_225526'
...
TOC layout : Idx , sbsector , Size , Modification Time
ISO session : 1 , 32 , 1240808s , 2024.06.20.232334
ISO session : 2 , 1240864 , 29797s , 2024.06.21.220651
ISO session : 3 , 1270688 , 20484s , 2024.06.23.225019
ISO session : 4 , 1291200 , 28928s , 2024.06.24.224429
ISO session : 5 , 1320128 , 21352s , 2024.06.25.223943
ISO session : 6 , 1341504 , 30352s , 2024.06.26.223934
ISO session : 7 , 1371872 , 29023s , 2024.06.27.225617
Media summary: 7 sessions, 1400744 data blocks, 2736m data, 1746m free
This is a zisofs-compressed backup which happens every evening except Saturdays.
Note the time difference between 2024_06_27_225526 and 2024.06.27.225617. These 51 seconds were spent between program start and the beginning of writing.
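With such a listing, fetching a file from a particular date comes down to mounting the session whose modification time matches. A sketch, using the start sector of session 3 from the output above (the mount point is illustrative):

    # Mount the session written on 2024.06.23 (start sector 1270688) read-only:
    mount -t iso9660 -o ro,sbsector=1270688 /dev/sr0 /mnt/iso
    # ... copy out whatever is needed, then:
    umount /mnt/iso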
This program enhancement is already committed to git.
In a few days there will be a new GNU xorriso 1.5.7 tarball, which is
easy to build and to test without any danger of frankendebianing.
You could give the backups volume ids which tell the date.
-volid BOB_"$(date '+%Y_%m_%d_%H%M%S')"
(BOB = Backup Of Backup :))
I'm beginning to learn Git. So I wonder about another approach where
files are in a local Git repository. That would allow tracing the
history of any file. A backup of the extant repository would still be necessary.
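A minimal sketch of that approach, assuming A itself becomes the repository and one of the USB drives is mounted at /media/usb (the paths, bare-repository name and date format are only illustrative):

    # One-time setup: turn A into a repository and create a bare copy on the USB drive.
    cd /home/me/A
    git init
    git add .
    git commit -m "initial snapshot"
    git init --bare /media/usb/A-backup.git

    # Each backup run: record the current state, then push the whole history to the drive.
    git add -A
    git commit -m "snapshot $(date '+%Y_%m_%d_%H%M%S')"
    git push --mirror /media/usb/A-backup.git

    # Finding A/MailMessages as it was around 2023.02.07:
    git log --before=2023-02-07 -1 --format=%H -- MailMessages
    # Substitute the hash printed above for COMMIT_ID:
    git show COMMIT_ID:MailMessages > /tmp/MailMessages.2023-02-07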
Hello,
Git has some properties that are desirable for general backup
purposes, but also some fairly huge downsides. For example:
- It's not efficient or performant for storing large binary files.
rsnapshot
From https://rsnapshot.org/
rsnapshot is a filesystem snapshot utility ...
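A minimal sketch of what an rsnapshot setup for this case could look like, assuming one of the USB drives is mounted at /media/usb (paths and retention counts are illustrative; fields in rsnapshot.conf must be separated by tabs):

    # /etc/rsnapshot.conf (excerpt)
    snapshot_root   /media/usb/rsnapshot/
    retain  daily   7
    retain  weekly  4
    retain  monthly 12
    backup  /home/me/A/     localhost/

    # Then run from cron, e.g. "rsnapshot daily" every day,
    # "rsnapshot weekly" once a week and "rsnapshot monthly" once a month.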
Why did you not use something like backup2l?
The restore function allows to easily restore the state of the file
system or arbitrary directories/files of previous points in time.
...
An integrated split-and-collect function allows to comfortably
transfer all or selected archives to a set of CDs or other removable
media.
I'm beginning to learn Git. So I wonder about another approach where
files are in a local Git repository. That would allow tracing the
history of any file. A backup of the extant repository would still be necessary.
I don't know the software well enough to compare the two approaches.
From: eben@gmx.us
Date: Thu, 27 Jun 2024 15:52:44 -0400
On one computer I use rsync ...
See reply to Eduardo.
From: Eduardo M KALINOWSKI <eduardo@kalinowski.com.br>
Date: Thu, 27 Jun 2024 16:06:18 -0300
rsnapshot
From https://rsnapshot.org/
rsnapshot is a filesystem snapshot utility ...
Rather than a snapshot of the extant file system, I want to keep a
history of the files in the file system.
Thanks Thomas. Ideally I should find time to follow your suggestions, but I am already overcommitted to volunteer activities. I might have to wait until -toc_info_type is in a Debian release.
You could give the backups volume ids which tell the date.
-volid BOB_"$(date '+%Y_%m_%d_%H%M%S')"
...
You could easily have one or more ISOs with history and one or more
rsync mirror trees in the same filesystem. It is always good to keep
one valid backup untouched when the other gets endangered by
writing.
(If xorriso -update_r is too slow compared to rsync, consider
xorriso command -disk_dev_ino as in the man page example.)
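A sketch of that variant of the FilesToDVD command, assuming -disk_dev_ino on is the wanted mode (the man page example shows the recommended combination):

    xorriso -for_backup -dev /dev/sr0 \
        -disk_dev_ino on \
        -update_r . / \
        -commit -toc -check_md5 failure -- -eject all ;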
You could use command -rollback_end to refrain from writing:
xorriso ...the.desired.commands... -rollback_end
This will perform the commands but then just end the program run, instead of writing the result, which would entail reading all the content of the files that were mapped into the ISO.
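For example, a dry run of the DVD update in FilesToDVD could look like this (a sketch; it performs the update on the image model, reports what would change, and then discards the result instead of burning):

    cd /home/me/Backup
    xorriso -for_backup -dev /dev/sr0 \
        -update_r . / \
        -rollback_end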
I have a classification of my
system disks in startup file /etc/opt/xorriso/rc :
-drive_class banned '/dev/sda*'
-drive_class banned '/dev/sdb*'
-drive_class harmless /dev/null