Where is my bottle-neck? I *thought* with a gigabit connection I could get over 100MB/s on a shared folder transfer over my network. I am doing this over WiFi to my ThinkPad laptop - hmmmm... maybe this old ThinkPad has a sucky WiFi card? Maybe I should plug the laptop into ethernet and see what the transfer rates are then???
I had set up Open Media Vault on a Raspberry Pi 3, which I know doesn't have gigabit ethernet...
I use cheap USB hard drives, 2 x 4TB Seagates...
I run a PLEX server and, although I'm sure I don't have awesome 4k content [probably not even all 1080p!], it seems to work just fine for streaming to ONE television at a time.
However, on both my Samba shares and NFS shares, I'm getting around 10MB/s transfer rates. Sometimes they'll bump up to +/-18MB/s but not often; I'm sure that is just the particular transfer reporting wrong... I'm around 10-12MB/s constantly.
So... I thought my bottle-neck was the Pi, and not having gigabit - I threw a Pi 4 8GB RAM model at it today... I reinstalled fully, and set up from scratch.
I've only pushed over one of my drives YET because... wouldn't ya know it, the transfer rate is the EXACT same as on the Pi 3!!
paul lee <nospam.paul.lee@f420.n105.z1.binkp.net> wrote:
[snip speeds etc.]
FWIW here are some figures I just got copying a large file across my network to my Pi 'NAS':-
chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:/bak/esprimo/cur/home/chris
2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38
chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:
2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38
Chris Green <cl@isbd.net> wrote:
[snip speeds etc.]
The above speeds are wrong, I think one of my switches was playing up, revised speeds as follows:-
Desktop to backup SD card:-
bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 20.6MB/s 00:41
bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 21.3MB/s 00:39
Desktop to backup external USB3 hard drive:-
bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 47.3MB/s 00:17
bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 45.3MB/s 00:18
Backup to desktop, from SD card:-
bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 36.9MB/s 00:23
Backup to desktop, from external USB3 hard drive:-
bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 39.2MB/s 00:21
So, getting on for half the theoretical speed over a Gigabit network in the best case.
paul lee <nospam.paul.lee@f420.n105.z1.binkp.net> wrote:
[snip]
On 11/01/2021 15:16, Chris Green wrote:
[snip speeds etc.]
Mmm. I get pretty close to my 100Mbps network against a linux server with NFS.
You said gigabit Ethernet and now you are saying WiFi?
WiFi is, to put it bluntly, utter shit designed for morons. Especially on laptops with no proper antennae.
I have NEVER gotten more than 10Mbps *actual transfer rate* out of a basic 2.4GHz wifi link, even feet away from the router. Even when it said it was connected at 65Mbps or 72Mbps.
Remember wifi is half duplex. Every time you send an ack back, it stops the forward channel.
And if any other device is on the wlan, you are sharing the link speed with that, too.
It is worse than old coaxial Ethernet was at 10Mbps. It is, to put it bluntly, consumer crap for morons. Like StupidPhones™.
Using iwconfig I have watched connection rates and attenuation vary by 3:1 for no apparent reason whatsoever. Or simply stop working altogether until reconnected. Yes, I have foil in all my walls and that makes for a tricky wifi environment, but even so.
FWIW here are some figures I just got copying a large file across my network to my Pi 'NAS':-
chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:/bak/esprimo/cur/home/chris
2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38
chris@esprimo$ scp 2020-08-20-raspios-buster-armhf-lite.zip backup:
2020-08-20-raspios-buster-armhf-lite.zip 100% 434MB 11.2MB/s 00:38
As you can see I was using scp; the first copy is to an external USB3 hard drive, the second is to the Pi's SD card. The speed is identical, which suggests to me that it's limited almost entirely by the network rather than the Pi's internals.
It's all Gigabit (I checked), out of 'esprimo' which is a desktop machine, via a switch near my desktop, along buried UTP to another switch in the garage and thence to the Pi.
--
Chris Green
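(An aside: one way to separate the network from the disks is to push bytes over ssh without touching storage at either end. A sketch, reusing the 'backup' host name from the scp examples above; dd prints the achieved rate when it finishes:)
chris@esprimo$ dd if=/dev/zero bs=1M count=500 | ssh backup 'cat > /dev/null'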
I'm using a 4B with a USB 3.1 HD. The HD does about 100MB/s (megabytes) read/write locally, and using Crystal DiskMark over Samba it's showing a maximum transfer rate of 72MB/s read and 58MB/s write. When backing up from my other Pis over NFS I'm seeing rates of 40-50MB/s.
Check you are getting 1000Mb/s Full duplex using the command:-
ethtool eth0
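(On a healthy gigabit link the report should include something like the following; eth0 and the abridged output are as-expected values, not taken from the poster's system:)
        Speed: 1000Mb/s
        Duplex: Full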
Also try another Ethernet cable (at least Cat 5e); it wasn't until I started using that Pi as a NAS that I found its upload speed was very poor. Up to then it was only used for web browsing and its download speed was fine. I think the cable had been kinked; replacing it restored the upload speed.
---druck
So, yes... my daily driver machine is an older T430s ThinkPad laptop, probably with a less than current WiFi chip/card... however, I was running my NAS on a Pi 3; and just upgraded (actually still running both) to a Pi 4. My LAN/ethernet network is all gigabit+ hardware.
So, what I think I should do is simply plug my ThinkPad into the ethernet port and retest both the Pi 3 and the Pi 4 NAS systems and see what I get then. I, being a fairly versed and knowledgeable NEWBIE, didn't realize I should be 'happy' with 12-14MB/s over my laptop's WiFi (which again, is probably less than current since the laptops I run are from 2012).
I'll connect over ethernet to both the Pi NAS systems and post again with the results, but... is this a fair and valid test that I should pursue?
I thought I would get better speeds OVER that WiFi connection I'm speaking of, but... understand what you've stated here. :P I am, as you can tell, still learning..
is this a fair and valid test that I should pursue?
Absolutely.
Thanks for your reply... I was told by another poster that since the laptop I use connects via WiFi, I should consider my 12MB/s speeds normal -
On the Pi 3, which is connected via ethernet - I'm NOT.
On the Pi 4, which is connected via ethernet - I AM.
On Tue, 12 Jan 2021 10:57:04 +0000, Chris Green <cl@isbd.net> declaimed the following:
bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 16.2MB/s 00:52
So I get about half the 'expected' speed, also quite similar to your speeds I think.
Are you taking into account the IP header size, the TCP (or UDP) header size, and the MTU size? The latter will tend to determine how many packets need to be sent (and for TCP, ACKed). Also, does your transfer method apply any sort of CRC or ECC logic, which will also consume some space in those packets?
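(Putting rough numbers on that: with the usual 1500-byte MTU each TCP segment carries 1500 - 20 (IP) - 20 (TCP) = 1460 bytes of payload, and Ethernet adds another 38 bytes of framing, preamble and inter-frame gap. So the ceiling is about 1460/1538 = 95% of the raw 125MB/s, i.e. roughly 118MB/s, before counting ACK traffic or anything the transfer method adds.)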
On Tue, 12 Jan 2021 00:35:31 +1300, nospam.paul.lee@f420.n105.z1.binkp.net (paul lee) declaimed the following:
On the Pi 3, which is connected via ethernet - I'm NOT.
As I recall, the R-Pi 3 Ethernet is internally a USB dongle, so throughput will be comparable to USB-2... <30MB/s
On the Pi 4, which is connected via ethernet - I AM.
Real Ethernet on the R-Pi 4.
I had a T430 that died (very unusual for Lenovo T series), I now have
a T470 I bought used off eBay for rather less than I expected, lovely!
My WiFi connection reports that it is 300Mb/s but the real speed is
never anything like that. Here are my results sending from T470
laptop (WiFi connection, reports as 300Mb/s) to desktop:-
bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 16.2MB/s 00:52
So I get about half the 'expected' speed, also quite similar to your speeds I think.
I had a T430 that died (very unusual for Lenovo T series), I now have a T470 I bought used off eBay for rather less than I expected, lovely!
I love the ThinkPad hardware; and while the T430/T440 series have SOME of the cool stuff from the old days, they are just beginning to be a little long in the tooth for me. I'm not very hardware intensive, but... I will be looking at some other ThinkPad models in the future. I might just bite the bullet and go CURRENT T-series, but I haven't decided just yet.
My WiFi connection reports that it is 300Mb/s but the real speed is never anything like that. Here are my results sending from T470 laptop (WiFi connection, reports as 300Mb/s) to desktop:-
bone-debian-9.4-console-armhf-2018-07-08-1gb.img 100% 850MB 16.2MB/s 00:52
So I get about half the 'expected' speed, also quite similar to your speeds I think.
Understood... however, you are getting a LITTLE better speeds than me; I wonder what type of WiFi chip/card is in the T470 vs what is CURRENT??? Maybe it would be worth it, for me, to upgrade the WiFi chip/card in my T430s to the best it will take, OR whatever is current in 2021...
Even if I won't get the 100MB/s that I had THOUGHT, I'd probably see a better transfer rate anyway. Hmmmmm...
On Tue, 12 Jan 2021 00:31:38 +1300, nospam.paul.lee@f420.n105.z1.binkp.net (paul lee) declaimed the following:
Thanks for your reply... I was told by another poster that since the laptop I use connects via WiFi, I should consider my 12MB/s speeds normal -
When I copy from laptop to desktop (both quite fast machines with fast
disks) I get something quite a bit over 100MB/s on wired Gigabit
connections. So the overhead isn't that great given that the
theoretical maximum would be 1000/8 which is 125MB/s. So on a 300Mb/s wireless link between the same two machines one would, sort of, expect something a bit more than 30MB/s whereas in reality one gets about
half of that.
Is it really that important or significant? I.e. does it really
matter if a transfer takes 15 seconds rather than 10 seconds?
On 12/01/2021 18:04, Chris Green wrote:
When I copy from laptop to desktop (both quite fast machines with fast disks) I get something quite a bit over 100MB/s on wired Gigabit connections. So the overhead isn't that great given that the
theoretical maximum would be 1000/8 which is 125MB/s. So on a 300Mb/s wireless link between the same two machines one would, sort of, expect something a bit more than 30MB/s whereas in reality one gets about
half of that.
In general I have found that on a good link, byte-level speeds of a little over 1/10th of the quoted Mbps rate are obtained, so the overhead is not that heavy a penalty.
Probably ~10%.
That's on a *full duplex* link. Broadband is full duplex. Ethernet of
the cat 5 sort is full duplex.
Wifi is NOT full duplex.
That means that any ACK packets going back share bandwidth with the forward data stream, in a fairly nasty 'wait till the stream packet size is exceeded, then send an ack, oh dear, collisions/backoffs/try again...' sort of way.
When my Pi zero link was going titsup before I slapped in an access
point 5 feet away, although it *said* it was connected at 5Mbps, it
couldn't support a 128kbps stream of audio.
My so called 72Mbps links couldn't handle HD TV, which is around 5Mbps I think, reliably.
On Wed, 13 Jan 2021 12:20:16 +0000
The Natural Philosopher <tnp@invalid.invalid> wrote:
That means that any ACK packets going back share bandwidth with the forward data stream, in a fairly nasty 'wait till the stream packet size is exceeded, then send an ack, oh dear, collisions/backoffs/try again...' sort of way.
Your mission, should you choose to accept it, is to devise a better approach that allows full duplex wifi. As always, should you or any of your IM Force be caught or killed, the Secretary will disavow any knowledge of your actions. This post will self-destruct in five seconds.
My so called 72Mbps links couldn't handle HD TV, which is around 5Mbps I think, reliably.
The radio spectrum is limited and precious. Go up to light frequencies
and there's lots more speed available. But that doesn't punch through
solid walls..
On Wed, 13 Jan 2021 14:23:18 +0000
The Natural Philosopher <tnp@invalid.invalid> wrote:
The radio spectrum is limited and precious. Go up to light frequencies
and there's lots more speed available. But that doesn't punch through
solid walls..
Ah we have the solution - modulated X-Rays.
Even if I won't get the 100MB/s that I had THOUGHT, I'd probably see a better transfer rate anyway. Hmmmmm...
Is it really that important or significant? I.e. does it really matter if a transfer takes 15 seconds rather than 10 seconds?
I run my (incremental, so rarely really huge) backups overnight via anacron so whether they take 10 minutes or 30 minutes doesn't matter at all. As long as they complete before I wake up in the morning it's fine.
I now have an Ethernet cable to where the laptop lives.
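(For anyone wanting the same arrangement, the anacrontab entry is a one-liner; a sketch with an illustrative job name and a hypothetical script path - the fields are period in days, delay in minutes, job id, command:)
1   10   nightly-backup   /usr/local/bin/backup-to-nas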
While I suppose you're right about the bigger backups being at night, the files I work with WOULD benefit from any and all transfer increases. I mean... a lot of 4k movies & videos, backups of 1TB drives, etc etc.
When you backup a 1TB drive do you actually copy the whole 1TB? It's
a huge waste of time and space and you can't keep so many backups.
Use some form of incremental backup and also backup *selectively*.
On 14/01/2021 08:56, Chris Green wrote:
When you backup a 1TB drive do you actually copy the whole 1TB? It's
a huge waste of time and space and you can't keep so many backups.
Use some form of incremental backup and also backup *selectively*.
depends on what you want. I rsync huge amounts of data. Disk space is cheap. Recovering from data loss is not. Working out what is important and what is not is even more expensive.
On 14 Jan 2021 at 11:06:48 GMT, The Natural Philosopher <tnp@invalid.invalid> wrote:
[snip]
Incremental backup as done by Time Machine allows many more backups. It's done with hard links, so a file is backed up the first time, but hard links are created for subsequent backups. This means that what is presented to me when I want to do a restore from a selected date just looks like an ordinary folder as it would appear on the Desktop. I highlight one or more files/folders with the mouse and click Restore. No farting about with command line options that I have no interest in remembering.
Disk space may be cheap, but then you have to manage it. And remember - if you make backup/restore complicated then noddy users won't do it.
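(The hard-link trick isn't Time Machine specific; in shell terms a snapshot rotation can be sketched like this, with illustrative paths. cp -al makes a hard-link copy that costs no extra data space, and rsync then replaces only the files that changed, leaving yesterday's copies linked in snap.1:)
$ cp -al /backup/snap.0 /backup/snap.1
$ rsync -a --delete /home/ /backup/snap.0/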
On 14 Jan 2021 11:43:24 GMT
TimS <timstreater@greenbee.net> wrote:
Disk space may be cheap, but then you have to manage it. And remember -
if you make backup/restore complicated then noddy users won't do it.
You can make it as easy as you like and they still won't. A long time ago I set up a system for a customer with an overnight backup schedule and prepared a box of QIC tapes labelled Mon, Tue, Wed, Thu, Fri, Fri, Fri and left instructions to change the tape daily and keep all but one of the Fri tapes offsite, cycling them round each week. Many months later the hard disc failed during the nightly backup, so after replacing the drive and finding the backup corrupt I asked for the previous night's tape - it emerged that they had *never* changed the tape.
We all learned something that day.
Re: Re: My darn NAS...
By: The Natural Philosopher to Chris Green on Thu Jan 14 2021 11:06 am
> depends on what you want. I rsync huge amounts of data. Disk space is
> cheap. Recovering from data loss is not, Working out what is important
> and what is not is even more expensive.
>
I agree with this position.
I know that just backing up the data that is not easily reproducible suffices, in theory. However, if you only back the data up without the applications and the OS stack, your recovery consists of a sysadmin installing software for a week and swearing at his notebook.
In article <rtnhf6$gjc$1@dont-email.me>, The Natural Philosopher wrote:
Ah we have the solution - modulated X-Rays.
well you may well laugh...why stop there. Gamma rays?
You do know that X-Rays and Gamma rays are essentially the same thing?
They occupy much the same part of the EM spectrum. The distinction
often made between them is to do with their means of production. Gamma
rays are produced inside the atomic nucleus while X-Rays are created by relaxation of highly excited electrons outside the nucleus.
https://en.wikipedia.org/wiki/X-ray#Gamma_rays
On 13/01/2021 17:36, Richard Falken wrote:
[snip]
Well I do reinstall all apps BUT remembering what the config files were called, what changes were made and where they were, is something I prefer to leave for that recovery phase.
In general a well crashed primary disk is an excuse to upgrade everything...
On Thu, 14 Jan 2021 06:36:06 +1300, Richard Falken wrote:
Re: Re: My darn NAS...
By: The Natural Philosopher to Chris Green on Thu Jan 14 2021 11:06 am
> depends on what you want. I rsync huge amounts of data. Disk space is
> cheap. Recovering from data loss is not, Working out what is
> important and what is not is even more expensive.
>
>
I agree with this position.
I know that just backing up the data that is not easily reproducible suffices, in theory. However, if you only back the data up without the applications and the OS stack, your recovery consists of a sysadmin installing software for a week and swearing at his notebook.
There's a simple tweak that fixes most of that stuff: move /usr/local to /home/local and replace it with a symlink to /home/local.
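(In shell terms the move is something like this; a sketch assuming /home is on the backed-up volume, run with nothing using /usr/local at the time, and keeping the original until the symlink is proven:)
$ sudo cp -a /usr/local /home/local
$ sudo mv /usr/local /usr/local.orig
$ sudo ln -s /home/local /usr/local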
I've done the equivalent with my (large) PostgreSQL databases and my local Apache-based website (by default these are in /var), so I changed their configurations to put those files in /home too.
Everything continues to work as before but now I've secured almost all of my own work and customisation by backing up /home.
The only thing that's not safeguarded now is the contents of /etc, so either back that up along with /home or keep copies of everything in /etc that you've explicitly changed in, say, your normal home login. I do the latter but of course ymmv. Changes in /etc made by software updates don't need backing up because they'll be automatically reapplied when you're rebuilding the failed device that holds your filing system.
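(Keeping those /etc copies can be as simple as an rsync into the backed-up area; destination path illustrative, and root is needed to read files such as /etc/shadow:)
$ sudo rsync -a /etc/ /home/etc-copy/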
On 14/01/2021 13:24, Martin Gregorie wrote:
[snip]
What about /var that contains all the web servers and MySQL databases by default? /opt as well has stuff in it. /boot has grub configs.
The Natural Philosopher <tnp@invalid.invalid> wrote:
Well I do reinstall all apps BUT remembering what the config files were called, what changes were made and where they were, is something I prefer to leave for that recovery phase.
I make very sure that all the configuration is either in /home or /etc, most programs do behave properly and keep their configurations in the right place.
In general a well crashed primary disk is an excuse to upgrade everything...
Yes, so why would one back up /usr ??
On 14/01/2021 13:19, Chris Green wrote:
[snip]
Yes, so why would one back up /usr ??
Because /usr/local and /usr/lib are full of nice stuff like fonts and screensaver backgrounds and the like.
TimS <timstreater@greenbee.net> wrote:
[snip]
So you make it automatic. I backup my wife's laptop with incremental backups, she doesn't have to do anything; any time her laptop is connected to our LAN overnight (quite often) it gets backed up to the NAS in the garage. It works just the same for my systems (desktop, laptop, pi server), they get backed up automatically every night. I'm far too lazy to actually do any backups that require action on my part (and I suspect most people are the same).
Since they're incremental backups they don't eat space very fast, my 8TB NAS disk is only 5% full since moving to it from a 3TB one. The 3TB one was about 5 years old (backups back to 2015) and was 50% full, though that wasn't *all* incrementals.
Re: Re: My darn NAS...
By: TimS to All on Thu Jan 14 2021 11:43 am
[snip]
Hard link based incremental backups are great. I do a lot of it with rsync. There is something worth mentioning, though:
You may use hard link based backups in order to make a snapshot per week, but if a file remains unchanged for long, all your hard links will be pointing to the same file in your backup drive. This means if the file gets corrupted you have no copies of it despite having 500+ "images". I have seen it happen and it is not pretty.
It didn't happen to me, thankfully :-P But it pays to run some integrity checks from time to time, or at least have backups of the backup.
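(For anyone who hasn't met the rsync form of this, a minimal sketch with illustrative dates and paths: unchanged files become hard links into the previous snapshot, changed files are stored afresh. The second command is one way to do the integrity checks mentioned, assuming you keep a checksum list of the precious files:)
$ rsync -a --link-dest=/backup/2021-01-13 /home/ /backup/2021-01-14/
$ sha256sum -c /backup/checksums.sha256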
On 13 Jan 2021 at 22:45:19 GMT, Richard Falken <Richard Falken> wrote:
[snip]
This means if the file gets corrupted you have no copies of it despite having 500+ "images".
I don't know whether Time Machine does this or not, or perhaps limits the number of hard links to any file and creates a new complete backup of the file and starts again.
On my main file machine I've set TM to use a second disk; it alternates between them, so this is some protection.
What about /var that contains all the web servers and MySQL databases by default? /opt as well has stuff in it. /boot has grub configs.
Restoring from a copy of /var/lib/mysql can leave databases in an inconsistent state. With C-ISAM it is a usable state.
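(The usual way round that is to back up a logical dump rather than the raw files; a sketch with an illustrative output path. --single-transaction gives a consistent snapshot of InnoDB tables without locking everything:)
$ mysqldump --all-databases --single-transaction > /backup/all-dbs.sql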
TimS <timstreater@greenbee.net> wrote:
On my main file machine I've set TM to use a second disk; it alternates between them, so this is some protection.
That's a rather neat idea, I might get my backup system to do it.
When you backup a 1TB drive do you actually copy the whole 1TB? It's
a huge waste of time and space and you can't keep so many backups.
Use some form of incremental backup and also backup *selectively*.
No... I mean on my BBS box, I do backup all /files and... it's literally 500GB or so. But of course, for my Linux I'm just backing up /home and a
few other spots where I hold my personal files. I also use a package
that takes a 'snapshot' or basically a LISTING of every installed
package on the system.
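(On a Debian-family system that listing can be taken and replayed like this - shown as an illustration, not necessarily the package the poster uses:)
$ dpkg --get-selections > package-list.txt
and later, on the rebuilt system:
$ sudo dpkg --set-selections < package-list.txt
$ sudo apt-get dselect-upgrade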
But, still, I'm backing up enough that speeds matter.
I mean... don't speeds kinda always matter, anyway?
No... I mean on my BBS box, I do backup all /files and... it's literally 500GB or so. But of course, for my Linux I'm just backing up /home and a few other spots where I hold my personal files. I also use a package that takes a 'snapshot' or basically a LISTING of every installed package on the system.
What OS do you use on the BBS box?
But, still, I'm backing up enough that speeds matter. I mean... don't speeds kinda always matter, anyway?
Have you tried rsync and/or rsnapshot?
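(If rsnapshot is unfamiliar: it drives exactly the rsync-plus-hard-links scheme discussed above from a few lines of configuration; values illustrative, and the fields in rsnapshot.conf must be separated by tabs:)
snapshot_root   /backup/snapshots/
retain  daily   7
retain  weekly  4
backup  /home/  localhost/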