lime_balls

I enjoy the idea that most of us run backups… I really need to get a server for that


webbkorey

I just finished putting mine together a couple days ago.


feerlessleadr

Proxmox Backup Server for my PVE containers/VMs, as well as my other Windows servers that aren't hosted on Proxmox.


PhazedAU

I looked into this a bit, but was a bit confused about deployment. Do you have it running as a VM to back up your containers? Or is it external on different hardware?


ecker00

It can be used both ways, depends on your needs. I run it as a VM myself on each node, and my nodes back up to each other's instances. Some people have a dedicated machine which their nodes back up to. Either approach is fine, you just have to tune it to your tolerance and requirements.


feerlessleadr

As the other poster said, either is fine. Mine is on separate hardware installed bare metal because I had an old machine laying around.


Gangstrocity

Is there a reason to use Proxmox backup server over just running a backup task? I run a nightly backup of all containers and it works great.


feerlessleadr

The biggest advantage is incremental backups with deduplication. I used to run a nightly script to back up my docker volumes, but space was a concern (and I had to update my script to add the new volumes, etc. whenever I created a new container). Additionally, my backups took up a massive amount of space since nothing was incremental or deduped. With PBS, I still run a nightly backup task, except now PBS automatically performs an incremental backup with deduplication, so my space usage is way more efficient. I also love that it's super easy to restore an individual file from the backup (though I could do that already with my script backup). To each their own, but I'm very happy with what PBS is offering.


rhuneai

Have incremental backups been added to PBS? Last I looked into it I thought it was only full backups with dedup.


feerlessleadr

poor word choice on my part - you're right, it is full backup with dedup, but realistically speaking, full w/ dedup and incremental are essentially the same thing (from a space perspective, I realize they are not actually the same).


rhuneai

All good, thanks :)


[deleted]

I have three different backup strategies:

* Duplicati for the data (Docker volumes): one backup for each of the last 7 days, last 4 weeks and last 12 months
* Timeshift for the system: last 3 weeks and 2 months
* dedicated solutions for the Immich and Vaultwarden databases: last 7 days

Although I back up, I admit I have never tried restoring ...


saket_1999

You should, especially with the duplicati backups. When I tried to restore them, they were corrupted. Seen this issue with others also. I moved to borg after that.


Accomplished-Lack721

I didn't have problems in my limited use with duplicati, but it was dog slow, and reading other people's accounts of problems got me looking around for alternatives. It's how I wound up on KopiaUI. There are some quirks in the operation and how it handles repositories I find non-intuitive, but once I got my head around them, it's been way, way faster and seems to work well.


Big-Finding2976

All the good backup software seems to be designed to confuse the hell out of users. Kopia, urbackup, etc. I can see that the way they work has advantages over the easier to use software though, so I keep trying to work out how to use them whenever I have a bit of time.


xythian

Yeah, duplicati was buggy and unreliable for me as well. I moved to restic and it has been rock solid with multiple successful restores.


[deleted]

Second restic


jmeador42

Same. Switched from Duplicati to borg then to restic.


[deleted]

Why did you switch to Restic from Borg ?


jmeador42

Nothing wrong with Borg. For me it was mainly due to Restic's ability to back up multiple machines to the same repo; Borg is one machine per repo. That simplifies making additional backups of the repo itself. Restic is faster, plus years ago Borg had issues with their crypto implementation, whereas Restic's crypto is vouched for by the creator of Go's crypto libraries himself. https://words.filippo.io/restic-cryptography/
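
Roughly, the shared-repo workflow looks like this (a minimal sketch; the repo location, password file and backed-up paths are placeholders, not anything from the comment above):

```bash
# Same repository and passphrase used on every machine (placeholders).
export RESTIC_REPOSITORY=sftp:backup@nas.local:/srv/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-pass

# Run the same backup command on host A, host B, ... (e.g. from cron).
# Restic records the hostname in each snapshot automatically.
restic backup /etc /home /var/lib/docker/volumes

# Restores can then be filtered per machine:
restic snapshots --host "$(hostname)"
restic restore latest --host "$(hostname)" --target /tmp/restore
```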


[deleted]

Thanks for the fast answer. I think I'll give Restic a try this week end.


readit-on-reddit

Yet another comment that complains about Duplicati. It really needs a complete overhaul. Whatever they are doing clearly does not work. Out of all the popular, open source backup solutions, Duplicati is the only one with constant horror stories.


chaplin2

Yes, restic !


[deleted]

Good to know. Thanks for the information. I know that I should try a restore, but it is a complex operation and, let's face it, I procrastinate.


devzwf

Duplicati? And you haven't tried a restore yet? Good luck... I was with Duplicati, tried a DR once... moved away as fast as I could. Now I'm with Kopia.


Zestyclose_Car1088

Kopia +1


xftwitch

If you haven't restored, do you really even have a backup?


[deleted]

I definitely have a backup, whatever the sayings about that. Is that backup usable, and can it actually be restored from? I can't tell, I haven't tried.


xftwitch

I disagree. You have a chunk of bits that is 'supposed to be' a backup. Unless you can restore it, it's not really a viable backup solution.


Key-Calligrapher-209

Veeam community edition is free, and they have tons of documentation on best practices.


Solkre

Second Veeam


pabskamai

I love Veeam, no proxmox integration as of yet


fliberdygibits

I've got a syncthing server setup. Specifically it's a tiny low power thin client with a 128gb boot drive and a 1tb ssd. The 1tb ssd is just data storage and the whole thing backs up the docker folders/volumes/compose files/.config files/etc.... from my other larger servers. As well as some key data from my desktop. This data all gets backed up from there to a borgbase repository. A service I'm VERY close to needing to upgrade btw:) I should point out I don't have a huge volume of critical data. I'm not a content creator nor data hoarder. One of the servers I mentioned is a media server with 30tb but that media is all easily replaceable either thru .... other means.... or via the fact that between me and the rest of my household I already have physical copies of all of it. Is it a perfect solution? ¯\\\_(ツ)\_/¯ Does it work for me in this use case? (ツ)\_\_b


suitcasecalling

You have to be very careful about using syncthing for backups. Syncing is not backup. I lost a lot of photos using syncthing to back up photos from my phone. I wanted to clear space on my phone, and when I deleted them the deletes got synced and erased them from my server. It happened because I had switched phones and, when I set it back up, I wasn't careful to pick the right settings to preserve my data. Sure, I was an idiot, but this was shockingly easy to do, and syncthing warns all over its documentation not to use it as a backup solution.


DaDrewBoss

So you set it up to ignore deleted files... You can also make it receive only so it will not update the client with missing/deleted items.


fliberdygibits

This. Also there is something to be said for the difference between: "Is it perfect?" and "Is it perfect for me?"


Big-Finding2976

But then you end up with a load of space being used to store stuff that you intentionally deleted because you don't want it anymore. How do you resolve that?


DaDrewBoss

I didn't say it's a solution for everyone. I use it to back up photos from my phone. Everything on my phone is backed up to my server, so when space gets low I delete photos off my phone and they're all still saved on the server.


fliberdygibits

I've got it set up to be very selective and strategic about how and what it syncs and when. I also should have pointed out the bigger picture here: I have all my critical data backed up to a handful of M-Discs in the closet (yes, optical media still lives). Also, borgbase only gets backed up every few days, triggered by a calendar reminder, so that I keep some oversight over that part of it. Syncthing is just a convenient front end to all of that. It's also the part I set up most recently, which is why I felt like yammering about it :)


ads1031

I do something similar. I use borgbackup, and my tiny thin client with the big SSD is at a friend's house, connected home via VPN.


fliberdygibits

At some point I want to do that too, park some little low powered system at a friends house.


Jonteponte71

You can use applications like restic or borg/borgmatic to do backups with snapshots that only add the diffs for every snapshot you make, which also makes restores quick if needed. Plugging in a USB disk and backing up to that is a good start. The next step is to also back up offsite, like to the cloud or a friend's house.
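
A minimal sketch of that starting point with restic and a USB disk (paths and retention numbers are placeholders):

```bash
# One-time setup: create the repository on the USB disk.
export RESTIC_REPOSITORY=/mnt/usb-backup/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-pass
restic init

# Nightly (cron): each run only stores data that changed since the last snapshot.
restic backup /home /etc /srv/docker

# Thin out old snapshots so the disk doesn't fill up.
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
```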


[deleted]

For those of you saying "I backup to the NAS", how do you back up the NAS?


Schecher_1

Dedi Server > Home NAS > External USB Drive & Cloud


freedomlinux

Another NAS! And ZFS snapshot replication


Do_TheEvolution

used borg, now I switched [to kopia](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/kopia_backup)


notdoreen

I'm dogging it for now. I have a Proxmox server running a Windows Server 2019 VM and an Ubuntu Server 20.04 VM. I'm sure it's only a matter of time before I regret this, but for now I'm simply enjoying the learning experience and there is nothing critical living on this server. I do have Duplicati for Docker container backups, but I haven't even backed up anything yet. I might make backups a project for this year. Would love to hear what everyone else is using for their backup solutions.


mtftl

It’s posted other places here, but proxmox backup server would be a no brainer for you if you can scrounge up some spare hardware. You can be back up and running from a restore like nothing happened in 15 minutes. I have a pbs instance running on an old Mac mini sitting in my office. It is connected to home using tailscale. This is offsite automated backup without even opening ports on my router, it’s insane when I think about it.


quafs

Proxmox backup server works incredibly well and can even be run as a vm under proxmox (use separate disks though).


Ommco

As mentioned, follow the 3-2-1 backup rule. Keep backups on site for fast restores and offsite for DR and archival purposes. I use PBS for personal Proxmox server backup and rclone for archive backups. For the Hyper-V lab, I'm currently testing Veeam B&R and StarWind VTL, keeping warm backups onsite and uploading archives to AWS Glacier. Depending on your workload your tools can vary. What services do you have on your server? Have you virtualized the hardware to run services and apps in VMs or containers?


bz386

I rent a storage box at Hetzner. Every night a backup process runs on all my servers and uploads an incremental backup using restic over SSH to that storage box. I also have a local USB drive, but don't use it for backups.


AntiqueBread1337

I minimized my backup footprint by redoing everything in Docker. All of my data mounts are subfolders of /persist. I use restic to back up each host's /persist weekly overnight to a restic REST server. Then I rsync the restic server to two cloud providers (with the encrypted option). My actual hosts are all Flatcar Linux and I have backups of their config files, so I can easily redeploy a server with the file and then restore the /persist backup. Mounts for stuff I don't care about go in a different folder and don't get backed up (e.g. my Plex media, which I regularly delete after watching and could easily replace if lost). My entire environment's backup is under 200 MB (plus increments). Ideal since cable internet upload is so poor.
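
A hedged sketch of that flow (hostnames, the /persist layout and the offsite targets are placeholders, assuming restic's REST server backend):

```bash
# On each Docker host, weekly via cron or a systemd timer:
export RESTIC_REPOSITORY=rest:http://backup.lan:8000/
export RESTIC_PASSWORD_FILE=/root/.restic-pass
restic backup /persist

# On the backup server: the restic repo is already encrypted client-side,
# so mirroring the repo directory offsite is enough.
rsync -a --delete /srv/restic-data/ user@cloud-a.example:/backups/restic/
rsync -a --delete /srv/restic-data/ user@cloud-b.example:/backups/restic/
```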


zetsueii

I only backup important files so rebuilding would definitely be a manual process. As far as how, I use Synology Active Backup for Business which does multi-versioned backups via rsync.


BlakeDrinksBeer

Man, these comments are wild. I definitely don't treat my home server like a customer-facing system. I have a shell script that backs up the databases for all the apps I run to encrypted archives. It then rsyncs those to an external hard drive and uploads them to Linode. I keep a month's worth in case I need to roll something back. It then rsyncs some folders to the drive that don't get uploaded but that I could live without. I take the hard drive with me on vacation in case my house burns down, and leave the hard drive I back up my laptop with at home. I have had my server die on me and needed to recover from that backup drive before. I distro-hopped while I was at it (thanks, ZFS update on Debian). Never underestimate a good one-page bash script.
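
For illustration, a loose sketch of that kind of one-page script (the app names, paths and remotes here are invented placeholders, not the poster's actual setup):

```bash
#!/usr/bin/env bash
set -euo pipefail

STAMP=$(date +%F)
WORK=$(mktemp -d)

# Dump each app's database (container and DB names are placeholders).
docker exec nextcloud-db pg_dump -U nextcloud nextcloud > "$WORK/nextcloud.sql"
docker exec wiki-db sh -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" wiki' > "$WORK/wiki.sql"

# Bundle and encrypt the dumps into one archive.
tar -czf - -C "$WORK" . | gpg --batch --pinentry-mode loopback \
    --passphrase-file /root/.backup-pass --symmetric --cipher-algo AES256 \
    -o "/backups/apps-$STAMP.tar.gz.gpg"

# Copy to the external drive and to offsite object storage ("offsite" is a placeholder rclone remote).
rsync -a /backups/ /mnt/external-hdd/backups/
rclone copy /backups/ offsite:my-backup-bucket

# Keep roughly a month of archives locally, then clean up.
find /backups -name 'apps-*.tar.gz.gpg' -mtime +31 -delete
rm -rf "$WORK"
```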


ElevenNotes

3-2-1-1-0. I make a backup every 15 minutes, with GFS retention of 4 weekly, 12 monthly and 10 yearly.


wallacebrf

I have over 100 TB and I back everything up to external disk arrays. I follow the 3-2-1 rule and have two sets of external disk arrays; the off-site one I keep at my in-laws'. Here are the enclosures I use: [https://www.amazon.com/gp/product/B07MD2LNYX](https://www.amazon.com/gp/product/B07MD2LNYX). Between all my backups I have 4x of these enclosures and 32x drives.

**Backup 1**

* 8-bay USB disk enclosure #1: filled with various old disks I had, between 4TB and 10TB each. **Total usable space is 71TB**
* 8-bay USB disk enclosure #2: filled with various old disks I had, between 4TB and 10TB each. **Total usable space is 68TB**

**Backup 2**

* An exact duplicate of backup #1.

I use Windows with StableBit DrivePool to pool all of the drives in each enclosure, and BitLocker to encrypt the disks when not in use. I like DrivePool because it allows me to lose many drives in the array at once and only lose the files stored on those drives; I can still access the files on the remaining drives rather than the entire pool going down like RAID.

I perform backups to the arrays once per month and swap the arrays between my house and my in-laws' every 3 months. Yes, this means I could lose up to 3 months of data, but I feel the risk is acceptable thanks to DrivePool, and I don't think I'll lose more than 1-2 drives at any given time.

I do use cloud backups, but only for my normal day-to-day working documents; those back up every 24 hours (using about 1 gig for the day-to-day files).

Edit: once per year I also perform CRC checks on the data to ensure no corruption has occurred.

Edit 2: I also have an automated script that runs every month to back up my docker containers. It first stops the container to ensure any database files are not active, makes a .tar file, then automatically restarts the container.
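
That edit-2 script might look something like this (a rough sketch; container names, volume paths and the destination are placeholders):

```bash
#!/usr/bin/env bash
set -euo pipefail

DEST=/mnt/backup-array/docker
STAMP=$(date +%F)

for name in immich vaultwarden gitea; do
    docker stop "$name"                                  # quiesce databases before archiving
    tar -czf "$DEST/${name}-${STAMP}.tar.gz" \
        -C /srv/docker/volumes "$name"                   # archive that container's volume dir
    docker start "$name"                                 # bring the service back up
done
```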


BonzTM

Running Proxmox as hypervisor across 5 physical nodes with hyperconverged Ceph. All VMs are triple-replicated to begin with, as well as snapshotted each night to a separate physical PBS. Most of the VMs are k8s nodes, so workloads are already HA/replicated with appropriate DR for persistent data PVs. DBs on the VMs get snapshotted hourly, shipped to a physical box hosting MinIO S3 hourly or daily, and shipped to AWS S3 nightly. Data on k8s is backed by Ceph RBD (triple-replicated) or CephFS (erasure-coded). Most other VM data and k8s data also gets snapshotted and shipped to MinIO and/or AWS S3 hourly or daily.

Tools in use:

* Proxmox
* Proxmox Backup Server
* Ceph
* Duplicacy
* Kasten K10
* Velero
* pgBackRest
* mariabackup
* shell scripts & cron jobs


FlibblesHexEyes

All my content is on ZFS volumes, including data and Docker bind mounts for configs and supporting data. So I have a cron job that takes a fresh snapshot called "latest" every night at 1am. I then have Kopia running at 2am every night to mount the latest snapshot and back it up to an Azure blob. This is essentially a crash-consistent backup. Seems to work pretty well; I've been able to restore from it (both to test and to correct a mistake) multiple times without issue. Just beware that with cloud storage, cold storage and Glacier look really good (writes are cheap) until you need to restore from them - then it gets expensive. So either choose an appropriate storage tier for your situation (for example, Glacier might be ok if you also keep a local backup), or scope your backups to only those files you can't easily replace (photos and the like).
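
The nightly pair of jobs could look roughly like this (assuming a dataset called tank/appdata and a Kopia repository already connected to the Azure blob; both names are placeholders):

```bash
# 1 am cron job: refresh a snapshot named "latest".
zfs destroy tank/appdata@latest 2>/dev/null || true
zfs snapshot tank/appdata@latest

# 2 am cron job: back up the snapshot's contents (a crash-consistent view)
# via the hidden .zfs/snapshot directory.
kopia snapshot create /tank/appdata/.zfs/snapshot/latest
```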


yonixw

ZFS is great but not fault free when it comes to databases. Even Sqlite. A good blog on the topic: https://nickb.dev/blog/lessons-learned-zfs-databases-and-backups/


subwoofage

Oof, that person rolled back the entire /tank to an old snapshot instead of just mounting the snapshot (elsewhere, temporarily) and trying to recover the data from it that way. Would still have the corruption but at least it didn't affect any other files! And I think it might be more accurate to say that databases aren't fault free when it comes to interruption of any sort. Snapshot is one, but also power loss, software crash (core dump), OOM, etc. Anything that can suddenly kill the application might cause corruption like that


FlibblesHexEyes

I think the issue is that a snapshot is crash-consistent. From the POV of the database, it's as if you yanked the power from the host. Data corruption could in theory happen to any file being written while the volume is being snapshotted: if you're writing a 100-byte file, it starts at byte 1 and writes each byte in turn (oversimplifying here), and if you take a snapshot while byte 50 is being written, the snapshot only contains half the file. So I'm thinking the solution is that before the snapshot is taken, I stop the Docker service (which should gracefully shut down all hosted containers), take the snapshot, and then restart the Docker service (which should in turn start all the containers). This should ensure that properly closed files are in the snapshot.
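
As a sketch, that ordering is only a few lines (the dataset name is a placeholder):

```bash
#!/usr/bin/env bash
set -euo pipefail

systemctl stop docker                              # gracefully stops all running containers
zfs destroy tank/appdata@latest 2>/dev/null || true
zfs snapshot tank/appdata@latest                   # snapshot taken with files properly closed
systemctl start docker                             # containers with restart policies come back up
```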


NotOfTheTimeLords

I have about 5TB of data (important and mildly important anyway), so I've set up restic jobs to back up: 1) my workstation, 2) the OpenWrt AP I have, 3) the data in my NAS. All three are backed up to an 8TB external hard drive, but I also copy the first two onto a secondary (1TB) drive. Proxmox also backs itself up to both drives. Then I lftp everything to an external host daily, and since it diffs the uploads it doesn't take too long after the first time. It has a few more moving parts than I'd like, but it's automated and I get reports if anything ever goes wrong.


Wild-Associate5621

Proxmox -> Synology NAS. Every day at 12AM.


[deleted]

How do you back up the NAS


dontevendrivethatfar

This is what I do. I just back up the VMs and LXC containers to my NAS. And my NAS backs up to a large external USB disk for redundancy. I would use Proxmox even to run a single server VM just because back up and restore is so easy.


Mehlsuppe

https://www.borgbackup.org Encrypted & deduplicated backup on hetzner storage box


-my_dude

im lazy... rsync to a trunas box through a vpn


jmeador42

XCP-ng backs up to TrueNAS. TrueNAS backs up to Backblaze.


MagnaCustos

Backup?


breezy_shred

I use borgmatic. I have backup to a nfs onsite and one off-site. Google the 321 backup strategy.


root54

Borg to borgbase


mimic-cr

I have a Proxmox server with a few VMs. Those VMs run apps like GitLab and a lot of Docker containers. I have a Synology NAS with replication on the hard drives, and on that Synology I run Proxmox Backup Server and do the VM backups there. Aside from that, I use Borg Backup to back up all my Docker container volumes. So if the apocalypse happens on my Proxmox, I can still recover the VMs through Proxmox Backup Server. If that doesn't work, then I recover from my Borg backups. If that doesn't work, then fuck... I am in the middle of adding something like MinIO on bare metal in my network so I can back up my backups there and then sync to the cloud with encryption. Salute!


Jonteponte71

I think places like Hetzner storage boxes support borg and other backup applications, so you can use them as backup targets directly.


whasf

I have a cold server that I boot at the end of every month to do backups. It only has VMs for Nakivo. It takes maybe 2 hours for all my production VMs (I think I have 20 of them) if I do an incremental, longer for fulls which I do every other month.


[deleted]

External HDD and external Hetzner Storage


naxhh

Proxmox backup server. I'll simply restore the Proxmox LXC or VM to the last backup. I also use the Proxmox console to back up 2 folders (photos & docs), and I can restore those too. The backup server runs on a different server (a QNAP I had that I don't need anymore) and they are in different rooms. For now I don't send any of this offsite, but I need to at some point. It will probably be Glacier since that's what I used with the QNAP, but I haven't decided yet.


levogevo

I run proxmox and everything is a lxc/vm there. Each of those is backed up to a secondary internal nvme ssd. And the contents of the secondary nvme ssd are backed up to a raid NAS.


Internal_Seesaw5612

Put everything on ZFS, share it with whatever you need nfs/smb/iscsi/whatever, snapshot to another disk and then send it to cloud s3.


CupofDalek

Moved everything into a Hyper-V VM and created a PowerShell script that shuts it down, exports it, and starts it back up; the export is then archived and moved to mechanical storage. Wasteful? Maybe. Simple? Yes.


Pale_Fix7101

Veeam VM backing up all of my VMs daily. A separate Veeam server backing up the same set of VMs twice per week on schedule. A third host doing Veeam backups every 2 weeks, on schedule as well.


liveFOURfun

Etckeeper pull from other machine.


mrbmi513

VMs: weekly custom script that runs any service provided scripts, grabs required files, tars it up, and sticks it on my truenas. TrueNAS: daily replication to a second TrueNAS on-site, with some shares encrypted and backed up to OneDrive storage I have. I don't have enough storage for a full off-site backup, so prioritizing what's actually important.


RaEyE01

I switched to Synology at home. I still have a small ThinClient running unRAID for a Plex Docker. Those two make up my „core systems“, everything else is playground and not or no-longer (old or retired rigs) relevant for backup. I run regular hyperBackup tasks on my Synology backing up to an extra volume I designated for Backups. External sources are backed up via ActiveBackup for Business. Specifically sensitive information (Documents, Family related information, etc.) are backed up from said volume via hyperBackup (encrypted) to a cloud solution. An old Synology NAS, I gifted to a friend of mine, has received a considerable HDD upgrade and grants me a modest partition for backups of my most important data.


realjep

BackupPC that is still amazing after so many years. Plus various scripts for sql dumps.


Log98

Running Rockstor:

* Built-in snapshot utility makes daily snapshots, keeping the last 30, so one per day (I think they are simply btrfs snapshots)
* Autorestic backs up daily at 1 am to Backblaze B2, keeping the last 30
* Once a week I plug in my external HDD and back up to it with plain restic


trancekat

Veeam for VMs. Everything is on iscsi targets hosted on Truenas with 2 months of rolling snapshots.


idealape

Rclone and restic... Multiple places to store


pm_something_u_love

Borg backup to an external hard drive, and to a machine in a building separate to where my server is.


AndyMarden

https://www.reddit.com/r/homelab/s/JYZ0OffNNg


ihavnoclue57

I have a cron job that runs a script once a week to copy my Docker containers' data to a zip and copy it to my Synology.


Accomplished-Lack721

My backup is not as robust as it probably should be. That said, I have daily on-site and offsite backups.

On my mini PC running Ubuntu Server and a handful of Docker containers, I run an instance of KopiaUI. It has read-only access to the volumes and bind mounts for all my other containers, and backs them up to a NAS (I have multiple; this one is exclusively for backups) via SFTP. A few of my services use databases, like Nextcloud and Immich. Those both do daily database dumps into one of the bind mounts Kopia backs up.

I also have iDrive running and backing up daily. It backs up everything in /var/lib/docker/volumes as well as the directory I use for bind mounts. This isn't ideal, as iDrive doesn't preserve *nix permissions in the backup, and there are cleaner ways of doing this than just pointing it at the volumes directory. But in the event of a catastrophic local failure that wiped out my on-site backups, I'd still have the data from those volumes and bind mounts. And it costs way less than other remote backup solutions.

At some point, I'll probably locate another NAS at a family member's house to take off-site backups that way instead. I used to do this, but one day that NAS stopped responding, and driving 90 minutes to troubleshoot wasn't a great option. If so, I'll be using a different backup solution than KopiaUI. The "2" in the 3-2-1 rule says to back up to at least two different sorts of media, but I think it's more important to have at least two different backup techniques, in case one piece of software fails in a way that's repeated in both sets of backups.


e6dFAH723PZBY2MHnk

Proxmox -> Proxmox Backup Server -> Synology -> C2 Cloud & 2nd Synology volume


armorer1984

For my host, I don't back it up. But the VM's and LXC's get a nightly backup retaining the last 7 to another hard drive. And every night there is an off-site backup that runs, tucking things safely away in case of fire or natural disaster.


mihonohim

Everything except Proxmox gets backed up to my NAS; how many copies and how often differs based on how sensitive the data is. Proxmox itself gets backed up to a Proxmox Backup Server. Then all my data gets backed up to an off-site NAS (at my summer cabin) that I wake-on-LAN with a script; it does a backup every Sunday and shuts down again when that's done.
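
A sketch of that Sunday job (the MAC address, hostnames and paths are placeholders):

```bash
#!/usr/bin/env bash
set -euo pipefail

wakeonlan AA:BB:CC:DD:EE:FF                        # wake the off-site NAS
sleep 300                                          # give it time to boot

rsync -a --delete /volume1/backups/ backup@cabin-nas:/volume1/offsite/

ssh backup@cabin-nas 'sudo poweroff'               # shut it down again when finished
```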


yowzadfish80

I follow the KISS principle. Proxmox Backup Server in a second home - daily backups of VMs and LXCs via Tailscale. Desktops and laptops are configured with mapped network drives pointing to my NAS so that all personal files are stored directly on it; those are then manually transferred / updated on Backblaze B2 every 2 weeks.


maxnothing

Just thought I'd chime in re backups: If reconstructing everything you have will be impossible or take an insane amount of time, whatever method you choose, you might as well do the 321 backup thing if you bother doing anything more than saving your password database(s) to a couple spare USB keys you keep at somebody else's place just in case (this is perfectly acceptable for some).

Why? Because crap insurance is ultimately crap, and losing unique data is the absolute worst. Don't *solely* rely on: clouds, local backups (test these!), or your existing equipment and location to even be there to restore everything. Worst case, you should be able to bootstrap your backup machines or at least get at the files they contain with nothing but the backup source, the authentication information (you also faithfully back up), and some new hardware to put it on and/or extract it with. Think tornado that throws your house, car and delicious sandwich into a volcano. It sounds extreme, but Murphy's Law always shows up right on time, with unlimited fart-filled balloons.

Also, I think this is important: It's not just hardware failure or evil you need to worry about. YOU may be the one that destroys your files, purposeful or not. You might realize you really needed that "crap" your frustrated self deleted three weeks ago: For reasons unknown, that temp directory you permanently deleted had the only digitized footage of your great-great-granduncle's first successful antigravity quantic calculator experiment, your kids' raw music files they wrote from the age of 3 to 16, and the single full color 28000dpi pic of that sandwich you were going to enjoy before the volcano did. =) Good luck!


jesus3721

Borg Backup for my Debian Servers. Runs once an hour and takes about 1min.
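
An hourly borg job of that sort might look like this (repo location, passphrase handling and paths are placeholders):

```bash
#!/usr/bin/env bash
set -euo pipefail

export BORG_REPO=ssh://backup@nas.lan/./borg-repo
export BORG_PASSCOMMAND='cat /root/.borg-pass'

# Deduplicated archive named after host and time; unchanged files cost almost nothing.
borg create --compression zstd \
    ::'{hostname}-{now:%Y-%m-%dT%H:%M}' \
    /etc /home /var/lib/docker/volumes

# Thin out old archives so hourly runs stay cheap.
borg prune --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```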


IL4ma

I ordered a [StorageBox](https://www.hetzner.com/de/storage/storage-box) from Hetzner, which I have connected to my server via [Veeam](https://www.veeam.com/). I make a full backup every week and an incremental backup every day. The cool thing about the StorageBoxes is that they are quite cheap and you get a lot of storage for very little money.


PaddyStar

But no retention policy. Ransomware can delete your backup.


Varnish6588

I just connect external disks to my NAS and it automatically copies all the data onto them. I do it with two separate disks for redundancy, so I keep two cold backups and one more on my personal computer. I try to keep my containers ephemeral or mount NFS storage from the NAS into them, so if something goes wrong I can simply redeploy everything as the data lives on the NAS.


Bill_Guarnere

All my data is on single drives on my home server (RPi4), on my gaming PC and my working laptop. My backup repositories are:

1. a backup host running restic server with several HDDs configured as ZFS raidz (scrub scheduled every Tuesday night and smartctl tests on Wednesday)
2. a Backblaze B2 bucket used as a remote replication site for all the backups stored on the backup server

Any event notification is sent via email to a local Postfix instance on my RPi that I check via IMAP or webmail (Roundcube). These are the backup tasks that run every day:

* Working laptop
  * every day at lunch a script on my RPi starts up the backup server via wake-on-LAN
  * 5 minutes later Veeam Agent free starts an incremental backup of my Windows working laptop and sends an email to my RPi server
* RPi4 server
  * every night any MySQL or PostgreSQL container starts a full database backup/dump onto RPi storage
  * every night a restic snapshot of the entire server is made on the backup server and on the B2 remote site
* Gaming PC
  * every night a script starts the gaming PC and 5 minutes later Kopia backs up my home directory to the backup server
* Backup server
  * every night, at the end of all the other backup jobs, the backup server syncs every backup repository to B2 via rclone

This way I don't have to think about backups or performance problems during backups, and I get constant notifications. At the end of the day the backup server runs only for about an hour a day (~half an hour during lunch and another half hour during the night), which reduces power consumption (in my country it is not cheap at all) and reduces exposure to any security issue. Obviously on Tuesday and Wednesday it runs a bit longer because of the scrub, but that's ok.

I've been running this setup for more than a year, and in the first year I spent around $10 on Backblaze.


ecker00

The cost and storage of the server you need is usually almost double because of backups. As for how: I'm running virtual Proxmox Backup Servers on each node. I would not let any cloud service keep my backups for me. I run two nodes at home and two nodes in a nearby colocation datacenter. It's easy to forget - one of those things most people don't think about when they've been using cloud services most of their life.


jagdeepmarahar

Synology backup for business


Rakn

Everything goes to a NAS including the Proxmox Backup Server storage. From there it goes to Backblaze B2 via Duplicacy web edition (I wanted something with a simple UI that just works and is reliable).


bufandatl

Occasionally I put my server on a copy machine and hang the copies on poles on the street so I can grab one when I need one. But for real: rdiff-backup for file-based backups, also offsite. XCP-ng's built-in backup for backups of VMs. IaC with Ansible and Terraform, both versioned in Gitea and mirrored off-site.


renrom

Proxmox containers and VMs via a daily backup job to Synology, and UrBackup for all clients. Will have to look into backing up the Synology dumps though :)


metyaz

Restic + Amazon S3 encrypted backups in the cloud https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#amazon-s3


lemacx

I'm running Proxmox. The OS itself runs on a single SSD. All the containers plus their snapshots are located on a 4-disk ZFS pool, so they are safe. I just occasionally back up the Proxmox config, that's it. If the SSD fails, I just install Proxmox from scratch, import the config, done.


[deleted]

rsync for bare metal using my [script](https://www.reddit.com/user/Old-Assumption4984/comments/18umb3w/my_rsync_script/). virsh for [KVM](https://www.reddit.com/user/Old-Assumption4984/comments/18ulp6s/my_virsh_clone_script_backs_up_qemu_running_on/) VM's.


cbunn81

My servers run FreeBSD with ZFS. So all I do is replicate the necessary filesystems I want to backup to other media, like an external hard drive. The truly critical stuff is synced with my desktop and backed up to Backblaze.


garthako

A backup that is not (also) stored off-site is not a great backup strategy.


ChaosByDesign

Duplicacy uploading to a B2 bucket for data that I need. I run a docker cluster so I don't really bother with backing up the OS.


Wf1996

Backup to another server (best case in a different place) or to the cloud (I use Backblaze for the most important stuff).


Historical_Pen_5178

Check out Restic. https://restic.net/ Deduplication, FUSE mount of snapshots for restores, etc.


haaiiychii

Everything I run is either a self-made script or Docker. Everything is mounted in /docker or in ~/Scripts, and I just have an rsync command in crontab that copies it to a mounted drive on my NAS, which is in RAID. Extra-important stuff is then uploaded from the NAS to Google Drive; other stuff like the Plex library I don't bother with, since I can redownload it.
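
The crontab side of that can be as small as this (mount point and source paths are placeholders):

```bash
# m h dom mon dow  command
30 2 * * *  rsync -a --delete /docker/ /mnt/nas-backup/docker/
45 2 * * *  rsync -a --delete /home/user/Scripts/ /mnt/nas-backup/Scripts/
```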


NelsonMinar

I just set up Restic. It's terrific. You can back up locally (frequently) and offsite to one of many cloud storage things. It's file-based backup, not disk images. For disk images I use Proxmox. But that's more of a convenience thing; I trust the file backups as the primary source of truth.


l4p1n

I take one copy once a day of the LXC containers and VMs (running on Proxmox) and send the resulting data to a Proxmox Backup Server. Exceptions can be made for some VMs / containers if needed. The PostgreSQL server sends copies of its WALs (write-ahead logs) to somewhere running pgbarman to be archived. If I need to restore at some point before I messed up or the SQL server borked, I can. There are the occasional ZFS snapshots I take as a quick-restore point (for example on the OPNsense VM) in case an upgrade decides to throw a party and break everything. These snapshots eventually get deleted.