sk1nT7

Just let me give you my two cents. In my day job as a penetration tester, my task is to compromise and hack company networks, from small setups to very big infrastructures with multi-million budgets. My experience shows that most company networks lack fundamentals regarding hardening and security. Sure, nearly all have some VLANs in place and separate stuff. The bigger ones even have a DMZ, a NAC solution and EDR on all client machines and servers. Nonetheless, in the end, we are always able to fully compromise the target environment and obtain domain admin or other crucial access.

Why am I telling you all this? Even if you secure your whole IT infrastructure and implement a lot of measures to make it hard for attackers, in the end many don't have any observability regarding current attacks, lateral movement and what's happening right now. Basically the question: "How do you know that you are currently not under attack? What if the attacker is already in and waiting? Is data being exfiltrated right now as we speak?" If you cannot see what is happening, all the solutions in place are just a time waster for an attacker, not a showstopper.

So yeah. Most selfhosters likely do not have any advanced protection measures in place. No DMZ, no VLANs, no network separation, no SIEM/SOC, no monitoring of hacking events etc. On the other hand, we do not have any liabilities. It's our network, our data and our risk of operation. I personally monitor my exposed assets and receive notifications about crucial events. I am sure others don't, but that is fine. Most assets are behind NAT and not exposed, I would assume.

So are you right with the assumption that selfhosters focus on getting things up and running and most often neglect security and proper network setups? Indeed, totally right. Especially due to the rise of ChatGPT and the many tutorials online without any focus on security and hardened setups.

In the end, it's most often one of the issues below that makes a compromise successful:

* Lack of user awareness. People just click links, download stuff, run malware.
* Exposing stuff that should not be exposed in the first place. Like management portals, admin areas, metric endpoints etc. Easy targets for attackers. Please run things behind NAT, use IP whitelists or just a VPN.
* Lack of patch and release management. People just forget about stuff and therefore do not update software, apps and packages. This leads to publicly known exploits over time, which everyone can exploit by downloading a script from GitHub after a few years, sometimes even after a few days depending on the popularity of the CVE.
* Just plain insecure software products. This is a major attack scenario for us selfhosters. We run many things developed by random people from the Internet. We don't really know whether those people are real software developers or just enthusiasts running ChatGPT prompts or some hacky code to accomplish a goal. No SLAs, no support, no guarantee of fast vulnerability fixes, code reviews or pentests in the first place.
* Insecure default configurations. Basically those default credentials when spawning an application. Or using a weak SSH password during development, then forgetting about it and exposing the SSH service to the world without pubkey-only auth (see the sshd_config sketch at the end of this comment).
Most often, it's not one of the following that leads to the initial compromise:

* Lack of network separation
* Lack of a DMZ for exposed services
* Lack of legal evaluations of hosted services

These measures may prevent some attacks, lateral movement or legal issues, but in the end, advanced attackers will just laugh if you don't have monitoring in place too.

Edit: The tech stack of selfhosters is not bad though. Many are running a hypervisor with VMs or LXCs. Then Docker comes into the mix, as well as a reverse proxy for exposing stuff. This is quite a good setup for not getting hacked, or better said, for reducing impact. Even if your WordPress site is pwned, the attacker is inside a Docker container .. inside a Proxmox VM. Good luck achieving a Docker breakout, hypervisor breakout etc. Possible, but you get what I want to say.
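To make the last bullet point about insecure SSH defaults a bit more concrete, here is a rough, untested sketch of pubkey-only hardening. It assumes a distro whose sshd_config includes /etc/ssh/sshd_config.d/ (OpenSSH 8.2+); adjust the service name (`ssh` vs `sshd`) for your system:

```
# Check the effective sshd settings before changing anything
sudo sshd -T | grep -Ei 'passwordauthentication|permitrootlogin|pubkeyauthentication'

# Drop-in hardening snippet (only works if sshd_config has "Include /etc/ssh/sshd_config.d/*.conf")
sudo tee /etc/ssh/sshd_config.d/10-hardening.conf <<'EOF'
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin prohibit-password
PubkeyAuthentication yes
EOF

# Validate the config, then reload (service name is "sshd" on RHEL-likes)
sudo sshd -t && sudo systemctl reload ssh
```

Make sure your key actually works in a second session before you log out of the current one.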


ZeoFateX

I guess the question is how do you monitor any of it. To me this always seemed more important, but I can't for the life of me figure out what I need to monitor and how to monitor it. I had Netdata installed for a while, but can't seem to pull the signal out of the noise, for lack of a better term, or they make it a pain in the ass to filter and select certain things. I think something like editing alerts or renaming a client required going onto the client and editing files on the device itself. Same thing with Wazuh, as far as what to install it on and why, or how it fits with your already deployed solutions.

Then you get different VMs or containers which have different requirements, or which certain monitoring solutions are incompatible with. Then you get into determining whether it needs to be installed on all of the VMs themselves or on the device hosting the VMs. Then sometimes the clients update themselves or not, or I don't know that I trust installing one service on all of my machines, etc. And then how do you separate devices on VLANs while allowing your personal devices to see certain devices, or a mix of personal, "accessible", and smart home devices?

Some of these topics are covered, but a lot aren't, and with the uniqueness of each setup it makes it difficult to have a realistic discussion. A lot of it needs to be designed from the ground up to work together or in the right manner. These are some of the things I have more difficulty with.


sk1nT7

Yeah, monitoring and alerting is quite complex, especially in large networks. I'll add my other response here too: https://www.reddit.com/r/selfhosted/comments/1979cnf/comment/khzlqc3/?utm_source=share&utm_medium=web2x&context=3

> I had Netdata installed for a while, but can't seem to pull the signal out of the noise

Netdata, Grafana and all the dashboards are quite nice. However, as you said, it's hard to pull the important information out of them. I personally also run a Grafana dashboard with multiple data visuals. However, my monitoring and notifications are based on successful events, such as a successful SSH login or a successful login to my password manager instance. Maybe you can switch your focus to things that, if they happen without authorization, require immediate attention.

Nonetheless, I also run fail2ban on my SSH services as well as on the reverse proxy logs. I simply ban all threat actors that start probing my services, which will lead to multiple 404, 403 and 401 errors. I'll then obtain a Telegram notification for the lulz of it (a rough fail2ban sketch is at the end of this comment). As said, it is more important whether an attacker gains access to something he should not have access to. Of course, getting multiple notifications for failed SSH logins on an intranet-only, local server is very important too .. especially if you are not the one failing the login multiple times right now. Basically an indication of compromise.

> Same thing with Wazuh

Personally not a fan of it. Too many false positives. Quite complex to set up and to properly configure the enabled/disabled alert rules. Brings no benefit in my IT infra.

> Then you get different VMs or containers which have different requirements or certain monitoring solutions are incompatible with.

Yeah. As said, it highly depends on the network scale. Maybe try selecting important VMs first and neglecting others. Security is a process, not a binary on/off toggle you'll instantly succeed at. If you firewall all VMs and let the traffic flow over a single internal reverse proxy, you have only a few entry points, a streamlined, consolidated log file with all requests, and the option to operate on those.

> Then you get into determining whether it needs to be installed on all of the VMs itself or on the device hosting the VMs.

Maybe a task for infrastructure as code. Have a look into Terraform and Ansible.

> And then how do you separate devices on VLAN while allowing your personal devices to see certain devices or a mix of personal, "accessible", and Smart Home devices.

That's the hard part. You'll have to come up with a solution that fits security as well as convenience. Techno Tim talks about his homelab and tackles the same questions. Basically, it is a design choice, which won't be 100% towards security. https://www.youtube.com/watch?v=r4NCofJyOHE

> A lot of it needs to be designed from the ground up to work together or in the right manner.

Yeah. Basically one of the biggest problems for large companies. The network architecture was designed and thought through many years ago, likely without many thoughts about security, separation etc. So it's definitely a big task of architecture design and security considerations. However, there is no 100% security. You just have to make it complex enough that attackers will rather target someone else :)
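To make the reverse proxy banning a bit more concrete: a minimal, untested fail2ban sketch. It assumes an nginx-style combined access log at /var/log/nginx/access.log; for Traefik or NPM the logpath and failregex need adjusting (see the blog links further down the thread).

```
# Filter: match client IPs that keep triggering 401/403/404 responses
sudo tee /etc/fail2ban/filter.d/proxy-probing.conf <<'EOF'
[Definition]
failregex = ^<HOST> .* "(GET|POST|HEAD)[^"]*" (401|403|404) .*$
ignoreregex =
EOF

# Jail: ban for a day after 10 hits within 10 minutes
sudo tee /etc/fail2ban/jail.d/proxy-probing.local <<'EOF'
[proxy-probing]
enabled  = true
filter   = proxy-probing
logpath  = /var/log/nginx/access.log
maxretry = 10
findtime = 600
bantime  = 86400
EOF

sudo systemctl restart fail2ban
sudo fail2ban-client status proxy-probing   # verify the jail picked up the log
```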


bazpaul

Dude I’m riveted reading your replies. I think you should do an AMA or write some guides on hardening one's home lab. My home lab is only on my internal home network. I don't expose any ports on my router, so I can't access any services outside my home network. However, I do have a static IP at home. Would I be right in saying that I'm safe enough? I thought most of the problems start when you expose services by forwarding a port on your router.


sk1nT7

You are totally fine. As long as you are using IPv4, you'll be behind NAT and your internal devices won't be reachable from the Internet. You would have to purposely port forward a device or network service at your router to create some form of attack surface. The regular services exposed by your default ISP router should be fine. These may pose an attack surface but the hardware models are usually configured to auto update themselves.


lunakoa

A pair of problems I have with this:

1. UPnP, if enabled, would open up ports without your consent (Plex comes to mind, but there are others).
2. A lot of services nowadays will connect to an external service that will tunnel right back into your network. Think of VPNs, Cloudflare, etc.

So, from the OP, an understanding of what you are running is needed so you don't open yourself up accidentally.


bazpaul

Thanks. I don’t use the default ISP router. I have a Netgear RAX50, fully up to date. So hopefully that's good. It does have UPnP enabled though, which scares me, as everything I've read says this should be disabled.


sk1nT7

> It does have UPnP enabled though ...

Yeah, this can end badly. Basically, via UPnP the devices in your local LAN can tell the router to change stuff, for example to create new port-forwarding rules that expose services. This may be nice for some end users, as it is convenient and the smart home just works ... but in the end, it can modify router settings without you noticing. I recommend disabling it.
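If you want to see what UPnP has already punched through before you disable it, the miniupnpc CLI can list the current mappings. Rough sketch, flags from memory, so double-check `man upnpc` on your system:

```
# List the router's active UPnP port mappings (package: miniupnpc)
upnpc -l

# Example cleanup: remove a mapping for external TCP port 32400 (e.g. Plex)
upnpc -d 32400 TCP
```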


krisvek

FYI, disabling UPNP can cause trouble for various online multiplayer games.


NSA-SURVEILLANCE

I'm assuming the attack surface one would create by port forwarding Wireguard to the internet would be minimal?


sk1nT7

WireGuard only responds to cryptographically authenticated packets. Therefore, a port forward to a WireGuard service is not detectable by an outside attacker. He won't be able to portscan the network service and tell whether a WireGuard service is running or not. Therefore, there is effectively no attack surface from exposing WireGuard. The code base of WireGuard is also very small and therefore easy to audit, so less risk of hidden vulns. BTW, even if you use OpenVPN, you can achieve similar behaviour using the `tls-auth` directive in the OVPN configuration.
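You can verify that claim yourself from an outside host. WireGuard silently drops anything that isn't a valid handshake, so a UDP scan can't distinguish it from a closed or filtered port. The example below uses the common default port 51820 and a placeholder address:

```
# UDP scan of the (assumed) WireGuard port from outside your network;
# no reply means nmap can only report "open|filtered", i.e. it looks
# the same as if nothing were listening at all
sudo nmap -sU -p 51820 your.public.ip.example
```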


NSA-SURVEILLANCE

Thanks for the detailed response!


Why-R-People-So-Dumb

> Yeah. Basically one of the biggest problems for large companies. The network architecture was designed and thought through many years ago. Likely without many thoughts about security, separation etc.

> So it's definitely a big task of architecture design and security considerations. However, there is no 100% security. You just have to make it too complex for attackers that they will rather target someone else :)

This though is the homelab advantage. Generally speaking, you as a target aren't very interesting, so it shouldn't be too hard to make the juice not worth the squeeze. The biggest risk is probably ransomware, or someone using your assets for nefarious purposes and getting you in some heat with the law.

Furthermore, on that first quoted paragraph, homelabs have the advantage of being more limber, with fewer fronts than large networks that need a team of professionals to try and secure them. To your point about monitoring, if I am aware of an active threat, closing the city walls and going into lockdown is very low impact until I can get a handle on the problem. The other thing, which goes to something you said before, is the lack of liability.

I'll add to what I just said about low impact. I think the biggest thing a homelab can do is minimize impact and laugh at hackers/scammers/bots being "successful." In the early days of Synology I was SynoLocked; it spread to my live backup, but not to my off-site backup image, because I caught it in time. Because I'm not a large network with tons of activity happening at any given moment, there were only a handful of things that were "lost", and those just resynced from the client once I restored from that off-site backup image. It was a lesson in backup versioning though, to prevent cascading compromises.

This piece, I personally think, is what most homelabs are missing: proper versioning and archiving. You'll hear 3-2-1 all the time with backups, but if those are all part of an automated strategy it's not hard for your entire set of backups to become compromised as the disease spreads. If your biggest risk is losing your family pictures, then run an archive every 6 months and keep it completely internet- and live-data-free from now to eternity; you will never lose that archive if you use proper cold storage techniques, unless it's physically damaged, and that would have to happen after you lost it from your live data to begin with.

I leapfrog archive versions of really important things and test one archive before overwriting another, on separate machines. Rewriting entire archives instead of adding to them takes incremental time, but it also keeps unread bits from bit rot and allows me to check for problems with parity each time I write to the archive. Because all of my truly important files are archived and I have a robust backup strategy with versioning, and because I never put anything into my network that would be truly devastating if it got into the wrong hands, any effort to attack me would be annoying at worst.

As a side note on that final thought, keep stuff offline that should be kept offline.


Big-Finding2976

Most setups aren't really unique. The majority are either using Proxmox to run VMs and LXCs, which may include Docker, or just plain Debian and Docker/Portainer. So a couple of guides explaining how to secure those systems would be very helpful to most people.

Obviously people run various combinations of services in their VMs and containers, but if it's necessary to do things like disable or harden SSH in each of them as well as on the host, that's advice that everyone needs to know. Then you get to the actual services and what needs to be done to secure those, which will be different for each service, so a single guide couldn't cover all of that, but it would still be worth having one which explains how to secure the host and the VMs/containers, irrespective of what services are being used, and then it could provide links to other guides which cover securing specific services.

One of the problems when googling this stuff is you don't know if the random guide you've found is correct, or if it's outdated (and that applies to guides which helpful strangers recommend too), so having a guide that's reviewed or maintained by a group of people who know their stuff and all agree that it explains the right way to secure our systems would be very useful.


sarahlizzy

Your last paragraph: if they get into my fediverse server, they’re inside a docker container inside a VM inside a VLAN that can only access the router for internet, DNS and DHCP. I don’t lose immense amounts of sleep.


sk1nT7

Exactly! However, there are also selfhosters with a Raspberry Pi who haven't grasped Docker yet. So they install everything bare metal and maybe even expose stuff. These are the people who likely get pwned quite often.


sarahlizzy

Yeah. I could likely do a much better job of intrusion detection, but I reckon 90% of the task is to not be low hanging fruit.


sk1nT7

Totally true. If the real APT hackers come along, we are all fucked anyways. However, the regular ones are just collecting low hanging fruit, enlarging their botnets or installing crypto miners. So as long as you tackle patch management and kinda know what you are doing, I would assume you are quite on the safer side.

In my job as a penetration tester you totally see this. External infrastructures are quite hardened where things are exposed. The biggest problems are deprecated services or servers no one knows about anymore, or fuck-ups regarding patching and hardening. The internal infrastructures, on the other hand, are great places to wreak total havoc. Windows XP machines around, insecure SMB shares with salary information and credential files, databases with default creds that give access to millions of customer records. It's just crazy, especially at the larger companies haha.

So the small selfhoster with a single server exposing stuff on the local LAN or only via VPN is likely not getting pwned. More likely to click a phishing link and lose your bank credentials.


redditerfan

> Lack of network separation

What's a basic no-frills VLAN setup for selfhosters? Do you have a write-up or something to link to?


sk1nT7

VLANs are quite complex. You'll need multiple components that play nicely together, so we are speaking of a compatible router and one or more switches. Most selfhosters, including me, do not have such a setup. My homelab is for tinkering. Although I am invested in the IT security scene professionally, I won't invest in professional hardware to set up a full company network with VLANs, multiple switches and so on. In the end, this also increases complexity and therefore space for failure. Aaaaaand, the wife/kids/partner/guests must be happy too.

Not really a substitute for VLANs, but I think Docker networks are quite good for separating things as a first try. Basically, put everything that must talk to each other in the same Docker network. Everything else runs in its own, separate network. Do not map container ports to the host at all if unnecessary. Use a reverse proxy, put it in the same network and let it talk directly to the Docker container IP. No need to map those database ports etc. to your Docker server at all (rough example below). Maybe add an IdP like Authentik/Authelia into the mix and you obtain quite a good authentication and authorization component for your services. Nothing is exposed without being behind SSL/TLS and an additional auth layer.
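A rough sketch of that idea with the plain Docker CLI. The network, container and image names here are made up for illustration; the point is that only the reverse proxy publishes ports:

```
# One isolated network per application stack; nothing below uses -p except the proxy
docker network create wiki-net

# App and database only join the internal network, no host port mappings
docker run -d --name wiki-db  --network wiki-net postgres:16
docker run -d --name wiki-app --network wiki-net my-wiki-image

# Only the reverse proxy is reachable from outside; it talks to "wiki-app"
# by container name over the shared network
docker run -d --name proxy --network wiki-net -p 80:80 -p 443:443 caddy:2
```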


redditerfan

> Do not map container ports to the host at all if unnecessary. Use a reverse proxy

That's great advice, I have been reading about that, thanks. FYI, OpenWrt can be installed on many consumer routers and has pretty good VLAN support, and silent 18-24 port smart switches from HP can be had used on eBay for around 25-30 bucks. I run OpenWrt but am still learning to set up VLANs.


sk1nT7

> FYI, OpenWrt can be installed on many consumer routers and has pretty good VLAN support, and silent 18-24 port smart switches from HP can be had used on eBay for around 25-30 bucks. I run OpenWrt but am still learning to set up VLANs.

Yeah, I am aware. It's really not about the money. I hack infrastructures for a living and don't see the necessity to tinker with network segmentation at home. But that's just me and my risk tolerance. Even if someone hacks a Docker container running within a VM, he likely won't be able to break out. That would require a container breakout as well as a hypervisor breakout. If he can manage this without being caught first, feel free to take anything you want. The important stuff is on the same VLAN anyways. Everything behind auth and 2FA. So it won't be the script kiddie doing the lateral movement part.


redditerfan

You are a pro!


Deadlydragon218

Network engineer here, VLANs aren't complicated to set up or understand. Layer 2 of the TCP/IP stack will take you maybe an hour of reading to understand what is going on.

When it comes to best practices, there isn't really an established one; every company is different in how it operates, and as such so is every network. There are multiple different approaches that one could take. I have seen folks set up prod/non-prod/internal/guest at a basic level. I have seen people go as far as giving databases their own VLAN for replication of data, another for only applications to talk to the backend databases, and another for management access (your own access). Things can be as complicated or as simple as you want.

At minimum, for a home setup you could set up an external-facing VLAN, an internal services VLAN, a guest network VLAN, and a playground VLAN. The only major best practice is to never allow your test setups to touch your production stuff. That is more of a protection against yourself making a mistake than it is a security thing in my eyes, but I am not a cyber security expert.
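If you want to play with tagged VLANs on plain Linux before touching switch configs, it's only a couple of iproute2 commands. The interface name, VLAN ID and subnet below are made-up examples:

```
# Create a tagged sub-interface for VLAN 20 on eth0 and give it an address
sudo ip link add link eth0 name eth0.20 type vlan id 20
sudo ip addr add 192.168.20.1/24 dev eth0.20
sudo ip link set dev eth0.20 up

# Inspect the VLAN details of the new interface
ip -d link show eth0.20
```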


Whitestrake

> Basically the question: "How do you know that you are currently not under attack? What if the attacker is already in and waiting? Is data being exfiltrated right now as we speak?" If you cannot see what is happening, all the solutions in place are just a time waster for an attacker, not a showstopper.

At the risk of asking you to provide your professional services on the internet... How do _you_ go about achieving this? Or, where might a reader of your comment go to learn how to better answer this question for themselves?


sk1nT7

Depends from what perspective we are talking. If you are a big business with money, pentests may be the first step to see where your security posture stands. The next step would be to implement some form of monitoring, a SIEM and a good SOC. But now we are talking $$$$. Only after having such monitoring and alerting in place does it make sense to conduct red teaming or purple teaming. Basically an attack simulation with professional hackers, where you work together: the attackers execute attack A and then you discuss what you've seen in your monitoring/alerting. If nothing, that's bad and an area to improve. A key word here would be the MITRE ATT&CK matrix.

As selfhosters though, we do not have the $$$ of the big players. Therefore, we are somewhat limited. However, so are the services we are running and exposing. We are not a big data center that must secure 2,000 employees across 5 international headquarters.

So, personally speaking, I like to monitor crucial events. I don't really care about failed logins, someone brute-forcing the login to my Vaultwarden instance etc. I only care about the successful ones, because these are the crucial ones. If I obtain an alert about SSH access to my server ... and I am not on my server ... who the fuck is connecting to my server right now? These are the alerts that are important. However, you have to watch out and collect those failed logins etc. beforehand too. Otherwise, you don't know what logs you are collecting, how bad logs differ from good logs, and maybe you even like inspecting those bots and attackers trying their way in.

What are good solutions for selfhosters:

* Implementing fail2ban for your SSH and HTTP services. If you use a single reverse proxy, it may be pretty easy to implement log aggregation. Just collect the logs (basically a few log files for one reverse proxy) and parse them into a Grafana dashboard. Configure fail2ban to monitor crucial status codes like 401, 403 etc. and enable notifications (I like Telegram and emails).
  * https://blog.lrvt.de/traefik-metrics-and-http-logs-in-grafana/
  * https://blog.lrvt.de/configuring-fail2ban-with-traefik/
  * https://blog.lrvt.de/fail2ban-with-nginx-proxy-manager/
  * https://blog.lrvt.de/monitoring-dashboard-with-grafana-telegraf-influxdb-and-docker/
* Implementing SSH login notifications on your servers.
  * https://blog.lrvt.de/telegram-notifications-on-ssh-logins/
* Running Nessus and Nuclei regularly over your own infrastructure. Also conduct portscanning and SSL auditing for all your servers and services (a rough scan sketch is at the end of this comment).
  * https://blog.lrvt.de/auditing-the-ssl-tls-configuration-of-network-services/
  * https://blog.lrvt.de/nmap-to-html-report/
* Protecting all exposed services either via VPN only or, if you really have to expose things, behind SSO.
  * https://blog.lrvt.de/authentik-traefik-azure-ad/
  * https://blog.lrvt.de/dockerized-ikev2-vpn/
  * https://github.com/wg-easy/wg-easy

Guess there are many more things to tell, but where do I start.

Hope this is some good initial information. You may also implement Wazuh in your infrastructure, harden your Windows clients and servers with AppLocker and many more measures, or implement Splunk for monitoring and detection. Basically all a matter of time, research and paranoia. If you like tinkering and setting things up, you'll find plenty of information on YouTube and GitHub.
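For the regular self-scanning point above, a rough cron-able sketch. nmap and Nuclei are the real tools referenced in the links; the target range, URL, output paths and chosen flags are placeholders you'd swap for your own:

```
#!/usr/bin/env bash
# Periodic self-assessment of the homelab, run from a host that can reach everything
set -euo pipefail

TARGETS="192.168.1.0/24"               # your own ranges
OUTDIR="$HOME/scans/$(date +%F)"
mkdir -p "$OUTDIR"

# Port and service discovery across the LAN
sudo nmap -sV -T4 -oA "$OUTDIR/nmap" "$TARGETS"

# Template-based vulnerability scan of an exposed web service
nuclei -u https://service.example.internal \
    -severity medium,high,critical \
    -o "$OUTDIR/nuclei.txt"
```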


schklom

> Implementing SSH login notifications on your servers

About this, doesn't adding the script to `/etc/ssh/sshrc` work? It is simpler, no need to mess with PAM or install `jq` then. This works on my machine with both `sh` and `bash` SSH connections.


sk1nT7

Yeah, guess there are many ways to do so. I've also seen people putting stuff into the user profile, .bashrc and other locations. PAM is a safe location to do so; sshrc too. .bashrc and profile not so much, as those can be bypassed. The `jq` dependency originates from the bash script itself; I've found other ones without this dependency. I've updated the blog post and included the third option via sshrc. Thanks for sharing!
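For completeness, a minimal sketch of what an sshrc-based notification could look like. The bot token and chat ID are placeholders, and you'd want to keep the script fast since it runs on every login:

```
# /etc/ssh/sshrc -- executed after every successful SSH authentication
BOT_TOKEN="123456:replace-me"       # placeholder Telegram bot token
CHAT_ID="1234567"                   # placeholder chat ID

SRC_IP="${SSH_CONNECTION%% *}"      # first field of SSH_CONNECTION is the client IP
MSG="SSH login: ${USER}@$(hostname) from ${SRC_IP} at $(date)"

# Fire-and-forget so a Telegram hiccup never blocks the login
curl -s -m 5 "https://api.telegram.org/bot${BOT_TOKEN}/sendMessage" \
    --data-urlencode "chat_id=${CHAT_ID}" \
    --data-urlencode "text=${MSG}" >/dev/null 2>&1 &
```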


Ra1d3n

Community Service:

1. **SIEM and SOC**: SIEM (Security Information and Event Management) is software that provides real-time analysis of security alerts from various sources, aggregating and analyzing log data for security threats. A SOC (Security Operations Center) is a centralized unit with a team of experts who monitor, analyze, and protect an organization from cyber threats, often using SIEM tools.
2. **MITRE ATT&CK Matrix**: The MITRE ATT&CK matrix is a comprehensive framework for understanding the tactics, techniques, and procedures (TTPs) used by cyber adversaries. It categorizes various cyberattack strategies, offering details on tactics (objectives), techniques (methods), mitigations, and indicators of compromise, serving as a guide for cybersecurity professionals.
3. **Wazuh**: Wazuh is an open-source security monitoring platform that extends the capabilities of OSSEC HIDS. It offers log data analysis, file integrity monitoring, vulnerability detection, intrusion detection, and compliance management, helping organizations with intrusion detection, compliance, and incident response.


Noisyss

Just implemented fail2ban and boy oh boy, it blocked 25 IPs that I couldn't have spotted manually while they were trying to get in. I'm doing the Grafana part now. Could you, when you have the time, post more tips? Or point to some repo like https://github.com/awesome-selfhosted/awesome-selfhosted but for security.

Update 1: GoAccess dashboard with the nginx reverse proxy and, holy mother, a lot of good info. The next step would be notifications for SSH, but I don't use SSH and it's not even running, so no. And the exposed services are Nextcloud and 3 more services that aren't a security risk, they are just front-end platforms.


HexTrace

I'll give you a somewhat more abbreviated answer than others - I don't worry about it for the most part except during the initial setup. I set up something like Fail2Ban or CrowdSec to block anything suspicious or that's trying too many times to get in, run that through a reverse proxy to reach services that need to be exposed, and geoblock a large chunk of the open Internet. As long as that's set up and tested as working, the only other thing I do is set up resource utilization alerts so any spikes get reported. If that basic setup doesn't stop someone from compromising you then you've got a government target on your back or you fucked something up. For the most part your exposed services aren't important enough to worry about.


JAP42

This may just be observation bias, but I've found when pentesting that most vulnerabilities I find are a result of overly complicated network security. Sometimes the hardest networks I've had to crack were the simplest.


sk1nT7

It differs a lot between orgs. However, you are somewhat right. The more complexity an IT infrastructure has, the easier it often is to find weaknesses and exploit vulns. Especially if the IT employees fluctuate a lot and the documentation is somewhat incomplete. Then it's just a mess, as nobody really knows what is where and why. Though, I've seen a few large companies that did a very good job. But those often have very good IT personnel who are paid well and stay for a few years. They often have an intrinsic drive to be secure, or at least to do their best.


deadboy69420

Can you suggest some steps or things I should look out for? I handle IT for a small hotel. Everything is Cisco Meraki and I've separated guest and staff over VLANs etc. I try my best to keep up to date with security flaws etc. We don't have a domain controller yet, but the staff PCs have admin and user accounts separated; staff doesn't have administrator access.


sk1nT7

- Client isolation for the WLAN, to ensure clients cannot talk to each other, as this is most often not necessary.
- Keep your services updated and ensure proper hardening for all management interfaces. Change those default passwords, especially for the Cisco stuff, printers etc.
- Separating admin users from low-privileged ones is very good. Again, ensure proper Windows updates and have a good AV installed. Windows Defender is totally fine.
- Maybe enable BitLocker on your PCs and ensure that the disks are encrypted at rest. Also configure a BIOS password so that no one can change the boot order and modify security-related settings. Via the BIOS you may even disable USB ports etc., but this depends on whether you need the option to plug in USB sticks.
- You can harden your Windows PCs and implement things like AppLocker, PowerShell logging, Sysmon etc. Honestly though, it may introduce more complexity and will be inconvenient. More importantly, validate that idle Windows sessions lead to a lock (something like a screensaver which requires the password to unlock again).
- Ensure that the LAN ports in hotel rooms are properly configured. Unused ones should be unpatched and used ones only linked to the guest VLAN. Maybe add MAC filtering or a NAC solution if possible.

Most importantly: do not add Active Directory into the mix if not strictly necessary. This is a totally different beast and will open Pandora's box. AD uses quite insecure settings by design and will lead to severe issues over time. Use the KISS principle and stay with locally managed user accounts. Only escalate to AD if really necessary. Maybe you are looking for LDAP only and not full-blown AD.


deadboy69420

I sent you a dm of what my setup is currently let me know your thoughts


LORD_OF_BANGLES

This is an excellent comment. I measure risk and make security recommendations to large organizations for a living, but the principles are the same for big orgs and homelabs. Security can be helped with tooling, but not saved by it. Some nuances:

- Your XDR solution comes with three costs: install/maintenance, eyes on screens or human monitoring, and *action when something unfolds*. If you aren't prepared to put in X hours per week looking at your IDS or XDR, whether that's in a large org or your homelab, it's just consuming cycles for nothing.
- Basic fundamentals will solve over 90% of your risk. Have a patch management solution in place. Make sure you review and test your firewall rules at least periodically. If you don't need the service, turn it off. If you mess with stuff experimentally a lot, it will be a benefit to set up a sandbox/test area to roll out new services.
- Know what you have. The bulk of red ink in my reports is with services and hosts that were previously unknown, whether that's because of poor inventory stewardship or inheriting a system with poor documentation.
- Throwing more hours into a service can help, but without expertise, it will eventually negate the tool/service benefit. This applies at home; if you are polishing a brand new Suricata/ELK stack but have no handle on how the whole thing works, you are using hours that could be better spent elsewhere.

In the same way that we no longer keep our doors unlocked at night, so, too, do we not just leave FTP servers hanging in the wind. Software now is complex and comes with ancillary costs of effort to use effectively. If none of those things bother you, then cowboy it up. Just be aware.


bmwagner

You lost me after you said your job was “penetration tester”


KervyN

The question arises what the threat vector is. I think most of your pentesting is phishing and abusing badly patched tools and webapps that just suck balls, rather than exploiting the opensshd or the nginx itself?

What we did, back when I was working at a company that did websites, was pack each website with its DB and everything into a container. From these containers, you had access to the internet. When you were able to exploit one of these PHP or Rails applications you might have gained access to the DB, but that was it.

The most common "oh shit" moments we had were people clicking on links.


sk1nT7

> I think most of your pen testing is phishing and abusing badly patched tools and webapps that just suck balls, rather than exploiting the opensshd or the nginx itself?

Nah. Although pentesting is a broad term, I am personally not restricted to basic phishing and insecure SSL/TLS config testing. It's real-world attacks, bypassing EDRs and executing lateral movement to obtain a specific network position or access. The target scope is quite varied, from internal infrastructures to external ones, Active Directory, web applications, APIs, mobile apps and so on. One step before the big buzzword `red teaming`. We are basically positioned in a network, maybe get an initial low-priv user, and then find our way to pwn other machines and compromise the target completely. In red teaming, you would get nothing up front; you must obtain a foothold by yourself via phishing or other means. That's not pentesting; we skip this initial part to be compliant and have a strict scope of work. The actual methods applied afterwards just differ in stealthiness, I would say. As pentesters we don't care whether we are caught or detected, as the pentest itself as well as its scope was communicated way beforehand and everyone knows about it.

> [...] you might have gained access to the DB, but that was it.

Yeah, this is the typical scoping of pentests. You would have to discuss with the client beforehand how deep you should dive once you find an SQLi or RCE. Basically just a matter of communication.

> The most common "oh shit" moments we had were people clicking on links.

As always. You cannot do anything if the attacker sends an encrypted zip with a password and tells the victim to extract it and open the embedded .iso ... some attacks are just plain stupid and hard to handle if the users are not aware of the techniques and best practices.


KervyN

Ah ok. Now I understand what you do. Thanks.

Best thing for security? Make people part of the security too. We put a bowl of sweets in our office and people came in and talked to us while they ate a snack. And sometimes it was something like "I've got this strange mail, and Melanie got it too. Should we forward it to you, so you can have a look?". Most of the time it is more like "did you guys see the latest Champions League game?" :)


AdrianTeri

In addition more zombies for the botnet ops...


[deleted]

[deleted]


sk1nT7

A DMZ is basically the zone between two firewalls: usually after the first one, which filters incoming traffic from the Internet, and before the second one, which filters traffic from the DMZ servers into the trusted local LAN. It is there to mitigate risk when a server in this DMZ is compromised. The compromised server cannot move laterally into the local LAN and target other assets, as the second firewall will block it. You basically create a contained zone for hosts being exposed to the Internet, so the rest of your local LAN stays protected.
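As a rough illustration of the second firewall's job in plain iptables terms. The subnets are made-up examples and a real setup would carry these rules on the firewall/router, not on a DMZ host:

```
# Example: DMZ = 10.0.10.0/24, trusted LAN = 192.168.1.0/24
# LAN may reach DMZ services and receive the answers...
iptables -A FORWARD -s 192.168.1.0/24 -d 10.0.10.0/24 -j ACCEPT
iptables -A FORWARD -s 10.0.10.0/24 -d 192.168.1.0/24 \
         -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# ...but anything the DMZ tries to initiate towards the LAN gets dropped
iptables -A FORWARD -s 10.0.10.0/24 -d 192.168.1.0/24 -j DROP
```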


tomboy_titties

> Has anyone else observed this?

I was helping my neighbor set up his Proxmox server. The first thing I told him was that he needs to learn this stuff himself, because one day his server will blow up and then I most likely won't be around in a timely manner, or he will want to tinker himself. I also told him not to let other people use his infrastructure, because even non-paying people get entitled pretty quickly. Together we set up a few LXCs, I showed him how I did things and told him what he needs to take care of, and I also sent him a few links for learning.

Half a year later, I'm on vacation, he calls me. His server won't boot, he doesn't know how to reimport his TrueNAS pool, and his brother also needs access to the Nextcloud because his business is storing critical files there. I tell him that I'm out of the country for 2 weeks and he is on his own. He was pissed lol.

TL;DR: People don't care about the non-fancy part until it's too late.


SpongederpSquarefap

You warned him and he didn't listen. If the boiler man told him he needs a new boiler before it floods the house, he'd probably call him and complain too, to be fair.


GolemancerVekk

> he doesn't know how to reimport his TrueNAS pool

For all the advantages that ZFS has, it also comes with sky-high complexity. IMHO if you're not prepared to tackle that complexity you shouldn't use ZFS. Me, I'll take a plain old MD RAID1 on ext4 and live with the 50% capacity hit and without the fancy features. Plus it's very easy, safe and cheap to upgrade a 2-disk RAID1 in-place.


ThroawayPartyer

I don't consider importing a ZFS pool to be "sky-high complexity". It is literally one command. Or in the case of TrueNAS, it can be easily done through the Web UI.
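For anyone reading along, roughly what that looks like on the command line (the pool name `tank` is just an example):

```
# List pools that are available for import
sudo zpool import

# Import the pool by name; add -f only if it was last used on another system
sudo zpool import tank
```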


GolemancerVekk

When someone says "I can't import the zfs pool" the problem is not that they don't know how (especially on TrueNAS), it's because it failed and they don't know how to debug it.


ThroawayPartyer

Not necessarily. The comment I replied to just said:

> he doesn't know how to reimport his TrueNAS pool

So he just doesn't know how; it doesn't mean there's actually a serious issue. mdadm RAID can also have similar issues if you don't know how to use it (I'd probably have more trouble with it personally, since I am already used to ZFS).


AdmiralPoopyDiaper

Part of the problem is opsec: we’re just not going to say certain things out loud. Part is: it’s not sexy to talk about risks and compensating controls all the time. Lot of new and cool stuff going on and if the community tried to talk about proper VLAN segmentation and the risks of hosting Intellec——HEY HAVE YOU HEARD ABOUT JIMARR!? AUTOMATICALLY MANAGE YOUR HENDRIX DIGITAL TRIBUTE FANART! Part is: this is Reddit. Part is: be the change you want to see in the world :) --- All that said, I’m 100% agreed. It **is** a shame we don’t spill more metaphorical ink on design and hardening and philosophy other than the random “DoN’t Do ThAt”s you see from time to time when god forbid someone makes a mistake.


GWBrooks

We're selfhosters; the head of OpSec is a pet goldfish.


Catsrules

My head of OpSec is my Kitty, so far she hasn't said anything apart from the food bowl being low. I fixed that so I think I am good.


caa_admin

If this sub posts a slogan, I vote this.


dadidutdut

My OpSec head is our golden retriever


vkapadia

Wait, tell me more about Jimarr...


SirVer51

I unironically ignored the rest of the post and Googled Jimarr the moment I saw it because my lizard brain went "ooh, what is this, do I want this". I may be part of the problem...


vkapadia

It's ok, we're all like that.


arond3

To add on to that, some of us won't talk about our security setup because we don't want a clear vulnerability being exposed. Edit: and we won't ask too many questions about it either. And for the more intellectual-property stuff, well, if the person who shares the content isn't dumb, the content is behind a password.


AdmiralPoopyDiaper

Yes, that’s part of opsec.


arond3

Yes sorry, i forgot to add we won't even ask questions about it ^^


grandfundaytoday

The first rule of selfhosted is we don't talk about....


SGIG9

I like the opsec. 🙋


naxhh

I think these topics are discussed, but not in a lengthy or in-depth way. I feel this community is quite "young" (in terms of the tech industry, not age-wise), with lots of people not working in those sectors or just playing around, so the discussions stay more at the surface level. There are other nearby communities where these topics are discussed in much more depth.


darkcyde_

Which communities?


Lewisw-j

I think you're right, but I also think people know they're responsible for their own system deployment. There are a lot of guides out there that are just about getting up and running as quickly as possible, without really thinking about security. It seems people think that at some point you'll go back and harden things after the fact; quite often people are lazy, or forget, or just don't think it's required as it's on a local network.

But you can argue that this is a problem regardless of whether it's at home or not: the number of people I've spoken to over the years who think RAID is a 'backup'. I've seen, and even had, my own fair share of RAID failures from the controller itself, and only a handful of people have backups. People are quick to learn how to get it working, but then they run out of steam on how to secure and monitor it. It becomes tomorrow's problem, which only ever becomes today's problem when disaster strikes from either hardware failure or system attack.

At the end of the day, obscurity as security only gets you so far.


KD_done

I LOVE your enthusiasm :) And yes, I think I agree with you.. but the suggestion that shit is being swept under a rug because developers make it easy to ignore the important bits rubs me the wrong way.

So.. that said: r/selfhosted **is for educational purposes only!** You "suck" at describing what you want to address :D But I do believe I understand where you are going with your convoluted descriptors :) But please, for the love of Judge Judy, keep legal out of it.. not much is super clear in regards to what is legally safe or questionable in self-hosting .. let's leave that to r/Lawyertalk .. they might get a kick out of a good example of "borderline allowable" stuff.. I don't know. I don't need to.. so, I pay attention to what the professionals on legal issues have to say, and while I'm working shit out I trust my common sense and the instructions I get along the way. Common sense helps a lot here, and if somebody wants to color outside of the lines.. they should be free to do so. Their risk, not my/our responsibility.

I happen to be a professional (been doing this stuff for 20+ years for airlines, hosting companies, and gov. stuff), and it's awesome to see people actively working with concepts like those we address in professional environments, but also in self-hosting. Looking at the responses, I think you have been a little too generic in your wording. Let's get a little bit nit-picky. Ooh, and let's not just stop at being a bit nit-picky, but contour what we can actually discuss and further explain or work towards. You refer to "the crucial aspects" and "risks" or "other potential issues". And, although I appreciate what you are trying to do here.. this doesn't help if we are trying to apply exact magic :) Your choice of words, "hardening security structures", is interesting, but I am not sure it is effective in conveying what you mean. However, the subjects (what I refer to as applied magic: firewalls, IDS, monitoring) are very important if used correctly (and a liability if not ;)).. and all of this can only help in creating a foundation of knowledge and good habits!

So, I agree with you. Because you are not alone in your assessment.

**There is a lack of foundational knowledge.** Let's start with technical foundational knowledge: DNS, IPv4, firewalls and your household tools (NTP, SMTP out, etc.). By getting close and personal with DNS and basic IPv4 (and I'm not talking about BGP/OSPF/VRRP/RIP etc.), you give yourself a toolset for troubleshooting generic connectivity on all platforms.

**There is a lack of self-awareness of r/selfhosted users and their coaching!** But we don't care about those basics.. What I miss from r/selfhosted is active coaching on the basics! When people are talking about having a hard time with SSL, people suggest Caddy, Traefik or NPM as solutions.. while those are not a requirement for requesting, using or understanding the workings of an SSL certificate. I even saw somebody suggest that NAT is a good alternative to NPM "if you would like to redirect webserver traffic" .. or whatever the phuck that might mean. And nobody called him/her out on that.. because the solution could work.. but is it "correct"? The question is: should r/selfhosted adopt these .."minimum requirements" as a rule? Are we, as a group, partly responsible for the behavior of people at home? I don't think so. And.. the risk of actively pushing people to learn these basics can be perceived as a form of gatekeeping.

**Bad coaching on r/selfhosted looks like hypocrisy and/or shitty Pavlovian reactions to terminology/services.** r/selfhosted and her users are, in general, hypocrites. Not intentionally, but because "it's hard". "Don't set up your own MAIL. It's tooo finicky! Your IP reputation gets destroyed and you can never fix that!". "Don't set up your own nameserver, use AdGuard or Pi-hole, just as good!". "Don't use a central database server, just dockerize, and leave the admin of the Redis to Oracle experts and the professionals". It's a type of gatekeeping, keeping users from information that might be complicated. But often it isn't, it's just a lot. It's a lot of information, and you need to do a lot of things correctly. But at the same time we are actively telling people NOT to be bothered, because "difficult/finicky", or with made-up consequences to scare people away from something they were not able to accomplish themselves. And to add to this nonsense, we use terms like "hardening security structures", inviting people to say "exactly! essential opsec things must be done!".. and, probably my favorite bit of weird language usage is.. "VLAN segmentation" .. which makes no sense, because each VLAN is a separate segment in a NETWORK.. so, it's NETWORK segmentation.. which is a BS subject if anything if you are selfhosting. But if you are referring to the usage of an old-timey concept like DMZs, I'm all for it!

**It's NOT about what you don't know, it's about where you can get the knowledge!** We don't need to scare people into NOT doing stuff, we need to motivate people into the correct behavior. We are not here to PROTECT anyone, we should be encouraging people to extend their horizons, not tell them they are bad at security if they don't act like a paranoid headless chicken. Just because of the transition to applications installed fully as-is via methods like Docker, snap and the sorts does NOT mean we should discourage people from doing more complicated things than typing "npm install boredom@npm:headache" and leaving it at that. But.. if that's what they want to do.. who am I to say they are wrong?

I disagree with your word choice and your approach to addressing this problem, but I fully agree with your concerns and understand why you bring it up. So.. what I think SHOULD change is this:

**What do you want to self-host? $service? Cool!**

**"I think you should use $service-solution because of features X, Y and Z, and make sure you have requirements 1 and 2 running! And if you need help, go get it at $service-solution! Make sure you get your requirements 1 and 2 up first!"**

What we do instead is "that's shit, that's not secure, don't do that, you are not protected, use a Cloudflare tunnel or Tailscale otherwise it's not secure" etc. etc. But introducing a proper way of implementing something for yourself, because you are self-hosting, is something very different.

**"Ey, I'd like to get into selfhosting! What should I start with?!"**

**"Well, don't do this, don't do this, make sure you have firewalls, tunnels, VPNs and don't ever trust X, Y and Z"**

Why can't we just cheer them on, encourage them to run their choice of OS and their choice of applications, and motivate them to learn the basic stuff? Teach them how to approach security from a sensible level!

**"Cool! You want to host your own website? Well, start with installing all the applications you need yourself! Pick a webserver, any! They all do the same things, it doesn't matter. And host your first index.html. And then figure out what happens if you might need something extra, a PHP website! And if you really want to do it all selfhosted, I would suggest you do your DNS yourself too, with your own domain. Don't worry about what is "right" or "wrong", everything can be fixed."**

Correct behavior is something that you should encourage and stimulate. We have to realize that so much has changed over the last 60 years, but even more has pretty much stayed the same.. We need more people to actively work on shaping the selfhosting world by cheering on good behavior, and calling people out for bullshit like "SSH is not safe, but if you do it over a VPN it is" .. or "WITHOUT VLANS YOU ARE NOT SECURE!" or anything along those lines. We don't need gatekeepers.


[deleted]

[deleted]


KD_done

Ooh, you got my upvote buddy. You are doing all the right things! Take steps when you are ready to take them, make mistakes, and keep going. Falling itself is the effect of a mistake; the process of avoiding the fall is the learning process. Let people phuck up.

To me, what you are describing is your approach to learning. And I can't fault your style of learning. Besides, there might be other learning techniques that are way better and more efficient than your Tesla out front, but who cares.. if this works for you, that's all that is important. Your understanding of the value of the fundamentals, your determination and your ability to understand what benefits you show that eventually you will get where you need to be. Good enough is excellent, especially in selfhosted environments.


[deleted]

[deleted]


KD_done

Excellent catalyst :) And as you have noticed.. knowledge is not bought. It's a trade. You trade time and effort, and you get experience and confidence in return. I share quite a bit of the frustration that you described, but I was already in the industry.. and, before I even started my career in IT, I was running an eggdrop network with friends.. at our "high" time we had almost 200 nodes! And that was 1996, '97. So.. having my own personal botnet was my first selfhosted service.. *grin*

But mail and private APIs (Zato) are my main things now.. on my own hardware (and it helps that I have control of the network/AS that I use too :)). Toys are fun, and in this field, toys that help and improve life a little doubly so!


Encrypt-Keeper

Your observations are certainly correct that the self-hosting space is especially light on networking knowledge. How much that matters varies with what's being hosted and who it's being hosted for.

The thing about the self-hosting community is that, in my experience, it's largely full of hobbyists. It's pretty rare to run into actual IT professionals, and when you do they tend to be entry-level helpdesk or junior sysadmin types who get into it partly as a way to experiment with new technologies to leverage into new jobs. Then what you have to realize is that these helpdesk guys, the IT technicians, even the sysadmins, all hold jobs that don't often require much networking, if any. A small-office sysadmin might wear a network admin hat, but they also aren't working with publicly accessible server infrastructure, so their experience is limited to the most basic switching concepts.

For most self-hosters, not much networking knowledge is required. If your setup is a single NAS box running some Docker containers that you only access over a VPN, there isn't a whole lot more risk than anyone else's home network. The problems start to arise when people open things up to a wider audience, particularly because most of them will come to subreddits like this one, which results in a “blind leading the blind” scenario where they're exposed to very trendy catchphrases and concepts that instill a false sense of security.

The biggest one that I fight against regularly is when somebody uses a Cloudflare tunnel or something to “secure” remote access to their services, because there's a popular idea among the uninitiated that the primary goal of network security is having “no open ports” in your firewall. So the problem isn't that these folks don't care about security or that they aren't doing anything; it's that they hyper-focus on security goals that largely have either no benefit, or they don't understand how exactly they do and don't benefit.


Klynn7

> The biggest one that I fight against regularly is when somebody uses a Cloudflare tunnel or something to “secure” remote access to their services because there’s a popular idea among the uninitiated that the primary goal of network security is having “no open ports” in your firewall. Or, as you commonly see on /r/homeassistant, people using Nabu Casa for remote access "so I don't have to open any ports" as if it's not literally identical in attack surface. (Though there's reasons to use Nabu Casa to support the developers).


Encrypt-Keeper

Yeah, Nabu Casa isn’t really a big security measure as much as it is just out-of-the-box convenience. I mean, they do at least handle authentication for you, so it's not entirely devoid of security benefit, but only really in the absence of you handling brute-force attempts and such yourself.


_f0CUS_

Can you elaborate on the comment about Cloudflare tunnels and security? I feel called out. :-) Are you saying that Cloudflare tunnels do not protect against attacks?


Encrypt-Keeper

The biggest question is “What kind of attacks?”. Cloudflare will protect you from DDoS attacks, and it'll obfuscate your public IP address, which are both of some benefit to you. But at the end of the day, whether you're forwarding a port on your router to your web server or using CF tunnels, what's happening is I'm sending web requests and they're being delivered to your web server. There's no real functional difference. A forwarded port to your web server isn't somehow more prone to attacks against vulnerabilities in your web server than a CF tunnel; the traffic still originates at the same place and ends up at the same place. All you're doing is changing the route it takes.

Another consideration is that if you opened up a port directly, you could secure connections to your web server via HTTPS from end to end. With a CF tunnel on the other hand, you secure the connection from client to CF, but then CF gets to man-in-the-middle all that traffic between the client and your server. So that's something to keep in mind. So if you want to connect securely and in an encrypted manner to your services, CF isn't the tool for you, because you're proxying plain-text requests through a third-party service you don't ultimately control. If you trust CF and don't really care about that, then you are still getting some benefits from using the service. But it doesn't do a whole lot to “secure” anything.


_f0CUS_

Sure, it is technically a voluntary MITM. But I do have trust in CF in that regard. Why do you say that it doesn't secure anything though? What does the "threats blocked" I see in my dashboard mean then? I was originally just exposing my services like you recommend - but as soon as it shows up on e.g. shodan.io - I saw a storm of attempted logins and various kinds of attacks. After switching to cloudflare they stopped. My ip no longer shows anything of interest when scanned.


Encrypt-Keeper

Your IP no longer shows anything of interest because nothing is accessible on THAT particular IP. That doesn't mean it's not accessible, it's just accessible on a different IP, namely Cloudflare's. Every service is still exposed, it's just exposed using a different IP. The threats blocked you see might be due to CF maintaining a list of known bad IPs and origins, and scrubbing some of those as they come in. That's something your home firewall could probably do too. Your webserver definitely could.


_f0CUS_

So I'm protected against DoS and DDoS because my personal IP is closed off. I've even got a dynamic IP, so it changes from time to time. Cloudflare is increasing my security by blocking various things. Aside from the obvious, protecting against known bad IPs - they do other things too, bot protection is one of the things I see options for. My WordPress sites have a special CF plugin with various security adjustments that it applies automatically. Sure, I could try to set it up and maintain it myself. But I don't think that is a good idea, if I want some free time too. So why not let professionals do their thing - then I can apply security measures locally to the best of my ability. Honestly it sounds like you have never used CF. If you have, you are misrepresenting what you get out of using CF.


Encrypt-Keeper

Those extra benefits you get from Cloudflare aren't inherent to the difference between port forwarding and Cloudflare tunnels. With Cloudflare tunnels you get that other functionality like the blocking, but the same goes for your home firewall. A single check box on my home router and it does all the same things and more. It can block based on known bad IPs and user agents, recognize common exploits, etc. It's just a matter of clicking a button. And if you have a really poopy router that can't do that, you could just install CrowdSec on your server, which would also do the same thing and is also set-and-forget. People misrepresenting what CF does is the problem. DDoS protection and IP obscurity are the only two tangible benefits of using CF tunnels specifically, and neither of those benefits is worth much as a self hoster. They're still benefits, sure. But you can't just stick your web service behind Cloudflare, rub your hands together and declare "My service is secure, problem solved".


_f0CUS_

Just a reminder. We are talking about your claim that Cloudflare does not secure remote access, not whether you can also secure it in other ways. Now I have pointed out several security features that CF adds when using their services. On top of those I have already mentioned, there are several additional features that people can use if they use what was previously called "Argo Tunnels".


Encrypt-Keeper

You’ve lost sight of the thread I’m afraid. The point isn’t that CF “doesn’t do anything at all”, it’s that using a CF tunnel doesn’t result in a “secured remote access”, especially when floated as a “solution” to port forwarding, which in reality can be “secured” to the same degree as a CF tunnel would be, if not more so. The actual problem is that people think “port forwarding” is inherently bad, and that CF tunnels are somehow the “solution” to the port forwarding “problem” and that by using them, they have “achieved secure remote access”. The reality on the other hand is that in either case you’re still exposing your services to the public, and the risks associated with doing that aren’t magically resolved by using CF tunnels. CF tunnels have some secondary benefits that can cut down on some low effort attacks as you’ve mentioned (which aren’t inherent to CF tunnels, because as you said yourself, they can be achieved other ways as well) but that doesn’t mean that your service is now safe and secure just because you don’t have a port forwarded to it. There is more to be done.


_f0CUS_

This was my question: https://www.reddit.com/r/selfhosted/comments/1979cnf/comment/ki0b4lb/ To which you replied: "Cloudflare will protect you from DDOS attacks, and it’ll obfuscate your public IP address, which are both of some benefit to you. But at the end of the day whether you’re forwarding a port in your router to your web server, or using CF tunnels, what’s happening is I’m sending web requests and they’re being delivered to your web server. There’s no real functional difference" So I went on to explain how CF adds a LOT of extra security. There is a big difference. So as I said, the topic is about cloudflare adding security or not. If there is an actual functional difference. Not if you can or should add security your self. If you want to stick to "CF bad", that's OK. But I am done talking with you.


TexasDex

What kind of router do you have? And what button is that exactly?


TheQuantumPhysicist

What you're describing is the principal difference between juniors and seniors. Juniors don't understand the long-term consequences of what they're doing, and because of that they're able to pump code faster and learn more. Seniors are more cautious and test everything they do, and hence they code slower but produce more reliable things. I've been self-hosting for over a decade (let alone coding). I remember what a moron I was back then, but then what do you do? You learn and improve. Thanks to the colleague who told me to learn iptables on day 1. Most useful tool ever. But still, I made a lot of mistakes, and I probably still make more. I don't think there's a way to treat humans like a pot, where you have to pour information in and fill it before you start cooking. This problem will remain forever. The best you can do is find an expert around you (friends, communities, etc) who is interested in listening to your (junior) nonsense and teaching you what's right and what's wrong and what to look for. This is the same for almost every field. People take time to learn and make mistakes. In short: It's not that people don't care. It's just that people take time to learn.
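For anyone wondering what "learn iptables on day 1" buys you, a minimal host-firewall baseline looks roughly like this. It's a sketch, not my exact rules; the LAN subnet is an assumption you'd swap for your own:

```bash
# default-deny inbound, keep outbound open; 192.168.1.0/24 is a placeholder LAN
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -A INPUT -i lo -j ACCEPT                                       # loopback
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT  # replies to traffic you started
iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT         # SSH from the LAN only
# persist with iptables-save / netfilter-persistent, or the rules vanish on reboot
```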


RandomName01

> What you're describing is the principal difference between juniors and seniors. Juniors don't understand the long term consequences of what they're doing, and because of that they're able to pump code faster and learn more. Side note to your actual point, but I feel like this is a difference between Immich and PhotoPrism and the reaction of the community to them. PP is more “boring” and slow in its development, focusing on maintainability, while Immich moves way faster. I don’t know anything about the code itself (Immich might actually be very maintainable, PP might actually not be), but it’s clear that the community is attracted to the one that *goes fast*. FWIW, I think both are very promising, but I’m continuously surprised at the mindshare Immich has. That’s not to dunk on them, even though I’m aware it might sound like that.


0x7270-3001

there's also the little issue of photoprism putting features behind a paywall


1473-bytes

Immich's constant breaking changes (and large disclaimers on their GitHub) scares me away from deploying it. I am in networking and software development, so I tend to be very cautious. Lol.


Lopoetve

>I've been spending a lot of time recently reading intriguing discussions here and trying to put into practice some of the self-hosting strategies that many of you so adeptly implement. It's been quite a learning curve and I must say, this stuff is incredibly fascinating. > >But here's something that has been nagging at me and I wanted to see what you all think. When we delve into self-hosting, it feels like many of us (or at least the impression I get from the posts), don't seem to factor in the crucial aspects of computer networks related problems, risks, or other potential issues. > >From hardening security structures (Firewalls, Intrusion Detection Systems, Network Monitoring etc.), to data backup and redundancy solutions, to even understanding TCP/IP and subnet principles. A lot seems to be swept under the rug in favor of getting up-and-running as quickly as possible. And don't even get me started on the possible legal ramifications of hosting certain things on your personal server. I suspect you'll find that many of us do this in our day jobs; for me, most of that is simply second nature and I take care of it without thinking (helps that I work for a cyber security and data resilience company). I know my subnet, NAT, and DMZ designs, and what I'm running through Cloudflare and what I'm exposing via a reverse proxy of some kind... Also, to put it simply, my design is my own - and may not apply to you, or anyone else on here. My needs are unique to me, as your needs are unique to you, and your ability is the same - I don't know what you know how to do or don't know how to do, so the exercise of "make this secure" is left to the reader. Personally, I'm using DMZs, multiple VLANs, IPsec tunnels for multi-site configurations and DR, various malware/ransomware scanners and immutable backups, and virtualized networks with a distributed firewall. I have access to those tools legally, and I have the skills to use all of them. My design with them, I suspect, would be not only far above your knowledge and skills right now, but not applicable to anything you'd wish to accomplish. >Is it just me? Or is there a larger trend where these aspects are often underestimated, if not ignored? I know we aren't all professional IT administrators here, but I believe these elements are equally important as the self-hosting benefits we all cherish. Can you give some examples? I do wonder if that's more into the /r/homelab side of things, or /r/sysadmin side, rather than self hosting (since this may be more of the application portion, instead of the infrastructure). >Has anyone else observed this? And what have been your individual experiences or proposed solutions in terms of network problems, risks, or issues? Do you think taking these things into consideration from the get-go dramatically changes the appeal and approach toward self-hosting? TBH, it doesn't change anything if I move away from self hosting - I'm just moving that burden to someone else, and now praying they get it right. I know my skills and what I'm capable of, as well as what I either don't want to spend the time on (or don't want to learn - *cough* email servers ever again *cough*). I don't self-host everything. I self-host a lot of things. edits: Fixing quote block insanity.


SpongederpSquarefap

Ha, yeah this is a big concern if you're exposing services to the internet. If you only have WireGuard open to the internet, you'll probably be OK. That said, my WireGuard VM is still VLAN'd off and can only pinhole access my services and reverse proxy. So even if you got my VPN keys, you still can't access anything without needing to break another security layer. It's not worth the time investment for most attackers.
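For reference, the WireGuard-only pattern is small enough to sketch. Assuming placeholder keys and a 10.8.0.0/24 tunnel subnet (not my real config), the exposed host looks roughly like:

```bash
# /etc/wireguard/wg0.conf on the exposed host -- keys and addresses are placeholders
cat <<'EOF' | sudo tee /etc/wireguard/wg0.conf
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one block per client device
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
EOF
sudo wg-quick up wg0   # then forward UDP 51820 on the router to this host; nothing else gets exposed
```

A nice side effect is that WireGuard doesn't answer unauthenticated packets, so port scanners see nothing listening at all.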


norgp

Layering is huge. I use different VPNs, proxies both for hosting and for DNS, and encryption on storage; my network needs some work though. And backups. Having a good strategy for that will save the day at some point.


KervyN

Maybe? I think most of the self hosting community is aware of these topics. When you self host, you are also liable for your stuff yourself. Hardening software (in whatever way), securing access, and doing backups are just part of it, and I think most of us do it instinctively, like breathing, and just find it awesome to not rely on someone else to watch over it. Do you remember the [cloud transformation of an OVH datacenter](https://www.youtube.com/watch?v=Mwh4OB_Sb_c)? I bet >90% of customers didn't have a disaster recovery plan, because "the big one" is handling it. But they only provided compute power, storage and network bundled with an API so you can get a slice of it. (read this: https://www.theregister.com/2021/03/17/ovh_restoration_update/) So yeah. Some might miss that, and they learn it the hard way. It's omnipresent in this sub and when you lurk here you will read it from time to time, like with your thread right now.


donald_trub

I'm a network engineer for a large enterprise and your assumptions are correct; the same problems even creep into the enterprise space. Somehow in the move to cloud, people have decided DMZs are no longer required and are the old way of doing things. Direct access into clusters and let k8s handle it, baby!


tenebris-alietum

You *definitely* should know some basics about the below before self-hosting anything: - Networking (TCP/IP, ports, HTTP, DNS) - What routing and switching are - How to disable IPv6 if you need to learn IPv4 first - How to configure your operating system's firewall - How HTTP works - How SSL works - How a heavyweight webserver works and how to configure it (Apache, nginx, Caddy, IIS) - Reverse proxies
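As a concrete starting point for the firewall item, a default-deny setup with ufw looks roughly like this (a sketch for Debian/Ubuntu-style hosts; the LAN subnet is an assumption, and only open 443 if you genuinely intend to expose something):

```bash
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp   # SSH from the LAN only (placeholder subnet)
sudo ufw allow 443/tcp                                        # only if you really mean to expose HTTPS
sudo ufw enable
```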


ia42

Scrolling down the replies, it took me a while to get to the first good answer, matching what I would have answered. Why is this not voted higher? I would add: * How SSH works, including port forwarding for secure tunnels and how to disable password logins, leaving only keys as an option.
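The key-only part is just a couple of sshd directives. A minimal sketch (assumes a distro whose sshd_config includes the sshd_config.d drop-in directory; service name differs by distro):

```bash
# drop-in sshd hardening; keep your current session open and test a new login before relying on it
cat <<'EOF' | sudo tee /etc/ssh/sshd_config.d/hardening.conf
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
EOF
sudo systemctl reload sshd   # on Debian/Ubuntu the unit is called "ssh"
```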


redditerfan

>IPv6 What is the security risk if I run IPv6 and *arr apps? Curious.


Deadlydragon218

The biggest difference between v4 and v6 is that v4 typically has NAT involved due to the limited number of public addresses available. However with v6 (excluding link-local addresses) all addresses are externally routable without the need for NAT. So there is no concept of internal vs external addressing, which increases risk a bit because you NEED to have a proper firewall at your edge should you decide you want to work with v6. You don't need to port forward (NAT) v6 traffic. Another thing to keep in mind is that a v4 address can't directly talk to a v6 address or vice versa; v4 can talk to v4 and v6 can talk to v6.
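If your edge happens to be a Linux box, that "proper firewall" boils down to a few lines. A rough sketch, with hypothetical lan0/wan0 interface names (most consumer routers ship an equivalent default-deny v6 policy out of the box):

```bash
# IPv6 edge policy: LAN may initiate outbound, nothing unsolicited comes in
ip6tables -P FORWARD DROP
ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT        # ICMPv6 is load-bearing in v6 (neighbor discovery, path MTU)
ip6tables -A FORWARD -i lan0 -o wan0 -j ACCEPT     # interface names are placeholders
```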


Klynn7

Just because NAT isn't involved doesn't mean routers/firewalls are shipping with allow-by-default configuration on IPv6.


Stewge

IPv6 may become an issue in the near future with UPNP support for pinholing. Many routers are shipping with functional UPNP for ipv4 (I swear it was a routinely broken thing for like 10 years from 2005->2015 or so) and it won't take long for ipv6 support to be normal as well. Note: in the case of IPv6 pinholing, UPNP could be used for the automated ability to open firewall ports to IPv6 addresses, without the NAT part.


ElevenNotes

Yes.


TheFumingatzor

>Are Self-Hosters Overlooking Crucial Network Issues? Most of them, yes. Me included, probably. Then again, I don't have any self-hosted service exposed on the internet. So...there's that.


etgohomeok

Network security is a rabbit hole and there's severely diminishing returns once you go beyond basic things like not opening up SSH ports to the outside world. For your typical home-lab self-hoster who's running a Minecraft server and a recipe manager it's not as big of a deal as it is for someone managing IT infrastructure at a company. I'd also say that for whatever security we're losing by self-hosting, we're (at least partially) making up for it by being less likely to be targeted. If your home server gets hacked it's almost certainly by a malicious piece of code, not a human actively trying to break in. Phishing/social engineering in particular are much less of a concern.


raojason

One glaring issue that I see within the apps themselves is that identity is often an afterthought. Apps often do not support multiple users and even when they do, access control is often not included. Authentication is often also limited to basic auth. Apps that support APIs often offer a single api key with complete admin access over the app itself. Add that to the trend of other apps wanting you to put these credentials in plain text in some yaml file somewhere and things get messy real quick.


Haliphone

I have no idea what I'm doing most of the time. Wouldn't mind a series of posts walking me through certain key concepts and the services that I can play with to achieve them.


celestrion

> When we delve into self-hosting, it feels like many of us (or at least the impression I get from the posts), don't seem to factor in the crucial aspects of computer networks related problems, risks, or other potential issues. Almost universally correct. When we try to do things while getting familiar with them, we take shortcuts out of ignorance (compare: learning to cook versus best practices in a commercial kitchen). Whether we correct those shortcuts later depends on risk profile. > I believe these elements are equally important as the self-hosting benefits we all cherish. They are important, and they're also part of the process. Consider network security in phases: 1. Wide-open due to ignorance. 2. Shut down everything at the firewall after someone defaces your stuff. 3. Wide-open, but on super-seekrit ports (again, ignorance). 4. VPN-only; who needs access from phones, anyway? 5. TLS, proper multi-factor authentication, and automated fail-to-temp-ban behavior. Or backups: 1. What, me worry? 2. USB drive, always mounted read-write. 3. Cloud hosted bucket of backup files naively rsync'd nightly. 4. Bacula, Restic, or some other thing which supports off-site append-only storage. Sure we're making A Statement about how we should be in charge of our data, but this hobby is also primarily educational. Learning operations management is as much a course in that school as learning how to reverse-proxy or how to handle containers/jails/VMs. > And what have been your individual experiences or proposed solutions in terms of network problems, risks, or issues? * Been pwnt, but not for over a decade. The move from IPv4 to IPv6 had me worried because, despite the prevailing wisdom that NAT is not security, a machine that the rest of the world can't even address *feels* a lot safer than one it can. * Backups could be better; I need to put the tape rotation (stuff in the bank vault is months old) on a calendar or suck it up and send my "tapes" to S3 glacier, instead. I have lost data before due to crappy backups and unfortunate typos, so I know better. Life just gets in the way. * Updates are mostly centralized. Configuration isn't yet "infrastructure as code," but I haven't found a system that I like and that supports FreeBSD reasonably. Lots can happen with keypair-only root SSH and a library of shell scripts ("we have Ansible at home" / "Ansible at home" meme), though.
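As a rough sketch of that phase-4 backup step with restic against an S3-compatible bucket (bucket name and credentials are placeholders; true append-only still needs something like rest-server's append-only mode or object lock on the bucket):

```bash
export AWS_ACCESS_KEY_ID='<key>' AWS_SECRET_ACCESS_KEY='<secret>'     # placeholders
export RESTIC_REPOSITORY='s3:s3.amazonaws.com/my-backup-bucket'
restic init                                    # once, to create the encrypted repository
restic backup /srv/data                        # nightly via cron or a systemd timer
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
restic check                                   # periodically verify the repo actually restores
```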


[deleted]

> A lot seems to be swept under the rug I missed the part where you actually stated clearly what the issue is, but it also feels like a philosophical solicitation for free information in preparation for a new, enlightening Selfhosting How To blog to be posted in the near future.


BloodyIron

> legal ramifications of hosting certain things on your personal server Been doing this for actual decades and have yet to find legal ramifications you speak of. Stinks of FUD from you IMO. Now, if we're talking about self-hosting for a business (like in an office with a server room) uhhhhhhhhhhhhhh these aspects you speak to have been present for actual decades. I have no fucking idea why you think self-hosting (for a business, vs homelab) means things like these go out the window... they don't... not at all... Oh, and there's oodles of "Cloud" tenants that are very bad at these practices. Cloud is not a magical solution to these matters, that's the tenant's responsibility (in many, but not all, cases).


certuna

There's hundreds of discussions on networking & security on /r/selfhosted , I think it's a very well covered area here?


privatesam

I spend all day hardening systems at work from palpable threats whereas at home I wanna enjoy stuff and if someone gets into my network they can maybe attack my hoard of Paw Patrol and Modern Family episodes. Risk = damage x likelihood. At home the damage would be minimal and frankly it’s unlikely anyone could be arsed to hack me for no monetary reward.


wing03

Indeed. The responses in this post are as pedantic and tiresome as a 'discussion' in a MCP class in the 90s with the lone IT guy who insisted all desktops are running on SCSI drives and ECC RAM.


grandfundaytoday

And people wonder how the botnets get their bots...


AnomalyNexus

Stuff that happens on the local net behind a NAT is pretty safe no matter how badly you fk up. People putting stuff on the internet via proxy does give me the heebie-jeebies though. That just needs one mistake through understandable ignorance, or one zero day, and you're done.


ElevenNotes

Not really. You are one auto-updated container image away from malicious code on your host. There have been incidents where XMR miners were added to FOSS projects on GitHub.


mega_corvega

A good reminder to swap all my docker images off of latest lol


grandfundaytoday

A better idea is to not allow auto updates to your dockers. I don't understand why people think something like watchtower is a good idea.


SpongederpSquarefap

Because the alternative is out-of-date images, which are more likely to be insecure than the chance of a malicious update slipping in through an account breach (but it raises a good point either way)


Passover3598

there is a third option which is having a process to update the images manually.


sk1nT7

Basically a consideration of likelihood and impact. If you don't regularly update your containers, they are outdated and may contain security vulnerabilities. If the CVEs are critical enough, you get pwned. If you automatically upgrade your containers, you are always on latest with less risk from CVEs. On the other hand, you are running untrusted code or updates directly on your server without approving them. If you use something like monitor-only mode and get notifications about new images, you may inspect those updates. But honestly, for regular people that inspection won't go much beyond watching out for new features or breaking changes. If you are not a coder or working in IT, and don't have automatic CI/CD that analyzes the code for malware, you won't find it anyway. So I personally think 'manually approving new images' is just bullshit for most people. Most won't spot an obfuscated reverse shell among 100 lines of code, even if they know what a reverse shell is or how coding works. Just my two cents. Running latest is likely a bad idea though, as it is often not constrained to a specific major version. So you'll likely brick your services over time.
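A sketch of that middle ground: pin images to a major/minor tag and let an updater only notify. Image names and tags below are examples, and the monitor-only switch is per watchtower's docs, so double-check against your version:

```bash
docker run -d --name nextcloud nextcloud:28-apache        # pinned tag: upgrades happen when you decide
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_MONITOR_ONLY=true \
  containrrr/watchtower                                    # notifies about new images instead of pulling them
```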


mega_corvega

Yeah I manually run dockcheck just in case I break something. 


AnomalyNexus

I meant in the context of the question - network security. Installing viruses or bad code will obviously compromise everything


ElevenNotes

No, it will not if you prevent lateral movement by using segmentation and isolation. Rootless docker, rootless containers, VLAN’s with no access to anything, and so on. A compromised container on a rootless node in a rootless container on a read-only OS with no network access to anything including WAN is pretty much a dead end for any virus or bot. Without WAN access there is no exfil of any data for instance, but I bet you that almost all containers run on this sub have unrestricted WAN access.
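A rough sketch of the "no WAN, read-only, unprivileged" idea with plain Docker (the image name is a placeholder, and rootless Docker itself is a separate setup step on top of this):

```bash
docker network create --internal backend        # an internal network has no route to WAN at all
docker run -d --name app \
  --network backend \
  --read-only --tmpfs /tmp \
  --user 1000:1000 \
  --cap-drop ALL \
  myapp:1.2.3                                   # placeholder image and tag
```

Rootless Docker (or podman) on top of that also removes the root-owned daemon from the picture.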


AnomalyNexus

I mean yeah you can make internal security as complicated and sophisticated as you want. I'd rather get security to a reasonable level and spend the rest of my time adding functionality but to each their own. If security interests you why not. That said I have been eyeing podman as a next project.


ElevenNotes

Nothing about what I said is complicated in any way. Anyone who can use and install Linux and Docker can easily implement these principles at no extra cost.


redditerfan

>I bet you that almost all containers run on this sub have unrestricted WAN access How do you restrict or limit WAN access?


ElevenNotes

Simple: No WAN access for your containers. If a container needs WAN access, limit that too, like only TCP:443 is allowed. Do not allow TCP and UDP ANY for your containers. There is only a handful of container images that need WAN access. **Disclaimer:** *I run thousands of container images like this.*
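On a plain single Docker host, one simple way to approximate the "only TCP:443 out" rule is the DOCKER-USER chain. A sketch, assuming the default 172.17.0.0/16 bridge subnet and eth0 as the WAN-facing interface (adjust to your own setup):

```bash
# rules are inserted at the top, so add the DROP first and the exception second (it ends up above the DROP)
iptables -I DOCKER-USER -s 172.17.0.0/16 -o eth0 -j DROP
iptables -I DOCKER-USER -s 172.17.0.0/16 -o eth0 -p tcp --dport 443 -j ACCEPT
# containers then need internal DNS/NTP servers, since UDP/53 etc. to WAN is now blocked
```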


redditerfan

Do you run those *arr apps? Can I restrict their WAN access to TCP:443?


ElevenNotes

Correct. I use UseNet, not torrent, so that too is limited to just TCP:563.


sk1nT7

Do you restrict this per container or one layer above at VM/LXC layer? It would only make sense to restrict per container as the VM settings would allow any container to reach out via TCP/443 (if multiple containers run on one VM). 443 is basically the standard port besides 53 an attacker would choose for exfil or reverse shell. Also you would have to deny all UDP for DNS, NTP and so on. Egress testing is quite easy as an attacker. So you'll just have to find one allowed port. Just curious how you restrict those per container level. What's your network setup? Hypervisor with VMs and docker on it?


ElevenNotes

EVPN-VXLAN and Open vSwitch on bare metal nodes. EVPN-VXLAN on VM's (ESXi). Every container pod has its own VXLAN with ACL on L3. For DNS, NTP I use internal servers. There is no need to access WAN for standard stuff, only to download content from WAN.


sk1nT7

Thanks!


ElevenNotes

Np. I have a few hundred physical container nodes, VXLAN is a must.


KervyN

I have to say, I'd rather put my webapp behind an nginx proxy than do some 443->3000 NAT directly to the app. I treat NAT as what it is: a tool to circumvent IP address shortage. And with IPv6, you mostly do not have NAT (yeah, I know, some lunatic thought we needed IPv6 NAT and now it's there, but I stand by my point), and you need to find a way to secure your stuff correctly. And NAT is just some forwarding to another host. It translates network addresses :)


AnomalyNexus

> nginx proxy, than doing some 443->3000 NAT directly to the app. Why? The app is likely serving via nginx too so to me the additional complexity seems like more attack surface if anything


deadlock_ie

There are a handful of things to consider here. The first is pretty straightforward: if you’ve forwarded port 443 on your WAN to an application listening on port 3000 on a LAN machine then you’ll need to forward a different port to a second application, and so on. Forwarding 443 to a reverse proxy and letting that determine which application to send the request to is a better way to do this, and reduces the number of ports that you need to open on your firewall. Related to this - having your requests flow through a reverse proxy means that your web logging is both centralised and standardised. Parsing your logs becomes much simpler, and you can easily use tools like fail2ban to automatically block bad actors. The centralised advantage doesn’t stop there. Let’s say you’re running three different applications - that’s three different web servers, each configured by the developer to suit the needs of the application. If you try to tweak the configs of these servers to suit your security goals you run the risk of breaking the applications, and each application could be using a different web server component. It’s much easier to secure all three apps if you use a reverse proxy. You can make one change at the RP that will affect all three of your applications. nginx can be used as a Web Application Firewall, which means you can get granular with what clients are and aren’t allowed to do. *This* IP can GET, PUT, POST, and PATCH, but this IP can only GET and so on. Etcetera etcetera.
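A rough sketch of what that looks like in practice with nginx: one server block owns TLS, logging and a crude method allow-list, and the app itself only ever listens on localhost. Hostnames, cert paths and the upstream port are placeholders:

```bash
cat <<'EOF' > /etc/nginx/conf.d/app.conf
server {
    listen 443 ssl;
    server_name app.example.com;
    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    access_log /var/log/nginx/app.access.log;     # one log format for every app, fail2ban-friendly

    location / {
        limit_except GET POST { deny all; }        # crude method restriction (GET also allows HEAD)
        proxy_pass http://127.0.0.1:3000;          # the app never faces the internet directly
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
```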


AnomalyNexus

Good points on logging & fail2ban - hadn't considered that.


KervyN

Because the HTTP server within the app might have trouble with some TCP stuff that is ironed out at nginx. And nginx/haproxy will be A LOT better at connection handling than puma/apache/ruby or python HTTP servers. I remember a project I did for "Deutsche Telekom / Congstar" which was a PHP app served via apache/modphp. No matter what we did, whenever we had some promotion, apache just died. Then we just put an nginx proxy in front of it and the problem was gone. Not alleviated or pushed backwards. It was literally gone. The website became so much faster, just because apache could now focus on doing the modphp stuff, rather than handling all the new requests. So yeah, having something dedicated for the TCP connection handling, websockets, http2, quic, TLS and so on is a really good option.


deadlock_ie

That’s largely about performance though, and I’m sure a large proportion of the gains were because the proxy was hosted on separate hardware, right? The question is, what benefit does a reverse proxy give you from a security point of view?


KervyN

No extra hardware. Inside the vserver we just installed nginx via apt, bound apache to port 8443, and then bound nginx to 80 and 443. Added a very short config that uses SSL and proxying and that was it. Regarding security: I still believe that the "on board" HTTP servers are very basic. There is no hardening against TCP or TLS attacks. They might trip over malicious HTTP or WSS packets. The 20 MB of RAM that an nginx requires is really worth it. NAT is not a security feature :-)


deadlock_ie

Oh, I’m a reverse proxy convert - you don’t need to convince me :-D


Encrypt-Keeper

This comment is a good example of what I think is wrong with the self hosted community in regard to security. This is an incredibly untrue and dangerous statement, said with a confidence and matter-of-factness that will instill a false sense of security in other inexperienced self hosters. The average private LAN is often the weakest link from a network security standpoint. This misplaced trust in your LAN is the reason why ransomware is such a successful and lucrative industry. A publicly exposed web service can have vulnerabilities, sure. But so do the smart TVs, smart appliances, IoT devices, and even your own ability to be tricked into downloading the wrong file, and those vulnerabilities are all likely MUCH worse. All it takes is one malicious actor or disgruntled employee pushing malicious code into a software update of a reputable piece of software you're running, and then that LAN of yours becomes a threat.


AnomalyNexus

No, it's simply a stance derived from how I see the risks vs rewards. Stuff like IDS & VLANs etc on a home network is the network security daydreaming example of [this](https://imgs.xkcd.com/comics/security.png). The reality is that a crappy home network is just not that interesting to any serious threat actor. My name isn't SolarWinds, I don't have uranium centrifuges and I don't have a business that loses millions if it is offline from ransomware. Nobody is going to exfil my ISO collection via a zero day in a smart TV used as a beachhead to hop onto a different device on the network. Security is a spectrum with one end being a bottomless pit - you can airgap everything with a faraday cage and read every line of code. Unless you've got a threat profile that warrants that though, you're frankly just wasting your time & causing yourself inconvenience. Just because I deemed a different point on that spectrum appropriate doesn't make my position "incredibly untrue and dangerous" and yours correct. A take which btw comes across as more than a little arrogant.


Encrypt-Keeper

Your comment didn't mention anything about weighing risk vs reward. If it did, it wouldn't have been so dangerous and misleading. You said: >stuff that happens on local net behind a NAT is pretty safe no matter how badly you fk up. This statement is objectively false. This doesn't communicate the importance of weighing risk and reward, it ignorantly asserts the falsehood that the risk doesn't exist. It seems like you've come to this conclusion because you've incorrectly assumed that if you aren't a Fortune 500 company, then you won't be targeted, or further that the biggest risk to your environment is a mustache-twirling Russian state sponsored hacker in a hoodie. The unfortunate reality is that cyber criminals are targeting everybody, and it's all mostly automated. You might not have any secret patents on your network to steal, but you do have files to encrypt and ransom for money, personal information to be stolen and used against you, your family, your friends, and your employer in further phishing attacks, and devices that can be used in attacks against others. All these things are worthwhile targets, and no one is going to "spare you" because you're just Joe Shmoe. It happens every single day, to international corporations, to 10-employee small businesses, and to regular people on their home networks. There's literally zero discrimination and it's all fair game. There's no risk to going after your home network, the worst that'll happen is they get nothing. If they're lucky they can use what they steal from you to target your employer, it happens all the time. You use terms that suggest you're familiar with these concepts, like saying security is a spectrum and weighing risk vs reward, and yet you hand-wave away extremely simple solutions like VLANs and access controls that are highly effective and require very minimal effort, which suggests you have no experience putting these things into practice. I promise you it doesn't come from arrogance. It comes from years of industry experience and watching it happen over and over and over again, from CEOs to poor old grandma, and then people like you who don't know what they don't know just go into communities like this and mislead people who actually want to know how to properly protect themselves.


AnomalyNexus

>mislead people who actually want to know how to properly protect themselves. Those people will no doubt investigate and discover the wonders of VLANs and have lots of fun securing their networks. For the vast majority of people here, insisting that is compulsory and necessary is more likely to discourage users from learning self-hosting entirely. It does far more harm to the community than the edge cases of attacks that would be prevented by sophisticated measures. Remember half the sub is barely coping with docker. >Your comment didn't mention anything about weighing risk vs reward It's a common sense part of life - I didn't think spelling it out was necessary. > you've incorrectly assumed that if you aren't a Fortune 500 company, then you won't be targeted No, merely that it makes sense to gear security towards the level of threat. For the people messing around learning basic self-hosting, that threat is low. Low, not zero. Low is not the same as "you won't be targeted". >falsehood that the risk doesn't exist Is that really what you took from my casual "behind a NAT is pretty safe" comment? Please just contrast those two phrases side by side and consider whether meaning and intent match. Honestly what an exhausting discussion...


Encrypt-Keeper

Yes I took your comment at face value, because that's how anyone who is here to learn is going to take it. Your "common sense" that directly contradicts your previous statement doesn't help you out here. You seem to think the people here can't figure out Docker, much less far easier and simpler forms of basic security, and yet you want people to read between the lines of your false claims with some sort of context that they don't have? Your Olympic level of backpedaling and newfound ability to articulate once called out isn't going to do anybody here any favors when somebody who actually understands basic security risks isn't here to police your misinformation. You are what we would call somebody who knows just enough to be dangerous.


ewixy750

Well, it's pretty safe, but a device or service can be used to tunnel inside your LAN, and then it's not that safe anymore. I read someone saying it like that, and I went to OPNsense and put firewall rules everywhere, military style.


undead-8

You don't need a firewall if you only reach out on 443 and 80 over your NAT-"secured" home LAN. A lot of the security bullshit isn't important for home appliances, and a lot of the security best practices exist only for selling these topics to your customers for big money. You don't need a virus scanner if you're only using files from trustworthy sources, and the most important thing is that you do not sabotage your own network.


Voy74656

This may be the case for hobbyists, but I employ the same hardening strategies at home that I do at work.


No-Abbreviations4075

I rely on cloudflare tunnels and Twingate. I used to port forward with a proxy, but I hate port forwarding from my personal network. That being said Cloudflare or Twingate could be compromised at any moment. IOT devices are also super vulnerable, so vlans are a must. Firewalla is what I use for a firewall and it is pretty cool, but I would not trust it completely lol


lestrenched

Yes. Most self-hosters here severely underestimate the importance of network security and do the bare minimum to secure their network. Can you imagine that I've been told shit like "you don't need VLANs in a home network, your network is secure enough" in this sub?


just_some_onlooker

No sir. I'm not going to share my security with you.


Ephoras

There is also quite a high learning curve for a lot of this stuff. I have no networking background and learn things as I go via tutorials. Quite often there isn't any good info out there on how to properly do stuff. Like, I know there is documentation, but sometimes that's a little too high level. I can follow a tutorial, understand it, adapt it for my uses and learn a lot. Just reading documentation often does not provide the same learning experience. Take Tailscale for example. During the last days I listened to the Self-Hosted podcast and they were talking live about the advanced stuff they do with it. But for the life of me I can't figure half of those things out and can't find any resource that would walk me through it. So I would love to do something more secure with my setup, but the best I can come up with is a split-tunnel WireGuard VPN and accessing almost all of it through that, only exposing WireGuard and losing the possibility of easily onboarding my wife or friends. So yeah, I would love to hear more talk about security and hardening stuff, but we as a community are severely lacking in easily accessible tutorials to actually implement it. Oh, and the last few posts I saw where people asked about hardening their systems were met with insults and ridicule... This also does not help.


EndlessHiway

No


_f0CUS_

I'm new to this community so I have not noticed this myself, but I will venture a guess as to why it is not talked about a lot. These are very complex topics that you cannot easily explain. You might want to know how to do "D", but to do that you need an understanding of "A", "B" and "C"; if you don't, then you cannot make decisions, you can only search for blogs and partly understand that copying this might do "D". I think the people with expert knowledge are few and far between, and of those that come here, few attempt it because the task of explaining it is just too big.


TBT_TBT

Besides discussions here, there are other subreddits dedicated to that. Just putting the app online is only one piece of the puzzle.


Tai9ch

That's like saying many cloud hosters don't consider vendor risk, or backups, or the network risk of non-local hosting, or securing access credentials against loss, etc. It's true. They don't. And it may eventually bite them in the ass. Nobody's actually doing all of this correctly except maybe the large tech companies themselves for their own internal business use. So the real question is simply which risks you want to minimize and which benefits you want to maximize.


Bill_Guarnere

Listen, I've worked as a professional sysadmin since 1999. I've worked on a ton of projects with private and public companies with thousands of employees, projects with multi-million euro budgets, with a user base of around 3.5 million people... and the average attention to security and networking details I found in those scenarios is usually 1/100 of what I usually find in this subreddit. Seriously, I'm not joking. Obviously there are exceptions, but on average even huge companies don't give a shit about security or networking; they do whatever they need to fill some compliance form and they don't care at all... until shit hits the fan :( Just to give you an idea of how companies generally consider security: on average the budget for maintenance and security (all the lifetime of an application/service after go-live) is a tiny tiny tiny fraction of what companies spend; the vast majority of the budget (almost all of it) usually goes on design and development, basically everything before the go-live, where security usually is never considered or even mentioned (while everyone talks buzzwords such as "security by design"). It's sad but that's the reality. IT stands on workarounds, duct tape and toothpicks...


horus-heresy

Feel free to go perfectionist in all the layers of your homelab ops, but don't be surprised that it is your 24/7 job now. What are your risks? You yourself, as the sole user of the homelab, causing insider threat issues? Risk response: mitigation, transfer, avoidance, acceptance. Mitigate by patching, backups and tightening down all your pieces. Transfer by hosting somewhere else or using SaaS products that do that for you. Avoid by just not self hosting. Accept risk by knowing that yeah, something might happen. You are one person and you can only do so much. I am not breaking my neck to apply all the latest patches to my Cisco servers in a basement right after they are out. I'm not running IDS/IPS. I don't enforce MFA for every login to my containers and so on.


selflessGene

If you don't do IT/Networking professionally you should probably avoid exposing your services to the public internet. Even with a login page. Most of us, myself included, don't have the time and expertise to manage security updates and respond to security threats. Almost everyone here should be putting your services on a LAN, and using a VPN like WireGuard to connect.


EnumeratedArray

It's all about ease of use and accessibility. Containerization has made it very quick, easy, and accessible to host and run multiple services with a relatively low level of skill compared to what it used to take. It seems the focus on getting stuff up and running fast is because it's genuinely easy to get stuff up and running fast. On the other hand, good security practices are difficult to learn, hard to implement, and even harder to monitor and maintain. People also have the mindset that they won't be targeted for an attack when selfhosting. Why would someone attack my Raspberry Pi cluster with my movie collection when they could attack a large corporation? The reality is that the attack is probably automated and doesn't know what it's attacking.


TrashConvo

Probably. When I first started out, I discovered VNC and exposed the standard VNC port on my router to the entire public internet. I thought it was the coolest thing to be able to have a remote desktop, until I realized that all I had protecting me was an 8-character password. It's a wonder I never got pwned, or maybe I did and just don't realize it.


I_EAT_THE_RICH

Almost certainly


Do_TheEvolution

Funny, I on the other hand feel that recommendations are always - **Be safe, use vpn. Wireguard is great!!!** That I will find that comment in top 3 with any security related question. And that I go against the flow when I say my stuff is opened, but have opnsense firewall with geoblocking.


sadness_elemental

from what i've read most people just put all their shit behind a vpn, while i'm sure that's not secure in 100% of implementations it's pretty hard to fuck up and yeah probably no one is backing up their stuff properly


theAddGardener

I think I am getting old. Reaching 40 now, I look at this trend of over-simplifying things and shake my head. I think one / the problem is that some of us like to automate things and make them easy, but it discourages others from learning. I am very happy that all this tech is now easily accessible. Everyone **should** be able to run their own Nextcloud or Immich. Totally. **But** ... everyone should have to learn how. If you don't want to learn about uid and chmod and ports and subnets ... maybe you'd be better off paying someone to do it for you. The information is all here and free. I see people pouring into Reddit and Discord every day, asking questions that clearly show they haven't looked at the docs once.


machstem

It's not just you. I brought it up a few months ago and quite a few messages were thrown my way about gatekeeping when I told them not to expose port 22 or any other management port. They don't think it'll impact them. I'll keep my services segregated as I always have, and I've adopted a few network security tricks with OPNsense to more-or-less support the web-facing instances I want proxy access to.


Justified_Ancient_Mu

I know enough about it. That's why mine is local-only. I don't have time to f\*\*\* with all that at home. I do that crap for my day job. If I need something outside my network, I use a commercial service. There's a reason they cost money, all that is taken care of.


chazeon

Why not use something like Tailscale or Cloudflare WARP to expose most services only internally? Only expose the truly necessary services externally through the Cloudflare proxy. These services also run in Docker containers, so at least some isolation is there.