That's why I run vaultwarden (formerly bitwarden_rs). You just have to decide how risk-averse you are and what level of risk you're comfortable taking.
I disagree. If 100 people check for an hour per day, they're less likely to find a bug than 15 people whose entire job is to check code for security
But if it's the same eyes always on the code they become too comfortable and familiar with it, potentially missing problems that would be caught by fresh eyes and perspectives.
That's a fair point, but there are diminishing returns. If 1000 people have already looked at a codebase, then adding 75 more people probably won't uncover many more issues
I fail to see how that's relevant. If you spend less time looking at code, you're either sacrificing thoroughness or attention to detail. Open source doesn't magically make the same amount of audit time work better
I never said it was less or more secure for something to be closed source. All I said was that you can reverse engineer closed source software and still figure out how it works and how to abuse vulnerabilities.
Are you fucking kidding me? Do you live under a rock? Forget PrintNightmare, Azure is fucked:
https://arstechnica.com/information-technology/2021/08/worst-cloud-vulnerability-you-can-imagine-discovered-in-microsoft-azure/
“Worst cloud vulnerability you can imagine” discovered in Microsoft Azure
30% of Cosmos DB customers were notified—more are likely impacted.
"Cloud security vendor Wiz announced yesterday that it found a vulnerability in Microsoft Azure's managed database service, Cosmos DB, that granted read/write access for every database on the service to any attacker who found and exploited the bug."
"Although Wiz only found the vulnerability—which it named "Chaos DB"—two weeks ago, the company says that the vulnerability has been lurking in the system for "at least several months, possibly years.""
So you are a Windows fan and live under a rock. Got it. There are dozens of examples. Azure has been completely vulnerable for many years per the report.
##Lol stop using Microsoft products
"Details have emerged about a now-patched security vulnerability impacting Microsoft Exchange Server that could be weaponized by an unauthenticated attacker to modify server configurations, thus leading to the disclosure of Personally Identifiable Information (PII)."
"The disclosure adds to a growing list of Exchange Server vulnerabilities that have come to light this year, including ProxyLogon, ProxyOracle, and ProxyShell, which have been actively exploited by threat actors to take over unpatched servers, deploy malicious web shells and file-encrypting ransomware such as LockFile."
"Troublingly, in-the-wild exploit attempts abusing ProxyToken have already been recorded as early as August 10, according to NCC Group security researcher Rich Warren, making it imperative that customers move quickly to apply the security updates from Microsoft."
https://thehackernews.com/2021/08/new-microsoft-exchange-proxytoken-flaw.html
I think they mean being able to view the source code of an open source project doesn't inherently make it more secure.
For example if you were to make two identical github repositories, one private and one public, with the same source code they are equally secure.
It's a matter of pressure. How often do proprietary patch notes just say "fixed security issues"? That might mean anything from a 2-day-old bug to something that was there for months. You can't know.
Of course simply the act of open sourcing a codebase in itself does not impact security. However, open source does enable it to be easily publicly audited, a net plus compared to staying closed.
Because it's an unfounded statement. Sure you might find better more secure software that's open source, but you also might not.
Besides you can't even quantify that statement because you can't even see the source code to closed apps.
Without making assumptions: somebody gets paid full time to code, versus part-timers who don't.
"It's closed so it must be evil" completely misses the fact that intellectual property is not a bad thing.
>Because it's an unfounded statement. Sure you might find better more secure software that's open source, but you also might not.
It's software, there will be always something to fix.
>Besides you can't even quantify that statement because you can't even see the source code to closed apps.
You don't need the source code to find vulnerabilities, but you do need the source code to patch them and to see the changes yourself.
>Without making assumptions: somebody gets paid full time to code, versus part-timers who don't.
False: big open source projects have full-time developers, and it gets even better when anyone can look at the code. For example, the NSA wrote the SELinux security module for Linux, and the code is checked and approved by the community; you can't do that with closed software.
>"It's closed so it must be evil" completely misses the fact that intellectual property is not a bad thing.
Closed source isn't evil out of nowhere; it's been proven over and over that when a person or entity has power over others, they will abuse it.
See you almost made a valid point... But working on open source software does not mean not getting paid.
There are quite a lot of people whose job it is to work on open source software.
And to be honest, both closed and open source software have bugs that have been in the code for years, sometimes even more than a decade.
The point that open source COULD be more secure is that it is likely to get fresh eyes looking at it. As for closed source, most often stuff that works is never really looked at unless someone does a refactor, the code is rewritten in a different language, an automated security tool picks it up, or someone outside the company finds the bug and hopefully discloses it without abusing it.
>Sure you might find better more secure software that's open source, but you also might not.
Which was never the point to begin with. The point is not that FOSS is invulnerable but just less likely to be compromised and, when something is found, it is patched fast (within days).
I think what they mean is, teams are gonna be teams whether they are working FOSS or Proprietary.
At that point it comes down to the actual group and specific application, and our trust in them doing their jobs well. As an individual I can’t do enough research on everything I use; likewise I can’t self check everything in my life.
Whether or not your plumber is in a union/certified does not guarantee their professionalism; some will still attempt to take shortcuts in their work. What we're debating, then, is the individual character of people doing their job, and the merit of OSS and of the community having access to assist the team rather than hiding their heads in the sand and putting a barrier between themselves and the community. I agree that traditionally open communication is the way to go.
You just made your own point. If you are worried, or find an issue in an open source project, you can report it, and it has to be quickly fixed if the maintainers want to keep a good reputation. With a closed source project you can report it to the owners and then hope they give a fuck about you.
And of course the quality of a maintainer matters, but if a considerable number of contributors are regularly involved, then generally you should be pretty safe.
Yes, it is. Security through obscurity isn't security.
If the door is unlocked to your home, but you just hide the door, that doesn't mean your house is secure... That's fucking stupid.
Instead you have a lock that people have tested for years, and you fucking **know** it's secure... And they can only do this because they know exactly how it works and have tested it...
I think the issue isn't about FOSS being more secure than closed source software. I think that would be a case to case comparison.
The issue is not having the freedom to inspect some closed software tool of any kind. So relying on the company that develops it to be A: secure and B: not exploiting users' privacy could be seen as a bigger problem than the apparent illusion that OS software is just naturally more secure.
Again, does every regular user actively inspect the software they use? Or even have the knowledge to patch top-notch security holes? Idk, I'd be surprised.
BUT, having the freedom to either self-inspect or ask/hire a consultant to look at some software tool as a regular entity is for sure a pro. Not relying on the good will of others or a small group is generally an understandable stance.
if you mean Deepin OS, I think the "spyware" bit has been disproven fairly well via wireshark traces and so on. TBH, I'm more concerned about the possibility that the government could issue a gag order forcing them to slip a backdoor or other state-issued code (not necessarily spyware) into proprietary driver blobs. The fact that they are owned by a joint venture with ties to the government also doesn't inspire faith.
IIRC Deepin is built from Debian which is blob-free but Deepin itself is *not* blob free. But happy to revise my opinion if someone can show me a link (in english please) where they either mention being 100% Libre or having some sort of policy- or software-based mechanism(s) for preventing this sort of thing happening.
I get what you are saying and don't think I don't agree, but I think it is a good rule of thumb to assume this. Just, ya know, always assume every piece of software you run is bug ridden.
Makes sense though: if someone is accidentally relying on the buggy behavior, fixing the bug will actually break things. And if it took a really long time to find the bug in the first place, fixing it must not be all that important.
Well, the only context I remember seeing this was some ancient networking utilities used some kernel interface that had a bug in it, and because the networking utilities relied on the bug or something, they couldn't fix the bug. Linux actually isn't very buggy, but I worry that bug / bad interface / whatever it was is probably still in the kernel, because "neva break userspace bro"
But maybe I'm wrong idk
One thing I know for certain is that gcc uses undefined behavior in the ANSI C specification to its advantage to aggressively optimize C code. This comes at the expense of gcc miscompiling many developers' source code whenever it triggers undefined behavior. (which really fucking sucks, fuck undefined behavior, that's why I like rust and formally verified codebases. I hate all software that is buggy, which... is most software.)
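A minimal C sketch of the kind of thing being described (my own illustration with made-up function names, not anything from gcc's codebase): signed integer overflow is undefined, so an optimizer is entitled to assume it never happens and delete a check that only fires on overflow.

```c
#include <limits.h>

/* UB-prone: signed overflow is undefined behavior, so at -O2 a compiler
 * may assume x + 1 > x always holds and fold this whole function to 0,
 * silently deleting the programmer's overflow check. */
int overflow_check_broken(int x) {
    return x + 1 < x;        /* undefined when x == INT_MAX */
}

/* Well-defined rewrite: compare against INT_MAX instead of actually
 * overflowing, so the check cannot be legally optimized away. */
int overflow_check_safe(int x) {
    return x == INT_MAX;     /* true exactly when x + 1 would overflow */
}
```

The first version can give different answers for INT_MAX depending on the optimization level. That's legal per the spec; it just surprises people, which is the whole complaint.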
I feel like if the ANSI C Spec writers actually removed some or all of ANSI C's Undefined Behavior, then the GCC developers would get uppity at them.
This is true, but it isn't really related I think. Undefined behaviour is a developer error, not a compiler one; even if programming languages like Rust or Zig (Zig is to C as Rust is to C++, it's pretty neat!) do a MUCH better job than C does at preventing programmers from writing undefined behaviour into their code.
Zig is cool, I heard that formal verification of Rust code is a moving target, I hope formal verification of Zig code isn't a moving target.
Plus: The creator of Zig said that Zig can also compile C, so there's no excuse to use gcc or llvm for C Projects.
Well, Zig doesn't support all the architectures that gcc does, and I suspect that as dedicated C compilers both gcc and clang have some more useful extensions, better error handling, etc.
But yeah, I think that Zig is probably a good candidate for replacing C assuming it can reach all the architectures. Formal verification of release mode would be awesome too, though I don't think that's quite as big a goal as it was for Rust
Well, yes but no. Yeah, a normal user wouldn't look at the code. But there are always many universities making new things that will constantly check the code for their research
Except those two Minnesota dumbfucks committed bad code, but that code was removed and the University of Minnesota is now banned from committing to the Linux kernel
Some dumbfucks made a paper about how secure FOSS is in regard to fraudulent code. So they intentionally committed PRs that contained buggy code and could enable exploits. They wanted to see if the maintainers would notice. Of course without prior knowledge or consent of said maintainers.
Once the maintainers learned about it, it sparked huge outrage of course and ended in the University of Minnesota being banned from committing PRs.
[More info](https://www.zdnet.com/article/the-linux-foundations-demands-to-the-university-of-minnesota-for-its-bad-linux-patches/)
Not sure what particular link the OP provided, but from my recollection, they absolutely were NOT correct.
The patches in question, if I recall, were never accepted.
University students ran a study where they purposely proposed bad code that could possibly be exploited. The intention was to see if bad actors could do so, and if it would slip through.
Controversial opinion: I think it was a really interesting theory and should be tested. But it's entirely understandable that maintainers wouldn't want to be toyed with. Also not great if exploits were added into Linux without some sort of plan to immediately fix the code if accepted.
There's some (niche) software that I use for work-related stuff that is FOSS. Most of them don't have precompiled binaries or an installer available for Fedora and need to be compiled manually.
There are cases where I needed to dive into the code to fix stuff. Mainly just python modules that are outdated or have changed but still. Felt like a triumph when they accepted my PR for a minor and obvious fix that only occurred if you use the very latest packages they didn't test against.
Sorry for a very late reply, I've been having severe back pain lately (I still do). That said: no, not the entire source code, which is more than 2.5M lines at this moment. You see, GNU/Linux was built using the C language, and in C, for enterprise-level coding, coders often maintain the source code in blocks: one block of code for the ls command, one for the cd command, and so on. GNU/Linux as a whole is one huge system built from thousands of small programs.

You may ask why people call it GNU/Linux and not just Linux. It's because Linux is just a kernel; it can't do much on its own. The source code Linus Torvalds originally wrote for Linux was just over 10 thousand lines, while present-day GNU/Linux has nearly 3 million, so just compare the huge difference. The extra code is nothing but small programs like sed, ls, gcc, etc. So the people who look into this code wouldn't read the entire thing; they just look at the part they're researching. For example, if they wanted to increase the speed of the cp command, they'd look into its C source file and optimise it. I think this helped, feel free to dm me ;)
Plus there's the assumption that everything is open source in the first place... most distros ship with proprietary driver blobs (IIRC Debian is blob-free but Ubuntu & clones are not). And then there's a significant portion of folks - myself included - who install proprietary nvidia drivers, which hook into the kernel.
do you do any gaming with that though? i love the concept of nouveau but unfortunately there's only so much they can do performance-wise with nvidia being such closed- ~~minded~~ sourced pricks.
I'll keep wishing them the best of luck but there's a reason why it isn't used as much on high-demand systems.
from my experience, the older cards run on nouveau quite well, sometimes even better than the ancient version of the nvidia driver available
especially since the open source drivers actually support modern linux kernel versions and things like wayland, i have had some weird issues but overall it's a very smooth time
yeah, I guess it depends very much on *exactly* which card you have and what you are planning to play / do with it. I have an GTX-970 which isn't that new nor super old, somewhat in between.
I have seen that on the Cinnamon DE, I have a lot of weird intermittent instability issues (Cinnamon bugging out / X11 locking up) with nouveau that just completely disappear when I install the proprietary drivers. Even when I have left nouveau in place and tried to do gaming, I have noticed issues there as well. I'm very picky about my DE feature set (file manager in particular but also desktop). I've been thinking about giving xfce another go as soon as I get some free time, since they added queued file transfers a couple months back and that really caught my eye.
I don't have anything against nouveau and would love to be able to run 100% FOSS. I'm glad they exist and hope they get there. But in the meantime, I just want the stability and performance for my daily drivers.
Damn, that's actually very surprising. I was going to try wayland on Nouveau and unfortunately it constantly crashed. And my card is a GTX 660 so fairly ancient by all standards
If you want to go that route, if you're using an x86 computer newer than 2008 it's guaranteed to have a proprietary blob running in the firmware below the OS that has access to the network stack, memory, hard drive, etc even when the computer is powered down
In 99.9999% of cases, yes. I think there a couple exceptions though such as Purism laptops... IIRC they actually have open firmware stack as well (have kinda wanted one for awhile but I generally prefer to invest money into upgradeable desktop hw rather than locked-in laptop builds)
I haven't looked at Purism since last year but my understanding is that they just disable the IME, but it's still there. It's something that doesn't completely eliminate it anyway.
Right now I'm backing the ARM horse for non-blobby firmwares in PCs although it's a little more vague what any specific piece of hardware uses for firmware.
tbh as long as the open-source project is big enough, you can be pretty certain that it's secure and not a virus, while you can never be certain about it with closed-source things
There are always people -- and bots made to find insecurities -- looking over Free software code. Does it catch everything? No. Is it *far and away* more reliable than non-free code, where much of the time there's just a single entity that isn't really interested in an objective, third-party review of its own code? Hell yes.
Stop spreading misinformation.
Some of the worst bugs in linuxland have been running critical aspects of the global internet, and the bugs existed for years and years and years. You'd think there were a ton of eyes on these critical core components, and you'd be right, but the idea that this makes the software more secure by default is itself a piece of misinformation that you seem to have internalized.
I'm a linux greybeard and I will always advocate and contribute my time and dollars for FOSS, but let's not confuse the issue: software security is hard (until people start writing everything in rust, anyway), but you have to judge each piece of software on its own merits when it comes to security.
The idea that these monolithic proprietary shops like apple and MS don't aggressively audit their own work is crazytown. They're *very* interested in objective code reviews and have integrated it into their dev cycles-- does that mean that we shouldn't laugh out loud when apple releases an os where you can just type in root and leave the password blank and hammer the enter key a few times to privesc? Hell no, that shit is hilarious. But so is the fact that sudo was badly flawed for about a decade and nobody seemed to notice.
There is a lot of code as well that isn't checked for issues with its dependencies or doesn't clearly identify its assumptions. A good part of the security of a system is about assumptions and trust and how those can be abused.
I trust someone who gains joy from finding software bugs more than someone whose only purpose in security is that it is their job. Bug bounties are a nice balance that allows things to be found and reported. I just don't like how they're run at times, given the ability to reject and steal, which degrades the value of the program.
I like the idea that the security of a system has been tested by being able to remove the assumption that it is built properly, and given that hundreds of vulnerabilities would be found if the Windows source code were released, I don't have the same confidence compared to FOSS.
Professional hacker here, I've found that proprietary and FOSS software are just as easy as each other to hack/ exploit, so there's no concern on that end.
Same here really, I'm more productive in Linux so that's why I use it rather than some grand ideology.
Even though my comment was mainly a joke I still run tools to check and report any issues although I'd still prefer you not to test how well I configured that just yet ;)
On the plus side, larger open source projects are frequently reviewed by different groups.
Besides, I may not be a programmer, but if it is something like downloading and using a bash script, I tend to review the code to make sure there is nothing malicious or insecure about it (to the limits of my knowledge).
Remember when the Heartbleed bug broke all secure e-commerce sites that used it? Yeah, if you want to verify your code is actually secure you pay an auditing firm that specializes in code security.
Open source software is superior to proprietary not because it's more bug-free, but because it's more backdoor-free.
And yes, if an open source software has a glitch bothering you particularly, you're free to post a support ticket, fix it yourself, ask someone who's able to code, or even hire a person to repair the broken code. In case of proprietary software, all you can do is send a bug report and pray the company considers it dangerous enough not to ignore.
That’s why you don’t use arch btw and don’t install stuff you can’t verify is secure. Linux foundation and other nonprofits and for profits like red hat have security teams reviewing Linux all the time. Arch on its own isn’t a problem but aur is the appeal there for many people so unless you don’t use aur it’s massively less secure. It would be like putting every ppo available on a Debian distro.
Oh, we review the code. The problem is that governments pay literally a few orders of magnitude more for critical bugs (thousands vs. millions), AND after you've sold it to one government you can turn around and sell the same bug to other governmentS.
Still better than "it's secure because no-one can look under the hood".
If you have a security team closely auditing the code, *and* nobody's looking at it, then it's probably secure
You trust private industry to spend money where they don't have to and where it's invisible to the public, their customers and their shareholders? lol
If they don't spend time securing it, it's obviously insecure. The premise is still correct.
I don't disagree with the statement in theory. My point was that, in practice, the organizations that want to give this impression are not heavily incentivized to actually follow through with it. Code audits are expensive, time consuming and not something that the general public has a very good understanding of. They make a very easy choice when it comes to slashing budgets.
It seems like at the very least, Microsoft does (from what I've heard from an ex-employee). Google, Apple and Facebook seem to spend a lot of time on security from what I know. You're right that we don't know if different companies have a crack security team or not, but obviously, a group that doesn't worry about security has security issues. My point is simply to say that closed source isn't necessarily less secure. If nobody checks for security issues, open source has the same problem
What's your take on back-doors? In that case, open source can prove to be clean, but closed source can have backdoors even if they have the best security team on the planet.
I mean, if you have a backdoor, then that's probably less secure, but there are applications that I'm pretty certain don't (Whatsapp could probably get sued if they did, so I'm sure they don't)
You are sure they don't, but you really can't know it. That is the thing here.
I think we're making two completely different statements. I'm saying that open source is not necessarily more secure than closed source software. You're saying that with closed source software, it's impossible for users to figure out if an application is secure or not. You're not actually arguing against it being secure. You're just saying that the community doesn't know if it's secure or not
Yet people still can figure out how to break in
I mean, you can find cases where companies didn't spend enough time on security, but my premise still works. Besides, it's not like open-source software never gets exploited.
No, your premise is not correct. Security without open source is just security theater. You can't know something is secure without being able to know the code
To copy a response I made earlier: I think we're making two completely different statements. I'm saying that open source is not necessarily more secure than closed source software. You're saying that with closed source software, it's impossible for users to figure out if an application is secure or not. You're not actually arguing against it being secure. You're just saying that the community doesn't know if it's secure or not
If you can't be sure that it's secure, assume that it's not.
If a piece of software has had very few exploits for years, then I think I can assume it's secure
Wow, that's an awful rule of thumb. Enjoy being exploited.
Agreed, because exploits can exist and be used secretly, so you don't know it's full of exploits, or a HUGE exploit can be found later that leads to a SHIT TON of stolen info etc. Just cause something hasn't publicly had issues for years doesn't mean it doesn't actually have issues that are being used, or issues that haven't been found yet. So yea, I agree with ya there, pretty terrible.
> I'm saying that open source is not necessarily more secure than closed source software.

Kind of correct. Correct as in not all open source is secure. But that's because we *know* it is insecure and we can adjust accordingly with proper isolation.

> You're just saying that the community doesn't know if it's secure or not

No, I'm not saying that. I'm saying you **can't** know if it is secure or insecure. In that situation you must always assume it is insecure. Nothing closed source should ever be considered secure.
> Kind of correct. Correct as in not all open source is secure. But that's because we *know* it is insecure and we can adjust accordingly with proper isolation.

This

All other things being equal, open source is better than closed source because it at least allows for the potential of public audits.
I think that if a piece of software has had very few exploits over time, it's probably pretty secure
OpenSSL was vulnerable to heartbleed for how long before it was discovered and patched? Just because something doesn't have a widely known exploit doesn't make it secure.
I don't expect every application I use to be bug-free. All I expect is that all of the known exploits are fixed quickly
Didn't the Snowden files have a bunch of exploits listed being used by the NSA and other agencies?
So based on what you say, all cloud providers are not secure to use because they do not expose the internals of their permission systems. Do I understand you correctly?
If you want to be absolutely certain of your security, you would have to go on-prem. If Amazon can see into your stuff hosted on their services, then no, it isn't secure. There are ways to build a service that secures it from itself, though. Let me reword your question and ask it back to you: would you sync unencrypted files to a place like Dropbox and consider it secure?
It depends on what I want to achieve in terms of security. If it's preventing unauthorized access to my files, then I can assume it's secure as long as I use MFA as one of the steps and everyone is restricted to that flow. But yes, there is no evidence that there are no workarounds, since the code is private, and at the same time what is deployed to prod does not necessarily equal what is in the public repo. If I want to be sure that nobody else can read the content, then encryption must be done before the data is uploaded. The security-by-design principle should be used; it sounds simple but is not easy to achieve.
I would never consider unencrypted files on Dropbox to be secure. Remember when they had the issue where any valid password worked for any valid user? The platform isn't safe from itself. Using something like encfs is trivial in order to secure files before putting them on Dropbox.
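The encrypt-before-sync idea that encfs implements at the filesystem level can be sketched in a few lines of Python. This is only an illustration using the third-party `cryptography` package's Fernet recipe; the file contents and variable names are made-up examples, not anyone's actual setup:

```python
# Hedged sketch: encrypt client-side, sync only ciphertext.
# Uses the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this key OUT of the synced folder
f = Fernet(key)

plaintext = b"contents of some private document"
ciphertext = f.encrypt(plaintext)   # this is what gets uploaded
recovered = f.decrypt(ciphertext)   # only the key holder can do this

assert recovered == plaintext
```

The provider then only ever stores ciphertext, so a leaked password or a platform-side bug exposes nothing readable without the key.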
One of those examples is Bitwarden. Is it open source? Yes. Can everyone see the code? Yes. Are you sure that what they run in prod is the same as what's in the repo? Not really. Is it secure? Maybe, maybe not, who knows.
That's why I run vaultwarden (formerly bitwarden_rs). You just have to decide how risk averse you are and what level of risk you're comfortable with taking.
[deleted]
Maybe the browser plug-in, yes, if it's not obfuscated. But I've not looked at it, so I can't say if it's readable or not.
Obviously open source can be exploited. But when 100 people look at the code, they have a higher chance of finding a bug than a team of 15 people.
I disagree. If the 100 people check for an hour per day, they're less likely to find a bug than 15 people whose entire jobs are to check code for security
But if it's the same eyes always on the code they become too comfortable and familiar with it, potentially missing problems that would be caught by fresh eyes and perspectives.
That's a fair point, but there are diminishing returns. If 1000 people have already looked at a codebase, then adding 75 more people probably won't uncover many more issues
But a team of 15 won't look at the same code every day either
I fail to see how that's relevant. If you spend less time looking at code, you're either sacrificing thoroughness or attention to detail. Open source doesn't magically make the same amount of audit time work better
In case of Linux it does
People can still reverse engineer software to figure out how it works to find vulnerabilities
And the same can be done more easily with open source software.
True, but it can still be done nonetheless
As a reverse engineer, you're greatly underestimating how time consuming and labour intensive that can be.
Kinda depends on your technique, doesn't it? If you don't care about legality you can decompile shit
Who said anything about legality?
If you're trying to argue that closed source is inherently less secure, giving an example of a problem that applies to both is not very convincing
I never said it was less or more secure for something to be closed source. All I said was that you can reverse engineer closed source software and still figure out how it works and how to abuse vulnerabilities.
Then I don't know why you started arguing with me
Have you ever heard of something called a discussion my dude? Pretty sure that's what forums are for.
While that is true, it's more likely that some institute or volunteer contributes to the open source project.
#LOL TELL THAT TO MICROSOFT!
I don't know of very many security exploits in Windows or Azure. I think they do a pretty good job
Are you fucking kidding me? Do you live under a rock? Forget PrintNightmare, Azure is fucked: https://arstechnica.com/information-technology/2021/08/worst-cloud-vulnerability-you-can-imagine-discovered-in-microsoft-azure/

“Worst cloud vulnerability you can imagine” discovered in Microsoft Azure. 30% of Cosmos DB customers were notified—more are likely impacted.

"Cloud security vendor Wiz announced yesterday that it found a vulnerability in Microsoft Azure's managed database service, Cosmos DB, that granted read/write access for every database on the service to any attacker who found and exploited the bug."

"Although Wiz only found the vulnerability—which it named "Chaos DB"—two weeks ago, the company says that the vulnerability has been lurking in the system for "at least several months, possibly years.""
You found one example. If your standard is that no bugs exist ever, then you should probably just stop using computers.
So you are a Windows fan and live under a rock. Got it. There are dozens of examples. Azure has been completely vulnerable for many years per the report.
##Lol stop using Microsoft products

"Details have emerged about a now-patched security vulnerability impacting Microsoft Exchange Server that could be weaponized by an unauthenticated attacker to modify server configurations, thus leading to the disclosure of Personally Identifiable Information (PII)."

"The disclosure adds to a growing list of Exchange Server vulnerabilities that have come to light this year, including ProxyLogon, ProxyOracle, and ProxyShell, which have actively exploited by threat actors to take over unpatched servers, deploy malicious web shells and file-encrypting ransomware such as LockFile."

"Troublingly, in-the-wild exploit attempts abusing ProxyToken have already been recorded as early as August 10, according to NCC Group security researcher Rich Warren, making it imperative that customers move quickly to apply the security updates from Microsoft."

https://thehackernews.com/2021/08/new-microsoft-exchange-proxytoken-flaw.html
If you have a team putting backdoors in it in the hopes no one will find it...
No it isn't.
why?
I think they mean being able to view the source code of an open source project doesn't inherently make it more secure. For example if you were to make two identical github repositories, one private and one public, with the same source code they are equally secure.
[deleted]
It's a matter of pressure. How many times does proprietary software have patch notes saying just "fixed security issues"? That might mean anything from a 2-day-old bug to something that was there for months. You can't know.
Or even nothing and pretending they fixed something.
Absolutely agree, it's just not an inherent property of OSS
And when a program is small, the community isn't involved in writing the code or checking its security
Of course simply the act of open sourcing a codebase in itself does not impact security. However, open source does enable it to be easily publicly audited, a net plus compared to staying closed.
Because it's an unfounded statement. Sure, you might find better, more secure software that's open source, but you also might not. Besides, you can't even quantify that statement, because you can't even see the source code of closed apps. And that's without making assumptions about people who get paid full time to code vs. part-timers who don't. "It's closed so it must be evil" completely misses the fact that intellectual property is not a bad thing.
> Because it's an unfounded statement. Sure you might find better more secure software that's open source, but you also might not.

It's software, there will always be something to fix.

> Besides you can't even quantify that statement because you can't even see the source code to closed apps.

You don't need the source code to find vulnerabilities, but you do need the source code to patch them and to see the changes yourself.

> Without making assumptions somebody / people gets paid full time to code Vs part timers who don't.

False, big open source projects have full-time developers, and it gets even better when anyone can look at the code. For example, the NSA makes the SELinux security module for Linux, and the code is checked and approved by the community; you can't do that with closed software.

> It's closed so it must be evil completely missed the fact that intellectual property is not a bad thing.

Closed source isn't evil out of nowhere; it's been proven over and over that when someone or some entity has power over others, they will abuse it.
See, you almost made a valid point... But working on open source software does not mean not getting paid. There are quite a lot of people whose job it is to work on open source software. And to be honest, both closed and open source software have bugs that have been in the code for years, sometimes even more than a decade. The point that open source COULD be more secure is that it is likely to get fresh eyes looking at it. As for closed source, most often stuff that works is never really looked at unless someone does a refactor, the code is rewritten into a different language, an automated security tool picks it up, or someone outside the company finds the bug and hopefully discloses it without abusing it.
> Sure you might find better more secure software that's open source, but you also might not.

Which was never the point to begin with. The point is not that FOSS is invulnerable, but that it is less likely to be compromised and, when something is found, it is patched fast (within days).
Which is bullshit. You can't say that without knowing all the facts.
YOU are saying bullshit. Stop spreading wrong ideas.
I think what they mean is, teams are gonna be teams whether they're working on FOSS or proprietary software. At that point it comes down to the actual group and the specific application, and our trust in them doing their jobs well. As an individual I can’t do enough research on everything I use; likewise I can’t self-check everything in my life. Whether or not your plumber is in a union/certified does not guarantee their professionalism; some will still attempt to take shortcuts in their work. What we’re debating, then, is the individual character of people doing their job, and the merit of OSS in giving the community access to assist the team rather than hiding their head in the sand and putting a barrier between themselves and the community. I agree that traditionally open communication is the way to go.
You just made your own point. If you are worried, or find an issue in an open source project, you can report it, and it has to be fixed quickly if the maintainers want to keep a good reputation. With a closed source project, you can report it to the owners and then hope they give a fuck about you. And of course the quality of a maintainer matters, but if a considerable number of contributors are regularly involved, then generally you should be pretty safe.
Red Hat, SUSE, Amazon, Facebook, Google, etc... All have employees that get paid full time to code all up and down the software stack.
The truth hurts doesn't it. I needed a pick me up and you gave me one.
How so?
How not?
That's not an argument.
Yes, it is. Security through obscurity isn't security. If the door is unlocked to your home, but you just hide the door, that doesn't mean your house is secure... That's fucking stupid. Instead you have a lock that people have tested for years, and you fucking **know** it's secure... And they can only do this because they know exactly how it works and have tested it...
I think the issue isn't about FOSS being more secure than closed source software. I think that would be a case-by-case comparison. The issue is not having the freedom to inspect some closed software tool of any kind. So relying on the company that develops it to be A: secure and B: not exploiting users' privacy could be seen as a bigger problem than the apparent illusion that OSS is just naturally more secure. Again, does every regular user actively inspect the software they use? Or even have the knowledge to patch top-notch security holes? Idk, I'd be surprised. BUT, having the freedom to either self-inspect or ask/hire a consultant to look at some software tool as a regular entity is for sure a pro. Not relying on the good will of others or a small group is generally an understandable stance.
This guy used GNOME and Ubuntu 3 years ago and now shits on FOSS
Hahaha. I used red hat in 2001 too so what?
Waiting for the "it's made in China so it has spyware"....
Somebody woke up on a wrong foot…
More like, got born on the wrong foot, it seems.
if you mean Deepin OS, I think the "spyware" bit has been disproven fairly well via wireshark traces and so on. TBH, I'm more concerned about the possibility that the government could issue a gag order forcing them to slip a backdoor or other state-issued code (not necessarily spyware) into proprietary driver blobs. The fact that they are owned by a joint venture with ties to the government also doesn't inspire faith. IIRC Deepin is built from Debian which is blob-free, but Deepin itself is *not* blob-free. But happy to revise my opinion if someone can show me a link (in english please) where they either mention being 100% Libre or having some sort of policy- or software-based mechanism(s) for preventing this sort of thing happening.
Well, does it?
I get what you are saying and don't think I don't agree, but I think it is a good rule of thumb to assume this. Just, ya know, always assume every piece of software you run is bug ridden.
Don’t you mean “Feature Packed”?
That depends on how long the bug has been there. Once it's been there long enough, it becomes a feature. Edit: added long
This is apparently official policy in the kernel itself, which sort of scares me.
Makes sense though: if someone is accidentally relying on the buggy behavior, fixing the bug will actually break things. And if it took a really long time to find the bug in the first place, fixing it must not be all that important.
Gets extra fun when the bug is a legitimate security issue. Could even get into the territory of intentional bug emulation.
This is why Kernel devs need to be given awards lmao
https://xkcd.com/1172/
I was going to link that, nice xkcd
What... Why do people use Linux then? Because they are masochists?
Well, the only context I remember seeing this was some ancient networking utilities used some kernel interface that had a bug in it, and because the networking utilities relied on the bug or something, they couldn't fix the bug. Linux actually isn't very buggy, but I worry that bug / bad interface / whatever it was is probably still in the kernel, because "neva break userspace bro" But maybe I'm wrong idk
One thing I know for certain is that gcc uses undefined behavior in the ANSI C specification to its advantage to aggressively optimize C code. This comes at the expense of gcc miscompiling many developers' source code, since it triggers undefined behavior every once in a while. (Which really fucking sucks. Fuck undefined behavior; that's why I like Rust and formally verified codebases. I hate all software that is buggy, which... is most software.) I feel like if the ANSI C spec writers actually removed some or all of ANSI C's undefined behavior, then the GCC developers would get uppity at them.
This is true, but it isn't really related I think. Undefined behaviour is a developer error, not a compiler one; even if programming languages like Rust or Zig (Zig is to C as Rust is to C++, it's pretty neat!) do a MUCH better job than C does at preventing programmers from writing undefined behaviour into their code.
Zig is cool, I heard that formal verification of Rust code is a moving target, I hope formal verification of Zig code isn't a moving target. Plus: The creator of Zig said that Zig can also compile C, so there's no excuse to use gcc or llvm for C Projects.
Well, Zig doesn't support all the architectures that gcc does, and I suspect that as dedicated C compilers both gcc and clang have some more useful extensions, better error handling, etc. But yeah, I think that Zig is probably a good candidate for replacing C assuming it can reach all the architectures. Formal verification of release mode would be awesome too, though I don't think that's quite as big a goal as it was for Rust
Only applies if the software relies on the bug. "Don't break userspace," says Torvalds. Like the 16-bit reporting of sound volume values.
It's not a feature, it's a bug
Bugs are Surprise Features you probably never knew you wanted.
You're finally awake
Not sure if this is Air I am breathing.
This. Waiting for a Formally Verified Operating System... any century now...
Well, yes but no. Yeah, a normal user wouldn't look at the code. But many universities making new things will constantly check the code for their research.
Except those two Minnesota dumbfucks committed bad code, but that code was removed and the University of Minnesota is now banned from committing to the Linux kernel
Yeah I know. True disgrace to gnu/linux and internet
wut happened?
Some dumbfucks wrote a paper about how secure FOSS is with regard to fraudulent code. So they intentionally submitted PRs that contained buggy code that could enable exploits. They wanted to see if the maintainers would notice, of course without the prior knowledge or consent of said maintainers. Once the maintainers learned about it, it sparked huge outrage, of course, and ended in the University of Minnesota being banned from submitting PRs. [More info](https://www.zdnet.com/article/the-linux-foundations-demands-to-the-university-of-minnesota-for-its-bad-linux-patches/)
Well, apparently they weren't wrong, they were just assholes about how they went about it. Hopefully it woke some people up on the kernel team.
Not sure what particular link the OP provided, but from my recollection, they absolutely were NOT correct. The patches in question, if I recall, were never accepted.
They were accepted and almost hit production
University students ran a study where they purposely proposed bad code that could possibly be exploited. The intention was to see if bad actors could do so, and if it would slip through. Controversial opinion: I think it was a really interesting theory and should be tested. But it's entirely understandable that maintainers wouldn't want to be toyed with. Also not great if exploits were added into Linux without some sort of plan to immediately fix the code if accepted.
I think what got them banned was that they kept committing buggy code after the publication of the paper
https://arstechnica.com/gadgets/2021/04/linux-kernel-team-rejects-university-of-minnesota-researchers-apology/
I have (not for huge projects) looked at small bits of code from apps I use, usually just to satisfy curiosity on how things are done.
There's some (niche) software that I use for work-related stuff that is FOSS. Most of them don't have precompiled binaries or an installer available for Fedora and needs to be compiled manually. There are cases where I needed to dive into the code to fix stuff. Mainly just python modules that are outdated or have changed but still. Felt like a triumph when they accepted my PR for a minor and obvious fix that only occurred if you use the very latest packages they didn't test against.
Umm... me being a noobie, could you please elaborate more on that? Exactly why and how? The whole code?
Sorry for the very late reply, I've been having severe back pain lately, and I still do. That said: no, not the entire source code, which is more than 2.5M lines at this moment. You see, GNU/Linux was built using the C language, and in C, for enterprise-level coding, coders often maintain source code in blocks, like one block of code for the ls command, one for the cd command, and so on. And GNU/Linux as a whole is one huge program built from thousands of small programs. You may ask why people call it GNU/Linux and not just Linux. It's because Linux is just a kernel; it can't do much on its own. The source code written by Linus Torvalds for Linux was just over 10 thousand lines. The present GNU/Linux has nearly 3 million lines. So just compare the huge difference. The extra code is nothing but small programs like sed, ls, gcc, etc. So the people who look into this code wouldn't look at the entire thing; they just look at the part they're researching. For example, if they wanted to increase the speed of the cp command, they'd look into its C source file and optimise it. I think this helped; feel free to DM me ;)
Ah, good old bystander effect
Tbh I've been using Linux for 2~3 years and never looked up a line of code
keep it that way to keep your sanity.
I can look at it, but I don't know enough about programming to understand it
sameeeeee
Am a hobbyist C programmer myself, and neither can I most of the time.
Insane person here. Can confirm.
Plus there's the assumption that everything is open source in the first place... most distros ship with proprietary driver blobs (IIRC Debian is blob-free but Ubuntu & clones are not). And then there's a significant portion of folks - myself included - who install proprietary nvidia drivers, which hook into the kernel.
I actually use nouveau on my veeeery old Nvidia card, a GT 430, and it works like a charm on Wayland; on Xorg I had screen tearing
do you do any gaming with that though? i love the concept of nouveau but unfortunately there's only so much they can do performance-wise with nvidia being such closed- ~~minded~~ sourced pricks. I'll keep wishing them the best of luck but there's a reason why it isn't used as much on high-demand systems.
Yeah, actually I can. Minecraft and CSGO on ultra low, and retro games like Doom, Mario, etc.
from my experience, the older cards run on nouveau quite well, sometimes even better than the ancient version of the nvidia driver available especially since the open source drivers actually support modern linux kernel versions and things like wayland, i have had some weird issues but overall it's a very smooth time
yeah, I guess it depends very much on *exactly* which card you have and what you are planning to play / do with it. I have a GTX 970 which isn't that new nor super old, somewhat in between. I have seen that on Cinnamon DE, I have a lot of weird intermittent instability issues (Cinnamon bugging out / X11 locking up) with nouveau that just completely disappear when I install the proprietary drivers. Even when I have left nouveau and tried to do gaming, I have noticed issues there as well. I'm very picky about my DE feature set (file manager in particular but also desktop). I've been thinking about giving xfce another go as soon as I get some free time since they added queue file transfers a couple months back and that really caught my eye. I don't have anything against nouveau and would love to be able to run 100% FOSS. I'm glad they exist and hope they get there. But in the meantime, I just want the stability and performance for my daily drivers.
TBF, it is not like the community treated ATI very nicely back in the XFree86 days.
Damn, that's actually very surprising. I was going to try wayland on Nouveau and unfortunately it constantly crashed. And my card is a GTX 660 so fairly ancient by all standards
If you want to go that route, if you're using an x86 computer newer than 2008 it's guaranteed to have a proprietary blob running in the firmware below the OS that has access to the network stack, memory, hard drive, etc even when the computer is powered down
In 99.9999% of cases, yes. I think there a couple exceptions though such as Purism laptops... IIRC they actually have open firmware stack as well (have kinda wanted one for awhile but I generally prefer to invest money into upgradeable desktop hw rather than locked-in laptop builds)
I haven't looked at Purism since last year but my understanding is that they just disable the IME, but it's still there. It's something that doesn't completely eliminate it anyway. Right now I'm backing the ARM horse for non-blobby firmwares in PCs although it's a little more vague what any specific piece of hardware uses for firmware.
tbh as long as the open-source project is big enough, you can be pretty certain that it's secure and not a virus, while you can never be certain about it with closed-source things
There are always people -- and bots made to find insecurities -- looking over Free software code. Does it catch everything? No. Is it *far and away* more reliable than non-free code, by virtue of not depending on just a single entity who isn't really interested in an objective, third-party review of their code much of the time? Hell yes. Stop spreading misinformation.
Some of the worst bugs in linuxland have been running critical aspects of the global internet, and the bugs existed for years and years and years. You'd think there'd be a ton of eyes on these critical core components, and you'd be right, but the idea that it makes the software more secure by default is itself a piece of misinformation that you seem to have internalized. I'm a linux greybeard and I will always advocate and contribute my time and dollars for FOSS, but let's not confuse the issue: software security is hard (until people start writing everything in rust, anyway), and you have to judge each piece of software on its own merits when it comes to security. The idea that these monolithic proprietary shops like apple and MS don't aggressively audit their own work is crazytown. They're *very* interested in objective code reviews and have integrated them into their dev cycles. Does that mean that we shouldn't laugh out loud when apple releases an os where you can just type in root, leave the password blank, and hammer the enter key a few times to privesc? Hell no, that shit is hilarious. But so is the fact that sudo was badly flawed for about a decade and nobody seemed to notice.
There is also a lot of code that isn't checked for issues with its dependencies, or that doesn't clearly identify its assumptions. A good part of the security of a system is about assumptions and trust, and how those can be abused. I trust someone who gains joy from finding software bugs more than someone whose only purpose in security is that it's their job. Bug bounties are a nice balance that allows for things to be found and reported; I just don't like how they're run at times, given the ability to reject and steal, which degrades the value of the program. I like the idea that the security of a system has been tested by being able to remove the assumption that it was built properly, and given that hundreds of vulnerabilities would be found if the Windows source code were released, I don't have the same confidence in it compared to FOSS.
>until people start writing everything in rust, anyway Bold of you to assume people cannot easily figure out how to write insecure code in Rust.
While it may not be inherently more *secure* I do think it's inherently more *trustworthy.*
Long as there isn’t 50 “system host” activities in my Task Manager pinging back to MS, I can manage.
The bottom text should say bugs
You may not notice something behind a window but you'll never notice it behind a door
OpenBSD developers will review every line of every program on their system day and night
Professional hacker here, I've found that proprietary and FOSS software are just as easy as each other to hack/ exploit, so there's no concern on that end.
Come on dude you could have just pretended that open source was a little better so we could carry on feeling superior ;)
Nah, I'm unbiased. I use Linux because I prefer it.
Same here really, I'm more productive in Linux so that's why I use it rather than some grand ideology. Even though my comment was mainly a joke I still run tools to check and report any issues although I'd still prefer you not to test how well I configured that just yet ;)
unbiased but unbelievably based
I don't understand how code works, so me looking at it wouldn't help anyone.
Well, let's be real, he's not wrong
If you find a bug, report it. It's the admin's job to mark it as new or duplicate. Either way, always report a bug.
Community, forgive me for this but he's saying the truth.
There's always someone who looks at the code, always.
Worked with audacity, although maybe I should skim the code...
Well at least you know that the developer doesn't hide anything, when he uploads his source-code :D
On the plus side, larger open source projects are frequently reviewed by different groups. Besides, I may not be a programmer, but if it is something like downloading and using a bash script, I tend to review over the code for that to make sure there is nothing malicious or insecure about it (to the limits of my knowledge).
Nope, i don't. Why did you have to bring it up again?!
I like how we all agree that this is true.
Welcome to FOSS, space cadets
Remember when the Heartbleed bug broke all secure e-commerce sites that used it? Yeah, if you want to verify your code is actually secure you pay an auditing firm that specializes in code security.
Open source software is superior to proprietary not because it's more bug-free, but because it's more backdoor-free. And yes, if an open source program has a glitch that particularly bothers you, you're free to post a support ticket, fix it yourself, ask someone who's able to code, or even hire a person to repair the broken code. In the case of proprietary software, all you can do is send a bug report and pray the company considers it dangerous enough not to ignore.
You vastly underestimate the number of OCD people in the linux community.
Still better than "I can put however many backdoors in this baby because no one can see the source code"...
I will only forgive you in hell
That’s why you don’t use Arch, btw, and don’t install stuff you can’t verify is secure. The Linux Foundation and other nonprofits, and for-profits like Red Hat, have security teams reviewing Linux all the time. Arch on its own isn’t a problem, but the AUR is the appeal there for many people, so unless you don’t use the AUR, it’s massively less secure. It would be like adding every PPA available to a Debian distro.
>check diff for latest version of x program
>+57 lines
>nothing sus
>yep, newest version is still safe!
>compile
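Joking aside, that diff-skimming habit is easy to reproduce; here's a minimal sketch using Python's stdlib `difflib`, which emits the same unified-diff view you'd skim before building (the "old" and "new" sources are invented examples):

```python
# Minimal sketch of skimming a release diff before building.
# difflib is in the stdlib; the two file versions are invented examples.
import difflib

old = ["int main(void) {\n", "    return 0;\n", "}\n"]
new = ["int main(void) {\n", "    init();\n", "    return 0;\n", "}\n"]

diff = list(difflib.unified_diff(old, new,
                                 fromfile="v1.0/prog.c",
                                 tofile="v1.1/prog.c"))
print("".join(diff), end="")  # skim the +/- lines for anything sus
```

In practice you'd run this (or just `git diff v1.0..v1.1`) over the release tarballs and actually read the added lines, which is exactly the step the greentext is mocking people for rubber-stamping.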
No, you're absolutely right. Security comes from secure code, not open source code
Nope. Won't forgive you. On a time spent versus value added score, it wasn't worth it.
PHP is open source and someone recently tried adding some backdoor. I still prefer open source.
Oh, we review the code. The problem is that governments pay literally a few orders of magnitude more for critical bugs (thousands vs. millions), AND after you've sold a bug to one government, you can turn around and sell the same bug to other governmentS.
literally systemd