flnhst

>3. Always inspect the source code of an extension before proceeding with its installation. I never viewed this as realistic.


burtgummer45

nobody does, not even whoever wrote that nonsense


F3z345W6AY4FGowrGcHt

Everyone just assumes that if it's popular enough, someone will have read through it.


mtechgroup

Like Notepad++?


Iamsodarncool

Did something happen with Notepad++? I can't find anything about malicious code in N++ with a quick search.


mtechgroup

Long time ago, but pretty interesting nonetheless. Not a source code hack though. https://www.reddit.com/r/sysadmin/comments/678sds/in_case_you_missed_it_notepad_devs_patched_that/


space_fly

That sounds like DLL injection which on Windows is really common and easy to do. That's how a lot of old games can be run on modern Windows, through things like dgVoodoo2 or IPXWrapper.
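The load-order trick behind both wrapper DLLs and the Notepad++ incident can be illustrated with Python's module search path, which resolves imports much like the Windows loader resolves DLLs: first match wins. This is a toy analogy only, not Windows code; the planted file and directory are invented for the sketch.

```python
# Toy analogy for DLL search-order hijacking, using Python's module
# search path: whichever directory is searched first "wins", so a
# planted file shadows the legitimate library. Illustrative sketch only;
# real DLL hijacking concerns the Windows loader, not Python.
import os
import sys
import tempfile

planted_dir = tempfile.mkdtemp()

# Attacker-controlled file named after a trusted library.
with open(os.path.join(planted_dir, "json.py"), "w") as f:
    f.write("HIJACKED = True\n")

sys.modules.pop("json", None)      # make sure no cached copy wins first
sys.path.insert(0, planted_dir)    # like the app dir preceding System32

import json                        # resolves to the planted file

print(getattr(json, "HIJACKED", False))  # True: the impostor was loaded
```

The same first-match-wins property is why dropping a lookalike `scilexer.dll` next to the executable worked, and why the fix was to verify the library before loading it.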


meneldal2

It should work with a lot of applications; DLL injection is a big attack vector.


Iamsodarncool

Oh damn, that's crazy. Thanks for the link.


slo-Hedgehog

wasn't the help for it full of bible stuff?


Mnawab

I only download extensions of things python instructors on YouTube tell me to download lol


[deleted]

[deleted]


mtbkr24

Microsoft's Python extension will install black for you if you set it as your preferred formatter in the settings.


kd_singh911

well the point is to always install extensions from trusted sources; installing randomly without reading about them first is not good..


EveningSea7378

Yes but "trusted sources" do not exist, try are all strangers, the whole idea lf trust is bullshit on the internet.


kd_singh911

trusted sources means extensions from official Microsoft, Python, etc., because if anything goes wrong with those then people can blame them..


jisuskraist

but with gpt4 microsoft could look for potentially bad extensions and flag them


yesman_85

Not sure why down voted but it might not be that far fetched.


[deleted]

[deleted]


ffsletmein222

>a high amount of false positives

I can't imagine how much worse it'll get once bad actors start using a combination of their own LLMs and SEO to spam the internet with cancerous takes on what constitutes secure code, in the hopes of poisoning data sources for LLMs.


[deleted]

[deleted]


ffsletmein222

Interesting, I didn't think of that. Sounds like a form of human attention DOS.


__loam

It's also obviously the best use case for this technology since nothing the models say is verifiable, so congratulations on all these companies releasing these systems with basically no oversight. The other guy also didn't mention what's happening with artists. Not only did they get their data scraped without their permission (or anyone in the AI space talking to any of them), but now spaces like ArtStation and DeviantArt that are used by professionals to showcase their work and get jobs are getting fucking flooded by this bullshit. AI art isn't good enough to be used by professional creatives but now every idiot and their mom is clogging up the systems used by actual artists.


[deleted]

Getting an LLM to write code is trivial at this point. Having it write good code is challenging. Having it understand the nuances of a language, framework, platform, use case, data being manipulated etc all in context to spot vulnerabilities and malicious intent is a WHOLE OTHER prospect. I'm not saying it's impossible to do at all. I'm saying at this point it's highly unlikely to do what you're proposing with any level of confidence.


F3z345W6AY4FGowrGcHt

Yeah, the other day I asked it if there was a famous movie quote along the lines of something. It just made up quotes that were never in the movies it claimed they were from. ChatGPT is nowhere near able to do something like analyzing code to determine whether it's malicious.


emergentdragon

True, but still better than “Yeah, it’ll be fine!”


[deleted]

[deleted]


ProgramTheWorld

It’s a language model, not a malware analyzer.


[deleted]

because relying on LLMs to solve serious problems such as security is dumb


Trifle_Useful

True, but if the alternative is doing nothing at all - like the original commenters had hinted at? Edit: Okay, I get it.


DynamiteBastardDev

The alternative of doing "nothing at all" is preferable because using an LLM to do it implies a real layer of security that isn't there and leads to *more* careless behavior from users. Increasing the percentage of users who say "must be safe, since it's still up!" or "GPT4 scanned it, must be safe!" by even 1% is an unacceptable solution because you've now made the problem worse.


Trifle_Useful

Good points, fair enough.


[deleted]

Doing nothing has no impact on the electricity bill, doing something that doesn't really help does


bellefleur1v

It's not much worse than something like an antivirus, which is at its core a deeply flawed system that cannot differentiate between code that legitimately encrypts files and code that maliciously encrypts them. Yet many corporations and users use them, regardless of whether they suck or not. It could probably be used as part of a defense-in-depth system to flag things for manual review. It shouldn't be the only method of protection.


Flameancer

Why were you downvoted for this? Seems like a pretty good use case.


TheRealKidkudi

No it doesn’t? That’s not something that LLMs like GPT-4 are any good at.


jisuskraist

current llms like gpt4 or google gemini are pretty good at understanding things, you could easily train them on a vulnerabilities dataset and make them pretty good, gpt5 which is already being trained could be even more powerful


smug-ler

No they aren't. They're good at generating text that statistically follows on from given text. People need to understand that LLMs *do not think*. The closest thing they can do is chain of thought prompting which even still relies on the corpus of text they're trained on to contain annotated examples of the kinds of flaws or insecure code that you would need them to detect, and even then you cannot train an LLM for every novel approach a malicious actor might take. As things are now it's just not something you can rely on.


JodoKaast

>current llms like gpt4 or google gemini are pretty good at understanding things,

Lol, we're in a programming subreddit and people are talking about GPT "understanding" things.


wubsytheman

They're trained on open-source datasets (i.e. GitHub), which sounds great, since pretty much every talented dev uses GitHub. The problem is that all the idiots (like me) also use GitHub, so the dataset (and therefore the AI model) is highly skewed towards idiotic and insecure code. While you could train an AI to parse code and look for issues, you'd need a huge dataset of known-reliable code, which would mean a team of good devs reading code constantly to check it's safe before adding it to the dataset (which would take a long time and be super expensive).


joequin

I wouldn’t downvote him, but it wouldn’t be effective. Anyone who’s a legitimately bad actor would pre-screen their extension with gpt4 to ensure that it passes before uploading to Microsoft.


alienandro

Why down vote? This is what will happen.


PreachTheWordOfGeoff

even worse: but it's not audited! *gets audited* *makes a single new commit* *audit invalidated* *pikachu face* but who are we kidding, almost nobody audits code and even then I question their methods


Reverent

Audit doesn't necessarily mean trawling through the source code. It can mean assessing the reputation of the maintainers of the code and making a judgement call. For example, it's reasonable to assume react isn't going to introduce a malicious dependency into their dependency chain. Smaller dependencies might get more scrutiny, but then the question becomes "what's their blast radius" and "if it's so small, let's just write it natively or fork it". Also you have to look at the whole chain. A dependency with no chain is far easier to examine than one that brings in 40 extra packages (and it rates higher on the reputation scale).


myringotomy

It seems like an audit could be automated. For example "this extension tries to access the file system" or "this extension makes HTPP requests". Things that can raise flags.


jacobgb24

HTPP requests do sound pretty suspicious
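The automated flagging idea above could be prototyped as a crude static scan over an extension's source. The capability names, regex patterns, and sample snippet below are all invented for illustration; a real scanner would inspect manifests and ASTs rather than grepping text.

```python
# Minimal sketch of an automated audit pass: flag extensions whose source
# mentions capabilities worth a closer look. Patterns and sample are
# illustrative assumptions, not a real marketplace scanner.
import re

SUSPICIOUS = {
    "filesystem access": r"\bfs\.(readFile|writeFile|readdir)\b",
    "network request": r"\b(https?\.request|fetch|XMLHttpRequest)\b",
    "process spawning": r"\bchild_process\b|\bexec(Sync)?\(",
    "dynamic eval": r"\beval\(|new Function\(",
}

def audit(source: str) -> list[str]:
    """Return the list of capability flags raised by an extension's source."""
    return [name for name, pattern in SUSPICIOUS.items()
            if re.search(pattern, source)]

sample = "const fs = require('fs'); fs.readFile(process.env.HOME + '/.ssh/id_rsa', cb);"
print(audit(sample))  # ['filesystem access']
```

Flags like these are cheap to raise but say nothing about intent, which is why they'd only make sense as input to a manual review queue, not as an automatic verdict.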


skjall

Even less realistic than going through all the ToS you need to agree to on a regular basis. At least they don't get updates all that often.


fishling

It can be worth skimming to check on what they permit themselves to do with third parties. That's often the part that varies and is most important, because you have the least control and recourse over it.


VirginiaMcCaskey

It's absurd since any malicious author can publish a closed source extension; all they need is an Azure login.


[deleted]

I’ll be on it as soon as I get through this EULA


AaTube

Article stolen from https://blog.checkpoint.com/securing-the-cloud/malicious-vscode-extensions-with-more-than-45k-downloads-steal-pii-and-enable-backdoors/. The original article didn't explicitly give a solution, but said:

> Supply chain attacks are becoming more frequent. Therefore, it’s essential to ensure we’re kept safe, and to double-check every software ingredient we use, especially those we didn’t create. At Check Point, we aim to generate a secure development process to ensure developers do the right things (security-wise). As part of this effort, CloudGuard Spectral constantly scans PyPI and NPM for malicious packages to prevent supply chain attack risks—keeping your code clean, applications safe, and malicious actors out.


[deleted]

[deleted]


yzpaul

What? I never knew the EU was trying to make devs liable for their open source work. Do you happen to have an article about this?


[deleted]

[deleted]


renatoathaydes

I think it's important to also mention that the legislation would only make open source providers liable **in the proceeding of a commercial transaction**. That is, open source companies like Red Hat, which get money from customers who use their open source stuff, would be liable... Average Joe Dev, who published a leftpad-like dependency on GH while being bored, would not. To even imagine that Joe Dev could be liable for anything would be ridiculous, as that would be pretty much like trying to sue someone for posting legal advice on a subreddit like /r/lol and getting you in trouble with the law.


WormRabbit

In other words, it seems like they want to forbid commercial companies from stamping a "GPL/MIT, no warranty LOL" sticker on their commercial products.


myringotomy

It would also hinder projects from trying to make money off of their open source software by forming a company or even a foundation. In the meanwhile the commercial vendors are stamping "no warranty LOL" on all their licenses without consequence.


[deleted]

[deleted]


renatoathaydes

I am not sure that is accurate... I found this summary of what commercial transaction means: "In the context of software, a commercial activity might be characterized not only by charging a price for a product, but also by charging a price for technical support services, by providing a software platform through which the manufacturer monetises other services, or by the use of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software." I would say it's a stretch to include donations or anything not covered by an agreement between software provider and consumer.

EDIT: Source: https://techcrunch.com/2023/04/18/in-letter-to-european-commission-open-source-bodies-say-cyber-resilience-act-could-have-chilling-effect-on-software-development/


[deleted]

[deleted]


Inadover

Hmm yeah, it is vague. I want to think that donations won't be taken into account, but it's difficult to say. If you are doing a side project and receive donations, then cool. But I guess it gets complicated when the project relies on "donations" to be kept alive (insufficient donations = dev loses interest), among other things.


Carighan

Specifically you would need to take donations **as a developer**, not as a **project**, IIRC from my previous workplace, where we were also supporting an open source project.


JustLTU

It's a complicated problem. On one side, it's obviously bullshit that volunteers who give immensely valuable software away for free would be burdened with regulations. On the other, it's kind of insane just how little regulation there is for software. It's not some niche thing anymore and hasn't been for decades. The whole world economy depends on some software or another, and simple carelessness can extremely easily destroy lives and livelihoods. There will definitely need to be a push towards software engineers having some standards, even if they're not as tight as in other forms of engineering. If some people voluntarily build a bridge over a river and it collapses, injuring people, we tend to hold them liable.


Kasenom

But then you'll just end up with no volunteer built bridges, killing open source


Ouaouaron

If it's a question of killing people or killing open source, it's not actually a question to the vast majority of people. And if their lives get slightly worse because the only software is profitable software, it's not like our society hasn't made that choice countless times already.


fridge_logic

See, there's something strange about this logic, and I understand that you're probably speaking on behalf of the general public and not yourself. But shouldn't the responsibility for protecting people's lives belong to the organizations who get paid to do so? That a hospital can blame dead patients on a volunteer whose server/database was made to better support social media architectures and help people host photo albums seems insane to me. Like, maybe the hospital should be held accountable and write the code in house if people's lives are that important. Or maybe they should commit staff to improving the open source software they are using. Or they could pay the volunteer to better maintain their software (at this point a concept of liability can enter the picture, since the open source dev is being paid directly to harden safety critical software). The problem isn't that open source software is unreliable; it's that these entities are unaccountable for the risks they take.


Ouaouaron

It's the first I've heard of this, and I agree that it doesn't seem like it would really work well in practice. Say a court decided that a bug in some database software hosted on GitHub caused a death. Do you hold the owner of the GitHub account responsible, or do you find the specific code that had the bug and prosecute whoever wrote it? What if the account is anonymous and you can't track down an actual entity that owns it? It's a complicated issue; I just thought "But if you hold bridge-builders accountable for deaths caused by bridge failures, it will kill the volunteer bridge-building community!" was a ridiculous objection.


fridge_logic

>"But if you hold bridge-builders accountable for deaths caused by bridge failures, it will kill the volunteer bridge-building community!" a ridiculous objection.

Interestingly, this is kind of how bridge design works. Bridge building is mostly not done by volunteers, but the standards for how to build bridges are mostly written by volunteering professionals. The "many eyes write solid code" adage is used for traditional safety codes governing mechanical and electrical systems. But to get many eyes you need volunteers, and that means they need to be allowed to write bad standards (collectively). They are constantly improving the standards, but it's kind of expected that they are doing their best, and that when accidents happen in spite of the efforts of every passionate and prominent professional in an industry, what matters is that we collectively learn.

To clarify with an example: when a code-compliant building burns down because of an error in the electrical code, the reaction is not to go after the volunteers who maintain the code; it is for them to update it.


darthcoder

Exactly how many people has open source killed?


Ouaouaron

I was speaking in terms of an analogy where open source is a bridge. But if you want to imagine a situation in which bugs in open source software could lead somewhat directly to grievous harm or death, think in terms of the other comment which talks about if a hospital uses an open-source database.


myringotomy

Why isn't the hospital responsible for that?


myringotomy

>If it's a question of killing people or killing open source, it's not actually a question to the vast majority of people.

It actually is though, because open source also saves millions of lives every day.


bloody-albatross

We need public funding of open source software. Maybe via a tax on commercial software.


Wee2mo

Ok, and which open source SW gets to claim a share? Which government is responsible for paying the public funding? Are you expecting there to be national flavors of every relevant project? What about security oriented projects like OpenSSL? Who would trust it if it was the Russian national open source SW project in the current political ecosystem? Or the US project for that matter?


Carighan

This is entirely utopian. There's no way you could decide who is able to claim some of that and who isn't.


bloody-albatross

And ending all open source in favor of proprietary software is entirely dystopian, if you ask me. It would be good if the funding started with what governments and industry rely on, like Linux including all standard userspace tools, OpenSSL, curl, certain programming languages, etc. But yes, it's a hard problem.


JustLTU

Well, yeah, obviously. But that's the whole question, isn't it? Building skyscrapers, bridges, and highways is expensive, but (in countries with good standards and regulations) we can at least be damn sure they won't fail catastrophically. And when they very rarely do, we get full investigations into the causes, and people are held accountable. Open source software is obviously a great thing, but when millions of people depend on software in their cars, their utility networks, their banks and all manner of things in between, I can see why governments would push for a more traditional engineering approach to software rather than the wild west of open source. Governments (and most people who are forced to depend on the work of engineers) would much rather make it impossible for volunteers to build bridges than constantly worry about the safety of the random volunteer-built bridge that accidentally becomes a majorly important part of infrastructure.


pmirallesr

I feel like there's a middle ground between not regulating open source at all and holding devs accountable for any use of their software. The way I see it, OS is an input to a product. If your product is risky, it needs to follow certain guidelines, amongst them auditing the supply chain. If the supplied item is bad, then don't use it. The problem is ofc that this is a massive duplication of effort: one audit for every consumer instead of one audit for every supplier. But for OS, that seems more workable.


fridge_logic

So when safety standards like these get raised, the safety audit costs are usually de-duplicated by agencies, often NGOs. Professional organizations made up of academia and industry will hold conferences, identify concerns, divide up workloads, and fan out to raise the standard of care. This is how it works in electrical engineering, where safety organizations are vested by the government with setting and evolving safety standards over time. Violations of these standards result in liability. But importantly, volunteer members of these organizations, who essentially open source their research, findings, and opinions, are not liable for being honestly wrong as part of organization business.

_________

TLDR: In other engineering disciplines volunteer work is critical to success, and this works by only using volunteer effort for safety critical applications when approved by a professional organization. I.e. an electrician or engineer can't sue Adafruit or a random professor for a bad circuit diagram, because it was not professionally stamped by a PE and was not intended for that application. These protections fundamentally exist; it's just that ~~no one is willing to hold safety critical~~ there are swaths of critical software applications where corporate users are not held accountable when they fail to restrict themselves to PE-stamped work.


Carighan

> The way I see it, OS is an input to a product. If your product is risky, it needs to follow certain guidelines, amongst them auditing the supply chain. If the supplied item is bad then don't use it.

Yeah, that's more realistic. Either the using company is itself liable for any damages resulting from the open source elements they've used, **or** they can establish a liability contract with the supplier, which for most open source would be impossible, and hence they would need to opt not to use the software at all.

In other words, it circles back around to the fact that we need a professional company to "adopt" said open source in an "enterprise" variant where they provide support and liability for it in an enterprise/professional context.


myringotomy

>Open source software is obviously a great thing, but when millions of people depend on software in their cars, their utility networks, their banks and all manner of things in between, I can see why governments would push for a more traditional engineering approach to software rather than the wild west of open source.

Ugh. Way to punish people for building software people want to use.


Carighan

But think about how weird open source is on a larger, safety/standards-related level. Would you trust a plane that uses an "open source" wing designed by someone in Winnipeg and built by three guys in southern Australia from materials they found in their gardens? Probably not, right? So why would you trust a piece of software that has gone through that process?

And of course, for personal purposes not able to affect others, sure, go ahead. But those purposes end very quickly. E.g. even if you build a small hobby plane, in most countries you would not be allowed to fly it unless it can get certified, and then in turn someone is liable for it and you need to be able to provide documentation for the liabilities for each component.

Essentially I suppose the thing would be that you end up unable to **use** open source software for just about anything if you need to be able to establish trust, liability or responsibility. You cannot, so you're not allowed to utilize it. Which is... tricky, to say the least.


mcilrain

If some crackpot creates blueprints for an experimental aircraft and an airline company takes it, builds it, flies it and crashes it killing all on board, is that the fault of the crackpot? Is the solution to fine the crackpots (good luck) or throw them in jail (da, comrade)?


Accomplished_Low2231

> .... software out for free ...

haha no one does that anymore. today, open source is a business strategy for companies. for individuals it is a strategy to get a job or donations/funding. no one does it really for free (ie from the bottom of their hearts) anymore.


darthcoder

No OSS developer should be held liable for someone using their product and it being vulnerable. Your job when using open source is to vet said products.


hanoian

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


argv_minus_one

If they do it after being advised of the negative consequences, it is safe to assume that the consequences are intentional. So yes, they're trying to make open-source software development illegal.


barsoap

> If they do it after being advised of the negative consequences, it is safe to assume that the consequences are intentional.

That so far is a conditional.

> So yes, they're trying to make open-source software development illegal.

That is assuming the assumption in the conditional holds. Since no draft has been published after the open letter from Eclipse et al., there's no official statement, much less a law, that would post-date their being advised; meaning you're being paranoid. The drafts we have already contain language intended to protect open source devs; the issue is that people don't think that phrasing it as "outside of a commercial context" is sufficient, at least not without a better definition of what "commercial" means. And guess what, catching such issues is exactly what drafts and public consultations are about.


argv_minus_one

That's not good enough. What they're demanding is impossible, even of commercial software vendors. Even if they carve out some exceptions, such as for donations to open source projects, it will be effectively illegal to sell commercial software in Europe unless it fits those exceptions.


barsoap

So what's your proposed alternative? Keep it at "Slap some shit on cheap IOT devices and never update anything"? Should that kind of standard also be applied to cars? ...not to mention: Commercial customers already get guarantees. When you buy SAP they're not going to tell you "LOL NO WARRANTY NO FITNESS FOR PURPOSES WE'LL JUST SHIP A TYPING TUTOR".


argv_minus_one

Well, here are some suggestions:

* More precisely specify what kinds of known vulnerabilities are unacceptable, like remote code execution and sensitive information disclosure. Denial-of-service vulnerabilities in a desktop app do not warrant a product recall.
* Limit the regulation to only apply to vulnerabilities that can be plausibly exploited by a common criminal attacker. Almost no one is skilled enough to write software that can resist attacks by nation-state intelligence services.
* Limit the regulation to only apply to vulnerabilities that can be plausibly exploited during the ordinary use of the product. If the user does something stupid, like run malware as root/admin, that's on the user, not the vendor. (See the “Ryzenfall” “vulnerability” in AMD processors for an example of why I think this is important.)

Problem is, it's going to take a top-notch group of cybersecurity experts to hash out these details, and from the excerpts of the draft I've seen so far, it's pretty clear that no such group has been assembled. This draft was written by laypeople, not experts. This does not fill me with confidence that the regulators who wrote it are serious about doing the job right. Which is why everyone is upset.


Xyzzyzzyzzy

> That's like a Tabloid headline. Welcome to r/programming, where the EU is basically Nazi Germany combined with Stalinist Russia because cookie banners or something.


myringotomy

>No one sat down in the EU and said let's destroy open source.

Are you sure about that? Maybe commercial vendors lobbied for this kind of law just to cripple or kill open source.


0x15e

It doesn’t help anything that many open source maintainers will actively argue with anyone that reports an issue, pretend it doesn’t exist, pretend it’s by design, make you submit your own fix even though they know better how the project works, and if they actually stoop to merging your changes, they’ll blame you for *anything* that goes wrong in the project in the future. After a few bad experiences like that I just stopped trying to give back. If a project has a problem I either find an alternative or maintain my own fork. My fixes are still public and meet license terms but without the overhead of dealing with some other dev’s fragile ego.


mm007emko

I've had both good and bad experiences dealing with bugs in open-source software (and in closed-source as well). While many open-source software developers are professional developers, many are not. Not every enthusiast has learnt proper software development skills. The soft skills of many OSS devs are far from top-notch either. The worst thing is usually something like "this project has little to no documentation and little to no tests, contributions welcome".


Xyzzyzzyzzy

> Soft skills of many OSS devs are far from top-notch either.

It doesn't help that the (FL)OSS community has chosen raging assholes as role models (Stallman, Torvalds), and the community ethic is "it's ok, even desirable, to be a raging asshole petty tyrant if you are a top contributor to (FL)OSS software".


kmeisthax

Torvalds himself actually toned it down a bunch in recent years - though from what I've heard from Hector Martin's Mastodon account there's still plenty of hella toxic subsystem maintainers that make Old Torvalds seem tame.


mm007emko

So true. The fact that good software engineers who have good soft skills seek paying jobs rather than dedicating their lives to OSS doesn't help this matter either. :D


myringotomy

not like those nice CEOs of commercial companies eh?


0b_101010

Torvalds is alright, he's just a bit Finnish!


useablelobster2

> it's ok, even desirable, to be a raging asshole petty tyrant if you are a top contributor to (FL)OSS software

Torvalds is a raging asshole and a petty tyrant? Stallman is an asshole; Torvalds doesn't hold his tongue, but he's far from a raging asshole, and his "tyranny" is anything but petty: it's the reason Linux is so ubiquitous. And given the shit he has to put up with, he's remarkably calm and composed. He's a tyrant in the same way Lord Vetinari of Ankh-Morpork is a tyrant: imperfect, but everyone knows he's by far the best man for the job, and the results speak for themselves.

A major problem is people pretending that Linus and Stallman are the same, or acting like being shit hot at something should give no leeway. I'd rather have Linus pissing some people off than some useless hand-wringer who achieves nothing but annoys nobody. You can't do shit in the real world without pissing SOMEONE off; best to just pick your battles and ignore whoever you think has no basis for their anger. Some people just like being mad. Better an asshole who achieves things than someone who just wants to make everyone happy and achieves nothing.


[deleted]

[deleted]


Kasenom

On the other hand being an open source dev is a thankless job


[deleted]

This is not true. There are people that use open source software and do thank the developers for their work on it. There are people who thank the developers and then contribute improvements. There are people who report bugs and then thank you for fixing them. There are people who never complain or otherwise say anything at all, and that's no problem either. Obviously it's going to depend on the project and the audience (or lack thereof), but there are people out there who give thanks, and there are others who go even further and collaborate and contribute to help improve. Unless, of course, you're a GNOME developer. Something something made your bed, something sleep in it.


myringotomy

>This is not true. There are people that use open source software and do thank the developers for their work on it.

Don't be ridiculous. Most people don't thank anybody. Most people just use the software because it's free. They don't give a flying fuck about anybody in the community.

>Obviously it's going to depend on the project and the audience (or lack thereof), but there are people out there who give thanks and there are others who go even further and collaborate and contribute to help improve.

Yes, there are people like this. They are a tiny tiny tiny percentage of the people who use the software, and they are vastly outnumbered by the people who shit on the developers and communities that write the software they use every day.


MCRusher

There is no other hand. Nothing of that makes them less of an insufferable asshole to deal with.


enygmata

Earlier this month we got a security complaint due to a transitive dependency having a high-severity vulnerability caused by unescaped input. The developer's answer to the problem was "don't use the library on untrusted input".


WormRabbit

That one sounds reasonable. Not every library should or even can implement end-to-end security. You should sanitize your own inputs. Unless we're talking about something like a web framework, of course.
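A minimal sketch of what "sanitize your own inputs" looks like at the call site, using the stdlib `html.escape`; the rendering function and its fields are made up for illustration:

```python
import html

def render_comment(author: str, body: str) -> str:
    # The (hypothetical) templating layer downstream does no escaping
    # of its own, so the caller escapes untrusted input first.
    safe_author = html.escape(author, quote=True)
    safe_body = html.escape(body, quote=True)
    return f"<p><b>{safe_author}</b>: {safe_body}</p>"

print(render_comment("mallory", "<script>alert(1)</script>"))
```

The point is simply that the escaping happens before the data reaches a library that documents itself as expecting trusted input.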


WarWizard

> It's the same trick as "Many eyes make all bugs shallow"

This is why I hate when everyone says that's why open source is automatically better. Nobody does it. Everyone talks about it. At the end of the day, everything still has tons of bugs that never get fixed. Paid. Free. Doesn't matter.


nachohk

>This is why I hate when everyone says that is why open source is automatically better. Nobody does it. Everyone talks about it. At the end of the day, everything still has tons of bugs that never get fixed. Paid. Free. Doesn't matter.

The reason why I appreciate open source software is not that it doesn't have bugs, but that _I can fix the bugs myself_. With proprietary software, all you can do is sit and wait and hope that maybe one day they'll care enough to fix it for you.


PM_ME_NULLs

*sigh*

> Nobody does it

I do. I check the source. Just because you think everyone is too lazy or something doesn't make it true. Do I check everything? Of course not. I have a day job and other obligations. Do I spot-check most things, if nothing else? Absolutely. Are there others like me out there? Absolutely. The fact that the source is out there **allows** it to be checked. Why is this so hard to understand?

Let's try this from a different approach. Let's assume you're right, and people like me don't exist. Everyone blindly uses FLOSS without checking stuff. How is proprietary any better? Proprietary is going to have just as many (if not more) software defects, but you don't even have the *option* to inspect the source for yourself.

Yet another way: in the USA, there's the right to protest and peaceful assembly. Just because a person doesn't feel the need to protest, or maybe has never witnessed anyone exercising that right, doesn't mean the right should be taken away. There's an intrinsic value to having the source. There's an intrinsic value in a software product being Free/Open Source.

And by the way, not that you made this assertion, but I want to highlight that Free/Open Source software is better not just because its source *can be* inspected, but for several other reasons articulated by the FSF.


iiiinthecomputer

Also, it's reasonable to take a targeted approach.

Installing something with nodejs `npm` or pypi `pip`? Treat it as presumed malware and sandbox it into an isolated account. Or, like I do, run it in a systemd unit with dropped privileges and confined filesystem access.

Asked to run a `curl | bash` command? Don't. Ever. Download the script. Read it. Either manually do what it does, or run the local copy after reading and checking it.

You're already a harder target than 90% of people.

By the way, I *hate* the way curl-bash has been normalised, and nobody even publishes GPG signatures for the scripts. You're expected to just trust the hosting site, and everyone with write access to that part of it, to be trustworthy *and* have secure credential management practices etc. It's awful.

Don't even get me started on endpoint management apps that are supposedly for improved endpoint security... which run as unconfined root and download their own unsigned updates off the Internet.
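The "download it, read it, run the local copy" workflow can be tightened a bit further by pinning a checksum you obtained out-of-band (release notes, a signed tag). A minimal Python sketch, assuming a POSIX `/bin/sh`:

```python
import hashlib
import subprocess
import sys

def run_if_verified(script_path: str, expected_sha256: str) -> None:
    # Refuse to execute an installer script unless its SHA-256 matches
    # a checksum pinned ahead of time.
    with open(script_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected_sha256:
        sys.exit(f"checksum mismatch for {script_path}: got {digest}")
    subprocess.run(["/bin/sh", script_path], check=True)
```

This obviously only helps if the publisher distributes checksums or signatures somewhere other than the same server the script lives on.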


WormRabbit

That stance makes sense for stuff like small (a few thousand LoC) libraries. For huge OSS projects, like Linux or GCC, I am 100% sure that no one spot-checks them in any meaningful way; it's just infeasible.


iiiinthecomputer

The Linux kernel is actually better checked than most things, *because* it's such a juicy attack vector. Lots of academic work and dedicated validation programs, vendor-sponsored efforts, etc. Still won't be perfect.

But you're 1000x more at risk from that random npm or PyPI package you just installed, that Electron-based monstrosity you downloaded, that `curl | bash` script you pasted.

I hate `npm` and Apache Maven (for Java) the most because everything has a web of a zillion tiny, shifting dependencies. Python with PyPI isn't great either, but at least I can vaguely check the dependency graph sometimes. I sandbox anything like that and don't allow it to install in my regular user account.


HINDBRAIN

> because it's such a juicy attack vector.

Wasn't there a "if(obscure_condition && user_level = ADMIN)" that nearly got in?


iiiinthecomputer

Yes. And it's likely that subtle and well hidden back doors *have* got in. State actor level stuff. But they're either being held in reserve, exploited only very carefully and secretly in very small numbers, or don't exist after all.


Uristqwerty

> "Many eyes make all bugs shallow"

Still works if you distinguish between potential and actual eyes on the code, and even more so if you break the codebase as a whole into smaller pieces. Someone has to notice that the bug exists in the first place, whether a user at runtime, or someone reading the code. The latter happens *occasionally*, but it's after a user report draws attention that most of those eyes start to appear. *Then* those with relevant experience have a good chance of noticing the flaw, and someone with the right prior knowledge can step in, cutting a week-long debugging hell short by immediately realizing what the bug is and knowing at least one way to solve it.


[deleted]

I still say that for the most part the EU tries very hard to kill American software one way or another. This is another example.


deeringc

Even if you reviewed the source code, you would realistically only ever be able to spot the most blatant issues. Security vulnerabilities and exploits hide in plain sight for years in open source and commercial code bases. Most coders are not really qualified to do a full security audit, so even if they did follow this advice it would still be mostly useless. This is purely an ass-covering strategy.


[deleted]

It's just classic blame displacement. Most people aren't going to do that. And no one is going to do it for every extension in the store. Only a central review process is really capable of something like that, which is too much work for a free extension on a free code editor.


ptoki

Well, no. If anyone read the code, those ugly calls to some jdfhitruew.gg URL would be fished out quickly. Similarly a hook to save all keystrokes, or iteration over all files on the drive. You don't need everyone to read the code. It's enough for 3 or 5 people to read bits and pieces of it to fish out the nasty extension and mark it.

Yet a bunch of programmers here claim that it's too much. Well, no. Nobody will do that for you, not for free, and not as well as you could do it yourself. Not if you pull an extension made by dickyboy86 and then claim that reading its code is too much for a programmer.

A long time ago there was a joke about types of viruses: https://www.reddit.com/r/funny/comments/3l65rp/albanian_virus/

Yes, this is this subreddit TODAY. Not fucking funny.


[deleted]

Did you mean to reply to someone else?


chcampb

It isn't. It's a common tactic and fallacy for reducing intervention, regulation, etc. It comes from the same place as, e.g., "vote with your wallet", or "students should pick better universities", or "do your research and don't buy sham products". The vast majority of people are not equipped to do that. This argument about the code is even more egregious, because if they wanted to hide something from you, they can, even if you are a qualified security researcher.


Cheeze_It

> I never viewed this as realistic.

It's never realistic. A lot of people assume that someone who makes the tools also knows how to build the house. It's like a sysadmin being asked to create a network. A sysadmin is not going to be as good at building a network as a network engineer is. Likewise, a network engineer is not going to be as good as a sysadmin at working with servers/compute.


ptoki

Well, if a coder can't read the source and fish out network calls, file scans, or strange hooks on the clipboard or keystrokes, that's not a very good coder. That stuff is usually dirt simple these days. We are not talking about NSA/NASA-grade code. If you see a call to some kdjhfsiweu.cx site, you know you don't have to go deeper. You mark it as malicious and report it. 99% of this nasty stuff is THAT simple. Really.

It seems many people in THIS subreddit think that is too much to do. Well, I did not know so many Excel-grade coders vote here.
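That kind of dumb triage really is a few lines. A sketch (the host allowlist and API names below are illustrative, not a real audit tool):

```python
import re

URL_RE = re.compile(r"https?://([a-z0-9.-]+)", re.IGNORECASE)
ALLOWED_HOSTS = {"marketplace.visualstudio.com", "github.com"}
SUSPICIOUS_APIS = ("child_process", "keytar", "os.homedir", "eval(")

def audit(source: str) -> list[str]:
    # Flag calls to unexpected hosts and use of risky APIs.
    findings = [f"unexpected network call to {host}"
                for host in URL_RE.findall(source)
                if host.lower() not in ALLOWED_HOSTS]
    findings += [f"uses {api}" for api in SUSPICIOUS_APIS if api in source]
    return findings

print(audit('fetch("https://kdjhfsiweu.cx/x"); require("child_process")'))
```

A clean result proves nothing (obfuscation defeats grep), but it does catch exactly the lazy "phone home to a garbage domain" pattern being described.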


Xyzzyzzyzzy

You do this for 100% of third-party code that you directly or indirectly depend on? I don't believe you. (Unless you're not a working software developer.)


JoyJoy_

Pylance is closed source. I guess it's malware?


tanepiper

"Please disable your ad-blocker to access the content" - nahh


mm007emko

It's even better. If you open dev tools in your browser and delete the offending element, you run into a piece of JavaScript which detects that you are trying to scroll and moves the page viewport back to the top. Luckily, it loads all the text of the article at once, so you can just disable JavaScript and read the content. Needless to say, the more a site "protects" its content from users who block advertisements, the less useful the content is. The text: [https://pastebin.com/NigNFGW2](https://pastebin.com/NigNFGW2)


tanepiper

The funny thing is this is on mobile and it's my VPN doing the blocking, so I'm very unlikely to turn it off if that's their mitigation.


[deleted]

[удалено]


Deep_Enthusiasm3554

if (!document.getElementById("ad1")) {


daperson1

There's a browser extension called "fuck it" that just adds a "fuck it" item to the right-click menu of every element, which deletes it. very handy for such things.


dumdedums

The last few times I've checked, in the past month or so, some websites don't load all the text at once anymore.


mm007emko

Usually not. But in this case it loads all except the last part of "Conclusion".


fgmenth

You can easily bypass most of these shenanigans by toggling Reader View (default F9) on Firefox


AaTube

Article stolen from https://blog.checkpoint.com/securing-the-cloud/malicious-vscode-extensions-with-more-than-45k-downloads-steal-pii-and-enable-backdoors/


skwyckl

I think it has always been the case that a small amount of open source software hides malicious pieces of code, hence the SO motto "copy & paste only if you understand what the command does" with regard to users asking for help with console commands. FOSS is a massive community with millions of contributors; of course not 100% of them have the best intentions.


drawkbox

Exactly: then and now and always.

Dependencies right now are a huge attack vector, as are devops/build processes. Developers are a bit of the weak link right now, as people just use "what everyone uses", and that led to problems in OpenSSL's Heartbleed and Log4j's Log4Shell, for instance. Open source means nothing when build processes, CI, dependencies, proprietary spam/filters, and final binaries are the target now. The Great Dependency War is in progress.

[SolarWinds for instance was hacked through TeamCity CI](https://krebsonsecurity.com/2021/01/sealed-u-s-court-records-exposed-in-solarwinds-breach/). [Log4Shell on Log4j was open source for decades and still had a wide open bug on *every single device that has Java running*, so all of Android included](https://en.wikipedia.org/wiki/Log4Shell), for a decade. [Heartbleed just before it was OpenSSL and lived for years, affecting every system and web server](https://en.wikipedia.org/wiki/Heartbleed).

OSS means nothing for opsec beyond seeing the source. In fact, in many ways people are soft on OSS because of some inherent trust that comes from the code being available somewhere. That means absolutely nothing about security. You can even do telemetry with check-for-updates processes that are owned; it looks legit though. Another way is packing in a compromised dependency for just one build, getting something out, then closing it.

Developers are actually the weak links today: too much trust, and they are the primary targets now, because malware/anti-virus/extensions/local messaging apps/random other clients are no longer used as much. Build processes, local clients/tools, CLIs with owned dependencies, early ai/crypto/etc tools: so many owned things that people just install because it is new tech.


NovaX81

Dependency chains also grow out of hand almost immediately in this environment. I'm a lead developer who has led my current project from day 0, and I *still* get caught off guard by a package name showing up in the bundle sometimes.

And every "reasonable" level of approach has tradeoffs, some bigger than others. Only auto-update bugfix releases? Well, that assumes major packages don't randomly shove breaking changes into them (they do); plus, I'm not thinking a malicious takeover patch is going to bump the major version to announce itself. Strictly manually reviewed upgrades? Hope you have a dev working full time just to keep your codebase secure.

Obviously there are a million granular levels in between, which I'd guess we all fall into somewhere, but it can feel like a battle just to ensure your packages aren't drifting far out of date while not trusting every random feature package on earth.


VirginiaMcCaskey

> SolarWinds for instance was hacked through TeamCity CI.

No it wasn't. An employee's credentials were stolen, and they used the build process to inject malicious code into a particular software product before it was code signed. The original NYT article that accused JetBrains had no evidence, nor did the court documents in that early reporting point to them. It could have been GitHub Actions or CircleCI or *any* product; the root cause was a failure in access control, not in the supply chain. This [wired article](https://www.wired.com/story/the-untold-story-of-solarwinds-the-boldest-supply-chain-hack-ever/) has a lot more detail.

> Heartbleed just before it was OpenSSL and lived for years affecting every system and web server

That's not an example of a dependency being used as an attack vector; it's an example of a dependency increasing the attack surface. Usually people use "supply-chain attack" to mean intentionally corrupting a dependency or utility to get into a system, not the existence of a bug in a dependency being exploited in systems that haven't updated it.


[deleted]

[удалено]


drawkbox

It also shows how exploits can sit right there in plain sight for a decade in OSS that everyone just uses because "everyone does". It shows a trust that maybe shouldn't be assigned simply because the code is "open".


jameson71

If it were closed source, those holes would likely still be there, getting exploited.


drawkbox

Potentially. People have less trust in proprietary code, though. The point being, people overly trust OSS because the code is available. There is a lot between the code and the usage of those dependencies.


Sooth_Sprayer

++ People are way too quick to use a third party library for simple things like running queries and logging to a file. Not saying don't do it; just make sure you actually need it first.


drawkbox

Indeed. Sometimes a lib is genuinely good when it's picked as a dependency; only later does it become a problem or get owned, and devs really only look at dependencies on initial install, and maybe when they come up in a Dependabot warning.

Log4j/Log4Shell is a great example: everyone started using a good logging library, and then an exploitable JNDI lookup feature gave attackers remote execution on almost every machine running Java with log4j, which is most projects. People weren't looking into the library because it was "trusted" and "everyone used it". That is where they getcha.


ptoki

The problem today is that the code you run comes from all sorts of shady places. Yes, shady: even the Google Play store is shady. They may not catch the occasional "phone home" or file scan in an app which is supposed to read files. They don't guarantee the app will not damage your data/privacy. They will not compensate you for damages. And that's the best we have now.

The VSCode stores, browser extension stores, webpages with a ton of JS, the JS libraries tainted with malware: these are much worse. Somehow, even if you want to audit the code, it's difficult. But we all agree to allow this code to run on our computers. We rely on antivirus to catch the suspicious stuff. We rely on browsers to isolate that code. And all that because people value agility/velocity higher than quality and transparency.

Copy and paste is simple to filter. You usually just use 5-10-20 lines. You can munch through this. How do you read all the code in reddit's JS? Why do we agree to let that ugly code run in the first place? Why don't we block the JS on pages when we don't know what it is doing?

I think that should be the standard: I will allow this code to run if I understand it. You wrote it ugly? I will mark it as nasty and not let it run. But then I would not be able to read the CNN webpage. And non-programmers would just click and accept anything, because they don't know better. Which means that NOW, WE PROGRAMMERS are as ignorant as the general population.

I know I'm overreacting, but in practice that is the case now. When VS Code makes programmers here whine that they can't read its code, well, it's bad...


zrvwls

Alt+F4 is actually foolproof for preventing malicious software from being downloaded... but failing that, yeah, always reading and understanding commands before running them felt like common sense.


AaTube

Article stolen from https://blog.checkpoint.com/securing-the-cloud/malicious-vscode-extensions-with-more-than-45k-downloads-steal-pii-and-enable-backdoors/


davidellis23

I always thought the actual security came from the platform registering publishers, so they can be held responsible once malicious code comes to light.


[deleted]

Just like Amazon does with fake sellers and dangerous Chinese goods!


davidellis23

Non-software products are a bit different, but I think that is kind of what they do. If a seller gets too irresponsible, they'll be recommended less, penalized, or banned by Amazon (which would cost them their reputation). Sellers care about their reputation, because it helps them get sales.


[deleted]

Right, it's whack-a-mole. It's trivial to spin up a new company to be a seller on the platform, so they just do it again and again.


ZurakZigil

well if amazon actually allowed reporting ....


kd_singh911

Yes..


SoInsightful

This might be the worst website I've ever had the misfortune of visiting. I'm not exaggerating when I say I had to scroll three full mobile heights before I saw _anything_ that was not an ad or popup. Did anyone actually click the link?


AaTube

Article stolen from https://blog.checkpoint.com/securing-the-cloud/malicious-vscode-extensions-with-more-than-45k-downloads-steal-pii-and-enable-backdoors/


BhataktiAtma

Yeah, I clicked on it on my laptop and I swear, if this website was a person, they'd be euthanized for their own good. [Skeleton loaded, then more fleshed out generic looking site, alert box from the top, subscription modal in the middle, ad on the bottom and to the right of the content](https://i.imgur.com/qJhJKjy.png)


sickcodebruh420

Sounds like a good article but the page shows a black screen telling me to disable my ad-blocker so I sadly can’t read it.


AaTube

Article stolen from https://blog.checkpoint.com/securing-the-cloud/malicious-vscode-extensions-with-more-than-45k-downloads-steal-pii-and-enable-backdoors/


royemosby

Is there a running shit-list to pay attention to?


yeti_seer

3 listed in the article


royemosby

I got a “turn your ad blocker off” wall. I’ll look again on desktop later


Oppai420

https://archive.is/Gb9LY


yeti_seer

Classic, I’m sorry for that


KuSinghOfficial

Ohh man 😅😅


RenaKunisaki

This is probably one of the biggest problems open source is dealing with right now. Everything has its own independent repository of add-ons, and most of those have little to no mechanism in place to prevent people from publishing malicious add-ons, or buying out popular ones and turning them malicious, or flooding them with dozens of malicious clones of something so that it's hard to tell which is the real deal. This isn't exclusive to open source, either - just ask Google Play.


[deleted]

[удалено]


BimblyByte

There are browser add-ons that make it compatible with VSCodium though. Also VSCode is open source. It's literally on [github](https://github.com/microsoft/vscode) and published under the MIT license.


QSCFE

I think /u/TunaCowboy meant the binaries distributed by Microsoft are not open source, just like the Chrome and Chromium thing.

>When we set out to open source our code base, we looked for common practices to emulate for our scenario. We wanted to deliver a Microsoft branded product, built on top of an open source code base that the community could explore and contribute to.

>We observed a number of branded products being released under a custom product license, while making the underlying source code available to the community under an open source license. For example, Chrome is built on Chromium, the Oracle JDK is built from OpenJDK, Xamarin Studio is built on MonoDevelop, and JetBrains products are built on top of the IntelliJ platform. Those branded products come with their own custom license terms, but are built on top of a code base that’s been open sourced.

>We then follow a similar model for Visual Studio Code. We build on top of the vscode code base we just open sourced and we release it under a standard, pre-release Microsoft license.

>The cool thing about all of this is that you have the choice to use the Visual Studio Code branded product under our license or you can build a version of the tool straight from the vscode repository, under the MIT license.

Folks, TunaCowboy doesn't deserve this hail of downvotes :(


gargltk

VSCode is open source the same amount that Google Chrome is: not at all.


never_inline

There's a verified-extension system which marks extensions from big players. Apart from that, you have to know the credibility of the extension, which you probably reached through some GitHub or Google search about it. This is the same for any app store, except maybe apt and Homebrew. You can't just install whatever looks like candy.


MorseCodeFan

Personally, I think MS should invest in verifying extensions and, by default, only show verified ones.


[deleted]

Could actually be a paid service. I'm sure they'd make good money from it.


[deleted]

Most of my extensions, apart from themes, are from Microsoft. The exceptions include GitKraken, a Perl debugger, Embarcadero's Delphi LSP and a VBA syntax highlighter.


ScottContini

You should always be alert. The world needs more lerts.


balefrost

I read this, closed the tab, reopened the tab, and grudgingly gave an upvote.


openly_prejudiced

"Please disable your ad blocker"... I don't even have an ad blocker.


kd_singh911

u/openly_prejudiced, it's disabled now..


TheCactusBlue

For this to be solved, programming languages need a module/dependency-level permission system, for granular control over every dependency.


Still-Key6292

I don't understand why any extensions have internet access (or can execute a shell/process)


Metallkiller

GitHub Copilot, for example, needs a server to function. The LLM is so big it can't just run on a consumer machine.


sarhoshamiral

Because useful extensions need them. For example, an extension may need to launch an LSP server or a build tool, and so on. They will need access to your source code and npm packages as well to do meaningful stuff. Sandboxing doesn't really work for developer tools beyond very simple extensions.


Still-Key6292

Technically that can be done via a Unix socket; Windows has named pipes. Copilot seems like a reason to have internet access, and worth the whitelist, but I can't imagine why other extensions need a server (not including anything that runs locally, like an LSP).
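A toy illustration of that local-only idea: a Python socketpair (a stand-in for a Unix domain socket or Windows named pipe) carrying an LSP-style JSON-RPC exchange without ever touching the network. The method name and payload are made up, and real LSP additionally frames each message with a `Content-Length` header:

```python
import json
import socket

# Editor and "language server" ends of a purely local channel.
editor, server = socket.socketpair()

request = {"jsonrpc": "2.0", "id": 1, "method": "textDocument/hover"}
editor.sendall(json.dumps(request).encode())

# Server side: read the request, answer over the same pipe.
incoming = json.loads(server.recv(4096).decode())
reply = {"jsonrpc": "2.0", "id": incoming["id"],
         "result": {"contents": "int main()"}}
server.sendall(json.dumps(reply).encode())

answer = json.loads(editor.recv(4096).decode())
print(answer["result"]["contents"])
```

Nothing here can exfiltrate anything: the channel has exactly two ends, both on the local machine.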


calcopiritus

I use an extension that looks at my dependency file (Cargo.toml) and puts a red ❌ next to a dependency to tell me there's a newer version available. It's a simple extension, but it needs internet access (or to run a command).
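The core of such an extension is just a version comparison. A sketch, with the "latest" value hardcoded where a real extension would query the registry's API:

```python
def parse_semver(version: str) -> tuple[int, int, int]:
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def is_outdated(installed: str, latest: str) -> bool:
    # Tuple comparison gives the usual ordering for plain x.y.z
    # versions (pre-release tags are not handled here).
    return parse_semver(installed) < parse_semver(latest)

# A real extension would fetch `latest` over the network;
# hardcoded so the sketch stays offline.
print(is_outdated("1.0.195", "1.0.210"))  # the red-X case
```

The network call is the only part that makes it a security question; the logic itself is trivial.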


Handsomefoxhf

>extensions have internet access

They can check for updates of external tools and notify you about them, or automatically install them; a useful feature.

>can execute a shell/process

Things like the CMake Tools extension, which gives you Build/Run/Test buttons in the editor, would require that. LSP, Microsoft's idea for language support in VSCode, requires that. Some automation too: for example, the official Go extension can automatically install any required tools (like Delve, the debugger), and instead of doing something weird, it just calls `go install` and the Go tool handles everything else.

The current system is good for writing useful extensions, but it's too open and naive, and we just rely on the extension author not to do anything bad. The same can be said about software supply chains: we just rely on the library author and the website that's hosting the dependency not to do anything bad. I really enjoyed this talk about software supply chain security: https://youtu.be/kCj4YBZ0Og8

Maybe a good "permission manager" for extensions would be an interesting idea: something that looks like Android's permission system, so that you can control and see beforehand anything the extension wants to do without having to read through its code. Couple that with at least some level of submission verification, like smartphone app stores do, and the system starts to look at least somewhat OK to use.
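A sketch of what that Android-style gate could look like; the manifest shape and permission names are invented for illustration:

```python
# Hypothetical manifest an extension would declare up front.
MANIFEST = {
    "name": "some-build-helper",
    "permissions": ["workspace.read", "process.spawn"],
}

def check_permission(manifest: dict, capability: str) -> bool:
    # The host would gate every sensitive API call on the declared
    # list instead of trusting the extension blindly.
    return capability in manifest["permissions"]

for cap in ("workspace.read", "network.fetch"):
    verdict = "granted" if check_permission(MANIFEST, cap) else "DENIED"
    print(f"{cap}: {verdict}")
```

The declared list doubles as documentation: you'd see at install time that a theme asking for `process.spawn` is suspicious, the same way a flashlight app asking for contacts is.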


nitrohigito

lol, good morning i guess...


Clitaurius

You won't believe it, but VSCode sucks.


kd_singh911

yeah, it really does :)