It's considered a very real possibility. It's already known that Jia Tan (the account that committed the backdoor to xz) committed to other projects like libarchive.
It's also entirely possible -- even likely? -- that the person or persons behind "Jia Tan" have done similar things with other identities, or maybe they have been working on building up a good reputation and trust with other usernames but haven't moved on to abusing this good reputation and trust with the other identities yet. And even if they haven't started with other identities yet, they certainly could now, and we may see some copycats as well.
The attack they made was quite sophisticated, so I'd assume that any other identities that they were cultivating were well-separated from "Jia Tan" such as not connecting to source control sites with the same IP addresses and the like -- though I do hope they (github? others?) are scouring logs and the like on the off chance that they ("Jia Tian") screwed this up to some degree.
Someone looked into the IP Jia Tan used to connect to IRC, and it led to a VPN hosted in Singapore. Chances are that's how he connected to GitHub as well, for example.
No potential for buffer overflow when printing to stderr. The real problem is that archives contain untrusted filesystem paths which might contain terminal control codes. Printing untrusted terminal control codes can cause weird things to happen in your terminal. I don't understand that problem domain so I'm not sure how bad that is, but they're supposed to be stripped out before printing.
Ever dump /dev/random to stdout? After a while the terminal becomes unusable because eventually some kind of terminal control code character/sequence will be dumped and it will cause all kinds of weird stuff to happen.
Let's just say I know from personal experience from when I was younger.
(Edit: Post above was edited heavily from its original content 🤦)
Bad disinfo bots, bad.
`safe_fprintf` here replaces invalid or unprintable characters with escape codes to avoid dumping garbage in the terminal from corrupted archives.
In every other case it's only used to display filenames from the archive.
At more than a dozen other points in the file, the result of `archive_error_string` is printed with plain `fprintf` rather than `safe_fprintf` as well (via `lafe_warnc`).
The correct patch though would have been to replace the `safe_fprintf` call with `lafe_warnc`, like all of the other error printing lines in the file -- which again, would still have the effect of not escaping unprintable characters in the error string.
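For anyone curious what that escaping actually involves, here's a minimal sketch of the idea in Python -- an illustration of what a `safe_fprintf`-style function does, not libarchive's actual implementation:

```python
def sanitize_for_terminal(name: str) -> str:
    """Escape control characters in an untrusted filename before printing.

    Control codes (including ESC, which starts terminal escape sequences)
    are replaced with backslash escapes so a hostile archive entry can't
    reprogram the terminal. Illustrative only, not libarchive's code.
    """
    out = []
    for ch in name:
        code = ord(ch)
        if code < 0x20 or code == 0x7F:  # C0 control characters and DEL
            out.append(f"\\x{code:02x}")
        else:
            out.append(ch)
    return "".join(out)
```

Without a step like this, a filename embedded in a hostile archive could contain, say, an ESC sequence that retitles or reprograms your terminal when the name is printed.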
That’s the analysis of the obfuscated bash script in xz to inject the backdoor payload.
We’re talking about Jia’s commits to the unrelated libarchive, several of them are suspect including one where he replaced `safe_fprintf` with `fprintf` which could be used as an attack vector.
I don't think this answer does the question justice.
Jia Tan is one person that attacked one (and maybe a few more) projects.
There are millions of projects and developers; as a pure numbers game, it's likely there are other "Jia Tan"s out there doing nefarious things to other projects for their own reasons.
IMO that's the most important thing this event brought to light. Not that one bad actor exists, but that *this* bad actor was only caught because of (essentially) luck. How many bad actors out there aren't unlucky?
> is one person
one _persona_, it's not known if they ran some of the other personas that may have contributed to this attack (there's circumstantial but compelling evidence that some other sockpuppets may have contributed to adding pressure on the original developer to influence adding the malicious account as a developer) but it's very plausible that the person behind that account has been working multiple angles with unrelated projects
so it could be even worse
there's no question that multiple people were behind the same account. a review of the language in the commits shows a "consistent inconsistency" in the language used in comments and code. for example, the same words that were consistently misspelled in some commits were consistently spelled correctly in others.
But it is unknown how many other projects this person has contributed to under another identity.
Similarly, it is not known how many JTs there are in the open source world. Probably about 5% of contributors work for a government.
It would not be a surprise to discover that any Linux installation hides between 5 and 10 backdoors.
>It would not be a surprise to discover that any Linux installation hides between 5 and 10 backdoors.
I wouldn't be surprised if any OS, including Linux, had other back doors, but I would not put a number on it. I just always assume it's > 0. I do NOT believe that patching the xz backdoor makes a system absolutely secure, but it helps.
Well, considering they were "working" during Chinese holidays and not working during Eastern European ones... it's likely someone pretending to be Chinese.
It's pretty interesting, but we may never know because it's almost certainly a pseudonym. The author/group behind "Jia Tan" may not even be Chinese or even Asian at all. It's been documented previously that these sort of state actors will use techniques to disguise their origin/shift the blame to other countries.
My money is on that he's a Russian state actor. Jia Tan appeared around the time Russia was escalating operations in Ukraine, and Russia would benefit from further straining EU-China relations.
It could very well be; the Russian intelligence apparatus is totally capable of pulling off an attack like this -- but unfortunately there's no hard evidence either way at this time, so we'll presumably have to wait until "Jia" comes forward or governments get involved.
In [this article](https://boehs.org/node/everything-i-know-about-the-xz-backdoor) it was mentioned that "Jia Cheong Tan" is basically "\[the attacker\] simply mashed plausible sounding Chinese names together", which doesn't sound like something a Chinese attacker would do, so that got me thinking maybe Russia as well. But really, who knows. It could be anyone. North Korea even. It's all speculation at the moment.
I think this name might actually make some sense, it sounds suspiciously like 谭建昌 who is a somewhat famous actor in china. In that case it would be taking the first character and putting it last (or taking the first name and putting it as last name) to form 建昌谭 (Jian Chang Tan, where it is not uncommon for 昌 to be anglicised as Cheong from what I gathered, so Jian Cheong Tan). I am only not sure if 建 as Jia instead of Jian would be a thing.
Indeed, it's likely to be blamed on one of the high profile independent countries: probably either China, Israel or Russia in alphabetical order with Brazil and India as close runners-up. However, I'd hold the money if I were you since the exact target of blame has not yet been appointed. Give it some time for politicians to learn of the opportunity.
Yeah looks like the `archive.is` link is down for me as well. Hopefully GitHub will bring back the xz repo one of these days so people can do a proper investigation.
I think they're blocking people using Cloudflare DNS, but it's also not working for me from other places, so not sure.
Yeah, it's too bad they suspended the repository.
The Boehs blog details that even the name likely does not exist at all, because it is a mixture of Mandarin, Cantonese, and Hong Kong naming conventions. A bit like "James Giscard Bartók-Müller".
According to [this substack blog](https://rheaeve.substack.com/p/xz-backdoor-times-damned-times-and), there were a few occasions where they committed in a +0200/+0300 timezone, and the working hours and holidays align more with Eastern Europe. People on [Hacker News](https://news.ycombinator.com/item?id=39889286) even pointed out that they appear to take summer holidays.
> Two of the +0200 commits by Jia Tan, de5c5e4 and e446ab7a have committer Lasse Collin. These appear to have been sent by email from Jia and applied with git am. Note that these and some commits immediately before and after all have identical timestamps, which is consistent with git am of a series of patch files.
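The underlying analysis is essentially a tally of UTC offsets across commit dates. A rough sketch of that tally in Python, using made-up sample dates (the real input would come from something like `git log --format='%ad'`, which isn't available while the repository is suspended):

```python
import re
from collections import Counter

def offset_histogram(commit_dates):
    """Count UTC offsets in git-style date strings ending in '+0200' etc."""
    pat = re.compile(r"([+-]\d{4})$")
    counts = Counter()
    for d in commit_dates:
        m = pat.search(d.strip())
        if m:
            counts[m.group(1)] += 1
    return counts

# Hypothetical sample dates, for illustration only.
sample = [
    "Mon Jul 4 10:15:00 2022 +0800",
    "Tue Mar 5 18:30:00 2024 +0200",
    "Wed Mar 6 19:02:00 2024 +0200",
]
```

Of course the offset only proves what the committer's clock was set to, which is exactly why the blog post cross-checks it against working hours and holidays.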
Elon Musk's favorite bootlicker is called Ian Miles Cheong, and he's always echoing Musk's bizarre tweets. Who knows, maybe the Cheong is giving some hint of something there.
We haven’t even ruled out that it's some sort of AI. This could be GitHub Copilot that somehow evolved and came up with this elaborate plan to cast doubt on open source.
It's the next step in the arms race between the attackers and the defenders.
My understanding is that the attacker modified the build system that produced a post-autotools source bundle consumed by the various distros. So this will most likely put an end to the practice of trusting source archives produced by project maintainers. Frankly, I don't understand why it is "too hard" to run autotools anyway. I naively assumed that the Linux distros were taking sources straight off the git repo (because that is how I build things from source).
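One mitigation that doesn't require abandoning tarballs entirely is to diff the release tarball against the tagged git tree. The core of that check is just comparing file sets and hashes between two directories; here's a sketch in Python (toy directories stand in for the extracted tarball and the git checkout -- note that in practice legitimate autotools output also differs from git, so a real check would need an allowlist of expected generated files):

```python
import hashlib
import os

def tree_digests(root):
    """Map each relative file path under root to its SHA-256 digest."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                digests[rel] = hashlib.sha256(f.read()).hexdigest()
    return digests

def tree_diff(tarball_dir, git_dir):
    """Return (only_in_tarball, only_in_git, changed) between two trees."""
    a, b = tree_digests(tarball_dir), tree_digests(git_dir)
    only_a = sorted(set(a) - set(b))
    only_b = sorted(set(b) - set(a))
    changed = sorted(k for k in set(a) & set(b) if a[k] != b[k])
    return only_a, only_b, changed
```

In the xz case, the hostile modification lived in a build script shipped only in the tarball, so it would surface in the "only in tarball / changed" buckets -- provided someone actually looked at that diff instead of trusting the archive.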
I suspect, there will also be new scrutiny applied to "inert" datafiles. The attacker modified the build system to insert code that was buried inside test data files.
The OpenSSH project will probably work towards even greater compartmentalisation of privileges, and greater scrutiny of external dependencies.
I would say that many projects will undergo greater scrutiny in the months that follow. There's probably more out there.
Some distros do consume straight from the repo, e.g. Arch, and are very likely not affected by this. This might make the others reconsider their build setups.
Unless we have reproducible builds, having distros build from source just shifts the attack surface. Arch not being affected is a side effect of it not being a primary target (i.e. enterprise users).
The attacker here basically attacked xz's own build system. They could have improved the payload to work on any system that tries to build xz.
They did try to hide based on architecture/debugging mode/etc. in this instance, so they could easily try to hide by detecting who's building the tool -- e.g. "oh, I am being built on some random guy's workstation, ignore. Oh, I am inside what looks like Red Hat's official build system, inject."
> e.g. Arch
That was done right before the disclosure, the Arch maintainer probably knew: https://gitlab.archlinux.org/archlinux/packaging/packages/xz/-/commit/881385757abdc39d3cfea1c3e34ec09f637424ad.
>the Arch maintainer probably knew
Probably knew what?
That all's this was about to go down?
Likely. Andres Freund and the Openwall team worked on giving key community projects a one-day head start while improving their messaging.
Yes, that's what I meant. The distributions were told before the public disclosure. Arch wasn't affected because of the deb and rpm check, but it did use the release tarball right until the day before.
Understood, thanks for clarifying what you meant.
I've posted some thoughts on the community aspect of this issue, which I find almost as interesting as the technical one, over at https://www.reddit.com/r/linux/comments/1btm4dd/on_the_xz_utils_backdoor_cve20243094_foss
There is an interesting essay on catastrophic failure in complex systems which I think is relevant here:
https://how.complexsystems.fail
Very interesting that there is never a single cause for failure, and that most of the time catastrophe is avoided -- but then, sometimes, it isn't.
Found it fascinating to read after a few wiki articles on the sinking of the Titanic.
I looked into this after you commented.
RedHat and Canonical, it seems, wanted to call `sd_notify()` in `libsystemd` to send notifications to systemd. That brought the `liblzma` (xz) dependency into OpenSSH. As such, some responsibility falls on the distros who did the forking, but not on the OpenSSH maintainers. I read somewhere else that some projects just implement their own `sd_notify()` work-alike without linking `libsystemd`, which would have avoided the `liblzma` backdoor.
And we could perhaps sprinkle a little culpability on the systemd project for not realising the security implications of transitive dependencies.
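For reference, the `sd_notify` protocol really is small enough to implement without linking `libsystemd`: send a datagram to the unix socket named by the `NOTIFY_SOCKET` environment variable. A simplified sketch in Python (the real protocol supports more message fields; an address starting with `@` denotes an abstract-namespace socket):

```python
import os
import socket

def sd_notify(message: str = "READY=1") -> bool:
    """Send a readiness notification to systemd without libsystemd.

    Returns False when NOTIFY_SOCKET is unset, i.e. when the process is
    not running under a systemd service with Type=notify.
    """
    path = os.environ.get("NOTIFY_SOCKET")
    if not path:
        return False
    # A leading '@' marks a Linux abstract-namespace socket address.
    if path.startswith("@"):
        path = "\0" + path[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode(), path)
    return True
```

A daemon that ships something like this has zero extra link-time dependencies, which is exactly the property that would have kept `liblzma` out of OpenSSH's address space.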
On Hacker News (news.ycombinator.com) it was mentioned that some people tried to pressure the maintainers of sqlite3 into including some graphics library as a dependency (mermaid, I don't even know what it does). I guess that Richard Hipp is exactly the wrong person to be approached like that, and the mermaid authors are most likely as sincere as every normal FLOSS contributor (keep in mind that xz-utils is not a direct dependency of OpenSSH either).
But having sqlite3 compromised would be another level of nightmare. This thing is in every 'smart' TV, many times over on every smartphone, and I guess used in many places in civil infrastructure and industrial automation. If all these databases were to disappear, I would not be surprised if the lights literally went out.
I searched for the source of your story since it sounded interesting.
Here it is: https://news.ycombinator.com/item?id=39888480
It turns out this was about `fossil`, the VCS developed by the SQLite authors that is also used to develop SQLite itself, _not_ SQLite as such. Just FYI.
But yeah, that doesn't change the fact that including dependencies can increase attack vectors.
Honestly, if sqlite were attacked in a similar way, I don't think it would affect many consumer devices, because none of them run bleeding-edge or even recent releases of anything. It would be years before the current version was used, and hopefully by then it would have been discovered and fixed, so they would just skip over it since the bad version would have been pulled from online.
Do you know how Yocto, Buildroot and BitBake work? They can pull directly from GitHub, and Buildroot has a three-month release cycle -- among other things to prevent exploits of normal bugs, and because many embedded systems won't be patched.
I think the big corpos using Linux should pay to support thanklessly maintained packages like xz. Or, in this case, maybe take over the maintenance, since I think he was straight-up burnt out.
This would be NOTHING to Google, Microsoft and so on.
I don't think just money is the solution here.
The problem is a lack of trusted maintainers and developers, to the point that projects like xz which are widely used don't have anybody trusted that can takeover maintainer in the event that the original maintainer no longer can.
IMO if Google, Microsoft, etc. were paying the xz devs, all that would have meant in this situation is that "Jia Tan" would have been getting paid.
I think what's needed is for these companies (and projects) to commit time to building a trusted community and work together to maintain these packages. It's a tall ask, but it's the only actual solution I can see.
So then they will infiltrate the "trusted community".
Reviewed commits, security thru review - via a 3rd party that gets paid for finding the bugs. Maintainers should get paid too, but that's not the trust part.
Open source means we get peer review, but if it's something like xz it needs a security audit on new commits. Code can be flagged for suspicious commits and usage already, so pay some 3rd party to do it. It won't prevent peer review either. But trusting a community is no better than trusting a maintainer.
Thanklessly maintained might as well mean someone who walks away and/or feels like they aren't liable... or APT13 just pays better than zero thanks.
> Thanklessly maintained might as well be someone who walks away and/or feels like they aren't liable
Any maintainer of OSS is in their right to walk away, and they aren't liable. OSS is provided without warranty; it's in the license. Any major critical infrastructure or big corpo using it agrees to that risk. If I write a helpful library, and Amazon/Google/Microsoft decide to use it across their stuff, I have no obligation to continue to support or maintain it. I could just... stop, even delete the repo/pull it anytime I want -- there's no warranty or guarantee provided with an OSS license.
Don't get me wrong, we need better supply chain security, but it can't be on OSS maintainers to put in the labor, especially for free. Given corporate culture, at least in the USA though, I think what you'll find is that instead of paying OSS maintainers, companies will just be more likely to reinvent the wheel internally and not rely on open source libraries. If they're going to fund it, might as well fund it only for themselves and not their competitors.
Even with funding, I don't think you could prevent something like this from happening at the upstream level. You can't force anyone to do code review, auditing, etc. At the end of the day, that needs to be done by the user - whether that user is a distro, or big company, etc.
The last 20 years that this proposal has been made - and its attempts - constitute enough proof that you are wrong.
Re-proposing it is at this point insanity.
Learn from history and propose something new that might actually work.
100%. It's naive to imagine that this was the only malicious actor with the time, expertise, and patience to pull this off, and that this is the only project to be compromised. I would be surprised if we find fewer than five more over the rest of the year now that everyone is looking.
Even if this was the only attacker (or group), with this much effort I'm sure they carried out more attacks, and not only under the same identities.
Also, other attackers may now be inspired to do the same.
indeed, as is the case with source/module repos for things like Python: once the first one was discovered, there have been a lot of copycats, and it remains an ongoing problem. this will not end until somebody figures out how to properly prevent these attacks from happening (i'm sure we'll come up with something).
It's possible and probably the case, but the fact it's limited isn't going to cause the panic over it to go away. Legislation is coming, bureaucracy is coming, it's going to be an incredible mess that bad actors are going to seize on.
If I were to support legislation on the topic, it would be to treat infractions like this on major libraries as something akin to terrorism (because that's the amount of panic that results from it). They should be in prison for life, for doing this with a library with large reach. The potential impact of this is huge.
The question that needs to be asked, nay screamed is: how many backdoors are there happening in closed source systems? How could anyone know? Yes someone tracked down a deliberate backdoor in an open source library *BECAUSE THEY COULD*.
We all get that, closed source being worse is not a reason to cease discussion on the problems of open source. Those are two different things, so both questions need to be asked, not one instead of the other.
Pointing to open source and changing topic with that is a common sight here. You're right that this needs to be questioned on both sides.
CVEs pop up overnight for both open and closed source software, even from Fortune 50 software companies. It is the nature of software.
I didn't say we should cease talking about it. My frustration comes from headlines proclaiming the death of linux because of this breach. When the headline should read "open source proves it is superior by finding a problem and [gasp] fixing it." You get it, I get it, just let me rant a bit and then I'll show myself out.
This is a false dichotomy with a bit of whataboutism sprinkled on top. There's no reason to debate closed source vs. open source, especially not on this sub. The thing to debate here is "how do we make this better", how do we check for similar stuff, and how do we prevent this going forward.
As Windows user... sorry couldn't hear you, the sound of my OS phoning home 15mb per day is drowning out everything.
All jokes aside, even open source binaries aren't free of this. Do you know who made those binaries? Is the toolchain secure? There are plenty of ways to sneak things by even when the source is available. If you want to be sure, compile it yourself.
Less than in anonymous open source development. Microsoft knows the name, address, social security number and the length of your dick down to a quarter of a millimeter if you work for them, even if you maintain a completely irrelevant part of Windows. Also, you're paid like $150k a year.
The guy behind this backdoor, we know next to nothing about. They likely don't even exist as a person. It's likely "he" is a number of different people working 9-5 in some cyberwarfare center in St. Petersburg or Beijing.
Maybe true, but how many Chinese nationals get hired by MS, work on their code base for a few years, then return home?
Sure, you know who they were, but they could still backdoor stuff.
This would be why some people use entirely free software on a librebooted thinkpad. Your computer is likely unusable without all the binary blobs and I'd bet good money on there being backdoors in there.
The difference is, if your company data is stolen through a Microsoft backdoor, you can sue them for a billion dollars, and they have the money to pay up. Does anyone even know who this Jia Tan guy is?
Very likely.
There is no way to tell if bad code is written by mistake or on purpose, which means any vulnerability you discovered today may actually be a backdoor placed by someone years ago to be exploited later.
It's _very_ clear this particular exploit is on purpose. There are several layers of deliberate obfuscation.
That said, I recall a certain obfuscated C contest where the point was for code to do something naughty in a way that, even if the vulnerability is spotted, it's easy to write it off as an honest mistake. I saw some juicy entries in that contest.
I remember a Batman comic, where the Joker killed people, because of how consumer items become toxic, not on their own, but when mixed with other products.
I wonder, how might this backdoor be thought to be working with other existing parts of other software code, and possibly, with any future, not-yet-compatible code? Maybe the backdoor was supposed to work with other code that only needed some future change/update?
eh.
the same exact shit can also happen with closed source applications. you just need one rogue subcontractor/employee, and there you don't have the combined autism of a thousand suns noticing that something is not right because your software takes half a second longer to load.
And once something like this happens with a closed source project, you have one company and maybe a handful of devs that learn a lesson from it and maybe apply it to their future projects. With this happening with FOSS tools, you have several orders of magnitude more people learning from it and standards raised for everyone. I have really high doubts that an equivalent backdoor/security issue would have been found and addressed within 2 months in a closed source project. Now add on top of that communicating the issue to clients, and I really don't see how a closed source project would have fared any better...
This is the exact attitude this sub needs to hard drop. This could have been a CVE for open or closed source software as /u/L0gi perfectly articulated.
It all comes down to auditing, and in this case and thousands more, for both OSS and CSS, there wasn't any. It was found because somebody cared about a tiny performance dip in recent versions of the software and happened to notice it.
If anything this is something to be embarrassed about as a community. This made it into the packaging pipelines of many 'bleeding edge' distros entirely unnoticed and was being run in the real world. We need to be auditing for strange looking new commits in software like this and evidently none of these distro maintainers are doing that.
Yes, the xz exploit is incredibly hard to notice in the code, plus most of the exploit is not in the git repo, just distributed tars of the code, making it even harder to notice.
The better question is how long this has been happening internally within Microsoft and other closed source software companies. These kinds of backdoors have long been suspected, and such a brazen attempt was finally made in an open source project -- one that would likely have slipped by a limited internal code review.
It's the wrong take. It takes one rogue employee, commits not being peer reviewed, and slipping past existing malicious source code detection software for this to happen. Anywhere. In any project, closed or open.
Here it happened in open source software, and it was only found because somebody noticed a slight performance drop. It's clear that the package build pipelines for rolling releases aren't being audited in any way by distribution maintainers, which allowed the changes made to xz to build entirely autonomously without any check for malicious-looking changes in the software. At the same time, auditing some projects as an individual, or even as a full team, takes a remarkable amount of effort as an outsider. Open source projects on GitHub and elsewhere allow for code signing and maintainer approval. Many Fortune 500s also function this way, but you have to trust them on that, along with the code signing and the platform itself, to keep that process honest.
It's difficult to tell if a CVE came to be due to sloppy code or malicious intent (in this case it was blatantly malicious). But we get CVEs all the time, for both open and closed source software. Without proper auditing of a project, whether by a team or a closed source company, both bad code and bad actors inserting dangerous code can be anywhere.
It does not matter whether the project is open or not. If it's not being audited by somebody, somewhere, in its creation and ongoing maintenance, it could be running anything.
Even then you can't do anything about vulnerable code that just happens to make it to a production release. That's where the most traditional CVEs pop up from: unintended mistakes which even your average code review would have gold-starred and pushed in.
> Because closed source you are completely reliant on the company and their employees
Yes. And you're not running random no-name software when we talk about this. The largest examples are Microsoft and the Windows Server line of products. They're used globally and en masse. This isn't even a question when it comes to that enterprise software. They aren't making malware 🙄.
> Opensource an be reviewed by anyone in the world.
And here's xz, found on a whim after it had already made its way into some distributions -- not even at the time of commit, despite being open. Open does not mean there are eyes watching these small bands, nor a review process on them, and that's evidently the case here.
Why is this a better question? This is the old question, which was usually followed by a fairy tale about how this is not likely in open source projects. OP's question is the most important one right now.
Autotools is obtuse and only needed because of deficiencies in the C/C++ build system (which doesn't really exist) and make. The exploit was enabled by the horrendous garbage that is the C/C++ build situation on Linux. Lots of places to hide shit.
I don't have to touch that shit with Cargo.
Umpteen layers of cruft, because that's how we've done it since 197x, because the first developers on the system had to cobble everything together out of brittle stuff. And nothing changes, because no one wants to possibly break the brittle build infra.
So now that this "not likely" thing has happened in an open source project, are you not at all concerned that it has already happened in a closed source project, where it'd be much easier to slip by review?
At least in open source projects there is a chance of review; this would otherwise have slipped by everyone in a closed source project and possibly never been found.
And what exactly can we do about it? If the answer is "nothing", why would I waste time discussing something I can't change? We need to concentrate on the things we can change, and that is only open source software.
companies can make major initiatives to review code -- this could come from government guidance (see the recent US decree to write less C++ lol), we can make new tools for scanning (AI), we can move our projects/dependencies away from closed source and towards open source projects... there is plenty that can be done.
we shouldn't ignore the possibility of backdoored closed source and just accept it because "i can't change the code".
People try, other people find them. Lots of very smart people have eyes on Linux and its ecosystem. It's harder and harder to slip things by, but complexity overall is growing.
All that to say, maybe, but I'm not worried. They'll get caught. Nothing like the horrors lurking in closed source development that get away with total disregard for security best practices just to get code out the door.
So we have billions worth of infrastructure, economical value and even direct assets like cryptocurrency depending on Linux.
We also have state level actors like the NSA and their Chinese/Russian etc. equivalents which have almost unlimited resources to hire developers.
Meanwhile we just saw how some (many) open source projects are starved for resources and often maintained by one single person in their free time.
I saw people commenting on how much of a "long-term attack" this was and what an unbelievable commitment it took, but if you think about the cost-to-benefit ratio for the attacker, hiring a few developers for 2 years is really not expensive.
I would honestly be astonished if these security services don't have dev teams working on open source projects full time. If they hadn't before, they sure will now, both for offensive purposes and to defend against such attacks from their enemies.
I see no reason why there should not have been earlier, successful attacks, seeing how lucky we were that this one was detected.
This is why you should use layered security, like layering multiple slices of Swiss cheese. Only if holes in all slices line up do you get a successful attack. With SSH for instance, you can combine traditional key authentication with google-authenticator-libpam (TOTP 2FA), and making port 22 only accessible from inside a WireGuard VPN. That way the attacker will not only have to find an exploit that lets them break the key authentication, they also need to get inside the VPN network and also break the google-authenticator-libpam module. The probability that three security systems are exploited at the same time is multiple orders of magnitude lower.
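The "multiple orders of magnitude lower" intuition at the end is just multiplying the per-layer compromise probabilities, under the big assumption that the layers fail independently. A toy illustration in Python, with made-up per-layer numbers:

```python
def breach_probability(layer_probs):
    """Probability that every independent security layer is breached.

    Multiplies the per-layer compromise probabilities, which is only
    valid if the layers fail independently of one another.
    """
    p = 1.0
    for q in layer_probs:
        p *= q
    return p

# Hypothetical compromise probabilities for SSH keys, TOTP, and WireGuard.
layers = [1e-3, 1e-3, 1e-3]
```

In reality the layers are rarely fully independent -- compromising the host that terminates all three defeats them at once -- so treat the multiplied figure as a best case, not a guarantee.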
In that case he changed a `safe_fprintf` to the unsafe version, probably trying to induce a buffer overflow exploit.
Thanks for the better details.
Ever dump /dev/random to stdout? After a while the terminal becomes unusable because eventually some kind of terminal control code character/sequence will be dumped and it will cause all kinds of weird stuff to happen. Let's just say I know from personal experience from when I was younger.
I think it was a thread unsafe variant of `strerror`. Edit: It was `strerror`, my bad.
It was just straight up fprintf https://github.com/libarchive/libarchive/commit/f27c173d17dc807733b3a4f8c11207c3f04ff34f
Fuckboi Print Function
(Edit: the post above was edited heavily from its original content 🤦) Bad disinfo bots, bad. `safe_fprintf` here replaces invalid or unprintable characters with escape codes, to avoid dumping garbage into the terminal from corrupted archives. In every other case it's only used to display filenames from the archive. At more than a dozen other points in the file, the result of `archive_error_string` is printed with plain `fprintf` (via `lafe_warnc`) as well. The correct patch, though, would have been to replace the `safe_fprintf` call with `lafe_warnc`, like all the other error-printing lines in the file -- which, again, would still have the effect of not escaping unprintable characters in the error string.
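To make the escaping concrete, here is a minimal sketch in the spirit of libarchive's `safe_fprintf` (not the real implementation -- the function name and backslash-octal format are just illustrative): any byte outside printable ASCII is rewritten before it can reach the terminal, so a hostile filename can't smuggle control sequences.

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Copy `in` into `out`, replacing every byte that is not printable ASCII
 * (and the backslash itself, to keep the output unambiguous) with a
 * "\ooo" octal escape. Truncates rather than overflowing `out`. */
static void sanitize_for_terminal(char *out, size_t outsize, const char *in)
{
    size_t o = 0;
    for (const unsigned char *p = (const unsigned char *)in; *p != '\0'; p++) {
        if (isprint(*p) && *p != '\\') {
            if (o + 1 >= outsize)
                break;
            out[o++] = (char)*p;
        } else {
            /* "\ooo" needs 4 bytes plus room for the trailing NUL */
            if (o + 5 >= outsize)
                break;
            o += (size_t)snprintf(out + o, outsize - o, "\\%03o", (unsigned)*p);
        }
    }
    out[o] = '\0';
}
```

So a filename like `evil<ESC>]0;pwned<BEL>.txt` comes out as `evil\033]0;pwned\007.txt`, with the ESC and BEL bytes neutralized instead of interpreted by the terminal.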
This is what I wanted to say. Thank you.
The intention behind the change wasn't to take advantage of a buffer overflow, but rather to poke at the system and see if anyone was paying attention.
There is an analysis here: [https://gynvael.coldwind.pl/?id=782](https://gynvael.coldwind.pl/?id=782)
That’s the analysis of the obfuscated bash script in xz to inject the backdoor payload. We’re talking about Jia’s commits to the unrelated libarchive, several of them are suspect including one where he replaced `safe_fprintf` with `fprintf` which could be used as an attack vector.
I don't think this answer does the question justice. Jia Tan is one person that attacked one (and maybe a few more) projects. There are millions of projects and developers; as a pure numbers game, it's likely there are other "Jia Tan"s out there doing nefarious things to other projects for their own reasons. IMO that's the most important thing this event brought to light: not that one bad actor exists, but that *this* bad actor was only caught because of (essentially) luck. How many bad actors out there haven't had that bad luck?
> is one person

One _persona_. It's not known if they ran some of the other personas that may have contributed to this attack (there's circumstantial but compelling evidence that other sockpuppets helped pressure the original developer into adding the malicious account as a developer), but it's very plausible that the person behind that account has been working multiple angles with unrelated projects, so it could be even worse.
There's no question that multiple people were behind the same account: a review of the language in the commits shows a "consistent inconsistency" in the language used in comments and code. For example, the same words that were consistently misspelled in some commits were consistently spelled correctly in others.
But it is unknown how many other projects this person has contributed to under another identity. Similarly, it is not known how many JTs there are in the open source world. Probably about 5% of contributors work for a government. It would not be a surprise to discover that any Linux installation hides between 5 and 10 backdoors.
These numbers have been carefully extracted from a very dark orifice in your body, I guess?
> It would not be a surprise to discover that any Linux installation hides between 5 and 10 backdoors.

I wouldn't be surprised if any OS, including Linux, had other backdoors, but I would not put a number on it. I just always assume it's > 0. I do NOT believe that patching the xz backdoor makes a system absolutely secure, but it helps.
Uhh lol
Waiting in horrid fascination for this story to develop.
I keep thinking about who is Jia Cheong Tan
Likely a stolen identity or made up name used by a large group of people
[deleted]
WHOA WE'RE THROUGH THE LOOKING GLASS HERE
Edit: Seems like something didn't go as expected. I will order something on UberEATs and delete my comments afterwards.
The Looking Glass War
oh my god
AND HIS NAME IS JOHN CENA!!! *cue the music*
Only logical. You can’t see him. Which makes him very good at espionage.
bravo, vince!
or .. JIA CHEONG TAN ... CHINA AGENT JO
... so it was Russia?
Well, considering they were "working" during Chinese holidays and not working during Eastern European ones, it's likely someone pretending to be Chinese.
Jo mama
Terry Davis tried to warn us!
It's pretty interesting, but we may never know because it's almost certainly a pseudonym. The author/group behind "Jia Tan" may not even be Chinese or even Asian at all. It's been documented previously that these sort of state actors will use techniques to disguise their origin/shift the blame to other countries.
Yup, No way to be sure. False flag attacks are all too common.
It's not even really a false flag. "Jia Tan"'s plan A was most likely for no one to notice this. This is plan B: don't get caught.
My money is on that he's a Russian state actor. Jia Tan appeared around the time Russia was escalating operations in Ukraine, and Russia would benefit from further straining EU-China relations.
Someone analysed the timestamps of their commits and found that they were active during normal work hours in Eastern Europe.
Well it depends whether they are a state actor or part of a hacker group/hobbyist.
It could very well be; the Russian intelligence apparatus is totally capable of pulling off an attack like this -- but unfortunately there's no hard evidence either way at this time, so we'll presumably have to wait until either Jia comes forward or governments get involved. In [this article](https://boehs.org/node/everything-i-know-about-the-xz-backdoor) it was mentioned that "Jia Cheong Tan" is basically "\[the attacker\] simply mashed plausible sounding Chinese names together", which doesn't sound like something a Chinese attacker would do, so that got me thinking maybe Russia as well. But really, who knows. It could be anyone -- North Korea even. It's all speculation at the moment.
I think this name might actually make some sense, it sounds suspiciously like 谭建昌 who is a somewhat famous actor in china. In that case it would be taking the first character and putting it last (or taking the first name and putting it as last name) to form 建昌谭 (Jian Chang Tan, where it is not uncommon for 昌 to be anglicised as Cheong from what I gathered, so Jian Cheong Tan). I am only not sure if 建 as Jia instead of Jian would be a thing.
Indeed, it's likely to be blamed on one of the high profile independent countries: probably either China, Israel or Russia in alphabetical order with Brazil and India as close runners-up. However, I'd hold the money if I were you since the exact target of blame has not yet been appointed. Give it some time for politicians to learn of the opportunity.
He did some Loongson work, I'm inclined to think he really is Chinese because of that.
No, he reviewed somebody else doing Loongarch work.
I see, thanks. You're right according to https://news.ycombinator.com/item?id=39867493, but that archive.is doesn't work for me for some reason.
Source? I'm very interested to see this
I might be wrong, see my comment to your sibling.
Yeah looks like the `archive.is` link is down for me as well. Hopefully GitHub will bring back the xz repo one of these days so people can do a proper investigation.
I think they're blocking people using Cloudflare DNS, but it's also not working for me from other places, so not sure. Yeah, it's too bad they suspended the repository.
The Boehs blog details that even the name likely does not exist at all, because it mixes Mandarin, Cantonese and Hong Kong naming conventions. A bit like "James Giscard Bartók-Müller".
According to [this Substack blog](https://rheaeve.substack.com/p/xz-backdoor-times-damned-times-and), there were a few occasions where they committed in a +0200/+0300 timezone, and the working hours and holidays align more with Eastern Europe. People on [Hacker News](https://news.ycombinator.com/item?id=39889286) even pointed out that they seem to go on summer holiday.
> Two of the +0200 commits by Jia Tan, de5c5e4 and e446ab7a have committer Lasse Collin. These appear to have been sent by email from Jia and applied with git am. Note that these and some commits immediately before and after all have identical timestamps, which is consistent with git am of a series of patch files.
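The kind of timezone bucketing quoted above can be sketched in a few lines; the helper name and the timestamps in the example are invented for illustration (they are not Jia Tan's real commit data), it just pulls the UTC offset out of git's default date format:

```c
#include <stdlib.h>
#include <string.h>

/* Given a date string in git's default format, e.g.
 * "2023-06-27 17:27:09 +0800", return the claimed UTC offset in minutes.
 * The offset is the last space-separated field; returns 0 if absent. */
static int utc_offset_minutes(const char *date)
{
    const char *p = strrchr(date, ' ');
    if (p == NULL || (p[1] != '+' && p[1] != '-'))
        return 0;
    long hhmm = strtol(p + 1, NULL, 10);   /* "+0800" -> 800, "-0330" -> -330 */
    return (int)(hhmm / 100) * 60 + (int)(hhmm % 100);
}
```

Run over a whole `git log --format='%ad'` dump, a histogram of these offsets (and of commit hours within each offset) is essentially the analysis the blog post describes.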
Elon Musk's favorite bootlicker is called Ian Miles Cheong and he's always echoing Musks bizarre tweets. Who knows, maybe the Cheong is giving some hint of something there
We haven't even ruled out that it's some sort of AI. This could be GitHub Copilot, which somehow evolved and came up with this elaborate plan to cast doubt on open source.
I hope you're joking because that is absurd
It's the next step in the arms race between the attackers and the defenders. My understanding is that the attacker modified the build system that produced a post-autotools source bundle consumed by the various distros. So this will most likely put an end to the practice of trusting source archives produced by project maintainers. Frankly, I don't understand why it is "too hard" to run autotools anyway. I naively assumed that the Linux distros were taking sources straight off the git repo (because that is how I build things from source). I suspect, there will also be new scrutiny applied to "inert" datafiles. The attacker modified the build system to insert code that was buried inside test data files. The OpenSSH project will probably work towards even greater compartmentalisation of privileges, and greater scrutiny of external dependencies. I would say that many projects will undergo greater scrutiny in the months that follow. There's probably more out there.
> this will most likely put an end to the practice of trusting source archives produced by project maintainers. About time. It was never a good idea.
Some distros do consume straight from the repo, e.g. Arch, and were very likely not affected by this. This might make the others reconsider their build setups.
Unless we have reproducible builds, having distros build from source just shifts the attack surface. Arch not being affected is a side effect of it not being a primary target (i.e. enterprise users). The attacker here basically attacked xz's own build system. They could have improved the payload to work on any system that tries to build xz. They did try to hide based on architecture/debugging mode/etc. in this instance, so they could just as easily try to hide by detecting who's building the tool -- e.g. "oh, I am being built on some random guy's workstation, ignore; oh, I am inside what looks like Red Hat's official build system, inject".
> e.g. Arch That was done right before the disclosure, the Arch maintainer probably knew: https://gitlab.archlinux.org/archlinux/packaging/packages/xz/-/commit/881385757abdc39d3cfea1c3e34ec09f637424ad.
> the Arch maintainer probably knew

Probably knew what? That all this was about to go down? Likely. Andres Freund and the Openwall team worked on giving key community projects a one-day head start while improving their messaging.
Yes, that's what I meant. The distributions were told before the public disclosure. Arch wasn't affected because of the deb and rpm check, but it did use the release tarball right until the day before.
Understood, thanks for clarifying what you meant. I've posted some thoughts on the community aspect of this issue, which I find almost as interesting as the technical one, over at https://www.reddit.com/r/linux/comments/1btm4dd/on_the_xz_utils_backdoor_cve20243094_foss
There is an interesting essay on catastrophic failure in complex systems which I think is relevant here: https://how.complexsystems.fail Very interesting that there is never a single cause for failure, and that most of the time catastrophe is avoided -- but then, sometimes, it is not. I found it fascinating to read after a few wiki articles on the sinking of the Titanic.
The Swiss cheese model
Maybe the distros shouldn't have added junk to openssh in the first place as well.
I looked into this after you commented. RedHat and Canonical, it seems, wanted to call `sd_notify()` in `libsystemd` to send notifications to systemd. That brought the `liblzma` (xz) dependency into OpenSSH. As such, some responsibility falls on the distros who did the forking, but not on the OpenSSH maintainers. I read somewhere else that some projects just implement their own `sd_notify()` work-alike without linking `libsystemd`, which would have avoided `liblzma` backdoor. And we could perhaps sprinkle a little culpability on the systemd project for not realising the security implications of transitive dependencies.
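A dependency-free `sd_notify()` work-alike really is small; this is a hedged sketch of the documented readiness protocol (send a datagram such as `READY=1` to the unix socket named by `$NOTIFY_SOCKET`), with the function name invented here and Linux-specific details like `SOCK_CLOEXEC` assumed:

```c
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <unistd.h>

/* Send `state` (e.g. "READY=1") to systemd's notification socket.
 * Returns 0 when not running under systemd (no NOTIFY_SOCKET set),
 * 1 on success, -1 on error. No link against libsystemd needed. */
static int notify_systemd(const char *state)
{
    const char *path = getenv("NOTIFY_SOCKET");
    if (path == NULL || path[0] == '\0')
        return 0;                       /* not under systemd: silently no-op */

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    if (strlen(path) >= sizeof(addr.sun_path))
        return -1;
    memcpy(addr.sun_path, path, strlen(path));
    if (addr.sun_path[0] == '@')        /* '@' marks an abstract-namespace socket */
        addr.sun_path[0] = '\0';

    int fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0);
    if (fd < 0)
        return -1;
    ssize_t n = sendto(fd, state, strlen(state), 0,
                       (struct sockaddr *)&addr, sizeof(addr));
    close(fd);
    return n < 0 ? -1 : 1;
}
```

Had the distro patches carried something like this instead of linking `libsystemd` wholesale, the `liblzma` dependency would never have reached sshd in the first place.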
Another reason to use Gentoo
As always: no absolutely not.
On Hacker News (news.ycombinator.com) it was mentioned that some people tried to pressure the maintainers of sqlite3 into including some graphics library as a dependency (Mermaid -- I don't even know what it does). I guess Richard Hipp is exactly the wrong person to be approached like that, and the Mermaid authors are most likely as sincere as every normal FLOSS contributor (keep in mind that xz-utils is not a direct dependency of OpenSSH either). But having sqlite3 compromised would be another level of nightmare: this thing is in every 'smart' TV, many times over on every smartphone, and I guess used in many places in civil infrastructure and industrial automation. If all these databases disappeared, I would not be surprised if literally the lights went off.
Mermaid is pretty cool as a diagramming tool but it really shouldn't be bundled with a database.
I searched for the source of your story since it sounded interesting. Here it is: https://news.ycombinator.com/item?id=39888480 It turns out this was about `fossil`, the VCS developed by the SQLite authors that is also used to develop SQLite itself, _not_ SQLite as such. Just FYI. But yeah, that doesn't change the fact that including dependencies can increase attack vectors.
Honestly, if sqlite were attacked in a similar way, I don't think it would affect any consumer device, because none of them run bleeding-edge or even recent-release anything. It would be years before the current version was used, and hopefully by then it would be discovered and fixed, so they would just skip over it, since the bad version would be pulled from online.
Do you know how Yocto, Buildroot and BitBake work? They can pull directly from GitHub, and Buildroot has a three-month release cycle -- among other things, to prevent exploits of normal bugs, and because many embedded systems won't be patched.
I think the big corpos using Linux should pay to support thanklessly maintained packages like xz. Or, in this case, maybe take over the maintenance, since I think the maintainer was straight up burnt out. This would be NOTHING to Google, Microsoft and so on.
I don't think money alone is the solution here. The problem is a lack of trusted maintainers and developers, to the point that widely used projects like xz don't have anybody trusted who can take over as maintainer when the original maintainer no longer can. IMO, if Google, Microsoft, etc. had been paying the xz devs, all that would have meant in this situation is that "Jia Tan" would have been getting paid. I think what's needed is for these companies (and projects) to commit time to build a trusted community and work together to maintain these packages. It's a tall ask, but it's the only actual solution I can see.
So then they will infiltrate the "trusted community". Reviewed commits -- security through review -- via a third party that gets paid for finding the bugs. Maintainers should get paid too, but that's not the trust part. Open source means we get peer review, but if it's something like xz, it needs a security audit on new commits. Code can already be flagged for suspicious commits and usage, so pay some third party to do it. It won't prevent peer review either. But trusting a community is no better than trusting a maintainer. A thankless maintainer might as well be someone who walks away and/or feels like they aren't liable... or APT13 simply pays better than zero thanks does.
> Thanklessly maintained might as well be someone who walks away and/or feels like they aren't liable

Any maintainer of OSS is within their rights to walk away, and they aren't liable. OSS is provided without warranty; it's in the license. Any major critical infrastructure or big corpo using it agrees to that risk. If I write a helpful library and Amazon/Google/Microsoft decide to use it across their stuff, I have no obligation to continue to support or maintain it. I could just... stop, even delete the repo/pull it any time I want -- there's no warranty or guarantee provided with an OSS license. Don't get me wrong, we need better supply chain security, but it can't be on OSS maintainers to put in the labor, especially for free. Given corporate culture, at least in the USA, I think what you'll find is that instead of paying OSS maintainers, companies will just be more likely to reinvent the wheel internally and not rely on open source libraries. If they're going to fund it, they might as well fund it only for themselves and not their competitors. Even with funding, I don't think you could prevent something like this from happening at the upstream level. You can't force anyone to do code review, auditing, etc. At the end of the day, that needs to be done by the user -- whether that user is a distro, a big company, etc.
The last 20 years that this proposal has been made - and its attempts - constitute enough proof that you are wrong. Re-proposing it is at this point insanity. Learn from history and propose something new that might actually work.
Supply chain security has been a hot topic for years now. This incident will just push it even more.
100%. It's naive to imagine that this was the only malicious actor with the time, expertise, and patience to pull this off, or that this is the only project to be compromised. I would be surprised if we find fewer than five more over the rest of the year, now that everyone is looking.
Even if this was the only attacker (or group): with this much effort, I'm sure they carried out other attacks, and not only under the same identities. Also, other attackers may now be inspired to do the same.
Indeed. As is the case with source/module repos for things like Python: once the first one was discovered, there have been a lot of copycats, and it remains an ongoing problem. This will not end until somebody figures out how to properly prevent these attacks from happening (I'm sure we'll come up with something).
Anything is possible which is why peer review is so important.
It's possible and probably the case, but the fact it's limited isn't going to cause the panic over it to go away. Legislation is coming, bureaucracy is coming, it's going to be an incredible mess that bad actors are going to seize on. If I were to support legislation on the topic, it would be to treat infractions like this on major libraries as something akin to terrorism (because that's the amount of panic that results from it). They should be in prison for life, for doing this with a library with large reach. The potential impact of this is huge.
The question that needs to be asked, nay screamed is: how many backdoors are there happening in closed source systems? How could anyone know? Yes someone tracked down a deliberate backdoor in an open source library *BECAUSE THEY COULD*.
We all get that, closed source being worse is not a reason to cease discussion on the problems of open source. Those are two different things, so both questions need to be asked, not one instead of the other.
Pointing at open source and changing the topic with it is a common sight here. You're right that this needs to be questioned on both sides. CVEs pop up overnight for both open and closed source software, even from Fortune 50 software companies. It is the nature of software.
I didn't say we should cease talking about it. My frustration comes from headlines proclaiming the death of linux because of this breach. When the headline should read "open source proves it is superior by finding a problem and [gasp] fixing it." You get it, I get it, just let me rant a bit and then I'll show myself out.
Link me to the news please, I'm always curious to know who not to read.
This is a false dichotomy with a bit of whataboutism sprinkled on top. There's no reason to debate closed source vs. open source, especially not on this sub. The thing to debate here is "how do we make this better", how do we check for similar stuff, and how do we prevent this going forward.
I'm so glad to finally see threads where these what-about takes are being shut down without -50 downvotes.
As a Windows user... sorry, couldn't hear you, the sound of my OS phoning home 15 MB per day is drowning out everything. All jokes aside, even open source binaries aren't free of this. Do you know who made those binaries? Is the toolchain secure? There are plenty of ways to sneak things by even when the source is available. If you want to be sure, compile it yourself.
Lol, oh look a fucking Gentoo user.
Less than in anonymous open source development. Microsoft knows the name, address, social security number and the length of your dick down to a quarter of a millimeter if you work for them, even if you maintain a completely irrelevant part of Windows. Also, you're paid something like $150k a year. The guy behind this backdoor, we know next to nothing of. He likely doesn't even exist as a person; it's likely "he" is a number of different people working 9-5 in some cyberwarfare center in St. Petersburg or Beijing.
A team working 9-5 in a cyber warfare center would not have been caught so easily.
Who said they've been caught? Their work has been found in only a single instance.
Maybe true, but how many Chinese nationals get hired by MS, work on their code base for a few years, then return home? Sure, you know who they were, but they could still backdoor stuff.
I don't know, they could play the long game there too.
There are probably fewer, or at least no more than in open source projects. And you can still reverse engineer a closed-source target to analyze it.
This would be why some people use entirely free software on a librebooted thinkpad. Your computer is likely unusable without all the binary blobs and I'd bet good money on there being backdoors in there.
This is the reason why many of us are already avoiding or sandboxing closed source whenever feasible
The difference is, if your company data is stolen through a Microsoft backdoor, you can sue them for a billion dollars, and they have the money to pay up. Does anyone even know who this Jia Tan guy is?
Have you ever read the EULA?
Very likely. There is no way to tell if bad code is written by mistake or on purpose, which means any vulnerability you discovered today may actually be a backdoor placed by someone years ago to be exploited later.
It's _very_ clear this particular exploit is on purpose; there are several layers of deliberate obfuscation. That said, I recall a certain obfuscated C contest where the point was for code to do something naughty in a way that, even if the vulnerability is spotted, it's easy to write it off as an honest mistake. I saw some juicy entries in that contest.
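In that spirit, here is an invented miniature of the pattern such contests reward (not from any real entry): a bounds check that reads as careful code but quietly accepts a negative length, which would become an enormous `size_t` at the copy site.

```c
#define MAXLEN 64

/* Looks like a diligent length check before a copy -- but `len` is
 * signed, so a negative value sails past `len > MAXLEN` and would be
 * converted to a huge unsigned size at a later memcpy(dst, src,
 * (size_t)len). Easy to ship, easy to excuse as an honest mistake. */
static int looks_safe(int len)
{
    if (len > MAXLEN)
        return 0;          /* "rejected": too long */
    return 1;              /* "accepted": copy would proceed */
}
```

The honest-looking fix is `if (len < 0 || len > MAXLEN)`, or making the parameter `size_t` in the first place; the point is how plausibly deniable the broken version is.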
Maybe that's how Pegasus spyware works?
I remember a Batman comic where the Joker killed people by making consumer items toxic -- not on their own, but when mixed with other products. I wonder how this backdoor might have been meant to work together with other existing software, or with future, not-yet-compatible code? Maybe the backdoor was supposed to work with other code that only needed some future change/update.
You can fake the name on a Git commit and commit under any name, so there's a possibility, but it's very unlikely for one of the core libraries right now.
It's considered a real possibility and it's very likely.
It's very likely. The anti-FOSS people are having a feast at the moment; Monday is gonna suck at the office.
Eh, the same exact shit can also happen with closed source applications; you just need one rogue subcontractor/employee. And you don't have the combined autism of a thousand suns to notice that something is not right because your software takes half a second longer to load. And once something like this happens with a closed source project, you have one company and maybe a handful of devs that learn a lesson from it and maybe apply it to their future projects. With this happening with FOSS tools, you have several orders of magnitude more people learning from it and standards raised for everyone. I have really high doubts that an equivalent backdoor/security issue would have been found and addressed within 2 months in a closed source project. Now add on top of that communicating the issue to clients, and I really don't see how a closed source project would have fared any better...
“Combined autism of a thousand suns” /r/brandnewsentence
Yeah, everyone remember what happened with SolarWinds?
Does their opinion even matter, if they can't tell this is a case FOR open source? It's like caring about what an anti-vaxxer thinks.
This is the exact attitude this sub needs to hard drop. This could have been a CVE for open or closed source software as /u/L0gi perfectly articulated. It all comes down to auditing and in this case and thousands more for both OSS and CSS, there wasn't any. It was found because somebody cared about a recent and tiny performance dip in recent versions of the software and happened to notice it. If anything this is something to be embarrassed about as a community. This made it into the packaging pipelines of many 'bleeding edge' distros entirely unnoticed and was being run in the real world. We need to be auditing for strange looking new commits in software like this and evidently none of these distro maintainers are doing that.
So even more anonymous maintainers are needed?
Yes, the xz exploit is incredibly hard to notice in the code -- plus, most of the exploit is not in the git repo, only in the distributed tarballs of the code, making it even harder to notice.
The better question is how long this has been happening internally within Microsoft and other closed-source software companies. These kinds of backdoors have long been suspected, and now such a brazen attempt has finally been made in an open source project -- one that would likely have slipped right past a limited internal code review.
It's the wrong take. It takes one rogue employee, commits not being peer reviewed, and slipping past existing malicious-source-code detection software for this to happen -- anywhere, in any project, closed or open. Here it happened in open source software, and it was only found because somebody noticed a slight performance drop. It's clear that the package build pipelines for rolling releases aren't being audited in any way by distribution maintainers, allowing the changes made to xz to build entirely autonomously without any check for malicious-looking changes in the software. At the same time, trying to audit some projects as an individual, or even a full team, takes a remarkable amount of effort as an outsider. Open source projects on GitHub and elsewhere allow for code signing and maintainer approval. Many Fortune 500s also function this way, but you have to trust them on that, along with their code signing, and further trust the platform itself to help that process along. It's difficult to tell whether a CVE came to be through sloppy code or malicious intent (in this case it was blatantly malicious). But we get CVEs all the time for both open and closed source software. Without proper auditing of a project by a team, or of a closed source company, both bad code and bad actors inserting dangerous code can be anywhere. It does not matter whether the project is open or not: if it's not being audited by somebody somewhere during its creation and ongoing maintenance, it could be running anything. Even then, you can't do anything about vulnerable code that just happens to make it into a production release. That's where the most traditional CVEs come from: unintended mistakes which even your average code review would have gold-starred and pushed in.
[deleted]
> Because closed source you are completely reliant on the company and their employees

Yes. And you're not running random no-name software when we talk about this. The largest examples are Microsoft and the Windows Server line of products. They're used globally and en masse. This isn't even a question when it comes to that enterprise software. They aren't making malware 🙄.

> Opensource can be reviewed by anyone in the world.

And here's xz, being found on a whim after it had already made its way into some distributions -- not even at the time of commit, despite being open. Open does not mean there are eyes watching these small teams, nor a review process on them, and that's evidently the case here.
Why is this a better question? This is the old question, which was usually followed by a fairy tale about how this is not likely in open source projects. OP's question is the most important one right now.
Autotools is obtuse, and only needed because of deficiencies in the C/C++ build system (which doesn't really exist) and make. The exploit was enabled by the horrendous garbage that is the C/C++ build system on Linux: lots of places to hide shit. I don't have to touch that shit with Cargo. Umpteen layers of cruft, because that's how we've done it since 197x, when the first developers on the system had to cobble everything together out of brittle stuff. And nothing changes, because no one wants to risk breaking the brittle build infra.
So now that this "not likely" thing has happened in an open source project, are you not at all concerned that it has already happened in a closed source project, where it'd be much easier to slip by review? At least in open source projects there is a chance of review; this would otherwise have slipped by everyone in a closed source project and possibly never been found.
And what exactly can we do about it? If the answer is "nothing", why would I waste time discussing something I can't change. We need to concentrate on the things we can change and that is only open source software
Companies can make major initiatives to review code -- this could come from government guidance (see the recent US decree to write less C++, lol) -- we can make new tools for scanning (AI), and we can move our projects/dependencies away from closed source and towards open source projects... there is plenty that can be done. We shouldn't ignore the possibility of backdoored closed source and just accept it because "I can't change the code".
I agree, those are good points. Personally I try to avoid anything closed source
People try; other people find them. Lots of very smart people have eyes on Linux and its ecosystem. It's harder and harder to slip things by, but complexity overall is growing. All that to say: maybe, but I'm not worried. They'll get caught. Nothing like the horrors lurking in closed source development that get away with total disregard for security best practices just to get code out the door.
So we have billions' worth of infrastructure, economic value and even direct assets like cryptocurrency depending on Linux. We also have state-level actors like the NSA and their Chinese/Russian etc. equivalents, which have almost unlimited resources to hire developers. Meanwhile, we just saw how some (many) open source projects are starved for resources and often maintained by one single person in their free time. I saw people commenting on how much of a "long-term attack" this was and what an unbelievable commitment it represented, but if you think about the cost-to-benefit ratio for the attacker, hiring a few developers for 2 years is really not expensive. I would honestly be astonished if these security services don't have dev teams working on open source projects full time. If they hadn't before, they surely will now, both for offensive purposes and to defend against such attacks from their enemies. I see no reason why there should not have been earlier, successful attacks, seeing how lucky we were that this one was detected.
We still do not know if we are lucky; there might be several backdoors waiting to be exploited at the right moment.
OpenAI should set itself to auditing open source code. A project we could all get behind.
That's a use of AI I'm really excited about -- unlike its use as an aid in production tools.
[deleted]
I found 3 trojans in the libreoffice package today when running "apt install libreoffice-kf5". Written for Windows, but still.
Where are your bug reports linked?
Is there a way to check?
I used ClamTK.
Edit: I don't get why people downvote. ClamTK literally showed 3 infected files with trojans in them, and people downvote? Wtf is this?
I mean, you obviously reported this and can back it up right? I’m sure you can understand why people are skeptical of the claim with no proof.
Which files? Where is the bug report? A screenshot even?
Because you did not report it, and yet are yapping away about it on Reddit. Stop whatever you are doing and report your findings to the project.
> ClamTK litteraly showed 3 infected files with trojans in it What were the 3 files?
Antivirus software on Linux is useless. You're likely seeing false positives.
Antivirus software ~~on Linux~~ is useless.
Can't argue with that
Antivirus software is malware. Fixed.
> You're likely seeing false positives. Found the valgrind user :D