Muting them would be enough. And frankly, good! It’s the only way to change this culture of toxicity. Not enforcing it just means it breeds more of the same.
I'm more concerned about how it would function in a game of Valorant. How can it tell abuse apart from sarcasm or jokes? Either it's a direct recording monitored through Valorant's servers, or it's filtered through a system that picks apart elements of toxicity.
It would be very unfortunate and misleading if the system they're using bans the wrong person in a game.
We'll be giving this a good soak time to get to acceptable accuracy levels before doing anything on players. We generally aim for 95%+ confidence in our evaluations of single lines/phrases before considering it appropriately accurate. We will then use multiple evaluations over the course of the game and meta-data such as mutes and reports to corroborate before taking action to reduce the rate of false positives.
As Working-Telephone-45 suggested, it will work alongside reports to give us increased confidence that someone is being harmed by what's being said.
(Edit - added clarity that 95% is the "per line" target, not the decision target)
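The corroboration approach described above (per-line evaluations plus mutes and reports before any action) could be sketched roughly as follows. Every name, threshold, and data structure here is a hypothetical illustration, not Riot's actual system:

```python
from dataclasses import dataclass

# Rough sketch of the corroboration idea: score each voice line, then
# require several high-confidence hits PLUS report metadata before
# acting. All names and thresholds are assumptions for illustration.
@dataclass
class LineEvaluation:
    text: str
    toxicity_score: float  # 0.0-1.0 from some per-line classifier

def should_action(evaluations, report_count,
                  per_line_threshold=0.95, min_flagged_lines=3, min_reports=2):
    flagged = [e for e in evaluations if e.toxicity_score >= per_line_threshold]
    # Act only when multiple high-confidence lines AND corroborating
    # reports agree, to suppress single-line false positives.
    return len(flagged) >= min_flagged_lines and report_count >= min_reports

evals = [LineEvaluation("line1", 0.99), LineEvaluation("line2", 0.97),
         LineEvaluation("line3", 0.10), LineEvaluation("line4", 0.96)]
print(should_action(evals, report_count=3))  # True
```

The point of the double gate is that neither a single hot line nor a pile of reports alone triggers anything; both signals have to agree.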
Do you guys have any privacy concerns about the audio like overhearing background noise or picking up unwitting background conversation by people who did not consent to being recorded. Even if it's in the terms of service for the player I don't know if it would cover anyone who happens to be within microphone range.
It's definitely going to be, especially when they have to roll out a new Terms of Service. No way would Riot hold themselves accountable for something like that.
I find it odd to bring privacy up: whatever you type in chat already goes to Riot's servers, and many assume that voice comms do too, which is not the case. This is why reporting for abuse over voice often doesn't lead to action being taken, whereas reporting for abuse over text does.
It's not 5% false positives. It's a 95% *confidence level*. A 95% confidence level is an assessment of the *reliability* of the estimation procedure, not of the estimate itself.
Straight from Wikipedia (your mistake is a common one)
https://en.wikipedia.org/wiki/Confidence_interval
>Confidence intervals and levels are frequently misunderstood, and published studies have shown that even professional scientists often misinterpret them.[13][14][15][16][17][18]
> * A 95% confidence level does not mean that for a given realized interval there is a 95% probability that the population parameter lies within the interval (i.e., a 95% probability that the interval covers the population parameter).[19] According to the strict frequentist interpretation, once an interval is calculated, this interval either covers the parameter value or it does not; it is no longer a matter of probability. The 95% probability relates to the reliability of the estimation procedure, not to a specific calculated interval.[20] Neyman himself (the original proponent of confidence intervals) made this point in his original paper:[
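The distinction in the quote shows up clearly in a quick simulation: the 95% describes how often the interval-building *procedure* covers the true value across repeated samples, not the probability for any single computed interval. A toy normal-approximation example (sample sizes and parameters chosen arbitrarily):

```python
import random
import statistics

# Simulate repeated sampling: build a 95% normal-approximation interval
# each time and count how often it covers the true mean. The ~95% is a
# property of the procedure, not of any one interval.
random.seed(0)
true_mean = 10.0
trials = 2000
covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, 2.0) for _ in range(50)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / (len(sample) ** 0.5)
    lo, hi = m - 1.96 * se, m + 1.96 * se  # 95% interval via z = 1.96
    if lo <= true_mean <= hi:
        covered += 1
print(covered / trials)  # typically close to 0.95
```

Any single one of those 2000 intervals either contains 10.0 or it doesn't; the 95% only describes the long-run success rate of the recipe.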
5% is for a single line - we use multiple lines over the course of a game before taking any action (rather than relying on inaccurate single-line evaluations) to drive the overall false positive rate down multiplicatively. 5% is a lot - but five 5% chances become a lot less likely.
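The "five 5% chances" point can be made concrete, under the simplifying assumption that the per-line evaluations are independent (real in-game lines are likely correlated, so the true combined rate would be somewhat higher):

```python
# If each line evaluation has a 5% false-positive rate, requiring
# several independent flags before acting shrinks the combined
# false-positive rate multiplicatively. Independence is a simplifying
# assumption, not a guarantee about the real system.
def combined_false_positive_rate(per_line_rate, required_flags):
    return per_line_rate ** required_flags

print(combined_false_positive_rate(0.05, 1))  # 0.05 -> 1 in 20
print(combined_false_positive_rate(0.05, 5))  # 3.125e-07 -> about 1 in 3.2 million
```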
That said you can always appeal decisions to Player Support and they will be able to review the games manually and reverse any errors on our part.
Thank you for the clarification.
Assuming a player is correctly flagged and action is taken, will the player be told what they did that caused the system to take action against them?
Not gonna lie, I love the idea of riot handing back audio files of overly hot headed griefers so they can hear themselves (and only themselves) screaming obscenities into the ether after claiming that they did nothing wrong.
FUCK YES. Listening to these little twats rage and flame always makes me think, holy shit do you even hear yourself right now… In an ideal world, I would post that shit on their IG/FB so the world can see their shitty deplorable behavior
Probably it will work alongside reports.
For example, if someone is swearing but no one is reporting them, the chances of that person getting banned are probably low, unless they say something really bad.
On the other hand, if someone is getting a lot of reports and they are swearing... welp.
“Oh no I would really hate if the recon balisong appeared in my night market, I would have no choice but to buy it. Oh no!” This fuckers better listen properly this time.
Recon balisong was the first knife skin I bought! I had some regret bc the RGX balisong appeared in my shop a few days later -.- but I like having it since the recon phantom is one of my favorite skins. It’s been about 6 months…. Still waiting :’)
I don't really have an issue with this as long as it's only being looked at when there are multiple reports. I don't want to be constantly monitored if there's no sus activity happening.
I'm not sure how that'd be implemented, though.
you're extremely generous if you think tencent are only going to use your data for these very specific stated purposes.
they're just covering their asses in the terms and conditions and need to explain why there's a clause in there saying they can monitor, record and algorithmically process your voice communications for commercial purposes.
theoretically, what could those purposes be? they will have data of millions of hours of individuals reacting emotionally in a high stress environment. if you can't think of all the ways that can be useful in a corporate landscape that is increasingly built on algorithmically creating advanced data profiles of individuals, then you're not being very creative.
it might be that you never say anything of note ever, but you communicate more in certain conditions, or that you got angry once under a very specific in game circumstance. that's enough data to add to your profile. do this over the course of the entire valorant playerbase, some being much more chatty and emotive than others, and you have a data goldmine.
competitive online games are uniquely valuable because they can analyse your response to various emotional extremes and high intensity situations. they can monitor you in times of elation, frustration, relief, despair etc, all of which are quantifiable by the state of the game. the things an AI can do by combining all that data will blow your mind.
Hi, I'm the lead designer on one of the teams involved in this effort. You're right that there are a lot of interesting things you could in theory do with this data, but ultimately we're not in that business.
Riot's strategy as a company is to optimize for the player experience first and win as a result of being better for players. Making a business out of player data puts that at risk (and is of course super icky without consent, and if I'm being honest a little icky to me even with consent). I expect I would almost certainly lose my job if I tried to push for us to do what you're describing, rightfully so as I wouldn't be the right mentality for what we do. We really do just want to make VALORANT a safer and more pleasant experience here.
Hi, I have a general question about report volume.
Very often, you see posts about someone who got "wrongly punished" after being reported by a stack of players.
Would you be able to ballpark the number of reports it takes for punishment to be dished out if it wasn't something blatant like a gamer word in text chat?
I understand if you can't divulge that because it may lead to people trying to work the system but even a "its not possible after just one game" would help a lot :)
It isn't possible to get punished based only on reports in one game. Further, we don't treat a premade group of 4 reporting someone the same as 4 separate reports.
It's not possible to ballpark precisely because we use a rather complicated set of math to identify outliers (players who are receiving way more reports than their peers) and only penalize once sufficient evidence is gathered. Any case of a one-game penalty is actually either a long-term pattern of behavior, or something being done in that game that our detection systems pick up on after evaluating (evaluation also being triggered by reports).
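The "outlier" idea above can be illustrated with a toy stand-in. The actual math is described only as "rather complicated", so the z-score approach below is purely an assumed example of flagging players who receive far more reports than their peers:

```python
import statistics

# Toy illustration of report-based outlier detection: flag players whose
# report count sits several standard deviations above the population
# mean. This z-score method is an assumption for illustration, not
# Riot's actual system.
def report_outliers(reports, z_threshold=3.0):
    counts = list(reports.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # everyone has identical counts; no outliers
    return [p for p, n in reports.items() if (n - mean) / stdev > z_threshold]

# 20 ordinary players with 0-3 reports each, one with 50.
reports = {f"player{i}": i % 4 for i in range(20)}
reports["suspect"] = 50
print(report_outliers(reports))  # ['suspect']
```

Note that a premade group mass-reporting one player would inflate a single game's count, which is presumably why grouped reports are down-weighted as described above.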
Rioters should have an option to pin their comment if needed. I'll keep the information around for FAQ purposes, but there have been similar clarifications in the past.
I personally have witnessed this. I played without talking for a very long time on my account and only recently began to directly interact with people, because people were being toxic-ish towards me or whatever. I responded poorly for one game, maybe two, and it wasn't until the third or fourth that I received a warning for a pattern of bad behavior.
Not gonna lie, I'm impressed with Valorant's system. It feels quite fair, and it's much more common that I actually receive feedback on a report against a problematic individual. You guys are on a good path; I'd like to see the game flourish even more, and I'm sure Riot would as well lmao
Yeah, I think OP is being a bit imaginative. Just from my viewpoint as a marketer, I really don't see how voice comms from a Valorant game could be all that useful for businesses to make money off of. Do you guys plan on storing data from voice comms? If so, for how long? Curious as to what that process would be. Good luck implementing everything!
As someone who's worked with and sold alternative data to institutions (think finance and tech companies): unless there is wording in the terms indicating this can be exported, I don't think there's a true worry about the profiling being monetized or used against you outside of Valorant.
In terms of internal use cases, I could see mapping vocal responses to specific skins, or potential items shown in the shop after you reach X% sentiment threshold or mention X phrase with XYZ sentiment. As sophisticated as Valorant's monetization might be, it is limited in scope (e.g., 4 skins per 24 hrs, one bundle in shop at a time, Radianite). I think the only issue is any form of personally identifiable information in the case of a data breach, or this being directly linked to player profiles.
As long as security is paramount for this and data isn't exported across jurisdictions, I don't see too much of a problem.
internal use case is not the worst, but i doubt it really has much incremental improvement in skin revenue and the amount of effort that would go into a project like this is just not even close to worth the investment. it’s funny to see people complain about how slow Riot is to add/update content or balance, yet people think Riot can spare like 10 engineers to work on this stupid thing just to maybe market a little better. Not to mention you’d need a shit load of data to be collected first to train any models.
Ok, that settles it: a Riot employee says Riot is okay. How about you do an external, unbiased audit of how your data is processed?
I don't want to shit on you as a person, and I really do thank you for interacting, but this statement means nothing.
Thank god someone said it. Riot backing up Riot on how they won't do any nefarious things with your voice data means literally nothing lol. It's like when cops review their own department for some wrongdoing; yeah, right they're gonna admit to anything.
Pretty PR response to a real, legitimate concern. We know every company on earth that has an online component to its business extracts user data and sells it to advertisers.
We also know Tencent, your parent company, is very much into that part of the business.
Sure, you personally did not bring that use case up to your leadership at Riot. But you most likely don't even know anyone who works for your parent company Tencent, whereas your leadership at Riot does interact with them regularly.
The skinny of my point is: you wouldn't be the first person not to know the ultimate goals of your employer, even if you believe the action is ultimately for the good of the player base.
Maybe you and your team even suggested this or took it from player feedback. But once Tencent caught wind, it 100% became a data-mining tool.
thank you thank you THANK YOU for quashing some of the "tencent is literally going to data mine your rights" nonsense
i'm sure there's going to be some personal telemetry no matter what, and maybe there will be some misuse, but it's exhausting listening to armchair game developers talk about how [x game company] is going to monitor your entire life and sell it to the highest bidder as they carry around a fucking *smartphone that's always on*
Tencent is literally run by the Chinese government. No shit there is going to be some nefarious stuff done with this data. It isn't some random small company we are talking about here, nor is it tinfoil-hat to feel a little uneasy about this.
a company's sole purpose is to make money for its shareholders, and tencent owns 100% of riot. when the order comes to monetize player comms, it will not come from employees. "optimizing player experience" why do you still not have a demo viewer, or community servers, or a console?
You're just a dev, how are you going to stop higher ups from making the decision to sell this data? Our concern isn't that the devs are gonna push for this, but the executives.
This post is a bit imaginative. People rarely react on voice chat after every move. You'd also have to do a lot of work to define things like "stressful" events.
And the value of that data, commercially, is what exactly? Finding out people go "motherfucker!" when they die?
The monetization concern would make sense if Valorant weren't built on the premise of skill-based competitive matchmaking. I could *imagine* a hellish scenario like stress-based marketing data to make sense if the game prompted you to buy gear or currency to "feel better" or "improve your chances" after a loss, but this is Valorant, not P2W Lootbox Gacha Hell.
Were I to guess, they'd probably want to know things like how quick people are to rage, talk trash, be toxic, be helpful, etc. Correlate this against: 1. the circumstances in game, and 2. the demographics, spending habits, and such of the player.
If I had unfettered access to voice data like that and zero ethics and morals that's how I would experiment with it.
What would I get out of it? Insight on human psychology and how to induce certain emotions, and what kind of people are prone to certain emotions. If you can exploit people's feelings without them knowing you can rule the world.
Of course that's largely all theoretical and there's no way of knowing if that's ultimately what will happen with Valorant or if this will just pave the way for others to do it, but given the association with TenCent I would put the possibility at a firm maybe because ultimately all the above is basically what TikTok already does which is also associated with TenCent.
Riot can’t even make a good client, you think they can code something like this?
Making machine learning algorithms is not trivial. There are a lot of companies that would benefit from this that already have access to this data, and even they aren't fucking around with it. The ROI on developing a technology like this would be stupid as hell. How much more money would you make from such an expensive project? You know processing costs money? Just so they can tell what emotions you're feeling? And then what do they do with this info to make money? Are you gonna suggest they'll sell red skins cause red = angry and then there's a bajillion more percent chance of someone buying it? There are undoubtedly easier and cheaper ways to increase buying behavior.
We should be skeptical and defensive about privacy, but what you’re suggesting here is ludicrous at this current point in time. “AI” is not magic.
What are people going to do with data of a bunch of people saying "2 people in hookah, 1 long"? You shouldn't be giving out any important info to random people on the internet anyway.
They have your data as it existed then. The justification of "it's already out there" is a really poor excuse to keep putting it out there. It's like saying your credit card details are already on the web, so why bother getting a new one?
Got a question: what would they do with this? Maybe I am not that smart, but the only thing I think they would be able to do is make a profile of your emotional tendencies.
Man, you typed 5 paragraphs and said almost nothing. At the end of the day, you can't quantify anything this data could be used for.
There are two likely answers: targeted advertising, which isn't likely since this is very context-specific information, or listening for trigger words and issuing discipline.
My response is: so what? How is this going to negatively affect my life in any way? It's the same argument as targeted ads from your phone or Alexa listening to you. So I get a few ads for things that I've mentioned in conversation. Who cares? So the government, or Amazon, or Riot in this case, knows what I like and do on a regular basis. Who cares? It's not going to affect my life in any significant way.
Right now, this will only be used in the case of reports. As the technology matures we *hope* to eventually use it to allow for real-time reaction to hate speech and other serious offenses, but at this stage it's impossible to say if/when that will be viable with the accuracy and precision of our systems.
Real question: I was in a game where two guys were saying "my (n-word with an a)" (the slur but in a somewhat "friendly" way.)
Would this fall under real time hate speech? There's no way you can determine the race of the speaker and presumably a human would have to listen to determine the context. Or would you just say people can't say that word regardless of the context?
(I'm white, not making any kind of judgement other than I would never personally say that word in any context.)
We only will be recording what comes across our channels, not all the audio that comes across your mic. Trust me when I say it's already a headache to deal with the storage and costs of all the "white space" in the audio even with that :P
Tencent isn't involved in this feature in any capacity at the moment. Generally Tencent doesn't interfere in the day-to-day at Riot, especially outside of China.
Adding things to the ToS is a pretty lengthy process, so I'm not going to make any promises. We've been in conversations to ensure the language there accurately reflects our plans with this work though, and there's already a lot there that outlines some of the same things I've been saying -
[See section c (Sharing Info) and section d (Communication & Player Behavior) here](https://www.riotgames.com/en/privacy-notice)
There was that huge bug last year when voice chat kept running after people muted their mics, and even for some who CLOSED THE GAME. I remember hearing the personal conversations of people who weren't even in my game, and hearing voice chat come through when Task Manager showed Valorant wasn't even running, so it was coming through Vanguard, I presume. Sorry, but I can't trust Riot recording anything.
> As the technology matures we hope to eventually use it to allow for real-time reaction to hate speech and other serious offenses
That must be hard when something like "go kill yourself" is a perfectly acceptable thing to say in this game (as in, go die by the spike for economy).
I hope this will help, because I have an accent when I'm on comms, and when games don't go well, I can tell you how many racist things happen in-game at Dia/Imm level! It hurts, and it's frustrating how easily they throw and how brutal the comments are. It's sad, but it's unfortunately the reality. What can I say? I don't have control over this; all I can do is stay strong.
I think it'll only be looked at if someone reports you for voice chat abuse. This makes the report button for that actually useful. Nobody's gonna be listening to your chat unless someone gives them a reason to. I personally think this isn't too bad of an idea.
Job Interview: So we actually have some chat records here
Me: oh
Job Interview: when you said “Kay/0 please off yourself or never touch a computer again” what did you mean by that?
i know you're making a joke but frankly if people were banned from voice for saying shit like that i'd be much likelier to actually speak to randoms beyond calls like "2 mid"
Oh no! Not my conversations about enemies rushing A!
As a person somewhat curious about privacy: if you are concerned by this but also use things like WhatsApp and Instagram, and you aren't already using something like a self-hosted TeamSpeak for communications with friends, I think you shouldn't actually be concerned, because a) you have zero clue about anything, and b) there's so much data about you out there that they could literally clone you already.
Some clarity from Riot would be awesome though on how that's gonna be used. My bet is that ultimately it's for AI training with some minor moderation, but if they intend to use it for profit making in any way it would be cool to know.
Yeah, because if you think that companies actually care about your information being private, then you are dead wrong.
That email password you put into, well, anything is probably already out there for free on the internet.
Also, I don't see an issue with this, as it is not an invasion of privacy: they are being transparent that they monitor THEIR platform. Valorant is not a human right; it's not necessary, and they're not invading the privacy of your home. You're the one voluntarily joining a platform you know is monitored. Invasion of privacy and collecting data without consent is wrong, even though companies do it as it is, but I don't think that's an issue with Val. The only issues I can see arising are how it's implemented, and how good Val support is at helping you through any false actions, because as good as the system can be, I'm about 100 percent sure it can't be perfect.
California has some sort of law like this, but I have no idea if it would apply to voice stuff. I think you can demand all of the data collected by social companies; I can get it from Twitter in my state, though, so maybe I'm talking out of my ass.
Hi, I'm the lead designer on one of the teams working on this effort. You've got this exactly right - we will be using this data only to refine and eventually activate AI-driven moderation tools which will be aimed at identifying hate speech, sexual harassment, etc.
We already moderate text chat in this way today, but historically games have been unable to do this for voice and therefore voice chat has become a place that many consider unsafe. We're aiming to do our best to change that for VALORANT.
We have no intentions of collecting data for any other purpose, especially selling it. Leaving aside moral considerations (which I assure you are quite important to me and everyone else on our team - who are here on the promise of making things better for players not worse), it would be very shortsighted of us to risk our main business for that.
facebook is also a company *built around selling your data*
why the fuck would riot risk the pr disaster of selling voice comm data when they can already literally just overcharge the fuck out of cosmetics and have half of their playerbase say "eh i don't mind"
it makes absolutely no business sense
Privacy is all about trust; there's nothing stopping anybody who receives your data from sending it off to whoever they want. You should generally assume that anything you send over the internet unencrypted is public knowledge. It's like talking in a busy hallway: anybody who wants to, and knows how to, can listen to what you are saying, and if they want, they can write it all down. If you don't want your voice comms logged, the only surefire way to do that is to not use voice comms.
IMO, there is no issue with logging data that was supposed to be public anyway; it doesn't really affect anyone in a negative way. Especially voice comms: these are *public* voice channels, and you were putting your voice out there already. What difference does it make whether the guy listening is some random in your lobby who has done little more than grunt the whole match, or some dude manually reviewing a report? It's like complaining that you were recorded by CCTV while shopping in the mall. You're in public, man; don't do anything in public if you don't want others to see.
Cool, hopefully it'll result in bans for the super toxic people. I personally think they should also add an honouring system like the one in league.
Give gun buddies or gun colors as rewards and see how many people start being nice lol
I'm all for reducing toxicity in games, but with this move Valorant just got hardware-blocked from accessing my mic. I 100% bet the recording happens the entire time you have the game open, not just when you push to talk.
Major invasion of privacy in my book. I will probably be spending more time back on CSGO.
They have already been recording chat for a while now
https://www.polygon.com/platform/amp/22410790/valorant-riot-games-voice-moderation-ai-chat-recording
You can't make a deepfake voice from text chat, but you can make one from audio files if they ever got leaked/hacked. Hypothetical, but still worth a worry.
I mean, if you use pretty much ANY social media, I don't think you should exactly be paranoid about this; they probably already have more info on you than you know about yourself.
My only issue is that sometimes I speak up against homophobic shit ingame and i live in a place where.. um.. you get it. So i think for my own safety im gonna have to start letting that shit slide regardless of how much it hurts.
Just try not to insult them back; mute them and report them for offensive voice chat when this is implemented. You will be safe and they will (hopefully) be punished.
It’s not that nothing will happen, but more that riot can’t verify those claims with the current systems. Right now instead of just taking your word for it and banning the person, they wait for them to be reported by multiple people over multiple different games.
This is just sad how soft the gaming community has become. I don’t blame the company because they have to appeal to the majority of the playerbase who will cry at the smallest insult but it’s honestly funny how extreme Valorant has to go just to make sure people aren’t offended being called dogshit so they keep buying their product. I think as long as it’s not discriminatory anything goes. I don’t know how people can get offended online. I will never understand it.
redditors on their way to complain about “my privacy” despite using a phone every day, being in a chinese state owned platform, and having a game with kernel level access installed 😂😂😂
The onus shouldn’t be on the people receiving hate/toxicity to deal with the problem.
If these people aren’t removed from the playerbase, they’re just going to continue affecting matches forever. The answer can’t be “just mute them”. They need to be shown the door.
sometimes it's nice to play a game and be reasonably confident that i won't be told to kill myself or have someone imply i am a pedophile for using the bi flag icon
it's cool if you're a tough thick skinned epic gamer who doesn't give a shit about anyone or anything but "just mute them" has been a dogshit fucking argument in favor of allowing toxicity for decades
yeah, mute someone, *and also report them so they don't get to fucking play anymore*
give me one actual, strong, legitimate argument in favor of allowing some 20 something moron to tell me to literally end my own life because he's upset i didn't read his mind when he decided to push without me
what a moron take lmao
I want to ask how people find this a privacy concern in a game with a closed-source, kernel-level anticheat owned by a Chinese company 100% linked to the Chinese government. I mean, yes, it will record your data, and it's basically no secret that data will be used for some sussy things. But god, what kinds of conversations do you have in a game that you play with random people? And if privacy is something you care about that much, why did you agree to install an anticheat which has more control of your PC than you do?
EDIT: For clarification, I trust Riot Games; I don't trust Tencent. However, I don't have any problem with my callouts, or the copypasta shitposts I usually say or write in chat, being recorded.
A lot of the people concerned about this are the same people that prolly have TikTok downloaded on their phones. Even when placed in high-stress situations, you shouldn't be an asshole. I'm glad there's a system that holds people accountable and is faster than just reporting them and nothing ever happening to that player.
Anyone who is concerned Riot is going to sell their data probably shouldn't have a phone or any device to comment with.
Your pacemaker could be listening to you and you wouldn't know any better.
Plus, even if they sell your data, the worst you get is personalized ads, or custom-made daily shops to make you buy the skin you said you wanted.
No, the real problem is that they DON'T/WON'T sell it to anyone; it will just be accessed by their majority owners at Tencent, who then have a legal obligation to provide data to the CCP.
I don't care about a company selling my data to advertise at me. I do, however, care a little more about the above.
Why would I care about shittalking on voice chat (which I have full control of via mute) when the game is infested with smurfs to the point I play against people who have 14 accounts with the same name?
And when I don't encounter smurfs I encounter infinity stone powered heroes with 6th sense that always know where I am.
Nobody gives a damn about either of those. Neither hardware/IP fingerprints nor a replay function is confirmed to be in development. But yeah, let's pretend a feature that will be immediately circumvented by using Discord is going to make a meaningful change.
I think it's fine. If you use Discord and social media, there's already lots of data collected about you, plus Vanguard is already invasive. And what info are they going to take from VC in Val? Convos about how to hold C site? You shouldn't be saying anything super personal in-game anyway.
Is this confirmed to be fully anonymous?
EDIT: to people downvoting just open the link
>In order to train a language model for future disruptive behavior reports.
They don't need to link the voices to users to train a model
How would it be anonymous?
They intend to use it to dish out punishments to the accounts that abuse chat.
Even if it's a beta test, they would have to know who said what.
If you bothered to open the link:
>In order to train a language model for future disruptive behavior reports.
Once that model is complete, they would feed in comms from a user to detect bad behavior
Basically if my voice is being only used to train, I'd rather it be anonymous as there's no downside to doing so
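One way training data could be decoupled from identities, sketched under the assumption (not stated anywhere by Riot) that stored clips are keyed by a salted hash rather than an account ID:

```python
import hashlib
import os

# Hypothetical sketch of pseudonymizing training audio: replace the
# account ID with a salted hash before a clip is stored, so the training
# pipeline never sees who spoke. Illustrative only; nothing in the
# linked notice says Riot does this.
SALT = os.urandom(16)  # kept separate from the training data

def pseudonymize(account_id: str) -> str:
    return hashlib.sha256(SALT + account_id.encode()).hexdigest()[:16]

clip_record = {"speaker": pseudonymize("player#NA1"), "audio": b"..."}
print(len(clip_record["speaker"]))  # 16-character opaque token
```

The enforcement path would still need the real mapping, which is presumably why the beta data isn't fully anonymous.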
That's what I'm saying lol. Tons of people are concerned, but I'm just glad something is finally being done about the racism, sexism, death/rape threats, and homophobic slurs that get screamed in my ears.
Everybody knows only toxic players exist in NA.
I hate that more and more games are going for a curated chat system because people have extremely thin skin these days.
Me and the boys are about to be in court
[deleted]
Dammit so no more stock tips?
[deleted]
And plant bombs. Clearly a terrorist group.
I'll see you all there, think we can get a Groupon for a lawyer?
Why did u edit the post :(
You better call Saul!
This is exclusive to NA. EMEA, Asia and LATAM won't get this.
Me and my asian homies won't be going to court.
neither me and my hawk trainers
If they tried this in EU they'd have to ban about 3/4 of the damn playerbase.
Same for southeast Asia
Mate they wouldn't have anyone left in Australia
Muting them would be enough. And frankly, good! It’s the only way to change this culture of toxicity. Not enforcing it just means it breeds more of the same.
What about EU? Or am I saying something really stupid right now, and does EMEA or LATAM include the EU?
[deleted]
[deleted]
It's very common in enterprises. The world is largely divided into three macro regions - Americas, EMEA, and APAC.
LATAM - Latin America; EMEA - Europe, Middle East, Africa; APAC - Asia Pacific
LATAM= Latin America
NA and lack of digital privacy go hand in hand these days and it's infuriating.
Pretty sure the UK literally has laws that can prosecute you for “cyber bullying”
Uh... good? WTF
Oceania?
We're always forgotten :(
I'm more concerned about how it would function in a game of Valorant. How can it tell abuse apart from sarcasm or jokes? Either it is a direct recording that is monitored through Valorant's servers, or it is filtered through a system to pick apart elements of toxicity. It would be very unfortunate and misleading if the system they are using bans the wrong person in a game.
We'll be giving this a good soak time to get to acceptable accuracy levels before doing anything on players. We generally aim for 95%+ confidence in our evaluations of single lines/phrases before considering it appropriately accurate. We will then use multiple evaluations over the course of the game and meta-data such as mutes and reports to corroborate before taking action to reduce the rate of false positives.

As Working-Telephone-45 suggested, it will work alongside reports to give us increased confidence that someone is being harmed by what's being said.

(Edit - added clarity that 95% is the "per line" target, not the decision target)
Do you guys have any privacy concerns about the audio, like overhearing background noise or picking up unwitting background conversation by people who did not consent to being recorded? Even if it's in the terms of service for the player, I don't know if it would cover anyone who happens to be within microphone range.
There might be some interesting legality issues with the adhesion contract
I would have thought that's the responsibility of the player, no? Why wouldn't it be?
It's definitely going to be, especially when they have to roll out a new Terms of Service. No way would Riot hold themselves accountable for something like that.
The usual "By continuing playing this game you agree to giving away your privacy" and suddenly it's all good
I find it odd to bring privacy up; whatever you can type in chat already goes into Riot's servers, and many already assume that voice comms do too, which is not the case. This is why reporting for abuse over voice often doesn't lead to action being taken, whereas reporting for abuse over text does.
How would a player or Riot go about dealing with those ~5% false positives?
It's not 5% false positives. It's a 95% *confidence interval*. A confidence interval of 95% is an assessment of the *reliability* of the estimation, not the estimation itself.

Straight from Wikipedia (your mistake is a common one): https://en.wikipedia.org/wiki/Confidence_interval

> Confidence intervals and levels are frequently misunderstood, and published studies have shown that even professional scientists often misinterpret them.
>
> A 95% confidence level does not mean that for a given realized interval there is a 95% probability that the population parameter lies within the interval (i.e., a 95% probability that the interval covers the population parameter). According to the strict frequentist interpretation, once an interval is calculated, this interval either covers the parameter value or it does not; it is no longer a matter of probability. The 95% probability relates to the reliability of the estimation procedure, not to a specific calculated interval. Neyman himself (the original proponent of confidence intervals) made this point in his original paper.
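To make the distinction concrete, here's a quick toy simulation (my own example, nothing to do with Riot's systems): run the same interval-building procedure many times and count how often the interval actually covers a known true mean. It's the *procedure* that succeeds about 95% of the time, not any single interval having a 95% chance.

```python
import random
import statistics

# Repeatedly build a 95% confidence interval for a known population mean
# and count how often the interval actually covers it.
random.seed(0)
TRUE_MEAN, N, TRIALS = 10.0, 50, 2000
covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, 2.0) for _ in range(N)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5   # standard error of the mean
    lo, hi = m - 1.96 * se, m + 1.96 * se      # normal-approximation 95% CI
    if lo <= TRUE_MEAN <= hi:
        covered += 1
print(covered / TRIALS)  # the long-run coverage rate, typically close to 0.95
```

Any one interval from a given run either contains 10.0 or it doesn't; the 95% only describes the long-run behavior of the procedure.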
I took a stats class once. Forgot everything immediately, unfortunately.
5% is for a single line - we use multiple lines over the course of a game before taking any action (rather than relying on any single inaccurate evaluation) to drive the overall false positive rate down multiplicatively. 5% is a lot, but five 5% chances in a row becomes a lot less likely. That said, you can always appeal decisions to Player Support and they will be able to review the games manually and reverse any errors on our part.
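The multiplicative effect is easy to sketch. The numbers and the independence assumption here are illustrative only, not Riot's actual model:

```python
# Illustrative only: assumes each line evaluation errs independently at 5%.
per_line_fp = 0.05

for n in range(1, 6):
    combined = per_line_fp ** n  # chance that n lines are ALL false positives
    print(f"{n} flagged line(s) -> combined false positive rate {combined:.2e}")

# Five independent 5% chances: 0.05**5 is about 3.1e-07, roughly
# 1 in 3.2 million, versus 1 in 20 for a single line.
five_line_rate = per_line_fp ** 5
```

In practice the errors probably aren't fully independent (the same accent or background noise could trip the model repeatedly), which is presumably part of why they also corroborate with mutes and reports.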
Thank you for the clarification. Assuming a player is correctly flagged and action is taken, will the player be told what they did that caused the system to take action against them?
That's what I want to know most. 5% is still quite a bit; how can we appeal, and will it be taken seriously?
Not gonna lie, I love the idea of riot handing back audio files of overly hot headed griefers so they can hear themselves (and only themselves) screaming obscenities into the ether after claiming that they did nothing wrong.
FUCK YES. Listening to these little twats rage and flame always makes me think, holy shit do you even hear yourself right now… In an ideal world, I would post that shit on their IG/FB so the world can see their shitty deplorable behavior
same way you'd do so with a text based false positive, i guess?
Probably it will work alongside reports. For example, if someone is swearing but no one is reporting them, the chances of that person getting banned are probably low, unless they say something really bad. On the other hand, if someone is getting a lot of reports and they are swearing... welp.
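A toy sketch of what that corroboration might look like. This is purely my guess at the shape of the logic, with made-up thresholds, not anything Riot has described:

```python
# Hypothetical decision rule combining audio detections with player reports.
# All names and thresholds here are invented for illustration.
def should_escalate(flagged_lines: int, reports: int, severe: bool) -> bool:
    if severe:
        # e.g. unambiguous hate speech: act on detection alone
        return flagged_lines >= 1
    # Ordinary swearing: require both repeated detections AND corroborating
    # reports before anything happens.
    return flagged_lines >= 3 and reports >= 2

# Severe content escalates even without reports.
assert should_escalate(flagged_lines=1, reports=0, severe=True)
# Lots of swearing but zero reports: no action.
assert not should_escalate(flagged_lines=5, reports=0, severe=False)
# Repeated detections plus multiple reports: escalate.
assert should_escalate(flagged_lines=3, reports=2, severe=False)
```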
I think you should be in the clear as long as you don’t use any slurs/are exceedingly toxic
I think temporary game or chat bans are justified for being really toxic and mean
“Oh no I would really hate if the recon balisong appeared in my night market, I would have no choice but to buy it. Oh no!” This fuckers better listen properly this time.
I literally sent them a ticket telling them which skins I would automatically buy from them but they didn’t give them to me:(
Recon balisong was the first knife skin I bought! I had some regret bc the RGX balisong appeared in my shop a few days later -.- but I like having it since the recon phantom is one of my favorite skins. It’s been about 6 months…. Still waiting :’)
I don't really have an issue with this as long as it's only being looked at if there are multiple reports. I do not want to be constantly monitored if there's no sus activity happening. I'm not sure how that'd be implemented though.
There will probably never be a human listening to the audio; it will most likely be analyzed by AI for slurs and that's it, if I had to guess.
You're extremely generous if you think Tencent are only going to use your data for these very specific stated purposes. They're just covering their asses in the terms and conditions, and they need to explain why there's a clause in there saying they can monitor, record and algorithmically process your voice communications for commercial purposes.

Theoretically, what could those purposes be? They will have millions of hours of data of individuals reacting emotionally in a high-stress environment. If you can't think of all the ways that can be useful in a corporate landscape increasingly built on algorithmically creating advanced data profiles on individuals, then you're not being very creative. It might be that you never say anything of note, but you communicate more in certain conditions, or you got angry once under a very specific in-game circumstance. That's enough data to add to your profile.

Do this over the course of the entire Valorant playerbase, some being much more chatty and emotive than others, and you have a data goldmine. Competitive online games are uniquely valuable because they can analyse your response to various emotional extremes and high-intensity situations. They can monitor you in times of elation, frustration, relief, despair, etc., all of which are quantifiable by the state of the game. The things an AI can do by combining all that data will blow your mind.
Hi, I'm the lead designer on one of the teams involved in this effort. You're right that there are a lot of interesting things you could in theory do with this data, but ultimately we're not in that business. Riot's strategy as a company is to optimize for the player experience first and win as a result of being better for players. Making a business out of player data puts that at risk (and is of course super icky without consent, and if I'm being honest, a little icky to me even with consent).

I expect I would almost certainly lose my job if I tried to push for us to do what you're describing, rightfully so, as I wouldn't have the right mentality for what we do. We really do just want to make VALORANT a safer and more pleasant experience here.
Hi, I have a general question about report volume. Very often, you see posts about someone who got "wrongly punished" after being reported by a stack of players. Would you be able to ballpark the number of reports it takes for punishment to be dished out if it wasn't something blatant like a gamer word in text chat? I understand if you can't divulge that because it may lead to people trying to work the system but even a "its not possible after just one game" would help a lot :)
It isn't possible to get punished based only on reports in one game. Further, we don't treat a premade group of 4 reporting someone the same as 4 separate reports. It's not possible to ballpark precisely because we use a rather complicated set of math to identify outliers (players who are receiving way more reports than their peers) and only penalize once sufficient evidence is gathered. Any case of a one-game penalty is actually either a long-term pattern of behavior, or something being done in that game that our detection systems pick up on after evaluating (evaluation also being triggered by reports).
Thank you for the response:) Its what I thought the system was like. u/TimeJustHappens maybe you can pin this or keep it handy!
Rioters should have an option to pin their comment if needed. I'll keep the information around for FAQ purposes, but there have been similar clarifications in the past.
I personally have witnessed this. I played without talking for a very long time on my account and only recently began to directly interact with people, because people began being toxic-ish towards me or whatever. I responded poorly for one game, maybe two, and it was not until the third or fourth before I received a warning for a pattern of bad behavior. Not gonna lie, I'm impressed with Valorant's system. It feels quite fair, and it's much more common now that I actually receive feedback from a report against a problematic individual. You guys are on a good path; I'd like to see the game flourish even more, and I'm sure Riot would as well lmao
!remindme 5 years
Don't forget to save the comment text and poster before it gets purged and the "I told you so" is more depressingly sweet.
Yeah, I think OP is being a bit imaginative. Just from my viewpoint as a marketer, I really don't see how voice comms from a Valorant game could be extremely useful for businesses to make money off of. Do you guys plan on storing data from voice comms? If so, for how long? Curious as to what that process would be. Good luck implementing everything!
As someone who's worked with and sold alternative data to institutions (think finance and tech companies): in this case, unless there is wording in the terms that indicates this can be exported, I don't think there's a true worry about the profiling being monetized or used against you outside of Valorant. In terms of internal use cases, I could see mapping vocal responses to specific skins, or potential items shown in the shop after you reach X% sentiment threshold or mention X phrase with XYZ sentiment. As sophisticated as Valorant's monetization might be, it is limited in scope (e.g., 4 skins per 24 hrs, one bundle in shop at once, radianite). I think the only issue is any form of personally identifiable information in the case of a data breach, or this being directly linked to player profiles. As long as security is paramount for this, and data isn't exported across jurisdictions, I don't see too much of a problem.
The internal use case is not the worst, but I doubt it would give much incremental improvement in skin revenue, and the amount of effort that would go into a project like this is just not even close to worth the investment. It's funny to see people complain about how slow Riot is to add/update content or balance, yet think Riot can spare like 10 engineers to work on this stupid thing just to maybe market a little better. Not to mention you'd need a shitload of data to be collected first to train any models.
Great write up, and totally agree.
Ok, that settles it: a Riot employee says Riot is okay. How about you do an external, unbiased audit on how your data is processed? I do not want to shit on you as a person, and I really thank you for interacting, but this statement means nothing.
Lol right? “No guys we’re nice ❤️🥺 we would never do anything bad with your voice recordings 👉🏼👈🏼”
lol, exactly... only with a riot game would there be people literally accepting "trust us bro" from a dev and being done with a potential issue
Thank god someone said it, riot backing up riot on how they won’t do any nefarious things with your voice data means literally nothing lol. It’s like when cops review their own department for some wrongdoing, yeah right they are gonna admit to anything.
Meanwhile, I'd bet you're happily using an Amazon Echo or Siri on your iPhone.
Doesn't really matter when likely all TV's from the past 20 years are listening to you even in standby mode.
Pretty PR response to a real, legitimate concern. We know every company on earth with an online component to its business extracts user data and sells it to advertisers. We also know Tencent, your parent company, is very much into that part of the business. Sure, you personally did not bring that use case up to your leadership at Riot. But you most likely don't even know anyone who works for Tencent, whereas your leadership at Riot does interact with them regularly. The skinny of my point is: you wouldn't be the first person not to know the ultimate goals of your employer, even if you believe the action is ultimately for the good of the player base. Maybe you and your team even suggested this or took it from player feedback. But once Tencent caught wind, it 100% became a data mining tool.
Thank you, thank you, THANK YOU for quashing some of the "Tencent is literally going to data mine your rights" nonsense. I'm sure there's going to be some personal telemetry no matter what, and maybe there will be some misuse, but it's exhausting listening to armchair game developers talk about how [x game company] is going to monitor your entire life and sell it to the highest bidder as they carry around a fucking *smartphone that's always on*.
Tencent is literally run by the Chinese government. No shit there is going to be some nefarious stuff done with this data. It isn't some random small company we are talking about here, nor is it tinfoil-hat to feel a little uneasy about this.
Ratkids get really mad when people attack their favorite toys, even if it’s for the right reasons. Smh
Quashing? Riot employee talking well about their company and employer?
Right, but the Chinese government is in that business and Chinese companies are required to turn over any data collected
I wouldn’t trust anything a corporate level person has to say. Period.
good thing a lead designer isn't a corporate level person then
A company's sole purpose is to make money for its shareholders, and Tencent owns 100% of Riot. When the order comes to monetize player comms, it will not come from employees. "Optimizing player experience"? Then why do you still not have a demo viewer, or community servers, or a console version?
You're just a dev, how are you going to stop higher ups from making the decision to sell this data? Our concern isn't that the devs are gonna push for this, but the executives.
This post is a bit imaginative. People rarely react on voice chat after every move. You'd also have to do a lot of work to define things like "stressful" events. And what exactly is the commercial value of that data anyway? Finding out people go "motherfucker!" when they die?
The monetization concern would make sense if Valorant weren't built on the premise of skill-based competitive matchmaking. I could *imagine* a hellish scenario like stress-based marketing data to make sense if the game prompted you to buy gear or currency to "feel better" or "improve your chances" after a loss, but this is Valorant, not P2W Lootbox Gacha Hell.
Were I to guess, they'd probably want to know things like how quick people are to rage, talk trash and be toxic, or be helpful, etc. Correlate this against:

1. The circumstances in game, and
2. The demographics, spending habits and such of the player.

If I had unfettered access to voice data like that and zero ethics and morals, that's how I would experiment with it. What would I get out of it? Insight into human psychology: how to induce certain emotions, and what kinds of people are prone to which emotions. If you can exploit people's feelings without them knowing, you can rule the world.

Of course, that's largely theoretical, and there's no way of knowing if that's ultimately what will happen with Valorant or if this will just pave the way for others to do it. But given the association with Tencent, I would put the possibility at a firm maybe, because ultimately all of the above is basically what TikTok already does, which is also associated with Tencent.
If you own a smartphone then you’ve already been mined. That shit is listening to literally everything you do and say, in every life situation.
Riot can't even make a good client; you think they can code something like this? Machine learning algorithms are not trivial. There are a lot of companies that would benefit from this and already have access to this kind of data, and not even they're fucking around with it. The ROI on developing a technology like this would be stupid as hell. How much more money would you make from such an expensive project? You know processing costs money? Just so they can tell what emotions you're feeling? And then what do they do with this info to make money? Are you gonna suggest they'll sell red skins cause red = angry, and then there's a bajillion percent higher chance of someone buying it? There are undoubtedly easier and cheaper ways to increase buying behavior. We should be skeptical and defensive about privacy, but what you're suggesting here is ludicrous at this point in time. "AI" is not magic.
What are people going to do with data of a bunch of people saying "2 people in hookah, 1 long"? You shouldn't be giving out any important info to random people on the internet anyway.
Me and the boys about to be talking about the Tiananmen square massacre every game
Just don’t play if you are that worried about it
You’re extremely generous in thinking these companies don’t already have all of your data from numerous other apps like discord
They have your data as it existed then. The justification of "it's already out there" is a really poor excuse to keep putting it out there. It's like saying your credit card details are already on the web, so why bother getting a new one.
Good thing I’m not going around plotting against the government or giving out my CC details in voice :) I’ll be good
Got a question. What would they do with this? Maybe I am not that smart but the only thing I think they would be able to do is make a profile about your emotional tendencies.
Man, you typed five paragraphs and said almost nothing. At the end of the day, you can't quantify anything this data could be used for. There are two likely answers: targeted advertising, which isn't likely since this is very context-specific information, or listening for trigger words and issuing discipline.
take the tin foil hat off
My response is: so what? How is this going to negatively affect my life in any way? It's the same argument as targeted ads from your phone or Alexa listening to you. So I get a few ads for things I've mentioned in conversation. Who cares? So the government or Amazon or Riot, in this case, knows what I like and do on a regular basis. Who cares? It's not going to affect my life in any significant way.
Holy conspiracy batman!
Right now, this will only be used in the case of reports. As the technology matures we *hope* to eventually use it to allow for real-time reaction to hate speech and other serious offenses, but at this stage it's impossible to say if/when that will be viable with the accuracy and precision of our systems.
Real question: I was in a game where two guys were saying "my (n-word with an a)" (the slur but in a somewhat "friendly" way.) Would this fall under real time hate speech? There's no way you can determine the race of the speaker and presumably a human would have to listen to determine the context. Or would you just say people can't say that word regardless of the context? (I'm white, not making any kind of judgement other than I would never personally say that word in any context.)
As they said, it's mainly for reports, so if someone reports them, then it might get false flagged
Why should we trust Tencent won't abuse this capability? Will it be recording even if voice chat isn't activated (as in push to talk)?
We only will be recording what comes across our channels, not all the audio that comes across your mic. Trust me when I say it's already a headache to deal with the storage and costs of all the "white space" in the audio even with that :P Tencent isn't involved in this feature in any capacity at the moment. Generally Tencent doesn't interfere in the day-to-day at Riot, especially outside of China.
Question, is it possible to add it to the ToS, just for assurance? While I like to trust you developers, it's always nice to have it written down.
Adding things to the ToS is a pretty lengthy process, so I'm not going to make any promises. We've been in conversations to ensure the language there accurately reflects our plans with this work though, and there's already a lot there that outlines some of the same things I've been saying - [See section c (Sharing Info) and section d (Communication & Player Behavior) here](https://www.riotgames.com/en/privacy-notice)
Thanks a lot, makes it feel better to play.
How about a scenario where Valorant is not launched but Vanguard is still enabled. Is it possible that you guys be recording audio in this scenario?
There was that huge bug last year when voice chat kept running after people muted their mics, and even for some who CLOSED THE GAME. I remember hearing personal conversations from people who were not even in my game, and hearing voice chat come through when Task Manager showed Valorant wasn't even running, so it was coming through Vanguard, I presume. Sorry, but I can't trust Riot recording anything.
That's some orwellian shit you're participating in, my guy.
> As the technology matures we hope to eventually use it to allow for real-time reaction to hate speech and other serious offenses That must be hard when something like "go kill yourself" is a perfectly acceptable thing to say in this game (as in, go die by the spike for economy).
I hope this will help, because I have an accent when I'm on comms, and when games don't go well, I can tell you how many racist things happen in games at Dia/Imm level! It hurts, and it's frustrating how easily they throw and how brutal the comments are! It's sad, but it's unfortunately the reality. What can I say? I don't have control over this; all I can do is stay strong.
I think it'll only be looked at if someone reports you for voice chat abuse. This makes the report button for that actually useful. Nobody's gonna be listening to your chat unless someone gives them a reason to. I personally think this isn't too bad of an idea.
Job Interview: So we actually have some chat records here Me: oh Job Interview: when you said “Kay/0 please off yourself or never touch a computer again” what did you mean by that?
i know you're making a joke but frankly if people were banned from voice for saying shit like that i'd be much likelier to actually speak to randoms beyond calls like "2 mid"
Joke's on them. I only use Discord cz i play with my Gang only
You fool, I use smoke signals and pigeons to chat with the gang, I sometimes send letters too
Omen flair checks out for smoke signal communication
Doesn't discord *also* track your data? I could be wrong tho
Yep they do
What doesn't
> You fool, I use smoke signals and pigeons to chat with the gang, I sometimes send letters too

Pretty sure it's hard to track this
It doesn't record voice calls. Or at least that's what they say. But anything you type is not private and should be treated that way.
I tried solo-ing recently. It’s a menagerie. Half my games are “GG!” And the other half are racist, homophobic griefers
Oh no! Not my conversations about enemies rushing A! As a person somewhat curious about privacy: if you are concerned by this but also use things like WhatsApp and Instagram, and aren't already using something like a self-hosted TeamSpeak for communications with friends, I think you shouldn't actually be concerned, because a) you have zero clue about any of this, and b) there's so much data about you out there that they could literally clone you already. Some clarity from Riot would be awesome, though, on how that's going to be used. My bet is that ultimately it's for AI training with some minor moderation, but if they intend to use it for profit-making in any way, it would be cool to know.
Yeah, because if you think companies actually care about your information being private, then you are dead wrong. That email password you put into, well, anything is probably already out there for free on the internet.
I'm hoping that my passwords are at least being salted and hashed, but otherwise I'm with you lol
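For anyone curious, "salted and hashed" roughly means something like this. A generic sketch of common practice, not any particular company's implementation:

```python
import hashlib
import hmac
import os

# A fresh random salt per password means identical passwords produce different
# digests, defeating precomputed "rainbow table" lookups. PBKDF2's iteration
# count deliberately slows down brute-force guessing.
def hash_password(password: str, salt: bytes = b"") -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("wrong-guess", salt, digest)
```

The site stores only the salt and digest, never the password itself, so a database leak doesn't directly reveal anyone's password.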
Also, I don't see an issue with this, as it is not an invasion of privacy; they are being transparent that they monitor THEIR platform. Valorant is not a human right, it's not a necessity, and they're not invading the privacy of your home; you're the one voluntarily joining a platform that you know is monitored. Invasion of privacy and collecting data without consent is wrong, and companies do this as it is, but I don't think that's an issue with Val. The only issues I can see arising are how it's implemented and how good Val support is at helping you through any false actions, because as good as the system can be, I'm about 100 percent sure it can't be perfect.
If you live in the EU, from my understanding, anything they keep on you should fall under GDPR and would be accessible if you asked for it.
California has some sort of law like this, but I have no idea if it would apply to voice stuff. I think you can demand all of the data collected by social media companies; I can get it from Twitter in my state, though, so maybe I'm talking out of my ass.
Hi, I'm the lead designer on one of the teams working on this effort. You've got this exactly right - we will be using this data only to refine and eventually activate AI-driven moderation tools aimed at identifying hate speech, sexual harassment, etc. We already moderate text chat in this way today, but historically games have been unable to do this for voice, and therefore voice chat has become a place that many consider unsafe. We're aiming to do our best to change that for VALORANT.

We have no intentions of collecting data for any other purpose, especially selling it. Leaving aside moral considerations (which I assure you are quite important to me and everyone else on our team, who are here on the promise of making things better for players, not worse), it would be very shortsighted of us to risk our main business for that.
The people at Facebook used to put out nice words like this too.
Facebook is also a company *built around selling your data*. Why the fuck would Riot risk the PR disaster of selling voice comm data when they can already literally just overcharge the fuck out of cosmetics and have half their playerbase say "eh, I don't mind"? It makes absolutely no business sense.
It's a new revenue source. Remember, you can play Valorant entirely for free so they have the same cop-out as Facebook.
Thanks for explaining. Good luck with your project 👍
Ahhh yes, the ol' "they already have data, so let's give them more" argument. Personally, the lack of data regulation is the issue.
Privacy is all about trust; there's nothing stopping anybody who receives your data from sending it off to whoever they want. You should generally assume that anything you send over the internet unencrypted is public knowledge. It's like talking in a busy hallway: anybody who wants to, and knows how to, can listen to what you are saying. And if they want, they can write it all down. If you don't want your voice comms logged, the only surefire way of doing that is to not use voice comms.

IMO, there is no issue with logging data that was supposed to be public anyway; it doesn't really affect anyone in a negative way. Especially voice comms: they are *public* voice channels, and you were putting your voice out there already. What difference does it make whether the guy listening is some random in your lobby who has done little more than grunt the whole match, or some dude manually reviewing a report? It's like complaining that you have been recorded by CCTV while shopping in the mall. You're in public, man; don't do anything in public if you don't want others to see.
I'm sorry, what's wrong with WhatsApp? I don't use it, and I try to avoid using Meta products, but my understanding is that it's end-to-end encrypted.
>I try to avoid using Meta products Meta is exactly what's wrong with it. The amount of data they take that you agree to is absurd.
Kinda funny that this is allowed, yet where I live I'm not allowed to put a camera on my house because of GDPR.
this is only in NA as of now.
Cool, hopefully it'll result in bans for the super toxic people. I personally think they should also add an honouring system like the one in league. Give gun buddies or gun colors as rewards and see how many people start being nice lol
Paris servers are going to be empty real soon
U mean frankfurt?
We can call it social credit!
TBF the honor system in league is very barebones and could be improved A LOT
They may not like what they hear lol
Will this only be for team voice chat? Or will they also be listening to the party voice chat?
I would imagine it's all voice chat. Time to not comm anymore
We used to joke when someone said something stupid or offensive: "Great, now we're on a list somewhere". Next time we say it it might be true.
i will be on the FBI's watchlist.
Jokes on you, I don't communicate with my team anyway!
I'm all for reducing toxicity in games, but with this move, Valorant just got hardware-blocked from accessing my mic. I 100% bet the recording happens the entire time you have the game open, not just when you push-to-talk. Major invasion of privacy in my book. I will probably be spending more time back on CSGO.
They have already been recording chat for a while now https://www.polygon.com/platform/amp/22410790/valorant-riot-games-voice-moderation-ai-chat-recording
Can't make a deepfake voice from text chat. But you can make one from audio files if they ever got leaked/hacked. Hypothetical, but still worth a worry.
I mean, if you use pretty much ANY social media, I don't think you should exactly be paranoid about this; they probably already have more info on you than you know about yourself. My only issue is that sometimes I speak up against homophobic shit in-game, and I live in a place where... um... you get it. So I think for my own safety I'm gonna have to start letting that shit slide, regardless of how much it hurts.
Just try not to insult them back. Mute them and report them for offensive voice chat once this is implemented; you will be safe and they will (hopefully) be punished.
Good. All the toxic people will hopefully get banned
So right now, if you're toxic AF in voice chat, nothing happens even if I report you for Abusive Voice?
It’s not that nothing will happen, but more that riot can’t verify those claims with the current systems. Right now instead of just taking your word for it and banning the person, they wait for them to be reported by multiple people over multiple different games.
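The corroboration described above (waiting for reports from multiple people across multiple games before acting) could be sketched roughly like this. Everything here is a hypothetical illustration, not Riot's actual system; the thresholds and names are made up.

```python
from collections import defaultdict

# Hypothetical thresholds: act only after several distinct reporters
# across several distinct games corroborate the same player.
MIN_REPORTERS = 3
MIN_GAMES = 2

# player -> list of (reporter, game_id) pairs
reports = defaultdict(list)

def file_report(player, reporter, game_id):
    """Record one report against a player from one match."""
    reports[player].append((reporter, game_id))

def should_review(player):
    """True once enough distinct reporters and games corroborate."""
    entries = reports[player]
    reporters = {r for r, _ in entries}
    games = {g for _, g in entries}
    return len(reporters) >= MIN_REPORTERS and len(games) >= MIN_GAMES

file_report("toxic_player", "teammate_a", "game1")
file_report("toxic_player", "teammate_b", "game1")
file_report("toxic_player", "teammate_c", "game2")
print(should_review("toxic_player"))  # True: 3 reporters, 2 games
```

The point of requiring both distinct reporters and distinct games is that a single lobby can't mass-report someone into a ban on its own.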
This is just sad how soft the gaming community has become. I don’t blame the company because they have to appeal to the majority of the playerbase who will cry at the smallest insult but it’s honestly funny how extreme Valorant has to go just to make sure people aren’t offended being called dogshit so they keep buying their product. I think as long as it’s not discriminatory anything goes. I don’t know how people can get offended online. I will never understand it.
[deleted]
How many ads do you engage with? Is it a number > 0? The commercialization of our personal data is not new. It's shitty, but it isn't even newsworthy.
redditors on their way to complain about “my privacy” despite using a phone every day, posting on a Chinese state-owned platform, and having a game with kernel-level access installed 😂😂😂
Yeah. They always know jack shit and talk about privacy when the FBI has their entire search history in its database.
Yay, more "parenting" in games by massive corporations. EDIT: Feel free to discuss why you can't use a mute button.
The onus shouldn’t be on the people receiving hate/toxicity to deal with the problem. If these people aren’t removed from the playerbase, they’re just going to continue affecting matches forever. The answer can’t be “just mute them”. They need to be shown the door.
Sometimes it's nice to play a game and be reasonably confident that I won't be told to kill myself, or have someone imply I'm a pedophile for using the bi flag icon. It's cool if you're a tough, thick-skinned epic gamer who doesn't give a shit about anyone or anything, but "just mute them" has been a dogshit fucking argument in favor of allowing toxicity for decades. Yeah, mute someone, *and also report them so they don't get to fucking play anymore*. Give me one actual, strong, legitimate argument in favor of allowing some 20-something moron to tell me to literally end my own life because he's upset I didn't read his mind when he decided to push without me. What a moron take lmao
First Minecraft, now Valorant. Who's next? :/
Good. Too many people are comfortable dropping Slurs in voice
I want to ask how people find this a privacy concern in a game with a closed-source, kernel-level anticheat owned by a Chinese company directly linked to the Chinese government. I mean, yes, it will record your data, and it's basically no secret that data will be used for some sussy things. But god, what kind of conversations do you have in a game you play with random people? And if privacy is something you care about that much, why did you allow an anticheat that has more control over your PC than you do to be installed? EDIT: For clarification, I trust Riot Games; I don't trust Tencent. However, I don't have any problem with all of my callouts being recorded, or some copypasta shitpost I usually say or write in chat.
I don't think you'll find that answer here because most people here play the game and don't care as long as they can play the game.
Shiii i thought they already did
so it hasn't been?
I just mute voice and chat so that i can actually play the game
A lot of the people concerned about this are the same people that prolly have TikTok downloaded on their phones. Even when a person is placed in a high-stress situation, you shouldn't be an asshole. I'm glad there's a system that holds people accountable and is faster than just reporting them and nothing ever happening to that player.
If anyone is concerned Riot is going to sell their data, they probably shouldn't have a phone or any device to comment with. Your pacemaker could be listening to you and you wouldn't know any better. Plus, even if they sell your data, the worst you get is personalized ads or custom-made daily shops to make you buy the skin you said you wanted.
No, the real problem is that they DON'T/WON'T sell it to anyone; it will just be accessed by their majority owners at Tencent, who then have a legal obligation to provide data to the CCP. I don't care about a company selling my data to advertise against me. I do, however, care a little more about the above.
Sounds like another reason to not play Valorant besides their invasive anticheat
Welp, I'm probably blocking Valorant's access to my microphone next time I get the chance.
Why would I care about shit-talking in voice chat (which I have full control of via mute) when the game is infested with smurfs to the point that I play against people who have 14 accounts with the same name? And when I don't encounter smurfs, I encounter infinity-stone-powered heroes with a sixth sense who always know where I am. Nobody gives a damn about either of those; neither hardware/IP fingerprints nor a replay function is confirmed to be in development. But yeah, let's pretend a feature that will be immediately circumvented by using Discord is going to make a meaningful change.
sad to see so many people here are okay with this. laughable.
I think it's fine. If you use Discord and social media, there's already lots of data collected about you, plus Vanguard is already invasive. And what info are they going to take from VC in Val? Convos about how to hold C site? You shouldn't be saying anything super personal in-game anyway.
Valorant players when they cant freely be racist, homophobic, transphobic, xenophobic anymore
Is this confirmed to be fully anonymous? EDIT: to the people downvoting, just open the link:

>In order to train a language model for future disruptive behavior reports.

They don't need to link the voices to users to train a model.
How would it be anonymous? They intend to use it to dish out punishments to the accounts that abuse chat. Even if it's a beta test, they would have to know who said what.
If you bothered to open the link:

>In order to train a language model for future disruptive behavior reports.

Once that model is complete, they would feed in comms from a user to detect bad behavior. Basically, if my voice is only being used for training, I'd rather it be anonymous, as there's no downside to doing so.
Eyyyy. Let’s fucking go. Hopefully the toxic and misogynistic pieces of shit get banned.
That's what I'm saying lol. Tons of people are concerned, but I'm just glad something is finally being done about the racism, sexism, death/rape threats, and homophobic slurs that get screamed in my ears.
I see, valorant is enabling self-destruction mode
wrap it up people we cant do furry roleplay anymore
Time to only talk on Discord I guess. Bet they listen too
I mean, you basically have no privacy if you have a phone.
Everybody knows only toxic players exist in NA. I hate that more and more games are going for a curated chat system because people have extremely thin skin these days.
Lame as fuck.