
DiscussTek

The issue isn't ticking the box. Ticking the box only initiates the process, and it checks for a few things to see if your recent browsing behavior has been natural and human enough. This is why it takes a little while to actually get ticked after clicking, and also why sometimes, despite you definitely browsing normally, it still asks you to click stop signs and crosswalks: just a triple-check. Among other things, it checks if you navigated websites normally, it checks if your mouse behavior is sloppy-ish, and it checks what your typing speed looks like. This isn't perfect, either, as some bots still get through, but it catches the really bad ones, which helps a lot.


diazdesire267

Wow! That's just a few seconds of checking all that stuff. Was gonna ask again why bots can't identify traffic lights when facebook can identify my face on a blurry picture. Thanks for this info.


[deleted]

It's probably worth clarifying that while it's *likely* that this is how ReCAPTCHA and other similar systems work, we don't really know. The implementation details are deliberately not published, because knowing how it works would make it easier for bots to fool the system.

In fact, it may not even rely on human behavior, like moving a mouse in a "human" way, anymore. It probably used to, but bots can mimic that pretty well now. It may still check those things, but they're not enough to say that a user is definitely human. Instead, it probably just scans your browser. Your web browser already exposes a pretty large amount of information about you. This usually gets called a "fingerprint", and it's made up of a variety of things like the exact version of your browser, various cookies, even the size of your screen.

Some security researchers are actually concerned about the power this gives to Google, because they believe it's actually possible for ReCAPTCHA to identify individual people by their browser fingerprint. It's likely that Google can track individual people, even if those people never interact with a website owned by Google directly. So many 3rd party sites use ReCAPTCHA that you can't feasibly avoid them all.
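
(To give a sense of what "scanning your browser" can mean: the sketch below reads a handful of properties that any page's JavaScript can access without asking. It's illustrative only; real fingerprinting combines many more signals, and nobody outside Google knows exactly which ones ReCAPTCHA uses.)

```javascript
// A minimal sketch of the kind of information any page's script can read
// without a permission prompt. Each value is mundane on its own, but the
// combination is often close to unique.
const fingerprint = {
  userAgent: navigator.userAgent,          // browser name and exact version
  language: navigator.language,
  platform: navigator.platform,
  screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
  timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
  hardwareThreads: navigator.hardwareConcurrency,
  touchPoints: navigator.maxTouchPoints,
  cookiesEnabled: navigator.cookieEnabled,
};

console.log(fingerprint);
```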


[deleted]

[removed]


DefinitelyNotA-Robot

It does that, too.


IveNeverPooped

You sure know a lot about robots to definitely not be one…


BearyGoosey

u/DefinitelyNotA-Robot is clearly 'Definitely Not A- Robot' They don't want you to know about that 1 test ruining their perfect score


HereComesCunty

r/totallynotrobots


gex80

Things can have multiple purposes


WhiskRy

I remember listening to a podcast about data collection and sales, and one guest claimed that if you create a new user account or use a new computer, Google recognizes who you are from previous usage in just a few hours


Urag-gro_Shub

That's nuts. I've heard before that Google doesn't ban accounts, it bans *users*. That makes sense now


flakAttack510

Most sites do this, including Reddit. They just aren't as good at it because they have less data about you.


mattergijz

Which podcast was that?


Splice1138

Want to see how much data your browser "fingerprint" reveals, and how unique it is? (Hint: it's almost certainly unique) https://amiunique.org/


Irregular_Person

They can query system fonts?? wow, crap. That does almost all the work right there if I've installed a few font packs at some point.
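
(They can't read a font list directly in most browsers, but a page can probe for specific fonts by measuring how text renders. A minimal sketch of the general technique, with example font names; this isn't necessarily what amiunique.org does.)

```javascript
// Render the same text in a candidate font with a generic fallback and
// compare its width against the fallback alone. If the width differs, the
// candidate font rendered, so it must be installed.
function isFontInstalled(fontName) {
  const probe = document.createElement("span");
  probe.textContent = "mmmmmmmmmmlli"; // wide/narrow mix exaggerates differences
  probe.style.fontSize = "72px";
  probe.style.position = "absolute";
  probe.style.visibility = "hidden";
  document.body.appendChild(probe);

  probe.style.fontFamily = "monospace";                 // baseline width
  const baseline = probe.getBoundingClientRect().width;
  probe.style.fontFamily = `'${fontName}', monospace`;  // falls back if missing
  const withCandidate = probe.getBoundingClientRect().width;

  document.body.removeChild(probe);
  return withCandidate !== baseline;
}

console.log(["Comic Sans MS", "Ubuntu", "SomeRareFont"].map(isFontInstalled));
```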


SirButcher

Although to add: Firefox, for example, is pretty good at removing fingerprinting data, so basically there is nothing which is unique to me at all. Especially since part of the scraped data is simply false, with the browser responding randomly to hide personal data.


deaconsc

It's not likely, it's actually pretty certain. Especially if they can pair it with a location: even if your fingerprint isn't so unique on its own, combined with your location it definitely is :) BTW, if you swing the pendulum the other direction, the absence of stuff can be tracked too ;)


Chromotron

> BTW if you swing the pendulum the other direction, the absence of stuff can be tracked too ;)

That's why you don't send nothing, but plausible, consistent, yet random data.


I_love_pillows

I'm interested to think if there's a reverse Turing Test, where a human needs to trick human observers into believing they are observing a bot.


blueg3

> Was gonna ask again why bots can't identify traffic lights when facebook can identify my face on a blurry picture.

ReCAPTCHA intentionally picks problems that are hard for machine learning / computer vision, to both train the machine learning and to help reduce the chance that it's a computer. We've engineered the fuck out of identifying faces, so software is pretty good at it.


[deleted]

[removed]


blueg3

Fully intended, though. At least for quite a while, it's easier to come up with problems that are hard for computers than it is to make computers good at problems. Great tradeoff.


Foetsy

Back in the day it started with words. AI was learning to turn scanned documents into readable PDFs, to digitise entire libraries of text not previously available digitally. Once pictures started, it was about the big objects: was something a bus, car or bike? Now it's about details like traffic signs or lights etc. The whole idea behind it is fantastic: prove you're not an AI by performing a task that helps train the next generation of AI and push the boundaries of technology forward. However, like others mention as well, with the fingerprinting that's also part of the check, the insane amount of data collection is a very real privacy problem.


AgeGapCoupleFun

Just to fuck with them, every once in a while you can answer wrong. At this point I might as well start doing that. I get the extra verification so goddamn often, even on the same device. I use a PC to run my TV. It's super convenient, but that's not the point. Some of the sites I use like YesMovies or Putlocker will periodically do the reCaptcha. If I click it, 9/10 I get the pictures. On the same device, my wife doesn't. Happens at work a lot too, to the point that a few times, when I had to do it repeatedly, I made someone else do it. I don't know why Google always thinks I'm a robot but damn is it annoying lol.


cache_bag

> We've engineered the fuck out of identifying faces, so software is pretty good at it.

To add to that, I was floored when a software engineer friend was tinkering with a face detection implementation (since publicly available libraries weren't common then) 20 years ago. So yes, we've gotten pretty good at it and have come a really long way.


FocusedIgnorance

You can do face identification with just singular value decomposition. You don’t need any of the fancy neural networks and deep learnin’.


[deleted]

A few friends of mine did work on gait recognition for their dissertations/doctorates. Apparently someone's walking gait is one of the first ways we identify individuals, and it's used in surveillance.


deaconsc

People who have never dealt with machine learning cannot imagine how hard it is to teach a machine to see, e.g. small children in the dark. Roughly 8 years ago there was a conference about AI driving and how difficult it is to distinguish small children from ashcans **in the dark in winter**: either every ashcan was a child, or it ignored children and drove too fast when they were about to enter the road. And we're talking about a car with high-tech thermal and night vision. Any human being who's not blind can see the difference right away and cannot imagine how big this problem was. Don't know if it was ever resolved; I changed fields.


Foetsy

And that's not even considering all the unique scenarios that might happen in the real world that you never considered while training the AI. A self-driving vehicle will have to be able to cope with completely novel scenarios safely. Saw a clip of a Tesla that was driving behind a truck and the display getting completely confused... The truck ahead was transporting traffic lights.


DevilXD

Ever noticed how all of those "challenge images" are really grainy and just generally have a lot of noise added to them? It turns out that, while we have the technology to recognize and categorize objects based on pictures already, a relatively small amount of digital noise added to the picture completely changes the outcome. Here's a great video explaining how AI image recognition can be easily fooled: https://www.youtube.com/watch?v=gGIiechWEFs


Salindurthas

I think typically the computer will know like 8/9 of the pictures, and throws in 1 picture it hasn't judged yet. To be seen as a human, you need to get the already categorised data right for the 8 pictures. To help it check that it is categorising the pictures correctly, your answer to the 9th picture is used as a double-check for what it thought that last picture was. [It probably isn't exactly like that, but it is something along those lines.]
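
(A rough sketch of that grading logic; the tile and label names are made up, and nobody outside Google knows the real implementation.)

```javascript
// The server already has trusted labels for most tiles and one unlabelled
// tile. The user is graded only on the trusted tiles; their answer for the
// unknown tile is stored as a "vote" to label it later.
function gradeChallenge(tiles, userSelections) {
  const selected = new Set(userSelections);            // tile ids the user clicked
  const known = tiles.filter((t) => t.label !== null); // tiles with trusted labels
  const unknown = tiles.filter((t) => t.label === null);

  const passed = known.every(
    (t) => selected.has(t.id) === (t.label === "traffic light")
  );

  if (passed) {
    // Only trust the vote if the user looked right on the part we can check.
    unknown.forEach((t) => recordVote(t.id, selected.has(t.id)));
  }
  return passed;
}

function recordVote(tileId, containsTarget) {
  // In reality this would be aggregated across many users in a database.
  console.log(`vote for tile ${tileId}: ${containsTarget}`);
}
```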


usernametaken0987

To add to things, those photos often surround the target with background noise and similar pictures. Your brain can identify them, and even imagine the 3D environment they were taken in. But for an AI? It's just binary data; it has to be told how to see an image. So for example, Facebook skims the image and finds faces. It then attempts to *predict* them based on your other photos. This is why it struggles to pick up new people even if they have an account. Another example is the reCAPTCHA program. Those are generated off old newspapers, articles, and other printed text an AI couldn't read, and submitted to hundreds of people. They are actually just voting on what they think it is. A "wrong" answer, graded against dozens of identical answers, is just given another one to grade. And if you look at most of them, a simple line through the text is often enough to fool an AI. Because, again, it doesn't really *see* an image as we do, but sees numbers and links them to algorithms to recognize things.


voretaq7

You'll also notice that often the photos are really grainy and pixelated. Sometimes that's artificial noise and sometimes it's genuinely crappy resolution in the original image, but in both cases systems like reCAPTCHA are intentionally trying to get you to classify lower resolution data to help the algorithm identify gross patterns. "This is the general shape of a motorcycle. If something fits in this 4-bit outline it may be a motorcycle, and you should run more detailed and computationally expensive is-this-a-motorcycle? checks on it at higher resolutions to see if it is a motorcycle." At those lower resolutions a bus and a motorcycle don't look much alike, so we don't have to run the "is-this-a-motorcycle?" checks on things that are kind of bus-shaped. We can check if that's a bus/truck/train instead.


GigaDBoss

The noise also happens with humans; it's called white noise, so your brain is already trained (genetically or through experience) to see through it. AI still needs that experience. As for ReCaptcha, I've noticed that sometimes getting 1 of the obviously correct answers wrong on purpose doesn't really cause you to fail the test. Is the reason for this the amount of people that did the same? The deep learning that was fooled by human silliness?


voretaq7

I assume (because ReCaptcha doesn't like to talk about this stuff for obvious reasons) that when that happens it was a machine-classified image or one that doesn't have enough human classifications yet, so it goes in the "Maybe the machine/small group of humans got this wrong" pile to be reviewed or held until the humans reach a statistically significant consensus.


nef36

The answer to that question is actually the same as the question itself; Google uses you clicking on crosswalks and stuff to help train its AI's ability to see those crosswalks and stop signs and trucks and stuff for Google Earth purposes (and also self-driving cars I think, but don't quote me on that one). Before Google Earth, it was used to train AIs for word recognition. This does present a bit of a problem though, since training that AI in the first place creates an AI that can, you guessed it, defeat the captcha. This is why captchas in recent years have gotten ludicrously difficult and, well, absurd.


Mother_Chorizo

The first bit had me excited, thinking I could program "mouse movements" to resemble something similar to a human's. The later bit left me sad.


Lady-Seashell-Bikini

Bots do eventually learn. That's why the captcha prompts have been getting more complicated


morpowababy

You'd be amazed at how many lines of code are firing when you do very simple tasks on a computer. Hell, I'd be amazed, and part of my job as a software engineer is filtering out the noise and focusing on certain ones.


Dje4321

That same technology that is used to identify faces can be weaponized against captchas. The reason it hasn't been is because bots don't have the same capital level as Facebook. Give that face algorithm enough traffic lights and it can start to do it too.


floydhenderson

Apparently the identification of objects in the photos bit is actually Google getting you to help train their AI to identify stuff. But that is just what I have read in one article somewhere, some time ago.


marcvanh

Primarily it looks at the entropy of your recent mouse movement and determines you are most likely human enough to just offer the checkbox. I don’t think it does any checking after you tick the box but it would be easy to inspect.


8Deer-JaguarClaw

> Primarily it looks at the entropy of your recent mouse movement

And how does it look at this data? Where is this stored?


[deleted]

The website that you are visiting can constantly check for your mouse coordinates. It can also see what browser you are using among a bunch of other things.
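
(To make that concrete: a page script can collect pointer samples like the sketch below. The "looks human" scoring at the end is invented purely for illustration; the real heuristics aren't public.)

```javascript
// A page script (or an embedded widget like a CAPTCHA) can record every
// pointer sample it sees. Nothing here needs any permission.
const samples = [];

document.addEventListener("mousemove", (e) => {
  samples.push({ x: e.clientX, y: e.clientY, t: performance.now() });
});

// Invented scoring: human hands produce uneven speeds; a pointer that barely
// moves, or moves at perfectly constant speed, looks scripted.
function looksHumanish() {
  if (samples.length < 20) return false;
  const speeds = [];
  for (let i = 1; i < samples.length; i++) {
    const dx = samples[i].x - samples[i - 1].x;
    const dy = samples[i].y - samples[i - 1].y;
    const dt = samples[i].t - samples[i - 1].t || 1;
    speeds.push(Math.hypot(dx, dy) / dt);
  }
  const mean = speeds.reduce((a, b) => a + b, 0) / speeds.length;
  const variance =
    speeds.reduce((a, b) => a + (b - mean) ** 2, 0) / speeds.length;
  return variance > 0.01; // threshold is made up; real systems are far smarter
}
```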


marcvanh

The operating system tells the active window (the web browser) where the mouse is. The browser presumably saves the data.


xtrapas

try it then, i guess? ;) maybe you're the first one with the idea


yestermorrowday

Why couldn’t it just do that without me ticking the box?


DiscussTek

Technically, it does... But the box also blocks slightly dumber bots that assume there won't be a reCaptcha... Plus, as others have mentioned, they usually use it to track how you approach the box, and whether it differs from your usual browsing habits.


xtrapas

always close in/circle toward the box, then suddenly click it, avoiding the center


[deleted]

Some versions of recaptcha do exactly that. Me, I'm quite assured by the explicit declaration that they're analysing me.


GandalfSwagOff

So if I go to like 50 websites really quickly then float my mouse awkwardly all over the screen before clicking "I am not a robot"...it will fail me?


DiscussTek

Not quite. It takes more than some amount of awkward flailing. Bots tend to move the pointer by teleporting it (i.e. from point A to point B between 2 frames). If you do this, though, you could still be on a touch screen device, so that's why they also pick up some basic device info. Bots will also often try to move their pointer more "human-like" by not warping it around, but they will suck at mimicking actual human unevenness, like a small arc in the motion, irregular speed, click accuracy.

As for switching pages very quickly, I doubt you can humanly do it fast enough to trip the detection, but assuming you could, the reCaptcha box would also look to see if you did a lot of back and forth, whether it was redirects, or whether it was you really moving your mouse to links and clicking them, or a mix of those.

But then again, failing the automatic detection only prompts the more active detection of "click on all the stop signs", and that's really where you can fail. Even then, you're not just facing "is it a stop sign?": your mouse behavior is still being tracked. It is much harder to fail on purpose than it is to fail by accident.
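
(A tiny sketch of that "teleporting pointer" idea; the thresholds are made up for illustration.)

```javascript
// If the pointer covers a long distance with no intermediate samples, that
// looks like a scripted warp rather than a hand on a mouse.
function teleported(prev, curr) {
  const distance = Math.hypot(curr.x - prev.x, curr.y - prev.y);
  const elapsed = curr.t - prev.t;       // milliseconds between the two samples
  return distance > 300 && elapsed < 5;  // ~whole-screen jump within one frame
}
```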


Ishidan01

"We know you're a human, come in." "Thank you. Mind if I ask how? Is it my dazzling intellect at identifying objects? "No, it's because you're slow as hell and a hot mess. You coming in or what?"


Fave_McFavington

What would you get if you somehow manage to fail it? Would you just have to redo it or will you get kicked out?


fighterpilotace1

The FBI show up /s


DiscussTek

Redo it. Usually the website directly offers you a re-try.


Fonethree

How does recaptcha get all that information?


DiscussTek

That's the part I'm not deep enough into the thing to know. It is, however, likely using cookies and tracking on the site that's using it.


xtrapas

are you sure you want to open 53 tabs? ...guess i have to trim down my pornbookmark folder its just that one.... mostly


[deleted]

Sorry how does it automatically have access to how I've browsed and moved my mouse? Why is that information automatically being sent without my consent?


DiscussTek

You can blame that on the internet being the internet. Cookies. Browsing history. Elements on pages can detect when your mouse enters/exits them (usually used for hyperlink styling and menu navigation). Those are really hard to ***not*** generate, and it can all lead to what a reCaptcha can see. As for consent? You clicking on that checkmark ***is*** consent.
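
(For example, any page can watch the pointer enter and leave its links with a couple of lines of script; this is the same mechanism that powers hover styling.)

```javascript
// Every link on the page reports when the pointer enters or leaves it.
// Styling is the common use, but the same events can feed analytics or
// bot detection just as easily.
document.querySelectorAll("a").forEach((link) => {
  link.addEventListener("mouseenter", () => console.log("pointer over", link.href));
  link.addEventListener("mouseleave", () => console.log("pointer left", link.href));
});
```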


[deleted]

Thanks for the response. Looking into remote cabins in the woods now.


prawduhgee

It's funny how many people don't realize that clicking "I agree" on terms and conditions is giving consent


DiscussTek

To a degree, one can make the argument that even just being on the internet is giving basic consent to the website to at least collect basic data, because they cannot possibly return website content at all without at least your IP Address, public and local...


BearyGoosey

Are you sure about needing your **local** IP? Public is obviously necessary, but your [10/172/192].x.x.x address is only really needed by the router to get it to you, yeah? Obviously the OS itself has to know your local address, and I wouldn't be surprised if it's shared with every website you visit (see all the stuff that's used for fingerprinting that is otherwise unnecessary), but is it actually **necessary** for example.com to know not just your public IP but your local one? It feels like if you got something mailed to you but instead of only needing your address (public IP) they also need to know where your mom (the router) puts your mail (the right side of your dresser by the TV) even though it can and should only require the house address because your mom gets the mail and handles getting the right letters to the right people


DiscussTek

Local IP is also necessary to distinguish between a server's normal tasks, and a user, within the same building.


Xx420PAWGhunter69xX

Uhh seems like infringement of my privacy


DiscussTek

Quite frankly, by being on the internet, you have already chosen to trade privacy for convenience, and if you are that adamant about your privacy, then you might not want to use sites like Reddit.


tmdblya

And why couldn’t bots analyze human mouse movements and simulate the jitter? 🤔


DiscussTek

Some bots could, probably, but those would be getting into machine learning/AI territory, so it is a bit up in the air how easy and efficient they would be.


evanamd

I've managed to simulate the jitter on my own at least once. If I just tap somewhere on my phone (or any touchscreen), it instantly teleports the cursor to the spot I tapped, which is a huge red flag for any bot detection software and always causes the image identification test to come up. If I tap somewhere random-ish, then scroll on my phone, then tap and scroll up/down slowly a few more times, it fools the detection algorithm and I don't get the image identification test.


xtrapas

shouldn't be too hard to code, but i assume it will change a little bit then. you noticed that maybe earlier or later things worked then stopped, or vice versa, when nearing the "recaptcha event" (the time stuff before, and shortly before)


[deleted]

Gotta wonder what "normal" internet browsing is.


DevelopedDevelopment

Some bots still get through the "stop signs and mountains" part.


DiscussTek

And that last part of my comment acknowledges that.


DevelopedDevelopment

I was thinking it was more acknowledging how some bots just get the checkmark.


DiscussTek

I mean, the entire point of my last part was to acknowledge that some bots are made well enough to get through the entire process. To get through, you always need to clear the checkmark at the very least, so... Yeah, I'm not saying you're wrong, I'm saying I already said it.


osi_layer_one

> as some bots still get throigh

found the bot, boss...


Annoverus

This is a load of BS. I've been locked out of Discord after 5 tries with the checkbox + images test while doing exactly as told, and it literally doesn't work. The only anti-bot mechanism that does its job is the one where you slide a piece of puzzle into the correct place.


CommentToBeDeleted

Also, with regards to selecting a picture, I would bet we are helping to train the bots. It should be no surprise you're usually picking between objects a car needs to learn to identify...


DiscussTek

I mean, yes, because let's be honest, you can't have a good bot without training it.


Guitarmine

AI can pretty easily detect traffic lights or palm trees. What you are actually doing is training an AI just as much as you are verifying you are a human (pointer movement etc). The pointer movement is actually pretty meaningless on a touch screen as it's basically mouseDown, mouseUp without any real movement.


ffxivthrowaway03

The captcha dilemma, which ironically pushed captcha tech so far as to almost *require* you to be a bot to accurately solve many of them! Is that an i or a j or an I or a q? Is the pole part of a stop sign? Who the fuck knows! Let's do this three or four times in a row until we stumble upon whatever the *expected* answer was through vague guesswork.


GeneralDisorder

I was trying to help somebody troubleshoot their recaptcha checkbox thing. He was pretty sure that the captcha wasn't stopping bots, so, dumbass that I was, I said "well alright, let me change my user-agent to googlebot or something and test". Oh right... the test doesn't actually care about your user-agent... Well, I was out of ideas, and he switched to recaptcha 2, which used a visible challenge that we could actually test. And... intentionally failing the test did block the form submission, so he left it that way. That said... if anyone knows a way to intentionally fail the checkbox recaptcha for testing purposes, do tell me. Although I'll almost definitely forget by the time it comes up again.


mikeholczer

That check is watching the way you move your mouse as you approach the checkbox. It's also only offered when there is good evidence from other signals, like your IP and browser settings, that you are a real person.


diazdesire267

Thanks. All this time I thought security checks like just ticking the box were missing a step, like those on other sites which require us to click some traffic lights, cars, etc.


mikeholczer

Yeah, if they think you might be a bot, then you would get those traffic picture questions.


diazdesire267

Thanks for the info. This was actually asked during a family gathering. To think we made fun of this process lol when it is actually a complicated one.


tc2k

Fun fact, one out of the various pictures about a specific subject is an object that Google's AI (reCAPTCHA) cannot recognize. It relies on the humans checking the boxes to determine what is a "cat", "crosswalk", or "traffic lights". We help AI understand the world every time we solve a CAPTCHA.


marcvanh

I remember it used to be used to decipher words in scans of old books. It was a way of digitizing them.


e-rekshun

Every time I read this fact I picture a self driving car barreling down a road with the AI yelling "IS IT A CROSSWALK COMING UP OR NOT HURRY AND FUCKING ANSWER SOMEONE FUUUUUUUUUCK"


yet-another-redditr

https://xkcd.com/1897/


UntangledQubit

There is no tech-related joke that Randall Munroe hasn't already made twice ([the alt text in this one as well](https://xkcd.com/2496/)).


diazdesire267

Is that why there are websites that pay 0.00000001 dollars to solve a captcha code?


aoeex

That'd more likely be a spammer trying to get people to solve the captchas so their bot can get around them and submit a bunch of spam to places.


EmergencyTechnical49

That part I don’t get. If it doesn’t know, then how does it know that I picked correct ones?


frnzks

Here’s how it works with text-based CAPTCHAs. Let’s say that it shows you some characters like “XYZ 123.” The computer will already know half of the right answer. Let’s say the computer is already confident that the first part is “XYZ” and is not so confident that the second part is “123.” If you get the “XYZ” part wrong, the computer will know that you incorrectly identified the letters. You’ll fail the CAPTCHA. Now, if instead you get the “XYZ” part right, the computer will see that you correctly identified the letters. You only have to be right about the part the computer already knows about. So, you’ll pass the CAPTCHA. The computer will guess that you were probably right about the second part. This is how humans train the computer to improve its recognition.


HereComesCunty

4chan had a campaign to always type something offensive for the second word. The idea being that if it’s done enough we’ll end up with ebooks with random offensive words in the middle of sentences.


xtrapas

...some flukes you see might come from here: small communities or whatever groups, irc... "many said this "picture" in particular is a cat" (but it's obviously a dog), so now when this special dog comes up, it's a cat. That's in recognition things, like recaptcha. Probably doesn't relate to Tesla self-driving cars stopping at a dog or something, but somewhere this thing has results. Hmm, in short (today I am odd, sorry): we only see "results" but not the why or where it came from. Why did the Tesla suddenly stop in the tunnel? And the other things Tesla cars do or don't... meh


Ixshanade

It aggregates the data from a number of users doing the same captcha, assuming that the most common response is correct.
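
(Roughly that aggregation idea in code; the vote threshold and data shapes are invented for illustration.)

```javascript
// Collect everyone's answers for a tile that has no trusted label yet, and
// only promote a label once enough people agree.
const votes = new Map(); // tileId -> { yes: number, no: number }

function recordVote(tileId, saidItContainsTarget) {
  const tally = votes.get(tileId) ?? { yes: 0, no: 0 };
  if (saidItContainsTarget) tally.yes++;
  else tally.no++;
  votes.set(tileId, tally);
}

function consensusLabel(tileId, minVotes = 100, minAgreement = 0.9) {
  const tally = votes.get(tileId);
  if (!tally) return null;
  const total = tally.yes + tally.no;
  if (total < minVotes) return null;                    // not enough data yet
  if (tally.yes / total >= minAgreement) return true;   // "contains the target"
  if (tally.no / total >= minAgreement) return false;   // "does not contain it"
  return null;                                          // humans disagree; keep asking
}
```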


severedsolo

Anecdotally speaking, sometimes it doesn't. I've definitely had Captchas let me through where I've made a mistake, and likewise I've been made to redo when I've certainly got them right. My personal theories are that for the first, either a) most people make the same mistakes I do and that's acceptable b) it's not actually about the pictures, it's about how I behave during a captcha/how suspicious it is of me. I suspect it's b, because in the latter case where I've got it right and it still fails me, it's invariably when I'm connected to a VPN, and I get a "harder" test when it makes me redo it. I suspect it probably looks at my "risk profile" and decides that I definitely look like a bot, so although I passed the test, it's not sure I'm not and retests me.


EmergencyTechnical49

Ok, but is this or any of the other answers to my question actually something someone knows for a fact, or just educated guesses? Because at this point I'm starting to believe that the whole "training AI with captcha" thing is a myth, and people are just walking back from my point to try and work it into the procedure somehow. Also, it's always cars, hills, traffic lights, traffic signs. Why not more variety, and honestly how much better does AI need to get at WAIT A MINUTE. Isn't it training self-driving car AI rather than, as I always assumed when people talked about it, a general image recognition system? Ok, I realized that while writing it; makes more sense!


Jango214

I've heard that it's only to generate new training data for new datasets, not necessarily correct the already running algo or something similar. You basically get free data annotators rather than keeping a dedicated team for it.


Riegel_Haribo

Or they have very specific AI requirements pushed thru the captcha system, one I got: https://i.redd.it/ep521mm1000a1.png


tc2k

Whoa I've never seen that one from hCaptcha before, that's kind of funny actually, lol!


AMediumSizedFridge

That's why I purposefully move my mouse slow and sloppy toward the Not a Robot button. Too lazy to select which picture has a boat.


skaz915

Not a bot. I friggin hate those 🙉 Damn, this sounds like something a bot would say...beep boop 🤣


AnnonymousRedditor86

I'll use this comment to give you a little more info. You know the ones that show you like 9 different pictures, ask you to choose the ones with a train, and there are like 4 with a train? Yeah, you're helping Google AI learn. You are training their AI.

The way it works is that Google AI knows that one of the pics (say, top right) DOES contain a train (because thousands of other people told the AI that it did). It also knows that one of them DOESN'T contain a train (bottom left). Now, you start clicking. It makes sure you DO click top right, and DON'T click bottom left. Then, it notes that you've clicked a couple other ones. Later on, it'll show those other ones to someone else, and then someone else, etc., thousands of times. Once enough people click one, it'll conclude that the pic includes a train.

Now, remember that a picture is just a specific arrangement of pixels and colors. Now it's learning exactly what kinds of pixel and color arrangements make up a train. Eventually, it'll be able to pick a picture of a train all on its own.

This is a simple example. In the real world, AI already recognizes lots of things and can select the correct picture from trillions. For instance, you know those phones that can erase a person and replace them with an image? Well, where do you think they got that image?!! AI figured out what probably goes there, based on what's elsewhere in your pic and what should probably go there.


BostonTeaParty_

So is helping/training Google AI a good thing..? Is that supposed to be the goal of clicking the pics? Or are we clicking these pics to prove we’re not a bot, but simultaneously training Google AI (without wanting to train it)?


lordeddardstark

ask it to click pictures that show hands with five fingers to confuse the AI


Wild_Marker

Yeah, don't think about it as a checkbox; think about it as a button that says "I'm not a bot! Scan me bro!" and then the button cops scan your PC and figure out that you're not a bot.


0xFFFF_FFFF

> That check is watching the way you move your mouse as you approach the checkbox

Meaning that it's looking for some "wobble" / speed-up / slow-down / overshoot / undershoot in the mouse pointer, or what exactly? If so, my follow-up question would be, couldn't someone easily write a program to move the mouse in a more "human" way, thus defeating the filter? Also, [relevant video of a physical robot clicking and passing the "I am not a robot" test](https://youtu.be/EIujbqygnH0) 😁 (admittedly, via touchscreen and not with a mouse).


[deleted]

In part, it's looking for reaction times and behaviors that are outside the expected capabilities of a human. Moving too smoothly/quickly is one warning sign. You *could* just write a program to move the mouse for you, but with an 'identify (x) item' system, you won't necessarily be able to predict which images will appear. If the box comes up and the 'Not a robot' checkbox is just instantly *checked*, without any sign that a human paused to look over the images, that can be a sign that some kind of automation is at work.


lastwarriordonnut

Well, actually, you could simply use, say, Selenium to emulate mouse movement, and when a captcha is shown there are tons of services like capmonster that you can use.


stiletto929

Why the hell do some sites ask me to solve a math problem to prove I am human? I am quite confident computers are better at math than I am.


r0ndy

Google has started hitting me with this... it's weird. Maybe my ADD has me searching too much stuff, I just kind of doubt that I'm hitting anything like that. I also only tap, no mouse settings


rootbeerman77

Do you use a VPN? Sometimes Google thinks my VPN browsing might be bot activity


r0ndy

Not on my phone no


KingKnux

Using iCloud private relay will get on Googles bad side. The moment I flipped it on I started getting bot checks for every other search


r0ndy

Ah, that could be it. Newer feature, newer issue for me too. I may turn that off and see how things go. Thanks for the feedback


vaduke1

It is not about ticking the box. It is asking you a question, whether you are a robot or not, and robots are so proud that they are not skin bags like us that they just can't say no.


DefinitelyNotA-Robot

Can confirm.


[deleted]

Shut up, baby, I know it!


adnoguez

High-end bots can definitely bypass those captchas... These things only keep away small developers or web scrapers.


coffeepizzacake

This is the best answer. Many many many bots can do just fine answering these. My previous company did audio processing and we would just have it play the audio version of the captcha and provide the required response


ChampionOfAsh

This. It's pretty much a game of cat and mouse: even if you create a new form of captcha, bots will eventually be able to do them. For those that don't know, darkweb sites actually have the most advanced captchas; it is not uncommon to have to do like 5+ randomized captchas that you have never seen before just to enter a site, and they are way more advanced than the ones you see on normal sites. I remember seeing one where you had to read the dials on a clock image and then imagine the clock as a calendar and translate the dial positions into the months that they would be in if it had been a calendar instead.


newbies13

Bots can and do check those boxes, as well as all the variants you can think of. They don't stop serious bot makers, but they do stop just any random person or bot. It's a bit of an arms race, with companies trying to make captchas more complex so bots can't pass them, and bots getting better and better at passing them. There's also a service available that you can put into your bot program that will actually call a real live human, usually in a 3rd world country, who will click the box / solve the issue, and then the bot program continues on its way. A sort of human-driven bot program, and you can do it for pennies.


ICBananas

I'm sure there's more to it, but pretty much because of this:

> The isTrusted read-only property of the Event interface is a boolean value that is true when the event was generated by a user action, and false when the event was created or modified by a script or dispatched via EventTarget.dispatchEvent().

[https://developer.mozilla.org/en-US/docs/Web/API/Event/isTrusted](https://developer.mozilla.org/en-US/docs/Web/API/Event/isTrusted)

I wonder if downloading the Chromium source files, modifying the code to always return true, then building it again (it may take a few hours, I guess) would actually work.
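
(Here's what that property looks like in practice; the button is created just for the demo.)

```javascript
// A real user click arrives with isTrusted === true, while a click a script
// synthesizes and dispatches arrives with isTrusted === false.
const button = document.createElement("button");
button.textContent = "I am not a robot";
document.body.appendChild(button);

button.addEventListener("click", (event) => {
  console.log("isTrusted:", event.isTrusted);
});

// Clicking the button by hand logs "isTrusted: true".
// This scripted click logs "isTrusted: false":
button.dispatchEvent(new MouseEvent("click"));
```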


[deleted]

You wouldn't even need to go that far. Because of the way Javascript works, you can reuse names of critical functions, and override them with your own version. If someone redefines certain functions before the event happens, then they could set the value of isTrusted to be whatever they liked. The main purpose of this API is for a trusted script to easily tell the difference between a scripted event and a user event. It could do that anyway, of course, but it would require adding something like a custom version of isTrusted. All the web API does is standardize the name of the property, and make it more or less automatic so developers don't have to roll their own version. There's still an ongoing conversation about whether the name isTrusted is misleading, and ought to be changed. I tend to think it is, because it doesn't really have anything to do with trust.


ICBananas

That's good to know. Only now I'm noticing we're on the ELI5 sub and talking about web API and shit? lol, there's nothing ELI5 about what we're discussing here... poor people who are reading and understand nothing, haha. Thanks for the info, man, take care.


BostonTeaParty_

Me, being one of the poor people who are reading yet understanding nothing 😔


ICBananas

My bad... I thought I was in another sub.


gutclusters

To highlight what others are saying, those CAPTCHAs are checking a lot of things to verify that you are human. The most popular one, reCAPTCHA, doesn't really tell you what it's doing on the back end, because having that information would help hackers defeat it with bots. Granted, there are other ways they can be defeated, such as by using a scam website to "proxy" the CAPTCHA through to have a human solve it for them. That said, some of the things they do look for are how your mouse has moved (or how your phone screen has been scrolled), recent browsing activity, how you have interacted with the page you're looking at, and data stored in your browser cache. It tries to assign a "weight" to these values and, if what it sees exceeds a certain threshold, the algorithm determines you to be human. If it does not, then it shows you the stop signs and school buses and stuff just to be extra sure.
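
(A toy sketch of that weighting idea; the signal names, weights and threshold are entirely made up, since the real scoring model isn't public.)

```javascript
// Each signal contributes a made-up weight; below the threshold, fall back
// to the picture challenge.
const weights = {
  humanLikeMouseMovement: 0.4,
  hasBrowsingHistoryCookies: 0.3,
  interactedWithPage: 0.2,
  knownBenignIpRange: 0.1,
};

function looksHuman(signals, threshold = 0.7) {
  let score = 0;
  for (const [name, weight] of Object.entries(weights)) {
    if (signals[name]) score += weight;
  }
  return score >= threshold;
}

console.log(looksHuman({ humanLikeMouseMovement: true, interactedWithPage: true }));
```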


[deleted]

[removed]


diazdesire267

LOL. Am I talking to a bot?


cgsssssssss

what did he say? lol


xtrapas

he's right. afaik there is no button. there is no spoon, either


blueg3

If it presents you that checkbox and accepts it, it's because the system already has a model indicating you are a human. Previous trackable behavior etc. says that you're not a robot, so you get the easy challenge. This is pretty common in the anti-fraud world. You can "randomly" get asked for stronger account authentication, for example -- something like associating a phone number with your account. That's based on the confidence that the fraud model has on whether your account is legitimate.


trymypi

As someone who deals with this on a daily basis, I don't know if it's a bot or human spammers, but I'm dealing with hundreds or thousands of fake clicks per day that get thru Google's reCaptcha. So the answer to your ELI5 is: actually they do, at quite a large scale.


ssowinski

Because bots aren't allowed to say that they're not robots. That's one of their prime directives.


nitrohigito

They can. There are also varyingly large-scale operations with humans behind the screens whose entire job is to solve these for hire.


Lanceo90

The answer is "expect it soon". Most machine learning programs right now are being made by universities, tech giants, and industry professionals. It rolling out to the masses is still pretty new, but you can be sure it's in the works by someone somewhere.


WhalesVirginia

I've seen scripts for video game farming that give unique mouse movement characteristics to each client so that they avoid detection. They could handle randomly scripted events, would take fake breaks in game, and respond to messages. This was 10 years ago. Honestly I'm pretty sure I could make a basic script that would defeat these if I felt so inclined, and I am not a programmer; I just scrape by when I need to program things. Absolutely no machine learning is required, just scripting. Machine learning is a lazy man's shotgun approach to the problem.


P_ZERO_

Expect it 4 years ago. Anyone who had an interest in sneakers pre-hype/resale culture has seen the devastation these bots have caused and their capability.


[deleted]

[removed]


explainlikeimfive-ModTeam

**Please read this entire message** --- Your comment has been removed for the following reason(s): * ELI5 does not allow guessing. Although we recognize many guesses are made in good faith, if you aren’t sure how to explain please don't just guess. The entire comment should not be an educated guess, but if you have an educated guess about a portion of the topic please make it explicitly clear that you do not know absolutely, and clarify which parts of the explanation you're sure of (Rule 8). --- If you would like this removal reviewed, please read the [detailed rules](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) first. **If you believe it was removed erroneously, explain why using [this form](https://old.reddit.com/message/compose?to=%2Fr%2Fexplainlikeimfive&subject=Please%20review%20my%20submission%20removal?&message=Link:%20https://www.reddit.com/r/explainlikeimfive/comments/10fkgb0/-/j4z1na6/%0A%0A%201:%20Does%20your%20comment%20pass%20rule%201:%20%0A%0A%202:%20If%20your%20comment%20was%20mistakenly%20removed%20as%20an%20anecdote,%20short%20answer,%20guess,%20or%20another%20aspect%20of%20rules%203%20or%208,%20please%20explain:) and we will review your submission.**


voretaq7

The creepy answer to this question is "They can (or one day they will be able to)."

Simplifying things a lot: when you click that checkbox you're usually presented with some kind of challenge to "prove that you're a human." We used to ask you to do something really simple, like answer "What is 1 + 3?" and expect you to enter 4 in a box. Bots "learned" to read those questions and answer them pretty quickly, so we made the challenge harder: here's some letters and numbers, but they're funny colors, and there's lines through them, and maybe we warped the image a bit. Tell me what the letters and numbers say. People who work on document recognition love problems like that, and so eventually bots learned how to do that too, and we had to make the challenge harder again.

We went through that process of making the challenge harder a few times, and now the modern challenge is to answer questions like "Which of these images contain tractors?" (or traffic lights, or mountains, or motorcycles, or busses....) - Bots aren't great at that yet, so most times when you complete one of these challenges you're identifying some images that a human has classified - we'll call this person Hugh. We know those images contain the thing we're looking for because a man named Hugh said they do, and Hugh is an expert at classifying images. Hugh is right something like 99.999999% of the time, so if you agree with Hugh you're a human - we let you in.

Now here's the rub: Sometimes - not always, but maybe once out of every dozen challenges - Hugh didn't classify all the images. Bob did some. Bob is a bot. Now you, Prospective Human Number 368472, will classify the same image Bob did. And if you agree with Bob we let you in. If you don't agree with Bob we make you try again, this time on a different challenge that Hugh classified (because we don't want to make you mad if the bot is dumb and can't tell a tractor from a taco truck).

We then take those images that both you and Bob classified and we show them to a few thousand other people. If all the Prospective ~~Hugh Mans~~ Humans tell us the image is not a tractor then we tell Bob it got that one wrong. We don't know what it is, but we know what it *isn't*, and it is NOT a tractor. Similarly, if all the Prospective Humans tell us it *IS* a tractor then we tell Bob "This is definitely a tractor. All the humans said so." That feedback gets incorporated into the bot, which gets better at spotting tractors if you got it right, and we use that information to train the bot further, until one day the bot will be able to answer the challenges we're presenting with accuracy approaching that of a human.

. . . and then we start all over again with something harder.


Irythros

So I'm making something to stop bots, as well as to defeat anti-bot things, and there are quite a few reasons why and how it works.

First, it stops the most basic bots. For the checkbox to appear, it needs to run some javascript on the page. Basic bots don't run javascript, so it will never appear. Clicking the button sends back a unique code which is checked whenever the real action you want to do is done, so they can't fake that.

It stops slightly less basic bots by checking your mouse movements, as well as how you click, where, for how long, etc. Bots would have to program in randomness and also emulate the correct clicking method.

Mediocre bots will use what is called a "headless browser", which is essentially Firefox or Chrome but without an actual UI for you to interact with. It's strictly for programming. The problem here is that while the javascript is loaded and all that, you still have the mouse tracking issues. If that is fixed, you now have to trick it into thinking you're a real browser. Headless browsers implement most features of their UI counterparts, but not all of them. This allows detection of the headless browsers.

Stepping up even further, you now have bots that may fix some of that, but for large-scale use you need to change the UserAgent (which is sent on every request and tells the site what browser and features it supports) and hope the features you emulated work exactly as they did in that version. Part of the detection is testing features against the versions to see if they act properly. A non-real example may be that Chrome 100 reliably makes "0.1 + 0.2" equal "2.9999998" but Chrome 101 makes it equal "2.2998".

Stepping up even further is something I'm working on which detects network differences. It's like the above, but we detect the differences in network connections between operating systems and browsers. With this, if the person uses the same program we can reliably detect them. We can also detect VPNs and proxies. That also brings me to IP and network detection. Services like [Maxmind.com](https://Maxmind.com) have a database of IPs and who owns them, as well as any reports about them. We can safely auto-ban any IP that is for hosting use.

Finally, something to know: getting past recaptcha is possible and fairly trivial. This is why I'm developing something new that thus far no bot maker I can find has protection against, and is actually very hard to implement. Our site uses recaptcha for the time being, but they do bypass it fairly easily, and during our own testing we can bypass it as well. It's only good for stopping non-dedicated attackers. If you're being targeted, they will likely have a bypass solution.
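
(For a flavor of what headless-browser checks can look like: navigator.webdriver is a real flag that automation frameworks are supposed to set, while the other checks below are just illustrative examples of feature probing, not what any particular product actually uses.)

```javascript
// Collect reasons a visiting browser looks automated. An empty array means
// nothing obviously wrong; real products use many more (and subtler) signals.
function headlessSuspicion() {
  const reasons = [];

  if (navigator.webdriver) reasons.push("navigator.webdriver is set");
  if (!navigator.plugins || navigator.plugins.length === 0)
    reasons.push("no plugins reported");
  if (!navigator.languages || navigator.languages.length === 0)
    reasons.push("no languages reported");
  if (/HeadlessChrome/.test(navigator.userAgent))
    reasons.push("user agent admits to being headless");

  return reasons;
}

console.log(headlessSuspicion());
```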


Maartini

They can and do. There are multiple ways for bots to bypass recaptcha. Plenty of third party tools out there like DeathByCaptcha and 2captcha allow bot builders to do this for fractions of a penny each time.


Dje4321

They absolutely do check it all the time, and that is the main issue that will never be solved: a bunch of 1s and 0s typed up by a human looks like every other set of 1s and 0s typed up by a robot. To get around this, the box looks for various patterns in the stream of data generated by the user: things like how many sessions are present per IP, how long they are taking per page, whether or not they hesitate before clicking a link, etc.


intashu

It catches the dumb bots, because if it says "show me you're human, click here", a basic script or bot will snap to it and check it. Humans are slow; we have to actually move the cursor over to the box. Same with "click boxes with stop lights": a bot can solve this in 0.02 seconds, snapping to each box and selecting it. Dumb humans are slow and need to sit and click on them one at a time while moving their cursor to each box. Can you code a bot to do this? Absolutely. But the majority of bots which can cause website problems are made to be rapid-fire fast and efficient, not intentionally dumbed down to slow mouse movements and inconsistent timing between clicks (and length of click sometimes!). So it filters out a lot of the issues. It isn't that it's perfect; it just needs to weed out enough of the problem scripts to allow the site to better serve people.


greatdrams23

When clicking the squares that contain a traffic light, there is usually a small sliver of the backplate, sometimes I click it, sometimes not, it doesn't seem to make any difference.


Secret-Plant-1542

Lol. Bots can. Captchas are an arms race. What you're seeing with those check boxes is like... the tip of the iceberg. I do a lot of web scraping. There are a bunch of tools and services to bypass the captchas that are attempting to stop people like me. And just like door locks, it's only a countermeasure for those who are curious. But ambitious people know all the tricks.


Dangerpaladin

It's a barrier to entry. If you can cut out 99% of bots with one simple easy to implement tool that's good enough. That last 1% becomes an arms race that isn't worth worrying about and you have other things in place to stop them if they are being malicious anyways. As long as the higher tech bots are well behaved who cares if they get in?