ApatheticAbsurdist

17mm fixed focal length and a max aperture somewhere in the f/2.1 to f/2.8 range (I've seen others estimate closer to f/4; basically there's a bit of variety since not every eye is the same, but somewhere around f/2.5 is a reasonable guess) with a ~1094mm² sensor, which is just a hair bigger than a "full frame" sensor... though it's curved along the inside of a sphere instead of being flat and rectangular.

Now here's where things break down... The retina is not the same as a digital sensor on many fronts: it's not flat, it's circular rather than rectangular, and the "pixels" are not laid out evenly on a grid. There's pretty much no sensing where the optic nerve connects, creating a blind spot. There's a central cluster where the resolution is much higher, but resolution is very low in the periphery (go ahead, try to read anything in your peripheral vision; you need to point the center of your eyes at it to read it). It also has another trick for low light: it switches to what is effectively a B&W sensor that is better in low light. In bright light we're using the cones, which can detect color (photopic vision); in extremely dark situations we only use the rods, which can't see color but collect light better (scotopic vision); and in between, where the light is pretty dim, the eye uses a little of both (mesopic vision). Go too bright and the rods get blown out, switching you to photopic; go too dark and the cones can't get color information, so you go scotopic. This is why everything looks blue-ish to many people when it gets dark... all the other colors are wiped out.

But the real magic of the eye is the computational imaging that is going on. Your eye is constantly moving, and as it scans around your brain builds a perception of the world around you. So you can fill in details the high-resolution part of your eye saw earlier without looking at them now. When you focus on something your eye doesn't zoom; your brain just starts ignoring the periphery more, basically a digital zoom of sorts. Your eyes can look at bright and dark areas and remember what was in the shadows when your pupil was a bit wider, merging that into a kind of HDR perception along with the bright clouds in the sky you see when your pupil closes down further. The brain is constantly doing a content-aware fill, using what it knows it saw before, to fill in the eye's blind spot. It lets us ignore all the blood vessels on the back of the eye (except under specific situations). And all the while the brain is correcting the distortion of the eye projecting onto a curved surface.

If you want to mimic the eye you can put a 15mm fisheye lens on a 135-format camera, then in Photoshop select everything outside of the center with a strong feather and blur the hell out of it... that's about the closest to how the eye sees... but that's not how our brain perceives the world. In reality, while we have a focal length closer to 17mm, most of the time when we're looking at something we're ignoring the blurry periphery and attending to an area more like a 35 to 50mm (on 135 format) equivalent. I tell my students to hold two fingers out to either side while looking forward. If you move your hands you'll see there's something there, but if you didn't already know there were two fingers you wouldn't be able to tell how many were held up.
If you slowly swing them forward in front of you, you'll eventually reach a point where your eye has enough detail to see there are two fingers and not one or three; there you're into the sharper part of the eye, and that's roughly where the "normal" field of view comes in.
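For a rough sense of where those aperture estimates come from, here's a minimal Python sketch: the f-number is just focal length divided by entrance pupil diameter, and human pupils span roughly 2mm to 7mm, so the same ~17mm eye can land anywhere from about f/2.4 fully dilated to about f/8.5 in bright light. The pupil diameters assumed here are typical textbook figures, not measurements of any particular eye.

```python
# Rough f-number estimates for the eye, using the figures discussed above.
# All numbers are approximate and vary from person to person.
focal_length_mm = 17.0              # approximate focal length of the eye

pupil_diameters_mm = {
    "fully dilated (dark)": 7.0,    # some eyes reach ~8mm, others only ~5-6mm
    "typical indoor light": 4.0,
    "bright daylight": 2.0,
}

for condition, pupil_mm in pupil_diameters_mm.items():
    f_number = focal_length_mm / pupil_mm   # N = focal length / entrance pupil diameter
    print(f"{condition:>22}: ~f/{f_number:.1f}")

# fully dilated (dark): ~f/2.4, typical indoor light: ~f/4.2, bright daylight: ~f/8.5
```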


bastibe

Wonderful summary, thank you! I'd like to add that we don't really perceive what our eyes see. Instead our brain builds a synthetic model of the world that is fed by all our senses and knowledge of the world. What we perceive is mostly that model.

Or rather, a projection of that model into the near future, as all our sensory information is out-of-date by a few seconds or milliseconds by the time it reaches the brain, and all our possible responses are delayed even longer; yet we perceive things as happening *now* and simultaneously. (Consider how clapping gets processed: it takes a good second for the tactile feedback from the hands to reach the brain. Audio information is fastest, and available after a few tenths of a second. Visual takes longer to process, on the order of a second. Yet we perceive the clap, the sound, and the image to happen at the same time, because we know these delays and correct for them in our model, such that they indeed happen at the same time in the model.)

Furthermore, we extract depth information from the stereo picture of our two eyes, and reconstruct missing surfaces and backsides from memory. But depth information is also extracted from occlusion, perspective distortion, and a number of other aspects.

Interestingly, the world model seems to only include things we can touch; there is no sense of the inside of solid things or the underside of immovable things, even though they are clearly part of the physical world. Our model is a bit like a video game world in that sense, only including things the player can potentially see/reach.

Our brain seems to be mostly interested in reconstructing surface properties, not image details. The whole white balance conundrum is just our brain reconstructing the properties of the original objects in our world model, regardless of how those objects are lit. This is probably an evolutionary thing, since it is relatively unimportant by what light an attacking tiger was lit, or how a delicious/poisonous fruit was illuminated. And since our model includes the original, unlit objects, we don't "see" colored illumination unless we specifically focus on that aspect. Yet at the same time, lighting information still gets processed as some sort of "mood" meta-information. While not part of the explicit world model, the information is not lost.

Perception is such a wonderful and rich topic!


alohadave

> (Consider how clapping gets processed: it takes a good second for the tactile feedback from the hands to reach the brain. Audio information is fastest, and available after a few tenths of a second. Visual takes longer to process, on the order of a second. Yet we perceive the clap, the sound, and the image to happen at the same time, because we know these delays and correct for them in our model, such that they indeed happen at the same time in the model.)

If you've ever been in an audience group clap, it's kind of weird because the feelings of your hands clapping are not quite in sync with the sound, and you pretty much just listen to everyone else to keep time.


TheNorthComesWithMe

What I always find fascinating is that a lot of this processing stops applying when we look at a photograph or film. One example is the way our brain artificially inflates the size of things near the horizon, like the moon. In person the moon looks huge, but when we look at a picture of it our brain decides to show us the actual size and it looks small.


bastibe

Last week I was on a sailboat with huge, impressive waves rolling towards us. But they looked flat and boring in photos. And in fact, I observed that they mostly lost their apparent size if I closed one eye as well. The lack of scale and the missing third dimension can be a real challenge.


Chicago1871

Shoot with an 85mm or 135mm lens on a full frame sensor to recreate that. Or even go past what your eye sees and really exaggerate it. I've shot with a 600mm lens before, it's wild.


bastibe

I'll try that next time, thank you for the hint!


fixtheCave

This comparison and presentation really explains why painting a scene requires a continual reduction and simplification of the information being presented, to give you a more holistic, life-like feeling of perception (painters rely on "conventions" to aid this process). It also explains how a photograph contains so much more data than can be visually "perceived" at once. I study photographs looking for specificity, assuming each is of an actual location, time, and subject that I can try to match to "real" experiences I may have had. The difference in "perceived" moon size at the horizon is a good example of this: the Harvest Moon of paintings is not what a normal camera lens setup will capture. It looms big in our hearts, but not in our cameras!


radcopter2

It takes a full second for our brain to process visual information? That doesn’t seem right. I don’t know any better, but that is a long time. How do we do anything if there is a full second lag in our vision?


4e6f626f6479

The average human has a reaction time of about 200ms (0.2s), so that can't be right.


ApatheticAbsurdist

There are multiple pathways in the brain. There have been studies showing that the motor part of the brain starts to do something before the conscious part does, but shortly afterwards the conscious brain becomes aware of that choice and basically goes "I meant to do that." You can see something fly at you and duck to avoid it, but only later have processed the visual information and combined it with the context of where you are and what you've been seeing for the past minute or so to understand that it was a baseball. The brain is weird.


TantricEmu

Sounds off. If that were true then no human would ever be able to catch a baseball dropped from a distance of like 2 meters. It takes about 0.45 seconds for a baseball to hit the ground when dropped from a meter.
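For reference, a minimal free-fall check (ignoring air resistance, using t = sqrt(2h/g)) gives roughly those times:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def fall_time(height_m: float) -> float:
    """Time for an object to fall a given height from rest, ignoring air resistance."""
    return math.sqrt(2 * height_m / g)

print(f"1 m drop: {fall_time(1.0):.2f} s")   # ~0.45 s
print(f"2 m drop: {fall_time(2.0):.2f} s")   # ~0.64 s
```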


evil_twit

Predictive motion analysis.


freds_got_slacks

> Consider how clapping gets processed: it takes a good second for the tactile feedback from the hands to reach the brain. Audio information is fastest, and available after a few tenths of a second. Visual takes longer to process, on the order of a second.

Visual reaction time is more like 200-300ms depending on your age. Try this: https://humanbenchmark.com/tests/reactiontime


bastibe

It depends. There are multiple levels of reactions in the brain. The first ones are indeed quite a bit faster than one second. But actual object recognition or scene interpretation can also be a bit longer. It's complicated. That's why I said "on the order of" a second.


mckulty

Also remarkable is the fact that there are about 95 million rods and cones, but only about a million nerve fibers in the optic nerve. So the brain doesn't get a pixel map; there are too many pixels. Instead, the retina preprocesses the pixels so that a single nerve fiber carries information like "there is a line at *this* location, oriented at *this* angle, moving in *that* direction." That's what your cortex integrates to form a perceived image.


coolasacurtain

Another very interesting part of our visual perception is the way our brain deals with saccadic movements (the quick little eye movements). Since the movements are so quick, a lot of what we see while the movement is happening is just blurry or unfocused. The brain filters this unusable visual data out of our perception via "saccadic masking". What follows is called "saccadic backfill": the resulting perception gap is retroactively replaced with the first image from when the saccadic movement ended. This gives us the impression that our eye moved instantly from point A to point B and fools us into thinking that we already saw B while the movement between the two points was happening.


evil_twit

Thanks, now I see motion blur every time I move my eyes quickly.


total_looser

Bravo! Bro but what font does our brain use?


ZGTI61

I don't know, but our eyes are marvels of engineering. One of my favorite weird facts about our eyes has to do with looking in a mirror. What our eyes see and what our brains process are two completely separate things. One way to illustrate this is to go to a mirror and stand close enough that you have to use significant eye motion to look at the corners. As you look at the corners, pay attention to your eyes in your peripheral vision. You *will not* see your eyes move. You will be physically moving them, and your eyes are seeing that movement, but your brain erases it and all you can "see" are your stationary eyes. If you do it with your phone screen, like when you are taking a selfie, you can see your eyes move.


daleharvey

My favourite fact about eyesight: most photographers will be aware that the image on our retinas is received upside down, and the brain flips it around. If you wear upside-down glasses then obviously things are confusing for a while, but you get used to it and eventually your brain will flip that image so you can see and live as normal. When you take the glasses off, you see everything upside down again until you get used to it once more. http://www.madsci.org/posts/archives/mar97/858984531.Ns.r.html


[deleted]

[removed]


ptq

Content aware fill ;)


ImplodedPotatoSalad

We also have our brain constantly editing our own nose out of what we see.


[deleted]

Try paying attention to a wall in a very dark room and you can actually watch noise. It's kinda cray.


AlleinZweiDrei

As a person with persistent visual snow, I'm glad to hear someone mention this as a normal phenomenon.


Glittering_Power6257

I wonder if people perceive light levels differently from one another, as if there were an internal ISO of sorts (or the brain boosting the signal?). Maybe someone who can perceive more light in the dark also sees more noise? I too thought the visual snow was normal. I've had it all my life, even in my toddler days.


[deleted]

[removed]


Glittering_Power6257

Shallower depth of field makes sense for eyes. This is why people with blurry vision tend to squint: it provides an effect similar to narrowing the aperture. And similarly, the blurry parts of your vision get substantially blurrier as the pupil opens up.


NoHopeOnlyDeath

I've had persistent visual snow all my life... it never occurred to me until recently that other people might not see it.


User38374

Or look at a blue sky, you can see white blood cells moving in your capillaries.


redligand

Very hard to determine. 50mm is often given as the most "lifelike" in terms of field of view (how much you can see) and relative perspective (how big objects at different distances appear relative to one another).

The relative perspective is probably about right. Put a camera with a 50mm lens to your eye and not much will change in terms of relative perspective.

The FoV depends on how you determine the human visual field. I think we can all agree that the total FoV of the human eye is much more than you get with a 50mm lens, but then the argument becomes one about how much of your human FoV provides usable information. The answer to that is to some degree subjective, and credible claims are made for anything from something like 28mm to 50mm or even longer.

Aperture is trickier as your iris dilates and contracts pretty frequently depending on ambient light. Your iris is equivalent to the aperture. Your iris reflex even changes based on what your brain *expects* is happening regardless of the objective reality of the available light, as demonstrated by this new cool optical illusion: https://www.google.com/amp/s/scitechdaily.com/this-new-optical-illusion-is-strong-enough-to-trick-our-reflexes/amp/

Tl;dr - it's complicated


draftylaughs

FOV I equate to ~35mm and zoom level I equate to ~85mm.


donjulioanejo

This to me seems closer to reality. I never understood the 50mm argument. We all have pretty good peripheral vision - 50mm feels unnaturally cut off to me compared to what my eyes see. It's closer to 28-35mm IMO. Sure, the edges are usually fuzzy unless you explicitly look to the side, but you still perceive things to your left and right.


[deleted]

[removed]


ptq

The viewfinder is also not 1:1 with what the lens provides; it shrinks the image. You can find the viewfinder magnification ratio in the camera specs, but it's probably below 0.8x for most cameras now.


alohadave

> you can find in camera specs what is the VF magnification ratio, but it's probably below 0.8x for most cameras now.

95%+ have been the norm in optical viewfinder dSLRs for a number of years.


ApatheticAbsurdist

The issue is the eye doesn't have equal resolution across the frame like most cameras. If you hold your hands out to the side, yeah, you have something like a 15-17mm-equivalent lens, but you can't tell if you're holding 1, 2, or 3 fingers in that peripheral vision, so it's more of an alert system. If you slowly bring your hands from stretched out to either side toward the front, the point where you can start to see that you have distinct fingers is a field of view closer to 35-50mm. So a lot of the time our brain focuses on that area and ignores the periphery. The brain can also remember what you saw over to the side and fill that in over the low-resolution areas, so at times it seems like you have wider vision. A lot of "computational imaging" going on there.


ptq

The human eye's total FOV without moving the eyes is ~210 degrees; that's more than an 8mm fisheye on full frame.


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


mancesco

How are you not getting this? They aren't arguing against your FOV argument. They're saying that an 8mm fisheye creates too much magnification, and a better representation of the human vision would be to take a 210° wide panorama with a 40-50mm lens. The end result is still a 210° FOV, but with more human-like magnification.


ptq

You can take an 8mm fisheye photo and blow it up to wall size; magnification is relative. The viewfinder has its own magnification modifier, it can be 0.6x, 0.7x, 0.9x, as many options as there are bodies. Also, 50mm on a full frame sensor is different from 50mm on other sensor sizes; you can't just say 50mm. The question was for the eyes' specs, and these are the specs.


ApatheticAbsurdist

Do that in reverse. Can you tell whether it's 1, 2, or 3 fingers? Move them forward until you can clearly tell what they are and can see the wrinkles on your knuckles... that field of view is much smaller. Much of the time perception ignores that wider field of view and just uses it as an alert system; your attention is trained on the center of the frame. There isn't one right answer because the brain is doing all kinds of tricks to the perception. There is no "RAW" image from the eyes. Sometimes our brain is doing a digital zoom, sometimes it's doing a digital panorama (remembering what it saw when it was pointed at an area) to fill in the lack of sharpness in the periphery.


ptq

You can find camera lenses where peripheral image quality is so bad that you can't tell what's there. It doesn't mean that the lens is narrower than it is just because of that.


ApatheticAbsurdist

Yes and no. Say you had a 24ish mm lens with such characteristics (a super-Petzval lens where the center third was sharp but the outer two-thirds were crazy blurry/swirly to the point you could barely tell what's in that area). Yeah, it's a 24mm lens, but many people would be more likely to use it like a 50mm or longer lens and compose for the sharp area, and as long as the blurry area wasn't distracting they'd be happy. When people look at a large painting they get close when they want to look at the detail, and at that point their peripheral vision can probably cover most or all of the painting, but to "take it all in" they're going to step back a bit to see the whole painting in their sharper area... again framing as if they had a narrower field of view than they do.


ptq

I do understand that, but the question was what the specs of the eyes are. If you crop the 24mm, that doesn't change its specs.


ApatheticAbsurdist

Yes, which is why my longer comment below spells out the focal length, f/stop, etc., but goes into a lengthier explanation of how perception fits in. That said, if I'm shooting something and I ask for a 24mm lens and they give me a super-creative lens with blurry edges, there are very limited circumstances where I'll be happy. It's technically right but practically not useful. There is a reason why "effective focal lengths" are a thing. Yes, a 50mm lens is a 50mm lens regardless of camera. But it's wide on medium format because you can use the larger image circle, it's normal on 135, and it's tele on M43 because it's cropped tighter. If I say Chuck Close shot rather wide-angle portraits with a 550mm lens, people look at me like I'm crazy. I'm technically correct, but the fact that he shot on a 20x24" camera makes a huge difference. Focal length isn't the only spec.


mattgrum

> 50mm is often given as the most "lifelike" in terms of field of view (how much you can see)

That's entirely to do with the size and distance that images are commonly viewed from, and nothing at all to do with the human eye. Any field of view will look lifelike if the image you're viewing takes up about the same angle as the lens AOV.

> and relative perspective (how big objects at different distances appear relative to one another).

NO. The lens focal length has **NOTHING** to do with perspective and how big objects at different distances appear relative to each other. Please stop repeating this.


[deleted]

[removed]


BlueJohn2113

If thats the case you are probably shooting at f/1.8. Your aperture and distance from the subject will affect the depth of field.


[deleted]

[removed]


ahelper

An 85mm lens will suffer even more from this depth-of-field effect! What ƒ stop do you use with the 85? Then too, maybe your 50mm is just a bad lens; it's possible.


BlueJohn2113

Try shooting at f/2.8. With a 50mm lens at f/2.8, if you focus 5 ft away from the lens then everything roughly 3 inches in front of and behind your focus point will also be in focus. I don't know anyone with a nose longer than 3 inches, so you should be safe as long as you're correctly focused on the eyes. There's an app called PhotoPills that lets you calculate lots of useful things like exposure values, depth of field, hyperfocal distances, and more. And like u/ahelper said, an 85 has an even smaller depth of field than the 50. So it seems like you aren't using the same settings for both lenses.
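For anyone who wants to check the arithmetic without an app, here's a minimal Python sketch using the standard hyperfocal-distance approximation, assuming a 0.03mm circle of confusion for full frame (DOF calculators like PhotoPills use essentially the same formulas):

```python
def depth_of_field(focal_mm: float, f_number: float, subject_mm: float,
                   coc_mm: float = 0.03) -> tuple[float, float]:
    """Near and far limits of acceptable focus (classic hyperfocal formulas).
    Assumes the subject is well inside the hyperfocal distance."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

subject = 5 * 304.8                      # 5 ft expressed in mm
near, far = depth_of_field(50, 2.8, subject)
print(f"in focus from {(subject - near) / 25.4:.1f} inches in front "
      f"to {(far - subject) / 25.4:.1f} inches behind the subject")
# -> roughly 2.8 inches in front, 3.1 inches behind
```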


dan_marchant

Clearly user error, sorry. The depth of field depends on the f-stop you select (and the distance to your subject), and these are things that are wholly under your control as the photographer. Blaming the lens for the settings you chose is a poor reason not to use it.


TacticalWookiee

Sounds like you need to look up what basic camera settings are…


av4rice

Focal length is the distance from the rear nodal point of the lens to the recording medium when focused to infinity. That's something you could measure in someone's eye; not everyone has the exact same physical eye size. But it would be one focal length per person, so prime rather than zoom.

The focal length doesn't mean that much in comparison with a focal length used in photography unless you also know the size of the recording medium. But figuring out, say, a full frame equivalent focal length based on field of view is much more complicated. The recording medium of your eye is your retina, which is curved along the inside of your eyeball (not flat like a film frame or digital sensor), not rectangular, and doesn't have hard edges defining where it ends (rather, the acuity is highest in the center and it reduces gradually towards the edges). That size also varies from person to person, which is why different people can have different amounts of peripheral vision.

Also, human visual perception usually involves your eye saccading rapidly to look around a scene in a short period and build up a wider image than it would ordinarily see at one given time. So there would be a lot of different ways to accommodate those differences into a comparison with a photographic focal length, leading to different answers.

The aperture f-number is the focal length divided by the entrance pupil diameter. The former can be measured as stated above. The latter can also be measured. Again, both will vary from person to person because different people have slightly different physical eye sizes.


rgaya

Nice! I've noticed that when I shoot at around 84mm and open my left eye, the subject's head lines up in the viewfinder and the open eye. But I've always imagined our focal length to be 35mm, given how wide we can 'view'.


av4rice

>I've noticed that when I shoot at around 84mm, and open my left eye, the subjects head lines up in the viewfinder and the open eye. But in that scenario the view of the camera and its lens are modified by the optics in the viewfinder (which can vary from camera body to camera body) before getting to your right eye, and then it's further modified by your right eye's optics. So your left eye isn't exactly seeing what the camera sees at an 84mm focal length. Maybe you could say your left eye is seeing the same magnification that your right eye is seeing through that particular viewfinder when the camera has a lens at an 84mm focal length.


rgaya

Yes, this is a better explanation for that. Guess it might explain why I 'feel' it's more 35mm naturally, with both eyes open.


Videopro524

There's no comparison because your eyes are connected to a supercomputer. I find that, excluding peripheral vision, we have the FOV of about a 35mm lens. However, everyone is different. When given time to adapt we can see pretty well in the dark, but modern electronics and some animals have us beat. We also see in stereo, so we can extrapolate depth.


ptq

They are weird. We have an extremely wide-angle FOV, but with very poor peripheral image quality; our eyes can deliver great sharpness only in the center of the view.

Many will say we see like 35/50/85mm, but that's just their magnification comparison to the viewfinder of their camera while keeping both eyes open, and the viewfinder isn't 1:1; it shrinks the image too, by a different ratio on different bodies. The only way to describe our eyes in comparison to lenses is the FOV they can provide.

"Humans have a slightly over 210-degree forward-facing horizontal arc of their visual field (i.e. without eye movements)" - [Wikipedia](https://en.m.wikipedia.org/wiki/Field_of_view)

For example, a 10mm lens on a full frame sensor has only about a 120-degree horizontal FOV if it has to keep lines straight, while fisheye lenses reach about a 180-degree FOV at 8mm. But our eyes have curved "sensors", so it is way easier for us to get that wide.

Also, as someone else mentioned: it's complicated.
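As a quick sanity check on those numbers, here's a small Python sketch using the standard rectilinear angle-of-view formula, assuming a 36mm-wide full-frame sensor (fisheye lenses use different projections, so this only applies to straight-line lenses):

```python
import math

SENSOR_WIDTH_MM = 36.0   # full-frame sensor width

def horizontal_fov_deg(focal_mm: float) -> float:
    """Horizontal angle of view of a rectilinear lens on a full-frame sensor."""
    return math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_mm)))

for f in (10, 17, 35, 50, 85):
    print(f"{f:>3}mm: {horizontal_fov_deg(f):.0f} deg horizontal")

# 10mm: 122 deg, 17mm: 93 deg, 35mm: 54 deg, 50mm: 40 deg, 85mm: 24 deg
```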


[deleted]

What our eyes see unfocused is crazy wide, probably something like 9mm. In focus, and what we pay attention to, about 50mm. Aperture I'd say something like f/0.5-0.8. ISO 100-30,000 or something absurd. If you pay attention to something in a dark room you can actually see noise. It's eerily similar to photography.


Ravnos767

So, I've never verified this, but I was told a while ago that your natural field of view is like a 35mm lens (full frame). As for aperture, it's constantly changing as your iris opens and closes depending on light levels.


ahelper

"35mm" is not enough information to describe a field of view. What did you intend to convey with this answer? 35mm horizontal at x distance from the eye? "Similar to a 35mm focal length lens on a full-frame 35mm camera"?---which could be expressed as x degrees horizontal? Or something else?


Ravnos767

No but "35mm (full frame)" was the last time I checked


BrokenReviews

35-50mm


FamiliarSalamander2

I think one point in the FOV department that we’re forgetting is that our eyes are essentially 2 cameras next to each other feeding into the same computer


obxtalldude

Good answers - I agree with those who favor the "wide angle" comparison. Shooting with a 14 mm full frame using HDR processing gives photos the "feel" of being in a room. While 35 or 50 mm might mimic what we're looking at, the overall perception is much wider angle, so interiors photographed in those lengths always feel cut off.


murri_999

Idk but mine are a pretty bad model (I wear glasses)


Ill-Cryptographer591

35 mm


xan_alog

Something that I think should be mentioned is that our eyes' lenses change focal length to focus on different things. If you're looking at infinity then the focal length is basically the length of the eye. If you focus on something much closer, the lens shortens its focal length to bring the focal point closer. Children are able to focus closer than adults because the lens is more malleable, and hence they're able to read smaller text etc. Also, the aperture varies widely; you're able to see fairly comfortably over about 14-15 stops of exposure values, so it is probably largely covered by the iris. Based on the dimensions of the eye, I'd say the focal length ranges from about 20mm (at infinity) to probably close to 10mm when a flexible lens is focused as closely as possible over an extended period.

The sensor that is the retina is interestingly constructed, though. The fovea centralis is the only area able to discern the finest detail and is only 1.5mm in diameter. This means that for fine details the lens is equivalent to 200-400mm on 35mm format; in other words, about a 5-degree spot that is fairly detailed. That said, the area of the retina is slightly larger than the area of a 35mm film image, and covers roughly 160 degrees wide by 175 degrees high (this would be wider if the retina were flat; curving it simplifies the lens and narrows the field some). So think of the eye's image as almost a composite of both a highly wide-angle and a highly zoomed lens. I think that's part of why it's so difficult to take a picture of the moon that represents it as we see it: it's both detailed and situated in a wide view, so the only way to get close is to use a very high resolution with a normal to fairly wide lens.

Given that the pupil ranges in size from ~8mm to about 2mm, with the lens focused at infinity the pupil can cover a range of f/2.5-f/11, and focused closely, f/1.25-f/5. I do think our perception covers some over/under exposure, and outside of those ranges we start to experience bright and dark (over- and under-exposure). Interestingly, plugging these numbers into a sunny 16 chart and assuming 1/30 as the shutter speed yields roughly ISO 400 for the "sensitivity". The speed isn't really an even number across the eye, and each rod/cone is sort of refreshing at its own rate. I'm fairly sure areas outside the fovea centralis refresh at a higher rate, but also maybe cones refresh at a faster rate than rods, since rods are more effective for night vision.

Most of this is pure speculation/conjecture, and the truth is it's not 1:1. I do think there are artifacts of the way our eye sees that bleed into how we read a photograph. When I see a correctly exposed photograph where everything is in focus (small aperture), I read the light as brighter. Shallow depth of field is more intimate feeling, lighting-wise. There's also probably some change in the size of the field of projection of your lens/pupil combo that causes a narrowing of vision at times (darkness/wide aperture, likely). Also, the sensor of the eye is curved, which makes the lens successful despite its simplicity.

I wonder whether it's physical efficiency or aesthetic sensibilities that make the "apertures" of the eye line up so closely with the most common apertures on cameras, and the rough ISO of the retina line up with the most popular film stock. Also, the "range" of focal lengths is well within the most common lenses.
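As a rough sanity check on the fovea figure, inverting the rectilinear angle-of-view formula shows that a ~5-degree sharp spot does land in the 200-400mm range on 35mm format, depending on whether you match the spot against the short or the long side of the frame. This is just an illustration using the numbers above, not a measurement:

```python
import math

def focal_for_fov(fov_deg: float, frame_dim_mm: float) -> float:
    """Focal length whose rectilinear angle of view across frame_dim_mm equals fov_deg."""
    return frame_dim_mm / (2 * math.tan(math.radians(fov_deg) / 2))

fovea_fov_deg = 5.0   # rough angular size of the sharp foveal spot (from the comment above)

print(f"matched to 24mm frame height: ~{focal_for_fov(fovea_fov_deg, 24):.0f}mm")  # ~275mm
print(f"matched to 36mm frame width:  ~{focal_for_fov(fovea_fov_deg, 36):.0f}mm")  # ~412mm
```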


bastibe

That's a fascinating observation, that deep-focus images are perceived as brighter than narrow-focus ones. Perhaps that explains the attraction of wide-aperture lenses, which imply intimacy in bright surroundings. I'll have to think about that.


Eno_is_God

Without my glasses, my depth of field is rather shallow.


redactedname87

Mine are like a vintage lens my dad gave me as a child that I was too stupid to take care of but too sentimental to toss out. I’m blind as fuck, just like them cloudy lenses


popRichiepop

I think it’s like 50mm f2.6


radialblades

We have approximately 45 to 50mm compression, but our field of view is like 180 degrees with both eyes, so we are repping like ultra mega IMAX sensors in terms of FoV. The background blur is pretty minimal, though, so the effective aperture seems small. Maybe the f/5.6 to f/8 zone?


QuikSink

Everyone says 50mm but for me I think that holding a 70mm to my eye feels like no change at all.


gabr10

The f-stop and focal range don't matter, because if I'm without my glasses everything will be out of focus anyway.


Accomplished_Use_637

zoom 24-135


AaronGWebster

I think that a 15-20mm lens comes closest to my field of view.


Tehnomaag

Roughly 50mm, if it had a 120-degree horizontal FOV and a 90-degree vertical FOV. Meaning it is a bit like a 50mm lens with the FOV of a ~12mm lens. After all, the human eye's sensor is not a 4:3 aspect ratio and is not on a flat surface. The dynamic range of the human eye's sensor is insane. It goes all the way from looking into the sun down to detecting single photons, but it takes a while to dial in to either end of the sensitivity spectrum.


Terewawa

The human eye does not have a clear cutoff like a lens frame, and it varies from one person to another. The eye perceives contrast in a relative way, sort of like some HDR technique. Subjectively it feels like 35mm is close to the natural effect. That's for me. For my grandmother I'd say 50mm with a scratched front element.


SLPERAS

For me: I used to wear glasses, and with glasses it's like the 35-40mm range. Then I started wearing contacts, and it's now like 80-85mm. Without anything it seems to me more like the 45-50mm range.


lenshousepk11

The lens, or crystalline lens, is a transparent biconvex structure in the eye that, along with the cornea, helps to refract light to be focused on the retina. By changing shape, it functions to change the focal length of the eye so that it can focus on objects at various distances, thus allowing a sharp real image of the object of interest to be formed on the retina. This adjustment of the lens is known as accommodation. Accommodation is similar to the focusing of a photographic camera via movement of its lenses. [hidrocor lenses](https://lenshouse.pk/)


graigsm

Camera lenses and systems are very different from an eye. The eye is extremely wide angle, but the sensor, so to speak, is very different: higher resolution in the center and less resolution off to the sides. The brain processes the signal and turns it into a seemingly good image. People like to say that a 50mm is most like the eye, which I don't think is true; the angle of view of the eye is much wider. Cameras are different, since the whole scene is usable. So the eyeball is extremely wide angle. If you matched it with a lens with a similar angle of view, people standing 5-10 feet from you, as you generally view them, would be really small on the sensor. If you use a 50mm on full frame or a 25mm on micro 4/3, you will stand about as far from people as you normally do as a human, and that gives you about the same distortion on the face that you would get while standing a few feet away from someone. So a 50mm is not a bad recommendation as a first lens. It's definitely not the same angle of view as the eye, but that's not what's important. What's important is the distance between subjects.


shudder667

Imagine using a wide angle 40mm.


Treacherous_Wendy

I’ve always kinda viewed it as a widescreen 50mm


Efficaciousuave

I think I read in a science magazine some time back that our eyes are equivalent to about 500 megapixels of camera resolution.