Forums > Photography Talk > Size of camera sensor light receptors?

Photographer

Mad Hatter Imagery

Posts: 1669

Buffalo, New York, US

What is the size range of camera sensor light receptors? Basically the equivalent of a pixel, which I assume includes one of each primary color.

And how much space between these receptors?

Feb 24 24 07:44 pm Link

Photographer

The Other Place

Posts: 556

Los Angeles, California, US

Why do you want to know?

See here.

Feb 24 24 08:24 pm Link

Photographer

R.EYE.R

Posts: 3436

Tokyo, Tokyo, Japan

Pixel Pitch?
Varies between sensors and manufacturers.
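The spacing the OP is asking about is usually quoted as "pixel pitch" - sensor width divided by horizontal pixel count. A quick sketch (the sensor dimensions below are typical round figures, not from any particular camera's datasheet):

```python
def pixel_pitch_um(sensor_width_mm, horizontal_pixels):
    """Return the center-to-center photosite spacing in micrometres."""
    return sensor_width_mm * 1000.0 / horizontal_pixels

# A 24 MP full-frame sensor (36 mm wide, ~6000 px across):
print(round(pixel_pitch_um(36.0, 6000), 2))   # 6.0 (µm)

# A 24 MP APS-C sensor (~23.5 mm wide, same 6000 px across):
print(round(pixel_pitch_um(23.5, 6000), 2))   # 3.92 (µm)
```

Most current sensors land somewhere in the roughly 1-9 µm range, with phone sensors at the small end and full-frame / medium format at the large end.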

Feb 25 24 07:35 am Link

Photographer

Mad Hatter Imagery

Posts: 1669

Buffalo, New York, US

The Other Place wrote:
Why do you want to know?

See here.

I am just curious how many photons are needed to trigger a light receptor, and what percent of photons miss any target on a sensor chip. I think our only images of the black hole at the center of our galaxy were constructed using radio telescopes spread wide across the planet. I want to visualize how much light information we miss with the receptors as close to each other as we can make them.
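For a sense of scale on the photon-counting side, a single photon of visible light carries a tiny amount of energy, so even dim exposures involve enormous photon counts. A back-of-envelope calculation from the physical constants (illustrative only - real per-photosite counts depend on aperture, exposure time, and quantum efficiency):

```python
# Back-of-envelope: energy of one photon of green light (~550 nm).
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_m):
    """Energy of a single photon: E = h*c / wavelength."""
    return h * c / wavelength_m

e_green = photon_energy_joules(550e-9)
print(e_green)          # ~3.6e-19 J per photon
print(1e-12 / e_green)  # even one picojoule of green light is ~2.8 million photons
```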

Feb 25 24 07:39 pm Link

Photographer

The Other Place

Posts: 556

Los Angeles, California, US

Mad Hatter Imagery wrote:
I am just curious how many photons are needed to trigger a light receptor, and what percent of photons miss any target on a sensor chip. I think our only images of the black hole at the center of our galaxy were constructed using radio telescopes spread wide across the planet. I want to visualize how much light information we miss with the receptors as close to each other as we can make them.

Hmmm... it would have been nice if you had given such detail in your OP.

In this case, it's likely more effective to do research outside of MM.  Also, you should be aware that sensor shifting techniques can fill the gaps between photosites, yielding a higher resolution than that inherent in the sensor, and, thus, giving more "light information."
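A toy model of why 4-shot pixel shift helps: moving the sensor by one photosite in each direction means every image location gets sampled through the red, green, and blue filters of an RGGB mosaic over the four exposures. Sketch only - real implementations also use half-photosite shifts for extra resolution:

```python
# Toy model of 4-shot pixel shift on an RGGB Bayer sensor.
tile = [["R", "G"], ["G", "B"]]             # the repeating 2x2 RGGB tile
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]   # the four one-photosite offsets

def colours_seen(row, col):
    """Return the set of filter colours that sample (row, col) over the 4 shots."""
    return {tile[(row + dr) % 2][(col + dc) % 2] for dr, dc in shifts}

# Every location ends up with real R, G and B samples - no interpolation needed.
print(colours_seen(0, 0))
print(colours_seen(1, 2))
```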

Feb 26 24 06:49 am Link

Photographer

Mad Hatter Imagery

Posts: 1669

Buffalo, New York, US

The Other Place wrote:

Hmmm... it would have been nice if you had given such detail in your OP.

In this case, it's likely more effective to do research outside of MM.  Also, you should be aware that sensor shifting techniques can fill the gaps between photosites, yielding a higher resolution than that inherent in the sensor, and, thus, giving more "light information."

What keywords would I need to get a measure of the space between photosites and the sensitivity of each?

Feb 26 24 08:55 am Link

Photographer

Mark Salo

Posts: 11725

Olney, Maryland, US

Mad Hatter Imagery wrote:
What keywords would I need to get a measure of the space between photosites and the sensitivity of each?

"size of photosites"

Feb 26 24 02:10 pm Link

Photographer

Frozen Instant Imagery

Posts: 4152

Melbourne, Victoria, Australia

Mad Hatter Imagery wrote:

I am just curious how many photons are needed to trigger a light receptor, and what percent of photons miss any target on a sensor chip. I think our only images of the black hole at the center of our galaxy were constructed using radio telescopes spread wide across the planet. I want to visualize how much light information we miss with the receptors as close to each other as we can make them.

A number of modern sensors feature “gapless micro lenses” - the idea being to avoid missing any photons.

Also, sounds like you might want to read up on the Bayer Matrix - Bayer sensors only collect one colour at each sensel, and they interpolate colours.

Feb 29 24 07:43 pm Link

Photographer

The Other Place

Posts: 556

Los Angeles, California, US

Frozen Instant Imagery wrote:
A number of modern sensors feature “gapless micro lenses” - the idea being to avoid missing any photons.

I realized that I forgot to mention "gapless" microlens arrays, but I didn't have time to add it to the thread.


Frozen Instant Imagery wrote:
Also, sounds like you might want to read up on the Bayer Matrix - Bayer sensors only collect one colour at each sensel, and they interpolate colours.

Keep in mind that there are other CFAs (color filter arrays) in use -- not just Bayer.

Also, the interpolation of Bayer colors is not done by the sensor.  The camera can do the interpolation at a later stage to record JPEGs and video, or, in the case of raw files, the interpolation is done in "post."
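The core of that interpolation step (demosaicing) can be sketched in a few lines. This toy version just fills in the green channel of an RGGB mosaic by averaging neighbouring green samples - real raw converters use much more sophisticated, edge-aware algorithms:

```python
def interpolate_green(mosaic):
    """Fill in green at red/blue sites by averaging the 4-neighbour greens.

    `mosaic` is a 2-D list of raw values from an RGGB Bayer sensor; green
    photosites sit where (row + col) is odd.  Bilinear sketch only.
    """
    h, w = len(mosaic), len(mosaic[0])
    green = [[None] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if (r + c) % 2 == 1:            # a real green sample
                green[r][c] = mosaic[r][c]
            else:                            # red or blue site: interpolate
                nbrs = [mosaic[rr][cc]
                        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= rr < h and 0 <= cc < w]
                green[r][c] = sum(nbrs) / len(nbrs)
    return green

# Flat grey scene: interpolated values match the real samples around them.
flat = [[100] * 4 for _ in range(4)]
print(interpolate_green(flat)[0][0])   # 100.0
```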

Mar 01 24 10:25 am Link

Photographer

Frozen Instant Imagery

Posts: 4152

Melbourne, Victoria, Australia

The Other Place wrote:

I realized that I forgot to mention "gapless" microlens arrays, but I didn't have time to add it to the thread.

Keep in mind that there are other CFAs (color filter arrays) in use -- not just Bayer.

Also, the interpolation of Bayer colors is not done by the sensor.  The camera can do the interpolation at a later stage to record JPEGs and video, or, in the case of raw files, the interpolation is done in "post."

Sure.

The main alternative I recall to Bayer is the Fuji one (X-Trans, is it?), but I was suggesting reading up on Bayer to avoid confusion suggested by the OP’s mention of “one of each colour” - searching for “Bayer matrix” will lead to a wealth of discussion of the subject, probably also leading to a discussion of alternatives (I don’t remember the name of that honeycomb layout).

And yes, the interpolation of colours is done during RAW processing / Bayer de-mosaic, together with some other processing including white balance / colour temperature processing. My point was that colours are not collected separately at each “pixel”. Indeed, with a Bayer matrix you collect colour data at different spatial frequencies for different colours.

Mar 05 24 11:47 pm Link

Photographer

Storytelling-Images

Posts: 111

Port Charlotte, Florida, US

Mad Hatter Imagery wrote:

I am just curious how many photons are needed to trigger a light receptor, and what percent of photons miss any target on a sensor chip. I think our only images of the black hole at the center of our galaxy were constructed using radio telescopes spread wide across the planet. I want to visualize how much light information we miss with the receptors as close to each other as we can make them.

Did they change this website to "Model Mayhem and Astrophotography" recently? That’s a combination I definitely didn’t see coming. If you want to discuss theoretical astrophysics and quantum physics, I would highly suggest you go to a more appropriate source. Some astronomers are well-versed in sensor technologies.

Mar 06 24 05:42 am Link

Photographer

Mark Salo

Posts: 11725

Olney, Maryland, US

Is there a list of allowed topics?

Storytelling-Images wrote:
Some ~~astronomers~~ photographers are well-versed in sensor technologies.

Fixed that for you.

Mar 06 24 09:49 am Link

Photographer

The Other Place

Posts: 556

Los Angeles, California, US

Frozen Instant Imagery wrote:
The main alternative I recall to Bayer is the Fuji one (X Trans, is it?),

Yes.  The X-Trans CFA might currently be the most popular alternative to the Bayer CFA.  However, that hasn't always been the case, and the Bayer CFA hasn't always been the most popular sensor CFA.

Currently, RGBW sensors are manufactured and most notably employed in cameras like the Blackmagic 12K.

One CFA that gives gorgeous colors is the simple, "striped" RGB array that was used in the sought-after Panavision Genesis and Sony F35 cameras.  Figures "d" and "e" in this diagram depict two different striped arrangements.

Of course, there have been plenty of other CFA arrangements over the years.


Frozen Instant Imagery wrote:
My point was that colours are not collected separately at each “pixel”.

Not so with Foveon sensors and multi-sensor cameras.

A Foveon sensor collects red, green and blue values in each individual "photosite."

Furthermore, multi-sensor cameras utilize a beam-splitting prism to send the image to three separate monochromatic sensors, with each sensor having a different overall color filter (red, green or blue).  So, essentially, the red, green and blue "photosites" share the same location.  This arrangement was popular for decades, and I think that Ikegami (and perhaps Hitachi) still makes 3-chip cameras.

Frozen Instant Imagery wrote:
Indeed, with a Bayer matrix you collect colour data at different spatial frequencies for different colours.

Not sure what you mean here.

Mar 06 24 01:20 pm Link

Photographer

Storytelling-Images

Posts: 111

Port Charlotte, Florida, US

Mark Salo wrote:
Is there a list of allowed topics?

Fixed that for you.

Thanks for the correction, but I actually meant to say astronomers. Many astronomers, not photographers, have PhDs in physics or optics and have helped develop sensors designed specifically for visual and infrared astronomy, including for Hubble and JWST. If you want to expand the range of topics related to model photography, I suppose we could start threads regarding interpretation of the light-bending lens capabilities of other galaxies and black holes, but it would be a short discussion.

Mar 14 24 06:13 am Link

Photographer

Frozen Instant Imagery

Posts: 4152

Melbourne, Victoria, Australia

The Other Place wrote:

Frozen Instant Imagery wrote:
The main alternative I recall to Bayer is the Fuji one (X Trans, is it?),

Yes.  The X-Trans CFA might currently be the most popular alternative to the Bayer CFA.  However, that hasn't always been the case, and the Bayer CFA hasn't always been the most popular sensor CFA.

Currently, RGBW sensors are manufactured and most notably employed in cameras like the Blackmagic 12K.

One CFA that gives gorgeous colors is the simple, "striped" RGB array that was used in the sought-after Panavision Genesis and Sony F35 cameras.  Figures "d" and "e" in this diagram depict two different striped arrangements.

Of course, there have been plenty of other CFA arrangements over the years.


Frozen Instant Imagery wrote:
My point was that colours are not collected separately at each “pixel”.

Not so with Foveon sensors and multi-sensor cameras.

A Foveon sensor collects red, green and blue values in each individual "photosite."

Furthermore, multi-sensor cameras utilize a beam-splitting prism to send the image to three separate monochromatic sensors, with each sensor having a different overall color filter (red, green or blue).  So, essentially, the red, green and blue "photosites" share the same location.  This arrangement was popular for decades, and I think that Ikegami (and perhaps Hitachi) still makes 3-chip cameras.


Frozen Instant Imagery wrote:
Indeed, with a Bayer matrix you collect colour data at different spatial frequencies for different colours.

Not sure what you mean here.

On a classic Bayer array the green filters appear twice as often as the red and blue - we get green samples at double the spatial frequency. Putting it another way, we could detect green line pairs at twice the frequency as we could red line pairs or blue line pairs (oversimplifying, I know). My point there being that you can't say "the photo sites are X microns apart" and expect that to give you everything you need to know about the ability to distinguish fine colour detail - the OP was asking (in paraphrase) about pixel pitch.
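The 2:1 green-to-red (and green-to-blue) sampling ratio falls straight out of the repeating 2x2 RGGB tile. A quick illustrative count over an arbitrary sensor size:

```python
# Count how many photosites sample each colour on an RGGB Bayer sensor.
def bayer_sample_counts(height, width):
    tile = [["R", "G"], ["G", "B"]]   # the repeating 2x2 RGGB tile
    counts = {"R": 0, "G": 0, "B": 0}
    for r in range(height):
        for c in range(width):
            counts[tile[r % 2][c % 2]] += 1
    return counts

print(bayer_sample_counts(4, 6))   # {'R': 6, 'G': 12, 'B': 6}
```

Half of all photosites are green, a quarter red, a quarter blue - hence the higher spatial sampling frequency for green.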

Would I be correct in thinking that most of the multi-chip designs were video cameras, rather than stills? I recall mention of multi-chip video cameras, but I don't think I have come across one for stills.

Mar 17 24 01:56 am Link

Photographer

Frozen Instant Imagery

Posts: 4152

Melbourne, Victoria, Australia

(this came out a mess, making it look like I was saying what someone else said - deleting it because of the confusion)

Mar 17 24 01:57 am Link

Photographer

Frozen Instant Imagery

Posts: 4152

Melbourne, Victoria, Australia

(Having trouble getting the quotes to work cleanly - giving up and deleting them!)

On a classic Bayer array the green filters appear twice as often as the red and blue - we get green samples at double the spatial frequency. Putting it another way, we could detect green line pairs at twice the frequency as we could red line pairs or blue line pairs (oversimplifying, I know). My point there being that you can't say "the photo sites are X microns apart" and expect that to give you everything you need to know about the ability to distinguish fine colour detail - the OP was asking (in paraphrase) about pixel pitch.

Would I be correct in thinking that most of the multi-chip designs were video cameras, rather than stills? I recall mention of multi-chip video cameras, but I don't think I have come across one for stills.

Mar 17 24 01:58 am Link