Post by dkennedy on Oct 20, 2005 4:45:42 GMT -5
HDTV and the Brain and Why HDTV always Looks Good
October 19, 2005
By Ed Milbourn, HDTV Magazine
It is interesting to note how the human animal perceives images. By investigating the many factors of human image perception, one can better understand and appreciate how and why HDTV is such a pleasurable experience and how certain factors contribute to perceived image quality.
The brain is a combined analog and digital system, with the strength of the stimulus (such as light on the eyes) determining the firing threshold of the brain's neurons. That is the analog part. The image pattern information is encoded in a series of pulses, both time and width modulated. That is the digital part. So, the analog part initiates "awareness" and transmits the strength of the stimulus, while the digital part carries the actual image information of the optical input. (Note: Please understand that no one knows exactly how the brain really works. Even the most eminent neuroscientists disagree on how the brain responds to stimuli. My conclusions in this article are a reasonable amalgamation of several theories.)
The lens of the eye (the visual imager) has optical performance similar to that found on a compact digital camera. Human visual acuity is about one minute of arc (1/60 of a degree) with 20/20 vision. This is roughly equivalent to being able to discern one pixel of a one million-pixel HDTV image at about three times the picture height. However, an HDTV image appears much sharper than an equivalent-size SDTV image at much greater distances. How can that be, if the eye cannot resolve any more detail than it can at three times the picture height? The answer is related to how the brain recognizes and interprets images.
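As a rough check on that figure, here is a small Python sketch (mine, not from the article) that computes the distance at which a single pixel subtends one arc-minute. The line counts are illustrative assumptions; the article's "one million pixels" could correspond to anywhere from roughly 720 to 1080 active lines.

import math

ARC_MINUTE_RAD = math.radians(1.0 / 60.0)  # 20/20 acuity limit, about 0.00029 rad

def acuity_distance(picture_height, vertical_pixels):
    # Distance (in units of picture_height) at which one pixel
    # subtends one arc-minute -- beyond this, finer detail is lost on the eye.
    pixel_pitch = picture_height / vertical_pixels
    return pixel_pitch / math.tan(ARC_MINUTE_RAD)

for lines in (480, 720, 1080):  # assumed SDTV/HDTV line counts
    d = acuity_distance(1.0, lines)
    print(f"{lines} lines: acuity limit at about {d:.1f} picture heights")

Under these assumptions, a 1080-line image reaches the acuity limit at roughly 3.2 picture heights, consistent with "about three times the picture height," while a 480-line SDTV image needs more than seven picture heights before its pixel structure disappears.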
Once an image impression has been stored in memory and characterized, only a fraction of the visual input information is required to bring the image to conscious recognition. This is especially true if the image is moving. The brain ignores most of the eye's detail input of images in motion, and instead interprets the moving images from data that represents comparatively crude patterns. From this rough pattern data, the brain interpolates the image using stored images as a reference. Obviously, this interpolation activity can generate errors, especially if the input data is of low quality or sparse. Such corrupted data requires more brain computation power to interpret accurately, and this results in perceived "fuzzy" images. The more accurate the incoming data from the eye, the less brain computation power is required and the more accurate (detailed) the image is perceived to be. This is true with both static and motion images; however, with static images, the brain goes into a "study" mode, and more detailed data input is required to properly interpret the images. There is a point, however, at which an increase of data (picture detail) will result in very little, if any, increase in perceived definition. We are very close to that point with our present HDTV system. Once the image is below about one million pixels, the perceived image detail, even at several times the picture height, drops off very rapidly. Other data corruption, such as noise, artifacts, etc., is interpreted as errors and results in a less detailed image.
This brain interpolation system is obviously very efficient and quite accurate (most of the time), although it does render low quality images less than esthetically pleasing and, conversely, high quality images that emulate the "real world" very pleasing (usually). If this were not true, television would not be viable. Early television, i.e., those monochrome systems on the "low end" of SDTV, was most often viewed on receivers generating "noisy," "ghostly" images. But the brain could ignore most of this and allow television to become a marketable medium. As color, increased definition, and larger screen sizes became available, television came closer to generating images that emulated the real world. But, still, even the best SDTV images are on the side of the curve that does not allow the brain to interpret the most pleasing pictures. With HDTV we are very close to the optimal point - at any distance.