Method and system for processing images with dual image sensors

FIELD: physics.

SUBSTANCE: an image processing system may include a control circuit configured to determine whether a device is operating in a single sensor mode (with one active sensor) or a dual sensor mode (with two active sensors). When operating in the single sensor mode, data may be provided directly to a front-end pixel processing unit from the sensor interface of the active sensor. When operating in the dual sensor mode, image frames from the first and second sensors are supplied to the front-end pixel processing unit in an alternating manner. For example, in one embodiment, the image frames from the first and second sensors are written to memory and then read into the front-end pixel processing unit in an alternating manner.

EFFECT: wider range of technical capabilities of an image forming apparatus, particularly image data processing.

19 cl, 79 dwg, 4 tbl

 

BACKGROUND

The present disclosure relates generally to digital imaging devices and, more particularly, to systems and methods for processing image data obtained using an image sensor of a digital imaging device.

This section is intended to introduce the reader to various aspects of the art that may be related to various aspects of the techniques described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.

In recent years, digital imaging devices have become increasingly popular due, at least in part, to such devices becoming more and more affordable for the average consumer. Further, in addition to a number of stand-alone digital cameras currently available on the market, it is not uncommon for digital imaging devices to be integrated as part of another electronic device, such as a desktop or notebook computer, a cellular phone, or a portable media player.

To acquire image data, most digital imaging devices include an image sensor that provides a number of light-detecting elements (e.g., photodetectors) configured to convert light detected by the image sensor into an electrical signal. An image sensor may also include a color filter array that filters light captured by the image sensor in order to capture color information. The image data captured by the image sensor may then be processed by an image processing pipeline, which may apply a number of various image processing operations to the image data in order to generate a full color image that may be displayed for viewing on a display device, such as a monitor.

While conventional image processing techniques generally aim to produce a viewable image that is both objectively and subjectively pleasing to a viewer, such conventional techniques may not adequately address errors and/or distortions in the image data introduced by the imaging device and/or the image sensor. For instance, defective pixels on the image sensor, which may be due to manufacturing defects or operational failure, may fail to sense light levels accurately and, if not corrected, may manifest as artifacts in the resulting processed image. Additionally, light intensity fall-off at the edges of the image sensor, which may be due to imperfections in the manufacture of the lens, may adversely affect characterization measurements and may result in an image in which the overall light intensity is non-uniform. The image processing pipeline may also perform one or more processes to sharpen the image. Conventional sharpening techniques, however, may not adequately account for existing noise in the image signal, or may be unable to distinguish the noise from edges and textured areas in the image. In such instances, conventional sharpening techniques may actually increase the appearance of noise in the image, which is generally undesirable.

Another image processing operation that may be applied to the image data captured by the image sensor is a demosaicing operation. Because the color filter array generally provides color data at only one wavelength per sensor pixel, a full set of color data is generally interpolated for each color channel in order to reproduce a full color image (e.g., an RGB (red-green-blue) image). Conventional demosaicing techniques generally interpolate values for the missing color data in a horizontal or a vertical direction, generally depending upon some type of fixed threshold. However, such conventional demosaicing techniques may not adequately account for the location and direction of edges within the image, which may result in edge artifacts, such as aliasing, checkerboard artifacts, or rainbow artifacts, being introduced into the full color image, particularly along diagonal edges within the image.
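
To make this limitation concrete, the following is a minimal sketch (not taken from this disclosure) of the kind of fixed-threshold horizontal/vertical green interpolation described above; the array indexing, the Bayer site assumed, and the threshold value are assumptions chosen purely for illustration.

```python
import numpy as np

def interpolate_green_conventional(raw, row, col, threshold=32):
    """Illustrative fixed-threshold green interpolation at a red or blue
    Bayer site (row, col), where the four nearest green samples lie
    directly above, below, left, and right of the current pixel."""
    left, right = int(raw[row, col - 1]), int(raw[row, col + 1])
    up, down = int(raw[row - 1, col]), int(raw[row + 1, col])

    grad_h = abs(left - right)    # horizontal gradient estimate
    grad_v = abs(up - down)       # vertical gradient estimate

    # Conventional approach: choose the interpolation direction by comparing
    # gradients against a fixed threshold, without locating actual edges.
    if grad_h + threshold < grad_v:
        return (left + right) // 2            # interpolate horizontally
    if grad_v + threshold < grad_h:
        return (up + down) // 2               # interpolate vertically
    return (left + right + up + down) // 4    # no clear direction: average all

raw = np.random.randint(0, 1024, size=(5, 5))   # stand-in 10-bit Bayer data
print(interpolate_green_conventional(raw, 2, 2))
```

Because the direction is chosen against a constant threshold rather than the actual location and orientation of an edge, diagonal edges tend to produce exactly the aliasing and checkerboard artifacts noted above.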

Accordingly, various considerations should be addressed when processing a digital image obtained with a digital camera or other imaging device in order to improve the appearance of the resulting image. In particular, certain aspects of the disclosure below may address one or more of the drawbacks briefly mentioned above.

SUMMARY

A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments, and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.

The present disclosure provides various techniques for processing image data acquired using a digital image sensor. In accordance with aspects of the present disclosure, one such technique may relate to the processing of image data in a system that supports multiple image sensors. In one embodiment, the image processing system may include a control circuit configured to determine whether the device is operating in a single sensor mode (with one active sensor) or a dual sensor mode (with two active sensors). When operating in the single sensor mode, data may be provided directly to a front-end pixel processing unit from the sensor interface of the active sensor. When operating in the dual sensor mode, the image frames from the first and second sensors are provided to the front-end pixel processing unit in an alternating manner. For instance, in one embodiment, the image frames from the first and second sensors are written to a memory and then read out to the front-end pixel processing unit in an alternating manner.
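
By way of illustration only, the following sketch models the frame routing just described; the class and function names are hypothetical stand-ins for the sensor interfaces, the memory, and the front-end pixel processing unit, and are not the claimed implementation.

```python
from collections import deque

class StubSensor:
    """Hypothetical stand-in for an image sensor and its interface."""
    def __init__(self, name, n_frames):
        self.name, self.n = name, n_frames
    def frames(self):
        for i in range(self.n):
            yield f"{self.name}-frame{i}"

class StubPixelUnit:
    """Hypothetical stand-in for the front-end pixel processing unit."""
    def process(self, frame):
        print("processing", frame)

def run_single_sensor(active_sensor, pixel_unit):
    # Single sensor mode: frames pass directly from the active sensor's
    # interface to the pixel processing unit.
    for frame in active_sensor.frames():
        pixel_unit.process(frame)

def run_dual_sensor(sensor0, sensor1, pixel_unit):
    # Dual sensor mode: frames from both sensors are first written to
    # memory and then read out to the pixel processing unit alternately.
    memory = deque()
    for f0, f1 in zip(sensor0.frames(), sensor1.frames()):
        memory.extend((f0, f1))           # write both frames to memory
        while memory:                     # read back in alternating order
            pixel_unit.process(memory.popleft())

run_dual_sensor(StubSensor("s0", 2), StubSensor("s1", 2), StubPixelUnit())
```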

Various refinements of the features noted above may exist in relation to various aspects of the present disclosure. Further features may also be incorporated in these various aspects as well. These refinements and additional features may exist individually or in any combination. For instance, various features discussed below in relation to one or more of the illustrated embodiments may be incorporated into any of the above-described aspects of the present disclosure alone or in any combination. Again, the brief summary presented above is intended only to familiarize the reader with certain aspects and contexts of embodiments of the present disclosure without limitation to the claimed subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS

The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Patent Office upon request and payment of the necessary fee.

Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:

Fig. 1 is a simplified block diagram depicting components of one example of an electronic device that includes an imaging device and image processing circuitry configured to implement one or more of the image processing techniques set forth in the present disclosure;

Fig. 2 shows a graphical representation of a 2x2 pixel block of a Bayer color filter array that may be implemented in the imaging device of Fig. 1;

Fig. 3 is a perspective view of the electronic device of Fig. 1 in the form of a laptop computing device, in accordance with aspects of the present disclosure;

Fig. 4 is a front view of the electronic device of Fig. 1 in the form of a desktop computing device, in accordance with aspects of the present disclosure;

Fig. 5 is a front view of the electronic device of Fig. 1 in the form of a handheld portable electronic device, in accordance with aspects of the present disclosure;

Fig. 6 is a rear view of the electronic device shown in Fig. 5;

Fig. 7 is a block diagram illustrating image signal processing (ISP) front-end logic and ISP pipeline logic that may be implemented in the image processing circuitry of Fig. 1, in accordance with aspects of the present disclosure;

Fig. 8 is a more detailed block diagram showing an embodiment of the ISP front-end logic of Fig. 7, in accordance with aspects of the present disclosure;

Fig. 9 is a flowchart depicting a method for processing image data in the ISP front-end logic of Fig. 8, in accordance with one embodiment;

Fig. 10 is a block diagram illustrating double-buffered registers and control registers that may be used for processing image data in the ISP front-end logic, in accordance with one embodiment;

Figs. 11-13 are timing diagrams depicting different modes for triggering the processing of an image frame, in accordance with embodiments of the present techniques;

Fig. 14 is a diagram depicting a control register in more detail, in accordance with one embodiment;

Fig. 15 is a flowchart depicting a method for using a front-end pixel processing unit to process image frames when the ISP front-end logic of Fig. 8 is operating in a single sensor mode;

Fig. 16 is a flowchart depicting a method for using a front-end pixel processing unit to process image frames when the ISP front-end logic of Fig. 8 is operating in a dual sensor mode;

Fig. 17 is a flowchart depicting another method for using a front-end pixel processing unit to process image frames when the ISP front-end logic of Fig. 8 is operating in a dual sensor mode;

Fig. 18 is a flowchart depicting a method in which both image sensors are active, but in which a first image sensor is sending image frames to the front-end pixel processing unit while a second image sensor is sending image frames to a statistics processing unit, so that imaging statistics for the second sensor are immediately available when the second image sensor continues sending image frames to the front-end pixel processing unit at a later time, in accordance with one embodiment;

Fig. 19 is a graphical depiction of various imaging regions that may be defined within a source image frame captured by an image sensor, in accordance with aspects of the present disclosure;

Fig. 20 is a block diagram that provides a more detailed view of one embodiment of the ISP front-end pixel processing unit, as shown in the ISP front-end logic of Fig. 8, in accordance with aspects of the present disclosure;

Fig. 21 is a process diagram illustrating how temporal filtering may be applied to image pixel data received by the ISP front-end pixel processing unit shown in Fig. 20, in accordance with one embodiment;

Fig. 22 illustrates a set of reference image pixels and a corresponding set of current image pixels that may be used to determine one or more parameters for the temporal filtering process shown in Fig. 21;

Fig. 23 is a flowchart illustrating a process for applying temporal filtering to a current image pixel of a set of image data, in accordance with one embodiment;

Fig. 24 is a flowchart showing a technique for calculating a motion delta value for use with the temporal filtering of the current image pixel of Fig. 23, in accordance with one embodiment;

Fig. 25 is a flowchart illustrating another process for applying temporal filtering to a current image pixel of a set of image data that includes the use of different gains for each color component of the image data, in accordance with another embodiment;

Fig. 26 is a process diagram illustrating a temporal filtering technique that utilizes separate motion and luma tables for each color component of the image pixel data received by the ISP front-end pixel processing unit shown in Fig. 20, in accordance with a further embodiment;

Fig. 27 is a flowchart illustrating a process for applying temporal filtering to a current image pixel of a set of image data using the motion and luma tables shown in Fig. 26, in accordance with a further embodiment;

Fig. 28 depicts a sample of full resolution raw image data that may be captured by an image sensor, in accordance with aspects of the present disclosure;

Fig. 29 illustrates an image sensor that may be configured to apply binning to the full resolution raw image data of Fig. 28 to output a sample of binned raw image data, in accordance with an embodiment of the present disclosure;

Fig. 30 depicts a sample of binned raw image data that may be provided by the image sensor of Fig. 29, in accordance with aspects of the present disclosure;

Fig. 31 depicts the binned raw image data of Fig. 30 after being re-sampled by a binning compensation filter, in accordance with aspects of the present disclosure;

Fig. 32 depicts a binning compensation filter that may be implemented in the ISP front-end pixel processing unit of Fig. 20, in accordance with one embodiment;

Fig. 33 is a graphical depiction of various step sizes that may be applied to a differential analyzer to select center input pixels and index/phases for binning compensation filtering, in accordance with aspects of the present disclosure;

Fig. 34 is a flowchart illustrating a process for scaling image data using the binning compensation filter of Fig. 32, in accordance with one embodiment;

Fig. 35 is a flowchart illustrating a process for determining a current input source center pixel for horizontal and vertical filtering by the binning compensation filter of Fig. 32, in accordance with one embodiment;

Fig. 36 is a flowchart illustrating a process for determining an index for selecting filtering coefficients for horizontal and vertical filtering by the binning compensation filter of Fig. 32, in accordance with one embodiment;

Fig. 37 is a more detailed block diagram showing an embodiment of a statistics processing unit that may be implemented in the ISP front-end logic, as shown in Fig. 8, in accordance with aspects of the present disclosure;

Fig. 38 shows various image frame boundary cases that may be considered when applying techniques for detecting and correcting defective pixels during statistics processing by the statistics processing unit of Fig. 37, in accordance with aspects of the present disclosure;

Fig. 39 is a flowchart illustrating a process for performing defective pixel detection and correction during statistics processing, in accordance with one embodiment;

Fig. 40 shows a three-dimensional profile depicting light intensity versus pixel position for a conventional lens of an imaging device;

Fig. 41 is a color drawing that exhibits non-uniform light intensity across the image, which may be the result of lens shading irregularities;

Fig. 42 is a graphical illustration of a raw imaging frame that includes a lens shading correction region and a gain grid, in accordance with aspects of the present disclosure;

Fig. 43 illustrates the interpolation of a gain value for an image pixel enclosed by four bordering grid gain points, in accordance with aspects of the present disclosure;

Fig. 44 is a flowchart illustrating a process for determining interpolated gain values that may be applied to imaging pixels during a lens shading correction operation, in accordance with an embodiment of the present disclosure;

Fig. 45 is a three-dimensional profile depicting interpolated gain values that may be applied to an image exhibiting the light intensity characteristics shown in Fig. 40 when performing lens shading correction, in accordance with aspects of the present disclosure;

Fig. 46 shows the color drawing of Fig. 41 exhibiting improved uniformity of light intensity after a lens shading correction operation is applied, in accordance with aspects of the present disclosure;

Fig. 47 graphically illustrates how a radial distance between a current pixel and the center of an image may be calculated and used to determine a radial gain component for lens shading correction, in accordance with one embodiment;

Fig. 48 is a flowchart illustrating a process by which radial gains and interpolated gains from a gain grid are used to determine a total gain that may be applied to imaging pixels during a lens shading correction operation, in accordance with an embodiment of the present technique;

Fig. 49 is a block diagram showing an embodiment of the ISP pipeline logic of Fig. 7, in accordance with aspects of the present disclosure;

Fig. 50 is a more detailed view showing an embodiment of a raw pixel processing unit that may be implemented in the ISP pipeline logic of Fig. 49, in accordance with aspects of the present disclosure;

Fig. 51 shows various image frame boundary cases that may be considered when applying techniques for detecting and correcting defective pixels during processing by the raw pixel processing unit shown in Fig. 50, in accordance with aspects of the present disclosure;

Figs. 52-54 are flowcharts depicting various processes for detecting and correcting defective pixels that may be performed in the raw pixel processing unit of Fig. 50, in accordance with one embodiment;

Fig. 55 shows the locations of two green pixels in a 2x2 pixel block of a Bayer image sensor that may be interpolated when applying green non-uniformity correction techniques during processing by the raw pixel processing logic of Fig. 50, in accordance with aspects of the present disclosure;

Fig. 56 illustrates a set of pixels that includes a center pixel and associated horizontal neighboring pixels that may be used as part of a horizontal filtering process for noise reduction, in accordance with aspects of the present disclosure;

Fig. 57 illustrates a set of pixels that includes a center pixel and associated vertical neighboring pixels that may be used as part of a vertical filtering process for noise reduction, in accordance with aspects of the present disclosure;

Fig. 58 is a simplified flow diagram depicting how demosaicing may be applied to a raw Bayer image pattern to produce a full color RGB image;

Fig. 59 depicts a set of pixels of a Bayer image pattern from which horizontal and vertical energy components may be derived for interpolating green color values during demosaicing of the Bayer image pattern, in accordance with one embodiment;

Fig. 60 shows a set of horizontal pixels to which filtering may be applied to determine a horizontal component of an interpolated green color value during demosaicing of a Bayer image pattern, in accordance with aspects of the present technique;

Fig. 61 shows a set of vertical pixels to which filtering may be applied to determine a vertical component of an interpolated green color value during demosaicing of a Bayer image pattern, in accordance with aspects of the present technique;

Fig. 62 shows various 3x3 pixel blocks to which filtering may be applied to determine interpolated red and blue color values during demosaicing of a Bayer image pattern, in accordance with aspects of the present technique;

Figs. 63-66 provide flowcharts depicting various processes for interpolating green, red, and blue color values during demosaicing of a Bayer image pattern, in accordance with one embodiment;

Fig. 67 shows a color drawing of an original image scene that may be captured by an image sensor and processed in accordance with aspects of the demosaicing techniques disclosed herein;

Fig. 68 shows a color drawing of a Bayer image pattern of the image scene shown in Fig. 67;

Fig. 69 shows a color drawing of an RGB image reconstructed using conventional demosaicing techniques based upon the Bayer image pattern of Fig. 68;

Fig. 70 shows a color drawing of an RGB image reconstructed from the Bayer image pattern of Fig. 68 in accordance with aspects of the demosaicing techniques disclosed herein;

Fig. 71 is a more detailed view showing one embodiment of an RGB processing unit that may be implemented in the ISP pipeline logic of Fig. 49, in accordance with aspects of the present disclosure;

Fig. 72 is a more detailed view showing one embodiment of a YCbCr (luma, blue-difference chroma, red-difference chroma) processing unit that may be implemented in the ISP pipeline logic of Fig. 49, in accordance with aspects of the present disclosure;

Fig. 73 is a graphical depiction of active source regions for luma and chroma as defined within a source buffer using a 1-plane format, in accordance with aspects of the present disclosure;

Fig. 74 is a graphical depiction of active source regions for luma and chroma as defined within a source buffer using a 2-plane format, in accordance with aspects of the present disclosure;

Fig. 75 is a block diagram illustrating image sharpening logic that may be implemented in the YCbCr processing unit, as shown in Fig. 72, in accordance with one embodiment;

Fig. 76 is a block diagram illustrating edge enhancement logic that may be implemented in the YCbCr processing unit, as shown in Fig. 72, in accordance with one embodiment;

Fig. 77 is a graph showing the relationship between chroma attenuation factors and sharpened luma values, in accordance with aspects of the present disclosure;

Fig. 78 is a block diagram illustrating logic for adjusting brightness, contrast, and color (BCC) that may be implemented in the YCbCr processing unit, as shown in Fig. 72, in accordance with one embodiment; and

Fig. 79 shows a hue and saturation color wheel in the YCbCr color space, defining various hue angles and saturation values that may be used during color adjustment by the BCC adjustment logic shown in Fig. 78.

DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS

One or more specific embodiments of the present disclosure will be described below. These described embodiments are only examples of the presently disclosed techniques. Additionally, in an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill in the art having the benefit of this disclosure.

When introducing elements of various embodiments of the present disclosure, the singular forms "a," "an," and "the" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to "one embodiment" or "an embodiment" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.

As will be discussed below, the present disclosure relates generally to techniques for processing image data acquired via one or more image sensing devices. In particular, certain aspects of the present disclosure may relate to techniques for detecting and correcting defective pixels, techniques for demosaicing a raw image pattern, techniques for sharpening a luminance image using a multi-scale unsharp mask, and techniques for applying lens shading correction gains to correct for lens shading irregularities. Further, it should be understood that the presently disclosed techniques may be applied to both still images and moving images (e.g., video), and may be utilized in any suitable type of imaging application, such as a digital camera, an electronic device having an integrated digital camera, a security or video surveillance system, a medical imaging system, and so forth.

With the foregoing in mind, Fig. 1 is a block diagram illustrating an example of an electronic device 10 that may provide for the processing of image data using one or more of the image processing techniques briefly mentioned above. The electronic device 10 may be any type of electronic device, such as a laptop or desktop computer, a mobile phone, a digital media player, or the like, that is configured to receive and process image data, such as data acquired using one or more image sensing components. By way of example only, the electronic device 10 may be a portable electronic device, such as a model of an iPod® or iPhone®, available from Apple Inc. of Cupertino, California. Additionally, the electronic device 10 may be a desktop or laptop computer, such as a model of a MacBook®, MacBook® Pro, MacBook Air®, iMac®, Mac® Mini, or Mac Pro®, available from Apple Inc. In other embodiments, the electronic device 10 may also be a model of an electronic device from another manufacturer that is capable of acquiring and processing image data.

Regardless of its form (e.g., portable or non-portable), it should be understood that the electronic device 10 may provide for the processing of image data using one or more of the image processing techniques briefly discussed above, which may include, among others, defective pixel detection and/or correction techniques, lens shading correction techniques, demosaicing techniques, or image sharpening techniques. In some embodiments, the electronic device 10 may apply such image processing techniques to image data stored in a memory of the electronic device 10. In further embodiments, the electronic device 10 may include one or more imaging devices, such as an integrated or external digital camera, configured to acquire image data, which may then be processed by the electronic device 10 using one or more of the above-mentioned image processing techniques. Embodiments showing both portable and non-portable embodiments of the electronic device 10 will be further discussed below with respect to Figs. 3-6.

As shown in Fig. 1, the electronic device 10 may include various internal and/or external components that contribute to the function of the device 10. Those of ordinary skill in the art will appreciate that the various functional blocks shown in Fig. 1 may comprise hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. For example, in the presently illustrated embodiment, the electronic device 10 may include input/output (I/O) ports 12, input structures 14, one or more processors 16, a memory device 18, a non-volatile storage device 20, expansion card(s) 22, a networking device 24, a power source 26, and a display device 28. Additionally, the electronic device 10 may include one or more imaging devices 30, such as a digital camera, and image processing circuitry 32. As will be discussed further below, the image processing circuitry 32 may be configured to implement one or more of the above-discussed image processing techniques when processing image data. As can be appreciated, image data processed by the image processing circuitry 32 may be retrieved from the memory 18 and/or the non-volatile storage device(s) 20, or may be acquired using the imaging device 30.

Before continuing, it should be understood that the block diagram of the device 10 shown in Fig. 1 is intended to be a high-level control diagram depicting various components that may be included in such a device 10. That is, the connection lines between each individual component shown in Fig. 1 may not necessarily represent paths or directions through which data flows or is transmitted between various components of the device 10. Indeed, as discussed below, the depicted processor(s) 16 may, in some embodiments, include multiple processors, such as a main processor (e.g., CPU) and dedicated image and/or video processors. In such embodiments, the processing of image data may be primarily handled by these dedicated processors, thus effectively offloading such tasks from the main processor (CPU).

With regard to each of the illustrated components in Fig. 1, the I/O ports 12 may include ports configured to connect to a variety of external devices, such as a power source, an audio output device (e.g., a headset or headphones), or other electronic devices (such as handheld devices and/or computers, printers, projectors, external displays, modems, docking stations, and so forth). In one embodiment, the I/O ports 12 may be configured to connect to an external imaging device, such as a digital camera, for the acquisition of image data that may be processed using the image processing circuitry 32. The I/O ports 12 may support any suitable interface type, such as a universal serial bus (USB) port, a serial connection port, an IEEE-1394 (FireWire) port, an Ethernet or modem port, and/or an AC/DC power connection port.

In some embodiments, certain I/O ports 12 may be configured to provide for more than one function. For instance, in one embodiment, the I/O ports 12 may include a proprietary port from Apple Inc. that may function not only to facilitate the transfer of data between the electronic device 10 and an external source, but also to couple the device 10 to a power charging interface, such as a power adapter designed to provide power from an electrical wall outlet, or an interface cable configured to draw power from another electrical device, such as a desktop or laptop computer, for charging the power source 26 (which may include one or more rechargeable batteries). Thus, the I/O port 12 may be configured to function dually as both a data transfer port and an AC/DC power connection port, depending, for example, on the external component being coupled to the device 10 through the I/O port 12.

The input structures 14 may provide user input or feedback to the processor(s) 16. For instance, the input structures 14 may be configured to control one or more functions of the electronic device 10, such as applications running on the electronic device 10. By way of example only, the input structures 14 may include buttons, sliders, switches, control pads, keys, knobs, scroll wheels, keyboards, mice, touchpads (touch-sensitive panels), and so forth, or some combination thereof. In one embodiment, the input structures 14 may allow a user to navigate a graphical user interface (GUI) displayed on the device 10. Additionally, the input structures 14 may include a touch-sensitive mechanism provided in conjunction with the display device 28. In such an embodiment, a user may select or interact with displayed interface elements via the touch-sensitive mechanism.

The input structures 14 may include the various devices, circuitry, and pathways by which user input or feedback is provided to the one or more processors 16. Such input structures 14 may be configured to control a function of the device 10, applications running on the device 10, and/or any interfaces or devices connected to or used by the electronic device 10. For example, the input structures 14 may allow a user to navigate a displayed user interface or application interface. Examples of the input structures 14 may include buttons, sliders, switches, control pads, keys, knobs, scroll wheels, keyboards, mice, touchpads, and so forth.

In certain embodiments, an input structure 14 and the display device 28 may be provided together, such as in the case of a "touchscreen," whereby a touch-sensitive mechanism is provided in conjunction with the display device 28. In such embodiments, the user may select or interact with displayed interface elements via the touch-sensitive mechanism. In this way, the displayed interface may provide interactive functionality, allowing a user to navigate the displayed interface by touching the display device 28. For example, user interaction with the input structures 14, such as to interact with a user or application interface displayed on the display device 28, may generate electrical signals indicative of the user input. These input signals may be routed via suitable pathways, such as an input hub or data bus, to the one or more processors 16 for further processing.

In addition to processing various input signals received via the input structure(s) 14, the processor(s) 16 may control the general operation of the device 10. For instance, the processor(s) 16 may provide the processing capability to execute an operating system, programs, user and application interfaces, and any other functions of the electronic device 10. The processor(s) 16 may include one or more microprocessors, such as one or more "general-purpose" microprocessors, one or more special-purpose microprocessors and/or application-specific microprocessors (ASICs), or a combination of such processing components. For example, the processor(s) 16 may include one or more instruction set processors (e.g., RISC (reduced instruction set computing) processors), as well as graphics processors (GPUs), video processors, audio processors, and/or related chip sets. As will be appreciated, the processor(s) 16 may be coupled to one or more data buses for transferring data and instructions between various components of the device 10. In certain embodiments, the processor(s) 16 may provide the processing capability to execute an imaging application on the electronic device 10, such as Photo Booth®, Aperture®, iPhoto®, or Preview®, available from Apple Inc., or the "Camera" and/or "Photo" applications provided by Apple Inc. and available on models of the iPhone®.

The instructions or data to be processed by the processor(s) 16 may be stored in a computer-readable medium, such as the memory device 18. The memory device 18 may be provided as a volatile memory, such as random access memory (RAM), or as a non-volatile memory, such as read-only memory (ROM), or as a combination of one or more RAM and ROM devices. The memory 18 may store a variety of information and may be used for various purposes. For example, the memory 18 may store firmware for the electronic device 10, such as a basic input/output system (BIOS), an operating system, various programs, applications, or any other routines that may be executed on the electronic device 10, including user interface functions, processor functions, and so forth. In addition, the memory 18 may be used for buffering or caching during operation of the electronic device 10. For instance, in one embodiment, the memory 18 may include one or more frame buffers for buffering video data as it is being output to the display device 28.

In addition to the memory device 18, the electronic device 10 may further include a non-volatile storage device 20 for persistent storage of data and/or instructions. The non-volatile storage device 20 may include flash memory, a hard drive, or any other optical, magnetic, and/or solid-state storage media, or some combination thereof. Thus, although depicted as a single device in Fig. 1 for purposes of clarity, it should be understood that the non-volatile storage device(s) 20 may include a combination of one or more of the above-listed storage devices operating in conjunction with the processor(s) 16. The non-volatile storage device 20 may be used to store firmware, data files, image data, software programs and applications, wireless connection information, personal information, user preferences, and any other suitable data. In accordance with aspects of the present disclosure, image data stored in the non-volatile storage device 20 and/or the memory device 18 may be processed by the image processing circuitry 32 prior to being output on a display.

The embodiment illustrated in Fig. 1 may also include one or more card or expansion slots. The card slots may be configured to receive an expansion card 22 that may be used to add functionality, such as additional memory, I/O functionality, or networking capability, to the electronic device 10. Such an expansion card 22 may connect to the device through any type of suitable connector, and may be accessed internally or externally with respect to the housing of the electronic device 10. For example, in one embodiment, the expansion card 22 may be a flash memory card, such as a SecureDigital (SD) card, mini- or microSD card, CompactFlash card, or the like, or may be a PCMCIA device. Additionally, the expansion card 22 may be a Subscriber Identity Module (SIM) card for use with an embodiment of the electronic device 10 that provides mobile phone capability.

The electronic device 10 also includes the network device 24, which may be a network controller or a network interface card (NIC) that may provide network connectivity over an 802.11 wireless connection or any other suitable networking standard, such as a local area network (LAN), or a wide area network (WAN), such as an Enhanced Data Rates for GSM Evolution (EDGE) network, a 3G data network, or the Internet. In certain embodiments, the network device 24 may provide a connection to an online digital media content provider, such as the iTunes® music service, available from Apple Inc.

The power source 26 of the device 10 may include the capability to power the device 10 in both non-portable and portable settings. For example, in a portable setting, the device 10 may include one or more batteries, such as a lithium-ion battery, for powering the device 10. The battery may be re-charged by connecting the device 10 to an external power source, such as an electrical wall outlet. In a non-portable setting, the power source 26 may include a power supply unit (PSU) configured to draw power from an electrical wall outlet and to distribute the power to various components of a non-portable electronic device, such as a desktop computing system.

The display device 28 may be used to display various images generated by the device 10, such as a GUI for an operating system, or image data (including still images and video data) processed by the image processing circuitry 32, as will be discussed further below. As mentioned above, the image data may include image data acquired using the imaging device 30 or image data retrieved from the memory 18 and/or the non-volatile storage device 20. The display device 28 may be any suitable type of display, such as a liquid crystal display (LCD), a plasma display, or an organic light emitting diode (OLED) display. Additionally, as discussed above, the display device 28 may be provided in conjunction with the above-discussed touch-sensitive mechanism (e.g., a touch screen) that may function as part of a control interface for the electronic device 10.

The illustrated imaging device(s) 30 may be provided as a digital camera configured to acquire both still images and moving images (e.g., video). The camera 30 may include a lens and one or more image sensors configured to capture and convert light into electrical signals. By way of example only, the image sensor may include a CMOS (complementary metal-oxide-semiconductor) image sensor (e.g., a CMOS active-pixel sensor (APS)) or a CCD (charge-coupled device) sensor. Generally, the image sensor in the camera 30 includes an integrated circuit having an array of pixels, wherein each pixel includes a photodetector for sensing light. As those skilled in the art will appreciate, the photodetectors in the imaging pixels generally detect the intensity of light captured via the camera lens. However, the photodetectors, by themselves, are generally unable to detect the wavelength of the captured light and are, thus, unable to determine color information.

Accordingly, the image sensor may further include a color filter array (CFA) that may overlay or be disposed over the pixel array of the image sensor to capture color information. The color filter array may include an array of small color filters, each of which may overlap a respective pixel of the image sensor and filter the captured light by wavelength. Thus, when used in conjunction, the color filter array and the photodetectors may provide both wavelength and intensity information with regard to light captured through the camera, which may be representative of a captured image.

In one embodiment, the color filter array may include a Bayer color filter array, which provides a filter pattern that is 50% green elements, 25% red elements, and 25% blue elements. For instance, Fig. 2 shows a 2x2 pixel block of a Bayer CFA that includes 2 green elements (Gr and Gb), 1 red element (R), and 1 blue element (B). Thus, an image sensor that utilizes a Bayer color filter array may provide information regarding the intensity of the light received by the camera 30 at the green, red, and blue wavelengths, whereby each image pixel records only one of the three colors (RGB). This information, which may be referred to as "raw image data" or data in the "raw domain," may then be processed using one or more demosaicing techniques to convert the raw image data into a full color image, generally by interpolating a set of red, green, and blue values for each pixel. As will be discussed further below, such demosaicing techniques may be performed by the image processing circuitry 32.
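
As an illustration of why each pixel of the raw image records only one of the three colors, the short sketch below samples a full-color image onto a Bayer mosaic; the particular pattern phase used here (Gr/R on even rows, B/Gb on odd rows) is an assumption made for the example and may differ from the arrangement shown in Fig. 2.

```python
import numpy as np

def to_bayer_raw(rgb):
    """Sample a full-color HxWx3 image onto a Bayer mosaic (illustrative)."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 1]   # Gr: green on even rows, even cols
    raw[0::2, 1::2] = rgb[0::2, 1::2, 0]   # R:  red on even rows, odd cols
    raw[1::2, 0::2] = rgb[1::2, 0::2, 2]   # B:  blue on odd rows, even cols
    raw[1::2, 1::2] = rgb[1::2, 1::2, 1]   # Gb: green on odd rows, odd cols
    return raw

rgb = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
print(to_bayer_raw(rgb))   # one color sample per pixel: data in the "raw domain"
```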

As mentioned above, the image processing circuitry 32 may provide for various image processing steps, such as defective pixel detection/correction, lens shading correction, demosaicing, image sharpening, noise reduction, gamma correction, image enhancement, color space conversion, image compression, chroma subsampling, and image scaling operations, and so forth. In some embodiments, the image processing circuitry 32 may include various subcomponents and/or discrete units of logic that collectively form an image processing "pipeline" for performing each of the various image processing steps. These subcomponents may be implemented using hardware (e.g., digital signal processors or ASICs) or software, or via a combination of hardware and software components. The various image processing operations that may be provided by the image processing circuitry 32 and, particularly, those processing operations relating to defective pixel detection/correction, lens shading correction, demosaicing, and image sharpening, will be discussed in greater detail below.
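
Purely as a conceptual sketch of the "pipeline" arrangement described above, the fragment below chains a few of the named stages in sequence; the stage bodies are placeholders, since the actual operations are detailed later in this disclosure.

```python
def defective_pixel_correction(img):   # placeholder stage bodies; the real
    return img                         # operations are described later in
def lens_shading_correction(img):      # this disclosure
    return img
def demosaic(img):
    return img
def sharpen(img):
    return img

# A pipeline is simply an ordered list of stages applied in sequence.
PIPELINE = [defective_pixel_correction, lens_shading_correction,
            demosaic, sharpen]

def run_pipeline(raw_frame, stages=PIPELINE):
    data = raw_frame
    for stage in stages:
        data = stage(data)   # each stage consumes the previous stage's output
    return data
```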

Before continuing, it should be noted that while various embodiments of the various image processing techniques discussed below may utilize a Bayer CFA, the presently disclosed techniques are not intended to be limited in this regard. Indeed, those skilled in the art will appreciate that the image processing techniques provided herein may be applicable to any suitable type of color filter array, including RGBW filters, CYGM filters, and so forth.

Referring again to the electronic device 10, Figs. 3-6 illustrate various forms that the electronic device 10 may take. As mentioned above, the electronic device 10 may take the form of a computer, including computers that are generally portable (such as laptop, notebook, and tablet computers), as well as computers that are generally non-portable (such as desktop computers, workstations, and/or servers), or other types of electronic devices, such as handheld portable electronic devices (e.g., a digital media player or a mobile phone). In particular, Figs. 3 and 4 depict the electronic device 10 in the form of a laptop computer 40 and a desktop computer 50, respectively. Figs. 5 and 6 show front and rear views, respectively, of the electronic device 10 in the form of a handheld portable device 60.

As shown in Fig. 3, the depicted laptop computer 40 includes a housing 42, the display device 28, the I/O ports 12, and the input structures 14. The input structures 14 may include a keyboard and a touchpad mouse that are integrated with the housing 42. Additionally, the input structures 14 may include various other buttons and/or switches that may be used to interact with the computer 40, such as to power on or start the computer, to operate a GUI or an application running on the computer 40, and to adjust various other aspects relating to operation of the computer 40 (e.g., sound volume, display brightness, and so forth). The computer 40 may also include various I/O ports 12 that provide for connectivity to additional devices, as discussed above, such as a FireWire® or USB port, a high-definition multimedia interface (HDMI) port, or any other type of port that is suitable for connecting to an external device. Additionally, the computer 40 may include network connectivity (e.g., the network device 24), memory (e.g., the memory device 18), and storage capabilities (e.g., the storage device 20), as described above with respect to Fig. 1.

Further, in the illustrated embodiment, the laptop computer 40 may include an integrated imaging device 30 (e.g., a camera). In other embodiments, the laptop computer 40 may utilize an external camera (e.g., an external USB camera or a "webcam") connected to one or more of the I/O ports 12 instead of, or in addition to, the integrated camera 30. For instance, the external camera may be an iSight® camera, available from Apple Inc. The camera 30, whether integrated or external, may provide for the capture and recording of images. Such images may then be viewed by a user using an image viewing application, or may be utilized by other applications, including video-conferencing applications, such as iChat®, and image editing/viewing applications, such as Photo Booth®, Aperture®, iPhoto®, or Preview®, available from Apple Inc. In certain embodiments, the depicted laptop computer 40 may be a model of a MacBook®, MacBook® Pro, MacBook Air®, or PowerBook®, available from Apple Inc. Additionally, the computer 40, in one embodiment, may be a portable tablet computing device, such as a model of the iPad® tablet computer, also available from Apple Inc.

Fig. 4 further illustrates an embodiment in which the electronic device 10 is provided as a desktop computer 50. As will be appreciated, the desktop computer 50 may include a number of features that may be generally similar to those provided by the laptop computer 40 shown in Fig. 3, but may have a generally larger overall form factor. As shown, the desktop computer 50 may be housed in a housing 42 that includes the display device 28, as well as various other components discussed above with regard to the block diagram shown in Fig. 1. Further, the desktop computer 50 may include an external keyboard and mouse (input structures 14) that may be coupled to the computer 50 via one or more of the I/O ports 12 (e.g., USB), or that may communicate with the computer 50 wirelessly (e.g., RF, Bluetooth, etc.). The desktop computer 50 also includes an imaging device 30, which may be an integrated or external camera, as discussed above. In certain embodiments, the depicted desktop computer 50 may be a model of an iMac®, Mac® mini, or Mac Pro®, available from Apple Inc.

As further shown, the display device 28 may be configured to generate various images that may be viewed by a user. For example, during operation of the computer 50, the display device 28 may display a graphical user interface ("GUI") 52 that allows the user to interact with an operating system and/or applications running on the computer 50. The GUI 52 may include various layers, windows, screens, templates, or other graphical elements that may be displayed in all, or a portion, of the display device 28. For instance, in the depicted embodiment, an operating system GUI 52 may include various graphical icons 54, each of which may correspond to various applications that may be opened or executed upon detecting a user selection (e.g., via keyboard/mouse or touchscreen input). The icons 54 may be displayed in a dock 56 (a type of toolbar) or within one or more graphical window elements 58 displayed on the screen. In some embodiments, the selection of an icon 54 may lead to a hierarchical navigation process, such that selection of the icon 54 leads to a screen or opens another graphical window that includes one or more additional icons or other GUI elements. By way of example only, the operating system GUI 52 displayed in Fig. 4 may be from a version of the Mac OS® operating system, available from Apple Inc.

Continuing to Figs. 5 and 6, the electronic device 10 is further illustrated in the form of a portable handheld electronic device 60, which may be a model of an iPod® or iPhone®, available from Apple Inc. In the depicted embodiment, the handheld device 60 includes a housing 42, which may function to protect the interior components from physical damage and to shield them from electromagnetic interference. The housing 42 may be formed from any suitable material, or combination of materials, such as plastic, metal, or a composite material, and may allow certain frequencies of electromagnetic radiation, such as wireless networking signals, to pass through to wireless communication circuitry (e.g., the network device 24), which may be disposed within the housing 42, as shown in Fig. 5.

The housing 42 also includes various user input structures 14 through which a user may interface with the handheld device 60. For instance, each input structure 14 may be configured to control one or more respective device functions when pressed or actuated. By way of example, one or more of the input structures 14 may be configured to invoke a "home" screen or menu to be displayed, to toggle between a sleep, wake, or powered on/off mode, to silence a ringer for a cellular phone application, to increase or decrease a volume output, and so forth. It should be understood that the illustrated input structures 14 are merely exemplary, and that the handheld device 60 may include any number of suitable user input structures existing in various forms, including buttons, switches, keys, knobs, scroll wheels, and so forth.

As shown in Fig. 5, the handheld device 60 may include various I/O ports 12. For instance, the depicted I/O ports 12 may include a proprietary connection port 12a for transmitting and receiving data files or for charging the power source 26, and an audio connection port 12b for connecting the device 60 to an audio output device (e.g., headphones or speakers). Further, in embodiments in which the handheld device 60 provides mobile phone functionality, the device 60 may include an I/O port 12c for receiving a subscriber identity module (SIM) card (e.g., an expansion card 22).

The display device 28, which may be an LCD, OLED, or any suitable type of display, may display various images generated by the handheld device 60. For example, the display device 28 may display various system indicators 64 that provide feedback to a user with regard to one or more states of the handheld device 60, such as power status, signal strength, external device connections, and so forth. The display may also display the GUI 52 that allows a user to interact with the device 60, as discussed above with reference to Fig. 4. The GUI 52 may include graphical elements, such as the icons 54, which may correspond to various applications that may be opened or executed upon detecting a user selection of a respective icon 54. By way of example, one of the icons 54 may represent a camera application 66 that may be used in conjunction with the camera 30 (shown in phantom lines in Fig. 5) for acquiring images. Referring briefly to Fig. 6, a rear view of the handheld electronic device 60 shown in Fig. 5 is illustrated, which shows the camera 30 integrated with the housing 42 and positioned on the rear of the handheld device 60.

As mentioned above, image data acquired using the camera 30 may be processed using the image processing circuitry 32, which may include hardware (e.g., disposed within the housing 42) and/or software stored on one or more storage devices (e.g., the memory 18 or the non-volatile storage device 20) of the device 60. Images acquired using the camera application 66 and the camera 30 may be stored on the device 60 (e.g., in the storage device 20) and may be viewed at a later time using a photo viewing application 68.

The handheld device 60 may also include various audio input and output elements. For example, the audio input/output elements, depicted generally by reference numeral 70, may include an input receiver, such as one or more microphones. For instance, where the handheld device 60 includes cell phone functionality, the input receivers may be configured to receive user audio input, such as a user's voice. Additionally, the audio input/output elements 70 may include one or more output transmitters. Such output transmitters may include one or more speakers that may function to transmit audio signals to a user, such as during the playback of music data using a media player application 72. Further, in embodiments in which the handheld device 60 includes a cell phone application, an additional audio output transmitter 74 may be provided, as shown in Fig. 5. Like the output transmitters of the audio input/output elements 70, the output transmitter 74 may also include one or more speakers configured to transmit audio signals to a user, such as voice data received during a telephone call. Thus, the audio input/output elements 70 and 74 may operate in conjunction to function as the audio receiving and transmitting elements of a telephone.

Having now provided some context with regard to the various forms that the electronic device 10 may take, the present description will focus on the image processing circuitry 32 depicted in Fig. 1. As mentioned above, the image processing circuitry 32 may be implemented using hardware and/or software components, and may include various processing units that define an image signal processing (ISP) pipeline. In particular, the following discussion may focus on aspects of the image processing techniques set forth in the present disclosure, particularly those relating to defective pixel detection/correction techniques, lens shading correction techniques, demosaicing techniques, and image sharpening techniques.

Referring now to Fig. 7, a high-level block diagram is illustrated depicting several functional components that may be implemented as part of the image processing circuitry 32, in accordance with one embodiment of the presently disclosed techniques. Particularly, Fig. 7 is intended to illustrate how image data may flow through the image processing circuitry 32 in accordance with at least one embodiment. In order to provide a general overview of the image processing circuitry 32, a general description of how these functional components operate to process image data is provided here with reference to Fig. 7, while a more specific description of each of the illustrated functional components, as well as their respective sub-components, will be provided further below.

Referring to the illustrated embodiment, the image processing circuitry 32 may include ISP pre-processing logic 80, ISP pipeline logic 82, and control logic 84. Image data captured by the imaging device 30 may be processed by the ISP pre-processing logic 80 and analyzed to capture image statistics that may be used to determine one or more control parameters for the ISP pipeline logic 82 and/or the imaging device 30. The ISP pre-processing logic 80 may be configured to capture image data from the image sensor input signal. For instance, as shown in Fig. 7, the imaging device 30 may include a camera having one or more lenses 88 and image sensor(s) 90. As discussed above, the image sensor(s) 90 may include a color filter array (e.g., a Bayer array) and may thus provide both light intensity and wavelength information captured by each imaging pixel of the image sensor 90 to produce a set of raw image data that may be processed by the ISP pre-processing logic 80. For instance, the output 92 of the imaging device 30 may be received by a sensor interface 94, which may then provide the raw image data 96 to the ISP pre-processing logic 80 based, for example, on the sensor interface type. By way of example, the sensor interface 94 may use a Standard Mobile Imaging Architecture (SMIA) interface or other serial or parallel camera interfaces, or some combination thereof. In certain embodiments, the ISP pre-processing logic 80 may operate within its own clock domain and may provide an asynchronous interface to the sensor interface 94 to support image sensors of different sizes and timing requirements.

The raw image data 96 may be provided to the ISP pre-processing logic 80 and processed on a pixel-by-pixel basis in a number of formats. For instance, each image pixel may have a bit depth of 8, 10, 12 or 14 bits. The ISP pre-processing logic 80 may perform one or more image processing operations on the raw image data 96, as well as collect statistics about the image data 96. The image processing operations and the collection of statistics may be performed at the same or at different bit depth precisions. For example, in one embodiment, processing of the raw image pixel data 96 may be performed at a precision of 14 bits. In such embodiments, raw pixel data received by the ISP pre-processing logic 80 that has a bit depth of less than 14 bits (e.g., 8 bits, 10 bits, 12 bits) may be up-sampled to 14 bits for image processing purposes. In another embodiment, statistics collection may occur at a precision of 8 bits and, thus, raw pixel data having a higher bit depth may be down-sampled to an 8-bit format for statistics purposes. As will be appreciated, down-sampling to 8 bits may reduce hardware size (e.g., area) and also reduce processing/computational complexity for the statistics. Additionally, the raw image data may be averaged spatially to allow the statistics to be more robust to noise.
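By way of illustration only, the following sketch shows how raw pixel data of varying bit depth might be up-sampled to 14-bit precision for processing and down-sampled to 8 bits for statistics collection. The shift-based conversion and the use of NumPy are assumptions made for the purposes of this example; the disclosure does not prescribe a particular conversion method.

```python
import numpy as np

def to_14bit(raw: np.ndarray, bit_depth: int) -> np.ndarray:
    """Up-sample raw pixel data (8, 10, 12 or 14 bits) to 14-bit precision.

    A simple left shift is assumed here; actual hardware may also replicate
    high-order bits into the vacated low-order bits.
    """
    return raw.astype(np.uint16) << (14 - bit_depth)

def to_8bit(raw14: np.ndarray) -> np.ndarray:
    """Down-sample 14-bit pixel data to 8 bits for statistics collection."""
    return (raw14 >> 6).astype(np.uint8)

# Example: a 10-bit raw frame promoted to 14 bits for processing and
# reduced to 8 bits for statistics.
frame10 = np.random.randint(0, 1024, size=(4, 4), dtype=np.uint16)
frame14 = to_14bit(frame10, bit_depth=10)
stats_in = to_8bit(frame14)
```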

Further, as shown in Fig. 7, the ISP pre-processing logic 80 may also receive pixel data from the memory 108. For instance, as shown by reference number 98, raw pixel data may be sent to the memory 108 from the sensor interface 94. The raw pixel data residing in the memory 108 may then be provided to the ISP pre-processing logic 80 for processing, as indicated by reference number 100. The memory 108 may be part of the memory device 18, the storage device 20, or may be a separate dedicated memory within the electronic device 10, and may include direct memory access (DMA) features. Further, in certain embodiments, the ISP pre-processing logic 80 may operate within its own clock domain and provide an asynchronous interface to the sensor interface 94 to support sensors of different sizes and having different timing requirements.

Upon receiving the raw image data 96 (from the sensor interface 94) or 100 (from the memory 108), the ISP pre-processing logic 80 may perform one or more image processing operations, such as temporal filtering and/or binning compensation filtering. The processed image data may then be provided to the ISP pipeline logic 82 (output 109) for additional processing prior to being displayed (e.g., on the display device 28), or may be sent to the memory (output signal 110). The ISP pipeline logic 82 receives the pre-processed data, either directly from the ISP pre-processing logic 80 or from the memory 108 (input signal 112), and may provide additional processing of the image data in the raw domain, as well as in the RGB and YCbCr color spaces. Image data processed by the ISP pipeline logic 82 may then be output (signal 114) to the display 28 for viewing by a user and/or may be further processed by a graphics engine or GPU. Additionally, output from the ISP pipeline logic 82 may be sent to the memory 108 (signal 115), and the display 28 may read the image data from the memory 108 (signal 116), which, in certain embodiments, may be configured to implement one or more frame buffers. Further, in some implementations, the output of the ISP pipeline logic 82 may also be provided to a compression/decompression engine 118 (signal 117) for encoding/decoding the image data. The encoded image data may be stored and then later decompressed prior to being displayed on the display device 28 (signal 119). By way of example, the compression engine or "encoder" 118 may be a JPEG (Joint Photographic Experts Group) compression engine for encoding still images, or an H.264 compression engine for encoding video, or some combination thereof, as well as a corresponding decompression engine for decoding the image data. Additional information regarding the image processing operations that may be provided in the ISP pipeline logic 82 will be discussed in greater detail below with reference to Fig. 49-79. Also, it should be noted that the ISP pipeline logic 82 may also receive raw image data from the memory 108, as depicted by the input signal 112.

Statistics 102 determined by the ISP pre-processing logic 80 may be provided to the control logic 84. The statistics 102 may include, for example, image sensor statistics relating to auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation (BLC), lens shading correction, and so forth. The control logic 84 may include a processor and/or microcontroller configured to execute one or more routines (e.g., firmware) that may be configured to determine, based upon the received statistics 102, control parameters 104 for the imaging device 30, as well as control parameters 106 for the ISP pipeline processing logic 82. By way of example only, the control parameters 104 may include sensor control parameters (e.g., gains, integration time for exposure control), camera flash control parameters, lens control parameters (e.g., focal length for focusing or zoom), or a combination of such parameters. The ISP control parameters 106 may include gain levels and color correction matrix (CCM) coefficients for auto-white balance and color adjustment (e.g., during RGB processing), as well as lens shading correction parameters which, as discussed below, may be determined based upon white point balance parameters. In some embodiments, the control logic 84 may, in addition to analyzing the statistics 102, also analyze historical statistics, which may be stored on the electronic device 10 (e.g., in the memory 18 or the storage device 20).

Due to the generally complex design of the image processing circuitry 32 shown herein, it may be beneficial to separate the discussion of the ISP pre-processing logic 80 and the ISP pipeline processing logic 82 into separate sections, as shown below. Particularly, Figures 8 to 48 of the present application may relate to the discussion of various embodiments and aspects of the ISP pre-processing logic 80, while Figures 49 to 79 of the present application may relate to the discussion of various embodiments and aspects of the ISP pipeline processing logic 82.

ISP pre-processing logic

Fig. 8 is a more detailed block diagram showing functional logic blocks that may be implemented in the ISP pre-processing logic 80, in accordance with one embodiment. Depending on the configuration of the imaging device 30 and/or sensor interface 94, as discussed above in Fig. 7, raw image data may be provided to the ISP pre-processing logic 80 by one or more image sensors 90. In the depicted embodiment, raw image data may be provided to the ISP pre-processing logic 80 by a first image sensor 90a (Sensor0) and a second image sensor 90b (Sensor1). As will be discussed further below, each image sensor 90a and 90b may be configured to apply binning to full resolution image data in order to increase the signal-to-noise ratio of the image signal. For instance, a binning technique such as 2x2 binning may be applied, which may interpolate a "binned" raw image pixel based upon four full-resolution image pixels of the same color. In one embodiment, this may result in there being four accumulated signal components associated with the binned pixel versus a single noise component, thus improving the signal-to-noise ratio of the image data, but reducing overall resolution. Additionally, binning may also result in an uneven or non-uniform spatial sampling of the image data, which may be corrected using binning compensation filtering, as will be discussed in more detail below.
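The following is a minimal sketch of 2x2 binning on a Bayer raw frame, in which each binned pixel is derived from the four nearest full-resolution pixels of the same color. Averaging is assumed for the interpolation, and the helper name bin_bayer_2x2 is illustrative; the disclosure states only that a binned pixel is interpolated from four same-colored full-resolution pixels.

```python
import numpy as np

def bin_bayer_2x2(raw: np.ndarray) -> np.ndarray:
    """Apply 2x2 binning to a Bayer raw frame (dimensions assumed to be
    multiples of 4).

    Each binned output pixel is interpolated (here, averaged - an
    assumption) from the four nearest full-resolution pixels of the same
    color, which in a Bayer pattern lie two samples apart.  The output has
    half the resolution of the input and retains the Bayer layout.
    """
    h, w = raw.shape
    out = np.empty((h // 2, w // 2), dtype=raw.dtype)
    for j in range(h // 2):
        for i in range(w // 2):
            # Top-left of the 2x2 group of same-colored source pixels.
            src_j = (j // 2) * 4 + (j % 2)
            src_i = (i // 2) * 4 + (i % 2)
            same_color = raw[src_j:src_j + 3:2, src_i:src_i + 3:2]
            out[j, i] = same_color.mean()  # truncated to the input dtype
    return out
```

Note that the binned samples produced in this way are not evenly spaced across the sensor; this is the non-uniform spatial sampling that the binning compensation filtering discussed below is intended to correct.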

As shown, the image sensors 90a and 90b may provide the raw image data as signals Sif0 and Sif1, respectively. Each of the image sensors 90a and 90b may be generally associated with respective statistics processing units 120 (StatsPipe0) and 122 (StatsPipe1), which may be configured to process image data for the determination of one or more sets of statistics (as indicated by the signals Stats0 and Stats1), including statistics relating to auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, and lens shading correction, and so forth. In certain embodiments, when only one of the sensors 90a or 90b is actively acquiring images, the image data may be sent to both StatsPipe0 and StatsPipe1 if additional statistics are desired. For instance, to provide one example, if StatsPipe0 and StatsPipe1 are both available, StatsPipe0 may be utilized to collect statistics for one color space (e.g., RGB), and StatsPipe1 may be utilized to collect statistics for another color space (e.g., YUV or YCbCr). That is, the statistics processing units 120 and 122 may operate in parallel to collect multiple sets of statistics for each frame of image data acquired by the active sensor.

In the present embodiment, five asynchronous sources of data are provided in the ISP pre-processing logic 80. These include: (1) a direct input from a sensor interface corresponding to Sensor0 (90a) (referred to as Sif0 or Sens0), (2) a direct input from a sensor interface corresponding to Sensor1 (90b) (referred to as Sif1 or Sens1), (3) Sensor0 data input from the memory 108 (referred to as SifIn0 or Sens0DMA), which may include a DMA interface, (4) Sensor1 data input from the memory 108 (referred to as SifIn1 or Sens1DMA), and (5) a set of image frames of Sensor0 and Sensor1 input data retrieved from the memory 108 (referred to as FeProcIn or ProcInDMA). The ISP pre-processing logic 80 may also include multiple destinations to which image data from the sources may be routed, wherein each destination may be either a storage location in memory (e.g., in 108) or a processing unit. For instance, in the present embodiment, the ISP pre-processing logic 80 includes six destinations: (1) Sif0DMA for receiving Sensor0 data in the memory 108, (2) Sif1DMA for receiving Sensor1 data in the memory 108, (3) the first statistics processing unit 120 (StatsPipe0), (4) the second statistics processing unit 122 (StatsPipe1), (5) the pixel pre-processing unit 130 (FEProc), and (6) FeOut (or FEProcOut) to the memory 108 or the ISP pipeline 82 (discussed in further detail below). In one embodiment, the ISP pre-processing logic 80 may be configured such that only certain destinations are valid for a particular source, as shown in Table 1 below.

Table 1
Example of valid ISP pre-processing destinations for each source

Source     | SIf0DMA | SIf1DMA | StatsPipe0 | StatsPipe1 | FEProc | FEOut
-----------|---------|---------|------------|------------|--------|------
Sens0      |    X    |         |     X      |     X      |   X    |   X
Sens1      |         |    X    |     X      |     X      |   X    |   X
Sens0DMA   |         |         |     X      |            |        |
Sens1DMA   |         |         |            |     X      |        |
ProcInDMA  |         |         |            |            |   X    |   X

For example, in accordance with Table 1, the source Sens0 (the sensor interface of Sensor0) may be configured to provide data to the destinations SIf0DMA (signal 134), StatsPipe0 (signal 136), StatsPipe1 (signal 138), FEProc (signal 140), or FEOut (signal 142). With regard to FEOut, source data may, in some instances, be provided to FEOut to bypass pixel processing by FEProc, such as for debugging or testing purposes. Additionally, the source Sens1 (the sensor interface of Sensor1) may be configured to provide data to the destinations SIf1DMA (signal 144), StatsPipe0 (signal 146), StatsPipe1 (signal 148), FEProc (signal 150), or FEOut (signal 152); the source Sens0DMA (Sensor0 data from the memory 108) may be configured to provide data to StatsPipe0 (signal 154); the source Sens1DMA (Sensor1 data from the memory 108) may be configured to provide data to StatsPipe1 (signal 156); and the source ProcInDMA (Sensor0 and Sensor1 data from the memory 108) may be configured to provide data to FEProc (signal 158) and FEOut (signal 160).
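Solely for illustration, the source-to-destination constraints of Table 1 may be expressed as a lookup that control software could consult before programming a NextVld field. The dictionary form and the check_routing helper are assumptions of this sketch, not a description of the register layout.

```python
# Valid destinations per source, following Table 1.  The dictionary
# representation is illustrative only.
VALID_DESTINATIONS = {
    "Sens0":     {"SIf0DMA", "StatsPipe0", "StatsPipe1", "FEProc", "FEOut"},
    "Sens1":     {"SIf1DMA", "StatsPipe0", "StatsPipe1", "FEProc", "FEOut"},
    "Sens0DMA":  {"StatsPipe0"},
    "Sens1DMA":  {"StatsPipe1"},
    "ProcInDMA": {"FEProc", "FEOut"},
}

def check_routing(source: str, destinations: set) -> None:
    """Raise if a requested destination is not valid for the given source."""
    invalid = destinations - VALID_DESTINATIONS[source]
    if invalid:
        raise ValueError(f"{source} cannot target {sorted(invalid)}")

check_routing("Sens0", {"SIf0DMA", "StatsPipe0", "FEProc"})  # valid routing
```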

It should be noted that the presently illustrated embodiment is configured such that Sens0DMA (Sensor0 frames from the memory 108) and Sens1DMA (Sensor1 frames from the memory 108) are provided only to StatsPipe0 and StatsPipe1, respectively. This configuration allows the ISP pre-processing logic 80 to retain a certain number of previous frames (e.g., 5 frames) in memory. For example, due to a delay or lag between the time a user initiates a capture event (e.g., transitioning the imaging system from a preview mode to a capture or recording mode, or even by simply turning on or initializing the image sensor) using the image sensor and the time an image scene is captured, not every frame that the user intended to capture may be captured and processed in substantially real time. Thus, by retaining a certain number of previous frames in the memory 108 (e.g., from a preview phase), these previous frames may be processed later or alongside the frames actually captured in response to the capture event, thus compensating for any such lag and providing a more complete set of image data.

With regard to the illustrated configuration of Fig. 8, it should be noted that StatsPipe0 120 is configured to receive one of the inputs 136 (from Sens0), 146 (from Sens1), and 154 (from Sens0DMA), as determined by selection logic 124, such as a multiplexer. Similarly, selection logic 126 may select an input from the signals 138, 156, and 148 to provide to StatsPipe1, and selection logic 132 may select an input from the signals 140, 150, and 158 to provide to FEProc. As mentioned above, the statistical data (Stats0 and Stats1) may be provided to the control logic 84 for the determination of various control parameters that may be used to operate the imaging device 30 and/or the ISP pipeline processing logic 82. As can be appreciated, the selection logic blocks (124, 126, and 132) shown in Fig. 8 may be provided by any suitable type of logic, such as a multiplexer that selects one of multiple input signals in response to a control signal.

The pixel processing unit 130 (FEProc) may be configured to perform various image processing operations on the raw image data on a pixel-by-pixel basis. As shown, FEProc 130, as a destination processing unit, may receive image data from the Sens0 (signal 140), Sens1 (signal 150), or ProcInDMA (signal 158) sources by way of the selection logic 132. FEProc 130 may also receive and output various signals (e.g., Rin, Hin, Hout, and Yout, which may represent motion history and luma data used during temporal filtering) when performing the pixel processing operations, which may include temporal filtering and binning compensation filtering, as will be discussed further below. The output 109 (FEProcOut) of the pixel processing unit 130 may then be forwarded to the ISP pipeline logic 82, such as via one or more first-in-first-out (FIFO) queues, or may be sent to the memory 108.

Further, as shown in Fig. 8, the selection logic 132, in addition to receiving the signals 140, 150, and 158, may further receive the signals 159 and 161. The signal 159 may represent pre-processed raw image data from StatsPipe0, and the signal 161 may represent pre-processed raw image data from StatsPipe1. As will be discussed below, each of the statistics processing units may apply one or more pre-processing operations to the raw image data before statistics are collected. In one embodiment, each of the statistics processing units may perform a degree of defective pixel detection/correction, lens shading correction, black level compensation, and inverse black level compensation. Thus, the signals 159 and 161 may represent raw image data that has been processed using the aforementioned pre-processing operations (as will be discussed in further detail below in Fig. 37). Thus, the selection logic 132 gives the ISP pre-processing logic 80 the flexibility of providing either un-pre-processed raw image data from Sensor0 (signal 140) and Sensor1 (signal 150), or pre-processed raw image data from StatsPipe0 (signal 159) and StatsPipe1 (signal 161). Additionally, as shown by the selection logic units 162 and 163, the ISP pre-processing logic 80 also has the flexibility of writing either un-pre-processed raw image data from Sensor0 (signal 134) or Sensor1 (signal 144) to the memory 108, or writing pre-processed raw image data from StatsPipe0 (signal 159) or StatsPipe1 (signal 161) to the memory 108.

To control the operation of the ISP pre-processing logic 80, a pre-processing control unit 164 is provided. The control unit 164 may be configured to initialize and program control registers (referred to herein as "start registers") for configuring and starting the processing of an image frame, and to select the appropriate register bank(s) for updating double-buffered data registers. In some embodiments, the control unit 164 may also provide performance monitoring logic to log information about clock cycles, memory latency, and quality of service (QOS). Further, the control unit 164 may also control dynamic clock gating, which may be used to disable clocks to one or more portions of the ISP pre-processing logic 80 when there is not enough data in the input queue from the active sensor.

Using "start registers mentioned above, the block 164 management may be able to control the update of the various parameters for each of the processing units (e.g., StatsPipe0, StatsPipe1 and FEProc) and can interact with the interfaces of sensors to control the starting and stopping of processing units. In General, each of the blocks pre-treatment works on a frame-by-frame basis. As described above (table 1), the input processing blocks may be of the sensor interface (Sens0 or Sens1) or from the memory 108. In addition, the processing units may use different parameters and configuration data that can be stored in the corresponding data registers. In one embodiment, the data registers associated with each processing unit or the addressee information can be grouped in blocks, forming a group of the Bank of registers. In the embodiment according to Fig. 8, seven groups of banks of registers can be defined in the tool pre-treatment ISP: SIf0, SIf1, StatsPipe0, StatsPipe1, ProcPipe, FEOut and ProcIn. The address space of each block registers are duplicated to provide two banks of registers. Only registers are double buffered illustrated as an example in the second Bank. If the register is not double the th buffering, address in the second Bank may be displayed in the address of the same register in the first Bank.

For registers that are double-buffered, registers from one bank are active and used by the processing units, while the registers from the other bank are shadowed. The shadowed registers may be updated by the control unit 164 during the current frame interval while the hardware is using the active registers. The determination of which bank to use for a particular processing unit at a particular frame may be specified by a "NextBk" (next bank) field in the start register corresponding to the source providing the image data to that processing unit. Essentially, NextBk is a field that allows the control unit 164 to control which register bank becomes active on the triggering event for the subsequent frame.

Before discussing the operation of the start registers in detail, Fig. 9 provides a general method 166 for processing image data on a frame-by-frame basis in accordance with the present techniques. Beginning at step 168, the destination processing units targeted by a data source (e.g., Sens0, Sens1, Sens0DMA, Sens1DMA, or ProcInDMA) enter an idle state. This may indicate that processing for the current frame is complete and, therefore, the control unit 164 may prepare for processing the next frame. For instance, at step 170, programmable parameters for each destination processing unit are updated. This may include, for example, updating the NextBk field in the start register corresponding to the source, as well as updating any parameters in the data registers corresponding to the destination units. Thereafter, at step 172, a triggering event may place the destination units into a run state. Further, as shown at step 174, each destination unit targeted by the source completes its processing operations for the current frame, and the method 166 may subsequently return to step 168 for the processing of the next frame.
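A minimal sketch of this per-frame control flow is given below. The helper methods (more_frames, wait_until_idle, program_go_register, trigger, run_frame) are hypothetical placeholders for the hardware operations described in method 166; they are not part of the disclosure.

```python
def process_frames(source, destinations, control_unit):
    """Frame-by-frame control flow corresponding to method 166 (Fig. 9)."""
    while control_unit.more_frames(source):
        # Step 168: the destination units targeted by the source are idle.
        for dst in destinations:
            dst.wait_until_idle()
        # Step 170: update programmable parameters for the next frame
        # (NextVld, NextBk and any destination data registers).
        control_unit.program_go_register(source, destinations)
        # Step 172: the triggering event places the destinations in a run state.
        control_unit.trigger(source)
        # Step 174: each targeted destination processes the current frame.
        for dst in destinations:
            dst.run_frame()
```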

Fig. 10 depicts a block diagram showing two banks of data registers 176 and 178 that may be used by the various destination units of the ISP pre-processing logic. For instance, Bank 0 (176) may include data registers 1-n (176a-176d), and Bank 1 (178) may include data registers 1-n (178a-178d). As discussed above, the embodiment shown in Fig. 8 may utilize a register bank (Bank 0) having seven register bank groups (e.g., SIf0, SIf1, StatsPipe0, StatsPipe1, ProcPipe, FEOut, and ProcIn). Thus, in such an embodiment, the register block address space of each register is duplicated to provide a second register bank (Bank 1).

Fig. 10 also illustrates the start register 180, which may correspond to one of the sources. As shown, the start register 180 includes a "NextVld" field 182 and the above-mentioned "NextBk" field 184. These fields may be programmed prior to starting the processing of the current frame. Particularly, NextVld may indicate the destination(s) to which data from the source is to be sent. As discussed above, NextBk may select a corresponding register from either Bank0 or Bank1 for each destination indicated by NextVld. Though not shown in Fig. 10, the start register 180 may also include an arming bit, referred to herein as a "start bit", which may be set to arm the start register. When a triggering event 192 for the current frame is detected, NextVld and NextBk may be copied into a CurrVld field 188 and a CurrBk field 190 of a corresponding current or "active" register 186. In one embodiment, the current register(s) 186 may be read-only registers that may be set by hardware, while remaining inaccessible to software commands within the ISP pre-processing logic 80.

As will be appreciated, a corresponding start register may be provided for each ISP pre-processing source. For the purposes of this disclosure, the start registers corresponding to the above-discussed sources Sens0, Sens1, Sens0DMA, Sens1DMA, and ProcInDMA may be referred to as Sens0Go, Sens1Go, Sens0DMAGo, Sens1DMAGo, and ProcInDMAGo, respectively. As mentioned above, the control unit may utilize the start registers to control the sequencing of frame processing within the ISP pre-processing logic 80. Each start register contains a NextVld field and a NextBk field to indicate which destinations will be valid, and which register bank (0 or 1) will be used, respectively, for the next frame. When the triggering event 192 for the next frame occurs, the NextVld and NextBk fields are copied to the corresponding read-only active register 186, which indicates the current valid destinations and bank numbers, as shown above in Fig. 10. Each source may be configured to operate asynchronously and may send data to any of its valid destinations. Further, it should be understood that, for each destination, generally only one source may be active during a current frame.
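The sketch below models, under stated assumptions, how a start register's NextVld and NextBk fields might be latched into the read-only CurrVld and CurrBk fields on a triggering event. The field widths follow the embodiment of Fig. 8 (six destinations, seven register bank groups), but the bit positions used in the example and the StartRegister class itself are assumptions, not the hardware layout.

```python
from dataclasses import dataclass, field

@dataclass
class StartRegister:
    """Simplified model of a source start register (Fig. 10 and Fig. 14)."""
    next_vld: int = 0        # one bit per destination (6 bits in Fig. 8)
    next_bk: int = 0         # one bit per register group (7 bits): bank 0 or 1
    start_bit: bool = False  # arming bit

    # Read-only "current" fields, latched by hardware on the trigger.
    curr_vld: int = field(default=0, init=False)
    curr_bk: int = field(default=0, init=False)

    def on_trigger(self) -> None:
        """Copy the programmed fields into the active fields for this frame."""
        if self.start_bit:
            self.curr_vld = self.next_vld
            self.curr_bk = self.next_bk

# Example: arm Sens0Go to target three destinations and select bank 1 for
# one register group (the bit assignments here are assumptions).
sens0_go = StartRegister(next_vld=0b010101, next_bk=0b0010000, start_bit=True)
sens0_go.on_trigger()
```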

With regard to arming and triggering the start register 180, asserting an arming bit or "start bit" in the start register 180 arms the corresponding source with its associated NextVld and NextBk fields. For triggering, various modes are available depending on whether the source input data is read from memory (e.g., Sens0DMA, Sens1DMA, or ProcInDMA), or whether the source input is from a sensor interface (e.g., Sens0 or Sens1). For instance, if the input is from the memory 108, the arming of the start bit itself may serve as the triggering event, since the control unit 164 has control over when data is read from the memory 108. If the image frames are being input by the sensor interface, then the triggering event may depend on the timing at which the corresponding start register is armed relative to when data from the sensor interface is received. In accordance with the present embodiment, three different techniques for triggering from a sensor interface input are shown in Fig. 11-13.

Referring first to Fig. 11, a first scenario is illustrated in which triggering occurs once all destinations targeted by the source transition from a busy or run state to an idle state. Here, the data signal VVALID (196) represents an image data signal from the source. The pulse 198 represents the current frame of image data, the pulse 202 represents the next frame of image data, and the interval 200 represents the vertical blanking interval (VBLANK) 200 (e.g., representing the time difference between the last line of the current frame 198 and the next frame 202). The time difference between the rising edge and the falling edge of the pulse 198 represents the frame interval 201. Thus, in Fig. 11, the source may be configured to trigger when all targeted destinations have finished processing operations on the current frame 198 and transition to an idle state. In this scenario, the source is armed (e.g., by setting the arming or "start" bit) before the destinations complete processing, so that the source can trigger and initiate processing of the next frame 202 as soon as the targeted destinations go idle. During the vertical blanking interval 200, the processing units may be set up and configured for the next frame 202 using the register banks specified by the start register corresponding to the source before the source input data arrives. By way of example only, read buffers used by FEProc 130 may be filled before the next frame 202 arrives. In this case, the shadowed registers corresponding to the active register banks may be updated after the triggering event, thus allowing for a full frame interval to set up the double-buffered registers for the next frame (e.g., after frame 202).

Fig. 12 illustrates a second scenario in which the source is triggered by arming the start bit in the start register corresponding to the source. Under this "trigger-on-start" configuration, the destination units targeted by the source are already idle, and the arming of the start bit is the triggering event. This triggering mode may be utilized for registers that are not double-buffered and, therefore, are updated during vertical blanking (e.g., as opposed to updating a double-buffered shadow register during the frame interval 201).

Fig. 13 illustrates a third triggering mode in which the source is triggered upon detecting the start of the next frame, i.e., a rising VSYNC. However, it should be noted that in this mode, if the start register is armed (by setting the start bit) after the next frame 202 has already started processing, the source will use the target destinations and register banks corresponding to the previous frame, since the CurrVld and CurrBk fields are not updated before the destination begins processing. This leaves no vertical blanking interval for setting up the destination processing units and may potentially result in dropped frames, particularly when operating in a dual sensor mode. It should be noted, however, that this mode may nonetheless result in accurate operation if the image processing circuitry 32 is operating in a single sensor mode that uses the same register banks for every frame (e.g., the destinations (NextVld) and register banks (NextBk) do not change).

Referring now to Fig. 14, the control register 180 (or start register) is illustrated in more detail. The start register 180 includes the arming "start" bit 204, as well as the NextVld field 182 and the NextBk field 184. As discussed above, each source (e.g., Sens0, Sens1, Sens0DMA, Sens1DMA, or ProcInDMA) of the ISP pre-processing logic 80 may have a corresponding start register 180. In one embodiment, the start bit 204 may be a single-bit field, and the start register 180 may be armed by setting the start bit 204 to 1. The NextVld field 182 may contain a number of bits corresponding to the number of destinations in the ISP pre-processing logic. For instance, in the embodiment shown in Fig. 8, the ISP pre-processing logic includes six destinations: Sif0DMA, Sif1DMA, StatsPipe0, StatsPipe1, FEProc, and FEOut. Accordingly, the start register 180 may include six bits in the NextVld field 182, with one bit corresponding to each destination, and wherein targeted destinations are set to 1. Similarly, the NextBk field 184 may contain a number of bits corresponding to the number of data registers in the ISP pre-processing logic 80. For instance, as discussed above, the embodiment of the ISP pre-processing logic 80 shown in Fig. 8 may include seven data registers: SIf0, SIf1, StatsPipe0, StatsPipe1, ProcPipe, FEOut, and ProcIn. Accordingly, the NextBk field 184 may include seven bits, with one bit corresponding to each data register, wherein data registers corresponding to Bank 0 and Bank 1 are selected by setting their respective bit values to 0 or 1, respectively. Thus, using the start register 180, the source, upon triggering, knows precisely which destination units are to receive frame data, and which register banks are to be used for configuring the targeted destination units.

Additionally, due to the dual sensor configuration supported by the ISP circuitry 32, the ISP pre-processing logic may operate in a single sensor configuration mode (e.g., only one sensor is acquiring data) and a dual sensor configuration mode (e.g., both sensors are acquiring data). In a typical single sensor configuration, input data from a sensor interface, such as Sens0, is sent to StatsPipe0 (for statistics processing) and FEProc (for pixel processing). In addition, sensor frames may also be sent to memory (SIf0DMA) for future processing, as discussed above.

An example of how the NextVld fields corresponding to each source of the ISP pre-processing logic 80 may be configured when operating in a single sensor mode is shown below in Table 2.

As discussed above with reference to Table 1, the ISP pre-processing logic 80 may be configured such that only certain destinations are valid for a particular source. Thus, the destinations in Table 2 marked with "X" are intended to indicate that the ISP pre-processing logic 80 is not configured to allow a particular source to send frame data to that destination. For such destinations, the bits of the NextVld field of the particular source corresponding to that destination may always be 0. It should be understood, however, that this is merely one embodiment and, indeed, in other embodiments, the ISP pre-processing logic 80 may be configured such that each source is capable of targeting every available destination unit.

The configuration shown above in Table 2 represents a single sensor mode in which only Sensor0 is providing frame data. For instance, the Sens0Go register indicates the destinations as being SIf0DMA, StatsPipe0, and FEProc. Thus, when triggered, each frame of the Sensor0 image data is sent to these three destinations. As discussed above, SIf0DMA may store frames in the memory 108 for later processing, StatsPipe0 applies statistics processing to determine various statistical data points, and FEProc processes the frame using, for example, temporal filtering and binning compensation filtering. Further, in some configurations where additional statistics are desired (e.g., statistics in different color spaces), StatsPipe1 may also be enabled (its corresponding NextVld bit set to 1) during the single sensor mode. In such embodiments, the Sensor0 frame data is sent to both StatsPipe0 and StatsPipe1. Further, as shown in the present embodiment, a single sensor interface (e.g., Sens0 or, alternatively, Sens1) is the only active source during the single sensor mode.

With this in mind, Fig. 15 provides a flow chart depicting a method 206 for processing frame data in the ISP pre-processing logic 80 when only a single sensor (e.g., Sensor0) is active. While the method 206 illustrates, in particular, the processing of Sensor0 frame data by FEProc 130 as an example, it should be understood that this process may be applied to any other source and corresponding destination unit in the ISP pre-processing logic 80. Beginning at step 208, Sensor0 begins acquiring image data and sending the captured frames to the ISP pre-processing logic 80. The control unit 164 may initialize programming of the start register corresponding to Sens0 (the Sensor0 interface) to determine the target destinations (including FEProc) and which bank registers to use, as shown at step 210. Thereafter, decision logic 212 determines whether a source triggering event has occurred. As discussed above, frame data input from a sensor interface may utilize different triggering modes (Fig. 11-13). If a triggering event is not detected, the process 206 continues to wait for the trigger. Once triggering occurs, the next frame becomes the current frame and is sent to FEProc (and other target destinations) for processing at step 214. FEProc may be configured using data parameters based on the corresponding data register (ProcPipe) specified in the NextBk field of the Sens0Go register. After processing of the current frame is completed at step 216, the method 206 may return to step 210, at which the Sens0Go register is programmed for the next frame.

When both Sensor0 and Sensor1 of the ISP pre-processing logic 80 are active, statistics processing remains generally straightforward, since each sensor input may be processed by its respective statistics block, StatsPipe0 or StatsPipe1. However, because the illustrated embodiment of the ISP pre-processing logic 80 provides only a single pixel processing unit (FEProc), FEProc may be configured to alternate between processing frames corresponding to Sensor0 input data and frames corresponding to Sensor1 input data. As will be appreciated, the image frames are read into FEProc from memory in the illustrated embodiment in order to avoid a condition in which image data from one sensor is processed in real time while image data from the other sensor is not processed in real time. For instance, as shown in Table 3 below, which depicts one possible configuration of the NextVld fields in the start registers for each source when the ISP pre-processing logic 80 is operating in a dual sensor mode, input data from each sensor is sent to memory (SIf0DMA and SIf1DMA) and to the corresponding statistics processing unit (StatsPipe0 and StatsPipe1).

Table 3
Example of NextVld per source: dual sensor mode

Source       | SIf0DMA | SIf1DMA | StatsPipe0 | StatsPipe1 | FEProc | FEOut
-------------|---------|---------|------------|------------|--------|------
Sens0Go      |    1    |    X    |     1      |     0      |   0    |   0
Sens1Go      |    X    |    1    |     0      |     1      |   0    |   0
Sens0DMAGo   |    X    |    X    |     0      |     X      |   X    |   X
Sens1DMAGo   |    X    |    X    |     X      |     0      |   X    |   X
ProcInDMAGo  |    X    |    X    |     X      |     X      |   1    |   0

The sensor frames in memory are sent to FEProc from the ProcInDMA source, such that they alternate between Sensor0 and Sensor1 at a rate based on their respective frame rates. For instance, if Sensor0 and Sensor1 are both acquiring image data at 30 frames per second (fps), then their sensor frames may be interleaved in a 1-to-1 manner. If Sensor0 (30 fps) is acquiring image data at double the rate of Sensor1 (15 fps), then the interleaving may, for example, be 2-to-1. That is, two frames of Sensor0 data are read out of memory for every one frame of Sensor1 data.
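As an illustration of the interleaving described above, the sketch below derives a simple read order from the two frame rates. The disclosure states only that the alternation rate is based on the respective frame rates; the ratio-based schedule and the interleave_order generator are assumptions of this example.

```python
from math import gcd
from itertools import islice

def interleave_order(fps0: int, fps1: int):
    """Yield "Sensor0"/"Sensor1" frame reads in proportion to the frame rates."""
    g = gcd(fps0, fps1)
    n0, n1 = fps0 // g, fps1 // g
    while True:
        for _ in range(n0):
            yield "Sensor0"
        for _ in range(n1):
            yield "Sensor1"

print(list(islice(interleave_order(30, 30), 6)))  # 1-to-1 alternation
print(list(islice(interleave_order(30, 15), 6)))  # two Sensor0 frames per Sensor1 frame
```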

Fig. 16 depicts a method 220 for processing frame data in the ISP pre-processing logic 80 when two sensors are acquiring image data simultaneously. At step 222, both Sensor0 and Sensor1 begin acquiring image frames. As will be appreciated, Sensor0 and Sensor1 may acquire the image frames using different frame rates, resolutions, and so forth. At step 224, the acquired frames from Sensor0 and Sensor1 are written to the memory 108 (e.g., using the SIf0DMA and SIf1DMA destinations). Next, the source ProcInDMA reads the frame data from the memory 108 in an alternating manner, as indicated at step 226. As discussed, the frames may alternate between Sensor0 data and Sensor1 data depending on the frame rates at which the data is acquired. At step 228, the next frame from ProcInDMA is acquired. Thereafter, at step 230, the NextVld and NextBk fields of the start register corresponding to the source, here ProcInDMA, are programmed depending on whether the next frame is Sensor0 or Sensor1 data. Thereafter, decision logic 232 determines whether a source triggering event has occurred. As discussed above, data input from memory may be triggered by arming the start bit (e.g., the "trigger-on-start" mode). Thus, triggering may occur once the start bit of the start register is set to 1. Once triggering occurs, the next frame becomes the current frame and is sent to FEProc for processing at step 234. As discussed above, FEProc may be configured using data parameters based on the corresponding data register (ProcPipe) specified in the NextBk field of the ProcInDMAGo register. After processing of the current frame is completed at step 236, the method 220 may return to step 228 and continue.

A further operational event that the ISP pre-processing logic 80 is configured to handle is a configuration change during image processing. For instance, such an event may occur when the ISP pre-processing logic 80 transitions from a single sensor configuration to a dual sensor configuration, or vice versa. As discussed above, the NextVld fields for certain sources may differ depending on whether one or both image sensors are active. Thus, when the sensor configuration is changed, the ISP pre-processing control unit 164 may release all destination units before they are targeted by a new source. This may avoid invalid configurations (e.g., assigning multiple sources to one destination). In one embodiment, the release of the destination units may be accomplished by setting the NextVld fields of all the start registers to 0, thus disabling all destinations, and arming the start bit. After the destination units are released, the start registers may be reconfigured depending on the current sensor mode, and image processing may continue.

A method 240 for switching between single and dual sensor configurations is shown in Fig. 17, in accordance with one embodiment. Beginning at step 242, a next frame of image data from a particular source of the ISP pre-processing logic 80 is identified. At step 244, the target destinations (NextVld) are programmed into the start register corresponding to the source. Next, at step 246, depending on the target destinations, NextBk is programmed to point to the correct data registers associated with the target destinations. Thereafter, decision logic 248 determines whether a source triggering event has occurred. Once triggering occurs, the next frame is sent to the destination units specified by NextVld and processed by the destination units using the corresponding data registers specified by NextBk, as shown at step 250. The processing continues to step 252, at which the processing of the current frame is completed.

Subsequently, decision logic 254 determines whether there is a change in the target destinations for the source. As discussed above, the NextVld settings of the start registers corresponding to Sens0 and Sens1 may vary depending on whether one sensor or both sensors are active. For instance, referring to Table 2, if only Sensor0 is active, then Sensor0 data is sent to SIf0DMA, StatsPipe0, and FEProc. However, referring to Table 3, if both Sensor0 and Sensor1 are active, then the Sensor0 data is not sent directly to FEProc. Instead, as mentioned above, the Sensor0 and Sensor1 data is written to the memory 108 and is read into FEProc in an alternating manner by the source ProcInDMA. Thus, if no target destination change is detected at decision logic 254, the control unit 164 deduces that the sensor configuration has not changed, and the method 240 returns to step 246, whereat the NextBk field of the source start register is programmed to point to the correct data registers for the next frame, and continues.

If, however, a destination change is detected at decision logic 254, then the control unit 164 determines that a sensor configuration change has occurred. For instance, this could represent switching from single sensor mode to dual sensor mode, or shutting off the sensors altogether. Accordingly, the method 240 continues to step 256, at which all bits of the NextVld fields for all start registers are set to 0, thus effectively disabling the sending of frames to any destination on the next trigger. Then, at decision logic 258, a determination is made as to whether all destination units have transitioned to an idle state. If not, the method 240 waits at decision logic 258 until all destination units have completed their current operations. Next, at decision logic 260, a determination is made as to whether image processing is to continue. For instance, if the destination change represented the deactivation of both Sensor0 and Sensor1, then image processing ends at step 262. However, if it is determined that image processing is to continue, then the method 240 returns to step 244, and the NextVld fields of the start registers are programmed in accordance with the current operation mode (e.g., single sensor or dual sensor). As shown here, the steps 254-262 for clearing the start registers and destination fields may collectively be referred to by reference number 264.
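A rough sketch of this release-and-reconfigure sequence (reference number 264) is shown below. The control-unit helpers (clear_go_register, apply_mode) and the structure of the loop are assumptions; the disclosure describes only the clearing of the NextVld fields, the wait for the destinations to go idle, and the subsequent reprogramming.

```python
def reconfigure_sensors(sources, destinations, control_unit, new_mode):
    """Release all destinations and reprogram for a new sensor configuration
    (steps 254-262, reference 264, of Fig. 17)."""
    # Step 256: clear NextVld in every start register so that no frame is
    # routed to any destination on the next trigger.
    for src in sources:
        control_unit.clear_go_register(src)
    # Step 258: wait until every destination unit has completed its current
    # operation and is idle.
    for dst in destinations:
        dst.wait_until_idle()
    # Steps 260/244: if processing is to continue, reprogram the NextVld and
    # NextBk fields for the new operating mode (single or dual sensor).
    if new_mode is not None:
        control_unit.apply_mode(new_mode)
```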

Next, Fig. 18 shows a further embodiment by way of a flow chart (method 265) that provides another dual sensor mode of operation. The method 265 depicts a condition in which one sensor (e.g., Sensor0) is actively acquiring image data and sending the image frames to FEProc 130 for processing, while also sending the image frames to StatsPipe0 and/or the memory 108 (Sif0DMA), while the other sensor (e.g., Sensor1) is inactive (e.g., turned off), as shown at step 266. Decision logic 268 then detects a condition in which Sensor1 will become active on the next frame to send image data to FEProc. If this condition is not met, then the method 265 returns to step 266. However, if this condition is met, then the method 265 proceeds by performing action 264 (collectively, steps 254-262 of Fig. 17), whereby the destination fields of the sources are cleared and reconfigured at step 264. For instance, at step 264, the NextVld field of the start register associated with Sensor1 may be programmed to specify FEProc as a destination, as well as StatsPipe1 and/or the memory (Sif1DMA), while the NextVld field of the start register associated with Sensor0 may be programmed to clear FEProc as a destination. In this embodiment, although frames captured by Sensor0 are not sent to FEProc on the next frame, Sensor0 may remain active and continue to send its image frames to StatsPipe0, as shown at step 270, while Sensor1 captures and sends data to FEProc for processing at step 272. Thus, both sensors, Sensor0 and Sensor1, may continue to operate in this "dual sensor" mode, although only image frames from one sensor are sent to FEProc for processing. For the purposes of this example, the sensor sending frames to FEProc for processing may be referred to as the "active sensor," the sensor that does not send frames to FEProc but is still sending data to the statistics processing units may be referred to as the "semi-active sensor," and the sensor that is not acquiring data at all may be referred to as the "inactive sensor."

One benefit of the foregoing technique is that, because statistics continue to be acquired for the semi-active sensor (Sensor0), the next time the semi-active sensor transitions to an active state and the current active sensor (Sensor1) transitions to a semi-active or inactive state, the semi-active sensor may begin acquiring data within one frame, since color balance and exposure parameters may already be available due to the continued collection of image statistics. This technique may be referred to as "hot switching" of the image sensors, and avoids drawbacks associated with "cold starts" of the image sensors (e.g., starting with no statistics information available). Further, to save power, since each source is asynchronous (as mentioned above), the semi-active sensor may operate at a reduced clock and/or frame rate during the semi-active period.

Before continuing with a more detailed description of the statistics processing and pixel processing operations depicted in the ISP pre-processing logic 80 of Fig. 8, it is believed that a brief introduction regarding the definitions of various frame regions will help facilitate a better understanding of the present subject matter. With this in mind, various frame regions that may be defined within an image source frame are illustrated in Fig. 19. The format for a source frame provided to the image processing circuitry 32 may use the tiled or linear addressing modes discussed above, and may utilize pixel formats of 8, 10, 12 or 14-bit precision. The image source frame 274, as shown in Fig. 19, may include a sensor frame region 276, a raw frame region 278, and an active region 280. The sensor frame 276 is generally the maximum frame size that the image sensor 90 can provide to the image processing circuitry 32. The raw frame region 278 may be defined as the region of the sensor frame 276 that is sent to the ISP pre-processing logic 80. The active region 280 may be defined as a portion of the source frame 274, typically within the raw frame region 278, on which processing is performed for a particular image processing operation. In accordance with embodiments of the present technique, the active region 280 may be the same or may be different for different image processing operations.

In accordance with aspects of the present technique, the ISP pre-processing logic 80 only receives the raw frame 278. Thus, for the purposes of the present discussion, the global frame size for the ISP pre-processing logic 80 may be assumed to be the raw frame size, as determined by the width 282 and the height 284. In some embodiments, the offset from the boundaries of the sensor frame 276 to the raw frame 278 may be determined and/or maintained by the control logic 84. For instance, the control logic 84 may include firmware that may determine the raw frame region 278 based upon input parameters, such as the x-offset 286 and the y-offset 288, which are specified relative to the sensor frame 276. Further, in some cases, a processing unit within the ISP pre-processing logic 80 or the ISP pipeline logic 82 may have a defined active region, such that pixels in the raw frame but outside the active region 280 will not be processed, i.e., are left unchanged. For instance, an active region 280 for a particular processing unit having a width 290 and a height 292 may be defined based upon an x-offset 294 and a y-offset 296 relative to the raw frame 278. Further, where an active region is not specifically defined, one embodiment of the image processing circuitry 32 may default the active region 280 to be the same as the raw frame 278 (e.g., the x-offset 294 and the y-offset 296 are both equal to 0). Thus, for the purposes of image processing operations performed on the image data, boundary conditions may be defined with respect to the boundaries of the raw frame 278 or the active region 280.
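The following sketch illustrates how the sensor frame, raw frame, and active region might be represented as offset/size records, and how the active region may default to the raw frame when not specified. The FrameRegion type and all numeric values are illustrative assumptions, not values taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class FrameRegion:
    """A rectangular region defined by an (x, y) offset and a size."""
    x_offset: int
    y_offset: int
    width: int
    height: int

# Illustrative values only: the raw frame 278 is offset within the sensor
# frame 276, and the active region 280 is offset within the raw frame.
sensor_frame = FrameRegion(x_offset=0, y_offset=0, width=2608, height=1960)
raw_frame    = FrameRegion(x_offset=4, y_offset=4, width=2600, height=1952)
active       = FrameRegion(x_offset=8, y_offset=8, width=2584, height=1936)

def default_active(raw: FrameRegion) -> FrameRegion:
    """Default the active region to the full raw frame (offsets of zero)."""
    return FrameRegion(0, 0, raw.width, raw.height)
```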

Keeping these points in mind and referring to Fig. 20, a more detailed view of the ISP pre-processing pixel processing logic 130 (previously discussed in Fig. 8) is illustrated, in accordance with an embodiment of the present technique. As shown, the ISP pre-processing pixel processing logic 130 includes a temporal filter 298 and a binning compensation filter 300. The temporal filter 298 may receive one of the input image signals Sif0, Sif1, FEProcIn, or the pre-processed image signals (e.g., 159, 161), and may operate on the raw pixel data before any additional processing is performed. For example, the temporal filter 298 may initially process the image data to reduce noise by averaging image frames in the temporal direction. The binning compensation filter 300, which is discussed in more detail below, may apply scaling and re-sampling to the binned raw image data from an image sensor (e.g., 90a, 90b) in order to maintain an even spatial distribution of the image pixels.

The temporal filter 298 may be pixel-adaptive based upon motion and brightness characteristics. For instance, when pixel motion is high, the filtering strength may be reduced in order to avoid the appearance of "trailing" or "ghosting artifacts" in the resulting processed image, whereas the filtering strength may be increased when little or no motion is detected. Additionally, the filtering strength may also be adjusted based upon brightness data (e.g., "luma"). For instance, as image brightness increases, filtering artifacts may become more noticeable to the human eye. Thus, the filtering strength may be further reduced when a pixel has a high level of brightness.

In applying temporal filtering, the temporal filter 298 may receive reference pixel data (Rin) and motion history input data (Hin), which may be from a previous filtered or original frame. Using these parameters, the temporal filter 298 may provide motion history output data (Hout) and a filtered pixel output (Yout). The filtered pixel output Yout is then passed to the binning compensation filter 300, which may be configured to perform one or more scaling operations on the filtered pixel output data Yout to produce the output signal FEProcOut. The processed pixel data FEProcOut may then be forwarded to the ISP pipeline logic 82, as discussed above.

Referring to Fig. 21, a process diagram depicting a temporal filtering process 302 that may be performed by the temporal filter shown in Fig. 20 is illustrated, in accordance with a first embodiment. The temporal filter 298 may include a 2-tap filter, wherein the filter coefficients are adjusted adaptively on a per-pixel basis based at least partially upon motion and brightness data. For instance, input pixels x(t), with the variable "t" denoting a temporal value, may be compared to reference pixels r(t-1) in a previously filtered frame or a previous original frame to generate a motion index lookup in a motion history table (M) 304 that may contain filter coefficients. Additionally, based upon the motion history input data h(t-1), a motion history output h(t) corresponding to the current input pixel x(t) may be determined.

The motion history output h(t) and a filter coefficient, K, may be determined based upon a motion delta d(j,i,t), wherein (j,i) represents the coordinates of the spatial location of the current pixel x(j,i,t). The motion delta d(j,i,t) may be computed by determining the maximum of three absolute deltas between original and reference pixels for three horizontally collocated pixels of the same color. For instance, referring briefly to Fig. 22, the spatial locations of three collocated reference pixels 308, 309, and 310 that correspond to the original input pixels 312, 313, and 314 are illustrated. In one embodiment, the motion delta may be calculated based upon these original and reference pixels using the formula below:

d(j,i,t) = max3[abs(x(j,i-2,t) - r(j,i-2,t-1)),
                abs(x(j,i,t)   - r(j,i,t-1)),
                abs(x(j,i+2,t) - r(j,i+2,t-1))]        (1a)

A flow chart depicting this technique for determining the motion delta value is illustrated further below in Fig. 24. Further, it should be understood that the technique for calculating the motion delta value, as shown above in equation 1a (and below in Fig. 24), is only intended to provide one embodiment for determining a motion delta value.

In other embodiments, an array of same-colored pixels could be evaluated to determine a motion delta value. For instance, in addition to the three pixels referenced in equation 1a, one embodiment for determining motion delta values may also include evaluating the absolute deltas between same-colored pixels located two rows above (e.g., j-2; assuming a Bayer pattern) the original pixels 312, 313, and 314 and their corresponding collocated reference pixels, and two rows below (e.g., j+2; assuming a Bayer pattern) the original pixels 312, 313, and 314 and their corresponding collocated reference pixels. For instance, in one embodiment, the motion delta value may be expressed as follows:

d(j,i,t)=max9[abs(x(j,i-2,t)-r(j,i-2,t-1)), abs(x(j,i,t)-r(j,i,t-1)), abs(x(j,i+2,t)-r(j,i+2,t-1)), abs(x(j-2,i-2,t)-r(j-2,i-2,t-1)), abs(x(j-2,i,t)-r(j-2,i,t-1)), abs(x(j-2,i+2,t)-r(j-2,i+2,t-1)), abs(x(j+2,i-2,t)-r(j+2,i-2,t-1)), abs(x(j+2,i,t)-r(j+2,i,t-1)), abs(x(j+2,i+2,t)-r(j+2,i+2,t-1))] (1b)

Thus, in the embodiment depicted by equation 1b, the motion delta value may be determined by comparing the absolute deltas between a 3x3 array of same-colored pixels, with the current pixel (313) located at the center of the 3x3 array (e.g., effectively a 5x5 array for Bayer color patterns if pixels of different colors are counted). It should be appreciated that any suitable two-dimensional array of same-colored pixels (including arrays with all pixels in the same row (e.g., equation 1a) or arrays with all pixels in the same column), with the current pixel (e.g., 313) located at the center of the array, could be analyzed to determine a motion delta value. Further, while the motion delta value could be determined as the maximum of the absolute deltas (e.g., as shown in equations 1a and 1b), in other embodiments the motion delta value could also be selected as the mean or median of the absolute deltas. Additionally, the foregoing techniques may also be applied to other types of color filter arrays (e.g., RGBW, CYGM, etc.) and are not intended to be exclusive to Bayer patterns.
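Merely as an illustration, a brief software sketch of the motion delta calculation of equations 1a and 1b is set forth below; the two-dimensional array layout, the integer pixel values and the omission of frame-border handling are assumptions of the sketch rather than features of the described embodiment.

def motion_delta(x, r, j, i, offsets=((0, -2), (0, 0), (0, 2))):
    # x and r are 2-D sequences holding the current and reference frames.  The
    # default offsets select the three horizontally collocated same-colored
    # pixels of equation 1a (same-colored Bayer neighbors are two columns
    # apart); passing nine offsets with row offsets of -2, 0 and +2 gives the
    # 3x3 same-colored variant of equation 1b.
    return max(abs(x[j + dj][i + di] - r[j + dj][i + di]) for dj, di in offsets)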

Returning to Fig. 21, once the motion delta value is determined, a motion table lookup index, which may be used to select the filter coefficient K from the motion table (M) 304, may be calculated by summing the motion delta d(t) for the current pixel (e.g., at spatial location (j,i)) with the motion history input h(t-1). For instance, the filter coefficient K may be determined as set forth below:

K=M[d(j,i,t)+h(j,i,t-1)](2a)

Additionally, the motion history output h(t) may be determined using the following formula:

h(j,i,t)=d(j,i,t)+(1-K)×h(j,i,t-1) (3a)

Next, the brightness of the current input pixel x(t) may be used to generate a luma lookup index into a luma table (L) 306. In one embodiment, the luma table may contain attenuation factors that may be between 0 and 1 and may be selected based upon the luma index. A second filter coefficient, K', may be calculated by multiplying the first filter coefficient K by the luma attenuation factor, as shown in the following equation:

K'=K×L[x(j,i,t)](4a)

The determined value for K' may then be used as the filtering coefficient for the temporal filter 298. As discussed above, the temporal filter 298 may be a 2-tap filter. Additionally, the temporal filter 298 may be configured as an infinite impulse response (IIR) filter using the previous filtered frame or as a finite impulse response (FIR) filter using the previous original frame. The temporal filter 298 may compute the filtered output pixel y(t) (Yout) using the current input pixel x(t), the reference pixel r(t-1) and the filter coefficient K' using the following formula:

y(j,i,t)=r(j,i,t-1)+K'(x(j,i,t)-r(j,i,t-1))(5a)

As discussed above, the temporal filtering process 302 shown in Fig. 21 may be performed on a pixel-by-pixel basis. In one embodiment, the same motion table M and luma table L may be used for all color components (e.g., R, G and B). Additionally, some embodiments may provide a bypass mechanism, in which temporal filtering may be bypassed, for example, in response to a control signal from the control logic 84. Further, as discussed below with respect to Fig. 26 and 27, one embodiment of the temporal filter 298 may utilize separate motion and luma tables for each color component of the image data.
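By way of illustration only, the following software sketch combines equations 2a through 5a into a single per-pixel temporal filtering step; the clamping of the lookup indices to the table depths is an assumption of the sketch, as the described embodiment does not specify the table sizes.

def temporal_filter_pixel(x, r, h_prev, d, motion_table, luma_table):
    # x: current input pixel x(t); r: collocated reference pixel r(t-1)
    # h_prev: motion history input h(t-1); d: motion delta d(j,i,t)
    # motion_table (M) and luma_table (L) are finite lookup tables.
    m_idx = min(int(d + h_prev), len(motion_table) - 1)
    K = motion_table[m_idx]              # first filter coefficient (equation 2a)
    h_out = d + (1.0 - K) * h_prev       # motion history output (equation 3a)
    l_idx = min(int(x), len(luma_table) - 1)
    K2 = K * luma_table[l_idx]           # attenuated coefficient K' (equation 4a)
    y = r + K2 * (x - r)                 # filtered output pixel (equation 5a)
    return y, h_out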

The embodiment of the temporal filtering technique described with reference to Fig. 21 and 22 may be better understood in view of Fig. 23, which depicts a flow chart illustrating a method 315 in accordance with the above-described embodiment. The method 315 begins at step 316, at which a current pixel x(t) located at spatial location (j,i) of a current frame of image data is received by the temporal filtering system 302. At step 317, a motion delta value d(t) is determined for the current pixel x(t) based, at least in part, upon one or more collocated reference pixels (e.g., r(t-1)) from a previous frame of the image data (e.g., the image frame immediately preceding the current frame). A technique for determining the motion delta value d(t) at step 317 is further explained below with reference to Fig. 24, and may be performed in accordance with equation 1a, as shown above.

Once the motion delta value d(t) from step 317 is obtained, a motion table lookup index may be determined using the motion delta value d(t) and the motion history input value h(t-1) corresponding to the spatial location (j,i) from the previous frame, as shown at step 318. Additionally, although not shown, a motion history value h(t) corresponding to the current pixel x(t) may also be determined at step 318 once the motion delta value d(t) is known, for example, by using equation 3a shown above. Thereafter, at step 319, a first filter coefficient K may be selected from the motion table 304 using the motion table lookup index from step 318. The determination of the motion table lookup index and the selection of the first filter coefficient K from the motion table may be performed in accordance with equation 2a, as shown above.

Next, at step 320, an attenuation factor may be selected from the luma table 306. For instance, the luma table 306 may contain attenuation factors ranging between approximately 0 and 1, and the attenuation factor may be selected from the luma table 306 using the value of the current pixel x(t) as a lookup index. Once the attenuation factor is selected, a second filter coefficient K' may be determined at step 321 using the selected attenuation factor and the first filter coefficient K (from step 319), as shown in equation 4a above. Then, at step 322, a temporally filtered output value y(t) corresponding to the current input pixel x(t) is determined based upon the second filter coefficient K' (from step 321), the value of the collocated reference pixel r(t-1) and the value of the input pixel x(t). For instance, in one embodiment, the output value y(t) may be determined in accordance with equation 5a, as shown above.

Referring to Fig. 24, the step 317 for determining the motion delta value d(t) from the method 315 is illustrated in more detail in accordance with one embodiment. In particular, the determination of the motion delta value d(t) may generally correspond to the operation depicted above in accordance with equation 1a. As shown, the step 317 may include the sub-steps 324-327. Beginning at sub-step 324, a set of three horizontally adjacent pixels having the same color as the current input pixel x(t) is identified. By way of example, in accordance with the embodiment shown in Fig. 22, the image data may include Bayer image data, and the three horizontally adjacent pixels may include the current input pixel x(t) (313), a second pixel 312 of the same color to the left of the current input pixel 313 and a third pixel 314 of the same color to the right of the current input pixel 313.

Next, at sub-step 325, three collocated reference pixels 308, 309 and 310 from the previous frame, corresponding to the selected set of three horizontally adjacent pixels 312, 313 and 314, are identified. Using the selected pixels 312, 313 and 314 and the three collocated reference pixels 308, 309 and 310, the absolute values of the differences between each of the three selected pixels 312, 313 and 314 and their respective collocated reference pixels 308, 309 and 310 are determined at sub-step 326. Then, at sub-step 327, the maximum of the three differences from sub-step 326 is selected as the motion delta value d(t) for the current input pixel x(t). As discussed above, Fig. 24, which illustrates the motion delta value calculation technique shown in equation 1a, is only intended to provide one embodiment. Indeed, as discussed above, any suitable two-dimensional array of same-colored pixels with the current pixel located at the center of the array may be used to determine a motion delta value (e.g., equation 1b).

A further embodiment of a technique for applying temporal filtering to image data is additionally depicted in Fig. 25. For instance, because signal-to-noise ratios for different color components of the image data may differ, a gain may be applied to the current pixel so that the current pixel is gained before the motion and luma values are selected from the motion table 304 and the luma table 306. By applying a respective gain that is color-dependent, the signal-to-noise ratio may be more consistent among the different color components. By way of example only, in an implementation that uses raw Bayer image data, the red and blue color channels may generally be more sensitive than the green (Gr and Gb) color channels. Thus, by applying an appropriate color-dependent gain to each processed pixel, the signal-to-noise variation between the color components may generally be reduced, thereby reducing, among other things, ghosting artifacts, as well as improving consistency across different colors after automatic white balance gains are applied.

With this in mind, Fig. 25 provides a flow chart depicting a method 328 for applying temporal filtering to image data received by the pixel preprocessing unit 130 in accordance with such an embodiment. Beginning at step 329, a current pixel x(t) located at spatial location (j,i) of a current frame of image data is received by the temporal filtering system 302. At step 330, a motion delta value d(t) is determined for the current pixel x(t) based, at least in part, upon one or more collocated reference pixels (e.g., r(t-1)) from a previous frame of the image data (e.g., the image frame immediately preceding the current frame). Step 330 may be similar to step 317 of Fig. 23, and may utilize the operation represented in equation 1a above.

Next, at step 331, a motion table lookup index may be determined using the motion delta value d(t), the motion history input value h(t-1) corresponding to the spatial location (j,i) from the previous frame (e.g., corresponding to the collocated reference pixel r(t-1)), and a gain associated with the color of the current pixel. Thereafter, at step 332, a first filter coefficient K may be selected from the motion table 304 using the motion table lookup index determined at step 331. By way of example only, in one embodiment, the filter coefficient K and the motion table lookup index may be determined as set forth below:

K=M[gain[c]×(d(j,i,t)+h(j,i,t-1))] (2b)

wherein M represents the motion table, and wherein gain[c] corresponds to the gain associated with the color of the current pixel. Additionally, although not shown in Fig. 25, it should be understood that a motion history output value h(t) for the current pixel may also be determined and may be used to apply temporal filtering to a collocated pixel of a subsequent image frame (e.g., the next frame). In the present embodiment, the motion history output h(t) for the current pixel x(t) may be determined using the following formula:

h(j,i,t)=d(j,i,t)+K[h(j,i,t-1)-d(j,i,t)] (3b)

Next, at step 333, an attenuation factor may be selected from the luma table 306 using a luma table lookup index determined based upon the gain (gain[c]) associated with the color of the current pixel x(t). As discussed above, the attenuation factors stored in the luma table may have a range of approximately 0 to 1. Thereafter, at step 334, a second filter coefficient K' may be calculated based upon the attenuation factor (from step 333) and the first filter coefficient K (from step 332). By way of example only, in one embodiment, the second filter coefficient K' and the luma table lookup index may be determined as set forth below:

K'=K×L[gain[c]×x(j,i,t)](4b)

Next, at step 335, a temporally filtered output value y(t) corresponding to the current input pixel x(t) is determined based upon the second filter coefficient K' (from step 334), the value of the collocated reference pixel r(t-1) and the value of the input pixel x(t). For instance, in one embodiment, the output value y(t) may be determined as set forth below:

y(j,i,t)=x(j,i,t)+K'(r(j,i,t-1)-x(j,i,t))(5b)
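Again merely as an illustration, the color-dependent-gain variant of equations 2b through 5b may be sketched as set forth below; as before, the clamping of the lookup indices to the table depths is an assumption of the sketch.

def temporal_filter_pixel_gain(x, r, h_prev, d, gain_c, motion_table, luma_table):
    # gain_c is the gain associated with the color of the current pixel.
    m_idx = min(int(gain_c * (d + h_prev)), len(motion_table) - 1)
    K = motion_table[m_idx]              # equation 2b
    h_out = d + K * (h_prev - d)         # equation 3b
    l_idx = min(int(gain_c * x), len(luma_table) - 1)
    K2 = K * luma_table[l_idx]           # equation 4b
    y = x + K2 * (r - x)                 # equation 5b
    return y, h_out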

Continuing to Fig. 26, a further embodiment of the temporal filtering process, here designated 336, is depicted. Here, the temporal filtering process 336 may be accomplished in a manner similar to the embodiment discussed in Fig. 25, except that, instead of applying a color-dependent gain (e.g., gain[c]) to each input pixel and using shared motion and luma tables, separate motion and luma tables are provided for each color component. For instance, as shown in Fig. 26, the motion tables 304 may include a motion table 304a corresponding to a first color, a motion table 304b corresponding to a second color, and a motion table 304c corresponding to an n-th color, wherein n is the number of colors present in the raw image data. Similarly, the luma tables 306 may include a luma table 306a corresponding to the first color, a luma table 306b corresponding to the second color, and a luma table 306c corresponding to the n-th color. Thus, in an embodiment where the raw image data is Bayer image data, three motion and luma tables may be provided, one for each of the red, blue and green color components. As discussed below, the selection of the filtering coefficient K and the attenuation factor may depend upon the motion and luma tables selected for the current color (e.g., the color of the current input pixel).

A method 338 illustrating a further embodiment for temporal filtering using color-dependent motion and luma tables is shown in Fig. 27. As will be appreciated, the various calculations and formulas that may be employed by the method 338 may be similar to the embodiment shown in Fig. 23, but with a particular motion and luma table being selected for each color, or similar to the embodiment shown in Fig. 25, but with the use of the color-dependent gain[c] replaced by the selection of a color-dependent motion and luma table.

Beginning at step 339, a current pixel x(t) located at spatial location (j,i) of a current frame of image data is received by the temporal filtering system 336 (Fig. 26). At step 340, a motion delta value d(t) is determined for the current pixel x(t) based, at least in part, upon one or more collocated reference pixels (e.g., r(t-1)) from a previous frame of the image data (e.g., the image frame immediately preceding the current frame). Step 340 may be similar to step 317 of Fig. 23, and may utilize the operation shown in equation 1a above.

Next, at step 341, a motion table lookup index may be determined using the motion delta value d(t) and the motion history input value h(t-1) corresponding to the spatial location (j,i) from the previous frame (e.g., corresponding to the collocated reference pixel r(t-1)). Thereafter, at step 342, a first filter coefficient K may be selected from one of the available motion tables (e.g., 304a, 304b, 304c) based upon the color of the current input pixel. For instance, once the appropriate motion table is identified, the first filter coefficient K may be selected using the motion table lookup index determined at step 341.

After selecting the first filter coefficient K, a luma table corresponding to the current color is selected, and an attenuation factor is selected from the selected luma table based upon the value of the current pixel x(t), as shown at step 343. Thereafter, at step 344, a second filter coefficient K' is determined based upon the attenuation factor (from step 343) and the first filter coefficient K (from step 342). Next, at step 345, a temporally filtered output value y(t) corresponding to the current input pixel x(t) is determined based upon the second filter coefficient K' (from step 344), the value of the collocated reference pixel r(t-1) and the value of the input pixel x(t). While the technique shown in Fig. 27 may be more costly to implement (e.g., due to the memory required for storing additional motion and luma tables), it may, in some instances, offer further improvements with respect to ghosting artifacts and consistency across different colors after automatic white balance gains are applied.
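As a further illustrative sketch only, the per-color-table variant of the method 338 may be expressed by selecting the motion and luma tables from color-keyed collections; the use of dictionaries keyed by color name, and the index clamping, are assumptions of the sketch.

def temporal_filter_pixel_per_color(x, r, h_prev, d, color, motion_tables, luma_tables):
    # motion_tables and luma_tables are keyed by color component,
    # e.g. "R", "Gr", "Gb" and "B", replacing the color-dependent gain.
    M, L = motion_tables[color], luma_tables[color]
    K = M[min(int(d + h_prev), len(M) - 1)]   # step 342
    h_out = d + (1.0 - K) * h_prev
    K2 = K * L[min(int(x), len(L) - 1)]       # steps 343-344
    return r + K2 * (x - r), h_out            # step 345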

In accordance with further embodiments, the temporal filtering process provided by the temporal filter 298 may utilize a combination of color-dependent gains and color-specific motion and/or luma tables for applying temporal filtering to the input pixels. For instance, in one such embodiment, a single motion table may be provided for all color components, and the motion table lookup index for selecting the first filtering coefficient (K) from the motion table may be determined based upon a color-dependent gain (e.g., as shown in Fig. 25, steps 331-332), while the luma table lookup index may not have a color-dependent gain applied to it, but may be used to select the luma attenuation factor from one of multiple luma tables depending upon the color of the current input pixel (e.g., as shown in Fig. 27, step 343). Alternatively, in another embodiment, multiple motion tables may be provided, and a motion table lookup index (without a color-dependent gain applied) may be used to select the first filtering coefficient (K) from a motion table corresponding to the color of the current input pixel (e.g., as shown in Fig. 27, step 342), while a single luma table may be provided for all color components, and the luma table lookup index for selecting the luma attenuation factor may be determined based upon a color-dependent gain (e.g., as shown in Fig. 25, steps 333-334). Further, in one embodiment that utilizes a Bayer color filter array, one motion table and/or luma table may be provided for each of the red (R) and blue (B) color components, while a common motion table and/or luma table may be provided for both green color components (Gr and Gb).

The output of the temporal filter 298 may subsequently be sent to the binning compensation filter (BCF) 300, which may be configured to process the image pixels to compensate for the non-linear placement (e.g., uneven spatial distribution) of the color samples that results from binning by the image sensor(s) 90a or 90b, so that subsequent image processing operations in the ISP pipeline logic 82 (e.g., demosaicing, etc.) that depend upon a linear placement of the color samples can operate correctly. For example, referring now to Fig. 28, a full-resolution sample 346 of Bayer image data is depicted. This may represent full-resolution sample raw image data captured by the image sensor 90a (or 90b) coupled to the ISP front-end processing logic 80.

As will be appreciated, under certain image capture conditions, it may not be practical to send the full-resolution image data captured by the image sensor 90a to the ISP circuitry 32 for processing. For instance, when capturing video data, the human eye may require a frame rate of at least approximately 30 frames per second in order to preserve the appearance of fluid motion in the scene. However, if the amount of pixel data contained in each frame of a full-resolution sample, when sampled at 30 frames per second, exceeds the processing capabilities of the ISP circuitry 32, binning compensation filtering may be applied in conjunction with binning by the image sensor 90a to reduce the resolution of the image signal while also improving the signal-to-noise ratio. For instance, as discussed above, various binning techniques, such as 2x2 binning, may be applied to produce a "binned" raw image pixel by averaging the values of surrounding pixels in the active region 280 of the raw frame 278.

Referring to Fig. 29, an embodiment of the image sensor 90a that may be configured to bin the full-resolution image data 346 of Fig. 28 in order to produce the corresponding binned raw image data 358 shown in Fig. 30 is illustrated in accordance with one embodiment. As shown, the image sensor 90a may capture the full-resolution raw image data 346. Binning logic 357 may be configured to apply binning to the full-resolution raw image data 346, which may be output to the ISP front-end processing logic 80 using the sensor interface 94a which, as discussed above, may be an SMIA interface or any other suitable parallel or serial camera interface.

As illustrated in Fig. 30, the binning logic 357 may apply 2x2 binning to the full-resolution raw image data 346. For example, with regard to the binned image data 358, the pixels 350, 352, 354 and 356 may form a Bayer pattern and may be determined by averaging the values of corresponding pixels from the full-resolution raw image data 346. For instance, referring to both Fig. 28 and 30, the binned Gr pixel 350 may be determined as the average or mean of the full-resolution Gr pixels 350a-350d. Similarly, the binned R pixel 352 may be determined as the average of the full-resolution R pixels 352a-352d, the binned B pixel 354 may be determined as the average of the full-resolution B pixels 354a-354d, and the binned Gb pixel 356 may be determined as the average of the full-resolution Gb pixels 356a-356d. Thus, in the present embodiment, 2x2 binning may provide a set of four full-resolution pixels, including an upper-left (e.g., 350a), an upper-right (e.g., 350b), a lower-left (e.g., 350c) and a lower-right (e.g., 350d) pixel, that are averaged to derive a binned pixel located at the center of the square formed by the set of four full-resolution pixels. Accordingly, the binned Bayer block 348 shown in Fig. 30 contains four binned pixels that represent the 16 pixels contained in the Bayer blocks 348a-348d of Fig. 28.
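By way of illustration only, a software sketch of the 2x2 binning operation described above is set forth below; the nested-list representation and the assumption that the frame dimensions are multiples of four are conveniences of the sketch, not requirements of the described embodiment.

def bin2x2_bayer(raw):
    # Every binned output pixel is the average of the four same-colored pixels
    # occupying corresponding positions within four adjacent Bayer quads.
    h, w = len(raw), len(raw[0])
    out = [[0.0] * (w // 2) for _ in range(h // 2)]
    for j in range(0, h, 4):
        for i in range(0, w, 4):
            for dj in (0, 1):              # position inside the output Bayer quad
                for di in (0, 1):
                    out[j // 2 + dj][i // 2 + di] = (
                        raw[j + dj][i + di] + raw[j + dj][i + di + 2] +
                        raw[j + dj + 2][i + di] + raw[j + dj + 2][i + di + 2]) / 4.0
    return out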

In addition to reducing spatial resolution, binning also offers the added advantage of reducing noise in the image signal. For instance, whenever an image sensor (e.g., 90a) is exposed to a light signal, there may be a certain amount of noise, such as photon noise, associated with the image. This noise may be random or systematic, and it may also come from multiple sources. Thus, the amount of information contained in an image captured by the image sensor may be expressed in terms of a signal-to-noise ratio. For example, every time an image is captured by the image sensor 90a and transferred to a processing circuit, such as the ISP circuitry 32, there may be some degree of noise in the pixel values because the process of reading and transferring the image data inherently introduces "read noise" into the image signal. This "read noise" may be random and is generally unavoidable. By using the average of four pixels, noise (e.g., photon noise) may generally be reduced regardless of the source of the noise.

Thus, considering the full-resolution image data 346 of Fig. 28, each Bayer pattern 348a-348d (2x2 block) contains 4 pixels, each of which contains a signal component and a noise component. If each pixel in, for example, the Bayer block 348a is read separately, then four signal components and four noise components are present. However, by applying binning, as shown in Fig. 28 and 30, such that four pixels (e.g., 350a, 350b, 350c, 350d) may be represented by a single pixel (e.g., 350) in the binned image data, the same area occupied by the four pixels in the full-resolution image data 346 may be read as a single pixel with only one instance of a noise component, thus improving the signal-to-noise ratio.

Further, while the present embodiment depicts the binning logic 357 of Fig. 29 as being configured to apply a 2x2 binning process, it should be appreciated that the binning logic 357 may be configured to apply any suitable type of binning process, such as 3x3 binning, vertical binning, horizontal binning, and so forth. In some embodiments, the image sensor 90a may be configured to select between different binning modes during the image capture process. Additionally, in further embodiments, the image sensor 90a may also be configured to apply a technique that may be referred to as "skipping", in which, instead of averaging pixel samples, the logic 357 selects only certain pixels from the full-resolution data 346 (e.g., every other pixel, every third pixel, and so forth) for output to the ISP front-end logic 80 for processing. Further, while only the image sensor 90a is shown in Fig. 29, it should be appreciated that the image sensor 90b may be implemented in a similar manner.

As also depicted in Fig. 30, one effect of the binning process is that the spatial sampling of the binned pixels may not be equally spaced. This spatial distortion may, in some systems, result in aliasing (e.g., jagged edges), which is generally undesirable. Further, because certain image processing steps in the ISP pipeline logic 82 may depend upon the linear placement of the color samples in order to operate correctly, the binning compensation filter (BCF) 300 may be applied to perform re-sampling and re-positioning of the binned pixels such that the binned pixels are spatially evenly distributed. That is, the BCF 300 essentially compensates for the uneven spatial distribution (e.g., shown in Fig. 30) by re-sampling the positions of the samples. For instance, Fig. 31 illustrates a re-sampled portion of binned image data 360 after being processed by the BCF 300, wherein the Bayer block 361 containing the evenly distributed re-sampled pixels 362, 363, 364 and 365 corresponds to the binned pixels 350, 352, 354 and 356, respectively, of the binned image data 358 of Fig. 30. Additionally, in an embodiment that utilizes skipping (e.g., instead of binning), as mentioned above, the spatial distortion shown in Fig. 30 may not be present. In this case, the BCF 300 may function as a low pass filter to reduce artifacts (e.g., aliasing) that may result when skipping is employed by the image sensor 90a.

Fig. 32 shows a block diagram of the binning compensation filter 300 in accordance with one embodiment. The BCF 300 may include binning compensation logic 366 that may process the binned pixels 358 to apply horizontal and vertical scaling using horizontal scaling logic 368 and vertical scaling logic 370, respectively, in order to re-sample and re-position the binned pixels 358 so that they are arranged in a spatially even distribution, as shown in Fig. 31. In one embodiment, the scaling operation(s) performed by the BCF 300 may be performed using horizontal and vertical multi-tap polyphase filtering. For instance, the filtering process may include selecting the appropriate pixel data from the source input image (e.g., the binned image data 358 provided by the image sensor 90a), multiplying each of the selected pixels by a filtering coefficient, and summing the resulting values to form an output pixel at a desired destination.

The selection of the pixels used in the scaling operations, which may include a center pixel and the surrounding same-colored neighbor pixels, may be determined using separate differential analyzers 372, one for vertical scaling and one for horizontal scaling. In the depicted embodiment, the differential analyzers 372 may be digital differential analyzers (DDAs) and may be configured to control the current output pixel position during the scaling operations in the vertical and horizontal directions. In the present embodiment, a first DDA (referred to as 372a) is used for all color components during horizontal scaling, and a second DDA (referred to as 372b) is used for all color components during vertical scaling. By way of example only, the DDA 372 may be provided as a 32-bit data register containing a two's-complement fixed-point number with 16 bits in the integer portion and 16 bits in the fraction portion. The 16-bit integer portion may be used to determine the current position value for the output pixel. The fraction portion of the DDA 372 may be used to determine a current index or phase, which may be based upon the between-pixel fractional position of the current DDA position (e.g., corresponding to the spatial location of the output pixel). The index or phase may be used to select an appropriate set of coefficients from a set of filter coefficient tables 374. Additionally, the filtering may be performed per color component using same-colored pixels. Thus, the filter coefficients may be selected based not only on the phase of the current DDA position, but also on the color of the current pixel. In one embodiment, 8 phases may be present between each input pixel and, thus, the vertical and horizontal scaling components may utilize 8-deep coefficient tables, such that the 3 high-order bits of the 16-bit fraction portion are used to express the current phase or index. Thus, as used herein, the term "raw image" data, or the like, shall be understood to refer to multi-color image data acquired by a single sensor overlaid with a color filter array pattern (e.g., Bayer), which provides multiple color components in one plane. In a further embodiment, a separate DDA may be used for each color component. For instance, in such embodiments, the BCF 300 may extract the R, B, Gr and Gb components from the raw image data and process each component as a separate plane.

In operation, horizontal and vertical scaling may include initializing the DDA 372 and performing the multi-tap polyphase filtering using the integer and fraction portions of the DDA 372. While performed separately and with separate DDAs, the horizontal and vertical scaling operations are carried out in a similar manner. A step value or step size (DDAStepX for horizontal scaling and DDAStepY for vertical scaling) determines how much the DDA value (currDDA) is incremented after each output pixel is determined, and the multi-tap polyphase filtering is repeated using the next currDDA value. For instance, if the step value is less than 1, the image is up-scaled, and if the step value is greater than 1, the image is down-scaled. If the step value is equal to 1, no scaling occurs. Further, it should be noted that the same or different step sizes may be used for horizontal and vertical scaling.

The output pixels are generated by the BCF 300 in the same order as the input pixels (e.g., following the Bayer pattern). In the present embodiment, the input pixels may be classified as being even or odd based upon their ordering. For instance, referring to Fig. 33, a graphical depiction of input pixel locations (row 375) and corresponding output pixel locations based upon various DDAStep values (rows 376-380) is illustrated. In this example, the depicted row represents a row of red (R) and green (Gr) pixels in the raw Bayer image data. For horizontal filtering purposes, the red pixel at position 0.0 in the row 375 may be considered an even pixel, the green pixel at position 1.0 in the row 375 may be considered an odd pixel, and so forth. For the output pixel locations, even and odd pixels may be determined based upon the least significant bit in the fraction portion (lower 16 bits) of the DDA 372. For instance, assuming a DDAStep of 1.25, as shown in row 377, the least significant bit corresponds to bit 14 of the DDA, as this bit gives a resolution of 0.25. Thus, the red output pixel at the DDA position (currDDA) 0.0 may be considered an even pixel (the least significant bit, bit 14, is 0), the green output pixel at currDDA 1.0 (bit 14 is 1) an odd pixel, and so forth. Further, although Fig. 33 is discussed with respect to filtering in the horizontal direction (using DDAStepX), it should be understood that the determination of even and odd input and output pixels may be applied in the same manner with respect to vertical filtering (using DDAStepY). In other embodiments, the DDAs 372 may also be used to track locations of the input pixels (e.g., rather than tracking the desired output pixel locations). Further, it should be appreciated that DDAStepX and DDAStepY may be set to the same or different values. Further, assuming a Bayer pattern is used, it should be noted that the starting pixel used by the BCF 300 could be any one of a Gr, Gb, R or B pixel depending, for instance, upon which pixel is located at a corner of the active region 280.

With this in mind, the even/odd input pixels are used to generate the even/odd output pixels, respectively. Given an output pixel location alternating between even and odd positions, the center source input pixel location (referred to herein as "currPixel") for filtering purposes is determined by rounding the DDA to the closest even or odd input pixel location for even or odd output pixel locations (based upon DDAStepX), respectively. In an embodiment where the DDA 372a is configured to use 16 bits to represent the integer and 16 bits to represent the fraction, currPixel may be determined for even and odd currDDA positions using equations 6a and 6b below:

For even output pixel locations, currPixel may be determined based upon bits [31:16] of:

(currDDA+1.0) & 0xFFFE.0000 (6a)

For odd output pixel locations, currPixel may be determined based upon bits [31:16] of:

(currDDA)|0x0001.0000 (6b)

Essentially, the above equations represent a rounding operation, whereby the even and odd output pixel positions, as determined by currDDA, are rounded to the nearest even and odd input pixel positions, respectively, for the selection of currPixel.

Additionally, a current index or phase (currIndex) may also be determined at each currDDA position. As discussed above, the index or phase values represent the fractional between-pixel position of the output pixel position relative to the input pixel positions. For instance, in one embodiment, 8 phases may be defined between each input pixel position. For instance, referring again to Fig. 33, 8 index values 0-7 are provided between the first red input pixel at position 0.0 and the next red input pixel at position 2.0. Similarly, 8 index values 0-7 are provided between the first green input pixel at position 1.0 and the next green input pixel at position 3.0. In one embodiment, the currIndex values may be determined in accordance with equations 7a and 7b below for even and odd output pixel locations, respectively:

For even output pixel locations, currIndex may be determined based upon bits [16:14] of:

(currDDA+0.125) (7a)

For odd output pixel locations, currIndex may be determined based upon bits [16:14] of:

(currDDA+1.125) (7b)

For the odd positions, the additional 1 pixel shift is equivalent to adding an offset of four to the coefficient index for odd output pixel locations, to account for the index offset between different color components with respect to the DDA 372.
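Merely to illustrate the bit manipulations of equations 6a, 6b, 7a and 7b, the following sketch evaluates currPixel and currIndex from a 16.16 fixed-point currDDA value; treating the output pixel locations as simply alternating between even and odd is an assumption of the sketch that matches the ordering used in Table 4 below.

ONE = 1 << 16                              # 1.0 in the DDA's 16.16 fixed-point format

def curr_pixel(curr_dda, output_is_even):
    # Center source pixel, equations 6a/6b (bits [31:16] of the result).
    if output_is_even:
        return ((curr_dda + ONE) & 0xFFFE0000) >> 16   # round to nearest even input position
    return (curr_dda | 0x00010000) >> 16               # round to nearest odd input position

def curr_index(curr_dda, output_is_even):
    # Phase index 0..7, equations 7a/7b (bits [16:14] of the result).
    offset = 0x2000 if output_is_even else 0x12000     # +0.125 or +1.125 in 16.16
    return ((curr_dda + offset) >> 14) & 0x7

# Example: reproduce the DDAStep 1.5 column of Table 4 below.
step = int(1.5 * ONE)
for n in range(5):
    dda = n * step
    even = (n % 2 == 0)
    print(dda / ONE, curr_index(dda, even), curr_pixel(dda, even))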

Once currPixel and currIndex have been determined at a particular currDDA location, the filtering process may select one or more neighboring same-colored pixels based upon currPixel (the selected center input pixel). By way of example, in an embodiment where the horizontal scaling logic 368 includes a 5-tap polyphase filter and the vertical scaling logic 370 includes a 3-tap polyphase filter, two same-colored pixels on each side of currPixel in the horizontal direction may be selected for horizontal filtering (e.g., -2, -1, 0, +1, +2), and one same-colored pixel on each side of currPixel in the vertical direction may be selected for vertical filtering (e.g., -1, 0, +1). Further, currIndex may be used as a selection index to select the appropriate filter coefficients from the filter coefficients table 374 to apply to the selected pixels. For instance, using the 5-tap horizontal/3-tap vertical filtering embodiment, five 8-deep tables may be provided for horizontal filtering, and three 8-deep tables may be provided for vertical filtering. Although illustrated as part of the BCF 300, it should be appreciated that the filter coefficient tables 374 may, in certain embodiments, be stored in a memory that is generally separate from the BCF 300, such as the memory 108.

Before discussing the horizontal and vertical scaling operations in further detail, Table 4 below shows examples of currPixel and currIndex values as determined based upon various DDA positions using different DDAStep values (e.g., which could apply to DDAStepX or DDAStepY).

Table 4
Binning compensation filter: examples of DDA calculations of currPixel and currIndex

Output pixel   DDAStep 1.25                 DDAStep 1.5                  DDAStep 1.75                 DDAStep 2.0
(even/odd)     currDDA currIndex currPixel  currDDA currIndex currPixel  currDDA currIndex currPixel  currDDA currIndex currPixel
0 (even)         0.00      0        0         0.0       0        0         0.00      0        0          0        0        0
1 (odd)          1.25      1        1         1.5       2        1         1.75      3        1          2        4        3
0 (even)         2.50      2        2         3.0       4        4         3.50      6        4          4        0        4
1 (odd)          3.75      3        3         4.5       6        5         5.25      1        5          6        4        7
0 (even)         5.00      4        6         6.0       0        6         7.00      4        8          8        0        8
1 (odd)          6.25      5        7         7.5       2        7         8.75      7        9         10        4       11
0 (even)         7.50      6        8         9.0       4       10        10.50      2       10         12        0       12
1 (odd)          8.75      7        9        10.5       6       11        12.25      5       13         14        4       15
0 (even)        10.00      0       10        12.0       0       12        14.00      0       14         16        0       16
1 (odd)         11.25      1       11        13.5       2       13        15.75      3       15         18        4       19
0 (even)        12.50      2       12        15.0       4       16        17.50      6       18         20        0       20
1 (odd)         13.75      3       13        16.5       6       17        19.25      1       19         22        4       23
0 (even)        15.00      4       16        18.0       0       18        21.00      4       22         24        0       24
1 (odd)         16.25      5       17        19.5       2       19        22.75      7       23         26        4       27
0 (even)        17.50      6       18        21.0       4       22        24.50      2       24         28        0       28
1 (odd)         18.75      7       19        22.5       6       23        26.25      5       27         30        4       31
0 (even)        20.00      0       20        24.0       0       24        28.00      0       28         32        0       32

To provide an example, let us assume a selected DDA step size (DDAStep) of 1.5 (row 378 of Fig. 33), with the current DDA position (currDDA) beginning at 0, indicating an even output pixel location. To determine currPixel, equation 6a may be applied, as shown below:

currPixel (determined as bits [31:16] of the result)=0;

Thus, at the currDDA position 0.0 (row 378), the center source input pixel for filtering corresponds to the red input pixel at position 0.0 of the row 375.

To determine currIndex at the even currDDA position 0.0, equation 7a may be applied, as shown below:

currIndex (determined as bits [16:14] of the result) = [000] = 0;

Thus, at the current currDDA position 0.0 (row 378), a currIndex value of 0 may be used to select filter coefficients from the filter coefficients table 374.

Accordingly, filtering (which may be vertical or horizontal depending on whether DDAStep is in the X (horizontal) or Y (vertical) direction) may be applied based upon the determined currPixel and currIndex values at currDDA 0.0, the DDA 372 is then incremented by DDAStep (1.5), and the next currPixel and currIndex values are determined. For instance, at the next currDDA position of 1.5 (an odd position), currPixel may be determined using equation 6b, as set forth below:

currPixel (determined as bits [31:16] of the result)=1;

Thus, at the currDDA position 1.5 (row 378), the center source input pixel for filtering corresponds to the green input pixel at position 1.0 of the row 375.

Further, currIndex at the odd currDDA position 1.5 may be determined using equation 7b, as shown below:

currIndex (determined as bits [16:14] of the result) = [010] = 2;

Thus, at the current currDDA position 1.5 (row 378), a currIndex value of 2 may be used to select the appropriate filter coefficients from the filter coefficients table 374. Filtering (which may be vertical or horizontal depending on whether DDAStep is in the X (horizontal) or Y (vertical) direction) may thus be applied using these currPixel and currIndex values.

Next, the DDA 372 is again incremented by DDAStep (1.5), yielding a currDDA value of 3.0. The currPixel corresponding to currDDA 3.0 may be determined using equation 6a, as shown below:

currPixel (determined as bits [31:16] of the result)=4;

Thus, at the currDDA position 3.0 (row 378), the center source input pixel for filtering corresponds to the red input pixel at position 4.0 of the row 375.

Next, currIndex at the even currDDA position 3.0 may be determined using equation 7a, as shown below:

currIndex (determined as bits [16:14] of the result) = [100] = 4;

Thus, at the current currDDA position 3.0 (row 378), a currIndex value of 4 may be used to select the appropriate filter coefficients from the filter coefficients table 374. As will be appreciated, the DDA 372 may continue to be incremented by DDAStep for each output pixel, and filtering (which may be vertical or horizontal depending on whether DDAStep is in the X (horizontal) or Y (vertical) direction) may be applied using the currPixel and currIndex values determined for each currDDA value.

As discussed above, currIndex may be used as a selection index to select the appropriate filter coefficients from the filter coefficients table 374 to apply to the selected pixels. The filtering process may include obtaining the values of the source pixels around the center pixel (currPixel), multiplying each of the selected pixels by the appropriate filter coefficients selected from the filter coefficients table 374 based upon currIndex, and summing the results to obtain a value of the output pixel at the location corresponding to currDDA. Further, because the present embodiment utilizes 8 phases between same-colored pixels, using the 5-tap horizontal/3-tap vertical filtering embodiment, five 8-deep tables may be provided for horizontal filtering, and three 8-deep tables may be provided for vertical filtering. In one embodiment, each of the coefficient table entries may include a 16-bit two's-complement fixed-point number with 3 integer bits and 13 fraction bits.
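By way of illustration only, a coefficient in the 3.13 fixed-point format described above may be decoded and applied as sketched below; for instance, a raw entry of 0x2000 decodes to 1.0. The separation into two helper functions is a convenience of the sketch.

def coeff_to_float(c):
    # 16-bit two's-complement fixed point with 3 integer bits and 13 fraction bits.
    if c & 0x8000:
        c -= 0x10000                 # apply the sign of the two's-complement value
    return c / float(1 << 13)

def apply_taps(src_values, raw_coeffs):
    # Multiply each selected same-colored source pixel by its decoded filter
    # coefficient and sum the products to form the output pixel value.
    return sum(v * coeff_to_float(c) for v, c in zip(src_values, raw_coeffs))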

Further, assuming a Bayer image pattern, in one embodiment the vertical scaling component may include four separate 3-tap polyphase filters, one for each color component: Gr, R, B and Gb. Each of the 3-tap filters may use the DDA 372 to control the stepping of the current center pixel and the index for the coefficients, as described above. Similarly, the horizontal scaling component may include four separate 5-tap polyphase filters, one for each color component: Gr, R, B and Gb. Each of the 5-tap filters may use the DDA 372 to control the stepping (e.g., via DDAStep) of the current center pixel and the index for the coefficients. It should be understood, however, that more or fewer taps could be utilized by the horizontal and vertical scaling components in other embodiments.

For boundary cases, the pixels used in the horizontal and vertical filtering process may depend upon the relationship of the current DDA position (currDDA) relative to a frame border (e.g., a border defined by the active region 280 in Fig. 19). For instance, in horizontal filtering, if the currDDA position, when compared to the position of the center input pixel (SrcX) and the width (SrcWidth) of the frame (e.g., the width 290 of the active region 280 of Fig. 19), indicates that the DDA 372 is close to the border such that there are not enough pixels to perform the 5-tap filtering, then the same-colored input border pixels may be repeated. For instance, if the selected center input pixel is at the left edge of the frame, then the center pixel may be replicated twice for horizontal filtering. If the center input pixel is near the left edge of the frame, such that only one pixel is available between the center input pixel and the left edge, then, for horizontal filtering purposes, the one available pixel is replicated in order to provide two pixel values to the left of the center input pixel. Further, the horizontal scaling logic 368 may be configured so that the number of input pixels (including original and replicated pixels) cannot exceed the input width.

This constraint may be expressed in terms of DDAInitX, which represents the initial position of the DDA 372, DDAStepX, which represents the DDA step value in the horizontal direction, and BCFOutWidth, which represents the width of the frame output by the BCF 300.

For vertical filtering, if the currDDA position, when compared to the position of the center input pixel (SrcY) and the height (SrcHeight) of the frame (e.g., the height of the active region 280 of Fig. 19), indicates that the DDA 372 is close to the border such that there are not enough pixels to perform the 3-tap filtering, then the input border pixels may be repeated. Further, the vertical scaling logic 370 may be configured so that the number of input pixels (including original and replicated pixels) cannot exceed the input height.

This constraint may be expressed in terms of DDAInitY, which represents the initial position of the DDA 372, DDAStepY, which represents the DDA step value in the vertical direction, and BCFOutHeight, which represents the height of the frame output by the BCF 300.
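The border handling described above may be sketched, merely as an illustration, by clamping each tap position to the nearest same-colored pixel inside the frame, which has the effect of repeating the available edge pixels; counting positions along a single row (or column) is an assumption of the sketch.

def select_taps(center, half, src_len):
    # half = 2 for the 5-tap horizontal filter, 1 for the 3-tap vertical filter.
    # Same-colored neighbors are two positions apart; taps falling outside the
    # frame are clamped to the nearest same-colored border pixel, so that pixel
    # is repeated (e.g. select_taps(0, 2, 16) yields [0, 0, 0, 2, 4]).
    lo = center % 2                              # first same-colored position
    hi = lo + 2 * ((src_len - 1 - lo) // 2)      # last same-colored position
    return [min(hi, max(lo, center + 2 * k)) for k in range(-half, half + 1)]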

Next, referring to Fig. 34, a flow chart depicting a method 382 for applying binning compensation filtering to image data received by the pixel preprocessing unit 130 is illustrated in accordance with an embodiment. It will be appreciated that the method 382 illustrated in Fig. 34 may apply to both vertical and horizontal scaling. Beginning at step 383, the DDA 372 is initialized and a DDA step value (which may correspond to DDAStepX for horizontal scaling and DDAStepY for vertical scaling) is determined. Next, at step 384, the current DDA position (currDDA) is determined based upon DDAStep. As discussed above, currDDA may correspond to an output pixel location. Using currDDA, the method 382 may determine a center pixel (currPixel) from the input pixel data that may be used for binning compensation filtering to determine a corresponding output value at currDDA, as indicated at step 385. Subsequently, at step 386, an index corresponding to currDDA (currIndex) may be determined based upon the fractional between-pixel position of currDDA relative to the input pixels (e.g., row 375 of Fig. 33). By way of example, in an embodiment where the DDA includes 16 integer bits and 16 fraction bits, currPixel may be determined in accordance with equations 6a and 6b, and currIndex may be determined in accordance with equations 7a and 7b, as shown above. While the 16-bit integer/16-bit fraction configuration is described herein as one example, it should be appreciated that other configurations of the DDA 372 may be utilized in accordance with the present technique. By way of example, other embodiments of the DDA 372 may be configured to include a 12-bit integer portion and a 20-bit fraction portion, a 14-bit integer portion and an 18-bit fraction portion, and so forth.

Once currPixel and currIndex are determined, same-colored source pixels around currPixel may be selected for the multi-tap filtering, as indicated by step 387. For instance, as discussed above, one embodiment may utilize 5-tap polyphase filtering in the horizontal direction (e.g., selecting 2 same-colored pixels on each side of currPixel) and may utilize 3-tap polyphase filtering in the vertical direction (e.g., selecting 1 same-colored pixel on each side of currPixel). Next, at step 388, once the source pixels are selected, filtering coefficients may be selected from the filter coefficients table 374 of the BCF 300 based upon currIndex.

Thereafter, at step 389, filtering may be applied to the source pixels to determine the value of an output pixel corresponding to the position represented by currDDA. For instance, in one embodiment, the source pixels may be multiplied by their respective filtering coefficients, and the results may be summed to obtain the output pixel value. The direction in which filtering is applied at step 389 may be vertical or horizontal depending on whether DDAStep is in the X (horizontal) or Y (vertical) direction. Finally, at step 390, the DDA 372 is incremented by DDAStep, and the method 382 returns to step 384, whereby the next output pixel value is determined using the binning compensation filtering techniques discussed herein.
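Merely as an illustrative summary of the method 382, the following one-dimensional sketch ties the preceding steps together for a single row (or column); the alternating even/odd treatment of the output locations, the clamped border handling and the floating-point coefficients are assumptions of the sketch.

ONE = 1 << 16                                            # 1.0 in 16.16 fixed point

def bcf_scale_1d(src, coeff_tables, dda_init, dda_step, out_len, half=2):
    # src: one row (or column) of raw samples; coeff_tables[index] holds the
    # 2*half + 1 filter coefficients for each of the 8 phases; dda_init and
    # dda_step are 16.16 fixed-point values (steps 383-384).
    out, dda = [], dda_init
    for n in range(out_len):
        even = (n % 2 == 0)
        if even:
            center = ((dda + ONE) & 0xFFFE0000) >> 16    # step 385, equation 6a
            index = ((dda + 0x2000) >> 14) & 0x7         # step 386, equation 7a
        else:
            center = (dda | 0x00010000) >> 16            # step 385, equation 6b
            index = ((dda + 0x12000) >> 14) & 0x7        # step 386, equation 7b
        lo = center % 2                                  # step 387: same-colored taps
        hi = lo + 2 * ((len(src) - 1 - lo) // 2)
        taps = [min(hi, max(lo, center + 2 * k)) for k in range(-half, half + 1)]
        coeffs = coeff_tables[index]                     # step 388
        out.append(sum(c * src[p] for c, p in zip(coeffs, taps)))   # step 389
        dda += dda_step                                  # step 390: advance by DDAStep
    return out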

Referring to Fig. 35, the step 385 for determining currPixel from the method 382 is illustrated in more detail in accordance with one embodiment. For instance, step 385 may include the sub-step 392 of determining whether the output pixel location corresponding to currDDA (from step 384) is even or odd. As discussed above, an even or odd output pixel may be determined based upon the least significant bit of currDDA based upon DDAStep. For instance, given a DDAStep of 1.25, a currDDA value of 1.25 may be determined to be odd, since the least significant bit (corresponding to bit 14 of the fraction portion of the DDA 372) has a value of 1. For a currDDA value of 2.5, bit 14 is 0, thus indicating an even output pixel location.

At decision logic 393, a determination is made as to whether the output pixel location corresponding to currDDA is even or odd. If the output pixel is even, decision logic 393 continues to step 394, wherein currPixel is determined by incrementing the currDDA value by 1 and rounding the result to the nearest even input pixel location, as represented by equation 6a above. If the output pixel is odd, then decision logic 393 continues to step 395, wherein currPixel is determined by rounding the currDDA value to the nearest odd input pixel location, as represented by equation 6b above. The currPixel value may then be applied at step 387 of the method 382 to select source pixels for filtering, as discussed above.

Also, referring to Fig. 36, the step 386 for determining currIndex from the method 382 is illustrated in more detail in accordance with one embodiment. For instance, step 386 may include the sub-step 396 of determining whether the output pixel location corresponding to currDDA (from step 384) is even or odd. This determination may be performed in a similar manner as step 392 of Fig. 35. At decision logic 397, a determination is made as to whether the output pixel location corresponding to currDDA is even or odd. If the output pixel is even, decision logic 397 continues to step 398, wherein currIndex is determined by incrementing the currDDA value by one index step and determining currIndex based upon the lowest-order integer bit and the two highest-order fraction bits of the DDA 372. For instance, in an embodiment wherein 8 phases are provided between each same-colored pixel and wherein the DDA includes 16 integer bits and 16 fraction bits, one index step may correspond to 0.125, and currIndex may be determined based upon bits [16:14] of the currDDA value incremented by 0.125 (e.g., equation 7a). If the output pixel is odd, decision logic 397 continues to step 399, wherein currIndex is determined by incrementing the currDDA value by one index step and one pixel shift, and determining currIndex based upon the lowest-order integer bit and the two highest-order fraction bits of the DDA 372. Thus, in an embodiment wherein 8 phases are provided between each same-colored pixel and wherein the DDA includes 16 integer bits and 16 fraction bits, one index step may correspond to 0.125, one pixel shift may correspond to 1.0 (a shift of 8 index steps to the next same-colored pixel), and currIndex may be determined based upon bits [16:14] of the currDDA value incremented by 1.125 (e.g., equation 7b).

While the presently illustrated embodiment provides the BCF 300 as a component of the pixel preprocessing unit 130, other embodiments may incorporate the BCF 300 into the raw image data processing pipeline of the ISP pipeline 82 which, as discussed further below, may include defective pixel detection/correction logic, gain/offset/compensation blocks, noise reduction logic, lens shading correction logic and demosaicing logic. Further, in embodiments where the aforementioned defective pixel detection/correction logic, gain/offset/compensation blocks, noise reduction logic and lens shading correction logic do not rely upon the linear placement of the pixels, the BCF 300 may be combined with the demosaicing logic to perform binning compensation filtering and reposition the pixels prior to demosaicing, as demosaicing generally does not rely upon an even spatial placement of the pixels. For instance, in one embodiment, the BCF 300 may be incorporated anywhere between the sensor output and the demosaicing logic, with temporal filtering and/or defective pixel detection/correction being applied to the raw image data prior to binning compensation.

As discussed above, the output of the BCF 300, which may be the output FEProcOut (109) having spatially evenly distributed image data (e.g., the sample 360 of Fig. 31), may be forwarded to the ISP pipeline logic 82 for additional processing. However, before shifting the focus of this discussion to the ISP pipeline logic 82, a more detailed description will first be provided of the various functionalities that may be provided by the statistics processing units (e.g., 120 and 122) that may be implemented in the ISP front-end logic 80.

Returning to the general description of the statistics processing units 120 and 122, these units may be configured to collect various statistics about the image sensors that capture and provide the raw image signals (Sif0 and Sif1), such as statistics relating to auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation and lens shading correction, and so forth. In doing so, the statistics processing units 120 and 122 may first apply one or more image processing operations to their respective input signals Sif0 (from Sensor0) and Sif1 (from Sensor1).

For example, referring to Fig. 37, a more detailed block diagram view of the statistics processing unit 120 associated with Sensor0 (90a) is illustrated in accordance with one embodiment. As shown, the statistics processing unit 120 may include the following functional blocks: defective pixel detection and correction logic 460, black level compensation (BLC) logic 462, lens shading correction logic 464, inverse BLC logic 466 and statistics collection logic 468. Each of these functional blocks is discussed below. Further, it should be understood that the statistics processing unit 122 associated with Sensor1 (90b) may be implemented in a similar manner.

Initially, the output of the logic 124 (e.g., Sif0 or SifIn0) is received by the front-end defective pixel detection and correction logic 460. As will be appreciated, "defective" pixels shall be understood to refer to imaging pixels within the image sensor(s) 90 that fail to sense light levels accurately. Defective pixels may be attributable to a number of factors and may include "hot" (or leaky) pixels, "stuck" pixels and "dead" pixels. A "hot" pixel generally appears as being brighter than a non-defective pixel given the same amount of light at the same spatial location. Hot pixels may result from reset failures and/or high leakage. For example, a hot pixel may exhibit a higher-than-normal charge leakage relative to non-defective pixels, and thus may appear brighter than non-defective pixels. Additionally, "dead" and "stuck" pixels may be the result of impurities, such as dust or other trace materials, contaminating the image sensor during the fabrication and/or assembly process, which may cause certain defective pixels to be darker or brighter than a non-defective pixel, or may cause a defective pixel to be fixed at a particular value regardless of the amount of light to which it is actually exposed. Additionally, dead and stuck pixels may also result from circuit failures that occur during operation of the image sensor. By way of example, a stuck pixel may appear as always being on (e.g., fully charged) and thus appears brighter, whereas a dead pixel appears as always being off.

The defective pixel detection and correction (DPDC) logic 460 in the ISP pre-processing logic 80 may correct (e.g., replace the values of) defective pixels before they are taken into account in statistics collection (e.g., by the statistics collection logic 468). In one embodiment, defective pixel correction is performed independently for each color component (for example, R, B, Gr and Gb for a Bayer pattern). Generally, the DPDC logic 460 may provide dynamic defect correction, wherein the locations of defective pixels are determined automatically based upon directional gradients computed using neighboring pixels of the same color. As will be understood, the defects may be "dynamic" in the sense that the characterization of a pixel as being defective at a given time may depend on the image data in the neighboring pixels. By way of example, a stuck pixel that is always at maximum brightness may not be regarded as a defective pixel if the location of the stuck pixel is in an area of the current image dominated by bright or white colors. Conversely, if the stuck pixel is in a region of the current image that is dominated by black or darker colors, then the stuck pixel may be identified as a defective pixel during processing by the DPDC logic 460 and corrected accordingly.

The DPDC logic 460 may utilize one or more horizontal neighboring pixels of the same color on each side of a current pixel to determine whether the current pixel is defective using pixel-to-pixel directional gradients. If the current pixel is identified as being defective, the value of the defective pixel may be replaced with the value of a horizontal neighboring pixel. For instance, in one embodiment, five horizontal neighboring pixels of the same color that are inside the boundaries of the raw frame 278 (Fig. 19) are used, wherein the five horizontal neighboring pixels include the current pixel and two neighboring pixels on either side. Thus, as illustrated in Fig. 38, for a given color component c and for the current pixel P, the horizontal neighboring pixels P0, P1, P2 and P3 may be considered by the DPDC logic 460. It should be noted, however, that depending on the location of the current pixel P, pixels outside the raw frame 278 are not considered when calculating the pixel-to-pixel gradients.

For instance, as shown in Fig. 38, in the "left edge" case 470, the current pixel P is at the leftmost edge of the raw frame 278 and, thus, the neighboring pixels P0 and P1 outside of the raw frame 278 are not considered, leaving only the pixels P, P2 and P3 (N=3). In the "left edge + 1" case, the current pixel P is one pixel away from the leftmost edge of the raw frame 278 and, thus, the pixel P0 is not considered. This leaves only the pixels P1, P, P2 and P3 (N=4). Further, in the "centered" case 474, the pixels P0 and P1 on the left side of the current pixel P, as well as the pixels P2 and P3 on the right side of the current pixel P, are within the boundaries of the raw frame 278 and, therefore, all of the neighboring pixels P0, P1, P2 and P3 (N=5) are considered in calculating the pixel-to-pixel gradients. Additionally, similar cases 476 and 478 may be encountered as the rightmost edge of the raw frame 278 is approached. For instance, in the "right edge - 1" case 476, the current pixel P is one pixel away from the rightmost edge of the raw frame 278 and, thus, the pixel P3 is not considered (N=4). Similarly, in the "right edge" case 478, the current pixel P is at the rightmost edge of the raw frame 278 and, thus, both of the neighboring pixels P2 and P3 are not considered (N=3).

In the illustrated embodiment, for each neighboring pixel (k = 0 to 3) within the picture boundaries (e.g., the raw frame 278), the pixel-to-pixel gradients may be calculated as follows:

Gk = abs(P - Pk), for 0 ≤ k ≤ 3 (only for k within the raw frame)    (8)

Once the pixel-to-pixel gradients have been determined, defective pixel detection may be performed by the DPDC logic 460 as follows. First, it is assumed that a pixel is defective if a certain number of its gradients Gk are at or below a particular threshold, denoted by the variable dprTh. Thus, for each pixel, a count (C) of the number of gradients for neighboring pixels inside the picture frame that are at or below the threshold dprTh is accumulated. By way of example, for each neighboring pixel inside the raw frame 278, the accumulated count C of the gradients Gk that are at or below the threshold dprTh may be computed as follows:

C = Σk (Gk ≤ dprTh),    (9)

for 0 ≤ k ≤ 3 (only for k within the raw frame)

As will be appreciated, the threshold value dprTh may vary depending on the color component. Next, if the accumulated count C is determined to be less than or equal to a maximum count, denoted by the variable dprMaxC, then the pixel may be considered defective. This logic is expressed below:

if (C ≤ dprMaxC), then the pixel is defective.    (10)

Defective pixels are replaced using a number of replacement conventions. For instance, in one embodiment, a defective pixel may be replaced with the pixel immediately to its left, P1. In a boundary condition (e.g., P1 is outside of the raw frame 278), a defective pixel may be replaced with the pixel immediately to its right, P2. Further, it should be understood that replacement values may be retained or propagated for successive defective pixel detection operations. For instance, referring to the set of horizontal pixels shown in Fig. 38, if P0 or P1 was previously identified by the DPDC logic 460 as being a defective pixel, its corresponding replacement value may be used for the defective pixel detection and replacement of the current pixel P.

To summarize the above-discussed defective pixel detection and correction techniques, a flow chart depicting such a process is provided in Fig. 39 and referred to by the reference number 480. As shown, the processing 480 begins at step 482, at which the current pixel (P) is received and a set of neighboring pixels is identified. In accordance with the embodiment described above, the neighboring pixels may include two horizontal pixels of the same color component from opposite sides of the current pixel (for example, P0, P1, P2 and P3). Next, at step 484, the horizontal pixel-to-pixel gradients are calculated with respect to each neighboring pixel within the raw frame 278, as described in equation 8 above. Thereafter, at step 486, a count C of the number of gradients that are less than or equal to a particular threshold dprTh is determined. As shown at the decision logic 488, if C is less than or equal to dprMaxC, then the processing 480 continues to step 490, and the current pixel is identified as being defective. The defective pixel is then corrected at step 492 using a replacement value. Additionally, referring back to the decision logic 488, if C is greater than dprMaxC, then the processing continues to step 494, and the current pixel is identified as not being defective, and its value is not changed.
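Purely by way of illustration, the detection and replacement logic of equations 8-10 and the processing 480 might be sketched as follows; the function name, the list-based row representation and the Python form are assumptions made for this sketch only and are not part of the disclosed hardware implementation.

def correct_pixel_front_end(row, x, dprTh, dprMaxC):
    # Sketch of the front-end DPDC of eqs. 8-10 for one pixel of a same-color row.
    # `row` holds same-color pixel values inside the raw frame; `x` indexes the
    # current pixel P; dprTh and dprMaxC are the per-color detection parameters.
    P = row[x]
    # Horizontal neighbors P0, P1 (left) and P2, P3 (right) inside the raw frame.
    neighbors = [row[i] for i in (x - 2, x - 1, x + 1, x + 2) if 0 <= i < len(row)]
    # Eq. 8: pixel-to-pixel gradients; eq. 9: count of gradients <= dprTh.
    C = sum(1 for Pk in neighbors if abs(P - Pk) <= dprTh)
    # Eq. 10: the pixel is treated as defective if C <= dprMaxC.
    if C <= dprMaxC:
        # Replacement rule: the pixel immediately to the left (P1), or the pixel
        # immediately to the right (P2) at the left boundary.
        if x >= 1:
            return row[x - 1]
        return row[x + 1] if x + 1 < len(row) else P
    return P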

It should be noted that the defective pixel detection/correction techniques applied during ISP pre-processing statistics processing may be less robust than the defective pixel detection/correction performed in the ISP pipeline logic 82. For instance, as will be discussed in further detail below, the defective pixel detection/correction performed in the ISP pipeline logic 82 may, in addition to dynamic defect correction, also provide fixed defect correction, in which the locations of defective pixels are known a priori and loaded into one or more defect tables. Further, dynamic defect correction in the ISP pipeline logic 82 may also consider pixel gradients in both the horizontal and vertical directions, and may also provide for the detection/correction of speckling, as will be discussed below.

Returning to Fig. 37, the output of the DPDC logic 460 is then passed to the black level compensation (BLC) logic 462. The BLC logic 462 may provide digital gain, offset and clipping independently for each color component c (e.g., R, B, Gr and Gb for Bayer) on the pixels used for statistics collection. For instance, as expressed by the following operation, the input value for the current pixel is first offset by a signed value and then multiplied by a gain.

Y = (X + O[c]) × G[c],    (11)

wherein X represents the input pixel value for a given color component c (e.g., R, B, Gr or Gb), O[c] represents a signed 16-bit offset for the current color component c, and G[c] represents a gain value for the color component c. In one embodiment, the gain G[c] may be a 16-bit unsigned number with 2 integer bits and 14 fraction bits (e.g., 2.14 floating point representation), and the gain G[c] may be applied with rounding. By way of example only, the gain G[c] may have a range of between 0 to 4X (e.g., 4 times the input pixel value).

Next, as shown by equation 12 below, the computed value Y, which is signed, may then be clipped to a minimum and maximum range:

Y = (Y < min[c]) ? min[c] : ((Y > max[c]) ? max[c] : Y)    (12)

The variables min[c] and max[c] may represent 16-bit "clipping values" for the minimum and maximum output values, respectively. In one embodiment, the BLC logic 462 may also be configured to maintain a count of the number of pixels that were clipped above and below the maximum and minimum values, respectively, for each color component.
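As an illustrative sketch only (the function and parameter names are not part of the present disclosure), the per-pixel operation of equations 11 and 12 could be expressed as:

def black_level_compensate(x, offset, gain, out_min, out_max):
    # Sketch of the BLC operation of equations 11 and 12 for one pixel.
    # `offset` is the signed per-color offset O[c]; `gain` is the per-color gain
    # G[c] (a 2.14 fixed-point value, applied here as a float with rounding);
    # the result is clipped to the per-color range [out_min, out_max].
    y = int(round((x + offset) * gain))   # eq. 11: offset first, then gain
    return max(out_min, min(out_max, y))  # eq. 12: clip to min/max output values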

Next, the output of the BLC logic 462 is forwarded to the lens shading correction (LSC) logic 464. The LSC logic 464 may be configured to apply an appropriate gain on a per-pixel basis to compensate for drop-offs in intensity, which are generally roughly proportional to the distance from the optical center of the lens 88 of the imaging device. As can be appreciated, such drop-offs may be the result of the geometric optics of the lens. By way of example, a lens having ideal optical properties may be modeled as the fourth power of the cosine of the incident angle, cos4(θ), referred to as the cos4 law. However, because lens manufacturing is not perfect, various irregularities in the lens may cause the optical properties to deviate from the assumed cos4 model. For instance, the thinner edges of the lens usually exhibit the most irregularities. Additionally, irregularities in lens shading patterns may also be the result of a microlens array within the image sensor not being perfectly aligned with the color array filter. Further, the infrared (IR) filter in some lenses may cause the drop-off to be illuminant-dependent and, thus, lens shading gains may be adapted depending upon the light source detected.

Referring to Fig. 40, a three-dimensional profile 496 depicting light intensity versus pixel position for a typical lens is illustrated. As shown, the light intensity near the center 498 of the lens gradually drops off towards the corners or edges 500 of the lens. The lens shading irregularities depicted in Fig. 40 may be better illustrated by Fig. 41, which shows a colored drawing of an image 502 that exhibits drop-offs in light intensity towards the corners and edges. Particularly, it should be noted that the light intensity at the approximate center of the image appears to be brighter than the light intensity at the corners and/or edges of the image.

In accordance with embodiments of the present techniques, lens shading correction gains may be specified as a two-dimensional grid of gains per color channel (e.g., Gr, R, B, Gb for a Bayer filter). The gain grid points may be distributed at fixed horizontal and vertical intervals within the raw frame 278 (Fig. 19). As discussed above in Fig. 19, the raw frame 278 may include an active region 280 that defines an area on which processing is performed for a particular image processing operation. With regard to the lens shading correction operation, the active processing region, which may be referred to as the LSC region, is defined within the raw frame region 278. As will be discussed below, the LSC region must be completely inside or at the gain grid boundaries, otherwise the results may be undefined.

For instance, referring to Fig. 42, an LSC region 504 and a gain grid 506 that may be defined within the raw frame 278 are shown. The LSC region 504 may have a width 508 and a height 510, and may be defined by an x-offset 512 and a y-offset 514 with respect to the boundary of the raw frame 278. Grid offsets (e.g., a grid x-offset 516 and a grid y-offset 518) from the base 520 of the gain grid 506 to the first pixel 522 in the LSC region 504 are also provided. These offsets may be within the first grid interval for a given color component. The horizontal (x-direction) and vertical (y-direction) grid point intervals 524 and 526, respectively, may be specified independently for each color channel.

As discussed above, assuming the use of a Bayer color filter array, 4 color channels of grid gains (R, B, Gr and Gb) may be defined. In one embodiment, a total of 4K (4096) grid points may be available and, for each color channel, a base address for the starting location of the grid gains may be provided, such as by using a pointer. Further, the horizontal (524) and vertical (526) grid point intervals may be defined in terms of pixels at the resolution of one color plane and, in certain embodiments, may provide for grid point intervals separated by a power of 2, such as by 8, 16, 32, 64 or 128, etc., in the horizontal and vertical directions. As can be appreciated, by utilizing a power of 2, efficient implementation of gain interpolation using shift (e.g., instead of division) and addition operations may be achieved. Using these parameters, the same gain values can be used even as the image sensor cropping region is changing. For instance, only a few parameters need to be updated to align the grid points to the cropped region (e.g., updating the grid offsets) instead of updating all grid gain values. By way of example only, this may be useful when cropping is used during digital zoom operations. Further, while the gain grid 506 shown in the embodiment of Fig. 42 is depicted as having generally equally spaced grid points, it should be understood that in other embodiments the grid points may not necessarily be equally spaced. For instance, in some embodiments, the grid points may be distributed unevenly (e.g., logarithmically), such that the grid points are less concentrated in the center of the LSC region 504, but more concentrated towards the corners of the LSC region 504, where lens shading distortion is typically more noticeable.

In accordance with the presently disclosed lens shading correction techniques, when the current pixel location is located outside of the LSC region 504, no gain is applied (e.g., the pixel is passed unchanged). When the current pixel location is at a gain grid location, the gain value at that particular grid point may be used. However, when the current pixel location is between grid points, the gain may be interpolated using bilinear interpolation. An example of interpolating the gain for the pixel location "G" on Fig. 43 is provided below.

As shown in Fig. 43, the pixel G is located between the grid points G0, G1, G2 and G3, which may correspond to the top-left, top-right, bottom-left and bottom-right gains, respectively, relative to the current pixel location G. The horizontal and vertical sizes of the grid interval are represented by X and Y, respectively. Additionally, ii and jj represent the horizontal and vertical pixel offsets, respectively, relative to the position of the top-left gain G0. Based upon these factors, the gain corresponding to the position G may thus be interpolated as follows:

G = [G0(Y - jj)(X - ii) + G1(Y - jj)(ii) + G2(jj)(X - ii) + G3(ii)(jj)] / (XY)    (13a)

The terms in equation 13a above may then be combined to obtain the following expression:

G = {G0[XY - X(jj) - Y(ii) + (ii)(jj)] + G1[Y(ii) - (ii)(jj)] + G2[X(jj) - (ii)(jj)] + G3[(ii)(jj)]} / (XY)    (13b)

In one embodiment, the interpolation may be performed incrementally, instead of using a multiplier at each pixel, thus reducing computational complexity. For instance, the term (ii)(jj) may be realized using an adder that may be initialized to 0 at location (0, 0) of the gain grid 506 and incremented by the current row number each time the current column number increases by a pixel. As discussed above, since the values of X and Y may be selected as powers of two, gain interpolation may be accomplished using simple shift operations. Thus, the multiplier is needed only at the grid point G0 (instead of at every pixel), and only addition operations are needed to determine the interpolated gain for the remaining pixels.

In certain embodiments, the interpolation of gains between the grid points may use 14-bit precision, and the grid gains may be unsigned 10-bit values with 2 integer bits and 8 fraction bits (e.g., 2.8 floating point representation). Using this convention, the gain may have a range of between 0 and 4X, and the gain resolution between grid points may be 1/256.
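For purposes of illustration only, the bilinear interpolation of equation 13a could be sketched as follows; the function name and the floating point arithmetic are assumptions of this sketch, whereas a hardware implementation would typically rely on the fixed-point, shift-and-add scheme described above.

def interpolate_grid_gain(G0, G1, G2, G3, ii, jj, X, Y):
    # Sketch of the bilinear gain interpolation of equation 13a.
    # G0..G3 are the top-left, top-right, bottom-left and bottom-right grid
    # gains surrounding the current pixel; ii and jj are the horizontal and
    # vertical pixel offsets from G0; X and Y are the grid intervals.
    num = (G0 * (Y - jj) * (X - ii)
           + G1 * (Y - jj) * ii
           + G2 * jj * (X - ii)
           + G3 * ii * jj)
    # With X and Y chosen as powers of two, the division below reduces to a
    # shift in a fixed-point implementation.
    return num / (X * Y)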

The lens shading correction techniques may be further illustrated by the processing 528 shown in Fig. 44. As shown, the processing 528 begins at step 530, at which the position of the current pixel is determined relative to the boundaries of the LSC region 504 of Fig. 42. Next, the decision logic 532 determines whether the current pixel position is within the LSC region 504. If the current pixel position is outside of the LSC region 504, the processing 528 continues to step 534, and no gain is applied to the current pixel (e.g., the pixel is passed unchanged).

If the current pixel position is within the LSC region 504, the processing 528 continues to the decision logic 536, at which it is further determined whether the current pixel position corresponds to a grid point within the gain grid. If the current pixel position corresponds to a grid point, then the gain value at that grid point is selected and applied to the current pixel, as shown at step 538. If the current pixel position does not correspond to a grid point, then the processing 528 continues to step 540, and a gain is interpolated based upon the bordering grid points (e.g., G0, G1, G2 and G3 of Fig. 43). For instance, the interpolated gain may be computed in accordance with equations 13a and 13b, as discussed above. Thereafter, the processing 528 ends at step 542, at which the interpolated gain from step 540 is applied to the current pixel.

As will be appreciated, the processing 528 may be repeated for each pixel of the image data. For instance, as shown in Fig. 45, a three-dimensional profile depicting the gains that may be applied to each pixel position within an LSC region (e.g., 504) is illustrated. As shown, the gains applied at the corners 544 of the image may be generally greater than the gains applied at the center 546 of the image, due to the greater drop-off in light intensity at the corners, as shown in Figs. 40 and 41. Using the presently described lens shading correction techniques, the appearance of light intensity drop-offs in the image may be reduced or substantially eliminated. For instance, Fig. 46 provides an example of how the colored drawing of the image 502 from Fig. 41 may appear after lens shading correction is applied. As shown, compared to the original image of Fig. 41, the overall light intensity is generally more uniform across the image. Particularly, the light intensity at the approximate center of the image may be substantially equal to the light intensity values at the corners and/or edges of the image. Additionally, as mentioned above, the interpolated gain computation (equations 13a and 13b) may, in some embodiments, be replaced with an additive "delta" between grid points by taking advantage of the sequential column and row increment structure. As will be appreciated, this reduces computational complexity.

In further embodiments, in addition to using grid gains, a global gain per color component that is scaled as a function of the distance from the image center may be used. The center of the image may be provided as an input parameter, and may be estimated by analyzing the light intensity amplitude of each image pixel in a uniformly illuminated image. The radial distance between the identified center pixel and the current pixel may then be used to obtain a linearly scaled radial gain Gr, as shown below:

Gr = Gp[c] × R,    (14)

wherein Gp[c] represents a global gain parameter for each color component c (e.g., the R, B, Gr and Gb components for a Bayer pattern), and wherein R represents the radial distance between the center pixel and the current pixel.

Referring to Fig. 47, which shows the LSC region 504 discussed above, the distance R may be calculated or estimated using several techniques. As shown, the pixel C corresponding to the image center may have the coordinates (x0, y0), and the current pixel G may have the coordinates (xG, yG). In one embodiment, the LSC logic 464 may calculate the distance R using the following equation:

R = √((xG - x0)² + (yG - y0)²)    (15)

In yet another embodiment, a simpler formula, shown below, can be used to obtain estimated values for R:

R = α × max(abs(xG - x0), abs(yG - y0)) + β × min(abs(xG - x0), abs(yG - y0))    (16)

In equation 16, the estimation coefficients α and β may be scaled to 8-bit values. By way of example only, in one embodiment, α may be approximately equal to 123/128 and β may be approximately equal to 51/128 to provide an estimated value for R. Using these coefficient values, the largest error may be approximately 4%, with a median error of approximately 1.3%. Thus, even though the estimation technique may be somewhat less accurate than the calculation technique for determining R (equation 15), the margin of error is low enough that the estimated values of R are suitable for determining radial gain components for the present lens shading correction techniques.

The radial gain Gr may then be multiplied by the interpolated grid gain value G (equations 13a and 13b) for the current pixel to determine a total gain that may be applied to the current pixel. The output pixel Y is obtained by multiplying the input pixel value X by the total gain, as shown below:

Y = (G × Gr × X)    (17)
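Solely as an illustrative sketch (the names, default coefficients and floating point arithmetic below are assumptions made for this sketch, not part of the disclosure), the combination of the radial and grid gain components per equations 14, 16 and 17 might look as follows:

def lsc_output_pixel(x_in, grid_gain, px, py, x0, y0, Gp, alpha=123.0/128.0, beta=51.0/128.0):
    # Sketch combining equations 14, 16 and 17: output = grid gain x radial gain x input.
    # `grid_gain` is the (interpolated) grid gain G for the pixel at (px, py),
    # (x0, y0) is the previously determined image center, and Gp is the
    # per-color global gain parameter Gp[c].
    dx, dy = abs(px - x0), abs(py - y0)
    R = alpha * max(dx, dy) + beta * min(dx, dy)   # eq. 16: estimated radial distance
    Gr = Gp * R                                    # eq. 14: linearly scaled radial gain
    return grid_gain * Gr * x_in                   # eq. 17: Y = G x Gr x X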

Thus, in accordance with the present techniques, lens shading correction may be performed using only the interpolated gain, or using both the interpolated gain and radial gain components. Alternatively, lens shading correction may also be accomplished using only the radial gain in conjunction with a radial grid table that compensates for radial approximation errors. For instance, instead of a rectangular gain grid 506, as shown in Fig. 42, a radial gain grid having a plurality of grid points defining gains in the radial and angular directions may be provided. Thus, when determining the gain to apply to a pixel that does not align with one of the radial grid points within the LSC region 504, interpolation may be applied using the four grid points that enclose the pixel to determine an appropriate interpolated lens shading gain.

Referring to Fig. 48, the use of interpolated and radial gain components in lens shading correction is illustrated by the processing 548. It should be noted that the processing 548 may include steps that are similar to those of the processing 528, described above in Fig. 44. Accordingly, such steps have been numbered with like reference numerals. Beginning at step 530, the current pixel is received and its location relative to the LSC region 504 is determined. Next, the decision logic 532 determines whether the current pixel position is within the LSC region 504. If the current pixel position is outside of the LSC region 504, the processing 548 continues to step 534, and no gain is applied to the current pixel (e.g., the pixel is passed unchanged). If the current pixel position is within the LSC region 504, then the processing 548 may continue simultaneously to step 550 and to the decision logic 536. Referring first to step 550, data identifying the center of the image is retrieved. As discussed above, determining the center of the image may include analyzing light intensity amplitudes for the pixels under uniform illumination. This may occur, for example, during calibration. Thus, it should be understood that step 550 does not necessarily encompass repeatedly calculating the center of the image for the processing of each pixel, but may refer to retrieving the data (e.g., coordinates) of a previously determined image center. Once the center of the image is identified, the processing 548 may continue to step 552, at which the distance between the image center and the current pixel location (R) is determined. As discussed above, the value of R may be calculated (equation 15) or estimated (equation 16). Next, at step 554, a radial gain component Gr may be computed using the distance R and the global gain parameter corresponding to the color component of the current pixel (equation 14). The radial gain component Gr may be used to determine the total gain, as will be discussed at step 558 below.

Referring back to the decision logic 536, it is determined whether the current pixel position corresponds to a grid point within the gain grid. If the current pixel position corresponds to a grid point, then the gain value at that grid point is determined, as shown at step 556. If the current pixel position does not correspond to a grid point, then the processing 548 continues to step 540, and an interpolated gain is computed based upon the bordering grid points (e.g., G0, G1, G2 and G3 of Fig. 43). For instance, the interpolated gain may be computed in accordance with equations 13a and 13b, as discussed above. Next, at step 558, a total gain is determined based upon the radial gain determined at step 554, as well as one of the grid gain (step 556) or the interpolated gain (step 540). As can be appreciated, this may depend on which branch the decision logic 536 takes during the processing 548. The total gain is then applied to the current pixel, as shown at step 560. Again, it should be noted that, like the processing 528, the processing 548 may also be repeated for each pixel of the image data.

The use of the radial gain in conjunction with the grid gains may offer various advantages. For instance, using a radial gain allows the use of a single common gain grid for all color components. This may greatly reduce the total storage space required for storing separate gain grids for each color component. For instance, in a Bayer image sensor, the use of a single gain grid for each of the R, B, Gr and Gb components may reduce the gain grid data by approximately 75%. As will be appreciated, this reduction in grid gain data may decrease implementation costs, as grid gain data tables may account for a significant portion of memory or chip area in image processing hardware. Further, depending upon the hardware implementation, the use of a single set of gain grid values may offer further advantages, such as reducing overall chip area (e.g., such as when the gain grid values are stored in an on-chip memory) and reducing memory bandwidth requirements (e.g., such as when the gain grid values are stored in an off-chip external memory).

Having thoroughly described the functionality of the lens shading correction logic 464 shown in Fig. 37, the output of the LSC logic 464 is subsequently forwarded to the inverse black level compensation (IBLC) logic 466. The IBLC logic 466 provides gain, offset and clipping independently for each color component (e.g., R, B, Gr and Gb), and generally performs the inverse function of the BLC logic 462. For instance, as shown by the following operation, the value of the input pixel is first multiplied by a gain and then offset by a signed value.

Y = (X × G[c]) + O[c],    (18)

wherein X represents the input pixel value for a given color component c (e.g., R, B, Gr or Gb), O[c] represents a signed 16-bit offset for the current color component c, and G[c] represents a gain value for the color component c. In one embodiment, the gain G[c] may have a range of between approximately 0 to 4X (4 times the input pixel value X). It should be noted that these variables may be the same variables discussed above in equation 11. The computed value Y may be clipped to a minimum and maximum range using, for example, equation 12. In one embodiment, the IBLC logic 466 may be configured to maintain a count of the number of pixels that were clipped above and below the maximum and minimum values, respectively, for each color component.

Thereafter, the output of the IBLC logic 466 is received by the statistics collection block 468, which may provide for the collection of various statistical data points about the image sensor(s) 90, such as those relating to automatic exposure (AE), automatic white balancing (AWB), automatic focusing (AF), flicker detection, and so forth. A brief overview discussing the significance of AWB, AE and AF statistics is provided below.

With regard to white balancing, the image sensor response at each pixel may depend on the illumination source, since the light source is reflected from objects in the image scene. Thus, each pixel value recorded in the image scene is related to the color temperature of the light source. When a white object is illuminated under a low color temperature, it may appear reddish in the captured image. Conversely, a white object illuminated under a high color temperature may appear bluish in the captured image. The goal of white balancing is, therefore, to adjust the RGB values such that the image appears to the human eye as if it were taken under canonical light. Thus, in the context of imaging statistics relating to white balance, color information about white objects is collected to determine the color temperature of the light source. In general, white balancing algorithms may include two main steps. First, the color temperature of the light source is estimated. Second, the estimated color temperature is used to adjust color gain values and/or to determine/adjust coefficients of a color correction matrix. Such gains may be a combination of analog and digital image sensor gains, as well as ISP digital gains.
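The present disclosure does not prescribe a particular white balancing algorithm; purely as an illustration of the two-step idea above, a simple "gray world" estimate of the per-channel gains (a stand-in technique for this sketch, not necessarily the one used by the described statistics hardware) could be sketched as:

def gray_world_wb_gains(r_avg, g_avg, b_avg):
    # Illustrative "gray world" white-balance gain estimate.  The channel
    # averages would come from the collected AWB statistics; the returned
    # gains scale R and B so that their averages match the green average.
    return {"R": g_avg / r_avg, "G": 1.0, "B": g_avg / b_avg}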

Next, automatic exposure generally refers to a process of adjusting pixel integration time and gains to control the luminance of the captured image. For instance, automatic exposure may control the amount of light from the scene that is captured by the image sensor(s) by setting the integration time. Further, automatic focusing may refer to determining the optimal focal length of the lens in order to substantially optimize the focus of the image. Thus, these various types of statistics, among others, may be determined and collected via the statistics collection block 468. As shown, the output STATS0 of the statistics collection block 468 may be sent to the memory 108 and routed to the control logic 84 or, alternatively, may be sent directly to the control logic 84.

As discussed above, the control logic 84 may process the collected statistical data to determine one or more control parameters for controlling the imaging device 30 and/or the image processing circuitry 32. For instance, such control parameters may include parameters for operating the lens of the image sensor 90 (e.g., focal length adjustment parameters), image sensor parameters (e.g., analog and/or digital gains, integration time), as well as ISP pipeline parameters (e.g., digital gain values, color correction matrix (CCM) coefficients). Additionally, as mentioned above, in certain embodiments, statistics collection may occur at a precision of 8 bits and, thus, raw pixel data having a higher bit depth may be down-sampled to an 8-bit format for statistics purposes. As discussed above, down-sampling to 8 bits (or any other lower bit resolution) may reduce hardware size (e.g., area) and also reduce processing complexity, as well as allow the statistical data to be more robust to noise (e.g., using spatial averaging of the image data).

Before proceeding with a detailed description of the ISP pipeline logic 82 downstream from the ISP pre-processing logic 80, it should be understood that the arrangement of the various functional logic blocks in the statistical processing blocks 120 and 122 (e.g., the logic blocks 460, 462, 464, 466 and 468) and in the pixel preprocessing unit 130 (e.g., the logic blocks 298 and 300) is intended to illustrate only one embodiment of the present technique. Indeed, in other embodiments, the logic blocks illustrated herein may be arranged in a different order, or may include additional logic blocks that may perform additional image processing functions not specifically described herein. Further, it should be understood that the image processing operations performed in the statistical processing blocks (e.g., 120 and 122), such as lens shading correction, defective pixel detection/correction and black level compensation, are performed within the statistical processing blocks for the purposes of collecting statistical data. Thus, the processing operations performed upon the image data received by the statistical processing blocks are not actually reflected in the image signal 109 (FEProcOut), which is output from the pixel preprocessing unit 130 and forwarded to the ISP pipeline logic 82.

Before continuing, it should also be noted that, given sufficient processing time and the similarity between many of the processing requirements of the various operations described herein, the functional blocks shown herein may be reconfigured to perform image processing in a sequential manner, rather than a pipelined nature. As will be understood, this may further reduce the overall hardware implementation costs, but may also increase the bandwidth to external memory (e.g., to cache/store intermediate results/data).

ISP pipeline ("pipe") processing logic

Having described the ISP pre-processing logic 80 in detail above, the present discussion will now shift its focus to the ISP pipeline logic 82. Generally, the function of the ISP pipeline logic 82 is to receive raw image data, which may be provided from the ISP pre-processing logic 80 or retrieved from the memory 108, and to perform additional image processing operations, e.g., prior to outputting the image data to the display device 28.

A structural diagram showing an embodiment of the ISP pipeline logic 82 is depicted in Fig. 49. As illustrated, the ISP pipeline logic 82 may include raw pixel processing logic 562, RGB processing logic 564 and YCbCr processing logic 566. The raw pixel processing logic 562 may perform various image processing operations, such as defective pixel detection and correction, lens shading correction, demosaicing, as well as applying gains for automatic white balance and/or setting a black level, as will be discussed further below. As shown in the present embodiment, the input signal 570 to the raw pixel processing logic 562 may be the raw pixel output signal 109 (signal FEProcOut) from the ISP pre-processing logic 80 or the raw pixel data 112 from the memory 108, depending on the present configuration of the selection logic 568.

As a result of demosaicing operations performed within the raw pixel processing logic 562, the image output signal 572 may be in the RGB domain, and may be subsequently forwarded to the RGB processing logic 564. For instance, as shown in Fig. 49, the RGB processing logic 564 receives the signal 578, which may be the output signal 572 or an RGB image signal 574 from the memory 108, depending on the present configuration of the selection logic 576. The RGB processing logic 564 may provide for various RGB color adjustment operations, including color correction (e.g., using a color correction matrix), the application of color gains for automatic white balancing, as well as global tone mapping, as will be discussed further below. The RGB processing logic 564 may also provide for the color space conversion of RGB image data to the YCbCr (luma/chroma) color space. Thus, the image output signal 580 may be in the YCbCr domain, and may be subsequently forwarded to the YCbCr processing logic 566.

For instance, as shown in Fig. 49, the YCbCr processing logic 566 receives the signal 586, which may be the output signal 580 from the RGB processing logic 564 or a YCbCr signal 582 from the memory 108, depending on the present configuration of the selection logic 584. As will be discussed in further detail below, the YCbCr processing logic 566 may provide for image processing operations in the YCbCr color space, including scaling, chroma suppression, luma sharpening, brightness, contrast and color (BCC) adjustments, YCbCr gamma mapping, chroma decimation, and so forth. The image output signal 588 of the YCbCr processing logic 566 may be sent to the memory 108, or may be output from the ISP pipeline logic 82 as the image signal 114 (Fig. 7). The image signal 114 may be sent to the display device 28 (either directly or via the memory 108) for viewing by the user, or may be further processed using a compression engine (e.g., the encoder 118), a CPU/GPU, a graphics engine or the like.

In accordance with embodiments of the present techniques, the ISP pipeline logic 82 may support the processing of raw pixel data in 8-bit, 10-bit, 12-bit or 14-bit formats. For instance, in one embodiment, 8-bit, 10-bit or 12-bit input data may be converted to 14 bits at the input of the raw pixel processing logic 562, and the raw pixel processing and RGB processing operations may be performed with 14-bit precision. In the latter embodiment, the 14-bit image data may be down-sampled to 10 bits prior to the conversion of the RGB data to the YCbCr color space, and the YCbCr processing (logic 566) may be performed with 10-bit precision.
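By way of a hedged illustration only (the disclosure states that lower bit-depth input may be converted to 14 bits but does not detail how the conversion is performed), one simple way such a promotion might be done is a left shift that places the sample in the upper bits of the 14-bit range:

def promote_to_14_bits(sample, in_bits):
    # Illustrative promotion of an 8-, 10- or 12-bit raw sample to 14 bits by
    # left-shifting; this is an assumption of the sketch, not a statement of
    # how the described hardware actually performs the conversion.
    return sample << (14 - in_bits)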

In order to provide a comprehensive description of the various functions provided by the ISP pipeline logic 82, each of the raw pixel processing logic 562, the RGB processing logic 564 and the YCbCr processing logic 566, as well as the internal logic for performing the various image processing operations that may be implemented in each respective logic block 562, 564 and 566, will be discussed sequentially below, beginning with the raw pixel processing logic 562. For instance, referring now to Fig. 50, a structural diagram showing a more detailed view of an embodiment of the raw pixel processing logic 562 is illustrated, in accordance with an embodiment of the present technique. As shown, the raw pixel processing logic 562 includes gain, offset and clamping (GOC) logic 590, defective pixel detection/correction (DPDC) logic 592, noise reduction logic 594, lens shading correction logic 596, GOC logic 598 and demosaicing logic 600. Further, while the examples discussed below assume the use of a Bayer color filter array with the image sensor(s) 90, it should be understood that other embodiments of the present techniques may utilize different types of color filters as well.

The input signal 570, which may be a raw image signal, is first received by the gain, offset and clamping (GOC) logic 590. The GOC logic 590 may provide similar functions, and may be implemented in a similar manner, with respect to the BLC logic 462 of the statistical processing block 120 of the ISP pre-processing logic 80, as discussed above in Fig. 37. For instance, the GOC logic 590 may provide digital gain, offsets and clamping (clipping) independently for each color component R, B, Gr and Gb of a Bayer image sensor. Particularly, the GOC logic 590 may perform automatic white balance or set the black level of the raw image data. Further, in some embodiments, the GOC logic 590 may also be used to correct or compensate for an offset between the Gr and Gb color components.

In operation, the input value for the current pixel is first offset by a signed value and multiplied by a gain. This operation may be performed using the formula shown in equation 11 above, wherein X represents the input pixel value for a given color component R, B, Gr or Gb, O[c] represents a signed 16-bit offset for the current color component c, and G[c] represents a gain value for the color component c. The values for G[c] may be previously determined during statistical processing (e.g., in the ISP pre-processing logic 80). In one embodiment, the gain G[c] may be a 16-bit unsigned number with 2 integer bits and 14 fraction bits (e.g., 2.14 floating point representation), and the gain G[c] may be applied with rounding. By way of example only, the gain G[c] may have a range of between 0 to 4X.

The computed pixel value Y (which includes the gain G[c] and the offset O[c]) from equation 11 is then clipped to a minimum and maximum range in accordance with equation 12. As discussed above, the variables min[c] and max[c] may represent 16-bit "clipping values" for the minimum and maximum output values, respectively. In one embodiment, the GOC logic 590 may also be configured to maintain a count of the number of pixels that were clipped above and below the maximum and minimum ranges, respectively, for each color component.

Next, the output of the GOC logic 590 is forwarded to the defective pixel detection and correction logic 592. As discussed above with reference to Fig. 37 (DPDC logic 460), defective pixels may be attributable to a number of factors, and may include "hot" (or leaky) pixels, "stuck" pixels and "dead" pixels, wherein hot pixels exhibit a higher than normal charge leakage relative to non-defective pixels, and thus may appear brighter than non-defective pixels, and wherein a stuck pixel appears as always being on (e.g., fully charged) and thus appears brighter, whereas a dead pixel appears as always being off. Accordingly, it may be desirable to have a pixel detection scheme that is robust enough to identify and address different types of failure scenarios. Particularly, when compared to the pre-processing DPDC logic 460, which may provide only dynamic defect detection/correction, the pipeline DPDC logic 592 may provide for fixed or static defect detection/correction, dynamic defect detection/correction, as well as speckle removal.

In accordance with embodiments of the presently disclosed techniques, the defective pixel detection/correction performed by the DPDC logic 592 may occur independently for each color component (e.g., R, B, Gr and Gb), and may include various operations for detecting defective pixels, as well as for correcting the detected defective pixels. For instance, in one embodiment, the defective pixel detection operations may provide for the detection of static defects, dynamic defects, as well as the detection of speckle, which may refer to electrical interference or noise (e.g., photon noise) that may be present in the image sensor. By analogy, speckle may appear in an image as seemingly random noise artifacts, similar to the manner in which static may appear on a display, such as a television display. Further, as noted above, dynamic defect correction is regarded as being dynamic in the sense that the characterization of a pixel as being defective at a given time may depend on the image data in the neighboring pixels. For instance, a stuck pixel that is always at maximum brightness may not be regarded as a defective pixel if the location of the stuck pixel is in an area of the current image dominated by bright white colors. Conversely, if the stuck pixel is in a region of the current image that is dominated by black or darker colors, then the stuck pixel may be identified as a defective pixel during processing by the DPDC logic 592 and corrected accordingly.

With regard to static defect detection, the location of each pixel is compared to a static defect table, which may store data corresponding to the locations of pixels that are known to be defective. For instance, in one embodiment, the DPDC logic 592 may monitor the detection of defective pixels (e.g., using a counter mechanism or register) and, if a particular pixel is observed as repeatedly failing, the location of that pixel is stored in the static defect table. Thus, during static defect detection, if it is determined that the location of the current pixel is in the static defect table, then the current pixel is identified as being a defective pixel, and a replacement value is determined and temporarily stored. In one embodiment, the replacement value may be the value of the previous pixel (based on the scan order) of the same color component. The replacement value may be used to correct the static defect during dynamic defect/speckle detection and correction, as will be discussed below. Additionally, if the previous pixel is outside of the raw frame 278 (Fig. 19), then its value is not used, and the static defect may be corrected during the dynamic defect correction process. Further, due to memory considerations, the static defect table may store a finite number of location entries. For instance, in one embodiment, the static defect table may be implemented as a FIFO queue (first-in-first-out) configured to store a total of 16 locations for every two lines of image data. The locations stored in the static defect table will, nonetheless, be corrected using the previous pixel replacement values (rather than via the dynamic defect detection processing discussed below). As mentioned above, embodiments of the present techniques may also provide for updating the static defect table intermittently over time.

Embodiments may provide for the static defect table to be implemented in on-chip memory or off-chip memory. As will be appreciated, using an on-chip implementation may increase the overall chip area/size, while using an off-chip implementation may reduce the chip area/size, but increase memory bandwidth requirements. Thus, it should be understood that the static defect table may be implemented either on-chip or off-chip depending on specific implementation requirements, i.e., the total number of pixels that are to be stored within the static defect table.
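As a minimal sketch of the FIFO-style static defect table described above (the class name, capacity handling and severity ordering below are illustrative assumptions only, not the disclosed hardware structure):

from collections import deque

class StaticDefectTable:
    # Sketch of a FIFO static defect table with a fixed capacity (e.g., 16
    # locations per two lines of image data, as described above).
    def __init__(self, capacity=16):
        self.entries = deque(maxlen=capacity)   # oldest entries fall out first

    def add(self, location, min_gradient):
        # Store the pixel location together with min(Gk) so that entries can
        # later be ordered by defect "severity" (larger minimum gradient first).
        self.entries.append((location, min_gradient))

    def contains(self, location):
        return any(loc == location for loc, _ in self.entries)

    def by_severity(self):
        return sorted(self.entries, key=lambda e: e[1], reverse=True)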

The dynamic defect and speckle detection processing may be time-shifted with respect to the static defect detection processing discussed above. For instance, in one embodiment, the dynamic defect and speckle detection processing may begin after the static defect detection processing has analyzed two scan lines (e.g., rows) of pixels. As can be appreciated, this allows static defects and their respective replacement values to be identified and determined before dynamic defect/speckle detection occurs. For example, during the dynamic defect/speckle detection process, if the current pixel was previously marked as being a static defect, then rather than applying dynamic defect/speckle detection operations, the static defect is simply corrected using the previously assessed replacement value.

With regard to dynamic defect and speckle detection, these processes may occur sequentially or in parallel. The dynamic defect and speckle detection and correction performed by the DPDC logic 592 may rely upon adaptive edge detection using pixel-to-pixel directional gradients. In one embodiment, the DPDC logic 592 may select the eight immediate neighbors of the current pixel having the same color component that are within the raw frame 278 (Fig. 19). In other words, the current pixel and its eight immediate neighbors P0, P1, P2, P3, P4, P5, P6 and P7 may form a 3x3 area, as shown in Fig. 51.

It should be noted, however, that depending on the location of the current pixel P, pixels outside of the raw frame 278 are not considered when calculating the pixel-to-pixel gradients. For example, with regard to the "top-left" case 602 shown in Fig. 51, the current pixel P is at the top-left corner of the raw frame 278 and, thus, the neighboring pixels P0, P1, P2, P3 and P5 outside of the raw frame 278 are not considered, leaving only the pixels P4, P6 and P7 (N=3). In the "top" case 604, the current pixel P is at the top-most edge of the raw frame 278 and, thus, the neighboring pixels P0, P1 and P2 outside of the raw frame 278 are not considered, leaving only the pixels P3, P4, P5, P6 and P7 (N=5). Next, in the "top-right" case 606, the current pixel P is at the top-right corner of the raw frame 278 and, thus, the neighboring pixels P0, P1, P2, P4 and P7 outside of the raw frame 278 are not considered, leaving only the pixels P3, P5 and P6 (N=3). In the "left" case 608, the current pixel P is at the left-most edge of the raw frame 278 and, thus, the neighboring pixels P0, P3 and P5 outside of the raw frame 278 are not considered, leaving only the pixels P1, P2, P4, P6 and P7 (N=5).

In the "Central" case 610 all pixels P0-P7 are within the raw frame 278 and, thus, are used in the determination of gradients from pixel to pixel (N=8). In the "right" case 612, the current pixel P is at the right edge of the raw frame 278 and, thus, the neighboring pixels P2, P4, and P7 outside of the raw frame 278 is not taken into account, leaving only the pixels P0, P1, P3, P5 and P6 (N=5). Additionally, in the "lower left" case 614 current pixel P is at the bottom left corner of the raw frame 278 and, thus, the neighboring pixels P0, P3, P5, P6, and P7 outside of the raw frame 278 is not taken into account, leaving only the pixels P1, P2, and P4 (N=3). In the "lower" case 616 current pixel P is at the bottom edge of the raw frame 278 and, thus, the neighboring pixels P5, P6, and P7 outside of the raw frame 278 is not taken into account, leaving only the pixels P0, P1, P2, P3, and P4 (N=5). In conclusion, in the "lower right" case 618, the current pixel P is at the bottom right corner of the raw frame 278 and, thus, the neighboring pixels P2, P4, P5, P6, and P7 outside of the raw kad is and 278 are not taken into account, leaving only the pixels P0, P1 and P3 (N=3).

Thus, depending upon the position of the current pixel P, the number of pixels used in determining the pixel-to-pixel gradients may be 3, 5 or 8. In the illustrated embodiment, for each neighboring pixel (k = 0 to 7) within the picture boundaries (e.g., the raw frame 278), the pixel-to-pixel gradients may be calculated as follows:

Gk = abs(P - Pk), for 0 ≤ k ≤ 7 (only for k within the raw frame)    (19)

Additionally, an average gradient, Gav, may be calculated as the difference between the current pixel and the average, Pav, of its surrounding pixels, as shown by the equations below:

Pav = (Σk Pk) / N, wherein N = 3, 5 or 8 (depending on the pixel position)    (20a)

Gav = abs(P - Pav)    (20b)

The pixel-to-pixel gradient values (equation 19) may be used in determining a dynamic defect case, and the average of the neighboring pixels (equations 20a and 20b) may be used in identifying speckle cases, as discussed further below.

In one embodiment, dynamic defect detection may be performed by the DPDC logic 592 as follows. First, it is assumed that a pixel is defective if a certain number of its gradients Gk are at or below a particular threshold, denoted by the variable dynTh (a dynamic defect threshold). Thus, for each pixel, a count (C) of the number of gradients for neighboring pixels inside the picture frame that are at or below the threshold dynTh is accumulated. The threshold dynTh may be a combination of a fixed threshold component and a dynamic threshold component that may depend on the "activity" present in the surrounding pixels. For instance, in one embodiment, the dynamic threshold component for dynTh may be determined by calculating a high-frequency component value Phf based upon summing the absolute differences between the average pixel value Pav (equation 20a) and each neighboring pixel, as illustrated below:

Phf = (8/N) × Σk abs(Pav - Pk), wherein N = 3, 5 or 8    (20c)

In instances where the pixel is located at an image corner (N=3) or at an image edge (N=5), Phf may be multiplied by 8/3 or 8/5, respectively. As can be appreciated, this ensures that the high-frequency component Phf is normalized based on eight neighboring pixels (N=8).

Once Phf is determined, the dynamic defect detection threshold dynTh may be computed as shown below:

dynTh = dynTh1 + (dynTh2 × Phf),    (21)

wherein dynTh1 represents the fixed threshold component, and wherein dynTh2 represents the dynamic threshold component and is a multiplier for Phf in equation 21. A different fixed threshold component dynTh1 may be provided for each color component, but for each pixel of the same color, dynTh1 is the same. By way of example only, dynTh1 may be set so that it is at least above the variance of the noise in the image.

The dynamic threshold component dynTh2 may be determined based on certain characteristics of the image. For instance, in one embodiment, dynTh2 may be determined using stored empirical data regarding exposure and/or sensor integration time. The empirical data may be determined during calibration of the image sensor (e.g., 90), and may associate dynamic threshold component values that may be selected for dynTh2 with each of a number of data points. Thus, based upon the current exposure and/or sensor integration time value, which may be determined during statistical processing in the ISP pre-processing logic 80, dynTh2 may be determined by selecting the dynamic threshold component value from the stored empirical data that corresponds to the current exposure and/or sensor integration time value. Additionally, if the current exposure and/or sensor integration time value does not correspond directly to one of the empirical data points, then dynTh2 may be determined by interpolating the dynamic threshold component values associated with the data points between which the current exposure and/or sensor integration time value falls. Further, like the fixed threshold component dynTh1, the dynamic threshold component dynTh2 may have different values for each color component. Thus, the composite threshold dynTh may vary for each color component (e.g., R, B, Gr, Gb).
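As an illustrative sketch of equations 20a, 20c and 21 (the function name and the floating point arithmetic are assumptions of this sketch only):

def dynamic_defect_threshold(neighbors, dynTh1, dynTh2):
    # Sketch of the composite dynamic defect threshold of equation 21.
    # `neighbors` holds the N (3, 5 or 8) same-color neighbors of the current
    # pixel that lie inside the raw frame; dynTh1 and dynTh2 are the fixed and
    # dynamic threshold components for the pixel's color component.
    N = len(neighbors)
    Pav = sum(neighbors) / N                                   # eq. 20a
    Phf = (8.0 / N) * sum(abs(Pav - Pk) for Pk in neighbors)   # eq. 20c
    return dynTh1 + dynTh2 * Phf                               # eq. 21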

As mentioned above, for each pixel, a count C of the number of gradients for neighboring pixels inside the picture frame that are at or below the threshold dynTh is determined. For instance, for each neighboring pixel within the raw frame 278, the accumulated count C of the gradients Gk that are at or below the threshold dynTh may be computed as follows:

C = Σk (Gk ≤ dynTh),    (22)

for 0 ≤ k ≤ 7 (only for k within the raw frame).

Next, if the accumulated count C is determined to be less than or equal to a maximum count, denoted by the variable dynMaxC, then the pixel may be considered a dynamic defect. In one embodiment, different values for dynMaxC may be provided for the N=3 (corner), N=5 (edge) and N=8 conditions. This logic is expressed below:

if (C ≤ dynMaxC), then the current pixel P is defective.    (23)

As mentioned above, the locations of defective pixels may be stored in the static defect table. In some embodiments, the minimum gradient value (min(Gk)) calculated during dynamic defect detection for the current pixel may be stored and used to sort the defective pixels, such that a greater minimum gradient value indicates a greater "severity" of the defect and should be corrected during pixel correction before less severe defects are corrected. In one embodiment, a pixel may need to be processed over multiple imaging frames before being stored in the static defect table, such as by filtering the locations of defective pixels over time. In the latter embodiment, the location of a defective pixel may be stored in the static defect table only if the defect appears at the same location in a particular number of consecutive images. Further, in some embodiments, the static defect table may be configured to sort the stored defective pixel locations based upon the minimum gradient values. For instance, the highest minimum gradient value may indicate a defect of greater "severity". By ordering the locations in this manner, the priority of static defect correction may be set, such that the most severe or important defects are corrected first. Additionally, the static defect table may be updated over time to include newly detected static defects, ordering them accordingly based on their respective minimum gradient values.

Speckle detection, which may occur in parallel with the dynamic defect detection process described above, may be performed by determining whether the value Gav (equation 20b) is above a speckle detection threshold spkTh. Like the dynamic defect threshold dynTh, the speckle threshold spkTh may also include constant and dynamic components, referred to as spkTh1 and spkTh2, respectively. In general, the constant and dynamic components spkTh1 and spkTh2 may be set more "aggressively" compared with the dynTh1 and dynTh2 values, in order to avoid falsely detecting speckle in areas of the image that may be more heavily textured, such as text, foliage, certain fabric patterns, and so forth. Accordingly, in one embodiment, the dynamic speckle threshold component spkTh2 may be increased for high-frequency areas of the image and decreased for "flatter" or more uniform areas. The speckle detection threshold spkTh may be computed as shown below:

spkTh = spkTh1 + (spkTh2 × Phf),    (24)

wherein spkTh1 represents the constant component of the threshold and spkTh2 represents the dynamic component of the threshold. The detection of speckle may then be determined in accordance with the following expression:

if (Gav > spkTh), then the current pixel P is considered speckled.    (25)
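The dynamic defect and speckle tests of equations 22, 23, and 25 may be pictured with the short Python sketch below. It assumes that the composite thresholds dynTh and spkTh and the maximum count dynMaxC have already been determined; the numeric values in the example are placeholders.

# Illustrative sketch of dynamic defect and speckle detection for one pixel P with
# its eight neighbors P0..P7 (equations 19-25). Threshold and dynMaxC values are
# placeholders, not values from the source.

def detect_defect(P, neighbors, dynTh, spkTh, dynMaxC):
    # Pixel-to-pixel gradients (equation 19).
    G = [abs(P - Pk) for Pk in neighbors]
    # Average gradient between P and the mean of its neighbors (equations 20a, 20b).
    Pav = sum(neighbors) / len(neighbors)
    Gav = abs(P - Pav)
    # Count of gradients at or below the composite threshold (equation 22).
    C = sum(1 for g in G if g <= dynTh)
    is_dynamic = C <= dynMaxC          # equation 23
    is_speckle = Gav > spkTh           # equation 25
    return is_dynamic, is_speckle

neighbors = [100, 101, 99, 98, 102, 100, 101, 99]
print(detect_defect(200, neighbors, dynTh=30, spkTh=60, dynMaxC=4))  # (True, True)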

Once defective pixels have been identified, the logic 592 DPDC may apply a pixel correction operation depending on the type of defect detected. For instance, if the defective pixel was identified as a static defect, the pixel is replaced with the stored replacement value, as discussed above (for example, the value of the previous pixel of the same color component). If the pixel was identified as either a dynamic defect or as speckle, pixel correction may be performed as set forth below. First, gradients are computed as the sum of absolute differences between the central pixel and first and second neighboring pixels (for example, the computation of Gk of equation 19) for four directions: a horizontal (h) direction, a vertical (v) direction, a diagonal-positive direction (dp), and a diagonal-negative direction (dn), as shown below:

Gh = G3 + G4    (26)

Gv = G1 + G6    (27)

Gdp = G2 + G5    (28)

Gdn = G0 + G7    (29)

Next, the corrective pixel value PC may be determined via linear interpolation of the two neighboring pixels associated with whichever of the directional gradients Gh, Gv, Gdp, and Gdn has the smallest value. For instance, in one embodiment, the logical statement below may express the computation of PC:

if (min == Gh)    (30)

    PC = (P3 + P4) / 2;

else if (min == Gv)

    PC = (P1 + P6) / 2;

else if (min == Gdp)

    PC = (P2 + P5) / 2;

else if (min == Gdn)

    PC = (P0 + P7) / 2.

The pixel correction implemented by the logic 592 DPDC may also provide for exceptions at boundary conditions. For instance, if one of the two neighboring pixels associated with the selected interpolation direction is located outside of the raw frame, the value of the neighboring pixel that is within the raw frame is substituted instead. Thus, using this technique, the corrective pixel value is equivalent to the value of the neighboring pixel within the raw frame.
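The following Python sketch illustrates the correction of equations 26-30 together with the boundary exception just described. The helper name correct_pixel, the neighbor indexing, and the in_frame flags are illustrative assumptions.

# Sketch of defective-pixel correction (equations 26-30): pick the direction with
# the smallest gradient and average its two neighbors, falling back to the in-frame
# neighbor when the other lies outside the raw frame.

def correct_pixel(P, nb, in_frame):
    """nb maps neighbor index 0..7 to its value; in_frame flags which are valid."""
    Gh  = abs(P - nb[3]) + abs(P - nb[4])    # horizontal        (equation 26)
    Gv  = abs(P - nb[1]) + abs(P - nb[6])    # vertical          (equation 27)
    Gdp = abs(P - nb[2]) + abs(P - nb[5])    # diagonal positive (equation 28)
    Gdn = abs(P - nb[0]) + abs(P - nb[7])    # diagonal negative (equation 29)
    _, a, b = min([(Gh, 3, 4), (Gv, 1, 6), (Gdp, 2, 5), (Gdn, 0, 7)])
    # Boundary exception: substitute the in-frame neighbor for an out-of-frame one.
    va = nb[a] if in_frame[a] else nb[b]
    vb = nb[b] if in_frame[b] else nb[a]
    return (va + vb) / 2                     # equation 30

nb = {0: 90, 1: 100, 2: 95, 3: 102, 4: 104, 5: 97, 6: 100, 7: 92}
print(correct_pixel(200, nb, in_frame={k: True for k in nb}))
# 103.0 - the horizontal direction has the smallest gradient here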

It should be noted that the defective pixel detection/correction techniques applied by the logic 592 DPDC during ISP pipeline processing are more robust than those of the logic 460 DPDC in the ISP front-end processing logic 80. As described in the embodiment above, the logic 460 DPDC performs only dynamic defect detection and correction using neighboring pixels in the horizontal direction only, whereas the logic 592 DPDC provides for the detection and correction of static defects, dynamic defects, as well as speckle, using neighboring pixels in both the horizontal and vertical directions.

As will be appreciated, storing the locations of defective pixels using a static defect table may provide for temporal filtering of defective pixels with lower memory requirements. For instance, compared to many conventional techniques that store entire images and apply temporal filtering to identify static defects over time, embodiments of the present technique store only the locations of defective pixels, which may typically be done using only a fraction of the memory required to store an entire image frame. Further, as discussed above, storing the minimum gradient value (min(Gk)) allows for efficient use of the static defect table by prioritizing the order of the locations at which defective pixels are corrected (for example, beginning with those that will be most visible).

Additionally, the use of thresholds that include a dynamic component (e.g., dynTh2 and spkTh2) may help to reduce false defect detection, a problem often encountered in conventional image processing systems when processing high-frequency areas of an image (for example, text, foliage, certain fabric patterns, etc.). Further, the use of directional gradients (e.g., h, v, dp, dn) for pixel correction may reduce the appearance of visual artifacts if a false defect detection occurs. For instance, filtering in the minimum gradient direction may result in a correction that still yields acceptable results in most cases, even in cases of false detection. Additionally, the inclusion of the current pixel P in the gradient computation may improve the accuracy of the gradient detection, particularly in the case of bright pixels.

The defective pixel detection and correction techniques described above, as implemented by the logic 592 DPDC, may be summarized by the flowcharts provided in Fig. 52-54. For instance, referring first to Fig. 52, a process 620 for detecting static defects is illustrated. Beginning at step 622, an input pixel P is received at a first time T0. Next, at step 624, the location of the pixel P is compared with the values stored in the static defect table. Decision logic 626 determines whether the location of the pixel P is found in the static defect table. If the location of P is in the static defect table, the process 620 continues to step 628, at which P is marked as a static defect and a replacement value is determined. As discussed above, the replacement value may be determined based on the value of the previous pixel (in scan order) of the same color component. The process 620 then continues to step 630, at which the process 620 proceeds to the dynamic defect and speckle detection process 640 illustrated in Fig. 53. Additionally, if the decision logic 626 determines that the location of the pixel P is not in the static defect table, the process 620 proceeds to step 630 without performing step 628.

Continuing to Fig. 53, the input pixel P is received at a time T1, as shown at step 642, for processing to determine whether a dynamic defect or speckle is present. The time T1 may represent a temporal shift relative to the static defect detection process 620 of Fig. 52. As discussed above, the dynamic defect and speckle detection process may begin after the static defect detection process has analyzed two scan lines (e.g., rows) of pixels, thus allowing time for the static defects and their respective replacement values to be identified before the dynamic defect/speckle detection occurs.

Decision logic 644 determines whether the input pixel P was previously marked as a static defect (for example, by step 628 of process 620). If P is marked as a static defect, the process 640 may continue to the pixel correction process shown in Fig. 54 and may bypass the remaining steps shown in Fig. 53. If the decision logic 644 determines that the input pixel P is not a static defect, the process continues to step 646, and neighboring pixels that may be used in the dynamic defect and speckle processing are identified. For instance, in accordance with the embodiment discussed above and illustrated in Fig. 51, the neighboring pixels may include the 8 immediate neighbors of the pixel P (for example, P0-P7), thus forming a 3x3 pixel area. Next, at step 648, pixel-to-pixel gradients are computed with respect to each neighboring pixel within the raw frame 278, as described in equation 19 above. Additionally, an average gradient (Gav) may be computed as the difference between the current pixel and the average of its surrounding pixels, as shown in equations 20a and 20b.

The process 640 then branches to step 650 for dynamic defect detection and to decision logic 658 for speckle detection. As noted above, dynamic defect detection and speckle detection may, in some embodiments, occur in parallel. At step 650, a count C of the number of gradients that are less than or equal to the threshold dynTh is determined. As described above, the threshold dynTh may include constant and dynamic components and, in one embodiment, may be determined in accordance with equation 21 above. If C is less than or equal to a maximum count dynMaxC, the process 640 continues to step 656, and the current pixel is marked as being a dynamic defect. Thereafter, the process 640 may continue to the pixel correction process shown in Fig. 54, which will be discussed below.

Returning to the branch after step 648, for speckle detection, decision logic 658 determines whether the average gradient Gav is greater than the speckle detection threshold spkTh, which may also include constant and dynamic components. If Gav is greater than the threshold spkTh, the pixel P is marked as containing speckle at step 660, and the process 640 then continues to Fig. 54 for the correction of the speckled pixel. Further, if the outputs of both decision logic blocks 652 and 658 are "No", this indicates that the pixel P does not contain dynamic defects, speckle, or static defects (decision logic 644). Thus, when the outputs of decision logic 652 and 658 are both "No", the process 640 may conclude at step 654, whereby the pixel P is passed unchanged, since no defects (e.g., static, dynamic, or speckle) were detected.

Continuing to Fig. 54, a pixel correction process 670 in accordance with the techniques described above is provided. At step 672, the input pixel P is received from the process 640 of Fig. 53. It should be noted that the pixel P may be received by the process 670 from step 644 (static defect) or from steps 656 (dynamic defect) and 660 (speckle defect). Decision logic 674 then determines whether the pixel P is marked as a static defect. If the pixel P is a static defect, the process 670 continues and ends at step 676, whereby the static defect is corrected using the replacement value determined at step 628 (Fig. 52).

If the pixel P is not identified as a static defect, the process 670 continues from decision logic 674 to step 678, and directional gradients are computed. For instance, as discussed above with reference to equations 26-29, the gradients may be computed as the sum of absolute differences between the central pixel and first and second neighboring pixels for the four directions (h, v, dp, and dn). Next, at step 680, the directional gradient with the lowest value is identified, and thereafter decision logic 682 assesses whether one of the two neighboring pixels associated with the minimum gradient is located outside of the image frame (e.g., raw frame 278). If both neighboring pixels are within the image frame, the process 670 proceeds to step 684, and the corrective pixel value (PC) is determined by applying linear interpolation to the values of the two neighboring pixels, as illustrated by equation 30. Thereafter, the input pixel P may be corrected using the interpolated corrective pixel value PC, as shown at step 690.

Returning to the decision logic 682, if it is determined that one of the two neighboring pixels is located outside of the image frame (e.g., raw frame 165), then instead of using the value of the outside pixel (Pout), the logic 592 DPDC may substitute the value of Pout with the value of the other neighboring pixel that is inside the image frame (Pin), as shown at step 686. Thereafter, at step 688, the corrective pixel value PC is determined by interpolating the value of Pin and the substituted value of Pout. In other words, in this case, PC may be equivalent to the value of Pin. Finally, at step 690, the pixel P is corrected using the value PC. Before continuing, it should be understood that the particular defective pixel detection and correction processes discussed herein with reference to the logic 592 DPDC are intended to reflect only one possible embodiment of the present technique. Indeed, depending on design and/or cost constraints, a number of variations are possible, and features may be added or removed such that the overall complexity and robustness of the defect detection/correction logic lies between the simpler detection/correction logic 460 implemented in the ISP front-end processing module 80 and the defect detection/correction logic discussed here with reference to the logic 592 DPDC.

Returning to Fig. 50, the corrected pixel data is output from the logic 592 DPDC and is then received by the noise reduction logic 594 for further processing. In one embodiment, the noise reduction logic 594 may be configured to implement two-dimensional edge-adaptive low-pass filtering to reduce noise in the image data while preserving detail and texture. The edge-adaptive thresholds may be set (for example, by the control logic 84) based on the present lighting levels, such that filtering may be strengthened under low-light conditions. Additionally, as briefly mentioned above with regard to the determination of the dynTh and spkTh values, the noise variance may be determined ahead of time for a given sensor, so that the noise reduction thresholds may be set just above the noise variance, such that during noise reduction processing, noise is reduced without significantly affecting texture and detail in the scene (for example, to avoid/reduce false detections). Assuming a Bayer color filter array, the noise reduction logic 594 may process each color component Gr, R, B, and Gb independently using a separable 7-tap horizontal filter and 5-tap vertical filter. In one embodiment, the noise reduction process may be performed by correcting for non-uniformity in the green color components (Gb and Gr), and then performing horizontal filtering and vertical filtering.

Green non-uniformity (GNU) is generally characterized by a slight brightness difference between the Gr and Gb pixels given a uniformly illuminated flat surface. Without correcting or compensating for this non-uniformity, certain artifacts, such as "maze" artifacts, may appear in the full-color image after demosaicing. The green non-uniformity process may include determining, for each green pixel in the raw Bayer image data, whether the absolute difference between the current green pixel (G1) and the green pixel to the right of and below (G2) the current pixel is less than a GNU correction threshold (gnuTh). Fig. 55 illustrates the locations of the G1 and G2 pixels in a 2x2 area of the Bayer pattern. As shown, the color of the pixels bordering G1 may depend on whether the current green pixel is a Gb or a Gr pixel. For instance, if G1 is Gr, then G2 is Gb, the pixel to the right of G1 is R (red), and the pixel below G1 is B (blue). Alternatively, if G1 is Gb, then G2 is Gr, and the pixel to the right of G1 is B, whereas the pixel below G1 is R. If the absolute difference between G1 and G2 is less than the GNU correction threshold, then the current green pixel G1 is replaced by the average of G1 and G2, as shown by the logic below:

if (abs(G1 − G2) ≤ gnuTh); G1 = (G1 + G2) / 2    (31)

As can be appreciated, applying green non-uniformity correction in this manner may help to prevent the G1 and G2 pixels from being averaged across edges, thereby improving and/or preserving sharpness.
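For illustration, a minimal Python sketch of the green non-uniformity check of equation 31 is given below; the threshold value used in the example is arbitrary.

# Sketch of the GNU check of equation 31: a Gr/Gb pair is averaged only when its
# absolute difference stays within gnuTh, so genuine edges between G1 and G2 are
# left untouched.

def gnu_correct(G1, G2, gnuTh):
    if abs(G1 - G2) <= gnuTh:
        return (G1 + G2) / 2     # flat area: remove the Gr/Gb imbalance
    return G1                    # likely an edge: keep the original value

print(gnu_correct(100, 104, gnuTh=8))   # 102.0 - small imbalance averaged out
print(gnu_correct(100, 160, gnuTh=8))   # 100   - treated as an edge, unchanged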

Horizontal filtering is applied subsequent to the green non-uniformity correction and, in one embodiment, may provide a 7-tap horizontal filter. Gradients across the edge of each filter tap are computed, and if a gradient is above a horizontal edge threshold (horzTh), the filter tap is folded to the central pixel, as will be illustrated below. The horizontal filter may process the image data independently for each color component (R, B, Gr, Gb) and may use unfiltered values as input values.

By way of example, Fig. 56 shows a graphical depiction of a set of horizontal pixels P0 through P6, with the central tap located at P3. Based on the pixels shown in Fig. 56, the edge gradient for each filter tap may be computed as follows:

Eh0 = abs(P0 − P1)    (32)

Eh1 = abs(P1 − P2)    (33)

Eh2 = abs(P2 − P3)    (34)

Eh3 = abs(P3 − P4)    (35)

Eh4 = abs(P4 − P5)    (36)

Eh5 = abs(P5 − P6)    (37)

The edge gradients Eh0-Eh5 may then be used by the horizontal filter component to determine the horizontal filtering output, Phorz, using the formula shown in equation 38 below:

wherein horzTh[c] is the horizontal edge threshold for each color component c (e.g., R, B, Gr, and Gb), and C0-C6 are the filter tap coefficients corresponding to the pixels P0-P6, respectively. The horizontal filter output Phorz may be applied at the central pixel location P3. In one embodiment, the filter tap coefficients C0-C6 may be 16-bit two's complement values with 3 integer bits and 13 fractional bits (3.13 in fixed point). Further, it should be noted that the filter tap coefficients C0-C6 need not be symmetric with respect to the central pixel P3.
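Since equation 38 itself is not reproduced in this excerpt, the sketch below shows only one plausible reading of the "fold the tap to the central pixel" behavior: a tap whose path to the center crosses an edge gradient above horzTh takes the value of the pixel just on the center side of that edge. The coefficients, threshold, and folding rule are assumptions for illustration only.

# Hedged sketch of a 7-tap edge-adaptive horizontal noise filter. The folding rule
# below is an assumption, not the source's equation 38.

def edge_adaptive_horizontal(P, C, horzTh):
    """P: pixels P0..P6 (P[3] is the center); C: tap coefficients C0..C6."""
    E = [abs(P[i] - P[i + 1]) for i in range(6)]   # Eh0..Eh5, equations 32-37

    def fold(i):
        if i == 3:
            return P[3]
        step = -1 if i < 3 else 1          # walk from the center toward tap i
        j = 3
        while j != i:
            edge = min(j, j + step)        # gradient between j and the next pixel out
            if E[edge] > horzTh:
                return P[j]                # strong edge: fold back to this pixel
            j += step
        return P[i]

    return sum(C[i] * fold(i) for i in range(7))

P = [120, 118, 119, 121, 200, 202, 201]            # strong edge between P3 and P4
C = [1/16, 2/16, 3/16, 4/16, 3/16, 2/16, 1/16]     # placeholder coefficients
print(round(edge_adaptive_horizontal(P, C, horzTh=20), 2))
# ~120.19 - the strong edge to the right of the center is not blurred across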

Vertical filtering is also applied by the noise reduction logic 594 subsequent to the green non-uniformity correction and horizontal filtering processes. In one embodiment, the vertical filter operation may provide a 5-tap filter, as shown in Fig. 57, with the central tap of the vertical filter located at P2. The vertical filtering process may occur in a manner similar to the horizontal filtering process described above. For instance, gradients across the edge of each filter tap are computed, and if a gradient is above a vertical edge threshold (vertTh), the filter tap is folded to the central pixel P2. The vertical filter may process the image data independently for each color component (R, B, Gr, Gb) and may use unfiltered values as input values.

Based on the pixels shown in Fig. 57, the vertical edge gradients for each filter tap may be computed as follows:

Ev0 = abs(P0 − P1)    (39)

Ev1 = abs(P1 − P2)    (40)

Ev2 = abs(P2 − P3)    (41)

Ev3 = abs(P3 − P4)    (42)

The edge gradients Ev0-Ev3 may then be used by the vertical filter to determine the vertical filtering output, Pvert, using the formula shown in equation 43 below:

wherein vertTh[c] is the vertical edge threshold for each color component c (e.g., R, B, Gr, and Gb), and C0-C4 are the filter tap coefficients corresponding to the pixels P0-P4 of Fig. 57, respectively. The vertical filter output Pvert may be applied at the central pixel location P2. In one embodiment, the filter tap coefficients C0-C4 may be 16-bit two's complement values with 3 integer bits and 13 fractional bits (3.13 in fixed point). Further, it should be noted that the filter tap coefficients C0-C4 need not be symmetric with respect to the central pixel P2.

Additionally, with regard to boundary conditions, when neighboring pixels are outside of the raw frame 278 (Fig. 19), the values of the out-of-bounds pixels are replicated with the value of the same-color pixel at the edge of the raw frame. This convention may be implemented for both horizontal and vertical filtering operations. By way of example, referring again to Fig. 56, in the case of horizontal filtering, if the pixel P2 is an edge pixel at the left-most edge of the raw frame and the pixels P0 and P1 are outside of the raw frame, then the values of the pixels P0 and P1 are substituted with the value of the pixel P2 for horizontal filtering.

Referring again to the block diagram of the raw pixel processing logic 562 shown in Fig. 50, the output of the noise reduction logic 594 is subsequently sent to the lens shading correction (LSC) logic 596 for processing. As discussed above, lens shading correction techniques may include applying an appropriate gain on a per-pixel basis to compensate for drop-offs in light intensity, which may be the result of the geometric optics of the lens, imperfections in manufacturing, misalignment of the microlens array and the color filter array, and so forth. Further, an infrared (IR) filter in some lenses may cause the drop-off to be illuminant-dependent and, thus, the lens shading gains may be adapted depending on the light source detected.

In the depicted embodiment, the LSC logic 596 of the ISP pipeline 82 may be implemented in a similar manner and, thus, provide generally the same functions as the LSC logic 664 of the ISP front-end processing module 80, as discussed above with reference to Fig. 40-48. Accordingly, to avoid redundancy, it should be understood that the LSC logic 596 of the presently illustrated embodiment is configured to operate in generally the same manner as the LSC logic 460 and, as such, the description of the lens shading correction techniques provided above will not be repeated here. However, to summarize generally, it should be understood that the LSC logic 596 may process each color component of the raw pixel data stream independently to determine a gain to apply to the current pixel. In accordance with the embodiments discussed above, the lens shading correction gain may be determined based on a defined set of gain grid points distributed across the image frame, wherein the interval between each grid point is defined by a number of pixels (for example, 8 pixels, 16 pixels, etc.). If the location of the current pixel corresponds to a grid point, then the gain value associated with that grid point is applied to the current pixel. However, if the location of the current pixel is between grid points (e.g., G0, G1, G2, and G3 of Fig. 43), then the LSC gain value may be computed by interpolation of the grid points between which the current pixel is located (equations 13a and 13b). This process is represented by the process 528 of Fig. 44. Further, as mentioned above with respect to Fig. 42, in some embodiments the grid points may be distributed unevenly (e.g., logarithmically), such that the grid points are less concentrated in the center of the LSC region 504, but more concentrated toward the corners of the LSC region 504, where lens shading distortion is typically more noticeable.

Additionally, as discussed above with reference to Fig. 47 and 48, the LSC logic 596 may also apply a radial gain component together with the grid gain values. The radial gain component may be determined based on the distance of the current pixel from the center of the image (equations 14-16). As mentioned, using a radial gain allows a single common gain grid to be used for all color components, which may greatly reduce the total memory space required to store separate gain grids for each color component. This reduction in grid gain data may lower implementation cost, as grid gain data tables may account for a significant portion of memory or chip area in image processing hardware.
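As a rough illustration of how a grid gain and a radial gain might be combined, the sketch below bilinearly interpolates the four surrounding grid gains and multiplies the result by a radius-dependent term. The bilinear form stands in for equations 13a and 13b, and the quadratic radial model stands in for equations 14-16, none of which are reproduced in this excerpt; all of it is an illustrative assumption rather than the described implementation.

# Hedged sketch of combining a grid gain and a radial gain for lens shading correction.
import math

def lsc_gain(x, y, grid, interval, center, radial_coeff):
    gx, gy = x // interval, y // interval                   # surrounding grid cell
    fx, fy = (x % interval) / interval, (y % interval) / interval
    g0, g1 = grid[gy][gx], grid[gy][gx + 1]                 # top-left, top-right
    g2, g3 = grid[gy + 1][gx], grid[gy + 1][gx + 1]         # bottom-left, bottom-right
    grid_gain = (g0 * (1 - fx) + g1 * fx) * (1 - fy) + (g2 * (1 - fx) + g3 * fx) * fy
    r = math.hypot(x - center[0], y - center[1])            # distance from image center
    radial_gain = 1.0 + radial_coeff * r * r                # assumed quadratic fall-off model
    return grid_gain * radial_gain

grid = [[1.0, 1.1, 1.3], [1.05, 1.15, 1.35], [1.2, 1.3, 1.5]]   # coarse placeholder gain grid
print(round(lsc_gain(12, 4, grid, interval=8, center=(8, 8), radial_coeff=1e-4), 3))  # 1.229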

Next, referring again to the raw pixel processing logic block diagram 562 of Fig. 50, the output of the LSC logic 596 is then passed to a second gain, offset, and clamping (GOC) block 598. The GOC logic 598 may be applied prior to demosaicing (by the logic block 600) and may be used to perform automatic white balancing on the output of the LSC logic 596. In the depicted embodiment, the GOC logic 598 may be implemented in the same manner as the GOC logic 590 (and the BLC logic 462). Thus, in accordance with equation 11 above, the input received by the GOC logic 598 is first offset by a signed value and then multiplied by a gain. The resulting value is then clipped to a range of minimum and maximum values in accordance with equation 12.

Thereafter, the output of the GOC logic 598 is forwarded to the demosaicing logic 600 for processing to produce a full-color (RGB) image based on the raw Bayer input data. As will be appreciated, the raw output of an image sensor that uses a color filter array, such as a Bayer filter, is "incomplete" in the sense that each pixel is filtered to acquire only a single color component. Thus, the data collected for an individual pixel alone is insufficient to determine color. Accordingly, demosaicing techniques may be used to generate a full-color image from the raw Bayer data by interpolating the missing color data for each pixel.

Referring now to Fig. 58, a graphical process flow 692 is illustrated that provides a general overview of how demosaicing may be applied to a raw Bayer image pattern 694 to produce a full-color RGB image. As shown, a 4x4 portion 696 of the raw Bayer image 694 may include separate channels for each color component, including a green channel 698, a red channel 700, and a blue channel 702. Because each imaging pixel in a Bayer sensor acquires data for only one color, the color data for each color channel 698, 700, and 702 may be incomplete, as indicated by the "?" symbols. By applying a demosaicing technique 704, the missing color samples from each channel may be interpolated. For instance, as shown by reference number 706, interpolated data G' may be used to fill the missing samples on the green color channel. Similarly, interpolated data R' may (in combination with the interpolated data G' 706) be used to fill the missing samples on the red color channel 708, and interpolated data B' may (in combination with the interpolated data G' 706) be used to fill the missing samples on the blue color channel 710. Thus, as a result of the demosaicing process, each color channel (R, G, B) will have a full set of color data, which may then be used to reconstruct a full-color RGB image 712.

A demosaicing technique that may be implemented by the demosaicing logic 600 will now be described in accordance with one embodiment. On the green color channel, missing color samples may be interpolated using a low-pass directional filter on known green samples and a high-pass (or gradient) filter on the adjacent color channels (for example, red and blue). For the red and blue color channels, the missing color samples may be interpolated in a similar manner, but by using low-pass filtering on known red or blue values and high-pass filtering on co-located interpolated green values. Further, in one embodiment, demosaicing on the green color channel may utilize a 5x5 pixel block edge-adaptive filter based on the original Bayer color data. As will be discussed further below, the use of an edge-adaptive filter may provide for continuous weighting based on the gradients of the horizontally and vertically filtered values, which reduces the appearance of certain artifacts, such as aliasing, "checkerboard", or "rainbow" artifacts, commonly seen in conventional demosaicing techniques.

During demosaicing on the green channel, the original values of the green pixels (Gr and Gb pixels) of the Bayer image pattern are used. However, in order to obtain a full set of data for the green channel, green pixel values may be interpolated at the red and blue pixels of the Bayer image pattern. In accordance with the present technique, horizontal and vertical energy components, referred to as Eh and Ev, respectively, are first computed at red and blue pixels based on the above-mentioned 5x5 pixel block. The values of Eh and Ev may be used to obtain edge-weighted filtered values from the horizontal and vertical filtering steps, as discussed further below.

By way of example, Fig. 59 illustrates the computation of the Eh and Ev values for a red pixel centered in the 5x5 pixel block at location (j, i), wherein j corresponds to a row and i corresponds to a column. As shown, the computation of Eh considers the middle three rows (j-1, j, j+1) of the 5x5 block, and the computation of Ev considers the middle three columns (i-1, i, i+1) of the 5x5 block. To compute Eh, the absolute value of the sum of each of the pixels in the red columns (i-2, i, i+2), multiplied by a corresponding coefficient (e.g., -1 for columns i-2 and i+2; 2 for column i), is added to the absolute value of the sum of each of the pixels in the blue columns (i-1, i+1), multiplied by a corresponding coefficient (e.g., 1 for column i-1; -1 for column i+1). To compute Ev, the absolute value of the sum of each of the pixels in the red rows (j-2, j, j+2), multiplied by a corresponding coefficient (e.g., -1 for rows j-2 and j+2; 2 for row j), is added to the absolute value of the sum of each of the pixels in the blue rows (j-1, j+1), multiplied by a corresponding coefficient (e.g., 1 for row j-1; -1 for row j+1). These computations are illustrated by equations 44 and 45 below:

Thus, the total energy sum may be expressed as: Eh + Ev. Further, while the example shown in Fig. 59 illustrates the computation of Eh and Ev for a red center pixel at (j, i), it should be understood that the Eh and Ev values may be determined in a similar manner for blue center pixels.
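Because equations 44 and 45 are not reproduced in this excerpt, the sketch below follows the verbal description above to compute Eh and Ev for a 5x5 block centered on a red (or blue) pixel; the column and row weights (-1, 2, -1 and +1, -1) are taken directly from that description.

# Sketch of the horizontal/vertical energy computation for a red center pixel in a
# 5x5 block, following the description of Fig. 59 (equations 44 and 45 are not
# reproduced in this excerpt).
import numpy as np

def energies(block):
    """block: 5x5 array; the center (2, 2) is assumed to be a red (or blue) pixel."""
    mid_rows = block[1:4, :]                    # rows j-1, j, j+1
    mid_cols = block[:, 1:4]                    # columns i-1, i, i+1
    Eh = (abs(2 * mid_rows[:, 2].sum() - mid_rows[:, 0].sum() - mid_rows[:, 4].sum())
          + abs(mid_rows[:, 1].sum() - mid_rows[:, 3].sum()))
    Ev = (abs(2 * mid_cols[2, :].sum() - mid_cols[0, :].sum() - mid_cols[4, :].sum())
          + abs(mid_cols[1, :].sum() - mid_cols[3, :].sum()))
    return int(Eh), int(Ev)

block = np.arange(25).reshape(5, 5)             # toy ramp that varies mostly vertically
print(energies(block))                          # (6, 30): Ev dominates for this ramp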

Next, horizontal and vertical filtering may be applied to the Bayer pattern to obtain the horizontally and vertically filtered values Gh and Gv, which may represent interpolated green values in the horizontal and vertical directions, respectively. The filtered values Gh and Gv may be determined using a low-pass filter on known neighboring green samples, in addition to using directional gradients of the adjacent color (R or B) to obtain a high-frequency signal at the locations of the missing green samples. For instance, with reference to Fig. 60, an example of horizontal interpolation for determining Gh will now be illustrated.

As shown in Fig. 60, five horizontal pixels (R0, G1, R2, G3, and R4) of a red row 714 of the Bayer image, wherein R2 is assumed to be the center pixel at (j, i), may be considered in determining Gh. Filter coefficients associated with each of these five pixels are indicated by reference number 716. Accordingly, the interpolation of a green value, referred to as G2', for the center pixel R2 may be determined as follows:

G2' = (G1 + G3)/2 + [2R2 − (R0 + R2)/2 − (R2 + R4)/2] / 2    (46)

Various mathematical operations may then be utilized to produce the expressions for G2' shown in equations 47 and 48 below:

G2' = (2G1 + 2G3)/4 + (4R2 − R0 − R2 − R2 − R4)/4    (47)

G2' = (2G1 + 2G3 + 2R2 − R0 − R4)/4    (48)

Thus, with reference to Fig. 60 and equations 46-48 above, the general expression for the horizontal interpolation of the green value at location (j, i) may be derived as:

Gh = (2P(j, i−1) + 2P(j, i+1) + 2P(j, i) − P(j, i−2) − P(j, i+2)) / 4    (49)

The vertical filtering component Gv may be determined in a manner similar to Gh. For example, referring to Fig. 61, five vertical pixels (R0, G1, R2, G3, and R4) of a red column 718 of the Bayer image and their respective filtering coefficients 720, wherein R2 is assumed to be the center pixel at (j, i), may be considered in determining Gv. Using low-pass filtering on the known green samples and high-pass filtering on the red channel in the vertical direction, the following expression may be derived for Gv:

Gv = (2P(j−1, i) + 2P(j+1, i) + 2P(j, i) − P(j−2, i) − P(j+2, i)) / 4    (50)

While the examples discussed herein have shown the interpolation of green values at a red pixel, it should be understood that the expressions set forth in equations 49 and 50 may also be used for the horizontal and vertical interpolation of green values at blue pixels.

The final interpolated green value G' for the center pixel (j, i) may be determined by weighting the outputs of the horizontal and vertical filters (Gh and Gv) by the energy components (Eh and Ev) discussed above, to yield the following equation:

G'(j, i) = (Ev / (Eh + Ev)) × Gh + (Eh / (Eh + Ev)) × Gv    (51)

As discussed above, the energy components Eh and Ev may provide edge-adaptive weighting of the horizontal and vertical filter outputs Gh and Gv, which may help to reduce image artifacts, such as rainbow, aliasing, or checkerboard artifacts, in the reconstructed RGB image. Additionally, the demosaicing logic 600 may provide an option to bypass the edge-adaptive weighting feature by setting each of the Eh and Ev values to 1, such that Gh and Gv are equally weighted.
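A minimal Python sketch of the green interpolation of equations 49-51 is given below, assuming the energy components Eh and Ev have already been computed for the 5x5 block; the equal-weight fallback for a perfectly flat area is an added assumption to avoid division by zero.

# Sketch of green interpolation at a red/blue pixel (equations 49-51): low-pass on
# the neighboring green samples plus a high-pass on the same-color samples in each
# direction, then edge-adaptive blending of the two directional results.

def interpolate_green(block, Eh, Ev):
    """block: 5x5 list of lists centered on a red or blue pixel at (2, 2)."""
    j = i = 2
    Gh = (2 * block[j][i - 1] + 2 * block[j][i + 1] + 2 * block[j][i]
          - block[j][i - 2] - block[j][i + 2]) / 4                     # equation 49
    Gv = (2 * block[j - 1][i] + 2 * block[j + 1][i] + 2 * block[j][i]
          - block[j - 2][i] - block[j + 2][i]) / 4                     # equation 50
    if Eh == Ev == 0:                     # flat area: avoid division by zero
        return (Gh + Gv) / 2
    return (Ev / (Eh + Ev)) * Gh + (Eh / (Eh + Ev)) * Gv               # equation 51

block = [[0] * 5 for _ in range(5)]
block[2] = [10, 60, 20, 70, 30]           # R0, G1, R2, G3, R4 along the center row
block[0][2], block[1][2], block[3][2], block[4][2] = 20, 50, 52, 20
print(interpolate_green(block, Eh=10, Ev=30))   # 61.5 - weighted toward Gh = 65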

In one embodiment, the horizontal and vertical weighting coefficients shown in equation 51 above may be quantized to reduce the precision of the weighting coefficients to a set of "coarse" values. For instance, in one embodiment, the weighting coefficients may be quantized to eight possible weight ratios: 1/8, 2/8, 3/8, 4/8, 5/8, 6/8, 7/8, and 8/8. Other embodiments may quantize the weighting coefficients to 16 values (e.g., 1/16 to 16/16), 32 values (1/32 to 32/32), and so forth. As can be appreciated, when compared to using full-precision values (for example, 32-bit floating point values), the quantization of the weighting coefficients may reduce the implementation complexity of determining and applying the weighting coefficients to the horizontal and vertical filter outputs.

In further embodiments, the presently disclosed technique, in addition to determining and using horizontal and vertical energy components to apply weighting coefficients to the horizontally (Gh) and vertically (Gv) filtered values, may also determine and utilize energy components in the diagonal-positive and diagonal-negative directions. For instance, in such embodiments, filtering may also be applied in the diagonal-positive and diagonal-negative directions. Weighting of the filter outputs may include selecting the two highest energy components and using the selected energy components to weight their respective filter outputs. For example, assuming the two highest energy components correspond to the vertical and diagonal-positive directions, the vertical and diagonal-positive energy components are used to weight the vertical and diagonal-positive filter outputs to determine the interpolated green value (for example, at a red or blue pixel location in the Bayer pattern).

Next, demosaicing on the red and blue color channels may be performed by interpolating red and blue values at the green pixels of the Bayer image pattern, interpolating red values at the blue pixels of the Bayer image pattern, and interpolating blue values at the red pixels of the Bayer image pattern. In accordance with the presently described technique, missing red and blue pixel values may be interpolated using low-pass filtering based upon known neighboring red and blue pixels and high-pass filtering based upon co-located green pixel values, which may be original or interpolated values (from the green channel demosaicing process discussed above) depending on the location of the current pixel. Thus, with regard to such embodiments, it should be understood that interpolation of missing green values may be performed first, such that a complete set of green values (both original and interpolated values) is available when interpolating the missing red and blue samples.

The interpolation of red and blue pixel values may be described with reference to Fig. 62, which illustrates various 3x3 blocks of the Bayer image pattern to which red and blue demosaicing may be applied, as well as interpolated green values (designated by G') that may have been obtained during demosaicing on the green channel. Referring first to block 722, the interpolated red value, R'11, for the Gr pixel (G11) may be determined as follows:

R'11 = (R10 + R12)/2 + (2G11 − G'10 − G'12)/2,    (52)

wherein G'10 and G'12 represent interpolated green values, as shown by reference number 730. Similarly, the interpolated blue value, B'11, for the Gr pixel (G11) may be determined as follows:

B'11 = (B01 + B21)/2 + (2G11 − G'01 − G'21)/2,    (53)

wherein G'01 and G'21 represent interpolated green values (730).

Next, referring to the pixel block 724, in which the center pixel is a Gb pixel (G11), the interpolated red value, R'11, and blue value, B'11, may be determined as shown in equations 54 and 55 below:

R'11 = (R01 + R21)/2 + (2G11 − G'01 − G'21)/2    (54)

B'11 = (B10 + B12)/2 + (2G11 − G'10 − G'12)/2    (55)

Further, referring to pixel block 726, the interpolation of a red value at a blue pixel, B11, may be determined as follows:

R'11 = (R00 + R02 + R20 + R22)/4 + (4G'11 − G'00 − G'02 − G'20 − G'22)/4,    (56)

wherein G'00, G'02, G'11, G'20, and G'22 represent interpolated green values, as shown by reference number 732. Finally, the interpolation of a blue value at a red pixel, as shown by pixel block 728, may be computed as follows:

B'11 = (B00 + B02 + B20 + B22)/4 + (4G'11 − G'00 − G'02 − G'20 − G'22)/4    (57)
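The color-difference interpolation of equations 52, 53, and 56 may be sketched as follows; the helper names and the sample values are illustrative only.

# Sketch of the color-difference interpolation of equations 52, 53, and 56: red and
# blue are recovered from a low-pass of the known same-color neighbors plus a
# high-pass of the co-located (original or interpolated) green values.

def red_at_gr(R10, R12, G11, G10i, G12i):
    return (R10 + R12) / 2 + (2 * G11 - G10i - G12i) / 2          # equation 52

def blue_at_gr(B01, B21, G11, G01i, G21i):
    return (B01 + B21) / 2 + (2 * G11 - G01i - G21i) / 2          # equation 53

def red_at_blue(R_corners, G11i, G_corners):
    return sum(R_corners) / 4 + (4 * G11i - sum(G_corners)) / 4   # equation 56

print(red_at_gr(80, 90, 120, 118, 119))                            # 86.5
print(red_at_blue([70, 74, 78, 82], 110, [108, 109, 111, 112]))    # 76.0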

While the embodiment discussed above relied on color differences (e.g., gradients) for determining the interpolated red and blue values, another embodiment may instead provide interpolated red and blue values using color ratios. For instance, the interpolated green values (blocks 730 and 732) may be used to obtain a color ratio at the red and blue pixel locations of the Bayer image pattern, and linear interpolation of the ratios may be used to determine an interpolated color ratio for the missing color sample. The green value, which may be an interpolated or an original value, may be multiplied by the interpolated color ratio to obtain the final interpolated color value. For instance, interpolation of red and blue pixel values using color ratios may be performed in accordance with the formulas below, wherein equations 58 and 59 show the interpolation of red and blue values for a Gr pixel, equations 60 and 61 show the interpolation of red and blue values for a Gb pixel, equation 62 shows the interpolation of a red value at a blue pixel, and equation 63 shows the interpolation of a blue value at a red pixel:

R'11 = G11 × [(R10 / G'10) + (R12 / G'12)] / 2    (58)

(R'11 is interpolated when G11 is a Gr pixel)

B'11 = G11 × [(B01 / G'01) + (B21 / G'21)] / 2    (59)

(B'11 is interpolated when G11 is a Gr pixel)

R'11 = G11 × [(R01 / G'01) + (R21 / G'21)] / 2    (60)

(R'11 is interpolated when G11 is a Gb pixel)

B'11 = G11 × [(B10 / G'10) + (B12 / G'12)] / 2    (61)

(B'11 is interpolated when G11 is a Gb pixel)

R'11 = G'11 × [(R00 / G'00) + (R02 / G'02) + (R20 / G'20) + (R22 / G'22)] / 4    (62)

(R'11 is interpolated at the blue pixel B11)

B'11 = G'11 × [(B00 / G'00) + (B02 / G'02) + (B20 / G'20) + (B22 / G'22)] / 4    (63)

(B'11 is interpolated at the red pixel R11)

Once the missing color samples have been interpolated for each image pixel of the Bayer image pattern, a complete sample of color values for each of the red, green, and blue color channels (e.g., 706, 708, and 710 of Fig. 58) may be combined to produce a full-color RGB image. For instance, referring back to Fig. 49 and 50, the output 572 of the raw pixel processing logic 562 may be an RGB image signal in 8, 10, 12, or 14-bit format.

Referring now to Fig. 63-66, various flowcharts are illustrated that depict processes for demosaicing a raw Bayer image pattern in accordance with the disclosed embodiments. Specifically, the process 740 of Fig. 63 depicts the determination of which color components are to be interpolated for a given input pixel P. Based on the determination by the process 740, one or more of the process 750 (Fig. 64) for interpolating a green value, the process 762 (Fig. 65) for interpolating a red value, or the process 774 (Fig. 66) for interpolating a blue value may be performed (for example, by the demosaicing logic 600).

Beginning with Fig. 63, the process 740 starts at step 741 when an input pixel P is received. Decision logic 742 determines the color of the input pixel. For instance, this may depend on the location of the pixel within the Bayer image pattern. Accordingly, if P is identified as being a green pixel (e.g., Gr or Gb), the process 740 proceeds to step 744 to obtain interpolated red and blue values for P. This may include, for example, continuing to the processes 762 and 774 of Fig. 65 and 66, respectively. If P is identified as being a red pixel, the process 740 proceeds to step 746 to obtain interpolated green and blue values for P. This may further include performing the processes 750 and 774 of Fig. 64 and 66, respectively. Additionally, if P is identified as being a blue pixel, the process 740 proceeds to step 748 to obtain interpolated green and red values for P. This may further include performing the processes 750 and 762 of Fig. 64 and 65, respectively. Each of the processes 750, 762, and 774 is discussed further below.

The process 750 for determining an interpolated green value for the input pixel P is illustrated in Fig. 64 and includes steps 752-760. At step 752, the input pixel P is received (for example, from the process 740). Next, at step 754, a set of neighboring pixels forming a 5x5 pixel block is identified, with P being the center of the 5x5 block. Thereafter, the pixel block is analyzed to determine horizontal and vertical energy components at step 756. For instance, the horizontal and vertical energy components may be determined in accordance with equations 44 and 45 for computing Eh and Ev, respectively. As discussed, the energy components Eh and Ev may be used as weighting coefficients to provide edge-adaptive filtering and, therefore, reduce the appearance of certain demosaicing artifacts in the final image. At step 758, low-pass filtering and high-pass filtering are applied in the horizontal and vertical directions to determine horizontal and vertical filtering outputs. For example, the horizontal and vertical filtering outputs, Gh and Gv, may be computed in accordance with equations 49 and 50. Next, the process 750 continues to step 760, at which the interpolated green value G' is interpolated based on the values of Gh and Gv weighted by the energy components Eh and Ev, as shown in equation 51.

Next, with regard to the process 762 of Fig. 65, the interpolation of a red value may begin at step 764, at which the input pixel P is received (e.g., from the process 740). At step 766, a set of neighboring pixels forming a 3x3 pixel block is identified, with P being the center of the 3x3 block. Thereafter, low-pass filtering is applied to the neighboring red pixels within the 3x3 block at step 768, and high-pass filtering is applied (step 770) to the co-located neighboring green values, which may be original green values captured by the Bayer image sensor or interpolated values (e.g., determined via the process 750 of Fig. 64). The interpolated red value R' for P may be determined based on the low-pass and high-pass filtering outputs, as shown at step 772. Depending on the color of P, R' may be determined in accordance with one of equations 52, 54, or 56.

With regard to the interpolation of a blue value, the process 774 of Fig. 66 may be applied. Steps 776 and 778 are generally identical to steps 764 and 766 of the process 762 (Fig. 65). At step 780, low-pass filtering is applied to the neighboring blue pixels within the 3x3 block and, at step 782, high-pass filtering is applied to the co-located neighboring green values, which may be original green values captured by the Bayer image sensor or interpolated values (e.g., determined via the process 750 of Fig. 64). The interpolated blue value B' for P may be determined based on the low-pass and high-pass filtering outputs, as shown at step 784. Depending on the color of P, B' may be determined in accordance with one of equations 53, 55, or 57. Further, as mentioned above, the interpolation of red and blue values may be determined using color differences (equations 52-57) or color ratios (equations 58-63). Again, it should be understood that interpolation of missing green values may be performed first, such that a complete set of green values (both original and interpolated values) is available when interpolating the missing red and blue samples. For example, the process 750 of Fig. 64 may be applied to interpolate all missing green color samples before performing the processes 762 and 774 of Fig. 65 and 66, respectively.

Referring to Fig. 67-70, examples of color drawings of images processed by the raw pixel processing logic 562 in the ISP pipeline 82 are provided. Fig. 67 depicts an original image scene 786, which may be captured by the image sensor 90 of the imaging device 30. Fig. 68 shows a raw Bayer image 788, which may represent the raw pixel data captured by the image sensor 90. As mentioned above, conventional demosaicing techniques may not provide for adaptive filtering based on the detection of edges (for example, borders between areas of two or more colors) in the image data, which may undesirably produce artifacts in the resulting reconstructed full-color RGB image. For instance, Fig. 69 shows an RGB image 790 reconstructed using conventional demosaicing techniques, which may include artifacts such as "checkerboard" artifacts 792 at the edge 794. However, comparing the image 790 to the RGB image 796 of Fig. 70, which may be an example of an image reconstructed using the demosaicing techniques described above, it can be seen that the checkerboard artifacts 792 present in Fig. 69 are not present, or at least their appearance is significantly reduced at the edge 794. Thus, the images shown in Fig. 67-70 are intended to illustrate at least one advantage that the demosaicing techniques disclosed herein have over conventional methods.

Returning to Fig. 49, having now thoroughly described the operation of the raw pixel processing logic 562, which may output an RGB image signal 572, the present discussion will now focus on the processing of the RGB image signal 572 by the RGB processing logic 564. As shown, the RGB image signal 572 may be sent to the selection logic 576 and/or to the memory 108. The RGB processing logic 564 may receive the input signal 578, which may be RGB image data from the signal 572 or from the memory 108, as shown by the signal 574, depending on the configuration of the selection logic 576. The RGB image data 578 may be processed by the RGB processing logic 564 to perform color adjustment operations, including color correction (for example, using a color correction matrix), the application of color gains for automatic white balancing, as well as global tone mapping, and so forth.

A block diagram depicting a more detailed view of an embodiment of the RGB processing logic 564 is illustrated in Fig. 71. As shown, the RGB processing logic 564 includes the gain, offset, and clamping (GOC) logic 800, the RGB color correction logic 802, the GOC logic 804, the RGB gamma adjustment logic, and the color space conversion logic 808. The input signal 578 is first received by the gain, offset, and clamping (GOC) logic 800. In the illustrated embodiment, the GOC logic 800 may apply gains to perform automatic white balancing on one or more of the R, G, or B color channels before processing by the color correction logic 802.

The GOC logic 800 may be similar to the GOC logic 590 of the raw pixel processing logic 562, except that the color components of the RGB domain are processed, rather than the R, B, Gr, and Gb components of the Bayer image data. In operation, the input value for the current pixel is first offset by a signed value O[c] and multiplied by a gain G[c], as shown in equation 11 above, wherein c represents R, G, and B. As discussed above, the gain G[c] may be a 16-bit unsigned number with 2 integer bits and 14 fractional bits (e.g., 2.14 in fixed point), and the values for the gain G[c] may be previously determined during statistics processing (for example, in the ISP front-end processing module 80). The computed pixel value Y (equation 11) is then clipped to a range of minimum and maximum values in accordance with equation 12. As discussed above, the variables min[c] and max[c] may represent 16-bit "clipping values" for the minimum and maximum output values, respectively. In one embodiment, the GOC logic 800 may also be configured to maintain a count of the number of pixels that were clipped above and below the maximum and minimum values, respectively, for each color component R, G, and B.

The output of the GOC logic 800 is then forwarded to the color correction logic 802. In accordance with the presently disclosed techniques, the color correction logic 802 may be configured to apply color correction to the RGB image data using a color correction matrix (CCM). In one embodiment, the CCM may be a 3x3 RGB transformation matrix, although matrices of other dimensions may also be utilized in other embodiments (e.g., 4x3, etc.). Accordingly, the process of performing color correction on an input pixel having R, G, and B components may be expressed as follows:

| R' |   | CCM00  CCM01  CCM02 |   | R |
| G' | = | CCM10  CCM11  CCM12 | × | G |,    (64)
| B' |   | CCM20  CCM21  CCM22 |   | B |

wherein R, G, and B represent the current red, green, and blue values for the input pixel, CCM00-CCM22 represent the coefficients of the color correction matrix, and R', G', and B' represent the corrected red, green, and blue values for the input pixel. Accordingly, the corrected color values may be computed in accordance with equations 65-67 below:

R' = (CCM00 × R) + (CCM01 × G) + (CCM02 × B)    (65)

G' = (CCM10 × R) + (CCM11 × G) + (CCM12 × B)    (66)

B' = (CCM20 × R) + (CCM21 × G) + (CCM22 × B)    (67)

The coefficients (CCM00-CCM22) of the CCM may be determined during statistics processing in the ISP front-end processing module 80, as discussed above. In one embodiment, the coefficients for a given color channel may be selected such that the sum of those coefficients (e.g., CCM00, CCM01, and CCM02 for red color correction) equals 1, which may help to maintain brightness and color balance. Further, the coefficients are typically selected such that a positive gain is applied to the color being corrected. For instance, with red color correction, the coefficient CCM00 may be greater than 1, while one or both of the coefficients CCM01 and CCM02 may be less than 1. Setting the coefficients in this manner may enhance the red (R) component in the resulting corrected value R' while subtracting some portion of the blue (B) and green (G) components. As will be appreciated, this may address issues with color overlap that may occur during acquisition of the original Bayer image, as a portion of the filtered light for a particular colored pixel may "bleed" into a neighboring pixel of a different color. In one embodiment, the coefficients of the CCM may be provided as 16-bit two's complement numbers with 4 integer bits and 12 fractional bits (4.12 in fixed point). Additionally, the color correction logic 802 may provide for clipping of the computed corrected color values if they exceed a maximum value or fall below a minimum value.
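By way of illustration, applying the CCM of equations 64-67 with the clipping just mentioned might look like the following sketch; the matrix values are placeholders whose rows sum to 1, as described above.

# Sketch of applying the 3x3 color correction matrix of equations 64-67, with
# clipping of the results. The matrix values are placeholders; in the described
# system they would come from the ISP front-end statistics processing.

def apply_ccm(rgb, ccm, max_val=16383):                 # 14-bit range assumed
    r, g, b = rgb
    out = []
    for row in ccm:
        v = row[0] * r + row[1] * g + row[2] * b        # equations 65-67
        out.append(min(max(int(round(v)), 0), max_val)) # clip to [0, max_val]
    return tuple(out)

ccm = [[1.5, -0.3, -0.2],        # each row sums to 1.0; boosts the corrected color
       [-0.2, 1.4, -0.2],
       [-0.1, -0.4, 1.5]]
print(apply_ccm((1000, 900, 800), ccm))   # (1070, 900, 740)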

The output of the RGB color correction logic 802 is then passed to another GOC logic block 804. The GOC logic 804 may be implemented in an identical manner as the GOC logic 800 and, thus, a detailed description of the gain, offset, and clamping functions provided will not be repeated here. In one embodiment, the application of the GOC logic 804 subsequent to color correction may provide for automatic white balancing of the image data based on the corrected color values and may also adjust for sensor variations of the red-to-green and blue-to-green ratios.

Next, the output of the GOC logic 804 is sent to the RGB gamma adjustment logic 806 for further processing. For instance, the RGB gamma adjustment logic 806 may provide for gamma correction, tone mapping, histogram matching, and so forth. In accordance with disclosed embodiments, the gamma adjustment logic 806 may provide for a mapping of the input RGB values to corresponding output RGB values. For instance, the gamma adjustment logic may provide a set of three lookup tables, one table for each of the R, G, and B components. By way of example, each lookup table may be configured to store 256 entries of 10-bit values, each value representing an output level. The table entries may be evenly distributed in the range of the input pixel values, such that when the input value falls between two entries, the output value may be linearly interpolated. In one embodiment, each of the three lookup tables for R, G, and B may be duplicated, such that the lookup tables are "double buffered" in memory, thus allowing one table to be used during processing while its duplicate is being updated. Based on the 10-bit output values discussed above, it should be noted that the 14-bit RGB image signal is effectively down-sampled to 10 bits as a result of the gamma correction process in the present embodiment.
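A minimal Python sketch of the 256-entry lookup table with linear interpolation between entries, mapping a 14-bit input to a 10-bit output, is shown below; the gamma value of 2.2 is only an example curve, not a value taken from the source.

# Sketch of a 256-entry gamma lookup table with linear interpolation between
# entries, mapping a 14-bit input to a 10-bit output.

def build_lut(entries=256, out_max=1023, gamma=2.2):
    return [round(((i / (entries - 1)) ** (1 / gamma)) * out_max) for i in range(entries)]

def apply_lut(x, lut, in_max=16383):
    pos = x * (len(lut) - 1) / in_max                        # position within the table
    i = int(pos)
    if i >= len(lut) - 1:
        return lut[-1]
    frac = pos - i
    return round(lut[i] * (1 - frac) + lut[i + 1] * frac)    # linear interpolation

lut = build_lut()
print(apply_lut(8192, lut))   # a mid-scale 14-bit input mapped through the example curve (about 747)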

The output of the gamma adjustment logic 806 may be sent to the memory 108 and/or to the color space conversion logic 808. The color space conversion (CSC) logic 808 may be configured to convert the RGB output from the gamma adjustment logic 806 to the YCbCr format, in which Y represents a luma component, Cb represents a blue-difference chroma component, and Cr represents a red-difference chroma component, each of which may be in a 10-bit format as a result of the bit-depth conversion of the RGB data from 14 bits to 10 bits during the gamma adjustment operation. As discussed above, in one embodiment, the RGB output of the gamma adjustment logic 806 may be down-sampled to 10 bits and thus converted to 10-bit YCbCr values by the CSC logic 808, which may then be forwarded to the YCbCr processing logic 566, which will be discussed further below.

The conversion from the RGB domain to the YCbCr color space may be performed using a color space conversion matrix (CSCM). For instance, in one embodiment, the CSCM may be a 3x3 transformation matrix. The coefficients of the CSCM may be set in accordance with known conversion equations, such as the BT.601 and BT.709 standards. Additionally, the CSCM coefficients may be flexible based on the desired range of input and output signals. Thus, the CSCM coefficients may be determined and programmed based on data collected during statistics processing in the ISP front-end processing module 80.

The process of performing the YCbCr color space conversion on an RGB input pixel may be expressed as follows:

| Y  |   | CSCM00  CSCM01  CSCM02 |   | R |
| Cb | = | CSCM10  CSCM11  CSCM12 | × | G |,    (68)
| Cr |   | CSCM20  CSCM21  CSCM22 |   | B |

wherein R, G, and B represent the current red, green, and blue values for the input pixel in 10-bit form (e.g., as processed by the gamma adjustment logic 806), CSCM00-CSCM22 represent the coefficients of the color space conversion matrix, and Y, Cb, and Cr represent the resulting luma and chroma components for the input pixel. Accordingly, the values for Y, Cb, and Cr may be computed in accordance with equations 69-71 below:

Y = (CSCM00 × R) + (CSCM01 × G) + (CSCM02 × B)    (69)

Cb = (CSCM10 × R) + (CSCM11 × G) + (CSCM12 × B)    (70)

Cr = (CSCM20 × R) + (CSCM21 × G) + (CSCM22 × B)    (71)

Following the color space conversion operation, the resulting YCbCr values may be output from the CSC logic 808 as the signal 580, which may be processed by the YCbCr processing logic 566, as will be discussed below.

In one embodiment, the CSCM coefficients may be 16-bit two's-complement numbers with 4 integer bits and 12 fraction bits (4.12). In another embodiment, the CSC logic 808 may further be configured to apply an offset to each of the Y, Cb and Cr values and to clip the resulting values to a minimum and a maximum value. Solely by way of example, assuming the YCbCr values are in 10-bit format, the offset may be in a range from -512 to 512, and the minimum and maximum values may be 0 and 1023, respectively.
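A small sketch of the conversion described by equations 68-71, including the offset and clipping step, is given below. It assumes numpy, floating-point coefficients rather than the 4.12 fixed-point representation, and illustrative BT.601-style coefficient values; none of the names or values are taken from the disclosure.

```python
import numpy as np

def rgb_to_ycbcr(rgb, cscm, offsets=(0, 512, 512)):
    """Apply the 3x3 color space conversion of equations 68-71, then add
    per-component offsets and clip to the 10-bit range.
    rgb  : (..., 3) array of 10-bit R, G, B values
    cscm : 3x3 coefficient matrix (floats here; hardware might use 4.12 fixed point)
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    ycbcr = rgb @ np.asarray(cscm, dtype=np.float64).T + np.asarray(offsets)
    return np.clip(np.round(ycbcr), 0, 1023).astype(np.uint16)

# Illustrative BT.601-style coefficients (assumed values, not from the disclosure).
cscm_bt601 = [[ 0.299,  0.587,  0.114],
              [-0.169, -0.331,  0.500],
              [ 0.500, -0.419, -0.081]]
ycbcr = rgb_to_ycbcr([[600, 400, 200]], cscm_bt601)
```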

Returning again to the block diagram of the ISP pipeline logic 82 in Fig. 49, the YCbCr signal 580 may be sent to the selection logic 584 and/or to memory. The YCbCr processing logic 566 may receive an input 586, which may be YCbCr image data from the signal 580 or from memory 108, as shown by the signal 582, depending on the configuration of the selection logic 584. The YCbCr image data 586 may then be processed by the YCbCr processing logic 566 for luma sharpening, chroma suppression, chroma noise reduction, as well as brightness, contrast and color adjustments, and so forth. Additionally, the YCbCr processing logic 566 may provide gamma mapping and scaling of the processed image data in both the horizontal and vertical directions.

A block diagram depicting a more detailed view of an embodiment of the YCbCr processing logic 566 is illustrated in Fig. 72. As shown, the YCbCr processing logic 566 includes image sharpening logic 810, logic 812 for adjusting brightness, contrast and/or color, YCbCr gamma adjustment logic 814, chroma decimation logic 816 and scaling logic 818. The YCbCr processing logic 566 may be configured to process pixel data in 4:4:4, 4:2:2 or 4:2:0 formats using 1-plane, 2-plane or 3-plane memory configurations. Further, in one embodiment, the YCbCr input signal 586 may provide luma and chroma information as 10-bit values.

As will be appreciated, the reference to 1-plane, 2-plane or 3-plane refers to the number of imaging planes utilized in picture memory. For instance, in a 3-plane format, each of the Y, Cb and Cr components may utilize a separate respective memory plane. In a 2-plane format, a first plane may be provided for the luma component (Y), and a second plane that interleaves the Cb and Cr samples may be provided for the chroma components (Cb and Cr). In a 1-plane format, a single memory plane interleaves the luma and chroma samples. Further, with regard to the 4:4:4, 4:2:2 and 4:2:0 formats, it may be appreciated that the 4:4:4 format refers to a sampling format in which each of the three YCbCr components is sampled at the same rate. In the 4:2:2 format, the Cb and Cr components are subsampled at half the sampling rate of the luma component Y, thus reducing the resolution of the Cb and Cr components by half in the horizontal direction. Similarly, the 4:2:0 format subsamples the Cb and Cr chroma components in both the vertical and horizontal directions.

The YCbCr processing may occur within an active source region defined within a source buffer, wherein the active source region contains "valid" pixel data. For example, referring to Fig. 73, a source buffer 820 having an active source region 822 defined therein is illustrated. In the illustrated example, the source buffer may represent a 4:4:4 1-plane format providing 10-bit values for the source pixels. The active source region 822 may be specified individually for the luma (Y) samples and the chroma (Cb and Cr) samples. Thus, it should be understood that the active source region 822 may actually include multiple active source regions for the luma and chroma samples. The start of the active source regions 822 for luma and chroma may be determined based on an offset from a base address 824 (0,0) of the source buffer. For instance, a starting position 826 (Lm_X, Lm_Y) for the luma active source region may be defined by an x-offset 830 and a y-offset 834 relative to the base address 824. Similarly, a starting position 828 (Ch_X, Ch_Y) for the chroma active source region may be defined by an x-offset 832 and a y-offset 836 relative to the base address 824. It should be noted that, in the present example, the y-offsets 834 and 836 for luma and chroma, respectively, may be equal. Based on the starting position 826, the luma active source region may be defined by a width 838 and a height 840, each of which may represent the number of luma samples in the x and y directions, respectively. Additionally, based on the starting position 828, the chroma active source region may be defined by a width 842 and a height 844, each of which may represent the number of chroma samples in the x and y directions, respectively.
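As a rough illustration of how a starting position and size might translate into sample addresses, the sketch below assumes a plain row-major buffer with a fixed byte stride; the helper names and the addressing scheme are hypothetical and not the layout defined by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class SourceRegion:
    """Active source region inside a source buffer (hypothetical helper)."""
    start_x: int   # x-offset, in samples, from the buffer base address
    start_y: int   # y-offset, in samples, from the buffer base address
    width: int     # number of samples in x
    height: int    # number of samples in y

def sample_address(base_addr, stride, region, x, y, bytes_per_sample=2):
    """Address of sample (x, y) inside the active region, assuming a plain
    row-major buffer with a fixed stride in bytes."""
    assert 0 <= x < region.width and 0 <= y < region.height
    row = region.start_y + y
    col = region.start_x + x
    return base_addr + row * stride + col * bytes_per_sample

# Separate regions would exist for luma and chroma, both offset from the base address.
luma_region = SourceRegion(start_x=16, start_y=8, width=1280, height=720)
addr = sample_address(base_addr=0x10000000, stride=4096, region=luma_region, x=0, y=0)
```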

Fig. 74 further provides an example showing how active source regions for luma and chroma samples may be defined in a two-plane configuration. For instance, as shown, a luma active source region 822 may be defined in a first source buffer 820 (having the base address 824) as an area specified by the width 838 and the height 840 with respect to the starting position 826. A chroma active source region 848 may be defined in a second source buffer 846 (having base address 706) as an area specified by the width 842 and the height 844 relative to the starting position 828.

With the above points in mind and returning to Fig. 72, the YCbCr signal 586 is first received by the image sharpening logic 810. The image sharpening logic 810 may be configured to perform picture sharpening and edge enhancement processing to increase texture and edge detail in the image. As will be appreciated, image sharpening may improve the perceived image resolution. However, it is generally desirable that existing noise in the image is not detected as texture and/or edges and, thus, is not amplified during the sharpening processing.

In accordance with the present technique, the image sharpening logic 810 may perform picture sharpening using a multi-scale unsharp mask filter on the luma (Y) component of the YCbCr signal. In one embodiment, two or more Gaussian low-pass filters of different scale sizes may be provided. For example, in an embodiment that provides two Gaussian filters, the output (e.g., Gaussian blur) of a first Gaussian filter having a first radius (x) is subtracted from the output of a second Gaussian filter having a second radius (y), wherein x is greater than y, to generate an unsharp mask. Additional unsharp masks may also be obtained by subtracting the outputs of the Gaussian filters from the input Y image. In certain embodiments, the technique may also provide adaptive coring threshold comparison operations that may be performed using the unsharp masks so that, based upon the results of the comparison(s), gain amounts may be added to a base image, which may be selected as the original input Y image or the output of one of the Gaussian filters, to generate a final output.

Referring to Fig. 75, a block diagram depicting exemplary logic 850 for performing image sharpening in accordance with embodiments of the presently disclosed techniques is illustrated. The logic 850 represents a multi-scale unsharp masking filter that may be applied to the input luma image Yin. For instance, as shown, Yin is received and processed by two low-pass Gaussian filters 852 (G1) and 854 (G2). In the present example, the filter 852 may be a 3x3 filter and the filter 854 may be a 5x5 filter. It should be appreciated, however, that in additional embodiments, more than two Gaussian filters of different scales may also be used (e.g., 7x7, 9x9, etc.). As will be appreciated, due to the low-pass filtering process, the high-frequency components, which generally correspond to noise, may be removed from the outputs of G1 and G2 to produce "unsharp" (blurred) images (G1out and G2out). As will be discussed below, using an unsharp blurred input image as a base image allows for noise reduction as part of the sharpening filter.

The 3x3 Gaussian filter 852 and the 5x5 Gaussian filter 854 may be defined as shown below:

$$G1 = \frac{1}{256}\begin{bmatrix} G1_1 & G1_1 & G1_1 \\ G1_1 & G1_0 & G1_1 \\ G1_1 & G1_1 & G1_1 \end{bmatrix} \qquad G2 = \frac{1}{256}\begin{bmatrix} G2_2 & G2_2 & G2_2 & G2_2 & G2_2 \\ G2_2 & G2_1 & G2_1 & G2_1 & G2_2 \\ G2_2 & G2_1 & G2_0 & G2_1 & G2_2 \\ G2_2 & G2_1 & G2_1 & G2_1 & G2_2 \\ G2_2 & G2_2 & G2_2 & G2_2 & G2_2 \end{bmatrix}$$

Solely by way of example, the values of the Gaussian filters G1 and G2 may be selected in one embodiment as set forth below:

$$G1 = \frac{1}{256}\begin{bmatrix} 28 & 28 & 28 \\ 28 & 32 & 28 \\ 28 & 28 & 28 \end{bmatrix} \qquad G2 = \frac{1}{256}\begin{bmatrix} 9 & 9 & 9 & 9 & 9 \\ 9 & 12 & 12 & 12 & 9 \\ 9 & 12 & 16 & 12 & 9 \\ 9 & 12 & 12 & 12 & 9 \\ 9 & 9 & 9 & 9 & 9 \end{bmatrix}$$

Based on Yin, G1out and G2out, three unsharp masks, Sharp1, Sharp2 and Sharp3, may be generated. Sharp1 may be determined as the unsharp image G2out of the Gaussian filter 854 subtracted from the unsharp image G1out of the Gaussian filter 852. Because Sharp1 is essentially the difference between two low-pass filters, it may be referred to as a "mid band" mask, since the higher-frequency noise components are already filtered out in the unsharp images G1out and G2out. Additionally, Sharp2 may be calculated by subtracting G2out from the input luma image Yin, and Sharp3 may be calculated by subtracting G1out from the input luma image Yin. As will be discussed below, an adaptive coring threshold scheme may be applied using the unsharp masks Sharp1, Sharp2 and Sharp3.
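A compact sketch of how G1out, G2out and the three masks could be computed offline is shown below; it uses numpy and scipy.ndimage for the convolutions and the illustrative kernel values given above, and is not a model of the hardware datapath.

```python
import numpy as np
from scipy.ndimage import convolve

# The illustrative kernel values given above.
G1 = np.array([[28, 28, 28],
               [28, 32, 28],
               [28, 28, 28]]) / 256.0
G2 = np.array([[ 9,  9,  9,  9,  9],
               [ 9, 12, 12, 12,  9],
               [ 9, 12, 16, 12,  9],
               [ 9, 12, 12, 12,  9],
               [ 9,  9,  9,  9,  9]]) / 256.0

def unsharp_masks(yin):
    """Compute the two blurred images and the three unsharp masks of Fig. 75."""
    yin = np.asarray(yin, dtype=np.float64)
    g1out = convolve(yin, G1, mode="nearest")
    g2out = convolve(yin, G2, mode="nearest")
    sharp1 = g1out - g2out   # "mid band" mask: difference of the two low-pass outputs
    sharp2 = yin - g2out
    sharp3 = yin - g1out
    return g1out, g2out, sharp1, sharp2, sharp3
```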

Referring to the selection logic 856, a base image may be selected based on a control signal UnsharpSel. In the illustrated embodiment, the base image may be either the input image Yin or the filtered outputs G1out or G2out. As will be appreciated, when the original image has a high noise variance (e.g., almost as high as the signal variance), using the original image Yin as the base image during sharpening may not sufficiently reduce the noise components during sharpening. Accordingly, when a particular threshold of noise content is detected in the input image, the selection logic 856 may be adapted to select one of the low-pass filtered outputs G1out or G2out, from which the high-frequency content, which may include noise, has been reduced. In one embodiment, the value of the control signal UnsharpSel may be determined by analyzing statistical data acquired during statistics processing in the ISP front-end processing module 80 to determine the noise content of the image. By way of example, if the input image Yin has low noise content, such that the appearance of noise will likely not increase as a result of the sharpening processing, the input image Yin may be selected as the base image (e.g., UnsharpSel = 0). If the input image Yin is determined to contain a noticeable level of noise, such that the sharpening processing may amplify the noise, one of the filtered images G1out or G2out may be selected (e.g., UnsharpSel = 1 or 2, respectively). Thus, by applying an adaptive technique for selecting the base image, the logic 850 essentially provides a noise reduction function.

Next, gains may be applied to one or more of the Sharp1, Sharp2 and Sharp3 masks in accordance with an adaptive coring threshold scheme, as described below. The unsharp values Sharp1, Sharp2 and Sharp3 may be compared to various thresholds SharpThd1, SharpThd2 and SharpThd3 (not necessarily respectively) by means of the comparator blocks 858, 860 and 862. For instance, the Sharp1 value is always compared to SharpThd1 in the comparator block 858. With respect to the comparator block 860, the threshold SharpThd2 may be compared against either Sharp1 or Sharp2, depending on the selection logic 866. For example, the selection logic 866 may select Sharp1 or Sharp2 depending on the state of a control signal SharpCmp2 (e.g., SharpCmp2 = 1 selects Sharp1; SharpCmp2 = 0 selects Sharp2). For example, in one embodiment, the state of SharpCmp2 may be determined depending on the noise variance/noise content of the input image (Yin).

In the illustrated embodiment, it is generally preferable to set SharpCmp2 and SharpCmp3 to select Sharp1 unless it is detected that the image data has relatively low amounts of noise. This is because Sharp1, being the difference between the outputs of the Gaussian low-pass filters G1 and G2, is generally less sensitive to noise and, thus, may help reduce the degree to which the SharpAmt1, SharpAmt2 and SharpAmt3 values vary due to fluctuations in the noise level of "noisy" image data. For instance, if the original image has a high noise variance, some of the high-frequency components may not be caught when fixed thresholds are used and, thus, may be amplified during the sharpening process. Accordingly, if the noise content of the input image is high, then some of the noise content may be present in Sharp2. In such instances, SharpCmp2 may be set to 1 to select the mid-band mask Sharp1, which, as discussed above, has reduced high-frequency content due to being the difference of two low-pass filter outputs and is thus less sensitive to noise.

As will be appreciated, a similar process may be applied to the selection of Sharp1 or Sharp3 by the selection logic 864 under the control of SharpCmp3. In one embodiment, SharpCmp2 and SharpCmp3 may be set to 1 by default (e.g., to use Sharp1) and set to 0 only for those input images identified as having generally low noise variances. This essentially provides an adaptive coring threshold scheme in which the selection of the comparison value (Sharp1, Sharp2 or Sharp3) is adaptive based upon the noise variance of the input image.

Based on the outputs of the comparator blocks 858, 860 and 862, the sharpened output image Ysharp may be determined by applying gained unsharp masks to the base image (e.g., the one selected via the logic 856). For instance, referring first to the comparator block 862, SharpThd3 is compared to the B-input provided by the selection logic 864, referred to herein as "SharpAbs", which may be equal to either Sharp1 or Sharp3, depending on the state of SharpCmp3. If SharpAbs is greater than the threshold SharpThd3, the gain SharpAmt3 is applied to Sharp3, and the resulting value is added to the base image. If SharpAbs is less than the threshold SharpThd3, an attenuated gain Att3 may be applied. In one embodiment, the attenuated gain Att3 may be determined as follows:

Att3 = (SharpAmt3 × SharpAbs) / SharpThd3    (72)

wherein SharpAbs is either Sharp1 or Sharp3, as determined by the selection logic 864. The selection of the base image summed with either the full gain (SharpAmt3) or the attenuated gain (Att3) is performed by the selection logic 868 based on the output of the comparator block 862. As will be appreciated, the use of an attenuated gain may address situations in which SharpAbs is not greater than the threshold (e.g., SharpThd3), but the noise variance of the image is nonetheless close to the given threshold. This may help to reduce noticeable transitions between a sharp and an unsharp pixel. For instance, if the image data were passed on without the attenuated gain under such circumstances, the resulting pixel could appear as a defective pixel (e.g., a stuck pixel).

Next, a similar process may be applied with respect to the comparator block 860. For instance, depending on the state of SharpCmp2, the selection logic 866 may provide either Sharp1 or Sharp2 as the input to the comparator block 860, which is compared against the threshold SharpThd2. Depending on the output of the comparator block 860, either the gain SharpAmt2 or an attenuated gain Att2 based upon SharpAmt2 is applied to Sharp2 and added to the output of the selection logic 868 discussed above. As will be appreciated, the attenuated gain Att2 may be computed in a manner similar to equation 72 above, except that the gain SharpAmt2 and the threshold SharpThd2 are applied with respect to SharpAbs, which may be selected as either Sharp1 or Sharp2.

Thereafter, either the gain SharpAmt1 or an attenuated gain Att1 is applied to Sharp1, and the resulting value is summed with the output of the selection logic 870 to produce the sharpened pixel output Ysharp (from the logic 872). The selection of whether to apply the gain SharpAmt1 or the attenuated gain Att1 may be determined based on the output of the comparator block 858, which compares Sharp1 against the threshold SharpThd1. Again, the attenuated gain Att1 may be determined in a manner similar to equation 72 above, except that the gain SharpAmt1 and the threshold SharpThd1 are applied with respect to Sharp1. The resulting sharpened pixel values scaled using each of the three masks are added to the input pixel Yin to generate the sharpened output Ysharp which, in one embodiment, may be clipped to 10 bits (assuming YCbCr processing occurs at 10-bit precision).
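The sketch below collapses the cascade of comparators and selectors in Fig. 75 into a single per-pixel function: a base image is chosen, each mask contributes either its full gain or the attenuated gain of equation 72, and the result is clipped to 10 bits. The parameter dictionary keys mirror the names used above; treating the comparisons as comparisons of absolute values and accumulating directly onto the selected base image are simplifying assumptions.

```python
import numpy as np

def sharpen_pixel(yin, g1out, g2out, sharp1, sharp2, sharp3, params):
    """Per-pixel sketch of the adaptive coring scheme of Fig. 75.
    params holds UnsharpSel, SharpCmp2/3, SharpThd1-3 and SharpAmt1-3
    (names mirror the description; values are assumptions)."""
    # 1. Select the base image according to UnsharpSel (0: Yin, 1: G1out, 2: G2out).
    base = {0: yin, 1: g1out, 2: g2out}[params["UnsharpSel"]]

    def contribution(mask, cmp_val, thd, amt):
        # Full gain above the threshold; attenuated gain (equation 72) below it.
        if abs(cmp_val) > thd:
            return amt * mask
        att = amt * abs(cmp_val) / thd if thd else 0.0
        return att * mask

    # 2. Comparator inputs: Sharp1 (less noise-sensitive) or Sharp2/Sharp3.
    abs2 = sharp1 if params["SharpCmp2"] else sharp2
    abs3 = sharp1 if params["SharpCmp3"] else sharp3

    # 3. Accumulate the three gained masks onto the base image.
    out = float(base)
    out += contribution(sharp3, abs3, params["SharpThd3"], params["SharpAmt3"])
    out += contribution(sharp2, abs2, params["SharpThd2"], params["SharpAmt2"])
    out += contribution(sharp1, sharp1, params["SharpThd1"], params["SharpAmt1"])
    return int(np.clip(round(out), 0, 1023))    # clip to 10 bits
```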

As will be appreciated, when compared to conventional unsharp masking techniques, the image sharpening techniques set forth in this disclosure may provide improved enhancement of textures and edges while also reducing noise in the output image. In particular, the present techniques may be well suited to applications in which images captured using, for example, CMOS image sensors exhibit poor signal-to-noise ratios, such as images acquired under low-lighting conditions using lower-resolution cameras integrated into portable devices (e.g., mobile phones). For instance, when the noise variance and the signal variance are comparable, it is difficult to use a fixed threshold for sharpening, as some of the noise components would be sharpened along with textures and edges. Accordingly, the techniques provided herein, as discussed above, may filter the noise from the input image using multi-scale Gaussian filters to extract features from the unsharp images (e.g., G1out and G2out) in order to provide a sharpened image that also exhibits reduced noise content.

Before continuing, it should be understood that the illustrated logic 850 is intended to provide only one exemplary embodiment of the present technique. In other embodiments, additional or fewer features may be provided by the image sharpening logic 810. For instance, in some embodiments, rather than applying an attenuated gain, the logic 850 may simply pass the base value. Additionally, some embodiments may not include the selection logic blocks 864, 866 or 856. For example, the comparator blocks 860 and 862 may simply receive the Sharp2 and Sharp3 values, respectively, rather than a selection output from the selection logic blocks 864 and 866, respectively. While such embodiments may not provide sharpening and/or noise reduction features that are as robust as the implementation shown in Fig. 75, it should be appreciated that such design choices may be the result of cost- and/or business-related constraints.

In the present embodiment, the image sharpening logic 810 may also provide edge enhancement and chroma suppression features once the sharpened image output YSharp has been obtained. Each of these additional features will now be discussed below. Referring first to Fig. 76, exemplary logic 874 for performing edge enhancement, which may be implemented downstream from the sharpening logic 850 of Fig. 75, is illustrated in accordance with one embodiment. As shown, the original input value Yin is processed by a Sobel filter 876 for edge detection. The Sobel filter 876 may determine a gradient value YEdge based on a 3x3 pixel block (referred to below as "A") of the original image, with Yin being the center pixel of the 3x3 block. In one embodiment, the Sobel filter 876 may calculate YEdge by convolving the original image data to detect changes in the horizontal and vertical directions. This process is shown below in equations 73-75.

$$S_x = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix} \qquad S_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$$

Gx = Sx × A    (73)

Gy = Sy × A    (74)

YEdge = Gx × Gy    (75)

wherein Sx and Sy represent matrix operators for gradient edge-strength detection in the horizontal and vertical directions, respectively, and Gx and Gy represent gradient images containing horizontal and vertical change derivatives, respectively. Accordingly, the output YEdge is determined as the product of Gx and Gy.

YEdge is then received by the selection logic 880 along with the mid-band Sharp1 mask, as discussed above in Fig. 75. Based on the control signal EdgeCmp, either Sharp1 or YEdge is compared to a threshold EdgeThd in the comparator block 878. The EdgeCmp state may, for example, be determined based on the noise content of the image, thus providing an adaptive coring threshold scheme for edge detection and enhancement. Next, the output of the comparator block 878 may be provided to the selection logic 882, and either a full gain or an attenuated gain may be applied. For instance, when the selected B-input to the comparator block 878 (Sharp1 or YEdge) is above EdgeThd, YEdge is multiplied by an edge gain EdgeAmt to determine the amount of edge enhancement to be applied. If the B-input at the comparator block 878 is less than EdgeThd, an attenuated edge gain AttEdge may be applied to avoid noticeable transitions between the edge-enhanced and the original pixel. As will be appreciated, AttEdge may be calculated in a similar manner as shown in equation 72 above, but with EdgeAmt and EdgeThd applied to "SharpAbs", which may be either Sharp1 or YEdge, depending on the output of the selection logic 880. Thus, the edge pixel, enhanced using either the gain (EdgeAmt) or the attenuated gain (AttEdge), may be added to YSharp (the output of the logic 850 of Fig. 75) to obtain the edge-enhanced output pixel Yout which, in one embodiment, may be clipped to 10 bits (assuming YCbCr processing occurs at 10-bit precision).
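A per-pixel sketch of this edge-enhancement path, covering equations 73-75 and the EdgeAmt/AttEdge selection, is shown below; it assumes numpy, treats the comparison as one of absolute values, and uses hypothetical parameter names that mirror the description.

```python
import numpy as np

SX = np.array([[1, 0, -1],
               [2, 0, -2],
               [1, 0, -1]])
SY = np.array([[1,  2,  1],
               [0,  0,  0],
               [-1, -2, -1]])

def edge_enhance_pixel(block3x3, ysharp, sharp1, params):
    """Sketch of the edge enhancement path of Fig. 76 for a single pixel.
    block3x3 is the 3x3 neighborhood A of the original luma image centered
    on Yin; params holds EdgeCmp, EdgeThd and EdgeAmt."""
    a = np.asarray(block3x3, dtype=np.int64)
    gx = int(np.sum(SX * a))            # horizontal gradient (equation 73)
    gy = int(np.sum(SY * a))            # vertical gradient (equation 74)
    yedge = gx * gy                     # equation 75

    # Adaptive coring threshold: compare either Sharp1 or YEdge against EdgeThd.
    cmp_val = sharp1 if params["EdgeCmp"] else yedge
    if abs(cmp_val) > params["EdgeThd"]:
        gain = params["EdgeAmt"]                        # full edge gain
    else:                                               # attenuated gain, as in equation 72
        gain = (params["EdgeAmt"] * abs(cmp_val) / params["EdgeThd"]
                if params["EdgeThd"] else 0.0)
    yout = ysharp + gain * yedge
    return int(np.clip(round(yout), 0, 1023))           # clip to 10 bits
```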

With regard to the chroma suppression features provided by the image sharpening logic 810, such features may attenuate chroma at luma edges. Generally, chroma suppression may be performed by applying a chroma gain (attenuation factor) of less than 1, depending on the value (YSharp, Yout) obtained from the luma sharpening and/or edge enhancement steps discussed above. By way of example, Fig. 77 shows a graph 890 that includes a curve 892 representing chroma gains that may be selected for corresponding sharpened luma values (YSharp). The data represented by the graph 890 may be implemented as a lookup table of YSharp values and corresponding chroma gains between 0 and 1 (attenuation factors). The lookup tables are used to approximate the curve 892. For YSharp values that fall between two attenuation factors in the lookup table, linear interpolation may be applied to the two attenuation factors corresponding to the YSharp values above and below the current YSharp value. Further, in other embodiments, the input luma value may also be selected as one of the Sharp1, Sharp2 or Sharp3 values determined by the logic 850, as discussed above in Fig. 75, or the YEdge value determined by the logic 874, as discussed in Fig. 76.

Next, the output of the image sharpening logic 810 (Fig. 72) is processed by the brightness, contrast and color (BCC) adjustment logic 812. A functional block diagram depicting an embodiment of the BCC adjustment logic 812 is illustrated in Fig. 78. As shown, the logic 812 includes a brightness and contrast processing block 894, a global hue control block 896 and a saturation control block 898. The presently illustrated embodiment provides for processing of the YCbCr data at 10-bit precision, although other embodiments may utilize different bit depths. The functions of each of the blocks 894, 896 and 898 are discussed below.

Referring first to the brightness and contrast processing block 894, an offset YOffset is first subtracted from the luma (Y) data to set the black level to zero. This is done to ensure that the contrast adjustment does not alter the black levels. Next, the luma value is multiplied by a contrast gain value to apply contrast control. By way of example, the contrast gain value may be a 12-bit unsigned number with 2 integer bits and 10 fraction bits, thus providing for a contrast gain range of up to 4 times the pixel value. Thereafter, brightness adjustment may be implemented by adding (or subtracting) a brightness offset value to (or from) the luma data. By way of example, the brightness offset in the present embodiment may be a 10-bit two's-complement value having a range of between -512 and +512. Further, it should be noted that brightness adjustment is performed after contrast adjustment in order to avoid varying the DC offset when changing contrast. Thereafter, the initial YOffset is added back to the adjusted luma data to re-position the black level.
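The following is a minimal sketch of the brightness and contrast sequence just described (black-level removal, contrast gain, brightness offset, black-level restore) for 10-bit luma; the YOffset value used is an assumed example, not a value from the disclosure.

```python
import numpy as np

def brightness_contrast(y, contrast_gain, brightness_offset, y_offset=64):
    """Brightness/contrast block 894 for 10-bit luma: remove the black level,
    apply the contrast gain, apply the brightness offset, restore the black level.
    y_offset is an assumed example black level, not a value from the disclosure."""
    y = np.asarray(y, dtype=np.float64)
    y = y - y_offset                    # set black level to zero
    y = y * contrast_gain               # e.g. 2.10 unsigned gain, up to 4x
    y = y + brightness_offset           # signed offset in [-512, +512]
    y = y + y_offset                    # restore black level
    return np.clip(np.round(y), 0, 1023).astype(np.uint16)

out = brightness_contrast([64, 512, 940], contrast_gain=1.25, brightness_offset=-16)
```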

The blocks 896 and 898 provide for color adjustment based upon hue characteristics of the Cb and Cr data. As shown, an offset of 512 (assuming 10-bit processing) is first subtracted from the Cb and Cr data to position the range to approximately zero. The hue is then adjusted in accordance with the following equations:

Cb_adj = Cb cos(θ) + Cr sin(θ)    (76)

Cr_adj = Cr cos(θ) − Cb sin(θ)    (77)

wherein Cb_adj and Cr_adj represent the adjusted Cb and Cr values, and θ represents a hue angle, which may be calculated as follows:

θ = arctan(Cr / Cb)    (78)

The above operations are depicted by the logic within the global hue control block 896 and may be represented by the following matrix operation:

$$\begin{bmatrix} Cb_{adj} \\ Cr_{adj} \end{bmatrix} = \begin{bmatrix} Ka & Kb \\ -Kb & Ka \end{bmatrix} \times \begin{bmatrix} Cb \\ Cr \end{bmatrix} \qquad (79)$$

wherein Ka = cos(θ), Kb = sin(θ), and θ is as defined above in equation 78.

Next, saturation control may be applied to the Cb_adj and Cr_adj values, as shown by the saturation control block 898. In the illustrated embodiment, saturation control is performed by applying a global saturation multiplier and a hue-based saturation multiplier for each of the Cb and Cr values. Hue-based saturation control may improve color reproduction. The hue of a color may be represented in the YCbCr color space, as shown by the color wheel graph 904 in Fig. 79. As will be appreciated, the YCbCr hue and saturation color wheel 904 may be derived by shifting the identical color wheel in the HSV color space (hue, saturation and intensity) by approximately 109 degrees. As shown, the graph 904 includes circumferential values representing the saturation multiplier (S) within a range from 0 to 1, as well as angular values representing θ, as defined above, within a range of between 0 and 360°. Each θ may represent a different color (e.g., 49° = magenta, 109° = red, 229° = green, etc.).

Returning to Fig. 78, the hue angle θ (computed in the global hue control block 896) may be used as an index for a Cb saturation lookup table 900 and a Cr saturation lookup table 902. In one embodiment, the saturation lookup tables 900 and 902 may contain 256 saturation values distributed evenly over the hue range from 0 to 360° (e.g., the first lookup table entry is at 0° and the last entry is at 360°), and the saturation value S at a given pixel may be determined via linear interpolation of the saturation values in the lookup table just below and above the current hue angle θ. A final saturation value for each of the Cb and Cr components is obtained by multiplying a global saturation value (which may be a global constant for each of Cb and Cr) by the determined hue-based saturation value. Thus, the final corrected values Cb' and Cr' may be determined by multiplying Cb_adj and Cr_adj by their respective final saturation values, as shown in the hue-based saturation control block 898.
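A sketch combining the hue rotation of equations 76-79 with the hue-indexed saturation lookup is given below; it assumes numpy, 256-entry saturation tables covering 0-360 degrees, and uses arctan2 to obtain a full-range hue angle, which is an implementation choice rather than something stated in the disclosure.

```python
import numpy as np

def hue_saturation(cb, cr, hue_deg, global_sat_cb, global_sat_cr,
                   sat_lut_cb, sat_lut_cr):
    """Sketch of the global hue rotation (equations 76-79) followed by
    hue-based saturation scaling (block 898). sat_lut_cb / sat_lut_cr are
    assumed 256-entry tables covering 0-360 degrees of hue."""
    cb = float(cb) - 512.0                       # center 10-bit chroma around zero
    cr = float(cr) - 512.0
    ka, kb = np.cos(np.radians(hue_deg)), np.sin(np.radians(hue_deg))
    cb_adj = ka * cb + kb * cr                   # equation 76
    cr_adj = ka * cr - kb * cb                   # equation 77

    theta = np.degrees(np.arctan2(cr_adj, cb_adj)) % 360.0   # hue angle (cf. equation 78)
    idx = theta / 360.0 * 255.0                  # fractional index into the LUTs
    lo, frac = int(idx), idx - int(idx)
    hi = min(lo + 1, 255)
    s_cb = (1 - frac) * sat_lut_cb[lo] + frac * sat_lut_cb[hi]
    s_cr = (1 - frac) * sat_lut_cr[lo] + frac * sat_lut_cr[hi]

    cb_out = cb_adj * global_sat_cb * s_cb + 512.0   # re-apply the 512 offset
    cr_out = cr_adj * global_sat_cr * s_cr + 512.0
    return (int(np.clip(round(cb_out), 0, 1023)),
            int(np.clip(round(cr_out), 0, 1023)))
```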

Thereafter, the output of the BCC logic 812 is passed to the YCbCr gamma adjustment logic 814, as shown in Fig. 72. In one embodiment, the gamma adjustment logic 814 may provide non-linear mapping functions for the Y, Cb and Cr channels. For instance, the input Y, Cb and Cr values are mapped to corresponding output values. Again, assuming that the YCbCr data is processed at 10 bits, an interpolated 256-entry 10-bit lookup table may be utilized. Three such lookup tables may be provided, one for each of the Y, Cb and Cr channels. Each of the 256 input entries may be evenly distributed, and an output value may be determined by linear interpolation of the output values mapped to the indexes just above and below the current input index. In some embodiments, a non-interpolated lookup table having 1024 entries (for 10-bit data) may also be used, but may have significantly greater memory requirements. As will be appreciated, by adjusting the output values of the lookup tables, the YCbCr gamma adjustment function may also be used to perform certain image filter effects, such as black and white, sepia, negative image, solarization, and so forth.

Next, chroma decimation may be applied by the chroma decimation logic 816 to the output of the gamma adjustment logic 814. In one embodiment, the chroma decimation logic 816 may be configured to perform horizontal decimation to convert the YCbCr data from a 4:4:4 format to a 4:2:2 format, in which the chroma (Cb and Cr) information is subsampled at half the sample rate of the luma data. Solely by way of example, the decimation may be performed by applying a 7-tap low-pass filter, such as a half-band lanczos filter, to a set of 7 horizontal pixels, as shown below:

Out = [C0×in(i-3) + C1×in(i-2) + C2×in(i-1) + C3×in(i) + C4×in(i+1) + C5×in(i+2) + C6×in(i+3)] / 512    (80)

wherein in(i) represents the input pixel (Cb or Cr), and C0-C6 represent the filter coefficients of the 7-tap filter. Each input pixel has an independent filter coefficient (C0-C6) to allow for flexible phase offsets for the chroma filtered samples.
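The following sketch applies a 7-tap horizontal filter per equation 80 and keeps every other output sample to go from 4:4:4 to 4:2:2 chroma; the coefficient values shown are illustrative (they merely sum to 512) and are not the lanczos coefficients of the disclosure.

```python
import numpy as np

def decimate_chroma_444_to_422(chroma_row, coeffs):
    """Horizontal 2:1 chroma decimation with a 7-tap filter per equation 80.
    chroma_row: one row of 10-bit Cb or Cr samples; coeffs: C0..C6."""
    row = np.pad(np.asarray(chroma_row, dtype=np.int64), 3, mode="edge")
    out = []
    for i in range(3, len(row) - 3, 2):                   # keep every other sample
        acc = sum(int(coeffs[k]) * int(row[i - 3 + k]) for k in range(7))
        out.append(int(np.clip(acc >> 9, 0, 1023)))       # divide by 512, clip to 10 bits
    return out

# Illustrative symmetric coefficients (assumed values; they simply sum to 512).
coeffs_example = [-12, 0, 140, 256, 140, 0, -12]
cb_422 = decimate_chroma_444_to_422([512] * 16, coeffs_example)
```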

Further, chroma decimation may, in some instances, also be performed without filtering. This may be useful when the source image was originally received in 4:2:2 format but was upsampled to 4:4:4 format for YCbCr processing. In this case, the resulting decimated 4:2:2 image is identical to the original image.

Next, the YCbCr data output from the chroma decimation logic 816 may be scaled using the scaling logic 818 prior to being output from the YCbCr processing block 566. The function of the scaling logic 818 may be similar to the functionality of the scaling logic 368, 370 of the binning compensation filter 300 of the pixel pre-processing unit 130, as discussed above with reference to Fig. 28. For instance, the scaling logic 818 may perform horizontal and vertical scaling in two steps. In one embodiment, a 5-tap polyphase filter may be used for vertical scaling, and a 9-tap polyphase filter may be used for horizontal scaling. The multi-tap polyphase filters may multiply pixels selected from the source image by a weighting factor (e.g., a filter coefficient) and then sum the outputs to form the destination pixel. The selected pixels may be chosen depending on the current pixel position and the number of filter taps. For instance, with a vertical 5-tap filter, two neighboring pixels on each vertical side of the current pixel may be selected, and with a horizontal 9-tap filter, four neighboring pixels on each horizontal side of the current pixel may be selected. The filter coefficients may be provided from a lookup table and may be determined by the current between-pixel fractional position. The output 588 of the scaling logic 818 is then output from the YCbCr processing block 566.
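A one-dimensional sketch of the polyphase idea is shown below: for each destination pixel, the fractional source position selects a row of coefficients from a phase table, which weights the neighboring source pixels. The shape of the phase table, the rounding and the example coefficients are assumptions, not the coefficients used by the scaling logic 818.

```python
import numpy as np

def polyphase_scale_1d(src, out_len, taps, phase_luts):
    """One-dimensional polyphase resampling sketch. For each destination pixel,
    the fractional source position selects a row of `phase_luts` (shape:
    num_phases x taps, rows summing to ~1) that weights `taps` neighboring
    source pixels."""
    src = np.asarray(src, dtype=np.float64)
    num_phases = phase_luts.shape[0]
    step = len(src) / out_len                       # source step per output pixel
    half = taps // 2
    out = np.empty(out_len)
    for j in range(out_len):
        pos = j * step
        center = int(pos)
        phase = int((pos - center) * num_phases)    # fractional position -> LUT row
        idx = np.clip(np.arange(center - half, center - half + taps), 0, len(src) - 1)
        out[j] = np.dot(phase_luts[phase], src[idx])
    return np.clip(np.round(out), 0, 1023)

# e.g. 8 phases of an assumed 5-tap (vertical-style) filter; a 9-tap table
# would be used for the horizontal pass.
phases = np.array([[0.05, 0.25, 0.40, 0.25, 0.05]] * 8)
scaled = polyphase_scale_1d(np.arange(0, 1024, 8), out_len=96, taps=5, phase_luts=phases)
```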

Returning to Fig. 49, the processed output signal 588 may be sent to the memory 108 or may be output from the ISP pipeline logic 82 as the image signal 114 to display hardware (e.g., the display device 28) for viewing by a user. In some embodiments, the image signal 114 may be further processed by a graphics processing unit and/or a compression engine and stored before being decompressed and provided to a display. Additionally, one or more frame buffers may also be provided to control the buffering of the image data being output to a display device, particularly with respect to video image data.

As will be understood, the various image processing techniques described above and relating, among other things, to defective pixel detection and correction, lens shading correction, demosaicing and image sharpening are provided herein by way of example only. Accordingly, it should be understood that the present disclosure should not be construed as being limited to only the examples provided above. Indeed, the exemplary logic depicted herein may be subject to a number of variations and/or additional features in other embodiments. Further, it should be appreciated that the above-discussed techniques may be implemented in any suitable manner. For instance, the components of the image processing circuitry 32 and, in particular, the ISP front-end processing module 80 and the ISP pipeline module 82, may be implemented using hardware (e.g., suitably configured circuitry), software (e.g., via a computer program including executable code stored on one or more tangible computer-readable media), or via a combination of both hardware and software elements.

The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should further be understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents and alternatives falling within the spirit and scope of this disclosure.

1. An image signal processing system, comprising:
pre-processing logic comprising:
a first input configured to receive image data from a first image sensor;
a second input configured to receive image data from a second image sensor;
a pixel preprocessing unit configured to process the image data received from the first and second image sensors on a frame-by-frame basis; and
a pre-processing control unit configured to operate the pre-processing logic in a single sensor mode if only one of the first and second image sensors is active, and in a dual sensor mode if both of the first and second image sensors are active;
wherein, when operating in the single sensor mode, image frames acquired by the active sensor are provided directly to the pixel preprocessing unit for processing from the sensor interface of the active sensor; and
wherein, when operating in the dual sensor mode, image frames acquired by the first sensor and image frames acquired by the second sensor are provided to the pixel preprocessing unit in an alternating manner.

2. The image signal processing system of claim 1, wherein the first image sensor includes a first sensor interface and the second image sensor includes a second sensor interface, and wherein each of the first sensor interface and the second sensor interface is associated with a respective control register.

3. The image signal processing system of claim 1, wherein each respective control register comprises a destination field, a register bank field and an arming bit.

4. The image signal processing system of claim 3, wherein, when operating in the single sensor mode, the pre-processing control unit is configured to program the control register associated with the active sensor by:
writing a destination value set to the destination field, wherein the destination value set specifies one or more destination units within the pre-processing logic that are to receive image data acquired by the active sensor, the one or more destination units including the pixel preprocessing unit; and
writing a register bank value set to the register bank field, wherein the register bank value set specifies one or more data registers from first and second register banks that may be used to configure the one or more destination units.

5. The image processing system of claim 4, wherein the pre-processing control unit is configured to arm the control register associated with the active sensor, while a current image frame is being processed, by setting the arming bit to a value of 1, to wait for a triggering event and, when the triggering event is detected, to initiate the processing of a next image frame by sending the next image frame to the destination units specified by the destination value set for processing.

6. The image processing system of claim 3, wherein, when operating in the dual sensor mode, the pre-processing control unit is configured to program the control registers associated with the first and second image sensors by:
for the control register associated with the first image sensor, writing a destination value set to the destination field, wherein the destination value set specifies one or more destination units within the pre-processing logic that are to receive image data acquired by the first image sensor, the one or more destination units including a memory device, and writing a register bank value set to the register bank field, wherein the register bank value set specifies one or more data registers from first and second register banks that may be used to configure the one or more destination units specified by the destination value set; and
for the control register associated with the second image sensor, writing a destination value set to the destination field, wherein the destination value set specifies one or more destination units within the pre-processing logic that are to receive image data acquired by the second image sensor, the one or more destination units including the memory device, and writing a register bank value set to the register bank field, wherein the register bank value set specifies one or more data registers from the first and second register banks that may be used to configure the one or more destination units.

7. The image signal processing system of claim 6, wherein, upon a triggering event, a current image frame acquired by the first image sensor and a current image frame acquired by the second image sensor are written to the memory device.

8. The image processing system of claim 6, comprising a control register associated with the memory device, wherein the control register associated with the memory device is programmable by the pre-processing control unit so that image frames acquired by the first image sensor and the second image sensor may be read from the memory device and provided to the pixel preprocessing unit in an alternating manner.

9. The image signal processing system of claim 7, wherein the ratio at which image frames acquired by the first image sensor are alternated with image frames acquired by the second sensor depends, at least in part, on the frame rate of the image data acquired by the first image sensor and the frame rate of the image data acquired by the second image sensor.

10. The image signal processing system of claim 6, wherein the pixel preprocessing unit comprises:
a first statistics processing unit configured to process the image data to obtain one or more sets of image statistics;
a second statistics processing unit configured to process the image data to obtain one or more sets of image statistics;
wherein, when operating in the dual sensor mode, the one or more destination units specified by the destination value set stored in the control register associated with the first image sensor further include the first statistics processing unit, and the one or more destination units specified by the destination value set stored in the control register associated with the second image sensor further include the second statistics processing unit.

11. A method for processing image data in pre-processing circuitry of an image signal processing (ISP) system having a plurality of image input sources, the image input sources including first and second image sensors, the method comprising:
using a control unit to determine whether the pre-processing circuitry is operating in a single sensor mode or in a first dual sensor mode, wherein only one of the first and second image sensors is active in the single sensor mode, and both of the first and second image sensors are active in the first dual sensor mode;
if the pre-processing circuitry is operating in the single sensor mode, processing image frames acquired by the active image sensor by programming a control register associated with the active image sensor to specify one or more destination units and data registers for processing the image frame, arming the control register associated with the active image sensor and, upon detecting a triggering event for the image frame, sending the image frame to the destination units specified in the control register, the destination units including at least a pixel preprocessing unit configured to process the image frame; and
if the pre-processing circuitry is operating in the first dual sensor mode, programming a control register associated with the first image sensor to cause image frames acquired by the first image sensor to be written to a memory device, programming a control register associated with the second image sensor to cause image frames acquired by the second image sensor to be written to the memory device, programming a control register associated with the memory device to specify one or more destination units and data registers for processing image frames read from the memory device, arming the control register associated with the memory device and, upon detecting a triggering event for a subsequent image frame, sending the subsequent image frame to the destination units specified in the control register associated with the memory device, the destination units including at least the pixel preprocessing unit configured to process the subsequent image frame, wherein the image frames read from the memory device contain image frames acquired by the first image sensor and the second image sensor arranged in an alternating manner.

12. The method of claim 11, wherein the pre-processing circuitry is further configured to operate in a second dual sensor mode, wherein the first image sensor is active and the second image sensor is semi-active in the second dual sensor mode; and
wherein, if the pre-processing circuitry is operating in the second dual sensor mode, the control register associated with the first image sensor is programmed to cause image frames acquired by the first image sensor to be sent to the pixel preprocessing unit, and the control register associated with the second image sensor is programmed to cause image frames acquired by the second image sensor to be sent to a statistics processing unit, but not to the pixel preprocessing unit.

13. The method of claim 11, wherein, when operating in the single sensor mode, the triggering event occurs once the destination units specified by the control register associated with the active image sensor enter an idle state.

14. The method of claim 11, wherein at least one of the data registers specified by the control register associated with the active image sensor is associated with a corresponding shadow register, wherein the contents of the shadow register may be updated for a next image frame while a current image frame is being processed using the contents of said at least one data register.

15. The method of claim 11, wherein, when operating in the dual sensor mode, the triggering event occurs when the arming bit of the control register associated with the memory device is detected as being armed.

16. The method of claim 11, wherein the plurality of image input sources to the pre-processing circuitry includes an asynchronous image source.

17. The method of claim 11, comprising:
determining whether the mode in which the pre-processing circuitry is currently operating changes;
if the mode changes, programming all control registers associated with the image input sources of the pre-processing circuitry so that no image input source is configured to send image data to any destination units;
triggering each image input source upon detection of a respective triggering event;
determining whether all destination units are in an idle state; and
if all destination units are in the idle state, continuing the processing of the image data based upon the current mode.

18. The method of claim 17, wherein the mode change comprises switching from the single sensor mode to the dual sensor mode or switching from the dual sensor mode to the single sensor mode.

19. The method of claim 17, wherein the mode change comprises switching from the single sensor mode or the dual sensor mode to a mode in which both of the first and second image sensors are inactive, and wherein continuing the processing of the image data based upon the current mode comprises not processing additional image data until at least one of the first and second image sensors enters an active state.



 

Same patents:

FIELD: physics, optics.

SUBSTANCE: invention relates to a camera and a system having a camera, wherein the ratio of the distance between the lens and the sensor to the focal distance varies during exposure. The invention also relates to a method of deconvoluting image data. A variation frequency which enables to form an image which is invariant with respect to movement is set.

EFFECT: reduced blur due to movement.

17 cl, 24 dwg, 1 tbl

FIELD: physics, photography.

SUBSTANCE: invention relates to image capturing devices. The result is achieved due to that the image capturing device includes a photographic lens which forms an image of an object, a photoelectric conversion unit located in the predicted image plane of the photographic lens, a display unit which displays the photographed image obtained by the photoelectric conversion unit, an image display control unit which displays the photographed image through the display unit after obtaining the photographed image through the photoelectric conversion unit, a distance information acquisition unit which obtains information on distance in the photographed image, and a blur correction unit which corrects blurring on the photographed image based on information on distance obtained by the distance information acquisition unit. The image display control unit displays the photographed image, where multiple distances in the photographed image are focused.

EFFECT: correcting blurring based on information on distance of an object included in the photographed imaged.

13 cl, 25 dwg

FIELD: physics, video.

SUBSTANCE: invention relates to a video surveillance and camera control system capable of performing panoramic turning and tilted turning of the camera. The camera platform system has a camera which captures an object image to generate a frame image, camera platforms which turn a camera about a panning axis and a tilt axis and image processors which generate a visual image based on the frame image. When a camera passes through a predefined angular position for turning about the tilt axis, an image processor generates a first visual image corresponding to the image formed by turning the frame image by an angle greater than 0 degrees but less than 180 degrees about the panning axis in a predefined angular position before generating a second visual image corresponding to the image formed by turning the frame image 180 degrees about the panning axis.

EFFECT: reducing unnaturalness of change in direction of movement of an object in a visual image in order to reduce errors when tracking an object.

8 cl, 15 dwg

FIELD: physics.

SUBSTANCE: method is carried out using, in a displacement metre, a correlator which performs the function of determining the variance of signal increments based on squaring difference values of correlated signals from linear photodetectors in digital form, and an interpolator is made in form of a unit which performs interpolation using the formula: χ^=Δm(D1D1)/[2(D12D0+D1)], where D-1, D1, D0 denote signal variances, χ^ is displacement, Δm is the pixel size of the auxiliary photodetector.

EFFECT: reduced image displacement measurement error.

4 dwg

FIELD: physics, computation hardware.

SUBSTANCE: in compliance with this invention, sequence of images including multiple lower-resolution images is contracted. Vectors of motion between reference image in sequence and one or several nest images in sequence are defined. The next forecast image is generated by application of motion vectors to reconstructed version of reference image. Difference between next actual image and next forecast image is generated. Image in sequence from set to set is decoded and SR technology is applied to every decoded set for generation of higher-resolution image by rime interpolation and/or spatial interpolation of reference and difference images. Compression of sequence of images includes steps of determination of vectors of motion between reference image and at least one of extra image of sequence of images. Note here that obtained vector of motion is applied to forecast at least one extra image to calculate difference in mages between at least one extra image and forecast of at least one extra image, respectively.

EFFECT: high-resolution imaging by superhigh resolution technology.

13 cl, 5 dwg

FIELD: chemistry.

SUBSTANCE: invention relates to system and method of recording procedure for recorder. Proposed system comprises time code generator for time code generation for synchronisation of electronic data. Recorder transceiver executes wireless communication of time code to multiple cameras. Cameras fix video and audio data while appropriate camera time data dispatchers combine receive time code with recorded said data to be transmitted via wireless communication line for writing in recorder memory. Recorder can receive and memorise audio data from warning system while computer can communicate with recorder for appropriate editing of stored camera data and warning data to obtain edited data.

EFFECT: efficient use of recorder.

14 cl, 11 dwg

Digital camera // 2510866

FIELD: physics, communication.

SUBSTANCE: invention relates to digital camera with moving mirror. Proposed camera comprises microcomputer 110 that features live scan mode to control images generated by CMOS-sensor 130 or image data obtained by pre-processing of said image data so that these are displayed on LCD 150 as moving images in real time. Note here that when trigger button 141 receives live scan automatic focusing switch-on instruction, microcomputer 110 controls said moving mirror to displace it on optical path to measure by AF-transducer 132 and out of it thereafter to live scan mode.

EFFECT: expanded operating performances for digital camera with moving mirror.

28 cl, 41 dwg

FIELD: physics.

SUBSTANCE: brightness distribution is determined for each of multiple image data portions, the characteristic value of each brightness distribution is calculated from said brightness distribution and a correcting value is found for tonal correction, which is carried out with respect to the combined image data based on the obtained characteristic value of brightness distribution.

EFFECT: carrying out tonal correction to obtain a combined image, having suitable brightness and contrast.

10 cl, 6 dwg

FIELD: information technology.

SUBSTANCE: device has an image sensor which includes an array of a plurality of image forming pixels and a plurality of focus determining pixels which receive light rays passing through exit pupils of image forming lenses while they are partially shielded, a vertical output line, a vertical summation unit which sums, in the vertical direction of the image sensor, signals from a plurality of pixels, aligned in one column, and a control unit which performs control so that the vertical summation unit is always off when the focus determining pixel is included among pixels having signals to be summed, when summing signals from a plurality of pixels in the vertical direction and reading said signals through the vertical summation unit.

EFFECT: enabling mixing of signals of image forming pixels and focus determining pixels.

7 cl, 32 dwg

FIELD: radio engineering, communication.

SUBSTANCE: video system 10 on a chip for image stabilisation has a main photodetector array 11 and two secondary mutually perpendicular linear photodetector arrays 12 and 13 (with a larger pixel area), first and second random access memory 14 and 15, inputs N1…Nk of which are connected to corresponding outputs N1…Nk of the secondary mutually perpendicular linear photodetector arrays 12 and 13, outputs N1…Nk of which are also connected to inputs N1…Nk of first and second controllers 16 and 17 for calculating correlation, respectively, the second inputs M1…Mk of which are connected to corresponding outputs of the first and second random access memory 14 and 15, wherein outputs of the first and second controllers for calculating correlation are connected to inputs of a control unit 18.

EFFECT: high sensitivity to image shift, wider range of compensated shifts and shift accelerations, accuracy of measuring shift and size and weight characteristics of the device.

2 dwg

FIELD: engineering of systems for analyzing television images, in particular, for stabilizing an image in television images.

SUBSTANCE: in accordance to the invention, first digital image and at least second image have a set of pixels, and each pixel has associated address for display and is represented by color. System user sets a color matching interval, or system uses a predetermined color matching interval, then in first digital image a pixel is selected, for example, representing an element in an image, which is either fuzzy because of element movement, or appears trembling due to camera movement, and is matched within limits of interval with a pixel of second image. The interval ensures compensation, required during change of lighting. After selection of a pixel in first image, it may be matched with all pixels in the second image, where each pixel of the second image, having matching color within limits of matching interval, is stored in memory, and pixel color is selected, closest to pixel of first image. Then pixel addresses are changed in second image so that the address of pixel positioned in second image, closest color-wise to the pixel in the first image, is assigned the same address on the display as the pixel of first image and the resulting rearranged second image is dispatched into memory for storage.

EFFECT: creation of efficient image stabilization method.

9 cl, 11 dwg

FIELD: devices for reading, recording and reproducing images, and method for correcting chromatic aberrations.

SUBSTANCE: processing of correction is performed with consideration of diaphragm aperture size and object image height in image reading lens. The output signal of the camera signal processing circuit (4) by means of switch (5) is sent to block (6) for correction of chromatic aberration. Value of aperture of diaphragm (31) in lens (1) for reading image, and coordinates of pixel, relatively to which correction processing is performed, from the block (6) for correction of chromatic aberration is sent to block (10) for computation of transformation ratio. The length of focal distance of approach or withdrawal of lens (1) for reading image and camera trembling correction vector are sent to block (10) for computing transformation ratio, then transformation ratio is produced for each color to be dispatched to chromatic aberration correction block (6), where the signal, corrected in block (6) for chromatic aberration correction is compressed in data compression circuit (15) for transmission to record carrier in device (17) for recording and reproduction and unpacked in data unpacking circuit (18) for transmission to switch (5).

EFFECT: increased image quality, for example reduced color bleeding.

6 cl, 10 dwg
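
A minimal sketch, in Python/NumPy, of the per-color rescaling referenced above. In the abstract the transformation ratio for each color is computed by block (10) from the aperture value, pixel coordinates, focal length and shake correction vector; here the ratios are simply taken as inputs, the scaling is about the image centre, and nearest-neighbour sampling is used for brevity, so all of this is an illustrative assumption rather than the patented circuit.

import numpy as np

def scale_plane_about_center(plane, ratio):
    """Radially rescale one color plane about the image centre by `ratio`
    (nearest-neighbour sampling keeps the sketch short)."""
    h, w = plane.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # sample each output pixel from the position it maps to before scaling
    src_y = np.clip(np.round(cy + (yy - cy) / ratio).astype(int), 0, h - 1)
    src_x = np.clip(np.round(cx + (xx - cx) / ratio).astype(int), 0, w - 1)
    return plane[src_y, src_x]

def correct_lateral_ca(rgb, ratio_r, ratio_b):
    """Correct lateral chromatic aberration by rescaling the red and blue
    planes relative to green with per-color transformation ratios."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.dstack([scale_plane_about_center(r, ratio_r),
                      g,
                      scale_plane_about_center(b, ratio_b)])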

FIELD: information technologies.

SUBSTANCE: a method and a device are proposed for stabilisation of an image consisting of a set of frames: motion vectors are estimated at the frame level for each frame and are adaptively integrated to yield, for each frame, a motion vector to be used for stabilisation of the image (a sketch of one such adaptive integration follows this abstract). A copy of the reference image of a frame is offset by means of the corresponding adaptively integrated motion vector. In one embodiment of the invention, the perimeter of the image data unit is supplemented with an adjustment margin used for aligning the images; in another variant the vertical and horizontal components are handled independently, and motion estimation circuits associated with the MPEG-4 coder, used for estimating vectors at the macroblock level, and histograms are employed.

EFFECT: ability to remove unstable motion while preserving natural motion such as panning of the scene, with a reduced requirement for additional specialised circuits and a reduced increase in computational complexity.

26 cl, 4 dwg
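
The abstract does not specify the integration rule, so the following Python sketch uses one common reading of "adaptive integration": an exponentially damped accumulator, so that deliberate panning decays out of the correction while high-frequency jitter is compensated, clamped to the adjustment margin added around the frame. The parameter names and values are illustrative assumptions.

def stabilize(frame_motions, alpha=0.9, margin=16):
    """Adaptively integrate per-frame motion vectors (dx, dy).
    `alpha` damps the accumulator so deliberate panning decays out of the
    correction, while high-frequency jitter is compensated; the correction
    is clamped to the adjustment margin added around the frame."""
    acc_x = acc_y = 0.0
    corrections = []
    for dx, dy in frame_motions:
        acc_x = alpha * (acc_x + dx)
        acc_y = alpha * (acc_y + dy)
        cx = max(-margin, min(margin, acc_x))
        cy = max(-margin, min(margin, acc_y))
        corrections.append((-cx, -cy))   # offset applied to the frame copy
    return corrections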

FIELD: information technologies.

SUBSTANCE: the invention can be used for underwater shooting, surveillance, visual inspection, and control from the surface of underwater shooting parameters and diver actions in the course of underwater technical or diagnostic work at depth. The underwater television control system comprises a portable video camera unit installed under water in a leak-tight box, a video camera fixed on the helmet of the diver's suit and installed in a leak-tight box, leak-tight light sources for illuminating the filmed object, and the following components installed under water: a control unit, a monitor, power supply units for the light sources, a diver communication unit, and an audio-video recording unit whose terminals are connected to the information inputs of the monitor, as well as a system power supply unit, a storage battery and a battery charging unit.

EFFECT: improved efficiency of controlling underwater technical work and of monitoring divers' work under water, owing to increased reliability and validity of the information obtained during underwater shooting.

14 cl, 5 dwg

FIELD: physics; video technology.

SUBSTANCE: the invention relates to video surveillance devices. The result is achieved in that a camera (16) and a receiver (28) of a swiveling base are connected to each other so as to transmit a video signal. A web server (50) sends the video signal from the camera (16) to the outside and receives from outside a signal for remote control of the camera and a signal for remote control of the swiveling base. A control unit (40) controls the camera (16) in accordance with the signal for remote control of the camera. The signal for remote control of the swiveling base is superimposed on the video signal to be transmitted to the receiver (28) of the swiveling base using the video signal circuit (52) (a hedged sketch of such superposition follows this abstract). The receiver (28) of the swiveling base extracts the signal for remote control of the swiveling base from the video signal and controls rotation of the base (14) in accordance with that signal. This configuration can be used for transmission with superposition, and the camera and the swiveling base can be easily controlled through communication with the external environment.

EFFECT: control of the swiveling base of a camera through a remote control signal.

10 cl, 6 dwg
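
The abstract does not say how the control signal is superimposed on the video signal, so the following Python sketch assumes one simple hypothetical framing: control bytes written into the first line of each frame buffer (a line the display never shows) and read back out by the receiver. This is purely illustrative and is not the patented circuit (52)/(28).

import numpy as np

def embed_control(frame, control_bytes):
    """Write a length byte plus the control bytes into the first frame line."""
    out = frame.copy()
    payload = np.frombuffer(bytes([len(control_bytes)]) + control_bytes, dtype=np.uint8)
    out[0, :payload.size] = payload          # overwrite pixels the display never shows
    return out

def extract_control(frame):
    """Receiver side: read the length byte, then the control bytes."""
    n = int(frame[0, 0])
    return bytes(frame[0, 1:1 + n])

frame = np.zeros((480, 640), dtype=np.uint8)
cmd = b'PAN+05'                              # hypothetical pan command
assert extract_control(embed_control(frame, cmd)) == cmd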

FIELD: physics; computer engineering.

SUBSTANCE: the invention relates to computer engineering for determining video camera parameters and bringing them to given values, where the video cameras operate in a machine vision system consisting of three video cameras, two of which provide a detailed image and the third is for scanning. The result is achieved in that a device is proposed for automatic adaptive three-dimensional calibration of a binocular machine vision system, which has a first video camera, a first image input unit, a first orientation unit, a second video camera, a second image input unit, a second orientation unit, a system controller and a control unit. The device also includes a third video camera, a third image input unit and a third orientation unit. Accuracy of calibrating the machine vision system is achieved through successive pairwise calibration of different pairs of video cameras (a sketch of such pairwise calibration follows this abstract).

EFFECT: calibration of a machine vision system consisting of three video cameras which, after calibration, should be placed on a single straight line and directed perpendicular to this line, where the two outermost video cameras have a narrow view angle and different focal lengths, and the third video camera, placed in the centre between the outermost video cameras, has a wide view angle.

4 dwg
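
A minimal Python/OpenCV sketch of successive pairwise calibration: each pair of cameras is calibrated from synchronized chessboard views, and the three pairs are processed in turn. The chessboard-based routine, board size, camera names and the grayscale-image assumption are illustrative choices, not the procedure claimed in the patent.

import cv2
import numpy as np

def calibrate_pair(imgs_a, imgs_b, board=(9, 6), square=0.025):
    """Calibrate one camera pair from synchronized chessboard views (grayscale).
    Returns intrinsics of both cameras plus the rotation R and translation T
    mapping points from camera A's frame to camera B's frame."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, pts_a, pts_b = [], [], []
    for img_a, img_b in zip(imgs_a, imgs_b):
        ok_a, c_a = cv2.findChessboardCorners(img_a, board)
        ok_b, c_b = cv2.findChessboardCorners(img_b, board)
        if ok_a and ok_b:                     # keep only views seen by both cameras
            obj_pts.append(objp); pts_a.append(c_a); pts_b.append(c_b)
    size = imgs_a[0].shape[1], imgs_a[0].shape[0]
    _, K_a, d_a, _, _ = cv2.calibrateCamera(obj_pts, pts_a, size, None, None)
    _, K_b, d_b, _, _ = cv2.calibrateCamera(obj_pts, pts_b, size, None, None)
    _, K_a, d_a, K_b, d_b, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, pts_a, pts_b, K_a, d_a, K_b, d_b, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_a, d_a, K_b, d_b, R, T

def calibrate_three_camera_rig(images):
    """Successive pairwise calibration over the three cameras of the rig."""
    pairs = [('left', 'center'), ('center', 'right'), ('left', 'right')]
    return {(a, b): calibrate_pair(images[a], images[b]) for a, b in pairs}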

FIELD: physics, photography.

SUBSTANCE: the invention relates to television and digital photography, and more specifically to image stabilisation methods. The result is achieved in that two additional linear photodetectors of considerably smaller area, made in the form of rows (columns), are placed on a single crystal together with the main photodetector matrix, and a signal is read from the two additional linear photosensitive devices at a line frequency many times greater than the frame frequency of the main photodetector matrix. The pixel size along the linear photodetector is selected to be several times less than the pixel size of the main matrix. To maintain equality of sensitivity between the main matrix and the additional linear photodetectors, the pixel size of the latter in the direction across reading is increased in proportion to the reduction of the longitudinal size and of the reading time (a back-of-envelope sketch of this sizing follows this abstract). As a result, three video data streams are obtained: one main stream and two auxiliary ones, from which the shift of the crystal relative to the image formed by the lens is calculated.

EFFECT: compensation for the effect of the shaking of the hands of the operator.

2 cl, 6 dwg
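
A back-of-envelope Python sketch of the sensitivity argument above, assuming collected charge is proportional to pixel area multiplied by exposure time; the numbers are illustrative, not values from the patent.

def transverse_pixel_size(main_pixel, longitudinal_ratio, exposure_ratio):
    """Transverse size of a linear-array pixel needed to keep the collected
    charge per pixel roughly equal to that of a main-matrix pixel.
    longitudinal_ratio: main pixel length / linear-array pixel length (> 1)
    exposure_ratio:     main-matrix exposure / linear-array exposure (> 1,
                        since the linear arrays are read many times per frame)"""
    return main_pixel * longitudinal_ratio * exposure_ratio

# e.g. pixels 4x shorter along the array and read 10x more often than the
# main matrix would need to be ~40x wider across the read direction
w = transverse_pixel_size(main_pixel=2.0e-6, longitudinal_ratio=4, exposure_ratio=10)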

Digital camera // 2384968

FIELD: physics, photography.

SUBSTANCE: the invention relates to image capturing devices. The result is achieved in that the digital camera includes a microcomputer (110) having a "live" display mode, in which it performs control such that image data generated by a CMOS sensor (130), or image data obtained through predefined processing of the image data generated by the CMOS sensor (130), are displayed on a liquid-crystal display (150) as a moving image in real time. When the down button (141) receives an instruction to begin the automatic focusing operation in "live" display mode, the microcomputer (110) controls the movable mirror so that it enters the optical path in order to perform measurement through an AF sensor (132), and then allows the movable mirror to move out of the optical path in order to return the digital camera to the "live" display mode (the sequence is sketched after this abstract).

EFFECT: display of a subject image of a frame in "live" mode through an electronic view finder in a digital camera with a movable mirror.

7 cl, 41 dwg
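
A Python sketch of the control sequence described above. The mirror, AF sensor, lens, sensor and display objects are hypothetical hardware-abstraction interfaces introduced only to show the ordering of steps; they are not the patent's actual components or API.

def autofocus_in_live_view(mirror, af_sensor, lens, cmos, lcd):
    """Hypothetical sequence: mirror into the path for AF, then back out."""
    mirror.move_into_optical_path()      # live view is interrupted
    error = af_sensor.measure()          # focus measurement through the AF sensor
    lens.drive_focus(error)
    mirror.move_out_of_optical_path()    # the CMOS sensor sees the subject again
    lcd.show_stream(cmos.stream())       # "live" display mode resumes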

FIELD: information technology.

SUBSTANCE: the digital photographic camera has a support structure, an objective lens held by the support structure and having an optical axis, and a sensitive element held by the support structure under the objective lens and having a certain number of adjacent pixel rows, where each pixel row contains a certain number of pixels and each pixel includes an image sensor. The image signal processor connected to the sensitive element includes an image scaling device configured to scale each pixel row in accordance with a scaling factor that differs from that of the adjacent pixel row (a sketch of such row-wise scaling follows this abstract). The image scaling device is configured to correct for the oblique angle between the sensitive element of the photographic camera and the object whose image is being captured.

EFFECT: avoiding geometrical distortions caused by the position of the image capturing apparatus relative to the object whose image is being captured.

25 cl, 16 dwg
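
A minimal Python/NumPy sketch of row-wise rescaling with a factor that changes from row to row, which is the mechanism the abstract describes for correcting an oblique (keystone) viewing angle. The linear interpolation of the factor between top and bottom rows and the nearest-neighbour resampling are illustrative simplifications.

import numpy as np

def scale_rows(image, top_factor, bottom_factor):
    """Rescale every pixel row about its centre with a factor interpolated
    linearly from `top_factor` (first row) to `bottom_factor` (last row)."""
    h, w = image.shape[:2]
    cx = (w - 1) / 2.0
    out = np.empty_like(image)
    xs = np.arange(w)
    for row in range(h):
        f = top_factor + (bottom_factor - top_factor) * row / (h - 1)
        src_x = np.clip(np.round(cx + (xs - cx) / f).astype(int), 0, w - 1)
        out[row] = image[row, src_x]
    return out

# e.g. a page photographed from a low oblique angle appears wider at the
# bottom; shrinking the bottom rows more than the top rows squares it up
corrected = scale_rows(np.zeros((480, 640), dtype=np.uint8), 1.00, 0.80)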

FIELD: information technology.

SUBSTANCE: a low-power mobile image capturing device can create a stereo image and real-time stereo video from a single fixed view. For this purpose, statistics from the auto-focusing process are used to create a block-level depth map of the single view. Artefacts in the block depth map are suppressed and the depth map of the image is created. Left and right 3D stereo views are created from the image depth map using a Z-buffer-based 3D surface reconstruction process and a disparity map that depends on the geometry of binocular vision (a sketch of this view synthesis follows this abstract).

EFFECT: providing a simple calculation process for detecting and estimating depth information for recording and creating stereo video in real time.

29 cl, 24 dwg
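
A minimal Python/NumPy sketch of generating left and right views from one view plus a depth map. Disparity proportional to normalized depth, a per-row z-buffer that keeps the nearest source pixel when several land on the same column, and occlusion holes left unfilled are all illustrative simplifications; the patent's Z-buffer-based surface reconstruction and binocular-geometry disparity map are not reproduced here.

import numpy as np

def synthesize_stereo(image, depth, max_disparity=16):
    """Create left/right views from one view plus a per-pixel depth map
    (depth normalized to [0, 1], 1 = nearest)."""
    h, w = depth.shape
    disparity = depth * max_disparity / 2.0
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for view, sign in ((left, +1), (right, -1)):
        zbuf = np.full((h, w), -np.inf)
        for y in range(h):
            x_new = np.clip(np.arange(w) + sign * disparity[y], 0, w - 1).astype(int)
            for x in range(w):
                if depth[y, x] > zbuf[y, x_new[x]]:   # keep the nearest source pixel
                    zbuf[y, x_new[x]] = depth[y, x]
                    view[y, x_new[x]] = image[y, x]
    return left, right   # unfilled occlusion holes remain black in this sketch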
