Auto-focus control using image statistics data based on coarse and fine auto-focus scores

FIELD: physics, photography.

SUBSTANCE: invention relates to digital imaging devices. The result is achieved in that statistics logic may determine a coarse position that indicates an optimal focus area; in one embodiment, this position may be determined by searching for the first coarse position at which a coarse auto-focus score decreases with respect to the coarse auto-focus score at the previous position. Using this position as a starting point for fine score searching, the optimal focal position may be determined by searching for a peak in the fine auto-focus scores. In another embodiment, auto-focus statistics may also be determined based on each colour of the Bayer RGB data, such that, even in the presence of chromatic aberrations, relative auto-focus scores for each colour may be used to determine the direction of focus.

EFFECT: determining an optimal focal position using auto-focus statistics.

19 cl, 4 tbl, 97 dwg

 

Background

[0001] The present disclosure relates generally to digital imaging devices and, more particularly, to systems and methods for processing image data obtained using an image sensor of a digital imaging device.

[0002] This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present invention, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.

[0003] In recent years, digital imaging devices have become increasingly popular due, at least in part, to such devices becoming more and more affordable for the average consumer. Further, in addition to a number of standalone digital cameras currently available on the market, it is not uncommon for digital imaging devices to be integrated into another electronic device, such as a desktop or notebook computer, a cellular phone, or a portable media player.

[0004] To acquire image data, most digital imaging devices include an image sensor that provides a number of light-sensing elements (e.g., photodetectors) configured to convert light detected by the image sensor into an electrical signal. An image sensor may also include a color filter array that filters light captured by the image sensor to capture color information. The image data captured by the image sensor may then be processed by an image processing pipeline, which may apply a number of image processing operations to the image data to generate a full-color image that may be displayed for viewing on a display device, such as a monitor.

[0005] While conventional image processing techniques generally aim to produce a viewable image that is both objectively and subjectively pleasing to a viewer, such conventional techniques may not adequately address errors and/or distortions in the image data introduced by the imaging device and/or the image sensor. For instance, defective pixels on the image sensor, which may be due to manufacturing defects or operational failure, may fail to sense light levels accurately and, if not corrected, may manifest as artifacts appearing in the resulting processed image. Additionally, light intensity fall-off at the edges of the image sensor, which may be due to imperfections in the manufacture of the lens, may adversely affect characterization measurements and may result in an image in which the overall light intensity is non-uniform. The image processing pipeline may also perform one or more processes to sharpen the image. Conventional sharpening techniques, however, may not adequately account for existing noise in the image signal, or may be unable to distinguish the noise from edges and textured areas in the image. In such instances, conventional sharpening techniques may actually increase the appearance of noise in the image, which is generally undesirable. Further, various additional image processing steps may be performed, some of which may rely upon image statistics collected by a statistics collection engine.

[0006] Another image processing operation that may be applied to the image data captured by the image sensor is a demosaicing operation. Because a color filter array generally provides color data at a single wavelength per sensor pixel, a full set of color data is generally interpolated for each color channel in order to reproduce a full-color image (e.g., an RGB image). Conventional demosaicing techniques generally interpolate values for the missing color data in a horizontal or a vertical direction, generally depending on some type of fixed threshold. However, such conventional demosaicing techniques may not adequately account for the location and direction of edges within the image, which may result in edge artifacts, such as aliasing, checkerboard artifacts, or rainbow artifacts, being introduced into the full-color image, particularly along diagonal edges within the image.
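
By way of a non-limiting illustration only, the following sketch shows what such a fixed-threshold, horizontal-or-vertical interpolation might look like for a single missing green sample. The array layout, the threshold value, and the averaging scheme are assumptions made for this example and do not reflect the demosaicing logic disclosed below.

    # Minimal sketch (not the disclosed method): interpolating a missing green value
    # at a red/blue site of a Bayer mosaic by choosing a horizontal or vertical
    # average based on a fixed gradient threshold.
    import numpy as np

    def interp_green_fixed_threshold(raw, y, x, threshold=10):
        """Estimate green at (y, x), a non-green site, from a raw Bayer mosaic."""
        h_grad = abs(int(raw[y, x - 1]) - int(raw[y, x + 1]))   # horizontal gradient
        v_grad = abs(int(raw[y - 1, x]) - int(raw[y + 1, x]))   # vertical gradient
        h_avg = (int(raw[y, x - 1]) + int(raw[y, x + 1])) / 2.0
        v_avg = (int(raw[y - 1, x]) + int(raw[y + 1, x])) / 2.0
        # A fixed threshold decides whether one direction "wins" or both are mixed;
        # diagonal edges are not considered, which is the weakness noted above.
        if h_grad + threshold < v_grad:
            return h_avg
        if v_grad + threshold < h_grad:
            return v_avg
        return (h_avg + v_avg) / 2.0

    # Example: a tiny synthetic mosaic (values are arbitrary 8-bit intensities).
    raw = np.array([[10, 200, 12, 210],
                    [205, 50, 207, 52],
                    [11, 202, 13, 212],
                    [206, 51, 208, 53]], dtype=np.uint8)
    print(interp_green_fixed_threshold(raw, 1, 1))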

[0007] Accordingly, various considerations should be addressed when processing a digital image obtained with a digital camera or other imaging device in order to improve the appearance of the resulting image. In particular, certain aspects of the disclosure below address one or more of the drawbacks briefly mentioned above.

Summary of the invention

[0008] A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments, and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.

[0009] The present disclosure provides various techniques for collecting and processing statistics data in an image signal processor (ISP). In one embodiment, a statistics collection engine may be implemented in an ISP front-end processing unit, such that statistics are collected before the image data is processed by an ISP pipeline located downstream of the front-end unit. In accordance with one aspect of the disclosure, the statistics collection engine may be configured to acquire statistics relating to auto white balance, auto-exposure, and auto-focus. In one embodiment, the statistics collection engine may receive raw Bayer RGB data acquired by an image sensor and may be configured to perform one or more color space conversions to obtain pixel data in other color spaces. A set of pixel filters may be configured to accumulate sums of the pixel data conditionally, based upon YC1C2 characteristics, as defined by a pixel condition specified for each pixel filter. Depending on the selected color space, the pixel filters may generate color sums that may be used to match the current illuminant against a set of reference illuminants with which the image sensor has been calibrated.
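
The following is a minimal sketch of how conditional sum accumulation by a pixel filter and a subsequent illuminant match could be organized. The pixel condition bounds, the reference illuminant table, and the distance metric are illustrative assumptions rather than the calibration data or matching criteria of any particular embodiment.

    # Illustrative sketch only: accumulating conditional color sums with a "pixel
    # filter" and picking the closest reference illuminant.
    def accumulate_pixel_filter(pixels, c1_range, c2_range, y_min):
        """pixels: iterable of (Y, C1, C2, R, G, B) tuples after color conversion."""
        sums = {"R": 0.0, "G": 0.0, "B": 0.0, "count": 0}
        for y, c1, c2, r, g, b in pixels:
            # Only pixels whose YC1C2 values satisfy the filter's pixel condition
            # contribute to the accumulated sums.
            if y >= y_min and c1_range[0] <= c1 <= c1_range[1] and c2_range[0] <= c2 <= c2_range[1]:
                sums["R"] += r
                sums["G"] += g
                sums["B"] += b
                sums["count"] += 1
        return sums

    def match_illuminant(sums, reference_illuminants):
        """Pick the reference illuminant whose R/G and B/G ratios are closest."""
        if sums["count"] == 0 or sums["G"] == 0:
            return None
        rg, bg = sums["R"] / sums["G"], sums["B"] / sums["G"]
        best, best_dist = None, float("inf")
        for name, (ref_rg, ref_bg) in reference_illuminants.items():
            dist = (rg - ref_rg) ** 2 + (bg - ref_bg) ** 2
            if dist < best_dist:
                best, best_dist = name, dist
        return best

    # Hypothetical calibration data for a few reference illuminants.
    references = {"D65": (0.85, 0.60), "TL84": (0.95, 0.45), "IncA": (1.30, 0.35)}
    pixels = [(120, 0.02, -0.01, 100, 110, 70), (130, 0.01, 0.00, 98, 112, 68)]
    sums = accumulate_pixel_filter(pixels, c1_range=(-0.1, 0.1), c2_range=(-0.1, 0.1), y_min=16)
    print(match_illuminant(sums, references))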

[0010] In accordance with another aspect of the disclosure, auto-focus statistics may be used to generate coarse and fine auto-focus scores for determining the optimal focal length at which to position a lens associated with the image sensor. For instance, the statistics logic may determine a coarse position that indicates an optimal focus area which, in one embodiment, may be determined by searching for the first coarse position at which the coarse auto-focus score decreases relative to the coarse auto-focus score at the previous position. Using this position as a starting point for fine score searching, the optimal focal position may then be determined by searching for a peak in the fine auto-focus scores. The auto-focus statistics may also be determined based upon each Bayer RGB color, such that, even in the presence of chromatic aberrations, the relative auto-focus scores for each color may be used to determine the direction of focus. Further, the collected statistics may be output to memory and used by the ISP to process the acquired image data.
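
A minimal sketch of this coarse-to-fine search is given below. The sharpness metric, the step sizes, and the simulated lens model are illustrative assumptions standing in for the edge-filter-based auto-focus scores and the lens actuator control described in the detailed description.

    # Illustrative sketch only: coarse pass to find the first score drop, then a
    # fine pass around that position to locate the peak score.
    import numpy as np

    def af_score(window):
        """A crude sharpness score: sum of absolute horizontal pixel differences."""
        return float(np.abs(np.diff(window.astype(np.int32), axis=1)).sum())

    def find_optimal_focus(capture_at, start=0, end=99, coarse_step=10, fine_step=1):
        """capture_at(pos) returns the focus-window pixels with the lens at pos."""
        # Coarse pass: step through the focal range and stop at the first position
        # where the score drops relative to the score at the previous position.
        prev_score, drop_pos = None, end
        for pos in range(start, end + 1, coarse_step):
            score = af_score(capture_at(pos))
            if prev_score is not None and score < prev_score:
                drop_pos = pos
                break
            prev_score = score
        # The peak lies before the drop, so fine-search the interval covering the
        # two coarse steps that precede the drop position.
        lo = max(start, drop_pos - 2 * coarse_step)
        best_pos, best_score = lo, -1.0
        for pos in range(lo, drop_pos + 1, fine_step):
            score = af_score(capture_at(pos))
            if score > best_score:
                best_pos, best_score = pos, score
        return best_pos

    # Toy usage: a simulated lens whose focus-window detail peaks near position 37.
    rng = np.random.default_rng(0)

    def capture_at(pos):
        sharpness = max(0.0, 1.0 - abs(pos - 37) / 40.0)
        base = rng.normal(128, 2, size=(16, 16))
        detail = rng.normal(0, 30 * sharpness, size=(16, 16))
        return np.clip(base + detail, 0, 255)

    print(find_optimal_focus(capture_at))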

[0011] Various refinements of the features noted above may exist in relation to various aspects of the present disclosure. Further features may also be incorporated into these various aspects as well. These refinements and additional features may exist individually or in any combination. For instance, various features discussed below in relation to one or more of the illustrated embodiments may be incorporated into any of the above-described aspects of the present disclosure alone or in any combination. Again, the brief summary presented above is intended only to familiarize the reader with certain aspects and contexts of embodiments of the present disclosure without limitation to the claimed subject matter.

Brief description of drawings

[0012] The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawings will be provided by the patent office upon request and payment of the necessary fee.

[0013] Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings, in which:

[0014] Fig. 1 is a simplified block diagram depicting components of one example of an electronic device that includes an imaging device and image processing circuitry configured to implement one or more of the image processing techniques set forth in the present disclosure;

[0015] Fig. 2 shows a graphical representation of a 2×2 pixel block of a Bayer color filter array that may be implemented in the imaging device of Fig. 1;

[0016] Fig. 3 is a perspective view of the electronic device shown in Fig. 1 in the form of a portable computing device, in accordance with aspects of the present disclosure;

[0017] Fig. 4 is a front view of the electronic device shown in Fig. 1 in the form of a desktop computing device, in accordance with aspects of the present disclosure;

[0018] Fig. 5 is a front view of the electronic device shown in Fig. 1 in the form of a handheld portable electronic device in accordance with aspects of the present disclosure;

[0019] Fig. 6 is a rear view of the electronic device shown in Fig. 5;

[0020] Fig. 7 is a block diagram illustrating image signal processing (ISP) front-end processing logic and ISP pipeline processing logic that may be implemented in the image processing circuitry shown in Fig. 1, in accordance with aspects of the present disclosure;

[0021] Fig. 8 is a more detailed block diagram showing an embodiment of the ISP front-end processing logic of Fig. 7, in accordance with aspects of the present disclosure;

[0022] Fig. 9 is a flow chart depicting a method for processing image data in the ISP front-end processing logic of Fig. 8, in accordance with an embodiment;

[0023] Fig. 10 is a block diagram illustrating a configuration of double-buffered registers and control registers that may be used for processing image data in the ISP front-end processing logic, in accordance with one embodiment;

[0024] Figs. 11-13 are timing diagrams depicting different modes for triggering the processing of an image frame, in accordance with embodiments of the present invention;

[0025] Fig. 14 is a diagram depicting a control register in more detail, in accordance with one embodiment;

[0026] Fig. 15 is a flow chart depicting a method for using a front-end pixel processing unit to process image frames when the ISP front-end processing logic of Fig. 8 is operating in a single sensor mode;

[0027] Fig. 16 is a flow chart depicting a method for using a front-end pixel processing unit to process image frames when the ISP front-end processing logic of Fig. 8 is operating in a dual sensor mode;

[0028] Fig. 17 is a flow chart depicting a method for using a front-end pixel processing unit to process image frames when the ISP front-end processing logic of Fig. 8 is operating in a dual sensor mode;

[0029] Fig. 18 is a flow chart depicting a method in which both image sensors are active, but wherein a first image sensor is sending image frames to the front-end pixel processing unit while a second image sensor is sending image frames to a statistics processing unit, so that imaging statistics for the second sensor are immediately available when the second image sensor continues sending image frames to the front-end pixel processing unit at a later time, in accordance with one embodiment.

[0030] Fig. 19 is a graphical depiction of various imaging regions that may be defined within a source image frame captured by an image sensor, in accordance with aspects of the present disclosure;

[0031] Fig. 20 is a block diagram that provides a more detailed view of one embodiment of the ISP front-end pixel processing unit, as shown in the ISP front-end processing logic of Fig. 8, in accordance with aspects of the present disclosure;

[0032] Fig. 21 is a process diagram illustrating how temporal filtering may be applied to image pixel data received by the ISP front-end pixel processing unit shown in Fig. 20, in accordance with one embodiment;

[0033] Fig. 22 illustrates a set of reference image pixels and a set of corresponding current image pixels that may be used to determine one or more parameters for the temporal filtering process shown in Fig. 21;

[0034] Fig. 23 is a flow chart illustrating a process for applying temporal filtering to a current image pixel of a set of image data, in accordance with one embodiment;

[0035] Fig. 24 is a flow chart illustrating a technique for calculating a motion delta value for use with the temporal filtering of the current image pixel of Fig. 23, in accordance with one embodiment;

[0036] Fig. 25 is a flow chart illustrating another process for applying temporal filtering to a current image pixel of a set of image data, which includes the use of different gains for each color component of the image data, in accordance with another embodiment;

[0037] Fig. 26 is a process diagram illustrating how a temporal filtering technique that utilizes separate motion and luma tables for each color component of the image pixel data received by the ISP front-end pixel processing unit shown in Fig. 20 may be applied, in accordance with a further embodiment;

[0038] Fig. 27 is a flow chart illustrating a process for applying temporal filtering to a current image pixel of a set of image data using the motion and luma tables shown in Fig. 26, in accordance with a further embodiment;

[0039] Fig. 28 depicts a sample of full-resolution raw image data that may be captured by an image sensor, in accordance with aspects of the present disclosure;

[0040] Fig. 29 illustrates an image sensor that may be configured to apply binning to the full-resolution raw image data of Fig. 28 in order to output a binned sample of the raw image data, in accordance with an embodiment of the present disclosure;

[0041] Fig. 30 depicts a sample of binned raw image data that may be provided by the image sensor of Fig. 29, in accordance with aspects of the present disclosure;

[0042] Fig. 31 depicts the binned raw image data of Fig. 30 after being re-sampled by a binning compensation filter, in accordance with aspects of the present disclosure;

[0043] Fig. 32 depicts a binning compensation filter that may be implemented in the ISP front-end pixel processing unit of Fig. 20, in accordance with one embodiment;

[0044] Fig. 33 is a graphical depiction of various step sizes that may be applied to a differential analyzer for selecting center input pixels and indexes/phases for binning compensation filtering, in accordance with aspects of the present disclosure;

[0045] Fig. 34 is a flow chart illustrating a process for scaling image data using the binning compensation filter of Fig. 32, in accordance with one embodiment;

[0046] Fig. 35 is a flow chart illustrating a process for determining a current input source center pixel for horizontal and vertical filtering performed by the binning compensation filter of Fig. 32, in accordance with one embodiment;

[0047] Fig. 36 is a flow chart illustrating a process for determining an index for selecting filtering coefficients for horizontal and vertical filtering performed by the binning compensation filter of Fig. 32, in accordance with one embodiment.

[0048] Fig. 37 is a more detailed block diagram showing an embodiment of a statistics processing unit that may be implemented in the ISP front-end processing logic of Fig. 8, in accordance with aspects of the present disclosure;

[0049] Fig. 38 shows various image frame boundary cases that may be considered when applying techniques for detecting and correcting defective pixels during statistics processing by the statistics processing unit of Fig. 37, in accordance with aspects of the present disclosure;

[0050] Fig. 39 is a flow chart illustrating a process for performing defective pixel detection and correction during statistics processing, in accordance with one embodiment;

[0051] Fig. 40 shows a three-dimensional profile depicting light intensity versus pixel position for a conventional lens of an imaging device;

[0052] Fig. 41 is a colored drawing that exhibits non-uniform light intensity across the image, which may be the result of lens shading irregularities;

[0053] Fig. 42 is a graphical illustration of a raw imaging frame that includes a lens shading correction region and a gain grid, in accordance with aspects of the present disclosure;

[0054] Fig. 43 illustrates the interpolation of a gain value for an image pixel enclosed by four bordering grid gain points, in accordance with aspects of the present disclosure;

[0055] Fig. 44 is a flow chart illustrating a process for determining interpolated gain values that may be applied to imaging pixels during a lens shading correction operation, in accordance with an embodiment of the present invention;

[0056] Fig. 45 is a three-dimensional profile depicting interpolated gain values that may be applied to an image that exhibits the light intensity characteristics shown in Fig. 40 when performing lens shading correction, in accordance with aspects of the present disclosure;

[0057] Fig. 46 shows the colored drawing of Fig. 41 exhibiting improved uniformity in light intensity after a lens shading correction operation is applied, in accordance with aspects of the present disclosure;

[0058] Fig. 47 graphically illustrates how a radial distance between a current pixel and the center of an image may be calculated and used to determine a radial gain component for lens shading correction, in accordance with one embodiment;

[0059] Fig. 48 is a flow chart illustrating a process by which radial gains and interpolated gains from a gain grid are used to determine a total gain that may be applied to imaging pixels during a lens shading correction operation, in accordance with an embodiment of the present invention;

[0060] Fig. 49 is a graph depicting white areas and low and high color temperature axes in a color space;

[0061] Fig. 50 is a table showing how white balance gains may be configured for various reference illuminant conditions, in accordance with one embodiment;

[0062] Fig. 51 is a block diagram showing a statistics collection engine that may be implemented in the ISP front-end processing logic, in accordance with an embodiment of the present disclosure;

[0063] Fig. 52 illustrates the down-sampling of raw Bayer RGB data, in accordance with aspects of the present disclosure;

[0064] Fig. 53 depicts a two-dimensional color histogram that may be collected by the statistics collection engine of Fig. 51, in accordance with one embodiment;

[0065] Fig. 54 depicts zooming and panning within the two-dimensional color histogram;

[0066] Fig. 55 is a more detailed view showing logic for implementing a pixel filter of the statistics collection engine, in accordance with one embodiment;

[0067] Fig. 56 is a graphical depiction of how the position of a pixel within a C1-C2 color space may be evaluated based on a pixel condition defined for a pixel filter, in accordance with one embodiment;

[0068] Fig. 57 is a graphical depiction of how the position of a pixel within a C1-C2 color space may be evaluated based on a pixel condition defined for a pixel filter, in accordance with another embodiment;

[0069] Fig. 58 is a graphical depiction of how the position of a pixel within a C1-C2 color space may be evaluated based on a pixel condition defined for a pixel filter, in accordance with yet another embodiment;

[0070] Fig. 59 is a graph showing how image sensor integration times may be determined to compensate for flicker, in accordance with one embodiment;

[0071] Fig. 60 is a detailed block diagram showing logic that may be implemented in the statistics collection engine of Fig. 51 and configured to collect auto-focus statistics, in accordance with one embodiment;

[0072] Fig. 61 is a graph depicting a technique for performing auto-focus using coarse and fine auto-focus score values, in accordance with one embodiment;

[0073] Fig. 62 is a flow chart depicting a process for performing auto-focus using coarse and fine auto-focus score values, in accordance with one embodiment;

[0074] Figs. 63 and 64 show the decimation of raw Bayer data to obtain white-balanced luma values;

[0075] Fig. 65 shows a technique for performing auto-focus using relative auto-focus score values for each color component, in accordance with one embodiment;

[0076] Fig. 66 is a more detailed view of the statistics processing unit of Fig. 37, showing how Bayer RGB histogram data may be used to assist black level compensation, in accordance with one embodiment;

[0077] Fig. 67 is a block diagram showing an embodiment of the ISP pipeline processing logic of Fig. 7, in accordance with aspects of the present disclosure;

[0078] Fig. 68 is a more detailed view showing an embodiment of a raw pixel processing unit that may be implemented in the ISP pipeline processing logic of Fig. 67, in accordance with aspects of the present disclosure;

[0079] Fig. 69 shows various image frame boundary cases that may be considered when applying techniques for detecting and correcting defective pixels during processing by the raw pixel processing unit shown in Fig. 68, in accordance with aspects of the present disclosure;

[0080] Figs. 70-72 are flow charts depicting various processes for detecting and correcting defective pixels that may be performed in the raw pixel processing unit of Fig. 68, in accordance with one embodiment;

[0081] Fig. 73 shows the locations of two green pixels in a 2×2 pixel block of a Bayer image sensor that may be interpolated when applying green non-uniformity correction techniques during processing by the raw pixel processing logic of Fig. 68, in accordance with aspects of the present disclosure;

[0082] Fig. 74 illustrates a set of pixels that includes a center pixel and associated horizontal neighboring pixels that may be used as part of a horizontal filtering process for noise reduction, in accordance with aspects of the present disclosure;

[0083] Fig. 75 illustrates a set of pixels that includes a center pixel and associated vertical neighboring pixels that may be used as part of a vertical filtering process for noise reduction, in accordance with aspects of the present disclosure;

[0084] Fig. 76 is a simplified flow diagram that depicts how demosaicing may be applied to a raw Bayer image pattern to produce a full-color RGB image;

[0085] Fig. 77 depicts a set of pixels of a Bayer image pattern from which horizontal and vertical energy components may be derived for interpolating green color values during demosaicing of the Bayer image pattern, in accordance with one embodiment;

[0086] Fig. 78 shows a set of horizontal pixels to which filtering may be applied to determine a horizontal component of an interpolated green color value during demosaicing of a Bayer image pattern, in accordance with aspects of the present invention;

[0087] Fig. 79 shows a set of vertical pixels to which filtering may be applied to determine a vertical component of an interpolated green color value during demosaicing of a Bayer image pattern, in accordance with aspects of the present invention;

[0088] Fig. 80 shows various 3×3 pixel blocks to which filtering may be applied to determine interpolated red and blue color values during demosaicing of a Bayer image pattern, in accordance with aspects of the present invention;

[0089] Figs. 81-84 provide flow charts depicting various processes for interpolating green, red, and blue color values during demosaicing of a Bayer image pattern, in accordance with one embodiment;

[0090] Fig. 85 shows a colored drawing of an original image scene that may be captured by an image sensor and processed in accordance with aspects of the demosaicing techniques disclosed herein;

[0091] Fig. 86 shows a colored drawing of a Bayer image pattern of the image scene of Fig. 85;

[0092] Fig. 87 shows a colored drawing of an RGB image reconstructed using a conventional demosaicing technique based on the Bayer image pattern of Fig. 86;

[0093] Fig. 88 shows a colored drawing of an RGB image reconstructed from the Bayer image pattern of Fig. 86 in accordance with aspects of the demosaicing techniques disclosed herein;

[0094] Fig. 89 is a more detailed view showing one embodiment of an RGB processing unit that may be implemented in the ISP pipeline processing logic of Fig. 67, in accordance with aspects of the present disclosure;

[0095] Fig. 90 is a more detailed view showing one embodiment of a YCbCr processing unit that may be implemented in the ISP pipeline processing logic of Fig. 67, in accordance with aspects of the present disclosure;

[0096] Fig. 91 is a graphical depiction of active source regions for luma and chroma, as defined within a source buffer using a 1-plane format, in accordance with aspects of the present disclosure;

[0097] Fig. 92 is a graphical depiction of active source regions for luma and chroma, as defined within a source buffer using a 2-plane format, in accordance with aspects of the present disclosure;

[0098] Fig. 93 is a block diagram illustrating image sharpening logic that may be implemented in the YCbCr processing unit of Fig. 90, in accordance with one embodiment;

[0099] Fig. 94 is a block diagram illustrating edge enhancement logic that may be implemented in the YCbCr processing unit of Fig. 90, in accordance with one embodiment;

[0100] Fig. 95 is a graph showing the relationship of chroma attenuation factors to sharpened luma values, in accordance with aspects of the present disclosure;

[0101] Fig. 96 is a block diagram illustrating image brightness, contrast, and color (BCC) adjustment logic that may be implemented in the YCbCr processing unit of Fig. 90, in accordance with one embodiment; and

[0102] Fig. 97 shows a hue and saturation color wheel in the YCbCr color space, defining various hue angles and saturation values that may be applied during color adjustment in the BCC adjustment logic of Fig. 96.

Detailed description of specific embodiments

[0103] One or more specific embodiments of the present disclosure will be described below. These described embodiments are only examples of the presently disclosed invention. Additionally, in an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.

[0104] When introducing elements of the various embodiments of the present disclosure, the use of the singular is intended to mean that there are one or more of the elements. The terms “comprising”, “including”, and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.

[0105] As will be discussed below, the present disclosure relates generally to techniques for processing image data acquired via one or more image sensing devices. In particular, certain aspects of the present disclosure relate to techniques for detecting and correcting defective pixels, techniques for demosaicing a raw image pattern, techniques for sharpening a luminance image using a multi-scale unsharp mask, and techniques for applying lens shading gains to correct for lens shading irregularities. Further, it should be understood that the presently disclosed techniques may be applied to both still images and moving images (e.g., video), and may be utilized in any suitable type of imaging application, such as a digital camera, an electronic device having an integrated digital camera, a security or video surveillance system, a medical imaging system, and so forth.

[0106] Keeping the above points in mind, Fig. 1 is a block diagram illustrating an example of an electronic device 10 that may provide for the processing of image data using one or more of the image processing techniques briefly mentioned above. The electronic device 10 may be any type of electronic device, such as a laptop or desktop computer, a mobile phone, a digital media player, or the like, that is configured to receive and process image data, such as data acquired using one or more image sensing components. By way of example only, the electronic device 10 may be a portable electronic device, such as a model of an iPod® or iPhone®, available from Apple Inc. of Cupertino, California. Additionally, the electronic device 10 may be a desktop or laptop computer, such as a model of a MacBook®, MacBook® Pro, MacBook Air®, iMac®, Mac® Mini, or Mac Pro®, available from Apple Inc. In other embodiments, the electronic device 10 may also be a model of an electronic device from another manufacturer that is capable of acquiring and processing image data.

[0107] Regardless of its form (e.g., portable or non-portable), it should be understood that the electronic device 10 may provide for the processing of image data using one or more of the image processing techniques briefly described above, which may include, among other things, defective pixel correction and/or detection techniques, lens shading correction techniques, demosaicing techniques, or image sharpening techniques. In some embodiments, the electronic device 10 may apply such image processing techniques to image data stored in a memory of the electronic device 10. In further embodiments, the electronic device 10 may include one or more imaging devices, such as an integrated or external digital camera, configured to acquire image data, which may then be processed by the electronic device 10 using one or more of the above-mentioned image processing techniques. Embodiments showing both portable and non-portable forms of the electronic device 10 will be further discussed below with reference to Figs. 3-6.

[0108] As shown in Fig. 1, the electronic device 10 may include various internal and/or external components that contribute to the function of the device 10. Those of ordinary skill in the art will appreciate that the various functional blocks shown in Fig. 1 may comprise hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. For example, in the presently illustrated embodiment, the electronic device 10 may include input/output (I/O) ports 12, input structures 14, one or more processors 16, a memory device 18, non-volatile storage 20, expansion card(s) 22, a networking device 24, a power source 26, and a display 28. Additionally, the electronic device 10 may include one or more imaging devices 30, such as a digital camera, and image processing circuitry 32. As will be discussed further below, the image processing circuitry 32 may be configured to implement one or more of the above-discussed image processing techniques when processing image data. As can be appreciated, image data processed by the image processing circuitry 32 may be retrieved from the memory 18 and/or the non-volatile storage device(s) 20, or may be acquired using the imaging device 30.

[0109] Before continuing, it should be understood that the system block diagram of the device 10 shown in Fig. 1 is intended to be a high-level control diagram depicting various components that may be included in such a device 10. That is, the connection lines between individual components shown in Fig. 1 may not necessarily represent paths or directions through which data flows or is transmitted between the various components of the device 10. Indeed, as discussed below, the depicted processor(s) 16 may, in some embodiments, include multiple processors, such as a main processor (e.g., CPU) and dedicated image and/or video processors. In such embodiments, the processing of image data may be primarily handled by these dedicated processors, thus effectively off-loading such tasks from the main processor (CPU).

[0110] With regard to each of the components illustrated in Fig. 1, the I/O ports 12 may include ports configured to connect to a variety of external devices, such as a power source, an audio output device (e.g., a headset or headphones), or other electronic devices (such as handheld devices and/or computers, printers, projectors, external displays, modems, docking stations, and so forth). In one embodiment, the I/O ports 12 may be configured to connect to an external imaging device, such as a digital camera, for the acquisition of image data that may be processed using the image processing circuitry 32. The I/O ports 12 may support any suitable interface type, such as a universal serial bus (USB) port, a serial connection port, an IEEE-1394 (FireWire) port, an Ethernet or modem port, and/or an AC/DC power connection port.

[0111] In some embodiments, certain I/O ports 12 may be configured to provide more than one function. For instance, in one embodiment, the I/O ports 12 may include a proprietary port from Apple Inc. that may function not only to facilitate the transfer of data between the electronic device 10 and an external source, but also to couple the device 10 to a power charging interface, such as a power adapter designed to provide power from an electrical wall outlet, or an interface cable configured to draw power from another electrical device, such as a desktop or laptop computer, for charging the power source 26 (which may include one or more rechargeable batteries). Thus, the I/O port 12 may be configured to function dually as both a data transfer port and an AC/DC power connection port, depending, for example, on the external component being coupled to the device 10 through the I/O port 12.

[0112] The input structures 14 may provide user input or feedback to the processor(s) 16. For instance, the input structures 14 may be configured to control one or more functions of the electronic device 10, such as applications running on the electronic device 10. By way of example only, the input structures 14 may include buttons, sliders, switches, control pads, keys, knobs, scroll wheels, keyboards, mice, touchpads, and so forth, or some combination thereof. In one embodiment, the input structures 14 may allow a user to navigate a graphical user interface (GUI) displayed on the device 10. Additionally, the input structures 14 may include a touch-sensitive mechanism provided in conjunction with the display 28. In such embodiments, a user may select or interact with displayed interface elements via the touch-sensitive mechanism.

[0113] The input structures 14 may include the various devices, circuitry, and pathways by which user input or feedback is provided to the one or more processors 16. Such input structures 14 may be configured to control a function of the device 10, applications running on the device 10, and/or any interfaces or devices connected to or used by the electronic device 10. For example, the input structures 14 may allow a user to navigate a displayed user interface or application interface. Examples of the input structures 14 may include buttons, sliders, switches, control pads, keys, knobs, scroll wheels, keyboards, mice, touchpads, and so forth.

[0114] In certain embodiments, an input structure 14 and the display device 28 may be provided together, such as in the case of a “touchscreen”, whereby a touch-sensitive mechanism is provided in conjunction with the display 28. In such embodiments, the user may select or interact with displayed interface elements via the touch-sensitive mechanism. In this way, the displayed interface may provide interactive functionality, allowing a user to navigate the displayed interface by touching the display 28. For example, user interaction with the input structures 14, such as to interact with a user or application interface displayed on the display 28, may generate electrical signals indicative of the user input. These input signals may be routed via suitable pathways, such as an input hub or data bus, to the one or more processors 16 for further processing.

[0115] In addition to processing various input signals received via the input structure(s) 14, the processor(s) 16 may control the general operation of the device 10. For instance, the processor(s) 16 may provide the processing capability to execute an operating system, programs, user and application interfaces, and any other functions of the electronic device 10. The processor(s) 16 may include one or more microprocessors, such as one or more “general-purpose” microprocessors, one or more special-purpose microprocessors and/or application-specific microprocessors (ASICs), or a combination of such processing components. For example, the processor(s) 16 may include one or more reduced instruction set (e.g., RISC) processors, as well as graphics processors (GPU), video processors, audio processors, and/or related chip sets. As will be appreciated, the processor(s) 16 may be coupled to one or more data buses for transferring data and instructions between the various components of the device 10. In certain embodiments, the processor(s) 16 may provide the processing capability to execute imaging applications on the electronic device 10, such as Photo Booth®, Aperture®, iPhoto®, or Preview®, available from Apple Inc., or the “Camera” and/or “Photo” applications provided by Apple Inc. and available on models of the iPhone®.

[0116] The instructions or data to be processed by the processor(s) 16 may be stored in a computer-readable medium, such as a memory device 18. The memory device 18 may be provided as a volatile memory, such as random access memory (RAM), or as a non-volatile memory, such as read-only memory (ROM), or as a combination of one or more RAM and ROM devices. The memory 18 may store a variety of information and may be used for various purposes. For example, the memory 18 may store firmware for the electronic device 10, such as a basic input/output system (BIOS), an operating system, various programs, applications, or any other routines that may be executed on the electronic device 10, including user interface functions, processor functions, and so forth. In addition, the memory 18 may be used for buffering or caching during operation of the electronic device 10. For instance, in one embodiment, the memory 18 includes one or more frame buffers for buffering video data as it is being output to the display 28.

[0117] In addition to the memory device 18, the electronic device 10 may further include non-volatile storage 20 for persistent storage of data and/or instructions. The non-volatile storage 20 may include flash memory, a hard drive, or any other optical, magnetic, and/or solid-state storage media, or some combination thereof. Thus, although depicted as a single device in Fig. 1 for purposes of clarity, it should be understood that the non-volatile storage device(s) 20 may include a combination of one or more of the above-listed storage devices operating in conjunction with the processor(s) 16. The non-volatile storage 20 may be used to store firmware, data files, image data, software programs and applications, wireless connection information, personal information, user preferences, and any other suitable data. In accordance with aspects of the present disclosure, image data stored in the non-volatile storage 20 and/or the memory device 18 may be processed by the image processing circuitry 32 prior to being output on a display.

[0118] The embodiment illustrated in Fig. 1 may also include one or more card or expansion slots. The card slots may be configured to receive an expansion card 22 that may be used to add functionality, such as additional memory, I/O functionality, or networking capability, to the electronic device 10. Such an expansion card 22 may connect to the device through any type of suitable connector, and may be accessed internally or externally with respect to the housing of the electronic device 10. For example, in one embodiment, the expansion card 22 may be a flash memory card, such as a SecureDigital (SD) card, mini- or microSD, a CompactFlash card, or the like, or may be a PCMCIA device. Additionally, the expansion card 22 may be a Subscriber Identity Module (SIM) card, for use with an embodiment of the electronic device 10 that provides mobile phone capability.

[0119] The electronic device 10 also includes the network device 24, which may be a network controller or a network interface card (NIC) that may provide for network connectivity over a wireless 802.11 standard or any other suitable networking standard, such as a local area network (LAN), a wide area network (WAN), such as an Enhanced Data Rates for GSM Evolution (EDGE) network, a 3G data network, or the Internet. In certain embodiments, the network device 24 may provide for a connection to an online digital media content provider, such as the iTunes® music service, available from Apple Inc.

[0120] The power source 26 of the device 10 may include the capability to power the device 10 in both non-portable and portable settings. For example, in a portable setting, the device 10 may include one or more batteries, such as a lithium-ion battery, for powering the device 10. The battery may be re-charged by connecting the device 10 to an external power source, such as to an electrical wall outlet. In a non-portable setting, the power source 26 may include a power supply unit (PSU) configured to draw power from an electrical wall outlet and to distribute the power to the various components of a non-portable electronic device, such as a desktop computing system.

[0121] The display 28 may be used to display various images generated by the device 10, such as a GUI for an operating system, or image data (including still images and video data) processed by the image processing circuitry 32, as will be discussed further below. As mentioned above, the image data may include image data acquired using the imaging device 30 or image data retrieved from the memory 18 and/or the non-volatile storage 20. The display 28 may be any suitable type of display, such as a liquid crystal display (LCD), a plasma display, or an organic light emitting diode (OLED) display, for example. Additionally, as discussed above, the display 28 may be provided in conjunction with the above-discussed touch-sensitive mechanism (e.g., a touchscreen) that may function as part of a control interface for the electronic device 10.

[0122] The illustrated imaging device(s) 30 may be provided as a digital camera configured to acquire both still images and moving images (e.g., video). The camera 30 may include a lens and one or more image sensors configured to capture and convert light into electrical signals. By way of example only, the image sensor may include a CMOS image sensor (e.g., a CMOS active-pixel sensor (APS)) or a CCD (charge-coupled device) sensor. Generally, the image sensor in the camera 30 includes an integrated circuit having an array of pixels, wherein each pixel includes a photodetector for sensing light. As those skilled in the art will appreciate, the photodetectors in the imaging pixels generally detect the intensity of light captured via the camera lens. However, photodetectors, by themselves, are generally unable to detect the wavelength of the captured light and, thus, are unable to determine color information.

[0123] Accordingly, the image sensor may further include a color filter array (CFA) that may overlay or be disposed over the pixel array of the image sensor to capture color information. The color filter array may include an array of small color filters, each of which may overlap a respective pixel of the image sensor and filter the captured light by wavelength. Thus, when used in conjunction, the color filter array and the photodetectors may provide both wavelength and intensity information with regard to light captured through the camera, which may be representative of a captured image.

[0124] In one embodiment, the color filter array may include a Bayer color filter array, which provides a filter pattern that is 50% green elements, 25% red elements, and 25% blue elements. For instance, Fig. 2 shows that a 2×2 pixel block of a Bayer CFA includes 2 green elements (Gr and Gb), 1 red element (R), and 1 blue element (B). Thus, an image sensor that utilizes a Bayer color filter array may provide information regarding the intensity of the light received by the camera 30 at the green, red, and blue wavelengths, whereby each image pixel records only one of the three colors (RGB). This information, which may be referred to as “raw image data” or data in the “raw domain”, may then be processed using one or more demosaicing techniques to convert the raw image data into a full-color image, generally by interpolating a set of red, green, and blue values for each pixel. As will be discussed further below, such demosaicing techniques may be performed by the image processing circuitry 32.
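
As a simple illustration of the single-color-per-pixel property of the Bayer pattern, the following sketch maps pixel coordinates to the color they record and splits a raw mosaic into its four component planes. The assumed pattern phase (Gr/R on even rows, B/Gb on odd rows) is one of several possible arrangements and is chosen here only for illustration; it does not necessarily match the layout depicted in Fig. 2.

    # Illustrative sketch only: which color a sensor pixel records under one
    # assumed Bayer phase, and how the raw mosaic separates into component planes.
    import numpy as np

    def bayer_channel(row, col):
        """Return the color recorded at (row, col) for the assumed Bayer layout."""
        if row % 2 == 0:
            return "Gr" if col % 2 == 0 else "R"
        return "B" if col % 2 == 0 else "Gb"

    def split_bayer(raw):
        """Split a raw mosaic into four half-resolution Bayer component planes."""
        return {
            "Gr": raw[0::2, 0::2],
            "R":  raw[0::2, 1::2],
            "B":  raw[1::2, 0::2],
            "Gb": raw[1::2, 1::2],
        }

    raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
    print(bayer_channel(0, 0), bayer_channel(0, 1), bayer_channel(1, 0), bayer_channel(1, 1))
    print({name: plane.shape for name, plane in split_bayer(raw).items()})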

[0125] As mentioned above, the image processing circuitry 32 may provide for various image processing steps, such as defective pixel detection/correction, lens shading correction, demosaicing, image sharpening, noise reduction, gamma correction, image enhancement, color space conversion, image compression, chroma subsampling, and image scaling operations, and so forth. In some embodiments, the image processing circuitry 32 may include various subcomponents and/or discrete units of logic that collectively form an image processing “pipeline” for performing each of the various image processing steps. These subcomponents may be implemented using hardware (e.g., digital signal processors or ASICs) or software, or via a combination of hardware and software components. The various image processing operations that may be provided by the image processing circuitry 32 and, particularly, those processing operations relating to defective pixel detection/correction, lens shading correction, demosaicing, and image sharpening, will be discussed in greater detail below.

[0126] Before continuing, it should be noted that while various embodiments of the various image processing techniques discussed below may utilize a Bayer CFA, the presently disclosed techniques are not intended to be limited in this regard. Indeed, those skilled in the art will appreciate that the image processing techniques provided herein may be applicable to any suitable type of color filter array, including RGBW filters, CYGM filters, and so forth.

[0127] Referring again to the electronic device 10, Figs. 3-6 illustrate various forms that the electronic device 10 may take. As mentioned above, the electronic device 10 may take the form of a computer, including computers that are generally portable (such as laptop, notebook, and tablet computers), as well as computers that are generally non-portable (such as desktop computers, workstations, and/or servers), or other types of electronic device, such as handheld portable electronic devices (e.g., a digital media player or a mobile phone). In particular, Figs. 3 and 4 depict the electronic device 10 in the form of a laptop computer 40 and a desktop computer 50, respectively. Figs. 5 and 6 show front and rear views, respectively, of the electronic device 10 in the form of a handheld portable device 60.

[0128] As shown in Fig. 3, the depicted laptop computer 40 includes a housing 42, the display 28, the I/O ports 12, and the input structures 14. The input structures 14 may include a keyboard and a touchpad mouse that are integrated with the housing 42. Additionally, the input structures 14 may include various other buttons and/or switches that may be used to interact with the computer 40, such as to power on or start the computer, to operate a GUI or an application running on the computer 40, as well as to adjust various other aspects relating to operation of the computer 40 (e.g., sound volume, display brightness, etc.). The computer 40 may also include various I/O ports 12 that provide for connectivity to additional devices, as discussed above, such as a FireWire® or USB port, a high definition multimedia interface (HDMI) port, or any other type of port that is suitable for connecting to an external device. Additionally, the computer 40 may include network connectivity (e.g., network device 24), memory (e.g., memory 18), and storage capabilities (e.g., storage device 20), as described above with respect to Fig. 1.

[0129] Further, the laptop computer 40, in the illustrated embodiment, may include an integrated imaging device 30 (e.g., a camera). In other embodiments, the laptop computer 40 may utilize an external camera (e.g., an external USB camera or a webcam) connected to one or more of the I/O ports 12 instead of or in addition to the integrated camera 30. For instance, an external camera may be an iSight® camera, available from Apple Inc. The camera 30, whether integrated or external, may provide for the capture and recording of images. Such images may then be viewed by a user using an image viewing application, or may be utilized by other applications, including video-conferencing applications, such as iChat®, and image viewing/editing applications, such as Photo Booth®, Aperture®, iPhoto®, or Preview®, available from Apple Inc. In certain embodiments, the depicted laptop computer 40 may be a model of a MacBook®, MacBook® Pro, MacBook Air®, or PowerBook®, available from Apple Inc. Additionally, the computer 40, in one embodiment, may be a portable tablet computing device, such as a model of an iPad® tablet computer, also available from Apple Inc.

[0130] Fig. 4 further illustrates an embodiment in which the electronic device 10 is provided as a desktop computer 50. As will be appreciated, the desktop computer 50 may include a number of features that may be generally similar to those provided by the laptop computer 40 shown in Fig. 3, but may have a generally larger overall form factor. As shown, the desktop computer 50 may be housed in an enclosure 42 that includes the display 28, as well as various other components discussed above with regard to the block diagram shown in Fig. 1. Further, the desktop computer 50 may include an external keyboard and mouse (input structures 14) that may be coupled to the computer 50 via one or more I/O ports 12 (e.g., USB), or may communicate with the computer 50 wirelessly (e.g., RF, Bluetooth, etc.). The desktop computer 50 also includes an imaging device 30, which may be an integrated or external camera, as discussed above. In certain embodiments, the depicted desktop computer 50 may be a model of an iMac®, Mac® Mini, or Mac Pro®, available from Apple Inc.

[0131] As further shown, the display 28 may be configured to generate various images that may be viewed by a user. For example, during operation of the computer 50, the display 28 may display a graphical user interface (“GUI”) 52 that allows the user to interact with an operating system and/or applications running on the computer 50. The GUI 52 may include various layers, windows, screens, templates, or other graphical elements that may be displayed in all, or a portion, of the display device 28. For instance, in the depicted embodiment, an operating system GUI 52 may include various graphical icons 54, each of which may correspond to various applications that may be opened or executed upon detecting a user selection (e.g., via keyboard/mouse or touchscreen input). The icons 54 may be displayed in a dock 56 or within one or more graphical window elements 58 displayed on the screen. In some embodiments, the selection of an icon 54 may lead to a hierarchical navigation process, such that the selection of an icon 54 leads to a screen or opens another graphical window that includes one or more additional icons or other GUI elements. By way of example only, the operating system GUI 52 displayed in Fig. 4 may be from a version of the Mac OS® operating system, available from Apple Inc.

[0132] Continuing to Figs. 5 and 6, the electronic device 10 is further illustrated in the form of a portable handheld electronic device 60, which may be a model of an iPod® or iPhone®, available from Apple Inc. In the depicted embodiment, the handheld device 60 includes an enclosure 42, which may function to protect the interior components from physical damage and to shield them from electromagnetic interference. The enclosure 42 may be formed from any suitable material or combination of materials, such as plastic, metal, or a composite material, and may allow certain frequencies of electromagnetic radiation, such as wireless networking signals, to pass through to wireless communication circuitry (e.g., network device 24), which may be disposed within the enclosure 42, as shown in Fig. 5.

[0133] The enclosure 42 also includes various user input structures 14 through which a user may interface with the handheld device 60. For instance, each input structure 14 may be configured to control one or more respective device functions when pressed or actuated. By way of example, one or more of the input structures 14 may be configured to invoke a “home” screen or menu to be displayed, to toggle between a sleep, wake, or powered on/off mode, to silence a ringer for a cellular phone application, to increase or decrease a volume output, and so forth. It should be understood that the illustrated input structures 14 are merely exemplary, and that the handheld device 60 may include any number of suitable user input structures existing in various forms, including buttons, switches, keys, knobs, scroll wheels, and so forth.

[0134] As shown in Fig. 5, the handheld device 60 may include various I/O ports 12. For instance, the depicted I/O ports 12 may include a proprietary connection port 12a for transmitting and receiving data files or for charging a power source 26, and an audio connection port 12b for connecting the device 60 to an audio output device (e.g., headphones or speakers). Further, in embodiments where the handheld device 60 provides mobile phone functionality, the device 60 may include an I/O port 12c for receiving a subscriber identity module (SIM) card (e.g., an expansion card 22).

[0135] The display device 28, which may be an LCD, LED, or any other suitable type of display, may display various images generated by the handheld device 60. For example, the display 28 may display various system indicators 64 providing feedback to a user with regard to one or more states of the handheld device 60, such as power status, signal strength, external device connections, and so forth. The display may also display a GUI 52 that allows a user to interact with the device 60, as discussed above with reference to Fig. 4. The GUI 52 may include graphical elements, such as the icons 54, which may correspond to various applications that may be opened or executed upon detecting a user selection of a respective icon 54. By way of example, one of the icons 54 may represent a camera application 66 that may be used in conjunction with a camera 30 (shown in dashed lines in Fig. 5) for acquiring images. Referring briefly to Fig. 6, a rear view of the handheld electronic device 60 shown in Fig. 5 is illustrated, which shows the camera 30 integrated with the housing 42 and positioned on the rear of the handheld device 60.

[0136] As mentioned above, image data acquired using the camera 30 may be processed using the image processing circuitry 32, which may include hardware (e.g., disposed within the enclosure 42) and/or software stored on one or more storage devices (e.g., memory 18 or non-volatile storage 20) of the device 60. Images acquired using the camera application 66 and the camera 30 may be stored on the device 60 (e.g., in the storage device 20) and may be viewed at a later time using a photo viewing application 68.

[0137] The handheld device 60 may also include various audio input and output elements. For example, the audio input/output elements, depicted generally by reference numeral 70, may include an input receiver, such as one or more microphones. For instance, where the handheld device 60 includes cell phone functionality, the input receivers may be configured to receive user audio input, such as a user's voice. Additionally, the audio input/output elements 70 may include one or more output transmitters. Such output transmitters may include one or more speakers that may function to transmit audio signals to a user, such as during the playback of music data using a media player application 72. Further, in embodiments where the handheld device 60 includes a cell phone application, an additional audio output transmitter 74 may be provided, as shown in Fig. 5. Like the output transmitters of the audio input/output elements 70, the output transmitter 74 may also include one or more speakers configured to transmit audio signals to the user, such as voice data received during a telephone call. Thus, the audio input/output elements 70 and 74 may operate in conjunction to function as the audio receiving and transmitting elements of a telephone.

[0138] Having now provided some context with regard to various forms that the electronic device 10 may take, the present discussion will focus on the image processing circuitry 32 depicted in Fig. 1. As mentioned above, the image processing circuitry 32 may be implemented using hardware and/or software components, and may include various processing units that define an image signal processing (ISP) pipeline. In particular, the following discussion may focus on aspects of the image processing techniques set forth in the present disclosure, particularly those relating to defective pixel detection/correction techniques, lens shading correction techniques, demosaicing techniques, and image sharpening techniques.

[0139] Referring now to Fig. 7, a simplified top-level block diagram depicting several functional components that may be implemented as part of the image processing circuitry 32 is illustrated, in accordance with one embodiment of the presently disclosed techniques. Particularly, Fig. 7 is intended to illustrate how image data may flow through the image processing circuitry 32 in accordance with at least one embodiment. To provide a general overview of the image processing circuitry 32, a general description of how these functional components operate to process image data is provided here with reference to Fig. 7, while a more specific description of each of the illustrated functional components, as well as their respective sub-components, will be provided further below.

[0140] Referring to the illustrated embodiment, the image processing circuitry 32 may include image signal processing (ISP) front-end processing logic 80 (also referred to as ISP pre-processing logic), ISP pipeline logic 82, and control logic 84. Image data captured by the imaging device 30 may first be processed by the ISP front-end logic 80 and analyzed to capture image statistics that may be used to determine one or more control parameters for the ISP pipeline logic 82 and/or the imaging device 30. The ISP front-end logic 80 may be configured to capture image data from an image sensor input signal. For instance, as shown in Fig. 7, the imaging device 30 may include a camera having one or more lenses 88 and image sensor(s) 90. As discussed above, the image sensor(s) 90 may include a color filter array (e.g., a Bayer filter) and may thus provide both light intensity and wavelength information captured by each imaging pixel of the image sensors 90 to provide a set of raw image data that may be processed by the ISP front-end logic 80. For instance, the output 92 of the imaging device 30 may be received by a sensor interface 94, which may then provide the raw image data 96 to the ISP front-end logic 80 based, for example, on the sensor interface type. By way of example, the sensor interface 94 may utilize a Standard Mobile Imaging Architecture (SMIA) interface or other serial or parallel camera interfaces, or some combination thereof. In certain embodiments, the ISP front-end logic 80 may operate within its own clock domain and may provide an asynchronous interface to the sensor interface 94 to support image sensors of different sizes and timing requirements.

[0141] The raw image data 96 may be provided to the ISP front-end logic 80 and processed on a pixel-by-pixel basis in a number of formats. For instance, each image pixel may have a bit depth of 8, 10, 12, or 14 bits. The ISP front-end logic 80 may perform one or more image processing operations on the raw image data 96, as well as collect statistics about the image data 96. The image processing operations, as well as the collection of statistical data, may be performed at the same or at different bit-depth precisions. For example, in one embodiment, processing of the raw image pixel data 96 may be performed at 14-bit precision. In such embodiments, raw pixel data received by the ISP front-end logic 80 that has a bit depth of less than 14 bits (e.g., 8 bits, 10 bits, 12 bits) may be up-sampled to 14 bits for image processing purposes. In another embodiment, statistics processing may occur at 8-bit precision and, thus, raw pixel data having a higher bit depth may be down-sampled to an 8-bit format for statistics purposes. As will be appreciated, down-sampling to 8 bits may reduce hardware size (e.g., area) and also reduce processing/computational complexity for the statistics data. Additionally, the raw image data may be averaged spatially to allow the statistics data to be more robust to noise.
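
Purely as a non-limiting illustration of the bit-depth handling described above, the short Python sketch below promotes a raw pixel to 14-bit precision for processing and reduces it to 8 bits for statistics collection. The shift-based scaling rule and the function names are editorial assumptions and are not taken from this disclosure.

    # Sketch of bit-depth conversion for raw pixels (assumed shift-based scaling).
    def upsample_to_14bit(pixel: int, bit_depth: int) -> int:
        """Promote an 8/10/12-bit raw pixel to 14-bit precision for processing."""
        if not 8 <= bit_depth <= 14:
            raise ValueError("unsupported raw bit depth")
        return pixel << (14 - bit_depth)

    def downsample_to_8bit(pixel: int, bit_depth: int) -> int:
        """Reduce a raw pixel to 8-bit precision for statistics collection."""
        return pixel >> (bit_depth - 8)

    p10 = 512                                 # mid-scale value of a 10-bit pixel
    print(upsample_to_14bit(p10, 10))         # 8192: mid-scale at 14 bits
    print(downsample_to_8bit(p10, 10))        # 128:  mid-scale at 8 bits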

[0142] Furthermore, as shown in Fig. 7, the ISP front-end logic 80 may also receive pixel data from the memory 108. For instance, as indicated by reference number 98, raw pixel data may be sent to the memory 108 from the sensor interface 94. The raw pixel data residing in the memory 108 may then be provided to the ISP front-end logic 80 for processing, as indicated by reference number 100. The memory 108 may be part of the memory device 18, the storage device 20, or may be a separate dedicated memory within the electronic device 10 and may include direct memory access (DMA) features. Further, in certain embodiments, the ISP front-end logic 80 may operate within its own clock domain and provide an asynchronous interface to the sensor interface 94 to support sensors of different sizes and having different timing requirements.

[0143] Upon receiving the raw image data 96 (from the sensor interface 94) or 100 (from the memory 108), the ISP front-end logic 80 may perform one or more image processing operations, such as temporal filtering and/or binning compensation filtering. The processed image data may then be provided to the ISP pipeline logic 82 (output signal 109) for additional processing prior to being displayed (e.g., on the display device 28), or may be sent to memory (output signal 110). The ISP pipeline logic 82 receives the pre-processed data, either directly from the ISP front-end logic 80 or from the memory 108 (input signal 112), and may provide for additional processing of the image data in the raw domain, as well as in the RGB and YCbCr color spaces. The image data processed by the ISP pipeline logic 82 may then be output (signal 114) to the display 28 for viewing by a user and/or may be further processed by a graphics engine or GPU. Additionally, the output of the ISP pipeline logic 82 may be sent to the memory 108 (signal 115), and the display 28 may read the image data from the memory 108 (signal 116), which, in certain embodiments, may be configured to implement one or more frame buffers. Further, in some implementations, the output of the ISP pipeline logic 82 may also be provided to a compression/decompression engine 118 (signal 117) for encoding/decoding the image data. The encoded image data may be stored and later decompressed prior to being displayed on the display device 28 (signal 119). By way of example, the compression engine or "encoder" 118 may be a JPEG compression engine for encoding still images, or an H.264 compression engine for encoding video, or some combination thereof, as well as a corresponding decompression engine for decoding the image data. Additional information regarding the image processing operations that may be provided in the ISP pipeline logic 82 will be discussed in greater detail below with regard to Fig. 67-97. Also, it should be noted that the ISP pipeline logic 82 may also receive raw image data from the memory 108, as depicted by input signal 112.

[0144] Statistical data 102 determined by the ISP front-end logic 80 may be provided to a control logic unit 84. The statistical data 102 may include, for instance, image sensor statistics relating to auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation (BLC), lens shading correction, and so forth. The control logic 84 may include a processor and/or microcontroller configured to execute one or more routines (e.g., firmware) that may be configured to determine, based on the received statistical data 102, control parameters 104 for the imaging device 30, as well as control parameters 106 for the ISP pipeline logic 82. Solely by way of example, the control parameters 104 may include sensor control parameters (e.g., gains, integration time for exposure control), camera flash control parameters, lens control parameters (e.g., focal position for focusing or zoom), or a combination of such parameters. The ISP control parameters 106 may include gain levels and color correction matrix (CCM) coefficients for auto-white balance and color adjustment (e.g., during RGB processing), as well as lens shading correction parameters which, as discussed below, may be determined based on white point balance parameters. In some embodiments, the control logic 84 may, in addition to analyzing the statistical data 102, also analyze historical statistics, which may be stored on the electronic device 10 (e.g., in the memory 18 or the storage 20).
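
By way of a non-limiting illustration only, the two groups of control parameters 104 and 106 described above could be grouped as in the following Python sketch. The field names and default values are editorial assumptions chosen to mirror the examples given in this paragraph (sensor gain, integration time, flash, lens focus; white balance gains, CCM coefficients, lens shading gains); they do not define any actual register layout.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SensorControlParams:                 # parameters 104, applied to the imaging device 30
        analog_gain: float = 1.0               # sensor gain
        integration_time_us: int = 10000       # exposure integration time
        flash_enabled: bool = False            # camera flash control
        lens_focus_position: int = 0           # focal position for focusing/zoom

    @dataclass
    class IspControlParams:                    # parameters 106, applied to the ISP pipeline logic 82
        wb_gains: List[float] = field(default_factory=lambda: [1.0, 1.0, 1.0, 1.0])   # R, Gr, Gb, B gains
        ccm: List[List[float]] = field(default_factory=lambda: [[1.0, 0.0, 0.0],
                                                                [0.0, 1.0, 0.0],
                                                                [0.0, 0.0, 1.0]])     # 3x3 color correction matrix
        lens_shading_gains: List[float] = field(default_factory=list)                 # per-zone LSC gains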

[0145] Due to the generally complex design of the image processing circuitry 32 shown here, it may be beneficial to separate the discussion of the ISP front-end logic 80 and the ISP pipeline logic 82 into separate sections, as shown below. Particularly, Fig. 8 through 66 of the present application may relate to the discussion of various embodiments and aspects of the ISP front-end logic 80, while Fig. 67 through 97 of the present application may relate to the discussion of various embodiments and aspects of the ISP pipeline logic 82.

The ISP front-end processing logic

[0146] Fig. 8 shows a more detailed block diagram illustrating functional logic blocks that may be implemented within the ISP front-end logic 80, in accordance with one embodiment. Depending on the configuration of the imaging device 30 and/or the sensor interface 94, as discussed above in Fig. 7, raw image data may be provided to the ISP front-end logic 80 by one or more image sensors 90. In the depicted embodiment, raw image data may be provided to the ISP front-end logic 80 by a first image sensor 90a (Sensor0) and a second image sensor 90b (Sensor1). As will be discussed further below, each image sensor 90a and 90b may be configured to apply binning to full-resolution image data in order to increase the signal-to-noise ratio of the image signal. For instance, a binning technique, such as 2×2 binning, may be applied, which may interpolate a "binned" raw image pixel based upon four full-resolution image pixels of the same color. In one embodiment, this may result in there being four accumulated signal components associated with the binned pixel versus a single noise component, thus improving the signal-to-noise ratio of the image data, but reducing the overall resolution. Additionally, binning may also result in an uneven or non-uniform spatial sampling of the image data, which may be corrected using binning compensation filtering, as will be discussed in more detail below.
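
As a non-limiting sketch of the 2×2 binning idea described above (the disclosure states only that a binned pixel is interpolated from four full-resolution pixels of the same color; simple averaging is assumed here for illustration), one color plane of the Bayer mosaic could be binned as follows:

    import numpy as np

    def bin_2x2_same_color(plane: np.ndarray) -> np.ndarray:
        """Average each 2x2 block of a single Bayer color plane.

        `plane` holds full-resolution samples of one color component (e.g., all
        R samples extracted from the mosaic). Summing four signal contributions
        against one effective noise component improves SNR at the cost of
        resolution."""
        h, w = plane.shape
        p = plane[:h - h % 2, :w - w % 2].astype(np.uint32)
        return ((p[0::2, 0::2] + p[0::2, 1::2] +
                 p[1::2, 0::2] + p[1::2, 1::2] + 2) // 4).astype(plane.dtype)

    red = np.arange(16, dtype=np.uint16).reshape(4, 4)   # toy 4x4 red plane
    print(bin_2x2_same_color(red))                       # binned 2x2 result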

[0147] As shown, the image sensors 90a and 90b may provide the raw image data as signals Sif0 and Sif1, respectively. Each of the image sensors 90a and 90b may be generally associated with respective statistics processing units 120 (StatsPipe0) and 122 (StatsPipe1), which may be configured to process image data to determine one or more sets of statistics (as indicated by the signals Stats0 and Stats1), including statistics relating to auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens shading correction, and so forth. In certain embodiments, when only one of the sensors 90a or 90b is actively acquiring images, the image data may be sent to both StatsPipe0 and StatsPipe1 if additional statistics are desired. For instance, to provide one example, if StatsPipe0 and StatsPipe1 are both available, StatsPipe0 may be utilized to collect statistics for one color space (e.g., RGB), and StatsPipe1 may be utilized to collect statistics for another color space (e.g., YUV or YCbCr). That is, the statistics processing units 120 and 122 may operate in parallel to collect multiple sets of statistics for each frame of image data acquired by the active sensor.

[0148] In the present embodiment, five asynchronous sources of data are provided in the ISP front-end unit 80. These include: (1) a direct input from a sensor interface corresponding to Sensor0 (90a) (referred to as Sif0 or Sens0), (2) a direct input from a sensor interface corresponding to Sensor1 (90b) (referred to as Sif1 or Sens1), (3) Sensor0 data input from the memory 108 (referred to as SifIn0 or Sens0DMA), which may include a DMA interface, (4) Sensor1 data input from the memory 108 (referred to as SifIn1 or Sens1DMA), and (5) a set of image frames from the Sensor0 and Sensor1 input data retrieved from the memory 108 (referred to as FeProcIn or ProcInDMA). The ISP front-end unit 80 may also include multiple destinations to which image data from a source may be routed, wherein each destination may be either a storage location in memory (e.g., 108) or a processing unit. For example, in the present embodiment, the ISP front-end unit 80 includes six destinations: (1) Sif0DMA for receiving Sensor0 data in the memory 108, (2) Sif1DMA for receiving Sensor1 data in the memory 108, (3) the first statistics processing unit 120 (StatsPipe0), (4) the second statistics processing unit 122 (StatsPipe1), (5) the front-end pixel processing unit 130 (FEProc), and (6) FeOut (or FEProcOut) to the memory 108 or to the ISP pipeline 82 (discussed in further detail below). In one embodiment, the ISP front-end unit 80 may be configured such that only some destinations are valid for a particular source, as shown below in Table 1.

Table 1
Example of valid ISP front-end destinations for each source (an "X" indicates a valid destination)

Source       SIf0DMA   SIf1DMA   StatsPipe0   StatsPipe1   FEProc   FEOut
Sens0        X         -         X            X            X        X
Sens1        -         X         X            X            X        X
Sens0DMA     -         -         X            -            -        -
Sens1DMA     -         -         -            X            -        -
ProcInDMA    -         -         -            -            X        X

[0149] For instance, in accordance with Table 1, the source Sens0 (the Sensor0 sensor interface) may be configured to provide data to the destinations SIf0DMA (signal 134), StatsPipe0 (signal 136), StatsPipe1 (signal 138), FEProc (signal 140), or FEOut (signal 142). With regard to FEOut, source data may, in some instances, be provided to FEOut to bypass pixel processing by FEProc, such as for debugging or testing purposes. Additionally, the source Sens1 (the Sensor1 sensor interface) may be configured to provide data to the destinations SIf1DMA (signal 144), StatsPipe0 (signal 146), StatsPipe1 (signal 148), FEProc (signal 150), or FEOut (signal 152); the source Sens0DMA (Sensor0 data from the memory 108) may be configured to provide data to StatsPipe0 (signal 154); the source Sens1DMA (Sensor1 data from the memory 108) may be configured to provide data to StatsPipe1 (signal 156); and the source ProcInDMA (Sensor0 and Sensor1 data from the memory 108) may be configured to provide data to FEProc (signal 158) and FEOut (signal 160).
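
Solely to summarize the routing constraints of Table 1 in another form (a software sketch of behavior that, in the ISP front-end, is provided by hardware selection logic), the valid source-to-destination pairs could be checked as follows:

    # Valid destinations per source, mirroring Table 1 above.
    VALID_DESTINATIONS = {
        "Sens0":     {"SIf0DMA", "StatsPipe0", "StatsPipe1", "FEProc", "FEOut"},
        "Sens1":     {"SIf1DMA", "StatsPipe0", "StatsPipe1", "FEProc", "FEOut"},
        "Sens0DMA":  {"StatsPipe0"},
        "Sens1DMA":  {"StatsPipe1"},
        "ProcInDMA": {"FEProc", "FEOut"},
    }

    def is_valid_route(source: str, destination: str) -> bool:
        """Return True if the front-end allows `source` to feed `destination`."""
        return destination in VALID_DESTINATIONS.get(source, set())

    print(is_valid_route("Sens0", "FEProc"))       # True
    print(is_valid_route("Sens0DMA", "FEProc"))    # False: Sens0DMA feeds StatsPipe0 only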

[0150] It should be noted that the presently illustrated embodiment is configured such that Sens0DMA (Sensor0 frames from the memory 108) and Sens1DMA (Sensor1 frames from the memory 108) are provided only to StatsPipe0 and StatsPipe1, respectively. This configuration allows the ISP front-end unit 80 to retain a certain number of previous frames (e.g., 5 frames) in memory. For example, due to a delay or lag between the time a user initiates a capture event (e.g., transitioning the imaging system from a preview mode to a capture or recording mode, or even by just turning on or initializing the image sensor) using the image sensor and the time an image scene is captured, not every frame that the user intended to capture may be captured and processed in substantially real time. Thus, by retaining a certain number of previous frames in the memory 108 (e.g., from a preview phase), these previous frames may be processed later or alongside the frames actually captured in response to the capture event, thus compensating for any such lag and providing a more complete set of image data.
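
A minimal sketch of the frame-retention idea described above follows, implemented here as a simple ring buffer. The depth of five frames matches the example in the text; the class and method names are editorial assumptions.

    from collections import deque

    class FrameHistory:
        """Keep the N most recent preview frames so that frames captured just
        before a capture event can still be processed afterwards."""

        def __init__(self, depth: int = 5):
            self._frames = deque(maxlen=depth)   # oldest frame dropped automatically

        def push(self, frame) -> None:
            self._frames.append(frame)

        def frames_for_capture_event(self):
            """Frames to process together with those captured after the event."""
            return list(self._frames)

    history = FrameHistory(depth=5)
    for n in range(8):
        history.push(f"preview frame {n}")
    print(history.frames_for_capture_event())     # the five most recent frames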

[0151] With regard to the illustrated configuration of Fig. 8, it should be noted that StatsPipe0 120 is configured to receive one of the inputs 136 (from Sens0), 146 (from Sens1), and 154 (from Sens0DMA), as determined by selection logic 124, such as a multiplexer. Similarly, selection logic 126 may select an input from the signals 138, 156, and 148 to provide to StatsPipe1, and selection logic 132 may select an input from the signals 140, 150, and 158 to provide to FEProc. As mentioned above, the statistical data (Stats0 and Stats1) may be provided to the control logic 84 for the determination of various control parameters that may be used to operate the imaging device 30 and/or the ISP pipeline logic 82. As can be appreciated, the selection logic blocks (124, 126, and 132) shown in Fig. 8 may be provided by any suitable type of logic, such as a multiplexer that selects one of multiple input signals in response to a control signal.

[0152] The pixel processing unit 130 (FEProc) may be configured to perform various image processing operations on the raw image data on a pixel-by-pixel basis. As shown, FEProc 130, as a destination processing unit, may receive image data from the Sens0 (signal 140), Sens1 (signal 150), or ProcInDMA (signal 158) sources by way of the selection logic 132. FEProc 130 may also receive and output various signals (e.g., Rin, Hin, Hout, and Yout, which may represent motion history and luma data used during temporal filtering) when performing the pixel processing operations, which may include temporal filtering and binning compensation filtering, as will be discussed further below. The output 109 (FEProcOut) of the pixel processing unit 130 may then be forwarded to the ISP pipeline logic 82, such as via one or more first-in-first-out (FIFO) queues, or may be sent to the memory 108.

[0153] Further, as shown in Fig. 8, the selection logic 132, in addition to receiving the signals 140, 150, and 158, may also receive the signals 159 and 161. The signal 159 may represent "pre-processed" raw image data from StatsPipe0, and the signal 161 may represent "pre-processed" raw image data from StatsPipe1. As will be discussed below, each of the statistics processing units may apply one or more pre-processing operations to the raw image data before collecting statistics. In one embodiment, each of the statistics processing units may perform a degree of defective pixel detection/correction, lens shading correction, black level compensation, and inverse black level compensation. Thus, the signals 159 and 161 may represent raw image data that has been processed using the aforementioned pre-processing operations (as will be discussed in further detail below with regard to Fig. 37). Thus, the selection logic 132 gives the ISP front-end logic 80 the flexibility of providing either un-pre-processed raw image data from Sensor0 (signal 140) and Sensor1 (signal 150), or pre-processed raw image data from StatsPipe0 (signal 159) and StatsPipe1 (signal 161). Additionally, as indicated by the selection logic units 162 and 163, the ISP front-end logic 80 also has the flexibility of writing either un-pre-processed raw image data from Sensor0 (signal 134) or Sensor1 (signal 144) to the memory 108, or writing pre-processed raw image data from StatsPipe0 (signal 159) or StatsPipe1 (signal 161) to the memory 108.

[0154] To control the operation of the ISP front-end logic 80, a front-end control unit 164 is provided. The control unit 164 may be configured to initialize and program control registers (referred to herein as "go registers") for configuring and starting the processing of an image frame, and to select the appropriate register bank (or banks) for updating double-buffered data registers. In some embodiments, the control unit 164 may also provide performance monitoring logic to log clock cycles, memory latency, and quality of service (QOS) information. Further, the control unit 164 may also control dynamic clock gating, which may be used to disable clocks to one or more portions of the ISP front-end unit 80 when there is not enough data in the input queue from the active sensor.

[0155] Using the aforementioned "go registers", the control unit 164 is able to update various parameters for each of the processing units (e.g., StatsPipe0, StatsPipe1, and FEProc) and may interface with the sensor interfaces to control the starting and stopping of the processing units. Generally, each of the front-end processing units operates on a frame-by-frame basis. As discussed above (Table 1), the input to the processing units may come from a sensor interface (Sens0 or Sens1) or from the memory 108. Further, the processing units may utilize various parameters and configuration data that may be stored in corresponding data registers. In one embodiment, the data registers associated with each processing unit or destination may be grouped into blocks forming a register bank group. In the embodiment of Fig. 8, seven register bank groups may be defined in the ISP front-end: SIf0, SIf1, StatsPipe0, StatsPipe1, ProcPipe, FEOut, and ProcIn. Each register block address space is duplicated to provide two banks of registers. Only registers that are double-buffered are instantiated in the second bank. If a register is not double-buffered, the address in the second bank may be mapped to the address of the same register in the first bank.

[0156] For registers that are double-buffered, the registers from one bank are active and are used by the processing units, while the registers from the other bank are shadowed. The shadowed registers may be updated by the control unit 164 during the current frame interval while the hardware is using the active registers. The determination of which bank to use for a particular processing unit at a particular frame may be specified by a "NextBk" (next bank) field in the go register corresponding to the source providing the image data to that processing unit. Essentially, NextBk is a field that allows the control unit 164 to control which register bank becomes active on the triggering event for the subsequent frame.
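
The following Python sketch illustrates, under assumed names and a deliberately simplified model, the active/shadow bank mechanism described above: software writes the shadow bank during the current frame interval, and the NextBk selection takes effect at the triggering event for the next frame.

    class DoubleBufferedRegisters:
        """Toy model of a double-buffered register group (Bank 0 / Bank 1)."""

        def __init__(self):
            self.banks = [dict(), dict()]     # Bank 0 and Bank 1
            self.active_bank = 0              # bank currently used by the hardware

        def write_shadow(self, name: str, value) -> None:
            """Program the non-active (shadow) bank during the current frame."""
            self.banks[1 - self.active_bank][name] = value

        def read_active(self, name: str):
            return self.banks[self.active_bank].get(name)

        def apply_next_bank(self, next_bk: int) -> None:
            """At the triggering event, NextBk selects the bank for the next frame."""
            self.active_bank = next_bk

    regs = DoubleBufferedRegisters()
    regs.write_shadow("filter_threshold", 42)    # hypothetical parameter name
    regs.apply_next_bank(1)                      # NextBk = 1 takes effect at frame start
    print(regs.read_active("filter_threshold"))  # 42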

[0157] Before discussing the operation of the go registers in detail, Fig. 9 provides a general method 166 for processing image data on a frame-by-frame basis in accordance with the present techniques. Beginning at step 168, the destination processing units targeted by a data source (e.g., Sens0, Sens1, Sens0DMA, Sens1DMA, or ProcInDMA) enter an idle state. This may indicate that processing for the current frame is completed and, therefore, the control unit 164 may prepare for processing the next frame. For instance, at step 170, programmable parameters for each destination processing unit are updated. This may include, for example, updating the NextBk field in the go register corresponding to the source, as well as updating any parameters in the data registers corresponding to the destination units. Thereafter, at step 172, a triggering event may place the destination units into a run state. Further, as shown at step 174, each destination unit targeted by the source completes its processing operations for the current frame, and the method 166 may subsequently return to step 168 for the processing of the next frame.

[0158] Fig. 10 depicts a block diagram view showing two banks of data registers 176 and 178 that may be used by the various destination units of the ISP front-end. For instance, Bank 0 (176) may include the data registers 1-n (176a-176d), and Bank 1 (178) may include the data registers 1-n (178a-178d). As discussed above, the embodiment shown in Fig. 8 may utilize a register bank (Bank 0) having seven register bank groups (e.g., SIf0, SIf1, StatsPipe0, StatsPipe1, ProcPipe, FEOut, and ProcIn). Thus, in such an embodiment, the register block address space of each register is duplicated to provide a second register bank (Bank 1).

[0159] Fig. 10 also illustrates a go register 180 that may correspond to one of the sources. As shown, the go register 180 includes a "NextVld" field 182 and the above-mentioned "NextBk" field 184. These fields may be programmed prior to starting the processing of the current frame. Particularly, NextVld may indicate the destination(s) to which data from the source is to be sent. As discussed above, NextBk may select a corresponding data register from either Bank0 or Bank1 for each destination indicated by NextVld. Though not shown in Fig. 10, the go register 180 may also include an arming bit, referred to herein as a "go bit", which may be set to arm the go register. When a triggering event 192 for the current frame is detected, NextVld and NextBk may be copied into a CurrVld field 188 and a CurrBk field 190 of a corresponding current or "active" register 186. In one embodiment, the current register(s) 186 may be read-only registers that may be set by hardware while remaining inaccessible to software commands within the ISP front-end 80.

[0160] As will be appreciated, for each ISP front-end source, a corresponding go register may be provided. For the purposes of this disclosure, the go registers corresponding to the above-discussed sources Sens0, Sens1, Sens0DMA, Sens1DMA, and ProcInDMA may be referred to as Sens0Go, Sens1Go, Sens0DMAGo, Sens1DMAGo, and ProcInDMAGo, respectively. As mentioned above, the control unit may utilize the go registers to control the sequencing of frame processing within the ISP front-end 80. Each go register contains a NextVld field and a NextBk field to indicate, respectively, which destinations will be valid and which register bank (0 or 1) will be used for the next frame. When the triggering event 192 of the next frame occurs, the NextVld and NextBk fields are copied to the corresponding active read-only register 186, which indicates the current valid destinations and bank numbers, as shown above in Fig. 10. Each source may be configured to operate asynchronously and may send data to any of its valid destinations. Further, it should be understood that for each destination, generally only one source may be active during a current frame.

[0161] With regard to the arming of the go register 180, asserting an arming bit or "go bit" in the go register 180 arms the corresponding source with the associated NextVld and NextBk fields. For triggering, various modes are available depending upon whether the source input data is read from memory (e.g., Sens0DMA, Sens1DMA, or ProcInDMA), or whether the source input data comes from a sensor interface (e.g., Sens0 or Sens1). For instance, if the input comes from the memory 108, the arming of the go bit itself may serve as the triggering event, since the control unit 164 has control over when data is read from the memory 108. If the image frames are being input by the sensor interface, then the triggering event may depend upon the timing at which the corresponding go register is armed relative to when data from the sensor interface is received. In accordance with the present embodiment, three different techniques for triggering timing from a sensor interface input are shown in Fig. 11-13.

[0162] Referring first to Fig. 11, a first scenario is illustrated in which triggering occurs once all destinations targeted by the source have transitioned from a busy or run state to an idle state. Here, the data signal VVALID (196) represents an image data signal from the source. The pulse 198 represents a current frame of image data, the pulse 202 represents the next frame of image data, and the interval 200 represents a vertical blanking interval (VBLANK) 200 (e.g., representing the time differential between the last line of the current frame 198 and the next frame 202). The time differential between the rising edge and the falling edge of the pulse 198 represents a frame interval 201. Thus, in Fig. 11, the source may be configured to trigger when all targeted destinations have finished processing operations on the current frame 198 and transition to an idle state. In this scenario, the source is armed (e.g., by setting the arming or "go" bit) before the destinations complete processing, so that the source may trigger and initiate processing of the next frame 202 as soon as the targeted destinations go idle. During the vertical blanking interval 200, the processing units may be set up and configured for the next frame 202 using the register banks specified by the go register corresponding to the source, before the sensor input data arrives. By way of example only, read buffers used by FEProc 130 may be filled before the next frame 202 arrives. In this case, shadowed registers corresponding to the active register banks may be updated after the triggering event, thus allowing a full frame interval to set up the double-buffered registers for the next frame (e.g., after frame 202).

[0163] Fig. 12 illustrates a second scenario in which the source is triggered by arming the go bit in the go register corresponding to the source. Under this "trigger-on-go" configuration, the destination units targeted by the source are already idle, and the arming of the go bit is the triggering event. This triggering mode may be utilized for registers that are not double-buffered and, therefore, are updated during the vertical blanking interval (e.g., as opposed to updating a double-buffered shadow register during the frame interval 201).

[0164] Fig. 13 illustrates a third triggering mode in which the source is triggered upon detecting the start of the next frame, i.e., a rising VSYNC. However, it should be noted that in this mode, if the go register is armed (by setting the go bit) after the next frame 202 has already started processing, the source will use the target destinations and register banks corresponding to the previous frame, since the CurrVld and CurrBk fields are not updated before the destination begins processing. This leaves no vertical blanking interval for setting up the destination processing units and may potentially result in dropped frames, particularly when operating in a dual-sensor mode. It should be noted, however, that this mode may nonetheless result in accurate operation if the image processing circuitry 32 is operating in a single-sensor mode that uses the same register banks for each frame (e.g., the destinations (NextVld) and register banks (NextBk) do not change).

[0165] Referring now to Fig. 14, a control register (or "go register") 180 is illustrated in more detail. The go register 180 includes an arming "go" bit 204, as well as the NextVld field 182 and the NextBk field 184. As discussed above, each source (e.g., Sens0, Sens1, Sens0DMA, Sens1DMA, or ProcInDMA) of the ISP front-end 80 may have a corresponding go register 180. In one embodiment, the go bit 204 may be a single-bit field, and the go register 180 may be armed by setting the go bit 204 to 1. The NextVld field 182 may contain a number of bits corresponding to the number of destinations in the ISP front-end 80. For instance, in the embodiment shown in Fig. 8, the ISP front-end includes six destinations: Sif0DMA, Sif1DMA, StatsPipe0, StatsPipe1, FEProc, and FEOut. Thus, the go register 180 may include six bits in the NextVld field 182, with one bit corresponding to each destination, and wherein targeted destinations are set to 1. Similarly, the NextBk field 184 may contain a number of bits corresponding to the number of data registers in the ISP front-end 80. For instance, as discussed above, the embodiment of the ISP front-end 80 shown in Fig. 8 may include seven data registers: SIf0, SIf1, StatsPipe0, StatsPipe1, ProcPipe, FEOut, and ProcIn. Accordingly, the NextBk field 184 may include seven bits, with one bit corresponding to each data register, and wherein the data registers corresponding to Bank 0 and Bank 1 are selected by setting their respective bit values to 0 or 1, respectively. Thus, using the go register 180, the source, upon triggering, knows precisely which destination units are to receive frame data, and which register banks are to be used for configuring the targeted destination units.
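
For illustration only, the field widths described above (one go bit, six NextVld bits, seven NextBk bits) could be packed into a register word as sketched below. The bit ordering and the helper names are editorial assumptions; the disclosure does not specify a physical bit layout.

    DESTINATIONS   = ["SIf0DMA", "SIf1DMA", "StatsPipe0", "StatsPipe1", "FEProc", "FEOut"]
    DATA_REGISTERS = ["SIf0", "SIf1", "StatsPipe0", "StatsPipe1", "ProcPipe", "FEOut", "ProcIn"]

    def pack_go_register(go: bool, next_vld: dict, next_bk: dict) -> int:
        word = int(go)                                     # assumed bit 0: arming ("go") bit
        for n, dest in enumerate(DESTINATIONS):            # assumed bits 1..6: NextVld
            word |= int(next_vld.get(dest, 0)) << (1 + n)
        for n, reg in enumerate(DATA_REGISTERS):           # assumed bits 7..13: NextBk (bank 0 or 1)
            word |= int(next_bk.get(reg, 0)) << (7 + n)
        return word

    # Single-sensor example of Table 2: Sens0Go targets SIf0DMA, StatsPipe0 and FEProc.
    word = pack_go_register(
        go=True,
        next_vld={"SIf0DMA": 1, "StatsPipe0": 1, "FEProc": 1},
        next_bk={"ProcPipe": 1},          # use Bank 1 of the ProcPipe data registers
    )
    print(bin(word))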

[0166] Additionally, due to the dual-sensor configuration supported by the ISP circuitry 32, the ISP front-end may operate in a single-sensor configuration mode (e.g., only one sensor is acquiring data) and in a dual-sensor configuration mode (e.g., both sensors are acquiring data). In a typical single-sensor configuration, input data from a sensor interface, such as Sens0, is sent to StatsPipe0 (for statistics processing) and FEProc (for pixel processing). In addition, sensor frames may also be sent to memory (SIf0DMA) for future processing, as discussed above.

[0167] An example of how the NextVld fields corresponding to each source of the ISP front-end 80 may be configured when operating in the single-sensor mode is provided below in Table 2.

Table 2
Example of NextVld per source: single-sensor mode

Source        SIf0DMA   SIf1DMA   StatsPipe0   StatsPipe1   FEProc   FEOut
Sens0Go       1         X         1            0            1        0
Sens1Go       X         0         0            0            0        0
Sens0DMAGo    X         X         0            X            X        X
Sens1DMAGo    X         X         X            0            X        X
ProcInDMAGo   X         X         X            X            0        0

As discussed above with reference to Table 1, the ISP front-end 80 may be configured such that only certain destinations are valid for a particular source. Thus, the destinations in Table 2 marked with "X" are intended to indicate that the ISP front-end 80 is not configured to allow that particular source to send frame data to that destination. For such destinations, the bits of the NextVld field of the particular source corresponding to that destination may always be 0. It should be understood, however, that this is merely one embodiment and, indeed, in other embodiments, the ISP front-end 80 may be configured such that each source is capable of targeting each available destination unit.

[0168] The configuration shown above in Table 2 represents a single-sensor mode in which only Sensor0 is providing frame data. For instance, the Sens0Go register indicates the destinations as being SIf0DMA, StatsPipe0, and FEProc. Thus, when triggered, each frame of the Sensor0 image data is sent to these three destinations. As discussed above, SIf0DMA may store frames in the memory 108 for later processing, StatsPipe0 applies statistics processing to determine various statistical data points, and FEProc processes the frame using, for example, temporal filtering and binning compensation filtering. Further, in some configurations where additional statistics are desired (e.g., statistics in different color spaces), StatsPipe1 may also be enabled (its corresponding NextVld bit set to 1) during the single-sensor mode. In such embodiments, the Sensor0 frame data is sent to both StatsPipe0 and StatsPipe1. Further, as shown in the present embodiment, only a single sensor interface (e.g., Sens0 or, alternatively, Sens1) is the only active source during the single-sensor mode.

[0169] With this in mind, Fig. 15 provides a flowchart depicting a method 206 for processing frame data in the ISP front-end 80 when only a single sensor is active (e.g., Sensor0). While the method 206 illustrates, in particular, the processing of the Sensor0 frame data by FEProc 130 as an example, it should be understood that this process may be applied to any other source and corresponding destination unit in the ISP front-end 80. Beginning at step 208, Sensor0 begins acquiring image data and sending the captured frames to the ISP front-end 80. The control unit 164 may initialize programming of the go register corresponding to Sens0 (the Sensor0 interface) to determine the target destinations (including FEProc) and the register banks to use, as shown at step 210. Thereafter, decision logic 212 determines whether a source triggering event has occurred. As discussed above, frame data input from a sensor interface may utilize different triggering modes (Fig. 11-13). If a triggering event is not detected, the process 206 continues to wait for the trigger. Once triggering occurs, the next frame becomes the current frame and is sent to FEProc (and other target destinations) for processing at step 214. FEProc may be configured using data parameters based on the corresponding data register (ProcPipe) specified in the NextBk field of the Sens0Go register. After processing of the current frame is completed at step 216, the method 206 may return to step 210, at which point the Sens0Go register is programmed for the next frame.

[0170] When both Sensor0 and Sensor1 of the ISP front-end 80 are active, statistics processing remains generally straightforward, since each sensor input may be processed by a respective statistics block, StatsPipe0 and StatsPipe1. However, because the illustrated embodiment of the ISP front-end 80 provides only a single pixel processing unit (FEProc), FEProc may be configured to alternate between processing frames corresponding to Sensor0 input data and frames corresponding to Sensor1 input data. As will be appreciated, the image frames are read into FEProc from memory in the illustrated embodiment in order to avoid a condition in which image data from one sensor is processed in real time while image data from the other sensor is not processed in real time. For instance, as shown below in Table 3, which depicts one possible configuration of the NextVld fields in the go registers for each source when the ISP front-end 80 is operating in a dual-sensor mode, the input data from each sensor is sent to memory (SIf0DMA and SIf1DMA) and to the corresponding statistics processing unit (StatsPipe0 and StatsPipe1).

Table 3
Example of NextVld per source: dual-sensor mode

Source        SIf0DMA   SIf1DMA   StatsPipe0   StatsPipe1   FEProc   FEOut
Sens0Go       1         X         1            0            0        0
Sens1Go       X         1         0            1            0        0
Sens0DMAGo    X         X         0            X            X        X
Sens1DMAGo    X         X         X            0            X        X
ProcInDMAGo   X         X         X            X            1        0

[0171] The sensor frames in memory are sent to FEProc from the ProcInDMA source, such that they alternate between Sensor0 and Sensor1 at a rate based upon their corresponding frame rates. For instance, if Sensor0 and Sensor1 are both acquiring image data at a rate of 30 frames per second (fps), then their sensor frames may be interleaved in a 1-to-1 manner. If Sensor0 (30 fps) is acquiring image data at a rate twice that of Sensor1 (15 fps), then the interleaving may be 2-to-1, for example. That is, two frames of Sensor0 data are read out of memory for every one frame of Sensor1 data.
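
A brief sketch of the interleaving ratio described above follows (the scheduling policy shown is an assumption; only the 1-to-1 and 2-to-1 ratios come from the text):

    from math import gcd

    def interleave_pattern(fps0: int, fps1: int):
        """Yield 'Sensor0'/'Sensor1' frame reads in proportion to the frame rates.
        30/30 fps alternates one-to-one; 30/15 fps reads two Sensor0 frames for
        every Sensor1 frame."""
        g = gcd(fps0, fps1)
        n0, n1 = fps0 // g, fps1 // g
        while True:
            for _ in range(n0):
                yield "Sensor0"
            for _ in range(n1):
                yield "Sensor1"

    reads = interleave_pattern(30, 15)
    print([next(reads) for _ in range(6)])   # ['Sensor0', 'Sensor0', 'Sensor1', ...]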

[0172] With the foregoing in mind, Fig. 16 depicts a method 220 for processing frame data in the ISP front-end 80 having two sensors acquiring image data simultaneously. At step 222, both Sensor0 and Sensor1 begin acquiring image frames. As will be appreciated, Sensor0 and Sensor1 may acquire the image frames using different frame rates, resolutions, and so forth. At step 224, the acquired frames from Sensor0 and Sensor1 are written to the memory 108 (e.g., using the SIf0DMA and SIf1DMA destinations). Next, the ProcInDMA source reads the frame data from the memory 108 in an alternating manner, as indicated at step 226. As discussed, the frames may alternate between Sensor0 data and Sensor1 data depending upon the frame rates at which the data is acquired. At step 228, the next frame from ProcInDMA is acquired. Thereafter, at step 230, the NextVld and NextBk fields of the go register corresponding to the source, here ProcInDMA, are programmed depending upon whether the next frame is Sensor0 or Sensor1 data. Thereafter, decision logic 232 determines whether a source triggering event has occurred. As discussed above, data input from memory may be triggered by arming the go bit (e.g., the "trigger-on-go" mode). Thus, triggering may occur once the go bit of the go register is set to 1. Once triggering occurs, the next frame becomes the current frame and is sent to FEProc for processing at step 234. As discussed above, FEProc may be configured using data parameters based on the corresponding data register (ProcPipe) specified in the NextBk field of the ProcInDMAGo register. After processing of the current frame is completed at step 236, the method 220 may return to step 228 and continue.

[0173] A further operational event that the ISP front-end 80 is configured to handle is a configuration change during image processing. For instance, such an event may occur when the ISP front-end 80 transitions from a single-sensor configuration to a dual-sensor configuration, or vice versa. As discussed above, the NextVld fields for certain sources may differ depending on whether one or both image sensors are active. Thus, when the sensor configuration is changed, the ISP front-end control unit 164 may release all destination units before they are targeted by a new source. This avoids invalid configurations (e.g., the assignment of multiple sources to a single destination). In one embodiment, the release of the destination units may be accomplished by setting the NextVld fields of all the go registers to 0, thus disabling all destinations, and arming the go bit. After the destination units are released, the go registers may be reconfigured depending on the current sensor mode, and image processing may continue.

[0174] A method 240 for switching between single-sensor and dual-sensor configurations is shown in Fig. 17, in accordance with one embodiment. Beginning at step 242, a next frame of image data from a particular source of the ISP front-end 80 is identified. At step 244, the target destinations (NextVld) are programmed into the go register corresponding to the source. Next, at step 246, depending on the target destinations, NextBk is programmed to point to the correct data registers associated with the target destinations. Thereafter, decision logic 248 determines whether a source triggering event has occurred. Once triggering occurs, the next frame is sent to the destination units specified by NextVld and processed by the destination units using the corresponding data registers specified by NextBk, as shown at step 250. The processing continues until step 252, at which the processing of the current frame is completed.

[0175] Subsequently, decision logic 254 determines whether the target destinations for the source have changed. As discussed above, the NextVld settings of the go registers corresponding to Sens0 and Sens1 may vary depending on whether one sensor or two sensors are active. For instance, referring to Table 2, if only Sensor0 is active, the Sensor0 data is sent to SIf0DMA, StatsPipe0, and FEProc. However, referring to Table 3, if both Sensor0 and Sensor1 are active, then the Sensor0 data is not sent directly to FEProc. Instead, as mentioned above, the Sensor0 and Sensor1 data is written to the memory 108 and read out to FEProc in an alternating manner by the ProcInDMA source. Thus, if no target destination change is detected at decision logic 254, the control unit 164 deduces that the sensor configuration has not changed, and the method 240 returns to step 246, at which the NextBk field of the source go register is programmed to point to the correct data registers for the next frame, and processing continues.

[0176] If, however, a destination change is detected at decision logic 254, the control unit 164 determines that a sensor configuration change has occurred. For instance, this could represent switching from the single-sensor mode to the dual-sensor mode, or shutting off both sensors. Accordingly, the method 240 continues to step 256, at which all bits of the NextVld fields for all go registers are set to 0, thus effectively disabling the sending of frames to any destination on the next trigger. Then, at decision logic 258, a determination is made as to whether all destination units have transitioned to an idle state. If not, the method 240 waits at decision logic 258 until all destination units have completed their current operations. Next, at decision logic 260, a determination is made as to whether image processing is to continue. For instance, if the destination change represents the deactivation of both Sensor0 and Sensor1, then image processing ends at step 262. However, if it is determined that image processing is to continue, then the method 240 returns to step 244, and the NextVld fields of the go registers are programmed in accordance with the current operational mode (e.g., single-sensor or dual-sensor). As shown here, the steps 254-262 for clearing the go registers and destination fields may be collectively referred to by reference number 264.

[0177] Next, Fig. 18 shows a further embodiment by way of a flowchart (method 265) that provides for another dual-sensor mode of operation. The method 265 depicts a condition in which one sensor (e.g., Sensor0) is actively acquiring image data and sending the image frames to FEProc 130 for processing, while also sending the image frames to StatsPipe0 and/or the memory 108 (Sif0DMA), while the other sensor (e.g., Sensor1) is inactive (e.g., turned off), as shown at step 266. Decision logic 268 then detects a condition in which Sensor1 will become active on the next frame to send image data to FEProc. If this condition is not met, the method 265 returns to step 266. If, however, this condition is met, the method 265 proceeds by performing action 264 (collectively, the steps 254-262 of Fig. 17), whereby the destination fields of the sources are cleared and reconfigured at step 264. For instance, at step 264, the NextVld field of the go register associated with Sensor1 may be programmed to specify FEProc as a destination, as well as StatsPipe1 and/or the memory (Sif1DMA), while the NextVld field of the go register associated with Sensor0 may be programmed to clear FEProc as a destination. In this embodiment, although the frames captured by Sensor0 are not sent to FEProc on the next frame, Sensor0 may remain active and continue to send its image frames to StatsPipe0, as shown at step 270, while Sensor1 captures and sends data to FEProc for processing at step 272. Thus, both sensors, Sensor0 and Sensor1, may continue to operate in this "dual-sensor" mode, although only image frames from one sensor are sent to FEProc for processing. For the purposes of this example, a sensor sending frames to FEProc for processing may be referred to as the "active sensor", a sensor that is not sending frames to FEProc but is still sending data to the statistics processing units may be referred to as a "semi-active sensor", and a sensor that is not acquiring any data at all may be referred to as an "inactive sensor".

[0178] One benefit of the foregoing technique is that, because statistics continue to be acquired for the semi-active sensor (Sensor0), the next time the semi-active sensor transitions to an active state and the current active sensor (Sensor1) transitions to a semi-active or inactive state, the semi-active sensor may begin acquiring data within one frame, since the color balance and exposure parameters may already be available due to the continued collection of image statistics. This technique may be referred to as "hot switching" of the image sensors and avoids the drawbacks associated with "cold starts" of the image sensors (e.g., starting with no statistics information available). Further, to save power, since each source is asynchronous (as mentioned above), the semi-active sensor may operate at a reduced clock and/or frame rate during the semi-active period.

[0179] Before continuing with a more detailed description of the statistics processing and pixel processing operations depicted in the ISP front-end logic 80 of Fig. 8, it is believed that a brief introduction regarding the definitions of various ISP frame regions will help to facilitate a better understanding of the present subject matter. With this in mind, various frame regions that may be defined within an image source frame are illustrated in Fig. 19. The format of a source frame provided to the image processing circuitry 32 may use the tiled or linear addressing modes discussed above, and may utilize pixel formats of 8, 10, 12, or 14-bit precision. The image source frame 274, as shown in Fig. 19, may include a sensor frame region 276, a raw frame region 278, and an active region 280. The sensor frame 276 is generally the maximum frame size that the image sensor 90 may provide to the image processing circuitry 32. The raw frame region 278 may be defined as the region of the sensor frame 276 that is sent to the ISP front-end logic 80. The active region 280 may be defined as a portion of the source frame 274, typically within the raw frame region 278, on which processing is performed for a particular image processing operation. In accordance with embodiments of the present techniques, the active region 280 may be the same or may be different for different image processing operations.

[0180] In accordance with aspects of the present techniques, the ISP front-end logic 80 only receives the raw frame 278. Thus, for the purposes of the present discussion, the global frame size for the ISP front-end logic 80 may be assumed to be the raw frame size, as determined by the width 282 and the height 284. In some embodiments, the offsets from the boundaries of the sensor frame 276 to the raw frame 278 may be determined and/or maintained by the control logic 84. For instance, the control logic 84 may include hardware and software that may determine the raw frame region 278 based upon input parameters, such as the x-offset 286 and the y-offset 288, which are specified relative to the sensor frame 276. Further, in some cases, a processing unit within the ISP front-end logic 80 or the ISP pipeline logic 82 may have a defined active region, such that pixels in the raw frame but outside the active region 280 will not be processed, i.e., will be left unchanged. For instance, an active region 280 for a particular processing unit, having a width 290 and a height 292, may be defined based upon an x-offset 294 and a y-offset 296 relative to the raw frame 278. Further, where an active region is not specifically defined, one embodiment of the image processing circuitry 32 may assume that the active region 280 is the same as the raw frame 278 (e.g., the x-offset 294 and the y-offset 296 are both equal to 0). Thus, for the purposes of image processing operations performed on the image data, boundary conditions may be defined with respect to the boundaries of the raw frame 278 or the active region 280.
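
A minimal sketch of the frame-region bookkeeping described above follows. The numeric offsets and dimensions used in the example are arbitrary illustrative values, not values taken from this disclosure.

    from dataclasses import dataclass

    @dataclass
    class Region:
        x_offset: int     # left-edge offset relative to the parent region
        y_offset: int     # top-edge offset relative to the parent region
        width: int
        height: int

    def active_in_sensor_coords(raw: Region, active: Region) -> Region:
        """Express the active region 280 in sensor-frame coordinates, given the
        raw frame 278 offsets within the sensor frame 276 and the active region
        offsets within the raw frame. An unspecified active region is assumed
        to coincide with the raw frame (zero offsets)."""
        return Region(raw.x_offset + active.x_offset,
                      raw.y_offset + active.y_offset,
                      active.width, active.height)

    raw_frame = Region(x_offset=8, y_offset=8, width=2584, height=1936)      # illustrative only
    active    = Region(x_offset=16, y_offset=16, width=2552, height=1904)    # illustrative only
    print(active_in_sensor_coords(raw_frame, active))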

[0181] With this in mind, Fig. 20 illustrates a more detailed view of the ISP front-end pixel processing logic 130 (previously discussed in Fig. 8), in accordance with an embodiment of the present techniques. As shown, the ISP front-end pixel processing logic 130 includes a temporal filter 298 and a binning compensation filter 300. The temporal filter 298 may receive one of the input image signals Sif0, Sif1, FEProcIn, or pre-processed image signals (e.g., 159, 161), and may operate on the raw pixel data before any additional processing is performed. For example, the temporal filter 298 may initially process the image data to reduce noise by averaging image frames in the temporal direction. The binning compensation filter 300, which is discussed in more detail below, may apply scaling and re-sampling to binned raw image data from an image sensor (e.g., 90a, 90b) in order to maintain an even spatial distribution of the image pixels.

[0182] The temporal filter 298 may be pixel-adaptive based upon motion and brightness characteristics. For instance, when pixel motion is high, the filtering strength may be reduced in order to avoid the appearance of "trailing" or "ghosting" artifacts in the resulting processed image, whereas the filtering strength may be increased when little or no motion is detected. Additionally, the filtering strength may also be adjusted based upon brightness data (e.g., "luma"). For instance, as the image brightness increases, filtering artifacts may become more noticeable to the human eye. Thus, the filtering strength may be further reduced when a pixel has a high level of brightness.

[0183] In applying the temporal filtering, the temporal filter 298 may receive reference pixel data (Rin) and a motion history input (Hin), which may come from a previously filtered frame or an original frame. Using these parameters, the temporal filter 298 may provide a motion history output (Hout) and a filtered pixel output (Yout). The filtered pixel output Yout is then passed to the binning compensation filter 300, which may be configured to perform one or more scaling operations on the filtered pixel output data Yout to produce the output signal FEProcOut. The processed pixel data FEProcOut may then be forwarded to the ISP pipeline logic 82, as discussed above.

[0184] Fig. 21 illustrates a process diagram depicting a temporal filtering process 302 that may be performed by the temporal filter shown in Fig. 20, in accordance with a first embodiment. The temporal filter 298 may include a 2-tap filter in which the filter coefficients are adjusted adaptively on a per-pixel basis based, at least in part, on motion and brightness data. For instance, an input pixel x(t), with the variable "t" denoting a temporal value, may be compared to a reference pixel r(t-1) in the previously filtered frame or the previous original frame to generate a motion lookup index into a motion history table (M) 304, which may contain filter coefficients. Additionally, based on the motion history input h(t-1), a motion history output h(t) corresponding to the current input pixel x(t) may be determined.

[0185] The motion history output h(t) and a filter coefficient K may be determined based on a motion delta d(j,i,t), where (j,i) represent the coordinates of the spatial location of the current pixel x(j,i,t). The motion delta d(j,i,t) may be computed by determining the maximum of three absolute deltas between original and reference pixels for three horizontally adjacent pixels of the same color. For instance, referring briefly to Fig. 22, the spatial locations of three neighboring reference pixels 308, 309, and 310 that correspond to original input pixels 312, 313, and 314 are illustrated. In one embodiment, the motion delta may be calculated based on these original and reference pixels using the formula below:

d(j,i,t) = max[abs(x(j,i-2,t) - r(j,i-2,t-1)), abs(x(j,i,t) - r(j,i,t-1)), abs(x(j,i+2,t) - r(j,i+2,t-1))]    (1a)

A flowchart depicting this technique for determining the motion delta value is further illustrated in Fig. 24 below. Further, it should be understood that the technique for calculating the motion delta value, as shown above in equation 1a (and below in Fig. 24), is intended to provide only one embodiment for determining a motion delta value.

[0186] In other embodiments, a matrix of same-colored pixels may be evaluated to determine a motion delta value. For instance, in addition to the three pixels referenced in equation 1a, one embodiment for determining motion delta values may also include evaluating the absolute deltas between the same-colored pixels from two rows above (e.g., j-2; assuming a Bayer pattern) the pixels 312, 313, and 314 and their corresponding reference pixels, and from two rows below (e.g., j+2; assuming a Bayer pattern) the pixels 312, 313, and 314 and their corresponding reference pixels. For instance, in one embodiment, the motion delta value may be expressed as follows:

Thus, in the embodiment represented by equation 1b, the motion delta value may be determined by comparing the absolute deltas between a 3x3 matrix of same-colored pixels, with the current pixel (313) located at the center of the 3x3 matrix (e.g., actually a 5x5 matrix for Bayer color patterns if pixels of different colors are counted). It should be appreciated that any suitable two-dimensional matrix of same-colored pixels (e.g., including a matrix with all pixels in the same row (e.g., equation 1a), or a matrix with all pixels in the same column), with the current pixel (e.g., 313) located at the center of the matrix, could be analyzed to determine a motion delta value. Further, while the motion delta value may be determined as the maximum of the absolute deltas (e.g., as shown in equations 1a and 1b), in other embodiments the motion delta value may also be selected as the mean or median of the absolute deltas. Additionally, the foregoing techniques may also be applied to other types of color filter arrays (e.g., RGBW, CYGM, etc.), and are not intended to be exclusive to Bayer patterns.
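By way of illustration only, the following Python sketch shows the motion delta computations described above for equations 1a and 1b. The frame representation (2-D lists of raw Bayer samples), the function names, and the use of max() are illustrative assumptions; only the choice of same-colored neighbours (two columns and, for the 3x3 variant, two rows apart) follows the text.

    def motion_delta_1a(x, r, j, i):
        """Maximum absolute delta over three horizontally adjacent same-colored pixels (eq. 1a).
        x: current frame, r: reference (previous) frame, both 2-D lists of raw samples."""
        return max(abs(x[j][i - 2] - r[j][i - 2]),
                   abs(x[j][i] - r[j][i]),
                   abs(x[j][i + 2] - r[j][i + 2]))

    def motion_delta_1b(x, r, j, i):
        """3x3 same-color variant (eq. 1b): also evaluates the rows two lines above and below."""
        deltas = [abs(x[j + dj][i + di] - r[j + dj][i + di])
                  for dj in (-2, 0, 2)
                  for di in (-2, 0, 2)]
        return max(deltas)  # a mean or median of the deltas could be used instead, as noted above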

[0187] Referring back to Fig. 21, once the motion delta value is determined, a motion lookup index, which may be used to select a filter coefficient K from the motion table (M) 304, may be calculated by summing the motion delta d(t) for the current pixel (e.g., at spatial location (j,i)) with the motion history input h(t-1). For instance, the filter coefficient K may be determined as follows:

K = M[d(j,i,t) + h(j,i,t-1)]    (2a)

Additionally, the motion history output h(t) may be determined using the following formula:

[0188] Next, the brightness of the current input pixel x(t) may be used to generate a luma lookup index into the luma table (L) 306. In one embodiment, the luma table may contain attenuation factors that may be between 0 and 1, and may be selected based upon the luma index. A second filter coefficient, K', may be calculated by multiplying the first filter coefficient K by the luma attenuation factor, as shown in the following equation:

K' = K * L[x(j,i,t)]    (4a)

[0189] The determined value for K' may then be used as the filter coefficient for the temporal filter 298. As discussed above, the temporal filter 298 may be a 2-tap filter. Additionally, the temporal filter 298 may be configured as an infinite impulse response (IIR) filter using the previous filtered frame, or as a finite impulse response (FIR) filter using the previous original frame. The temporal filter 298 may compute the filtered output pixel y(t) (Yout) using the current input pixel x(t), the reference pixel r(t-1), and the filter coefficient K', using the following formula:

As discussed above, the temporal filtering process 302 shown in Fig. 21 may be performed on a pixel-by-pixel basis. In one embodiment, the same motion table M and luma table L may be used for all color components (e.g., R, G, and B). Additionally, some embodiments may provide a bypass mechanism, in which the temporal filtering may be bypassed, for instance in response to a control signal from the control logic 84. Further, as will be discussed below with respect to Fig. 26 and 27, one embodiment of the temporal filter 298 may utilize separate motion and luma tables for each color component of the image data.
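To summarize the per-pixel temporal filtering of Fig. 21, the following hedged sketch strings the lookups together. The motion and luma tables, their sizes, the clamping of the lookup indices, the motion-history update, and the final 2-tap blend y = x + K'*(r - x) are illustrative assumptions, since equations 3a and 5a are not reproduced in this text; only the lookup index formed as d(t) + h(t-1) (equation 2a) and the product K' = K * attenuation (equation 4a) follow the description above.

    def temporal_filter_pixel(x, r, h_prev, d, motion_table, luma_table, max_pixel=1023):
        """x: current pixel x(t); r: reference pixel r(t-1); h_prev: motion history h(t-1);
        d: motion delta d(t). Table sizes and clamping are illustrative assumptions."""
        m_index = min(d + h_prev, len(motion_table) - 1)          # lookup index per equation 2a
        k = motion_table[m_index]                                 # first filter coefficient K
        h_out = d + k * h_prev                                    # assumed motion-history update
        l_index = min(x * len(luma_table) // (max_pixel + 1), len(luma_table) - 1)
        k_prime = k * luma_table[l_index]                         # K' = K * attenuation (eq. 4a)
        y = x + k_prime * (r - x)                                 # assumed 2-tap blend
        return y, h_out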

[0190] An embodiment of the temporal filtering technique described with reference to Fig. 21 and 22 may be better understood with reference to Fig. 23, which depicts a flowchart illustrating a method 315 in accordance with the embodiment described above. The method 315 begins at step 316, at which a current pixel x(t) located at spatial position (j,i) of the current frame of image data is received by the temporal filtering system 302. At step 317, a motion delta value d(t) is determined for the current pixel x(t) based, at least in part, on one or more neighboring reference pixels (e.g., r(t-1)) from a previous frame of image data (e.g., the image frame immediately preceding the current frame). The technique for determining the motion delta value d(t) at step 317 is further explained below with reference to Fig. 24, and may be performed in accordance with equation 1a, as shown above.

[0191] Once the motion delta value d(t) is obtained at step 317, a motion table lookup index may be determined using the motion delta value d(t) and the motion history input value h(t-1) corresponding to the spatial position (j,i) from the previous frame, as shown at step 318. Additionally, although not shown, a motion history value h(t) corresponding to the current pixel x(t) may also be determined at step 318 once the motion delta value d(t) is known, for example, using equation 3a above. Thereafter, at step 319, a first filter coefficient K may be selected from the motion table 304 using the motion table lookup index from step 318. The determination of the motion table lookup index and the selection of the first filter coefficient K from the motion table may be performed in accordance with equation 2a, as shown above.

[0192] Next, at step 320, an attenuation factor may be selected from the luma table 306. For instance, the luma table 306 may contain attenuation factors ranging from approximately 0 to 1, and the attenuation factor may be selected from the luma table 306 using the value of the current pixel x(t) as a lookup index. Once the attenuation factor is selected, a second filter coefficient K' may be determined at step 321 using the selected attenuation factor and the first filter coefficient K (from step 319), as shown in equation 4a above. Then, at step 322, a temporally filtered output value y(t) corresponding to the current input pixel x(t) is determined based on the second filter coefficient K' (from step 321), the value of the neighboring reference pixel r(t-1), and the value of the input pixel x(t). For instance, in one embodiment, the output value y(t) may be determined in accordance with equation 5a, as shown above.

[0193] Referring to Fig. 24, the step 317 for determining the motion delta value d(t) of the method 315 is illustrated in more detail in accordance with one embodiment. In particular, the determination of the motion delta value d(t) may generally correspond to the operation depicted above in accordance with equation 1a. As shown, step 317 may include the sub-steps 324-327. Beginning at sub-step 324, a set of three horizontally adjacent pixels having the same color value as the current input pixel x(t) is identified. By way of example, in accordance with the embodiment shown in Fig. 22, the image data may include Bayer image data, and the three horizontally adjacent pixels may include the current input pixel x(t) (313), a second pixel 312 of the same color to the left of the current input pixel 313, and a third pixel 314 of the same color to the right of the current input pixel 313.

[0194] Next, at sub-step 325, three neighboring reference pixels 308, 309, and 310 from the previous frame, corresponding to the selected set of three horizontally adjacent pixels 312, 313, and 314, are identified. Using the selected pixels 312, 313, and 314 and the three neighboring reference pixels 308, 309, and 310, the absolute values of the differences between each of the three selected pixels 312, 313, and 314 and their corresponding reference pixels 308, 309, and 310, respectively, are determined at sub-step 326. Then, at sub-step 327, the maximum of the three differences from sub-step 326 is selected as the motion delta value d(t) for the current input pixel x(t). As discussed above, Fig. 24, which illustrates the motion delta value calculation technique shown in equation 1a, is intended to provide only one embodiment. Indeed, as discussed above, any suitable two-dimensional matrix of same-colored pixels with the current pixel located at the center of the matrix may be used to determine a motion delta value (e.g., equation 1b).

[0195] A further embodiment of a technique for applying temporal filtering to image data is further depicted in Fig. 25. For instance, because the signal-to-noise ratios of different color components of the image data may differ, a gain may be applied to the current pixel such that the current pixel is gained before the motion and luma values are selected from the motion table 304 and the luma table 306. By applying a respective gain that is color-dependent, the signal-to-noise ratio may be more consistent among the different color components. Solely by way of example, in an implementation that uses raw Bayer image data, the red and blue color channels may generally be more sensitive than the green (Gr and Gb) color channels. Thus, by applying an appropriate color-dependent gain to each processed pixel, the signal-to-noise variation between the color components may generally be reduced, thereby reducing, among other things, ghosting artifacts, as well as improving consistency across different colors after the auto-white-balance gains.

[0196] With this in mind, Fig. 25 shows a flowchart depicting a method 328 for applying temporal filtering to image data received by the front-end pixel processing unit 130, in accordance with such an embodiment. Beginning at step 329, a current pixel x(t) located at spatial position (j,i) of the current frame of image data is received by the temporal filtering system 302. At step 330, a motion delta value d(t) is determined for the current pixel x(t) based, at least in part, on one or more neighboring reference pixels (e.g., r(t-1)) from a previous frame of image data (e.g., the image frame immediately preceding the current frame). Step 330 may be similar to step 317 shown in Fig. 23, and may utilize the operation represented in equation 1 above.

[0197] Next, at step 331, a motion table lookup index may be determined using the motion delta value d(t), the motion history input value h(t-1) corresponding to the spatial position (j,i) from the previous frame (e.g., corresponding to the neighboring reference pixel r(t-1)), and a gain associated with the color of the current pixel. Then, at step 332, the first filter coefficient K may be selected from the motion table 304 using the motion table lookup index determined at step 331. Solely by way of example, in one embodiment, the filter coefficient K and the motion table lookup index may be determined as follows:

where M represents the motion table, and where gain[c] corresponds to a gain associated with the color of the current pixel. Additionally, though not shown in Fig. 25, it should be understood that a motion history output value h(t) for the current pixel may also be determined, and may be used to apply temporal filtering to a co-located pixel of a subsequent image frame (e.g., the next frame). In the present embodiment, the motion history output h(t) for the current pixel x(t) may be determined using the following formula:

[0198] Next, at step 333, an attenuation factor may be selected from the luma table 306 using a luma table lookup index determined based on the gain (gain[c]) associated with the color of the current pixel x(t). As discussed above, the attenuation factors stored in the luma table may range from approximately 0 to 1. Then, at step 334, a second filter coefficient K' may be calculated based on the attenuation factor (from step 333) and the first filter coefficient K (from step 332). Solely by way of example, in one embodiment, the second filter coefficient K' and the luma table lookup index may be determined as follows:

[0199] Next, at step 335, a temporally filtered output value y(t) corresponding to the current input pixel x(t) is determined based on the second filter coefficient K' (from step 334), the value of the neighboring reference pixel r(t-1), and the value of the input pixel x(t). For instance, in one embodiment, the output value y(t) may be determined as follows:

[0200] Turning to Fig. 26, a further embodiment of a temporal filtering process 336 is depicted. Here, the temporal filtering process 336 may be performed in a manner similar to the embodiment discussed in Fig. 25, except that, instead of applying a color-dependent gain (e.g., gain[c]) to each input pixel and using shared motion and luma tables, separate motion and luma tables are provided for each color component. For instance, as shown in Fig. 26, the motion tables 304 may include a motion table 304a corresponding to a first color, a motion table 304b corresponding to a second color, and a motion table 304c corresponding to an n-th color, wherein n depends on the number of colors present in the raw image data. Similarly, the luma tables 306 may include a luma table 306a corresponding to the first color, a luma table 306b corresponding to the second color, and a luma table 306c corresponding to the n-th color. Thus, in an embodiment where the raw image data is Bayer image data, three motion and luma tables may be provided, one for each of the red, green, and blue color components. As discussed below, the selection of the filter coefficient K and the attenuation factor may depend on the motion and luma table selected for the current color (e.g., the color of the current input pixel).

[0201] A method 338 illustrating a further embodiment for temporal filtering using color-dependent motion and luma tables is shown in Fig. 27. As will be appreciated, the various calculations and formulas that may be employed by the method 338 may be similar to the embodiment shown in Fig. 23, but with a particular motion and luma table selected for each color, or similar to the embodiment shown in Fig. 25, but with the use of the color-dependent gain[c] replaced by the selection of a color-dependent motion and luma table.

[0202] Beginning at step 339, a current pixel x(t) located at spatial position (j,i) of the current frame of image data is received by the temporal filtering system 336 (Fig. 26). At step 340, a motion delta value d(t) is determined for the current pixel x(t) based, at least in part, on one or more neighboring reference pixels (e.g., r(t-1)) from a previous frame of image data (e.g., the image frame immediately preceding the current frame). Step 340 may be similar to step 317 shown in Fig. 23, and may utilize the operation represented in equation 1 above.

[0203] Next, at step 341, a motion table lookup index may be determined using the motion delta value d(t) and the motion history input value h(t-1) corresponding to the spatial position (j,i) from the previous frame (e.g., corresponding to the neighboring reference pixel r(t-1)). Thereafter, at step 342, a first filter coefficient K may be selected from one of the available motion tables (e.g., 304a, 304b, 304c) based on the color of the current input pixel. For instance, once the appropriate motion table is identified, the first filter coefficient K may be selected using the motion table lookup index determined at step 341.

[0204] After selecting the first filter coefficient K, a luma table corresponding to the current color is selected and an attenuation factor is selected from the selected luma table based on the value of the current pixel x(t), as shown at step 343. Then, at step 344, a second filter coefficient K' is determined based on the attenuation factor (from step 343) and the first filter coefficient K (from step 342). Next, at step 345, a temporally filtered output value y(t) corresponding to the current input pixel x(t) is determined based on the second filter coefficient K' (from step 344), the value of the neighboring reference pixel r(t-1), and the value of the input pixel x(t). While the approach shown in Fig. 27 may be more costly (e.g., due to the memory needed for storing the additional motion and luma tables), it may, in some instances, offer further improvement with regard to ghosting artifacts and consistency across different colors after the auto-white-balance gains.

[0205] In accordance with further embodiments, the temporal filtering process provided by the temporal filter 298 may utilize a combination of color-dependent gains and color-specific motion and/or luma tables for applying temporal filtering to the input pixels. For instance, in one such embodiment, a single motion table may be provided for all color components, and the motion table lookup index for selecting the first filter coefficient (K) from the motion table may be determined based on a color-dependent gain (e.g., as shown in Fig. 25, steps 331-332), while the luma table lookup index may not have a color-dependent gain applied to it, but may be used to select the luma attenuation factor from one of multiple luma tables depending upon the color of the current input pixel (e.g., as shown in Fig. 27, step 343). Alternatively, in another embodiment, multiple motion tables may be provided, and a motion table lookup index (without a color-dependent gain applied) may be used to select the first filter coefficient (K) from a motion table corresponding to the color of the current input pixel (e.g., as shown in Fig. 27, step 342), while a single luma table may be provided for all color components, and the luma table lookup index for selecting the luma attenuation factor may be determined based on a color-dependent gain (e.g., as shown in Fig. 25, steps 333-334). Further, in one embodiment where a Bayer color filter array is utilized, one motion table and/or luma table may be provided for each of the red (R) and blue (B) color components, while a common motion table and/or luma table may be provided for both green color components (Gr and Gb).
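As a rough illustration of the color-dependent variant of Fig. 25, the sketch below applies a per-color gain before both table lookups. Exactly how gain[c] enters the motion and luma lookup indices is not reproduced in this text, so multiplying the lookup quantities by gain[c] is an assumption, as is the 2-tap blend reused from the earlier sketch.

    def temporal_filter_pixel_gain(x, r, h_prev, d, c, gain, motion_table, luma_table,
                                   max_pixel=1023):
        """Same per-pixel flow as the earlier sketch, but with a color-dependent gain
        applied to both lookup indices (the placement of gain[c] is an assumption)."""
        m_index = min(int(gain[c] * (d + h_prev)), len(motion_table) - 1)
        k = motion_table[m_index]
        l_index = min(int(gain[c] * x) * len(luma_table) // (max_pixel + 1),
                      len(luma_table) - 1)
        k_prime = k * luma_table[l_index]
        return x + k_prime * (r - x)      # same assumed 2-tap blend as in the earlier sketch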

[0206] Next, the output of the temporal filter 298 may be sent to the binning compensation filter (BCF) 300, which may be configured to process the image pixels to compensate for non-linear placement (e.g., uneven spatial distribution) of the color samples resulting from binning by the image sensor(s) 90a or 90b, so that subsequent image processing operations in the ISP pipe logic 82 (e.g., demosaicing, etc.) that depend on the linear placement of the color samples can operate correctly. For example, Fig. 28 depicts a full-resolution sample 346 of Bayer image data. This may represent full-resolution raw image data captured by the image sensor 90a (or 90b) coupled to the ISP front-end processing logic 80.

[0207] As will be appreciated, under certain image capture conditions it may not be practical to send the full-resolution image data captured by the image sensor 90a to the ISP circuitry 32 for processing. For instance, when capturing video data, a frame rate of at least approximately 30 frames per second may be desired in order to preserve the appearance of a fluid moving image as perceived by the human eye. However, if the amount of pixel data contained in each frame of a full-resolution sample exceeds the processing capabilities of the ISP circuitry 32 when sampled at 30 frames per second, binning compensation filtering may be applied in conjunction with binning by the image sensor 90a to reduce the resolution of the image signal while also improving the signal-to-noise ratio. For instance, as discussed above, various binning techniques, such as 2x2 binning, may be applied to produce a "binned" raw image pixel by averaging the values of surrounding pixels in the active region 280 of the raw frame 278.

[0208] Fig. 29 illustrates an embodiment of the image sensor 90a that may be configured to bin the full-resolution image data 346 of Fig. 28 to produce corresponding binned raw image data 358 shown in Fig. 30, in accordance with one embodiment. As shown, the image sensor 90a may capture the full-resolution raw image data 346. Binning logic 357 may be configured to apply binning to the full-resolution raw image data 346 to produce the binned raw image data 358, which may be provided to the ISP front-end processing logic 80 using the sensor interface 94a which, as discussed above, may be an SMIA interface or any other suitable parallel or serial camera interface.

[0209] As illustrated in Fig. 30, the binning logic 357 may apply 2x2 binning to the full-resolution raw image data 346. For example, with regard to the binned image data 358, the pixels 350, 352, 354, and 356 may form a Bayer pattern and may be determined by averaging the values of pixels from the full-resolution raw image data 346. For instance, referring to both Fig. 28 and Fig. 30, the binned Gr pixel 350 may be determined as the average or mean of the full-resolution Gr pixels 350a-350d. Similarly, the binned R pixel 352 may be determined as the average of the full-resolution R pixels 352a-352d, the binned B pixel 354 may be determined as the average of the full-resolution B pixels 354a-354d, and the binned Gb pixel 356 may be determined as the average of the full-resolution Gb pixels 356a-356d. Thus, in the present embodiment, 2x2 binning may provide a set of four full-resolution pixels, including an upper-left (e.g., 350a), an upper-right (e.g., 350b), a lower-left (e.g., 350c), and a lower-right (e.g., 350d) pixel, which are averaged to derive a binned pixel located at the center of the square formed by the set of four full-resolution pixels. Accordingly, the binned Bayer block 348 shown in Fig. 30 contains four "superpixels" that represent the 16 pixels contained in the Bayer blocks 348a-348d of Fig. 28.
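A minimal sketch of the 2x2 Bayer binning described above is given below, averaging the four same-colored full-resolution pixels of each 4x4 region into one binned pixel. The integer averaging (truncating division by four) and the array representation are illustrative assumptions.

    def bin_bayer_2x2(full):
        """full: full-resolution Bayer mosaic as a list of equal-length rows
        (height and width divisible by 4). Returns the binned mosaic at half resolution."""
        h, w = len(full), len(full[0])
        binned = [[0] * (w // 2) for _ in range(h // 2)]
        for j in range(0, h, 4):          # each 4x4 full-res region -> one 2x2 binned Bayer block
            for i in range(0, w, 4):
                for dj in (0, 1):         # position of the color within the 2x2 Bayer block
                    for di in (0, 1):
                        same_color = [full[j + 2 * bj + dj][i + 2 * bi + di]
                                      for bj in (0, 1) for bi in (0, 1)]
                        binned[j // 2 + dj][i // 2 + di] = sum(same_color) // 4
        return binned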

[0210] In addition to reducing spatial resolution, binning also offers the added advantage of reducing noise in the image signal. For instance, whenever an image sensor (e.g., 90a) is exposed to a light signal, a certain amount of noise, such as photon noise, may be associated with the image. This noise may be random or systematic, and may also come from multiple sources. Thus, the amount of information contained in an image captured by the image sensor may be expressed in terms of a signal-to-noise ratio. For example, every time an image is captured by the image sensor 90a and transferred to a processing circuit, such as the ISP circuitry 32, some degree of noise may be present in the pixel values, because the process of reading and transferring the image data inherently introduces "read noise" into the image signal. This "read noise" may be random and is generally unavoidable. By using the average of four pixels, noise (e.g., photon noise) may generally be reduced regardless of the source of the noise.

[0211] Thus, when considering the full-resolution image data 346 of Fig. 28, each Bayer pattern 348a-348d (2x2 block) contains 4 pixels, each of which contains a signal component and a noise component. If each pixel in, for example, the Bayer block 348a is read separately, then four signal components and four noise components are present. However, by applying binning, as shown in Fig. 28 and 30, such that four pixels (e.g., 350a, 350b, 350c, 350d) may be represented by a single pixel (e.g., 350) in the binned image data, the area occupied by the four pixels in the full-resolution image data 346 may be read as a single pixel having only one instance of a noise component, thus improving the signal-to-noise ratio.

[0212] Further, while the present embodiment depicts the binning logic 357 of Fig. 29 as being configured to apply a 2x2 binning process, it should be appreciated that the binning logic 357 may be configured to apply any suitable type of binning process, such as 3x3 binning, vertical binning, horizontal binning, and so forth. In some embodiments, the image sensor 90a may be configured to select between different binning modes during the image capture process. Additionally, in further embodiments, the image sensor 90a may also be configured to apply a technique that may be referred to as "skipping", wherein, instead of averaging pixel samples, the logic 357 selects only certain pixels from the full-resolution data 346 (e.g., every other pixel, every third pixel, and so forth) for output to the ISP front-end 80 for processing. Further, while only the image sensor 90a is shown in Fig. 29, it should be appreciated that the image sensor 90b may be implemented in a similar manner.

[0213] As also depicted in Fig. 30, one effect of the binning process is that the spatial sampling of the binned pixels may not be evenly distributed. This spatial distortion may, in some systems, result in aliasing (e.g., jagged edges), which is generally undesirable. Further, because certain image processing steps in the ISP pipe logic 82 may depend upon the linear placement of the color samples in order to operate correctly, the binning compensation filter (BCF) 300 may be applied to perform re-sampling and re-positioning of the binned pixels such that the binned pixels are spatially evenly distributed. That is, the BCF 300 essentially compensates for the uneven spatial distribution (e.g., shown in Fig. 30) by re-sampling the positions of the samples (e.g., pixels). For instance, Fig. 31 illustrates a re-sampled portion of binned image data 360 after being processed by the BCF 300, wherein the Bayer block 361 containing the evenly distributed re-sampled pixels 362, 363, 364, and 365 corresponds to the binned pixels 350, 352, 354, and 356, respectively, of the binned image data 358 of Fig. 30. Additionally, in an embodiment that utilizes skipping (e.g., instead of binning), as mentioned above, the spatial distortion shown in Fig. 30 may not be present. In this case, the BCF 300 may function as a low-pass filter to reduce artifacts (e.g., aliasing) that may result when skipping is employed by the image sensor 90a.

[0214] Fig. 32 shows a block diagram of the binning compensation filter 300 in accordance with one embodiment. The BCF 300 may include binning compensation logic 366 that may process the binned pixels 358 to apply horizontal and vertical scaling using horizontal scaling logic 368 and vertical scaling logic 370, respectively, in order to re-sample and re-position the binned pixels 358 so that they are arranged in a spatially even distribution, as shown in Fig. 31. In one embodiment, the scaling operation(s) performed by the BCF 300 may be carried out using horizontal and vertical multi-tap polyphase filtering. For instance, the filtering process may include selecting the appropriate pixels from the input source image data (e.g., the binned image data 358 provided by the image sensor 90a), multiplying each of the selected pixels by a filter coefficient, and summing the resulting values to form an output pixel at a desired destination.

[0215] The selection of the pixels used in the scaling operations, which may include a center pixel and the surrounding neighbor pixels of the same color, may be determined using separate differential analyzers 372, one for vertical scaling and one for horizontal scaling. In the present embodiment, the differential analyzers 372 may be digital differential analyzers (DDAs) and may be configured to control the current output pixel position during the scaling operations in the vertical and horizontal directions. In the present embodiment, a first DDA (referred to as 372a) is used for all color components during horizontal scaling, and a second DDA (referred to as 372b) is used for all color components during vertical scaling. Solely by way of example, the DDA 372 may be provided as a 32-bit data register containing a two's-complement fixed-point number with 16 bits in the integer portion and 16 bits in the fraction. The 16-bit integer portion may be used to determine the current position for an output pixel. The fractional portion of the DDA 372 may be used to determine a current index or phase, which may be based upon the between-pixel fractional position of the current DDA position (e.g., corresponding to the spatial position of the output pixel). The index or phase may be used to select an appropriate set of coefficients from a set of filter coefficient tables 374. Additionally, the filtering may be done per color component using same-colored pixels. Thus, the filter coefficients may be selected based not only on the phase of the current DDA position, but also on the color of the current pixel. In one embodiment, 8 phases may be present between each input pixel and, thus, the vertical and horizontal scaling components may utilize 8-deep coefficient tables, such that 3 bits of the 16-bit fraction are used to express the current phase or index. Thus, as used herein, the term "raw image data", and the like, should be understood to refer to multi-color image data that is acquired by a single sensor with a color filter array pattern (e.g., Bayer) overlying it, providing multiple color components in one plane. In another embodiment, a separate DDA may be used for each color component. For instance, in such embodiments, the BCF 300 may separate the R, B, Gr, and Gb components from the raw image data and process each component as a separate plane.

[0216] In operation, horizontal and vertical scaling may include initializing the DDA 372 and performing the multi-tap polyphase filtering using the integer and fractional portions of the DDA 372. While performed separately and with separate DDAs, the horizontal and vertical scaling operations are carried out in a similar manner. A step value or step size (DDAStepX for horizontal scaling and DDAStepY for vertical scaling) determines how much the DDA value (currDDA) is incremented after each output pixel is determined, and the multi-tap polyphase filtering is repeated using the next currDDA value. For instance, if the step value is less than 1, then the image is up-sampled, and if the step value is greater than 1, the image is down-sampled. If the step value is equal to 1, then no scaling occurs. Further, it should be noted that the same or different step values may be used for horizontal and vertical scaling.

[0217] Output pixels are generated by the BCF 300 in the same order as the input pixels (e.g., using the Bayer pattern). In the present embodiment, the input pixels may be classified as being even or odd based on their ordering. For instance, Fig. 33 illustrates a graphical depiction of input pixel locations (row 375) and corresponding output pixel locations based on various DDAStep values (rows 376-380). In this example, the depicted row represents a row of red (R) and green (Gr) pixels in the raw Bayer image data. For horizontal filtering purposes, the red pixel at position 0.0 in row 375 may be considered an even pixel, the green pixel at position 1.0 in row 375 may be considered an odd pixel, and so forth. For the output pixel locations, even and odd pixels may be determined based on the least significant bit in the fraction portion (lower 16 bits) of the DDA 372. For instance, assuming a DDAStep of 1.25, as shown in row 377, the least significant bit corresponds to bit 14 of the DDA, as this bit provides a resolution of 0.25. Thus, the red output pixel at the DDA position (currDDA) 0.0 may be considered an even pixel (bit 14 is 0), the green output pixel at currDDA 1.25 may be considered an odd pixel (bit 14 is set to 1), and so forth. Further, although Fig. 33 is discussed with respect to filtering in the horizontal direction (using DDAStepX), it should be understood that the determination of even and odd input and output pixels may be applied in the same manner with respect to vertical filtering (using DDAStepY). In other embodiments, the DDA 372 may also be used to track locations of the input pixels (e.g., rather than tracking the desired output pixel locations). Further, it should be appreciated that DDAStepX and DDAStepY may be set to the same or different values. Further, assuming that a Bayer pattern is used, it should be noted that the starting pixel used by the BCF 300 could be any one of a Gr, Gb, R, or B pixel depending, for instance, upon which pixel is located at a corner of the active region 280.

[0218] With the foregoing in mind, the even/odd input pixels are used to generate the even/odd output pixels, respectively. Given an output pixel location alternating between even and odd positions, the center source input pixel location (referred to herein as "currPixel") for filtering purposes is determined by rounding the DDA to the closest even or odd input pixel location, for even or odd output pixel locations (based on DDAStepX), respectively. In an embodiment where the DDA 372a is configured to use 16 bits to represent the integer portion and 16 bits to represent the fraction, currPixel may be determined for even and odd currDDA positions using equations 6a and 6b below:

Even output pixel locations may be determined based on bits [31:16] of:

(currDDA + 1.0) & 0xFFFE.0000    (6a)

Odd output pixel locations may be determined based on bits [31:16] of:

(currDDA) | 0x0001.0000    (6b)

Essentially, the above equations represent a rounding operation, whereby the even and odd output pixel positions, as determined by currDDA, are rounded to the nearest even and odd input pixel positions, respectively, for the selection of currPixel.
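The rounding of equations 6a and 6b may be illustrated with the following sketch, which assumes the 16.16 fixed-point DDA representation described above; the helper name and the boolean parity argument are illustrative.

    FRAC_BITS = 16
    ONE = 1 << FRAC_BITS                                      # 1.0 in 16.16 fixed point

    def curr_pixel(curr_dda_fixed, output_is_even):
        """currDDA held as a 16.16 fixed-point integer; returns the center input pixel."""
        if output_is_even:
            rounded = (curr_dda_fixed + ONE) & 0xFFFE0000     # equation 6a
        else:
            rounded = curr_dda_fixed | 0x00010000             # equation 6b
        return rounded >> FRAC_BITS                           # bits [31:16]

    # Example from the discussion below: DDAStep 1.5, currDDA 0.0 (even) -> currPixel 0
    assert curr_pixel(0x00000000, output_is_even=True) == 0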

[0219] Additionally, a current index or phase (currIndex) may also be determined at each currDDA position. As discussed above, the index or phase values represent the fractional between-pixel position of the output pixel position relative to the input pixel positions. For instance, in one embodiment, 8 phases may be provided between each input pixel position. For instance, referring again to Fig. 33, 8 index values 0-7 are provided between the first red input pixel at position 0.0 and the next red input pixel at position 2.0. Similarly, 8 index values 0-7 are provided between the first green input pixel at position 1.0 and the next green input pixel at position 3.0. In one embodiment, the currIndex values may be determined in accordance with equations 7a and 7b below for even and odd output pixel locations, respectively:

Even output pixel locations may be determined based on bits [16:14] of:

(currDDA + 0.125)    (7a)

Odd output pixel locations may be determined based on bits [16:14] of:

(currDDA + 1.125)    (7b)

For the odd positions, the additional 1-pixel shift is equivalent to adding an offset of four to the coefficient index for odd output pixel locations, to account for the index offset between different color components with respect to the DDA 372.
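Similarly, the index selection of equations 7a and 7b may be sketched as follows, again assuming the 16.16 fixed-point DDA representation and 8 phases per same-colored pixel; the helper name is illustrative.

    FRAC_BITS = 16

    def curr_index(curr_dda_fixed, output_is_even):
        """Returns bits [16:14] of currDDA + 0.125 (even) or currDDA + 1.125 (odd)."""
        offset = (1 << (FRAC_BITS - 3)) if output_is_even else (9 << (FRAC_BITS - 3))
        return ((curr_dda_fixed + offset) >> (FRAC_BITS - 2)) & 0x7

    # Examples from the discussion below (DDAStep = 1.5):
    assert curr_index(0x00000000, output_is_even=True) == 0     # currDDA 0.0 -> index 0
    assert curr_index(0x00018000, output_is_even=False) == 2    # currDDA 1.5 -> index 2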

[0220] Once currPixel and currIndex have been determined at a particular currDDA location, the filtering process may select one or more neighboring same-colored pixels based on currPixel (the selected center input pixel). By way of example, in an embodiment where the horizontal scaling logic 368 includes a 5-tap polyphase filter and the vertical scaling logic 370 includes a 3-tap polyphase filter, two same-colored pixels on each side of currPixel in the horizontal direction may be selected for horizontal filtering (e.g., -2, -1, 0, +1, +2), and one same-colored pixel on each side of currPixel in the vertical direction may be selected for vertical filtering (e.g., -1, 0, +1). Further, currIndex may be used as a selection index to select the appropriate filter coefficients from the filter coefficients table 374 to apply to the selected pixels. For instance, using the 5-tap horizontal / 3-tap vertical filtering embodiment, five 8-deep tables may be provided for horizontal filtering, and three 8-deep tables may be provided for vertical filtering. Although illustrated as being part of the BCF 300, it should be appreciated that the filter coefficient tables 374 may, in certain embodiments, be stored in a memory that is physically separate from the BCF 300, such as the memory 108.

[0221] Before discussing the horizontal and vertical scaling operations in further detail, Table 4 below shows examples of currPixel and currIndex values as determined based on various DDA positions using different DDAStep values (e.g., which could apply to DDAStepX or DDAStepY).

Table 4
Binning Compensation Filter: DDA examples of currPixel and currIndex calculation

Output pixel          DDAStep 1.25          DDAStep 1.5           DDAStep 1.75          DDAStep 2.0
(0 = even, 1 = odd)   currDDA Index Pixel   currDDA Index Pixel   currDDA Index Pixel   currDDA Index Pixel
0                       0.00    0     0       0.0     0     0       0.00    0     0        0     0     0
1                       1.25    1     1       1.5     2     1       1.75    3     1        2     4     3
0                       2.50    2     2       3.0     4     4       3.50    6     4        4     0     4
1                       3.75    3     3       4.5     6     5       5.25    1     5        6     4     7
0                       5.00    4     6       6.0     0     6       7.00    4     8        8     0     8
1                       6.25    5     7       7.5     2     7       8.75    7     9       10     4    11
0                       7.50    6     8       9.0     4    10      10.50    2    10       12     0    12
1                       8.75    7     9      10.5     6    11      12.25    5    13       14     4    15
0                      10.00    0    10      12.0     0    12      14.00    0    14       16     0    16
1                      11.25    1    11      13.5     2    13      15.75    3    15       18     4    19
0                      12.50    2    12      15.0     4    16      17.50    6    18       20     0    20
1                      13.75    3    13      16.5     6    17      19.25    1    19       22     4    23
0                      15.00    4    16      18.0     0    18      21.00    4    22       24     0    24
1                      16.25    5    17      19.5     2    19      22.75    7    23       26     4    27
0                      17.50    6    18      21.0     4    22      24.50    2    24       28     0    28
1                      18.75    7    19      22.5     6    23      26.25    5    27       30     4    31
0                      20.00    0    20      24.0     0    24      28.00    0    28       32     0    32

[0222] To provide an example, suppose that a DDA step size (DDAStep) of 1.5 is selected (row 378 of Fig. 33), with the current DDA position (currDDA) beginning at 0, indicating an even output pixel location. To determine currPixel, equation 6a may be applied, as shown below:

currPixel (determined as bits [31:16] of the result)=0;

Thus, at the currDDA position 0.0 (row 378), the source center input pixel for filtering corresponds to the red input pixel at position 0.0 of row 375.

[0223] To determine currIndex at the even currDDA position 0.0, equation 7a may be applied, as shown below:

currIndex (determined as bits [16:14] of the result) = [000] = 0;

Thus, at the currDDA position 0.0 (row 378), a currIndex value of 0 may be used to select filter coefficients from the filter coefficients table 374.

[0224] Accordingly, filtering (which may be vertical or horizontal depending on whether DDAStep is in the X (horizontal) or the Y (vertical) direction) may be applied based on the currPixel and currIndex values determined at currDDA 0.0, and the DDA 372 is then incremented by DDAStep (1.5), and the next currPixel and currIndex values are determined. For instance, at the next currDDA position 1.5 (an odd position), currPixel may be determined using equation 6b as follows:

currPixel (determined as bits [31:16] of the result)=1;

Thus, at the currDDA position 1.5 (row 378), the source center input pixel for filtering corresponds to the green input pixel at position 1.0 of row 375.

[0225] Further, currIndex at the odd currDDA position 1.5 may be determined using equation 7b, as shown below:

currIndex (determined as bits [16:14] of the result) = [010] = 2;

Thus, at the currDDA position 1.5 (row 378), a currIndex value of 2 may be used to select the appropriate filter coefficients from the filter coefficients table 374. Filtering (which may be vertical or horizontal depending on whether DDAStep is in the X (horizontal) or the Y (vertical) direction) may thus be applied using these currPixel and currIndex values.

[0226] Next, the DDA 372 is incremented again by DDAStep (1.5), resulting in a currDDA value of 3.0. The currPixel corresponding to currDDA 3.0 may be determined using equation 6a, as shown below:

currPixel (determined as bits [31:16] of the result) = 4;

Thus, at the currDDA position 3.0 (row 378), the source center input pixel for filtering corresponds to the red input pixel at position 4.0 of row 375.

[0227] Next, currIndex at the even currDDA position 3.0 may be determined using equation 7a, as shown below:

currIndex (determined as bits [16:14] of the result) = [100] = 4;

Thus, at the currDDA position 3.0 (row 378), a currIndex value of 4 may be used to select the appropriate filter coefficients from the filter coefficients table 374. As will be appreciated, the DDA 372 may continue to be incremented by DDAStep for each output pixel, and the filtering (which may be vertical or horizontal depending on whether DDAStep is in the X (horizontal) or the Y (vertical) direction) may be applied using the currPixel and currIndex determined for each currDDA value.
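Combining the two computations above, the short walk below steps the DDA at DDAStep = 1.5 and reproduces the currPixel and currIndex sequence of the worked example and of the "DDAStep 1.5" columns of Table 4. For simplicity, the even/odd decision here uses the output-pixel counter rather than the DDA fraction bits; this is an illustrative shortcut.

    def dda_walk(step, n_outputs):
        """Steps a 16.16 fixed-point DDA and reports (currDDA, currIndex, currPixel)."""
        one = 1 << 16
        step_fixed = int(step * one)
        curr_dda, out = 0, []
        for n in range(n_outputs):
            if n % 2 == 0:                                          # even output position
                pixel = ((curr_dda + one) & 0xFFFE0000) >> 16       # equation 6a
                index = ((curr_dda + (one >> 3)) >> 14) & 0x7       # equation 7a
            else:                                                   # odd output position
                pixel = (curr_dda | 0x00010000) >> 16               # equation 6b
                index = ((curr_dda + 9 * (one >> 3)) >> 14) & 0x7   # equation 7b
            out.append((curr_dda / one, index, pixel))
            curr_dda += step_fixed
        return out

    # dda_walk(1.5, 5) -> [(0.0, 0, 0), (1.5, 2, 1), (3.0, 4, 4), (4.5, 6, 5), (6.0, 0, 6)]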

[0228] As discussed above, currIndex may be used as a selection index to select the appropriate filter coefficients from the filter coefficients table 374 to apply to the selected pixels. The filtering process may include obtaining the source pixel values around the center pixel (currPixel), multiplying each of the selected pixels by the appropriate filter coefficients selected from the filter coefficients table 374 based on currIndex, and summing the results to obtain a value for the output pixel at the location corresponding to currDDA. Further, because the present embodiment utilizes 8 phases between same-colored pixels, using the 5-tap horizontal / 3-tap vertical filtering embodiment, five 8-deep tables may be provided for horizontal filtering, and three 8-deep tables may be provided for vertical filtering. In one embodiment, each of the coefficient table entries may include a 16-bit two's-complement fixed-point number with 3 integer bits and 13 fraction bits.
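The tap filtering itself may be sketched as follows for the horizontal 5-tap case. The layout of the coefficient tables (five 8-deep lists of 3.13 fixed-point values), the function name, and the final right-shift back to pixel precision are illustrative assumptions; the replication of same-colored edge pixels follows the boundary discussion below.

    COEFF_FRAC_BITS = 13                 # 3.13 fixed-point coefficients, as described above

    def bcf_horizontal_output(row, curr_pixel, curr_index, coeff_tables):
        """row: one raw Bayer row (same-colored neighbours two columns apart);
        coeff_tables: five 8-deep lists of 3.13 fixed-point coefficients, one per tap."""
        first = curr_pixel % 2                                 # leftmost same-colored column
        last = first + ((len(row) - 1 - first) // 2) * 2       # rightmost same-colored column
        acc = 0
        for tap, offset in enumerate((-4, -2, 0, 2, 4)):       # the five same-colored taps
            pos = min(max(curr_pixel + offset, first), last)   # replicate same-colored edge pixels
            acc += row[pos] * coeff_tables[tap][curr_index]
        return acc >> COEFF_FRAC_BITS                          # back to pixel precision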

[0229] Further, assuming a Bayer image pattern, in one embodiment, the vertical scaling component may include four separate 3-tap polyphase filters, one for each color component: Gr, R, B, and Gb. Each of the 3-tap filters may use the DDA 372 to control the stepping of the current center pixel and the index for the coefficients, as described above. Similarly, the horizontal scaling components may include four separate 5-tap polyphase filters, one for each color component: Gr, R, B, and Gb. Each of the 5-tap filters may use the DDA 372 to control the stepping (e.g., via DDAStep) of the current center pixel and the index for the coefficients. It should be understood, however, that fewer or more taps could be utilized by the horizontal and vertical scaling components in other embodiments.

[0230] For boundary cases, the pixels used in the horizontal and vertical filtering process may depend upon the relationship of the current DDA position (currDDA) relative to a frame border (e.g., the border defined by the active region 280 in Fig. 19). For instance, in horizontal filtering, if the currDDA position, when compared to the position of the center input pixel (SrcX) and the width (SrcWidth) of the frame (e.g., the width 290 of the active region 280 of Fig. 19), indicates that the DDA 372 is close to the border such that there are not enough pixels to perform the 5-tap filtering, then the same-colored input border pixels may be repeated. For instance, if the selected center input pixel is at the left edge of the frame, the center pixel may be replicated twice for horizontal filtering. If the center input pixel is near the left edge of the frame such that only one pixel is available between the center input pixel and the left edge, then, for horizontal filtering purposes, the one available pixel is replicated in order to provide two pixel values to the left of the center input pixel. Further, the horizontal scaling logic 368 may be configured such that the number of input pixels (including original and replicated pixels) does not exceed the input width. This may be expressed as follows:

where DDAInitX represents the initial position of the DDA 372, DDAStepX represents the DDA step value in the horizontal direction, and BCFOutWidth represents the width of the frame output by the BCF 300.

[0231] For vertical filtering, if the currDDA position, when compared to the position of the center input pixel (SrcY) and the height (SrcHeight) of the frame (e.g., the height 292 of the active region 280 of Fig. 19), indicates that the DDA 372 is close to the border such that there are not enough pixels to perform the 3-tap filtering, then the input border pixels may be repeated. Further, the vertical scaling logic 370 may be configured such that the number of input pixels (including original and replicated pixels) does not exceed the input height. This may be expressed as follows:

where DDAInitY represents the initial position of the DDA 372, DDAStepY represents the DDA step value in the vertical direction, and BCFOutHeight represents the height of the frame output by the BCF 300.

[0232] Fig. 34 shows a flowchart depicting a method 382 for applying binning compensation filtering to image data received by the front-end pixel processing unit 130, in accordance with an embodiment. It should be appreciated that the method 382 illustrated in Fig. 34 may apply to both vertical and horizontal scaling. Beginning at step 383, the DDA 372 is initialized and a DDA step value (which may correspond to DDAStepX for horizontal scaling and DDAStepY for vertical scaling) is determined. Next, at step 384, a current DDA position (currDDA), based on DDAStep, is determined. As discussed above, currDDA may correspond to an output pixel location. Using currDDA, the method 382 may determine a center pixel (currPixel) from the input pixel data that may be used for binning compensation filtering to determine a corresponding output value at currDDA, as indicated at step 385. Subsequently, at step 386, an index corresponding to currDDA (currIndex) may be determined based on the fractional between-pixel position of currDDA relative to the input pixels (e.g., row 375 of Fig. 33). By way of example, in an embodiment where the DDA includes 16 integer bits and 16 fraction bits, currPixel may be determined in accordance with equations 6a and 6b, and currIndex may be determined in accordance with equations 7a and 7b, as shown above. While the 16-bit integer / 16-bit fraction configuration is described herein by way of one example, it should be appreciated that other configurations of the DDA 372 may be utilized in accordance with the present technique. By way of example, other embodiments of the DDA 372 may be configured to include a 12-bit integer portion and a 20-bit fraction, a 14-bit integer portion and an 18-bit fraction, and so forth.

[0233] Once currPixel and currIndex are determined, same-colored source pixels around currPixel may be selected for the multi-tap filtering, as indicated by step 387. For instance, as discussed above, one embodiment may utilize 5-tap polyphase filtering in the horizontal direction (e.g., selecting 2 same-colored pixels on each side of currPixel) and may utilize 3-tap polyphase filtering in the vertical direction (e.g., selecting 1 same-colored pixel on each side of currPixel). Next, at step 388, once the source pixels are selected, filter coefficients may be selected from the filter coefficients table 374 of the BCF 300 based on currIndex.

[0234] Thereafter, at step 389, filtering may be applied to the source pixels to determine the value of an output pixel corresponding to the position represented by currDDA. For instance, in one embodiment, the source pixels may be multiplied by their respective filter coefficients, and the results summed to obtain the output pixel value. The direction in which filtering is applied at step 389 may be vertical or horizontal depending on whether DDAStep is in the X (horizontal) or the Y (vertical) direction. Finally, the DDA 372 is incremented by DDAStep at step 390, and the method 382 returns to step 384, whereby the next output pixel value is determined using the binning compensation filtering techniques discussed herein.

[0235] Fig. 35 illustrates the step 385 for determining currPixel of the method 382 in more detail, in accordance with one embodiment. For instance, step 385 may include the sub-step 392 of determining whether the output pixel location corresponding to currDDA (from step 384) is even or odd. As discussed above, an even or odd output pixel may be determined based on the least significant bit of currDDA, which depends on DDAStep. For instance, given a DDAStep of 1.25, a currDDA value of 1.25 may be determined as being odd, since the least significant bit (corresponding to bit 14 of the fractional portion of the DDA 372) has a value of 1. For a currDDA value of 2.5, bit 14 is 0, thus indicating an even output pixel location.
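The even/odd test described above may be sketched as follows; the helper name is illustrative, and bit 14 applies to the DDAStep = 1.25 example.

    def output_position_is_odd(curr_dda_fixed, lsb_bit=14):
        """For the DDAStep = 1.25 example, bit 14 (0.25 resolution) is the least
        significant meaningful bit of the 16.16 fixed-point DDA value."""
        return ((curr_dda_fixed >> lsb_bit) & 1) == 1

    # 1.25 -> 0x00014000 (odd), 2.5 -> 0x00028000 (even), matching the paragraph above
    assert output_position_is_odd(0x00014000)
    assert not output_position_is_odd(0x00028000)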

[0236] At decision logic 393, a determination is made as to whether the output pixel location corresponding to currDDA is even or odd. If the output pixel is even, decision logic 393 continues to sub-step 394, wherein currPixel is determined by incrementing the currDDA value by 1 and rounding the result to the nearest even input pixel location, as represented by equation 6a above. If the output pixel is odd, then decision logic 393 continues to sub-step 395, wherein currPixel is determined by rounding the currDDA value to the nearest odd input pixel location, as represented by equation 6b above. The currPixel value may then be applied at step 387 of the method 382 to select source pixels for filtering, as discussed above.

[0237] Fig. 36 illustrates the step 386 for determining currIndex of the method 382 in more detail, in accordance with one embodiment. For instance, step 386 may include the sub-step 396 of determining whether the output pixel location corresponding to currDDA (from step 384) is even or odd. This determination may be performed in a similar manner as step 392 of Fig. 35. At decision logic 397, a determination is made as to whether the output pixel location corresponding to currDDA is even or odd. If the output pixel is even, decision logic 397 continues to sub-step 398, wherein currIndex is determined by incrementing the currDDA value by one index step and determining currIndex based on the lowest-order integer bit and the two highest-order fraction bits of the DDA 372. For instance, in an embodiment wherein 8 phases are provided between each same-colored pixel, and wherein the DDA includes 16 integer bits and 16 fraction bits, one index step may correspond to 0.125, and currIndex may be determined based on bits [16:14] of the currDDA value incremented by 0.125 (e.g., equation 7a). If the output pixel is odd, decision logic 397 continues to sub-step 399, wherein currIndex is determined by incrementing the currDDA value by one index step and one pixel shift, and determining currIndex based on the lowest-order integer bit and the two highest-order fraction bits of the DDA 372. Thus, in an embodiment wherein 8 phases are provided between each same-colored pixel, and wherein the DDA includes 16 integer bits and 16 fraction bits, one index step may correspond to 0.125, one pixel shift may correspond to 1.0 (a shift of 8 index steps to the next same-colored pixel), and currIndex may be determined based on bits [16:14] of the currDDA value incremented by 1.125 (e.g., equation 7b).

[0238] While the presently illustrated embodiment provides the BCF 300 as a component of the front-end pixel processing unit 130, other embodiments may incorporate the BCF 300 into the raw image data processing pipeline of the ISP pipe 82 which, as discussed further below, may include defective pixel detection/correction logic, gain/offset/compensation blocks, noise reduction logic, lens shading correction logic, and demosaicing logic. Further, in embodiments where the aforementioned defective pixel detection/correction logic, gain/offset/compensation blocks, noise reduction logic, and lens shading correction logic do not rely upon the linear placement of the pixels, the BCF 300 may be incorporated with the demosaicing logic to perform binning compensation filtering and re-position the pixels prior to demosaicing, as demosaicing generally does rely upon the even spatial positioning of the pixels. For instance, in one embodiment, the BCF 300 may be incorporated anywhere between the sensor input and the demosaicing logic, with temporal filtering and/or defective pixel detection/correction being applied to the raw image data prior to binning compensation.

[0239] As described above, the output of the BCF 300, which may be the output FEProcOut (109) having spatially evenly distributed image data (e.g., the sample 360 of Fig. 31), may be forwarded to the ISP pipe processing logic 82 for additional processing. However, before shifting the focus of this discussion to the ISP pipe processing logic 82, a more detailed description will first be provided of the various functionalities that may be provided by the statistics processing units (e.g., 122 and 124) that may be implemented in the ISP front-end logic 80.

[0240] Returning to the general description of the statistics processing units 120 and 122, these units may be configured to collect various statistics about the image sensors that capture and provide the raw image signals (Sif0 and Sif1), such as statistics relating to auto-exposure, auto-white-balance, auto-focus, flicker detection, black level compensation, and lens shading correction, and so forth. In doing so, the statistics processing units 120 and 122 may first apply one or more image processing operations to their respective input signals Sif0 (Sensor0) and Sif1 (Sensor1).

[0241] For example, Fig. 37 illustrates a more detailed block diagram of the statistics processing unit 120 associated with Sensor0 (90a), in accordance with one embodiment. As shown, the statistics processing unit 120 may include the following functional blocks: defective pixel detection and correction logic 460, black level compensation (BLC) logic 462, lens shading correction logic 464, inverse BLC logic 466, and statistics collection logic 468. Each of these functional blocks is discussed below. Further, it should be understood that the statistics processing unit 122 associated with Sensor1 (90b) may be implemented in a similar manner.

[0242] Initially, the output of selection logic 124 (e.g., Sif0 or SifIn0) is received by the front-end defective pixel correction logic 460. As will be appreciated, "defective pixels" should be understood to refer to imaging pixels within the image sensor(s) 90 that fail to sense light levels accurately. Defective pixels may be attributable to a number of factors, and may include "hot" (or leaky) pixels, "stuck" pixels and "dead" pixels. A hot pixel generally appears brighter than a non-defective pixel given the same amount of light at the same spatial position. Hot pixels may result from reset failures and/or high leakage. For example, a hot pixel may exhibit a higher-than-normal charge leakage relative to non-defective pixels and, thus, may appear brighter than non-defective pixels. Additionally, "dead" and "stuck" pixels may be the result of impurities, such as dust or other trace materials, contaminating the image sensor during the fabrication and/or assembly process, which may cause certain defective pixels to be darker or brighter than a non-defective pixel, or may cause a defective pixel to be fixed at a particular value regardless of the amount of light actually incident upon it. Additionally, dead and stuck pixels may also result from circuit failures that occur during operation of the image sensor. By way of example, a stuck pixel may appear as always being on (e.g., fully charged) and thus appears brighter, whereas a dead pixel appears as always being off.

[0243] The defective pixel detection and correction (DPDC) logic 460 in the ISP front-end logic 80 may correct (e.g., replace defective pixel values) defective pixels before they are taken into account in statistics collection (e.g., 468). In one embodiment, defective pixel correction is performed independently for each color component (e.g., R, B, Gr and Gb for a Bayer pattern). Generally, the front-end DPDC logic 460 may provide dynamic defect correction, in which the locations of defective pixels are determined automatically based upon directional gradients computed using neighboring pixels of the same color. As will be understood, the defects may be "dynamic" in the sense that the characterization of a pixel as being defective at a given time may depend upon the image data in the neighboring pixels. By way of example, a stuck pixel that is always at maximum brightness may not be regarded as a defective pixel if the stuck pixel is located in an area of the current image that is dominated by brighter or white colors. Conversely, if the stuck pixel is located in a region of the current image that is dominated by black or darker colors, then the stuck pixel may be identified as a defective pixel during processing by the DPDC logic 460 and corrected accordingly.

[0244] The DPDC logic 460 may utilize one or more horizontally neighboring pixels of the same color on each side of the current pixel to determine whether the current pixel is defective, using pixel-to-pixel directional gradients. If the current pixel is identified as being defective, the value of the defective pixel may be replaced with the value of a horizontally neighboring pixel. For example, in one embodiment, five horizontally neighboring pixels of the same color that are within the boundary of the primary frame 278 (Fig. 19) are used, the five horizontally neighboring pixels consisting of the current pixel and two neighboring pixels on either side. Thus, as shown in Fig. 38, for a given color component c and for the current pixel P, the DPDC logic 460 may consider the horizontally neighboring pixels P0, P1, P2 and P3. It should be noted, however, that depending upon the position of the current pixel P, pixels outside the primary frame 278 are not considered when computing the pixel-to-pixel gradients.

[0245] For example, as shown in Fig. 38, in the "left edge" case 470, the current pixel P is at the left-most edge of the primary frame 278 and, thus, the neighboring pixels P0 and P1 outside of the primary frame 278 are not considered, leaving only the pixels P, P2 and P3 (N=3). In the "left edge + 1" case 472, the current pixel P is one pixel away from the left-most edge of the primary frame 278 and, thus, the pixel P0 is not considered. This leaves only the pixels P1, P, P2 and P3 (N=4). Further, in the "centered" case 474, the pixels P0 and P1 to the left of the current pixel P and the pixels P2 and P3 to the right of the current pixel P are within the boundary of the primary frame 278 and, thus, all of the neighboring pixels P0, P1, P2 and P3 (N=5) are considered when computing the pixel-to-pixel gradients. Additionally, similar cases 476 and 478 may be encountered as the right-most edge of the primary frame 278 is approached. For example, in the "right edge - 1" case 476, the current pixel P is one pixel away from the right-most edge of the primary frame 278 and, thus, the pixel P3 is not considered (N=4). Similarly, in the "right edge" case 478, the current pixel P is at the right-most edge of the primary frame 278 and, thus, both of the pixels P2 and P3 are not considered (N=3).

[0246] In the illustrated embodiment, for each neighboring pixel (k=0 to 3) within the picture boundary (e.g., the primary frame 278), the pixel-to-pixel gradients may be computed as follows:

G_k = abs(P − P_k), for 0 ≤ k ≤ 3 (only for k within the primary frame)    (8)

Once the pixel-to-pixel gradients have been determined, defective pixel detection may be performed by the DPDC logic 460 as follows. First, it is assumed that a pixel is defective if a certain number of its gradients G_k are at or below a particular threshold, denoted by the variable dprTh. Thus, for each pixel, a count (C) of the number of gradients for neighboring pixels inside the picture boundaries that are at or below the threshold dprTh is accumulated. By way of example, for each neighboring pixel within the primary frame 278, the accumulated count C of the gradients G_k that are at or below the threshold dprTh may be computed as follows:

C = Σ (G_k ≤ dprTh),    (9)

(only for k within the primary frame)

As will be appreciated, the threshold value dprTh may vary depending upon the color component. Next, if the accumulated count C is determined to be less than or equal to a maximum count, denoted by the variable dprMaxC, then the pixel may be considered defective. This logic is expressed below:

if (C ≤ dprMaxC), then the pixel is defective.    (10)

[0247] Defective pixels are replaced using a number of replacement conventions. For example, in one embodiment, a defective pixel may be replaced with the pixel immediately to its left, P1. In a boundary condition (e.g., P1 is outside of the primary frame 278), the defective pixel may be replaced with the pixel immediately to its right, P2. Further, it should be understood that replacement values may be retained or propagated for successive defective pixel detection operations. For example, referring to the set of horizontal pixels shown in Fig. 38, if P0 or P1 was previously identified by the DPDC logic 460 as a defective pixel, its corresponding replacement value may be used for the defective pixel detection and replacement of the current pixel P.

[0248] To summarize the above-discussed defective pixel detection and correction techniques, a flowchart depicting such a process is provided in Fig. 39 and referred to by reference number 480. As shown, process 480 begins at step 482, at which a current pixel (P) is received and a set of neighboring pixels is identified. In accordance with the embodiment described above, the neighboring pixels may include two horizontal pixels of the same color component from opposite sides of the current pixel (e.g., P0, P1, P2 and P3). Next, at step 484, horizontal pixel-to-pixel gradients are computed with respect to each neighboring pixel within the primary frame 278, as described above in equation 8. Thereafter, at step 486, a count C of the number of gradients that are less than or equal to the particular threshold dprTh is determined. As shown at decision logic 488, if C is less than or equal to dprMaxC, the process 480 continues to step 490, and the current pixel is identified as being defective. The defective pixel is then corrected at step 492 using a replacement value. Additionally, referring back to decision logic 488, if C is greater than dprMaxC, the process continues to step 494, and the current pixel is identified as not being defective, and its value is not changed.
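For purposes of illustration only, the per-pixel flow of process 480 might be sketched in C as follows, under the assumption that the neighbor array holds P0, P1, P2 and P3 in that order and that a validity flag marks which of them lie within the primary frame; the names, types and helper conventions are assumptions made for this example.

#include <stdlib.h>

/* Sketch of dynamic defect detection/correction (equations 8-10) for one pixel.
 * neigh[] holds the same-colored horizontal neighbors P0, P1, P2, P3 and
 * valid[] marks those inside the primary frame; dprTh and dprMaxC are the
 * programmable threshold and maximum count. */
static int dpdc_pixel(int P, const int neigh[4], const int valid[4],
                      int dprTh, int dprMaxC)
{
    int C = 0;
    for (int k = 0; k < 4; k++) {
        if (!valid[k])
            continue;
        int Gk = abs(P - neigh[k]);        /* equation 8: pixel-to-pixel gradient */
        if (Gk <= dprTh)                   /* equation 9: count small gradients   */
            C++;
    }

    if (C <= dprMaxC) {                    /* equation 10: pixel is defective     */
        /* replacement convention of paragraph [0247]: prefer the pixel to the
         * left (P1); at the left boundary, use the pixel to the right (P2) */
        return valid[1] ? neigh[1] : neigh[2];
    }
    return P;                              /* not defective: value unchanged      */
}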

[0249] It should be noted that the defective pixel detection/correction techniques applied during ISP front-end statistics processing may be less robust than defective pixel detection/correction performed in the ISP pipeline logic 82. For example, as will be described in further detail below, defective pixel detection/correction performed in the ISP pipeline logic 82 may, in addition to dynamic defect correction, further provide for fixed defect correction, in which the locations of defective pixels are known a priori and loaded into one or more defect tables. Further, dynamic defect correction in the ISP pipeline logic 82 may also consider pixel gradients in both horizontal and vertical directions, and may also provide for the detection/correction of speckling, as will be discussed below.

[0250] Referring back to Fig. 37, the output of the DPDC logic 460 is then passed to the black level compensation (BLC) logic 462. The BLC logic 462 may provide for digital gain, offset and clipping independently for each color component c (e.g., R, B, Gr and Gb for Bayer) on the pixels used for statistics collection. For instance, as expressed by the following operation, the input value for the current pixel is first offset by a signed value and then multiplied by a gain:

Y = (X + O[c]) × G[c],    (11)

where X represents the input pixel value for a given color component c (e.g., R, B, Gr or Gb), O[c] represents a signed 16-bit offset for the current color component c, and G[c] represents a gain value for the color component c. In one embodiment, the gain G[c] may be a 16-bit unsigned number with 2 integer bits and 14 fractional bits (e.g., 2.14 floating point representation), and the gain G[c] may be applied with rounding. By way of example only, the gain G[c] may have a range of between 0 to 4X (e.g., 4 times the input pixel value).

[0251] Next, as shown below in equation 12, the computed value Y, which is signed, may then be clipped to a minimum and a maximum range:

Y = (Y < min[c]) ? min[c] : ((Y > max[c]) ? max[c] : Y)    (12)

[0252] The variables min[c] and max[c] may represent signed 16-bit clipping values for the minimum and maximum output values, respectively. In one embodiment, the BLC logic 462 may also be configured to maintain, per color component, a count of the number of pixels that were clipped above and below the maximum and minimum, respectively.
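By way of a non-limiting sketch of the operations of equations 11 and 12, including the per-component clip counters just mentioned, the black level compensation of one pixel could be written as follows; the structure layout, fixed-point handling and names are illustrative assumptions.

#include <stdint.h>

/* Sketch of black level compensation for one pixel of color component c. */
typedef struct {
    int32_t  O;                      /* signed 16-bit offset O[c]            */
    uint32_t G;                      /* gain G[c] in 2.14 fixed point        */
    int32_t  min, max;               /* signed 16-bit clipping values        */
    uint32_t clipped_low, clipped_high;
} blc_channel_t;

static int32_t blc_apply(blc_channel_t *ch, int32_t X)
{
    /* equation 11: offset first, then gain; >> 14 drops the fractional bits */
    int64_t Y = ((int64_t)(X + ch->O) * ch->G) >> 14;

    /* equation 12: clip to [min, max] and count the clipped pixels */
    if (Y < ch->min) { ch->clipped_low++;  Y = ch->min; }
    if (Y > ch->max) { ch->clipped_high++; Y = ch->max; }
    return (int32_t)Y;
}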

[0253] Next, the output of the BLC logic 462 is forwarded to the lens shading correction (LSC) logic 464. The LSC logic 464 may be configured to apply an appropriate gain on a per-pixel basis to compensate for drop-offs in intensity, which are generally roughly proportional to the distance from the optical center of the lens 88 of the imaging device 30. As will be appreciated, such drop-offs may be the result of the geometric optics of the lens. By way of example, a lens having ideal optical properties may be modeled as the fourth power of the cosine of the incident angle, cos4(θ), referred to as the cos4 law. However, because lens manufacturing is not perfect, various irregularities in the lens may cause the optical properties to deviate from the assumed cos4 model. For instance, the thinner edges of the lens usually exhibit the most irregularities. Additionally, irregularities in lens shading patterns may also be the result of a microlens array within the image sensor not being perfectly aligned with the color filter array. Further, the infrared (IR) filter in some lenses may cause the drop-off to be illuminant-dependent and, thus, lens shading gains may be adapted depending upon the light source detected.

[0254] Fig. 40 illustrates a three-dimensional profile 496 depicting light intensity versus pixel position for a typical lens. As shown, the light intensity near the center 498 of the lens gradually drops off toward the corners or edges 500 of the lens. The lens shading irregularities depicted in Fig. 40 are further illustrated by Fig. 41, which shows a colored drawing of an image 502 that exhibits drop-offs in light intensity toward the corners and edges. In particular, it should be noted that the light intensity at the approximate center of the image appears to be brighter than the light intensity at the corners and/or edges of the image.

[0255] In accordance with embodiments of the present invention, lens shading correction gains may be specified as a two-dimensional grid of gains per color channel (e.g., Gr, R, B, Gb for a Bayer filter). The gain grid points may be distributed at fixed horizontal and vertical intervals within the primary frame 278 (Fig. 19). As discussed above with respect to Fig. 19, the primary frame 278 may include an active region 280 that defines an area on which processing is performed for a particular image processing operation. With regard to the lens shading correction operation, an active processing region, which may be referred to as the LSC region, is defined within the primary frame region 278. As will be discussed below, the LSC region must be completely inside or at the gain grid boundaries, otherwise results may be undefined.

[0256] For example, Fig. 42 shows an LSC region 504 and a gain grid 506 that may be defined within the primary frame 278. The LSC region 504 may have a width 508 and a height 510, and may be defined by an x-offset 512 and a y-offset 514 relative to the boundary of the primary frame 278. Grid offsets (e.g., a grid x-offset 516 and a grid y-offset 518) from the base 520 of the gain grid 506 to the first pixel 522 in the LSC region 504 are also provided. These offsets may lie within the first grid interval for a given color component. The horizontal (x-direction) and vertical (y-direction) grid point intervals 524 and 526, respectively, may be specified independently for each color channel.

[0257] As discussed above, assuming a Bayer color filter array is used, 4 color channels of grid gains (R, B, Gr and Gb) may be defined. In one embodiment, a total of 4K (4096) grid points may be available, and for each color channel a base address for the starting location of its grid gains may be provided, such as by using a pointer. Further, the horizontal (524) and vertical (526) grid point intervals may be defined in terms of pixels at the resolution of one color plane and, in certain embodiments, may provide for grid point intervals separated by a power of 2, such as 8, 16, 32, 64, or 128, etc., in the horizontal and vertical directions. As will be appreciated, by utilizing a power of 2, efficient implementation of gain interpolation using shift operations (e.g., division) and addition may be achieved. Using these parameters, the same gain values may be used even as the image sensor cropping region changes. For instance, only a few parameters need to be updated to align the grid points to the cropped region (e.g., updating the grid offsets 524 and 526) instead of updating all grid gain values. By way of example only, this may be useful when cropping is used during digital zoom operations. Further, while the gain grid 506 shown in the embodiment of Fig. 42 is depicted as having generally equally spaced grid points, it should be understood that in other embodiments the grid points are not necessarily required to be equally spaced. For instance, in some embodiments, the grid points may be distributed unevenly (e.g., logarithmically), such that the grid points are less concentrated in the center of the LSC region 504 and more concentrated toward the corners of the LSC region 504, where lens shading distortion is typically more noticeable.

[0258] In accordance with the presently disclosed lens shading correction techniques, when a current pixel position is located outside of the LSC region 504, no gain is applied (e.g., the pixel passes unchanged). When the current pixel position is at a gain grid position, the gain value at that particular grid point may be used. However, when a current pixel position is between grid points, the gain may be interpolated using bilinear interpolation. An example of interpolating the gain for the pixel position "G" of Fig. 43 is provided below.

[0259] As shown in Fig. 43, the pixel G is between the grid pixels G0, G1, G2 and G3, which may correspond to the top-left, top-right, bottom-left and bottom-right gains, respectively, relative to the current pixel position G. The horizontal and vertical sizes of the grid interval are represented by X and Y, respectively. Additionally, ii and jj represent the horizontal and vertical pixel offsets, respectively, relative to the position of the top-left gain G0. Based upon these factors, the gain corresponding to the position G may thus be interpolated as follows:

G = [G0·(Y − jj)·(X − ii) + G1·(Y − jj)·ii + G2·jj·(X − ii) + G3·ii·jj] / (X·Y)    (13a)

The terms in equation 13a above may then be combined to obtain the following expression:

In one embodiment, the interpolation may be performed incrementally, instead of using a multiplier at each pixel, thereby reducing computational complexity. For instance, the term (ii)(jj) may be realized using an adder that may be initialized to 0 at location (0, 0) of the gain grid 506 and incremented by the current row number each time the current column number increases by a pixel. As discussed above, since the values of X and Y may be selected as powers of two, gain interpolation may be accomplished using simple shift operations. Thus, the multiplier is needed only at the grid point G0 (instead of at every pixel), and only addition operations are needed to determine the interpolated gain for the remaining pixels.
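A minimal sketch of this bilinear interpolation, assuming the formulation of equation 13a above and power-of-two grid intervals so that the division reduces to a shift, is given below; the function name and argument conventions are assumptions.

#include <stdint.h>

/* Sketch: G0..G3 are the top-left, top-right, bottom-left and bottom-right
 * grid gains, ii and jj the pixel offsets from G0, and the grid intervals
 * are X = 1 << log2X and Y = 1 << log2Y. */
static uint32_t lsc_interp_gain(uint32_t G0, uint32_t G1, uint32_t G2, uint32_t G3,
                                uint32_t ii, uint32_t jj,
                                unsigned log2X, unsigned log2Y)
{
    uint32_t X = 1u << log2X, Y = 1u << log2Y;

    uint64_t num = (uint64_t)G0 * (Y - jj) * (X - ii)
                 + (uint64_t)G1 * (Y - jj) * ii
                 + (uint64_t)G2 * jj       * (X - ii)
                 + (uint64_t)G3 * jj       * ii;

    return (uint32_t)(num >> (log2X + log2Y));   /* divide by X*Y via a shift */
}

In an incremental hardware implementation, as noted above, these products would instead be maintained with adders that accumulate the row and column offsets as the pixel position advances.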

[0260] In certain embodiments, the interpolation of gains between the grid points may use 14-bit precision, and the grid gains may be unsigned 10-bit values with 2 integer bits and 8 fractional bits (e.g., 2.8 floating point representation). Using this convention, the gain may have a range of between 0 and 4X, and the gain resolution between grid points may be 1/256.

[0261] The lens shading correction techniques may be further illustrated by the process 528 shown in Fig. 44. As shown, process 528 begins at step 530, at which the position of the current pixel is determined relative to the boundaries of the LSC region 504 of Fig. 42. Next, decision logic 532 determines whether the current pixel position is within the LSC region 504. If the current pixel position is outside of the LSC region 504, the process 528 continues to step 534, and no gain is applied to the current pixel (e.g., the pixel passes unchanged).

[0262] If the current pixel position is within the LSC region 504, the process 528 continues to decision logic 536, at which it is further determined whether the current pixel position corresponds to a grid point within the gain grid 506. If the current pixel position corresponds to a grid point, then the gain value at that grid point is selected and applied to the current pixel, as shown at step 538. If the current pixel position does not correspond to a grid point, then the process 528 continues to step 540, and a gain is interpolated based upon the bordering grid points (e.g., G0, G1, G2 and G3 of Fig. 43). For instance, the interpolated gain may be computed in accordance with equations 13a and 13b, as discussed above. Thereafter, process 528 ends at step 542, at which the interpolated gain from step 540 is applied to the current pixel.

[0263] As will be appreciated, the process 528 may be repeated for each pixel of the image data. For instance, Fig. 45 illustrates a three-dimensional profile depicting the gains that may be applied to each pixel position within an LSC region (e.g., 504). As shown, the gain applied at the corners 544 of the image may generally be greater than the gain applied to the center 546 of the image, due to the greater drop-off in light intensity at the corners, as shown in Figs. 40 and 41. Using the presently described lens shading correction techniques, the appearance of light intensity drop-offs in the image may be reduced or substantially eliminated. For instance, Fig. 46 provides an example of how the colored drawing of the image 502 from Fig. 41 may appear after lens shading correction is applied. As shown, compared to the original image from Fig. 41, the overall light intensity is generally more uniform across the image. Particularly, the light intensity at the approximate center of the image may be substantially equal to the light intensity values at the corners and/or edges of the image. Additionally, as mentioned above, the interpolated gain computation (equations 13a and 13b) may, in some embodiments, be replaced with an additive "delta" between grid points by taking advantage of the sequential column and row incrementing structure. As will be appreciated, this reduces computational complexity.

[0264] In further embodiments, in addition to using grid gains, a global gain per color component that is scaled as a function of the distance from the image center is used. The center of the image may be provided as an input parameter and may be estimated by analyzing the light intensity amplitude of each image pixel in a uniformly illuminated image. The radial distance between the identified center pixel and the current pixel may then be used to obtain a linearly scaled radial gain Gr, as shown below:

Gr = Gp[c] × R,    (14)

where Gp[c] represents a global gain parameter for each color component c (e.g., the R, B, Gr and Gb components for a Bayer pattern), and where R represents the radial distance between the center pixel and the current pixel.

[0265] Referring to Fig. 47, which shows the LSC region 504 discussed above, the distance R may be calculated or estimated using several techniques. As shown, the pixel C corresponding to the image center may have the coordinates (x0, y0), and the current pixel G may have the coordinates (xG, yG). In one embodiment, the LSC logic 464 may calculate the distance R using the following equation:

R = √((xG − x0)² + (yG − y0)²)    (15)

[0266] In another embodiment, a simpler estimation formula, shown below, may be utilized to obtain an estimated value for R:

R = α × max(abs(xG − x0), abs(yG − y0)) + β × min(abs(xG − x0), abs(yG − y0))    (16)

In equation 16, the estimation coefficients α and β may be scaled to 8-bit values. By way of example only, in one embodiment, α may be equal to approximately 123/128 and β may be equal to approximately 51/128 to provide an estimated value for R. Using these coefficient values, the largest error may be approximately 4%, with a median error of approximately 1.3%. Thus, even though the estimation technique may be somewhat less accurate than utilizing the calculation technique in determining R (equation 15), the margin of error is low enough that the estimated values of R are suitable for determining radial gain components for the present lens shading correction techniques.
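As a sketch of the estimation of equation 16 with the example coefficient values quoted above (α ≈ 123/128 and β ≈ 51/128 in 8-bit fixed point), the radial distance could be approximated as follows; the function name is an assumption.

#include <stdlib.h>

/* Sketch of the simplified radial-distance estimate of equation 16. */
static int radial_distance_estimate(int xG, int yG, int x0, int y0)
{
    int dx = abs(xG - x0);
    int dy = abs(yG - y0);
    int mx = (dx > dy) ? dx : dy;
    int mn = (dx > dy) ? dy : dx;

    /* R ≈ alpha*max + beta*min with alpha = 123/128 and beta = 51/128 */
    return (123 * mx + 51 * mn) >> 7;
}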

[0267] Next, the radial gain Gr may be multiplied by the interpolated grid gain value G (equations 13a and 13b) for the current pixel to determine a total gain that may be applied to the current pixel. The output pixel Y is obtained by multiplying the input pixel value X by the total gain, as shown below:

Y = (G × Gr × X)    (17)

Thus, in accordance with the present techniques, lens shading correction may be performed using only the interpolated gain, or using both the interpolated gain and a radial gain component. Alternatively, lens shading correction may also be accomplished using only the radial gain in conjunction with a radial grid table that compensates for radial approximation errors. For example, instead of the rectangular gain grid 506 shown in Fig. 42, a radial gain grid having a plurality of grid points defining gains in the radial and angular directions may be provided. Thus, when determining the gain to apply to a pixel that does not align with one of the radial grid points within the LSC region 504, interpolation may be applied using the four grid points enclosing the pixel to determine an appropriate interpolated lens shading gain.

[0268] Referring to Fig. 48, the use of interpolated and radial gain components in lens shading correction is illustrated by process 548. It should be noted that process 548 may include steps that are similar to the process 528 described above in Fig. 44. Accordingly, such steps have been numbered with like reference numbers. Beginning at step 530, the current pixel is received and its position relative to the LSC region 504 is determined. Next, decision logic 532 determines whether the current pixel position is within the LSC region 504. If the current pixel position is outside of the LSC region 504, the process 548 continues to step 534, and no gain is applied to the current pixel (e.g., the pixel passes unchanged). If the current pixel position is within the LSC region 504, then the process 548 may continue concurrently to step 550 and decision logic 536. At step 550, data identifying the center of the image is retrieved. As discussed above, determining the center of the image may include analyzing light intensity amplitudes for the pixels under uniform illumination. This may occur, for example, during calibration. Thus, it should be understood that step 550 does not necessarily entail repeatedly calculating the center of the image for processing each pixel, but may refer to retrieving the data (e.g., coordinates) of a previously determined image center. Once the center of the image is identified, the process 548 may continue to step 552, at which the distance (R) between the image center and the current pixel position is determined. As discussed above, the value of R may be calculated (equation 15) or estimated (equation 16). Then, at step 554, the radial gain component Gr may be computed using the distance R and the global gain parameter corresponding to the color component of the current pixel (equation 14). The radial gain component Gr may be used to determine the total gain, as will be discussed at step 558 below.

[0269] Decision logic 536 determines whether the current pixel position corresponds to a grid point within the gain grid 506. If the current pixel position corresponds to a grid point, then the gain value at that grid point is determined, as shown at step 556. If the current pixel position does not correspond to a grid point, then the process 548 continues to step 540, and an interpolated gain is computed based upon the bordering grid points (e.g., G0, G1, G2 and G3 of Fig. 43). For instance, the interpolated gain may be computed in accordance with equations 13a and 13b, as discussed above. Next, at step 558, a total gain is determined based upon the radial gain determined at step 554, as well as one of the grid gain (step 556) or the interpolated gain (step 540). As will be appreciated, this may depend upon which branch decision logic 536 takes during the process 548. The total gain is then applied to the current pixel, as shown at step 560. Again, it should be noted that, like the process 528, the process 548 may also be repeated for each pixel of the image data.
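Purely as an illustrative sketch of the per-pixel flow of process 548, the combination of the radial gain of equation 14 with the grid or interpolated gain, and its application per equation 17, might look as follows; floating point is used for clarity (the hardware described above uses fixed point), and all names are assumptions.

/* Sketch of per-pixel lens shading correction per process 548 (Fig. 48).
 * Ggain is either the exact grid-point gain (step 556) or the interpolated
 * gain (step 540), resolved by the caller. */
static float lsc_process_pixel(float X, int in_lsc_region,
                               float Ggain, float Gp_c, float R)
{
    if (!in_lsc_region)
        return X;                    /* step 534: no gain applied               */

    float Gr     = Gp_c * R;         /* step 554: radial gain, equation 14      */
    float Gtotal = Gr * Ggain;       /* step 558: total gain                    */
    return Gtotal * X;               /* step 560: Y = G * Gr * X (equation 17)  */
}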

[0270] The use of the radial gain in conjunction with the grid gains may offer various advantages. For instance, using a radial gain allows the use of a single common gain grid for all color components. This may greatly reduce the total storage space required for storing separate gain grids for each color component. For instance, in a Bayer image sensor, the use of a single gain grid for each of the R, B, Gr and Gb components may reduce the gain grid data by approximately 75%. As will be appreciated, this reduction in grid gain data may decrease implementation costs, as grid gain data tables may account for a significant portion of memory or chip area in image processing hardware. Further, depending upon the hardware implementation, the use of a single set of gain grid values may offer further advantages, such as reducing overall chip area (e.g., when the gain grid values are stored in on-chip memory) and reducing memory bandwidth requirements (e.g., when the gain grid values are stored in off-chip external memory).

[0271] Having thoroughly described the functionality of the lens shading correction logic 464 shown in Fig. 37, the output of the LSC logic 464 is subsequently forwarded to the inverse black level compensation (IBLC) logic 466. The IBLC logic 466 provides gain, offset and clipping independently for each color component (e.g., R, B, Gr and Gb) and generally performs the inverse function of the BLC logic 462. For instance, as shown by the following operation, the value of the input pixel is first multiplied by a gain and then offset by a signed value:

Y = (X × G[c]) + O[c],    (18)

where X represents the input pixel value for a given color component c (e.g., R, B, Gr or Gb), O[c] represents a signed 16-bit offset for the current color component c, and G[c] represents a gain value for the color component c. In one embodiment, the gain G[c] may have a range of between approximately 0 to 4X (4 times the input pixel value X). It should be noted that these variables may be the same variables discussed above in equation 11. The computed value Y may be clipped to a minimum and maximum range using, for example, equation 12. In one embodiment, the IBLC logic 466 may be configured to maintain, per color component, a count of the number of pixels that were clipped above and below the maximum and minimum, respectively.
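As a brief counterpart to the BLC sketch given earlier, the inverse operation described above (gain first, then signed offset, then clipping) could be sketched as follows; the fixed-point format and names are assumptions.

#include <stdint.h>

/* Sketch of inverse black level compensation (IBLC) for one pixel. */
static int32_t iblc_apply(int32_t X, int32_t O, uint32_t G_2_14,
                          int32_t minv, int32_t maxv)
{
    int64_t Y = (((int64_t)X * G_2_14) >> 14) + O;   /* Y = X*G[c] + O[c]      */

    if (Y < minv) Y = minv;                          /* clip as in equation 12 */
    if (Y > maxv) Y = maxv;
    return (int32_t)Y;
}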

[0272] Thereafter, the output of the IBLC logic 466 is received by the statistics collection block 468, which may provide for the collection of various statistical data points about the image sensor(s) 90, such as those relating to auto-exposure (AE), auto-white-balance (AWB), auto-focus (AF), flicker detection, and so forth. With this in mind, a description of certain embodiments of the statistics collection block 468 and various aspects related thereto is provided below with reference to Figs. 48-66.

[0273] As will be appreciated, AWB, AE and AF statistics may be used in the acquisition of images in digital still cameras as well as digital video cameras. For simplicity, AWB, AE and AF statistics may be collectively referred to herein as "3A statistics." In the embodiment of the ISP front-end logic illustrated in Fig. 37, the architecture of the statistics collection logic 468 ("3A statistics logic") may be implemented in hardware, software, or a combination thereof. Further, control software or firmware may be utilized to analyze the statistics data collected by the 3A statistics logic 468 and to control various parameters of the lens (e.g., focal length), of the sensor (e.g., analog gains, integration times), and of the ISP pipeline 82 (e.g., digital gains, color correction matrix coefficients). In certain embodiments, the image processing circuitry 32 may be configured to provide flexibility in statistics collection to enable control software or firmware to implement various AWB, AE and AF algorithms.

[0274] With regard to white balancing (AWB), the image sensor response at each pixel may depend on the illumination source, since the light source is reflected from objects in the image scene. Thus, each pixel value recorded in the image scene is related to the color temperature of the light source. For instance, Fig. 48 shows a graph 570 illustrating the color range of white areas under low and high color temperatures for the YCbCr color space. As shown, the x-axis of the graph 570 represents the blue-difference chroma (Cb) and the y-axis represents the red-difference chroma (Cr) of the YCbCr color space. The graph 570 also shows a low color temperature axis 572 and a high color temperature axis 574. The region 576 in which the axes 572 and 574 are positioned represents the color range of white areas under low and high color temperatures in the YCbCr color space. It should be understood, however, that the YCbCr color space is merely one example of a color space that may be used in conjunction with auto white balance processing in the present embodiment. Other embodiments may utilize any suitable color space. For instance, in certain embodiments, other suitable color spaces may include a Lab (CIELab) color space (e.g., based on CIE 1976), a normalized red/blue color space (e.g., an R/(R+2G+B) and B/(R+2G+B) color space; an R/G and B/G color space; a Cb/Y and Cr/Y color space, etc.). Accordingly, for the purposes of this disclosure, the axes of the color space used by the 3A statistics logic 468 may be referred to as C1 and C2 (as is the case in Fig. 49).

[0275] When a white object is illuminated under a low color temperature, it may appear reddish in the captured image. Conversely, a white object illuminated under a high color temperature may appear bluish in the captured image. The goal of white balancing is, therefore, to adjust the RGB values such that the image appears to the human eye as if it were captured under canonical light. Thus, in the context of imaging statistics relating to white balance, color information about white objects is collected to determine the color temperature of the light source. In general, white balance algorithms may include two main steps. First, the color temperature of the light source is estimated. Second, the estimated color temperature is used to adjust color gain values and/or to determine/adjust the coefficients of a color correction matrix. Such gains may be a combination of analog and digital image sensor gains, as well as ISP digital gains.

[0276] For instance, in some embodiments, the imaging device 30 may be calibrated using multiple different reference illuminants. Accordingly, the white point of the current scene may be determined by selecting the color correction coefficients corresponding to the reference illuminant that most closely matches the illuminant of the current scene. By way of example only, one embodiment may calibrate the imaging device 30 using five reference illuminants: a low color temperature illuminant, a middle-low color temperature illuminant, a middle color temperature illuminant, a middle-high color temperature illuminant, and a high color temperature illuminant. As shown in Fig. 50, one embodiment may define white balance gains using the following color correction profiles: Horizon (H) (simulating a color temperature of approximately 2300 degrees), Incandescent (A or IncA) (simulating a color temperature of approximately 2856 degrees), D50 (simulating a color temperature of approximately 5000 degrees), D65 (simulating a color temperature of approximately 6500 degrees) and D75 (simulating a color temperature of approximately 7500 degrees).

[0277] Depending upon the illuminant of the current scene, white balance gains may be determined using the gains corresponding to the reference illuminant that most closely matches the current illuminant. For instance, if the statistics logic 468 (described in more detail below in Fig. 51) determines that the current illuminant approximately matches the reference middle color temperature illuminant, D50, then white balance gains of approximately 1.23 and 1.37 may be applied to the red and blue color channels, respectively, while approximately no gain (1.0) is applied to the green channels (G0 and G1 for Bayer data). In some embodiments, if the current illuminant color temperature is in between two reference illuminants, white balance gains may be determined by interpolating the white balance gains between the two reference illuminants. Further, while the present example shows an imaging device being calibrated using H, A, D50, D65 and D75 illuminants, it should be understood that any suitable type of illuminant may be used for camera calibration, such as TL84 or CWF (fluorescent reference illuminants), and so forth.
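Purely by way of illustration of the interpolation mentioned above, white balance gains for a color temperature falling between two calibrated reference illuminants might be obtained by linear interpolation as sketched below; the structure, reference values and function name are assumptions and do not reflect a particular calibration.

/* Sketch: linear interpolation of white balance gains between two
 * calibrated reference illuminants, keyed by color temperature. */
typedef struct {
    float temp_k;                  /* reference color temperature (Kelvin) */
    float r_gain, g_gain, b_gain;  /* calibrated white balance gains       */
} wb_ref_t;

static wb_ref_t interp_wb_gains(wb_ref_t lo, wb_ref_t hi, float current_temp_k)
{
    float t = (current_temp_k - lo.temp_k) / (hi.temp_k - lo.temp_k);

    wb_ref_t out;
    out.temp_k = current_temp_k;
    out.r_gain = lo.r_gain + t * (hi.r_gain - lo.r_gain);
    out.g_gain = lo.g_gain + t * (hi.g_gain - lo.g_gain);
    out.b_gain = lo.b_gain + t * (hi.b_gain - lo.b_gain);
    return out;
}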

[0278] As will be further discussed below, several statistics may be provided for AWB, including a two-dimensional (2D) color histogram, and RGB or YCC sums to provide multiple programmable color ranges. For instance, in one embodiment, the statistics logic 468 may provide a set of multiple pixel filters, of which a subset may be selected for AWB processing. In one embodiment, eight sets of filters, each with different configurable parameters, may be provided, and three sets of color range filters may be selected from the set for gathering tile statistics, as well as statistics for each floating window. By way of example, a first selected filter may be configured to cover the current color temperature to obtain accurate color estimation, a second selected filter may be configured to cover the low color temperature areas, and a third selected filter may be configured to cover the high color temperature areas. This particular configuration may enable the AWB algorithm to adjust the current color temperature area as the light source changes. Further, the 2D color histogram may be utilized to determine the global and local illuminants and to determine various pixel filter thresholds for accumulating RGB values. Again, it should be understood that the selection of three pixel filters is meant to illustrate just one embodiment. In other embodiments, fewer or more pixel filters may be selected for AWB statistics.

[0279] Further, in addition to selecting three pixel filters, one additional pixel filter may also be used for auto-exposure (AE), which generally refers to the process of adjusting pixel integration time and gains to control the luminance of the captured image. For instance, auto-exposure may control the amount of light from the scene that is captured by the image sensor(s) by setting the integration time. In certain embodiments, tiles and floating windows of luminance statistics may be collected via the 3A statistics logic 468 and processed to determine integration and gain control parameters.

[0280] Further, auto-focus may refer to determining the optimal focal length of the lens in order to substantially optimize the focus of the image. In certain embodiments, floating windows of high frequency statistics may be collected, and the focal length of the lens may be adjusted to bring the image into focus. As further discussed below, in one embodiment, auto-focus adjustments may utilize coarse and fine adjustments based upon one or more metrics, referred to as auto-focus scores (AF scores), to bring the image into focus. Further, in certain embodiments, AF statistics/scores may be determined for different colors, and the relativity between the AF statistics/scores for each color channel may be used to determine the direction of focus.

[0281] Thus, these various types of statistics, among others, may be determined and collected via the statistics collection block 468. As shown, the output STATS0 of the statistics collection block 468 of the Sensor0 statistics processing unit 120 may be sent to the memory 108 and routed to the control logic 84 or, alternatively, may be sent directly to the control logic 84. Further, it should be understood that the Sensor1 statistics processing unit 122 may also include a similarly configured 3A statistics collection block that provides statistics STATS1, as shown in Fig. 8.

[0282] As discussed above, the control logic 84, which may be a dedicated processor in the ISP subsystem 32 of the device 10, may process the collected statistical data to determine one or more control parameters for controlling the imaging device 30 and/or the image processing circuitry 32. For instance, such control parameters may include parameters for operating the lens of the image sensor 90 (e.g., focal length adjustment parameters), image sensor parameters (e.g., analog and/or digital gains, integration time), as well as ISP pipeline processing parameters (e.g., digital gain values, color correction matrix (CCM) coefficients). Additionally, as mentioned above, in certain embodiments, statistical processing may occur at a precision of 8 bits and, thus, raw pixel data having a higher bit depth may be down-sampled to an 8-bit format for statistics purposes. As discussed above, down-sampling to 8 bits (or any other lower-bit resolution) may reduce hardware size (e.g., area) and also reduce processing complexity, as well as allow the statistical data to be more robust to noise (e.g., using spatial averaging of the image data).

[0283] With the foregoing in mind, Fig. 51 is a block diagram depicting logic for implementing one embodiment of the 3A statistics logic 468. As shown, the 3A statistics logic 468 may receive a signal 582 representing Bayer RGB data which, as shown in Fig. 37, may correspond to the output of the inverse BLC logic 466. The 3A statistics logic 468 may process the Bayer RGB data 582 to obtain various statistics 584, which may represent the output STATS0 of the 3A statistics logic 468, as shown in Fig. 37, or, alternatively, the output STATS1 of statistics logic associated with the Sensor1 statistics processing unit 122.

[0284] In the illustrated embodiment, so that the statistics are more robust to noise, the incoming Bayer RGB pixels 582 are first averaged by the logic 586. For instance, the averaging may be performed in a window size of 4×4 sensor pixels consisting of four 2×2 Bayer quads (e.g., a 2×2 block of pixels representing the Bayer pattern), and the averaged red (R), green (G) and blue (B) values in the 4×4 window may be computed and converted to 8 bits, as mentioned above. This process is illustrated in more detail in Fig. 52, which shows a 4×4 window 588 of pixels formed as four 2×2 Bayer quads 590. Using this arrangement, each color channel includes a 2×2 block of corresponding pixels within the window 588, and same-colored pixels may be summed and averaged to produce an average color value for each color channel within the window 588. For instance, the red pixels 594 may be averaged to obtain an average red value (R_AV) 604, and the blue pixels 596 may be averaged to obtain an average blue value (B_AV) 606 within the sample 588. With regard to averaging of the green pixels, several techniques may be utilized, since the Bayer pattern has twice as many green samples as red or blue samples. In one embodiment, the average green value (G_AV) 602 may be obtained by averaging just the Gr pixels 592, just the Gb pixels 598, or all of the Gr and Gb pixels 592 and 598 together. In another embodiment, the Gr and Gb pixels 592 and 598 in each Bayer quad 590 may be averaged, and the averages of the green values for each Bayer quad 590 may be further averaged together to obtain G_AV 602. As will be appreciated, the averaging of the pixel values across pixel blocks may provide for the reduction of noise. Further, it should be understood that the use of a 4×4 block as a window sample is merely intended to provide one example. Indeed, in other embodiments, any suitable block size may be utilized (e.g., 8×8, 16×16, 32×32, etc.).
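To make the windowed averaging of Fig. 52 concrete, a sketch is provided below; it assumes a GR/BG Bayer ordering within the 4×4 window, an input precision of at least 8 bits, and averaging of the Gr and Gb samples together for the green value, all of which are assumptions for this example.

#include <stdint.h>

/* Sketch of 4x4 Bayer down-sampling: one averaged R, G and B value per
 * window, scaled to 8 bits. 'bits' is the input bit depth (assumed >= 8). */
static void bayer_4x4_average(const uint16_t win[4][4], int bits,
                              uint8_t *r_av, uint8_t *g_av, uint8_t *b_av)
{
    uint32_t r = 0, g = 0, b = 0;

    for (int y = 0; y < 4; y++) {
        for (int x = 0; x < 4; x++) {
            int even_row = ((y & 1) == 0), even_col = ((x & 1) == 0);
            if (even_row && even_col)      g += win[y][x];   /* Gr */
            else if (even_row)             r += win[y][x];   /* R  */
            else if (even_col)             b += win[y][x];   /* B  */
            else                           g += win[y][x];   /* Gb */
        }
    }

    /* 4 red, 4 blue and 8 green samples per window; shift down to 8 bits */
    *r_av = (uint8_t)((r / 4) >> (bits - 8));
    *g_av = (uint8_t)((g / 8) >> (bits - 8));
    *b_av = (uint8_t)((b / 4) >> (bits - 8));
}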

[0285] Thereafter, the down-scaled Bayer RGB values 610 are input to the color space conversion logic units 612 and 614. Because some of the 3A statistics may rely upon pixels after a color space conversion has been applied, the color space conversion (CSC) logic 612 and the CSC logic 614 may be configured to convert the down-sampled Bayer RGB values 610 into one or more other color spaces. In one embodiment, the CSC logic 612 may provide a non-linear space conversion and the CSC logic 614 may provide a linear space conversion. Thus, the CSC logic units 612 and 614 may convert the raw image data from sensor Bayer RGB to another color space (e.g., sRGB_linear, sRGB, YCbCr, etc.) that may be more ideal or suitable for performing white point estimation for white balance.

[0286] In the present embodiment, the non-linear CSC logic 612 may be configured to perform a 3×3 matrix multiplication, followed by a non-linear mapping implemented as a lookup table, and further followed by another 3×3 matrix multiplication with an added offset. This allows the 3A statistics color space conversion to replicate the color processing of the RGB processing in the ISP pipeline 82 (e.g., applying white balance gain, applying a color correction matrix, applying RGB gamma adjustments, and performing color space conversion) for a given color temperature. It may also provide for conversion of the Bayer RGB values to a more color-consistent color space such as CIELab, or any of the other color spaces discussed above (e.g., YCbCr, a normalized red/blue color space, etc.). Under some conditions, a Lab color space may be more suitable for white balance operations because the chrominance is more linear with respect to brightness.

[0287] As shown in Fig. 51, the down-scaled Bayer RGB output pixels of signal 610 are processed with a first 3×3 color correction matrix (3A_CCM), referred to herein by reference number 616. In the present embodiment, the 3A_CCM 616 may be configured to convert from a camera RGB color space (camRGB) to a linear calibrated sRGB space (sRGB_linear). A programmable color space conversion that may be used in one embodiment is provided below by equations 19-21:

sR_linear = max(0, min(255, (3A_CCM_00 × R + 3A_CCM_01 × G + 3A_CCM_02 × B)))    (19)

sG_linear = max(0, min(255, (3A_CCM_10 × R + 3A_CCM_11 × G + 3A_CCM_12 × B)))    (20)

sB_linear = max(0, min(255, (3A_CCM_20 × R + 3A_CCM_21 × G + 3A_CCM_22 × B)))    (21)

where 3A_CCM_00 through 3A_CCM_22 represent signed coefficients of the matrix 616. Thus, each of the sR_linear, sG_linear and sB_linear components of the sRGB_linear color space may be determined by first determining the sum of the down-sampled Bayer RGB red, blue and green values multiplied by the corresponding 3A_CCM coefficients, and then clipping this value to either 0 or 255 (the minimum and maximum pixel values for 8-bit pixel data) if the value exceeds 255 or is less than 0. The resulting sRGB_linear values are represented in Fig. 51 by reference number 618 as the output of the 3A_CCM 616. Additionally, the 3A statistics logic 468 may maintain a count of the number of clipped pixels for each of the sR_linear, sG_linear and sB_linear components, as expressed below:

3A_CCM_R_clipcount_low: number of clipped pixels with sR_linear < 0

3A_CCM_R_clipcount_high: number of clipped pixels with sR_linear > 255

3A_CCM_G_clipcount_low: number of clipped pixels with sG_linear < 0

3A_CCM_G_clipcount_high: number of clipped pixels with sG_linear > 255

3A_CCM_B_clipcount_low: number of clipped pixels with sB_linear < 0

3A_CCM_B_clipcount_high: number of clipped pixels with sB_linear > 255
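As a minimal sketch of the 3A_CCM stage and its clip counters, assuming floating-point coefficients for clarity (the hardware uses signed fixed-point coefficients), the conversion of one averaged Bayer RGB sample to sRGB_linear could be written as follows; names are illustrative.

#include <stdint.h>

static uint32_t ccm_clip_low[3], ccm_clip_high[3];   /* per-component counters */

/* Sketch of the 3x3 3A_CCM multiply with clipping to [0, 255]. */
static void apply_3a_ccm(const float ccm[3][3], const int rgb_in[3],
                         uint8_t srgb_linear_out[3])
{
    for (int i = 0; i < 3; i++) {
        float v = ccm[i][0] * rgb_in[0]
                + ccm[i][1] * rgb_in[1]
                + ccm[i][2] * rgb_in[2];

        if (v < 0.0f)        { ccm_clip_low[i]++;  v = 0.0f;   }
        else if (v > 255.0f) { ccm_clip_high[i]++; v = 255.0f; }
        srgb_linear_out[i] = (uint8_t)v;
    }
}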

[0288] Next, the sRGB_linear pixels 618 may be processed using a non-linear lookup table 620 to produce sRGB pixels 622. The lookup table 620 may contain entries of 8-bit values, with each table entry value representing an output level. In one embodiment, the lookup table 620 may include 65 evenly distributed input entries, with the table indices representing input values in steps of 4. When an input value falls between intervals, the output values are linearly interpolated.
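A sketch of such a lookup, assuming an 8-bit input range covered by 65 entries at input steps of 4 and simple linear interpolation between adjacent entries, is given below; the function name is an assumption.

#include <stdint.h>

/* Sketch of the 65-entry non-linear lookup with linear interpolation.
 * For an 8-bit input x, the entry index x >> 2 is at most 63, so
 * lut[idx + 1] stays within the 65-entry table. */
static uint8_t lut65_lookup(const uint8_t lut[65], uint8_t x)
{
    uint8_t idx  = x >> 2;            /* entry at or below the input (step 4) */
    uint8_t frac = x & 0x3;           /* position between adjacent entries    */

    int d = (int)lut[idx + 1] - (int)lut[idx];
    return (uint8_t)((int)lut[idx] + (d * frac) / 4);
}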

[0289] As will be appreciated, the sRGB color space may represent the color space of the final image produced by the imaging device 30 (Fig. 7) for a given white point, since white balance statistics collection is performed in the color space of the final image produced by the imaging device. In one embodiment, a white point may be determined by matching the characteristics of the image scene to one or more reference illuminants based, for example, upon red-to-green and/or blue-to-green ratios. For instance, one reference illuminant may be D65, a CIE standard illuminant for simulating daylight conditions. In addition to D65, calibration of the imaging device 30 may also be performed for other different reference illuminants, and the white balance determination process may include determining a current illuminant so that processing (e.g., color balancing) may be adjusted for the current illuminant based upon the corresponding calibration points. By way of example, in one embodiment, the imaging device 30 and the 3A statistics logic 468 may be calibrated using, in addition to D65, a cool white fluorescent (CWF) reference illuminant, the TL84 reference illuminant (another fluorescent source), and the IncA (or A) reference illuminant, which simulates incandescent lighting. Additionally, as discussed above, various other illuminants corresponding to different color temperatures (e.g., H, IncA, D50, D65, D75, etc.) may also be used in camera calibration for white balance processing. Thus, a white point may be determined by analyzing the image scene and determining which reference illuminant most closely matches the current illuminant source.

[0290] Still referring to the non-linear CSC logic 612, the sRGB pixel output 622 of the lookup table 620 may be further processed with a second 3×3 color correction matrix 624, referred to herein as 3A_CSC. In the depicted embodiment, the 3A_CSC matrix 624 is shown as being configured to convert from the sRGB color space to the YCbCr color space, though it may also be configured to convert the sRGB values into other color spaces. By way of example, the following programmable color space conversion (equations 22-27) may be used:

where 3A_CSC_00 through 3A_CSC_22 represent signed coefficients for the matrix 624, 3A_OffsetY, 3A_OffsetC1 and 3A_OffsetC2 represent signed offsets, and C1 and C2 represent different colors, here blue-difference chroma (Cb) and red-difference chroma (Cr), respectively. It should be understood, however, that C1 and C2 may represent any suitable chroma colors, and need not necessarily be the Cb and Cr colors.

[0291] As shown in equations 22-27, in determining each component of YCbCr, the appropriate coefficients from the matrix 624 are applied to the sRGB values 622 and the result is summed with a corresponding offset (e.g., equations 22, 24 and 26). Essentially, this step is a 3×1 matrix multiplication step. The result of the matrix multiplication is then clipped between a maximum and a minimum value (e.g., equations 23, 25 and 27). The associated minimum and maximum clipping values may be programmable and may depend, for instance, upon the particular imaging or video standard (e.g., BT.601 or BT.709) being utilized.

[0292] The 3A statistics logic 468 may also maintain a count of the number of clipped pixels for each of the Y, C1 and C2 components, as expressed below:

3A_CSC_Y_clipcount_low: number of clipped pixels with Y < 3A_CSC_MIN_Y

3A_CSC_Y_clipcount_high: number of clipped pixels with Y > 3A_CSC_MAX_Y

3A_CSC_C1_clipcount_low: number of clipped pixels with C1 < 3A_CSC_MIN_C1

3A_CSC_C1_clipcount_high: number of clipped pixels with C1 > 3A_CSC_MAX_C1

3A_CSC_C2_clipcount_low: number of clipped pixels with C2 < 3A_CSC_MIN_C2

3A_CSC_C2_clipcount_high: number of clipped pixels with C2 > 3A_CSC_MAX_C2

[0293] The down-sampled Bayer RGB output pixels of signal 610 may also be provided to the linear color space conversion logic 614, which may be configured to implement a camera color space conversion. For instance, the output pixels 610 from the Bayer RGB down-sampling logic 586 may be processed via another 3×3 color conversion matrix (3A_CSC2) 630 of the CSC logic 614 to convert from sensor RGB (camRGB) to a linear, white-balanced color space (camYC1C2), in which C1 and C2 may correspond to Cb and Cr, respectively. In one embodiment, the chroma pixels may be scaled by luma, which may be beneficial for implementing a color filter with improved color consistency and robustness to color shifts due to luma changes. An example of how the camera color space conversion may be performed using the 3×3 matrix 630 is provided below in equations 28-31:

where 3A_CSC2_00 through 3A_CSC2_22 represent signed coefficients for the matrix 630, 3A_Offset2Y represents a signed offset for camY, and camC1 and camC2 represent different colors, here blue-difference chroma (Cb) and red-difference chroma (Cr), respectively. As shown in equation 28, to determine camY, the corresponding coefficients from the matrix 630 are applied to the Bayer RGB values 610, and the result is summed with 3A_Offset2Y. This result is then clipped between a maximum and a minimum value, as shown in equation 29. As discussed above, the clipping limits may be programmable.

[0294] At this point, the camC1 and camC2 pixels of the output 632 are signed. As discussed above, in some embodiments the chroma pixels may be scaled. For example, one technique for implementing chroma scaling is shown below:

where ChromaScale represents a floating point scaling factor between 0 and 8. In equations 32 and 33, the expression (camY ? camY : 1) is meant to prevent a divide-by-zero condition. That is, if camY is equal to zero, the value of camY is set to 1. Further, in one embodiment, ChromaScale may be set to one of two possible values depending upon the sign of camC1. For instance, as shown below in equation 34, ChromaScale may be set to a first value (ChromaScale0) if camC1 is negative, or else may be set to a second value (ChromaScale1):

ChromaScale = ChromaScale0, if (camC1 < 0)    (34)

ChromaScale = ChromaScale1, otherwise

[0295] Thereafter, chroma offsets are added, and the camC1 and camC2 chroma pixels are clipped, as shown below in equations 35 and 36, to produce corresponding unsigned pixel values:

where 3A_CSC2_00 through 3A_CSC2_22 are signed coefficients of the matrix 630, and 3A_Offset2C1 and 3A_Offset2C2 are signed offsets. Further, the number of pixels that are clipped for camY, camC1 and camC2 is counted, as shown below:

3A_CSC2_Y_clipcount_low: number of clipped pixels with camY < 3A_CSC2_MIN_Y

3A_CSC2_Y_clipcount_high: number of clipped pixels with camY > 3A_CSC2_MAX_Y

3A_CSC2_C1_clipcount_low: number of clipped pixels with camC1 < 3A_CSC2_MIN_C1

3A_CSC2_C1_clipcount_high: number of clipped pixels with camC1 > 3A_CSC2_MAX_C1

3A_CSC2_C2_clipcount_low: number of clipped pixels with camC2 < 3A_CSC2_MIN_C2

3A_CSC2_C2_clipcount_high: number of clipped pixels with camC2 > 3A_CSC2_MAX_C2
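A rough sketch of the chroma scaling, offset and clipping described above is given below. Because the exact fixed-point form of equations 32 and 33 is not reproduced here, this sketch assumes only that the chroma is scaled by ChromaScale and normalized by the luma value, as the (camY ? camY : 1) expression suggests; the scale selection follows equation 34, and all names and parameters are assumptions.

/* Sketch of chroma scaling (eq. 32-34), offset and clipping (eq. 35-36). */
static int32_t scale_chroma(int32_t camC, int32_t camY, int camC1_is_negative,
                            float chroma_scale0, float chroma_scale1,
                            int32_t offset, int32_t minv, int32_t maxv)
{
    /* equation 34: ChromaScale0 if camC1 < 0, otherwise ChromaScale1 */
    float scale = camC1_is_negative ? chroma_scale0 : chroma_scale1;

    /* scale and normalize by luma, guarding against division by zero */
    int32_t denom = camY ? camY : 1;
    int32_t c = (int32_t)(camC * scale) / denom;

    c += offset;                       /* add the signed chroma offset       */
    if (c < minv) c = minv;            /* clip to the unsigned output range  */
    if (c > maxv) c = maxv;
    return c;
}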

[0296] Thus, the non-linear and linear color space conversion logic 612 and 614 may, in the present embodiment, provide pixel data in various color spaces: sRGB_linear (signal 618), sRGB (signal 622), YCbCr (signal 626) and camYCbCr (signal 632). It should be understood that the coefficients for each conversion matrix 616 (3A_CCM), 624 (3A_CSC) and 630 (3A_CSC2), as well as the values in the lookup table 620, may be independently set and programmed.

[0297] Referring again to Fig. 51, the chroma output pixels from either the non-linear color space conversion (YCbCr 626) or the camera color space conversion (camYCbCr 632) may be used to generate a two-dimensional (2D) color histogram 636. As shown, selection logic 638 and 640, which may be implemented as multiplexers or by any other suitable logic, may be configured to select between luma and chroma pixels from either the non-linear or the camera color space conversion. The selection logic 638 and 640 may operate in response to respective control signals which, in one embodiment, may be supplied by the main control logic 84 of the image processing circuitry 32 (Fig. 7) and may be set via software.

[0298] For this example, it may be assumed that selection logic 638 and 640 selects the YC1C2 color space conversion (626), where the first component is luma and C1, C2 are the first and second chroma components (for example, Cb, Cr). The 2D histogram 636 in the C1-C2 color space is generated for one window. For example, the window may be specified by a column start and width, and a row start and height. In one embodiment, the window position and size may be set as multiples of 4 pixels, and 32×32 bins may be used, for a total of 1024 bins. The bin boundaries may be at fixed intervals and, in order to allow for zooming and panning of the histogram collection in specific areas of the color space, a pixel scale and offset may be specified.

[0299] The upper 5 bits (representing a total of 32 values) of C1 and C2 after offsetting and scaling may be used to determine the bin. The bin indices for C1 and C2, referred to here as C1_index and C2_index, may be determined as follows:

C1_index = (C1 - C1_offset) >> (3 - C1_scale)    (37)

C2_index = (C2 - C2_offset) >> (3 - C2_scale)    (38)

Once the indices are determined, the color histogram bins are incremented by a Count value (which, in one embodiment, may have a value between 0 and 3) if the bin indices are in the range [0, 31], as shown below in equation 39. Effectively, this allows the color counts to be weighted based on luma values (for example, brighter pixels are weighted more heavily, rather than weighting all pixels equally (for example, by 1)).

where Count is determined based on the selected luma value, Y in this example. As will be appreciated, the steps represented by equations 37, 38 and 39 may be implemented by a bin update logic block 644. Further, in one embodiment, multiple luma thresholds may be set to define luma intervals. By way of example, four luma thresholds (Ythd0-Ythd3) may define five luma intervals, with Count values Count0-4 set for each interval. For instance, Count0-Count4 may be selected (for example, by pixel condition logic 642) based on the luma thresholds as follows (a combined sketch of the bin update is given after this listing):

if (Y<=Ythd0)(40)

Count=Count0

else if (Y<=Ythd1)

Count=Count1

else if (Y<=Ythd2)

Count=Count2

else if (Y<=Ythd3)

Count=Count3

else

Count=Count4
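
By way of illustration only, the bin-index computation of equations 37 and 38, the range check of equation 39 and the luma-threshold selection above may be combined as in the following sketch, written in C. The function and variable names (hist2d_update, y_thd, count_val and so on) are hypothetical and do not correspond to any register or interface defined in this disclosure.

#include <stdint.h>

#define HIST_BINS 32   /* 32x32 bins, per one embodiment */

/* Bin the (C1, C2) pair after offset/scale (equations 37-38) and, if the
 * indices fall in [0, 31], add a luma-dependent weight Count0-Count4
 * selected by the thresholds Ythd0-Ythd3 (equations 39-40). */
void hist2d_update(uint32_t hist[HIST_BINS][HIST_BINS],
                   int y, int c1, int c2,
                   int c1_offset, int c1_scale,
                   int c2_offset, int c2_scale,
                   const int y_thd[4], const uint32_t count_val[5])
{
    int c1_index = (c1 - c1_offset) >> (3 - c1_scale);   /* equation 37 */
    int c2_index = (c2 - c2_offset) >> (3 - c2_scale);   /* equation 38 */

    if (c1_index < 0 || c1_index >= HIST_BINS ||
        c2_index < 0 || c2_index >= HIST_BINS)
        return;                                          /* outside [0, 31] */

    uint32_t count;                      /* weight from the five luma bands */
    if      (y <= y_thd[0]) count = count_val[0];
    else if (y <= y_thd[1]) count = count_val[1];
    else if (y <= y_thd[2]) count = count_val[2];
    else if (y <= y_thd[3]) count = count_val[3];
    else                    count = count_val[4];

    hist[c1_index][c2_index] += count;   /* brighter pixels carry more weight */
}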

[0300] In view of the foregoing, Fig. 53 illustrates the 2D color histogram with the scale and offsets set to zero for both C1 and C2. The CbCr space is divided into 32×32 bins (1024 bins in total). Fig. 54 shows an example of zooming and panning within the 2D color histogram for additional precision, in which a rectangular area 646, at the location indicated by the small rectangle, specifies the 32×32 bins.

[0301] At the start of a frame of image data, the bin values are initialized to zero. For each pixel that enters the 2D color histogram 636, the bin corresponding to the matching C1C2 value is incremented by the determined Count value (Count0-Count4) which, as discussed above, may be based on the luma value. For each bin within the 2D histogram 636, the total pixel count is reported as part of the collected statistics data (for example, STATS0). In one embodiment, the total pixel count for each bin may have a resolution of 22 bits, whereby an allocation of internal memory equal to 1024×22 bits is provided.

[0302] Returning to Fig. 51, the Bayer RGB pixels (signal 610), sRGBlinear pixels (signal 618), sRGB pixels (signal 622) and YC1C2 (for example, YCbCr) pixels (signal 626) are provided to a set of pixel filters 650a-c, whereby sums of RGB, sRGBlinear, sRGB, YC1C2 or camYC1C2 may be accumulated conditionally upon either camYC1C2 or YC1C2 pixel conditions, as defined for each pixel filter 650. That is, the Y, C1 and C2 values from the output of the nonlinear color space conversion (YC1C2) or the output of the camera color space conversion (camYC1C2) are used to conditionally select the RGB, sRGBlinear, sRGB or YC1C2 values to accumulate. While the present embodiment shows 3A statistics logic 468 as being provided with 8 pixel filters (PF0-PF7), it should be understood that any number of pixel filters may be provided.

[0303] Fig. 55 shows a functional logic diagram depicting an embodiment of the pixel filters, specifically PF0 (650a) and PF1 (650b) of Fig. 51. As shown, each pixel filter 650 includes selection logic that receives the Bayer RGB pixels, the sRGBlinear pixels, the sRGB pixels and one of either the YC1C2 or camYC1C2 pixels, as selected by further selection logic 654. By way of example, selection logic 652 and 654 may be implemented using multiplexers or any other suitable logic. Selection logic 654 may select either YC1C2 or camYC1C2. The selection may be made in response to a control signal which may be supplied by main control logic 84 of image processing circuitry 32 (Fig. 7) and/or set by software. Next, the pixel filter 650 may use logic 656 to evaluate the YC1C2 pixels (nonlinear or camera) selected by selection logic 654 against the pixel conditions. Each pixel filter 650 may use selection circuit 652 to select one of the Bayer RGB pixels, sRGBlinear pixels, sRGB pixels and either the YC1C2 or camYC1C2 pixels depending on the output of selection circuit 654.

[0304] Using the results of the evaluation, the pixels selected by selection logic 652 may be accumulated. In one embodiment, the pixel condition may be defined using the thresholds C1_min, C1_max, C2_min, C2_max, as shown in graph 570 of Fig. 49. A pixel is included in the statistics if it satisfies the following conditions:

1. C1_min<=C1<=C1_max

2. C2_min<=C2<=C2_max

3. abs ((C2_delta*C1)-(C1_delta*C2)+Offset)<distance_max

4. Ymin<=Y<=Ymax

Referring to the graph shown in Fig. 56, in one embodiment, point 662 represents the values (C2, C1) corresponding to the current YC1C2 pixel data as selected by logic 654. C1_delta may be determined as the difference between C1_1 and C1_0, and C2_delta may be determined as the difference between C2_1 and C2_0. As shown in Fig. 56, the points (C1_0, C2_0) and (C1_1, C2_1) may define the minimum and maximum boundaries for C1 and C2. The Offset may be determined by multiplying C1_delta by the value (C2_intercept) at which line 664 intercepts the C2 axis. Thus, assuming that Y, C1 and C2 satisfy the minimum and maximum boundary conditions, the selected pixel (Bayer RGB, sRGBlinear, sRGB or YC1C2/camYC1C2) is included in the accumulated sum if its distance 670 from line 664 is less than distance_max 672, which may be the distance 670, in pixels, from the line multiplied by a normalizing factor:

distance_max=distance*sqrt(C1_delta^2+C2_delta^2)

In the present embodiment, distance, C1_delta and C2_delta may have a range from -255 to 255. Thus, distance_max 672 may be represented by 17 bits. The points (C1_0, C2_0) and (C1_1, C2_1), as well as the parameters for determining distance_max (for example, the normalizing factor(s)), may be provided as part of the pixel condition logic 656 in each pixel filter 650. As will be appreciated, the pixel conditions 656 may be configurable/programmable.
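
For purposes of illustration, the four pixel conditions listed above may be evaluated as in the following C sketch. The structure and function names (pixel_cond, pixel_qualifies) are hypothetical, and distance_max is assumed to be precomputed by software as distance multiplied by the square-root term shown above.

#include <stdlib.h>

/* Pixel-condition parameters for one pixel filter (compare Fig. 56). */
struct pixel_cond {
    int  c1_min, c1_max;       /* condition 1 */
    int  c2_min, c2_max;       /* condition 2 */
    int  y_min, y_max;         /* condition 4 */
    int  c1_delta, c2_delta;   /* line direction: (C1_1-C1_0, C2_1-C2_0) */
    int  offset;               /* C1_delta * C2_intercept */
    long distance_max;         /* distance * sqrt(C1_delta^2 + C2_delta^2) */
};

/* Returns nonzero if the pixel (Y, C1, C2) qualifies for accumulation. */
int pixel_qualifies(const struct pixel_cond *pc, int y, int c1, int c2)
{
    if (c1 < pc->c1_min || c1 > pc->c1_max) return 0;    /* condition 1 */
    if (c2 < pc->c2_min || c2 > pc->c2_max) return 0;    /* condition 2 */
    if (y  < pc->y_min  || y  > pc->y_max)  return 0;    /* condition 4 */

    /* condition 3: scaled distance from line 664 must stay below the max */
    long d = labs((long)pc->c2_delta * c1 - (long)pc->c1_delta * c2
                  + pc->offset);
    return d < pc->distance_max;
}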

[0305] While the example shown in Fig. 56 depicts a pixel condition based on two sets of points (C1_0, C2_0) and (C1_1, C2_1), in additional embodiments certain pixel filters may define more complex shapes and regions upon which the pixel conditions are determined. For example, Fig. 57 shows an embodiment in which a pixel filter 650 may define a five-sided polygon 673 using the points (C1_0, C2_0), (C1_1, C2_1), (C1_2, C2_2), (C1_3, C2_3) and (C1_4, C2_4). Each side 674a-674e may define a line condition. However, unlike the case shown in Fig. 56 (for example, where the pixel may be on either side of line 664 provided that distance_max is satisfied), the condition may be that the pixel (C1, C2) must lie on the side of each line 674a-674e such that it is enclosed within polygon 673. Thus, the pixel (C1, C2) is counted when the intersection of the multiple line conditions is met. For instance, in Fig. 57, this is the case with respect to pixel 675a. However, pixel 675b fails to satisfy the line condition for line 674d and, therefore, would not be counted in the statistics when processed by a pixel filter configured in this manner.

[0306] In a further embodiment, shown in Fig. 58, the pixel condition may be determined based on overlapping shapes. For example, Fig. 58 shows how a pixel filter 650 may have pixel conditions defined using two overlapping shapes, in this case rectangles 676a and 676b defined by the points (C1_0, C2_0), (C1_1, C2_1), (C1_2, C2_2) and (C1_3, C2_3), and the points (C1_4, C2_4), (C1_5, C2_5), (C1_6, C2_6) and (C1_7, C2_7), respectively. In this example, a pixel (C1, C2) may satisfy the line conditions defined by such a pixel filter by being enclosed within the region jointly bounded by shapes 676a and 676b (for example, by satisfying the line conditions of each line defining both shapes). For instance, in Fig. 58, these conditions are satisfied with respect to pixel 678a. However, pixel 678b fails to satisfy these conditions (specifically with respect to line 679a of rectangle 676a and line 679b of rectangle 676b) and, therefore, would not be counted in the statistics when processed by a pixel filter configured in this manner.

[0307] For each pixel filter 650, qualifying pixels are identified based on the pixel conditions defined by logic 656 and, for the qualifying pixel values, 3A statistics engine 468 may collect the following statistics: 32-bit sums, either (Rsum, Gsum, Bsum), or (sRlinear_sum, sGlinear_sum, sBlinear_sum), or (sRsum, sGsum, sBsum), or (Ysum, C1sum, C2sum), as well as a 24-bit pixel count, Count, which may represent the total number of pixels included in the statistics. In one embodiment, software may use the sums to generate an average within a tile or window.

[0308] When camYC1C2 pixels are selected by logic 652 of a pixel filter 650, the color thresholds may be applied to scaled chroma values. For instance, since the chroma intensity at the white point increases with the luma value, the use of chroma scaled by the luma value in pixel filter 650 may, in some cases, provide results with improved consistency. For example, minimum and maximum luma conditions may allow the filter to ignore dark and/or bright areas. If the pixel satisfies the YC1C2 pixel condition, the RGB, sRGBlinear, sRGB or YC1C2 values are accumulated. The selection of pixel values by selection logic 652 may depend on the type of information needed. For example, for white balance, RGB or sRGBlinear pixels are typically selected. For detecting specific conditions, such as sky, grass, skin tones and so forth, a set of sRGB or YCC pixels may be more suitable.

[0309] In the present embodiment, eight sets of pixel conditions may be defined, one associated with each of the pixel filters PF0-PF7 650. Some pixel conditions may be defined to carve out an area in the C1-C2 color space (Fig. 49) where the white point is likely to be. This may be determined or estimated based on the current illuminant. Then, the accumulated RGB sums may be used to determine the current white point based on the R/G and/or B/G ratios for white balance adjustment. Further, some pixel conditions may be defined or adapted for scene analysis and classification. For example, some pixel filters 650 and windows/tiles may be used to detect conditions such as blue sky in the upper portion of an image frame, or green grass in the lower portion of an image frame. This information may also be used to adjust white balance. Additionally, some pixel conditions may be defined or adapted to detect skin tones. For such filters, tiles may be used to detect areas of the image frame that have skin tone. By identifying these areas, the quality of skin tone may be improved by, for example, reducing the amount of noise filtering in skin tone areas and/or decreasing the quantization in video compression in those areas to improve quality.

[0310] The 3A statistics logic 468 may also provide for the collection of luma data. For instance, the luma value, camY, from the camera color space conversion (camYC1C2) may be used to accumulate luma sum statistics. In one embodiment, the following luma information may be collected by 3A statistics logic 468:

Ysum: sum of camY

cond(Ysum): sum of camY that satisfies the condition Ymin <= camY < Ymax

Ycount1: count of pixels where camY < Ymin

Ycount2: count of pixels where camY >= Ymax

Here, Ycount1 may represent the number of underexposed pixels and Ycount2 may represent the number of overexposed pixels. This may be used to determine whether the image is overexposed or underexposed. For instance, if the pixels do not saturate, the sum of camY (Ysum) may indicate the average luma in the scene, which may be used to achieve a target AE exposure. For example, in one embodiment, the average luma may be determined by dividing Ysum by the number of pixels. Further, by knowing the luma/AE statistics for the tile statistics and window positions, AE metering may be performed. For instance, depending on the scene, it may be desirable to weight the AE statistics more heavily in the center window than in the windows at the edges of the image, such as in the case of a portrait shot.
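
As a simple, purely illustrative example of how firmware might use these sums, the following C sketch estimates the scene's average luma and suggests an exposure adjustment direction; target_y and the one-quarter clipping heuristic are assumptions made for this example only.

#include <stdint.h>

/* Returns +1 to raise exposure, -1 to reduce it, 0 if at the target. */
int ae_direction(uint64_t y_sum, uint32_t pixel_count,
                 uint32_t y_count1, uint32_t y_count2, uint32_t target_y)
{
    if (pixel_count == 0)
        return 0;

    uint32_t avg_y = (uint32_t)(y_sum / pixel_count);   /* average luma */

    if (y_count2 > pixel_count / 4) return -1;   /* many overexposed pixels */
    if (y_count1 > pixel_count / 4) return +1;   /* many underexposed pixels */

    if (avg_y < target_y) return +1;
    if (avg_y > target_y) return -1;
    return 0;
}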

[0311] In the illustrated embodiment, the 3A statistics logic may be configured to collect statistics in tiles and windows. In the illustrated configuration, one window may be defined for the tile statistics 674. The window may be specified by a column start and width, and a row start and height. In one embodiment, the window position and size may be selected as multiples of four pixels and, within this window, statistics are gathered in tiles of arbitrary size. By way of example, all tiles in the window may be selected such that they have the same size. The tile size may be set independently for the horizontal and vertical directions and, in one embodiment, a maximum limit on the number of horizontal tiles may be set (for example, a limit of 128 horizontal tiles). Further, in one embodiment, a minimum tile size of, for example, 8 pixels wide by 4 pixels high may be set. Below are some examples of tile configurations based on different video/imaging modes and standards to obtain a window of 16×16 tiles:

HD 1280×720: tile size of 80×45 pixels

HD 1920×1080: tile size of 120×68 pixels

5MP 2592×1944: tile size of 162×122 pixels

8MP 3280×2464: tile size of 205×154 pixels

[0312] In the present embodiment, of the eight available pixel filters 650 (PF0-PF7), four may be selected for the tile statistics 674. For each tile, the following statistics may be collected:

(Rsum0, Gsum0, Bsum0) or (sRlinear_sum0, sGlinear_sum0, sBlinear_sum0), or

(sRsum0, sGsum0, sBsum0) or (Ysum0, C1sum0, C2sum0), Count0

(Rsum1, Gsum1, Bsum1) or (sRlinear_sum1, sGlinear_sum1, sBlinear_sum1), or

(sRsum1, sGsum1, sBsum1) or (Ysum1, C1sum1, C2sum1), Count1

(Rsum2, Gsum2, Bsum2) or (sRlinear_sum2, sGlinear_sum2, sBlinear_sum2), or

(sRsum2, sGsum2, sBsum2) or (Ysum2, C1sum2, C2sum2), Count2

(Rsum3, Gsum3, Bsum3) or (sRlinear_sum3, sGlinear_sum3, sBlinear_sum3), or

(sRsum3, sGsum3, sBsum3) or (Ysum3, C1sum3, C2sum3), Count3, or

Ysum, cond(Ysum), Ycount1, Ycount2 (from camY)

In the statistics listed above, Count0-3 represent the counts of pixels that satisfy the pixel conditions corresponding to the four selected pixel filters. For example, if pixel filters PF0, PF1, PF5 and PF6 are selected as the four pixel filters for a particular tile or window, the expressions above may correspond to the Count values and the sums of the pixel data (for example, Bayer RGB, sRGBlinear, sRGB, YC1C2, camYC1C2) selected for those filters (for example, by selection logic 652). Additionally, the Count values may be used to normalize the statistics (for example, by dividing the color sums by the corresponding Count values; a minimal sketch of this follows below). As shown, depending at least in part on the types of statistics needed, the selected pixel filters 650 may be configured to select between either Bayer RGB, sRGBlinear or sRGB pixel data, or YC1C2 (nonlinear or camera color space conversion, depending on the selection by logic 654) pixel data, and determine color sum statistics for the selected pixel data. Additionally, as discussed above, the luma value, camY, from the camera color space conversion (camYC1C2) is also collected for luma sum information for auto-exposure (AE) statistics.
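
By way of example only, normalizing a tile's accumulated sums by its Count value may look like the following C sketch; the tile_stats structure is hypothetical and simply mirrors one selected set of sums from the list above.

#include <stdint.h>

struct tile_stats {                   /* one selected pixel filter, one tile */
    uint32_t r_sum, g_sum, b_sum;     /* or sRGB/YC1C2 sums, per selection */
    uint32_t count;                   /* pixels meeting the pixel condition */
};

/* Produce average color values for a tile; returns 0 if no pixel qualified. */
int tile_average(const struct tile_stats *t,
                 uint32_t *avg_r, uint32_t *avg_g, uint32_t *avg_b)
{
    if (t->count == 0)
        return 0;
    *avg_r = t->r_sum / t->count;
    *avg_g = t->g_sum / t->count;
    *avg_b = t->b_sum / t->count;
    return 1;
}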

[0313] Additionally, the 3A statistics logic 468 may also be configured to collect statistics 676 for a number of windows. For instance, in one embodiment, up to eight floating windows may be used, with any rectangular region having a size that is a multiple of four pixels in each dimension (for example, height × width), up to a maximum size corresponding to the size of the image frame. However, the position of the windows is not necessarily restricted to multiples of four pixels. For example, the windows may overlap one another.

[0314] In the present embodiment, four pixel filters 650 may be selected from the eight available pixel filters (PF0-PF7) for each window. Statistics for each window may be collected in the same manner as for the tiles discussed above. Thus, for each window, the following statistics 676 may be collected:

(Rsum0, Gsum0, Bsum0) or (sRlinear_sum0, sGlinear_sum0, sBlinear_sum0), or

(sRsum0, sGsum0, sBsum0) or (Ysum0, C1sum0, C2sum0), Count0

(Rsum1, Gsum1, Bsum1) or (sRlinear_sum1, sGlinear_sum1, sBlinear_sum1), or

(sRsum1, sGsum1, sBsum1) or (Ysum1, C1sum1, C2sum1), Count1

(Rsum2, Gsum2, Bsum2) or (sRlinear_sum2, sGlinear_sum2, sBlinear_sum2), or

(sRsum2, sGsum2, sBsum2) or (Ysum2, C1sum2, C2sum2), Count2

(Rsum3, Gsum3, Bsum3) or (sRlinear_sum3, sGlinear_sum3, sBlinear_sum3), or

(sRsum3, sGsum3, sBsum3) or (Ysum3, C1sum3, C2sum3), Count3, or

Ysum, cond(Ysum), Ycount1, Ycount2 (from camY)

In the statistics listed above, Count0-3 represent the counts of pixels that satisfy the pixel conditions corresponding to the four selected pixel filters for a particular window. From the eight available pixel filters PF0-PF7, the four active pixel filters may be selected independently for each window. Additionally, one of the sets of statistics may be collected using the pixel filters or the camY luma statistics. The window statistics collected for AWB and AE may, in one embodiment, be mapped to one or more registers.

[0315] Referring still to Fig. 51, the 3A statistics logic 468 may also be configured to acquire row luma sum statistics 678 for one window using the luma values, camY, from the camera color space conversion. This information may be used to detect and compensate for flicker. Flicker is produced by a periodic variation in some fluorescent and incandescent light sources, typically caused by the AC power signal. For example, Fig. 59 shows a graph that illustrates how flicker may be caused by variations in a light source. Flicker detection may thus be used to detect the frequency of the AC power used for the light source (for example, 50 Hz or 60 Hz). Once the frequency is known, flicker may be avoided by setting the image sensor's integration time to an integer multiple of the flicker period.

[0316] To detect flicker, the camera luma, camY, is accumulated over each row. Due to the down-sampling of the incoming Bayer data, each camY value may correspond to 4 rows of the original raw image data. Control logic and/or firmware may then perform a frequency analysis of the row averages or, more reliably, of the differences between row averages over consecutive frames to determine the frequency of the AC power associated with a particular light source. For example, with regard to Fig. 59, the integration times for the image sensor may be based on times t1, t2, t3 and t4 (for example, such that integration occurs at times when a light source exhibiting variations is generally at the same brightness level).

[0317] In one embodiment, a row luma sum window may be specified, and statistics 678 are reported for pixels within that window. By way of example, for 1080p HD video capture, assuming a window of 1024 pixels in height, 256 row luma sums are generated (for example, one sum for every four rows due to the down-sampling by logic 586), and each accumulated value may be expressed with 18 bits (for example, 8-bit camY values for up to 1024 samples per row).

[0318] The 3A statistics logic 468 of Fig. 51 may also provide auto-focus (AF) statistics 682 by way of AF statistics logic 680. A functional block diagram showing an embodiment of AF statistics logic 680 in more detail is provided in Fig. 60. As shown, AF statistics logic 680 may include a horizontal filter 684 and an edge detector 686 applied to the original Bayer RGB (not down-sampled), two 3×3 filters 688 on Y from Bayer, and two 3×3 filters 690 on camY. In general, horizontal filter 684 provides fine-resolution statistics per color component, the 3×3 filters 688 may provide fine-resolution statistics on BayerY (Bayer RGB with a 3×1 transform (logic 687) applied), and the 3×3 filters 690 may provide coarser two-dimensional statistics on camY (since camY is obtained using down-scaled Bayer RGB data, i.e., logic 630). Further, logic 680 may include logic 704 for decimating the Bayer RGB data (for example, 2×2 averaging, 4×4 averaging, and so on), and the decimated Bayer RGB data 705 may be filtered using 3×3 filters 706 to produce a filtered output 708 of the decimated Bayer RGB data. The present embodiment provides for 16 windows of statistics. At the raw frame boundaries, the edge pixels are replicated for the filters of AF statistics logic 680. The various components of AF statistics logic 680 are described in further detail below.

[0319] First, the horizontal edge detection process includes applying horizontal filter 684 to each color component (R, Gr, Gb, B), followed by an optional edge detector 686 on each color component. Thus, depending on imaging conditions, this configuration allows AF statistics logic 680 to be set up as a high-pass filter with no edge detection (for example, edge detector disabled) or, alternatively, as a low-pass filter followed by an edge detector (for example, edge detector enabled). For instance, in low-light conditions, horizontal filter 684 may be more susceptible to noise and, therefore, logic 680 may configure the horizontal filter as a low-pass filter followed by an enabled edge detector 686. As shown, a control signal 694 may enable or disable edge detector 686. Statistics from the different color channels are used to determine the direction of focus to improve sharpness, since different colors may focus at different depths. In particular, AF statistics logic 680 may provide techniques for enabling auto-focus control using a combination of coarse and fine adjustments (for example, to the focal length of the lens). Embodiments of such techniques are described in further detail below.

[0320] In one embodiment, the horizontal filter may be a 7-tap filter and may be defined as follows in equations 41 and 42:

out(i)=(af_horzfilt_coeff[0]*(in(i-3)+in(i+3))+af_horzfilt_coeff[1]*(in(i-2)+in(i+2))+(41)

af_horzfilt_coeff[2]*(in(i-1)+in(i+1))+af_horzfilt_coeff[3]*in(i))

out(i)=max(-255, min(255, out(i)))(42)

Here, each coefficient af_horzfilt_coeff[0:3] may be in the range [-2, 2], and i represents the input pixel index for R, Gr, Gb or B. The filtered output out(i) may be clipped between minimum and maximum values of -255 and 255, respectively (equation 42). The filter coefficients may be defined independently per color component.
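
A direct transcription of equations 41 and 42 into C might read as follows; border handling is omitted, so the caller is assumed to pass an index at least 3 pixels away from either end of the row.

/* 7-tap symmetric horizontal filter (equations 41-42) at index i of one
 * color plane (R, Gr, Gb or B); coeff[0..3] lie in the range [-2, 2]. */
int af_horz_filter(const int *in, int i, const int coeff[4])
{
    int out = coeff[0] * (in[i - 3] + in[i + 3])
            + coeff[1] * (in[i - 2] + in[i + 2])
            + coeff[2] * (in[i - 1] + in[i + 1])
            + coeff[3] *  in[i];                        /* equation 41 */

    if (out < -255) out = -255;                         /* equation 42 */
    if (out >  255) out =  255;
    return out;
}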

[0321] An optional edge detector 686 may follow the output of horizontal filter 684. In one embodiment, edge detector 686 may be defined as:

edge(i)=abs(-2*out(i-1)+2*out(i+1))+abs(out(i-2)+out(i+2))    (43)

edge(i)=max(0, min(255, edge(i)))(44)

Thus, edge detector 686, when enabled, may output a value based on the two pixels on each side of the current input pixel i, as shown in equation 43. The result may be clipped to an 8-bit value between 0 and 255, as shown in equation 44.

[0322] Depending on whether an edge is detected, the final output of the pixel filter (for example, filter 684 and detector 686) may be selected as either the output of horizontal filter 684 or the output of edge detector 686. For instance, as shown in equation 45, the output of edge detector 686 may be edge(i) if an edge is detected, or the absolute value of the horizontal filter output out(i) if no edge is detected.

edge(i)=(af_horzfilt_edge_detected)?edge(i):abs(out(i))(45)

For each window, the accumulated values, edge_sum[R, Gr, Gb, B], may be selected as either (1) the sum of edge(j,i) over each pixel in the window, or (2) the maximum value of edge(i) per row in the window, max(edge), summed over the rows in the window. Assuming a raw frame size of 4096×4096 pixels, the number of bits required to store the maximum values of edge_sum[R, Gr, Gb, B] is 30 bits (for example, 8 bits per pixel, plus 22 bits for a window covering the entire raw image frame).
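
The edge detector of equations 43-44, the selection of equation 45 and accumulation mode (1) may be sketched in C as follows; out[] is assumed to hold the horizontal-filter results for one row, and only the sum-over-all-pixels accumulation mode is shown.

#include <stdlib.h>

/* Edge detector (equations 43-44); valid for i at least 2 pixels away from
 * the row ends. */
int af_edge_detect(const int *out, int i)
{
    int e = abs(-2 * out[i - 1] + 2 * out[i + 1])
          + abs(out[i - 2] + out[i + 2]);               /* equation 43 */
    if (e < 0)   e = 0;                                 /* equation 44 */
    if (e > 255) e = 255;
    return e;
}

/* Per-pixel selection of equation 45 followed by accumulation into edge_sum
 * using mode (1): the sum over every pixel of the row. */
void af_accumulate_row(const int *out, int width, int edge_enabled,
                       unsigned long *edge_sum)
{
    for (int i = 2; i < width - 2; i++) {
        int v = edge_enabled ? af_edge_detect(out, i) : abs(out[i]);
        *edge_sum += (unsigned)v;
    }
}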

[0323] As discussed, the 3×3 filters 690 for camY luma may include two programmable 3×3 filters, referred to as F0 and F1, which are applied to camY. The result of filter 690 goes to either a squaring function or an absolute-value function. The result is accumulated over a given AF window for both 3×3 filters F0 and F1 to generate a luma edge value. In one embodiment, the luma edge values at each camY pixel are defined as follows:

edgecamY_FX(j,i)=FX*camY(46)

=FX(0,0)*camY (j-1, i-1)+FX(0,1)*camY (j-1, i)+FX(0,2)*camY(j-1, i+1)+

FX(1,0)*camY (j, i-1)+FX(1,1)*camY (j, i)+FX(1,2)*camY (j, i+1)+

FX(2,0)*camY (j+1, i-1)+FX(2,1)*camY (j+1, i)+FX(2,2)*camY (j+1, i+1)

edgecamY_FX(j,i)=f(max(-255, min(255, edgecamY_FX(j,i)))) (47)

f(a)=a^2 or abs(a)

where FX represents the programmable 3×3 filters, F0 and F1, with signed coefficients in the range [-4, 4]. The indices j and i represent pixel positions in the camY image. As discussed above, the filter on camY may provide coarse-resolution statistics, since camY is derived using down-scaled (for example, 4×4 to 1) Bayer RGB data. For instance, in one embodiment, the filters F0 and F1 may be set using the Scharr operator, which offers improved rotational symmetry over the Sobel operator, an example of which is shown below:

[0324] For each window, the accumulated values 700 determined by filters 690, edgecamY_FX_sum (where FX = F0 and F1), may be selected as either (1) the sum of edgecamY_FX(j,i) over each pixel in the window, or (2) the maximum value of edgecamY_FX(j) per row in the window, summed over the rows in the window. In one embodiment, edgecamY_FX_sum may saturate to a 32-bit value when f(a) is set to a^2 to provide "peakier" statistics with a finer resolution. To avoid saturation, the maximum window size X*Y in raw frame pixels may be set such that it does not exceed a total of 1024×1024 pixels (for example, X*Y <= 1048576 pixels). As mentioned above, f(a) may also be set as an absolute value to provide more linear statistics.
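
For illustration, the per-pixel filtering and mapping of equations 46 and 47 may be written as in the C sketch below. The coefficient array fx is a placeholder for a programmed F0 or F1 kernel (for example, a Scharr-style kernel); the exact coefficients are not taken from this disclosure, and f(a) is shown here in its squared form.

#include <stdint.h>

/* Apply one programmable 3x3 filter FX to camY at (j, i) per equation 46,
 * clamp to [-255, 255] and square per equation 47 (f(a) = a^2). */
uint32_t edge_camy_fx(const uint8_t *camy, int stride, int j, int i,
                      const int fx[3][3])
{
    int acc = 0;
    for (int dj = -1; dj <= 1; dj++)
        for (int di = -1; di <= 1; di++)
            acc += fx[dj + 1][di + 1] * camy[(j + dj) * stride + (i + di)];

    if (acc < -255) acc = -255;
    if (acc >  255) acc =  255;
    return (uint32_t)(acc * acc);    /* use abs(acc) instead for the more
                                        linear statistics mentioned above */
}

The per-window value edgecamY_FX_sum would then be formed by summing this result over every pixel in the window, or by summing the per-row maxima, as described above.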

[0325] The 3×3 AF filters 688 on Bayer Y may be defined in a similar manner as the 3×3 filters on camY, but they are applied to luma values Y generated from a Bayer quad (2×2 pixels). First, the 8-bit Bayer RGB values are converted to Y with programmable coefficients in the range [0, 4] to generate a white-balanced Y value, as shown below in equation 48:

bayerY = max(0, min(255, bayerY_Coeff[0]*R + bayerY_Coeff[1]*(Gr+Gb)/2 + bayerY_Coeff[2]*B))    (48)

[0326] Like the filters 690 for camY, the 3×3 filters 688 for bayerY luma may include two programmable 3×3 filters, referred to as F0 and F1, which are applied to bayerY. The result of filter 688 goes to either a squaring function or an absolute-value function. The result is accumulated over a given AF window for both 3×3 filters F0 and F1 to generate a luma edge value. In one embodiment, the luma edge values at each bayerY pixel are defined as follows:

edgebayerY_FX(j,i)=FX*bayerY    (49)

=FX(0,0)*bayerY(j-1, i-1)+FX(0,1)*bayerY(j-1, i)+FX(0,2)*bayerY(j-1, i+1)+

FX(1,0)*bayerY(j, i-1)+FX(1,1)*bayerY(j, i)+FX(1,2)*bayerY(j, i+1)+

FX(2,0)*bayerY(j+1, i-1)+FX(2,1)*bayerY(j+1, i)+FX(2,2)*bayerY(j+1, i+1)

edgebayerY_FX(j,i)=f(max(-255, min(255, edgebayerY_FX(j,i))))    (50)

f(a)=a^2 or abs(a)

where FX represents the programmable 3×3 filters, F0 and F1, with signed coefficients in the range [-4, 4]. The indices j and i represent pixel positions in the bayerY image. As discussed above, the filter on Bayer Y may provide fine-resolution statistics, since the Bayer RGB signal received by AF logic 680 is not decimated. Solely by way of example, the filters F0 and F1 of filter logic 688 may be set using one of the following filter configurations:

[0327] For each window, the accumulated values 702 determined by filters 688, edgebayerY_FX_sum (where FX = F0 and F1), may be selected as either (1) the sum of edgebayerY_FX(j,i) over each pixel in the window, or (2) the maximum value of edgebayerY_FX(j) per row in the window, summed over the rows in the window. Here, edgebayerY_FX_sum may saturate to 32 bits when f(a) is set to a^2. Thus, to avoid saturation, the maximum window size X*Y in raw frame pixels should be set such that it does not exceed a total of 512×512 pixels (for example, X*Y <= 262144). As discussed above, setting f(a) to a^2 may provide peakier statistics, while setting f(a) to abs(a) may provide more linear statistics.

[0328] As discussed above, the AF statistics 682 are collected for 16 windows. The windows may be any rectangular area with each dimension being a multiple of 4 pixels. Because each filtering logic 688 and 690 includes two filters, in some instances one filter may be used for normalization over 4 pixels and may be configured to filter in both the vertical and horizontal directions. Further, in some embodiments, AF logic 680 may normalize the AF statistics by luma. This may be accomplished by setting one or more of the filters of logic blocks 688 and 690 as bypass filters. In certain embodiments, the location of the windows may be restricted to multiples of 4 pixels, and windows may be permitted to overlap. For instance, one window may be used to acquire normalization values, while another window may be used for additional statistics, such as variance, as discussed below. In one embodiment, the AF filters (for example, 684, 688, 690) may not implement pixel replication at the edge of an image frame and, therefore, in order for the AF filters to use all valid pixels, the AF windows may be set such that they are each at least 4 pixels from the top edge of the frame, at least 8 pixels from the bottom edge of the frame, and at least 12 pixels from the left/right edges of the frame. In the illustrated embodiment, the following statistics may be collected and reported for each window:

32-bit edgeGr_sum for Gr

32-bit edgeR_sum for R

32-bit edgeB_sum for B

32-bit edgeGb_sum for Gb

32-bit edgebayerY_F0_sum for Y from Bayer for filter0 (F0)

32-bit edgebayerY_F1_sum for Y from Bayer for filter1 (F1)

32-bit edgecamY_F0_sum for camY for filter0 (F0)

32-bit edgecamY_F1_sum for camY for filter1 (F1)

In such an embodiment, the memory required for storing the AF statistics 682 may be 16 (windows) multiplied by 8 (Gr, R, B, Gb, bayerY_F0, bayerY_F1, camY_F0, camY_F1) multiplied by 32 bits.

[0329] Thus, in one embodiment, the accumulated value per window may be selected from: the filter output (which may be configured as the default setting), the input pixel, or the input pixel squared. The selection may be made for each of the 16 AF windows and applies to all 8 of the AF statistics (listed above) in a given window. This may be used to normalize the AF score between two overlapping windows, one of which is configured to collect the filter output and the other configured to collect the input pixel sum. Additionally, in order to calculate pixel variance in the case of two overlapping windows, one window may be configured to collect the input pixel sum, and the other to collect the sum of squares of the input pixels, such that the variance may be calculated as:

variance = avg(pixel^2) - (avg(pixel))^2
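
A minimal sketch of this variance computation, assuming the two accumulators come from two overlapping windows configured as described above (one collecting the pixel sum, the other the sum of squared pixels over the same pixel count), is:

#include <stdint.h>

/* variance = avg(pixel^2) - (avg(pixel))^2 */
double window_variance(uint64_t pixel_sum, uint64_t pixel_sq_sum,
                       uint64_t pixel_count)
{
    if (pixel_count == 0)
        return 0.0;
    double mean    = (double)pixel_sum    / (double)pixel_count;
    double mean_sq = (double)pixel_sq_sum / (double)pixel_count;
    return mean_sq - mean * mean;
}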

[0330] Using the AF statistics, ISP control logic 84 (Fig. 7) may be configured to adjust the focal length of the lens of an image capture device (for example, 30) using a series of focal length adjustments based on coarse and fine auto-focus "scores" to bring the image into focus. As discussed above, the 3×3 filters 690 for camY may provide coarse statistics, while the horizontal filter 684 and edge detector 686 may provide comparatively finer statistics per color component, and the 3×3 filters 688 on BayerY may provide fine statistics on BayerY. Further, the 3×3 filters 706 on the decimated Bayer RGB signal 705 may provide coarse statistics for each color channel. As discussed further below, AF scores may be calculated based on filter output values for a particular input signal (for example, the sum of filter outputs F0 and F1 for camY, BayerY, decimated Bayer RGB, or based on the horizontal filter/edge detector outputs, and so forth).

[0331] Fig. 61 shows a graph 710 that depicts curves 712 and 714, which represent coarse and fine AF scores, respectively. As shown, the coarse AF scores based on the coarse statistics may have a more linear response across the focal distance of the lens. Thus, at any focal position, a lens movement may generate a change in an AF score, which may be used to detect whether the image is becoming more in focus or out of focus. For instance, an increase in a coarse AF score after a lens adjustment may indicate that the focal length is being adjusted in the correct direction (for example, toward the optimal focal position).

[0332] However, as the optimal focal position is approached, the change in the coarse AF score for smaller lens adjustment steps may decrease, making it difficult to discern the correct direction of focal adjustment. For example, as shown on graph 710, the change in the coarse AF score between coarse position (CP) CP1 and CP2 is represented by ΔC12, which shows an increase in the coarse score from CP1 to CP2. However, as shown, from CP3 to CP4, the change ΔC34 in the coarse AF score (which passes through the optimal focal position (OFP)), while still an increase, is relatively smaller. It should be understood that the positions CP1-CP6 along the focal length L are not meant to necessarily correspond to the step sizes used by the AF logic along the focal length. That is, there may be additional steps taken between each coarse position that are not shown. The illustrated positions CP1-CP6 are only meant to show how the change in the coarse AF score may gradually decrease as the focal position approaches the OFP.

[0333] Once the approximate position of the OFP is determined (for example, based on the coarse AF scores shown in Fig. 61, the approximate position of the OFP may be between CP3 and CP5), fine AF score values, represented by curve 714, may be evaluated to refine the focal position. For instance, fine AF scores may be flatter when the image is out of focus, such that a large lens positional change does not cause a large change in the fine AF score. However, as the focal position approaches the optimal focal position (OFP), the fine AF score may change sharply with small positional adjustments. Thus, by locating a peak or apex 715 on the fine AF score curve 714, the OFP may be determined for the current image scene. Thus, to summarize, the coarse AF scores may be used to determine the general vicinity of the optimal focal position, while the fine AF scores may be used to pinpoint a more exact position within that vicinity.

[0334] In one embodiment, the AF process may begin by acquiring coarse AF scores along the entire available focal length, beginning at position 0 and ending at position L (shown on graph 710), and determining the coarse AF scores at various step positions (for example, CP1-CP6). In one embodiment, once the focal position of the lens reaches position L, the position may be reset to 0 before evaluating AF scores at the various focal positions. For instance, this may be due to coil settling time of the mechanical element controlling the focal position. In this embodiment, after resetting to position 0, the focal position may be adjusted toward position L to the position that first shows a negative change in the coarse AF score, here position CP5, which exhibits a negative change ΔC45 relative to position CP4. From position CP5, the focal position may be adjusted back toward position 0 in smaller increments relative to the increments used in the coarse AF score adjustments (for example, positions FP1, FP2, FP3 and so on), while searching for a peak 715 of the fine AF score curve 714. As discussed above, the focal position OFP corresponding to the peak 715 of the fine AF score curve 714 may be the optimal focal position for the current image scene.

[0335] As will be appreciated, the techniques described above for locating the optimal area and optimal position for focus may be referred to as "hill climbing," in the sense that the changes in the curves of the AF scores 712 and 714 are analyzed to locate the OFP. Further, while the analysis of the coarse AF scores (curve 712) and the fine AF scores (curve 714) is shown as using same-sized steps for the coarse-score analysis (for example, the distance between CP1 and CP2) and same-sized steps for the fine-score analysis (for example, the distance between FP1 and FP2), in some embodiments the step sizes may be varied depending on the change in the score from one position to the next. For instance, in one embodiment, the step size between CP3 and CP4 may be reduced relative to the step size between CP1 and CP2 since the overall delta in the coarse AF score (ΔC34) is less than the delta from CP1 to CP2 (ΔC12).

[0336] A method 720 depicting this process is illustrated in Fig. 62. Beginning at block 722, a coarse AF score is determined for image data at various steps along the focal length, from position 0 to position L (Fig. 61). Thereafter, at block 724, the coarse AF scores are analyzed, and the coarse position exhibiting the first negative change in the coarse AF score is identified as a starting point for fine AF score analysis. Subsequently, at block 726, the focal position is stepped back toward the initial position 0 in smaller steps, with the fine AF score being analyzed at each step until a peak of the AF score curve (for example, curve 714 of Fig. 61) is located. At block 728, the focal position corresponding to the peak is set as the optimal focal position for the current image scene.
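
The coarse-then-fine search of method 720 may be sketched in C as follows. The helper functions move_lens_to and read_af_score stand in for lens actuation and statistics readout, which are outside the scope of this sketch, and the step sizes and the early-exit heuristic are illustrative assumptions only.

#include <stdint.h>

extern void     move_lens_to(int position);          /* hypothetical hook */
extern uint64_t read_af_score(int use_fine);         /* 0: coarse, 1: fine */

/* Coarse scan from 0 toward L until the coarse score first decreases, then
 * step back in finer increments looking for the fine-score peak
 * (blocks 722-728 of method 720). Returns the chosen focal position. */
int autofocus_search(int pos_l, int coarse_step, int fine_step)
{
    uint64_t prev = 0;
    int start = pos_l;

    for (int p = 0; p <= pos_l; p += coarse_step) {          /* block 722 */
        move_lens_to(p);
        uint64_t score = read_af_score(0);
        if (p > 0 && score < prev) { start = p; break; }     /* block 724 */
        prev = score;
    }

    uint64_t best = 0;
    int best_pos = start;
    for (int p = start; p >= 0; p -= fine_step) {            /* block 726 */
        move_lens_to(p);
        uint64_t score = read_af_score(1);
        if (score > best) { best = score; best_pos = p; }
        else if (score < best - best / 8) break;   /* clearly past the peak */
    }

    move_lens_to(best_pos);                                  /* block 728 */
    return best_pos;
}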

[0337] As discussed above, due to mechanical coil settling times, the embodiment of the technique shown in Fig. 62 may be adapted to acquire coarse AF scores along the entire focal length initially, rather than analyzing each coarse position one by one, and to search for the optimal focus area. Other embodiments, however, in which coil settling times are less of a concern, may analyze coarse AF scores one step at a time instead of searching the entire focal length.

[0338] In certain embodiments, the AF scores may be determined using white-balanced luma values derived from Bayer RGB data. For instance, the luma value, Y, may be derived by decimating a 2×2 Bayer quad by a factor of 2, as shown in Fig. 63, or by decimating a 4×4 pixel block consisting of four 2×2 Bayer quads by a factor of 4, as shown in Fig. 64. In one embodiment, the AF scores may be determined using gradients. In another embodiment, the AF scores may be determined by applying a 3×3 transform using the Scharr operator, which provides rotational symmetry while minimizing weighted mean squared angular errors in the Fourier domain. By way of example, the calculation of a coarse AF score on camY using the common Scharr operator (discussed above) is shown below:


where in represents the decimated luma value Y. In other embodiments, the AF scores for the coarse and fine statistics may be calculated using other 3×3 transforms.

[0339] Auto-focus adjustments may also differ by color component, since different wavelengths of light may be affected differently by the lens, which is one reason horizontal filter 684 is applied to each color component independently. Thus, auto-focus may still be performed even in the presence of chromatic aberration in the lens. For instance, because red and blue typically focus at a different position or distance with respect to green when chromatic aberrations are present, the relative AF scores for each color may be used to determine the direction of focus. This is better illustrated in Fig. 65, which shows the optimal focal positions for the blue, red and green color channels of a lens 740. As shown, the optimal focal positions for red, green and blue are depicted by the letters R, G and B, respectively, each corresponding to an AF score, with the current focal position 742. Generally, in such a configuration, it may be desirable to select the optimal focal position as the position corresponding to the optimal focal position for the green components (for example, since Bayer RGB has twice as many green components as red or blue), here position G. Thus, it may be expected that for the optimal focal position, the green channel should exhibit the highest auto-focus score. Thus, based on the positions of the optimal focal positions for each color (with those closer to focus having higher AF scores), AF logic 680 and associated control logic 84 may determine the direction to focus based on the relative AF scores for blue, green and red. For instance, if the blue channel has a higher AF score relative to the green channel (as shown in Fig. 65), then the focal position is adjusted in the negative direction (toward the image sensor) without having to first analyze in the positive direction from the current position 742. In certain embodiments, illuminant detection or analysis using correlated color temperatures (CCT) may be performed.

[0340] Further, as mentioned above, variance scores may also be used. For instance, pixel sums and pixel squared sum values may be accumulated for block sizes (for example, 8×8 to 32×32 pixels) and may be used to derive variance scores (for example, avg(pixel^2) - (avg(pixel))^2). The variances may be summed to obtain a total variance for each window. Smaller block sizes may be used to obtain fine variance scores, and larger block sizes may be used to obtain coarser variance scores.

[0341] Referring to the 3A statistics logic 468 shown in Fig. 51, logic 468 may also be configured to collect component histograms 750 and 752. As will be appreciated, histograms may be used to analyze the pixel level distribution in an image. This may be useful for implementing certain functions, such as histogram equalization, where the histogram data is used to determine a histogram specification (histogram matching). By way of example, luma histograms may be used for AE (for example, for adjusting/setting sensor integration times), and color histograms may be used for AWB. In the present embodiment, histograms may be 256, 128, 64 or 32 bins (where the top 8, 7, 6 and 5 bits of the pixel are used to determine the bin, respectively) for each color component, as specified by a bin size (BinSize). For instance, when pixel data is 14 bits, an additional scale factor between 0 and 6 and an offset may be specified to determine what range (for example, which 8 bits) of the pixel data is collected for statistics purposes. The bin number may be obtained as follows:

idx = (pixel - hist_offset) >> (6 - hist_scale)

[0342] In one embodiment, the color histogram bins are incremented only if the bin indices are in the range [0, 2^(8-BinSize)] (a combined C sketch follows the listing below):

if (idx>=0&&idx<2^(8-BinSize))

StatsHist[idx]+=Count;
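
Combining the bin-index formula and the range check above gives the following C sketch; here BinSize is interpreted as a 0-3 code selecting 256, 128, 64 or 32 bins, which is an assumption made for this example and is consistent with the 2^(8-BinSize) range check.

#include <stdint.h>

/* Component histogram update for one color value (up to 14 bits);
 * hist_scale is in [0, 6], bin_size in [0, 3]. */
void component_hist_update(uint32_t *hist, int bin_size,
                           int pixel, int hist_offset, int hist_scale,
                           uint32_t count)
{
    int idx      = (pixel - hist_offset) >> (6 - hist_scale);
    int num_bins = 256 >> bin_size;              /* 2^(8 - BinSize) bins */

    if (idx >= 0 && idx < num_bins)
        hist[idx] += count;
}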

[0343] In the present embodiment, the statistics processing unit 120 may include two histogram units. The first histogram 750 (Hist0) may be configured to collect pixel data as part of the statistics collection after the 4×4 decimation. For Hist0, the components may be selected to be RGB, sRGBlinear, sRGB or YC1C2 using selection circuit 756. The second histogram 752 (Hist1) may be configured to collect pixel data before the statistics pipeline (before defective pixel correction logic 460), as shown in more detail in Fig. 65. For instance, the raw Bayer RGB data (output 124) may be decimated (to produce signal 754) using logic 760 by skipping pixels, as discussed further below. For the green channel, the color may be selected between Gr, Gb or both Gr and Gb (both Gr and Gb counts are accumulated in the green bins).

[0344] In order to keep the histogram bin width the same between the two histograms, Hist1 may be configured to collect pixel data every 4 pixels (every other Bayer quad). The start of the histogram window determines the first Bayer quad position where the histogram starts accumulating. Beginning at this position, every other Bayer quad is skipped horizontally and vertically for Hist1. The window start position may be any pixel position for Hist1 and, therefore, the pixels skipped by the histogram calculation may be selected by changing the start window position. Hist1 may be used to collect data, represented by 1112 in Fig. 66, close to the black level to assist in dynamic black level compensation at block 462. Thus, while shown in Fig. 66 as being separate from the 3A statistics logic 468 for purposes of illustration, it should be understood that the histogram 752 may actually be part of the statistics written to memory, and may actually be physically located within the statistics processing unit 120.

[0345] In the present embodiment, the red (R) and blue (B) bins may be 20 bits, with the green (G) bin being 21 bits (green is larger in order to accommodate the accumulation of Gr and Gb in Hist1). This allows for a maximum picture size of 4160 by 3120 pixels (12 MP). The internal memory size required is 3×256×20(21) bits (3 color components, 256 bins).

[0346] With regard to memory format, statistics for the AWB/AE windows, the AF windows, the 2D color histogram and the component histograms may be mapped to registers to allow early access by firmware. In one embodiment, two memory pointers may be used to write statistics to memory, one for the tile statistics 674 and one for the row luma sum statistics 678, followed by all other collected statistics. All statistics are written to external memory, which may be DMA memory. The memory address registers may be double-buffered so that a new location in memory can be specified on every frame.

[0347] Before proceeding with a detailed discussion of the ISP pipeline logic 82 downstream from the ISP front-end logic 80, it should be understood that the arrangement of the various functional logic blocks in the statistics processing units 120 and 122 (for example, logic blocks 460, 462, 464, 466 and 468) and the ISP front-end pixel processing unit 130 (for example, logic blocks 298 and 300) is intended to illustrate only one embodiment of the present invention. Indeed, in other embodiments, the logic blocks illustrated here may be arranged in a different order, or may include additional logic blocks that may perform additional image processing functions not specifically described here. Further, it should be understood that the image processing operations performed in the statistics processing units (for example, 120 and 122), such as lens shading correction, defective pixel detection/correction and black level compensation, are performed within the statistics processing units for the purpose of collecting statistical data. Thus, the processing operations performed upon the image data received by the statistics processing units are not actually reflected in the image signal 109 (FEProcOut) that is output from the ISP front-end pixel processing logic 130 and forwarded to the ISP pipeline logic 82.

[0348] Before continuing, it should also be noted that, given sufficient processing time and the similarity between many of the processing requirements of the various operations described here, it is possible to reconfigure the functional blocks shown here to perform image processing sequentially, rather than in a pipelined fashion. As will be understood, this may further reduce the overall hardware implementation cost, but may also increase the bandwidth requirements for external memory (for example, to cache/store intermediate results/data).

ISP pipeline ("pipe") processing logic

[0349] Having described the ISP front-end logic 80 above, the present discussion will now shift focus to the ISP pipeline logic 82. Generally, the function of the ISP pipeline logic 82 is to receive raw image data, which may be provided from the ISP front-end logic 80 or retrieved from memory 108, and to perform additional image processing operations, i.e., before outputting the image data to the display device 28.

[0350] A block diagram showing an embodiment of the ISP pipeline logic 82 is depicted in Fig. 67. As illustrated, the ISP pipeline logic 82 may include raw processing logic 900, RGB processing logic 902 and YCbCr processing logic 904. The raw processing logic 900 may perform various image processing operations, such as defective pixel detection and correction, lens shading correction, demosaicing, as well as applying gains for auto white balance and/or setting a black level, as will be discussed further below. As shown in the present embodiment, the input signal 908 to the raw processing logic 900 may be the raw pixel output 109 (signal FEProcOut) from the ISP front-end logic 80 or the raw pixel data 112 from the memory 108, depending on the present configuration of the selection logic 906.

[0351] As a result of demosaicing operations performed within the raw processing logic 900, the image output 910 may be in the RGB domain and may subsequently be forwarded to the RGB processing logic 902. For instance, as shown in Fig. 67, the RGB processing logic 902 receives the signal 916, which may be the output signal 910 or an RGB image signal 912 from the memory 108, depending on the present configuration of the selection logic 914. The RGB processing logic 902 may provide various RGB color adjustment operations, including color correction (for example, using a color correction matrix), the application of color gains for auto white balancing, as well as global tone mapping, as will be discussed further below. The RGB processing logic 902 may also provide the color space conversion of RGB image data to the YCbCr (luma/chroma) color space. Thus, the image output signal 918 may be in the YCbCr domain and may subsequently be forwarded to the YCbCr processing logic 904.

[0352] For instance, as shown in Fig. 67, the YCbCr processing logic 904 receives the signal 924, which may be the output signal 918 from the RGB processing logic 902 or a YCbCr signal 920 from the memory 108, depending on the present configuration of the selection logic 922. As will be discussed in further detail below, the YCbCr processing logic 904 may provide image processing operations in the YCbCr color space, including scaling, chroma suppression, luma sharpening, brightness, contrast and color (BCC) adjustments, YCbCr gamma mapping, chroma decimation and so forth. The image output signal 926 of the YCbCr processing logic 904 may be sent to the memory 108, or may be output from the ISP pipeline logic 82 as the image signal 114 (Fig. 7). The image signal 114 may be sent to the display device 28 (either directly or via the memory 108) for viewing by a user, or may be further processed using a compression engine (for example, encoder 118), a CPU/GPU, a graphics engine, or the like.

[0353] In accordance with embodiments of the present invention, the ISP pipeline logic 82 may support the processing of raw pixel data in 8-bit, 10-bit, 12-bit or 14-bit formats. For instance, in one embodiment, 8-bit, 10-bit or 12-bit input data may be converted to 14 bits at the input of the raw processing logic 900, and raw processing and RGB processing operations may be performed with 14-bit precision. In the latter embodiment, the 14-bit image data may be down-sampled to 10 bits prior to the conversion of the RGB data to the YCbCr color space, and the YCbCr processing (logic 904) may be performed with 10-bit precision.

[0354] In order to provide a comprehensive description of the various functions provided by the ISP pipeline logic 82, each of the raw processing logic 900, RGB processing logic 902 and YCbCr processing logic 904, as well as the internal logic for performing the various image processing operations that may be implemented in each respective logic unit 900, 902 and 904, will be discussed sequentially below, beginning with the raw processing logic 900. For instance, referring to Fig. 68, a block diagram showing a more detailed view of an embodiment of the raw processing logic 900 is illustrated, in accordance with an embodiment of the present invention. As shown, the raw processing logic 900 includes gain, offset and clamping (GOC) logic 930, defective pixel detection/correction (DPDC) logic 932, noise reduction logic 934, lens shading correction logic 936, GOC logic 938 and demosaicing logic 940. Further, while the examples discussed below assume the use of a Bayer color filter array with the image sensor(s) 90, it should be understood that other embodiments of the present invention may utilize different types of color filters as well.

[0355] The input signal 908, which may be a raw image signal, is first received by the gain, offset and clamping (GOC) logic 930. The GOC logic 930 may provide similar functions and may be implemented in a similar manner with respect to the BLC logic 462 of the statistics processing unit 120 of the ISP front-end logic 80, as discussed above with reference to Fig. 37. For instance, the GOC logic 930 may provide digital gain, offsets and clamping (clipping) independently for each color component R, B, Gr and Gb of a Bayer image sensor. Particularly, the GOC logic 930 may perform auto white balance or set the black level of the raw image data. Further, in some embodiments, the GOC logic 930 may also be used to correct or compensate for an offset between the Gr and Gb color components.

[0356] In operation, the input value for the current pixel is first offset by a signed value and multiplied by a gain. This operation may be performed using the formula shown in equation 11 above, wherein X represents the input pixel value for a given color component R, B, Gr or Gb, O[c] represents a signed 16-bit offset for the current color component c, and G[c] represents a gain value for the color component c. The values for G[c] may be previously determined during statistics processing (for example, in the ISP front-end unit 80). In one embodiment, the gain G[c] may be a 16-bit unsigned number with 2 integer bits and 14 fraction bits (for example, 2.14 representation), and the gain G[c] may be applied with rounding. Solely by way of example, the gain G[c] may have a range of 0 to 4X.

[0357] The computed pixel value Y (which includes the gain G[c] and offset O[c]) from equation 11 is then clipped to a minimum and a maximum range in accordance with equation 12. As discussed above, the variables min[c] and max[c] may represent signed 16-bit clipping values for the minimum and maximum output values, respectively. In one embodiment, the GOC logic 930 may also be configured to maintain a count of the number of pixels that were clipped above and below the maximum and minimum ranges, respectively, for each color component.
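
The per-pixel GOC operation described by equations 11 and 12 may be sketched in C as follows; the 2.14 gain handling, the rounding, and the structure/field names are illustrative assumptions based on the description above rather than a definitive register model.

#include <stdint.h>

struct goc_params {
    int16_t  offset;      /* signed 16-bit offset O[c] */
    uint16_t gain;        /* unsigned gain G[c] in 2.14 format (0..4X) */
    int16_t  min, max;    /* signed clipping limits min[c], max[c] */
    uint32_t clip_low;    /* count of pixels clipped below min[c] */
    uint32_t clip_high;   /* count of pixels clipped above max[c] */
};

/* Apply offset, gain and clipping to one pixel X of color component c. */
int16_t goc_apply(struct goc_params *p, int x)
{
    /* Y = (X + O[c]) * G[c], with the 2.14 gain rounded back to integer. */
    int32_t y = ((int32_t)(x + p->offset) * p->gain + (1 << 13)) >> 14;

    if (y < p->min) { p->clip_low++;  y = p->min; }          /* equation 12 */
    if (y > p->max) { p->clip_high++; y = p->max; }
    return (int16_t)y;
}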

[0358] The output of the logic 930 GOC is then forwarded to the logic 932 of defective pixel detection and correction. As discussed above with reference to Fig. 37 (logic 460 DPDC), the presence of defective pixels may be attributed to several factors, and the defective pixels may include "hot" (or leaky) pixels, "stuck" pixels and "dead" pixels, where hot pixels exhibit excessive charge leakage relative to non-defective pixels and thus may appear brighter than non-defective pixels, a stuck pixel appears to be permanently on (for example, fully charged) and thus appears brighter, while a dead pixel appears to be permanently off. For this reason, it may be desirable to have a pixel detection scheme that is robust enough to identify and correct these different types of failure scenarios. In particular, when compared with the front-end logic 460 DPDC, which may provide only dynamic defect detection/correction, the pipeline logic 932 DPDC may provide detection/correction of fixed or static defects, detection/correction of dynamic defects, as well as speckle removal.

[0359] In accordance with an embodiment of the invention described here, defective pixel detection/correction performed by the logic 932 DPDC may occur independently for each color component (for example, R, B, Gr and Gb) and may include various operations for detecting defective pixels and for correcting the detected defective pixels. For example, in one embodiment, the defective pixel detection operations may provide for the detection of static defects and dynamic defects, as well as the detection of speckle, which may be associated with electrical interference or noise (for example, photon noise) that may be present in the imaging sensor. By analogy, speckle may appear in the image as seemingly random noise artifacts, similar to the way static may appear on a display such as a television display. In addition, as mentioned above, dynamic defect correction is considered dynamic in the sense that the characterization of a pixel as defective at a given time may depend on the image data of the neighboring pixels. For example, a stuck pixel that always has maximum brightness may not be considered a defective pixel if the stuck pixel is located in an area of the current image dominated by bright white color. Conversely, if the stuck pixel is within an area of the current image dominated by black or darker colors, the stuck pixel may be identified as a defective pixel during processing by the logic 932 DPDC and corrected accordingly.

[0360] For static defect detection, the position of each pixel is compared against a static defect table, which may store data corresponding to the positions of pixels that are known to be defective. For example, in one embodiment, the logic 932 DPDC can track the detection of defective pixels (for example, using a counter mechanism or register) and, if a particular pixel repeatedly fails, the position of this pixel is stored in the static defect table. Thus, during static defect detection, if it is determined that the position of the current pixel is in the static defect table, then the current pixel is identified as a defective pixel, and a replacement value is determined and temporarily stored. In one embodiment, the replacement value may be the value of the previous pixel (based on the scanning order) of the same color component. The replacement value may be used to correct the static defect during dynamic defect/speckle detection and correction, as will be discussed below. Additionally, if the previous pixel is outside of the primary frame 278 (Fig. 19), its value is not used, and the static defect can be corrected during the dynamic defect correction process. In addition, for memory reasons, the static defect table may store a finite number of position entries. For example, in one embodiment, the static defect table may be implemented as a FIFO queue configured to store a total of 16 positions for every two lines of image data. Nevertheless, the positions specified in the static defect table will be corrected using the replacement value of the previous pixel (rather than through the dynamic defect detection process discussed below). As mentioned above, embodiments of the present invention may also provide for the static defect table to be updated over time.
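A minimal sketch of the static defect table behaviour described above is given below, assuming a single bounded FIFO of defect positions and a replacement value taken from the previous same-color pixel; the class and function names are hypothetical, and the per-two-line FIFO arrangement mentioned in the text is simplified to a single queue.

```python
from collections import deque


class StaticDefectTable:
    """Minimal sketch of a static defect table as a bounded FIFO of positions."""

    def __init__(self, max_entries=16):
        self.positions = deque(maxlen=max_entries)  # stored (row, col) positions

    def add(self, pos):
        # Record a newly confirmed defective pixel position.
        if pos not in self.positions:
            self.positions.append(pos)

    def is_static_defect(self, pos):
        return pos in self.positions


def static_replacement(prev_same_color_pixel):
    # The replacement value is the value of the previous pixel of the same
    # color component (in scan order), as described above.
    return prev_same_color_pixel


table = StaticDefectTable()
table.add((10, 24))
if table.is_static_defect((10, 24)):
    print(static_replacement(prev_same_color_pixel=512))
```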

[0361] Embodiments may provide for the static defect table to be implemented in on-chip memory or in off-chip memory. Obviously, using an on-chip implementation may increase the overall chip area/size, while using an off-chip implementation may reduce the chip area/size but increase memory bandwidth requirements. Thus, it should be understood that the static defect table may be implemented either on-chip or off-chip, depending on the requirements of a particular implementation, i.e., the total number of pixels to be stored in the static defect table.

[0362] The dynamic defect and speckle detection processes may be shifted in time relative to the static defect detection process discussed above. For example, in one embodiment, the dynamic defect and speckle detection process may begin after the static defect detection process has analyzed two scan lines (for example, rows) of pixels. Obviously, this allows static defects to be identified and their corresponding replacement values to be determined before dynamic defect/speckle detection takes place. For example, during the dynamic defect/speckle detection process, if the current pixel was previously marked as a static defect, rather than applying dynamic defect/speckle detection, the static defect is simply corrected using the previously determined replacement value.

[0363] Dynamic defect and speckle detection may occur sequentially or in parallel. Dynamic defect and speckle detection and correction, as performed by the logic 932 DPDC, may rely on adaptive edge detection using pixel-to-pixel gradients. In one embodiment, the logic 932 DPDC may select the eight immediate neighbors of the current pixel having the same color component that are within the primary frame 278 (Fig. 19). In other words, the current pixel and its eight immediate neighbors P0, P1, P2, P3, P4, P5, P6 and P7 may form a 3×3 region, as shown in Fig. 69.

[0364] It should be noted, however, that depending on the position of the current pixel P, pixels outside of the primary frame 278 are not considered when computing the pixel-to-pixel gradients. For example, for the "upper left" case 942 shown in Fig. 69, the current pixel P is at the upper-left corner of the primary frame 278, and thus the neighboring pixels P0, P1, P2, P3 and P5 outside of the primary frame 278 are not considered, leaving only the pixels P4, P6 and P7 (N=3). In the "upper" case 944, the current pixel P is at the upper edge of the primary frame 278, and thus the neighboring pixels P0, P1 and P2 outside of the primary frame 278 are not considered, leaving only the pixels P3, P4, P5, P6 and P7 (N=5). Then, in the "upper right" case 946, the current pixel P is at the upper-right corner of the primary frame 278, and thus the neighboring pixels P0, P1, P2, P4 and P7 outside of the primary frame 278 are not considered, leaving only the pixels P3, P5 and P6 (N=3). In the "left" case 948, the current pixel P is at the left edge of the primary frame 278, and thus the neighboring pixels P0, P3 and P5 outside of the primary frame 278 are not considered, leaving only the pixels P1, P2, P4, P6 and P7 (N=5).

[0365] In the "central" case 950, all of the pixels P0-P7 are within the primary frame 278 and are thus used in determining the pixel-to-pixel gradients (N=8). In the "right" case 952, the current pixel P is at the right edge of the primary frame 278, and thus the neighboring pixels P2, P4 and P7 outside of the primary frame 278 are not considered, leaving only the pixels P0, P1, P3, P5 and P6 (N=5). Additionally, in the "lower left" case 954, the current pixel P is at the lower-left corner of the primary frame 278, and thus the neighboring pixels P0, P3, P5, P6 and P7 outside of the primary frame 278 are not considered, leaving only the pixels P1, P2 and P4 (N=3). In the "lower" case 956, the current pixel P is at the lower edge of the primary frame 278, and thus the neighboring pixels P5, P6 and P7 outside of the primary frame 278 are not considered, leaving only the pixels P0, P1, P2, P3 and P4 (N=5). Finally, in the "lower right" case 958, the current pixel P is at the lower-right corner of the primary frame 278, and thus the neighboring pixels P2, P4, P5, P6 and P7 outside of the primary frame 278 are not considered, leaving only the pixels P0, P1 and P3 (N=3).

[0366] Thus, depending on the position of the current pixel P, the number of pixels used in determining the pixel-to-pixel gradients may be 3, 5 or 8. In the illustrated embodiment, for each neighboring pixel (k = 0 to 7) within the image boundaries (for example, the primary frame 278), the pixel-to-pixel gradients may be computed as follows:

Gk = abs(P − Pk), for 0 ≤ k ≤ 7 (only for k in the primary frame) (51)

Additionally, the average gradient, Gav, can be computed as the difference between the current pixel and the average, Pav, of the surrounding pixels, as shown in the following equations:

Pav = (Σ Pk)/N, where N = 3, 5 or 8 (depending on the pixel position) (52a)

Gav = abs(P − Pav) (52b)

The pixel-to-pixel gradient values (equation 51) may be used in determining a dynamic defect case, and the average of the surrounding pixels (equations 52a and 52b) may be used in identifying speckle cases, as further described below.

[0367] In one embodiment, dynamic defect detection may be performed by the logic 932 DPDC as follows. First, it is assumed that a pixel is defective if a certain number of the gradients Gk are at or below a particular threshold, denoted by the variable dynTh (dynamic defect threshold). Thus, for each pixel, a count (C) is accumulated of the number of gradients of neighboring pixels within the image boundaries that are at or below the threshold dynTh. The threshold dynTh may be a combination of a fixed threshold component and a dynamic threshold component, which may depend on the "activity" of the surrounding pixels. For example, in one embodiment, the dynamic threshold component of dynTh may be determined by computing a high-frequency component value Phf based on summing the absolute differences between the average pixel value Pav (equation 52a) and each neighboring pixel, as shown below:

Phf = Σ abs(Pav − Pk), summed over the N neighboring pixels, where N = 3, 5 or 8 (52c)

In cases where the pixel is located at a corner of the image (N=3) or at an edge of the image (N=5), Phf may be multiplied by 8/3 or 8/5, respectively. Obviously, this ensures that the high-frequency component Phf is normalized based on eight neighboring pixels (N=8).

[0368] Once Phf is determined, the dynamic defect detection threshold dynTh may be computed as shown below:

dynTh = dynTh1 + (dynTh2 × Phf) (53)

where dynTh1 represents the fixed threshold component, and dynTh2 represents the dynamic threshold component and is a multiplier for Phf in equation 53. A separate fixed threshold component dynTh1 may be provided for each color component, but dynTh1 is the same for every pixel of the same color. Solely by way of example, dynTh1 may be set so that it is at least above the variance of the noise in the image.

[0369] The dynamic threshold component dynTh2 may be determined based on certain characteristics of the image. For example, in one embodiment, dynTh2 may be determined using stored empirical data regarding exposure and/or sensor integration time. The empirical data may be determined during calibration of the image sensor (for example, 90) and may associate dynamic threshold component values that can be selected for dynTh2 with each of a series of data points. Thus, based on the current exposure and/or sensor integration time value, which may be determined during statistical processing in the logic 80 of ISP front-end processing, dynTh2 may be determined by selecting the dynamic threshold component value from the stored empirical data that corresponds to the current exposure and/or sensor integration time value. Additionally, if the current exposure and/or sensor integration time value does not correspond directly to one of the empirical data points, dynTh2 may be determined by interpolating the dynamic threshold component values associated with the data points between which the current exposure and/or sensor integration time value falls. In addition, by analogy with the fixed threshold component dynTh1, the dynamic threshold component dynTh2 may have different values for each color component. Thus, the composite threshold dynTh may vary for each color component (for example, R, B, Gr, Gb).
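The selection and interpolation of the dynamic threshold component dynTh2 from stored calibration data might look roughly like the following sketch, assuming the empirical data is kept as (exposure, dynTh2) pairs sorted by exposure; linear interpolation between bracketing points and end-point clamping are assumptions of this example.

```python
def dynamic_threshold_component(exposure, calibration):
    """Sketch: pick (or linearly interpolate) dynTh2 from empirical data.

    calibration : list of (exposure_value, dynTh2) pairs sorted by exposure,
                  assumed to have been measured when calibrating the sensor.
    """
    # Exact match: use the stored value directly.
    for exp, th in calibration:
        if exp == exposure:
            return th

    # Otherwise interpolate between the two data points that bracket the
    # current exposure value, as described in the text.
    for (e0, t0), (e1, t1) in zip(calibration, calibration[1:]):
        if e0 < exposure < e1:
            frac = (exposure - e0) / (e1 - e0)
            return t0 + frac * (t1 - t0)

    # Outside the calibrated range: clamp to the nearest end point (assumption).
    return calibration[0][1] if exposure < calibration[0][0] else calibration[-1][1]


cal = [(100, 0.5), (200, 0.8), (400, 1.2)]
print(dynamic_threshold_component(300, cal))   # -> 1.0
```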

[0370] As mentioned above, for each pixel, a count C is determined of the number of gradients of neighboring pixels within the image boundaries that are at or below the threshold dynTh. For example, for each neighboring pixel within the primary frame 278, the accumulated count C of the gradients Gk that are at or below the threshold dynTh may be computed as follows:

C = Σ (Gk ≤ dynTh), for 0 ≤ k ≤ 7 (only for k in the primary frame) (54)

Then, if it is determined that the accumulated count C is less than or equal to a maximum count, denoted by the variable dynMaxC, the pixel may be considered a dynamic defect. In one embodiment, different values of dynMaxC may be provided for the conditions N=3 (corner), N=5 (edge) and N=8. This logic is expressed below:

if (C≤dynMaxC), then the current pixel P is defective. (55)

[0371] As mentioned above, the positions of defective pixels may be stored in the static defect table. In some embodiments, the minimum gradient value (min(Gk)) computed during dynamic defect detection for the current pixel may be stored and may be used to sort the defective pixels, such that a greater minimum gradient value indicates a greater "severity" of defect and should be corrected during pixel correction before less severe defects are corrected. In one embodiment, a pixel may need to be observed over multiple imaging frames before being stored in the static defect table, for example by filtering the positions of defective pixels over time. In the latter embodiment, the position of a defective pixel may be stored in the static defect table only if the defect appears at the same position in a particular number of consecutive images. In addition, in some embodiments, the static defect table may be configured to sort the stored defective pixel positions based on the minimum gradient values. For example, a higher minimum gradient value may indicate a defect of greater "severity". By ordering the positions in this manner, the priority of static defect correction may be set so that the most severe or important defects are corrected first. Additionally, the static defect table may be updated over time to include newly detected static defects and to order them accordingly based on their respective minimum gradient values.

[0372] Speckle detection, which may occur in parallel with the dynamic defect detection process described above, may be performed by determining whether the value Gav (equation 52b) is above a speckle detection threshold spkTh. By analogy with the dynamic defect threshold dynTh, the speckle threshold spkTh may also include fixed and dynamic components, denoted spkTh1 and spkTh2, respectively. In general, the fixed and dynamic components spkTh1 and spkTh2 may be set more "aggressively" compared with the dynTh1 and dynTh2 values, in order to avoid erroneously detecting speckle in areas of the image that may be more heavily textured, such as text, foliage, certain patterns, fabric, etc. Accordingly, in one embodiment, the dynamic speckle threshold component spkTh2 may be increased for highly textured areas of the image and decreased for "flatter" or more uniform areas. The speckle detection threshold spkTh may be computed as shown below:

spkTh = spkTh1 + (spkTh2 × Phf) (56)

where spkTh1 represents the fixed threshold component, and spkTh2 represents the dynamic threshold component. The detection of speckle may then be determined in accordance with the following expression:

if (Gav > spkTh), then the current pixel P is considered to contain speckle. (57)
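Pulling equations 51-57 together, the dynamic defect and speckle classification of a single pixel could be sketched as follows; the function signature, the use of plain Python lists and the threshold values in the usage line are assumptions made for illustration only.

```python
def classify_pixel(p, neighbors, dyn_th1, dyn_th2, spk_th1, spk_th2, dyn_max_c):
    """Sketch of dynamic defect / speckle classification for one pixel.

    p         : current pixel value
    neighbors : same-color neighbors that lie inside the primary frame
                (3, 5 or 8 values depending on the pixel position)
    dyn_max_c : maximum count, which may differ for N = 3, 5 or 8
    """
    n = len(neighbors)

    # Pixel-to-pixel gradients (equation 51).
    gk = [abs(p - pk) for pk in neighbors]

    # Average of the neighbors and average gradient (equations 52a, 52b).
    p_av = sum(neighbors) / n
    g_av = abs(p - p_av)

    # High-frequency component (equation 52c), normalized to 8 neighbors when
    # the pixel sits on a corner (N=3) or an edge (N=5).
    p_hf = sum(abs(p_av - pk) for pk in neighbors)
    if n == 3:
        p_hf *= 8 / 3
    elif n == 5:
        p_hf *= 8 / 5

    # Composite thresholds (equations 53 and 56).
    dyn_th = dyn_th1 + dyn_th2 * p_hf
    spk_th = spk_th1 + spk_th2 * p_hf

    # Count of gradients at or below dynTh (equation 54) and the dynamic
    # defect test (equation 55): few similar neighbors means a likely defect.
    c = sum(1 for g in gk if g <= dyn_th)
    is_dynamic_defect = c <= dyn_max_c

    # Speckle test (equation 57).
    is_speckle = g_av > spk_th
    return is_dynamic_defect, is_speckle


print(classify_pixel(520, [500, 505, 498, 510, 250, 495, 505, 500],
                     dyn_th1=20, dyn_th2=0.1, spk_th1=40, spk_th2=0.2, dyn_max_c=4))
```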

[0373] Once defective pixels have been identified, the logic 932 DPDC may apply pixel correction depending on the type of defect detected. For example, if the defective pixel was identified as a static defect, the pixel is replaced with the stored replacement value, as discussed above (for example, the value of the previous pixel of the same color component). If the pixel was identified either as a dynamic defect or as speckle, pixel correction may be performed as follows. First, gradients are computed as the sum of the absolute differences between the central pixel and the first and second neighboring pixels (for example, the Gk computation of equation 51) for four directions: a horizontal (h) direction, a vertical (v) direction, a diagonal-positive (dp) direction and a diagonal-negative (dn) direction, as shown below:

[0374] Then, the corrected pixel value PC may be determined by linear interpolation of the two neighboring pixels associated with whichever directional gradient Gh, Gv, Gdp or Gdn has the smallest value. For example, in one embodiment, the following logical statement may express the computation of PC:

The pixel correction techniques implemented by the logic 932 DPDC may also provide for exceptions at boundary conditions. For example, if one of the two neighboring pixels associated with the selected interpolation direction is located outside of the primary frame, the value of the neighboring pixel that is within the primary frame is substituted instead. Thus, using this technique, the corrected pixel value is equivalent to the value of the neighboring pixel within the primary frame.
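A rough sketch of the directional-gradient correction and its boundary-condition exception is shown below; since equations 58-62 are not reproduced above, the pairing of named directions with particular neighbors and the handling of out-of-frame neighbors during gradient computation are assumptions of this illustration.

```python
def correct_defective_pixel(p, pairs):
    """Sketch of directional-gradient pixel correction.

    pairs : mapping of direction name -> (first neighbor, second neighbor),
            where a value of None stands for a neighbor outside the primary
            frame; the direction-to-neighbor pairing is an assumption here.
    """
    gradients = {}
    substituted = {}
    for name, (a, b) in pairs.items():
        # Boundary handling: substitute the in-frame neighbor's value for an
        # out-of-frame neighbor, as described in the text.
        a_eff = a if a is not None else b
        b_eff = b if b is not None else a
        substituted[name] = (a_eff, b_eff)
        # Directional gradient: sum of absolute differences between the
        # center pixel and the two neighbors of this direction.
        gradients[name] = abs(p - a_eff) + abs(p - b_eff)

    # Pick the direction with the smallest gradient and linearly interpolate
    # its two (possibly substituted) neighbors to obtain the corrected value.
    best = min(gradients, key=gradients.get)
    a_eff, b_eff = substituted[best]
    return (a_eff + b_eff) // 2


neigh = {"h": (100, 104), "v": (250, 60), "dp": (90, None), "dn": (None, 110)}
print(correct_defective_pixel(400, neigh))
```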

[0375] It should be noted that the defective pixel detection/correction techniques applied by the logic 932 DPDC during ISP pipeline processing are more robust than the logic 460 DPDC of the logic 80 of ISP front-end processing. As discussed above, in one embodiment, the logic 460 DPDC provides only dynamic defect detection and correction using neighboring pixels in the horizontal direction only, whereas the logic 932 DPDC provides detection and correction of static defects, dynamic defects, as well as speckle, using neighboring pixels in both the horizontal and vertical directions.

[0376] Obviously, storing the positions of defective pixels using a static defect table may provide temporal filtering of defective pixels with lower memory requirements. For example, compared with many traditional techniques, which store entire images and apply temporal filtering to identify static defects over time, embodiments of the present invention store only the positions of the defective pixels, which typically requires only a fraction of the memory needed to store a full image frame. In addition, as discussed above, storing the minimum gradient value (min(Gk)) allows the static defect table to be used efficiently by prioritizing the order of the positions at which defective pixels are corrected (for example, starting with those that will be most visible).

[0377] Additionally, the use of thresholds that include a dynamic component (for example, dynTh2 and spkTh2) may reduce the likelihood of erroneous defect detection, a problem often encountered in conventional image processing systems when processing high-frequency areas of an image (for example, text, foliage, certain patterns, fabrics, etc.). In addition, the use of directional gradients (for example, h, v, dp, dn) for pixel correction may reduce the visibility of visual artifacts if an erroneous defect detection occurs. For example, filtering in the direction of the minimum gradient may result in a correction that still yields acceptable results in most cases, even in cases of erroneous detection. Additionally, including the current pixel P in the gradient computation may improve the accuracy of gradient detection, particularly in the case of hot pixels.

[0378] The defective pixel detection and correction techniques described above, implemented by the logic 932 DPDC, may be summarized by the flowcharts provided in Fig. 70-72. For example, Fig. 70 illustrates a process 960 for detecting static defects. Beginning at step 962, an input pixel P is received at a first time T0. Then, at step 964, the position of the pixel P is compared with the values stored in the static defect table. Decision logic 966 determines whether the position of the pixel P is found in the static defect table. If the position of P is in the static defect table, the process 960 proceeds to step 968, at which the pixel P is marked as a static defect and a replacement value is determined. As discussed above, the replacement value may be determined based on the value of the previous pixel (in scan order) of the same color component. The process 960 then proceeds to step 970, at which the process 960 passes to the dynamic defect and speckle detection process 980 shown in Fig. 71. Additionally, if decision logic 966 determines that the position of the pixel P is not in the static defect table, the process 960 proceeds to step 970 without performing step 968.

[0379] Referring to Fig. 71, the input pixel P is received at time T1, as shown at step 982, for processing to determine whether a dynamic defect or speckle is present. The time T1 may represent a time shift relative to the static defect detection process 960 shown in Fig. 70. As discussed above, the dynamic defect and speckle detection process may begin after the static defect detection process has analyzed two scan lines (for example, rows) of pixels, thereby allowing time for static defects to be identified and their respective replacement values to be determined before dynamic defect/speckle detection begins.

[0380] Decision logic 984 determines whether the input pixel P was previously marked as a static defect (for example, at step 968 of process 960). If P is marked as a static defect, the process 980 may proceed to the pixel correction process shown in Fig. 72 and may skip the remaining steps shown in Fig. 71. If decision logic 984 determines that the input pixel P is not a static defect, the process proceeds to step 986, and the neighboring pixels that may be used in the dynamic defect and speckle detection process are identified. For example, in accordance with the embodiment discussed above and shown in Fig. 69, the neighboring pixels may include the 8 immediate neighbors of the pixel P (for example, P0-P7), thus forming a 3×3 pixel region. Then, at step 988, the pixel-to-pixel gradients are computed for each neighboring pixel within the primary frame 278, as described in equation 51 above. Additionally, the average gradient (Gav) may be computed as the difference between the current pixel and the average of the surrounding pixels, as shown in equations 52a and 52b.

[0381] The process 980 then branches to step 990 for dynamic defect detection and to decision logic 998 for speckle detection. As noted above, dynamic defect detection and speckle detection may, in some embodiments, occur in parallel. At step 990, the count C of the number of gradients that are less than or equal to the threshold dynTh is determined. As described above, the threshold dynTh may include fixed and dynamic components and, in one embodiment, may be determined in accordance with equation 53 above. If C is less than or equal to the maximum count dynMaxC, the process 980 proceeds to step 996, and the current pixel is marked as a dynamic defect. Thereafter, the process 980 may proceed to the pixel correction process shown in Fig. 72, which will be discussed below.

[0382] Returning to the branch after step 988, for speckle detection, decision logic 998 determines whether the average gradient Gav is greater than the speckle detection threshold spkTh, which may also include fixed and dynamic components. If Gav is greater than the threshold spkTh, then at step 1000 the pixel P is marked as containing speckle and, thereafter, the process 980 proceeds to Fig. 72 for correction of the speckled pixel. In addition, if both decision logic blocks 992 and 998 return "No", this indicates that the pixel P does not contain dynamic defects, speckle or even static defects (decision logic 984). Thus, when decision logic 992 and 998 both return "No", the process 980 may conclude at step 994, whereby the pixel P is passed unchanged, since no defects (for example, static, dynamic or speckle) were detected.

[0383] Fig. 72 presents a pixel correction process 1010 in accordance with the techniques described above. At step 1012, the input pixel P is received from the process 980 of Fig. 71. It should be noted that the pixel P may be received by the process 1010 from step 984 (static defect) or from steps 996 (dynamic defect) and 1000 (speckle defect). Decision logic 1014 then determines whether the pixel P is marked as a static defect. If the pixel P is a static defect, the process 1010 continues and ends at step 1016, whereby the static defect is corrected using the replacement value determined at step 968 (Fig. 70).

[0384] If the pixel P is not identified as a static defect, the process 1010 passes from decision logic 1014 to step 1018, and directional gradients are computed. For example, as discussed above with reference to equations 58-61, the gradients may be computed as the sum of the absolute differences between the central pixel and the first and second neighboring pixels for the four directions (h, v, dp and dn). Then, at step 1020, the directional gradient with the smallest value is identified, after which decision logic 1022 evaluates whether one of the two neighboring pixels associated with the minimum gradient is located outside of the image frame (for example, the primary frame 278). If both neighboring pixels are within the image frame, the process 1010 proceeds to step 1024, and the pixel correction value (PC) is determined by applying linear interpolation to the values of the two neighboring pixels, as shown in equation 62. Thereafter, the input pixel P may be corrected using the interpolated pixel correction value PC, as shown at step 1030.

[0385] If decision logic 1022 determines that one of the two neighboring pixels is located outside of the image frame (for example, outside the primary frame), then instead of using the value of the outside pixel (Pout), the logic 932 DPDC may replace the value of Pout with the value of the other neighboring pixel, which is inside the image frame (Pin), as shown at step 1026. Then, at step 1028, the pixel correction value PC is determined by interpolating the value of Pin and the substituted value of Pout. In other words, in this case PC may be equivalent to the value of Pin. Finally, at step 1030, the pixel P is corrected using the value PC. Before continuing, it should be understood that the particular defective pixel detection and correction processes discussed here with reference to the logic 932 DPDC are intended to represent only one possible implementation of the present invention. Indeed, depending on design and/or cost constraints, a number of variations are possible, and features may be added or removed such that the overall complexity and robustness of the defect detection/correction logic lies between the simpler detection/correction logic 460 implemented in the block 80 of ISP front-end processing and the defect detection/correction logic discussed here with reference to the logic 932 DPDC.

[0386] Referring back to Fig. 68, the corrected pixel data output from the logic 932 DPDC is then received by the logic 934 of noise reduction for further processing. In one embodiment, the logic 934 of noise reduction may be configured to implement two-dimensional edge-adaptive low-pass filtering to reduce noise in the image data while preserving detail and texture. The edge-adaptive thresholds may be set (for example, by the control logic 84) based on the current lighting levels, so that filtering may be strengthened in low-light conditions. Additionally, as briefly mentioned above with respect to determining the dynTh and spkTh values, the noise variance may be determined in advance for a given sensor, which allows the noise reduction thresholds to be set just above the noise variance, so that, during noise reduction processing, the noise is reduced without significantly affecting the texture and detail of the scene (for example, to avoid/reduce false detections). Assuming a Bayer color filter implementation, the logic 934 of noise reduction may process each color component Gr, R, B and Gb independently using a separable 7-tap horizontal filter and a 5-tap vertical filter. In one embodiment, the noise reduction process may be carried out by correcting non-uniformity in the green color components (Gb and Gr), followed by horizontal filtering and vertical filtering.

[0387] Green non-uniformity (GNU) is generally characterized by a small difference in brightness between the Gr and Gb pixels given a uniformly illuminated flat surface. Without correcting or compensating for this non-uniformity, certain artifacts, such as a "maze" artifact, may appear in the full-color image after demosaicing. The green non-uniformity correction process may include determining, for each green pixel in the raw Bayer image data, whether the absolute difference between the current green pixel (G1) and the green pixel to the right and below (G2) the current pixel is less than a GNU correction threshold (gnuTh). Fig. 73 illustrates the positions of the G1 and G2 pixels within a 2×2 region of the Bayer pattern. As shown, the colors of the pixels bordering G1 may depend on whether the current green pixel is a Gb or Gr pixel. For example, if G1 is Gr, then G2 is Gb, the pixel to the right of G1 is R (red), and the pixel below G1 is B (blue). Alternatively, if G1 is Gb, then G2 is Gr, the pixel to the right of G1 is B, and the pixel below G1 is R. If the absolute difference between G1 and G2 is less than the GNU correction threshold value, then the current green pixel G1 is replaced by the average of G1 and G2, as shown by the following logic:

Obviously, applying green non-uniformity correction in this manner helps to prevent the G1 and G2 pixels from being averaged across edges, thereby improving and/or preserving sharpness.
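The green non-uniformity test and replacement just described can be sketched as follows; the integer averaging and the function name are assumptions of this example.

```python
def correct_green_nonuniformity(g1, g2, gnu_th):
    """Sketch of the green non-uniformity (GNU) correction described above.

    g1     : current green pixel (Gr or Gb)
    g2     : green pixel of the opposite type located to the right and below g1
    gnu_th : GNU correction threshold
    """
    # Only average when the two greens are close in value; a large difference
    # is taken to indicate an edge that should be preserved.
    if abs(g1 - g2) < gnu_th:
        return (g1 + g2) // 2   # integer average, an assumption of this sketch
    return g1


print(correct_green_nonuniformity(200, 204, gnu_th=16))  # -> 202
print(correct_green_nonuniformity(200, 400, gnu_th=16))  # -> 200 (edge preserved)
```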

[0388] Horizontal filtering is applied after green non-uniformity correction and, in one embodiment, may provide a 7-tap horizontal filter. Gradients across the edge of each filter tap are computed and, if they exceed a horizontal edge threshold (horzTh), the filter tap is folded to the central pixel, as will be illustrated below. The horizontal filter may process the image data independently for each color component (R, B, Gr, Gb) and may use unfiltered values as its input values.

[0389] By way of example, Fig. 74 shows a graphical representation of a set of horizontal pixels P0-P6, with the central tap positioned at P3. Based on the pixels shown in Fig. 74, the edge gradients for each filter tap may be computed as follows:

The edge gradients Eh0-Eh5 may then be used by the horizontal filter component to determine the horizontal filter output, Phorz, using the formula shown below in equation 70:


where horzTh[c] is the horizontal edge threshold for each color component c (for example, R, B, Gr and Gb), and C0-C6 are the filter tap coefficients corresponding to the pixels P0-P6, respectively. The horizontal filter output Phorz may be applied at the position of the central pixel P3. In one embodiment, the filter tap coefficients C0-C6 may be 16-bit two's complement values with 3 integer bits and 13 fractional bits (i.e., a 3.13 representation). In addition, it should be noted that the filter tap coefficients C0-C6 need not be symmetric with respect to the central pixel P3.
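Since equation 70 is not reproduced above, the sketch below only illustrates one plausible reading of the edge-adaptive behaviour ("the filter tap is folded to the central pixel" when a gradient exceeds horzTh); the per-tap folding rule and the example coefficients are assumptions and should not be taken as the actual filter.

```python
def horizontal_filter(pixels, coeffs, horz_th):
    """Loose sketch of a 7-tap edge-adaptive horizontal filter.

    pixels : the seven same-color pixels P0..P6, center tap at P3
    coeffs : filter coefficients C0..C6 (floats standing in for 3.13 values)
    """
    center = pixels[3]
    out = 0.0
    for p, c in zip(pixels, coeffs):
        # Assumed folding rule: if the gradient between this tap and the
        # center pixel exceeds the threshold, use the center value instead.
        tap = p if abs(p - center) <= horz_th else center
        out += c * tap
    return out


taps = [100, 102, 101, 103, 180, 104, 102]       # P0..P6, center tap at P3
coeffs = [0.05, 0.1, 0.2, 0.3, 0.2, 0.1, 0.05]    # stand-ins for C0..C6
print(horizontal_filter(taps, coeffs, horz_th=32))
```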

[0390] Vertical filtering is also applied by the logic 934 of noise reduction after green non-uniformity correction and horizontal filtering. In one embodiment, the vertical filter operation may provide a 5-tap filter, as shown in Fig. 75, with the central tap of the vertical filter located at P2. The vertical filtering process may occur in a manner similar to the horizontal filtering process described above. For example, gradients across the edge of each filter tap are computed and, if they exceed a vertical edge threshold (vertTh), the filter tap is folded to the central pixel P2. The vertical filter may process the image data independently for each color component (R, B, Gr, Gb) and may use unfiltered values as its input values.

[0391] Based on the pixels shown in Fig. 75, the vertical edge gradients for each filter tap may be computed as follows:

The edge gradients Ev0-Ev5 may then be used by the vertical filter to determine the vertical filter output, Pvert, using the formula shown below in equation 75:

where vertTh[c] is the vertical edge threshold for each color component c (for example, R, B, Gr and Gb), and C0-C4 are the filter tap coefficients corresponding to the pixels P0-P4 of Fig. 75, respectively. The vertical filter output Pvert may be applied at the position of the central pixel P2. In one embodiment, the filter tap coefficients C0-C4 may be 16-bit two's complement values with 3 integer bits and 13 fractional bits (i.e., a 3.13 representation). In addition, it should be noted that the filter tap coefficients C0-C4 need not be symmetric with respect to the central pixel P2.

[0392] Additionally, with respect to boundary conditions, when neighboring pixels are outside of the primary frame 278 (Fig. 19), the values of the out-of-frame pixels are replicated with the value of the same-color pixel at the edge of the primary frame. This convention may be applied to both horizontal and vertical filtering. By way of example, referring again to Fig. 74 for the case of horizontal filtering, if the pixel P2 is an edge pixel at the left edge of the primary frame and the pixels P0 and P1 are outside of the primary frame, then the values of pixels P0 and P1 are replaced with the value of pixel P2 for horizontal filtering.
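The boundary convention described in the preceding paragraph amounts to replicating the same-color edge pixel for any tap that falls outside the primary frame, as in this small sketch (the helper name is hypothetical):

```python
def pad_for_filtering(row, left, right):
    """Replicate the same-color edge pixel values to supply filter taps that
    would otherwise fall outside the primary frame."""
    return [row[0]] * left + list(row) + [row[-1]] * right


print(pad_for_filtering([10, 12, 14, 16], left=2, right=2))
# -> [10, 10, 10, 12, 14, 16, 16, 16]
```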

[0393] Returning to the block diagram of the logic 900 of primary processing shown in Fig. 68, the output of the logic 934 of noise reduction is then sent to the logic 936 of lens shading correction (LSC) for processing. As discussed above, lens shading correction techniques may include applying an appropriate gain on a per-pixel basis to compensate for the fall-off in light intensity, which may result from the geometric optics of the lens, manufacturing imperfections, misalignment of the microlens array and the color filter array, etc. In addition, the infrared (IR) filter in some lenses may cause the fall-off to depend on the illuminant, so the lens shading gains may be adapted depending on the detected light source.

[0394] In the present embodiment, the logic 936 LSC of the ISP pipeline 82 may be implemented in a similar manner and thus provide, in general, the same functions as the logic 464 LSC of the block 80 of ISP front-end processing, as discussed above with reference to Fig. 40-48. Accordingly, to avoid redundancy, it should be understood that the logic 936 LSC of the illustrated embodiment is configured to operate, in general, in the same manner as the logic 464 LSC, and therefore the description of the lens shading correction techniques given above will not be repeated here. To summarize, however, it should be understood that the logic 936 LSC may process each color component of the raw pixel data stream independently to determine the gain to apply to the current pixel. In accordance with the embodiment discussed above, the lens shading correction gain may be determined based on a defined set of grid gain points distributed across the imaging frame, with the interval between grid points defined by a number of pixels (for example, 8 pixels, 16 pixels, etc.). If the position of the current pixel corresponds to a grid point, the gain value associated with that grid point is applied to the current pixel. If the position of the current pixel lies between grid points (for example, G0, G1, G2 and G3 of Fig. 43), the LSC gain value may be computed by interpolating the grid points between which the current pixel lies (equations 13a and 13b). This process is illustrated by the process 528 of Fig. 44. In addition, as mentioned above with respect to Fig. 42, in some embodiments the grid points may be distributed unevenly (for example, logarithmically), so that the grid points are less concentrated in the central region 504 LSC, but more concentrated towards the corners of the region 504 LSC, where lens shading distortion is usually more noticeable.

[0395] In addition, as discussed above with reference to Fig. 47 and 48, the logic 936 LSC may also use a radial gain component together with the grid gain values. The radial gain component may be determined based on the distance of the current pixel from the center of the image (equations 14-16). As mentioned, using a radial gain allows a single common gain grid to be used for all color components, which may greatly reduce the total storage space required to store separate gain grids for each color component. This reduction in grid gain data may lower implementation cost, since grid gain data tables may occupy a significant amount of memory or chip area in the image processing hardware.
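For illustration, the per-pixel lens shading gain described above (bilinear interpolation between grid gain points, optionally combined with a radial component) might be sketched as follows; multiplying the grid and radial gains together and the specific radial gain function are assumptions of this example.

```python
import math


def lsc_gain(x, y, grid, grid_step, center, radial_gain_fn=None):
    """Sketch of a per-pixel lens shading gain.

    grid      : 2-D list of gain values, one per grid point
    grid_step : spacing between grid points in pixels
    center    : (cx, cy) optical center used for the radial component
    """
    gx, gy = x / grid_step, y / grid_step
    x0, y0 = int(gx), int(gy)
    fx, fy = gx - x0, gy - y0

    # Bilinear interpolation between the four surrounding grid points
    # (cf. equations 13a/13b referenced above).
    g00, g01 = grid[y0][x0], grid[y0][x0 + 1]
    g10, g11 = grid[y0 + 1][x0], grid[y0 + 1][x0 + 1]
    grid_gain = (g00 * (1 - fx) * (1 - fy) + g01 * fx * (1 - fy) +
                 g10 * (1 - fx) * fy + g11 * fx * fy)

    if radial_gain_fn is not None:
        # Radial component based on the distance from the image center
        # (cf. equations 14-16 referenced above); combination by
        # multiplication is an assumption of this sketch.
        r = math.hypot(x - center[0], y - center[1])
        return grid_gain * radial_gain_fn(r)
    return grid_gain


grid = [[1.0, 1.1], [1.1, 1.3]]
print(lsc_gain(4, 4, grid, grid_step=8, center=(0, 0),
               radial_gain_fn=lambda r: 1.0 + 0.001 * r))
```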

[0396] Next, returning again to the block diagram of the logic 900 of primary processing shown in Fig. 68, the output of the logic 936 LSC is then passed to a second gain, offset and clamping (GOC) block 938. The logic 938 GOC may be applied prior to demosaicing (logic block 940) and may be used to perform automatic white balancing on the output of the logic 936 LSC. In the present embodiment, the logic 938 GOC may be implemented in the same manner as the logic 930 GOC (and the logic 462 BLC). Thus, in accordance with equation 11 above, the input received by the logic 938 GOC is first offset by a signed value and then multiplied by a gain. The resulting value is then clipped to a minimum and maximum range in accordance with equation 12.

[0397] Thereafter, the output of the logic 938 GOC is forwarded to the logic 940 of demosaicing for processing to produce a full-color (RGB) image based on the raw Bayer input data. Obviously, the raw output of an image sensor that uses a color filter array, such as a Bayer filter, is "incomplete" in the sense that each pixel is filtered to acquire only a single color component. Thus, the data collected for a single pixel alone is insufficient to determine color. Accordingly, demosaicing techniques may be used to generate a full-color image from the raw Bayer data by interpolating the missing color data for each pixel.

[0398] Fig. 76 illustrates a graphical process flow 692, which provides a general overview of how demosaicing may be applied to a raw Bayer image pattern 1034 to produce a full-color RGB image. As shown, a 4×4 portion 1036 of the raw Bayer image 1034 may include separate channels for each color component, including a green channel 1038, a red channel 1040 and a blue channel 1042. Because each imaging pixel in a Bayer sensor acquires data for only one color, the color data for each color channel 1038, 1040 and 1042 may be incomplete, as indicated by the "?" symbols. By applying a demosaicing technique 1044, the missing color samples of each channel may be interpolated. For example, as indicated by reference numeral 1046, interpolated data G' may be used to fill the missing samples on the green color channel; similarly, interpolated data R' may (in combination with the interpolated data G' 1046) be used to fill the missing samples on the red color channel 1048, and interpolated data B' may (in combination with the interpolated data G' 1046) be used to fill the missing samples on the blue color channel 1050. Thus, as a result of the demosaicing process, each color channel (R, G, B) will have a full set of color data, which may then be used to reconstruct a full-color RGB image 1052.

[0399] A demosaicing technique that may be implemented by the logic 940 of demosaicing will be described below in accordance with one embodiment. On the green color channel, the missing color samples may be interpolated using a low-pass directional filter on the known green samples and a high-pass (or gradient) filter on the adjacent color channels (for example, red and blue). For the red and blue color channels, the missing color samples may be interpolated in a similar manner, but using low-pass filtering on the known red or blue values and high-pass filtering on co-located interpolated green values. In addition, in one embodiment, demosaicing on the green color channel may use a 5×5 pixel block edge-adaptive filter based on the original Bayer color data. As will be further described below, using an edge-adaptive filter may provide continuous weighting based on the gradients of the horizontally and vertically filtered values, which reduces the visibility of certain artifacts, such as staircase, "checkerboard" or "rainbow" artifacts, commonly seen in traditional demosaicing techniques.

[0400] During demosaicing on the green channel, the original values of the green pixels (Gr and Gb pixels) of the Bayer image pattern are used. However, to obtain a full data set for the green channel, green pixel values may be interpolated at the red and blue pixels of the Bayer image pattern. In accordance with the present invention, horizontal and vertical energy components, referred to as Eh and Ev respectively, are first computed at the red and blue pixels based on the above-mentioned 5×5 pixel block. The values of Eh and Ev may be used to obtain edge-weighted filtered values from the horizontal and vertical filtering stages, as further described below.

[0401] By way of example, Fig. 77 illustrates the computation of the Eh and Ev values for a red pixel centered in a 5×5 pixel block at position (j, i), where j corresponds to the row and i corresponds to the column. As shown, the computation of Eh considers the three middle rows (j-1, j, j+1) of the 5×5 pixel block, and the computation of Ev considers the three middle columns (i-1, i, i+1) of the 5×5 pixel block. To compute Eh, the absolute value of the sum of each of the pixels in the red columns (i-2, i, i+2), each multiplied by a corresponding coefficient (for example, -1 for columns i-2 and i+2; 2 for column i), is added to the absolute value of the sum of each of the pixels in the blue columns (i-1, i+1), each multiplied by a corresponding coefficient (for example, 1 for column i-1; -1 for column i+1). To compute Ev, the absolute value of the sum of each of the pixels in the red rows (j-2, j, j+2), each multiplied by a corresponding coefficient (for example, -1 for rows j-2 and j+2; 2 for row j), is added to the absolute value of the sum of each of the pixels in the blue rows (j-1, j+1), each multiplied by a corresponding coefficient (for example, 1 for row j-1; -1 for row j+1). These computations are illustrated by equations 76 and 77 below:

Thus, the total energy sum may be expressed as Eh + Ev. In addition, although the example shown in Fig. 77 illustrates the computation of Eh and Ev for a red center pixel at (j, i), it should be understood that the Eh and Ev values may be determined in a similar manner for blue center pixels.
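Following the description of equations 76 and 77, the Eh and Ev computation over a 5×5 block centered on a red (or blue) pixel can be sketched as shown below; the list-of-lists block representation is an assumption of the example.

```python
def energy_components(block):
    """Sketch of the Eh/Ev energy computation for a 5x5 block with the center
    pixel at block[2][2] (i.e., position (j, i))."""
    # Eh: use the three middle rows. The (-1, 2, -1) weights apply to the
    # columns of the center color (i-2, i, i+2) and (1, -1) to the columns
    # in between (i-1, i+1), per the description of equation 76.
    rows = range(1, 4)
    eh = (abs(sum(-block[r][0] + 2 * block[r][2] - block[r][4] for r in rows)) +
          abs(sum(block[r][1] - block[r][3] for r in rows)))

    # Ev: the same construction over the three middle columns (equation 77).
    cols = range(1, 4)
    ev = (abs(sum(-block[0][c] + 2 * block[2][c] - block[4][c] for c in cols)) +
          abs(sum(block[1][c] - block[3][c] for c in cols)))
    return eh, ev


blk = [[10] * 5, [12] * 5, [11] * 5, [12] * 5, [10] * 5]
print(energy_components(blk))
```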

[0402] Horizontal and vertical filtering may then be applied to the Bayer pattern to obtain the horizontally and vertically filtered values Gh and Gv, which may represent interpolated green values in the horizontal and vertical directions, respectively. The filtered values Gh and Gv may be determined using a low-pass filter on the known neighboring green samples, in addition to using directional gradients of the adjacent color (R or B) to obtain a high-frequency signal at the positions of the missing green samples. For example, with reference to Fig. 78, an example of horizontal interpolation for determining Gh will now be illustrated.

[0403] As shown in Fig. 78, five horizontal pixels (R0, G1, R2, G3 and R4) of a red line 1060 of the Bayer image, where R2 is assumed to be the center pixel at (j, i), may be considered in determining Gh. The filter coefficients associated with each of these five pixels are indicated by reference numeral 1062. Accordingly, the interpolated green value, referred to as G2', for the center pixel R2 may be determined as follows:

Various mathematical operations may then be used to produce the expressions for G2' shown below in equations 79 and 80:

Thus, with reference to Fig. 78 and equations 78-80 above, the general expression for the horizontal interpolation of the green value at (j, i) may be derived in the form:

[0404] The vertical filtering component Gv may be determined in a manner similar to Gh. For example, referring to Fig. 79, five vertical pixels (R0, G1, R2, G3 and R4) of a red column 1064 of the Bayer image and their corresponding filter coefficients 1068, where R2 is assumed to be the center pixel at (j, i), may be considered in determining Gv. Using low-pass filtering on the known green samples and high-pass filtering on the red channel in the vertical direction, the following expression may be derived for Gv:

Although the examples considered here show the interpolation of green values at a red pixel, it should be understood that the expressions given in equations 81 and 82 may also be used for the horizontal and vertical interpolation of green values at blue pixels.

[0405] The final interpolated green value G' for the center pixel (j, i) may be determined by weighting the horizontal and vertical filter outputs (Gh and Gv) with the energy components (Eh and Ev) discussed above, to obtain the following equation:

As discussed above, the energy components Eh and Ev may provide edge-adaptive weighting of the horizontal and vertical filter outputs Gh and Gv, which may help to reduce image artifacts, such as rainbow, staircase or checkerboard artifacts, in the reconstructed RGB image. Additionally, the logic 940 of demosaicing may provide an option to bypass the edge-adaptive weighting feature by setting the Eh and Ev values each equal to 1, so that Gh and Gv are weighted equally.
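Since equations 78-83 are not reproduced above, the sketch below uses a commonly used low-pass/high-pass construction (the average of the two neighboring green samples plus a gradient term from the center color channel) as an assumed stand-in for the missing filter coefficients; only the edge-adaptive weighting by Eh and Ev follows directly from the description of equation 83.

```python
def interpolate_green(h_pixels, v_pixels, eh, ev):
    """Sketch of green interpolation at a red/blue pixel.

    h_pixels, v_pixels : five-pixel horizontal and vertical neighborhoods
                         (e.g. R0, G1, R2, G3, R4) with the center at index 2.
    eh, ev             : horizontal and vertical energy components.
    """
    def directional_estimate(p):
        # Low-pass term on the known green samples plus a high-frequency
        # (gradient) term from the center color channel (assumed coefficients).
        return (p[1] + p[3]) / 2.0 + (2 * p[2] - p[0] - p[4]) / 4.0

    gh = directional_estimate(h_pixels)
    gv = directional_estimate(v_pixels)

    # Edge-adaptive weighting: the output filtered along the direction with
    # the higher energy receives the smaller weight.
    total = eh + ev
    if total == 0:
        return (gh + gv) / 2.0
    return (ev / total) * gh + (eh / total) * gv


# Horizontal and vertical neighborhoods of a red pixel (illustrative values).
print(interpolate_green([110, 60, 120, 64, 114], [108, 58, 120, 66, 116], eh=40, ev=10))
```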

[0406] In one embodiment, the horizontal and vertical weighting coefficients shown in equation 83 above may be quantized to reduce the precision of the weighting coefficients to a set of "coarse" values. For example, in one embodiment, the weighting coefficients may be quantized to eight possible weight ratios: 1/8, 2/8, 3/8, 4/8, 5/8, 6/8, 7/8 and 8/8. Other embodiments may quantize the weighting coefficients to 16 values (for example, 1/16 to 16/16), 32 values (1/32 to 32/32), etc. Obviously, compared with using full-precision values (for example, 32-bit floating point values), quantizing the weighting coefficients may reduce implementation complexity when determining and applying the weighting coefficients to the horizontal and vertical filter outputs.

[0407] In further embodiments of the invention described here, in addition to determining and using the horizontal and vertical energy components to apply weighting coefficients to the horizontally (Gh) and vertically (Gv) filtered values, energy components in the diagonal-positive and diagonal-negative directions may also be determined and used. For example, in such embodiments, filtering may also be applied in the diagonal-positive and diagonal-negative directions. Weighting the filter outputs may include selecting the two highest energy components and using the selected energy components to weight their respective filter outputs. For example, assuming that the two highest energy components correspond to the vertical and diagonal-positive directions, the vertical and diagonal-positive energy components are used to weight the vertical and diagonal-positive filter outputs to determine the interpolated green value (for example, at a red or blue pixel position in the Bayer pattern).

[0408] Demosaicing on the red and blue color channels may then be performed by interpolating red and blue values at the green pixels of the Bayer image pattern, interpolating red values at the blue pixels of the Bayer image pattern, and interpolating blue values at the red pixels of the Bayer image pattern. In accordance with the techniques discussed here, missing red and blue pixel values may be interpolated using low-pass filtering based on known neighboring red and blue pixels and high-pass filtering based on co-located green pixel values, which may be original or interpolated values (from the green channel demosaicing process described above) depending on the position of the current pixel. Thus, for such embodiments, it should be understood that the interpolation of missing green values may be performed first, so that a complete set of green values (original and interpolated values) is available for interpolating the missing red and blue samples.

[0409] The interpolation of red and blue pixel values may be described with reference to Fig. 80, which illustrates various 3×3 blocks of the Bayer image pattern to which red and blue demosaicing may be applied, together with the interpolated green values (denoted G') that may have been obtained during demosaicing on the green channel. First, with respect to block 1070, the interpolated red value, R'11, for the Gr pixel (G11) may be determined as follows:

where G'10 and G'12 are interpolated green values, as indicated by reference numeral 1078. Similarly, the interpolated blue value, B'11, for the Gr pixel (G11) may be determined as follows:

where G'01 and G'21 represent interpolated green values (1078).

[0410] Next, with respect to the pixel block 1072, in which the center pixel is a Gb pixel (G11), the interpolated red value, R'11, and blue value, B'11, may be determined as shown below in equations 86 and 87:

[0411] In addition, with respect to the pixel block 1074, the interpolation of a red value at the blue pixel B11 may be determined as follows:

where G'00, G'02, G'11, G'20 and G'22 are interpolated green values, as indicated by reference numeral 1080. Finally, the interpolation of a blue value at a red pixel, as shown for pixel block 1076, may be computed as follows:

[0412] Although the embodiment described above relied on color differences (for example, gradients) for determining the interpolated red and blue values, another embodiment may provide interpolated red and blue values using color ratios. For example, the interpolated green values (blocks 1078 and 1080) may be used to obtain a color ratio at the red and blue pixel positions of the Bayer image pattern, and linear interpolation of the ratios may be used to determine an interpolated color ratio for the missing color sample. The green value, which may be an interpolated or an original value, may be multiplied by the interpolated color ratio to obtain the final interpolated color value. For example, interpolation of the red and blue pixel values using color ratios may be carried out in accordance with the following formulas, in which equations 90 and 91 show the interpolation of red and blue values at a Gr pixel, equations 92 and 93 show the interpolation of red and blue values at a Gb pixel, equation 94 shows the interpolation of a red value at a blue pixel, and equation 95 shows the interpolation of a blue value at a red pixel:
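As the bodies of equations 84-95 are not reproduced above, the sketch below only illustrates the contrast between the color-difference and color-ratio approaches at a Gr pixel, with assumed coefficients; it is an illustration of the two approaches described in the text, not the exact formulas.

```python
def interpolate_red_at_gr(r_left, r_right, g_center, g_left_interp, g_right_interp,
                          use_ratios=False):
    """Sketch of red interpolation at a Gr pixel (cf. block 1070).

    r_left, r_right              : the red neighbors to the left and right
    g_center                     : original green value at the current pixel
    g_left_interp, g_right_interp: interpolated green values co-located with
                                   the red neighbors
    """
    if not use_ratios:
        # Color-difference form: low-pass on the known red neighbors plus a
        # high-frequency term from the (interpolated) green values
        # (assumed coefficients).
        return (r_left + r_right) / 2.0 + (2 * g_center - g_left_interp - g_right_interp) / 2.0
    # Color-ratio form: interpolate the red/green ratios of the neighbors and
    # multiply by the green value at the current position.
    ratio = (r_left / g_left_interp + r_right / g_right_interp) / 2.0
    return g_center * ratio


print(interpolate_red_at_gr(120, 128, 200, 198, 204))
print(interpolate_red_at_gr(120, 128, 200, 198, 204, use_ratios=True))
```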

(R'11 interpolated when G11 is a Gr pixel)

(B'11 interpolated when G11 is a Gr pixel)

(R'11 interpolated when G11 is a Gb pixel)

(B'11 interpolated when G11 is a Gb pixel)

(R'11 interpolated at the blue pixel B11)

(B'11 interpolated at the red pixel R11)

[0413] After interpolating the missing color samples for each image pixel of the Bayer image pattern, the full set of color values for each of the red, green and blue color channels (for example, 1046, 1048 and 1050 in Fig. 76) may be combined to produce a full-color RGB image. For example, referring back to Fig. 49 and 50, the output signal 910 of the logic 900 of primary pixel processing may be an RGB image signal in 8-, 10-, 12- or 14-bit formats.

[0414] Fig. 81-84 illustrate various flowcharts of operations illustrating processes for demosaicing a raw Bayer image pattern in accordance with the disclosed embodiments. In particular, the process 1082 of Fig. 81 depicts the determination of which color components are to be interpolated for a given input pixel P. Based on the determination by the process 1082, one or more of the process 1100 (Fig. 82) for interpolating green values, the process 1112 (Fig. 83) for interpolating red values and the process 1124 (Fig. 84) for interpolating blue values may be performed (for example, by the logic 940 of demosaicing).

[0415] Referring to Fig. 81, the process 1082 begins at step 1084 with the receipt of an input pixel P. Decision logic 1086 determines the color of the input pixel. For example, this may depend on the position of the pixel within the Bayer image pattern. Accordingly, if P is identified as a green pixel (for example, Gr or Gb), the process 1082 proceeds to step 1088 to obtain interpolated red and blue values for P. This may include, for example, proceeding to the processes 1112 and 1124 of Fig. 83 and 84, respectively. If P is identified as a red pixel, the process 1082 proceeds to step 1090 to obtain interpolated green and blue values for P. This may additionally include performing the processes 1100 and 1124 of Fig. 82 and 84, respectively. Additionally, if P is identified as a blue pixel, the process 1082 proceeds to step 1092 to obtain interpolated green and red values for P. This may additionally include performing the processes 1100 and 1112 of Fig. 82 and 83, respectively. Each of the processes 1100, 1112 and 1124 is further described below.

[0416] The process 1100 for determining an interpolated green value for the input pixel P is shown in Fig. 82 and includes steps 1102-1110. At step 1102, the input pixel P is received (for example, from the process 1082). Then, at step 1104, a set of neighboring pixels forming a 5×5 pixel block is identified, with P being the center of the 5×5 block. Thereafter, the pixel block is analyzed at step 1106 to determine the horizontal and vertical energy components. For example, the horizontal and vertical energy components may be determined in accordance with equations 76 and 77 for computing Eh and Ev, respectively. As discussed, the energy components Eh and Ev may be used as weighting coefficients to provide edge-adaptive filtering and thus reduce the visibility of certain demosaicing artifacts in the final image. At step 1108, low-pass filtering and high-pass filtering are applied in the horizontal and vertical directions to determine the horizontal and vertical filtering outputs. For example, the horizontal and vertical filtering outputs, Gh and Gv, may be computed in accordance with equations 81 and 82. The process 1100 then proceeds to step 1110, at which the interpolated green value G' is computed based on the Gh and Gv values weighted by the energy components Eh and Ev, as shown in equation 83.

[0417] Turning to the process 1112 of Fig. 83, the interpolation of red values may begin at step 1114, at which the input pixel P is received (for example, from the process 1082). At step 1116, a set of neighboring pixels forming a 3×3 pixel block is identified, with P being the center of the 3×3 block. After that, low-pass filtering is applied to the neighboring red pixels within the 3×3 block at step 1118, and high-pass filtering is applied (step 1120) to the co-located neighboring green values, which may be the original green values captured by the Bayer image sensor or interpolated values (for example, determined by the process 1100 of Fig. 82). The interpolated red value R' for P may be determined based on the low-pass and high-pass filtering outputs, as shown at step 1122. Depending on the color of P, R' may be determined in accordance with one of equations 84, 86 or 88.

[0418] With regard to the interpolation of blue values, the process 1124 of Fig. 84 may be applied. Steps 1126 and 1128 are, in general, identical to steps 1114 and 1116 of the process 1112 (Fig. 83). At step 1130, low-pass filtering is applied to the neighboring blue pixels within the 3×3 block and, at step 1132, high-pass filtering is applied to the co-located neighboring green values, which may be the original green values captured by the Bayer image sensor or interpolated values (for example, determined by the process 1100 of Fig. 82). The interpolated blue value B' for P may be determined based on the low-pass and high-pass filtering outputs, as shown at step 1134. Depending on the color of P, B' may be determined in accordance with one of equations 85, 87 or 89. In addition, as mentioned above, the interpolation of red and blue may be determined using color differences (equations 84-89) or color ratios (equations 90-95). Again, it should be understood that the interpolation of the missing green values may be performed first, so that a complete set of green values (original and interpolated) is available when interpolating the missing red and blue samples. For example, the process 1100 of Fig. 82 may be applied to interpolate all missing green color samples before performing the processes 1112 and 1124 of Fig. 83 and 84, respectively.
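
Equations 84-89 are not reproduced here, so the following sketch only illustrates the color-difference idea described above: the low-pass output comes from the neighboring red (or blue) samples, the high-pass output from the co-located green values, and the two are summed to form the interpolated value. The neighbor indexing is hypothetical and assumes a Gr center pixel with red neighbors to its left and right.

```python
import numpy as np

def interp_red_at_green(block: np.ndarray, green: np.ndarray) -> float:
    """Color-difference interpolation of red at a green pixel (assumed form).

    block -- 3x3 Bayer neighborhood centered on the green pixel P
    green -- 3x3 block of original/interpolated green values for the same
             positions (e.g., produced by process 1100)
    """
    lowpass = 0.5 * (block[1, 0] + block[1, 2])                 # neighboring reds
    highpass = green[1, 1] - 0.5 * (green[1, 0] + green[1, 2])  # green detail term
    return lowpass + highpass
```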

[0419] Fig. 85-88 provide examples of color drawings of images processed by the primary pixel processing logic 900 in the ISP pipeline 82. Fig. 85 depicts an original image scene 1140 that may be captured by the image sensor 90 of the imaging device 30. Fig. 86 shows a primary Bayer image 1142, which may represent the primary pixel data captured by the image sensor 90. As mentioned above, traditional demosaicing methods may not provide adaptive filtering based on the detection of edges (for example, boundaries between regions of two or more colors) in the image data, which may, undesirably, produce artifacts in the resulting reconstructed full-color RGB image. For example, Fig. 87 shows an RGB image 1144 reconstructed using traditional demosaicing methods, which may include artifacts such as "checkerboard" artifacts 1146 at the edge 1148. However, comparing the image 1144 with the RGB image 1150 of Fig. 88, which may serve as an example of an image reconstructed using the demosaicing methods described above, it can be seen that the checkerboard artifacts 1146 present in Fig. 87 are absent, or at least significantly less noticeable, at the edge 1148. Thus, the images shown in Fig. 85-88 are intended to illustrate at least one advantage of the demosaicing methods disclosed here over traditional methods.

[0420] Having fully described the operation of the primary pixel processing logic 900, which can output the RGB image signal 910, we now return to Fig. 67 and focus on the processing of the RGB image signal 910 by the RGB processing logic 902. As shown, the RGB image signal 910 may be sent to the selection logic 914 and/or to the memory 108. The RGB processing logic 902 may receive an input signal 916, which may be RGB image data from the signal 910 or from the memory 108, as indicated by the signal 912, depending on the configuration of the selection logic 914. The RGB image data 916 may be processed by the RGB processing logic 902 to perform color adjustment operations, including color correction (for example, using a color correction matrix), application of color gains for automatic white balancing, global tone mapping, and so on.

[0421] A block diagram depicting a more detailed view of an embodiment of the RGB processing logic 902 is shown in Fig. 89. As shown, the RGB processing logic 902 includes gain, offset and clamp (GOC) logic 1160, RGB color correction logic 1162, GOC logic 1164, RGB gamma adjustment logic 1166 and color space conversion logic 1168. The input signal 916 is first received by the gain, offset and clamp (GOC) logic 1160. In the illustrated embodiment, the GOC logic 1160 may apply gains for automatic white balancing to one or more of the R, G or B color channels before processing by the color correction logic 1162.

[0422] The GOC logic 1160 may be similar to the GOC logic 930 of the primary pixel processing logic 900, except that the color components of the RGB domain are processed rather than the R, B, Gr and Gb components of the Bayer image data. In operation, the input value of the current pixel is first offset by a signed value O[c] and multiplied by a gain G[c], as shown in equation 11 above, where c represents R, G and B. As discussed above, the gain G[c] may be a 16-bit unsigned number with 2 integer bits and 14 fractional bits (for example, a 2.14 fixed-point representation), and the value of the gain G[c] may be determined previously during statistical processing (for example, in the ISP front-end block 80). The computed pixel value Y (equation 11) is then clipped to a minimum and a maximum range in accordance with equation 12. As discussed above, the variables min[c] and max[c] may represent signed 16-bit clipping values for the minimum and maximum output values, respectively. In one embodiment, the GOC logic 1160 may also be configured to maintain, for each of the R, G and B color components, a count of the number of pixels that were clipped above the maximum and below the minimum, respectively.
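
A minimal sketch of the gain, offset and clamp operation as described for equations 11 and 12 (which are not reproduced in this passage); the 2.14 gain and 16-bit clip values are treated here as plain numbers, and the per-channel clip counters are an assumed bookkeeping detail.

```python
def apply_goc(x: int, offset: int, gain: float, lo: int, hi: int,
              counters: dict) -> int:
    """Offset, gain and clamp one pixel value for channel c (equations 11-12)."""
    y = (x + offset) * gain          # equation 11: Y = (X + O[c]) * G[c]
    if y < lo:
        counters["below_min"] += 1   # track how many pixels were clipped low
        y = lo
    elif y > hi:
        counters["above_max"] += 1   # track how many pixels were clipped high
        y = hi
    return int(round(y))

counters = {"below_min": 0, "above_max": 0}
print(apply_goc(512, -16, 1.25, 0, 1023, counters))   # -> 620
```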

[0423] The output signal of the GOC logic 1160 is then forwarded to the color correction logic 1162. In accordance with the techniques described here, the color correction logic 1162 may be configured to apply color correction to the RGB image data using a color correction matrix (CCM). In one embodiment, the CCM may be a 3×3 RGB transformation matrix, although matrices of other sizes may also be used in other embodiments (for example, 4×3, etc.). Accordingly, the process of performing color correction on an input pixel having R, G and B components may be expressed as follows:

where R, G and B represent the current red, green and blue values of the input pixel, CCM00-CCM22 represent the coefficients of the color correction matrix, and R', G' and B' represent the corrected red, green and blue values of the input pixel. Accordingly, the corrected color values may be calculated in accordance with the following equations 97-99:

[0424] The coefficients (CCM00-CCM22) of the CCM may be determined during statistical processing in the ISP front-end block 80, as discussed above. In one embodiment, the coefficients for a given color channel may be chosen so that the sum of those coefficients (for example, CCM00, CCM01 and CCM02 for red correction) equals 1, which may help to maintain brightness and color balance. In addition, the coefficients are generally chosen so that a positive gain is applied to the color being corrected. For example, when correcting red, the coefficient CCM00 may be greater than 1, whereas one or both of the coefficients CCM01 and CCM02 may be less than 1. Setting the coefficients in this way may boost the red (R) component of the resulting corrected value R' while attenuating some of the blue (B) and green (G) components. As will be appreciated, this can help address color-overlap problems that may occur when acquiring the original Bayer image, since a portion of the filtered light for a pixel of a particular color may "bleed" into a neighboring pixel of a different color. In one embodiment, the CCM coefficients may be provided as 16-bit two's complement numbers with 4 integer bits and 12 fractional bits (that is, a 4.12 fixed-point representation). Additionally, the color correction logic 1162 may clip the corrected color values if they exceed the maximum value or fall below the minimum value.
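
Equations 96-99 are not reproduced above, so the sketch below simply applies a 3×3 color correction matrix to an RGB pixel and clips the result; the example matrix values are hypothetical and are only chosen to satisfy the row-sum-of-one property mentioned in the text.

```python
import numpy as np

def apply_ccm(rgb: np.ndarray, ccm: np.ndarray, lo: int = 0, hi: int = 16383) -> np.ndarray:
    """Color correction: [R', G', B'] = CCM @ [R, G, B], then clip (14-bit data)."""
    corrected = ccm @ rgb.astype(np.float64)
    return np.clip(np.round(corrected), lo, hi).astype(np.int32)

# Hypothetical CCM: each row sums to 1, with a gain > 1 on the channel it corrects.
ccm = np.array([[ 1.30, -0.20, -0.10],
                [-0.15,  1.35, -0.20],
                [-0.05, -0.25,  1.30]])
print(apply_ccm(np.array([8000, 7000, 6000]), ccm))
```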

[0425] The output signal of the RGB color correction logic 1162 is then directed to another GOC logic block 1164. The GOC logic 1164 may be implemented in the same way as the GOC logic 1160, and therefore a detailed description of the gain, offset and clamp functions provided will not be repeated here. In one embodiment, applying the GOC logic 1164 after the color correction may provide automatic white balancing of the image data based on the corrected color values, and may also compensate for sensor variations in the red-to-green and blue-to-green ratios.

[0426] The output signal of the GOC logic 1164 is then passed to the RGB gamma adjustment logic 1166 for further processing. For example, the RGB gamma adjustment logic 1166 may provide gamma correction, color mapping, histogram matching, and so on. In accordance with the disclosed embodiment, the gamma adjustment logic 1166 may map input RGB values to corresponding output RGB values. For example, the gamma adjustment logic may provide a set of three lookup tables, one table for each of the R, G and B components. By way of example, each lookup table may be configured to store 256 entries of 10-bit values, each value representing an output level. The table entries may be evenly distributed over the range of input pixel values, so that when an input value falls between two entries, the output value is linearly interpolated. In one embodiment, each of the three lookup tables for R, G and B may be duplicated, so that the lookup tables are double-buffered in memory and one table can be used during processing while its duplicate is being updated. Based on the 10-bit output values discussed above, it should be noted that the 14-bit RGB image signal is effectively downsampled to 10 bits as a result of the gamma correction process in the present embodiment.
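
A sketch of the 256-entry lookup with linear interpolation between evenly spaced entries, assuming 14-bit inputs mapped to 10-bit outputs as described above; the gamma curve used to populate the table is an arbitrary example, not the programmed table contents.

```python
import numpy as np

IN_MAX, OUT_MAX, ENTRIES = 16383, 1023, 256       # 14-bit in, 10-bit out
STEP = IN_MAX / (ENTRIES - 1)                     # spacing between table inputs

# Hypothetical table contents: a simple 1/2.2 gamma curve quantized to 10 bits.
lut = np.round(OUT_MAX * np.linspace(0.0, 1.0, ENTRIES) ** (1 / 2.2)).astype(int)

def gamma_lookup(x: int) -> int:
    """Map a 14-bit input to a 10-bit output, interpolating between entries."""
    i = min(int(x / STEP), ENTRIES - 2)           # entry at or below x
    frac = (x - i * STEP) / STEP                  # position between entries i, i+1
    return int(round(lut[i] + frac * (lut[i + 1] - lut[i])))

print(gamma_lookup(0), gamma_lookup(8192), gamma_lookup(IN_MAX))
```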

[0427] The output signal of the gamma adjustment logic 1166 may be sent to the memory 108 and/or to the color space conversion logic 1168. The color space conversion (CSC) logic 1168 may be configured to convert the RGB output of the gamma adjustment logic 1166 to the YCbCr format, where Y represents the luminance component, Cb represents the blue-difference chroma component and Cr represents the red-difference chroma component, each of which may be in a 10-bit format following the conversion of the RGB data bit depth from 14 bits to 10 bits during the gamma adjustment operation. As discussed above, in one embodiment, the RGB output of the gamma adjustment logic 1166 may be downsampled to 10 bits and thus converted into 10-bit YCbCr values by the CSC logic 1168, which may then be forwarded to the YCbCr processing logic 904, discussed further below.

[0428] The conversion from the RGB domain to the YCbCr color space may be performed using a color space conversion matrix (CSCM). For example, in one embodiment, the CSCM may be a 3×3 transformation matrix. The CSCM coefficients may be set in accordance with known conversion equations, for example the BT.601 and BT.709 standards. Additionally, the CSCM coefficients may be set flexibly based on the desired ranges of the input and output signals. Thus, in some embodiments, the CSCM coefficients may be determined and programmed based on data collected during statistical processing in the ISP front-end block 80.

[0429] The process of converting an RGB input pixel to the YCbCr color space may be expressed as follows:

where R, G and B represent the current red, green and blue values of the input pixel in 10-bit form (for example, as processed by the gamma adjustment logic 1166), CSCM00-CSCM22 represent the coefficients of the color space conversion matrix, and Y, Cb and Cr represent the resulting luminance and chrominance components of the input pixel. Accordingly, the values of Y, Cb and Cr may be calculated in accordance with the following equations 101-103:

After the color space conversion, the resulting YCbCr values may be output from the CSC logic 1168 as the signal 918, which may be processed by the YCbCr processing logic 904, as discussed below.

[0430] In one embodiment, the CSCM coefficients may be 16-bit two's complement numbers with 4 integer bits and 12 fractional bits (4.12). In another embodiment, the CSC logic 1168 may be further configured to apply an offset to each of the Y, Cb and Cr values and to clip the resulting values to minimum and maximum values. Solely by way of example, assuming the YCbCr values are in 10-bit form, the offset may be in the range from -512 to 512, and the minimum and maximum values may be 0 and 1023, respectively.
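
Equations 100-103 are not reproduced above, so the sketch below uses the BT.601 full-range coefficients, one of the standard choices named in the text, together with the offset and 10-bit clipping behavior described for one embodiment; the specific matrix and offsets are assumptions.

```python
import numpy as np

# BT.601-style color space conversion matrix (one standard option named in the text).
CSCM = np.array([[ 0.299,     0.587,     0.114],
                 [-0.168736, -0.331264,  0.5],
                 [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb: np.ndarray, offsets=(0, 512, 512), lo=0, hi=1023) -> np.ndarray:
    """Convert a 10-bit RGB pixel to 10-bit YCbCr: matrix multiply, offset, clip."""
    ycc = CSCM @ rgb.astype(np.float64) + np.asarray(offsets)
    return np.clip(np.round(ycc), lo, hi).astype(int)

print(rgb_to_ycbcr(np.array([700, 500, 300])))
```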

[0431] Returning to the block diagram of the ISP pipeline logic 82 in Fig. 67, the YCbCr signal 918 may be sent to the selection logic 922 and/or to the memory 108. The YCbCr processing logic 904 may receive an input signal 924, which may be YCbCr image data from the signal 918 or from the memory 108, as shown by the signal 920, depending on the configuration of the selection logic 922. The YCbCr image data 924 may then be processed by the YCbCr processing logic 904 for luminance sharpening, chroma suppression, chroma noise reduction, as well as brightness, contrast and color adjustments, and so on. In addition, the YCbCr processing logic 904 may provide gamma mapping and scaling of the processed image data in the horizontal and vertical directions.

[0432] A block diagram depicting a more detailed view of an embodiment of the YCbCr processing logic 904 is presented in Fig. 90. As shown, the YCbCr processing logic 904 includes image sharpening logic 1170, logic 1172 for adjusting brightness, contrast and/or color, YCbCr gamma adjustment logic 1174, chroma decimation logic 1176 and scaling logic 1178. The YCbCr processing logic 904 may be configured to process pixel data in the 4:4:4, 4:2:2 or 4:2:0 formats using a 1-plane, 2-plane or 3-plane memory configuration. In addition, in one embodiment, the YCbCr input signal 924 may provide luminance and chrominance information as 10-bit values.

[0433] As will be appreciated, the terms 1-plane, 2-plane and 3-plane refer to the number of image planes used in picture memory. For example, in a 3-plane format, each of the Y, Cb and Cr components may use a separate, respective memory plane. In a 2-plane format, a first plane may be provided for the luminance component (Y), and a second plane, which interleaves the Cb and Cr samples, may be provided for the chrominance components (Cb and Cr). In a 1-plane format, a single plane in memory interleaves the luminance and chrominance samples. In addition, with regard to the 4:4:4, 4:2:2 and 4:2:0 formats, it will be appreciated that the 4:4:4 format refers to a sampling format in which each of the three YCbCr components is sampled at the same rate. In the 4:2:2 format, the chrominance components Cb and Cr are subsampled at half the sampling rate of the luminance component Y, so that the resolution of the chrominance components Cb and Cr is halved in the horizontal direction. Similarly, the 4:2:0 format subsamples the chrominance components Cb and Cr in both the vertical and horizontal directions.

[0434] YCbCr processing may be performed within an active source region defined in a source buffer, where the active source region contains valid pixel data. For example, Fig. 91 illustrates a source buffer 1180 within which an active source region 1182 is defined. In the illustrated example, the source buffer may represent a 4:4:4 1-plane format in which the source pixels are 10-bit values. The active source region 1182 may be specified individually for the luminance (Y) samples and the chrominance (Cb and Cr) samples. Thus, it should be understood that the active source region 1182 may actually include multiple active source regions for the luminance and chrominance samples. The start of the active source regions 1182 for luminance and chrominance may be determined based on offsets from a base address (0,0) 1184 of the source buffer. For example, a starting position (Lm_X, Lm_Y) 1186 of the luminance active source region may be specified by an x offset 1190 and a y offset 1194 relative to the base address 1184. Similarly, a starting position (Ch_X, Ch_Y) 1188 of the chrominance active source region may be specified by an x offset 1192 and a y offset 1196 relative to the base address 1184. It should be noted that, in the present example, the y offsets 1194 and 1196 for luminance and chrominance, respectively, may be equal. Based on the starting position 1186, the luminance active source region may be defined by a width 1193 and a height 1200, each of which may represent the number of luminance samples in the x and y directions, respectively. Additionally, based on the starting position 1188, the chrominance active source region may be defined by a width 1202 and a height 1204, each of which may represent the number of chrominance samples in the x and y directions, respectively.

[0435] Fig. 92 additionally provides an example demonstrating how active source regions for luminance and chrominance samples may be defined in a two-plane format. For example, as shown, the luminance active source region 1182 may be defined in a first source buffer 1180 (having the base address 1184) as the area specified by the width 1193 and the height 1200 relative to the starting position 1186. The chrominance active source region 1208 may be defined in a second source buffer 1206 (having the base address 1184) as the area specified by the width 1202 and the height 1204 relative to the starting position 1188.

[0436] Accordingly, referring back to Fig. 90, the YCbCr signal 924 is first received by the image sharpening logic 1170. The image sharpening logic 1170 may be configured to perform image sharpening and edge enhancement processing to increase texture and edge detail in the image. As will be appreciated, image sharpening may improve the perceived resolution of the image. In general, however, it is desirable that existing noise in the image is not detected as texture and/or edges and, thus, is not amplified during the sharpening process.

[0437] In accordance with the present techniques, the image sharpening logic 1170 may sharpen the image using a multi-scale unsharp mask filter on the luminance (Y) component of the YCbCr signal. In one embodiment, two or more low-pass Gaussian filters of different scale may be provided. For example, in an embodiment that provides two Gaussian filters, the output (for example, Gaussian blur) of a first Gaussian filter having a first radius (x) is subtracted from the output of a second Gaussian filter having a second radius (y), where x is greater than y, to generate an unsharp mask. Additional unsharp masks may also be obtained by subtracting the Gaussian filter outputs from the input signal Y. In some embodiments, the method may also include an adaptive coring threshold comparison operation performed using the unsharp masks so that, based on the result(s) of the comparison, gain amounts may be added to a base image, which may be selected as the original input image Y or as the output of one of the Gaussian filters, to generate a final output signal.

[0438] Fig. 93 illustrates a block diagram depicting exemplary logic 1210 for performing image sharpening in accordance with an embodiment of the techniques described here. The logic 1210 represents a multi-scale unsharp mask filter that may be applied to an input luminance image Yin. For example, as shown, Yin is received and processed by two low-pass Gaussian filters 1212 (G1) and 1214 (G2). In this example, the filter 1212 may be a 3×3 filter and the filter 1214 may be a 5×5 filter. However, it will be appreciated that, in additional embodiments, more than two Gaussian filters may also be used, including filters of different scales (for example, 7×7, 9×9, and so on). As will be appreciated, due to the low-pass filtering process, the high-frequency components, which generally correspond to noise, may be removed from the outputs of G1 and G2 to produce "unsharp" (blurred) images G1out and G2out. As will be discussed below, using a blurred input image as the base image enables noise reduction as part of the sharpening filter.

[0439] The 3×3 Gaussian filter 1212 and the 5×5 Gaussian filter 1214 may be defined as shown below:

Solely by way of example, the values of the Gaussian filters G1 and G2 may be chosen in one embodiment as follows:

[0440] Based on Yin, G1out and G2out, three unsharp masks, Sharp1, Sharp2 and Sharp3, may be generated. Sharp1 may be defined as the unsharp image G2out of the Gaussian filter 1214 subtracted from the unsharp image G1out of the Gaussian filter 1212. Because Sharp1 is essentially the difference between two low-pass filters, it may be referred to as a "mid-band" mask, since the higher-frequency noise components have already been filtered out of the unsharp images G1out and G2out. Additionally, Sharp2 may be calculated by subtracting G2out from the input luminance image Yin, and Sharp3 may be calculated by subtracting G1out from the input luminance image Yin. As will be discussed below, an adaptive coring threshold scheme may be applied using the unsharp masks Sharp1, Sharp2 and Sharp3.
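
The example filter coefficients referenced above are not reproduced in this passage, so the sketch below uses ordinary normalized Gaussian kernels of the stated sizes to form G1out, G2out and the three masks; the exact kernel values used in the hardware may differ.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def unsharp_masks(yin: np.ndarray):
    """Return (G1out, G2out, Sharp1, Sharp2, Sharp3) per paragraph [0440]."""
    y = yin.astype(float)
    g1out = convolve(y, gaussian_kernel(3, 0.8), mode="nearest")   # 3x3 filter 1212
    g2out = convolve(y, gaussian_kernel(5, 1.2), mode="nearest")   # 5x5 filter 1214
    sharp1 = g1out - g2out   # mid-band mask: difference of two low-pass outputs
    sharp2 = y - g2out
    sharp3 = y - g1out
    return g1out, g2out, sharp1, sharp2, sharp3
```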

[0441] Referring to the selection logic 1216, the base image may be selected based on a control signal UnsharpSel. In the illustrated embodiment, the base image may be either the input image Yin or one of the filtered outputs G1out or G2out. As will be appreciated, when the original image has a high noise variance (for example, nearly as high as the signal variance), using the original image Yin as the base image during sharpening may not sufficiently reduce the noise components in the sharpening process. Accordingly, when a particular noise content threshold is detected in the input image, the selection logic 1216 may be adapted to select one of the low-pass filtered outputs G1out or G2out, from which high-frequency content that may include noise has been removed. In one embodiment, the value of the control signal UnsharpSel may be determined by analyzing statistical data acquired during statistical processing in the ISP front-end block 80 to determine the noise content of the image. By way of example, if the input image Yin has low noise content, so that the noise will probably not increase noticeably as a result of the sharpening process, the input image Yin may be selected as the base image (for example, UnsharpSel=0). If it is determined that the input image Yin contains a noticeable level of noise, so that the sharpening process may amplify the noise, one of the filtered images G1out or G2out may be selected (for example, UnsharpSel = 1 or 2, respectively). Thus, by using an adaptive method for selecting the base image, the logic 1210 essentially provides a noise reduction function.

[0442] Then, gains may be applied to one or more of the masks Sharp1, Sharp2 and Sharp3 in accordance with an adaptive coring threshold scheme, as described below. The unsharp values Sharp1, Sharp2 and Sharp3 may then be compared to various thresholds SharpThd1, SharpThd2 and SharpThd3 (not necessarily respectively) via the comparator blocks 1218, 1220 and 1222. For example, the Sharp1 value is always compared with SharpThd1 at the comparator block 1218. With regard to the comparator block 1220, the threshold SharpThd2 may be compared either with Sharp1 or with Sharp2, depending on the selection logic 1226. For example, the selection logic 1226 may select Sharp1 or Sharp2 depending on the state of a control signal SharpCmp2 (for example, SharpCmp2=1 selects Sharp1; SharpCmp2=0 selects Sharp2). For example, in one embodiment, the state of SharpCmp2 may be determined depending on the noise variance/content of the input image (Yin).

[0443] In the illustrated embodiment, it is generally preferable to set the SharpCmp2 and SharpCmp3 values to select Sharp1 unless the image data is detected as having a relatively low noise level. This is because Sharp1, being the difference between the outputs of the low-pass Gaussian filters G1 and G2, is generally less sensitive to noise, which reduces the degree to which the SharpAmt1, SharpAmt2 and SharpAmt3 values fluctuate due to noise variations in "noisy" image data. For example, if the original image has a high noise variance, some high-frequency components may not be caught using fixed thresholds and, thus, may be amplified during the sharpening process. Accordingly, if the noise content of the input image is high, some noise content may be present in Sharp2. In such cases, SharpCmp2 may be set to 1 to select the mid-band mask Sharp1, which, as discussed above, has reduced high-frequency content because it is the difference between two low-pass filter outputs and is therefore less sensitive to noise.

[0444] As will be appreciated, a similar process may be applied to the selection of Sharp1 or Sharp3 by the selection logic 1224 under the control of SharpCmp3. In one embodiment, SharpCmp2 and SharpCmp3 may be set to 1 by default (for example, to use Sharp1) and set to 0 only for those input images that are identified as having generally low noise variance. This essentially provides an adaptive coring threshold scheme in which the choice of the comparison value (Sharp1, Sharp2 or Sharp3) is adaptive based on the noise variance of the input image.

[0445] Based on the outputs of the comparator blocks 1218, 1220 and 1222, the sharpened output image Ysharp may be determined by applying gain-scaled unsharp masks to the base image (for example, as selected by the logic 1216). For example, at the comparator block 1222, SharpThd3 is compared with the B input provided by the selection logic 1224, which will be referred to here as "SharpAbs" and may be equal to Sharp1 or Sharp3, depending on the state of SharpCmp3. If SharpAbs is greater than the threshold SharpThd3, the gain SharpAmt3 is applied to Sharp3, and the resulting value is added to the base image. If SharpAbs is less than the threshold SharpThd3, an attenuated gain Att3 may be applied. In one embodiment, the attenuated gain Att3 may be defined as follows:

where SharpAbs is equal to Sharp1 or Sharp3, as determined by the selection logic 1224. The selection of whether the base image is summed with the full gain (SharpAmt3) or with the attenuated gain (Att3) is performed by the selection logic 1228 based on the output of the comparator block 1222. As will be appreciated, the use of an attenuated gain addresses situations in which SharpAbs does not exceed the threshold (for example, SharpThd3) but the noise variance of the image is nevertheless close to that threshold. This can make the transition between sharpened and unsharpened pixels less noticeable. For example, if, under such circumstances, the image data were passed through without the attenuated gain, the resulting pixel might appear as a defective pixel (for example, a stuck pixel).

[0446] The same process may then be applied at the comparator block 1220. For example, depending on the state of SharpCmp2, the selection logic 1226 may provide either Sharp1 or Sharp2 as the input to the comparator block 1220, which is compared with the threshold SharpThd2. Depending on the output of the comparator block 1220, either the gain SharpAmt2 or an attenuated gain Att2 based on SharpAmt2 is applied to Sharp2 and added to the output of the selection logic 1228 discussed above. As will be appreciated, the attenuated gain Att2 may be calculated in a manner similar to equation 104 above, except that the gain SharpAmt2 and the threshold SharpThd2 are applied with respect to SharpAbs, which may be selected as Sharp1 or Sharp2.

[0447] Thereafter, either the gain SharpAmt1 or an attenuated gain Att1 is applied to Sharp1, and the resulting value is summed with the output of the selection logic 1230 to produce the sharpened output pixel Ysharp (selection logic 1232). Whether the gain SharpAmt1 or the attenuated gain Att1 is applied may be determined based on the output of the comparator block 1218, which compares Sharp1 with the threshold SharpThd1. Again, the attenuated gain Att1 may be defined in a manner similar to equation 104 above, except that the gain SharpAmt1 and the threshold SharpThd1 are applied with respect to Sharp1. The resulting sharpening values, scaled using each of the three masks, are added to the input pixel Yin to generate the sharpened output Ysharp, which, in one embodiment, may be clipped to 10 bits (assuming the YCbCr processing is performed with 10-bit precision).
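
Equation 104 for the attenuated gain is not reproduced in this passage; the sketch below assumes the commonly used linear ramp Att = SharpAmt * SharpAbs / SharpThd and shows, for a single pixel, how the three masks could be combined with full or attenuated gains and added to the selected base value. The signal names follow the text, but the arithmetic, thresholds and gain amounts are assumptions.

```python
def coring_gain(sharp_abs: float, thd: float, amt: float) -> float:
    """Full gain above the threshold, otherwise a linearly attenuated gain
    (assumed form of equation 104)."""
    return amt if abs(sharp_abs) >= thd else amt * abs(sharp_abs) / thd

def sharpen_pixel(base, sharp1, sharp2, sharp3,
                  thd=(8.0, 12.0, 16.0), amt=(1.0, 0.75, 0.5),
                  cmp2_uses_sharp1=True, cmp3_uses_sharp1=True,
                  out_max=1023):
    """Combine the three unsharp masks per Fig. 93 (simplified, one pixel)."""
    g3 = coring_gain(sharp1 if cmp3_uses_sharp1 else sharp3, thd[2], amt[2])
    g2 = coring_gain(sharp1 if cmp2_uses_sharp1 else sharp2, thd[1], amt[1])
    g1 = coring_gain(sharp1, thd[0], amt[0])
    y = base + g3 * sharp3 + g2 * sharp2 + g1 * sharp1
    return min(max(int(round(y)), 0), out_max)   # clip to 10 bits

print(sharpen_pixel(base=400, sharp1=6.0, sharp2=20.0, sharp3=30.0))
```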

[0448] As will be appreciated, compared with traditional unsharp masking methods, the image sharpening methods set forth in this disclosure may provide improved enhancement of textures and edges while simultaneously reducing noise in the output image. In particular, the present techniques may be well suited to applications in which images captured using, for example, CMOS image sensors exhibit a low signal-to-noise ratio, such as images acquired in low-light conditions using lower-resolution cameras built into portable devices (for example, mobile phones). For example, when the noise variance and the signal variance are comparable, it is difficult to use a fixed threshold for sharpening, because some of the noise components would be sharpened together with the textures and edges. Accordingly, the methods provided here, as discussed above, may filter the noise out of the input image using multi-scale Gaussian filters to extract features from the blurred images (for example, G1out and G2out), thereby providing a sharpened image that also exhibits reduced noise content.

[0449] Before continuing, it should be understood that the illustrated logic 1210 is intended to provide only one exemplary embodiment of the present techniques. In other embodiments, additional or fewer features may be provided by the image sharpening logic 1170. For example, in some embodiments, instead of applying an attenuated gain, the logic 1210 may simply pass the base value through. Additionally, some embodiments may omit the selection logic blocks 1224, 1226 or 1216. For example, the comparator blocks 1220 and 1222 may simply receive the Sharp2 and Sharp3 values, respectively, instead of the outputs of the selection logic blocks 1224 and 1226, respectively. Although such embodiments may not provide sharpening and/or noise reduction features that are as robust as the implementation shown in Fig. 93, it will be appreciated that such design choices may be the result of cost and/or business constraints.

[0450] In the present embodiment, the image sharpening logic 1170 may also provide edge enhancement and chroma suppression features after obtaining the sharpened image output YSharp. Each of these additional features is discussed below. Fig. 94 illustrates exemplary logic 1234 for performing edge enhancement, which may be implemented downstream of the sharpening logic 1210 of Fig. 93, in accordance with one embodiment. As shown, the original input value Yin is processed by a Sobel filter 1236 for edge detection. The Sobel filter 1236 may determine a gradient value YEdge based on a 3×3 pixel block (denoted "A") of the original image, with Yin being the center pixel of the 3×3 block. In one embodiment, the Sobel filter 1236 may calculate YEdge by convolving the original image data to detect changes in the horizontal and vertical directions. This process is shown below in equations 105-107.

where Sx and Sy represent the matrix operators for detecting edge-strength gradients in the horizontal and vertical directions, respectively, and Gx and Gy represent gradient images containing the derivatives of horizontal and vertical change, respectively. Accordingly, the output YEdge is determined as the product of Gx and Gy.

[0451] YEdge is then received by the selection logic 1240 together with the mid-band mask Sharp1, as discussed above with reference to Fig. 93. Based on a control signal EdgeCmp, either Sharp1 or YEdge is compared with a threshold EdgeThd at the comparator block 1238. The state of EdgeCmp may be determined, for example, based on the noise content of the image, thus providing an adaptive coring threshold scheme for edge detection and enhancement. The output of the comparator block 1238 may then be provided to the selection logic 1242, and either a full gain or an attenuated gain may be applied. For example, when the selected B input to the comparator block 1238 (Sharp1 or YEdge) is above EdgeThd, YEdge is multiplied by an edge gain, EdgeAmt, to determine the amount of edge enhancement to apply. If the B input to the comparator block 1238 is less than EdgeThd, an attenuated edge gain, AttEdge, may be applied to avoid noticeable transitions between the edge-enhanced pixel and the original pixel. As will be appreciated, AttEdge may be calculated in a manner similar to that shown in equation 104 above, but with EdgeAmt and EdgeThd applied with respect to "SharpAbs", which may be Sharp1 or YEdge, depending on the output of the selection logic 1240. Thus, the edge pixel, enhanced using either the gain (EdgeAmt) or the attenuated gain (AttEdge), may be added to YSharp (the output of the logic 1210 of Fig. 93) to obtain an edge-enhanced output pixel Yout, which, in one embodiment, may be clipped to 10 bits (assuming the YCbCr processing is performed with 10-bit precision).
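
Equations 105-107 are not reproduced above, so the sketch below uses the standard Sobel operators and combines Gx and Gy as a product, as the text states; whether the hardware uses the product or another combination cannot be verified from this passage, so treat the final line as an assumption.

```python
import numpy as np

SX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])          # horizontal-change operator (standard Sobel)
SY = SX.T                            # vertical-change operator

def yedge(block3x3: np.ndarray) -> float:
    """Gradient value for the center pixel of a 3x3 luminance block."""
    gx = float(np.sum(SX * block3x3))
    gy = float(np.sum(SY * block3x3))
    return gx * gy                    # combined per the text ("product of Gx and Gy")

print(yedge(np.array([[10, 10, 10],
                      [10, 10, 50],
                      [10, 50, 50]])))
```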

[0452] With regard to the chroma suppression features provided by the image sharpening logic 1170, such features may attenuate chroma at luminance edges. In general, chroma suppression may be performed by applying a chroma gain (attenuation factor) of less than 1 depending on the values (YSharp, Yout) obtained in the luminance sharpening and/or edge enhancement steps described above. By way of example, Fig. 95 shows a graph 1250 that includes a curve 1252 representing chroma gains that may be selected for corresponding sharpened luminance values (YSharp). The data represented by the graph 1250 may be implemented as a lookup table of YSharp values and corresponding chroma gains between 0 and 1 (attenuation factors), the lookup table being used to approximate the curve 1252. For YSharp values that fall between two attenuation factors in the lookup table, linear interpolation may be applied to the two attenuation factors corresponding to the YSharp values above and below the current YSharp value. In addition, in other embodiments, the input luminance value may also be selected as one of the Sharp1, Sharp2 and Sharp3 values determined by the logic 1210, as described above with reference to Fig. 93, or as the YEdge value determined by the logic 1234, as discussed with reference to Fig. 94.

[0453] Next, the output of the image sharpening logic 1170 (Fig. 90) is processed by the brightness, contrast and color (BCC) adjustment logic 1172. A functional block diagram depicting an embodiment of the BCC adjustment logic 1172 is presented in Fig. 96. As shown, the logic 1172 includes a brightness and contrast processing block 1262, a global hue control block 1264 and a saturation control block 1266. The embodiment illustrated here provides for processing YCbCr data with 10-bit precision, although other embodiments may use different bit depths. The functions of each of the blocks 1262, 1264 and 1266 are discussed below.

[0454] In the brightness and contrast processing block 1262, an offset, YOffset, is first subtracted from the luminance (Y) data to set the black level to zero. This is done to ensure that the contrast adjustment does not alter the black level. Then, the luminance value is multiplied by a contrast gain value to apply the contrast control. By way of example, the contrast gain value may be a 12-bit unsigned number with 2 integer bits and 10 fractional bits, thereby providing a contrast gain range of up to 4 times the pixel value. After that, brightness control may be implemented by adding (or subtracting) a brightness offset value to or from the luminance data. By way of example, the brightness offset in the present embodiment may be a 10-bit two's complement value having a range from -512 to +512. In addition, it should be noted that the brightness adjustment is performed after the contrast adjustment in order to avoid changing the DC offset when the contrast is changed. After that, the initial YOffset is added back to the adjusted luminance data to restore the black level.
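
A minimal sketch of the black-level-referenced contrast-then-brightness sequence described above; the final clip to the 10-bit range is an assumed detail, and the parameter values in the example are arbitrary.

```python
def adjust_brightness_contrast(y: int, y_offset: int, contrast: float,
                               brightness: int, out_max: int = 1023) -> int:
    """Contrast is applied around the black level, brightness afterwards."""
    v = (y - y_offset) * contrast      # set black level to zero, apply contrast gain
    v = v + brightness                 # brightness offset (about -512..+512)
    v = v + y_offset                   # restore the black level
    return min(max(int(round(v)), 0), out_max)

print(adjust_brightness_contrast(y=300, y_offset=64, contrast=1.5, brightness=-20))
```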

[0455] The blocks 1264 and 1266 provide color adjustment based on hue characteristics of the Cb and Cr data. As shown, an offset of 512 (assuming 10-bit processing) is first subtracted from the Cb and Cr data to shift the range to approximately zero. The hue is then adjusted in accordance with the following equations:

where Cbadj and Cradj represent the adjusted Cb and Cr values, and θ represents a hue angle, which may be calculated as follows:

The above operations are represented within the global hue control block 1264 and may be expressed by the following matrix operation:

where Ka=cos(θ), Kb=sin(θ), and θ is defined above in equation 110.

[0456] Then, saturation control may be applied to the Cbadj and Cradj values, as shown by the saturation control block 1266. In the illustrated embodiment, saturation control is performed by applying a global saturation factor and a hue-based saturation factor to each of the Cb and Cr values. Hue-based saturation control may improve color reproduction. The hue of a color may be represented in the YCbCr color space, as shown by the color wheel graph 1270 in Fig. 97. As will be appreciated, the YCbCr hue and saturation color wheel 1270 may be derived by shifting an identical color wheel in the HSV (hue, saturation and value) color space by approximately 109 degrees. As shown, the graph 1270 includes circumferential values representing the saturation factor (S) in the range from 0 to 1, and angular values representing θ, as defined above, in the range from 0 to 360°. Each θ may represent a different color (for example, 49° = magenta, 109° = red, 229° = green, etc.). The hue of a color at a particular hue angle θ may be adjusted by selecting an appropriate saturation factor S.

[0457] Returning to Fig. 96, the hue angle θ (calculated in the global hue control block 1264) may be used as an index into a Cb saturation lookup table 1268 and a Cr saturation lookup table 1269. In one embodiment, the saturation lookup tables 1268 and 1269 may contain 256 saturation values uniformly distributed over the hue range of 0-360° (for example, the first lookup table entry is at 0° and the last entry is at 360°), and the saturation value S at a given pixel may be determined by linearly interpolating between the saturation values in the lookup table immediately below and above the current hue angle θ. The final saturation value for each of the Cb and Cr components is obtained by multiplying a global saturation value (which may be a global constant for each of Cb and Cr) by the determined hue-based saturation value. Thus, the final adjusted values Cb' and Cr' may be determined by multiplying Cbadj and Cradj by their respective final saturation values, as shown in the hue-based saturation control block 1266.
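
Equations 108-111 are not reproduced in this passage, so the sketch below uses the conventional hue rotation of the zero-centered (Cb, Cr) vector followed by the hue-indexed saturation lookup described above; the rotation form, the way the lookup hue is obtained, and the table contents are all assumptions for illustration only.

```python
import math

SAT_LUT = [1.0] * 256                  # hypothetical hue-indexed saturation table

def adjust_hue_saturation(cb: int, cr: int, hue_shift_deg: float,
                          global_sat: float = 1.0):
    """Hue rotation and hue-based saturation for one 10-bit Cb/Cr pair."""
    cb0, cr0 = cb - 512, cr - 512                        # center around zero
    th = math.radians(hue_shift_deg)
    cb_adj = cb0 * math.cos(th) + cr0 * math.sin(th)     # assumed rotation form
    cr_adj = cr0 * math.cos(th) - cb0 * math.sin(th)
    hue = math.degrees(math.atan2(cr_adj, cb_adj)) % 360.0
    idx = hue / 360.0 * (len(SAT_LUT) - 1)               # index into the table
    lo, frac = int(idx), idx - int(idx)
    hi = min(lo + 1, len(SAT_LUT) - 1)
    sat = global_sat * (SAT_LUT[lo] + frac * (SAT_LUT[hi] - SAT_LUT[lo]))
    return int(round(cb_adj * sat)) + 512, int(round(cr_adj * sat)) + 512

print(adjust_hue_saturation(cb=600, cr=400, hue_shift_deg=10.0, global_sat=0.9))
```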

[0458] Thereafter, the output of the BCC logic 1172 is passed to the YCbCr gamma adjustment logic 1174, as shown in Fig. 90. In one embodiment, the gamma adjustment logic 1174 may provide a nonlinear mapping function for the Y, Cb and Cr channels. For example, the input Y, Cb and Cr values are mapped to corresponding output values. Again, assuming the YCbCr data is processed with 10 bits, an interpolated 10-bit lookup table of 256 entries may be used. Three such lookup tables may be provided, one for each of the Y, Cb and Cr channels. Each of the 256 input entries may be evenly distributed, and the output value may be determined by linearly interpolating between the output values mapped to the indices just above and below the current input index. In some embodiments, a non-interpolated lookup table having 1024 entries (for 10-bit data) may also be used, but it may have considerably higher memory requirements. As will be appreciated, by adjusting the output values of the lookup tables, the YCbCr gamma adjustment function may also be used to implement certain image filter effects, such as black and white, sepia, negative image, solarization, and so on.

[0459] Chroma decimation may then be applied by the chroma decimation logic 1176 to the output of the gamma adjustment logic 1174. In one embodiment, the chroma decimation logic 1176 may be configured to perform horizontal decimation to convert the YCbCr data from the 4:4:4 format to the 4:2:2 format, in which the chrominance information (Cb and Cr) is subsampled to half the sampling rate of the luminance data. Solely by way of example, the decimation may be performed by applying a 7-tap low-pass filter, for example a half-band Lanczos filter, to a set of 7 horizontal pixels, as shown below:

where in(i) represents the input pixel (Cb or Cr), and C0-C6 represent the coefficients of the 7-tap filter. Each input pixel has an independent filter coefficient (C0-C6) to allow a flexible phase offset for the filtered chroma samples.
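
The filter equation and coefficients are not reproduced above, so the sketch below only shows the general shape of a 7-tap horizontal decimation from 4:4:4 to 4:2:2: every second output position is produced by weighting 7 neighboring chroma samples. The coefficients here are an arbitrary symmetric low-pass example, not the Lanczos taps used in the described embodiment.

```python
import numpy as np

# Hypothetical symmetric 7-tap low-pass coefficients (they sum to 1).
C = np.array([-0.02, 0.0, 0.27, 0.5, 0.27, 0.0, -0.02])

def decimate_chroma_row(row: np.ndarray) -> np.ndarray:
    """Horizontally decimate one Cb or Cr row from 4:4:4 to 4:2:2."""
    padded = np.pad(row.astype(float), 3, mode="edge")   # 3 samples of support each side
    out = [np.dot(C, padded[x:x + 7]) for x in range(0, row.size, 2)]
    return np.round(out).astype(int)

print(decimate_chroma_row(np.array([512, 520, 540, 560, 560, 540, 520, 512])))
```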

[0460] In addition, chroma decimation may, in some cases, also be performed without filtering. This can be useful when the source image was originally received in the 4:2:2 format but was upsampled to the 4:4:4 format for YCbCr processing. In that case, the resulting decimated 4:2:2 image is identical to the original image.

[0461] The YCbCr data output from the chroma decimation logic 1176 may then be scaled using the scaling logic 1178 before being output from the YCbCr processing block 904. The function of the scaling logic 1178 may be similar to the functionality of the scaling logic 368, 370 of the binning compensation filter 300 in the front-end pixel processing unit 130, as discussed above with reference to Fig. 28. For example, the scaling logic 1178 may perform horizontal and vertical scaling in two stages. In one embodiment, a 5-tap polyphase filter may be used for vertical scaling, and a 9-tap polyphase filter may be used for horizontal scaling. The multi-tap polyphase filters may multiply pixels selected from the source image by weighting factors (for example, filter coefficients) and then sum the outputs to form the destination pixel. The selected pixels may be chosen depending on the position of the current pixel and the number of filter taps. For example, with a vertical 5-tap filter, two neighboring pixels on each vertical side of the current pixel may be selected, and with a horizontal 9-tap filter, four neighboring pixels on each horizontal side of the current pixel may be selected. The filter coefficients may be provided from a lookup table and may be determined by the current inter-pixel fractional position. The output 926 of the scaling logic 1178 is then output from the YCbCr processing block 904.
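
A sketch of one dimension of the polyphase resampling described above, assuming a 5-tap vertical filter whose coefficients are looked up by the fractional position between source pixels; the phase count and the coefficient table (a simple linear-interpolation kernel spread over 5 taps) are purely illustrative.

```python
import numpy as np

PHASES = 8                                   # hypothetical number of phases
# Hypothetical 5-tap table: each row implements simple linear interpolation
# between the center tap and its lower neighbor for that fractional phase.
COEFF = np.array([[0.0, 0.0, 1.0 - p / PHASES, p / PHASES, 0.0]
                  for p in range(PHASES)])

def scale_column(col: np.ndarray, out_len: int) -> np.ndarray:
    """Vertically resample one column with a 5-tap polyphase filter."""
    step = col.size / out_len                # source step per destination pixel
    padded = np.pad(col.astype(float), 2, mode="edge")
    out = np.empty(out_len)
    for i in range(out_len):
        pos = i * step
        center = int(pos)                    # current source pixel
        phase = int((pos - center) * PHASES) # fractional position -> table row
        taps = padded[center:center + 5]     # 2 neighbors on each side of center
        out[i] = np.dot(COEFF[phase], taps)
    return np.round(out).astype(int)

print(scale_column(np.array([0, 100, 200, 300, 400, 500]), out_len=4))
```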

[0462] Returning to Fig. 67, the processed output signal 926 may be sent to the memory 108, or may be output from the ISP pipeline logic 82 as the image signal 114 to display hardware (for example, the display 28) for viewing by a user, or to a compression engine (for example, the encoder 118). In some embodiments, the image signal 114 may additionally be processed by a graphics processing unit and/or a compression engine and stored before being decompressed and provided to the display. Additionally, one or more frame buffers may also be provided to control the buffering of the image data displayed on the display, particularly for video image data.

[0463] As will be appreciated, the various image processing methods described above, relating in particular to defective pixel detection and correction, lens shading correction, demosaicing and image sharpening, are provided here solely by way of example. Accordingly, it should be understood that the present disclosure should not be construed as being limited to only the examples provided above. Indeed, the exemplary logic described here may be subject to a number of variations and/or additional features in other embodiments. Furthermore, it will be appreciated that the methods discussed above may be implemented in any suitable manner. For example, the components of the image processing circuitry 32 and, in particular, the ISP front-end block 80 and the ISP pipeline block 82, may be implemented using hardware (for example, suitably configured circuitry), software (for example, a computer program including executable code stored on one or more tangible machine-readable media), or using a combination of hardware and software elements.

[0464] The specific embodiments described above have been presented by way of example, and it should be understood that these embodiments admit various modifications and alternative forms. It should further be understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents and alternatives falling within the spirit and scope of this disclosure.

1. An image signal processing system, comprising:
a front-end pixel processing unit configured to receive a primary frame of image data containing pixels acquired using an imaging device having a digital image sensor, wherein the front-end pixel processing unit comprises a statistics engine with auto-focus statistics logic configured to process the primary image data to collect coarse and fine auto-focus statistics, and
control logic configured to determine an optimal focus position of a lens of the imaging device using coarse and fine auto-focus scores based on the coarse and fine auto-focus statistics, respectively, by adjusting the focus position of the lens between a minimum position and a maximum position defining a full focal length in order to reach the optimal focus position.

2. The image signal processing system of claim 1, wherein the control logic is configured to determine the optimal focus position of the lens by:
stepping the focus position through a plurality of coarse-score positions along the full focal length in a first direction, starting from the minimum position and ending at the maximum position,
determining a coarse auto-focus score for each of the plurality of coarse-score positions,
identifying which of the plurality of coarse-score positions has a corresponding coarse auto-focus score that decreases relative to the coarse auto-focus score corresponding to the immediately preceding coarse-score position,
stepping the focus position, starting from the identified coarse-score position, in a second direction, opposite to the first direction and back toward the minimum position, through a plurality of fine-score positions,
determining a fine auto-focus score for each of the plurality of fine-score positions, and
identifying which of the plurality of fine-score positions corresponds to a peak in the fine auto-focus scores, and setting the identified fine-score position as the optimal focus position.

3. The image signal processing system of claim 2, wherein the step size between each of the plurality of coarse-score positions is greater than the step size between each of the plurality of fine-score positions.

4. The image signal processing system of claim 2, wherein the step size between each of the coarse-score positions may be varied based, at least in part, on the magnitude of change between the coarse auto-focus scores corresponding to adjacent coarse-score positions.

5. The image signal processing system of claim 4, wherein the step size between the coarse-score positions decreases as the magnitude of change between the coarse auto-focus scores corresponding to the coarse-score positions decreases.

6. The image signal processing system of claim 2, wherein the control logic adjusts the focus position of the lens using a coil, and the control logic steps through the coarse-score positions along the full focal length in a manner that accounts for the settling-time effects of the coil.

7. The image signal processing system of claim 1, wherein the auto-focus statistics logic is configured to provide the coarse auto-focus statistics by applying first and second filters to at least one of camera luminance values derived from the primary image data after decimation or the decimated primary image data, and to provide the fine auto-focus statistics either by applying third and fourth filters to luminance values obtained by applying a transformation to the primary image data, or by applying horizontal filtering to the primary image data.

8. The image signal processing system of claim 6, wherein the coarse auto-focus score at each coarse position is determined based, at least in part, on the sum of the outputs of the first and second filters, and the fine auto-focus score at each fine position is determined based, at least in part, on the sum of the outputs of the third and fourth filters.

9. The image signal processing system of claim 6, wherein the first and second filters for filtering the camera luminance comprise 3×3 filters based on Scharr operators.

10. A method, comprising the steps of:
determining coarse auto-focus scores, based on coarse auto-focus statistics collected by a statistics engine, at various steps along the focal length of a lens of an image capture device,
identifying, by control logic, a step at which the corresponding auto-focus score decreases relative to the previous step,
identifying, by the control logic, an optimal focal region in the vicinity of that step, and
analyzing, by the control logic, fine auto-focus scores within the optimal focal region to determine an optimal focus position of the lens.

11. The method of claim 10, wherein analyzing the fine auto-focus scores within the optimal focal region to determine the optimal focus position comprises searching for the focus position that provides the maximum fine auto-focus score within the optimal focal region.

12. The method of claim 10, wherein the coarse and fine auto-focus scores are based on white-balanced luminance values derived from Bayer RGB data.

13. The method of claim 12, wherein the white-balanced luminance values for the coarse auto-focus scores are derived from decimated Bayer RGB data.

14. An electronic device, comprising:
an imaging device having a digital image sensor and a lens,
an interface configured to communicate with the digital image sensor,
a storage device,
a display device configured to display a visual representation of an image scene corresponding to primary image data acquired by the digital image sensor, and
an image signal processing subsystem containing a front-end pixel processing unit configured to receive a primary frame of image data containing pixels acquired using the imaging device having the digital image sensor, wherein the front-end pixel processing unit comprises statistics logic with auto-focus statistics logic configured to process the primary image data to collect coarse and fine auto-focus statistics, and
control logic configured to determine an optimal focus position of the lens of the imaging device using coarse and fine auto-focus scores based on the coarse and fine auto-focus statistics, respectively, wherein the control logic determines the optimal focus position of the lens by determining a coarse auto-focus score for each of a plurality of coarse-score positions along the full focal length in a first direction, identifying which of the plurality of coarse-score positions has a corresponding coarse auto-focus score that decreases relative to the coarse auto-focus score corresponding to the immediately preceding coarse-score position, stepping the focus position through one or more fine-score positions in a second direction, opposite to the first direction, starting from the identified coarse-score position, searching for a peak in the fine auto-focus scores, and setting the focus position corresponding to the peak as the optimal focus position.

15. The electronic device of claim 14, wherein the step size between each of the plurality of coarse-score positions is generally constant.

16. The electronic device of claim 15, wherein the step size between each of the fine-score positions is generally constant but smaller than the step size between each of the coarse-score positions.

17. The electronic device of claim 14, wherein the sensor interface comprises a Standard Mobile Imaging Architecture (SMIA) interface.

18. The electronic device of claim 14, comprising at least one of a desktop computer, a laptop computer, a tablet computer, a mobile phone, a portable media player, or any combination thereof.

19. The electronic device of claim 14, wherein the digital image sensor comprises at least one of a digital camera integrated with the electronic device, an external digital camera connected to the electronic device via the interface, or some combination thereof.
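
The coarse-to-fine search recited in claims 2 and 14 can be summarized in code form. The sketch below is only an illustration of that search strategy under assumed step sizes and a caller-supplied score function; it is not the claimed hardware implementation, and the early-exit on the fine pass is an added assumption.

```python
def find_optimal_focus(score_at, min_pos: int, max_pos: int,
                       coarse_step: int = 32, fine_step: int = 4) -> int:
    """Coarse search forward until the score drops, then fine search backward.

    score_at(position, fine) -> float is a hypothetical callback returning the
    coarse (fine=False) or fine (fine=True) auto-focus score at a lens position.
    """
    # Coarse pass: step from the minimum toward the maximum position and stop at
    # the first position whose coarse score decreases versus the previous one.
    prev_score, drop_pos = None, max_pos
    for pos in range(min_pos, max_pos + 1, coarse_step):
        s = score_at(pos, fine=False)
        if prev_score is not None and s < prev_score:
            drop_pos = pos
            break
        prev_score = s

    # Fine pass: step back toward the minimum position and keep the peak score.
    best_pos, best_score = drop_pos, float("-inf")
    for pos in range(drop_pos, min_pos - 1, -fine_step):
        s = score_at(pos, fine=True)
        if s > best_score:
            best_pos, best_score = pos, s
        elif s < best_score:          # past the peak; stop early (assumption)
            break
    return best_pos
```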



 
