
Image processing method and image processing device. RU patent 2508604.

IPC classes for Russian patent Image processing method and image processing device (RU 2508604):

H04N5/232 -
Other patents in the same IPC classes:
Video system on chip for image stabilisation Video system on chip for image stabilisation / 2486688
Video system 10 on a chip for image stabilisation has a main photodetector array 11 and two secondary mutually perpendicular linear photodetector arrays 12 and 13 (with a larger pixel area), first and second random access memory 14 and 15, inputs N1…Nk of which are connected to corresponding outputs N1…Nk of the secondary mutually perpendicular linear photodetector arrays 12 and 13, outputs N1…Nk of which are also connected to inputs N1…Nk of first and second controllers 16 and 17 for calculating correlation, respectively, the second inputs M1…Mk of which are connected to corresponding outputs of the first and second random access memory 14 and 15, wherein outputs of the first and second controllers for calculating correlation are connected to inputs of a control unit 18.
Image capturing device, control method thereof and data medium Image capturing device, control method thereof and data medium / 2456654
Image capturing device has an image sensor for capturing an image signal generated by a photographic optical system which includes a focusing lens, a detecting unit for detecting the object region based on the image signal captured by said image sensor, a first generating unit for generating first information associated with the focusing state of the photographic optical system based on the image signal captured by said image sensor, a second generating unit for dividing optical flux from the object into two in order to generate two images and generate second information associated with the value of relative positional shift between the two images, and a control unit for controlling the execution of at least one of first focusing control using first information and second focusing control using second information. The control unit is configured to restrict execution of second focusing control when said detecting unit detects an object region.
Digital camera with triangulation autofocusing system and method related to it Digital camera with triangulation autofocusing system and method related to it / 2447609
At least one light spot is projected on target object; the first image of target object is captured with at least one light spot, in response to step in which light spot is projected; distance from target object to digital camera is programmatically determined using image spot with at least one light spot, and distance triangulation factor of at least one light spot in the image; and digital camera lens is automatically focused based on the step in which distance from target object to digital camera is determined.
Video surveillance method and apparatus Video surveillance method and apparatus / 2436255
Disclosed is a video surveillance method using a video camera with a video recorder, an infrared sensor for detecting movement of the object under video surveillance and a device for interfacing the motion sensor and the video camera with a video recorder. Video surveillance is carried out with possibility of turning the video camera in the azimuthal plane by a multiphase turning angle of the video image synchronously with reception of infrared radiation by the motion sensor from an infrared radiation generator mounted on the object under video surveillance, by scanning with a drive with the interfacing device in the azimuthal plane of the corresponding phase turning angle using clocked pulses with fixed frequency higher than 10 Hz. The phase turning angle is scanned in a code via successive approximation of the normalised value to the measured value by algebraic summation of reverse phase increments, recorded in form of differences between measured and normalised values.
Information processing device, information processing method and programme Information processing device, information processing method and programme / 2434260
Information processing device has a display; input operation receiving apparatus; display control apparatus designed to control the display so that an image on an image map conforms with an object imitating the real map which displays the verification image which denotes a shape in which a plurality of image maps are superimposed on to each other on the display which displays one of the images corresponding to a specific image, and information which denotes the image in a partial or complete region of the map with the specific image, when the verification operation is received, which denotes an instruction to verify the map with the specific image, and which changes the display state of the display from a first state, in which is displayed an image which corresponds to the map with the specific image, when a selection operation is received, which indicates that the map with the specific image was selected.
Apparatus and method of estimating displacement due to arm shake and device using said method and apparatus to obtain images Apparatus and method of estimating displacement due to arm shake and device using said method and apparatus to obtain images / 2433563
Invention discloses apparatus for estimating displacement and a method of estimating the common displacement vector of an image due to arm shake by using scaling information and focus information, involving the following steps: the captured image is divided into multiple image blocks; a step for determining the weight coefficient value for the displacement vector of each of the multiple image blocks based on focus information and magnification information; a step for predicting the displacement vector for each image block; and a step for estimating overall displacement by applying the weight coefficient value defined for the displacement vector for each image block to the predicted displacement vector.
Image forming device and method of controlling said device Image forming device and method of controlling said device / 2430482
Image forming device includes an image forming module configured to output the image signal of the captured object, a processing module configured to process the image signal, the processing module including a cyclic noise reduction module which is configured to reduce image signal noise, a parameter changing module configured to change the parameter of at least one image formation through the image forming and signal processing modules, and a coefficient changing module configured to change the cyclic coefficient of the cyclic noise reduction module.
Recording and creating stereo images and stereo video in real time using monoscopic low-power mobile device Recording and creating stereo images and stereo video in real time using monoscopic low-power mobile device / 2417548
A low-power mobile device for capturing images can create a stereo image and stereo video in real time from a single still view. For this purpose, statistics from the auto-focusing process are used to create a block depth map of the single still view. Artefacts in the block depth map are suppressed and the image depth map is created. Left and right 3D stereo views are created from the image depth map using a 3D surface reconstruction process based on the Z-buffer and a disparity map, which depends on the geometry of binocular vision.
Photographic camera for electronic device Photographic camera for electronic device / 2417545
Digital photographic camera has a support structure, an objective lens held by the support structure and having an optical axis, a sensitive element held by the support structure under the objective lens and having a certain number of adjacent pixel rows, where each pixel row contains a certain number of pixels, and each pixel includes an image sensor, and the image signal processor connected to the sensitive element includes an image scaling device which is configured to scale each pixel row in accordance with the scaling factor which differs from the adjacent pixel row. The image scaling device is configured to correct the oblique angle between the sensitive element of the photographic camera and the objective lens, the image of which is being captured.
Digital camera Digital camera / 2384968
Invention relates to image capturing devices. The result is achieved due to that the digital camera includes a microcomputer (110) having a "live" display mode which controls such that image data generated by a CMOS sensor (130), or image data obtained through predefined processing of image data generated by the CMOS sensor (130), are displayed on a liquid-crystal display (150) as a moving image in real time. When the down button (141) receives an instruction to begin the automatic focusing operation in "live" display mode, the microcomputer (110) controls the movable mirror such that it enters the optical path in order to measure through an AF sensor (132) and then enables the movable mirror to come out of the optical path in order to return the digital camera to the "live" display mode.
Method (variants) and image stabilization system Method (variants) and image stabilization system / 2308816
In accordance to the invention, first digital image and at least second image have a set of pixels, and each pixel has associated address for display and is represented by color. System user sets a color matching interval, or system uses a predetermined color matching interval, then in first digital image a pixel is selected, for example, representing an element in an image, which is either fuzzy because of element movement, or appears trembling due to camera movement, and is matched within limits of interval with a pixel of second image. The interval ensures compensation, required during change of lighting. After selection of a pixel in first image, it may be matched with all pixels in the second image, where each pixel of the second image, having matching color within limits of matching interval, is stored in memory, and pixel color is selected, closest to pixel of first image. Then pixel addresses are changed in second image so that the address of pixel positioned in second image, closest color-wise to the pixel in the first image, is assigned the same address on the display as the pixel of first image and the resulting rearranged second image is dispatched into memory for storage.
Device for recording and reproducing an image, device for reading an image and method for correcting a chromatic aberration Device for recording and reproducing an image, device for reading an image and method for correcting a chromatic aberration / 2321964
Processing of correction is performed with consideration of diaphragm aperture size and object image height in image reading lens. The output signal of the camera signal processing circuit (4) by means of switch (5) is sent to block (6) for correction of chromatic aberration. Value of aperture of diaphragm (31) in lens (1) for reading image, and coordinates of pixel, relatively to which correction processing is performed, from the block (6) for correction of chromatic aberration is sent to block (10) for computation of transformation ratio. The length of focal distance of approach or withdrawal of lens (1) for reading image and camera trembling correction vector are sent to block (10) for computing transformation ratio, then transformation ratio is produced for each color to be dispatched to chromatic aberration correction block (6), where the signal, corrected in block (6) for chromatic aberration correction is compressed in data compression circuit (15) for transmission to record carrier in device (17) for recording and reproduction and unpacked in data unpacking circuit (18) for transmission to switch (5).
Adaptive image stabilisation Adaptive image stabilisation / 2350036
A method and a device are proposed for stabilising an image consisting of a set of frames: motion vectors are estimated at frame level for each frame and adaptively integrated to yield, for each frame, a motion vector to be used for stabilising the image. A copy of the reference image of a frame is shifted by the corresponding adaptively integrated motion vector. In one embodiment of the invention, the perimeter of the image data unit is supplemented with an adjustment margin to be used for image stabilisation; in another variant, the vertical and horizontal components are handled independently, and motion estimation schemes related to the MPEG-4 coder, used for estimating vectors at macroblock level, and histograms are employed.
Underwater television control system Underwater television control system / 2374781
Invention can be used for underwater shooting, provision of surveillance, visual inspection and control of underwater shooting parametres and diver actions from surface in process of underwater-technical or diagnostic works at a depth under water. Underwater television control system comprises video portable camera block installed under water in leak-tight box and video camera fixed on helmet of diver's suit and installed in leak-tight box, leak-tight sources of light for illumination of video filming object, the following components installed under water - control unit, monitor, units for power supply of light sources, unit of communication with diver, unit of audio-video recording, terminals of video-audio recording unit are connected to information inputs of monitor, unit of system power supply, accumulator and unit of accumulator charging.
Surveillance camera device, method of controlling surveillance camera device and program for surveillance camera device Surveillance camera device, method of controlling surveillance camera device and program for surveillance camera device / 2376725
Invention relates to video surveillance devices. The result is achieved due to that, a camera (16) and a receiver (28) of a swiveling base are connected to each other so as to transmit a video signal. A web-server (50) sends a video signal beyond the border to a camera (16) and receives a signal from outside for remote control of the camera and a signal for remote control of the swiveling base. A control unit (40) controls the camera (16) in accordance with the signal for remote control of the camera. The signal for remote control of the swiveling base is superimposed on the video signal to be transmitted to the receiver (28) of the swiveling base using the video signal circuit (52). The receiver (28) of the swiveling base extracts the signal for remote control of the swiveling base from the video signal and controls rotation of the base (14) in accordance with the signal for remote control of the swiveling base. The given configuration can be used for transmission with superposition, and the camera and the swiveling base can be easily controlled through communication with the external environment.
Method of calibrating machine vision system consisting of three video cameras and device for realising said method Method of calibrating machine vision system consisting of three video cameras and device for realising said method / 2382515
Invention relates to computer engineering for determining and reducing parametres of video cameras to given values, where the video cameras operate in a machine vision system consisting of three video cameras, two of which provide a detailed image and the third is for scanning. The result is achieved due to that, a device is proposed for automatic adaptive three-dimensional calibration of a binocular machine vision system, which has a first video camera, first image input unit, first orientation unit, second video camera, second image input unit, second orientation unit, system controller and control unit. The device also includes a third video camera, third image input unit and a third orientation unit. Accuracy of calibrating the machine vision system is achieved due to successive pairwise calibration of different pairs of video cameras.
Image stabilisation method (versions) Image stabilisation method (versions) / 2384967
Invention relates to television and digital photography and more specifically to image stabilisation methods. The result is achieved due to that two additional linear photodetectors of considerably smaller area, made in the form of rows (columns), are placed on a single crystal together with the main photodetector matrix, and a signal is read from the two additional linear photosensitive devices at a horizontal frequency many times greater than the frame frequency of the main photodetector matrix. The pixel size along the linear photodetector is selected so that it is several times less than the pixel size of the main matrix. To maintain equal sensitivity of the main matrix and the additional linear photodetectors, in the latter the pixel size in the direction across reading is increased in proportion to the reduction of the longitudinal size and reading time. Further, three video data streams are obtained: one main stream and two auxiliary ones, from which the shift of the crystal relative to the image formed by the lens is calculated.

FIELD: physics.

SUBSTANCE: brightness distribution is determined for each of multiple image data portions, the characteristic value of each brightness distribution is calculated from said brightness distribution and a correcting value is found for tonal correction, which is carried out with respect to the combined image data based on the obtained characteristic value of brightness distribution.

EFFECT: tonal correction is carried out to obtain a combined image having suitable brightness and contrast.

10 cl, 6 dwg

 

The technical field to which the invention relates

The present invention relates to a method of obtaining combined image data by adding and combining several portions of image data.

Background art

There is an image processing method in which several images are added and combined to form one image. For example, Japanese Patent Application No. 2003-46859 discusses a tone conversion method that appropriately reduces the number of gradations of the combined image, in which one and the same object is captured by a digital camera several times at different exposures and the images are combined to form a single image with a wide dynamic range.

There is also a method in which a different object is captured several times and the images are added and combined so that multiple objects are represented in one image. In this case there are two approaches: one in which each image is captured with proper exposure and then summed, and one in which each image is captured with an exposure of 1/(number of image portions) and then summed. With a dark background, the first approach is effective for ensuring proper brightness of each object, whereas for ordinary image capture the second approach is effective for ensuring proper exposure after combination.

However, when several portions of image data capturing different objects are combined, and the portions were obtained by normal image capture, that is, not against a dark background, the contrast of the combined image is often reduced if it is obtained merely by the simple summation and combination described above, and each object will in most cases appear transparent.

List of citations

Patent literature

PTL 1: Japanese Patent Application No. 2003-46859

Summary of the invention

The present invention is directed to an image processing method that performs tonal correction to obtain a combined image with suitable brightness and contrast, and to an image processing device that can execute this method.

According to one aspect of the present invention, an image processing method for producing combined image data by combining several portions of image data includes the steps of finding a brightness distribution for each of the several portions of image data, calculating a characteristic value of each brightness distribution from that brightness distribution, and obtaining a correction value for tonal correction, which is performed on the combined image data, based on the obtained characteristic values of the brightness distributions.

Further features and aspects of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the attached drawings.

Brief description of drawings

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features and aspects of the present invention and, together with the description, serve to explain the principles of the present invention.

Figure 1 is a block diagram of a digital camera that can implement an image processing device according to the present invention.

Figure 2 is a flowchart of the process of finding a characteristic value.

Figure 3 is a conceptual illustration of the face brightness acquisition regions for a detected face.

Figure 4 is a flowchart of the process of calculating the tonal correction value.

Figure 5 is a conceptual illustration of the tonal correction value when no face is detected.

Figure 6 is a conceptual illustration of the tonal correction value when a face has been detected.

Embodiments of the invention

Various exemplary embodiments, features and aspects of the present invention are described in detail below with reference to the drawings.

Examples

Figure 1 is a block diagram of a digital camera that can implement an image processing device according to the present invention.

In Figure 1, an optical image of an object that passes through a lens (not shown) is formed on an image sensor 101 (image input unit) and converted into charges according to the amount of incident light.

The charges converted by the photoelectric conversion element are output from the image sensor 101 to an analog-to-digital (A/D) conversion unit 102 as an electrical signal and converted into a digital signal (image data) by A/D conversion. The digital signal output from the A/D conversion unit 102 is processed by a central processing unit (CPU) 100 and is then transferred to an image data output unit 111 for reproduction. The processing performed by the CPU 100 is stored in a memory (not shown) in the form of a program and executed by the CPU 100. The program to be executed may also be recorded on an external recording medium or the like. The processing described below is executed by the CPU 100.

The digital signal output from the A/D conversion unit 102 is passed to a white balance (WB) detection unit 103, a characteristic value detection unit 104 (brightness distribution detection unit and characteristic value calculation unit) and a WB processing unit 105, respectively. The WB detection unit 103 performs WB detection: a white balance gain suitable for the captured image is calculated based on the captured image data. The white balance gain can be calculated in a conventional way. In the WB processing unit 105, the white balance gain obtained by the WB detection unit 103 is applied to each red-green-blue (RGB) pixel value of the image. The image to which the white balance gain has been applied is temporarily stored in a storage device 106 for image storage.
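For illustration only, a minimal sketch of applying per-channel white balance gains to RGB image data, as the WB processing unit 105 is described as doing; the gain values, array layout and clipping are assumptions, since the patent leaves the gain calculation to "a conventional way".

```python
import numpy as np

def apply_wb_gain(rgb, gains=(2.0, 1.0, 1.5)):
    """Multiply each R, G, B channel by its white balance gain.

    rgb   : float array of shape (H, W, 3), values in [0, 1]
    gains : per-channel gains (assumed example values)
    """
    out = rgb * np.asarray(gains, dtype=np.float32)
    return np.clip(out, 0.0, 1.0)

# Example: a random 4x4 RGB image
img = np.random.rand(4, 4, 3).astype(np.float32)
balanced = apply_wb_gain(img)
```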

The image data and the characteristic values of each image are recorded in the storage device 106 for image storage and in a storage device 107, respectively, for each of a number of image captures. Once a preset number of portions of image data has been obtained, an image data combining unit 108 sums and combines the portions of image data recorded in the storage device 106 for image storage.
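A minimal sketch of the summation-and-combination step performed by the image data combining unit 108, assuming the portions are stored as equally sized float arrays; whether the result is clipped, averaged or pre-scaled is not specified by the patent and is an assumption here.

```python
import numpy as np

def combine_portions(portions):
    """Sum several equally sized image portions into one combined image.

    portions : list of float arrays of identical shape, values in [0, 1].
    The simple sum is clipped to the valid range; as noted above, exposure
    may instead be pre-scaled by 1/(number of portions) before capture.
    """
    combined = np.sum(np.stack(portions, axis=0), axis=0)
    return np.clip(combined, 0.0, 1.0)

# Example with three captured portions
portions = [np.random.rand(4, 4, 3).astype(np.float32) * 0.4 for _ in range(3)]
combined = combine_portions(portions)
```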

A correction value calculation unit 109 (correction value acquisition unit) calculates the tonal correction value based on the characteristic values of each image saved in the storage device 107 and the characteristic values of the combined image. The way the tonal correction value is calculated is described below. A rendering processing unit 110 applies tonal correction to the combined image data using the tonal correction value passed from the correction value calculation unit 109, and the corrected combined image data are then output to the image data output unit 111.

In this exemplary embodiment, the tonal correction value is calculated based on the characteristic values of each image recorded in the storage device 107 and the characteristic values of the combined image. However, the tonal correction value may instead be obtained from table data using the characteristic values of each image.

Figure 2 is a flowchart illustrating the process of finding a characteristic value, performed on the data of each image in the characteristic value detection unit 104.

In step S201 in Figure 2, a histogram is obtained. The WB gain calculated by the WB detection unit 103 is applied to all portions of the captured image data, and a histogram of the gamma-corrected result is obtained as the brightness distribution. The gamma correction can be conventional processing using a lookup table. The range over which the histogram is obtained may be an area in which the edges of the image data are cropped.

In step S202, the characteristic values of the histogram are obtained. In this exemplary embodiment, a value (SD) at which the cumulative pixel frequency reaches 1% from the dark (shadow) side, and a value (HL) at which the cumulative pixel frequency reaches 1% from the bright (highlight) side, are obtained from the histogram.
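A sketch of steps S201–S202 under stated assumptions: 8-bit brightness values, gamma correction via a simple lookup table, and SD/HL taken where the cumulative pixel count first reaches 1% from the shadow and highlight ends. The gamma value and bin layout are illustrative, not from the patent.

```python
import numpy as np

def gamma_lut(gamma=2.2):
    """Build an 8-bit gamma correction lookup table (illustrative gamma)."""
    x = np.arange(256) / 255.0
    return np.round(255.0 * np.power(x, 1.0 / gamma)).astype(np.uint8)

def histogram_sd_hl(luma, lut, fraction=0.01):
    """Return (SD, HL): the levels where the cumulative frequency reaches
    `fraction` of all pixels from the dark side and from the bright side."""
    corrected = lut[luma]                       # gamma-corrected brightness
    hist = np.bincount(corrected.ravel(), minlength=256)
    cum = np.cumsum(hist)
    total = cum[-1]
    sd = int(np.searchsorted(cum, fraction * total))           # from shadows
    hl = int(np.searchsorted(cum, (1.0 - fraction) * total))   # from highlights
    return sd, hl

luma = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
sd, hl = histogram_sd_hl(luma, gamma_lut())
```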

In step S203, preprocessing for face detection is performed. In this process, the input image is subjected to reduction or gamma correction to facilitate detection of a face present in the image.

In step S204, face detection is performed to find a face area in the image. There are no particular restrictions on the face detection method; any conventional method may be used. Conventional face detection techniques include learning-based methods involving a neural network, and methods in which regions with characteristic shapes, for example the eyes, nose and mouth, are detected in the image using pattern matching and a face is recognised if the degree of similarity is high. A variety of other methods have also been considered, including methods in which characteristic image values, such as skin colour or eye shape, are used to detect a face by statistical analysis. Some of these methods may be combined to improve face detection accuracy. In this exemplary embodiment, a high-frequency portion of the image is extracted to obtain the face size, the eye positions are compared with a template prepared in advance, and in this way the face is located.

In step S205, it is determined whether a region found by the face detection in step S204 is a face with high reliability. If there are one or more face regions (YES in step S205), the process moves to step S206. If there is no face (NO in step S205), the process of finding the characteristic value ends.

In step S206, face brightness acquisition regions are calculated. The face brightness acquisition regions are set on parts of the face. For example, face brightness acquisition regions are set at three locations, namely areas under both eyes and an area midway between the eyes, as illustrated in Figure 3, and the size of each region is calculated according to the size of the detected face. In this exemplary embodiment, each region is a square. Figure 3 shows image data 301, a face region 302, and face brightness acquisition regions 303, 304 and 305.
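A hedged sketch of step S206: placing three square face brightness acquisition regions (under each eye and midway between the eyes) and sizing them from the detected face. The exact offsets and the 0.2 scale factor are assumptions for illustration; the patent only states that the region size follows the face size.

```python
def face_brightness_regions(eye_left, eye_right, face_size, scale=0.2):
    """Return three square regions (x, y, side) derived from eye positions.

    eye_left, eye_right : (x, y) pixel coordinates of the detected eyes
    face_size           : side length of the detected face, in pixels
    scale               : assumed ratio of region side to face size
    """
    side = max(1, int(face_size * scale))
    mid_x = (eye_left[0] + eye_right[0]) // 2
    below = side  # assumed vertical offset: one region-side below the eyes
    return [
        (eye_left[0] - side // 2,  eye_left[1] + below,  side),   # under left eye
        (eye_right[0] - side // 2, eye_right[1] + below, side),   # under right eye
        (mid_x - side // 2,        eye_left[1],          side),   # between the eyes
    ]

print(face_brightness_regions((120, 200), (180, 200), face_size=100))
```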

In step S207, the average of the R, G and B pixel values of the input image is obtained for each face brightness acquisition region, and the resulting values are converted to a luminance value Y according to Formula 1:

Y = 0.299∗R + 0.587∗G + 0.114∗B (Formula 1).

For the conversion, the approximation described by Formula 2 may be applied:

Y = (3∗R + 6∗G + B)/10 (Formula 2).
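A direct transcription of Formulas 1 and 2 (step S207), applied to the mean R, G, B values of one region; the region averaging itself is assumed here to be a simple arithmetic mean.

```python
import numpy as np

def region_luminance(rgb_region, use_approximation=False):
    """Average the R, G, B values over a region and convert to luminance Y.

    rgb_region : float array of shape (h, w, 3)
    Formula 1: Y = 0.299*R + 0.587*G + 0.114*B
    Formula 2: Y = (3*R + 6*G + B) / 10  (approximation)
    """
    r, g, b = rgb_region.reshape(-1, 3).mean(axis=0)
    if use_approximation:
        return (3.0 * r + 6.0 * g + b) / 10.0
    return 0.299 * r + 0.587 * g + 0.114 * b

region = np.random.rand(16, 16, 3)
y_exact = region_luminance(region)
y_approx = region_luminance(region, use_approximation=True)
```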

In step S208, a representative face brightness value is calculated. For example, the maximum of the three region brightness values is obtained for each face, and the mean of these values over all faces is calculated.
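A sketch of step S208 as described: take the maximum of the three region luminances for each face, then average over all detected faces.

```python
def representative_face_brightness(per_face_region_luma):
    """per_face_region_luma : list of [y1, y2, y3] luminance values,
    one inner list per detected face (three acquisition regions each).
    Returns the mean over faces of each face's maximum region luminance."""
    per_face_max = [max(regions) for regions in per_face_region_luma]
    return sum(per_face_max) / len(per_face_max)

# Two faces, three regions each
print(representative_face_brightness([[110, 130, 125], [140, 120, 118]]))
```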

The characteristic values of the image found as described above are temporarily stored in the storage device 107 shown in Figure 1.

Next, the calculation of the tonal correction value in the correction value calculation unit 109 is described with reference to the flowchart in Figure 4.

In step S401, it is determined whether any of the accumulated images contains a detected face region. If an image in which a face was detected is present (YES in step S401), the process moves to step S402; if not (NO in step S401), the process moves to step S403.

In step S402, the brightness of the region of the combined image corresponding to the face region is obtained by finding the brightness values of that region. The brightness may be calculated in the same way as the brightness of each captured image described above.

In step S403, the characteristic values of the histogram of the combined image are calculated. The calculation method may be the same as the method of computing the characteristic values of the histogram of each captured image. In this exemplary embodiment, the HL and SD of the combined image are calculated.

In step S404, the target value of HL is calculated. In this exemplary embodiment, the highest HL value among the captured images is taken as the target value.

In step S405, the target value of SD is calculated. In this exemplary embodiment, the lowest SD value among the captured images is taken as the target value. The target values of HL and SD are not limited to those satisfying the conditions used in this exemplary embodiment and may be chosen appropriately. For example, the characteristic values of the brightness distribution of the image data having the highest contrast among the several captured images may be taken as the target values, or the average HL value and average SD value of the captured images may be calculated and taken as the target values.
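A sketch of steps S404–S405 under the conditions of this embodiment (target HL is the highest HL and target SD the lowest SD among the captured images); the averaging rule mentioned as an alternative is included behind a flag.

```python
def target_hl_sd(characteristic_values, use_average=False):
    """characteristic_values : list of (sd, hl) pairs, one per captured image.

    Default: target SD is the lowest SD and target HL the highest HL among
    the captured images, as in this embodiment.  With use_average=True the
    mean SD and mean HL are used instead, as the text allows.
    """
    sds = [sd for sd, _ in characteristic_values]
    hls = [hl for _, hl in characteristic_values]
    if use_average:
        return sum(sds) / len(sds), sum(hls) / len(hls)
    return min(sds), max(hls)

print(target_hl_sd([(12, 230), (20, 245), (8, 238)]))   # -> (8, 245)
```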

If there is an image in which a face was detected, the correction is carried out in such a way that the brightness value of the region of the combined image corresponding to the face region approaches a preferred face brightness value. Namely, a correction target value for the representative face brightness value before the combining process is prepared as a reference table. In this case, the correction applied to SD and HL is weakened more than in the case where there is no image in which a face was detected, so that the correction does not become unnatural when the face brightness correction is combined with the SD and HL corrections. A lookup table mapping input brightness to output brightness is then created by spline interpolation through the points corresponding to SD, HL and the face brightness, and the minimum and maximum brightness values of the image.
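A hedged sketch of the lookup-table construction: a monotone spline is fitted through the control points (minimum, SD, face brightness, HL, maximum) mapped to their target values and evaluated at every input level. The use of a monotone PCHIP spline and the particular endpoint values are assumptions; the patent only states that spline interpolation over SD, HL, the face brightness and the image minimum and maximum is used.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator  # monotone cubic spline

def build_tone_lut(sd_in, sd_out, face_in, face_out, hl_in, hl_out):
    """Build a 256-entry tone correction LUT through the given control points.

    (sd_in, sd_out), (face_in, face_out), (hl_in, hl_out) map the combined
    image's SD, representative face brightness and HL to their target values;
    0 and 255 are kept fixed (assumed endpoints).
    """
    xs = np.array([0, sd_in, face_in, hl_in, 255], dtype=float)
    ys = np.array([0, sd_out, face_out, hl_out, 255], dtype=float)
    spline = PchipInterpolator(xs, ys)
    lut = np.clip(np.round(spline(np.arange(256))), 0, 255).astype(np.uint8)
    return lut

lut = build_tone_lut(sd_in=25, sd_out=10, face_in=95, face_out=118,
                     hl_in=220, hl_out=245)
```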

Figure 6 illustrates an example of a tone curve obtained as a result of the process described above. In Figure 6, FACEin indicates the representative face brightness value after the combining process, and FACEout denotes the corresponding output luminance.

According to this exemplary embodiment of the present invention, when several images are captured and combined, tone correction can be performed to form a combined image that has suitable brightness and contrast.

In this exemplary embodiment, the brightness values of the darkest and brightest portions of each image are used as the data for obtaining the tonal correction value of the combined image. However, tone correction may also be performed according to the proportion of pixels brighter than a brightness value HLth and the proportion of pixels darker than a brightness value SDth in the brightness histogram of each image. Assuming that the brightness value ranges from 0 to 255 least significant bits (LSB), HLth is defined as 240 LSB and SDth as 15 LSB, for example. As described above, the present invention is applicable to a process in which the tonal correction value for the combined image is calculated using the brightness histogram distribution of each of the images to be combined.
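A sketch of the alternative statistic mentioned above: the proportion of pixels brighter than HLth = 240 LSB and darker than SDth = 15 LSB in an 8-bit brightness image.

```python
import numpy as np

def shadow_highlight_fractions(luma, sd_th=15, hl_th=240):
    """Return (fraction darker than sd_th, fraction brighter than hl_th)
    for an 8-bit brightness image, as an alternative input to the
    tonal correction calculation."""
    total = luma.size
    dark = np.count_nonzero(luma < sd_th) / total
    bright = np.count_nonzero(luma > hl_th) / total
    return dark, bright

luma = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
print(shadow_highlight_fractions(luma))
```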

In this exemplary embodiment, the brightness histogram of each of the images to be combined is used as the input data for calculating the tonal correction for the combined image. However, the G histogram of each image, for example, may be used as information corresponding to brightness information. In this case, the G data can be obtained from the R, G and B data output from the A/D conversion unit 102, and the corresponding characteristic value can be calculated in the characteristic value detection unit 104.

In this exemplary embodiment, the white balance gain is applied to each captured image, and then these portions of image data are combined. However, a representative white balance gain may instead be applied after the portions of image data have been combined.

In this exemplary embodiment, the characteristic values of the histogram are obtained from the combined image data. However, the characteristic values of the combined image histogram may instead be calculated from the multiple portions of captured image data before the combining process.

In this exemplary embodiment, the image combining process is performed by the CPU 100. However, part of the process may be executed by hardware, such as a circuit.

In this exemplary embodiment, a digital camera is shown as an example of the image processing device, and image data coming from the image sensor 101, which receives a luminous flux from outside and converts that luminous flux into an image signal, is used as the image data. Alternatively, the image data may be image data coming from an image reading unit, which reads an image with the help of a scanning optical system, or image data coming from an interface unit, which receives externally captured image data and inputs it into the device. Specifically, examples of image processing devices include a camera or video camera equipped with an image sensor, a printer, scanner or copying machine having an image reading unit, and a computer having an interface unit that inputs image data received from an external recording medium.

In this case, the program code itself, read out from the storage medium, implements the novel functions of the present invention, and the storage medium storing the program code constitutes the present invention. Examples of storage media for supplying the program code include a floppy disk, hard disk, optical disk, magneto-optical disk, CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD-R, magnetic tape, non-volatile memory card, ROM, and the like.

The present invention covers not only the case in which the functions of the exemplary embodiment are realised by executing the program code read by a computer, but also the case in which an operating system (OS) running on the computer performs part or all of the actual processing based on the instructions of the program code, and the functions of the exemplary embodiment are realised by that processing.

Additionally, the present invention also covers the case in which the program code read from the storage medium is written to a memory provided on a function expansion board inserted into the computer or in a function expansion unit connected to the computer, and then a CPU or the like provided on the function expansion board or in the function expansion unit performs part or all of the actual processing based on the instructions of the program code, and the functions of the exemplary embodiment are realised by that processing.

Although the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications, equivalent structures and functions.

This application claims priority from Japanese Patent Application No. 2003-46859, filed May 7, 2009, which is hereby incorporated by reference herein in its entirety.

1. An image processing method for obtaining combined image data by combining, including summing, several portions of image data, the method comprising the steps of: finding a brightness distribution for each of the several portions of image data; calculating a characteristic value of each brightness distribution from said brightness distribution; and obtaining a correction value for tonal correction, which is performed on the combined image data, based on the obtained characteristic values of the brightness distributions.

2. The method according to claim 1, further comprising the steps of: finding an area in which a face is present for each of the several portions of image data; calculating a characteristic value of the face area found in the several portions of image data and a characteristic value of the area of the combined image data corresponding to the face area; and obtaining the tonal correction value for the area of the combined image data corresponding to the face based on the characteristic value of the face area in the several portions of image data.

3. The method according to claim 1, wherein a target value of the characteristic value of the brightness distribution of the combined image data is calculated from the characteristic values of the brightness distributions of the several portions of image data, and the tonal correction value is calculated on the basis of this target value.

4. The method according to claim 3, wherein the target value is the characteristic value of the brightness distribution of the image data having the highest contrast among the several portions of image data.

5. The method according to claim 3, wherein the target value is the average of the characteristic values of the brightness distributions of the several portions of image data.

6. The method according to claim 3, wherein the target value is the characteristic value of the brightness distribution of the image data having the greatest highlight value among the several portions of image data.

7. The method according to claim 3, wherein the target value is the characteristic value of the brightness distribution of the image data having the lowest shadow value among the several portions of image data.

8. An image processing device which obtains combined image data by combining, including summing, several portions of image data received from an image input unit, the device comprising: a brightness distribution detection unit configured to find a brightness distribution for each of the several portions of image data; a characteristic value calculation unit configured to calculate a characteristic value of each brightness distribution from said brightness distribution; and a correction value acquisition unit configured to obtain a tonal correction value used for tonal correction, which is performed on the combined image data, based on the characteristic values of the brightness distributions calculated by the characteristic value calculation unit.

9. The image processing device according to claim 8, wherein the image input unit includes an image sensor, which receives a luminous flux from outside and converts said luminous flux into an image signal, an image reading unit, which reads an image with the help of a scanning optical system, or an interface unit, which receives image data from outside and inputs it into the device.

10. A storage medium storing a program for instructing a computer to perform an image processing method for obtaining combined image data by combining, including summing, several portions of image data, the program comprising: finding a brightness distribution for each of the several portions of image data; calculating a characteristic value of each brightness distribution from said brightness distribution; and obtaining a correction value for tonal correction, which is performed on the combined image data, based on the obtained characteristic values of the brightness distributions.

 
