Image capturing device

FIELD: physics, photography.

SUBSTANCE: invention relates to image capturing devices. The result is achieved in that the image capturing device includes a photographic lens which forms an image of an object, a photoelectric conversion unit located in the predicted image plane of the photographic lens, a display unit which displays the photographed image obtained by the photoelectric conversion unit, an image display control unit which displays the photographed image through the display unit after the photographed image is obtained by the photoelectric conversion unit, a distance information acquisition unit which obtains distance information in the photographed image, and a blur correction unit which corrects blur in the photographed image based on the distance information obtained by the distance information acquisition unit. The image display control unit displays a photographed image in which multiple distances in the photographed image are in focus.

EFFECT: blur correction based on information about the distance of an object included in the photographed image.

13 cl, 25 dwg

 

Technical Field

The present invention relates to an image capturing device that allows blur correction based on information about the distance to an object included in the photographed image.

Background Art

In a conventional technique, an image capturing device can discretely distribute focus detection pixels among the pixel groups of a photoelectric conversion unit and calculate the distance to an object based on the signals from the focus detection pixels, as disclosed in Japanese Patent Application Laid-Open No. 2000-156823 (hereinafter referred to as "patent document 1"). With the configuration disclosed in patent document 1, it is possible to obtain the distribution of distances to objects included in the photographed image.

To restore a blurred image and generate a restored image, for example, a Wiener filter, a generalized inverse filter, or a projection filter is used. A method for correcting blur using these tools is disclosed in Japanese Patent Application Laid-Open No. 2000-20691 (hereinafter referred to as "patent document 2"). Applying the method disclosed in patent document 2, it is possible to calculate a deterioration function by physical analysis based on the photographing conditions, or by estimation based on the output signal of a measuring device in the image capturing device, and to restore the blurred image using an image reconstruction algorithm called deconvolution.
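By way of illustration only, the following is a minimal sketch of the Wiener-filter deconvolution mentioned above; the function name, the NumPy implementation, and the noise-to-signal parameter nsr are assumptions for illustration and are not taken from patent document 2, which derives the deterioration function from the photographing conditions.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Restore a blurred 2-D image given its point spread function (PSF)."""
    # Pad the PSF to the image size and move its center to the origin so that
    # the FFT-based convolution model matches the spatial blur.
    kernel = np.zeros_like(blurred, dtype=float)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(kernel)          # frequency response of the blur
    G = np.fft.fft2(blurred)         # spectrum of the degraded image
    # Wiener filter: conj(H) / (|H|^2 + NSR) approximates the inverse filter
    # while suppressing noise amplification where |H| is small.
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

The nsr term keeps the filter from amplifying noise at frequencies where the blur response is small, which is the practical advantage of the Wiener filter over a plain inverse filter.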

In general, the distance to the object to be focused is determined by the focus state at the time of shooting. For this reason, the distance to the object to be focused cannot be changed after shooting. However, by obtaining the distribution of distances to objects in the photographed image using the method disclosed in patent document 1, and by performing blur correction using the blur correction method disclosed in patent document 2, the distance to the object to be focused can be changed after shooting.

Summary of the Invention

However, when the technologies disclosed in patent documents 1 and 2 are adopted in an image capturing device, it is difficult for the photographer to confirm the photographed image.

When using the technologies disclosed in patent documents 1 and 2, blur correction is switched according to the distance to the object after capturing an image, so as to change the distance to the object to be focused. However, the range of movement of the focusing lens of the photographic lens is limited, and extreme conversion processing is needed during restoration of the blurred image if the degree of blur is very high, which makes noise generation very likely. As a result, the range of object distances where blur can be corrected, or the degree of blur that can be corrected, is limited. For this reason, the distance to the object to be focused also lies within a predetermined range. The photographer looks at the confirmation image displayed immediately after photographing and evaluates the range of distances to objects that can be focused. However, it is very difficult to grasp the range of distances to objects that can be focused from an image in which a single object distance is in focus.

Even though an image focused on an arbitrary area can be obtained by blur correction processing after photographing, consider the case where the confirmation display of the photographed image is shown after photographing in a state where focusing is not performed. In this case, even if an area contains a failed part undesirable to the photographer, the confirmation display of the photographed image is shown in a blurred state. As a result, the photographer cannot confirm the failed part immediately after shooting and first notices the failed part only when the blur is corrected after photographing.

The present invention was made in view of the above problems, and an object of the present invention is to provide an image capturing device with a blur correction unit that allows the photographer to easily confirm the photographed image.

To solve this problem, an image capturing device according to the present invention includes a photographic lens which forms an object image, a photoelectric conversion unit located in the predicted image plane of the photographic lens, a display unit which displays the photographed image obtained by the photoelectric conversion unit, an image display control unit which displays the photographed image through the display unit after the photographed image is obtained by the photoelectric conversion unit, a distance information acquisition unit which obtains distance information in the photographed image, and a blur correction unit which corrects blur in the photographed image based on the distance information obtained by the distance information acquisition unit. The image display control unit displays a photographed image in which multiple distances in the photographed image are in focus.

Another image capturing device of the present invention includes a photographic lens which forms an object image, a photoelectric conversion unit located in the predicted image plane of the photographic lens, a display unit which displays the photographed image obtained by the photoelectric conversion unit, an image display control unit which displays the photographed image through the display unit after the photographed image is obtained by the photoelectric conversion unit, a distance information acquisition unit which obtains distance information in the photographed image, and a blur correction unit which corrects blur in the photographed image based on the distance information obtained by the distance information acquisition unit. The image display control unit displays the range where blur can be corrected.

The image capturing device of the present invention can display, as the confirmation image immediately after photographing, a photographed image in which blur is corrected for objects at multiple distances within the range from the first distance to the second distance in the photographed image. That is, the image capturing device can display the blur-corrected image in which multiple distances in the photographed image are in focus, as a confirmation image from which the photographer can easily grasp the range where blur correction is possible. With the blur-corrected image provided by the present invention, a failed part of the photograph can easily be identified in an area where an in-focus image can be obtained. By displaying the positions and the number of detected objects superimposed on the blur-corrected photographed image, the photographer can easily grasp the position and the number of detected objects. Thus, according to the present invention, an image capturing device that allows the photographer to easily confirm the photographed image can be implemented.

Further features of the present invention will become apparent from the following description of illustrative embodiments made with reference to the accompanying drawings.

Brief Description of the Drawings

Fig. 1 is a diagram illustrating the configuration of a camera according to the present invention.

Fig. 2 is a diagram showing the general circuit configuration of the image capturing element used in the camera according to the present invention.

Fig. 3 is a sectional view of the pixel portion of the image capturing element according to the present invention.

Fig. 4A and 4B are a plan view and a sectional view, respectively, of an image capturing pixel of the image capturing element according to the present invention.

Fig. 5A and 5B are a plan view and a sectional view, respectively, of a focus detection pixel of the image capturing element according to the present invention.

Fig. 6A and 6B are a plan view and a sectional view, respectively, of another focus detection pixel of the image capturing element according to the present invention.

Fig. 7 is a schematic diagram illustrating pupil division in the image capturing element according to the present invention.

Fig. 8 is a schematic diagram showing the distance information obtained in the present invention.

Fig. 9 is a diagram showing the relationship between the object distances and the distance range where blur can be corrected in the present invention.

Fig. 10 is a diagram showing the relationship between the image plane positions for the object distances and the image plane position of the photographic lens 137 in the first embodiment of the present invention.

Fig. 11 is a diagram illustrating the photographed image before blur correction according to the present invention.

Fig. 12A and 12B are diagrams showing blur correction of the photographed image according to the present invention.

Fig. 13A and 13B are diagrams showing the confirmation image according to the present invention displayed immediately after shooting.

Fig. 14 is a main flowchart showing the operation of the image capturing device according to the present invention.

Fig. 15 is a flowchart showing the object distance map generation subroutine.

Fig. 16 is a flowchart showing the photographing subroutine.

Fig. 17 is a flowchart showing the post-shooting confirmation image display subroutine.

Fig. 18 is a flowchart showing the blur correction subroutine.

Fig. 19 is a flowchart showing the blur function generation subroutine.

Fig. 20 is a diagram showing the relationship between the image plane positions for the object distances and the image plane position of the photographic lens 137 in the second embodiment of the present invention.

Description of the Preferred Embodiments

Illustrative embodiments of the present invention will now be explained in detail with reference to the accompanying drawings.

First Embodiment

Figs. 1-19 show the first embodiment of the present invention. The function of the first embodiment of the present invention will be described using the drawings.

Fig. 1 is a diagram illustrating the configuration of the image capturing device according to the present invention, shown as an electronic camera that includes a camera body 138 having an image capturing element and a separate photographic lens 137, and that allows the photographic lens 137 to be exchanged with respect to the camera body 138.

First, the configuration of the photographic lens 137 will be described. A first lens group 101, located at the leading end of the photographic optical system (image forming optical system), is held so as to be movable forward and backward (extension and retraction) along the optical axis. A diaphragm 102 adjusts its aperture diameter to regulate the amount of light during photographing. Reference numeral 103 denotes a second lens group. The diaphragm 102 and the second lens group 103 move together forward and backward along the optical axis and, in conjunction with the movement of the first lens group 101, perform a zoom operation (zoom function).

A third lens group 105 moves forward and backward along the optical axis to perform focus adjustment. A zoom actuator 111 rotates a cam barrel (not shown) to move the first lens group 101 and the second lens group 103 forward or backward along the optical axis, performing the zoom operation. A diaphragm actuator 112 controls the aperture diameter of the diaphragm 102 to adjust the amount of light for photographing. A focus actuator 114 moves the third lens group 105 forward or backward along the optical axis to perform the focus adjustment operation.

A camera communication circuit 136 transmits lens information to the camera or receives camera information. The lens information includes the zoom state, the diaphragm state, the focus state, and lens frame information. The camera communication circuit 136 transmits this information to a lens communication circuit 135 provided on the camera side.

Next, the camera body 138 will be described. An optical low-pass filter 106 is an optical element designed to suppress false color and moiré in the photographed image. An image capturing element 107 includes a CMOS sensor and its peripheral circuits. As the image capturing element, a two-dimensional single-chip color sensor (photoelectric conversion unit) is used, in which an on-chip primary-color Bayer mosaic filter is formed over a matrix of light-receiving pixels arranged in the transverse and longitudinal directions.

A shutter unit 139 controls the exposure time when photographing still images. A shutter actuator 140 drives the shutter 139.

As an electronic flash 115 for illuminating the object during photographing, a pulsed-light device using a xenon lamp is suitable, but a lighting device including an LED that emits light continuously may also be used. An AF auxiliary light unit 116 projects the image of a mask having a predetermined aperture pattern onto the field through a projection lens, improving focus detection capability for a dark object or a low-contrast object.

A CPU 121, which is a camera-internal CPU for performing various control operations in the camera body, has an arithmetic unit, a ROM, a RAM, an A/D converter, a D/A converter, and a communication interface circuit. The CPU 121 controls the various circuits of the camera based on a predetermined program stored in the ROM and performs a sequence of operations such as AF, photographing, image processing, and recording.

A focus driving circuit 126 controls the focus actuator 114 based on the focus detection result, moving the third lens group 105 forward or backward along the optical axis to perform focus adjustment. A diaphragm driving circuit 128 controls the diaphragm actuator 112 to control the aperture of the diaphragm 102. A zoom driving circuit 129 controls the zoom actuator 111 according to the zoom operation performed by the photographer. The lens communication circuit 135 communicates with the camera communication circuit 136 in the photographic lens 137. A shutter driving circuit 145 controls the shutter actuator 140.

A display device 131, which is, for example, an LCD-type display device, displays information about the camera shooting mode, a preview image before photographing, a confirmation image after photographing, and a focus-state display image during focus detection. An operation switch group 132 includes a power switch, a shutter release (photographing trigger) switch, a zoom operation switch, and a photographing mode selection switch. Photographed images are recorded on a removable flash memory 133. A built-in memory 144 stores various data required by the CPU 121 for its operations.

Fig. 2 shows a schematic circuit diagram of the image capturing element according to the present invention. The image capturing element can be produced, for example, using the method disclosed in Japanese Patent Application Laid-Open No. H09-046596 filed by the present applicant. In Fig. 2, for convenience of description, the pixel arrangement of the two-dimensional CMOS sensor is shown only for 2 columns × 4 rows. In the actual image capturing element, however, many of the pixels denoted 30-11 to 30-42 are arranged, making it possible to obtain a high-resolution image. In this embodiment, the image capturing element has a pixel pitch of 2 μm, an effective pixel count of 3000 columns × 2000 rows = 6,000,000 pixels, and an image capturing screen size of 6 mm × 4 mm in the transverse and longitudinal directions.

In Fig. 2, reference numeral 1 denotes the photoelectric conversion unit of a photoelectric conversion element, which includes a MOS transistor gate and a depletion layer under the gate; 2 denotes a photogate; 3 denotes a transfer switch MOS transistor; 4 denotes a reset MOS transistor; 5 denotes a source-follower amplifier MOS transistor; 6 denotes a horizontal selection switch MOS transistor; 7 denotes a source-follower load MOS transistor; 8 denotes a dark output transfer MOS transistor; 9 denotes a light output transfer MOS transistor; 10 denotes a dark output accumulation capacitor CTN; 11 denotes a light output accumulation capacitor CTS; 12 denotes a horizontal transfer MOS transistor; 13 denotes a horizontal output line reset MOS transistor; 14 denotes a differential output amplifier; 15 denotes a horizontal scanning circuit; and 16 denotes a vertical scanning circuit.

Fig. 3 shows a sectional view of a photoelectric conversion pixel. In Fig. 3, reference numeral 17 denotes a P-type well; 18 denotes a gate oxide film; 19 denotes a first polysilicon layer; 20 denotes a second polysilicon layer; and 21 denotes an n+ floating diffusion (FD) region. The FD region 21 is connected to another photoelectric conversion unit through another transfer MOS transistor. In Fig. 3, the drains of the two transfer MOS transistors 3 and the FD region 21 are shared, and sensitivity is increased by the small size and small capacitance of the FD region 21. However, the FD region 21 may also be connected by aluminum wiring.

Figs. 4A, 4B, 5A, and 5B illustrate the structures of the image capturing pixels and the focus detection pixels. In this embodiment, a Bayer arrangement is applied in which, of four pixels in 2 rows × 2 columns, the two diagonally opposite pixels have sensitivity to the G (green) spectrum, and the other two pixels have sensitivity to the R (red) and B (blue) spectra, respectively. Focus detection pixels are placed within the Bayer arrangement according to a predetermined rule. Since the method of discretely placing focus detection pixels among image capturing pixels is a well-known method disclosed in patent document 1, its description is omitted here.

Figs. 4A and 4B show the arrangement and structure of the image capturing pixels. Fig. 4A is a plan view of image capturing pixels arranged in 2 rows × 2 columns. As is well known, in the Bayer arrangement the G pixels are arranged diagonally, and the R and B pixels occupy the other two positions, respectively. This 2-row × 2-column structure is repeated.

Fig. 4B shows the cross-section 4B-4B of Fig. 4A. ML denotes the on-chip microlens located on the front surface of each pixel, CFR denotes an R (red) color filter, and CFG denotes a G (green) color filter. PD schematically shows the photoelectric conversion unit of the CMOS sensor shown in Fig. 3, and CL denotes a wiring layer forming signal lines that transmit various signals within the CMOS sensor. TL schematically shows the photographic optical system.

Here, the on-chip microlens ML of the image capturing pixel and the photoelectric conversion unit PD are designed to receive the light flux passing through the photographic optical system TL as effectively as possible. In other words, the exit pupil EP of the photographic optical system TL and the photoelectric conversion unit PD are in a conjugate relationship through the microlens ML, and the effective area of the photoelectric conversion unit is designed to be large. Fig. 4B shows the light flux incident on an R pixel, but the G pixels and the B (blue) pixel have the same structure.

Accordingly, the exit pupils EP corresponding to the individual RGB image capturing pixels have a large diameter, the light flux from the object is received effectively, and the S/N ratio of the image signal is improved.

Figs. 5A and 5B show the arrangement and structure of focus detection pixels that perform pupil division in the horizontal direction (transverse direction) of the photographic lens. Fig. 5A is a plan view of pixels arranged in 2 rows × 2 columns, including the focus detection pixels. When an image signal is obtained, the G pixels form the main component of the brightness information. Since human image recognition is sensitive to brightness information, a defective G pixel is likely to be perceived as image quality degradation. Meanwhile, the R and B pixels are pixels from which color information is obtained. Because humans are less sensitive to color information, a small defect in a pixel that acquires color information is hard to notice as image quality degradation. Accordingly, in this embodiment, of the pixels arranged in 2 rows × 2 columns, the G pixels remain image capturing pixels, while the R and B pixels are used as focus detection pixels, denoted SHA and SHB in Fig. 5A.

Fig. 5B shows the cross-section 5B-5B of Fig. 5A. The microlens ML and the photoelectric conversion unit PD have the same structure as in the image capturing pixel shown in Fig. 4B. In this embodiment, since the signal of a focus detection pixel is not used as a photographed image signal, a transparent film CFW (white) is placed instead of a color separation filter. Because pupil division is performed by the image capturing element, the opening in the wiring layer CL is shifted in one direction relative to the center line of the microlens ML. Specifically, since the opening OPHA of the pixel SHA is shifted to the right, it receives the light flux passing through the left exit pupil EPHA of the photographic lens TL. Similarly, since the opening OPHB of the pixel SHB is shifted to the left, it receives the light flux passing through the right exit pupil EPHB of the photographic lens TL.

Accordingly, the object image obtained by regularly arranging the pixels SHA in the horizontal direction is defined as image A, and the object image obtained by regularly arranging the pixels SHB in the horizontal direction is defined as image B. The defocus amount of the photographic lens 137 can then be detected by determining the relative positions of image A and image B.

Here, the microlens ML performs the function of a lens element that generates a pair of optical images: image A formed by the light flux passing through the left exit pupil EPHA of the photographic lens TL, and image B formed by the light flux passing through the right exit pupil EPHB of the photographic lens TL.

With the pixels SHA and SHB, focus detection is possible for an object having a brightness distribution in the transverse direction of the photographing screen, such as a vertical line, but is not possible for a horizontal line having a brightness distribution in the longitudinal direction. Accordingly, in this embodiment, pixels performing pupil division in the vertical direction (longitudinal direction) of the photographing screen are also included, so that focus detection is possible for horizontal lines as well.

Figs. 6A and 6B show the arrangement and structure of focus detection pixels that perform pupil division in the vertical direction of the photographing screen. Fig. 6A is a plan view of pixels arranged in 2 rows × 2 columns, including the focus detection pixels. As in Fig. 5A, the G pixels remain image capturing pixels, and the R and B pixels are used as focus detection pixels, denoted SVC and SVD in Fig. 6A.

Fig. 6B shows the cross-section 6B-6B of Fig. 6A. The pixel structure in Fig. 6B is the same as that in Fig. 5B, except that the pixels in Fig. 5B divide the pupil in the transverse direction while the pixels in Fig. 6B divide the pupil in the longitudinal direction. Since the opening OPVC of the pixel SVC is shifted downward, it receives the light flux passing through the upper exit pupil EPVC of the photographic lens TL. Similarly, since the opening OPVD of the pixel SVD is shifted upward, it receives the light flux passing through the lower exit pupil EPVD of the photographic lens TL. Accordingly, the object image obtained by regularly arranging the pixels SVC in the vertical direction is defined as image C, and the object image obtained by regularly arranging the pixels SVD in the vertical direction is defined as image D. The defocus amount of an object image having a brightness distribution in the vertical direction can then be detected by determining the relative positions of image C and image D.

Fig. 7 schematically shows pupil division by the image capturing element according to the present invention. TL denotes the photographic lens, 107 denotes the image capturing element located in the predicted image plane of the photographic lens, OBJ denotes the object, and IMG denotes the object image.

As described with Figs. 4A and 4B, an image capturing pixel receives the light flux passing through the entire area of the exit pupil EP of the photographic lens. Meanwhile, as shown in Figs. 5A, 5B, 6A, and 6B, a focus detection pixel has a pupil division function. Specifically, the focus detection pixel SHA shown in Figs. 5A and 5B receives the light flux passing through the left pupil as seen when looking at the rear end of the lens from the image capturing plane, i.e., the light flux passing through the pupil EPHA in Fig. 7. Similarly, the focus detection pixels SHB, SVC, and SVD receive the light fluxes passing through the pupils EPHB, EPVC, and EPVD, respectively. Since the focus detection pixels are distributed over the entire area of the image capturing element 107, focus detection is possible over the entire image capturing area. The CPU 121 functions as a distance information acquisition unit that calculates the distance to the object based on the focus detection information and lens information such as the focal length.

Fig. 8 shows the distance information obtained by the distance information acquisition unit. In the image capturing element 107 according to the present invention, since the focus detection pixels SHA, SHB, SVC, and SVD shown in Figs. 5A, 5B, 6A, and 6B are distributed over the entire area, the distance to the object can be obtained at an arbitrary position of the photographing screen. If areas with similar object distances in the obtained distance distribution are combined and grouped, the contours of the objects included in the photographing screen can be extracted. Target1, Target2, and Target3 denote the extracted object areas, and BackGround1 denotes the background area. Dist1, Dist2, Dist3, and Dist4 denote the object distances: Dist1 is the distance to the object in the object area Target1, Dist2 is the distance to the object in the object area Target2, Dist3 is the distance to the object in the object area Target3, and Dist4 is the distance to the object in the background area BackGround1. Dist1 is the closest, Dist2 is the second closest, and Dist3 is the third closest, while Dist4 is the most distant.
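The grouping step described above can be sketched as follows, under an assumed grid layout of the distance data; the tolerance value and the SciPy-based connected-component labeling are illustrative assumptions, not the embodiment's exact method.

```python
import numpy as np
from scipy import ndimage

def extract_object_areas(distance_map, tolerance=0.5):
    """Group cells with similar object distances into labeled object areas."""
    bins = np.round(distance_map / tolerance).astype(int)  # quantize distances
    labels = np.zeros(distance_map.shape, dtype=int)
    next_label = 1
    for b in np.unique(bins):
        comp, n = ndimage.label(bins == b)  # connected cells of one distance bin
        labels[comp > 0] = comp[comp > 0] + (next_label - 1)
        next_label += n
    return labels  # labels 1, 2, 3, ... play the role of Target1, Target2, ...

def area_distances(distance_map, labels):
    """Representative object distance (Dist1, Dist2, ...) for each area."""
    return {int(l): float(distance_map[labels == l].mean())
            for l in np.unique(labels)}
```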

The CPU 121 extracts objects from the object distance distribution obtained from the focus detection pixels, and obtains the area and distance of each object.

In the image capturing device according to the present invention, blur in the photographed image is corrected based on the distance information. The blur generation process can be estimated from the characteristics of the image capturing device or the characteristics of the photographic lens. A blur function modeling the blur generation process is defined, and the blurred image is restored according to the image reconstruction algorithm called deconvolution, for example using a Wiener filter, which allows the blur to be corrected. Since the blur correction method is described in patent document 2, its detailed description is omitted here.

Fig. 9 shows the relationship between the object distances Dist1, Dist2, Dist3, and Dist4 and the distance range where blur can be corrected. The horizontal axis represents the object distance Dist.

Of the object distances Dist1, Dist2, Dist3, and Dist4 shown in Fig. 8, the object distances Dist1 to Dist3 lie within the range between the first distance Dist11 and the second distance Dist12, while the object distance Dist4 is outside this range.

Fig. 10 shows the relationship between the image plane positions corresponding to the object distances Dist1, Dist2, Dist3, and Dist4 and the image plane position of the photographic lens 137. The horizontal axis represents the defocus amount Def measured from the image plane position of the photographic lens 137. The positive direction of the defocus amount Def corresponds to the rear focus direction.

"0" indicates the position of the image plane of the photographic lens 137, and the shift value is expressed as "0". Def1, Def2, Def3 and Def4 indicate the degree of defocus positions of the image planes for the feature distances Dist1, Dist2, Dist3, and Dist4 respectively. Def11 and Def12 indicate the degree of defocus positions of the image planes for the first distance Dist11 the nearest side and the second distance Dist12 endless parties.

Meanwhile, Def21 and Def22 denote the defocus amounts corresponding to the degrees of blur for which blur correction is possible. When the degree of blur is very high, extreme conversion processing is needed during restoration of the blurred image, and noise generation becomes very likely. For this reason, when blur is corrected, it is preferable that the degree of blur lie within the range between Def21 and Def22. In Fig. 10, the defocus amounts Def21 and Def22, which correspond to the degrees of blur for which correction is possible, are located further toward the outer side than the defocus amounts Def11 and Def12 at the image plane positions of the first distance Dist11 on the near side and the second distance Dist12 on the infinity side. For this reason, the first distance Dist11 on the near side and the second distance Dist12 on the infinity side are not updated.
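The range check described here can be sketched with a thin-lens model as follows; the focal length, the focused distance, and the Def21/Def22 limits are illustrative assumptions, since the actual values come from the lens information of the photographic lens 137.

```python
def image_plane_position(focal_length_mm, object_distance_mm):
    """Image distance for an object at the given distance
    (Gaussian lens formula: 1/f = 1/s + 1/s')."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

def correctable(object_distance_mm, focused_distance_mm, focal_length_mm,
                def21_mm=-0.5, def22_mm=+0.5):
    """True if the defocus of this object lies inside the assumed
    blur-correctable window [Def21, Def22] of image plane shift."""
    defocus = (image_plane_position(focal_length_mm, object_distance_mm)
               - image_plane_position(focal_length_mm, focused_distance_mm))
    return def21_mm <= defocus <= def22_mm
```

For example, with a 50 mm lens focused at 2 m, an object at 3 m yields a defocus of about -0.44 mm, inside the assumed ±0.5 mm window.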

Fig. 11 shows the photographed image before blur correction by the blur correction unit. The photographed image in Fig. 11 is identical to the photographed image in Fig. 8. As described in the operation sequence below, the focus detection position gives priority to the near side, and the position of the object located on the nearest side among the extracted objects is set as the focus detection position. For this reason, the photographed image before blur correction by the blur correction unit is in a state in which the object area Target1 on the nearest side is in focus, as shown in Fig. 11. The other areas Target2, Target3, and BackGround1 are not in focus.

Figs. 12A and 12B show cases where blur in the photographed image is corrected by the blur correction unit. The photographed images in Figs. 12A and 12B are identical to the photographed images in Figs. 8 and 11. Fig. 12A shows an image in which blur correction is performed over the entire area of the photographed image based on the object distance Dist2 of the object area Target2. From the characteristic information of the image capturing device and the information of the photographic lens corresponding to the object distance Dist2 of the object area Target2, a blur function is defined. When correction processing based on this blur function is applied to the entire area of the photographed image, the blur of the object area Target2 is corrected, and the corresponding image becomes an in-focus image. Meanwhile, when the correction processing is applied to areas other than the object area Target2, blur is reproduced as if the focus position of the photographic lens 137 were set so that the object distance Dist2 is in focus. Thus, an in-focus image can be obtained only for the object area Target2, as shown in Fig. 12A.

Fig. 12B shows an image in which blur correction is performed over the entire area of the photographed image based on the object distance Dist3 of the object area Target3. From the characteristic information of the image capturing device and the information of the photographic lens corresponding to the object distance Dist3 of the object area Target3, a blur function is defined. When correction processing based on this blur function is applied to the entire area of the photographed image, the blur of the object area Target3 is corrected, and the corresponding image becomes an in-focus image. Meanwhile, when the correction processing is applied to areas other than the object area Target3, blur is reproduced as if the focus position of the photographic lens 137 were set so that the object distance Dist3 is in focus. Thus, an in-focus image can be obtained only for the object area Target3, as shown in Fig. 12B.

As described above using Figs. 11, 12A, and 12B, in an image capturing device where blur correction based on object distance information is possible, when blur correction is performed based on distance information that includes the distance and area of each object, the object to be focused can be selected.

However, as shown in Figs. 11, 12A, and 12B, since freedom is given as to which object can be focused, it is difficult for the photographer to confirm the photographed image.

Through the above-described blur correction, the object distance to be focused can be changed after photographing. However, the range of object distances where blur correction is possible is limited. For this reason, the distance to the object to be focused also lies within a predetermined range. The photographer looks at the confirmation image immediately after photographing and evaluates the range of object distances that can be focused. However, it is very difficult to grasp the range of distances that can be focused from an image in which one arbitrary object distance is in focus.

Even though an image focused on an arbitrary area can be obtained by blur correction processing after photographing, consider the case where the confirmation image after photographing is displayed in an unfocused state. In this case, even if an area contains a failed part undesirable to the photographer, the confirmation image is displayed in a blurred state. As a result, the photographer cannot confirm the failed part immediately after shooting and first notices the failed part only when the blur is corrected after photographing.

Accordingly, in the image capturing device according to the present invention, when the nearest-side distance where blur can be corrected by the blur correction unit is defined as the first distance, and the infinity-side distance where blur can be corrected by the blur correction unit is defined as the second distance, the confirmation image displayed immediately after photographing is a photographed image in which blur in the range from the first distance to the second distance is corrected. Thus, the image is displayed with blur corrected as widely as possible.

Figs. 13A and 13B show the confirmation image display immediately after photographing in the present invention. The photographed images in Figs. 13A and 13B are identical to the photographed images in Figs. 8, 11, 12A, and 12B. Fig. 13A shows the case where an image in which the object areas Target1 to Target3 are in focus is displayed. Fig. 13B shows the case where the object areas Target1 to Target3 are in focus and the positions and number of detected objects are displayed superimposed.

In Fig. 13A, since the object distances Dist1 to Dist3 of the object areas Target1 to Target3 lie within the range between the first distance Dist11 and the second distance Dist12 where blur can be corrected, blur correction is enabled for them. Accordingly, in the image capturing device according to the present invention, in the confirmation image display immediately after photographing, blur correction is performed on the object areas Target1 to Target3.

By performing correction processing based on the characteristic information of the image capturing device and the information of the photographic lens corresponding to the object distance Dist1 of the object area Target1, the blur of the object area Target1 is corrected and the corresponding image becomes an in-focus image. By performing correction processing based on the characteristic information of the image capturing device and the information of the photographic lens corresponding to the object distance Dist2 of the object area Target2, the blur of the object area Target2 is corrected and the corresponding image becomes an in-focus image. By performing correction processing based on the characteristic information of the image capturing device and the information of the photographic lens corresponding to the object distance Dist3 of the object area Target3, the blur of the object area Target3 is corrected and the corresponding image becomes an in-focus image. Thus, the object area Target1, the object area Target2, and the object area Target3, which lie within the distance range where blur can be corrected, are displayed in an in-focus state. In other words, an image in which the multiple object distances Dist1, Dist2, and Dist3 are in focus is displayed as the confirmation image immediately after photographing. The background BackGround1, which is outside the distance range where blur can be corrected, is displayed in a blurred state.
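The per-area correction and compositing described above can be sketched as follows, reusing the hypothetical wiener_deconvolve shown earlier; psf_for_distance stands in for the blur function generation of step S600 and is an assumed callable, not the embodiment's API.

```python
import numpy as np

def multi_focus_confirmation(image, labels, area_distance, psf_for_distance):
    """Composite per-area blur corrections so that Target1..Target3 all
    appear in focus, while areas outside the correctable range stay blurred."""
    result = image.astype(float).copy()          # background remains as shot
    for label, distance in area_distance.items():
        # Correct the whole frame for this object's distance (as in Fig. 12A/12B),
        # then keep only the pixels belonging to this object's area.
        corrected = wiener_deconvolve(image, psf_for_distance(distance))
        mask = labels == label
        result[mask] = corrected[mask]
    return result
```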

Fig. 13B shows an aspect in which the positions and number of detected objects are displayed superimposed on the in-focus object areas Target1 to Target3. SI1 is an object position display indicating the position of the object area Target1; a displayed outline, larger than the contour of the object area Target1, indicates its position. Similarly, SI2 is an object position display indicating the position of the object area Target2, and SI3 is an object position display indicating the position of the object area Target3. SI4 is an object count display indicating the number of detected objects. Thus, the photographer can easily grasp the position and number of detected objects when the confirmation image is displayed after photographing.

By displaying the images shown in Figs. 13A and 13B as the confirmation image after photographing, an image capturing device having a photographed-image confirmation function that allows the photographer to easily confirm the photographed image can be implemented. Thus, the photographer can easily grasp the range where blur correction is possible, and can easily identify a failed part in an area where an in-focus image can be obtained by blur correction. By displaying the positions and the number of detected objects superimposed on the blur-corrected photographed image, the photographer can easily grasp the position and the number of detected objects when the confirmation image is displayed after photographing.

Figs. 14-19 are flowcharts showing the operation of the image capturing device according to the present invention.

Fig. 14 is the main flowchart of the image capturing device according to the present invention. The operations of the main flowchart represent processing performed by the CPU 121 according to the program stored in the ROM.

When the photographer turns on the power switch of the camera (S101), the CPU 121 checks the operation of each actuator and of the image capturing element in the camera, initializes the memory contents and the execution program, and performs the photographing preparation operation (S102). At step S103, the CPU communicates with the lens through the lens communication circuit 135 and the camera communication circuit 136 in the photographic lens. The CPU checks the operation of the lens through this communication, initializes the memory contents and the execution program in the lens, and causes the lens to perform its preparation operation. The CPU obtains various lens attribute data required for focus detection and image capture and stores the attribute data in the built-in memory 144. At step S104, the CPU starts the image capturing operation of the image capturing element and reads out a low-resolution moving image for preview. At step S105, the CPU displays the read moving image on the display unit 131 provided on the rear side of the camera, and the photographer looks at this preview image and determines the composition for photographing.

At step S106, the CPU determines the presence or absence of a face in the moving image for preview. From the preview moving image, the CPU determines the number of faces, their positions, and their sizes, i.e., the objects, and stores them in the built-in memory 144. Since a face recognition method is disclosed in Japanese Patent Application Laid-Open No. 2004-317699 and already known, its description is omitted here.

At step S107, when the presence of a face in the photographed area is recognized, the processing moves to step S108 and sets the focus adjustment mode to the face AF mode. Here, the face AF mode means an AF mode in which focus is achieved considering both the face position in the photographed area and the object distance map generated at step S200.

Meanwhile, at step S107, when the absence of a face in the photographed area is recognized, the processing moves from step S107 to step S109 and sets the focus adjustment mode to the multi-point AF mode. Here, the multi-point AF mode means a mode in which the photographed area is divided into 3 × 5 = 15 areas, the main object is estimated based on the focus detection result in each divided area, calculated from the object distance map, and the brightness information of the object, and the corresponding area is brought into focus.

After the AF mode is determined at step S108 or S109, the CPU determines at step S110 whether the photographing preparation switch is turned on. When it is determined that the photographing preparation switch is not turned on, the processing moves to step S116 and determines whether the power switch is turned off.

When the photographing preparation switch is turned on at step S110, the processing moves to step S200 and executes the object distance map generation subroutine.

At step S111, the CPU determines the focus detection position based on the object distance map calculated at step S200. Here, the determination method gives priority to the near side, and the position of the object located on the nearest side among the objects obtained at step S200 is set as the focus detection position.

At step S112, the CPU calculates the defocus amount at the focus detection position determined at step S111 from the defocus map obtained at step S200, and determines whether the obtained defocus amount is less than or equal to an allowable value. When the defocus amount exceeds the allowable value, the CPU determines the result as out-of-focus and drives the focusing lens at step S113. Then the processing returns to step S110 and determines whether the photographing preparation switch is pressed. When it is determined at step S112 that the in-focus state is achieved, the CPU performs an in-focus display at step S114, and the processing moves to step S115.

At step S115, the CPU determines whether the photographing trigger switch is turned on. When it is determined that the photographing trigger switch is not turned on, the photographing standby state is maintained at step S115. When the photographing trigger switch is turned on at step S115, the processing moves to step S300 and executes the photographing subroutine.

At the end of the photographing subroutine of step S300, the processing moves to step S116 and determines whether the power switch is turned off. When it is determined that the power switch is not turned off, the processing returns to step S103. When it is determined that the power switch is turned off, the CPU ends the sequence of operations.

Fig. 15 shows the flowchart of the object distance map generation subroutine. The sequence of operations of the object distance map generation subroutine (the distance information acquisition function) is also executed by the CPU 121.

When the processing jumps from step S200 of the main routine to step S200 of this subroutine, the CPU sets a focus detection area at step S201. The CPU determines one focus detection area out of all the focus detection areas and performs the processing from step S202 onward. At step S202, the CPU reads the signals of the focus detection pixels of the focus detection area set at step S201. At step S203, the CPU generates two images for the correlation operation. By arranging the signals of the individual focus detection pixels read at step S202, the signals of the two images A and B for the correlation operation are obtained.

At step S204, the CPU performs the correlation operation based on the obtained image signals and calculates the phase difference between the two images A and B. At step S205, the CPU determines the reliability of the correlation operation result. Here, reliability indicates the degree of coincidence between images A and B; when the degree of coincidence between images A and B is high, the reliability of the focus detection result is generally high. Accordingly, the CPU can determine the accuracy of the phase difference determination based on whether the degree of coincidence exceeds a threshold value, and may preferentially use information having high reliability when multiple focus detection areas are selected. At step S206, the CPU multiplies the phase difference between images A and B obtained at step S204 by a conversion factor for converting the phase difference into a defocus amount, thereby calculating the defocus amount.
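A minimal sketch of this correlation operation for one focus detection area follows; the SAD-based matching and the conversion factor k_factor are illustrative assumptions standing in for the values determined by the pupil division geometry.

```python
import numpy as np

def phase_difference(img_a, img_b, max_shift=20):
    """Shift of image B relative to image A minimizing the sum of absolute
    differences; the residual score can serve as the coincidence
    (reliability) measure of step S205 (lower means higher coincidence)."""
    best_shift, best_score = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        a = img_a[max(0, s):len(img_a) + min(0, s)]
        b = img_b[max(0, -s):len(img_b) + min(0, -s)]
        score = float(np.abs(a - b).mean())    # normalized matching error
        if score < best_score:
            best_shift, best_score = s, score
    return best_shift, best_score

def defocus_amount(img_a, img_b, k_factor=0.05):
    """Step S206 analogue: phase difference times an assumed conversion
    factor (mm of defocus per pixel of shift)."""
    shift, _ = phase_difference(img_a, img_b)
    return k_factor * shift
```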

At step S207, the CPU determines whether the calculation of the defocus amount is completed for all focus detection areas. When it is determined that the calculation is not completed for all focus detection areas, the processing returns to step S201, a focus detection area is selected from the remaining focus detection areas, and it is set as the focus detection area. When it is determined at step S207 that the calculation is completed for all focus detection areas, the processing moves to step S208.

At step S208, the CPU generates a defocus map from the defocus amounts of all the focus detection areas obtained by repeating the processing of steps S201-S207. Here, the defocus map is distribution data in which positions on the photographing screen and defocus amounts are related to each other.

At step S209, with respect to the defocus map obtained at step S208, and taking into account the lens information obtained from the photographic lens 137 through the lens communication at step S103, the CPU converts the defocus amounts into object distances. Thus, distribution data in which positions on the photographing screen and object distances are related to each other can be obtained.
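Under a simple thin-lens assumption, the conversion of step S209 can be sketched as follows; the formula and units are illustrative, since the embodiment uses the lens information communicated at step S103.

```python
def defocus_to_distance(defocus_mm, focal_length_mm, focused_distance_mm):
    """Convert a defocus amount into an object distance (thin-lens model)."""
    # Image plane position for the currently focused distance: 1/f = 1/s + 1/s'
    focused_plane = 1.0 / (1.0 / focal_length_mm - 1.0 / focused_distance_mm)
    s_image = focused_plane + defocus_mm      # image plane for this object
    # Invert the Gaussian lens formula: 1/s = 1/f - 1/s'
    return 1.0 / (1.0 / focal_length_mm - 1.0 / s_image)
```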

At step S210, the CPU extracts objects based on the object distance distribution data. The CPU combines and groups areas with similar object distances in the obtained distance distribution and extracts the contours of the objects included in the photographing screen. This yields an object distance map (distance information) in which the area of each object and the distance to the object are related to each other.

When the processing of step S210 is completed, the CPU ends the object distance map generation subroutine, and the processing moves to step S111 of the main routine.

Fig. 16 is a flowchart showing the photographing subroutine. The sequence of operations of the photographing subroutine is also executed by the CPU 121 according to the above program.

At step S301, the CPU drives the light amount adjusting diaphragm and controls the aperture of the mechanical shutter that defines the exposure time.

At step S302, the CPU reads the image for photographing a high-resolution still image, i.e., reads the signals of all pixels.

At step S303, the CPU performs defective pixel interpolation on the read image signals. That is, the output of a focus detection pixel contains no RGB color information for photographing, and the focus detection pixel corresponds to a defective pixel in image capture. Therefore, the CPU generates an image signal by interpolation from the information of the surrounding image capturing pixels.
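A minimal sketch of this interpolation, assuming a Bayer raw frame and a boolean mask marking the focus detection pixel positions; the 4-neighbor same-color average is an illustrative choice, not the embodiment's exact method.

```python
import numpy as np

def interpolate_focus_pixels(raw, focus_mask):
    """Replace focus detection pixels (focus_mask True) in a Bayer raw frame
    with the average of surrounding image capturing pixels of the same color."""
    out = raw.astype(float).copy()
    ys, xs = np.nonzero(focus_mask)
    for y, x in zip(ys, xs):
        neighbors = []
        # Step of 2 stays on the same color plane of the Bayer pattern.
        for dy, dx in ((-2, 0), (2, 0), (0, -2), (0, 2)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < raw.shape[0] and 0 <= nx < raw.shape[1]
                    and not focus_mask[ny, nx]):
                neighbors.append(out[ny, nx])
        if neighbors:
            out[y, x] = np.mean(neighbors)
    return out
```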

At step S304, the CPU performs image processing such as gamma correction, color conversion, and edge enhancement, and at step S305 the CPU records the photographed image in the flash memory 133. The photographed image recorded at step S305 is set as an image in which blur is not corrected. When the recorded image is reproduced and displayed, the blur is corrected and the corrected image is displayed. This reduces the processing load of the image capturing device. The recording of the photographed image at step S305 is performed by the CPU 121.

At step S306, the characteristic information of the camera body 138 is recorded in the flash memory 133 and the built-in memory 144 in association with the photographed image recorded at step S305. Here, the characteristic information of the camera body 138 includes optical characteristics such as the sensitivity distribution information of the image capturing pixels and the focus detection pixels of the image capturing element 107, vignetting information of the photographing light flux in the camera body 138, information about the distance from the mounting plane of the camera body 138 and the photographic lens 137 to the image capturing element 107, and manufacturing tolerance information. Since the sensitivity distribution information of the image capturing pixels and the focus detection pixels of the image capturing element 107 is determined by the on-chip microlens ML and the photoelectric conversion unit PD, this information may be recorded.

At step S307, the characteristic information of the photographic lens 137 is recorded in the flash memory 133 and the built-in memory 144 in association with the photographed image recorded at step S305. Here, the characteristic information of the photographic lens 137 includes optical characteristics such as exit pupil EP information, frame information, F-number information at the time of photographing, aberration information, and manufacturing tolerance information.

In the image capturing device according to the present invention, as shown in steps S305-S307, the photographed image in which blur correction is not performed, together with the characteristic information of the photographic lens and the characteristic information of the image capturing device corresponding to that photographed image, is recorded in the flash memory 133. Thus, the blur of the photographed image can be corrected after photographing based on the characteristic information of the photographic lens and the characteristic information of the image capturing device, while the processing load when recording the photographed image is reduced. The recording of the photographed image, the characteristic information of the photographic lens, and the characteristic information of the image capturing device at steps S305-S307 is performed by the CPU 121.

When the processing of step S307 is completed, the processing moves to the post-photographing confirmation image display subroutine of step S400.

At the end of the post-photographing confirmation image display subroutine of step S400, the CPU ends the photographing subroutine of step S300, and the processing moves to step S116 of the main routine.

Fig. 17 is a flowchart showing the post-photographing confirmation image display subroutine. The sequence of operations of this subroutine is also executed by the CPU 121. The image display control unit referred to in the claims corresponds to the CPU 121.

At step S401, the CPU obtains the object distance map generated at step S200.

At step S402, the CPU sets the target areas where blur is to be corrected and the corresponding object distances. As shown in the object distance map generation subroutine of step S200, information in which the area of each object and the distance to the object are related to each other is obtained from the object distance map. As described with Fig. 9, the distance range where blur correction is possible is determined by the photographic lens 137, and the first distance Dist11, which is the nearest-side distance where blur can be corrected, and the second distance Dist12, which is the farthest infinity-side distance where blur can be corrected, change accordingly. Accordingly, each object whose object distance lies within the blur-correctable distance range (from the first distance Dist11 to the second distance Dist12) determined by the photographic lens 137 is set so that its blur is corrected. Thus, each object area and the object distance used for correcting the blur of that area can be set.

The first distance Dist11 and the second distance Dist12, which define the blur-correctable distance range, may further be set so that the shift amount from the image plane position of the photographic lens 137 is less than or equal to a predetermined value. Thus, the degree of blur can be kept less than or equal to a predetermined value, and strong blur correction can be performed.

When the setting of the blur-correction target area and distance at step S402 is completed, the processing proceeds to the blur correction subroutine of step S500.

At step S403, the CPU displays the image whose blur was corrected at step S500 on the display unit 131 for a predetermined time. At this time, either only the blur-corrected image is displayed, as shown in Fig. 13A, or the positions and the number of detected objects are displayed overlaid on the blur-corrected image, as shown in Fig. 13B. The display mode is switched according to a setting value that the photographer enters with the operation switches 132.

When the processing of step S403 is completed, the CPU ends the post-photographing confirmation-image display subroutine, and the processing returns to the photographing subroutine.

Fig. 18 is a flowchart showing the blur correction subroutine. This sequence of operations is also executed by the CPU 121. The blur correction unit referred to in the claims corresponds to the CPU 121.

At step S501, the CPU 121 acquires the conversion information, which specifies the content of the conversion processing performed in the image processing circuit 125.

At step S502, the CPU 121 determines the conversion method to be used when converting the image information received from the image processing circuit 125. Specifically, the CPU 121 determines the conversion method on the basis of the conversion information obtained at step S501 (and, if necessary, the characteristic information of the image capturing device obtained at step S306 or the characteristic information of the photographic lens obtained at step S307, in addition to the conversion information). Here, the determined conversion method is one that transforms the image information so that the exposure value and the pixel value stand in a proportional relationship, ensuring the linearity that is a precondition of the image reconstruction algorithm disclosed in patent document 2.

For example, when the image processing circuit 125 performs gamma correction, the CPU determines at step S502 a transformation that is the inverse of that gamma correction. Thus, the image before conversion can be reproduced and an image with linearity can be obtained. Similarly, when the image processing circuit 125 performs color correction, a transformation that is the inverse of that color conversion is determined at step S502, so that, again, an image having linearity is obtained. In this way, the conversion method determined at step S502 corresponds to the inverse of the conversion processing performed by the image processing circuit 125.
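A minimal sketch of such an inverse transformation, assuming the gamma correction is a simple power law with a hypothetical exponent of 2.2, might look as follows; the actual inverse would be determined by the conversion information of step S501:

```python
import numpy as np

# A minimal sketch of undoing a power-law gamma correction to restore
# linearity before deconvolution. The exponent 2.2 is an assumption for
# illustration only.

def inverse_gamma(img: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Map gamma-encoded pixel values in [0, 1] back to linear exposure values."""
    return np.clip(img, 0.0, 1.0) ** gamma

# Usage: a gamma-encoded value of 0.5 maps back to about 0.218 in linear light.
linear = inverse_gamma(np.array([0.0, 0.5, 1.0]))
```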

At step S503, the image processing circuit 125 receives the photographed image. At step S504, the received photographed image is converted according to the conversion method determined at step S502. When the conversion processing of step S504 is completed, the processing proceeds to step S600, where the blurring function is generated.

At step S505, the CPU applies a transformation that is the inverse of the blurring function generated at step S600, performing the blur correction processing on the photographed image converted at step S504. Here, the blur correction processing is performed according to an image reconstruction algorithm referred to as deconvolution. Thus, a blur-corrected image in which the blurring of the predetermined object has been corrected can be obtained. Since the method of correcting blur by applying a transformation inverse to the blurring function is disclosed in patent document 2, its description is omitted here.
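Since the specific reconstruction method of patent document 2 is not reproduced here, the following sketch instead shows a standard Wiener-type frequency-domain deconvolution as one illustration of applying the inverse of a known blurring function; the regularization constant k is a hypothetical stand-in for a noise-dependent term, and this is not presented as the patented method:

```python
import numpy as np

# A minimal sketch assuming a shift-invariant blurring function (PSF) and a
# linear-light input image. The H*/( |H|^2 + k ) form attenuates frequencies
# where |H| is small, limiting the noise amplification discussed above.

def deconvolve(blurred: np.ndarray, psf: np.ndarray, k: float = 1e-2) -> np.ndarray:
    """Invert a known point-spread function on a linear-light image."""
    # Embed the PSF in an image-sized array, centred at the origin, so the
    # deconvolution introduces no spatial shift.
    kernel = np.zeros_like(blurred, dtype=float)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    h = np.fft.fft2(kernel)
    g = np.fft.fft2(blurred)
    f_hat = g * np.conj(h) / (np.abs(h) ** 2 + k)
    return np.real(np.fft.ifft2(f_hat))
```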

When the processing of step S505 is completed, the CPU ends the blur correction subroutine, and the processing proceeds to step S403 of the post-photographing confirmation-image display subroutine.

Fig. 19 is a flowchart showing the blurring function generation subroutine. This sequence of operations is also executed by the CPU 121.

At step S601, the CPU acquires the characteristic information of the camera body 138 that was recorded in the internal memory 144 at step S306 at the time of photographing.

At step S602, the CPU acquires the characteristic information of the photographic lens 137 that was recorded in the internal memory 144 at step S307 at the time of photographing.

At step S603, the CPU acquires the parameters used to specify the blurring function. The blurring function is determined by the optical transfer characteristics between the photographic lens 137 and the image capturing element 107. The optical transfer characteristics vary depending on factors such as the characteristics of the camera body 138, the characteristic information of the photographic lens 137, the position of the object area in the photographed image, and the distance to the object. Accordingly, the CPU stores in the internal memory 144 table data relating these factors to the parameters used in specifying the blurring function. When step S603 is executed, the CPU 121 retrieves from the internal memory 144 the parameters used to set the blurring function, on the basis of those factors.
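The table lookup described above might be sketched as follows; the key structure and the parameter values are purely hypothetical placeholders for the factor-to-parameter table held in the internal memory 144:

```python
# A minimal sketch of the factor-to-parameter table. A real table would be
# indexed by the stored factors (body, lens, object-area position, distance);
# the sigma values here are illustrative only.

PSF_PARAM_TABLE = {
    # (lens_id, image_region, distance_bucket) -> Gaussian parameter sigma
    ("lens137", "center", "near"): 1.5,
    ("lens137", "center", "far"): 2.4,
    ("lens137", "corner", "near"): 2.0,
}

def lookup_psf_sigma(lens_id: str, region: str, distance_bucket: str) -> float:
    """Fetch the blurring-function parameter for the given shooting factors."""
    return PSF_PARAM_TABLE[(lens_id, region, distance_bucket)]
```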

At step S604, the CPU sets the blurring function on the basis of the parameters obtained at step S603. As one example, the blurring function is a Gaussian distribution, under the assumption that the blurring phenomenon follows a normal distribution. If the distance from the central pixel is denoted as r, and an arbitrary parameter of the normal distribution is denoted as σ², the blurring function h(r) can be expressed as follows:

h(r) = (1/(2πσ²))·exp(−r²/(2σ²))
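Taking the rotationally symmetric Gaussian above, a minimal sketch of constructing the discrete blurring function is shown below; the kernel size is an assumed illustrative choice, and σ would come from the parameters acquired at step S603:

```python
import numpy as np

# A minimal sketch of building the Gaussian blurring function h(r) as a
# discrete kernel.

def gaussian_psf(sigma: float, size: int = 9) -> np.ndarray:
    """Discrete 2D Gaussian PSF, normalised so the kernel sums to 1."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x ** 2 + y ** 2  # squared distance from the central pixel
    h = np.exp(-r2 / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return h / h.sum()  # renormalise after truncating the infinite support
```

Such a kernel can be passed directly to the deconvolution sketch given earlier, e.g. deconvolve(img, gaussian_psf(1.5)).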

When the processing of step S604 is completed, the CPU ends the blurring function generation subroutine, and the processing proceeds to step S505 of the blur correction subroutine.

The image capturing device according to the present invention has been described as a camera with an interchangeable photographic lens, but the present invention can also be applied to a so-called compact (fixed-lens) camera, in which the photographic lens is integral with the camera. Even such a compact camera has the conventional problem described above, and the same effect can be obtained by displaying the blur-corrected photographed image.

As described above, according to the image capturing device of the present invention, it is possible to realize an image capturing device with a blur correction unit that allows the photographer to easily confirm the photographed image.

Second embodiment

Fig. 20 shows a second embodiment of the present invention. The processing and operation of the second embodiment will now be described with reference to the drawings.

In the first embodiment, the defocus amounts Def21 and Def22, which correspond to the degree of blurring up to which the blur can be corrected, lie further outward than the defocus amounts Def11 and Def12 at the image plane positions of the first distance Dist11 on the near side and the second distance Dist12 on the infinite side. For this reason, the first distance Dist11 on the near side and the second distance Dist12 on the infinite side are not updated.

Meanwhile, in the second embodiment, the defocus amounts Def21 and Def22, which correspond to the degree of blurring up to which the blur can be corrected, lie further inward than the defocus amounts Def11 and Def12 at the image plane positions of the first distance Dist11 on the near side and the second distance Dist12 on the infinite side. For this reason, the second embodiment differs from the first in that the first distance Dist11 on the near side and the second distance Dist12 on the infinite side are updated according to the defocus amounts Def21 and Def22.

Fig. 20 shows the relationship between the image plane positions corresponding to the object distances Dist1, Dist2, Dist3 and Dist4 and the image plane position of the photographic lens 137. The axis represents the defocus amount Def measured from the image plane position of the photographic lens 137. Here, the positive direction of the defocus amount Def corresponds to the rear-focus direction.

"0" indicates the position of the image plane of the photographic lens 137, and the shift value is expressed as "0". Def1, Def2, Def3 and Def4 indicate the degree of defocus positions of the image planes for the feature distances Dist1, Dist2, Dist3, and Dist4 respectively. Def11 and Def12 indicate the degree of defocus positions of the image planes for the first distance Dist11 the nearest side and the second distance Dist12 infinite sides, respectively.

Meanwhile, Def21 and Def22 indicate the defocus amounts corresponding to the degree of blurring up to which the blur can be corrected. When the degree of blurring is very high, extreme conversion processing is required to restore the blurred image, and noise is very likely to be generated. For this reason, when the blur is corrected, it is preferable that the degree of blurring lie within the range from Def21 to Def22. Because Def11 and Def12 lie outside Def21 and Def22, the blur is difficult to correct there.

Accordingly, the first distance Dist11 on the near side is updated so that the defocus amount Def11 at the first distance Dist11 becomes a defocus amount corresponding to the defocus amount Def21. Similarly, the second distance Dist12 on the infinite side is updated so that the defocus amount Def12 at the second distance Dist12 becomes a defocus amount corresponding to the defocus amount Def22. This mitigates the problem that, when the degree of blurring is very high, extreme conversion processing is performed during restoration of the blurred image and noise is generated. In other words, more robust blur correction is enabled.
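A minimal sketch of this update is given below; defocus_of and distance_of are hypothetical conversions between object distance and defocus amount, assumed monotonic for the example:

```python
# A minimal sketch of the update described above: the near-side and
# infinite-side limits are pulled inward so that their defocus amounts stay
# within the correctable band [def21, def22].

def update_limits(dist11, dist12, def21, def22, defocus_of, distance_of):
    """Clamp the blur-correctable distance limits to the defocus band."""
    d11 = defocus_of(dist11)
    d12 = defocus_of(dist12)
    if not (def21 <= d11 <= def22):
        dist11 = distance_of(min(max(d11, def21), def22))  # clamp, convert back
    if not (def21 <= d12 <= def22):
        dist12 = distance_of(min(max(d12, def21), def22))
    return dist11, dist12
```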

The operation of updating the first distance Dist11 on the near side and the second distance Dist12 on the infinite side according to the defocus amounts Def21 and Def22 is carried out at step S402 of the post-photographing confirmation-image display subroutine (see Fig. 17).

At step S402 of Fig. 17, the object area where the blur is to be corrected and the object distance are specified. As described in the object distance map generation subroutine of step S200, information relating each object area to its object distance is obtained from the object distance map. Each object whose distance lies within the blur-correctable distance range (from the first distance Dist11 to the second distance Dist12) determined for the photographic lens 137 is set so that its blur can be corrected. The first distance Dist11 and the second distance Dist12 bounding the blur-correctable distance range are then updated so that the amount of shift from the image plane position of the photographic lens 137 is less than or equal to a predetermined value. Thus, the degree of blurring is kept less than or equal to a predetermined value, and reliable blur correction becomes possible.

Since the configuration of the image capturing device, the image capturing element and the pixel structure in the second embodiment are the same as in the first embodiment, their description is not repeated. Likewise, since the relationship between the object distance, the blur-correctable distance range and the distance information obtained by the distance information acquisition unit is the same as in the first embodiment, its description is not repeated. The operation sequence of the image capturing device according to the second embodiment is also the same as in the first embodiment and is not repeated.

As described above, the second embodiment of the present invention also realizes an image capturing device with a blur correction unit that allows the photographer to easily confirm the photographed image.

Preferred embodiments of the present invention have been described above. However, the present invention is not limited to these embodiments, and various modifications and changes can be made without departing from the scope of the present invention.

Although the present invention has been described with reference to illustrative embodiments, it should be understood that the invention is not limited to the disclosed illustrative embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Aspects of the present invention can also be realized by a computer of a system or apparatus (or a device such as a CPU or MPU) that reads out and executes a program recorded in a storage device to perform the functions of the above-described embodiment(s), and by a method whose steps are performed by the computer of the system or apparatus by, for example, reading out and executing a program recorded in a storage device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer, for example, via a network or from a recording medium of various types serving as the storage device (for example, a computer-readable medium).

This application claims priority from Japanese patent application No. 2009-115592, filed May 12, 2009, which is hereby incorporated by reference in its entirety.

1. An image capturing device, comprising:
a photographic lens (101, 103 and 105) which forms an image of an object,
a photoelectric conversion unit (107) located in the predicted image plane of the photographic lens,
a display unit (131) which displays the photographed image obtained by the photoelectric conversion unit,
an image display control unit (121) which displays the photographed image on the display unit after the photographed image is obtained by the photoelectric conversion unit,
a distance information acquisition unit (121) which obtains information on the distance to the object in the photographed image,
a blur correction unit (121) which performs blur correction by deconvolution processing on the photographed image based on the distance information obtained by the distance information acquisition unit, and
a recording unit (121) which records the photographed image on which blur correction has not been performed, the characteristic information of the photographic lens and the characteristic information of the image capturing device in association with each other,
wherein the image display control unit displays a photographed image in which multiple object distances in the photographed image are in focus.

2. The image capturing device according to claim 1, further comprising
an object detection unit (121) which detects an object in the photographed image, and
an object position display unit which displays the position of the object detected by the object detection unit on the display unit, superimposed on the photographed image.

3. The image capturing device according to claim 1, further comprising
an object detection unit (121) which detects an object in the photographed image, and
an object count display unit which displays on the display unit the number of objects detected by the object detection unit.

4. The image capturing device according to claim 1,
wherein the deconvolution processing is performed on the basis of the optical transfer characteristics between the photographic lens and the photoelectric conversion unit.

5. An image capturing device, comprising:
a photographic lens (101, 103 and 105) which forms an image of an object,
a photoelectric conversion unit (107) located in the predicted image plane of the photographic lens,
a display unit (131) which displays the photographed image obtained by the photoelectric conversion unit,
an image display control unit (121) which displays the photographed image on the display unit after the photographed image is obtained by the photoelectric conversion unit,
a distance information acquisition unit (121) which obtains information on the distance to the object in the photographed image, and
a blur correction unit (121) which performs blur correction on the photographed image based on the distance information obtained by the distance information acquisition unit,
wherein the image display control unit displays the photographed image in which the blurring has been corrected by the blur correction unit over the range from a first object distance to a second object distance according to the information on the distance to the object in the photographed image, and
the first object distance is determined on the basis of the near-side end distance of the photographic lens, and the second object distance is determined on the basis of the infinite-side end distance of the photographic lens.

6. The image capturing device according to claim 5,
wherein the range from the first object distance to the second object distance is set so that the difference between the image plane position of each of the first object distance and the second object distance and the image plane position of the photographic lens is less than or equal to a predetermined value.

7. The image capturing device according to claim 5, further comprising
an object detection unit (121) which detects an object in the photographed image, and
an object position display unit which displays the position of the object detected by the object detection unit on the display unit, superimposed on the photographed image.

8. The image capturing device according to claim 5, further comprising
an object detection unit (121) which detects an object in the photographed image, and
an object count display unit which displays on the display unit the number of objects detected by the object detection unit.

9. The image capturing device according to claim 5,
wherein the blur correction unit performs deconvolution processing on the basis of the optical transfer characteristics between the photographic lens and the photoelectric conversion unit.

10. An image capturing device, comprising:
a photographic lens (101, 103 and 105) which forms an image of an object,
a photoelectric conversion unit (107) located in the predicted image plane of the photographic lens,
a display unit (131) which displays the photographed image obtained by the photoelectric conversion unit,
an image display control unit (121) which displays the photographed image on the display unit after the photographed image is obtained by the photoelectric conversion unit,
a distance information acquisition unit (121) which obtains information on the distance to the object in the photographed image, and
a blur correction unit (121) which performs blur correction by deconvolution processing on the photographed image based on the distance information obtained by the distance information acquisition unit,
wherein the image display control unit displays information on the range of object distances within which the blur can be corrected.

11. The image capturing device according to claim 10,
wherein the deconvolution processing is performed on the basis of the optical transfer characteristics between the photographic lens and the photoelectric conversion unit.

12. The image capturing device according to claim 10, further comprising
an object detection unit which detects an object in the photographed image,
wherein the image display control unit displays the position of the object detected by the object detection unit superimposed on the photographed image.

13. The image capturing device according to claim 10, further comprising
an object detection unit which detects an object in the photographed image,
wherein the image display control unit displays the number of objects detected by the object detection unit.



 
