Image capture apparatus, image capture apparatus control method, and storage medium

FIELD: physics, photography.

SUBSTANCE: the invention relates to image capture apparatuses. The result is achieved in that the image capture apparatus includes a generation unit configured to generate image data, and a distinguishing unit configured to distinguish, on the basis of first image data generated by the generation unit when the in-focus position is at a first focal position at which the object is in the focused state, or at a second focal position on the near side of the first focal position, and second image data generated by the generation unit when the in-focus position is at a third focal position on the far side of the focal position at which the background is in the focused state, a first region including the object and a second region including the background.

EFFECT: accurate distinction of the shooting object from the background, even if the image data has an insufficient depth difference between the object and the background.

16 cl, 11 dwg

 

BACKGROUND OF THE INVENTION

Field of the Invention

[0001] The present invention relates to an image capture apparatus, an image capture control method, and a storage medium. More specifically, the present invention relates to an image capture apparatus, such as an electronic camera or camcorder, to an image capture control method, and to a storage medium that stores a program for controlling the image capture apparatus.

Description of the Related Art

[0002] In recent years, many image capture devices, such as digital still cameras and digital video cameras, are provided with functions for performing image processing on regions other than an object of interest. One such function is, for example, a function for giving a blur effect to the background region of captured image data.

[0003] In general, if an image capture device has a large image sensor, as in a single-lens reflex camera, the depth of field becomes shallow by opening the aperture and making the focal length longer, and it becomes relatively easy to capture image data that has a blurred background behind an object that is in focus, as described above.

[0004] On the other hand, in an image capture device having a small image sensor, such as a compact digital camera, even if the above method is used, the depth of field tends to be deep, and as a result it is difficult to capture image data with a blurred background.

[0005] In view of this fact, it is known that an image capture device having a small image sensor, such as a compact digital camera, can obtain image data with a blurred background by distinguishing the object region from the background region in the captured image data and filtering the background region.

[0006] Japanese Patent Application Laid-Open No. 2007-124398 discusses a method for obtaining a spatial frequency component from captured image data in order to distinguish an object region from a background region. That is, in the method discussed in Japanese Patent Application Laid-Open No. 2007-124398, the amount of blur on the background side of the captured image data is increased by adjusting the focal position of the lens so that the object is located at the rear edge of the depth of field. Then the magnitude of the spatial frequency component is calculated for each of a plurality of divided blocks, and a block whose value is equal to or greater than a threshold value is distinguished as the object region.

[0007] However, the method discussed in Japanese Patent Application Laid-Open No. 2007-124398 has the problem that sufficient accuracy cannot be obtained if the amount of blur on the background side is small, because the distinction between regions is performed based on the magnitude of the spatial frequency component of one frame of image data. In particular, in an image capture device having a small image sensor, such as the compact digital cameras widely used in recent years, a sufficient amount of blur tends not to be obtained even with the above-described processing. As a result, it is difficult to distinguish between regions on the basis of the spatial frequency component of one frame of image data.

SUMMARY OF THE INVENTION

[0008] The present invention is directed to an image capture apparatus, an image capture control method, and a storage medium capable of accurately distinguishing a region containing an object from a region containing a background, even if the image data has an insufficient difference in depth between the object and the background.

[0009] According to an aspect of the present invention, an image capture apparatus includes a generation unit configured to generate first image data when the in-focus position is at a first focal position at which the object is in the focused state, or at a second focal position on the near side of the first focal position, and second image data when the in-focus position is at a third focal position on the far side of the focal position at which the background is in the focused state, and a distinguishing unit configured to distinguish, on the basis of the first and second image data generated by the generation unit, a first region that includes the object and a second region that includes the background.

[0010] According to another aspect of the present invention, a control method for an image capture apparatus includes distinguishing, on the basis of first image data obtained when the in-focus position is at a first focal position at which the object is in the focused state, or at a second focal position on the near side of the first focal position, and second image data obtained when the in-focus position is at a third focal position on the far side of the focal position at which the background is in the focused state, a first region that contains the object and a second region that contains the background.

[0011] According to yet another aspect of the present invention, there is provided a storage medium storing a program for controlling an image capture apparatus, the program causing a computer to execute processing including distinguishing, on the basis of first image data obtained when the in-focus position is at a first focal position at which the object is in the focused state, or at a second focal position on the near side of the first focal position, and second image data obtained when the in-focus position is at a third focal position on the far side of the focal position at which the background is in the focused state, a first region that includes the object and a second region that includes the background.

[0012] Additional features and aspects of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the attached drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the present invention and, together with the description, serve to explain the principles of the present invention.

[0014] Fig. 1 is a block diagram illustrating the configuration of an image capture apparatus 100 according to an exemplary embodiment of the present invention.

[0015] Fig. 2 is a block diagram illustrating the configuration of the region distinguishing circuit 112 of Fig. 1 according to an exemplary embodiment of the present invention.

[0016] Fig. 3 is a flowchart illustrating the image capture processing operation of the image capture apparatus 100 according to an exemplary embodiment of the present invention.

[0017] Fig. 4 is a flowchart illustrating the region distinguishing processing operation of the image capture apparatus 100 according to an exemplary embodiment of the present invention.

[0018] Fig. 5 is a block diagram illustrating the configuration of an image capture apparatus 500 according to an exemplary embodiment of the present invention.

[0019] Fig. 6 is a block diagram illustrating the configuration of the blurred background image generation unit 513 of Fig. 5 according to an exemplary embodiment of the present invention.

[0020] Fig. 7 is a flowchart illustrating the background blurring processing operation of the image capture apparatus 500 according to an exemplary embodiment of the present invention.

[0021] Figs. 8A and 8B illustrate examples of the relationship between the distance of a scanned pixel from the object region and the gain value in the image capture apparatus 500 according to an exemplary embodiment of the present invention.

[0022] Fig. 9 illustrates another example of the relationship between the distance of a scanned pixel from the object region and the gain value in the image capture apparatus 500 according to an exemplary embodiment of the present invention.

[0023] Fig. 10 illustrates a distance map of the image capture apparatus 500 according to an exemplary embodiment of the present invention.

[0024] Figs. 11A, 11B and 11C illustrate the blurring and bleeding that occur in standard image processing.

DESCRIPTION OF THE EMBODIMENTS

[0025] Various exemplary embodiments, features, and aspects of the present invention will be described in detail below with reference to the drawings.

[0026] First, the overall configuration of an image capture apparatus according to an exemplary embodiment of the present invention will be described with reference to Fig. 1. Fig. 1 is a block diagram illustrating the configuration of the image capture apparatus 100 according to an exemplary embodiment of the present invention.

[0027] The image capture apparatus 100 includes a control unit 117, which controls the entire image capture apparatus 100. The control unit 117 is constituted by a central processing unit (CPU) or a micro processing unit (MPU) and controls the operation of each circuit described below. The control unit 117 also controls the actuation of a diaphragm (not illustrated). An image capture control circuit 116 controls a diaphragm actuation mechanism (not illustrated) that changes the aperture diameter of the diaphragm according to a signal from the control unit 117.

[0028] Additionally, the control unit 117 controls the actuation of a focus lens (not illustrated) inside an imaging lens 101. The image capture control circuit 116 controls a lens actuation mechanism (not illustrated) that performs focusing by moving the focus lens in the optical axis direction according to a signal from the control unit 117. The lens actuation mechanism includes a stepper motor or a direct-current (DC) motor as its actuation source. As lenses inside the imaging lens 101, a variable magnification lens and a fixed lens are provided in addition to the focus lens, and a lens unit is configured to include these lenses.

[0029] An image sensor 102 is formed by a charge-coupled device (CCD) sensor, a complementary metal-oxide-semiconductor (CMOS) sensor, or another sensor, and its surface is covered with RGB color filters, such as a Bayer array, which enables capture of a color image. An object image incident through the imaging lens 101, including the focus lens, is formed on the image sensor 102. The image sensor 102 photoelectrically converts the object image to generate image data. The generated image data is then stored in a memory 103.

[0030] The control unit 117 calculates a shutter speed and an aperture value that give the entire image data a correct exposure, and calculates an actuation amount of the imaging lens 101 for focusing on the object. Then the exposure values (shutter speed and aperture value) calculated by the control unit 117 and information indicating the actuation amount of the imaging lens 101 are output to the image capture control circuit 116. Exposure control and focus adjustment are performed on the basis of the respective values.

[0031] A color conversion matrix circuit 104 applies color gain to the captured image data so that it is reproduced in optimal colors, and converts it into color-difference signals R-Y and B-Y. A low-pass filter (LPF) circuit 105 is used to limit the frequency bands of the color-difference signals R-Y and B-Y. A chroma suppression (CSUP) circuit 106 is used to suppress false color signals in saturated portions of the image data band-limited by the LPF circuit 105.

[0032] On the other hand, the captured image data is also output to a luminance signal generation circuit 107. The luminance signal generation circuit 107 generates a luminance signal Y from the input image data. An edge enhancement circuit 108 performs edge enhancement processing on the generated luminance signal Y.

[0033] An RGB conversion circuit 109 converts the color-difference signals R-Y and B-Y output from the CSUP circuit 106 and the luminance signal Y output from the edge enhancement circuit 108 into RGB signals. A gamma correction circuit 110 performs gradation correction on the converted RGB signals. Thereafter, a color luminance conversion circuit 111 converts the gradation-corrected RGB signals into YUV signals.

[0034] A region distinguishing circuit 112 distinguishes between the object region and the background region of the image data converted into YUV signals. The detailed configuration of the region distinguishing circuit 112 will be described below. An image processing unit 113 performs image processing, such as processing for blurring the background region. A Joint Photographic Experts Group (JPEG) compression circuit 114 compresses the image data processed by the image processing unit 113 using the JPEG method or the like, and stores the compressed image data on an external or internal recording medium 115.

[0035] Next, the specific configuration of the region distinguishing circuit 112 will be described. Fig. 2 is a block diagram illustrating the configuration of the region distinguishing circuit 112 of Fig. 1. As illustrated in Fig. 2, the region distinguishing circuit 112 includes an edge detection unit 201, an edge subtraction unit 202, an edge integral value calculation unit 203, an edge integral value evaluation unit 204, and a region map generation unit 205.

[0036] Hereinafter, the operation of the image capture apparatus 100 according to the present exemplary embodiment will be described in detail with reference to Figs. 3 and 4. The following processing procedure is stored in a memory (not illustrated) in the control unit 117 as a computer program (software), and the CPU (not illustrated) in the control unit 117 reads and executes the computer program.

[0037] First, the image capture processing operation of the image capture apparatus 100 will be described with reference to Fig. 3. Fig. 3 is a flowchart illustrating the image capture processing operation of the image capture apparatus 100.

[0038] After the state of a switch 1 (SW1) for performing image-forming standby operations, such as exposure control and focus adjustment, is shifted to the ON position, and the image-forming standby operations for focusing on the object and obtaining a correct exposure are performed, the state of a switch 2 (SW2) for performing the image forming operation is shifted to the ON position. At this time, at step S301, the control unit 117 acquires the current distance to the object. In this process, the distance to the object can be calculated based on, for example, the lens position at which the object is in the focused state.

[0039] At step S302, the control unit 117 actuates the imaging lens 101 so as to focus on the background. At this time, the photographer may designate the in-focus position of the background by operating the image capture apparatus 100, or the image capture apparatus 100 may automatically perform focus detection on a plurality of areas within the field of view in order to determine the in-focus position of the background.

[0040] At step S303, the control unit 117 acquires the current distance to the background. In this process, the distance to the background can be calculated based on, for example, the lens position at which the background is in the focused state. At step S304, the control unit 117 actuates the imaging lens 101 so that the background is located at the front edge of the depth of field. That is, the imaging lens 101 is moved to a position (a third focal position) on the far side of the in-focus position of the background, within the range where the background is within the depth of field.

[0041] At step S305, the control unit 117 performs control to execute the image forming operation. The image data generated by the image sensor 102 through the image forming operation is stored in the memory 103. In the image data obtained through this image forming operation, the background side, which is within the range of the depth of field of the focus position, is in focus, while the object located in front of it is blurred more than the background, because the object is outside the depth of field of the focus position, and its amount of blur is greater than when the background itself is in focus.

[0042] At step S306, the control unit 117 determines whether the background is within the same depth of field as the depth of field of the object when the object is in the focused state, on the basis of the distance to the background and the distance to the object obtained earlier (based on the image forming conditions). If the background is present within the same depth of field as the depth of field of the object (YES at step S306), the processing proceeds to step S307. On the other hand, if the background is not present within the same depth of field as the depth of field of the object (NO at step S306), the processing proceeds to step S309.
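The decision at step S306 can be illustrated with the standard thin-lens depth-of-field formulas based on the hyperfocal distance. The formulas are textbook optics, not taken from the patent, and all numeric values (focal lengths, circle-of-confusion diameters) in the usage note are made-up examples for illustration.

```python
def dof_limits(f_mm, f_number, subject_dist_mm, coc_mm=0.03):
    """Near/far depth-of-field limits from the hyperfocal distance.
    f_mm: focal length, subject_dist_mm: focus distance,
    coc_mm: circle-of-confusion diameter (sensor-size dependent)."""
    h = f_mm ** 2 / (f_number * coc_mm) + f_mm      # hyperfocal distance
    near = h * subject_dist_mm / (h + (subject_dist_mm - f_mm))
    if subject_dist_mm >= h:
        far = float("inf")                           # everything beyond is sharp
    else:
        far = h * subject_dist_mm / (h - (subject_dist_mm - f_mm))
    return near, far

def background_within_dof(f_mm, f_number, obj_dist_mm, bg_dist_mm, coc_mm=0.03):
    # Mirrors the S306 branch: if the background falls inside the object's
    # depth of field, a single in-focus shot cannot separate the two,
    # and the processing takes the S309 path instead of S307.
    near, far = dof_limits(f_mm, f_number, obj_dist_mm, coc_mm)
    return near <= bg_dist_mm <= far
```

With these formulas, a short focal length and small circle of confusion (a small-sensor compact, e.g. f = 5 mm at f/2.8) places a background at 5 m inside the depth of field of an object at 2 m, whereas a 50 mm lens at the same aperture does not, which matches the motivation in paragraphs [0003] and [0004].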

[0043] First, the processing at step S307 will be described. At step S307, the control unit 117 actuates the imaging lens 101 so as to focus on the object. At step S308, the image capture apparatus 100 performs the image forming operation and stores the image data generated through the image forming operation in the memory 103.

[0044] Next, the processing at step S309 will be described. At step S309, the control unit 117 actuates the imaging lens 101 so that the object is located at the rear edge of the depth of field. That is, the imaging lens 101 is moved to a position (a second focal position) on the near side of the in-focus position (the first focal position) of the object, within the range where the object is within the depth of field. At step S310, the image capture apparatus 100 performs the image forming operation and stores the image data generated through the image forming operation in the memory 103.

[0045] Next, the region distinguishing processing operation of the image capture apparatus 100 will be described with reference to Fig. 4. Fig. 4 is a flowchart illustrating the region distinguishing processing operation of the region distinguishing circuit 112. More specifically, Fig. 4 is a flowchart illustrating the processing for distinguishing between the object region and the background region of the image data.

[0046] At step S401, the edge detection unit 201 performs band-pass filtering on the image data focused on the object side and on the image data focused on the background side, obtained by the processing illustrated in Fig. 3, and takes absolute values to acquire the edges corresponding to the respective image data.

[0047] "the image Data, which focused on the object side are image data obtained at steps S308 and S310. In addition, the "image data, which focused on the side of the background, are image data obtained at step S305. The data depicted is I, focused on the object side, are an example of the first image data, and image data, which focused on the side of the background, are an example of second image data.

[0048] At step S402, the edge subtraction unit 202 subtracts the edges of the image data focused on the background side from the edges of the image data focused on the object side, for each pixel, to generate edge-difference image data (hereinafter referred to as the edge-difference image data). At step S403, the edge integral value calculation unit 203 divides the edge-difference image data generated at step S402 into a plurality of regions and integrates the edge values within each region.

[0049] At step S404, the edge integral value evaluation unit 204 compares the integral edge value of each region calculated at step S403 with a predetermined threshold value. If the integral edge value is equal to or greater than the predetermined threshold value, the edge integral value evaluation unit 204 determines that the region is the object region. On the other hand, if the integral edge value is less than the predetermined threshold value, the edge integral value evaluation unit 204 determines that the region is the background region. The predetermined threshold value may be a predefined fixed value, or may be obtained adaptively on the basis of the histogram distribution of the edges of the image data.

[0050] At step S405, the region map generation unit 205 generates a region map by which the object region and the background region can be distinguished, based on the result of the determination at step S404. In the region map, for example, the combination ratio is represented by the pixel values of the image data. To make the unevenness of the boundary inconspicuous in the region map, a low-pass filter may be applied to the boundary between the object region and the background region. The object region above is an example of the first region, and the background region above is an example of the second region.
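Steps S402 to S405 can be sketched end to end: per-pixel edge subtraction, block-wise integration, thresholding, and expansion back into a pixel-resolution region map. The block size, the mean-based adaptive threshold, and the binary (rather than smoothed) map are illustrative assumptions, not values specified in the patent.

```python
import numpy as np

def region_map(edges_obj, edges_bg, block=8, thresh=None):
    """Sketch of steps S402-S405. edges_obj/edges_bg are the edge maps of
    the object-focused and background-focused frames; returns a map with
    1.0 for object blocks and 0.0 for background blocks."""
    diff = edges_obj - edges_bg                      # S402: edge difference
    h, w = diff.shape
    hb, wb = h // block, w // block
    # S403: integrate the edge values over each block-shaped region.
    sums = diff[:hb * block, :wb * block].reshape(
        hb, block, wb, block).sum(axis=(1, 3))
    if thresh is None:
        thresh = sums.mean()                         # stand-in adaptive threshold
    labels = (sums >= thresh).astype(float)          # S404: object vs background
    # S405: expand the block labels back to pixel resolution as a region map.
    return np.kron(labels, np.ones((block, block)))
```

In a real implementation the binary map would then be smoothed at the object/background boundary, as paragraph [0050] suggests, so that the later combination ratio varies gradually.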

[0051] Next, the processing for blurring the background region according to the present exemplary embodiment will be described. The image processing unit 113 performs special filter processing on the captured image data to generate blurred image data IMG2. The image data targeted for the special filter processing is the image data captured at one of steps S305, S308 and S310.

[0052] In the above special filter processing, filtering is performed on the image data based on a designated filter shape. In this filtering, a brightness-saturated pixel is interpolated by multiplying pixels having a predetermined brightness value by an arbitrarily set gain value K.
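A minimal sketch of the saturated-pixel gain in paragraph [0052], assuming 8-bit data: pixels at the saturation brightness are multiplied by a gain K before the blur filter runs, so a clipped highlight spreads energy the way a truly bright light source would. The saturation level and the value of K are illustrative assumptions, not values given in the patent.

```python
import numpy as np

def boost_saturated(img, sat_level=255.0, gain_k=4.0):
    # Multiply pixels at (or above) the saturation brightness by the gain K,
    # interpolating the true exposure that clipping discarded; the blur
    # filter is then applied to the boosted data.
    out = img.astype(float)
    out[out >= sat_level] *= gain_k
    return out
```

After blurring, the result would be clipped back to the output range; that final clip is omitted here for brevity.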

[0053] Next, the image processing unit 113 combines the image data IMG1 focused on the object side, obtained at step S308 or S310, and the blurred image data IMG2, based on the above region map. An example of the processing for combining the image data will now be described. The image processing unit 113 combines the image data IMG1 focused on the object side and the blurred image data IMG2 on the basis of α (0≤α≤1) obtained from the pixel values of the region map, and generates combined image data B. That is, the image processing unit 113 calculates each pixel B[i, j] of the combined image data B using the following Equation 1.

B[i, j] = IMG1[i, j]*α[i, j] + IMG2[i, j]*(1-α[i, j])   (Equation 1)
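Equation 1 translates directly into a per-pixel blend; the array shapes and the convention that α = 1 marks the object are assumptions for illustration.

```python
import numpy as np

def combine(img1, img2, alpha):
    """Per-pixel blend of Equation 1:
    B = IMG1 * alpha + IMG2 * (1 - alpha),
    where alpha comes from the region map (1 = object, 0 = background)."""
    alpha = alpha.astype(float)
    return img1 * alpha + img2 * (1.0 - alpha)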

[0054] The combined image data B obtained through the above-described processing is obtained as image data with a blurred background. In addition, the processing for generating the blurred background image data in the present exemplary embodiment is not limited to the above example.

[0055] For example, the blurred background image data can be obtained by performing the special filter processing only on the background region of the image data focused on the object side, on the basis of the region map. The blurred image data IMG2 can also be generated by reducing the captured image data and then enlarging the reduced image data to return it to its original size.
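The reduce-then-enlarge blur in paragraph [0055] can be sketched with block averaging for the reduction and pixel repetition for the enlargement. Real firmware would use proper resampling filters, so this is only an assumption-laden illustration of the principle: detail discarded by downscaling cannot be recovered by upscaling, which is what produces the blur.

```python
import numpy as np

def blur_by_resize(img, factor=4):
    """Blur by reducing the image and enlarging it back to its original size.
    Reduction: average over factor x factor blocks; enlargement: pixel repeat."""
    h, w = img.shape
    hs, ws = h // factor, w // factor
    small = img[:hs * factor, :ws * factor].astype(float)
    small = small.reshape(hs, factor, ws, factor).mean(axis=(1, 3))
    return np.kron(small, np.ones((factor, factor)))
```

The reduction factor controls the blur strength: a larger factor discards more spatial detail and yields a stronger background blur.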

[0056] Alternatively, the blurred image data IMG2 can be generated by applying a low-pass filter to the captured image data. The captured image data here is the image data captured at one of steps S305, S308 and S310.

[0057] It thus becomes possible to perform special image processing on the background region segmented by the above processing.

[0058] Furthermore, the present invention is not limited to these exemplary embodiments, and various variations and modifications are possible within the scope of the present invention. For example, the order of the image forming operations for the image data focused on the background side and the image data focused on the object side can be interchanged.

[0059] In addition, the region detection processing according to the present exemplary embodiment is performed using the image data focused on the object side and the image data focused on the background side, but is not limited to this. For example, if other processing is additionally performed on an object that is present on the near side of the main object, three or more pieces of image data can be used, including image data captured by moving the focus position toward the near side of that object. In this case, by performing the region detection processing according to the present exemplary embodiment with the main object regarded as the background, it becomes possible to divide the image data into multiple regions according to depth.

[0060] In addition, in the above exemplary embodiment, an example of capturing the image data at steps S305, S308 and S310 is illustrated, but the invention is not limited to this. For example, after capturing the image data focused on the object, the image data focused on the background side can be generated by performing image processing so that the image data becomes closer to the focused state of an image that could be obtained if the imaging lens were placed at the position at which the background is in the focused state. Similarly, the image data focused on the object side can be generated by performing image processing so that the image data becomes closer to the focused state of an image that could be obtained if the object were placed at the rear edge of the depth of field.

[0061] According to the present exemplary embodiment, after acquiring the respective edges of the image data captured at the in-focus position of the object or at a position shifted toward the near side of that in-focus position, and of the image data captured at a position shifted toward the far side of the in-focus position of the background, the distinction of the regions is performed on the basis of their difference values. Therefore, even in image data in which the depth difference between the object and the background is insufficient, it becomes possible to distinguish between the object and the background with high accuracy.

[0062] Image data of the background region, to which blurring has been given through exposure correction processing that estimates the actual exposure of a saturated pixel and applies a gain according to the estimated actual exposure amount, can be combined with image data of the object region, to which the blur effect has not been given. In this case, blurring and bleeding caused by the blur of a saturated pixel in the object region may appear in the background region near the object region.

[0063] Hereinafter, this phenomenon will be specifically described with reference to Figs. 11A to 11C. Fig. 11A illustrates image data captured when the object is in focus. Fig. 11B illustrates image data to which the blur effect has been given by performing exposure correction processing on the image data illustrated in Fig. 11A. When only the object region of the image data illustrated in Fig. 11A is segmented and combined with the image data illustrated in Fig. 11B, blurring and bleeding occur, as illustrated in Fig. 11C.

[0064] Thus, in the present exemplary embodiment, a configuration of an image capture apparatus and an image capture method will be described for obtaining image data in which the occurrence of blurring and bleeding in the background region in the immediate vicinity of the object region is suppressed.

[0065] First, the overall configuration of the image capture apparatus will be described with reference to Fig. 5. Fig. 5 is a block diagram illustrating the configuration of the image capture apparatus 500.

[0066] The image capture apparatus 500 includes a control unit 517, which controls the entire image capture apparatus 500. The control unit 517 is formed by a CPU, an MPU or the like and controls the operation of the respective circuits described below. An imaging lens 501 is detachably attached to the image capture apparatus 500 via a lens mount unit (not illustrated). An electrical contact unit 521 is provided in the lens mount.

[0067] The control unit 517 of the image capture apparatus 500 communicates with the imaging lens 501 through the electrical contact unit 521 and controls the actuation of a focus lens 518 and a diaphragm 522 inside the imaging lens 501. A lens control circuit 520, according to a signal from the control unit 517, controls a lens actuation mechanism 519, which moves the focus lens 518 in the optical axis direction to perform focusing.

[0068] The lens actuation mechanism 519 has a stepper motor or a direct-current (DC) motor as its actuation source. In addition, a diaphragm actuation circuit 524 controls a diaphragm actuation mechanism 523 to change the aperture diameter of the diaphragm 522 according to a signal from the control unit 517. In Fig. 5, only the focus lens 518 is illustrated as a lens inside the imaging lens 501, but in addition to this lens, a variable magnification lens or a fixed lens is provided, and a lens unit is configured to include these lenses.

[0069] In Fig. 5, an image sensor 502 is formed of a CCD sensor, a CMOS sensor, or another sensor, and its surface is covered with RGB color filters, such as a Bayer array, enabling formation of a color image. When an object image incident through the imaging lens 501, including the focus lens 518, which is attachable to and detachable from the image capture apparatus 500, is formed on the image sensor 502, image data is generated and stored in a memory 503.

[0070] The control unit 517 calculates a shutter speed and an aperture value so that the entire image data shows the correct exposure, and calculates the actuation amount of the focus lens 518 so as to focus on an object located within the focus area. Then, information indicating the exposure values (shutter speed and aperture value) calculated by the control unit 517 and the actuation amount of the focus lens 518 is output to an image capture control circuit 516, the diaphragm actuation circuit 524 and the lens control circuit 520. Exposure control and focus adjustment are performed on the basis of the respective values.

[0071] A color conversion matrix circuit 504 applies color gain so that the captured image data are reproduced in optimal colors, and converts them into color-difference signals R-Y and B-Y. A low-pass filter (LPF) circuit 505 limits the bandwidth of the color-difference signals R-Y and B-Y. A chroma suppression (CSUP) circuit 506 suppresses false color signals in saturated areas of the image data whose bandwidth has been limited by the LPF circuit 505.

[0072] On the other hand, the captured image data are also output to a luminance signal generation circuit 507, which generates a luminance signal Y from the input image data. An edge enhancement circuit 508 performs edge enhancement processing on the generated luminance signal Y.

[0073] An RGB conversion circuit 509 converts the color-difference signals R-Y and B-Y output from the CSUP circuit 506, and the luminance signal Y output from the edge enhancement circuit 508, into RGB signals. A gamma correction circuit 510 adjusts the gradation of the converted RGB signals. A color-luminance conversion circuit 511 then converts the gradation-corrected RGB signals into YUV signals.
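The luminance and color-difference signals handled by circuits 504 and 507 can be sketched numerically. This is a minimal sketch assuming the standard BT.601 luminance weights; the patent does not specify which conversion matrix the circuits actually use.

```python
# Sketch of the luminance / color-difference conversion described above.
# The BT.601 weights (0.299, 0.587, 0.114) are an assumption; the patent
# does not fix the matrix used by circuits 504 and 507.

def to_luma_chroma(r, g, b):
    """Convert one RGB pixel to (Y, R-Y, B-Y)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance signal Y
    return y, r - y, b - y                   # color-difference signals R-Y, B-Y

# Example: a neutral gray pixel yields (approximately) zero color difference.
y, ry, by = to_luma_chroma(128, 128, 128)
```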

[0074] A blurred background image generation unit 513 performs image processing to apply a blur effect to the converted image data. The detailed configuration of the blurred background image generation unit 513 will be described below. A JPEG compression circuit 514 compresses the image data processed by the blurred background image generation unit 513, using JPEG or a similar scheme, and stores the compressed image data on an external or internal recording medium 515.

[0075] A specific configuration of the blurred background image generation unit 513 is described next. Fig. 6 illustrates the configuration of the blurred background image generation unit 513. As illustrated in Fig. 6, the blurred background image generation unit 513 includes an edge detection block 601, an edge subtraction block 602, an edge integral value calculation block 603, an edge integral value evaluation block 604, an area map generation block 605, a blurring processing block 606, and an image merging block 607.

[0076] Next, with reference to the flowchart of Fig. 7, the blurring processing performed by the blurred background image generation unit 513 is described.

[0077] At step S701, the image capture apparatus 500 performs an image forming operation while focusing on the object. Next, the image capture apparatus 500 performs an image forming operation after moving the lens by a predetermined amount so as to focus on the background. After the plural pieces of image data have been captured at the different focus positions, at step S702 the edge detection block 601 detects edges in the image data captured while focusing on the object, and detects edges in the image data captured while focusing on the background.

[0078] One example of an edge detection method is to detect edges of the image data by band-pass filtering the captured image data and taking the absolute value. The edge detection method is not limited to this, and other methods may be used. Hereinafter, edges detected from the image data captured while focusing on the object are referred to as object-in-focus edge image data, and edges detected from the image data captured while focusing on the background are referred to as background-in-focus edge image data.
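The edge detection of step S702 can be sketched as follows. The 3-tap kernel and the sample rows are illustrative assumptions; the text fixes only "band-pass filtering followed by the absolute value", not a particular kernel.

```python
def detect_edges(row):
    """Edge strength of a 1-D pixel row: band-pass filter, then absolute value.
    The [-1, 2, -1] kernel is an illustrative choice, not taken from the patent."""
    out = [0.0] * len(row)
    for i in range(1, len(row) - 1):
        out[i] = abs(-row[i - 1] + 2 * row[i] - row[i + 1])
    return out

sharp = [10, 10, 10, 200, 200, 200]    # in-focus image: hard step edge
blurry = [10, 58, 105, 152, 200, 200]  # defocused image: gradual ramp
# the sharp image produces much stronger edge responses at the step
```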

[0079] At step S703 the edge subtraction block 602 subtracts the background-in-focus edge image data from the object-in-focus edge image data pixel by pixel to generate difference data of the edges (hereinafter referred to as edge-difference image data). At step S704 the edge integral value calculation block 603 divides the edge-difference image data generated at step S703 into a plurality of areas and integrates the edge values of the respective areas.

[0080] At step S705 the edge integral value evaluation block 604 compares the edge integral value of each area, computed at step S704, with a predetermined threshold value. If the edge integral value of an area is equal to or greater than the predetermined threshold value, the edge integral value evaluation block 604 determines that area to be the object area. On the other hand, if the edge integral value is less than the predetermined threshold value, it determines that area to be a background area. The predetermined threshold value may be a constant value defined in advance, or may be obtained adaptively based on the histogram distribution of the edge values of the image data.
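Steps S703 to S705 can be sketched end to end. The block size, threshold, and sample edge images below are illustrative assumptions; the patent fixes only the sequence subtract, integrate per area, compare with a threshold.

```python
# Sketch of steps S703-S705: subtract the two edge images, split the
# difference into areas, integrate each area, and classify by threshold.
# Block size and threshold are illustrative values, not taken from the patent.

def classify_areas(obj_edges, bg_edges, block=2, threshold=100):
    """Return a per-block map: True = object area, False = background area."""
    h, w = len(obj_edges), len(obj_edges[0])
    diff = [[obj_edges[y][x] - bg_edges[y][x] for x in range(w)] for y in range(h)]
    area_map = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            total = sum(diff[y][x]
                        for y in range(by, min(by + block, h))
                        for x in range(bx, min(bx + block, w)))
            row.append(total >= threshold)  # strong residual edges -> object area
        area_map.append(row)
    return area_map

obj = [[90, 90, 0, 0],
       [90, 90, 0, 0]]   # strong edges where the object is sharp
bg  = [[5, 5, 40, 40],
       [5, 5, 40, 40]]   # strong edges where the background is sharp
area_map = classify_areas(obj, bg)  # left block is the object area
```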

[0081] At step S706 the area map generation block 605 generates a segmentation map, which makes it possible to distinguish between the object area and the background area, based on the determination result at step S705. In the segmentation map, for example, the combining ratio is represented by the pixel value of the image data. To make the unevenness of the boundary inconspicuous in the segmentation map, a low-pass filter may be applied to the boundary between the object area and the background area.
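The boundary smoothing of step S706 can be sketched on one row of the segmentation map. The 3-tap moving average is an illustrative low-pass filter; the patent does not specify the filter used.

```python
# Sketch of step S706: the block decision becomes a per-pixel combining
# ratio alpha (1.0 = object, 0.0 = background), and the hard boundary is
# softened with a small moving-average low-pass filter (illustrative choice).

def smooth_boundary(alpha_row):
    """Apply a 3-tap moving average to one row of the segmentation map."""
    out = list(alpha_row)
    for i in range(1, len(alpha_row) - 1):
        out[i] = (alpha_row[i - 1] + alpha_row[i] + alpha_row[i + 1]) / 3.0
    return out

hard = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]  # hard object/background boundary
soft = smooth_boundary(hard)           # boundary now ramps down gradually
```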

[0082] At step S707 the blurring processing block 606 performs blurring processing, based on the segmentation map, on the image data captured when the object is in focus, to generate blurred image data.

[0083] Details of the blurring processing at step S707 are described next. In the blurring processing, filter processing is performed, based on a filter shape, on the image data captured when the object is in the focused state. In the filter processing, filtering is performed after a pixel having a predetermined luminance value is multiplied by a gain value K obtained from the table illustrated in Fig. 8A, thereby interpolating the luminance-saturated pixel. Hereinafter, the pixel being scanned (the target pixel) at the current point of the filter processing is referred to as the scanned pixel.

[0084] Fig. 8A illustrates the relationship between the distance from the scanned pixel to the object area and the gain value in graphical rather than tabular form. In practice, however, the gain values corresponding to the respective distances from the scanned pixel to the object area are specified in a table. The object area here may differ from that of the above-described segmentation map.

[0085] As illustrated in Fig. 8A, the gain value K for the scanned pixel takes a value of 0 or more and is determined depending on the distance r between the scanned pixel and the object area. For example, as illustrated in Fig. 8B, suppose there are scanned pixels at distances ra and rb from the object area (ra < rb). In this case, according to the table illustrated in Fig. 8A, a gain value Kh is set for the scanned pixel having the shorter distance ra from the object area, and a gain value K higher than Kh is set for the scanned pixel having the longer distance rb from the object area.

[0086] When the scanned pixel is within the object area, the gain value is set to Kmin. The gain values to be set in the table illustrated in Fig. 8A are determined based on the number of taps or the waveform of the filter. For example, if the number of filter taps is large, the distance r at which a large gain value is set is increased, in order to be free from the influence of pixels in the object area.
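The distance-dependent gain lookup of Figs. 8A and 9 can be sketched as follows. Kmin, K, and the ramp length are illustrative constants; as described above, the actual table values depend on the number of filter taps and the filter waveform.

```python
# Sketch of the gain selection of Figs. 8A and 9: the gain applied to a
# luminance-saturated pixel grows with its distance r from the object area,
# starting at Kmin inside the object area.
K_MIN, K_MAX, RAMP = 1.0, 4.0, 10.0  # assumed values, not from the patent

def gain_for_distance(r):
    """Gain for a scanned pixel at distance r from the object area."""
    if r <= 0:                        # pixel lies inside the object area
        return K_MIN
    if r >= RAMP:                     # far from the object: full gain K
        return K_MAX
    # linear ramp between Kmin and K, as in the table of Fig. 8A
    return K_MIN + (K_MAX - K_MIN) * r / RAMP

# nearer pixels receive a lower gain than farther ones, so blurring and
# bleeding near the object area are suppressed
```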

[0087] However, the present invention is not limited to this; gain values K and Kmin with constant values, as illustrated in Fig. 9, may also be used. The gain value K is an example of a predetermined second gain value, and the gain value Kmin is an example of a predetermined first gain value.

[0088] The advantages of using constant gain values K and Kmin are described next. For example, if the segmentation of the object area is made pixel by pixel, the gain value K is always set for a pixel classified as the background region, and the gain value Kmin is always set for a pixel classified as the object area. As a result, the blurring of saturated pixels in the object area is suppressed to a minimum, so that blurring and bleeding in the merged image data can be prevented. As illustrated in Fig. 9, K > Kmin is satisfied.

[0089] The advantages of determining the gain value using the table, as illustrated in Fig. 8A, are described next. For example, if an error occurs between the actual position of the object area and the position of the segmented object area, blurring and bleeding due to saturated pixels of the object region appear in the merged image data simply because of that error. In this case, the blurring and bleeding in the merged image data can be prevented by setting a lower gain value for pixels in the vicinity of the object area.

[0090] The above-described filter characteristics or gain values can be adaptively changed based on a distance map that includes depth information of the captured image data. Fig. 10 illustrates an example of a distance map in which the depth of the image data is divided into multiple levels. By referring to the distance map, as illustrated in Fig. 10, the filter shape is set large or the gain value is set high for an area with a large depth. On the other hand, the filter shape is set small or the gain value is set low for an area with a shallow depth.
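The distance-map adaptation of Fig. 10 can be sketched as a mapping from quantized depth levels to filter parameters. The level-to-parameter mapping below is an illustrative assumption; the patent states only that deeper areas get a larger filter or higher gain.

```python
# Sketch of the adaptation of Fig. 10: a deeper region gets a larger filter
# (stronger blur) and a higher gain; a shallow region gets a smaller filter
# and a lower gain. The exact mapping is illustrative, not from the patent.

def blur_params(depth_level, max_level=3):
    """Map a quantized depth level (0 = shallow .. max_level = deep)
    to an (odd filter tap count, gain) pair."""
    taps = 1 + 2 * depth_level            # 1, 3, 5, 7 taps
    gain = 1.0 + depth_level / max_level  # gain grows with depth
    return taps, gain

# the shallowest level is left almost untouched; the deepest level
# receives the widest filter and the highest gain
```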

[0091] Returning to the description of Fig. 7, at step S708 the image merging block 607 segments the object from the image data captured when the object is in the focused state, based on the segmentation map, and combines it with the blurred image data generated at step S707.

[0092] This merging processing only needs to be performed in a manner similar to, for example, the above-described first exemplary embodiment. In other words, the image merging block 607 combines the data IMG1[i, j] of the image captured when the object is in the focused state and the data IMG2[i, j] of the blurred image, based on α[i, j] (0 ≤ α ≤ 1) obtained from the pixel values of the segmentation map, to generate combined image data B[i, j]. That is, the image merging block 607 calculates the combined image data B[i, j] using Equation 1 of the first exemplary embodiment, where [i, j] denotes each pixel.
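Equation 1 itself is not reproduced in this section; a plausible reading, consistent with α being the combining ratio of the segmentation map, is the standard alpha blend sketched below.

```python
# Sketch of the merge at step S708, assuming Equation 1 has the usual
# alpha-blend form (the equation is defined in the first embodiment and
# not shown here): B = IMG1 * alpha + IMG2 * (1 - alpha).

def merge_pixel(img1, img2, alpha):
    """Combine one pixel of the in-focus image IMG1 with the blurred
    image IMG2, weighted by the segmentation-map value alpha (0..1)."""
    return img1 * alpha + img2 * (1.0 - alpha)

# alpha = 1 keeps the sharp object, alpha = 0 keeps the blurred background,
# and intermediate alpha values blend smoothly across the boundary
```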

[0093] Through the above processing, the image merging block 607 can obtain blurred image data in which the luminance-saturated pixels are interpolated while the occurrence of blurring and bleeding of the background region in the immediate vicinity of the object area is suppressed. In addition, the distance r between the scanned pixel and the object area in the present exemplary embodiment is the distance from the scanned pixel at its center to the nearest object area, but the distance r may instead be the distance from the coordinates of the center of mass of the main object, obtained in advance, to the scanned pixel.

[0094] Furthermore, the segmentation map in the present exemplary embodiment is generated from two pieces of image data: image data captured when the object is in the focused state and image data captured when the background is in the focused state. However, it may be generated from three or more pieces of image data, including image data captured when the front side of the object is in the focused state.

[0095] As described above, according to the present exemplary embodiment, image data that have been filter-processed while changing the gain value for each pixel according to its distance from the object area, and image data captured when the object is in the focused state, are combined on the basis of the segmentation map. It therefore becomes possible to generate image data having a blur effect with an appearance close to the actual exposure in high-luminance areas of the background, while suppressing the occurrence of blurring and bleeding at the periphery of high-luminance areas of the object, so that image data desirable for the photographer can be provided.

[0096] Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiments, and by a method the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiments. For this purpose, the program is provided to the computer, for example, via a network or from a recording medium of various types serving as the memory device (for example, a computer-readable medium). In such a case, the system or apparatus, and the recording medium where the program is stored, are included as being within the scope of the present invention.

[0097] While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications and equivalent structures and functions.

1. An image capture apparatus comprising:
a generation unit configured to generate first image data when the in-focus position is a first focal position, at which the object is in the focused state, or a second focal position on the near-distance side of the first focal position, and to generate second image data when the in-focus position is a third focal position on the far-distance side of the focal position at which the background is in the focused state; and
a distinguishing unit configured to distinguish, on the basis of the first image data and the second image data generated by the generation unit, between a first area that includes the object and a second area that includes the background.

2. The image capture apparatus according to claim 1, wherein the second focal position is a position within the range in which the object falls within the depth of field, and is a position on the near-distance side of the first focal position.

3. The image capture apparatus according to claim 1, wherein the third focal position is a position within the range in which the background falls within the depth of field, and is a position on the far-distance side of the focal position at which the background is in focus.

4. The image capture apparatus according to claim 1, wherein the generation unit is configured to generate the first image data by positioning a lens at one of the first focal position and the second focal position according to an image forming condition.

5. The image capture apparatus according to claim 1, further comprising:
a processing unit configured to perform predetermined filter processing on the first image data or the second image data; and
a combining unit configured to combine the first image data or the second image data that have been subjected to the predetermined filter processing by the processing unit, with the first image data generated by the generation unit, on the basis of the result of the distinguishing by the distinguishing unit.

6. The image capture apparatus according to claim 5, further comprising:
a determination unit configured to determine a gain value according to a distance between a target area in the image data and the area that includes the object, wherein the processing unit performs the predetermined filter processing on the image data after interpolating the target area using the gain value determined by the determination unit.

7. The image capture apparatus according to claim 6, wherein the determination unit is configured so that the determined gain value becomes lower as the distance between the target area and the area that includes the object decreases.

8. The image capture apparatus according to claim 6, wherein the determination unit is configured to determine a predetermined first gain value when the target area is included in the area that includes the object, and to determine a predetermined second gain value when the target area is included in an area different from the area that includes the object.

9. The image capture apparatus according to claim 6, wherein the predetermined first gain value is less than the predetermined second gain value.

10. The image capture apparatus according to claim 6, wherein the determination unit changes the gain value to be determined according to characteristics of the filter used in the predetermined filter processing.

11. The image capture apparatus according to claim 6, wherein the determination unit is configured to change the gain value to be determined according to the depth of field of the target area.

12. The image capture apparatus according to claim 6, wherein the processing unit is configured to change characteristics of the filter according to the depth of field of the target area.

13. The image capture apparatus according to claim 1, further comprising:
a processing unit configured to reduce the size of the first image data or the second image data and then enlarge the reduced image data back to the original size; and
a combining unit configured to combine the first image data or the second image data processed by the processing unit, with the first image data generated by the generation unit, on the basis of the result of the distinguishing by the distinguishing unit.

14. The image capture apparatus according to claim 1, further comprising:
a processing unit configured to apply a low-pass filter to the first image data or the second image data; and
a combining unit configured to combine the first image data or the second image data processed by the processing unit, with the first image data generated by the generation unit, on the basis of the result of the distinguishing by the distinguishing unit.

15. A control method for an image capture apparatus, the method comprising:
distinguishing, on the basis of first image data obtained when the in-focus position is a first focal position at which the object is in the focused state or a second focal position on the near-distance side of the first focal position, and second image data obtained when the in-focus position is a third focal position on the far-distance side of the focal position at which the background is in focus, between a first area that includes the object and a second area that includes the background.

16. A storage medium storing a program for controlling an image capture apparatus, the program causing a computer to execute processing comprising:
distinguishing, on the basis of first image data obtained when the in-focus position is a first focal position at which the object is in the focused state or a second focal position on the near-distance side of the first focal position, and second image data obtained when the in-focus position is a third focal position on the far-distance side of the focal position at which the background is in focus, between a first area that includes the object and a second area that includes the background.



 

Same patents:

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to image capturing devices. The result is achieved due to that the image capturing device comprises an image capturing unit configured to capture an image of an object through an optical system; a display unit configured to display an image captured by the image capturing unit on a screen; a determination unit configured to simultaneously determine a plurality of touch positions on the screen where an image is displayed; and a control unit configured to smoothly adjust the focusing state in accordance with change in distance between a first determined touch position and a second determined touch position in order to change the focusing area.

EFFECT: broader technical capabilities of the image capturing device.

13 cl, 27 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to computer engineering. An image processing device for detecting, from image data generated by an image sensor formed by a plurality of pixels, a signal from a defective pixel of the image sensor comprises a first determination unit for obtaining a first determination value indicating the value of the difference in signal strength between a pixel of interest and a plurality of pixels located near the pixel of interest; a second determination unit for obtaining a second determination value indicating the distribution width of the difference in signal strength between the pixel of interest and the plurality of pixels located near the pixel of interest; and a detection unit for detecting if the signal from the pixel of interest is a signal from a detective pixel using the first determination value and the second determination value, wherein the first determination unit obtains the first determination value by obtaining the difference in signal strength between the pixel of interest and each of the plurality of pixels located near the pixel of interest, obtaining from each difference value indicating the probability that the signal from the pixel of interest is a signal from a defective pixel, and multiplying the obtained values.

EFFECT: high accuracy of detecting a defective pixel.

11 cl, 22 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to an image forming apparatus. The result is achieved due to that the image forming apparatus includes a control unit and a detector which includes a plurality of pixels and which performs an image capturing operation for outputting image data corresponding to emitted radiation or light. The image capturing operation includes a first image capturing operation in which the detector is scanned in a first scanning region which corresponds to part of the plurality of pixels to output image data in the first scanning region, and a second image capturing operation in which the detector is canned in a second scanning region larger than the first scanning region to output image data in the second scanning region. The control unit prompts the detector to perform an initialisation operation for initialising a conversion element during a period between the first image capturing operation and the second image capturing operation in accordance with the switch from the first scanning region to the second scanning region.

EFFECT: design of a device capable of reducing the difference in level which might arise in a captured image and which depends on the scanning region to prevent considerable deterioration of image quality.

9 cl, 8 dwg

FIELD: physics, computer engineering.

SUBSTANCE: group of inventions relates to image processing technologies. An image processing device for reconstruction processing for correcting image quality deterioration due to aberration in an optical image-forming system. The image processing device comprises a dividing means for dividing image data of colours of colour filters into image data of corresponding colours of colour filters. The device also includes a plurality of image processing means, each designed to perform reconstruction processing by processing using an image data filter of one of the corresponding colours divided by said dividing means.

EFFECT: fewer false colours through image reconstruction processing in a RAW image, as well as reduced load on image reconstruction processing.

10 cl, 33 dwg

FIELD: physics.

SUBSTANCE: apparatus for adjusting a magnetooptical system for forming a beam of protons consists of a pulsed electromagnet which is formed by a pair or a system of pairs of thin conductors directed along the axis of a proton graphic channel spread in a transverse plane. A scaling array of metal plates mounted in a frame is placed at the output of the electromagnet. The method of adjusting a magnetic system for forming a beam of protons and a method of matching magnetic induction of an imaging system involve generating a magnetic field, through which the beam of protons is passed, the direction of said beam through the imaging system to a recording system by which the image of the scaling array is formed. Upon obtaining a distorted image, the magnetic beam forming system is adjusted and magnetic induction of the magnetooptical imaging system is adjusted by varying current of lenses of said systems and retransmitting the beam of protons until the required images are formed.

EFFECT: high quality of adjustment.

4 cl, 14 dwg

FIELD: radio engineering, communication.

SUBSTANCE: user sets, in a photograph display device 370B, the fact that a physical address 2000 represents a recording device which controls 370B display of photographs in place of the physical address 2000. According to that setting, the photograph display device 370B defines a logic address as a recording device controlled by consumer electronics control (CEC) devices. When the user performs operations with the recording device 210B on a disc, which is a CEC-incompatible device, using a remote control transmitter 277, a television receiver 250B generates a CEC control command addressed to the disc recording device 210B. The photograph display device 370B detects a CEC control command, converts the CEC control command to an infrared remote control command and transmits the infrared remote control command from the infrared transmission module 384 to the disc recording device 210B.

EFFECT: controlling operations of a controlled device, which processes only a control signal in a second format based on a control signal in a first format.

11 cl, 31 dwg

FIELD: physics.

SUBSTANCE: disclosed apparatus includes a means (100) for providing an aerosol designed to generate an aerosol stream (108) with average particle diameter of the disperse phase of less than 10 mcm in a screen formation area, a means (200) of providing a protective air stream designed to generate a protective air stream (210, 211) on two sides of the aerosol stream (108), wherein the aerosol stream (108) and the protective air stream (210, 211) have a non-laminar, locally turbulent flow near an obstacle on the flow path, wherein the Reynolds number for said streams near outlet openings (134, 215, 216) is in the range from 1300 to 3900.

EFFECT: improved method.

17 cl, 9 dwg

FIELD: physics.

SUBSTANCE: image forming process includes a first image forming process for outputting image data in accordance with illumination of a detector with radiation or light in an illumination field A, which corresponds to part of a plurality of pixels, and a second image forming process for outputting image data in accordance with illumination of a detector 104 with radiation or light in an illumination field B which is wider than the illumination field A. In accordance with transfer from illumination in the illumination field A to illumination in the illumination field B, operation of the detector is controlled such that the detector performs an initiation process for initiating conversion elements during the period between the first and second image forming processes.

EFFECT: weaker ghost image effect which can appear in an image resulting from FPD operation, and which is caused by the illumination region, and preventing considerable drop in image quality without complex image processing.

7 cl, 21 dwg

FIELD: physics.

SUBSTANCE: computer has a video card; in the prototype television camera, the first television signal sensor is based on a charge-coupled device (CCD) matrix with "row-frame transfer" and the use of an additional pulse former for clock power supply of the photodetector provides summation in the memory section of charge signals accumulated in its photodetector section. As a result, sensitivity is levelled on the entire field of the composite image.

EFFECT: high signal-to-noise ratio at the output of the CCD matrix of the first television signal sensor owing to summation, in its memory section, of charge packets formed in the photodetector section.

4 cl, 12 dwg, 3 tbl

FIELD: radio engineering, communication.

SUBSTANCE: invention provides an optical-electronic system which enables to measure density of fluorescence radiance in the UV spectral range, arising from ionisation of atmospheric nitrogen, and converting the obtained information to a visual image of distribution of levels of radioactive contamination on the underlying surface.

EFFECT: faster aerial radiological survey of an area due to shorter flight time of the aircraft and high reliability of instrument measurement data.

3 cl, 2 dwg

FIELD: radio engineering.

SUBSTANCE: for authentication of data exchange, identifier of decoder receiver is used, based on number of smart-card and gate is used, made with possible receipt of data from receiver-decoder, which are sent via non-network protocol and transformation of these data to inter-network protocol for their following transfer to internet service provider.

EFFECT: higher efficiency.

2 cl, 10 dwg

FIELD: communication systems.

SUBSTANCE: transfer system for transfer of transport flow of MPEG-2 standard from transmitter 10 to receiver 14, having check channel 16, along which receiver 14 can transfer selection criterion for selection of information blocks of MPEG-2 standard to transmitter 10, transmitter 10 has selector 38, which receivers required criteria and then filters information blocks in accordance to these criteria prior to transfer.

EFFECT: higher efficiency.

4 cl, 3 dwg

FIELD: technology for broadcast transmissions of digital television, relayed together with multimedia applications.

SUBSTANCE: method includes transmission of digital signal, having additional data flow, appropriate for compressed video images and data flow, appropriate for at least multimedia application, and also service signals, meant for controlling aforementioned data flows, service signals is determined, appropriate for series of synchronization signal, including series, meant for assignment of multimedia signal, meant for recording execution parameters of aforementioned assigned multimedia application, after that multimedia application is loaded and multimedia application is initialized with aforementioned execution parameters.

EFFECT: possible interactive operation of multimedia application with user.

2 cl, 2 dwg

FIELD: engineering of receivers-decoders used in broadcasting systems such as television broadcasting system, radio broadcasting system, cell phone communication system or other similar systems.

SUBSTANCE: method includes performing transmission to receiver-decoders through broadcasting system of a command, ordering receiver-decoders to perform an action; when command is receiver, identifier of command stored in current command is compared to identifiers of commands stored in memory of current decoder-receiver; aforementioned action is only executed in case when command identifier is not stored in memory.

EFFECT: transmitted commands are only executed once and information related to each receiver-decoder malfunction and may be useful for detecting and repairing malfunction is extracted remotely.

2 cl, 10 dwg

Personal computer // 2279708

FIELD: engineering of computer hardware.

SUBSTANCE: personal computer contains system block including system board, processor, random-access memory, hard drive with controller, video card, sound card, disk drive, monitor, keyboard and mouse. Personal computer additionally contains digital monitor, television card and compression block.

EFFECT: expanded functional capabilities; the personal computer can receive a digital television broadcast signal, reproduce it without loss of quality on the full screen of the display, and record it in compressed form onto the computer's hard drive.

22 dwg, 1 tbl

FIELD: medicine; medical diagnostic technique.

SUBSTANCE: the tomographic system with topography function contains an X-ray emitter, a profondometer, a rotating console, a mechanism for moving the profondometer along the console, an X-ray detection device in the form of a detector array, a moving mechanism for the detection device, a patient table with a radiotransparent deck, a stand with a rotation drive for the console, a data transmitter to the electronic computer, analog-to-digital converters and a control system. The scanning profondometer contains a slit collimator that cuts a narrow fan-shaped beam out of the working beam. It also has a drive that moves the beam synchronously with the X-ray detection device, which contains a crate carrying the detector array. The crate is mounted on the movable carriage of the detection-device moving mechanism so that it can rock about its axis, and is fitted with a drive that keeps the array oriented toward the focal point in any position of the carriage. In the central position, the radius of the detector array equals the distance from the focal point to the centre of the array in tomography mode. The moving mechanism of the detection device is connected to a synchronization device that orients the crate according to the position of the slit collimator, so that the detector array lies in the plane of the fan-shaped beam and the crate aperture points at the focal point of the X-ray tube.

EFFECT: wider detection of potential focal diseases in oncology patients within a single examination.

3 cl, 6 dwg

FIELD: physics, radio.

SUBSTANCE: the claimed invention relates to radio engineering, in particular to elements of television systems, and may be used in surveillance systems operated under low-temperature conditions. The video camera container contains external and internal casings and a holder that interacts with the external surface of the internal casing and the internal surface of the external casing along a closed contour. In cross-section, the shape of the external casing's internal surface is similar to that of the internal casing's external surface.

EFFECT: normal temperature conditions are provided for operation of the video camera while its position relative to the container's external casing remains invariable.

8 cl, 3 dwg

FIELD: medical equipment.

SUBSTANCE: the medical diagnostic X-ray system contains an X-ray emitter with a slit collimator connected to a high-frequency X-ray generator and to a programmable control unit with a PC, control panel and video monitor; a multielement linear or matrix X-ray detector connected to a digital electronic system for image translation, recording and pattern generation, itself connected to the programmable control unit; a mechanical scanner with a system for bringing the scanner to the irradiation level; and a protective cabin with a platform for the patient's feet. The system for bringing the scanner to the irradiation level contains a coordinate measuring apparatus connected to the scanner. The platform for the patient's feet rotates steadily, by means of a mechanical drive, around a vertical axis passing through the central ray of the X-ray beam, perpendicular to the X-ray beam plane. The coordinate measuring apparatus and the mechanical drive of the platform are connected to the programmable control unit to generate a transverse tomographic section. The rotation axis of the platform for the patient's feet is displaced relative to the central X-ray, along a line perpendicular to the central ray direction, by a value S ≤ R/2, where R = l·sin α, L is the detector length, l is the distance from the focus of the X-ray tube to the crossing point of the displacement line of the platform's rotation axis with the projection of the central ray onto the plane of this platform, and f is the focal length.

EFFECT: higher diagnostic accuracy and expanded operational capabilities of the device owing to the improved scanning system and more accurate orientation of the patient relative to it.

2 dwg

FIELD: medical equipment.

SUBSTANCE: the scanning low-dose radiographic apparatus comprises a mechanical scanning system on which the following components are serially installed: an X-ray radiator, slit collimators, and an X-ray receiver comprising multiline solid-state linear detectors installed along the width of the scanning zone. The detectors are installed in n parallel rows with a shift relative to each other in every row; the device comprises at least two slit collimators spaced apart along the direction of the X-ray beam and arranged in n series, their slots oriented at the focus of the X-ray radiator, while their projections are matched with the receiving apertures of the multiline solid-state linear detectors.

EFFECT: improved quality of X-ray images.

5 cl, 5 dwg

FIELD: medicine.

SUBSTANCE: the equipment contains an X-ray emitter with a slit collimator, connected to a high-frequency X-ray generator and to a programmed control block equipped with a computer, a control panel and a video monitor; a multielement linear or matrix X-ray detector connected to a digital electronic system for transformation, registration and formation of the image, connected to the programmed control block; a mechanical scanner with a system for bringing the scanner to the irradiation level; and a protective cabin with a platform for the patient's feet. The system for bringing the scanner to the irradiation level contains a coordinate scale connected to the scanner's electric motor. The platform for the patient's feet can rotate uniformly, by means of a mechanical drive, around a vertical axis passing through the central ray of the X-ray beam, perpendicular to the plane of the X-ray beam. The coordinate scale and the mechanical drive of the platform are connected to the programmed control block, which can form a transverse tomographic section. The platform for the patient's feet is also provided with fixators of the patient's position.

EFFECT: application of the given invention will raise the accuracy of diagnostics and expand the operational capabilities of the apparatus owing to the improved scanning system and more accurate orientation of the patient relative to it.

2 cl, 2 dwg
