Image processing device and method, and program

FIELD: information technologies.

SUBSTANCE: each of the modules from the module 23 for calculating the blur index to the module 27 for calculating the color saturation index extracts the quantitative value of a given characteristic from the input image and calculates a quantitative index for that individual characteristic, characterizing an evaluation of the input image based on that characteristic. For example, the module 24 for calculating the brightness index extracts brightness values from the input image as the quantitative values of a characteristic and calculates the brightness index, which characterizes an evaluation based on the distribution of brightness values in the region of the input image occupied by the object. The module 28 for calculating the total index calculates, based on the quantitative index of each individual characteristic, a total index characterizing an evaluation of the image-capture state of the input image.

EFFECT: more accurate and effective evaluation of the input image.

7 cl, 28 dwg

 

Technical field to which the invention relates

The present invention relates to an image processing device, an image processing method, and a program and, in particular, to an image processing device, method, and program capable of more appropriately evaluating the image-capture state of a captured image.

Background art

Hitherto, there has been known a technology for calculating an evaluation value indicating whether an image obtained with an image capture device was recorded satisfactorily, that is, an evaluation value characterizing the image-capture state of the image.

One example of such a technology extracts a quantitative value for each of several characteristics from the entire image, obtains an evaluation for each characteristic, and sums these evaluations to calculate a total score (see, for example, NPL 1). In particular, in the technology disclosed in NPL 1, scores are calculated for characteristics such as contour complexity, the size of the bounding rectangle, the degree of blur, and the like.

Citation list

Non-patent literature

NPL 1: Yan Ke, Xiaoou Tang, Feng Jing, "The Design of High-Level Features for Photo Quality Assessment", [online publication], [retrieved on 3 June 2009], Internet <URL: http://www.cs.cmu.edu/Kuk/photoqual/cvpr06photo.pdf>

Disclosure of the invention

Technical problem

However, with the above technology it was difficult to evaluate the image-capture state of an image properly. That is, in the technology disclosed in NPL 1, the quantitative value of each characteristic is extracted from the entire image and the evaluation values are computed from it. Therefore, depending on the characteristic used for the evaluation, it can become impossible to calculate a proper evaluation value.

For example, an image in which the region occupied by the background is simple, that is, in which the contour in the background region is not complex, is usually regarded as captured satisfactorily and receives a high evaluation. A typical image with a high evaluation is therefore one in which the region occupied by the object is large and has a complex contour, while the contour in the region occupied by the background is not complex. In this case, however, if the contour contrast is extracted as a single characteristic from the entire image and an evaluation is obtained based on contour complexity, the contour is complex over the image as a whole, so the resulting evaluation value is lower than the value that should properly have been obtained.

The present invention was made in view of such circumstances and aims to make it possible to evaluate the image-capture state of an image more appropriately.

Solution to the problem

An image processing device according to one aspect of the present invention includes: first evaluation value calculation means for extracting a quantitative value of a first characteristic from the entire input image and calculating, based on that quantitative value, a first partial evaluation value characterizing an evaluation of the input image with respect to the first characteristic; second evaluation value calculation means for extracting a quantitative value of a second characteristic from a specified region of the input image and calculating, based on that quantitative value, a second partial evaluation value characterizing an evaluation of the input image with respect to the second characteristic; and total evaluation value calculation means for calculating, based on the first partial evaluation value and the second partial evaluation value, a total evaluation value characterizing an evaluation of the image-capture state of the input image.

The second evaluation value calculation means may include: object region identification means for extracting, from the respective regions of the input image, the quantitative value of a third characteristic possessed by the region of the object in the input image, thereby identifying the object region in the input image; and calculation means for extracting the quantitative value of the second characteristic from the object region, in which the input image contains the object, and/or from the background region, in which the input image does not contain the object, and calculating the second partial evaluation value.

The calculation means may be configured to extract, as the quantitative value of the second characteristic, the brightness values of the respective parts of the object region in the input image, and to calculate the second partial evaluation value based on the distribution of brightness values in the object region.

The calculation means may be configured to extract, as the quantitative value of the second characteristic, the contour contrast in the respective parts of the background region in the input image, and to calculate the second partial evaluation value based on the complexity of the contour in the background region.

The first evaluation value calculation means may be configured to calculate the first partial evaluation value based on at least one of the following characteristics of the entire input image: the degree of blur, the color distribution, the average color saturation, and the variance of the color saturation.

The total evaluation value calculation means may be configured to calculate the total evaluation value by summing a value predetermined with respect to the first partial evaluation value and a value predetermined with respect to the second partial evaluation value.

The value predetermined with respect to the first partial evaluation value may be determined based on first partial evaluation values obtained in advance for a plurality of images with different evaluations of the image-capture state, and the value predetermined with respect to the second partial evaluation value may be determined based on second partial evaluation values obtained in advance for a plurality of images with different evaluations of the image-capture state.

An image processing method or program according to one aspect of the present invention includes the steps of: extracting a quantitative value of a first characteristic from the entire input image and calculating, based on that quantitative value, a first partial evaluation value characterizing an evaluation of the input image with respect to the first characteristic; extracting a quantitative value of a second characteristic from a specified region of the input image and calculating, based on that quantitative value, a second partial evaluation value characterizing an evaluation of the input image with respect to the second characteristic; and calculating, based on the first partial evaluation value and the second partial evaluation value, a total evaluation value characterizing an evaluation of the image-capture state of the input image.

In one aspect of the present invention, the quantitative value of the first characteristic is extracted from the entire input image, and the first partial evaluation value, characterizing an evaluation of the input image with respect to the first characteristic, is calculated based on that quantitative value. The quantitative value of the second characteristic is extracted from the specified region of the input image, and the second partial evaluation value, characterizing an evaluation of the input image with respect to the second characteristic, is calculated based on that quantitative value. The total evaluation value, characterizing an evaluation of the image-capture state of the input image, is calculated based on the first and second partial evaluation values.

Advantageous effects of the invention

In accordance with one aspect of the present invention, it becomes possible to evaluate the image-capture state of an image more properly.

Brief description of drawings

Fig. 1 illustrates an example configuration of an embodiment of the image processing device to which the present invention is applied.

Fig. 2 illustrates an example configuration of the blur index calculation module.

Fig. 3 illustrates an example configuration of the brightness index calculation module.

Fig. 4 illustrates an example configuration of the object extraction module.

Fig. 5 illustrates an example configuration of the brightness information extraction module.

Fig. 6 illustrates an example configuration of the color information extraction module.

Fig. 7 illustrates an example configuration of the contour information extraction module.

Fig. 8 illustrates an example configuration of the face information extraction module.

Fig. 9 illustrates an example configuration of the contour index calculation module.

Fig. 10 illustrates an example configuration of the color distribution index calculation module.

Fig. 11 illustrates an example configuration of the color saturation index calculation module.

Fig. 12 is a flowchart illustrating the slideshow display process.

Fig. 13 illustrates an example of a conversion table for the blur index.

Fig. 14 illustrates a method for determining the blur index.

Fig. 15 is a flowchart illustrating the blur degree calculation process.

Fig. 16 illustrates the creation of a contour map.

Fig. 17 illustrates the creation of local maxima.

Fig. 18 illustrates an example of a contour.

Fig. 19 is a flowchart illustrating the brightness index calculation process.

Fig. 20 is a flowchart illustrating the object map creation process.

Fig. 21 is a flowchart illustrating the brightness information extraction process.

Fig. 22 is a flowchart illustrating the color information extraction process.

Fig. 23 is a flowchart illustrating the contour information extraction process.

Fig. 24 is a flowchart illustrating the face information extraction process.

Fig. 25 is a flowchart illustrating the contour index calculation process.

Fig. 26 is a flowchart illustrating the color distribution index calculation process.

Fig. 27 is a flowchart illustrating the color saturation index calculation process.

Fig. 28 illustrates an example configuration of a computer.

Description of embodiments

Embodiments to which the present invention is applied will be described below with reference to the drawings.

Configuration of the image processing device

Fig. 1 illustrates an example configuration of an embodiment of the image processing device to which the present invention is applied.

This image processing device 11 calculates, for an input image obtained by capturing an image with an image capture device such as a camera, an evaluation value (hereinafter referred to as the total index) characterizing an evaluation of whether the input image was captured satisfactorily, i.e. the image-capture state of the input image. It is assumed that the closer the input image is to an image recorded by a professional photographer, the higher its evaluation and the smaller the value of the total index for that input image. Thus, the smaller the total index, the more satisfactorily the image was recorded.

The image processing device 11 includes a recording module 21, a receiving module 22, a blur index calculation module 23, a brightness index calculation module 24, a contour index calculation module 25, a color distribution index calculation module 26, a color saturation index calculation module 27, a total index calculation module 28, a display control module 29 and a display module 30.

The recording module 21 is formed of a hard disk or the like and has recorded on it a number of input images entered by the user using the image capture device. For example, it is assumed that each pixel of an input image has, as its pixel value, the values of R (red), G (green) and B (blue) components. The receiving module 22 reads an input image from the recording module 21 and supplies it to the modules from the blur index calculation module 23 to the color saturation index calculation module 27, and to the display control module 29.

From the input image supplied by the receiving module 22, the modules from the blur index calculation module 23 to the color saturation index calculation module 27 extract quantitative values of predetermined characteristics and calculate, for each characteristic, a quantitative index characterizing an evaluation of the input image based on that characteristic.

Namely, the blur index calculation module 23 extracts from the input image, as the quantitative value of a predetermined characteristic, the contour contrast of the image and calculates from it the blur index, characterizing the degree of blur of the input image. The brightness index calculation module 24 extracts from the input image, as the quantitative value of a predetermined characteristic, the brightness values and calculates from them the brightness index, characterizing an evaluation based on the distribution of brightness values in the foreground (object) region of the input image.

The contour index calculation module 25 extracts from the input image, as the quantitative value of a predetermined characteristic, the contour contrast of the image and calculates from it the contour index, characterizing an evaluation based on the complexity of the contour in the background region of the input image. The color distribution index calculation module 26 extracts from the input image, as the quantitative values of predetermined characteristics, the individual color components and calculates from them the color distribution index, characterizing an evaluation based on the color distribution in the input image.

The color saturation index calculation module 27 extracts from the input image, as the quantitative value of a predetermined characteristic, the color saturation and calculates from it the color saturation index, characterizing an evaluation based on the average and the variance of the saturation distribution in the input image. The modules from the blur index calculation module 23 to the color saturation index calculation module 27 supply the calculated blur index, brightness index, contour index, color distribution index and color saturation index to the total index calculation module 28.

Hereinafter, when there is no particular need to distinguish among the blur index, the brightness index, the contour index, the color distribution index and the color saturation index, they will also be referred to simply as the index of an individual characteristic.

The total index calculation module 28 calculates the total index based on the indexes of the individual characteristics supplied by the modules from the blur index calculation module 23 to the color saturation index calculation module 27, and supplies the total index to the display control module 29. Based on the total indexes coming from the total index calculation module 28, the display control module 29 selects, from among the input images supplied from the receiving module 22, several input images having high evaluations. In addition, the display control module 29 supplies the selected input images to the display module 30 and thereby controls the display of the input images. The display module 30 is implemented, for example, as a liquid crystal display, and displays the input images under the control of the display control module 29.

Configuration of the blur index calculation module

The blur index calculation module 23 shown in Fig. 1, described in more detail, is configured as shown in Fig. 2.

Namely, the blur index calculation module 23 includes a contour map creation module 61, a dynamic range determination module 62, a calculation parameter setting module 63, a local maximum generation module 64, a contour point extraction module 65, an extraction amount determination module 66, a contour analysis module 67 and a blur degree determination module 68.

Based on the input image supplied from the receiving module 22, the contour map creation module 61 determines the contour contrast of the input image in units of blocks of three sizes that differ from each other, and creates contour maps in which the pixel value represents the determined contour contrast. A contour map is created for each block size and, starting in order from the smallest block, these contour maps are referred to as the contour maps at scales SC1 to SC3. The contour map creation module 61 supplies these three contour maps to the dynamic range determination module 62 and the local maximum generation module 64.
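
As an illustration only, a minimal sketch of such multi-scale contour maps follows, assuming that the contour contrast of a block is taken as the mean absolute difference between adjacent pixels inside the block, and that the three block sizes are 2, 4 and 8 pixels; neither assumption is specified in the text.

```python
import numpy as np

def contour_map(gray, block):
    # One contour map: each output pixel is the edge contrast of one
    # block x block region, measured here (an assumption) as the mean
    # absolute difference between adjacent pixels inside the block.
    h, w = gray.shape
    gh, gw = h // block, w // block
    out = np.zeros((gh, gw), dtype=np.float32)
    for by in range(gh):
        for bx in range(gw):
            b = gray[by * block:(by + 1) * block,
                     bx * block:(bx + 1) * block].astype(np.float32)
            dx = np.abs(np.diff(b, axis=1)).mean()  # horizontal contrast
            dy = np.abs(np.diff(b, axis=0)).mean()  # vertical contrast
            out[by, bx] = (dx + dy) / 2.0
    return out

def contour_maps_sc1_to_sc3(gray, blocks=(2, 4, 8)):
    # Three maps, ordered from the smallest block size (scale SC1) upward.
    return [contour_map(gray, b) for b in blocks]
```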

Using the contour maps coming from the contour map creation module 61, the dynamic range determination module 62 determines the dynamic range, which is the difference between the maximum and minimum values of the contour contrast of the input image, and supplies the result of this determination to the calculation parameter setting module 63.

Based on the determination result supplied from the dynamic range determination module 62, the calculation parameter setting module 63 adjusts the calculation parameters used for contour point extraction so that the number of extracted contour points (hereinafter also referred to as the contour point extraction amount) used to determine the degree of blur of the input image takes an appropriate value. Here, the term "contour point" refers to a pixel forming a contour in the image.

The calculation parameters include a contour reference value used to determine whether a pixel is a contour point, and an extraction reference value used to determine whether the contour point extraction amount is appropriate. The calculation parameter setting module 63 supplies the contour reference value to the contour point extraction module 65 and the extraction amount determination module 66, and supplies the extraction reference value to the extraction amount determination module 66.

The local maximum generation module 64 divides each of the contour maps supplied from the contour map creation module 61 into blocks of a specified size and extracts the maximum pixel value of each block, thereby generating a local maximum. A local maximum is generated for each contour map scale and is supplied from the local maximum generation module 64 to the contour point extraction module 65 and the contour analysis module 67. Hereinafter, the local maxima generated from the contour maps at scales SC1 to SC3 will be referred to as the local maxima LM1 to LM3, respectively.
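
A sketch of the block-wise maximum operation described here, assuming for illustration a block size of 4 pixels and dropping trailing rows and columns that do not fill a whole block:

```python
import numpy as np

def local_maximum(cmap, block=4):
    # Divide the contour map into block x block cells and keep only the
    # maximum pixel value of each cell.
    h, w = cmap.shape
    gh, gw = h // block, w // block
    cells = cmap[:gh * block, :gw * block].reshape(gh, block, gw, block)
    return cells.max(axis=(1, 3))

# One local maximum per scale: LM1 to LM3 from the maps at SC1 to SC3.
# lm1, lm2, lm3 = [local_maximum(m) for m in contour_maps_sc1_to_sc3(gray)]
```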

The contour point extraction module 65 extracts contour points from the input image based on the contour reference value coming from the calculation parameter setting module 63 and on the local maxima coming from the local maximum generation module 64. In addition, the contour point extraction module 65 generates contour point tables holding information about the extracted contour points and supplies them to the extraction amount determination module 66. The contour point tables obtained from the local maxima LM1 to LM3 are referred to as the contour point tables ET1 to ET3, respectively.

The extraction amount determination module 66 determines, based on the contour point tables coming from the contour point extraction module 65 and the extraction reference value coming from the calculation parameter setting module 63, whether the contour point extraction amount is appropriate. When the extraction amount is not appropriate, the extraction amount determination module 66 notifies the calculation parameter setting module 63 of this fact. When the extraction amount is appropriate, the extraction amount determination module 66 supplies the contour reference value and the contour point tables to the contour analysis module 67.

The contour analysis module 67 analyzes the contour points of the input image based on the contour point tables coming from the extraction amount determination module 66 and supplies the result of this analysis to the blur degree determination module 68. Based on the analysis of the contour points, the blur degree determination module 68 determines the degree of blur, which is an index indicating the degree of blur of the input image, and supplies this degree of blur, as the blur index, to the total index calculation module 28.

Configuration of the brightness index calculation module

The brightness index calculation module 24 shown in Fig. 1, described in more detail, is configured as shown in Fig. 3.

Namely, the brightness index calculation module 24 is formed of an object extraction module 91, a multiplication module 92, a histogram generation module 93, a normalization module 94 and an index calculation module 95. The input image coming from the receiving module 22 is supplied to the object extraction module 91 and the multiplication module 92.

Based on the input image supplied from the receiving module 22, the object extraction module 91 creates an object map for extracting the region of the input image containing the object, and supplies this object map to the multiplication module 92.

For example, the pixel value of a pixel of the object map is set to "1" when the region of the input image located at the same place as that pixel is estimated to be a region containing the object, and is set to "0" when that region is estimated to be a region not containing the object. The object referred to here is assumed to be the target object in the input image that attracts the user's attention when the user glances at the input image, that is, the target object estimated to draw the user's eye. Therefore, the term "object" is not necessarily limited to a person.

By multiplying the pixel value of each pixel of the input image coming from the receiving module 22 by the pixel value of the corresponding pixel of the object map coming from the object extraction module 91, the multiplication module 92 generates an object image, which is an image of the object region of the input image, and supplies this object image to the histogram generation module 93. In the object image, the pixel values in the region occupied by the object take the same values as the pixel values of the input image at the same locations, while the pixel values in the background region not containing the object are set to "0". Thus, the multiplication performed in the multiplication module 92 identifies (extracts) the object region of the input image and generates an object image formed of the object portion.
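
This masking step is a direct element-wise product; a minimal sketch follows, assuming the input image is an H x W x 3 array and the object map an H x W binary array:

```python
import numpy as np

def object_image(input_image, object_map):
    # Pixels where the object map is 1 keep their input values;
    # pixels where it is 0 (the background region) become 0.
    return input_image * object_map[..., np.newaxis]
```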

Based on the object image coming from the multiplication module 92, the histogram generation module 93 generates a histogram of the brightness values of the object image and supplies this histogram to the normalization module 94. The normalization module 94 normalizes the histogram supplied from the histogram generation module 93 and supplies it to the index calculation module 95. Based on the histogram supplied from the normalization module 94, the index calculation module 95 calculates the brightness index and supplies it to the total index calculation module 28.
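
A sketch of the histogram and normalization steps, assuming Rec. 601 luma weights for the brightness value and normalization to unit sum (the text specifies neither):

```python
import numpy as np

def object_brightness_histogram(obj_img, bins=64):
    # Brightness (luma) of each pixel; the weights are an assumption.
    r, g, b = (obj_img[..., i].astype(np.float32) for i in range(3))
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    # Background pixels were zeroed out by the multiplication module,
    # so only strictly positive luma values belong to the object region.
    hist, _ = np.histogram(luma[luma > 0], bins=bins, range=(0, 256))
    hist = hist.astype(np.float32)
    return hist / hist.sum() if hist.sum() > 0 else hist
```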

Configuration of the object extraction module

The object extraction module 91 shown in Fig. 3, described in more detail, is configured as shown in Fig. 4.

Namely, the object extraction module 91 is formed of a brightness information extraction module 121, a color information extraction module 122, a contour information extraction module 123, a face information extraction module 124 and an object map creation module 125. The input image coming from the receiving module 22 is supplied to the modules from the brightness information extraction module 121 to the face information extraction module 124 of the object extraction module 91.

The modules from the brightness information extraction module 121 to the face information extraction module 124 extract quantitative values of characteristics of the object region from the input image supplied by the receiving module 22 and create information maps indicating the likelihood that each region of the input image is an object region.

In particular, the brightness information extraction module 121 extracts brightness values from the input image, creates a brightness information map indicating information relating to brightness in the respective regions of the input image, and supplies this brightness information map to the object map creation module 125. The color information extraction module 122 extracts components of predefined colors from the input image, creates color information maps indicating information relating to color in the respective regions of the input image, and supplies these color information maps to the object map creation module 125.

The contour information extraction module 123 extracts the contour contrast from the input image, creates contour information maps indicating information relating to contours in the respective regions of the input image, and supplies these contour information maps to the object map creation module 125. The face information extraction module 124 extracts from the input image the quantitative value of a characteristic possessed by a human face, creates a face information map indicating information relating to a human face as the object in the respective regions of the input image, and supplies this face information map to the object map creation module 125.

Hereinafter, when there is no particular need to distinguish the maps from the brightness information map to the face information map, output by the modules from the brightness information extraction module 121 to the face information extraction module 124, they will also be referred to simply as information maps. The information contained in these maps indicates the quantitative values of characteristics that are present to a greater degree in regions containing the object, and this information, arranged so as to correspond to the respective regions of the input image, constitutes an information map. Thus, an information map can be said to be information indicating the characteristic values in the respective regions of the input image.

Therefore, a region of the input image corresponding to a region of a map with a greater amount of information, i.e. a region with larger quantitative characteristic values, is a region more likely to contain the object, which makes it possible to identify, based on the information maps, the region of the input image in which the object is contained.

The object map creation module 125 linearly combines the information maps supplied by the modules from the brightness information extraction module 121 to the face information extraction module 124 to create the object map. Namely, the information (the quantitative characteristic values) of the respective regions of the maps, from the brightness information map to the face information map, is subjected to weighted summation for each co-located region, forming the object map. The object map creation module 125 supplies the created object map to the multiplication module 92.
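
A minimal sketch of this linear combination, assuming equal weights and a mean-value threshold for binarizing the result into the 0/1 object map described earlier (both are assumptions; the text gives neither the weights nor the binarization rule):

```python
import numpy as np

def create_object_map(info_maps, weights=None):
    # Weighted per-pixel sum of the information maps (brightness, color,
    # contour, face), all assumed to share the same shape and scale.
    if weights is None:
        weights = [1.0 / len(info_maps)] * len(info_maps)
    combined = np.zeros_like(info_maps[0], dtype=np.float32)
    for m, w in zip(info_maps, weights):
        combined += w * m.astype(np.float32)
    # Binarize: 1 = estimated object region, 0 = background region.
    return (combined >= combined.mean()).astype(np.uint8)
```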

Configuration of the brightness information extraction module

Next, more detailed configurations of the modules from the brightness information extraction module 121 to the face information extraction module 124 shown in Fig. 4 will be described with reference to Figs. 5 to 8.

Fig. 5 illustrates an example of a more detailed configuration of the brightness information extraction module 121.

The brightness information extraction module 121 is formed of a brightness image generation module 151, a pyramid image generation module 152, a difference calculation module 153 and a brightness information map creation module 154.

Using the input image supplied from the receiving module 22, the brightness image generation module 151 generates a brightness image, in which the pixel value of each pixel is the brightness value of the corresponding pixel of the input image, and supplies this brightness image to the pyramid image generation module 152. Here, the pixel value of an arbitrary pixel of the brightness image indicates the brightness value of the pixel of the input image located at the same place as that arbitrary pixel.

Using the brightness image supplied from the brightness image generation module 151, the pyramid image generation module 152 generates a number of brightness images with resolutions that differ from each other and supplies these brightness images, as brightness pyramid images, to the difference calculation module 153.

For example, pyramid images of eight hierarchical resolutions, from level L1 to level L8, are generated. It is assumed that the level-L1 pyramid image has the highest resolution and that the resolutions of the pyramid images decrease successively in order from level L1 to level L8.

In this case, the brightness image generated by the brightness image generation module 151 is taken as the level-L1 pyramid image. In addition, the average of the pixel values of four mutually adjacent pixels of the level-Li pyramid image (where 1 ≤ i ≤ 7) is set as the pixel value of the single pixel of the level-L(i+1) pyramid image corresponding to those pixels. Therefore, the level-L(i+1) pyramid image is an image whose horizontal and vertical dimensions are half those of the level-Li pyramid image (rounded up when not evenly divisible).
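
A sketch of this pyramid construction, downsampling by averaging 2 x 2 pixel groups; edge replication is used here (an assumption) so that odd dimensions round up as stated:

```python
import numpy as np

def next_pyramid_level(img):
    # Pad odd dimensions by edge replication so the halved size rounds up.
    h, w = img.shape
    img = np.pad(img.astype(np.float32), ((0, h % 2), (0, w % 2)), mode='edge')
    # Each output pixel is the average of four mutually adjacent pixels.
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def brightness_pyramid(brightness_image, levels=8):
    pyramid = [brightness_image.astype(np.float32)]  # level L1 is the brightness image
    for _ in range(levels - 1):
        pyramid.append(next_pyramid_level(pyramid[-1]))
    return pyramid  # levels L1 .. L8
```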

The difference calculation module 153 chooses two pyramid images of different hierarchical levels from among the pyramid images supplied from the pyramid image generation module 152 and takes the difference between the selected pyramid images to generate a brightness difference image. Since the sizes (numbers of pixels) of the pyramid images of the different hierarchical levels differ from each other, the smaller pyramid image is up-converted to match the larger pyramid image when a difference image is generated.
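
A sketch of the difference image generation; the particular level pairs below are an assumption modeled on common center-surround schemes, since the text only says that a predetermined number of pairs is chosen:

```python
import cv2
import numpy as np

# 0-based indices into the eight-level pyramid; an illustrative choice of pairs.
LEVEL_PAIRS = [(1, 4), (1, 5), (2, 5), (2, 6), (3, 6), (3, 7)]

def difference_images(pyramid, pairs=LEVEL_PAIRS):
    diffs = []
    for fine, coarse in pairs:
        h, w = pyramid[fine].shape
        # Up-convert the smaller (coarser) image to the finer image's size.
        up = cv2.resize(pyramid[coarse], (w, h), interpolation=cv2.INTER_LINEAR)
        diffs.append(np.abs(pyramid[fine] - up))
    # The text says the difference images are normalized; scaling each to a
    # maximum of 1 is an assumed normalization.
    return [d / d.max() if d.max() > 0 else d for d in diffs]
```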

When the difference calculation module 153 has generated a predetermined number of brightness difference images, it normalizes these generated difference images and supplies them to the brightness information map creation module 154. Based on the difference images supplied from the difference calculation module 153, the brightness information map creation module 154 creates the brightness information map and supplies it to the object map creation module 125.

Configuration of the color information extraction module

Fig. 6 illustrates an example of a more detailed configuration of the color information extraction module 122 shown in Fig. 4.

The color information extraction module 122 is formed of an RG (red/green) difference image generation module 181, a BY (blue/yellow) difference image generation module 182, pyramid image generation modules 183 and 184, difference calculation modules 185 and 186, and color information map creation modules 187 and 188.

Using the input image supplied from the receiving module 22, the RG difference image generation module 181 generates an RG difference image, in which the pixel value is the difference between the R (red) and G (green) components of the corresponding pixel of the input image, and supplies this RG difference image to the pyramid image generation module 183. The pixel value of an arbitrary pixel of the RG difference image indicates the value of the difference between the red and green components of the pixel of the input image located at the same place as that arbitrary pixel.

Using the input image supplied from the receiving module 22, the BY difference image generation module 182 generates a BY difference image, in which the pixel value is the difference between the B (blue) and Y (yellow) components of the corresponding pixel of the input image, and supplies this BY difference image to the pyramid image generation module 184. The pixel value of an arbitrary pixel of the BY difference image indicates the value of the difference between the B (blue) and Y (yellow) components of the pixel of the input image located at the same place as that arbitrary pixel.
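
A sketch of the two difference images; since an RGB input has no explicit yellow component, yellow is approximated below as the mean of the red and green components, which is an assumption:

```python
import numpy as np

def rg_by_difference_images(rgb):
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    rg = r - g                 # RG ("red/green") difference image
    y = (r + g) / 2.0          # assumed yellow component
    by = b - y                 # BY ("blue/yellow") difference image
    return rg, by
```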

The pyramid image generation modules 183 and 184 generate a number of RG difference images and BY difference images with resolutions that differ from each other, using the RG difference image and the BY difference image supplied, respectively, from the RG difference image generation module 181 and the BY difference image generation module 182. They then supply these generated difference images, as RG difference pyramid images and BY difference pyramid images, to the difference calculation modules 185 and 186, respectively.

For example, as the RG difference pyramid images and the BY difference pyramid images, pyramid images of eight hierarchical levels, from level L1 to level L8, are generated, as in the case of the brightness pyramid images.

The difference calculation modules 185 and 186 choose, from among the pyramid images supplied from the pyramid image generation modules 183 and 184, two pyramid images of different hierarchical levels and take the difference between the selected pyramid images to generate difference images for the RG difference and difference images for the BY difference. Since the sizes of the pyramid images of the different hierarchical levels differ from each other, the smaller pyramid image is up-converted to the same size as the larger pyramid image when a difference image is generated.

When the difference calculation modules 185 and 186 have generated a predetermined number of difference images for the RG difference and for the BY difference, they normalize these generated difference images and supply them to the color information map creation modules 187 and 188. Based on the difference images supplied from the difference calculation modules 185 and 186, the color information map creation modules 187 and 188 create color information maps and supply them to the object map creation module 125. The color information map creation module 187 creates the color information map for the RG difference, and the color information map creation module 188 creates the color information map for the BY difference.

Configuration of the contour information extraction module

Fig. 7 illustrates an example of a more detailed configuration of the contour information extraction module 123 shown in Fig. 4.

The contour information extraction module 123 is formed of contour image generation modules 211 to 214, pyramid image generation modules 215 to 218, difference calculation modules 219 to 222 and contour information map creation modules 223 to 226.

The contour image generation modules 211 to 214 perform a filtering process using Gabor filters on the input image supplied from the receiving module 22, generate contour images in which the pixel values represent the contour contrast in directions of, for example, 0 degrees, 45 degrees, 90 degrees and 135 degrees, and supply them to the pyramid image generation modules 215 to 218.

For example, the pixel value of an arbitrary pixel of the contour image generated by the contour image generation module 211 indicates the contour contrast in the 0-degree direction at the pixel of the input image located at the same place as that arbitrary pixel. The direction of each contour corresponds to the direction indicated by the angular component of the Gabor function forming the Gabor filter.
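
A sketch of the four-orientation Gabor filtering; the kernel size and the Gabor parameters (sigma, wavelength, aspect ratio, phase) below are illustrative assumptions:

```python
import cv2
import numpy as np

def orientation_contour_images(gray):
    # One contour image per direction: 0, 45, 90 and 135 degrees.
    images = []
    for angle in (0, 45, 90, 135):
        kernel = cv2.getGaborKernel(ksize=(9, 9), sigma=2.0,
                                    theta=np.deg2rad(angle),
                                    lambd=8.0, gamma=0.5, psi=0)
        response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
        images.append(np.abs(response))  # contour contrast in this direction
    return images
```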

Using the contour images in each direction supplied, respectively, from the contour image generation modules 211 to 214, the pyramid image generation modules 215 to 218 generate a number of contour images with resolutions that differ from each other. They then supply these generated contour images in the respective directions, as pyramid images of the respective contour directions, to the difference calculation modules 219 to 222, respectively.

For example, as the pyramid images of each contour direction, pyramid images of eight hierarchical levels, from level L1 to level L8, are generated, as in the case of the brightness pyramid images.

The difference calculation modules 219 to 222 choose two pyramid images of different hierarchical levels from among the pyramid images supplied from the pyramid image generation modules 215 to 218 and take the difference between the selected pyramid images to generate difference images in each contour direction. Since the pyramid images of the different hierarchical levels have sizes that differ from each other, the smaller pyramid image is up-converted when a difference image is generated.

When the difference calculation modules 219 to 222 have generated a predetermined number of difference images for each contour direction, they normalize these generated difference images and supply them to the contour information map creation modules 223 to 226. Based on the difference images supplied from the difference calculation modules 219 to 222, the contour information map creation modules 223 to 226 create contour information maps for each direction and supply them to the object map creation module 125.

Configuration of the face information extraction module

Fig. 8 illustrates an example of a more detailed configuration of the face information extraction module 124 shown in Fig. 4.

The face information extraction module 124 is formed of a face detection module 251 and a face information map creation module 252.

The face detection module 251 detects the region of a human face as the object in the input image supplied from the receiving module 22 and supplies the detection result to the face information map creation module 252. Based on the detection result coming from the face detection module 251, the face information map creation module 252 creates the face information map and supplies it to the object map creation module 125.

Configuration of the contour index calculation module

The contour index calculation module 25 shown in Fig. 1, described in more detail, is configured as shown in Fig. 9.

Namely, the contour index calculation module 25 is formed of an object extraction module 281, an inversion module 282, a filter processing module 283, a normalization module 284, a multiplication module 285, a histogram generation module 286 and an index calculation module 287.

The object extraction module 281 creates an object map based on the input image supplied from the receiving module 22 and supplies this object map to the inversion module 282. Since this object extraction module 281 has a configuration identical to that of the object extraction module 91 shown in Fig. 4, its description is omitted.

The inversion module 282 inverts the pixel values of the object map supplied from the object extraction module 281 and supplies the result to the multiplication module 285. Thus, a pixel of the object map whose pixel value is "1" is assigned the pixel value "0" and, conversely, a pixel whose pixel value is "0" is assigned the pixel value "1". Therefore, a pixel value of the inverted object map is set to "0" when the region of the input image located at the same place as that pixel is a region estimated to contain the object, and is set to "1" when that region is estimated not to contain the object. In this way, the inverted object map serves as a background map for identifying the background region of the input image, which does not contain the object.

The filter processing module 283 performs a filtering process using a Laplacian filter on the input image supplied from the receiving module 22, thereby generating a Laplacian image in which the pixel value is the contour contrast in the respective regions of the input image, and supplies this Laplacian image to the normalization module 284. The normalization module 284 normalizes the Laplacian image coming from the filter processing module 283 and supplies it to the multiplication module 285.

The multiplication module 285 multiplies the pixel values of the Laplacian image coming from the normalization module 284 by the pixel values of the inverted object map coming from the inversion module 282 to generate a background image, which is an image of the background region of the input image, and supplies this background image to the histogram generation module 286. In the background image, the pixel values in the region occupied by the background, which does not contain the object, take the same values as the pixel values of the Laplacian image at the same places, while the pixel values in the region occupied by the object become "0". Thus, the multiplication performed in the multiplication module 285 identifies (extracts) the background of the input image and generates a background image formed from the contour contrast in the region occupied by the background.
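
A sketch combining the inversion, Laplacian filtering, normalization and multiplication steps; scaling the Laplacian response to a maximum of 1 is an assumed normalization:

```python
import cv2
import numpy as np

def background_contour_image(input_gray, object_map):
    inverted = 1 - object_map                          # inversion module 282
    lap = np.abs(cv2.Laplacian(input_gray.astype(np.float32), cv2.CV_32F))
    if lap.max() > 0:                                  # normalization module 284
        lap = lap / lap.max()                          # (assumed: scale to [0, 1])
    return lap * inverted                              # multiplication module 285
```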

Based on the background image coming from the multiplication module 285, the histogram generation module 286 generates a histogram of the contour contrast of the background image and supplies this histogram to the index calculation module 287. Based on the histogram supplied from the histogram generation module 286, the index calculation module 287 calculates the contour index and supplies it to the total index calculation module 28.

Configuration of the color distribution index calculation module

The color distribution index calculation module 26 shown in Fig. 1, described in more detail, is configured as shown in Fig. 10.

Namely, the color distribution index calculation module 26 is formed of a red component histogram generation module 311, a green component histogram generation module 312, a blue component histogram generation module 313, normalization modules 314 to 316, a histogram generation module 317 and an index calculation module 318.

The histogram generation modules 311 to 313 generate a histogram of each of the R (red), G (green) and B (blue) components from the input image supplied from the receiving module 22 and supply them to the respective normalization modules 314 to 316. The normalization modules 314 to 316 normalize the histograms of the respective components coming from the histogram generation modules 311 to 313 and supply them to the histogram generation module 317.

Using the histograms of each color component supplied from the normalization modules 314 to 316, the histogram generation module 317 generates a single histogram in which the bins for the same value range of the red, green and blue components are combined, and supplies this histogram to the index calculation module 318. Based on the histogram supplied from the histogram generation module 317, the index calculation module 318 calculates the color distribution index and supplies it to the total index calculation module 28.
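
A sketch of the per-component histograms, their normalization, and the combination into one histogram; reading "combined" as concatenating the three normalized histograms is an assumption:

```python
import numpy as np

def color_distribution_histogram(rgb, bins=16):
    parts = []
    for channel in range(3):                     # R, G, B in turn
        hist, _ = np.histogram(rgb[..., channel], bins=bins, range=(0, 256))
        hist = hist.astype(np.float32)
        if hist.sum() > 0:                       # normalize to unit sum
            hist = hist / hist.sum()
        parts.append(hist)
    return np.concatenate(parts)                 # single combined histogram
```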

Configuration of the color saturation index calculation module

The color saturation index calculation module 27 shown in Fig. 1, described in more detail, is configured as shown in Fig. 11.

Namely, the color saturation index calculation module 27 is formed of a conversion module 341, a histogram generation module 342 and an index calculation module 343.

The conversion module 341 converts the input image, consisting of the values of the red, green and blue components and supplied from the receiving module 22, into an input image consisting of the values of the H (hue), S (saturation) and V (value, i.e. brightness) components, and supplies this converted input image to the histogram generation module 342.

The histogram generation module 342 generates a histogram of the saturation components of those pixels of the converted input image supplied from the conversion module 341 that satisfy specific conditions, and supplies this histogram to the index calculation module 343. Based on the histogram coming from the histogram generation module 342, the index calculation module 343 calculates the color saturation index and supplies it to the total index calculation module 28.
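
A sketch of the conversion and the conditional saturation histogram; the specific condition used below (excluding near-black and near-grey pixels, whose hue and saturation are unreliable) is an assumption, since the text does not state it:

```python
import cv2
import numpy as np

def saturation_histogram(rgb_uint8, bins=32):
    hsv = cv2.cvtColor(rgb_uint8, cv2.COLOR_RGB2HSV)   # conversion module 341
    s = hsv[..., 1].astype(np.float32)
    v = hsv[..., 2].astype(np.float32)
    mask = (v > 25) & (s > 10)         # assumed condition on the pixels
    hist, _ = np.histogram(s[mask], bins=bins, range=(0, 256))
    return hist

# The index calculation module 343 would then derive the index from the
# mean and variance of this saturation distribution.
```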

Description of the slideshow display process

When the user operates the image processing device 11 shown in Fig. 1, specifies a set of input images recorded in the recording module 21 and gives a command to display a slideshow of these input images, the image processing device 11 starts the slideshow display process in response to the user's command.

The slideshow display process performed by the image processing device 11 will be described below with reference to the flowchart shown in Fig. 12.

At step S11, the receiving module 22 reads one of the input images specified by the user from the recording module 21 and supplies this input image to the modules from the blur index calculation module 23 to the color saturation index calculation module 27, and to the display control module 29.

For example, when the user specifies a folder recorded in the recording module 21 and gives a command to display a slideshow of the input images stored in this folder, the receiving module 22 reads one input image from the folder specified by the user. Here, the term "displaying a slideshow of the input images" refers to the process of sequentially displaying a set of input images.

At step S12, the blur index calculation module 23 performs the blur degree calculation process so as to compute the blur index for the input image, and supplies this blur index to the total index calculation module 28. At step S13, the brightness index calculation module 24 performs the brightness index calculation process so as to compute the brightness index for the input image, and supplies this brightness index to the total index calculation module 28.

At step S14, the contour index calculation module 25 performs the contour index calculation process so as to compute the contour index for the input image, and supplies it to the total index calculation module 28. At step S15, the color distribution index calculation module 26 performs the color distribution index calculation process so as to compute the color distribution index for the input image, and supplies this color distribution index to the total index calculation module 28.

In addition, at step S16, the color saturation index calculation module 27 performs the color saturation index calculation process so as to compute the color saturation index for the input image, and supplies this color saturation index to the total index calculation module 28.

Details of the processes from the blur degree calculation process to the color saturation index calculation process, performed at steps S12 to S16, will be described below.

At step S17, the total index calculation module 28 calculates the total index for the input image based on the indexes of the individual characteristics supplied from the modules from the blur index calculation module 23 to the color saturation index calculation module 27.

Namely, using a conversion table stored in advance for each index, the total index calculation module 28 converts the value of each individual characteristic index into the score (point value) predetermined for that index value, and sets the sum of the scores obtained for the individual characteristic indexes as the total index.

For example, suppose that the value that the blur index, as the index of an individual characteristic, can take ranges from 0 to 1, and that the larger the blur index, the more blurred the input image as a whole. Then the total index calculation module 28 obtains the score for the blur index based on the blur index conversion table shown in Fig. 13.

Thus, when the blur index is less than 0.2, the score for the blur index is set to 0; when the blur index is greater than or equal to 0.2 and less than 0.3, the score is set to 1; and when the blur index is greater than or equal to 0.3 and less than 0.5, the score is set to 2.

In addition, when the quantitative indicator of the degree of blur is greater than or equal to 0.5 and less than 0.7, the score points are set to 3; when it is greater than or equal to 0.7 and less than 0.9, the score points are set to 5; and when it is greater than or equal to 0.9, the score points are set to 10.
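
The conversion described above amounts to a simple table lookup followed by a summation. Purely as an illustrative sketch in Python (the function names are not part of the embodiment; the range boundaries and points are those listed above):

def blur_to_points(blur_indicator):
    """Convert the quantitative indicator of the degree of blur (0..1)
    into score points using the ranges described above."""
    table = [(0.9, 10), (0.7, 5), (0.5, 3), (0.3, 2), (0.2, 1)]
    for lower_bound, points in table:
        if blur_indicator >= lower_bound:
            return points
    return 0  # quantitative indicator of the degree of blur below 0.2

def total_indicator(points_per_characteristic):
    """Sum of the score points of all quantitative indicators of
    individual characteristics (step S17)."""
    return sum(points_per_characteristic)

For example, blur_to_points(0.85) returns 5 score points.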

Here, the score points determined for each range of values of the quantitative indicator of the degree of blur are determined on the basis of the quantitative indicators of the degree of blur obtained from professional images and amateur images prepared in advance. Meanwhile, the term "professional image" refers to an image with a generally high evaluation (recorded satisfactorily) that was recorded by a professional photographer, and the term "amateur image" refers to an image with a generally low evaluation (recorded poorly) that was recorded by an amateur.

For example, it is assumed that the quantitative indicators of the degree of blur are calculated for a number of professional images and amateur images, and that the results obtained are shown in Fig. Meanwhile, in this figure the vertical axis indicates the number of samples of professional images or amateur images, while the horizontal axis indicates the value of the quantitative indicator of the degree of blur.

In Fig, the upper part shows the distribution of the quantitative indicators of the degree of blur of the professional images, and the lower part shows the distribution of the quantitative indicators of the degree of blur of the amateur images.

For most samples of professional images the quantitative indicator of the degree of blur is less than 0.3, and there are no samples whose quantitative indicator of the degree of blur is greater than or equal to 0.8. Thus, since the smaller the degree of blur of the image, the smaller the quantitative indicator of the degree of blur, there are almost no blurred images among the professional images.

Compared to this, for most samples of amateur images the quantitative indicator of the degree of blur is less than 0.4. However, there are some amateur images whose quantitative indicator of the degree of blur is greater than or equal to 0.4. Thus, the amateur images include some images that are blurred overall.

When the quantitative indicators of the degree of blur of the professional images are compared with those of the amateur images, it can be seen that there are no professional images in the range indicated by the arrow A11 on the distribution of professional images, i.e. images whose quantitative indicator of the degree of blur is greater than or equal to 0.8. Compared to this, there are several amateur images in the range indicated by the arrow A12 on the distribution of amateur images, i.e. images whose quantitative indicator of the degree of blur is greater than or equal to 0.8.

Therefore, when the quantitative indicator of the degree of blur obtained for the input image is greater than or equal to 0.8, the input image has a high probability of being an amateur image, i.e. an image that is poorly recorded. In addition, the total of the score points for each quantitative indicator of an individual characteristic is the total indicator, and the smaller this total indicator, the better the image fixation condition of the input image.

Accordingly, when the quantitative indicator of the degree of blur is greater than or equal to 0.8, that is, when the probability that the input image is an amateur image is high, the score points for the quantitative indicator of the degree of blur are set to a large value so that the total indicator increases. In addition, in general, the smaller the degree of blur of an image, the more satisfactorily the image has been fixed; in that case the score points for the quantitative indicator of the degree of blur are therefore set to a small value.

Similarly, for the other quantitative indicators of individual characteristics, by comparing the quantitative indicator of an individual characteristic for professional images prepared in advance with the same indicator for amateur images, the score points for each range of values of each quantitative indicator of an individual characteristic are determined in advance, and a conversion table is obtained for each quantitative indicator of an individual characteristic.

For the ranges of values in which the distributions of a quantitative indicator of an individual characteristic differ between professional images and amateur images, assigning higher or lower score points in accordance with this difference makes it possible to assess the image fixation condition of the input image in a more appropriate (accurate) way. Thus, the accuracy of the assessment can be improved.

In addition, there are cases in which it is difficult to properly assess the image fixation condition of the input image using a single quantitative indicator of an individual characteristic. However, since the total indicator is obtained from a plurality of quantitative indicators of individual characteristics, it is possible to assess the image fixation condition of the input image in a more appropriate way.

For example, in the example shown in Fig there are many professional images and amateur images whose quantitative indicator of the degree of blur is less than 0.2. For this reason, when the quantitative indicator of the degree of blur of the input image is less than 0.2, it is difficult to assess accurately, on the basis of the quantitative indicator of the degree of blur alone, whether the input image is close to a professional image or close to an amateur image.

However, for the quantitative indicator of each characteristic there is a range of values in which it can be identified more accurately whether the input image is close to a professional image or to an amateur image. Therefore, if score points are defined for the ranges of values of each quantitative indicator of an individual characteristic, and the score points of each quantitative indicator of an individual characteristic are summed so as to be set as the total indicator, it is possible to assess the image fixation condition of the input image in a more appropriate way.

As described above, the case in which each quantitative indicator of an individual characteristic is converted into score points using a conversion table and the sum of these score points is obtained corresponds to the case in which the quantitative indicators of the individual characteristics are subjected to weighted summation so as to obtain the total indicator.

Returning to the description of the flowchart of the algorithm shown in Fig, when the total indicator has been obtained, the module 28 for calculating the total indicator provides the obtained total indicator to the module 29 for display control. The process then proceeds from step S17 to step S18.

At step S18, the device 11 for image processing determines whether the total indicator has been obtained for all input images. For example, in the case where the total indicators of all the input images to be displayed in the slideshow specified by the user have been obtained, it is determined that the total indicator has been obtained for all the input images.

When at step S18 it is determined that the total indicator has not been obtained for all the input images, the process returns to step S11, and the above-mentioned processing is repeated. Namely, the receiving module 22 receives the next input image, and the total indicator is obtained for this input image.

On the contrary, when at step S18 it is determined that the total indicator has been obtained for all input images, at step S19 the module 29 for display control selects the input images to be displayed in the slideshow.

For example, based on the total indicator of each input image provided from the module 28 for calculating the total indicator, the module 29 for display control selects, from among the input images provided from the receiving module 22, those input images for which the total indicator is less than or equal to a specified threshold value. Here, the lower the value of the total indicator, the higher the assessed image fixation condition of the input image.

As described above, if the input images whose total indicator is less than or equal to the threshold value are selected, the slideshow can display only those input images for which the evaluation of the image fixation is sufficiently high, i.e. only input images sufficiently close to professional images. In this case the input images to be displayed can also be selected, for example, in a number corresponding to a specified count, in ascending order of the total indicator.
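
As a minimal sketch of this selection (the threshold value, the optional count limit and the function name are illustrative assumptions, not part of the embodiment):

def select_for_slideshow(images_with_totals, threshold, max_count=None):
    """images_with_totals: list of (image, total_indicator) pairs.
    Keeps the images whose total indicator is less than or equal to the
    threshold, in ascending order of the total indicator (lower is better)."""
    selected = sorted(
        (pair for pair in images_with_totals if pair[1] <= threshold),
        key=lambda pair: pair[1])
    if max_count is not None:
        selected = selected[:max_count]
    return [image for image, _ in selected]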

At step S20, the module 29 for display control sequentially provides the input images selected at step S19 to the module 30 of the display, where the input images are displayed, thereby displaying a slideshow of the input images. Then, when the input images have been displayed for the slideshow, the process of displaying the slideshow ends. Thus, as described above, when the selected input images are shown as a slideshow, the user has the opportunity to view only the input images having a high evaluation.

As described above, the device 11 for image processing obtains the quantitative indicators of individual characteristics for each input image and calculates the total indicator on the basis of these quantitative indicators. When the total indicator is calculated on the basis of a set of quantitative indicators of individual characteristics in this way, it is possible to assess the image fixation condition of the input image in a more appropriate way.

Description of the process of calculating the degree of blur

Next, with reference to the flowchart of the algorithm shown in Fig, a description will be given of the process of calculating the degree of blur, corresponding to the process at step S12 shown in Fig.

At step S51, the module 61 for creating contour maps creates a contour map using the input image supplied from the receiving module 22.

To put it concretely, the module 61 for creating contour maps divides the input image into blocks of 2×2 pixels and calculates the absolute values MTL-TR to MBL-BR of the differences of pixel values between the pixels within each block, in accordance with the following expressions (1) to (6):

MTL-TR = |a - b|    (1)

MTL-BL = |a - c|    (2)

MTL-BR = |a - d|    (3)

MTR-BL = |b - c|    (4)

MTR-BR = |b - d|    (5)

MBL-BR = |c - d|.    (6)

Meanwhile, in expressions (1) to (6) the variables a, b, c and d each represent a pixel value of a pixel within a block of 2×2 pixels of the input image. For example, as shown in Fig, the pixel value a indicates the pixel value of the pixel in the upper left area of the block, the pixel value b indicates the pixel value of the pixel in the upper right area of the block, the pixel value c indicates the pixel value of the pixel in the lower left area of the block, and the pixel value d indicates the pixel value of the pixel in the lower right area of the block.

Therefore, each of the absolute values MTL-TR to MBL-BR indicates the absolute value of the difference of pixel values between pixels in one direction within the block, i.e. the contour contrast in that direction.

Then the module 61 for creating contour maps calculates the average value MAve of the absolute values MTL-TR to MBL-BR in accordance with the following expression (7). This average value MAve represents the average value of the contour contrast in the vertical, horizontal and oblique directions of the block:

MAve = (MTL-TR + MTL-BL + MTL-BR + MTR-BL + MTR-BR + MBL-BR) / 6.    (7)

The module 61 for creating contour maps arranges the calculated average value MAve of each block in the same order as the order of the corresponding blocks of the input image, thereby creating a contour map at the scale SC1.
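
Expressions (1) to (7) can be sketched, for example, with NumPy; even image dimensions are assumed here, and the conversion to floating point avoids wrap-around of unsigned pixel values:

import numpy as np

def contour_map(image):
    """Average contour contrast MAve of each 2x2 block of a 2-D array of
    brightness values, per expressions (1) to (7)."""
    img = image.astype(np.float64)
    a = img[0::2, 0::2]  # upper left pixel of each block
    b = img[0::2, 1::2]  # upper right pixel
    c = img[1::2, 0::2]  # lower left pixel
    d = img[1::2, 1::2]  # lower right pixel
    return (np.abs(a - b) + np.abs(a - c) + np.abs(a - d) +
            np.abs(b - c) + np.abs(b - d) + np.abs(c - d)) / 6.0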

In addition, in order to create contour maps at the scales SC2 and SC3, the module 61 for creating contour maps generates averaged images of the scales SC2 and SC3 in accordance with the following expression (8):

Pi+1(m, n) = (Pi(2m, 2n) + Pi(2m, 2n+1) + Pi(2m+1, 2n) + Pi(2m+1, 2n+1)) / 4.    (8)

Meanwhile, in expression (8) Pi(x, y) indicates the pixel value of the pixel having the coordinates (x, y) on the averaged image of the scale SCi (where i = 1, 2), and Pi+1(x, y) indicates the pixel value of the pixel having the coordinates (x, y) on the averaged image of the scale SCi+1. It is assumed that the averaged image of the scale SC1 is the input image itself.

Therefore, the averaged image of the scale SC2 is an image in which the pixel value of one pixel is set to the average value of the pixel values of the pixels within each block obtained by dividing the input image into blocks of 2×2 pixels. In addition, the averaged image of the scale SC3 is an image in which the pixel value of one pixel is set to the average value of the pixel values of the pixels within each block obtained by dividing the averaged image of the scale SC2 into blocks of 2×2 pixels.
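
Expression (8) is a simple 2×2 block averaging; a sketch under the same assumptions as above:

import numpy as np

def averaged_image(image):
    """One application of expression (8): each output pixel is the mean of
    a 2x2 block of the input, halving both dimensions."""
    img = image.astype(np.float64)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

# The averaged image of the scale SC1 is the input image itself, so:
# sc2 = averaged_image(input_image); sc3 = averaged_image(sc2)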

The module 61 for creating contour maps performs on the averaged images of the scales SC2 and SC3 the same process as the process performed on the input image using expressions (1) to (7), so as to create contour maps at the scales SC2 and SC3.

Therefore, the contour maps at the scales SC1 to SC3 are images obtained by extracting from the input image contour components of different frequency ranges. Meanwhile, the number of pixels of the contour map at the scale SC1 is 1/4 (1/2 vertically × 1/2 horizontally) of the input image, the number of pixels of the contour map at the scale SC2 is 1/16 (1/4 vertically × 1/4 horizontally) of the input image, and the number of pixels of the contour map at the scale SC3 is 1/64 (1/8 vertically × 1/8 horizontally) of the input image.

The module 61 for creating contour maps provides the created contour maps at the scales SC1 to SC3 to the module 62 for determining the dynamic range and the module 64 for generating local maxima. As described above, by creating contour maps at different scales, in units of blocks of different sizes, it is possible to suppress fluctuations of the contour contrast.

At step S52, using the contour maps provided from the module 61 for creating contour maps, the module 62 for determining the dynamic range determines the dynamic range of the input image and provides the result of the determination to the module 63 for setting the calculation parameter.

To put it concretely, the module 62 for determining the dynamic range determines the maximum value and the minimum value of the pixel values among the contour maps at the scales SC1 to SC3, and sets the value obtained by subtracting the minimum value from the maximum value as the dynamic range of the contour contrast of the input image. Thus, the dynamic range is defined as the difference between the maximum value and the minimum value of the contour contrast of the input image.

Meanwhile, in addition to the above method, it is also possible, for example, to determine the dynamic range for each contour map and to use, as the dynamic range used in practice, the maximum value or the average value of the determined dynamic ranges.

At step S53, the module 63 for setting the calculation parameter sets the initial value of the calculation parameter on the basis of the dynamic range provided from the module 62 for determining the dynamic range.

Namely, when the dynamic range is less than a predefined threshold value, the module 63 for setting the calculation parameter treats the input image as an image with a small dynamic range, and when the dynamic range is greater than or equal to this threshold value, the module 63 for setting the calculation parameter treats the input image as an image with a large dynamic range.

Then, when the input image is an image with a small dynamic range, the module 63 for setting the calculation parameter assigns to the calculation parameter an initial value for images with a small dynamic range. In addition, when the input image is an image with a large dynamic range, the module 63 for setting the calculation parameter assigns to the calculation parameter an initial value for images with a large dynamic range.

It is considered that an image with a small dynamic range has fewer contours than an image with a large dynamic range, so that the number of extracted contour points is small. Therefore, in order to extract enough contour points to keep the accuracy of determining the degree of blur of the input image at a fixed level or above, the initial value of the reference value for the contour in the case of images with a small dynamic range is set smaller than the initial value of the reference value for the contour in the case of images with a large dynamic range. In addition, the initial value of the reference value for extraction in the case of images with a small dynamic range is set smaller than the initial value of the reference value for extraction in the case of images with a large dynamic range.

The module 63 for setting the calculation parameter provides the reference value for the contour, set on the basis of the dynamic range, to the module 65 for extracting contour points, and provides the reference value for the contour and the reference value for extraction to the module 66 for determining the number of extractions.

At step S54, the module 64 for generating local maxima generates the local maxima using the contour maps provided from the module 61 for creating contour maps, and provides these local maxima to the module 65 for extracting contour points and the module 67 for contour analysis.

For example, as shown in the left part of Fig, the module 64 for generating local maxima divides the contour map at the scale SC1 into blocks of 2×2 pixels. Then the module 64 for generating local maxima extracts the maximum value of each block of the contour map and arranges the extracted maximum values in the same order as the order of the corresponding blocks, thereby generating the local maximum LM1 of the scale SC1. In other words, the maximum of the pixel values of the pixels in each block is extracted.

In addition, as shown in the center of this figure, the module 64 for generating local maxima divides the contour map at the scale SC2 into blocks of 4×4 pixels. Then the module 64 for generating local maxima extracts the maximum value of each block of the contour map and arranges the extracted maximum values in the same order as the order of the corresponding blocks, thereby generating the local maximum LM2 of the scale SC2.

In addition, as shown in the right part of this figure, the module 64 for generating local maxima divides the contour map at the scale SC3 into blocks of 8×8 pixels. Then the module 64 for generating local maxima extracts the maximum value of each block of the contour map and arranges the extracted maximum values in the same order as the order of the corresponding blocks, thereby generating the local maximum LM3 of the scale SC3.
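
The generation of the local maxima LM1 to LM3 amounts to a block-wise maximum over the contour maps. An illustrative sketch, assuming the map dimensions are divisible by the block size:

import numpy as np

def block_maximum(cmap, n):
    """Maximum pixel value of each n x n block of a contour map cmap."""
    h, w = cmap.shape
    return cmap.reshape(h // n, n, w // n, n).max(axis=(1, 3))

# lm1 = block_maximum(contour_map_sc1, 2)  # 2x2 blocks of the SC1 map
# lm2 = block_maximum(contour_map_sc2, 4)  # 4x4 blocks of the SC2 map
# lm3 = block_maximum(contour_map_sc3, 8)  # 8x8 blocks of the SC3 map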

At step S55, the module 65 for extracting contour points extracts contour points from the input image using the reference value for the contour coming from the module 63 for setting the calculation parameter and the local maxima provided from the module 64 for generating local maxima.

To put it concretely, the module 65 for extracting contour points selects one of the pixels of the input image and sets it as the pixel of interest. In addition, taking the coordinates of the selected pixel of interest in the xy coordinate system of the input image as (x, y), the module 65 for extracting contour points obtains the coordinates (x1, y1) of the pixel of the local maximum LM1 corresponding to the pixel of interest, in accordance with the following expression (9):

(x1, y1) = (x/4, y/4).    (9)

It is assumed that in expression (9) all digits to the right of the decimal point in the values x/4 and y/4 are discarded.

One pixel of the local maximum LM1 is generated from a block of 4×4 pixels of the input image. Therefore, the coordinates of the pixel of the local maximum LM1 that corresponds to the pixel of interest of the input image are the x coordinate and the y coordinate of the pixel of interest reduced, respectively, to 1/4.

Similarly, in accordance with the following expressions (10) and (11), the module 65 for extracting contour points obtains the coordinates (x2, y2) of the pixel of the local maximum LM2 corresponding to the pixel of interest and the coordinates (x3, y3) of the pixel of the local maximum LM3 corresponding to the pixel of interest:

(x2, y2) = (x/16, y/16)    (10)

(x3, y3) = (x/64, y/64).    (11)

In expressions (10) and (11) all digits to the right of the decimal point in the values x/16, y/16, x/64 and y/64 are discarded.

In addition, in the case where the pixel value of the pixel with the coordinates (x1, y1) of the local maximum LM1 is greater than or equal to the reference value for the contour, the module 65 for extracting contour points extracts the pixel of interest as a contour point in the local maximum LM1. Then the module 65 for extracting contour points stores (in memory) the coordinates (x, y) of the pixel of interest and the pixel value of the coordinates (x1, y1) of the local maximum LM1 in association with each other.

Similarly, in the case where the pixel value of the pixel with the coordinates (x2, y2) of the local maximum LM2 is greater than or equal to the reference value for the contour, the module 65 for extracting contour points extracts the pixel of interest as a contour point in the local maximum LM2 and stores (in memory) the coordinates (x, y) of the pixel of interest and the pixel value of the coordinates (x2, y2) of the local maximum LM2 in association with each other. In addition, in the case where the pixel value of the pixel with the coordinates (x3, y3) of the local maximum LM3 is greater than or equal to the reference value for the contour, the module 65 for extracting contour points extracts the pixel of interest as a contour point in the local maximum LM3 and stores (in memory) the coordinates (x, y) of the pixel of interest and the pixel value of the coordinates (x3, y3) of the local maximum LM3 in association with each other.

The module 65 for extracting contour points repeats the above processes until all the pixels of the input image have been set as the pixel of interest.

As a result, on the basis of the local maximum LM1, the pixels contained in those blocks of 4×4 pixels of the input image in which the contour contrast is greater than or equal to the reference value for the contour are extracted as contour points.

Similarly, on the basis of the local maximum LM2, the pixels contained in those blocks of 16×16 pixels of the input image in which the contour contrast is greater than or equal to the reference value for the contour are extracted as contour points. In addition, on the basis of the local maximum LM3, the pixels contained in those blocks of 64×64 pixels of the input image in which the contour contrast is greater than or equal to the reference value for the contour are extracted as contour points.

Therefore, the pixels contained in at least one of the blocks of 4×4 pixels, 16×16 pixels or 64×64 pixels of the input image in which the contour contrast is greater than or equal to the reference value for the contour are extracted as contour points.

The module 65 for extracting contour points generates a contour point table ET1, which is a table in which the coordinates (x, y) of each contour point extracted on the basis of the local maximum LM1 and the pixel value of the pixel of the local maximum LM1 corresponding to that contour point are associated with each other.

Similarly, the module 65 for extracting contour points generates a contour point table ET2, in which the coordinates (x, y) of each contour point extracted on the basis of the local maximum LM2 and the pixel value of the pixel of the local maximum LM2 corresponding to that contour point are associated with each other. In addition, the module 65 for extracting contour points also generates a contour point table ET3, in which the coordinates (x, y) of each contour point extracted on the basis of the local maximum LM3 and the pixel value of the pixel of the local maximum LM3 corresponding to that contour point are associated with each other.

Then the module 65 for extracting contour points provides these generated contour point tables to the module 66 for determining the number of extractions.

At step S56, the module 66 for determining the number of extractions determines, using the contour point tables provided from the module 65 for extracting contour points, whether the number of extracted contour points is appropriate. For example, when the total number of extracted contour points, i.e. the total number of entries in the tables ET1 to ET3, is less than the reference value for extraction provided from the module 63 for setting the calculation parameter, the module 66 for determining the number of extractions determines that the number of extracted contour points is not appropriate.

When at step S56 it is determined that the number of extracted contour points is not appropriate, the module 66 for determining the number of extractions notifies the module 63 for setting the calculation parameter that the number of extracted contour points is not appropriate, and the process then proceeds to step S57.

At step S57, the module 63 for setting the calculation parameter adjusts the calculation parameter on the basis of the notification received from the module 66 for determining the number of extractions.

For example, the module 63 for setting the calculation parameter reduces the reference value for the contour by a preset amount from the value set at the current time, so that more contour points are extracted than at present. The module 63 for setting the calculation parameter provides the adjusted reference value for the contour to the module 65 for extracting contour points and the module 66 for determining the number of extractions.

After the calculation parameter has been adjusted, the process returns to step S55, and the above-mentioned processing is repeated until it is determined that the number of extracted contour points is appropriate.

In addition, when at step S56 it is determined that the number of extracted contour points is appropriate, the module 66 for determining the number of extractions provides the reference value for the contour coming from the module 63 for setting the calculation parameter, and the contour point tables, to the module 67 for contour analysis. After this, the process moves to step S58.

As a result of the above processing, in order to improve the accuracy of determining the degree of blur for an input image with a small dynamic range, contour points are also extracted from blocks in which the contour contrast is weak, so as to obtain contour points in a quantity sufficient to keep the accuracy of determining the degree of blur at a fixed level or above. On the other hand, for an input image with a large dynamic range, contour points are extracted from blocks in which the contour contrast is as high as possible, so as to obtain contour points forming a more contrasting contour.
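
The adjustment loop of steps S55 to S57 can be sketched as follows; extract_contour_points is a hypothetical helper standing in for the extraction described above, and the adjustment step and the lower bound of the reference value are assumptions:

def extract_with_adjustment(extract_contour_points, local_maxima,
                            contour_reference, extraction_reference,
                            step, min_reference=0.0):
    """Repeat the extraction, lowering the reference value for the contour
    by a preset step, until enough contour points are extracted."""
    while True:
        points = extract_contour_points(local_maxima, contour_reference)
        if len(points) >= extraction_reference:
            return points, contour_reference  # number of extractions is appropriate
        if contour_reference <= min_reference:
            return points, contour_reference  # cannot be lowered any further
        contour_reference = max(min_reference, contour_reference - step)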

At step S58, the module 67 for contour analysis performs the analysis of the contours using the reference value for the contour and the contour point tables provided from the module 66 for determining the number of extractions, and the local maxima provided from the module 64 for generating local maxima.

To put it concretely, based on the contour point tables ET1 to ET3, the module 67 for contour analysis designates one of the contour points extracted from the input image as the pixel of interest. After this, taking the coordinates of the pixel of interest in the xy coordinate system as (x, y), the module 67 for contour analysis obtains the coordinates (x1, y1) to (x3, y3) of the local maxima LM1 to LM3 corresponding to the pixel of interest, in accordance with the above expressions (9) to (11).

The module 67 for contour analysis sets as Local Max1(x1, y1) the maximum value among the pixel values of the pixels in the block of the local maximum LM1 of m×m pixels (for example, 4×4 pixels) whose upper left pixel is the pixel with the coordinates (x1, y1) of the local maximum LM1. In addition, the module 67 for contour analysis sets as Local Max2(x2, y2) the maximum value among the pixel values in the block of n×n pixels (for example, 2×2 pixels) whose upper left pixel is the pixel with the coordinates (x2, y2) of the local maximum LM2, and sets the pixel value of the coordinates (x3, y3) of the local maximum LM3 as Local Max3(x3, y3).

Meanwhile, the parameters m×m, used to set Local Max1(x1, y1), and n×n, used to set Local Max2(x2, y2), are parameters for correcting the difference in the sizes of the blocks of the input image corresponding to one pixel of the local maxima LM1 to LM3.

The module 67 for contour analysis determines whether Local Max1(x1, y1), Local Max2(x2, y2) and Local Max3(x3, y3) satisfy the conditions of the following expression (12). When Local Max1(x1, y1), Local Max2(x2, y2) and Local Max3(x3, y3) satisfy the conditions of expression (12), the module 67 for contour analysis increments the value of the variable Nedge by one:

Local Max1(x1, y1) > reference value for the contour, or
Local Max2(x2, y2) > reference value for the contour, or    (12)
Local Max3(x3, y3) > reference value for the contour.

It is considered that a contour point that satisfies expression (12) is a contour point forming a contour having a fixed or higher contrast, regardless of its structure, such as the contours shown in drawings A to D of Fig.

Here, the contour shown in drawing A of Fig is formed as a contour in the form of a sharp pulse; the contour shown in drawing B of Fig is formed as a contour in the form of a pulse having a gentler slope than the slope of the contour of drawing A of Fig; the contour shown in drawing C of Fig is a contour of stepped form whose slope is almost perpendicular; and the contour shown in drawing D of Fig is a contour of stepped form whose slope is gentler than the slope of the contour shown in drawing C of Fig.

When Local Max1(x1, y1), Local Max2(x2, y2) and Local Max3(x3, y3) satisfy the conditions of expression (12), the module 67 for contour analysis further determines whether the conditions of the following expression (13) or (14) are satisfied. When Local Max1(x1, y1), Local Max2(x2, y2) and Local Max3(x3, y3) satisfy the conditions of expression (13) or the conditions of expression (14), the module 67 for contour analysis increments the value of the variable Nsmallblur by one:

Local Max1(x1, y1) < Local Max2(x2, y2) < Local Max3(x3, y3)    (13)

Local Max2(x2, y2) > Local Max1(x1, y1) and Local Max2(x2, y2) > Local Max3(x3, y3).    (14)

Meanwhile, it is recognized that a contour point that satisfies the conditions of expression (12) and also satisfies the conditions of expression (13) or expression (14) is a contour point forming a contour that has the structure shown in drawing B or drawing D of Fig, with a fixed or higher contrast, but with a contrast weaker than that of the contours of drawings A or C of Fig.

In addition, when Local Max1(x1, y1), Local Max2(x2, y2) and Local Max3(x3, y3) satisfy the conditions of expression (12) and satisfy the conditions of expression (13) or expression (14), the module 67 for contour analysis determines whether Local Max1(x1, y1) satisfies the condition of the following expression (15). When Local Max1(x1, y1) satisfies the condition of expression (15), the module 67 for contour analysis increments the value of the variable Nlargeblur by one:

Local Max1(x1, y1) < reference value for the contour.    (15)

It is recognized that a contour point that satisfies the conditions of expression (12), satisfies the conditions of expression (13) or expression (14), and satisfies the condition of expression (15) is a contour point forming a contour that has the structure shown in drawing B or drawing D of Fig with a fixed or higher contrast, but in which blur has occurred and sharpness has been lost. In other words, it is recognized that blur has occurred at this contour point.

The module 67 for contour analysis repeats the above processing until all the contour points extracted from the input image have been used as the pixel of interest. As a result, the number Nedge, the number Nsmallblur and the number Nlargeblur of contour points are obtained for the extracted contour points.

The number Nedge is the number of contour points that satisfy the conditions of expression (12); the number Nsmallblur is the number of contour points that satisfy the conditions of expression (12) and satisfy the conditions of expression (13) or expression (14). In addition, the number Nlargeblur is the number of contour points that satisfy the conditions of expression (12), satisfy the conditions of expression (13) or expression (14), and satisfy the condition of expression (15).

The module 67 for contour analysis provides the calculated number Nsmallblur and number Nlargeblur to the module 68 for determining the degree of blur.

At step S59, the module 68 for determining the degree of blur calculates the following expression (16) using the number Nsmallblur and the number Nlargeblur coming from the module 67 for contour analysis, and obtains, as the quantitative indicator of the degree of blur, the degree of blur (BlurEstimation), which serves as an indicator of the degree of blur of the input image:

BlurEstimation = Nlargeblur / Nsmallblur.    (16)

Thus, the degree of blur (BlurEstimation) represents the proportion of contour points recognized as forming a contour in which blur has occurred among the contour points recognized as forming a contour having the structure shown in drawing B or drawing D of Fig with a fixed or higher contrast. Therefore, the greater the degree of blur (BlurEstimation), the more blurred the input image is recognized to be.
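
Expressions (12) to (16) can be gathered into one illustrative sketch; local_max is a hypothetical helper returning the triple (Local Max1, Local Max2, Local Max3) for a contour point:

def blur_estimation(contour_points, local_max, contour_reference):
    """Counts Nedge, Nsmallblur and Nlargeblur over the extracted contour
    points and returns BlurEstimation per expression (16)."""
    n_edge = n_smallblur = n_largeblur = 0
    for point in contour_points:
        lm1, lm2, lm3 = local_max(point)
        # (12): the point forms a contour of at least the reference contrast
        if (lm1 > contour_reference or lm2 > contour_reference
                or lm3 > contour_reference):
            n_edge += 1
            # (13) or (14): a contour with the structure of drawing B or D
            if (lm1 < lm2 < lm3) or (lm2 > lm1 and lm2 > lm3):
                n_smallblur += 1
                # (15): sharpness has been lost at this contour point
                if lm1 < contour_reference:
                    n_largeblur += 1
    # (16): BlurEstimation = Nlargeblur / Nsmallblur
    return n_largeblur / n_smallblur if n_smallblur else 0.0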

When the module 68 for determining the degree of blur has obtained the degree of blur, it provides the quantitative indicator of the degree of blur to the module 28 for calculating the total indicator, and the process of calculating the degree of blur ends. The process then proceeds to step S13 shown in Fig.

As described above, the module 23 for calculating the quantitative indicator of the degree of blur computes, on the basis of the input image, the quantitative indicator of the degree of blur, which indicates the degree of blur of the entire input image. Since, when the quantitative indicator of the degree of blur is calculated, the conditions for extracting contour points and the number of extracted contour points are controlled appropriately in accordance with the input image, it is possible to determine the degree of blur of the input image with higher accuracy.

Meanwhile, the above description was given such that the degree of blur of the entire input image is obtained as the quantitative indicator of the degree of blur. However, the degree of blur of the area occupied by the object may also be obtained as the quantitative indicator of the degree of blur, by targeting the processing only at that area of the input image.

Description of the process of calculating the quantitative indicator of brightness

Next, with reference to the flowchart of the algorithm shown in Fig, a description will be given of the process of calculating the quantitative indicator of brightness, which corresponds to the process at step S13 shown in Fig.

At step S81, the module 91 for extracting the object performs the process of creating an object map so as to create an object map from the input image provided from the receiving module 22, and provides this object map to the module 92 of multiplication. Meanwhile, details of the process of creating an object map will be described later.

At step S82, the module 92 of multiplication multiplies the input image provided from the receiving module 22 by the object map provided from the module 91 for extracting the object, thereby generating an image of the object, and provides the image of the object to the module 93 for generating a histogram. Namely, when a pixel of the image of the object to which attention is drawn is taken as the pixel of interest, the pixel value of this pixel of interest is the product of the pixel value of the pixel of the object map and the pixel value of the pixel of the input image located at the same place as the pixel of interest. This image of the object is an image displaying only the area occupied by the object on the input image.
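
A minimal sketch of this multiplication with NumPy, assuming a single-channel input image and an object map holding the values 0 and 1 (as described later), so that background pixels become 0:

import numpy as np

def object_image(input_image, object_map):
    """Pixel-wise product of the input image and the object map (step S82)."""
    return input_image.astype(np.float64) * object_map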

At step S83, on the basis of the image of the object provided from the module 92 of multiplication, the module 93 for generating a histogram generates a brightness histogram in which each column corresponds to a range of brightness values of the pixels of the image of the object, and provides the brightness histogram to the module 94 of normalization. In addition, the module 94 of normalization normalizes the histogram supplied from the module 93 for generating a histogram and provides it to the module 95 for calculating the quantitative indicator.

At step S84, based on the histogram supplied from the module 94 of normalization, the module 95 for calculating the quantitative indicator calculates the quantitative indicator of brightness in accordance with the K-NN (K nearest neighbors) method or the like, and provides this quantitative indicator of brightness to the module 28 for calculating the total indicator.

For example, the module 95 for calculating the quantitative indicator stores, for professional images and amateur images prepared in advance, a number of brightness histograms that were generated through processes identical to the processes at steps S81 to S83. The module 95 for calculating the quantitative indicator calculates the distance between each of the stored histograms of the professional images and amateur images and the histogram supplied from the module 94 of normalization.

Namely, the module 95 for calculating the quantitative indicator sets, as the distance between two histograms, the total of the differences of the frequency values of each column between the brightness histogram of the image of the object and a histogram stored in advance. That is, the difference of the frequency values is obtained between columns with the same representative value, and the total of the differences obtained for each column is defined as the distance between the histograms.

Then the module 95 for calculating the quantitative indicator selects K distances in ascending order from among the distances between the histogram of the image of the object and the histograms stored in advance.

In addition, the module 95 for calculating the quantitative indicator subtracts, among the K selected distances, the number of distances between the histogram of the image of the object and histograms of amateur images from the number of distances between the histogram of the image of the object and histograms of professional images, and sets the resulting value as the quantitative indicator of brightness.

The distance between the histogram of the image of the object and the histogram of a professional image or an amateur image is the total of the differences of the frequency values of each column. Therefore, the more similar the brightness distributions of these images, the smaller the value of the distance. In other words, the smaller the distance, the closer the image of the object is to the professional image or amateur image concerned.

In this case, the difference between the number of distances to histograms of professional images and the number of distances to histograms of amateur images is set as the quantitative indicator of brightness. Therefore, the more professional images have a brightness distribution similar to the brightness distribution of the image of the object, the larger the value of the quantitative indicator of brightness. Thus, this quantitative indicator of brightness indicates the degree of similarity of the brightness distribution of the image of the object with respect to professional images, and the larger the value of the quantitative indicator of brightness of the input image, the higher the evaluation of the image fixation of the input image.
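
A sketch of this K-NN scoring under stated assumptions: the histograms are equally binned NumPy arrays, each stored histogram carries the label 'pro' or 'amateur', and the distance is the per-column sum of absolute frequency differences described above:

import numpy as np

def brightness_indicator(object_hist, stored_hists, labels, k):
    """Among the K nearest stored histograms, returns (number of
    professional histograms) - (number of amateur histograms)."""
    distances = [np.abs(object_hist - h).sum() for h in stored_hists]
    nearest = sorted(zip(distances, labels))[:k]  # the K smallest distances
    n_pro = sum(1 for _, label in nearest if label == 'pro')
    return n_pro - (k - n_pro)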

As described above, when the quantitative indicator of brightness has been calculated, the module 95 for calculating the quantitative indicator provides the calculated quantitative indicator of brightness to the module 28 for calculating the total indicator, and the process of calculating the quantitative indicator of brightness ends. After that, the process goes to step S14 shown in Fig.

As described above, the module 24 for calculating the quantitative indicator of brightness extracts the area of the object from the input image and compares the distribution of brightness in this area of the object with the distribution of brightness in the area of the object of professional images and amateur images, so as to compute the quantitative indicator of brightness. Since, when the quantitative indicator of brightness is calculated, the brightness distributions are compared by targeting only the area of the object of the input image, it is possible to assess the image fixation condition of the input image in a more appropriate way.

For example, in general, an image in which the brightness of the area of the object is high is regarded as an image having a high evaluation, recorded satisfactorily, even if the area occupied by the background is dark. In this case, if the entire image is the target of processing and the image fixation condition of the given image is assessed using the brightness distribution as an indicator, then, even though the brightness distribution of the area of the object is close to the brightness distribution of the area occupied by the object on a professional image, the evaluation of the image becomes low if the brightness distribution of the area occupied by the background differs from that of the professional image.

As described above, in the case where the image fixation condition of a given image is to be assessed using the brightness distribution as an indicator, what matters is the distribution of brightness in the area of the object within the image, and an assessment of the distribution of brightness of the area occupied by the background is not necessarily required. An evaluation of the input image is quite possible on the basis of the brightness distribution of the area occupied by the object alone; on the contrary, if the distribution of brightness in the area occupied by the background is also considered, there is a risk that the assessment method becomes complicated or that an erroneous assessment is made.

Accordingly, in the module 24 for calculating the quantitative indicator of brightness, by calculating the quantitative indicator of brightness with only the area of the object of the input image as the target of processing, it is possible to assess the image fixation condition of the input image more simply and in a more appropriate way.

Description of the process of creating an object map

In addition, below, with reference to the flowchart of the algorithm shown in Fig, a description will be given of the process of creating an object map, which corresponds to the process at step S81 shown in Fig.

At step S111, the module 121 for extracting information about brightness performs the process of extracting information about brightness so as to create, based on the input image provided from the receiving module 22, a map of information about brightness, and provides this map of information about brightness to the module 125 for creating the object map. Then, at step S112, the module 122 for extracting color information executes the process of extracting color information so as to create, based on the input image provided from the receiving module 22, maps of color information, and provides these maps of color information to the module 125 for creating the object map.

At step S113, the module 123 for extracting contours performs the process of extracting contour information so as to create, based on the input image provided from the receiving module 22, maps of contour information, and provides these maps of contour information to the module 125 for creating the object map. In addition, at step S114, the module 124 for extracting information about the face performs the process of extracting information about the face so as to create, based on the input image supplied from the receiving module 22, a map of information about the face, and provides this map of information about the face to the module 125 for creating the object map.

Meanwhile, more details of the above-mentioned process of extracting information about brightness, process of extracting color information, process of extracting contour information and process of extracting information about the face will be described later.

At step S115, the module 125 for creating the object map creates the object map using the maps from the map of information about brightness to the map of information about the face, provided from the modules from the module 121 for extracting information about brightness to the module 124 for extracting information about the face, and provides the object map to the module 92 of multiplication shown in figure 3.

For example, the module 125 for creating the object map linearly combines the information maps using the information weight (Wb), a weighting factor obtained in advance for each information map, and normalizes the resulting map by multiplying its pixel values by the object weight (Wc), a weighting factor obtained in advance, thereby forming the object map.

Namely, if a pixel of the object map to be obtained to which attention is drawn is defined as the pixel of interest, the pixel value of each information map located at the same place as the pixel of interest is multiplied by the information weight (Wb) intended for that information map, and the total of the pixel values multiplied by the information weight (Wb) is set as the pixel value of the pixel of interest. In addition, the pixel value of each pixel of the object map obtained in this way is multiplied by the object weight (Wc), obtained in advance with respect to the object map, so as to be normalized, which forms the final object map.

Meanwhile, to describe it in more detail, as the maps of color information, a map of color information for the "red/green" difference and a map of color information for the "blue/yellow" difference are used, and as the maps of contour information, maps of contour information for each of the directions of 0 degrees, 45 degrees, 90 degrees and 135 degrees are used; from these the object map is created. In addition, the object weight (Wc) has been obtained in advance by learning, and the normalization causes the pixel value of each pixel of the object map to become one of the values 0 and 1. Namely, in the course of the normalization, a threshold process using a pre-specified threshold value converts the pixel values to binary form.
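
A sketch of step S115 following the description above; the weights Wb and Wc and the threshold are assumed to have been obtained in advance by learning, and the names are illustrative:

import numpy as np

def create_object_map(information_maps, information_weights, wc, threshold):
    """Linearly combines the information maps with the weights Wb, scales
    the result by the object weight Wc and binarizes it by a threshold."""
    combined = sum(wb * m for wb, m in zip(information_weights,
                                           information_maps))
    normalized = wc * combined
    return (normalized > threshold).astype(np.float64)  # values 0 and 1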

When the object map has been created in the manner described above and provided to the module 92 of multiplication, the process of creating the object map ends. After that, the process goes to step S82 shown in Fig.

As described above, the module 91 for extracting the object extracts information from each region of the input image and creates the object map.

Description of the process of extracting information about brightness

Next, with reference to the flowcharts of the algorithms shown in Fig-24, a description will be given of the processes corresponding to the respective processes at steps S111 to S114 shown in Fig.

First, with reference to the flowchart of the algorithm shown in Fig, the following describes the process of extracting information about brightness, corresponding to the process at step S111 shown in Fig.

At step S151, the module 151 for generating the brightness image generates, using the input image supplied from the receiving module 22, a brightness image, and provides the brightness image to the module 152 for generating pyramid images. For example, the module 151 for generating the brightness image multiplies the value of each of the red, green and blue components of a pixel of the input image by a coefficient set for each component, and sets the sum of the component values multiplied by the coefficients as the pixel value of the pixel of the brightness image located at the same place as that pixel of the input image. Thus, the luminance component of a composite signal composed of the luminance component (Y) and the color-difference components (Cb, Cr) is obtained. Meanwhile, the average value of the red, green and blue components of a pixel may also be set as the pixel value of the pixel of the brightness image.
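
As an illustrative sketch of this conversion, the commonly used ITU-R BT.601 luminance coefficients are assumed here; the embodiment itself only states that per-component coefficients are used:

import numpy as np

def brightness_image(rgb):
    """rgb: H x W x 3 array of red, green and blue components; returns the
    H x W brightness image Y = 0.299 R + 0.587 G + 0.114 B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b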

At step S152, based on the brightness image provided from the module 151 for generating the brightness image, the module 152 for generating pyramid images generates pyramid images of each of the hierarchical levels from level L1 to level L8, and provides these pyramid images to the module 153 for calculating the difference.

At step S153, the module 153 for calculating the difference generates and normalizes difference images using the pyramid images supplied from the module 152 for generating pyramid images, and provides them to the module 154 for creating the map of information about brightness. The normalization is performed so that the pixel values of the pixels of a difference image take, for example, values between 0 and 255.

To put it concretely, the module 153 for calculating the difference obtains the differences between the pyramid images of the combinations of the hierarchical levels L6 and L3, L7 and L3, L7 and L4, L8 and L4, and L8 and L5 from among the pyramid images of brightness of each hierarchical level. As a result, a total of five brightness difference images are obtained.

For example, in the case where a difference image is to be generated for the combination of level L6 and level L3, the pyramid image of level L6 is subjected to up-conversion in accordance with the size of the pyramid image of level L3. Namely, the pixel value of one pixel of the pyramid image of level L6 before the up-conversion is set as the pixel values of the several adjacent pixels of the pyramid image of level L6 after the up-conversion that correspond to that one pixel. Then the difference between the pixel value of a pixel of the pyramid image of level L6 and the pixel value of the pixel of the pyramid image of level L3 located at the same place is obtained, and this difference is set as the pixel value of a pixel of the difference image.

The process of generating these difference images is equivalent to a process in which a filtering process using a band-pass filter is performed on the brightness image and predetermined frequency components are extracted from the brightness image. The pixel value of a pixel of a difference image obtained in this way indicates the difference between the pixel values of the pyramid images of the corresponding levels, i.e. the difference between the brightness of a given pixel of the input image and the average brightness of the area surrounding that pixel.
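
A sketch of the generation of one difference image, with up-conversion by pixel replication as described above; integer size ratios are assumed, and the absolute difference is used here since the text does not state whether the signed or the absolute difference is kept:

import numpy as np

def difference_image(fine_level, coarse_level):
    """Up-converts the coarser pyramid image by pixel replication to the
    size of the finer one and takes the per-pixel difference (step S153)."""
    fh, fw = fine_level.shape
    ch, cw = coarse_level.shape
    upscaled = np.repeat(np.repeat(coarse_level, fh // ch, axis=0),
                         fw // cw, axis=1)
    return np.abs(fine_level.astype(np.float64) - upscaled)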

Typically, a region of an image having a large difference in brightness from its surrounding area is a region that attracts the attention of a person who sees this image; consequently, this region has a high probability of being the region of the object. Therefore, it can be said that a pixel having a larger pixel value in each difference image indicates a region that has a higher probability of being the region of the object.

At step S154, the module 154 for creating the map of information about brightness creates, based on the difference images supplied from the module 153 for calculating the difference, the map of information about brightness, and provides the map of information about brightness to the module 125 for creating the object map. When the map of information about brightness has been provided from the module 154 for creating the map of information about brightness to the module 125 for creating the object map, the process of extracting information about brightness ends, and the process then proceeds to step S112 shown in Fig.

For example, the module 154 for creating the map of information about brightness performs a weighted summation of the five provided difference images using the difference weight (Wa), a weighting factor for each difference image, so as to obtain one image. Namely, each of the pixel values of the pixels located at the same place in each difference image is multiplied by the difference weight (Wa), and the total of the pixel values multiplied by the difference weight (Wa) is obtained.

Meanwhile, when the map of information about brightness is to be created, up-conversion of the difference images is performed so that the difference images have the same size.

As described above, the module 121 for extracting information about brightness obtains the brightness image from the input image and creates the map of information about brightness from the brightness image. In the module 121 for extracting information about brightness, the difference between the brightness of each region of the input image and the average brightness of the area surrounding that region of the input image is extracted as the quantitative value of the characteristic, and the map of information about brightness indicating this quantitative value is created. In accordance with the map of information about brightness obtained in this way, it is possible to easily detect a region having a large difference in brightness on the input image, i.e. a region that would readily be noticed by an observer glancing at the input image.

Description of the process of extracting color information

Next, with reference to the flowchart of the algorithm shown in Fig, a description will be given of the process of extracting color information, corresponding to the process at step S112 shown in Fig.

At step S181, the module 181 for generating the "red/green" difference image generates, using the input image supplied from the receiving module 22, a "red/green" difference image, and provides this "red/green" difference image to the module 183 for generating pyramid images.

At step S182, the module 182 for generating the "blue/yellow" difference image generates, using the input image supplied from the receiving module 22, a "blue/yellow" difference image, and provides this "blue/yellow" difference image to the module 184 for generating pyramid images.

At step S183, the module 183 for generating pyramid images and the module 184 for generating pyramid images generate pyramid images using, respectively, the "red/green" difference image coming from the module 181 for generating the "red/green" difference image and the "blue/yellow" difference image coming from the module 182 for generating the "blue/yellow" difference image.

For example, the module 183 for generating pyramid images generates a number of "red/green" difference images with different resolutions so as to generate pyramid images of each of the hierarchical levels from level L1 to level L8, and provides this set of "red/green" difference images to the module 185 for calculating the difference. Similarly, the module 184 for generating pyramid images generates a number of "blue/yellow" difference images with different resolutions so as to generate pyramid images of each of the hierarchical levels from level L1 to level L8, and provides this set of "blue/yellow" difference images to the module 186 for calculating the difference.

At step S184, the difference calculation modules 185 and 186 generate difference images based on the pyramid images supplied from the pyramid image generation modules 183 and 184, normalize these difference images, and supply them, respectively, to the color information map creation module 187 and the color information map creation module 188. During normalization of a difference image, for example, the pixel value of each pixel is set to a value between 0 and 255.

For example, the difference calculation module 185 obtains the differences between the pyramid images at the level combinations L6 and L3, L7 and L3, L7 and L4, L8 and L4, and L8 and L5 from among the red/green difference pyramid images of each hierarchical representation. As a result, a total of five red/green difference images is obtained.

Similarly, the difference calculation module 186 obtains the differences between the pyramid images at the level combinations L6 and L3, L7 and L3, L7 and L4, L8 and L4, and L8 and L5 from among the blue/yellow difference pyramid images of each hierarchical representation. As a result, a total of five blue/yellow difference images is obtained.
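The five level-pair differences might be computed as below. The pairing is the one named in the text, while the bilinear up-conversion and the use of the absolute difference are assumptions:

```python
import cv2
import numpy as np

# The five fixed level combinations named in the text (1-based levels).
LEVEL_PAIRS = [(6, 3), (7, 3), (7, 4), (8, 4), (8, 5)]

def level_pair_differences(pyramid):
    """Five difference images from the fixed pyramid-level combinations.

    The coarser level is up-converted to the size of the finer level
    before subtraction; bilinear interpolation and the absolute
    difference are assumptions.
    """
    diffs = []
    for coarse, fine in LEVEL_PAIRS:
        fine_img = pyramid[fine - 1]
        h, w = fine_img.shape[:2]
        up = cv2.resize(pyramid[coarse - 1], (w, h),
                        interpolation=cv2.INTER_LINEAR)
        diffs.append(np.abs(fine_img - up))
    return diffs
```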

The process of generating these difference images is equivalent to performing a filtering process on the red/green or blue/yellow difference image using a band-pass filter, so as to extract predetermined frequency components from the red/green or blue/yellow difference image. The pixel value of a pixel in a difference image obtained in this way indicates the difference between the specific color components of the pyramid images at the two levels, that is, the difference between the specific color component at that pixel of the input image and the average of that specific color component in the area surrounding the pixel.

Usually, an area whose color stands out from the surrounding area in an image, that is, an area having a large difference in a specific color component from the surrounding area, attracts the attention of a person who sees the image. Consequently, such an area has a high probability of being the subject area. Therefore, it can be said that, in each difference image, a pixel having a higher pixel value indicates an area that is more likely to be the subject area.

At step S185, the color information map creation modules 187 and 188 create color information maps using the difference images supplied, respectively, from the difference calculation modules 185 and 186, and supply these color information maps to the subject map creation module 125.

For example, the color information map creation module 187 performs a weighted summation of the red/green difference images supplied from the difference calculation module 185, using the difference weights Wa obtained in advance for each difference image, thus forming a single color information map for the red/green difference.

Similarly, the color information map creation module 188 performs a weighted summation of the blue/yellow difference images supplied from the difference calculation module 186, using the difference weights Wa obtained in advance, thus forming a single color information map for the blue/yellow difference. Meanwhile, when a color information map is to be created, up-conversion of the difference images is performed so that these difference images have the same size.

When the color information map creation modules 187 and 188 have supplied, respectively, the color information map for the red/green difference and the color information map for the blue/yellow difference, obtained in the manner described above, to the subject map creation module 125, the color information extraction process ends. The process then proceeds to step S113 shown in Fig.

As described above, the color information extraction module 122 obtains difference images for specific color components from the input image, and generates color information maps from these images. In the color information extraction module 122, the difference between a specific color component of each area of the input image and the average of that specific color component in the area surrounding it is extracted as the quantitative characteristic value, and a color information map indicating that quantitative characteristic value is created. With the color information maps obtained in the manner described above, it is easy to detect areas of the input image having a large difference in a specific color component, that is, areas likely to be noticed by an observer who glances at the input image.

Meanwhile, it has been described that the color information extraction module 122 extracts, as the color information from the input image, the difference between the R (red) and G (green) components and the difference between the B (blue) and Y (yellow) components. Alternatively, the color difference component Cr and the color difference component Cb may be extracted. Here, the color difference component Cr is the difference between the red component and the luminance component, and the color difference component Cb is the difference between the blue component and the luminance component.

Description of the contour information extraction process

Fig. is a flowchart illustrating the contour information extraction process corresponding to step S113 shown in Fig. This contour information extraction process is described below.

At step S211, the contour image generation modules 211 to 214 perform a filtering process using Gabor filters on the input image supplied from the acquisition module 22, and generate contour images in which the pixel values represent the contour strength in the directions of 0 degrees, 45 degrees, 90 degrees and 135 degrees, respectively. The contour image generation modules 211 to 214 then supply the generated contour images to the pyramid image generation modules 215 to 218, respectively.
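A Gabor filter bank over the four directions might be sketched as follows. All filter parameters (kernel size, sigma, wavelength, aspect ratio) are illustrative, as the text does not specify them:

```python
import cv2
import numpy as np

def contour_images(gray):
    """Contour-strength images for 0, 45, 90 and 135 degrees.

    Gabor-filter parameters are not given in the text and are chosen
    here purely for illustration.
    """
    images = []
    for theta_deg in (0, 45, 90, 135):
        # Arguments: ksize, sigma, theta, lambd (wavelength), gamma.
        kernel = cv2.getGaborKernel((9, 9), 2.0,
                                    np.deg2rad(theta_deg), 4.0, 0.5)
        response = cv2.filter2D(gray.astype(np.float32), -1, kernel)
        images.append(np.abs(response))  # pixel value = contour strength
    return images
```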

At step S212, the pyramid image generation modules 215 to 218 generate pyramid images using the contour images supplied, respectively, from the contour image generation modules 211 to 214, and supply the pyramid images to the difference calculation modules 219 to 222, respectively.

For example, the pyramid image generation module 215 generates multiple contour images for the 0-degree direction with different resolutions, so as to produce pyramid images at each of the hierarchical levels L1 to L8, and supplies these pyramid images to the difference calculation module 219. Similarly, the pyramid image generation modules 216 to 218 generate pyramid images at each of the hierarchical levels L1 to L8 and supply these pyramid images to the difference calculation modules 220 to 222, respectively.

At step S213, the difference calculation modules 219 to 222 generate difference images using the pyramid images supplied, respectively, from the pyramid image generation modules 215 to 218, normalize these difference images, and supply them to the contour information map creation modules 223 to 226, respectively. During normalization of a difference image, for example, the pixel value of each pixel is set to a value between 0 and 255.

For example, the difference calculation module 219 obtains the differences between the pyramid images at the level combinations L6 and L3, L7 and L3, L7 and L4, L8 and L4, and L8 and L5 from among the 0-degree contour pyramid images of each hierarchical representation supplied from the pyramid image generation module 215. As a result, a total of five contour difference images is obtained.

Similarly, the difference calculation modules 220 to 222 obtain the differences between the pyramid images at the level combinations L6 and L3, L7 and L3, L7 and L4, L8 and L4, and L8 and L5 from among the pyramid images of each hierarchical representation. As a result, a total of five difference images is obtained for each contour direction.

The process of generating these difference images is equivalent to performing a filtering process using a band-pass filter, so as to extract predetermined frequency components from a contour image. The pixel value of a pixel in a difference image obtained in this way indicates the difference between the contour strengths of the pyramid images at the two levels, that is, the difference between the contour strength at a given location of the input image and the average contour strength in the area surrounding that location.

Usually, an area having a higher contour strength than the surrounding area in an image attracts the attention of a person who sees the image. Consequently, such an area has a high probability of being the subject area. Therefore, it can be said that, in each difference image, a pixel having a higher pixel value indicates an area that is more likely to be the subject area.

At step S214, the contour information map creation modules 223 to 226 create contour information maps for each direction, using the difference images supplied, respectively, from the difference calculation modules 219 to 222, and supply them to the subject map creation module 125.

For example, the contour information map creation module 223 performs a weighted summation of the difference images supplied from the difference calculation module 219, using the difference weights Wa obtained in advance, thereby forming a contour information map for the 0-degree direction.

Similarly, the contour information map creation modules 224 to 226 perform weighted summations of the difference images supplied from the difference calculation modules 220 to 222, using the respective difference weights Wa, so as to create contour information maps for the 45-degree, 90-degree and 135-degree directions, respectively. Meanwhile, when a contour information map is to be created, up-conversion of the difference images is performed so that these difference images have the same size.

When the contour information map creation modules 223 to 226 have supplied the total of four contour information maps, obtained in the manner described above, to the subject map creation module 125, the contour information extraction process ends. The process then proceeds to step S114 shown in Fig.

As described above, the contour information extraction module 123 obtains from the input image the difference images for contours in particular directions, and generates contour information maps from these difference images. In the contour information extraction module 123, the difference between the contour strength in a particular direction in each area of the input image and the average contour strength in that direction in the area surrounding it is extracted as the quantitative characteristic value, and a contour information map indicating that quantitative characteristic value is created. With the contour information maps obtained in this way for each direction, it is easy to detect areas of the input image whose contour strength in a particular direction is higher than that of the surrounding area, that is, areas likely to be noticed by an observer who glances at the input image.

Meanwhile, it has been described that Gabor filters are used to extract contours in the contour information extraction process. Alternatively, any other contour extraction filter, such as a Sobel filter or a Roberts filter, may be used.

Description of the face information extraction process

Next, with reference to the flowchart shown in Fig., a description will be given of the face information extraction process corresponding to step S114 shown in Fig.

At step S241, the face detection module 251 detects the area of a human face based on the input image supplied from the acquisition module 22, and supplies the detection result to the face information map creation module 252. For example, the face detection module 251 performs a filtering process using Gabor filters on the input image, and detects the face area in the input image by extracting characteristic areas such as the eyes, mouth and nose from the input image.
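For illustration only, the sketch below substitutes OpenCV's bundled Haar cascade for the Gabor-feature detector described in the text; it is a different detector that likewise returns rectangular candidate areas:

```python
import cv2

def detect_face_candidates(bgr):
    """Rectangular face-candidate areas in the input image.

    The text describes a Gabor-feature detector for eyes, mouth and
    nose; as a stand-in, this sketch uses OpenCV's bundled Haar
    cascade, which also yields candidate rectangles.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Each candidate is (x, y, width, height); overlapping candidates
    # for one face are expected and are handled by the map stage below.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
```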

At step S242, the face information map creation module 252 creates a face information map using the detection result supplied from the face detection module 251, and supplies the face information map to the subject map creation module 125.

For example, suppose that, as a result of face detection on the input image, multiple rectangular areas (hereinafter referred to as candidate areas) believed to contain a face have been found on the input image. Suppose also that multiple candidate areas are found near some location on the input image, and that parts of these candidate areas may overlap each other. That is, for example, in the case where multiple candidate areas are obtained for the area of a single face on the input image, parts of these candidate areas overlap.

For each candidate area obtained by the face detection, the face information map creation module 252 generates a detection image having the same size as the input image. This detection image is formed in such a way that the pixel values of the pixels within the candidate area being processed are higher than the pixel values of the pixels in any area other than the candidate area.

In addition, the more likely a candidate area is judged to contain a human face, the larger the pixel values of the detection image at the location of that candidate area. The face information map creation module 252 adds up the detection images obtained in the manner described above so as to generate a single image, and normalizes it, thus forming the face information map. Therefore, in the face information map, the pixel values are larger in the areas where many candidate areas overlap on the input image, that is, where the probability that a face is contained is higher. Meanwhile, during normalization, the pixel value of each pixel is set to a value between 0 and 255.
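The accumulation and normalization of the detection images might be sketched as follows; the per-candidate weighting rule is an assumption:

```python
import numpy as np

def face_information_map(image_shape, candidates, likelihoods=None):
    """Sum per-candidate detection images and normalize to 0..255.

    candidates: (x, y, w, h) rectangles; likelihoods: optional weights
    reflecting how probable each candidate is to contain a face
    (uniform weights if omitted; the weighting rule is an assumption).
    """
    h, w = image_shape[:2]
    if likelihoods is None:
        likelihoods = [1.0] * len(candidates)
    acc = np.zeros((h, w), dtype=np.float64)
    for (x, y, cw, ch), s in zip(candidates, likelihoods):
        detection = np.zeros((h, w), dtype=np.float64)
        detection[y:y + ch, x:x + cw] = s  # higher values inside the candidate
        acc += detection                   # overlapping candidates accumulate
    if acc.max() > 0:
        acc = acc / acc.max() * 255.0      # normalization step from the text
    return acc
```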

When the face information map has been created, the face information extraction process ends. The process then proceeds to step S115 shown in Fig.

As described above, the face information extraction module 124 detects a face in the input image and generates a face information map based on the detection result. With the face information map obtained in the manner described above, it is easy to detect the area of a human face as the subject.

In the subject extraction module 91 shown in Fig.4, each information map is created as described above, and the subject map is created from these information maps.

Description of the quantitative contour index calculation process

In addition, below, with reference to the flowcharts shown in the figures up to Fig.28, a description will be given of the quantitative contour index calculation process, the quantitative color distribution index calculation process and the quantitative color saturation index calculation process, which correspond to the processes at steps S14 to S16 shown in Fig.

First, with reference to the flowchart shown in Fig., a description will be given of the quantitative contour index calculation process, which corresponds to step S14 shown in Fig. This process is performed by the quantitative contour index calculation module 25 shown in Fig.9.

At step S271, the subject extraction module 281 performs the subject map creation process so as to create a subject map from the input image supplied from the acquisition module 22, and supplies this subject map to the inversion module 282. Meanwhile, the subject map creation process is identical to the subject map creation process described with reference to Fig., and thus its description is omitted.

At step S272, the inversion module 282 performs an inversion process on the subject map supplied from the subject extraction module 281, and supplies the inverted subject map to the multiplication module 285. That is, the pixel value of each pixel of the subject map is inverted from 1 to 0 or from 0 to 1. As a result, using the subject map after inversion makes it possible to extract the area of the input image occupied by the background.

At step S273, the filter processing module 283 performs a filtering process using a Laplacian filter on the input image supplied from the acquisition module 22, so as to generate a Laplacian image, and supplies the Laplacian image to the normalization module 284. In addition, the normalization module 284 normalizes the Laplacian image supplied from the filter processing module 283, and supplies the normalized Laplacian image to the multiplication module 285.

At step S274, the multiplication module 285 generates a background image by multiplying the Laplacian image supplied from the normalization module 284 by the inverted subject map supplied from the inversion module 282, and supplies the background image to the histogram generation module 286. That is, the product of the pixel values of the Laplacian image and the inverted subject map at the same location is obtained, and this product is set as the pixel value of that pixel of the background image. The background image obtained in this way indicates the contour strength of the part of the input image that is not the subject area, that is, of the area occupied by the background.

At step S275, using the background image supplied from the multiplication module 285, the histogram generation module 286 generates a histogram indicating the complexity of the contour in the area of the input image occupied by the background.

That is, the histogram generation module 286 first performs a threshold process on the background image. Concretely, a pixel of the background image whose pixel value is greater than or equal to a certain threshold value keeps its pixel value as it is, while the pixel value of a pixel whose value is less than this threshold value is set to 0.

Next, the histogram generation module 286 generates a contour histogram whose bins correspond to ranges of the pixel values of the background image, that is, ranges of contour strength. The histogram generation module 286 then supplies the generated histogram to the quantitative index calculation module 287.

At step S276, the quantitative index calculation module 287 calculates the quantitative contour index using the histogram supplied from the histogram generation module 286, and supplies this quantitative contour index to the total quantitative index calculation module 28.

For example, the range of values that a pixel of the background image can take after the threshold process is taken to be from 0 to 1. Then the quantitative index calculation module 287 extracts, as the maximum frequency value Ma, the frequency value of the bin having the highest frequency among the one or more bins covering the range of pixel values (contour strength) from 0 to 0.1 in the histogram. That is, among the bins whose representative values lie between 0 and 0.1, the bin with the highest frequency value is selected, and its frequency value is set as the maximum frequency value Ma.

In addition, the quantitative index calculation module 287 extracts, as the minimum frequency value Mb, the frequency value of the bin whose frequency is the lowest among the one or more bins covering the range of pixel values (contour strength) from 0.8 to 0.9 in the histogram. That is, among the bins whose representative values lie between 0.8 and 0.9, the bin with the lowest frequency value is selected, and its frequency value is set as the minimum frequency value Mb.

Then the quantitative index calculation module 287 subtracts the minimum frequency value Mb from the maximum frequency value Ma, and sets the resulting value as the quantitative contour index. The quantitative contour index obtained in this way indicates the difference between the number of pixels having low contour strength and the number of pixels having high contour strength in the area of the input image occupied by the background, that is, the complexity of the contour in the background area.
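Putting steps S271 to S276 together, the quantitative contour index might be sketched as below. The ten-bin layout and the normalization of the Laplacian image are assumptions, since the text fixes only the 0 to 0.1 and 0.8 to 0.9 ranges:

```python
import cv2
import numpy as np

def quantitative_contour_index(gray, subject_map, threshold=0.1):
    """Ma - Mb from the contour histogram of the background area.

    subject_map: binary array, 1 on the subject, 0 on the background.
    The bin layout (ten bins over 0..1) and the Laplacian
    normalization are assumptions.
    """
    lap = np.abs(cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F))
    if lap.max() > 0:
        lap = lap / lap.max()              # contour strength in 0..1
    values = lap[subject_map == 0]         # background pixels only
    values[values < threshold] = 0.0       # threshold process
    hist, _ = np.histogram(values, bins=10, range=(0.0, 1.0))
    ma = hist[0]   # frequency of the 0..0.1 bin (low contour strength)
    mb = hist[8]   # frequency of the 0.8..0.9 bin (high contour strength)
    return float(ma - mb)
```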

Usually, in an amateur image the contour of the area occupied by the background is complex; accordingly, the maximum frequency value Ma is often large and the minimum frequency value Mb small. Thus, the quantitative contour index of an image close to an amateur image is likely to take a large value. On the other hand, in a professional image the contour of the background area is simple, that is, the number of pixels having high contour strength is small, so both the maximum frequency value Ma and the minimum frequency value Mb are often small. Thus, the quantitative contour index of an image close to a professional image is likely to take a small value. Therefore, the smaller the value of the quantitative contour index, the higher the assessment of the capture state given to the input image.

When the calculated quantitative contour index has been supplied from the quantitative index calculation module 287 to the total quantitative index calculation module 28, the quantitative contour index calculation process ends. After this, the process proceeds to step S15 shown in Fig.

As described above, the quantitative contour index calculation module 25 extracts from the input image the area occupied by the background, and calculates, based on the contour strength of the area occupied by the background, the quantitative contour index indicating the complexity of the contour in the background area. By calculating the quantitative contour index while "targeting" only the area occupied by the background in this way, the capture state of the input image can be assessed more appropriately.

That is, there is a tendency for the contour of the background area of an amateur image to be complex, and for the contour of the background area of a professional image to be simple. Accordingly, using this tendency, a quantitative contour index indicating the complexity of the contour only in the area occupied by the background is obtained. Therefore, it is possible to assess the capture state of the input image more simply and properly, regardless of whether the contour in the area occupied by the subject is complex.

Description of the quantitative color distribution index calculation process

Next, with reference to the flowchart shown in Fig., a description will be given of the quantitative color distribution index calculation process, which corresponds to step S15 shown in Fig.

At step S301, the red component histogram generation module 311 to the blue component histogram generation module 313 generate a histogram of each of the R (red), G (green) and B (blue) components based on the input image supplied from the acquisition module 22, and supply these histograms to the normalization modules 314 to 316, respectively. For example, the histogram of the red component is a histogram whose bins correspond to ranges of values of the red component of the input image.

In addition, the normalization modules 314 to 316 normalize the histograms of the individual components supplied, respectively, from the histogram generation modules 311 to 313, and supply the normalized histograms to the histogram generation module 317.

At step S302, using the histograms of the individual color components supplied from the normalization modules 314 to 316, the histogram generation module 317 generates one color histogram composed of the red, green and blue components, and supplies this histogram to the quantitative index calculation module 318.

For example, in the case where the histogram of each of the red, green and blue components consists of 16 bins, one histogram with 16³ bins is generated for the colors composed of the red, green and blue components. This histogram indicates the distribution of color over the entire input image.
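A direct construction of such a joint histogram might look as follows. This is a Python sketch; the per-component normalization of step S301 is folded into the final normalization here:

```python
import numpy as np

def joint_color_histogram(rgb, bins=16):
    """Color histogram with bins**3 columns over (R, G, B) jointly.

    rgb: H x W x 3 uint8 image. Each channel is quantized to `bins`
    levels and the three indices are merged into one column index.
    """
    q = (rgb.astype(np.uint32) * bins) // 256     # 0..bins-1 per channel
    flat = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(flat.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()                      # normalized frequencies
```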

At step S303, based on the histogram supplied from the histogram generation module 317, the quantitative index calculation module 318 calculates the quantitative color distribution index using the K-nearest-neighbour (K-NN) method or a similar method, and supplies this quantitative color distribution index to the total quantitative index calculation module 28.

For example, the quantitative index calculation module 318 stores a set of color histograms generated, by the same processes as those at steps S301 and S302, for professional images and amateur images prepared in advance. The quantitative index calculation module 318 calculates the distances between the histogram supplied from the histogram generation module 317 and each of the stored histograms of the professional and amateur images.

Here, the distance between histograms is the total sum of the differences between the frequency values of each bin of the color histogram of the input image and one of the histograms stored in advance.

In addition, the quantitative index calculation module 318 selects K distances in ascending order of distance value from among the many distances obtained, and, among the selected distances, subtracts the number of distances between the histogram of the input image and amateur images from the number of distances between the histogram of the input image and professional images. The quantitative index calculation module 318 then sets the value obtained as a result of this subtraction as the quantitative color distribution index.
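The K-NN vote might be sketched as follows; the value of K and the use of absolute per-bin differences are assumptions:

```python
import numpy as np

def color_distribution_index(query_hist, pro_hists, amateur_hists, k=5):
    """K-NN vote: professional neighbours minus amateur neighbours.

    The distance is the sum of per-bin frequency differences, as the
    text states; the value of K and the use of absolute differences
    are assumptions.
    """
    labelled = [(np.abs(query_hist - h).sum(), +1) for h in pro_hists]
    labelled += [(np.abs(query_hist - h).sum(), -1) for h in amateur_hists]
    labelled.sort(key=lambda pair: pair[0])      # ascending distance
    votes = [label for _, label in labelled[:k]]
    return votes.count(+1) - votes.count(-1)
```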

With regard to the quantitative color distribution index obtained in this way, similarly to the above-mentioned quantitative brightness index, the more professional images whose color distribution resembles the color distribution of the input image there are, the larger the value of this index. That is, the quantitative color distribution index indicates the degree to which the color distribution of the input image resembles the color distribution of a professional image. The larger the value of the quantitative color distribution index of the input image, the higher the assessment of the capture state of the image.

When the quantitative color distribution index has been calculated, the quantitative index calculation module 318 supplies the calculated quantitative color distribution index to the total quantitative index calculation module 28, and the quantitative color distribution index calculation process finishes. After this, the process proceeds to step S16 shown in Fig.

As described above, the quantitative color distribution index calculation module 26 compares the color distribution of the input image with the color distributions of professional images and amateur images prepared in advance, thereby calculating the quantitative color distribution index. By comparing the color distribution of the entire input image in this way so as to calculate the quantitative color distribution index, the capture state of the input image can be assessed more appropriately.

That is, in the case where the capture state of an image is to be assessed using the color distribution as an indicator, the whole image must be considered. Therefore, by "targeting" the entire input image and comparing its color distribution with the color distributions of other images, the capture state of the input image can be assessed more appropriately.

Description of the quantitative color saturation index calculation process

Next, with reference to the flowchart shown in Fig., a description will be given of the quantitative color saturation index calculation process, which corresponds to step S16 shown in Fig.

At step S331, the conversion module 341 converts the input image supplied from the acquisition module 22 into an input image composed of the values of the H (hue), S (saturation) and V (brightness) components, and supplies this input image to the histogram generation module 342.

At step S332, the histogram generation module 342 generates a histogram of the saturation component using the input image supplied from the conversion module 341, and supplies the histogram to the quantitative index calculation module 343. For example, the histogram generation module 342 extracts, from among the pixels of the input image, those pixels whose H (hue) component is greater than or equal to a predetermined threshold value th1 and whose V (brightness) component is greater than or equal to a predetermined threshold value th2, and generates, using these extracted pixels, a histogram whose bins correspond to ranges of values of the S (saturation) component of the pixels.
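The thresholded saturation histogram might be sketched as follows. The threshold values and bin count are illustrative, and OpenCV's 8-bit HSV ranges (H in 0..179, S and V in 0..255) are an assumed representation, rescaled here to 0..1:

```python
import cv2
import numpy as np

def saturation_histogram(bgr, th1=0.05, th2=0.1, bins=32):
    """Histogram of the S component over pixels passing the H and V tests.

    th1 (hue) and th2 (brightness) and the bin count are not specified
    in the text and are illustrative only.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h = hsv[..., 0] / 179.0
    s = hsv[..., 1] / 255.0
    v = hsv[..., 2] / 255.0
    selected = s[(h >= th1) & (v >= th2)]  # pixels kept for the histogram
    hist, _ = np.histogram(selected, bins=bins, range=(0.0, 1.0))
    return hist, selected
```

The selected saturation values are returned as well, so that the mixture model of step S333 below can be fitted to them.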

At step S333, the quantitative index calculation module 343 calculates the quantitative color saturation index using the histogram supplied from the histogram generation module 342, and supplies it to the total quantitative index calculation module 28.

For example, the quantitative index calculation module 343 performs an approximation of the saturation histogram using a GMM (Gaussian mixture model), and obtains the degree of significance, the mean value and the variance of each distribution for the one or more resulting distributions. Meanwhile, the term "distribution" here refers to a portion of the overall distribution curve obtained by the approximation that has a single peak.

The quantitative index calculation module 343 takes the variance and the mean value of the distribution with the highest degree of significance as the quantitative color saturation index. As a result, during the process at step S17 shown in Fig., each of these parameters, the variance and the mean value of the color saturation, is converted, as a quantitative color saturation index, into a score using the conversion table.
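The GMM approximation might be sketched with scikit-learn as below. Fitting the mixture to the selected saturation values rather than to the histogram curve itself, and the number of mixture components, are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def color_saturation_index(saturation_values, n_components=3):
    """Mean and variance of the most significant fitted distribution.

    A Gaussian mixture approximates the saturation data; the component
    with the largest mixing weight (its "degree of significance")
    supplies the two index parameters.
    """
    x = np.asarray(saturation_values, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
    dominant = int(np.argmax(gmm.weights_))      # highest mixing weight
    mean = float(gmm.means_[dominant, 0])
    variance = float(gmm.covariances_[dominant, 0, 0])
    return mean, variance
```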

The quantitative color saturation index obtained in this way indicates the mean value and the variance of the saturation components of the entire input image. The values of the mean and the variance make it possible to determine whether the input image is closer to a professional image or to an amateur image. Therefore, the quantitative color saturation index makes it possible to assess the capture state of the input image.

When the quantitative color saturation index has been calculated, the quantitative index calculation module 343 supplies this calculated quantitative color saturation index to the total quantitative index calculation module 28, and the quantitative color saturation index calculation process ends. After that, the process proceeds to step S17 shown in Fig.

As described above, the quantitative color saturation index calculation module 27 calculates the mean value and the variance of the color saturation of the input image as the quantitative color saturation index. By obtaining the mean value and the variance of the color saturation while "targeting" the entire input image in the manner described above, and setting them as the quantitative color saturation index, the capture state of the input image can be assessed more appropriately.

That is, in the case where the capture state of an image is to be assessed using the color saturation as an indicator, the whole image must be considered. Therefore, by calculating the mean value and the variance of the color saturation over the entire input image, the capture state of the input image can be assessed more appropriately.

The above sequence of processes can be executed by hardware or by software. In the case where this sequence of processes is executed by software, a program forming the software is installed from a program recording medium onto a computer built into dedicated hardware or, for example, onto a general-purpose personal computer capable of performing various functions when various programs are installed on it.

Fig. is a block diagram illustrating an example configuration of a computer that executes the above-mentioned sequence of processes in accordance with programs.

In this computer, a CPU (central processing unit) 601, a ROM (read-only memory) 602 and a RAM (random access memory) 603 are connected to each other via a bus 604.

In addition, an input/output interface 605 is connected to the bus 604. Connected to the input/output interface 605 are: an input device 606 comprising a keyboard, a mouse, a microphone and the like; an output device 607 comprising a display, a loudspeaker and the like; a recording device 608 comprising a hard disk, a non-volatile memory device and the like; a communication device 609 comprising a network interface and the like; and a drive 610 for driving a removable information medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor storage device.

In the computer having the above-described configuration, the CPU 601 loads, for example, the program recorded in the recording device 608 into the RAM 603 via the input/output interface 605 and the bus 604, and executes the program so as to implement the above-mentioned sequence of processes.

The program executed by the computer (CPU 601) is provided, for example, recorded on the removable information medium 611, which is a packaged storage medium such as a magnetic disk (including a flexible disk), an optical disk (such as a CD-ROM (compact disc read-only memory) or a DVD (digital versatile disc)), a magneto-optical disk or a semiconductor storage device. Alternatively, the program is provided through a wired or wireless transmission medium such as a local area network, the Internet or digital satellite broadcasting.

The program can then be installed into the recording device 608 via the input/output interface 605 by placing the removable information medium 611 in the drive 610. Alternatively, the program may be received by the communication device 609 through a wired or wireless transmission medium and installed into the recording device 608. Alternatively, the program may be installed in advance in the ROM 602 or the recording device 608.

Meanwhile, the program executed by the computer may be a program whose processes are performed sequentially in time in the order described in this description, or a program whose processes are performed in parallel or as necessary, for example when they are called.

Meanwhile, embodiments of the present invention are not limited to the above-described embodiments; various modifications can be made without departing from the scope and essence of the present invention.

List of reference numerals

23 - module for calculating the quantitative blur index, 24 - module for calculating the quantitative brightness index, 25 - module for calculating the quantitative contour index, 26 - module for calculating the quantitative color distribution index, 27 - module for calculating the quantitative color saturation index, 28 - module for calculating the total quantitative index, 29 - display control module, 91 - subject extraction module.

1. An image processing device, comprising: first evaluation value calculation means for extracting a quantitative value of a first characteristic from the entire input image and for calculating, from the quantitative value of the first characteristic, a first partial evaluation value characterizing an assessment of the input image based on the first characteristic; second evaluation value calculation means for extracting a quantitative value of a second characteristic from a specified area of the input image and for calculating, from the quantitative value of the second characteristic, a second partial evaluation value characterizing an assessment of the input image based on the second characteristic; and total evaluation value calculation means for calculating, based on the first partial evaluation value and the second partial evaluation value, a total evaluation value characterizing the assessment of the capture state of the input image, wherein the second evaluation value calculation means includes subject area identification means for extracting, from the corresponding areas of the input image, a quantitative value of a third characteristic possessed by the subject area in the input image, in order to identify the subject area in the input image, and calculation means for extracting the quantitative value of the second characteristic from the subject area, in which the input image contains the subject, or from the background area, in which the input image does not contain the subject, and for calculating the second partial evaluation value.

2. The image processing device according to claim 1, wherein the calculation means is configured to extract, as the quantitative value of the second characteristic, the brightness values in the corresponding regions of the subject area of the input image, and to calculate the second partial evaluation value based on the distribution of brightness values in the subject area.

3. The image processing device according to claim 1, wherein the calculation means is configured to extract, as the quantitative value of the second characteristic, the contour strength in the corresponding regions of the background area of the input image, and to calculate the second partial evaluation value based on the complexity of the contour in the background area.

4. The image processing device according to claim 1, wherein the first evaluation value calculation means is configured to calculate the first partial evaluation value based on at least one of the degree of blur, the color distribution, the mean value of the color saturation and the variance of the color saturation over the entire input image.

5. The image processing device according to claim 1, wherein the total evaluation value calculation means is configured to sum a value predetermined for the value of the first partial evaluation value and a value predetermined for the value of the second partial evaluation value, in order to calculate the total evaluation value.

6. The image processing device according to claim 5, wherein the value predetermined for the value of the first partial evaluation value is determined on the basis of first partial evaluation values for multiple images with different assessments of the capture state, these assessments being obtained in advance, and the value predetermined for the value of the second partial evaluation value is determined on the basis of second partial evaluation values for multiple images with different assessments of the capture state, these assessments being obtained in advance.

7. An image processing method for use with an image processing device comprising: first evaluation value calculation means for extracting a quantitative value of a first characteristic from the entire input image and for calculating, from the quantitative value of the first characteristic, a first partial evaluation value characterizing an assessment of the input image based on the first characteristic; second evaluation value calculation means for extracting a quantitative value of a second characteristic from a specified area of the input image and for calculating, from the quantitative value of the second characteristic, a second partial evaluation value characterizing an assessment of the input image based on the second characteristic; and total evaluation value calculation means for calculating, based on the first partial evaluation value and the second partial evaluation value, a total evaluation value characterizing the assessment of the capture state of the input image, the second evaluation value calculation means including subject area identification means for extracting, from the corresponding areas of the input image, a quantitative value of a third characteristic possessed by the subject area in the input image, in order to identify the subject area in the input image, and calculation means for extracting the quantitative value of the second characteristic from the subject area, in which the input image contains the subject, or from the background area, in which the input image does not contain the subject, and for calculating the second partial evaluation value, the method comprising the steps of: calculating the first partial evaluation value based on the input image using the first evaluation value calculation means; calculating the second partial evaluation value based on the input image using the second evaluation value calculation means; and calculating the total evaluation value based on the first partial evaluation value and the second partial evaluation value using the total evaluation value calculation means.



 
