Image processing device, method and program

FIELD: physics, computer engineering.

SUBSTANCE: the invention relates to image processing, in particular to a device and a method for processing images which make it possible to classify the composition of an input image. The technical result is achieved by a device which includes a symmetry degree calculation module, which can be configured to receive an input image and calculate a degree of symmetry of the input image; a dividing line detection module, which can be configured to receive the input image and detect a dividing line which separates two sides of the input image; and a classification module, which can be configured to classify the input image on the basis of the degree of symmetry and the dividing line. The classification module can also be configured to generate a classification signal to provide at least one of display or storage of the classification.

EFFECT: automated classification of the composition of an input image into detailed composition structures on the basis of the degree of symmetry and a dividing line.

19 cl, 46 dwg

 

Technical field to which the invention relates

The present invention relates to an image processing device, an image processing method and an image processing program, and in particular to an image processing device, an image processing method and an image processing program which make it possible to classify the composition of an input image.

Background art

In recent years, technologies have been developed which make it possible to determine the composition structure of an image captured by an imaging device such as a digital camera.

For example, a technology has been developed which recognizes a subject of attention, recognizes the status of the subject of attention and selects, from among many recorded composition structures, a composition structure that includes the subject of attention on the basis of the detected status of the subject of attention. An example of such a technology is disclosed in Japanese Unexamined Patent Application Publication No. 2008-81246.

In addition, an imaging device has been proposed which detects a characteristic structure by analyzing an input image, calculates, as an evaluation value, the degree of association between a number of pre-prepared compositions and the detected characteristic structure, and determines the composition of the input image on the basis of the evaluation value. An example of such an imaging device is disclosed in Japanese Unexamined Patent Application Publication No. 2009-159023.

Furthermore, a camera has been proposed which extracts an edge corresponding to the upper edge of the main subject in a captured image, or an edge continuing between two opposite sides of the captured image, compares the position or inclination of the extracted edge with a predetermined reference, and determines whether the composition is right or wrong. An example of such an imaging device is disclosed in Japanese Patent No. 4029174.

Summary of the invention

However, in the method according to Japanese Unexamined Patent Application Publication No. 2008-81246, it is necessary to perform computationally expensive operations for recognizing the subject of attention or recognizing the status of the subject of attention.

In addition, in the method according to Japanese Unexamined Patent Application Publication No. 2009-159023, since the evaluation value between the number of pre-prepared compositions and the characteristic structure detected on the basis of the analysis of the input image is calculated with respect to each of the pixels, it is also necessary to perform computationally expensive operations.

Furthermore, in the method according to Japanese Patent No. 4029174, since the criterion used to decide whether the composition is right or wrong is based only on the edge corresponding to the upper edge of the main subject in the captured image, or on an edge continuing between two opposite sides of the captured image, the types of composition that can be determined are limited.

The present invention is directed to solving the above problems. In particular, it is desirable to classify the composition of an input image into detailed composition structures using operations with a lower computational cost.

Accordingly, a device for processing an input image is disclosed. The device may include a symmetry degree calculation module, which may be configured to receive the input image and calculate a degree of symmetry of the input image. The device may also include a dividing line detection module, which may be configured to receive the input image and detect a dividing line that separates two sides of the input image. In addition, the device may include a classification module that may be configured to classify the input image on the basis of the degree of symmetry and the dividing line. The classification module may also be configured to generate a classification signal to provide at least one of displaying or storing the classification.

A method of processing an input image is also disclosed. A processor may execute a program to cause a device to perform the method. The program may be stored on a non-transitory computer-readable medium. The method may include receiving the input image. Furthermore, the method may include calculating a degree of symmetry of the input image. The method may also include detecting a dividing line that separates two sides of the input image. Furthermore, the method may include classifying the input image on the basis of the degree of symmetry and the dividing line. The method may also include generating a classification signal to provide at least one of displaying or storing the classification.
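For orientation, the following is a minimal end-to-end sketch of the disclosed flow in Python (all names and the placeholder computations are assumptions of this description, not the claimed implementation; the actual modules are detailed below):

import numpy as np

def process_input_image(image: np.ndarray) -> dict:
    """Sketch: receive an image, compute degrees of symmetry, detect a
    dividing line, classify the composition and return a classification
    signal (as a dict)."""
    lum = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    # Degree of symmetry: 1 - (sum of mirror differences) / (sum of values),
    # in the spirit of expression (4) described later.
    total = max(lum.sum() * 2.0, 1e-9)
    sym_lr = 1.0 - np.abs(lum - lum[:, ::-1]).sum() / total
    sym_tb = 1.0 - np.abs(lum - lum[::-1, :]).sum() / total
    # Dividing line: the row whose summed vertical gradient is largest.
    row_gradient = np.abs(np.diff(lum, axis=0)).sum(axis=1)
    line_row = int(np.argmax(row_gradient))
    # Classification: a crude stand-in (0.8 is an arbitrary placeholder)
    # for the detailed rules described later.
    structure = 'symmetrical' if min(sym_lr, sym_tb) > 0.8 else 'horizontal_division'
    return {'structure': structure, 'symmetry_lr': sym_lr,
            'symmetry_tb': sym_tb, 'horizontal_line': line_row}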

Brief description of drawings

Fig.1 is a block diagram illustrating an example of the functional configuration of an image processing device according to an embodiment of the presently disclosed technology;

Fig.2 is a block diagram illustrating an example of the functional configuration of a symmetry degree calculation module (i.e. a software module, a hardware module or a combination of a software module and hardware);

Fig.3 is a block diagram illustrating an example of the functional configuration of an edge symmetry degree calculation module;

Fig.4 is a block diagram illustrating an example of the functional configuration of a color symmetry degree calculation module;

Fig.5 is a block diagram illustrating an example of the functional configuration of a dividing line detection module;

Fig.6 is a block diagram illustrating an example of the functional configuration of a horizontal dividing line detection module;

Fig.7 is a block diagram illustrating an example of the functional configuration of a vertical dividing line detection module;

Fig.8 is a block diagram illustrating an example of the functional configuration of an inclined dividing line detection module;

Fig.9 is a flowchart explaining composition classification processing;

Fig.10 is a diagram illustrating composition structures usually recommended for photographs, etc.;

Fig.11 is a flowchart illustrating symmetry degree calculation processing;

Fig.12 is a flowchart explaining edge symmetry degree calculation processing;

Fig.13 is a diagram illustrating an input image and an edge image;

Fig.14 is a diagram illustrating an example of calculating the left-right degree of edge symmetry;

Fig.15 is a diagram illustrating an example of calculating the top-bottom degree of edge symmetry;

Fig.16 is a diagram illustrating an input image and an edge image;

Fig.17 is a flowchart illustrating color symmetry degree calculation processing;

Fig.18 is a diagram illustrating a weighting factor based on color differences;

Fig.19 is a diagram illustrating the conversion of the sum of color differences;

Fig.20 is a flowchart explaining dividing line detection processing;

Fig.21 is a diagram illustrating an input image and an edge image;

Fig.22 is a flowchart illustrating horizontal dividing line detection processing;

Fig.23 is a diagram explaining the integration values of edge information in the horizontal direction;

Fig.24 is a diagram illustrating an example of the detection of a horizontal dividing line;

Fig.25 is a flowchart illustrating vertical dividing line detection processing;

Fig.26 is a diagram explaining the integration values of edge information in the vertical direction;

Fig.27 is a flowchart illustrating inclined dividing line detection processing;

Fig.28 is a diagram illustrating an input image and an edge image, the conversion of the edge image to binary form and the rotation of the edge image;

Fig.29 is a diagram illustrating an example of the normalization of the integration values of the edge information of each line in the inclined direction;

Fig.30 is a diagram illustrating an example of the normalization of the integration values of the edge information of each line in the inclined direction;

Fig.31 is a diagram explaining the integration values of edge information in the inclined direction;

Fig.32 is a diagram illustrating an example of the detection of an inclined dividing line;

Fig.33 is a diagram explaining another example of the normalization of the integration values of the edge information of each line in the inclined direction;

Fig.34 is a diagram explaining another example of the normalization of the integration values of the edge information of each line in the inclined direction;

Fig.35 is a diagram explaining another example of the normalization of the integration values of the edge information of each line in the inclined direction;

Fig.36 is a diagram illustrating an example of the composition structures into which the composition of the input image is classified;

Fig.37 is a diagram illustrating an example of a composition structure classified on the basis of a horizontal dividing line;

Fig.38 is a diagram illustrating an example of a composition structure classified on the basis of a vertical dividing line;

Fig.39A and Fig.39B are diagrams illustrating examples of composition structures classified on the basis of an inclined dividing line;

Fig.40 is a diagram illustrating an example of a composition structure classified on the basis of an inclined dividing line;

Fig.41 is a diagram illustrating the relationship between the degrees of symmetry and the dividing lines;

Fig.42 is a flowchart illustrating another composition classification processing operation;

Fig.43 is a block diagram illustrating an example of the functional configuration of an image forming device;

Fig.44 is a diagram illustrating a display example of a composition recommendation;

Fig.45 is a diagram illustrating a display example of a composition recommendation; and

Fig.46 is a block diagram illustrating an example configuration of computer hardware.

Detailed description of the invention

Embodiments of the presently disclosed technology will be described below with reference to the drawings.

[Example configuration of the image processing device]

Fig.1 illustrates an example of the functional configuration of an image processing device according to an embodiment of the presently disclosed technology.

For example, the image processing device 11 in Fig.1 calculates degrees of symmetry, indicating the line symmetry of an input image which is input from an image forming device, such as a digital camera, or another imaging device, and detects dividing lines that divide the input image into specified areas. In addition, the image processing device 11 classifies the composition of the input image into a specified composition structure (i.e. performs classification) on the basis of at least one of the degrees of symmetry and the dividing lines.

The image processing device 11 includes a symmetry degree calculation module 31, a dividing line detection module 32 and a composition classification module 33.

The input image input to the image processing device 11 is supplied to the symmetry degree calculation module 31 and to the dividing line detection module 32.

The symmetry degree calculation module 31 calculates degrees of symmetry, denoting the line symmetry of the pixel information (pixel values) of the pixels in the input image, with respect to each of the left-right direction and the top-bottom direction of the input image, and supplies the degrees of symmetry to the composition classification module 33.

[Example of the functional configuration of the symmetry degree calculation module]

Fig.2 illustrates an example of the functional configuration of the symmetry degree calculation module 31.

The symmetry degree calculation module 31 includes an edge symmetry degree calculation module 41, a color symmetry degree calculation module 42 and a symmetry degree determination module 43.

The edge symmetry degree calculation module 41 calculates degrees of symmetry (below called degrees of edge symmetry) of the edge information, which is one type of information of each of the pixels in the input image, and supplies the degrees of symmetry to the symmetry degree determination module 43.

[Example of the functional configuration of the edge symmetry degree calculation module]

Fig.3 illustrates an example of the functional configuration of the edge symmetry degree calculation module 41.

The edge symmetry degree calculation module 41 includes an edge image generation module 51, a left-right symmetry degree calculation module 52 (i.e. a first edge symmetry degree calculation module) and a top-bottom symmetry degree calculation module 53 (i.e. a second edge symmetry degree calculation module).

The edge image generation module 51 generates an edge image including the edge information of each pixel in the input image (i.e. an image which represents the edges of the input image on a per-pixel basis), and supplies the edge image to the left-right symmetry degree calculation module 52 and to the top-bottom symmetry degree calculation module 53.

The left-right symmetry degree calculation module 52 calculates the left-right degree of edge symmetry, i.e. the degree of symmetry of the edge information relative to the center line in the left-right direction of the edge image (that is, a first imaginary line in the edge image which is parallel to a side of the edge image), supplied from the edge image generation module 51, and outputs the left-right degree of edge symmetry.

The top-bottom symmetry degree calculation module 53 calculates the top-bottom degree of edge symmetry, which represents the degree of symmetry of the edge information relative to the center line in the top-bottom direction of the edge image (i.e. a second imaginary line in the edge image which is perpendicular to the first imaginary line), supplied from the edge image generation module 51, and outputs the top-bottom degree of edge symmetry.

Thus, the edge symmetry degree calculation module 41 supplies, as the degrees of edge symmetry, the left-right degree of edge symmetry and the top-bottom degree of edge symmetry to the symmetry degree determination module 43.

Returning to the description of Fig.2, the color symmetry degree calculation module 42 calculates degrees of symmetry (below called degrees of color symmetry) of the color information, which is one type of information of each of the pixels in the input image, and supplies the degrees of symmetry to the symmetry degree determination module 43.

[Example of the functional configuration of the color symmetry degree calculation module]

Fig.4 illustrates an example of the functional configuration of the color symmetry degree calculation module 42.

The color symmetry degree calculation module 42 includes a color space conversion module 61, a left-right symmetry degree calculation module 62 (i.e. a first color symmetry degree calculation module) and a top-bottom symmetry degree calculation module 63 (i.e. a second color symmetry degree calculation module).

The color space conversion module 61 converts the color space in which the pixel information (color information) of each pixel in the input image is represented into another color space, and supplies the input image, including the color information represented in the converted color space, to the left-right symmetry degree calculation module 62 and to the top-bottom symmetry degree calculation module 63.

The left-right symmetry degree calculation module 62 calculates the left-right degree of color symmetry, which represents the degree of symmetry of the color information relative to the center line in the left-right direction of the input image (that is, a first imaginary line in the input image which is parallel to a side of the input image), supplied from the color space conversion module 61, and outputs the left-right degree of color symmetry.

The top-bottom symmetry degree calculation module 63 calculates the top-bottom degree of color symmetry, which represents the degree of symmetry of the color information relative to the center line in the top-bottom direction of the input image (i.e. a second imaginary line in the input image which is perpendicular to the first imaginary line), supplied from the color space conversion module 61, and outputs the top-bottom degree of color symmetry.

Thus, the color symmetry degree calculation module 42 supplies, as the degrees of color symmetry, the left-right degree of color symmetry and the top-bottom degree of color symmetry to the symmetry degree determination module 43.

Returning to the description of Fig.2, on the basis of the degrees of edge symmetry supplied from the edge symmetry degree calculation module 41 and the degrees of color symmetry supplied from the color symmetry degree calculation module 42, the symmetry degree determination module 43 determines the left-right degree of symmetry, indicating the line symmetry relative to the left-right direction of the input image, and the top-bottom degree of symmetry, indicating the line symmetry relative to the top-bottom direction of the input image. In particular, the symmetry degree determination module 43 determines, as the left-right degree of symmetry, whichever of the left-right degree of edge symmetry, supplied as a degree of edge symmetry from the edge symmetry degree calculation module 41, and the left-right degree of color symmetry, supplied as a degree of color symmetry from the color symmetry degree calculation module 42, satisfies a specified condition. In addition, the symmetry degree determination module 43 determines, as the top-bottom degree of symmetry, whichever of the top-bottom degree of edge symmetry and the top-bottom degree of color symmetry satisfies the specified condition.

Thus, the symmetry degree calculation module 31 supplies, as the degrees of symmetry, the left-right degree of symmetry and the top-bottom degree of symmetry to the composition classification module 33.

Returning to the description of Fig.1, the dividing line detection module 32 detects dividing lines that divide the input image, on the basis of the distribution of the pixel information in the input image, and supplies dividing line information, denoting the detected dividing lines, to the composition classification module 33.

[Example of the functional configuration of the dividing line detection module]

Fig.5 shows an example of the functional configuration of the dividing line detection module 32.

The dividing line detection module 32 includes an edge image generation module 71, a horizontal dividing line detection module 72 (i.e. a first dividing line detection module), a vertical dividing line detection module 73 (i.e. a second dividing line detection module) and inclined dividing line detection modules 74 and 75 (i.e. third and fourth dividing line detection modules).

In the same way as the edge image generation module 51 in Fig.3, the edge image generation module 71 generates an edge image including the edge information of each of the pixels in the input image, on a per-pixel basis, and supplies the edge image to the modules from the horizontal dividing line detection module 72 to the inclined dividing line detection module 75.

The horizontal dividing line detection module 72 integrates the edge information in the horizontal direction of the edge image supplied from the edge image generation module 71, and detects a horizontal dividing line (i.e. a first dividing line) that divides the input image in the horizontal direction (namely, into upper and lower parts), on the basis of the distribution of the integration values. The horizontal dividing line detection module 72 outputs horizontal dividing line information, denoting the detected horizontal dividing line.

The vertical dividing line detection module 73 integrates the edge information in the vertical direction of the edge image supplied from the edge image generation module 71, and detects a vertical dividing line (i.e. a second dividing line, which lies at an angle relative to the first dividing line) that divides the input image in the vertical direction (namely, into right and left parts), on the basis of the distribution of the integration values. The vertical dividing line detection module 73 outputs vertical dividing line information, denoting the detected vertical dividing line.

The inclined dividing line detection module 74 integrates the edge information in the inclined direction up and to the right of the edge image supplied from the edge image generation module 71, and detects an inclined dividing line directed up and to the right (i.e. a third dividing line, which lies at an angle relative to the first and second dividing lines) that divides the input image in the inclined direction up and to the right, on the basis of the distribution of the integration values. The inclined dividing line detection module 74 outputs first inclined dividing line information, denoting the detected inclined dividing line directed up and to the right.

The inclined dividing line detection module 75 integrates the edge information in the inclined direction up and to the left of the edge image supplied from the edge image generation module 71, and detects an inclined dividing line directed up and to the left (i.e. a fourth dividing line, which lies at an angle relative to the first, second and third dividing lines) that divides the input image in the inclined direction up and to the left, on the basis of the distribution of the integration values. The inclined dividing line detection module 75 outputs second inclined dividing line information, denoting the detected inclined dividing line directed up and to the left.

Thus, as the dividing line information, the dividing line detection module 32 supplies the horizontal dividing line information, the vertical dividing line information, the first inclined dividing line information and the second inclined dividing line information to the composition classification module 33.

Here, examples of the functional configurations of the modules from the horizontal dividing line detection module 72 to the inclined dividing line detection module 75 will be described with reference to Fig.6 to Fig.8.

[Example of the functional configuration of the horizontal dividing line detection module]

Fig.6 illustrates an example of the functional configuration of the horizontal dividing line detection module 72.

The horizontal dividing line detection module 72 includes a horizontal integration module 111, a low-pass filter (LPF) 112, a peak value detection module 113 and a threshold processing module 114.

The horizontal integration module 111 integrates the edge information of the pixels with respect to each of the lines of pixels (below simply called lines) in the horizontal direction of the edge image supplied from the edge image generation module 71, and supplies the result of the integration to the LPF 112. The integration result obtained here represents the integration values of the edge information in the horizontal direction relative to the pixel position in the vertical direction of the edge image (the input image).

The LPF 112 removes noise by filter processing from the integration result of the horizontal integration module 111, namely from the integration values of the edge information in the horizontal direction relative to the pixel position in the vertical direction of the edge image, and supplies the integration result to the peak value detection module 113.

The peak value detection module 113 detects a peak value among the integration values of the integration result from which the noise has been removed by the LPF 112, and supplies to the threshold processing module 114 the detected peak value and the pixel position, in the vertical direction, of the line extending in the horizontal direction from which the integration value representing the peak value was obtained.

The threshold processing module 114 compares the peak value from the peak value detection module 113 with a specified threshold value. When the peak value is greater than the specified threshold, the threshold processing module 114 determines, as the horizontal dividing line, the line in the horizontal direction from which the integration value representing the peak value was obtained, and outputs, as the horizontal dividing line information, the pixel position of that line in the vertical direction of the edge image.

[Example of the functional configuration of the vertical dividing line detection module]

Fig.7 illustrates an example of the functional configuration of the vertical dividing line detection module 73.

The vertical dividing line detection module 73 includes a vertical integration module 121, an LPF 122, a peak value detection module 123 and a threshold processing module 124.

The vertical integration module 121 integrates the edge information with respect to each of the lines in the vertical direction of the edge image supplied from the edge image generation module 71, and supplies the result of the integration to the LPF 122. The integration result obtained here represents the integration values of the edge information in the vertical direction relative to the pixel position in the horizontal direction of the edge image (the input image).

The LPF 122 removes noise by filter processing from the integration result of the vertical integration module 121, namely from the integration values of the edge information in the vertical direction relative to the pixel position in the horizontal direction of the edge image, and supplies the integration result to the peak value detection module 123.

The peak value detection module 123 detects a peak value among the integration values of the integration result from which the noise has been removed by the LPF 122, and supplies to the threshold processing module 124 the detected peak value and the pixel position, in the horizontal direction, of the line extending in the vertical direction from which the integration value representing the peak value was obtained.

The threshold processing module 124 compares the peak value from the peak value detection module 123 with a specified threshold value. When the peak value is greater than the specified threshold, the threshold processing module 124 determines, as the vertical dividing line, the line in the vertical direction from which the integration value representing the peak value was obtained, and outputs, as the vertical dividing line information, the pixel position of that line in the horizontal direction of the edge image.

[Example of the functional configuration of the inclined dividing line detection module]

Fig.8 illustrates an example of the functional configuration of the inclined dividing line detection module 74.

The inclined dividing line detection module 74 includes an inclined integration module 131, an LPF 132, a peak value detection module 133 and a threshold processing module 134.

The inclined integration module 131 integrates the edge information with respect to each of the lines inclined up and to the right of the edge image supplied from the edge image generation module 71, and supplies the result of the integration to the LPF 132. The integration result obtained here represents the integration values of the edge information in the inclined direction up and to the right relative to the pixel position in the inclined direction up and to the left of the edge image (the input image).

The LPF 132 removes noise by filter processing from the integration result of the inclined integration module 131, namely from the integration values of the edge information in the inclined direction up and to the right relative to the pixel position in the inclined direction up and to the left of the edge image, and supplies the integration result to the peak value detection module 133.

The peak value detection module 133 detects a peak value among the integration values of the integration result from which the noise has been removed by the LPF 132, and supplies to the threshold processing module 134 the detected peak value and the pixel position, in the inclined direction up and to the left, of the line in the inclined direction up and to the right from which the integration value representing the peak value was obtained.

The threshold processing module 134 compares the peak value from the peak value detection module 133 with a specified threshold value. When the peak value is greater than the specified threshold, the threshold processing module 134 determines, as the inclined dividing line directed up and to the right, the line inclined up and to the right from which the integration value representing the peak value was obtained, and outputs, as the first inclined dividing line information, the pixel position of that line in the inclined direction up and to the left of the edge image.

In addition, an example of the functional configuration of the inclined dividing line detection module 75 is basically the same as that of the inclined dividing line detection module 74 in Fig.8, except that the processing performed by its individual modules for the edge information in the inclined direction up and to the right is replaced by processing for the edge information in the inclined direction up and to the left. Therefore, its description will be omitted.
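As an illustration of the inclined integration, the following is a minimal numpy sketch (an assumption of this description, not the literal implementation): it sums the edge information along each line at 45 degrees and normalizes each sum by the length of the line, since the inclined lines contain different numbers of pixels (compare the normalization discussed with reference to Fig.29 and Fig.30).

import numpy as np

def integrate_diagonals(edge_image):
    # Flip vertically so that np.diagonal follows lines running up and to
    # the right in image coordinates (row 0 at the top of the image).
    flipped = edge_image[::-1]
    h, w = flipped.shape
    values = []
    for k in range(-(h - 1), w):                 # one entry per 45-degree line
        diag = np.diagonal(flipped, offset=k)
        values.append(diag.sum() / diag.size)    # normalize by line length
    return np.array(values)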

In addition, returning to the description of Fig.1, the composition classification module 33 classifies the composition of the input image into one of predefined composition structures on the basis of the degrees of symmetry from the symmetry degree calculation module 31 and the dividing line information from the dividing line detection module 32, and outputs the composition structure, together with the degrees of symmetry and the dividing line information, to an information processing device, a storage device, etc., which are not shown.

[Composition classification processing performed in the image processing device]

Next, the composition classification processing performed in the image processing device 11 in Fig.1 will be described with reference to the flowchart shown in Fig.9.

The composition of the input image supplied to the image processing device 11 is classified into one of the predefined composition structures on the basis of the composition classification processing presented in the flowchart in Fig.9.

Here, composition structures usually recommended when taking photographs, etc., will be described with reference to Fig.10.

The composition structures illustrated in Fig.10 include two representative composition patterns: a composition based on the rule of thirds, and a diagonal composition.

The composition based on the rule of thirds is a composition that includes horizontal dividing lines H1 and H2 and vertical dividing lines V1 and V2, called the lines dividing into thirds. The boundary of a subject or landscape is set on at least one of the horizontal dividing lines H1 and H2 and the vertical dividing lines V1 and V2, or the subject is located at one of the four intersection points (the points of intersection of the lines dividing into thirds) between the horizontal dividing lines H1 and H2 and the vertical dividing lines V1 and V2, thus making it possible to obtain a well-balanced image.
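For concreteness, a small sketch of where the thirds lines and their four intersection points lie for an image of height h and width w (standard rule-of-thirds geometry; the function name is illustrative):

def rule_of_thirds(h: float, w: float):
    # Horizontal dividing lines H1, H2 and vertical dividing lines V1, V2.
    H1, H2 = h / 3.0, 2.0 * h / 3.0
    V1, V2 = w / 3.0, 2.0 * w / 3.0
    # The four intersection points of the lines dividing into thirds.
    points = [(x, y) for x in (V1, V2) for y in (H1, H2)]
    return (H1, H2), (V1, V2), points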

In addition, the diagonal composition is a composition that includes diagonal lines D1 and D2, and the boundary of a subject or landscape is set on at least one of the diagonal lines D1 and D2, thus making it possible to obtain a well-balanced image.

In the composition classification processing described below, it is determined how symmetric the composition of the input image is relative to the left-right direction or the top-bottom direction, and to which of the above compositions, the composition based on the rule of thirds or the diagonal composition, the composition of the input image is similar.

At step S11 of the flowchart in Fig.9, the symmetry degree calculation module 31 performs the symmetry degree calculation processing and calculates the degrees of symmetry of the information of each of the pixels in the input image with respect to each of the left-right direction and the top-bottom direction of the input image.

[Symmetry degree calculation processing performed in the symmetry degree calculation module]

Here, the symmetry degree calculation processing performed at step S11 of the flowchart in Fig.9 will be described with reference to the flowchart in Fig.11.

At step S31, the edge symmetry degree calculation module 41 in the symmetry degree calculation module 31 performs the edge symmetry degree calculation processing and calculates the degrees of edge symmetry of the input image.

[Edge symmetry degree calculation processing performed in the edge symmetry degree calculation module]

Here, the edge symmetry degree calculation processing performed at step S31 of the flowchart in Fig.11 will be described with reference to the flowchart in Fig.12.

At step S41, the edge image generation module 51 in the edge symmetry degree calculation module 41 obtains the luminance image of the input image and generates an edge image, which includes edge values (edge information) obtained by applying an edge extraction filter, such as a Sobel filter, a Gabor filter, etc., to the luminance image.

In addition, the edge image generation module 51 can obtain images of the color channels, such as R, G, B, etc., from the input image, compare the edge values obtained individually by applying the edge extraction filter to the color channel images with each other between the channels for each pixel, and generate an edge image which includes the individual maximum values.

In addition, the edge image generation module 51 can also perform color region segmentation, which uses the mean shift algorithm, etc., on the input image, and generate the edge image by assigning edge values to the pixels on the boundary lines of the segmented regions. For example, in this case, an edge value of "1" is assigned to a pixel on a boundary line of a region, and an edge value of "0" is assigned to a pixel in an area other than a boundary line.
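As an illustration of the first of these variants, the following is a minimal sketch (an assumption of this description, not the literal implementation) that builds an edge image from the luminance image with a Sobel filter; the per-channel variant repeats the same filtering for R, G and B and keeps the per-pixel maximum.

import numpy as np
from scipy import ndimage

def generate_edge_image(rgb: np.ndarray) -> np.ndarray:
    # Luminance image (ITU-R BT.601 weights); rgb is H x W x 3, floats in [0, 1].
    luminance = rgb @ np.array([0.299, 0.587, 0.114])
    # Edge value = Sobel gradient magnitude of the luminance image.
    gx = ndimage.sobel(luminance, axis=1)
    gy = ndimage.sobel(luminance, axis=0)
    return np.hypot(gx, gy)

def generate_edge_image_per_channel(rgb: np.ndarray) -> np.ndarray:
    # Variant: filter each color channel and keep the per-pixel maximum.
    magnitudes = []
    for c in range(3):
        gx = ndimage.sobel(rgb[..., c], axis=1)
        gy = ndimage.sobel(rgb[..., c], axis=0)
        magnitudes.append(np.hypot(gx, gy))
    return np.maximum.reduce(magnitudes)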

For example, when an image of a scene in which the subject is a mountain is input as the input image P1, as shown in Fig.13, the edge image generation module 51 generates an edge image P1e, which represents the profile shape of the scene including the mountain. The edge image P1e thus generated is supplied to the left-right symmetry degree calculation module 52 and to the top-bottom symmetry degree calculation module 53.

At step S42, the left-right symmetry degree calculation module 52 calculates the left-right degree of edge symmetry, which represents the left-right degree of symmetry of the edge image, on the basis of the edge image supplied from the edge image generation module 51.

Here, an example of calculating the left-right degree of edge symmetry will be described with reference to Fig.14. Fig.14 illustrates the edge image P1e.

As shown in Fig.14, if the edge image P1e includes H×W pixels, the center line in the left-right direction of the edge image P1e is the line located at the pixel position W/2.

In addition, attention is focused on a line in the horizontal direction whose pixel position in the top-bottom direction is denoted by "i", and it is assumed that the pixel position of a pixel located j pixels to the right of the pixel on the center line in the left-right direction (the pixel whose pixel position is W/2) is represented as "+j", and the pixel position of a pixel located j pixels to the left of that pixel is represented as "-j".

At this time, in the edge image P1e, the sum d of the differences between the pieces of edge information of the pairs of pixels located on opposite sides of the center line in the left-right direction (the pixel position W/2), the pixel positions of the pairs of pixels being represented as (i, +j) and (i, -j) (below simply called the pixel positions (i, j)), and the sum s of the pieces of edge information of the pairs of pixels located on opposite sides of the center line in the left-right direction (namely, the sum of the pieces of edge information of all the pixels) are denoted by the following expressions (1) and (2), respectively.

d = \sum_{i=0}^{H-1} \sum_{j=1}^{W/2-1} w \, \left| I(i, W/2 - j) - I(i, W/2 + j) \right| \qquad (1)

s = \sum_{i=0}^{H-1} \sum_{j=1}^{W/2-1} w \, \left\{ I(i, W/2 - j) + I(i, W/2 + j) \right\} \qquad (2)

In expressions (1) and (2), the coefficient w is a weighting factor whose weight decreases as the distance of the pixel position (i, j) of the pixel under attention from the center point of the input image increases; the pixel position (i, j) is determined relative to the center of the edge image, and when the distance of the pixel position (i, j) from the center point of the input image is denoted as r, the coefficient w is denoted by the following expression (3).

w = \exp\left( -\frac{r^2}{\sigma^2} \right) \qquad (3)

In addition, it is assumed that the constant σ in expression (3) is an arbitrarily set value.

The sum d of the differences between the pieces of edge information, determined by expression (1), has a value approaching "0" as the left-right symmetry of the edge image P1e increases, and has a value approaching the sum s of the pieces of edge information as the left-right asymmetry of the edge image P1e increases. Therefore, the left-right degree of edge symmetry sym_edg_LR, which represents the left-right degree of symmetry of the edge image, is denoted by the following expression (4).

\mathrm{sym\_edg\_LR} = 1 - \frac{d}{s} \qquad (4)

Namely, the left-right degree of edge symmetry sym_edg_LR takes values in the range 0 ≤ sym_edg_LR ≤ 1, and has a value approaching "1" as the left-right symmetry of the edge image P1e increases.

Thus, the left-right degree of edge symmetry is calculated.
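The following is a minimal numpy sketch of expressions (1) to (4) (an assumption of this description; the value of sigma is arbitrary, as stated above).

import numpy as np

def edge_symmetry_lr(edge_image: np.ndarray, sigma: float = 100.0) -> float:
    # Left-right degree of edge symmetry, sym_edg_LR = 1 - d / s.
    H, W = edge_image.shape
    i = np.arange(H)[:, None]              # row index, shape (H, 1)
    j = np.arange(1, W // 2)               # offset from the center line W/2
    left = edge_image[:, W // 2 - j]       # I(i, W/2 - j)
    right = edge_image[:, W // 2 + j]      # I(i, W/2 + j)
    # Weight w = exp(-r^2 / sigma^2), expression (3); r is the distance of
    # the pixel position (i, j) from the center of the image.
    r2 = (i - H / 2.0) ** 2 + j.astype(float) ** 2
    w = np.exp(-r2 / sigma ** 2)
    d = (w * np.abs(left - right)).sum()   # expression (1)
    s = (w * (left + right)).sum()         # expression (2)
    return 1.0 - d / s if s > 0 else 1.0   # expression (4)

Since the top-bottom degree is obtained by substituting H and W for each other, edge_symmetry_lr(edge_image.T) gives sym_edg_TB.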

Returning to the flowchart in Fig.12, at step S43 the top-bottom symmetry degree calculation module 53 calculates the top-bottom degree of edge symmetry, which represents the top-bottom degree of symmetry of the edge image, on the basis of the edge image supplied from the edge image generation module 51.

In addition, with regard to the top-bottom degree of edge symmetry sym_edg_TB, as shown in Fig.15, it is assumed that the center line in the top-bottom direction of the edge image P1e is the line whose pixel position is H/2, and attention is focused on a line including the pixels in the vertical direction located at the pixel position j in the left-right direction. In addition, when it is assumed that the pixel position of a pixel located i pixels below the pixel on the center line in the top-bottom direction (the pixel whose pixel position is H/2) is represented as "+i", and the pixel position of a pixel located i pixels above that pixel is represented as "-i", the top-bottom degree of edge symmetry sym_edg_TB is calculated by substituting the values H and W for each other in expressions (1) and (2), in the same way as the left-right degree of edge symmetry sym_edg_LR.

Thus, the top-bottom degree of edge symmetry is calculated.

After step S43, the processing returns to step S31 of the flowchart in Fig.11 and proceeds to step S32.

In particular, when an input image P2, in which vegetables serving as subjects, having approximately the same shape and different colors, are placed next to each other, as depicted on the left side of Fig.16, is input as the input image, the edge image P2e shown on the right side of Fig.16 is generated during the edge symmetry degree calculation processing.

Since the edge image P2e shown in Fig.16 has high line symmetry with respect to the left-right direction, a large value is obtained as the left-right degree of edge symmetry. However, since the colors of the subjects placed next to each other differ from each other in the actual input image P2, it is not appropriate to assign a high value of line symmetry with respect to the left-right direction.

Thus, it is difficult to capture the line symmetry of the colors of the input image by the edge symmetry degree calculation processing alone.

Therefore, at step S32, the color symmetry degree calculation module 42 performs the color symmetry degree calculation processing and calculates the degrees of color symmetry of the input image, and thus the line symmetry of the colors of the input image.

[Color symmetry degree calculation processing performed in the color symmetry degree calculation module]

Here, the color symmetry degree calculation processing performed at step S32 of the flowchart shown in Fig.11 will be described with reference to the flowchart shown in Fig.17.

At step S51, the color space conversion module 61 converts the color space so that the individual pixels of the input image, represented in the RGB color space, are represented, for example, in the L*a*b* space. The color space conversion module 61 supplies the input image represented in the L*a*b* space to the left-right symmetry degree calculation module 62 and to the top-bottom symmetry degree calculation module 63.

At step S52, the left-right symmetry degree calculation module 62 calculates the left-right degree of color symmetry, which represents the left-right degree of symmetry of the input image whose color space has been converted by the color space conversion module 61, namely the input image represented in the L*a*b* space.

Here, an example of calculating the left-right degree of color symmetry will be described. In addition, it is assumed that the input image represented in the L*a*b* space is expressed in the same way as the edge image P1e described with reference to Fig.14.

At this time, in the input image, the sum D of the color differences between the pairs of pixels located on opposite sides of the center line in the left-right direction (the pixel position W/2), the pixel positions of the pairs of pixels being represented as (i, j), is denoted by the following expression (5).

D = \sum_{i=0}^{H-1} \sum_{j=1}^{W/2-1} w \, dE(i, j) \qquad (5)

In expression (5), the color difference dE between the pixels located at the pixel position (i, j), the difference dL between the L components on the L* axis, the difference da between the a components on the a* axis and the difference db between the b components on the b* axis are individually denoted by the following expressions (6).

dE = \sqrt{dL^2 + da^2 + db^2}
dL = L(i, W/2 - j) - L(i, W/2 + j)
da = a(i, W/2 - j) - a(i, W/2 + j)
db = b(i, W/2 - j) - b(i, W/2 + j) \qquad (6)

Moreover, the coefficient w in expression (5) represents a weighting coefficient related to the color difference dE between the pixels located at the pixel position (i, j), and the coefficient w is denoted by the following expression (7).

w = w_P \cdot w_E \qquad (7)

In expression (7), the weighting coefficient w_P represents a weighting factor whose weight decreases as the distance of the pixel position (i, j) from the center point of the input image increases, and the weighting coefficient w_P is denoted by the following expression (8).

w_P = \exp\left[ -\beta \left\{ \left( i - \frac{H}{2} \right)^2 + \left( j - \frac{W}{2} \right)^2 \right\} \right] \qquad (8)

In addition, it is assumed that the constant β in expression (8) is an arbitrarily set value.

In addition, in expression (7), the weighting coefficient w_E represents a weighting factor whose weight becomes higher in areas in which the color difference dE between the pixels located at the pixel position (i, j), which is the focus of attention, is larger; the weighting coefficient w_E has the characteristic shown in Fig.18. In particular, when the color difference dE is less than the value dE1, the weighting coefficient w_E has a value of 1.0, and when the color difference dE is greater than the value dE2, the weighting coefficient w_E has a value of w_EM. In addition, when the color difference dE is within the range from the value dE1 to the value dE2, the weighting coefficient w_E increases in accordance with the increase in the color difference dE.

Namely, the weighting coefficient w_E weights the color difference dE in such a way that the color difference dE is more pronounced for areas whose colors differ considerably from left to right, as in the input image P2 shown in Fig.16.

Accordingly, while the sum D of the color differences, determined by expression (5), has a value that decreases as the left-right symmetry of the colors of the input image increases, and a value that increases as the left-right asymmetry of the colors of the input image increases, the sum D of the color differences is converted as shown in Fig.19 to make it easier to work with.

Namely, in accordance with Fig.19, when the sum D of the color differences is less than the minimum value dEmin of the color difference dE between the pixels located at the pixel position (i, j), the converted sum D' of the color differences is set to "0", and when the sum D of the color differences is greater than the maximum value dEmax of the color difference dE between the pixels located at the pixel position (i, j), the converted sum D' of the color differences is set to "1". In addition, when the sum D of the color differences is within the range from dEmin to dEmax, the converted sum D' of the color differences increases in response to the increase in the sum D of the color differences.

In addition, the left-right degree of color symmetry sym_col_LR, which represents the left-right degree of symmetry of the colors of the input image, is denoted by the following expression (9).

\mathrm{sym\_col\_LR} = 1 - D' \qquad (9)

Namely, the left-right degree of color symmetry sym_col_LR is set in the range 0 ≤ sym_col_LR ≤ 1, and has a value approaching "1" as the left-right symmetry of the colors of the input image increases.

Thus, the left-right degree of color symmetry is calculated.
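The following is a minimal numpy sketch of expressions (5) to (9) (an assumption of this description: skimage is used for the RGB-to-L*a*b* conversion, the constants beta, dE1, dE2, w_EM, dE_min and dE_max are arbitrary placeholders, and D is scaled as a weighted mean before the Fig.19 conversion, since the source leaves the exact scaling open).

import numpy as np
from skimage.color import rgb2lab

def color_symmetry_lr(rgb, beta=1e-4, dE1=5.0, dE2=20.0,
                      w_EM=4.0, dE_min=2.0, dE_max=15.0):
    # Left-right degree of color symmetry, sym_col_LR = 1 - D'.
    lab = rgb2lab(rgb)                     # step S51: RGB -> L*a*b*
    H, W = lab.shape[:2]
    i = np.arange(H)[:, None]
    j = np.arange(1, W // 2)
    left = lab[:, W // 2 - j, :]
    right = lab[:, W // 2 + j, :]
    dE = np.sqrt(((left - right) ** 2).sum(axis=-1))                 # expression (6)
    w_P = np.exp(-beta * ((i - H / 2.0) ** 2 + (j - W / 2.0) ** 2))  # expression (8)
    # w_E rises from 1.0 to w_EM between dE1 and dE2 (Fig.18).
    w_E = 1.0 + (w_EM - 1.0) * np.clip((dE - dE1) / (dE2 - dE1), 0.0, 1.0)
    D = (w_P * w_E * dE).sum()             # expressions (5) and (7)
    D_mean = D / (w_P * w_E).sum()         # assumed scaling of D
    D_prime = np.clip((D_mean - dE_min) / (dE_max - dE_min), 0.0, 1.0)  # Fig.19
    return 1.0 - D_prime                   # expression (9)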

Returning to the flowchart shown in Fig.17, at step S53 the top-bottom symmetry degree calculation module 63 calculates the top-bottom degree of color symmetry, which represents the top-bottom degree of symmetry of the input image whose color space has been converted by the color space conversion module 61, namely the input image represented in the L*a*b* space.

In addition, with regard to the top-bottom degree of color symmetry sym_col_TB, in the same manner as for the edge image P1e shown in Fig.15, it is assumed that the center line in the top-bottom direction of the input image is the line whose pixel position is H/2, and attention is focused on a line in the vertical direction located at the pixel position j in the left-right direction. In addition, when it is assumed that the pixel position of a pixel located i pixels below the pixel on the center line in the top-bottom direction (the pixel whose pixel position is H/2) is represented as "+i", and the pixel position of a pixel located i pixels above that pixel is represented as "-i", the top-bottom degree of color symmetry sym_col_TB is calculated by substituting the values H and W for each other in expressions (5) and (6), in the same way as the left-right degree of color symmetry sym_col_LR.

Thus, the top-bottom degree of color symmetry is calculated.

After step S53, the processing returns to step S32 of the flowchart in Fig.11 and proceeds to step S33.

At step S33, the symmetry degree determination module 43 determines the left-right degree of symmetry and the top-bottom degree of symmetry of the input image on the basis of the degrees of edge symmetry supplied from the edge symmetry degree calculation module 41 and the degrees of color symmetry supplied from the color symmetry degree calculation module 42.

For example, the symmetry degree determination module 43 determines, as the left-right degree of symmetry, whichever of the left-right degree of edge symmetry, supplied from the edge symmetry degree calculation module 41, and the left-right degree of color symmetry, supplied from the color symmetry degree calculation module 42, is the smaller. Similarly, the symmetry degree determination module 43 determines, as the top-bottom degree of symmetry, whichever of the top-bottom degree of edge symmetry and the top-bottom degree of color symmetry is the smaller.

Alternatively, the symmetry degree determination module 43 can also determine, as the left-right degree of symmetry, whichever of the left-right degree of edge symmetry and the left-right degree of color symmetry is the larger, and determine, as the top-bottom degree of symmetry, whichever of the top-bottom degree of edge symmetry and the top-bottom degree of color symmetry is the larger.
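A one-function sketch of this determination step (assuming the smaller-value variant described above; taking the minimum keeps the combined degree low whenever either the edge cue or the color cue indicates low symmetry):

def determine_symmetry(edge_lr, edge_tb, color_lr, color_tb, use_min=True):
    # Combine the edge-based and color-based degrees for each direction.
    pick = min if use_min else max
    return pick(edge_lr, color_lr), pick(edge_tb, color_tb)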

The symmetry degree calculation module 31 supplies, as the degrees of symmetry, the left-right degree of symmetry and the top-bottom degree of symmetry determined in this way to the composition classification module 33, and the processing returns to step S11 in Fig.9.

After step S11, at step S12 the dividing line detection module 32 performs the dividing line detection processing and detects dividing lines on the basis of the distribution of the pixel information in the input image.

[Processing of detecting the line of separation performed in the module detecting the separation line]

Next, with reference to the block diagram of the sequence of operations shown in Fig.20, will be described the processing of detecting the line of separation performed at step S12 in the flowchart of the sequence of operations shown in Fig.9.

At step S71, in the same manner as the processing performed at step S41 in the flowchart of the sequence of operations shown in Fig.12, the module 71 generate and what the considerations applying edge module 32 detection of the dividing line receives the image brightness of the input image, and generates the image edges, which includes the value of the edge (edge), obtained by applying the filter selection edges, such as a Sobel filter, a Gabor filter, etc. for image brightness.

In addition, the module 71 generating image edges can get image color channel, such as R, G, b and so on, to compare the values of the edges obtained by the individual filter selection edge image color channel, with each other between the channels for each pixel, and to generate the image edges, which includes the individual maximum values.

In addition, the module 71 generating image edges can also perform the separation of the color area, which uses the algorithm of shift average (method of shift average), etc., for the input image, and generate the image edges by assigning the values of the edge pixel on the boundary line of the area of separation. For example, in this case, the value "1" of the edge is prescribed for a pixel on the boundary line of the field, and a value of "0" edges appointed for a pixel in a different area than the boundary line.

For example, when, as shown in Fig.21, an image of a scene, comprising the horizon, enter as the input image P3, the module 71 generating image edges generates image is giving Re edges, which means that the profile shape of the scene. Image Re edges, thus generated, is served in the module 72 detection of a horizontal line of separation, the detection module 75 of the inclined separation line.

At step S72, the horizontal separation line detection module 72 performs the horizontal separation line detection processing and detects a horizontal separation line, which divides the input image in the horizontal direction (namely, into upper and lower parts), on the basis of the edge image supplied from the edge image generation module 71.

[Horizontal separation line detection processing performed by the horizontal separation line detection module]

Here, the horizontal separation line detection processing performed at step S72 in the flowchart of Fig.20 will be described with reference to the flowchart shown in Fig.22.

At step S81, the horizontal direction integration module 111 of the horizontal separation line detection module 72 integrates the edge information along each line in the horizontal direction of the edge image supplied from the edge image generation module 71, and supplies the integration result to the LPF 112.

Moreover, when integrating the edge information, the horizontal direction integration module 111 can integrate the edge information after weighting the edge information of each pixel with the weighting factor w shown in the above-mentioned expression (3). Accordingly, the integration value decreases as the distance from the center of the edge image increases.
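A sketch of this integration step; since expression (3) is not reproduced here, a simple linear center-weighting stands in for it as an assumption:

```python
import numpy as np

def integrate_rows(edges: np.ndarray) -> np.ndarray:
    """Integrate edge information along each horizontal line (row).

    Each pixel is weighted so that the weight falls off linearly with
    the horizontal distance from the image center (a stand-in for the
    document's expression (3)).
    """
    h, w = edges.shape
    x = np.arange(w)
    half = (w - 1) / 2.0
    weight = 1.0 - np.abs(x - half) / half    # 1 at the center, 0 at the borders
    return (edges * weight[np.newaxis, :]).sum(axis=1)  # one value per row
```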

The integration result obtained here represents the integration value of the edge information in the horizontal direction with respect to the pixel position in the vertical direction of the edge image (the input image), and this integration result is plotted with black diamonds in the graph shown in Fig.23.

Fig.23 illustrates examples of the integration values of the edge information in the horizontal direction calculated in the horizontal separation line detection processing, and the values obtained by performing the operations described below on those integration values.

In the graph shown in Fig.23, the horizontal axis indicates the pixel position in the vertical direction of the edge image (the input image).

At step S82, by filtering the integration values of the edge information supplied from the horizontal direction integration module 111, the LPF 112 removes noise and supplies the integration values to the peak value detection module 113. In Fig.23, the integration result from which noise has been removed is plotted with white squares.

At step S83, the peak value detection module 113 detects the peak value of the integration values of the integration result from which noise has been removed by the LPF 112.

In particular, the peak value detection module 113 calculates the first differential values (plotted with white triangles in Fig.23) of the noise-free integration values, and in addition calculates the first differential absolute values (plotted with cross marks in Fig.23), which are the absolute values of the first differential values. The peak value detection module 113 determines, as the peak value of the integration values, the integration value (the point surrounded by the solid-line circle in Fig.23) corresponding to the first differential absolute value (the point surrounded by the dotted-line circle in Fig.23) at which the first differential value is negative and the first differential absolute value has a local maximum. Accordingly, a sharply changing peak is detected in the integration values.
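A sketch of this peak search over the smoothed integration values (illustrative only; the smoothing itself is assumed to have been done by the LPF beforehand):

```python
import numpy as np

def detect_peak(values: np.ndarray):
    """Find the peak position in smoothed integration values.

    Following the text: take the first difference, keep positions where
    it is negative and its absolute value is a local maximum, and return
    the position with the largest such absolute difference, together
    with the corresponding integration value.
    """
    diff = np.diff(values)                  # first differential values
    abs_diff = np.abs(diff)
    best_pos, best_abs = None, -np.inf
    for i in range(1, len(diff) - 1):
        is_local_max = abs_diff[i] >= abs_diff[i - 1] and abs_diff[i] >= abs_diff[i + 1]
        if diff[i] < 0 and is_local_max and abs_diff[i] > best_abs:
            best_pos, best_abs = i, abs_diff[i]
    if best_pos is None:
        return None, None                   # no peak found
    return best_pos, values[best_pos]       # pixel position and peak value
```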

The peak value detection module 113 supplies to the threshold processing module 114 the detected peak value and the pixel position in the vertical direction (pixel position 7 in the example of Fig.23) of the horizontal line from which the integration value that is to be the peak value was obtained.

At step S84, the threshold processing module 114 compares the peak value from the peak value detection module 113 with a specified threshold value and determines whether or not the peak value is greater than the specified threshold.

When it is determined at step S84 that the peak value is greater than the specified threshold value, the processing proceeds to step S85, and the threshold processing module 114 determines, as the horizontal separation line, the line in the horizontal direction (the line at pixel position 7 in the vertical direction) from which the integration value that is to be the peak value was obtained. In addition, the threshold processing module 114 outputs, as the information of the horizontal separation line, the pixel position of that line in the vertical direction of the edge image, and the horizontal separation line detection processing ends. After that, the processing returns to step S72 in Fig.20.

Thus, when an input image P3 such as that shown in Fig.21 is input, the horizontal line lying on the horizon portion of the input image P3 is detected as the horizontal separation line, as shown in Fig.24.

On the other hand, when it is determined at step S84 that the peak value is not greater than the specified threshold value, step S85 is skipped. In this case, no horizontal separation line is detected, and the horizontal separation line detection processing ends. After that, the processing returns to step S72 in Fig.20.

After step S72, at step S73, the vertical separation line detection module 73 performs the vertical separation line detection processing and detects a vertical separation line, which divides the input image in the vertical direction (namely, into left and right parts), on the basis of the edge image supplied from the edge image generation module 71.

[Vertical separation line detection processing performed by the vertical separation line detection module]

Here, the vertical separation line detection processing performed at step S73 in the flowchart of Fig.20 will be described with reference to the flowchart shown in Fig.25.

At step S91, the vertical direction integration module 121 of the vertical separation line detection module 73 integrates the edge information along each line in the vertical direction of the edge image supplied from the edge image generation module 71, and supplies the integration result to the LPF 122.

When integrating the edge information, the vertical direction integration module 121 can likewise integrate the edge information after weighting it with the weighting factor w indicated in the above-mentioned expression (3). Accordingly, the integration value decreases as the distance from the center of the edge image increases.

The integration result obtained here represents the integration value of the edge information in the vertical direction with respect to the pixel position in the horizontal direction of the edge image (the input image), and this integration result is plotted with black diamonds in the graph shown in Fig.26.

Fig.26 illustrates examples of the integration values of the edge information in the vertical direction calculated in the vertical separation line detection processing, and the values obtained by the operations described below on those integration values. In the graph shown in Fig.26, the horizontal axis indicates the pixel position in the horizontal direction of the edge image (the input image).

At step S92, by filtering the integration values of the edge information supplied from the vertical direction integration module 121, the LPF 122 removes noise and supplies the integration values to the peak value detection module 123. In Fig.26, the integration result from which noise has been removed is plotted with white squares.

At step S93, the peak value detection module 123 detects the peak value of the integration values of the integration result from which noise has been removed by the LPF 122.

In particular, the peak value detection module 123 calculates the first differential values (plotted with white triangles in Fig.26) of the noise-free integration values, and in addition calculates the first differential absolute values (plotted with cross marks in Fig.26), which are the absolute values of the first differential values. The peak value detection module 123 determines, as the peak value of the integration values, the integration value corresponding to the first differential absolute value at which the first differential value is negative and the first differential absolute value has a local maximum. In Fig.26, although there is no peak value changing as sharply as in the example shown in Fig.23, the integration value that is to be the peak value is obtained at pixel position 11 in the horizontal direction.

The peak value detection module 123 supplies to the threshold processing module 124 the detected peak value and the pixel position in the horizontal direction (pixel position 11 in Fig.26) of the vertical line from which the integration value that is to be the peak value was obtained.

At step S94, the threshold processing module 124 compares the peak value from the peak value detection module 123 with a specified threshold value and determines whether or not the peak value exceeds the threshold value.

When it is determined at step S94 that the peak value is greater than the specified threshold value, the processing proceeds to step S95, and the threshold processing module 124 determines, as the vertical separation line, the line in the vertical direction from which the integration value that is to be the peak value was obtained. In addition, the threshold processing module 124 outputs, as the information of the vertical separation line, the pixel position of that line in the horizontal direction of the edge image, and the vertical separation line detection processing ends. After that, the processing returns to step S73 in Fig.20.

On the other hand, when it is determined at step S94 that the peak value is not greater than the specified threshold value, step S95 is skipped. For example, when the peak value in Fig.26 is smaller than the specified threshold value, no vertical separation line is detected, and the vertical separation line detection processing ends. After that, the processing returns to step S73 in Fig.20.

After step S73, at step S74, the inclined separation line detection module 74 performs inclined separation line detection processing 1 and detects a first inclined separation line, which divides the input image along an incline rising to the right, on the basis of the edge image supplied from the edge image generation module 71.

[Inclined separation line detection processing performed by the inclined separation line detection module]

Here, inclined separation line detection processing 1, performed at step S74 in the flowchart of Fig.20, will be described with reference to the flowchart in Fig.27.

When an image of a scene including a mountain slope extending obliquely from the top right to the bottom left, as shown in the top part of Fig.28, is input as the input image P4, the edge image generation module 71 generates an edge image Re representing the outline of the scene, shown in the second part from the top of Fig.28, and supplies it to the modules from the horizontal separation line detection module 72 through the inclined separation line detection module 75.

At step S101, the inclined direction integration module 131 of the inclined separation line detection module 74 binarizes the edge information of each pixel of the edge image supplied from the edge image generation module 71 into "0" or "1" on the basis of a predetermined threshold value. For example, when the edge information of the edge image takes values ranging from "0" to "255", the inclined direction integration module 131 sets to "0" the value of a pixel whose value is less than the threshold value "127", and sets to "1" the value of a pixel whose value is greater than the threshold value "127". In this way, a binarized edge image P4f such as that shown in the third part from the top of Fig.28 is obtained.

At step S102, the inclined direction integration module 131 rotates the binarized edge image P4f counterclockwise so that the diagonal line rising to the right in the binarized edge image P4f becomes perpendicular to the horizontal axis of an arbitrarily set coordinate system. The rotation angle at this time is calculated on the basis of the aspect ratio of the binarized edge image P4f (the input image P4). A rotated binarized edge image P4r, as illustrated in the fourth part from the top of Fig.28, is thus obtained.
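A sketch of these two steps, assuming an 8-bit edge image and using the aspect ratio to find the angle that makes the right-rising diagonal vertical (function names are illustrative):

```python
import math
import numpy as np
from scipy import ndimage

def binarize_and_rotate(edges: np.ndarray, threshold: int = 127):
    """Binarize an 8-bit edge image, then rotate it so that the
    diagonal rising to the right becomes vertical.

    For an image with long side a and short side b, the diagonal makes
    an angle theta = atan(b / a) with the long side, so a rotation by
    (90 - theta) degrees makes it vertical.
    """
    binary = (edges > threshold).astype(np.uint8)   # "0" or "1" per pixel
    b, a = sorted(binary.shape)                     # short side, long side
    theta = math.degrees(math.atan2(b, a))
    # The sign of the angle may need flipping depending on the axis
    # conventions of the image array.
    return ndimage.rotate(binary, 90.0 - theta, reshape=True, order=0)
```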

Since the edge image is binarized before being rotated in this way, the amount of pixel data to be processed in the rotation can be reduced compared with rotating the edge image Re before binarization. The processing cost of this operation can therefore be reduced.

At step S103, the inclined direction integration module 131 integrates the edge information along each line in the vertical direction of the set coordinate system in the rotated binarized edge image P4r, in other words, along each line in the inclined direction parallel to the diagonal rising to the right in the binarized edge image P4f before rotation, and supplies the integration result to the LPF 132.

When the inclined direction integration module 131 integrates the edge information in the inclined direction, the number of pixels involved varies depending on the line along which the edge information is integrated. Therefore, the inclined direction integration module 131 normalizes the integration value of the edge information of each line by the integrated number of pixels.

Here, examples of normalizing the integration values of the edge information for each line in the inclined direction will be described with reference to Figs.29 and 30.

Each of Figs.29 and 30 illustrates the rotated binarized edge image (hereinafter simply called the edge image), which is rotated so that the diagonal line D1 rising to the right becomes perpendicular to an arbitrarily set coordinate axis (the X axis). As shown in each of Figs.29 and 30, it is assumed that the length of the long side of the edge image before rotation is "a", the length of its short side is "b", and the angle between the long side and the diagonal line D1 is "θ". The edge image is rotated counterclockwise by (90 - θ) degrees from the state shown in the third part from the top of Fig.28, so that the diagonal line D1 becomes perpendicular to the X axis. In addition, it is assumed that the x coordinate on the X axis denotes the virtual pixel position of a line in the inclined direction.

In each of Figs.29 and 30, it is assumed that the regions of width "m" located at both ends of the projection of the edge image onto the X axis are excluded from the integration targets because of their small integrated number of pixels.

First, as shown in Fig.29, when the line in the inclined direction along which the edge information is integrated lies on the left side of the diagonal line D1, namely when the range of the x coordinate is m ≤ x ≤ b·cosθ, the integrated number I of pixels is given by the following expression (10).

I = x·(tanθ + 1/tanθ)   ... (10)

In addition, as shown in Fig.30, when the line in the inclined direction along which the edge information is integrated lies on the right side of the diagonal line D1, namely when the range of the x coordinate is b·cosθ ≤ x ≤ a·sinθ + b·cosθ - m, the integrated number I of pixels is given by the following expression (11).

I = x′·(tanθ + 1/tanθ), where x′ = a·sinθ + b·cosθ - x   ... (11)

Thus, the inclined direction integration module 131 counts the number of pixels included in each integrated line and normalizes the integration value of the edge information of each line by the integrated number of pixels.

The normalized integration result obtained here represents the integration value of the edge information in the vertical direction with respect to the pixel position along the X axis of the edge image shown in each of Figs.29 and 30, and this integration result is plotted with black diamonds in the graph shown in Fig.31.

Fig.31 shows examples of the integration values, in the inclined direction rising to the right, of the edge information calculated in inclined separation line detection processing 1, and the values obtained by performing the operations described below on those integration values.

In the graph shown in Fig.31, the horizontal axis indicates the pixel position along the X axis of the edge image (the input image) shown in each of Figs.29 and 30.

At step S104, by filtering the integration values of the edge information supplied from the inclined direction integration module 131, the LPF 132 removes noise and supplies the integration values to the peak value detection module 133. In Fig.31, the integration result from which noise has been removed is plotted with white squares.

At step S105, the peak value detection module 133 detects the peak value of the integration values of the integration result from which noise has been removed by the LPF 132.

In particular, the peak value detection module 133 calculates the first differential values (plotted with white triangles in Fig.31) of the noise-free integration values, and in addition calculates the first differential absolute values (plotted with cross marks in Fig.31), which are the absolute values of the first differential values. The peak value detection module 133 determines, as the peak value of the integration values, the integration value (the point surrounded by the solid-line circle in Fig.31) corresponding to the first differential absolute value (the point surrounded by the dotted-line circle in Fig.31) at which the first differential value is negative and the first differential absolute value has a local maximum. Accordingly, a sharply changing peak is detected in the integration values.

The peak value detection module 133 supplies to the threshold processing module 134 the detected peak value and the pixel position along the X axis (pixel position 27 in the example of Fig.31) of the line rising to the right from which the integration value that is to be the peak value was obtained.

At step S106, the threshold processing module 134 compares the peak value from the peak value detection module 133 with a specified threshold value and determines whether or not the peak value is greater than the specified threshold.

When it is determined at step S106 that the peak value is greater than the specified threshold value, the processing proceeds to step S107, and the threshold processing module 134 determines, as the inclined separation line rising to the right, the line rising to the right (the line at pixel position 27 along the X axis) from which the integration value that is to be the peak value was obtained. In addition, the threshold processing module 134 outputs, as the information of the first inclined separation line, the pixel position of the line along the X axis and the angle θ of the line in the edge image, and inclined separation line detection processing 1 ends. After that, the processing returns to step S74 in Fig.20.

Thus, when the input image P4 shown in Fig.28 is input, the line rising to the right along the mountain slope in the input image P4 is detected as the inclined separation line rising to the right, as shown in Fig.32.

On the other hand, when it is determined at step S106 that the peak value is not greater than the specified threshold value, step S107 is skipped. In this case, no inclined separation line rising to the right is detected, and inclined separation line detection processing 1 ends. After that, the processing returns to step S74 in Fig.20.

After step S74, at step S75, the inclined separation line detection module 75 performs inclined separation line detection processing 2 and detects a second inclined separation line, which divides the input image along an incline rising to the left, on the basis of the edge image supplied from the edge image generation module 71.

Inclined separation line detection processing 2, performed by the inclined separation line detection module 75, is in principle the same as the above-mentioned inclined separation line detection processing 1, except that the edge image is rotated so that the diagonal line rising to the left in the edge image becomes perpendicular to the X axis. Its description is therefore omitted.

In the above description, the edge image is rotated so that the diagonal line D1 rising to the right and the diagonal line D2 rising to the left become perpendicular to the X axis, thereby detecting the inclined separation line rising to the right and the inclined separation line rising to the left. However, the edge image can instead be rotated so that an arbitrary inclined line in the edge image becomes perpendicular to the X axis, and the detection of the inclined separation line rising to the right and the inclined separation line rising to the left can be performed as a result.

Even when the edge information is integrated in an arbitrary inclined direction, the number of pixels involved varies depending on the line along which the edge information is integrated. It is therefore necessary to normalize the integration value of the edge information of each line by the integrated number of pixels.

Here, examples of normalizing the integration values of the edge information for each line inclined in an arbitrary direction will be described with reference to Figs.33-35.

Each of Figs.33-35 illustrates the rotated binarized edge image (the edge image), which is rotated so that an inclined line Dn rising to the right, set in the edge image, becomes perpendicular to the X axis. As shown in each of Figs.33-35, it is assumed that the length of the long side of the edge image before rotation is "a", the length of its short side is "b", and the angle between the long side and the inclined line Dn rising to the right is "θ". The edge image is rotated counterclockwise by (90 - θ) degrees from the state in which its long sides are parallel to the X axis, so that the inclined line Dn rising to the right becomes perpendicular to the X axis. In addition, it is assumed that the x coordinate on the X axis denotes the virtual pixel position of a line in the inclined direction.

In each of Figs.33-35, it is assumed that the regions of width "m" located at both ends of the projection of the edge image onto the X axis are excluded from the integration targets because of the small number of pixels over which the edge information is integrated.

First, as shown in Fig.33, when the range of the x coordinate is m ≤ x ≤ b·cosθ, the integrated number I of pixels is given by the following expression (12).

I = x·(tanθ + 1/tanθ)   ... (12)

In addition, as shown in Fig.34, when the range of the x coordinate is b·cosθ < x ≤ a·sinθ, the integrated number I of pixels is given by the following expression (13).

I = b/sinθ   ... (13)

In addition, as shown in Fig.35, when the range of the x coordinate is a·sinθ < x ≤ a·sinθ + b·cosθ - m, the integrated number I of pixels is given by the following expression (14).

I = x′·(tanθ + 1/tanθ), where x′ = a·sinθ + b·cosθ - x   ... (14)

Thus, even when the edge information is integrated along lines inclined in an arbitrary direction, the number of pixels included in each integrated line is counted, and the integration value of the edge information of each line is normalized by the integrated number of pixels.
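A sketch of this pixel-count computation, following expressions (12)-(14) and assuming tanθ ≥ b/a so that the three regions are ordered as above. Note that in the diagonal case of expressions (10) and (11), tanθ = b/a, so a·sinθ = b·cosθ and the middle region of expression (13) vanishes, which is why that case has only two regions:

```python
import math

def integrated_pixel_count(x: float, a: float, b: float, theta: float) -> float:
    """Number of pixels on an inclined integration line at position x.

    a, b: long and short side of the edge image before rotation;
    theta: angle (radians) between the long side and the inclined line.
    Implements expressions (12)-(14) of the text.
    """
    t = math.tan(theta) + 1.0 / math.tan(theta)
    if x <= b * math.cos(theta):                 # left region, expression (12)
        return x * t
    if x <= a * math.sin(theta):                 # middle region, expression (13)
        return b / math.sin(theta)
    x_prime = a * math.sin(theta) + b * math.cos(theta) - x
    return x_prime * t                           # right region, expression (14)

# The normalized integration value of a line is then its raw edge sum
# divided by integrated_pixel_count(x, a, b, theta).
```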

As described above, the separation line detection module 32 supplies to the composition classification module 33 the separation line information detected during the separation line detection processing, from among the information of the horizontal separation line, the information of the vertical separation line, the information of the first inclined separation line and the information of the second inclined separation line, and the processing returns to step S12 in Fig.9.

After step S12, at step S13, the composition classification module 33 classifies the composition of the input image into one of the predefined composition structures on the basis of the degrees of symmetry from the symmetry degree calculation module 31 and the separation line from the separation line detection module 32.

Here, examples of the composition structures into which the composition of the input image is classified will be described with reference to Fig.36.

In accordance with the composition structures shown in Fig.36, the input image is first classified based on the degrees of symmetry (the degree of left-right symmetry and the degree of top-bottom symmetry).

[Case in which the degree of left-right symmetry is greater than or equal to the threshold value Th_LR, and the degree of top-bottom symmetry is greater than or equal to the threshold value Th_TB]

In this case, the composition of the input image is classified into the composition structure with "top-bottom and left-right symmetry".

[Case in which the degree of left-right symmetry is greater than or equal to the threshold value Th_LR, and the degree of top-bottom symmetry is less than the threshold value Th_TB]

In this case, the composition of the input image is additionally classified on the basis of the information of the horizontal separation line. Namely, when the horizontal separation line indicated by the information of the horizontal separation line lies above the horizontal separation line H2 in the composition based on the rule of thirds described in Fig.10, the composition of the input image is classified into the composition structure "upper side of the horizontal separation line". When the horizontal separation line indicated by the information of the horizontal separation line lies between the horizontal separation line H2 and the horizontal separation line H1 in the composition based on the rule of thirds described in Fig.10, the composition of the input image is classified into the composition structure "between the horizontal separation lines". When the horizontal separation line indicated by the information of the horizontal separation line lies below the horizontal separation line H1 in the composition based on the rule of thirds described in Fig.10, the composition of the input image is classified into the composition structure "lower side of the horizontal separation line".

For example, as shown in Fig.37, in an input image whose height (the length of the short side) is "b", the coordinate positions on the Y axis corresponding to the horizontal separation lines H1 and H2 in Fig.10 are "b/3" and "2·b/3", respectively. At this time, when a horizontal separation line is detected on the basis of the separation line detection processing, and information of the horizontal separation line indicating the pixel position (coordinate position) y_H is supplied from the separation line detection module 32, the composition classification module 33 determines that 2·b/3 ≤ y_H ≤ b and classifies the composition of the input image into the composition structure "upper side of the horizontal separation line". When determining that 0 ≤ y_H ≤ b/3, the composition classification module 33 classifies the composition of the input image into the composition structure "lower side of the horizontal separation line". When determining that b/3 < y_H ≤ 2·b/3, the composition classification module 33 classifies the composition of the input image into the composition structure "between the horizontal separation lines".
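A sketch of this band test, assuming the detected line position is given as a coordinate y_H in [0, b] measured along the Y axis (names illustrative):

```python
def classify_by_horizontal_line(y_h: float, b: float) -> str:
    """Classify by where the detected horizontal separation line falls
    relative to the rule-of-thirds lines H1 (b/3) and H2 (2*b/3)."""
    if y_h >= 2.0 * b / 3.0:
        return "upper side of the horizontal separation line"
    if y_h <= b / 3.0:
        return "lower side of the horizontal separation line"
    return "between the horizontal separation lines"

print(classify_by_horizontal_line(0.5, 1.0))  # between the horizontal separation lines
```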

When no horizontal separation line is detected on the basis of the separation line detection processing, and no information of the horizontal separation line is supplied from the separation line detection module 32, the composition classification module 33 classifies the composition of the input image into the composition structure "other".

[Case in which the degree of left-right symmetry is less than the threshold value Th_LR, and the degree of top-bottom symmetry is greater than or equal to the threshold value Th_TB]

In this case, the composition of the input image is additionally classified on the basis of the information of the vertical separation line. Namely, when the vertical separation line indicated by the information of the vertical separation line lies on the left side of the vertical separation line V1 in the composition based on the rule of thirds described in Fig.10, the composition of the input image is classified into the composition structure "left side of the vertical separation line". When the vertical separation line indicated by the information of the vertical separation line lies between the vertical separation line V1 and the vertical separation line V2 in the composition based on the rule of thirds described in Fig.10, the composition of the input image is classified into the composition structure "between the vertical separation lines". When the vertical separation line indicated by the information of the vertical separation line lies on the right side of the vertical separation line V2 in the composition based on the rule of thirds described in Fig.10, the composition of the input image is classified into the composition structure "right side of the vertical separation line".

For example, as shown in Fig.38, in an input image whose width (the length of the long side) is "a", the coordinate positions on the X axis corresponding to the vertical separation lines V1 and V2 in Fig.10 are "a/3" and "2·a/3", respectively. At this time, when a vertical separation line is detected on the basis of the separation line detection processing, and information of the vertical separation line indicating the pixel position (coordinate position) x_V is supplied from the separation line detection module 32, the composition classification module 33 determines that a/3 < x_V ≤ 2·a/3 and classifies the composition of the input image into the composition structure "between the vertical separation lines". When determining that 0 ≤ x_V ≤ a/3, the composition classification module 33 classifies the composition of the input image into the composition structure "left side of the vertical separation line". When determining that 2·a/3 < x_V ≤ a, the composition classification module 33 classifies the composition of the input image into the composition structure "right side of the vertical separation line".

When no vertical separation line is detected on the basis of the separation line detection processing, and no information of the vertical separation line is supplied from the separation line detection module 32, the composition classification module 33 classifies the composition of the input image into the composition structure "other".

[Case in which the degree of left-right symmetry is less than the threshold value Th_LR, and the degree of top-bottom symmetry is less than the threshold value Th_TB]

In this case, the composition of the input image is additionally classified on the basis of the information of the first or second inclined separation line. Namely, when the angle of the inclined separation line relative to the horizontal direction, indicated by the information of the first inclined separation line, falls within a specified angle range around the angle of the diagonal line D1 relative to the horizontal direction in the diagonal composition described in Fig.10, the composition of the input image is classified into the composition structure "inclined separation line rising to the right". In particular, as shown in Fig.39B, when the angle of the diagonal line D1 relative to the horizontal direction is θR, and the angle θ of the inclined separation line relative to the horizontal direction, indicated by the information of the first inclined separation line, falls within the range θR - Δθ ≤ θ ≤ θR + Δθ, the composition of the input image is classified into the composition structure "inclined separation line rising to the right". Likewise, when the angle of the inclined separation line relative to the horizontal direction, indicated by the information of the second inclined separation line, falls within a specified angle range around the angle of the diagonal line D2 relative to the horizontal direction in the diagonal composition described in Fig.10, the composition of the input image is classified into the composition structure "inclined separation line rising to the left". In particular, as shown in Fig.39A, when the angle of the diagonal line D2 relative to the horizontal direction is θL, and the angle θ of the inclined separation line relative to the horizontal direction, indicated by the information of the second inclined separation line, falls within the range θL - Δθ ≤ θ ≤ θL + Δθ, the composition of the input image is classified into the composition structure "inclined separation line rising to the left".

For example, when an inclined separation line Ds is detected on the basis of the separation line detection processing in the input image shown in Fig.40, and information of the first inclined separation line indicating the angle θ is supplied from the separation line detection module 32, the composition classification module 33 determines that θR - Δθ ≤ θ ≤ θR + Δθ and classifies the composition of the input image into the composition structure "inclined separation line rising to the right". When determining that θL - Δθ ≤ θ ≤ θL + Δθ, the composition classification module 33 classifies the composition of the input image into the composition structure "inclined separation line rising to the left".

When neither an inclined separation line rising to the right nor an inclined separation line rising to the left is detected on the basis of the separation line detection processing, and neither the information of the first inclined separation line nor the information of the second inclined separation line is supplied from the separation line detection module 32, the composition classification module 33 classifies the composition of the input image into the composition structure "other".
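A sketch of the angle test, assuming angles in degrees and a tolerance Δθ given as `delta` (all names illustrative):

```python
def classify_by_inclined_line(theta: float, theta_r: float, theta_l: float,
                              delta: float) -> str:
    """Classify by whether the detected line angle falls near the
    right-rising diagonal angle theta_r or the left-rising diagonal
    angle theta_l, within a tolerance of +/- delta degrees."""
    if theta_r - delta <= theta <= theta_r + delta:
        return "inclined separation line rising to the right"
    if theta_l - delta <= theta <= theta_l + delta:
        return "inclined separation line rising to the left"
    return "other"

# Example: a 4:3 frame has a right-rising diagonal at about 36.9 degrees.
print(classify_by_inclined_line(35.0, 36.9, 143.1, 5.0))
```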

The composition structure classified in this way is output, together with the information on the degrees of symmetry and the separation line, to an information processing device, a storage device, etc., which are not shown.

In accordance with the above-described processing, since the composition of the input image is classified based on the degrees of symmetry, indicating the symmetry of the edge information and color information in the input image, and on the separation line, indicating variations in the distribution of pixel (edge) information in the input image, the composition of the input image can be classified without the need to recognize a subject or the state of a subject, or to calculate an evaluation value between the input image and previously prepared compositions for each pixel. In addition, the composition is classified not only by using the symmetry of the edge information; the composition of the input image can additionally be classified by using the symmetry of the color information or the position of the separation line. Accordingly, the input image can be classified into detailed composition structures using low-cost operations.

In addition, the information on the degrees of symmetry and the separation line, output together with the composition structure to the information processing device, the storage device, etc., which are not shown, can be assigned as metadata of the input image.

Accordingly, recorded images can be searched or classified based on the degrees of symmetry or the information on the separation line (the position of the separation line).

In addition, the degrees of symmetry or the separation line can be used as characteristic values of the input image, along with the frequency distribution or the color distribution, and therefore the recognition accuracy of image recognition can be improved.

In the above description, as described with reference to Fig.36, when the composition of the input image is classified, the composition is first classified based on the degrees of symmetry, and is then additionally classified based on the position of the separation line. Here, the relationship between the degrees of symmetry and the separation line described with reference to Fig.36 is shown in Fig.41.

In Fig.41, the horizontal axis and the vertical axis indicate the degree of left-right symmetry and the degree of top-bottom symmetry, respectively, and the composition structures C1-C4 are distributed in the two-dimensional space defined by these axes.

In the two-dimensional space shown in Fig.41, the composition structure C1, denoting the composition structure "top-bottom and left-right symmetry", is placed in the region in which both the degree of left-right symmetry and the degree of top-bottom symmetry are large. The composition structure C2, denoting the composition structure "between the horizontal separation lines", is placed in the region in which the degree of left-right symmetry is large and the degree of top-bottom symmetry is small, and the composition structure C3, denoting the composition structure "between the vertical separation lines", is placed in the region in which the degree of left-right symmetry is small and the degree of top-bottom symmetry is large. The composition structure C4, denoting the composition structure "inclined separation line rising to the right", is placed in the region in which both the degree of left-right symmetry and the degree of top-bottom symmetry are small.

In addition, a composition structure based on the vertical separation line (for example, "left side of the vertical separation line") is not placed in the region in which the degree of left-right symmetry is large and the degree of top-bottom symmetry is small, and a composition structure based on the horizontal separation line (for example, "lower side of the horizontal separation line") is not placed in the region in which the degree of left-right symmetry is small and the degree of top-bottom symmetry is large.

Thus, since the separation lines that can be detected and the composition structures into which an image can be classified are limited depending on the calculation results for the degree of left-right symmetry and the degree of top-bottom symmetry, some or all of the separation line detection processing can be omitted.

Namely, when both the degree of left-right symmetry and the degree of top-bottom symmetry are large, the composition of the input image is classified into the composition structure "top-bottom and left-right symmetry" regardless of the separation line detection result. Therefore, all of the separation line detection processing can be omitted.

When the degree of left-right symmetry is large and the degree of top-bottom symmetry is small, the composition of the input image is classified into one of the composition structures "upper side of the horizontal separation line", "between the horizontal separation lines", "lower side of the horizontal separation line" and "other". Therefore, only the horizontal separation line detection processing needs to be performed, while the vertical separation line detection processing, inclined separation line detection processing 1 and inclined separation line detection processing 2 can be omitted.

Similarly, when the degree of left-right symmetry is small and the degree of top-bottom symmetry is large, the composition of the input image is classified into one of the composition structures "left side of the vertical separation line", "between the vertical separation lines", "right side of the vertical separation line" and "other". Therefore, only the vertical separation line detection processing needs to be performed, while the horizontal separation line detection processing, inclined separation line detection processing 1 and inclined separation line detection processing 2 can be omitted.

When both the degree of left-right symmetry and the degree of top-bottom symmetry are small, the composition of the input image is classified into one of the composition structures "inclined separation line rising to the right", "inclined separation line rising to the left" and "other". Therefore, only inclined separation line detection processing 1 and inclined separation line detection processing 2 need to be performed, while the horizontal separation line detection processing and the vertical separation line detection processing can be omitted.

Thus, when the separation line detection processing is performed after the symmetry degree calculation processing, some or all of the separation line detection processing can be omitted depending on the calculation results of the symmetry degree calculation processing. Therefore, the composition of the input image can be classified using lower-cost operations.
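A sketch of this early-exit dispatch; the thresholds and the detector callbacks (each returning a classified composition structure name, or None when no line is found) are assumptions for illustration:

```python
def classify_composition(sym_lr, sym_tb, th_lr, th_tb,
                         detect_horizontal, detect_vertical, detect_inclined):
    """Classify using only the separation line detectors that the
    symmetry degrees make relevant; the other detectors are never invoked."""
    if sym_lr >= th_lr and sym_tb >= th_tb:
        return "top-bottom and left-right symmetry"   # no line detection needed
    if sym_lr >= th_lr:
        line = detect_horizontal()                    # horizontal detector only
        return line if line else "other"
    if sym_tb >= th_tb:
        line = detect_vertical()                      # vertical detector only
        return line if line else "other"
    line = detect_inclined()                          # inclined detectors only
    return line if line else "other"
```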

While in the above description, as in the flowchart of Fig.9, the separation line detection processing is performed after the symmetry degree calculation processing, the symmetry degree calculation processing may instead be performed after the separation line detection processing, as in the flowchart shown in Fig.42.

Fig.42 shows a flowchart explaining composition classification processing in which the symmetry degree calculation processing is performed after the separation line detection processing.

Since steps S211-S213 in the flowchart of Fig.42 correspond to the processing of Fig.9 with the processing of step S12 and the processing of step S11 merely interchanged, their detailed description is omitted.

In this regard, since the relationship between the degrees of symmetry and the separation line is as described in Fig.41, the separation lines that can be detected and the composition structures into which an image can be classified are likewise limited depending on the separation line detection result, and in this case part of the symmetry degree calculation processing can be omitted.

Namely, when only a horizontal separation line is detected as the separation line, the degree of left-right symmetry tends to be large and the degree of top-bottom symmetry tends to be small. Therefore, the calculation of the degree of top-bottom symmetry in the edge symmetry degree calculation processing or the color symmetry degree calculation processing can be omitted.

When only a vertical separation line is detected as the separation line, the degree of left-right symmetry tends to be small and the degree of top-bottom symmetry tends to be large. Therefore, the calculation of the degree of left-right symmetry in the edge symmetry degree calculation processing or the color symmetry degree calculation processing can be omitted.

Thus, when the symmetry degree calculation processing is performed after the separation line detection processing, part of the symmetry degree calculation processing can be omitted depending on the result of the separation line detection processing. Therefore, the composition of the input image can be classified using lower-cost operations.

While the above description concerns an image processing device in which the composition of an input image, such as an image captured by an imaging device, is classified, a configuration in which the composition of a captured image is classified directly at the time of shooting may be provided in an imaging device.

[Example configuration of the imaging device]

Fig.43 shows an example configuration of an imaging device which classifies the composition of a captured image at the time of shooting. In the imaging device 311 of Fig.43, the same name and the same reference sign are assigned to each configuration having the same function as that provided in the image processing device 11 of Fig.1, and its description is omitted.

Namely, the imaging device 311 of Fig.43 differs from the image processing device 11 of Fig.1 in that an image acquisition module 331, an image processing module 332, a display module 333 and a storage module 334 are newly provided in the imaging device 311.

In addition, the composition classification module of Fig.43 supplies the information on the degrees of symmetry and the separation line, together with the composition structure, to the display module 333 or the storage module 334 (i.e., the composition classification module of Fig.43 generates a classification signal to provide at least one of display or storage of the classification).

The image acquisition module 331 includes an optical lens, an imager and an analog/digital (A/D) conversion module (none of which are shown). In the image acquisition module 331, the imager receives the light entering through the optical lens and performs photoelectric conversion, thereby capturing the subject, and the obtained analog image signal is subjected to A/D conversion. The image acquisition module 331 supplies the digital image data (the captured image) obtained as the result of the A/D conversion to the image processing module 332.

The image processing module 332 performs image processing, such as noise removal processing, etc., on the captured image from the image acquisition module 331, and supplies the captured image as a real-time input image (a so-called through-the-lens image) to the symmetry degree calculation module 31, the separation line detection module 32 and the display module 333. Namely, the symmetry degree calculation module 31 and the separation line detection module 32 perform the symmetry degree calculation processing and the separation line detection processing, respectively, on the real-time input image treated as a moving image.

In addition, when the user performs a shutter operation on an operation unit of the imaging device 311, such as pressing a shutter button or the like, which is not shown, the image processing module 332 performs image processing, such as noise removal processing, etc., on the image captured at that time, and supplies the input image as a still image to the symmetry degree calculation module 31, the separation line detection module 32 and the storage module 334. At this time, the symmetry degree calculation module 31 and the separation line detection module 32 perform the symmetry degree calculation processing and the separation line detection processing, respectively, on the input image as a still image.

Together with the captured image (the through-the-lens image) from the image processing module 332, the display module 333 displays information based on at least one of the composition structure, the degrees of symmetry and the separation line information supplied from the composition classification module 33. For example, along with the through-the-lens image, the display module 333 displays the composition structure into which the composition of the through-the-lens image is classified, quantitative values of the degree of left-right symmetry and the degree of top-bottom symmetry, and the separation line indicated by the separation line information.

Together with the captured image (the still image) from the image processing module 332, the storage module 334 stores, as metadata of the captured image, the composition structure, the degrees of symmetry and the separation line information supplied from the composition classification module 33.

Accordingly, the images stored in the storage module 334 can be searched, or they can be classified, based on the degrees of symmetry or the separation line information (the position of the separation line).

With this configuration, as described above, the imaging device 311 performs the composition classification processing, which classifies the composition of the captured image. The composition classification processing performed in the imaging device 311 is performed in the same manner as the composition classification processing performed in the image processing device 11, described with reference to Fig.9 or Fig.42, and provides the same advantageous effect. Therefore, its description is omitted.

In addition, the imaging device 311 can propose a recommended composition to the user based on the composition structure, the degrees of symmetry or the separation line information obtained from the composition classification processing.

For example, when the degree of left-right symmetry of the through-the-lens image displayed on the display module 333 is large, and the image is classified into the composition structure "between the horizontal separation lines", the display module 333 displays the horizontal separation line detected in the separation line detection processing together with the through-the-lens image, as shown in the left part of Fig.44. At this time, for example, when the user operates the operation module, which is not shown, the 3-part separation lines (the dashed lines), indicating the composition based on the rule of thirds, can be displayed, as shown in the right part of Fig.44. In addition, an arrow can be displayed which indicates that the user can align the horizontal separation line detected in the separation line detection processing with the 3-part separation line corresponding to the horizontal separation line H2 indicated in Fig.10.

In addition, when the through-the-lens image displayed on the display module 333 makes a transition from a state in which the degree of left-right symmetry is small, as shown in the left part of Fig.45, to a state in which the degree of left-right symmetry is large, as shown in the right part of Fig.45, a line (the dotted line) can be displayed which indicates that the composition is becoming symmetric from left to right.

In addition, the imaging device 311 may have a configuration in which a main lens with a typical viewing angle and an auxiliary lens with a much wider viewing angle than the main lens are provided as the optical lenses of the image acquisition module 331, and the display module 333 displays a composite image in which the main image captured through the main lens is combined with the auxiliary image captured through the auxiliary lens.

The composite image is an image in which the main image, having a narrow viewing angle, is combined with a portion of the auxiliary image, having a wide viewing angle, so that the positions of the subject correspond to each other. By checking the composite image, the user can confirm a wide-range composition (a composition corresponding to the auxiliary image) that would be difficult to confirm using only the main image.

In the imaging device 311 having such a configuration, the composition classification processing is performed individually for the main image and the auxiliary image. Therefore, when the degree of symmetry of the auxiliary image is higher than the degree of symmetry of the main image, or the separation line of the auxiliary image is very close to a 3-part separation line of the composition based on the rule of thirds, a display can be presented which suggests that the user operate the imaging device 311 so that the composition of the main image becomes similar to the composition of the auxiliary image.

As described above, since the user can find the recommended composition for the image being shot, the user can shoot with a better composition. In addition, when the composition conforms to the recommended composition, the imaging device 311 can automatically perform the shutter operation.

In addition, it should be understood that the processing described above, in which the degrees of symmetry are calculated, the separation line is detected, and the composition of the input image is classified on the basis of the degrees of symmetry and the separation line, can also be performed for moving images.

The sequence of processing operations described above can be performed by hardware or by software. When the sequence of operations described above is performed by software, the program included in the software is installed from a program recording medium onto a computer embedded in dedicated hardware, or onto a computer, for example a general-purpose personal computer, in which various types of functions can be executed by installing various types of programs.

Fig.46 shows a block diagram illustrating an example hardware configuration of a computer that executes the above sequence of processing operations using a program.

In the computer, a central processing unit (CPU) 901, a read-only memory (ROM) 902 and a random access memory (RAM) 903 are connected to each other via a bus 904.

In addition, an input/output interface 905 is connected to the bus 904. An input module 906, including a keyboard, a mouse, a microphone, etc., an output module 907, including a display, a loudspeaker, etc., a storage module 908, including a hard disk, a non-volatile storage device, etc., a communication module 909, including a network interface, etc., and a drive 910, driving removable media 911 (i.e. non-volatile machine-readable media, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory device, etc.), are connected to the input/output interface 905.

In the computer having such a configuration, for example, the CPU 901 loads the program stored in the storage module 908 into the RAM 903 via the input/output interface 905 and the bus 904, and executes the program, thereby performing the sequence of processing operations described above.

For example, the program executed by the computer (the CPU 901) is recorded on the removable media 911, which are packaged media using a magnetic disk (including a flexible disk), an optical disk (a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), etc.), a magneto-optical disk, a semiconductor memory device, etc., or the program is provided via a cable or wireless transmission medium, such as a local area network, the Internet or digital satellite broadcasting.

In addition, by loading the removable media 911 into the drive 910, the program can be installed in the storage module 908 via the input/output interface 905. Alternatively, the program can be received by the communication module 909 via a cable or wireless transmission medium and installed in the storage module 908. Furthermore, the program may be installed in advance in the ROM 902 or in the storage module 908.

In addition, the program executed by the computer may be a program in which the processing operations are performed in time sequence in the order described in this specification, or may be a program in which the processing operations are performed in parallel or at necessary points in time, such as when a call is made.

In addition, embodiments of the disclosed technology are not limited to the embodiments described above, and various modifications can be made as long as they remain within the scope of the disclosed technology.

1. A device for processing an input image, the device comprising:
a symmetry degree calculation module configured to:
receive the input image; and
calculate a degree of symmetry indicating symmetry with respect to a line dividing the input image in a specified region, wherein the symmetry degree calculation module includes:
a color symmetry degree calculation module configured to calculate a color symmetry degree of the input image, the color symmetry degree representing the degree of symmetry between the color information of pixels in the input image, and
an edge symmetry degree calculation module configured to calculate an edge symmetry degree of the input image, the edge symmetry degree representing the degree of symmetry between the edge information of pixels in the input image;
a dividing line detection module configured to:
receive the input image; and
detect dividing line information on the basis of pixel information in the input image, the dividing line information denoting a dividing line that separates the input image; and
a composition classification module configured to:
classify the input image based on the degree of symmetry and the dividing line information; and
generate a classification signal to provide at least one of display or storage of the classification.

2. The device according to claim 1, wherein the color symmetry degree calculation module includes:
a first color symmetry degree calculation module configured to calculate the color symmetry degree with respect to a first imaginary line of the input image; and
a second color symmetry degree calculation module configured to calculate the color symmetry degree with respect to a second imaginary line of the input image, the second imaginary line being at an angle relative to the first imaginary line.

3. The device according to claim 2, wherein the second imaginary line is perpendicular to the first imaginary line.

4. The device according to claim 3, wherein the first imaginary line is parallel to a direction of the input image.

5. The device according to claim 2, wherein the color symmetry degree calculation module includes a color space conversion module configured to convert a first color space of each pixel of the input image into a second color space.

6. The device according to claim 1, wherein the edge symmetry degree calculation module includes an edge image generation module configured to generate, based on the input image, an edge image that denotes edges of the input image.

7. The device according to claim 6, wherein the edge symmetry degree calculation module includes:
a first edge symmetry degree calculation module configured to calculate the edge symmetry degree with respect to a first imaginary line of the edge image; and
a second edge symmetry degree calculation module configured to calculate the edge symmetry degree with respect to a second imaginary line of the edge image, the second imaginary line being at an angle relative to the first imaginary line.

8. The device according to claim 1, wherein the symmetry degree calculation module includes a symmetry degree determination module configured to determine the degree of symmetry of the input image on the basis of the color symmetry degree of the input image and the edge symmetry degree of the input image.

9. The device according to claim 1, wherein the dividing line detection module includes an edge image generation module configured to generate, based on the input image, an edge image that denotes edges of the input image.

10. The device according to claim 9, wherein the dividing line detection module includes:
a first dividing line detection module configured to detect a first dividing line that separates two sides of the input image, based on the edge image; and
a second dividing line detection module configured to detect a second dividing line that separates two sides of the input image, based on the edge image, the second dividing line being at an angle relative to the first dividing line.

11. The device according to claim 10, wherein the second dividing line is perpendicular to the first dividing line.

12. The device according to claim 11, wherein the dividing line detection module includes a third dividing line detection module configured to detect a third dividing line that separates two sides of the input image, based on the edge image, the third dividing line being at an angle relative to the first and second dividing lines.

13. The device according to claim 1, comprising a display module configured to display the classification.

14. The device according to claim 13, wherein the display module is configured to display the classification simultaneously with the input image.

15. The device according to claim 1, comprising a storage module configured to store the input image and the classification of the input image, wherein the classification of the input image is contained in metadata of the input image.

16. The device according to claim 1, wherein the composition classification module is configured to classify the input image into one of a plurality of predefined composition structures.

17. The device according to claim 16, wherein the plurality of predefined composition structures includes a composition based on the rule of thirds and a diagonal composition.

18. A method of processing an input image, the method comprising:
receiving the input image;
calculating a degree of symmetry of the input image, wherein
the calculation of the degree of symmetry includes:
calculating a color symmetry degree of the input image, the color symmetry degree representing the degree of symmetry between the color information of pixels in the input image, and
calculating an edge symmetry degree of the input image, the edge symmetry degree representing the degree of symmetry between the edge information of pixels in the input image;
detecting dividing line information on the basis of pixel information in the input image, the dividing line information denoting a dividing line that separates the input image;
classifying the input image based on the degree of symmetry and the dividing line information; and
generating a classification signal to provide at least one of display or storage of the classification.

19. A non-transitory machine-readable storage medium containing a program which, when executed by a processor of a device, causes the device to perform a method of processing an input image, the method comprising:
receiving the input image;
calculating a degree of symmetry of the input image, wherein
the calculation of the degree of symmetry includes:
calculating a color symmetry degree of the input image, the color symmetry degree representing the degree of symmetry between the color information of pixels in the input image, and
calculating an edge symmetry degree of the input image, the edge symmetry degree representing the degree of symmetry between the edge information of pixels in the input image;
detecting dividing line information on the basis of pixel information in the input image, the dividing line information denoting a dividing line that separates the input image;
classifying the input image based on the degree of symmetry and the dividing line information; and
generating a classification signal to provide at least one of display or storage of the classification.
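
For illustration only, the following is a minimal sketch of one way the color and edge symmetry degrees recited in the claims above might be computed, using a vertical imaginary line through the centre of the image; the Sobel-based edge image and the combination by simple averaging are assumptions of the sketch, not the claimed method.

    import numpy as np
    from scipy.ndimage import sobel

    def color_symmetry_degree(image: np.ndarray) -> float:
        # Symmetry of pixel color information about the vertical centre
        # line; 1.0 for a perfectly mirror-symmetric 8-bit image.
        mirrored = image[:, ::-1]
        diff = np.abs(image.astype(float) - mirrored.astype(float))
        return 1.0 - diff.mean() / 255.0

    def edge_symmetry_degree(image: np.ndarray) -> float:
        # Symmetry of edge information about the vertical centre line;
        # a Sobel gradient magnitude stands in for the edge image
        # generation module (an assumption of this sketch).
        gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
        edges = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))
        if edges.max() > 0:
            edges = edges / edges.max()
        mirrored = edges[:, ::-1]
        return 1.0 - np.abs(edges - mirrored).mean()

    def symmetry_degree(image: np.ndarray) -> float:
        # Determine the overall degree of symmetry from the color and
        # edge degrees; plain averaging is an assumption of this sketch.
        return 0.5 * (color_symmetry_degree(image)
                      + edge_symmetry_degree(image))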



 
