Device for searching for image duplicates

FIELD: information technology.

SUBSTANCE: the device comprises preprocessing units for the first and second images; recording units for the first and second images; units for converting the first and second images into the YIQ color space; units for extracting the in-phase component of the first and second images; units for forming images as a result of rotation of the first and second images; units for forming images when changing the tilt angle of the first and second images; units for storing the simulated images for the first and second images; a unit for applying the SIFT method; a unit for calculating the number of equal descriptors; and a unit for storing the found pairs of duplicates.

EFFECT: ensuring the ability to compare descriptors as applied to the task of searching for image duplicates.

5 dwg

 

The invention relates to a method of processing digital images and can be used in computer vision systems for identification and registration of objects in images, and in multimedia applications working with visual data.

A simplified mathematical model of the image is a model in the RGB color space: an array S_{i,j,k}, where i = 1..I and j = 1..J index the rows and columns of the image and k = 1..3 is the color channel. When a grayscale image is used, each point is represented by a brightness value from 0 (black) to 255 (white), with intermediate values representing different levels of gray. Such an image is denoted S_{i,j}.

The main task is the comparison of descriptors as applied to the task of finding duplicate images.

Finding point correspondences between images of the same object is of great importance for scene identification, reconstruction of volumetric objects in three-dimensional space, image retrieval, and object recognition.

Methods of matching images are based on the construction of descriptors. A descriptor is a vector of characteristics computed for each feature point of the image and describing the structure of its neighborhood. Typically, these vectors are formed from a set of values of the first and second derivatives of the image at the point.

Descriptors can represent small invariant sections, points in the image, or particular regions. When choosing how to represent a region, there is the following alternative: a region can be represented by its external characteristics (i.e., its boundary) or by its internal characteristics (the collection of elements constituting the region). The external representation is normally chosen when the focus is on the shape of the region. The internal representation is selected when the interest is in properties of the region such as color or texture. Sometimes both representations are used simultaneously. The data, presented as a set of pixels along the boundary or within the region, are used to generate descriptors that take values in a space of characteristics. If a metric is specified on this space, then one image can be compared with another by calculating the distance between the corresponding feature vectors.

In order to meet the challenge of matching images and point features, descriptors should be distinctive and selected so that, with high probability, descriptors from different images (but belonging to the same feature point) are correctly matched.

The descriptors should be:

- distinctive (to distinguish different points);

- local (to depend only on a small neighborhood);

- invariant (to rotation, stretching, compression, monotonic changes in brightness, affine and projective transformations);

- simple to calculate.

The general problem for descriptor matching methods is to recognize an object in photographs taken from different viewpoints, at different angles, at different scales, and under different lighting. As a result, one and the same object will be characterized by different feature vectors depending on the shooting conditions. Therefore, recognition methods should be invariant to such changes.

Analysis of the existing literature allows us to highlight the most popular methods. The main problem of all descriptor-matching methods is finding affine-invariant local features. Almost none of the discussed methods satisfies this requirement. The Hessian-Affine and Harris-Affine methods are not resistant to changes of viewing angle and scale, and so do not achieve proper affine invariance. The MSER method is not fully invariant to scale: it cannot cope with sudden changes in the level of geometric blur. The number of variants of the SIFT method, such as PCA-SIFT, GLOH, and SURF, is constantly expanding; they are widely used for recognition, detection, and motion tracking, but do not solve the problem of affine invariance.

There is a method of object recognition in an image [Patent RU 2438174 C1, IPC G06K 9/68]. The invention relates to methods of object recognition in machine vision systems, television surveillance systems, and information-control systems of robotic complexes.

The main technical challenge is to create a method that improves recognition accuracy by increasing the stability of the detectors of key areas in the image and increasing the number of invariant characteristics of these detectors. In the first stage, the input image is convolved with a given Green's function. Next, the resulting convolutions are subtracted from each other to obtain a finite-difference approximation of the first derivative of the convolution of the input image with the filter; when searching for a local extremum of this convolution, the corresponding first derivatives are equated to zero. In the next step, all local extrema are found and adaptive threshold filtering is performed to cut off minor features; the selected points are the centers of the neighborhoods for which arbitrary descriptors are built.

The characteristics of this analogue method that coincide with the characteristics of the proposed technical solution are the following:

- building descriptors of the image.

The disadvantages of the known method and the device implementing it are:

- low accuracy of determination of the local descriptors.

Barriers to achieving the desired technical result are as follows:

- the method of convolution with the Green's function does not allow detecting different types of textures.

There is a known method and device for detecting an object in an image [US Patent No. 6711293, IPC G06K 9/68]. The invention relates to methods of object recognition in machine vision systems.

The main technical challenge is to recognize objects in the image using a scale-invariant function. At the first stage, a difference image is calculated: the image is convolved with a Gaussian function, then the convolved image is again convolved with a Gaussian function to build a differential image, and the differential image is subtracted from the input image. At the second stage, local extrema of the pixel values are found. At the third stage, the region around each extremum point is highlighted.

At the fourth stage, the region is divided into subregions, and at the fifth stage the multi-component descriptor of each subregion is produced.

The characteristics of this analogue method that coincide with the characteristics of the proposed technical solution are the following:

- finding and matching descriptors;

- invariance of descriptors to scaling.

The disadvantages of the known method and the device implementing it are:

- low accuracy of descriptor construction.

Barriers to achieving the desired technical result are as follows:

- the Gaussian function has a limited set of invariant features characterizing the features of the image described by the feature vectors, and the vector data are thereby less informative.

There is a method of computer object recognition [Patent RU No. 2234127 C2, IPC G06K 9/68]. The invention relates to the field of automation and computing, namely artificial intelligence systems.

The main technical problem is to increase the speed of recognition of objects entered into the computer. At the first stage, the image of the object entered into the computer is reduced to a normal view, standard for the method: zooming, rotation to the desired position, centering, inscribing in a rectangle of the desired size, and conversion of the object image into an image in gradations of brightness of one color. Then templates stored in computer memory are sequentially, alternately superimposed on it. The object recognition program stepwise combines the normalized image of the recognized object, centered and inscribed in a table cell of a given size, with the templates, centered and inscribed in template table cells of the same size, with a step equal to the height of a row or the width of a column of the table cells. Each column or row of the template table, whose number equals the number of columns or rows in the table of recognized objects, contains a full set of templates.

The characteristics of this analogue method that coincide with the characteristics of the proposed technical solution are the following:

- the comparison of the descriptors of two images.

The disadvantages of the known method and the device implementing it are:

- the lack of invariance to affine transformations;

- the need to ensure centering and normalization of images.

- the need for a complete enumeration of templates for recognition.

Barriers to achieving the desired technical result are as follows:

- with stepwise combination of images, even small differences in the structure of the image lead to a sharp increase in recognition errors.

There is a known method and device for recognition of objects in images (Patent RU No. 2361273 C2, IPC G06K 9/62). The invention relates to computer technology and can be used in computer vision systems to identify objects in an image.

The technical challenge is to improve the accuracy and quality of recognition through the use of a three-dimensional vector model of the reference object. The technical result is achieved as follows: the reference image is stored in the form of a three-dimensional vector model; for each model a set of parameters of affine transformations is fixed: the rotation angles about the x, y, z axes and the scale. This set of parameters is determined for each model according to the complexity of its form: the more complex the form, the more angles are needed for the most complete description of the possible positions of the object in space and thus the most precise identification.

The following sequence of actions is performed: a three-dimensional vector model of the reference object is obtained by geometric construction; then, by changing its position in space (rotation, reflection, scaling), a number of the above parameters are obtained, which are preserved and used in further recognition to recreate the corresponding view of the reference object. During recognition, the three-dimensional image is rotated, each time generating a series of flat images, until a match is found.

The characteristics of this analogue method that coincide with the characteristics of the proposed technical solution are the following:

- modeling of affine transformations of the object.

The disadvantages of the known method and the device implementing it are:

- the presence of a pre-processing step;

- the need to define a set of parameters for the class of objects to which the object belongs; this set of parameters is determined for each model according to the complexity of its form.

Barriers to achieving the desired technical result are as follows:

- large computational costs.

Closest to the claimed solution and chosen as the prototype is a method and apparatus for affine-invariant pattern recognition [Patent US 2011/0069889 A1, IPC G06 9/46]. The invention relates to the recognition of objects in digital images.

The main technical requirement is detection invariant to rotation, translation, and scale. This requirement is achieved by adding an affine-invariant extension to the SIFT (Scale-Invariant Feature Transform) method. ASIFT (Affine Scale-Invariant Feature Transform) allows features that have undergone strong affine distortion to be identified reliably. Instead of building affine-invariant descriptors, it models, with sufficient accuracy, all the distortions caused by changing the position of the optical camera. The scale and the change of the camera axis position (two parameters) are simulated parameters; the rotation and translation (also two position changes) are normalized parameters. ASIFT first achieves affine invariance of the image descriptors, then applies SIFT, which simulates the scale and normalizes the rotation and translation.

The considered prototype device involves the following operations:

1. Each image is transformed by simulating all possible affine distortions caused by changing the camera position from the frontal position. These distortions depend on two parameters: the longitude φ and the latitude θ. In the first step of the algorithm, a set of images under all the different affine transformations is obtained;

2. The images obtained after the affine transformations are processed by the SIFT algorithm;

3. SIFT includes the following operations:

- Finding the key points by building a pyramid of Gaussians and differences of Gaussians (Difference of Gaussians, DoG).

- Refinement of the singular points by approximating the DoG (difference of Gaussians) function with a second-order Taylor polynomial taken at the computed extremum point.

- Finding the orientation of a key point, calculated from the directions of the gradients of adjacent pixels.

- Building descriptors. In the SIFT method, the descriptor is a vector. Like the direction of the key point, the descriptor is computed on the Gaussian closest in scale to the key point, based on the gradients in a window around the key point.

The disadvantages of the known prototype device are:

- Different lighting conditions (e.g. day/night).

- The object has a reflective surface (typically cars, mirrors).

- The object has a strong 3-D structure.

- The object has similar descriptors or a periodic structure.

Barriers to achieving the desired technical result are as follows:

- the ASIFT method considers the image as a grayscale array S_{i,j} (the image array takes values in the range from 0 to 255); this allows resistance to color change to be maintained. However, when reflective surfaces are photographed, glare is retained among the features, which significantly affects the process of finding correspondences between descriptors.

The proposed image-duplicate search device solves one of the problems of the prototype device by using the YIQ color space. The device implements the following algorithm. At the first stage, the image array S_{i,j,k} is represented in the YIQ color space. This is the color space of the NTSC television format. This color space, like human vision, is more sensitive to brightness (intensity) than to color. The image consists of three components: brightness (Y) and two artificial color-difference components (I and Q). The I signal is called the in-phase component, and Q the quadrature component.

Decompose the YIQ channels into the color components. The conversion from RGB to YIQ is carried out by the following formulas:

Y = 0.299R + 0.587G + 0.114B,

I = 0.596R - 0.274G - 0.322B,

Q = 0.211R - 0.522G + 0.311B.

Analysis of images of the different components of the YIQ space showed that the I and Q components do not contain the glare and reflections inherent to mirror surfaces. These components carry no luminance information and show only color, while a reflecting surface is characterized mainly by changes in brightness. The I channel is selected for matching descriptors on reflective surfaces.
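As a sketch, the RGB-to-YIQ conversion and the extraction of the in-phase component described above can be implemented as follows (assuming RGB values scaled to [0, 1]; the function names are illustrative, not from the patent):

```python
import numpy as np

# RGB -> YIQ conversion matrix taken from the formulas above (NTSC coefficients).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luminance
    [0.596, -0.274, -0.322],   # I: in-phase chrominance
    [0.211, -0.522,  0.311],   # Q: quadrature chrominance
])

def rgb_to_yiq(image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB array (floats in [0, 1]) to YIQ."""
    return image @ RGB_TO_YIQ.T

def i_channel(image: np.ndarray) -> np.ndarray:
    """Extract the in-phase (I) component used for matching on reflective surfaces."""
    return rgb_to_yiq(image)[..., 1]
```

For a pure gray pixel (R = G = B) the I and Q components vanish, which is why glare, being an intensity effect, does not appear in the I channel.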

Next, the obtained images are processed by the ASIFT algorithm.

The image is transformed by simulating all possible affine distortions caused by changing the camera position from the frontal position. These distortions are obtained by varying two parameters: the longitude φ and the latitude θ. In the first step of the method, a set of images under the different affine transformations is produced. Rotation and tilt are modeled for a finite, small set of latitude and longitude values; this sampling of the parameters ensures that the simulated images cover all possible views at other latitude and longitude values.
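A minimal sketch of this affine simulation, in the spirit of ASIFT: each simulated view is a tilt t = 1/cos(θ) applied after a rotation by the longitude φ. The particular sampling values below are illustrative assumptions, not those of the patent:

```python
import numpy as np

def simulated_affine_maps(tilts=(1.0, np.sqrt(2), 2.0), phi_step_deg=72.0):
    """2x2 affine maps simulating camera latitude/longitude changes:
    for each tilt t = 1/cos(theta), the longitude phi is sampled with a
    step inversely proportional to t (finer sampling for stronger tilts)."""
    maps = []
    for t in tilts:
        # The frontal view (t = 1) needs no longitude sampling.
        phis = [0.0] if t == 1.0 else np.arange(0.0, 180.0, phi_step_deg / t)
        for phi in np.radians(phis):
            R = np.array([[np.cos(phi), -np.sin(phi)],
                          [np.sin(phi),  np.cos(phi)]])
            T = np.diag([1.0, 1.0 / t])   # compress one axis by the tilt
            maps.append(T @ R)
    return maps
```

Each returned matrix would then be applied to the image (e.g. by an affine warp) before descriptor extraction.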

Then comes the step of detecting the singular points. The key to the detection of singular points is the construction of the pyramid of Gaussians and differences of Gaussians. A Gaussian (an image smoothed by a Gaussian filter) is the image:

L(x,y,σ)=G(x,y,σ)*I(x,y),

where L is the value of the Gaussian at the point with coordinates (x, y), σ is the blur radius, G is the Gaussian kernel, I is the original image, and * is the convolution operator.

The difference of Gaussians is the image obtained by pixel-by-pixel subtraction of one Gaussian of the source image from a Gaussian with a different blur radius:

D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ).
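The two formulas above can be sketched directly in code; the separable convolution below is a minimal self-contained implementation (kernel truncation at 3σ is an assumption consistent with the "three sigma" rule mentioned later):

```python
import numpy as np

def gaussian_kernel(sigma: float) -> np.ndarray:
    """1-D Gaussian kernel truncated at 3*sigma and normalized to sum 1."""
    radius = max(1, int(round(3 * sigma)))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(image: np.ndarray, sigma: float) -> np.ndarray:
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y), via separable convolution."""
    k = gaussian_kernel(sigma)
    # Convolve rows, then columns (the Gaussian kernel is separable).
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)

def difference_of_gaussians(image, sigma, k=np.sqrt(2)):
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)."""
    return gaussian_blur(image, k * sigma) - gaussian_blur(image, sigma)
```

On a constant image, the DoG is zero everywhere away from the borders, since both blurs reproduce the constant.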

The scale space of an image is the set of all versions of the original image smoothed by some filter. It is proved that Gaussian scale space is linear, invariant under shifts, rotations, and scale, does not displace local extrema, and has the semigroup property. A Gaussian blur of a given degree can thus be taken as the original image considered at a certain scale.

Invariance with respect to scale is achieved by finding key points of the original image taken at different scales. For this, a pyramid of Gaussians is built: the whole scale space is divided into octaves, and the part of scale space occupied by the next octave is twice as large as the part occupied by the previous one. Moreover, on the transition from one octave to another the image is resampled, its size being halved. Naturally, each octave covers an infinite family of Gaussians of the image, so only some number N of them is built, with a certain step in blur radius. With the same step, two additional Gaussians are computed (N+2 in total) extending beyond the octave. The scale of the first image of the next octave equals the scale of image number N from the previous octave.

In parallel with the pyramid of Gaussians, a pyramid of differences of Gaussians is built, composed of the differences of neighboring images in the pyramid of Gaussians (Fig. 1). Accordingly, the number of images in this pyramid is N+1.

Each difference is obtained from two neighboring Gaussians; the number of differences is one less than the number of Gaussians; on the transition to the next octave, the image size is halved.
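The octave structure described above can be sketched as a schedule of blur radii (the base σ = 1.6 and N = 3 intervals are common SIFT choices, assumed here rather than taken from the patent):

```python
def pyramid_schedule(base_sigma=1.6, intervals=3, octaves=4):
    """Sigma schedule of the Gaussian pyramid: each octave holds
    intervals + 2 Gaussians (hence intervals + 1 DoG images); the scale
    doubles from one octave to the next, where the image would also be
    downsampled by a factor of two."""
    k = 2 ** (1.0 / intervals)          # step in blur radius within an octave
    schedule = []
    for o in range(octaves):
        octave_sigma = base_sigma * (2 ** o)
        schedule.append([octave_sigma * k**i for i in range(intervals + 2)])
    return schedule
```

Note that the schedule reproduces the property stated in the text: the first scale of each octave equals scale number N of the previous octave.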

After building the pyramids, the singular points are found. A point is considered singular if it is a local extremum of the difference of Gaussians. Fig. 2 shows the stage of determining the extremum points: a point is taken as an extremum if the value of the difference of Gaussians at the marked point is greater (or less) than all the values at the neighboring points.

In each image of the pyramid of differences of Gaussians, the local extremum points are determined. Each point of the current DoG image is compared with its eight neighbors and with nine neighbors in each of the DoG images one level above and below it in the pyramid. If the point is greater (less) than all these neighbors, it is a local extremum.
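The 26-neighbor comparison just described can be sketched as follows (taking the DoG pyramid as a list of 2-D arrays; a hypothetical helper, not the patent's implementation):

```python
import numpy as np

def is_local_extremum(dog, i, x, y):
    """Check whether pixel (x, y) of DoG layer i is a local extremum among
    its 26 neighbours: 8 in its own layer plus 9 in each of the layers
    directly above and below in the pyramid."""
    value = dog[i][x, y]
    # 3 x 3 x 3 cube of values centred on the candidate point.
    cube = np.stack([layer[x - 1:x + 2, y - 1:y + 2] for layer in dog[i - 1:i + 2]])
    is_max = value >= cube.max() and (cube == cube.max()).sum() == 1
    is_min = value <= cube.min() and (cube == cube.min()).sum() == 1
    return is_max or is_min
```

The uniqueness check on the cube guards against flat plateaus, where no single pixel is a strict extremum.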

The next step is the refinement of the singular points: checking the extremum points for suitability as singular points.

The coordinates of the singular points are determined with sub-pixel accuracy. This is achieved by approximating the DoG function with a second-order Taylor polynomial taken at the computed extremum point:

D(x) = D + (∂D/∂x)^T x + (1/2) x^T (∂²D/∂x²) x,

where D is the DoG function, x = (x, y, σ) is the displacement vector relative to the expansion point, the first derivative of DoG is the gradient, and the second derivative of DoG is the Hessian matrix.

The extremum of the Taylor polynomial is found by calculating the derivative and equating it to zero. The result is the displacement x̂ of the computed extremum point relative to the exact one:

x̂ = -(∂²D/∂x²)^(-1) (∂D/∂x).

If any component of the vector x̂ is greater than 0.5 times the grid step in that direction, it means that the extremum point was actually computed incorrectly and one must move to the neighboring point in the direction of that component. For the neighboring point, everything is repeated. If this leads beyond the octave, the point should be excluded from consideration.

When the position of the extremum point has been computed, the actual value of DoG at this point is checked by the formula:

D(x̂) = D + (1/2) (∂D/∂x)^T x̂.

If this check fails, the point is excluded as a point with low contrast.
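The refinement and low-contrast check above can be sketched as follows, given the gradient and Hessian of the DoG at the sample point (the contrast threshold 0.03 is a common SIFT choice assumed here, not taken from the patent text):

```python
import numpy as np

def refine_extremum(D_value, grad, hessian, contrast_threshold=0.03):
    """Sub-pixel refinement of a DoG extremum.
    offset = -(d2D/dx2)^-1 (dD/dx); if any component of the offset exceeds
    0.5, the true extremum lies nearer a neighbouring sample point.
    The refined value D(x^) = D + (1/2) grad^T offset feeds the
    low-contrast check."""
    offset = -np.linalg.solve(hessian, grad)
    refined = D_value + 0.5 * grad @ offset
    shift_needed = np.any(np.abs(offset) > 0.5)
    keep = (not shift_needed) and abs(refined) >= contrast_threshold
    return offset, refined, keep
```

For an exactly quadratic D the returned offset is the true sub-pixel displacement of the extremum.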

The last check establishes whether the singular point lies on the boundary of some object or is poorly lit; in such a case the point can be excluded from consideration. Such points have a large principal curvature (one of the components of the second derivative) along the border and a small one in the perpendicular direction. The large curvature is determined by the Hessian matrix H; for the test, H of size 2×2 is used:

H = [[Dxx, Dxy], [Dxy, Dyy]].

Let Tr(H) be the trace of the matrix and Det(H) its determinant:

Tr(H) = Dxx + Dyy = α + β,

Det(H) = Dxx·Dyy - (Dxy)² = αβ.

Let r be the ratio of the larger curvature to the smaller:

α=rβ,

Then

Tr(H)²/Det(H) = (α+β)²/(αβ) = (rβ+β)²/(rβ²) = (r+1)²/r,

and the point is considered further, if

Tr(H)²/Det(H) < (r+1)²/r.
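The edge test above reduces to a few lines of code (r = 10 is the curvature-ratio threshold commonly used with SIFT, assumed here):

```python
def passes_edge_test(dxx, dyy, dxy, r=10.0):
    """Keep the point only if Tr(H)^2 / Det(H) < (r+1)^2 / r,
    where H = [[Dxx, Dxy], [Dxy, Dyy]]; points with one large and one
    small principal curvature (edges) fail the test."""
    tr = dxx + dyy
    det = dxx * dyy - dxy ** 2
    if det <= 0:          # curvatures of opposite sign: reject outright
        return False
    return tr * tr / det < (r + 1) ** 2 / r
```

An isotropic blob (equal curvatures) passes, while a strongly elongated ridge fails.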

The orientation of a key point is calculated from the directions of the gradients of the pixels adjacent to the singular point. All gradient computations are performed on the image in the pyramid of Gaussians with the scale closest to the scale of the key point. The magnitude and direction of the gradient at the point (x, y) are calculated by the formulas:

m(x, y) = sqrt((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²),

θ(x, y) = tan⁻¹((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y))),

where m is the gradient magnitude and θ is its direction.
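These two formulas translate directly into code (using atan2 rather than the bare arctangent, a common choice to resolve the full 360-degree range):

```python
import numpy as np

def gradient(L, x, y):
    """Gradient magnitude m and direction theta at pixel (x, y) of a
    Gaussian-smoothed image L, using the central differences from the text."""
    dx = L[x + 1, y] - L[x - 1, y]
    dy = L[x, y + 1] - L[x, y - 1]
    m = np.hypot(dx, dy)          # sqrt(dx^2 + dy^2) without overflow
    theta = np.arctan2(dy, dx)
    return m, theta
```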

First, the window around the key point in which the gradients will be considered is defined. This is the window required for convolution with the Gaussian kernel; it is round, and the blur radius of this kernel is σ = 1.5 times the scale of the key point. For the Gaussian kernel, the so-called "three sigma" rule holds: the value of the Gaussian kernel is very close to zero at distances greater than 3σ. Thus, the radius of the window is defined as [3σ].

The direction of the singular point is found from the histogram of directions (Fig. 3). The histogram consists of 36 components that evenly cover 360 degrees, and it is formed as follows: each point (x, y) of the window contributes a value m·G(x, y, σ) to the histogram component whose interval contains the gradient direction θ(x, y).

The direction of the key point lies in the interval covered by the maximum component of the histogram. The maximum component (max) and its two neighbors are interpolated by a parabola, and the maximum point of this parabola is taken as the direction of the key point. If the histogram has other components with values not less than 0.8·max, they are similarly interpolated and additional directions are ascribed to the key point.
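A sketch of the 36-bin histogram and the parabolic peak refinement, assuming the gradient magnitudes, directions, and Gaussian weights of the window pixels have already been computed:

```python
import numpy as np

def orientation_histogram(magnitudes, thetas, weights):
    """36-bin histogram of gradient directions over the key-point window;
    each pixel contributes m * G(x, y, sigma) to the bin containing its
    direction theta (the Gaussian weights are passed in precomputed)."""
    bins = ((np.degrees(thetas) % 360) / 10).astype(int)   # 36 bins of 10 deg
    hist = np.zeros(36)
    np.add.at(hist, bins, magnitudes * weights)            # unbuffered accumulation
    return hist

def dominant_direction(hist):
    """Parabolic interpolation of the peak bin and its two neighbours,
    returning the refined direction in degrees."""
    i = int(np.argmax(hist))
    l, c, r = hist[(i - 1) % 36], hist[i], hist[(i + 1) % 36]
    denom = l - 2 * c + r
    shift = 0.0 if denom == 0 else 0.5 * (l - r) / denom
    return ((i + shift) * 10) % 360
```

Secondary peaks above 0.8 of the maximum would be refined the same way to yield additional key-point directions.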

The next step is the construction of descriptors. In the SIFT method, the descriptor is a vector. Like the direction of the key point, the descriptor is computed on the Gaussian closest in scale to the key point, from the gradients in a window around the key point. Before computing the descriptor, the window is rotated by the angle of the key point to achieve invariance with respect to rotation.

Fig. 3 schematically shows the part of the image used to obtain the descriptor. The pixels are indicated by small squares. These pixels are taken from a square descriptor window which, in turn, is divided into four equal parts (regions). The center of this window lies between pixels and must be chosen as close as possible to the exact coordinates of the key point. The circle denotes the window of convolution with the Gaussian kernel (similar to the window used to calculate the direction of the key point); σ for this kernel is defined as half the width of the descriptor window. Subsequently, the value at each point of the descriptor window will be multiplied by the value of the Gaussian kernel at that point, as a weighting factor.

On the right is shown the descriptor of the singular point, of dimension 2×2×8. The first two digits of the dimension are the numbers of regions horizontally and vertically. The squares that covered a region of pixels in the left image are covered on the right by the histograms built on the pixels of those regions. Accordingly, the third digit of the descriptor dimension denotes the number of components of the histograms of these regions.

Each gradient in the descriptor window can be assigned three real coordinates (x, y, n), where x is the horizontal distance of the gradient, y its vertical distance, and n the distance to the gradient direction in the histogram (that is, the histogram component of the descriptor to which the gradient contributes). The reference point is the lower left corner of the descriptor window and the initial value of the histogram. The unit intervals are the sizes of a region horizontally and vertically for x and y, respectively, and the number of degrees in a histogram component for n. The trilinear interpolation coefficient for each coordinate (x, y, n) of the gradient is defined as 1 - d, where d is the distance from the coordinate of the gradient to the middle of the unit interval in which this coordinate falls. Each contribution of the gradient to the histogram is multiplied by all three trilinear interpolation weighting factors.

The descriptor of the key point consists of all the obtained histograms. The resulting descriptor is normalized; then all components whose values are greater than 0.2 are clipped to the value 0.2, and the descriptor is normalized again. In this form the descriptor is ready for use.
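The normalize-clip-renormalize finalization just described can be sketched as follows:

```python
import numpy as np

def finalize_descriptor(hists, clip=0.2):
    """Concatenate the region histograms, normalize to unit length,
    clip components above 0.2 (this suppresses large gradient magnitudes,
    e.g. from lighting changes), then renormalize."""
    v = np.concatenate([np.ravel(h) for h in hists]).astype(float)
    v /= np.linalg.norm(v)
    v = np.minimum(v, clip)
    return v / np.linalg.norm(v)
```

For example, the histogram vector (3, 4) normalizes to (0.6, 0.8), clips to (0.2, 0.2), and renormalizes to (1/√2, 1/√2).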

In the next steps, the descriptors of the source images in the YIQ color space are compared. At the final stage, the results of the matching are combined.

The image-duplicate search device (Fig. 4) contains an input connected to the input of the preprocessing unit of the first image 1 and to the input of the preprocessing unit of the second image 2. Block 1 consists of a recording unit of the first image 1.1, whose output is connected to the input of the unit 1.2 for converting the first image into the YIQ color space, whose output is connected to the input of the unit 1.3 for extracting the in-phase component of the first image, whose output is connected to the input of the unit 1.4 for forming images under rotation of the first image, whose output is connected to the input of the unit 1.5 for forming images under change of the tilt angle of the first image, whose output is connected to the input of the unit 1.6 for storing the simulated images for the first image, whose output is connected to the first input of the unit for applying the SIFT method. The output of the recording unit of the second image 2.1 is connected to the input of the unit 2.2 for converting the second image into the YIQ color space, whose output is connected to the input of the unit 2.3 for extracting the in-phase component of the second image, whose output is connected to the input of the unit 2.4 for forming images under rotation of the second image, whose output is connected to the input of the unit 2.5 for forming images under change of the tilt angle of the second image, whose output is connected to the input of the unit 2.6 for storing the simulated images for the second image, whose output is connected to the second input of the unit for applying the SIFT method (the block diagram of this unit is shown in Fig. 5, patent No. US6711293B1, IPC G06 9/68), whose output is connected to the input of the unit 4 for calculating the number of equal descriptors, whose output is connected to the input of the unit 5 for storing the found pairs of duplicates, whose output is the information output of the device.

The image-duplicate search device is implemented and works as follows. Two images, for which it is necessary to identify the corresponding descriptors and establish their similarity, arrive at the inputs of the recording units of the first 1.1 and second 2.1 images. For the purpose of processing images with specular surfaces, the images are converted into the YIQ color space in the conversion units 1.2 and 2.2. Next, the in-phase component of the YIQ color space is extracted in units 1.3 and 2.3. In units 1.4, 2.4, 1.5, 2.5, images are formed by rotation and by tilting, i.e. by applying affine transformations. Units 1.6 and 2.6 record all the images resulting from these transformations. The unit for applying the SIFT method constructs the SIFT descriptors of all the images as specified by the method. The obtained descriptors of the pair of images are sent to the input of unit 4, which calculates the number of equal descriptors and decides whether the second image is a duplicate of the first; if so, the duplicate is recorded in unit 5 for storing the found pairs of duplicates.
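The "quantity of equal descriptors" computed by unit 4 can be sketched with a nearest/second-nearest distance ratio test; the ratio 0.8 and the matching criterion itself are assumptions for illustration, as the patent text does not specify how equality of descriptors is decided:

```python
import numpy as np

def count_matching_descriptors(desc1, desc2, ratio=0.8):
    """Count descriptor correspondences between two images: a descriptor
    from the first image is matched if its nearest neighbour in the second
    image is markedly closer than the second-nearest one."""
    matches = 0
    for d in desc1:
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        if len(dists) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches += 1
        elif len(dists) == 1 and dists[0] == 0:
            matches += 1
    return matches
```

A downstream decision rule (e.g. a threshold on the match count) would then declare the second image a duplicate of the first.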

The technical result is the ability to compare descriptors as applied to the task of finding duplicate images.

An image-duplicate search device containing a preprocessing unit of the first image and a preprocessing unit of the second image, the preprocessing unit of the first image consisting of a recording unit of the first image; a unit for forming images as a result of rotation of the first image, whose output is connected to the input of the unit for forming images under change of the tilt angle of the first image, whose output is connected to the input of the unit for storing the simulated images for the first image, whose output is connected to the first input of the unit for applying the SIFT method; a recording unit of the second image; a unit for forming images under rotation of the second image, whose output is connected to the input of the unit for forming images under change of the tilt angle of the second image, whose output is connected to the input of the unit for storing the simulated images for the second image, whose output is connected to the second input of the unit for applying the SIFT method; and a unit for storing the found pairs of duplicates, whose output is the information output of the device; wherein the output of the recording unit of the first image is connected to the input of the unit for converting the first image into the YIQ color space, whose output is connected to the input of the unit for extracting the in-phase component of the first image; the output of the recording unit of the second image is connected to the input of the unit for converting the second image into the YIQ color space, whose output is connected to the input of the unit for extracting the in-phase component of the second image; and the output of the unit for applying the SIFT method is connected to the input of the unit for calculating the number of equal descriptors, whose output is connected to the input of the unit for storing the found pairs of duplicates.



 

Same patents:

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to an apparatus for integrating images into a single scene composition. The apparatus comprises, connected to each other, a control point search unit (7), a match search unit (8), an input image conversion unit (14), an overlay unit (15), an input realisation storage unit (1), the input of which is the data input of the apparatus, a unit for transferring control points to the original image (12), a boundary detector unit (4), a detailed object search unit (5), a detailed region selection unit (6), a false match exclusion unit (9), a detailed object scaling unit (10), a unit for bringing input images to a single scale (13), a detailed object conversion unit (11), an output realisation storage unit (16), the output of which is the data output of the apparatus, as well as a control unit, wherein synchronous operation of the apparatus is provided by a clock pulse generator (3).

EFFECT: high accuracy of integrating images of a different scale owing to the automatic selection of conversions of detailed objects, determining scaling and transformation factors and excluding false matches.

4 dwg

FIELD: physics.

SUBSTANCE: interventional system includes an imaging device providing a live image of an object, a needle device which can enter the object and has a position in the object visible on the live image, and a processing device capable of obtaining a pre-recorded image of the object from a database. The needle device includes a sensor for providing local data corresponding to properties of tissue near the sensor, and the processing device is capable of overlaying the pre-recorded image and the live image on each other, wherein the local data from the sensor are used to recalibrate overlaying based on the pre-recorded image, the position of the needle device on the live image and local data from the sensor. A computer-readable medium of the system stores a computer program which instructs the processing device to carry out a method of overlaying the pre-recorded image and the live image of the object.

EFFECT: use of the invention improves accuracy of overlaying images.

12 cl, 8 dwg

Analysis of vessels // 2534948

FIELD: medicine.

SUBSTANCE: the invention relates to the analysis of a tubular structure. A system contains: display means (1) for displaying a three-dimensional image representing at least the tubular structure; indication means (2) enabling the user to indicate a position on a vessel of the tubular structure to obtain an indicated position, the indication means (2) being configured to react to a mouse-movement event; and identification means (3) for identifying the section of the tubular structure located around said position, including any bifurcations, and extending up to a specified distance measured from said position, to obtain an identified section, the display means (1) being configured to display a graphic annotation in the displayed three-dimensional image showing the identified section of the tubular structure, and the identification means and/or display means being configured to provide a renewed identified section when a mouse-movement event is detected.

EFFECT: enabling vessel analysis.

10 cl, 2 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to means of determining the edge curves of an object on an image. The method comprises converting the image into a digital spectrozonal image in which each pixel is assigned a hue value from the HSV colour space corresponding to a colour angle H on a predetermined colour circle; classifying each pixel either as an object pixel, whose hue lies within a predetermined range of values, or as a background pixel; and determining an entropy profile. The method further comprises differentiating the determined entropy profile, considering its extreme values and determining the edge curves of the object as those having the highest entropy difference.

EFFECT: determining edge curves of an object without clearly defined contours.

2 cl, 5 dwg

FIELD: medicine.

SUBSTANCE: invention refers to segmenting a medical image volume. A system (10) comprises: a display (24) in the user interface (16) on which the user is presented with an original segmented volume of interest; the user interface (16) comprises an instrument (26) for user data input which is used to control the weight of a superparameter (50) of the segmented volume of interest; a processor configured to execute a parameter controller component (22) which controls one or more internal parameters (52) in accordance with the superparameter weight control to induce variations of the segmented volume of interest; the processor (12) is configured to repeat the segmentation of the volume of interest more than once using the controlled internal parameters, and to display the re-segmented volume.

EFFECT: simplified control of the internal parameters in accordance with the superparameter weight control, as well as provided image re-segmentation to achieve the desired effect.

15 cl, 6 dwg

FIELD: medicine.

SUBSTANCE: invention relates to the field of analysis of three-dimensional medical images. The system contains: a surface detector (52) for identifying a section of the object surface (5) represented by an image; a pathological change detector (51) for identifying a pathological change (4) present in the object and represented by the image; and means (53) of determining the orientation of the pathological change (4) relative to the section of the surface (5), based on the output signal of the surface detector (52).

EFFECT: increased accuracy of estimating orientation of the pathological change of the three-dimensional image.

15 cl, 8 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to digital image processing means. The method includes creating a digital photograph data array; creating a reference base; forming a code feature matrix of reference photographs by recoding brightness matrices into code feature matrices; constructing a photometric correction function for each reference photograph; forming a code feature matrix of the original photograph by recoding the brightness matrix into a code feature matrix; algebraic subtraction of the code feature matrices of the reference photographs and the original photograph to establish a threshold for reliable identification of the reference photograph most similar to the original photograph; forming an address of a reference photograph with an extension from its code feature matrix and photometric correction function; retouching the original photograph based on the calculated code feature matrix and the photometric correction function at the address in the reference base.

EFFECT: providing automatic correction of an original photograph based on a reference photograph photometric correction function.

4 dwg

FIELD: physics.

SUBSTANCE: the group of inventions relates to computerised tomography techniques. The apparatus for determining a change in the size of an object includes an image data provision unit for providing a first set of image data showing the object at a first time and a second set of image data showing the object at a second, different time. The apparatus also includes a region-of-interest provision unit for providing first and second regions of interest in which the objects shown in the first and second sets of image data are located, and a recording unit for registering the first and second regions of interest relative to each other. Said unit is configured to generate a scaling value by performing a scaling transformation to register the first and second regions of interest relative to each other.

EFFECT: high accuracy of determining changes in the size of an object.

10 cl, 4 dwg

FIELD: medicine.

SUBSTANCE: the inventions relate to means of visualising an object of interest together with a device applied for treatment of a patient. The apparatus contains an image formation unit capable of providing information about the image of the object of interest and the device located inside it, a processing unit, and a display capable of visualising an image which combines the information about the device image with enhanced information about the image of the object of interest. The processing unit performs detection and segmentation of the device based on the image information provided by the image formation unit and deletes the device image information; detects the object of interest based on the provided image information; enhances the image of the object of interest by temporal integration; and then re-adds the device image information.

EFFECT: the invention makes it possible to compensate, by means of temporal integration, for the movement of the treatment device located inside the object of interest.

11 cl, 1 dwg

FIELD: aviation.

SUBSTANCE: invention relates to a system and method of monitoring a flight strip (FS). The system comprises one or several cameras which capture images of the FS, and an image processing system for detecting damage from foreign objects on the FS based on adaptive processing of the images captured by the cameras, and for applying image quality enhancement methods to improve their clarity. The image processing system is configured to detect motion in the captured images, to learn the background using only the images in which motion is absent, and to build a composite contour map during background learning; the composite map of background contours consists of an adaptive background contour map and a previously learned background map, as well as additional seasonal markings created for a specific time of year or weather conditions. The monitoring system is adaptively suited for detecting damage from foreign objects under outdoor lighting conditions in the daytime and at night without forced lighting, including infrared or laser lighting devices.

EFFECT: providing detecting damages from foreign objects under conditions of natural lighting in the daytime and at night without the use of additional lighting, such as infrared or laser light devices.

30 cl, 10 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to a method and a device for reading physical characteristics of an object. The method of reading a physical characteristic of an object includes: a step (240, 315, 405) of capturing a first image of at least a portion of the object with a first resolution; a step (245, 320, 415, 420) of determining the position of an area of the object to be processed, based on the first image; a step (255, 330, 430) of capturing a second image of the area of the object to be processed, with a second resolution higher than the first resolution; and a step (260, 335, 440) of determining the physical characteristic based on the second image, wherein the step (245, 320, 415, 420) of determining the position of the area of the object to be processed includes: a step of determining at least one marked position, a step of capturing an image of at least one said marking, and a step of reading information represented by said marking and representing the position of the area to be processed.

EFFECT: enabling positioning of an area of interest, from which a physical characteristic of the object is read when detecting output data of the object for sorting and standardisation.

17 cl, 6 dwg

FIELD: physics, computer engineering.

SUBSTANCE: invention relates to computer science and can be used for automatic identification of objects on images. The method involves scanning the original high-resolution photographic image. The array of the obtained readings is converted to the scale of the reference array by normalising pixel brightness with a scaling coefficient. The obtained image is broken down into three two-dimensional arrays in the standard RGB colour palette. Outlines of objects are selected using techniques for spatial differentiation of the array signal function. The relief surfaces of the objects inside the selected outlines are approximated with a mosaic of triangles. The area of the mosaic in each channel is calculated using the Heron formula, and the obtained surface relief areas of the objects are compared with the corresponding reference values using the validation criterion |S_ref − S_ob| / S_ref × 100%.

EFFECT: automating recognition with high accuracy.

7 dwg
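The area comparison step of this method is straightforward to reproduce. The sketch below, with illustrative function names, computes the mosaic area by the Heron formula and the relative-deviation validation criterion |S_ref − S_ob| / S_ref × 100%.

```python
import math

def heron_area(a, b, c):
    """Area of a triangle from its side lengths (Heron's formula)."""
    s = (a + b + c) / 2.0
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def mosaic_area(triangles):
    """Total area of a relief surface approximated by a triangle mosaic;
    each triangle is given as an (a, b, c) tuple of side lengths."""
    return sum(heron_area(*t) for t in triangles)

def relative_deviation_percent(s_ref, s_ob):
    """Validation criterion |S_ref - S_ob| / S_ref * 100%."""
    return abs(s_ref - s_ob) / s_ref * 100.0
```

An object would be accepted as matching a reference when this percentage falls below a chosen recognition threshold.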

FIELD: physics.

SUBSTANCE: the method for equal-contrast colour discrimination of two or more objects consists in switching from traditional ICI colour spaces to a new colour space in which the colour discrimination threshold is represented by a sphere (ball) whose diameter is equal to that threshold; the conversion of the old colour space is carried out using a modified Einstein equation describing curved space-time. In the new colour space, the coordinates of the metric tensor are defined by a colour tensor, the cosmological constant and the energy-momentum tensor are equal to zero, and the indices of the metric tensor range over the values 1, 2 and 3.

EFFECT: broader functionalities by taking into account stimulus brightness.

9 dwg

FIELD: information technology.

SUBSTANCE: method of recognising facial images involves recording and recognition using gamma-correction and logarithmation of images; conversion thereof into spectra by two-dimensional cosine transform and removing high-frequency spectral components; when recognising test images, comparing the obtained test spectra with spectra stored in a reference base is carried out based on the average brightness value of the test image; the converted test spectra are classified based on minimum distance criterion; a decision on recognition is made. The system for realising said method includes a recording device and a recognition device, the recording device having a gamma-correction unit, an adder, a logarithmation unit, two two-dimensional cosine transform units, a reference base unit, and the recognition device having an adder, a logarithmation unit, a gamma-correction unit, two two-dimensional cosine transform units, a spectrum generating unit and a classifier.

EFFECT: faster recognition of facial images and high universality thereof.

2 cl, 6 dwg
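The spectral pipeline of this method (gamma correction, logarithmation, two-dimensional cosine transform, truncation of high-frequency components, minimum-distance classification) can be sketched as follows; the gamma value, block size and normalisation are illustrative assumptions, not the patented parameters.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0] *= 1 / np.sqrt(2)
    return c * np.sqrt(2.0 / n)

def dct2(img):
    """Two-dimensional cosine transform of a square grayscale image."""
    c = dct_matrix(img.shape[0])
    return c @ img @ c.T

def spectrum_features(img, keep=8):
    """Gamma-correct, take the logarithm, transform to the spectrum and
    keep only the low-frequency keep x keep block (HF components removed)."""
    g = np.log1p(np.power(img.astype(float), 0.5))  # gamma 0.5 (assumed)
    return dct2(g)[:keep, :keep]

def classify(test_spec, reference_specs):
    """Minimum-distance classifier over stored reference spectra."""
    dists = [np.linalg.norm(test_spec - r) for r in reference_specs]
    return int(np.argmin(dists))
```

Discarding high-frequency DCT coefficients is what yields both the speed-up and the robustness to fine-scale image variation claimed in the effect.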

FIELD: information technologies.

SUBSTANCE: the method operates with passports of archive and query fingerprints in the form of two lists of unstable Cartesian coordinates of key points, which are converted into two lists of more stable polar coordinates of these points relative to the centres of valid basic sections. Using them, the degree of proximity between two fingerprints is determined by finding, for every key point in one print, the minimum distance to all key points of the other print, and by computing the average of these distances over all key points of both prints for subsequent comparison with a threshold value and making an identification decision. The device processes the query and archive prints in parallel in two interrelated micro-PCs (master and slave); additionally, to accelerate computations in parallel sections of the identification logic, data is transferred to a multiprocessor recognition accelerator.

EFFECT: development of a specialised computing device for finding proximity of a pair of fingerprints during identity verification or identification with least costs of time for recognition and minimised impact of negative factors, arising when reading a print.

2 cl, 3 dwg
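The proximity measure described above reduces to a few lines: convert key points to polar coordinates relative to a basic-section centre, take the nearest-neighbour distance for every key point of each print in the other print, and average. The sketch below is a single-threaded illustration and ignores the micro-PC/accelerator hardware split.

```python
import math

def to_polar(points, center):
    """Convert key-point Cartesian coordinates to polar coordinates
    (radius, angle) relative to a basic-section centre."""
    cx, cy = center
    return [(math.hypot(x - cx, y - cy), math.atan2(y - cy, x - cx))
            for x, y in points]

def min_distance(p, others):
    """Distance from one key point to its nearest neighbour in the
    other print (compared in the (radius, angle) plane)."""
    return min(math.hypot(p[0] - q[0], p[1] - q[1]) for q in others)

def proximity(print_a, print_b):
    """Average of nearest-neighbour distances over the key points of
    both prints; compared against a threshold to decide identification."""
    d = [min_distance(p, print_b) for p in print_a]
    d += [min_distance(q, print_a) for q in print_b]
    return sum(d) / len(d)
```

The polar representation is what gives the claimed stability: translating the print and its basic-section centre together leaves the coordinates, and hence the proximity, unchanged.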

FIELD: information technologies.

SUBSTANCE: the method identifies character images suspected of being twice-printed by first providing a set of reference images of single characters from the character images identified in text processed by the OCR system, and then combining reference single characters to provide candidate models for the suspect image. Correlating each candidate model with the suspect image indicates the pair of reference single-character images with the highest probability of being the correct identification of the character images within the twice-printed image.

EFFECT: recognition of twice printed characters.

5 cl, 4 dwg, 1 tbl

FIELD: information technology.

SUBSTANCE: the input image is convolved with a given Green's function; the obtained results are subtracted from each other to obtain a finite-difference approximation of the first-order derivative of the filtered input image; when searching for local extrema of this result, the corresponding first-order derivatives are equated to zero; all local extrema are found and adaptive threshold filtering is performed to cut off insignificant features, with the selected points serving as centres of neighbourhoods for which arbitrary descriptors are constructed.

EFFECT: high accuracy of recognising objects.

3 dwg
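A sketch of this scheme in one dimension, assuming the Green's function is a Gaussian kernel and using a fixed rather than adaptive threshold: two smoothings at different scales are subtracted to approximate the derivative across scale, and the surviving local extrema become centres of descriptor neighbourhoods.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel (assumed form of the Green's function)."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(signal, sigma):
    return np.convolve(signal, gaussian_kernel(sigma), mode="same")

def scale_derivative(signal, s1=1.0, s2=2.0):
    """Finite-difference approximation of the derivative across scale:
    the difference of two smoothings (a difference of Gaussians)."""
    return smooth(signal, s2) - smooth(signal, s1)

def local_extrema(d, threshold):
    """Indices where the scale derivative has a local extremum whose
    magnitude exceeds the (here fixed, in the patent adaptive) threshold."""
    idx = []
    for i in range(1, len(d) - 1):
        if abs(d[i]) >= threshold and (
                (d[i] > d[i - 1] and d[i] > d[i + 1]) or
                (d[i] < d[i - 1] and d[i] < d[i + 1])):
            idx.append(i)
    return idx
```

An isolated bright point produces a strong extremum at its own position, which is exactly the behaviour a feature detector of this kind relies on.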

FIELD: information technologies.

SUBSTANCE: the invention can be used for protecting private information of citizens when providing social assistance to them; it can also be used in judicial record keeping for witness protection, as well as in electronic voting systems. To achieve the result, examples of two biometric images of the registered person are used and a pair of keys (a public and a private key) is formed; a first neural network is trained to reproduce the public key when the first biometric image is presented to it, and a second neural network is trained to reproduce the user's private key when the second biometric image of the registered person is presented to it. The public key of the registered person and the matrices of connections and parameters of the trained neural network are stored, the time and place of registration are fixed, the data is signed with the electronic digital signature (EDS) of the automatic registration machine and the EDS of the registered person, the EDS of the registered person is verified, and, if it is valid, the examples of biometric images and the private key of the registered person are deleted.

EFFECT: improving safe registration of people.

2 cl

FIELD: hydrology, possible use for determining thickness of ice cover over freezing water areas on basis of data of remote measurement instruments, installed on meteorological Earth satellites.

SUBSTANCE: in accordance with the invention, the given mathematical formulas are used to calculate a priori a set of reference nonlinear similarity coefficients between the relief of true ice thickness and the relief of the temperature field of the ice cover. Similarity coefficients are then determined from the data of the infrared image being analyzed, in which test sections are selected whose surface temperature corresponds to sections of water at freezing temperature and to sections of "thick" (thickness > 120 cm) snow-covered ice. Under identical hydro-meteorological conditions, similarity coefficients equal to the calculated ones are selected from the set of reference values for the time of image receipt, and the temperature intervals corresponding to the selected discrete intervals of ice thickness are determined. On the image of the ice cover, sections with the given ice thickness intervals are singled out using a colour pattern. As test sections on the infrared image corresponding to the surface temperature or pixel brightness of "thick" snow-covered ice, sections of the image of snow-covered earth surface positioned close to the ice cover being examined are selected.

EFFECT: automated interactive monitoring of ice thickness in freezing water areas based on images in the thermal infrared channel of meteorological Earth satellites.

3 cl, 1 dwg

FIELD: computer engineering.

SUBSTANCE: the instantaneous object recognition computer system is capable of recording, storing, erasing, reproducing and recognising information, and includes a computer, a device for controlling electronic devices controlled by the computer, and a memorising-recognising block of memory pages, whose input contacts are connected to the device for controlling electronic devices and whose output contacts are connected to the reading device of the computer. The memorising-recognising block consists of memory pages in the form of boards with thyristor-stabilitron cells. The controlling electrodes of each thyristor of the thyristor-stabilitron cells of a memory page are connected to the analogous controlling electrodes of adjacent memory pages, creating a system of input contacts of the memorising-recognising device. The output contacts of the stabilitron circuits of the thyristor-stabilitron cells are connected to the analogous output contacts of the stabilitron chains of adjacent memory pages, creating a system of output contacts. The principle of memorisation and instantaneous recognition consists in outputting the normalised contour to be memorised in the memorising-recognising block to a monitor screen. The computer calculates the contour of thyristors to be opened on a given memory page such that the contour of the electric current passing through these thyristors on the memory page, forming an electric circuit, is analogous to the image contour on the monitor screen; this constitutes memorisation, and the contour is stored in memory until the electric circuit is interrupted on a signal from the computer. To open these thyristors, the device for controlling electronic devices dispatches opening control impulses to the input contacts of the memorising-recognising device connected to the controlling electrodes of the memory-page thyristors whose contour matches the contour of the image being memorised.
To recognise an object from the contour of its normalised image, or only from a fragment of it, output on the monitor screen, the computer determines the coordinates of the input contacts of the memorising-recognising block corresponding to the contour of the object being recognised and dispatches control impulses to them. For more precise recognition of an object from its fragment, the computer creates a system of movable buses which select the part of the contour of the template images against which recognition is performed. On the memory page where the contour of the electric current passing through the thyristors of the thyristor-stabilitron cells of the memorised template image coincides with the contour of the object being recognised, the current passing along the contour of the thyristor-stabilitron cells increases, resulting in breakdown of the stabilitron chains of the thyristor-stabilitron cells and the appearance of a potential on the output contacts of the memorising-recognising block whose contour is analogous to the contour of the object being recognised. The computer, reading the contour of the output contacts of the memorising-recognising block carrying a potential, outputs the contour of the recognised image (the template image) to the monitor screen together with information about it, or sends a recognition signal to an executive organ.

EFFECT: increased trustworthiness of recognition, expanded recognition capabilities.

2 cl, 6 dwg

FIELD: physics.

SUBSTANCE: device additionally includes a register of criteria codes, a unit of memory of criteria codes, a decoder of criteria codes and a unit of result memory.

EFFECT: increased efficiency of the device due to a reduced number of requested recognition criteria in cases where the result becomes available early from the current recognition situation.

5 dwg, 2 tbl
