RussianPatents.com

Image processing device, system for forming images using radiation, image processing method and storage medium storing programme. RU patent 2496142.

IPC classes for RU patent 2496142:

G06T1/00 - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL (specially adapted for particular applications, see the relevant subclasses, e.g. G01C, G06K, G09G, H04N)

FIELD: information technology.

SUBSTANCE: the image processing device has an image acquisition unit, which receives a plurality of partial images obtained by capturing each of a plurality of capturing ranges into which an object capturing region is divided; a forming unit, which forms an image by combining the plurality of partial images obtained by capturing the object; a feature value deriving unit, which derives a quantitative feature value of one of the partial images; a characteristic deriving unit, which derives a gradation conversion processing characteristic based on the quantitative feature value and the capturing region; a conversion unit, which converts, based on the characteristic, the gradation of the image of the object capturing region obtained by combining the partial images; and a control unit, which controls the forming unit and the characteristic deriving unit such that the processes of the feature value deriving unit and the characteristic deriving unit are performed concurrently with the process of the forming unit.

EFFECT: shorter time required, in image formation using radiation, to obtain an image with converted gradation.

17 cl, 7 dwg

 

BACKGROUND OF THE INVENTION

Technical Field

The present invention relates to gradation conversion of images.

Description of the prior art

Gradation conversion is a process in which a certain range of pixel values in an image is emphasized relative to other ranges. Gradation conversion is used to match the gradation of an image to the gradation scale of a recording or display device, and to enhance the gradation of a region of interest. This processing prevents unnecessary loss of highlight detail or shadow detail in the image. In addition, an image can be obtained in which the region of interest is easier to see.

On the other hand, there are capturing modes, such as large-image capturing or split capturing, in which a plurality of partial images are obtained by capturing an object region divided into a plurality of capturing ranges. These methods are used to capture a large object that does not fit into a single image. An image of the entire object can be obtained by combining the individual partial images obtained by large-image capturing. US patent No. 7440559 discusses methods for gradation conversion of an image obtained by large-image capturing. In US patent No. 7440559, techniques are described in which the gradation conversion is based on the density histogram of the combined image or on one of the partial images.

However, if the combined image is used as the target of analysis, the processing load increases, which makes accurate analysis difficult to achieve. On the other hand, if gradation conversion of the large image is performed on the basis of a single partial image, the large image as a whole may not have appropriate gradation, because the other partial images are not taken into account.

SUMMARY OF THE INVENTION

In accordance with an aspect of the present invention, an image processing device includes an image acquisition unit configured to receive a plurality of partial images obtained by capturing each of a plurality of capturing ranges into which an object capturing region is divided; a feature value acquisition unit configured to obtain a quantitative feature value of at least one of the partial images; a characteristic acquisition unit configured to obtain a characteristic of gradation conversion processing on the basis of the quantitative feature value and the capturing region; and a conversion unit configured to convert, on the basis of the characteristic, the gradation of the image of the object capturing region obtained by combining the partial images.

Further features and aspects of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the attached drawings.

BRIEF DESCRIPTION OF DRAWINGS

The accompanying drawings, which are included in and constitute a part of the specification, illustrate exemplary embodiments, features and aspects of the invention and, together with the description, serve to explain the principles of the invention.

Figure 1 is a diagram of the configuration of a system for forming images using radiation in accordance with a first exemplary embodiment of the present invention.

Figure 2 illustrates the configuration of the image processing device in accordance with the first exemplary embodiment.

Figure 3 shows the configuration of the feature value acquisition unit in accordance with the first exemplary embodiment.

Figure 4 illustrates an example of a gradation conversion function obtained by the image processing device.

Figure 5 is a flowchart illustrating the flow of processing performed by the system for forming images using radiation in accordance with the first exemplary embodiment.

Figure 6 shows the configuration of an image processing device in accordance with a fifth exemplary embodiment of the present invention.

Figure 7 is a flowchart illustrating the flow of processing performed by the system for forming images using radiation in accordance with the fifth exemplary embodiment.

DESCRIPTION OF THE EMBODIMENTS

Various exemplary embodiments, features and aspects of the invention will be described with reference to the drawings.

A system 100 for forming images using radiation in accordance with the first exemplary embodiment of the present invention will now be described with reference to Figures 1 to 7.

First, the configuration of the system 100 for forming images using radiation will be described with reference to Figure 1. As illustrated in Figure 1, the system 100 includes a radiation source 101, a sensor 103, an image processing device 104 and a display unit 105. The radiation source 101 is formed by an X-ray tube, which emits radiation such as X-rays. The radiation source 101 irradiates an object 102 with an appropriate dose of radiation for a predefined time. The sensor 103 is formed as an indirect-type flat panel detector (FPD), comprising a fluorescent screen, which converts radiation into visible light, and an image sensor, which receives the visible light and converts it into an electrical signal according to the amount of light. With the sensor 103, an electrical signal representing an image of the object 102 can be obtained. The sensor 103 forms an image of the object by applying known corrections to this electrical signal. Because the transmission of radiation depends on the substance through which it passes, the internal structure of the object can be visualized on the basis of the image from the sensor 103. A direct-type flat panel sensor, which converts X-ray radiation directly into an electrical signal, can also be used as the sensor 103.

In addition, the system 100 for forming images using radiation supports large-image capturing, in which a plurality of partial images are obtained by capturing the region of the object 102 divided into a plurality of capturing ranges. Large-image capturing is a capturing method in which capturing is performed multiple times while moving the sensor 103 along the object 102 and changing the direction of radiation of the radiation source 101 by means of a drive unit (not illustrated). In this way, an image of an object larger than the effective capturing area of the sensor 103 can be obtained.

The image processing device 104 generates a large image by combining the plurality of partial images obtained by large-image capturing. The term "large image" refers to the image derived by combining the plurality of partial images obtained by large-image capturing. The image processing device 104 also determines the characteristic of gradation conversion processing by analyzing the partial images, and converts the gradation of the large image. The image whose gradation has been converted is then displayed on the display unit 105. Because image information important for diagnosis can be displayed in an easily understood manner through gradation conversion processing, an image that can be viewed easily is obtained. Such an image is especially useful in diagnosis involving detailed examination.

The image processing device 104 includes, as hardware, a central processing unit (CPU) 106, a random access memory (RAM) 107, a read-only memory (ROM) 108, a hard disk drive (HDD) 109, a network interface (I/F) 112 and a display unit interface 113. A keyboard 110 and a mouse 111 are connected to the image processing device 104. A computer program for implementing the functional blocks illustrated in Figure 2, or for executing the processes described below, is stored in the ROM 108 or on the HDD 109. This program is executed by the CPU 106 in the RAM 107 to implement the functions of the image processing device 104 described below. Although only one CPU 106 is illustrated in Figure 1, the invention is not limited to this; the device may incorporate multiple CPUs 106.

The configuration of the image processing device 104 will now be described in more detail with reference to Figure 2. The image processing device 104 includes a partial image acquisition unit 201, an image forming unit 202 for forming a large image, a gradation conversion unit 205, an output unit 206, a capturing region acquisition unit 207, a feature value acquisition unit 208, a characteristic acquisition unit 209 for obtaining the gradation conversion characteristic, a control unit 210 and a storage unit 211.

The image forming unit 202 includes an image correction unit 203 for correcting the pixel values of the partial images obtained by the partial image acquisition unit 201, and an image combining unit 204 for positioning the partial images and joining them in their overlapping regions.

As illustrated in Figure 3, the feature value acquisition unit 208 includes a saturated pixel value calculation unit 301, an object extraction unit 302, a maximum pixel value calculation unit 303, a minimum pixel value calculation unit 304 and a region-of-interest pixel value calculation unit 305. These calculation units of the feature value acquisition unit 208 calculate quantitative feature values by analyzing each partial image on the basis of the capturing region information obtained by the capturing region acquisition unit 207. These partial images are the partial images obtained by the partial image acquisition unit 201 and corrected by the image correction unit 203. The quantitative feature values include the saturated pixel value of each partial image, the maximum pixel value in the object, the minimum pixel value in the object and the pixel value in the region of interest. Since the quantitative feature values are calculated by performing each analysis process on a partial image, the accuracy of the analysis is higher than when the whole image is analyzed. In addition, the time required for the analysis processing can be reduced.
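The feature values listed above can be sketched for a single partial image as follows. The function name, the region-of-interest representation and the use of the saturation level to exclude non-object pixels are assumptions for illustration, not the patented calculations.

```python
def feature_values(partial, roi, saturation_level):
    """Derive illustrative quantitative feature values from one partial
    image (a list of pixel-value rows). `roi` is a (row0, row1, col0, col1)
    region of interest. The names and structure here are assumptions,
    not the patented implementation."""
    flat = [p for row in partial for p in row]
    # Treat pixels at or above the saturation level as outside the object.
    body = [p for p in flat if p < saturation_level]
    roi_pixels = [partial[r][c]
                  for r in range(roi[0], roi[1])
                  for c in range(roi[2], roi[3])]
    return {
        "max": max(body),             # maximum pixel value in the object
        "min": min(body),             # minimum pixel value in the object
        "roi": sum(roi_pixels) / len(roi_pixels),  # ROI representative value
        "saturated": saturation_level,
    }
```

Computing these per partial image, rather than on the combined large image, is what keeps the analysis cheap enough to run alongside the merging step.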

The characteristic acquisition unit 209 obtains the characteristic of gradation conversion processing from the quantitative feature values by a method based on the capturing region. This "processing characteristic" is a characteristic such as a function used in gradation conversion, or a lookup table. In this exemplary embodiment, because a gradation conversion function is used as the gradation conversion characteristic, the parameters necessary for defining the gradation conversion function are obtained. Since the capturing region reliably defines the general tendency of pixel values in the captured region, the gradation conversion can be based on the capturing region. The method of obtaining the gradation conversion characteristic will be described below.

The control unit 210 centrally controls each of the functions described above.

Information about the region of interest, which the feature value acquisition unit 208 needs in order to obtain the pixel value in the region of interest, is associated with the capturing region and stored in the storage unit 211. The feature value acquisition unit 208 obtains the pixel value in the region of interest by referring to this information. In addition, the name of the function for obtaining the parameters used by the characteristic acquisition unit 209 in gradation conversion on the basis of the quantitative feature values is associated with the capturing region and stored in the storage unit 211. The function for obtaining the quantitative feature values can be executed by the characteristic acquisition unit 209. The characteristic acquisition unit 209 obtains a gradation conversion function as the gradation conversion characteristic by referring to this information.

The processing for obtaining the gradation conversion function, performed by the characteristic acquisition unit 209, will now be described with reference to Figure 4.

First, based on the saturated pixel values obtained as quantitative feature values of the individual partial images, the saturated pixel value for the large image is determined by a method based on the capturing region. Then, pixels having a value greater than or equal to this saturated pixel value are clipped to the maximum value in the image.

Next, target values after conversion are set for the minimum pixel value, the maximum pixel value and the pixel values in the region of interest. These target values are set on the basis of the number of output gradations and how strongly the region of interest should be emphasized. The target values may thus be determined on the basis of the capturing region, or set to prescribed values regardless of the capturing region. In this way, the gradation conversion function illustrated in Figure 4 is defined.
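A function of the kind shown in Figure 4 can be sketched as a piecewise-linear curve through (input value, target value) control points, for example the minimum, region-of-interest and maximum pixel values with their targets. The piecewise-linear form and the sample control points are assumptions; the exact curve used by the patent is not specified here.

```python
def make_gradation_function(points):
    """Build a piecewise-linear gradation conversion function from
    (input_value, target_value) control points, such as the minimum,
    region-of-interest and maximum pixel values paired with their
    target values (a sketch of the kind of curve shown in Figure 4)."""
    points = sorted(points)

    def convert(p):
        # Clip outside the controlled input range.
        if p <= points[0][0]:
            return points[0][1]
        if p >= points[-1][0]:
            return points[-1][1]
        # Linear interpolation between the bracketing control points.
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= p <= x1:
                return y0 + (y1 - y0) * (p - x0) / (x1 - x0)

    return convert

# Hypothetical control points: min -> 0, ROI value -> mid-gray, max -> 255.
f = make_gradation_function([(100, 0), (500, 128), (900, 255)])
```

With these sample points, the region-of-interest value 500 is pinned to mid-gray while the rest of the object range is spread linearly on either side.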

The sequence of processing operations performed by the system 100 for forming images using radiation will now be described with reference to Figure 5. First, in step S501, the capturing conditions for the object are set on the basis of input from the keyboard 110 and mouse 111. The capturing conditions may also be set by receiving capturing order information from an external information system. The capturing conditions include capturing region information, which indicates the region of the object to be captured. The capturing conditions are stored in the storage unit 211 of the image processing device 104.

In step S503, the partial image acquisition unit 201 of the image processing device 104 obtains the plurality of partial images produced by large-image capturing. Each partial image is associated, as supplementary information, with information indicating the order in which that partial image was captured and information indicating the total number of partial images obtained by large-image capturing.

In step S504, the image correction unit 203 of the image forming unit 202 corrects the pixel values of the plurality of received partial images. As a way of correcting the pixel values, each pixel value in a whole image may be shifted using the average of the overlapping regions, as in the conventional method, so that the pixel values of the overlapping regions are brought close together. Here "brought close" need not be interpreted strictly; for example, it may mean that the difference between the average pixel values of the common regions is brought below a predefined threshold. The correction of pixel values can also be achieved by minimizing the difference between the histograms of the overlapping regions.
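The average-based shift correction mentioned above can be sketched as follows: the whole image is offset so that the mean of its overlap region matches the mean of the neighbouring image's overlap region. The flat-list interface for the overlap regions is an assumed simplification.

```python
def shift_correction(img_b, overlap_a, overlap_b):
    """Shift every pixel value of partial image img_b so that the mean
    of its overlap region (overlap_b, a flat pixel list) matches the
    mean of the neighbouring image's overlap region (overlap_a).
    A sketch of the conventional average-based method named in the text."""
    mean_a = sum(overlap_a) / len(overlap_a)
    mean_b = sum(overlap_b) / len(overlap_b)
    offset = mean_a - mean_b
    # Apply one global offset so the whole image stays internally consistent.
    return [[p + offset for p in row] for row in img_b]
```

Applying a single global offset, rather than a per-pixel correction, preserves the contrast within each partial image while removing the step in brightness at the joint.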

In step S505, the control unit 210 starts the process of combining the partial images with corrected pixel values in the image combining unit 204. The image combining unit 204 positions the plurality of captured partial images and combines them to generate the large image. As a way of merging images, a known method may be used in which the contribution of each image changes gradually according to the distance to the joint in the region where the images overlap. Positioning may be performed on the basis of image features by the image combining unit 204, or by manual adjustment.
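The distance-weighted merging described above can be sketched for one overlapping scanline: the weight of the first image falls linearly from 1 to 0 across the overlap, so each image's contribution changes gradually toward the joint. This is an illustrative version of the known feathering approach, not the patent's exact method.

```python
def blend_overlap(row_a, row_b):
    """Blend one overlapping scanline of two partial images. The weight
    of image A decreases linearly with distance across the overlap,
    so the transition between images is gradual rather than abrupt."""
    n = len(row_a)
    out = []
    for i, (a, b) in enumerate(zip(row_a, row_b)):
        w = 1 - i / (n - 1)  # weight of image A at this position
        out.append(w * a + (1 - w) * b)
    return out
```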

The process of determining the gradation conversion function in steps S506 to S509 is performed in parallel with the image combining process started at step S505. The time from capture until the large image with converted gradation appears on the display unit 105 can therefore be reduced. In addition, the partial images obtained by the partial image obtaining unit 201 can be analyzed sequentially while the large image of the object 102 is being captured. Executing the processes in this way shortens the overall processing time.
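The parallel structure can be sketched with standard Python concurrency; the two worker functions below are placeholders standing in for the combining process (step S505) and the analysis process (steps S506 to S509), not the patented algorithms themselves:

```python
from concurrent.futures import ThreadPoolExecutor

def combine_images(parts):
    # Placeholder for the image combining process of step S505.
    return sum(parts)

def analyze_images(parts):
    # Placeholder for the feature analysis of steps S506 to S509.
    return max(parts)

def process(parts):
    # Run combining and analysis concurrently, then join both results,
    # mirroring how the control unit overlaps the two processes.
    with ThreadPoolExecutor(max_workers=2) as pool:
        combined = pool.submit(combine_images, parts)
        features = pool.submit(analyze_images, parts)
        return combined.result(), features.result()
```

The display step only waits for whichever of the two finishes last, rather than for their serial sum, which is the source of the time saving described above.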

At step S506, the imaging region obtaining unit 207 obtains the imaging region from the imaging conditions stored in the storage unit 211.

At step S507, the feature value obtaining unit 208 obtains feature values by analyzing the partial images whose pixel values have been shifted. The feature values are the maximum pixel value within the object, the minimum pixel value, the pixel value of the region of interest, and the saturated pixel value. These feature values are obtained for each partial image.

The saturated pixel value calculation unit 301 calculates the saturated pixel value of the partial images that have undergone the pixel value correction. The saturated pixel value can be calculated using a method based on the characteristics of the sensor.

The object extraction unit 302 extracts the object region that remains after removing, from each partial image, the part where X-rays reached the flat panel detector directly without passing through the object, and the part shielded by a collimator or the like. The object can be extracted using a method based on histogram analysis and two-dimensional analysis.

The maximum pixel value calculation unit 303 calculates the maximum pixel value within the object. As a method of calculating the maximum pixel value, a method that computes a representative value based on the image histogram may be used. However, the invention is not restricted to this; any method of calculating the maximum pixel value may be applied.

The minimum pixel value calculation unit 304 calculates the minimum pixel value within the object region, in parallel with the maximum value calculation performed by the maximum pixel value calculation unit 303. As a method of calculating the minimum pixel value, a method that computes a representative value based on the image histogram may be used.

The region-of-interest pixel value calculation unit 305 calculates the pixel value of a selected region within the object. As the calculation method, a method that computes a representative value based on the image histogram may be used, or a method that extracts the region of interest from the two-dimensional structure of the image and derives an aggregate value from it as the representative value. Since the information used to determine the region of interest is stored in the storage unit 211 for each imaging region, the region-of-interest pixel value calculation unit 305 performs the processing by referring to this information. This processing is performed for every partial image. As a result of the processing in step S507, four types of feature values are obtained for every partial image.
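One plausible form of the histogram-based representative value mentioned above can be sketched as follows; taking the value at a percentile of the cumulative histogram (the median by default) is an illustrative assumption, since the patent does not fix the exact statistic:

```python
import numpy as np

def histogram_representative(pixels, bins=256, percentile=50):
    """Compute a representative pixel value from the image histogram:
    the value at a given percentile of the cumulative histogram."""
    hist, edges = np.histogram(pixels, bins=bins)
    cdf = np.cumsum(hist) / hist.sum()
    idx = np.searchsorted(cdf, percentile / 100.0)
    # Return the center of the bin where the percentile is reached.
    return (edges[idx] + edges[idx + 1]) / 2.0
```

The same routine, applied to the pixels of the extracted object or of the region of interest, yields the maximum, minimum, or region-of-interest representative value depending on the percentile chosen.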

At step S508, the characteristic obtaining unit 209 determines, based on the imaging region information, the method of obtaining the parameter that will be used when converting the gradation from the feature values. As described above, the imaging region information and the information indicating the method of obtaining the parameter from the feature values are associated with each other and stored in the storage unit 211. The characteristic obtaining unit 209 refers to this information to determine the parameter obtaining method.

First, the characteristic obtaining unit 209 derives the saturated pixel value of the large image based on the saturated pixel value of each partial image. For example, when imaging the full length of the lower extremities, the lowest of the saturated pixel values obtained from the partial images is selected for use as the saturated pixel value of the combined image. Choosing the lowest value makes it possible to clip pixels whose values exceed it, the lowest value being taken as the saturated pixel value. Since the thickness of the lower extremities does not change much along their length, a normal pixel is unlikely to be mistaken for a saturated pixel. Therefore, by clipping pixel values that exceed the saturated pixel value, the effect of saturated pixels can be reduced. On the other hand, when imaging the entire spine, examples of usable methods include one in which the average of the saturated pixel values of the partial images is taken as the saturated pixel value of the combined image; one that calculates the median of those values; and one that uses the maximum value. If there is an error in the saturated pixel value calculated for some partial image, using the average or the median reduces the impact of that error and enables stable clipping. Methods that use the average or the median of the saturated pixel values are also effective when the imaging conditions, for example the imaging dose and the distance between the focal spot and the sensor, do not change between partial images. When imaging the entire spine, because the variation in object thickness is large, using the maximum value reduces errors in which normal pixels are clipped.
Thus, the optimal saturated pixel value can be computed by choosing the optimal calculation method based on the imaging conditions.
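The per-region choice of rule can be sketched as a small dispatch table. The region names and the region-to-rule mapping below are illustrative assumptions standing in for the association stored in the storage unit 211:

```python
import statistics

# Hypothetical mapping from imaging region to the rule used to derive
# the saturated pixel value of the combined image.
COMBINERS = {
    "lower_extremities": min,      # uniform thickness: clip aggressively
    "whole_spine": max,            # thickness varies: avoid clipping normal pixels
    "default": statistics.median,  # robust to per-image calculation errors
}

def combined_saturated_value(per_image_values, region):
    """Pick the saturated pixel value for the merged image according
    to the imaging region, as the characteristic obtaining unit does."""
    rule = COMBINERS.get(region, COMBINERS["default"])
    return rule(per_image_values)
```

Keeping the rules in a table mirrors the patent's design: the association between imaging region and calculation method is data, looked up at run time, rather than hard-coded logic.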

The characteristic obtaining unit 209 also selects the median of the region-of-interest pixel values of the partial images and sets it as the region-of-interest pixel value of the combined image. By choosing the median, even if the region-of-interest pixel value of some partial image contains an error, the impact of that error can be reduced. Alternatively, the calculation method may use the average of a set of selected region-of-interest pixel values. When a value that reflects many of the region-of-interest pixel values is used, the impact of an error in the analysis of any single partial image is likewise reduced. For example, when imaging the full length of the lower extremities with bone as the region of interest, there are no large differences among the region-of-interest pixel values of the partial images. In this case, by using a method in which the region-of-interest value of the large image is determined by calculating the median or the average, the gradation conversion for the bone tissue, which is the region of interest, can be performed appropriately while reducing the impact of analysis errors in the partial images.

When the region-of-interest pixel values differ greatly, a method is used in which one region-of-interest pixel value is selected from the corresponding partial images. For example, when imaging the entire spine with the lumbar spine as the region of interest, the region-of-interest pixel value of the image obtained by capturing mainly the thoracic spine and that of the image obtained by capturing mainly the lumbar spine can differ considerably from one partial image to another. In this case, the region-of-interest pixel value is set to the value calculated from the image obtained by capturing the lumbar spine. Conversely, if the region of interest is the thoracic spine, the gradation conversion function is obtained using the region-of-interest pixel value derived from the image obtained by capturing mainly the thoracic spine. Furthermore, when imaging the whole body with bone as the region of interest, the region-of-interest pixel values of the partial images will often include a value that differs greatly from the others. In this case, to exclude the abnormal value from the analysis, the region-of-interest value can be established by discarding the maximum and minimum of the region-of-interest pixel values of the partial images and then averaging the remaining values. Thus, a region-of-interest pixel value suitable for the purpose of diagnosis can be computed by choosing the calculation method based on the imaging region.
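The discard-extremes-then-average rule described above is a trimmed mean; a minimal sketch, assuming at least three per-image values are available:

```python
def trimmed_roi_value(values):
    """Discard the maximum and minimum region-of-interest values and
    average the rest, excluding an abnormal value from the analysis."""
    vals = sorted(values)
    trimmed = vals[1:-1]  # drop one minimum and one maximum
    return sum(trimmed) / len(trimmed)
```

An outlier produced by a faulty analysis of one partial image lands at an extreme of the sorted list and is dropped before averaging, so it no longer distorts the combined value.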

At step S509, the characteristic obtaining unit 209 obtains the gradation conversion function based on the obtained parameter. Because this process is the same as that described with reference to Figure 4, its description is omitted.

At step S510, the gradation conversion unit 205 performs the gradation conversion of the large image. When the image combining process started at step S505 is finished, the control unit 210 waits for the completion of the gradation conversion parameter obtaining process performed at step S509, and then instructs the gradation conversion unit 205 to begin the gradation conversion of the large image. On this command, the gradation conversion unit 205 converts the gradation of the large image in accordance with the gradation conversion function.
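A gradation conversion function built from the parameters obtained above can be sketched as follows. The sigmoid shape and the slope constant are illustrative assumptions, not the function specified in the patent; what matters is that the curve is centered on the region-of-interest value and clips at the saturated value:

```python
import math

def make_gradation_function(roi_value, saturated_value, out_max=255.0):
    """Build a sigmoid-shaped gradation conversion function centered
    on the region-of-interest pixel value."""
    width = max(saturated_value - roi_value, 1.0)
    def convert(pixel):
        # Values above the saturated value are clipped before mapping.
        p = min(pixel, saturated_value)
        return out_max / (1.0 + math.exp(-4.0 * (p - roi_value) / width))
    return convert
```

The region of interest is mapped to mid-gray, where the curve is steepest, so the contrast of the diagnostically relevant tissue is maximized while saturated pixels are held at the output ceiling.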

At step S511, the output unit 206 outputs the gradation-converted image to the display unit 105, and the display unit 105 displays it.

As described above, by calculating the feature values through analysis of each partial image captured for the large image, the analysis accuracy can be increased compared with analyzing the combined image, and the processing time can be reduced. In addition, by obtaining the gradation conversion function through combining the feature values with a method chosen on the basis of the imaging region, a gradation conversion of the large image appropriate for each imaging region can be realized.

The feature value obtaining unit 208 performs a process of determining, based on the imaging region information, the types of feature values to be obtained from each image. For example, when imaging the full length of the lower extremities with bone as the region of interest, there is no large difference among the region-of-interest pixel values of the partial images. Thus, information identifying which of the partial images shows the bone tissue is attached to the imaging information and stored in the storage unit 211. The feature value obtaining unit 208 refers to this information and retrieves the region-of-interest pixel value only from that partial image.

As another example, when imaging the entire spine, as described above, if the region of interest is the lumbar spine, the region-of-interest pixel value of the image obtained by capturing mainly the thoracic spine and that of the image obtained by capturing mainly the lumbar spine can differ greatly. Therefore, if the lumbar spine has been preset as the region of interest, the region-of-interest pixel value is derived only from the partial image obtained by capturing the lumbar spine.

As another example, when the large image contains many relevant areas, such as the lumbar spine, the thoracic spine, and so on, and the distribution of pixel values in each region of interest is wide, highlighting the region of interest in only one image may be insufficient, or the resulting image may look unnatural. Thus, unnatural gradation can be prevented by retrieving the region-of-interest pixel value only from the partial image determined on the basis of the imaging region, and performing the gradation conversion based on this value.

Thus, by performing the analysis only on the partial image determined on the basis of the imaging region information, the need to analyze all partial images is removed. Therefore, both the processing time and the processing load can be reduced.

In addition, in this illustrative embodiment, the feature value obtaining unit 208 retrieves the feature values only from the areas of the relevant partial images that do not overlap with other partial images. This is because, when a large image is captured, the areas containing much of the information necessary for diagnosis are often imaged with the imaging position adjusted so that they do not fall within the overlap, allowing the analysis to be performed excluding the overlapping areas. Moreover, in the above illustrative embodiments, since the combining process and the partial image analysis process are executed in parallel, the pixel values of the overlapping areas vary as the combining proceeds, so the analysis accuracy may deteriorate there. By setting only the non-overlapping areas as the target of the analysis, the accuracy of the analysis used to obtain the gradation conversion functions can be increased, and the processing time can be reduced.

In the third illustrative embodiment of the present invention, the regions of interest in one large image are divided into predefined groups, such as, for example, the lumbar spine and the thoracic spine when imaging the entire spine, and a gradation conversion function is obtained for each group. The grouping is associated with the imaging information and pre-stored in the storage unit 211. For example, when imaging the entire spine, because there is a great difference between the pixel value distributions of the lumbar spine and the thoracic spine, each of them is established as a separate group. Conversely, when imaging the full length of the lower extremities with bone tissue set as the region of interest, since the differences among the region-of-interest pixel values obtained from the various partial images are not as large, these regions of interest form one group. The grouping can also be set manually for each imaging region using the keyboard 110 and the mouse 111. The feature value obtaining unit 208 refers to this information to obtain the feature values of each partial image, in order to obtain a gradation conversion function corresponding to each group. As in the first illustrative embodiment, the feature value types include the saturated pixel value, the minimum pixel value within the object, the maximum pixel value within the object, and the region-of-interest pixel value. The characteristic obtaining unit 209 obtains a gradation conversion function based on the feature values.

A specific example of the processing will now be described. The minimum and maximum pixel values of the large image are processed in the same way as in the first illustrative embodiment, both for a large image of the whole spine and for a large image of the full length of the lower extremities. As for the region-of-interest pixel value, when imaging the entire spine, a gradation conversion function that highlights the region-of-interest pixel value obtained from the lumbar spine and a gradation conversion function that highlights the value derived from the thoracic spine are obtained. On the other hand, when imaging the full length of the lower extremities, the bone tissue pixel value is obtained from one partial image, and a gradation conversion function for highlighting that pixel value is obtained.

Thus, by obtaining gradation conversion functions for each group, a gradation conversion function can be obtained for each set of regions of interest having a similar distribution of pixel values. Compared with obtaining gradation conversion functions for every partial image, when there are regions of interest whose pixel value distributions differ greatly between partial images, separate gradation conversion functions appropriate for each region of interest are obtained, so an image that appropriately distinguishes the different regions of interest can be produced. In addition, when there are regions of interest with similar pixel value distributions spanning many partial images, since these regions of interest can be handled together with one gradation conversion function, the load of the function obtaining process executed by the characteristic obtaining unit 209 and of the processing in the gradation conversion unit 205 can be reduced. Furthermore, instead of grouping the regions of interest, the partial images themselves may be grouped.

In the fourth illustrative embodiment of the present invention, many gradation conversion functions are obtained on the basis of the imaging region.

Although a large image obtained by large-image capture can be treated as a wide-area image with many regions of interest, because the image area is large, its dynamic range is also large. Consequently, it is necessary to prepare many gradation conversion functions and to use these functions according to the purpose of diagnosis.

Thus, since the process of obtaining the gradation conversion functions runs in parallel with the large image generation process, the gradation conversion parameters can be presented to the user faster than when the parameter obtaining process starts only after the large image has been formed.

In the fifth illustrative embodiment of the present invention, unlike the above illustrative embodiments, the gradation conversion parameter is obtained on the basis of the spread of the pixel values of the region of interest in a partial image, without using imaging region information.

Now, with reference to Figure 6, the configuration of the image processing apparatus 600 will be described. The difference from the above illustrative embodiments is that the image processing apparatus 600 has a region-of-interest specifying unit 608 and a determination unit 609.

Next, with reference to Figure 7, the sequence of processing operations performed by the radiation imaging system 100 will be described.

At step S701, the radiation source 101 emits radiation based on an external command. The sensor 103 detects the radiation that has passed through the object 102 and forms a radiation image of the object.

At step S702, the partial image obtaining unit 601 of the image processing apparatus 600 obtains the radiation image from the sensor 103 through the interface (I/F) 112. The obtained image is sent through the control unit 611 to the large image generation unit 602 and is also stored in the memory 612.

At step S703, the image correction unit 603 performs correction in which each pixel value of an image is shifted so that the pixel value levels of the common areas of the images match. Because this processing eliminates the level differences between the images, pixel values in areas that should have the same value across the whole large image can be made approximately equal by the subsequent processing that combines the common areas. Moreover, since the merging process targets only the overlapping areas, the feature values obtained from each partial image after this pixel value correction but before the merging have the advantage of being values that normally do not change as a result of the combining that produces the large image.

At step S704, the control unit 611 starts, in parallel, the combining process run by the image combining unit 604 and the region-of-interest specification process run by the region-of-interest specifying unit 608. These processes can start at the same time, or one somewhat before the other.

At step S705, the image combining unit 604 combines the regions common to the corrected images. In parallel with this process, at step S706, the region-of-interest specifying unit 608 specifies the region of interest.

At step S707, the determination unit 609 calculates a value indicating the width of the pixel value distribution in the region of interest. In this illustrative embodiment, the determination unit 609 retrieves the minimum and maximum pixel values in the region of interest, calculates the difference between them, and takes this difference as the width of the pixel values.

At step S708, the determination unit 609 determines whether the value indicating the width of the pixel values lies within a predefined range. This predefined range is set by the control unit 611 as the range from 0 up to a threshold pre-stored in the memory 612. If the region-of-interest specifying unit 608 determines that the value falls within the predefined range ("NO" at step S708), the process moves to step S709. At step S709, the region-of-interest specifying unit 608 sets the entire image as the region to be highlighted, and a gradation conversion function for this region is established by the function obtaining unit 610.

On the other hand, if at step S708 the region-of-interest specifying unit 608 determines that the width of the pixel values exceeds the predefined range ("YES" at step S708), the process moves to step S710. At step S710, the region-of-interest specifying unit 608 divides the image into a plurality of areas. This is done by setting the number of areas for the division by referring to a table establishing the mapping between the width of the pixel values and the number of areas; this table is stored in the memory 612. Alternatively, this process can be accomplished by defining the width of pixel values that can be rendered by each gradation conversion and dividing the image into areas that fit within that width. Then, at step S710, the function obtaining unit 610 obtains a gradation conversion function corresponding to each area.
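Both variants of the division rule can be sketched briefly. The width thresholds in the table below are illustrative placeholders for the table stored in the memory 612, and equal sub-ranges stand in for whatever partition the real system uses:

```python
# Hypothetical width-to-region-count table; thresholds are illustrative.
WIDTH_TABLE = [(500, 1), (1500, 2), (3000, 3)]

def regions_for_width(width, table=WIDTH_TABLE, max_regions=4):
    """Return how many areas to divide the image into, given the
    width (max - min) of the region-of-interest pixel values."""
    for limit, count in table:
        if width <= limit:
            return count
    return max_regions

def split_into_regions(lo, hi, n):
    """Divide the pixel value range [lo, hi] into n equal sub-ranges,
    one per gradation conversion function."""
    step = (hi - lo) / n
    return [(lo + i * step, lo + (i + 1) * step) for i in range(n)]
```

Each sub-range then stays within the width that a single gradation conversion can render, which is exactly the criterion stated for the alternative method.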

If there are three or more areas established by the region-of-interest specifying unit 608, areas with a similar distribution of pixel values can be grouped together, as in the fourth illustrative embodiment, and the gradation conversion can be performed for each group. In this case the region-of-interest specifying unit 608 functions as a grouping unit that groups together regions with similar pixel value distributions. This grouping can be done using known clustering methods. The gradation of the region of interest can thereby be converted appropriately while preventing the complexity that arises when converted images are formed individually for each gradation conversion. On the other hand, a parameter for individually converting the gradation of each area can be obtained without performing the grouping. In this case, each area can be brought to the attention of the diagnostician through the individual formation of a converted image for each area.

At step S711, the gradation conversion unit 605 converts the gradation of the image on the basis of the gradation conversion functions. If many gradation conversion functions were obtained at step S710, the number of gradation-converted images obtained equals the number of functions. The gradation conversion unit 605 can apply these functions to the large image function by function. Alternatively, a system that is convenient from the standpoint of avoiding unnecessary processing can be provided by converting the gradation serially based on user input.

At step S712, the output unit 606 displays the gradation-adjusted image on the display unit 105. The diagnostician can then perform diagnosis by examining the displayed image.

Thus, obtaining the parameter to be used for the gradation conversion from the many images adjusted by the image correction unit 603 yields a gradation conversion parameter roughly the same as the one that would be obtained from the large image. In addition, because the image serving as the target of the analysis is divided, the analysis load is not as great as when analyzing a single image. Furthermore, executing the image combining process in parallel with obtaining the gradation conversion parameter reduces the processing time. Therefore, a large image whose region of interest has had its gradation appropriately converted can be produced sooner after capture, and the processing time is reduced.

In addition, the gradation can also be converted while excluding the areas outside the object and the less important regions within the object, by specifying a region of interest and dividing it into at least one region. The gradation can also be converted so as to highlight the areas to which the diagnostician should pay attention. In particular, when the width of the region-of-interest pixel values is large, because the image is formed after the region of interest is divided into many areas and subjected to a highlighting process, an image with a highlighted region of interest can be formed without harming the gradation balance of the entire image. Moreover, a gradation that reflects the differences in the object more closely than when imaging region information is used can be realized.

By performing the processing described in the first, second, and third illustrative embodiments based on the region-of-interest pixel value information, without using imaging region information, an appropriate gradation that reflects the individual differences of each object can be achieved. Conversely, this processing can also be performed using tabular information related to the imaging region; in this case, a gradation that is stable for each region can be realized. In addition, the imaging information and the region-of-interest pixel value information can be used together.

In addition, by executing the partial image combining process and the partial image analysis process for obtaining the gradation conversion in parallel, the processing time can be reduced. Because the combining process includes image analysis, the load of adjusting the pixel value levels among the partial images and of the pixel positioning process is high, and this processing takes time. From the standpoint of preventing diagnostic errors, performing the level adjustment and the positioning manually to improve accuracy requires even more time than fully automatic processing. On the other hand, since the analysis process for obtaining the gradation conversion characteristic also requires image analysis, it too takes time. Even if the characteristic could be set manually without analyzing the images, it would still take about as long as the automatic processing. In this illustrative embodiment, by executing the time-consuming combining process and the gradation characteristic obtaining process in parallel, the delay until the large image is displayed after capture can be largely reduced. In addition, each time a partial image of the object is captured, the analysis of that partial image can be executed before the next partial image is captured. When capturing the partial images, because moving the X-ray tube and the flat panel detector takes time, performing the analysis in parallel with the fragmented capture can greatly reduce the display delay.

The gradation of the merged image is set in a manner suitable for diagnosis by applying the above processing for obtaining the gradation conversion characteristic of the merged image to radiation images and X-ray images. In particular, since X-ray images are displayed in monochrome, using an image whose gradation has been appropriately adjusted enables effective diagnosis.

In the above illustrative embodiments, the feature values are obtained by analyzing partial images whose pixel values have undergone the offset correction by the image correction unit 203. However, the process of obtaining the gradation conversion parameter and the offset adjustment process may be performed in parallel. In that case, in addition to the feature values obtained by the feature value obtaining unit 208, the pixel value offsets applied by the image correction unit 203 to the partial images can also be used by the characteristic obtaining unit 209 in obtaining the gradation conversion functions.

The gradation conversion does not have to be performed on the basis of functions; a lookup table for the gradation conversion can also be used. In this case the characteristic obtaining unit 209 obtains the coefficient for converting each partial image into the large image, and there is no parameter determining the functional form. In addition, the characteristic of the gradation conversion processing can also be determined by fixing the shape of the gradation conversion curve and shifting the curve on the basis of the feature values of the large image. For example, the shift can be accomplished by using the region-of-interest pixel value as the reference value.
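The lookup table variant with a shifted fixed curve can be sketched as follows. A linear base curve and an 8-bit range are illustrative assumptions; the point is that the curve shape is fixed and only its position, anchored to the region-of-interest value, changes:

```python
def build_lut(roi_value, base_curve, levels=256):
    """Build a gradation conversion lookup table by shifting a fixed
    base curve so that it is centered on the region-of-interest value."""
    lut = []
    for v in range(levels):
        # Shift the input axis so roi_value maps to mid-gray.
        shifted = v - roi_value + levels // 2
        lut.append(min(max(base_curve(shifted), 0), levels - 1))
    return lut

def apply_lut(pixels, lut):
    """Convert pixel gradation by table lookup instead of evaluating
    a function per pixel."""
    return [lut[p] for p in pixels]
```

Once the table is built, conversion of the large image is a per-pixel indexing operation, which is why the lookup table route avoids any per-pixel function evaluation.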

The present invention can be implemented as an image processing system in which the processing performed by the image processing apparatus is distributed among a multitude of devices. The present invention may also be implemented by distributing processing organized as a single functional unit among a multitude of functional blocks. In addition, the present invention may be realized as an image display device or a sensor incorporating the functions of the image processing apparatus and display devices described in the above illustrative embodiments. Alternatively, the present invention can be realized by implementing processing organized as a single functional unit as one or many hardware circuits.

The present invention also covers the case in which, for example, an operating system (OS) running on a computer carries out part or all of the actual processing, and the functions of the above illustrative embodiments are realized by that processing. Furthermore, the computer may incorporate multiple CPUs; in this case, the present invention may be realized by distributing the processing among the multitude of CPUs. In addition, in this case, since the program code read from the storage medium implements the functions of the illustrative embodiments, the storage medium storing that program or program code constitutes the present invention.

The above description of the illustrative embodiments is an example of the image processing apparatus. The present invention is not limited to this description.

Aspects of the present invention can also be realized by a computer of a system or apparatus (or by devices such as a CPU or a microprocessor unit (MPU)) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method whose steps are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer, for example, via a network or from a recording medium of various types serving as the memory device (for example, a computer-readable medium).

Although the present invention has been described with reference to illustrative embodiments, it is to be understood that the invention is not limited to the disclosed illustrative embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications and equivalent structures and functions.

1. An imaging device comprising: an image acquisition means configured to acquire a plurality of partial images obtained by imaging each of a plurality of imaging ranges into which the imaging area of an object is divided; characterized in that the imaging device further comprises: a forming means configured to form an image by combining the plurality of partial images obtained by imaging; a characteristic-value obtaining means configured to obtain a characteristic value of at least one of the partial images; a characteristic obtaining means configured to obtain a characteristic of gradation conversion processing on the basis of the characteristic value and the imaging; a conversion means configured to convert the gradation of the formed image on the basis of the characteristic of the gradation conversion processing; and a control means configured to control the forming means and at least one of the characteristic-value obtaining means and the characteristic obtaining means, wherein the control means performs control so that the processing of at least one of the characteristic-value obtaining means and the characteristic obtaining means runs in parallel with the processing of the forming means.

2. The imaging device according to claim 1, further comprising a determining means configured to determine, in accordance with the imaging range, the characteristic value obtained from each partial image, wherein the characteristic obtaining means obtains the characteristic of the gradation conversion processing from the determined characteristic value.

3. The imaging device according to claim 1, further comprising a determining means configured to determine, as the characteristic value, a pixel value in a region of interest of the object from the partial image.

4. The imaging device according to claim 1, wherein the determining means is configured to determine the characteristic value using the average or the median of the pixel values in a selected region of interest of the object obtained from the plurality of partial images.
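The determination described in claims 3 and 4 amounts to pooling the pixel values of a region of interest (ROI) across the partial images and reducing them to a single number. A minimal sketch of that reduction in pure Python follows; the ROI box convention and the function name are illustrative assumptions, not terms from the patent:

```python
from statistics import mean, median

def characteristic_value(partial_images, roi, use_median=True):
    """Pool the pixel values inside the region of interest (ROI) from
    every partial image and reduce them to a single characteristic value."""
    x0, y0, x1, y1 = roi  # ROI as a half-open box: [x0, x1) x [y0, y1)
    values = [img[y][x]
              for img in partial_images
              for y in range(y0, y1)
              for x in range(x0, x1)]
    return median(values) if use_median else mean(values)

# Two 3x3 "partial images" sharing the same ROI coordinates
a = [[10, 10, 10], [10, 20, 10], [10, 10, 10]]
b = [[30, 30, 30], [30, 40, 30], [30, 30, 30]]
print(characteristic_value([a, b], (1, 1, 2, 2)))  # median of [20, 40] -> 30.0
```

The median is the more robust choice when a partial image contains outlier pixels (e.g. direct-exposure regions outside the object); the average is cheaper and adequate for uniform ROIs.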

5. The imaging device according to claim 1, wherein the characteristic-value obtaining means is configured to obtain the characteristic value from a region of the partial image that does not overlap any other partial image.

7. The imaging device according to any one of claims 1 to 6, wherein the characteristic obtaining means is configured to obtain a characteristic of gradation conversion processing corresponding to each partial image, and the control means is configured to display the characteristics of the gradation conversion processing as options together with the image obtained by combining the partial images, and, in response to an input selecting a characteristic of the gradation conversion processing, to perform gradation conversion with the selected characteristic on the image obtained by combining the partial images and to display the processed image.

8. The imaging device according to any one of claims 1 to 6, wherein the characteristic-value obtaining means is configured to obtain, as the characteristic value, at least one of a maximum pixel value, a minimum pixel value, and a pixel value of a region of interest of the object shown in the partial images.

9. The imaging device according to any one of claims 1 to 6, wherein the characteristic of the gradation conversion processing is either a function used in the gradation conversion or a lookup table used in the gradation conversion.
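A lookup table is simply a precomputed array indexed by the input pixel level. As an illustration only (the patent does not specify the conversion curve), the sketch below builds a gamma-correction table and applies it to a combined image; the gamma value and function names are assumptions:

```python
def make_gamma_lut(gamma=2.2, levels=256):
    """Build a lookup table mapping each input level to a gamma-corrected
    output level -- one possible 'characteristic' of gradation conversion."""
    return [round(((i / (levels - 1)) ** (1.0 / gamma)) * (levels - 1))
            for i in range(levels)]

def convert_gradation(image, lut):
    """Apply the lookup table to every pixel of the combined image."""
    return [[lut[p] for p in row] for row in image]

lut = make_gamma_lut()
image = [[0, 128, 255]]
print(convert_gradation(image, lut))  # mid-tones are brightened
```

Representing the conversion as a table rather than evaluating the function per pixel trades a small, fixed amount of memory (256 entries here) for a single array lookup per pixel, which is why hardware pipelines commonly prefer the lookup-table form.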

10. The imaging device according to any one of claims 1 to 6, wherein the control means is configured to control the forming means, the characteristic-value obtaining means and the characteristic obtaining means, and the control means controls the characteristic-value obtaining means and the characteristic obtaining means so that their processing runs in parallel with the formation of the combined image by the forming means.

11. The imaging device according to any one of claims 1 to 6, further comprising a processing means configured to divide, on the basis of the width of the distribution of pixel values for a region of interest in the formed combined image, the region of interest into a plurality of groups to each of which the same gradation conversion is applied, wherein the characteristic obtaining means is configured to obtain a characteristic of the gradation conversion processing corresponding to each of the groups.

12. A radiation imaging system comprising: the imaging device according to claim 1; a radiation source configured to emit radiation; a sensor configured to detect the radiation emitted from the radiation source and passing through the object, and to convert the detected radiation into an electrical signal representing an image of the object; and a display means configured to display the converted image.

13. An image processing method comprising the steps of: acquiring a plurality of partial images obtained by imaging each of a plurality of imaging ranges into which the imaging area of an object is divided; shifting the pixel values of at least one partial image among the plurality of partial images so as to approximately align the trend of pixel values in a region common to the plurality of partial images; and forming an image by combining the plurality of partial images, the pixel values of at least one of the partial images having been shifted and aligned; characterized in that the method further comprises the steps of: obtaining a characteristic value of at least one of the partial images; obtaining a characteristic of gradation conversion processing on the basis of the characteristic value and the imaging; and converting, on the basis of the characteristic of the gradation conversion processing, the gradation of the image of the object obtained by combining the partial images, wherein the characteristic of the gradation conversion processing is obtained during the formation of the combined image.
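The method of claim 13 can be sketched end to end: shift one partial image so its overlap band matches its neighbour, then stitch, while the characteristic value is computed concurrently with the stitching. This is a minimal illustration under assumed details (two vertically overlapping tiles, a mean-level offset for alignment, the median as the characteristic value, and a thread pool for the parallelism); none of these specifics come from the patent:

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import median

def align_offset(top, bottom, overlap):
    """Shift the lower partial image so the mean level of the shared
    (overlapping) rows approximately matches the upper image's."""
    top_band = [p for row in top[-overlap:] for p in row]
    bot_band = [p for row in bottom[:overlap] for p in row]
    shift = sum(top_band) / len(top_band) - sum(bot_band) / len(bot_band)
    return [[p + shift for p in row] for row in bottom]

def combine(top, bottom, overlap):
    """Stitch two vertically overlapping tiles, keeping the top image's
    rows in the overlapping band."""
    return top + bottom[overlap:]

def process(top, bottom, overlap=1):
    bottom = align_offset(top, bottom, overlap)
    with ThreadPoolExecutor() as pool:
        # The characteristic value is obtained during formation of the
        # combined image, i.e. the two tasks run in parallel.
        feat = pool.submit(lambda: median(p for row in top for p in row))
        merged = pool.submit(combine, top, bottom, overlap)
        return merged.result(), feat.result()

top = [[10, 10], [20, 20]]
bottom = [[30, 30], [40, 40]]
merged, feat = process(top, bottom)
print(merged, feat)
```

The point of the claim is the scheduling, not the arithmetic: because the gradation characteristic only needs a partial image (not the finished composite), it can be derived while stitching is still in progress, shortening the interval between exposure and display.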

14. A machine-readable storage medium storing a program which, when executed on a device, causes the device to operate as the imaging device according to claim 1.

15. An imaging device comprising: an image acquisition means configured to acquire a plurality of partial images obtained by imaging each of a plurality of imaging ranges into which the imaging area of an object is divided; a changing means configured to change the pixel values of at least one of the partial images so that the pixel values of at least two overlapping partial images approximately coincide; an alignment means configured to align the partial images, the pixel values of at least one of the partial images having been changed by the changing means; a characteristic-value obtaining means configured to obtain a characteristic value of at least one of the partial images; a characteristic obtaining means configured to obtain a characteristic of gradation conversion processing on the basis of the characteristic value and the imaging; and a conversion means configured to convert the gradation of the formed image on the basis of the characteristic of the gradation conversion processing; characterized in that said at least one of the partial images, including the changed pixel values, is received by both the alignment means and the characteristic-value obtaining means, so that the processing of the alignment means and the processing of the characteristic-value obtaining means are executed in parallel.

16. The imaging device according to claim 15, further comprising a control means configured to control the alignment means and the characteristic-value obtaining means so that the processing of the alignment means and of the characteristic-value obtaining means runs in parallel.
