Image file container

FIELD: information technology.

SUBSTANCE: image container file has at least first and second multimedia streams (MS). The first MS includes first image data representing an image. The second MS includes arbitrary data which can correspond to: a different representation of the same image; annotations to the first image data; second image data that together with the first image data form a new image with greater dynamic range, resolution, field of view or other attributes that can be derived from processing two or more independent images; or an executable file related to the first MS. The image container file can also include extensible metadata to hold information describing one or more multimedia streams of the image container file, as well as DRM information for obtaining a license to access encrypted data or verifying the authenticity of encrypted or unencrypted data.

EFFECT: providing, when an image container file is created, functional linkage of multiple multimedia streams, one of which carries image data received by a receiver and another of which carries arbitrary data.

26 cl, 6 dwg

 

Technical field

The present invention relates generally to data processing and, in particular, to the creation and storage of image files.

Prior art

In general, traditional image file formats specify that a single image is stored in an image file. For example, a digital camera that uses a traditional image file format creates a separate image file for each individual picture. Thus, if a photographer uses the camera to take multiple related images that together form one overall picture, each image is stored in a separate image file, so the photographer or another user has to manually organize and keep track of the individual image files until they are combined in a separate process. This manual work can be time-consuming and error-prone, especially if the photographer or user has to deal with a large number of images.

Another drawback of traditional image file formats is that they generally do not support digital rights management (DRM). Thus, the owner of the images usually does not have access to the same level of control over this intellectual property that is widely available for video or audio content.

An additional drawback of traditional image file formats is that they do not provide convenient built-in support for annotating images. For example, if a photographer or user wishes to provide an audio annotation for a specific image file, or for a group of interrelated image files, using traditional image file formats, the photographer or user usually has to save the audio annotation in a separate file and manually maintain the relationship between the two files.

Summary of the invention

According to aspects of the various described embodiments, an image container file has at least a first multimedia stream and a second multimedia stream. The image container file may contain more than two multimedia streams. The first multimedia stream contains first image data representing an image. The second multimedia stream contains arbitrary data, usually related to the image of the first multimedia stream. For example, the arbitrary data may correspond to a different representation of the same image. The arbitrary data can also correspond to image, audio, video, graphics, text, date and time, location, web-link or animation annotations for the first image data. The arbitrary data can also correspond to second image data that are in some way associated with the image data in the first multimedia stream. For example, the second multimedia stream may be one of several multimedia streams corresponding to a series of images captured over a certain time, or may carry a second image that can be combined with the first image data to generate a new image of higher overall quality, for example, with increased dynamic range, pixel resolution or field of view. The arbitrary data can also correspond to an application (for example, an executable file) that can be used for viewing and/or processing the first image data.
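To make the stream-oriented structure concrete, the following is a minimal in-memory sketch in Python; the class names (`MultimediaStream`, `ImageContainerFile`) and their fields are invented for illustration and are not defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MultimediaStream:
    """One stream of the container: an image, an annotation, or other arbitrary data."""
    stream_id: int
    content_type: str        # e.g. "image/jpeg", "audio/wav", "application/x-executable"
    payload: bytes
    description: str = ""

@dataclass
class ImageContainerFile:
    """Minimal in-memory model of a container holding at least two streams."""
    streams: List[MultimediaStream] = field(default_factory=list)

    def add_stream(self, content_type: str, payload: bytes, description: str = "") -> int:
        sid = len(self.streams) + 1
        self.streams.append(MultimediaStream(sid, content_type, payload, description))
        return sid

# First stream: the primary image; second stream: an audio annotation related to it.
container = ImageContainerFile()
container.add_stream("image/jpeg", b"<first image data>", "primary picture")
container.add_stream("audio/wav", b"<voice note>", "spoken annotation for stream 1")
print([s.content_type for s in container.streams])
```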

According to another aspect, the image container file may also include an extensible metadata object that contains information describing one or more representations of images stored in the image container file. The metadata may contain information related to the image (or to audio or video), for example, the title, the name of the creator, the subject, a description, the publisher name, the distributor name, the date, the format, the language, and other types of information that may be needed depending on the application. The metadata can also refer to other streams in the image container file by an indication of the container or an indication of the stream.

According to another aspect, the image container file can store one or more image representations and/or other multimedia streams in an encrypted format. In this case, the image container file contains digital rights information. For example, the digital rights information may relate to licenses for access to the encrypted data contained in the image container file.

Brief description of drawings

Non-limiting and non-exhaustive embodiments of the present invention are described with reference to the following figures, in which like reference numerals indicate like parts in the various views unless otherwise specified.

Fig. 1 is a block diagram of a system that uses an image container file, according to one embodiment.

Fig. 2 is a logical flow diagram of operations for forming an image container file, according to one embodiment.

Fig. 3 is a block diagram of components of an image container file, according to one embodiment.

Fig. 4 is a block diagram of components of an image container file according to a second embodiment.

Fig. 5 is a block diagram of components of an image container file according to a third embodiment.

Fig. 6 is a block diagram of an illustrative computing environment suitable for forming or using an image container file, according to one embodiment.

Detailed description

Fig. 1 shows a simplified system 100 that uses an image container file according to one embodiment. In this embodiment, the system 100 includes a device 102 and a platform 104. In typical use, the device 102 is a device that generates images, for example, a digital camera, a scanner, a mobile phone (with a camera), a personal computer equipped with a camera, a personal digital assistant (PDA) equipped with a camera, a set-top box with a freeze-frame function, a video player with a freeze-frame function, a music player, a digital versatile disk (DVD) player with a freeze-frame function, or another suitable image sensing/capturing device or apparatus. The platform 104 is typically implemented by a personal computer, a storage device (e.g., a hard disk, CD-ROM, digital versatile disk (DVD), tape, network storage device or other storage medium), a printer, or another suitable device or apparatus for storing or displaying images. Other embodiments of the system 100 can be implemented using multiple devices like the device 102, multiple platforms like the platform 104, or combinations of such devices and platforms. In this embodiment, image container files can be transferred between the device(s) and/or platform(s) in all directions.

In this embodiment, the device 102 comprises a generator 110 of image container files, a storage device 114 and a communication interface 116. The platform 104 in this embodiment comprises a reader 120 of image container files, a storage device 124, and an interface 126. In other embodiments (for example, when the platform 104 is used mainly as a storage device), the reader 120 of image container files can be omitted.

The device 102 may send an image container file 106 (generated by the generator 110 of image container files and stored in the storage device 114) to the platform 104 via the interface 116 and a communication line 108. For example, the line 108 may be a direct wired connection, a wireless connection, a network (e.g., a LAN, the Internet, a telephone system, etc.), a transfer of removable media from the device 102 to the platform 104 (a removable magnetic disk, flash memory media, CD-ROM, DVD-RW or DVD+RW), etc. The platform 104 receives the image container file 106 via the interface 126 and stores it in the storage device 124. The data contained in the image container file 106 can then be accessed through the reader 120 of image container files at the request of the user.

Unlike systems that use traditional image file formats, the system 100 uses the image container file 106 to provide a flexible, extensible, backward-compatible mechanism for saving images, annotations to images, digital rights management (DRM) information and other arbitrary information in a self-describing manner. Embodiments of the image container file 106 are described in more detail below with reference to Fig. 3-6.

Fig. 2 shows the formation of the image container file 106 (Fig. 1) according to one embodiment. Referring to Fig. 1 and 2, in this embodiment the device 102 forms an image container file as follows. At step 202, the device 102 collects image data. According to one embodiment, the image data are collected from an image sensor (not shown). For example, the image sensor can be a charge-coupled device (CCD) of a digital camera. The image data are then stored in the storage device 114. In some embodiments, the image data can be processed by the device and then stored in the storage device 114. The image data may represent raw sensor data, uncompressed image data (i.e., raw data converted to an uncompressed image representation), or compressed image data (for example, according to the Joint Photographic Experts Group (JPEG) format or another suitable compression format). The format of the image data usually depends on the processing performed by the device, and, as described below, the image container file 106 is able to accommodate any format.

At step 204, the device 102 stores the image data of step 202 as a multimedia stream in the image container file 106. As used here, the term multimedia stream in this context usually refers (but not exclusively) to a single image or the information used to represent a single image, a fragment of video, a fragment of an audio file, or an executable object or application. According to one embodiment, the generator 110 of image container files receives the image data from the storage device 114 and stores information related to the image data and to the overall organization of the image container file in a header part of the image container file 106. The generator of image container files also stores at least some of the image data (e.g., pixel data) in a content part (data part) of the image container file 106. In this embodiment, the generator of image container files can also save location information for the image data stored in the data part in an index part of the image container file. For example, this location information may contain an offset from a pre-selected point of the image container file. In some embodiments, the index object can be omitted (for example, when the image container file contains only image data representing a single image). The header, content and index parts form the components of this embodiment of the image container file 106 (e.g., as described below with reference to Fig. 3). In other embodiments, the image container file 106 can contain other components. For example, some additional components are described below with reference to Fig. 4 and 5.
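A minimal sketch of the header/data/index organization described in this step is given below, assuming an invented byte layout (the `IMGC` magic value, the field widths and the helper name are illustrative only, not taken from the patent); as the text suggests, the index records each stream's data offset measured from the start of the file.

```python
import struct

MAGIC = b"IMGC"          # invented 4-byte signature for this sketch

def write_container(path, streams):
    """streams: list of (content_type: str, payload: bytes).

    Illustrative layout: magic | stream count | per-stream headers,
    then all payloads (data part), then the index of payload offsets.
    """
    headers, payloads = [], []
    for ctype, payload in streams:
        ct = ctype.encode("utf-8")
        headers.append(struct.pack("<H", len(ct)) + ct + struct.pack("<I", len(payload)))
        payloads.append(payload)

    with open(path, "wb") as f:
        f.write(MAGIC + struct.pack("<I", len(streams)))
        for h in headers:                 # header part
            f.write(h)
        offsets = []
        for p in payloads:                # content (data) part
            offsets.append(f.tell())      # offset from the start of the file
            f.write(p)
        index_pos = f.tell()
        for off in offsets:               # index part
            f.write(struct.pack("<Q", off))
        f.write(struct.pack("<Q", index_pos))   # trailer: where the index starts

write_container("photo.imgc", [("image/jpeg", b"\xff\xd8...jpeg bytes..."),
                               ("audio/wav", b"RIFF...voice note...")])
```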

At decision step 206, the device 102 determines whether additional information should be associated with the image data collected at step 202. For example, the device 102 may include a microphone and appropriate processing components for recording audio annotations. The user can add an audio annotation to the image data collected at step 202 by activating the audio components and speaking into the microphone. In this embodiment, the generator 110 of container files is able to detect when audio is enabled to annotate the image data collected at step 202.

The device 102 may include other functions that generate information to be associated with the image data collected at step 202. Such functions can include (but are not limited to): (a) automatic generation of proof or thumbnail representations along with the full representation of a single image; (b) generation of a sequence of individual images that will later be processed to create a panoramic image; (c) generation of a single image with multiple representations for different exposure settings, white balance settings, compression settings, pixel resolutions, color spaces (for example, one color space for printing and another color space for display on a monitor), fields of view or color contexts (described below); (d) generation of other annotations (e.g., video, graphics or text annotations); and (e) attachment of executable applications (e.g., executable code needed to interpret the raw image data or to decode an internal compression format). In one embodiment, video and audio data streams are added to the image container file 106 in much the same way as provided by the Advanced Systems Format (ASF). Each additional piece of data is added as an additional multimedia stream, with the header information stored in the header part and the data stored in the content part of the image container file 106.

As used herein, the term "color space" means a mathematical space in which color values can be specified by a (usually small) number of color coordinates. For example, in the RGB color space a color has three coordinates (denoted R, G and B), and in the CMYK color space a color has four different coordinates (C, M, Y and K). Color values in these two spaces behave differently. RGB is commonly used to describe light, which has the property of additivity, and CMYK is usually used to describe ink, which has the property of subtractivity. Color values can be converted from one color space to another, but in some cases this leads to degradation of the color values.
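As a concrete example of moving color values between the two spaces mentioned here, the sketch below uses the common naive RGB-to-CMYK formula; real conversions go through device profiles and a color context, so this is only an approximation.

```python
def rgb_to_cmyk(r, g, b):
    """Naive conversion of 8-bit RGB values to CMYK fractions in the range 0..1.

    Real pipelines use device profiles (a color context); this only shows the
    basic additive-to-subtractive relationship between the two color spaces.
    """
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0          # pure black
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(r, g, b)
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))   # saturated red -> full magenta and yellow, no cyan
```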

As used herein, the term "color context" defines what the color coordinates actually mean, in accordance with some objective standard. For example, several devices may use the RGB color space, yet the same RGB color values displayed on each of these devices may not look the same (even under the same viewing conditions). The reason is that different devices use different color settings.

If at decision step 206 the generator 110 of image container files determines that no additional information needs to be associated with the image data collected at step 202, the image container file is complete and the sequence of operations for forming an image container file ends. Otherwise, the sequence proceeds to step 208.

Referring to Fig. 2, at step 208 the device 102 generates another multimedia stream. If the additional information is image data for a different representation of the image, the generator 110 of image container files performs step 208 essentially as described above for steps 202 and 204. This means that in this embodiment the generator 110 of image container files generates the header information and stores it in the header part, stores at least some of the image data (e.g., pixel data) in the content part (data part), and stores location information for this new image information in the index part. If the additional information is not image data, the generator 110 of image container files performs essentially the same operations, except that the header information stored in the header part of the image container file 106 contains information specific to the additional information. For example, the header information may include an indication of whether the additional information is audio data, video data, an executable application, etc. The sequence then returns to decision step 206 to determine whether another multimedia stream should be generated.

Fig. 3 shows the components of the image container file 106 (Fig. 1) according to a first embodiment. In this embodiment, the image container file 106 contains a header portion 302, a content portion 304 (data part) and an index portion 306. The header portion 302 contains header information for each multimedia stream contained in the image container file 106. In this illustrative image container file, the header portion 302 contains header information 312-1 to 312-N for the first through N-th multimedia streams, respectively. The header information contains information such as the basic properties of the multimedia stream, the language present in the multimedia stream, the properties of the index information stored in the index part 306 for the multimedia stream, and padding information that specifies padding in the header information. The header information may also include information that identifies the compression algorithm for the multimedia stream, macros embedded in the header information, and marker information identifying markers embedded in the data (stored in the data part 304) for the multimedia stream. In other embodiments, the header information may contain information about other properties of the multimedia stream. In one embodiment, the header information is formed similarly to (but more simply than) the corresponding information in the above-mentioned ASF format. The above description of the header portion 302 illustrates one particular implementation; in other embodiments, however, the header portion 302 may be formed using any suitable format (e.g., other header information formats, definitions and so on).

In this example, the data part 304 includes data 314-1 to 314-N for the first through N-th data streams, respectively. In one illustrative embodiment, packets are used to store the data in the data part 304. Although a packet-based storage embodiment is described in more detail below, other embodiments may use other approaches to storing data in the data part 304. In this illustrative embodiment, the packets may have variable size, up to four gigabytes, because images can be extremely large. In other embodiments, the packets can have a fixed size or different sizes. In some embodiments, packets representing data from different streams can, optionally, be interleaved. In other words, a packet of stream A can be followed by a packet of stream B, followed by further packets from stream A, which allows the device 102 to generate the information in any order. In addition, a packet may contain data from more than one multimedia stream. In addition, multiple packets can be used to save the data of one multimedia stream. This packet-based approach provides flexibility in saving multimedia data in the data portion 304. As described above, the data contained in the data part 304 can represent, for example, image data (raw, uncompressed and/or compressed). In addition, the data part 304 may include image data representing multiple representations of the same image, or image data for multiple images to be combined into one image. In addition, the data part 304 may include audio data, video data, graphics or text data for annotating the image data, and/or executable program code for working with or processing the image data contained in the data object.
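The packet-based storage described above might look roughly like the following; the packet fields (a 2-byte stream identifier and a 4-byte length, which caps a packet near four gigabytes) are assumptions made for this sketch, not a layout taken from the patent. The example interleaves packets from two streams and then reassembles them per stream.

```python
import struct
from collections import defaultdict

def pack_packet(stream_id: int, payload: bytes) -> bytes:
    """Variable-size packet: stream id (2 bytes) + payload length (4 bytes,
    so up to about 4 GB per packet) + the payload bytes."""
    return struct.pack("<HI", stream_id, len(payload)) + payload

def unpack_packets(blob: bytes):
    """Reassemble per-stream data from a run of interleaved packets."""
    streams, pos = defaultdict(bytes), 0
    while pos < len(blob):
        stream_id, length = struct.unpack_from("<HI", blob, pos)
        pos += 6
        streams[stream_id] += blob[pos:pos + length]
        pos += length
    return dict(streams)

# Packets from stream 1 (image tiles) and stream 2 (an audio note), interleaved.
data_part = (pack_packet(1, b"tile-A") + pack_packet(2, b"voice-")
             + pack_packet(1, b"tile-B") + pack_packet(2, b"note"))
print(unpack_packets(data_part))   # {1: b'tile-Atile-B', 2: b'voice-note'}
```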

The index part 306 contains indexes 316-1 to 316-N for the first through N-th multimedia streams, respectively. As noted above, the index information is used to determine the location of the desired pieces of data in the data object. According to one embodiment, the index object 306 is essentially the same as the index object used in the above-mentioned ASF format. According to other embodiments, other appropriate indexing mechanisms can be used.
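The kind of direct, offset-based access that the index part enables can be shown in a few lines; the `(offset, length)` index entries below are a simplification of whatever index records a real implementation (or ASF) would actually use.

```python
import io

def read_stream(f, index, n):
    """Seek straight to the n-th stream's data using (offset, length) index
    entries, without scanning the preceding streams."""
    offset, length = index[n]
    f.seek(offset)
    return f.read(length)

# Build a toy data part and its index (offset, length) in memory.
buf = io.BytesIO()
index = []
for payload in (b"<primary image>", b"<thumbnail>", b"<audio annotation>"):
    index.append((buf.tell(), len(payload)))
    buf.write(payload)

print(read_stream(buf, index, 2))   # b'<audio annotation>' read directly via its offset
```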

Fig. 4 shows the components of the image container file 106 (Fig. 1) according to a second embodiment. This embodiment is essentially the same as the embodiment shown in Fig. 3, except that a metadata part 400 is added to the header part 302. The metadata part 400 provides a mechanism for associating metadata with all multimedia streams of the image container file 106, or with one or more multimedia stream(s) of the image container file 106. In addition, the metadata part 400 allows a consumer of the image container file 106 to access the metadata without having to process the data part 304 and the index portion 306. In addition, the metadata part 400 provides an extensible metadata format, giving flexibility in adapting the image container file 106 to other applications.

According to one embodiment, the metadata part 400 allows one or more namespaces to be specified, with the individual metadata elements in each namespace including a self-describing name field and a corresponding value field. According to one embodiment, the metadata content is expressed in the Extensible Markup Language (XML). In one embodiment, a namespace can contain metadata elements with definitions essentially the same as those specified in the Dublin Core Metadata Initiative (DCMI) metadata element set, issued March 4, 2003, and in the DCMI type vocabulary, published February 12, 2003. Alternative or additional metadata namespaces may also include definitions essentially similar to those of other established metadata standards. For example, these other formats include the Exchangeable Image File Format for digital cameras (EXIF version 2.2, April 2002) from the Japan Electronic Industry Development Association (JEIDA) and/or the Information Interchange Model (IIM), version 4.1, released in July 1999 by the International Press Telecommunications Council (IPTC). Other standardized namespaces, or self-describing namespaces customized for a specific application, can also be used in the metadata part 400.

The metadata part 400 may contain information related to the image (or to audio or video), for example, the title, the name of the creator, the subject, a description, the publisher name, the distributor name, the date, the format, the language, and other types of information that may be needed depending on the application. Because the metadata part 400 is extensible, applications and/or users can define their own metadata, and these extensions can coexist in the same container and metadata part without conflict.
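A rough sketch of what an extensible, namespace-based XML metadata part could contain, built here with Python's standard `xml.etree.ElementTree`; the Dublin Core namespace URI is the published one, while the `cam` namespace and the specific field values are invented only to illustrate extensibility.

```python
import xml.etree.ElementTree as ET

# Namespaces: Dublin Core for standard fields, plus an invented application
# namespace ("cam") to illustrate user- or application-defined extensions.
DC = "http://purl.org/dc/elements/1.1/"
APP = "urn:example:camera-metadata"        # hypothetical custom namespace
ET.register_namespace("dc", DC)
ET.register_namespace("cam", APP)

meta = ET.Element("metadata")
for name, value in [("title", "Harbour panorama, frame 3"),
                    ("creator", "A. Photographer"),
                    ("date", "2004-06-15"),
                    ("format", "image/jpeg"),
                    ("language", "en")]:
    ET.SubElement(meta, f"{{{DC}}}{name}").text = value

# An application-defined extension living alongside the standard elements.
ET.SubElement(meta, f"{{{APP}}}exposure").text = "1/250 s, f/8"

print(ET.tostring(meta, encoding="unicode"))
```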

Fig. 5 shows the components of the image container file 106 according to a third embodiment. This embodiment is essentially similar to the embodiment of Fig. 4, except that a digital rights management (DRM) part 500 is added to the header portion 302 and encrypted data 514-1 to 514-N are stored for the first through N-th multimedia streams, respectively (instead of the data 314-1 to 314-N as in the embodiment shown in Fig. 4). According to one embodiment, the DRM part 500 may include a uniform resource locator (URL) of a license server (i.e., of the place where users obtain licenses to use the images and other data contained in the image container file 106). The license may contain a key for decrypting the encrypted data 514-1 to 514-N. The DRM information may also relate to verification of the authenticity of the image data. In other embodiments, encryption can be applied to each multimedia stream separately.
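The authenticity-checking side of the DRM part can be illustrated with standard-library primitives; the structure below (a license URL plus an HMAC over the stream bytes, with the key assumed to arrive with the license) is an assumption made for this sketch and is not the cipher or licensing protocol the patent prescribes.

```python
import hmac
import hashlib
from dataclasses import dataclass

@dataclass
class DrmInfo:
    """Illustrative DRM record: where to obtain a license, and a tag for checking the data."""
    license_url: str                 # where the user would acquire a decryption license
    authenticity_mac: bytes          # HMAC over the stream data, stored in the DRM part

def verify_authenticity(stream_data: bytes, content_key: bytes, drm: DrmInfo) -> bool:
    """Check that the (encrypted or plain) stream bytes were not tampered with."""
    expected = hmac.new(content_key, stream_data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, drm.authenticity_mac)

# Toy usage: the key would normally be delivered with the license fetched at license_url.
key = b"key-delivered-with-the-license"
payload = b"<encrypted stream 514-1>"
drm = DrmInfo("https://license.example.com/acquire",
              hmac.new(key, payload, hashlib.sha256).digest())
print(verify_authenticity(payload, key, drm))   # True
```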

Fig. 6 shows a generalized computing environment 600 that can be used to implement the techniques described herein. For example, the device 102 and the platform 104 may include a computing environment essentially similar to the generalized computing environment 600. The computing environment 600 is only one example of a computing environment and is not intended to impose any restrictions on the scope of use or functionality of the computer and network architectures. Furthermore, the computing environment 600 should not be construed as having any dependency or requirement relating to any one component or combination of components illustrated in the illustrative computing environment 600.

Referring to Fig. 6, one illustrative system for implementing the invention includes a computing device such as the computing device 600. In its most general configuration, the computing device 600 typically includes at least one processor 602 and system memory 604. Depending on the exact configuration and type of computing device, the system memory 604 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or a combination of the two. The system memory 604 usually holds an operating system 605, one or more program modules 606, and possibly program data 607. This basic configuration of the computing device 600 is illustrated in Fig. 6 by the components within dashed line 608.

The computing device 600 may have additional features or functions. For example, the computing device 600 may also include additional mass storage devices (removable and/or non-removable), such as magnetic disks, optical disks or tape. These additional storage devices are shown in Fig. 6 as a removable storage device 609 and a non-removable storage device 610. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. The system memory 604, the removable storage device 609 and the non-removable storage device 610 are all examples of computer storage media. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the computing device 600. Any such computer storage media may be part of the device 600. The computing device 600 may also have input device(s) 612, for example, a keyboard 622, a mouse 623, a pen, a voice input device, a touch input device, a scanner, etc. It may also contain output device(s) 614, for example, a display, speakers, a printer, etc. These devices are well known in the art and need no further discussion.

The computing device 600 may also include communication connections 616 that allow the device to communicate with other computing devices 618, for example, over a network. The communication connections 616 are one example of communication media. Communication media are usually embodied by computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. As used herein, the term "computer-readable media" includes both storage media and communication media.

Various modules and techniques may be described herein in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. These program modules and the like can be executed as native code, or can be downloaded and executed, for example, in a virtual machine or other just-in-time compilation execution environment. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.

As used in this description, the expression "one embodiment", "an embodiment" or "an illustrative embodiment" means that a particular described feature, structure or characteristic is included in at least one embodiment of the present invention. Thus, the use of such expressions may refer to more than one embodiment. Furthermore, the described features, structures or characteristics may be combined in any suitable manner in one or more embodiments.

However, a person skilled in the art will recognize that the invention can be practiced without one or more of the specific details, or with other methods, resources, materials, etc. In other instances, well-known structures, resources or operations have not been shown or described in detail merely to avoid obscuring aspects of the invention.

Although illustrative embodiments and applications of the present invention have been illustrated and described, it should be understood that the invention is not limited to the configurations and resources described above. Those skilled in the art can make various modifications, changes and variations in the arrangement, operation and details of the methods and systems disclosed herein without departing from the scope of the claimed invention.

1. A machine-readable medium having recorded thereon components that, when executed by a computer, enable image data to be stored, the components comprising:
a receiver of image data, and
a generator of image files for forming an image container file with multiple multimedia streams, the multiple multimedia streams comprising a first multimedia stream and a second multimedia stream, wherein the first multimedia stream includes first image data obtained from image data received by the receiver of image data, and the second multimedia stream includes arbitrary data.

2. The medium according to claim 1, in which the arbitrary data include second image data, and the first and second image data provide different representations of the same image.

3. The medium according to claim 1, in which the arbitrary data contain data representing an annotation of the image represented by the first image data.

4. The medium according to claim 3, in which the arbitrary data contain audio, video, graphics, text, date and time, location, web-link or animation data.

5. The medium according to claim 1, in which the arbitrary data contain an executable component.

6. The medium according to claim 1, in which the arbitrary data include second image data, and data from the first image data and data from the second image data are combined to represent an image that is larger than the individual images represented by the first and second image data.

7. The medium according to claim 1, in which the arbitrary data include second image data, and data from the first image data and data from the second image data are combined to represent an image of higher quality than the individual images represented by the first and second image data.

8. The medium according to claim 1, in which the generator of image files additionally serves to add metadata to the image container file.

9. The medium according to claim 1, in which the generator of image files additionally adds index information related to the locations of the first image data and the first arbitrary data in the image container file.

10. The medium according to claim 1, in which the image container file contains encrypted data.

11. The medium according to claim 1, in which the generator of image files additionally serves to save digital rights management information in the image container file.

12. The medium according to claim 11, in which the digital rights management information contains information related to obtaining a license to access the first image data.

13. The medium according to claim 11, in which the digital rights management information contains information related to verifying the authenticity of the first image data.

14. The medium according to claim 1, in which access to the image container file is possible by means of a media viewer application.

15. The medium according to claim 14, in which the media viewer application is a viewer that allows viewing of files in the Advanced Systems Format (ASF).

16. A machine-readable medium having recorded thereon commands that, when executed by a computer, enable image data to be stored, the commands implementing:
a means of collecting image data, and
a means of forming an image container file containing multiple multimedia streams, the multiple multimedia streams comprising a first multimedia stream and a second multimedia stream, wherein the first multimedia stream contains first image data obtained from image data collected by the means of collecting image data, and the second multimedia stream contains arbitrary data.

17. The medium according to claim 16, in which the arbitrary data include second image data, and the first and second image data provide different representations of the same image.

18. The medium according to claim 16, in which the arbitrary data contain data representing an annotation of the image represented by the first image data.

19. The medium according to claim 18, in which the arbitrary data contain audio, video, graphics, text, date and time, location, web-link or animation data.

20. The medium according to claim 16, in which the means of forming selectively encrypts the data in the image container file.

21. The medium according to claim 16, in which the arbitrary data contain an executable component.

22. The medium according to claim 16, in which the means of forming includes a means of adding metadata to the image container file.

23. The medium according to claim 16, in which the means of forming contains a means of saving index information related to the locations of the first image data and the first arbitrary data in the image container file.

24. The medium according to claim 16, further containing a means of saving digital rights management information in the image container file.

25. The medium according to claim 24, in which the digital rights management information contains information related to obtaining a license to access the first image data.

26. The medium according to claim 24, in which the digital rights management information contains information related to verifying the authenticity of the first image data.



 
