Video compression system and method for compensating for bandwidth limitations of communication channel

FIELD: information technology.

SUBSTANCE: a computer-implemented video compression method for online video games or applications, comprising: running video games and applications on a hosting service in response to user input received from a plurality of client devices, wherein the video games and applications generate uncompressed video; detecting a maximum data rate of a communication channel between the hosting service and a client by transmitting a feedback signal from the client to the hosting service; compressing the uncompressed video using a low-latency video compressor to generate a low-latency compressed video stream; transmitting the low-latency compressed video stream from the hosting service to the client; detecting that the maximum data rate would be exceeded if a particular frame of a frame sequence were transmitted from the hosting service to the client over the communication channel, and, instead of transmitting the frame that would cause the maximum data rate to be exceeded, ensuring that the client continues to display the previous frame of the frame sequence on its screen.
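The per-frame decision described above can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation: the function name, the per-frame-period bit budget, and the list-based interface are assumptions introduced for clarity.

```python
def stream_frames(frame_sizes_bits, max_data_rate_bps, frame_interval_s):
    """For each compressed frame, decide whether transmitting it would
    exceed the channel's measured maximum data rate. If it would, the
    frame is dropped and the client keeps displaying the prior frame.

    frame_sizes_bits  -- compressed size of each frame, in bits
    max_data_rate_bps -- maximum channel data rate detected via feedback
    frame_interval_s  -- duration of one frame period, in seconds
    """
    # Bits that fit in one frame period at the detected maximum rate.
    budget_bits = max_data_rate_bps * frame_interval_s
    decisions = []
    for size in frame_sizes_bits:
        if size <= budget_bits:
            decisions.append(True)   # transmit this frame
        else:
            decisions.append(False)  # drop: client repeats previous frame
    return decisions


# A 600 kbps channel at 60 fps allows roughly 10,000 bits per frame,
# so an oversized 20,000-bit frame is skipped rather than transmitted.
decisions = stream_frames([5000, 20000, 6000],
                          max_data_rate_bps=600_000,
                          frame_interval_s=1 / 60)
```

The key property is that no frame is ever queued behind an oversized predecessor, which is how the method trades occasional frame repetition for consistently low latency.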

EFFECT: reduced latency.

26 cl, 40 dwg

 

Related application

This application is a continuation-in-part (CIP) of application serial number 10/315,460, filed December 10, 2002, entitled "APPARATUS AND METHOD FOR WIRELESS VIDEO GAMING", which is assigned to the assignee of the present CIP application.

Technical field of the invention

The present disclosure relates generally to the field of data-processing systems that improve users' ability to access and manipulate audio and video media.

Background art

Recorded audio and motion pictures have been two aspects of social life since the days of Thomas Edison. In the early 20th century, recorded audio media (piano rolls and records) and motion-picture media (films and movies) were in wide circulation, but both technologies were still in their early stages of development. In the late 1920s, motion pictures were combined with sound on a mass-market consumer basis, followed by color motion pictures with sound. Radio broadcasting gradually took the form of an advertising-supported broadcast consumer audio medium. When a standard for television (TV) broadcasting was established in the mid-1940s, television joined radio as a broadcast consumer medium, bringing previously recorded motion pictures and live images into the home.

By the middle of the 20th century, most American homes had a phonograph for playing recorded audio, a radio for receiving live broadcasts, and a television set for playing live broadcast audio/video (A/V) media. Very often these three "media players" (record player, radio, and TV) were combined into one cabinet sharing common speakers, which became the "media center" of the home. Although the media choices were limited for the consumer, the media "ecosystem" was fairly stable. Most consumers knew how to use the "media players" and were able to enjoy their full capabilities. At the same time, the publishers of media (largely the motion picture and television studios and the music companies) were able to distribute their media both to theaters and to the home without suffering from widespread piracy or "secondary sales," i.e., the resale of used media. Typically, publishers derive no income from secondary sales, and as such, secondary sales reduce the income that publishers might otherwise receive from the buyer of used media in the form of a new sale. Although there certainly were used records sold during the middle of the 20th century, such sales did not have a large impact on record publishers because, unlike a motion picture or video program, which an adult typically watches once or only a few times, a music recording may be listened to hundreds or even thousands of times. Thus, music media is far less "perishable" (i.e., it has lasting value for an adult consumer) than motion picture/video media. Once a record was purchased, if the consumer liked the music, the consumer was likely to keep it for a long time.

From the middle of the 20th century through the present day, the media ecosystem has undergone a series of radical changes, to the benefit and detriment of both consumers and publishers. With the widespread introduction of audio tape recorders, especially cassette tapes with high-quality stereo sound, consumer convenience certainly increased. But this also marked the beginning of what is now a widespread practice with consumer media: piracy. Certainly, many consumers used cassette tapes purely for convenience, taping their own records, but an increasing number of consumers (for example, students in dormitories with ready access to each other's record collections) made pirated copies. Additionally, consumers would tape music broadcast over the radio rather than buying a record or cassette from the publisher.

The advent of the consumer videocassette recorder (VCR) led to even more consumer convenience, since a VCR could now be set to record a TV program that could be watched at a later time, and it also led to the creation of rental outlets where movies and TV programming could be accessed "on demand." The rapid development of mass-market home media devices since the mid-1980s has led to an unprecedented level of choice and convenience for consumers, and it has also led to a rapid expansion of the media publishing market.

Today, consumers are faced with a wide array of media choices as well as a multitude of media devices, many of which are tied to particular types of media or particular publishers. An avid consumer of media may have a stack of devices connected to TVs and computers in various rooms of the house, resulting in a web of wires running to one or more televisions and/or personal computers (PCs), as well as a group of remote controls. (In the context of this application, the term "personal computer" or "PC" refers to any sort of computer suitable for use in the home or office, including a desktop computer, a Macintosh® or other non-Windows computer, a Windows-compatible device, a Unix variant, a laptop, etc.) These devices may include a video game console, a VCR, a DVD player, a sound processor/amplifier for surround sound, a satellite set-top box, a cable TV set-top box, etc. And, for an avid consumer, there may be multiple devices with similar functions because of compatibility issues. For example, a consumer may own both an HD-DVD player and a Blu-ray DVD player, or both a Microsoft Xbox® and a Sony Playstation® video game system. Indeed, because of the incompatibility of some games across versions of game consoles, the consumer may own both an XBox and a later version, such as an Xbox 360®. Frequently, consumers are befuddled as to which video input and which remote control to use. Even after a disc is placed into the correct player (e.g., a DVD, HD-DVD, Blu-ray, Xbox, or Playstation player), the video and audio inputs are selected for that device, and the correct remote control is found, the consumer is still faced with technical challenges. For example, in the case of a widescreen DVD, the user may first need to determine and then set the correct aspect ratio on the monitor or TV screen (e.g., 4:3, Full, Zoom, Wide Zoom, Cinema Wide, etc.). Similarly, the user may first need to determine and then set the correct audio surround-sound format (e.g., AC-3, Dolby Digital, DTS, etc.). Often, the consumer is unaware that they may not be enjoying the media content to the full capability of their television or audio system (e.g., watching a movie squeezed into the wrong aspect ratio, or listening to audio in stereo rather than in surround sound).

An increasing number of Internet-based media devices are being added to the stack of devices as well. Audio devices, such as the Sonos® Digital Music System, stream audio directly from the Internet. Likewise, devices such as the Slingbox™ set-top box record video and stream it through a home network, or out through the Internet, where it can be watched remotely on a PC. Internet Protocol Television (IPTV) services offer cable-TV-like services through digital subscriber line (DSL) or other home Internet connections. There have also been recent efforts to integrate multiple media functions into a single device, such as the Moxi® Media Center and PCs running Windows XP Media Center Edition. While each of these devices offers an element of convenience for the functions it performs, each lacks ubiquitous and simple access to most media. Further, such devices frequently cost hundreds of dollars to manufacture, often because of the need for expensive processing and/or local storage. Additionally, these modern consumer electronic devices typically consume a great deal of power, even while idle, which means they are expensive over time and wasteful of energy resources. For example, a device may continue to operate if the consumer neglects to turn it off or switches to a different video input. And, because none of the devices is a complete solution, it must be integrated with the other stack of devices in the house, still leaving the user with a web of wires and a sea of remote controls to operate.

Further, when many of the newer Internet-based devices do work properly, they typically offer media in a more generic form than it might otherwise be available in. For example, devices that stream video through the Internet often stream just the video material, not the interactive "extras" that frequently accompany DVDs, such as "making of" videos, games, or director's commentary. This is because the interactive material is frequently produced in a particular format intended for a particular device that handles the interactivity locally. For example, DVD, HD-DVD, and Blu-ray discs each have their own particular interactive format. Any home media device or local computer that might be developed to support all of the popular formats would require a level of sophistication and flexibility that would likely make it prohibitively expensive and complex for the consumer to operate.

Adding to the problem, if a new format were later introduced, the local device might not have the hardware capability to support the new format, which would mean that the consumer would have to purchase an upgraded local media device. For example, if higher-resolution video or stereoscopic video (e.g., one video stream for each eye) were later introduced, the local device might not have the computational capability to decode the video, or it might not have the hardware to output the video in the new format (e.g., assuming stereoscopy is achieved through 120 fps video synchronized with shuttered glasses, with the frames delivered to each eye at 60 fps, if the consumer's video hardware can only support 60 fps video, this option would be unavailable without an upgraded hardware purchase).

The issue of media device obsolescence and complexity is a serious problem when it comes to sophisticated interactive media, especially video games.

Today's video game applications are largely divided among four major non-portable hardware platforms: the Sony PlayStation® 1, 2, and 3 (PS1, PS2, and PS3); the Microsoft Xbox® and Xbox 360®; the Nintendo Gamecube® and Wii™; and PC-based games. Each of these platforms differs from the others such that games written to run on one platform usually do not run on another platform. There may also be device compatibility problems from one generation to the next. Even though a majority of game developers create software games that are designed independently of a particular platform, in order to run a particular game on a particular platform, a proprietary layer of software (frequently called a "game development engine") is needed to adapt the game for use on that platform. Each platform is sold to the consumer as a "console" (i.e., a standalone box attached to a TV or monitor/speakers), or it is a PC itself. Typically, video games are sold on optical media, such as a Blu-ray DVD, DVD-ROM, or CD-ROM, which contains the video game embodied as a sophisticated real-time software application. As home broadband speeds have increased, video games are becoming increasingly available for download.

The specification requirements for achieving platform compatibility of video game software are extremely demanding because of the real-time nature and high computational requirements of advanced video games. For example, one might expect full compatibility of video games from one generation to the next (e.g., from XBox to XBox 360, or from Playstation 2 ("PS2") to Playstation 3 ("PS3")), just as there is general compatibility of productivity applications (e.g., Microsoft Word) from one PC to another with a faster processing unit or core. However, this is not the case with video games. Because video game manufacturers typically seek the highest possible performance for a given price point when a generation of video games is released, dramatic architectural changes to the system are frequently made, such that many games written for the prior-generation system do not work on the later-generation system. For example, the XBox was based upon the x86 family of processors, whereas the XBox 360 was based upon the PowerPC family.

Techniques to emulate a prior architecture can be utilized, but given that video games are real-time applications, it is often infeasible to achieve exactly identical behavior in an emulation. This is a detriment to the consumer, the video game console manufacturer, and the video game software publisher. For the consumer, it means needing to keep video game consoles of both the old and new generations hooked up to the TV in order to be able to play all of the games. For the console manufacturer, it means the costs associated with emulation and slower adoption of new consoles. And for the publisher, it means that multiple versions of new games may have to be released in order to reach all potential consumers: not only a version for each brand of video game console (e.g., XBox, Playstation), but often a version for each version of a given brand (e.g., PS2 and PS3). For example, a separate version of Electronic Arts' "Madden NFL 08" was developed for the XBox, XBox 360, PS2, PS3, Gamecube, Wii, and PC, among other platforms.

Portable devices, such as cellular ("cell") phones and portable media players, also present challenges to game developers. An increasing number of such devices are connected to wireless data networks and are able to download video games. But there is a wide variety of cell phones and media devices on the market, with a wide range of different display resolutions and computing capabilities. Also, because such devices typically have power consumption, cost, and weight constraints, they typically lack advanced graphics-acceleration hardware, such as a graphics processing unit ("GPU") of the sort made by NVIDIA Corporation of Santa Clara, Calif. Consequently, game software developers typically develop a given game title simultaneously for many different types of portable devices. A user may find that a given game title is not available for their particular cell phone or portable media player.

In the case of home game consoles, the hardware platform manufacturers typically charge a royalty to the software game developers for the ability to publish a game on their platform. Cell phone wireless carriers also typically charge a royalty to the game publisher for downloading a game into the cell phone. In the case of PC games, there is no royalty paid to publish games, but game developers typically face high costs due to the higher customer-service burden of supporting a wide range of PC configurations and the installation issues that may arise. Also, PCs typically present fewer barriers to the piracy of game software, since they are easily reprogrammed by a technically knowledgeable user, and games can be more easily pirated and more easily distributed (e.g., through the Internet). Thus, for a software game developer, there are costs and disadvantages in publishing on game consoles, cell phones, and PCs.

For game publishers on consoles and PCs, the costs do not end there. To distribute games through retail channels, publishers charge retailers a wholesale price below the selling price so that the retailer can earn a margin. The publisher also typically has to pay the cost of manufacturing and distributing the physical media holding the game. The retailer also frequently charges the publisher a "price protection" fee to cover possible contingencies, such as when a game does not sell, when the game's price is reduced, or when the retailer must refund part or all of the wholesale price and/or take the game back from a buyer. Additionally, retailers typically also charge publishers fees to help market the games in sales flyers. Furthermore, retailers are increasingly buying back games from users who have stopped playing them and then selling them as used games, typically sharing none of the used-game revenue with the game publisher. Adding to the cost burden placed upon game publishers is the fact that games are frequently pirated and distributed through the Internet for users to download and make free copies of.

As broadband Internet speeds have increased and broadband connectivity has become more widespread in the US and throughout the world, particularly to the home and to Internet "cafés" where Internet-connected PCs are rented, games are increasingly being distributed via download to PCs or consoles. Also, broadband connections are increasingly being used to play multiplayer and massively multiplayer online games (both of which are referred to in the present disclosure by the acronym "MMOG"). These changes mitigate some of the costs and issues associated with retail distribution. Downloading online games addresses some of the disadvantages for game publishers, in that distribution costs are typically lower and there are few or no costs from unsold media. But downloaded games are still subject to piracy, and because of their size (frequently many gigabytes), they can take a very long time to download. In addition, multiple games can fill up small disk drives, such as those sold with portable computers or with video game consoles. However, to the extent that games or MMOGs require an online connection for the game to be playable, the piracy problem is mitigated, since the user is usually required to have a valid user account. Unlike linear media (e.g., video and music), which can be copied by a camera filming the display screen or a microphone recording the audio from the speakers, each video game experience is unique and cannot be copied using simple video/audio recording. Thus, even in regions where copyright laws are not strongly enforced and piracy is rampant, MMOGs can be shielded from piracy and can therefore support a business. For example, Vivendi SA's "World of Warcraft" MMOG has been successfully deployed throughout the world without suffering from piracy. And many online or MMOG games, such as Linden Lab's "Second Life" MMOG, generate revenue for the games' operators through economic models built into the games, in which assets can be bought, sold, and even created using online tools. Thus, mechanisms in addition to conventional game software purchases or subscriptions can be used to pay for the use of online games.

While piracy can often be mitigated due to the nature of online games or MMOGs, the online game operator nevertheless faces remaining challenges. Many games require significant local (i.e., in-home) processing resources in order to work properly as online games or MMOGs. If a user has a low-performance local computer (e.g., one without a GPU, such as a low-end laptop), they may not be able to play the game. Additionally, as game consoles age, they fall further behind the state of the art and may not be able to handle more advanced games. Even assuming the user's local PC is able to meet the computational requirements of a game, there are often installation complexities. There may be driver incompatibilities (e.g., if a new game is downloaded, it may install a new version of a graphics driver that renders a previously installed game, reliant upon an older version of the graphics driver, inoperable). A console may run out of local disk space as more games are downloaded. Complex games typically receive downloaded patches over time from the game developer as bugs are found and fixed, or if modifications are made to the game (e.g., if the game developer finds that a level of the game is too hard or too easy to play). These patches require new downloads. But sometimes not all users complete downloading all of the patches. Other times, the downloaded patches introduce further compatibility or disk-space issues.

Also, during the play of a game, large data downloads may be required to provide graphical or behavioral information to the local PC or console. For example, if a user enters a room in an MMOG and encounters a scene or a character made up of graphical data, or with behaviors, that are not available on the user's local machine, then the data for that scene or character must be downloaded. This can result in a substantial delay during gameplay if the Internet connection is not fast enough. And if the encountered scene or character requires storage or computational capability beyond that of the local PC or console, it can create a situation in which the user cannot proceed in the game, or must continue with reduced-quality graphics. Thus, online or MMOG games often limit their storage and/or computational complexity requirements. Additionally, they often limit the amount of data transferred during the game. Online or MMOG games may also narrow the market of users who are able to play them.

Furthermore, technically knowledgeable users are increasingly decompiling local copies of games and modifying them so that they can cheat. Cheats may be as simple as making a button press repeat faster than is humanly possible (e.g., so as to shoot a gun very rapidly). In games that support in-game asset transactions, cheating can reach a level of fraud that results in fraudulent transactions involving assets of actual economic value. When an online or MMOG economic model is based on such asset transactions, this can result in substantial harmful consequences for the game operators.

The cost of developing a new game has grown as PCs and consoles have become able to produce ever more sophisticated games (e.g., with more realistic graphics, such as real-time ray tracing, and more realistic behavior, such as real-time physics simulation). In the early days of the video game industry, video game development was a process very similar to application software development; that is, most of the development cost was in the development of the software, as opposed to the development of graphical, audio, and behavioral elements or "assets," such as those that may be developed for a motion picture with extensive special effects. Today, many sophisticated video game development efforts more closely resemble special-effects-rich motion picture development than software development. For instance, many video games provide simulations of 3D worlds and generate increasingly photorealistic characters, props, and environments (i.e., computer graphics that seem as realistic as live-action imagery shot photographically). One of the most challenging aspects of photorealistic game development is creating a computer-generated human face that is indistinguishable from a live-action human face. Facial capture technologies, such as Contour™ Reality Capture developed by Mova of San Francisco, Calif., capture and track the precise geometry of a performer's face at high resolution while it is in motion. This technology allows a 3D face to be rendered on a PC or game console that is virtually indistinguishable from the captured live-action face. Capturing and rendering a "photoreal" human face precisely is useful in several respects. First, highly recognizable celebrities or athletes are often used in video games (frequently at high cost), and imperfections may be apparent to the user, making the viewing experience distracting or unpleasant. Frequently, a high degree of detail is required to achieve a high degree of photorealism, requiring the rendering of a large number of polygons and high-resolution textures, potentially with the polygons and/or textures changing on a frame-by-frame basis as the face moves.

When scenes with large numbers of polygons and detailed textures change rapidly, the PC or game console supporting the game may not have enough RAM to store enough polygon and texture data for the required number of animation frames generated in the game segment. Further, the single optical drive or single magnetic disk drive typically available on a PC or game console is usually much slower than the RAM and typically cannot keep pace with the maximum data rate that the GPU can accept while rendering polygons and textures. Current games typically load most of the polygons and textures into RAM, which means that a given scene is largely limited in complexity and duration by the capacity of the RAM. In the case of facial animation, for example, this may limit the PC or game console either to a low-resolution face that is not photoreal, or to a photoreal face that can only be animated for a limited number of frames before the game pauses to load polygons and textures (and other data) for additional frames.

Watching a slowly moving progress bar across the screen as the PC or console displays a message such as "Loading..." is accepted by today's users as an inherent drawback of complex video games. The delay while the next scene loads from the disk ("disk" in this description, unless otherwise qualified, refers to non-volatile optical or magnetic media, as well as non-disk media such as solid-state "Flash" memory) can take several seconds or even several minutes. This is a waste of time and can be quite frustrating to a game player. As discussed previously, much or all of the delay may be due to the load time of polygon, texture, or other data from the disk, but it may also be the case that part of the load time is spent with the CPU and/or GPU of the PC or console preparing data for the scene. For example, a soccer video game may allow the players to choose among a large number of players, teams, stadiums, and weather conditions. So, depending on what particular combination is chosen for the scene, different polygons, textures, and other data (collectively, "objects") may be required (e.g., different teams have different colors and patterns on their uniforms). It may be possible to enumerate many or all of the various permutations and pre-compute many or all of the objects in advance, and to store the objects on the disk used to store the game. But if the number of permutations is large, the amount of storage required for all of the objects may be too large to fit on the disk (or too impractical to download). Thus, existing PC and console systems are typically constrained in both the complexity and the play duration of given scenes, and suffer from long load times for complex scenes.

Another significant limitation of prior-art application software systems and game systems is that they increasingly make use of large databases of, e.g., 3D objects such as polygons and textures, that need to be loaded into the PC or game console for processing. As discussed above, such databases can take a long time to load when stored locally on a disk. Load time, however, is usually far longer if the database is stored remotely and accessed through the Internet. In such a situation, it may take minutes, hours, or even days to download a large database. Further, such databases are often created at great expense (e.g., a detailed 3D model of a tall-masted sailing ship for use in a game, movie, or historical documentary), and they are intended for sale to the local end user. However, the database is at risk of being pirated once it has been downloaded by the local user. In many cases, a user wants to download a database simply to evaluate it, in order to verify whether it suits their needs (e.g., whether a 3D costume for a game character has a satisfactory appearance when the user performs a particular move). A long load time can be a deterrent for the user evaluating the 3D database before deciding to make a purchase.

Similar issues occur in MMOGs, particularly in games that allow users to utilize increasingly customized characters. For a PC or game console to display a character, it needs access to a database of 3D geometry (polygons, textures, etc.) as well as behaviors (e.g., whether the character has a shield, and whether the shield is strong enough to deflect a spear) for that character. Typically, when an MMOG is first played by a user, a large number of databases for the characters is already included with the initial copy of the game, available locally on the game's optical disc or downloaded to a disk. But as the game progresses, if the user encounters a character or object whose database is not available locally (e.g., if another user has created a customized character), then before that character or object can be displayed, its database must be downloaded. This can result in a substantial delay of the game.

Given the level of sophistication and complexity of video games another problem for game developers and publishers associated with video game consoles prior art is that the development of video games often takes two to three years at a cost of tens of millions of dollars. Considering the fact that the new platform video game console is injected at a rate of approximately one every five years, game developers need to start developing these games for years to release a new game console to the video games went on sale simultaneously with the release of the new platform. Sometimes produce multiple consoles from competing manufacturers around the same time (for example, with an interval of one or two years), but still unknown to the popularity of each console, for example, which console will make the largest amount of sales software video games. For example, in the recent cycle of consoles was scheduled to enter the Microsoft XBox 360, Sony Playstation 3 and Nintendo Wii approximately one period of time. But during the years preceding the introduction of the game developers essentially on what should have been "doing their bets" what platform console will be more successful than others, and to give their development resources accordingly. The film also must be adequately allocate their limited production resources based on what they estimate will be the likely success of the film, long before the release of this film. With the increase of the level of investment required for the games, the gaming industry is becoming more and more similar to film production and manufacturing company games usually give their inputs on the basis of its assessment of the future success of a particular video games. But unlike (they) film companies, this rate is based not just on the success of the production, rather it is based on the success of a gaming console on which the game must be fulfilled. 
Releasing a game on multiple consoles at once can mitigate the risk, but this additional effort increases cost, and it often delays the actual release of the game.

Application software and operating environments on the PC are also becoming more computationally intensive, dynamic and interactive, not only to make them more visually appealing to users, but also to make them more useful and intuitive. For example, both the new Windows Vista™ operating system and successive versions of the Macintosh® operating system incorporate visual animation effects. Advanced graphics tools, such as Maya™ from Autodesk, Inc., provide highly sophisticated 3D rendering and animation capabilities that push the limits of state-of-the-art CPUs and GPUs. However, the computational requirements of these new tools raise a number of practical issues for users and software developers of such products.

Because the visual display of an operating system (OS) must work on a wide range of computers, including prior-generation computers that are no longer sold but can nevertheless be upgraded to the new OS, the graphics requirements of the OS are largely limited by the "least common denominator" of the computers on which the OS will run, which typically includes computers with no GPU at all. This severely limits the graphics capabilities of the OS. Furthermore, battery-powered portable computers (e.g., laptops) limit display capability, because high levels of computational activity in the CPU or GPU typically result in higher power consumption and shorter battery life. Portable computers typically include software that automatically lowers processor activity to reduce power consumption when the processor is not in use. On some computer models the user may lower processor activity manually. For example, Sony Corporation's VGN-SZ280P laptop contains a switch labeled "Stamina" on one side (for lower performance, longer battery life) and "Speed" on the other (for higher performance, shorter battery life). An OS running on a portable computer must remain usable even when the computer is running at a fraction of its peak performance capability. Accordingly, OS graphics performance often remains far below the state-of-the-art available computing power.

High-end, computationally intensive applications, such as Maya, are frequently sold with the expectation that they will be used on high-performance PCs. This typically establishes a much higher-performance, more expensive and less portable "least common denominator" requirement. As a result, such applications have a much more limited target audience than a general-purpose OS (or general-purpose productivity applications, like Microsoft Office), and they typically sell in much lower volume than general-purpose OS software or general-purpose application software. The potential audience is further limited because it is often difficult for a prospective user to try out such computationally intensive applications in advance. For example, suppose a student wants to learn how to use Maya, or a potential purchaser already familiar with such applications wants to try Maya before investing in the purchase (which may well also involve buying a high-end computer capable of running Maya). While the student or potential purchaser can download a demo version of Maya, or obtain a physical media copy of a demo version, if they lack a computer capable of running Maya to its full potential (e.g., handling a complex 3D scene), they will be unable to make a fully informed assessment of the product. This substantially limits the audience for such high-end applications. It also contributes to a high selling price, since the development cost is usually amortized across far fewer purchases than in the case of a general-purpose application.

High-priced applications also create more incentive for individuals and businesses to use "pirated" copies of software. As a result, high-end application software suffers from rampant piracy, despite significant efforts by publishers of such software to mitigate piracy of this kind through various techniques. Still, even when using pirated high-end applications, users cannot avoid the need to invest in an expensive state-of-the-art PC to run the pirated copies. So, while users of pirated software may obtain use of an application at a fraction of its actual retail price, they are nevertheless required to buy or obtain an expensive PC in order to use the application.

The same is true for users of pirated high-performance video games. Although pirates may obtain the games at a fraction of their actual price, they are still required to purchase the expensive computing hardware (e.g., a GPU-enhanced PC or a high-end video game console like the Xbox 360) needed to play the games properly. Given that video games are typically a pastime for consumers, the added cost of a high-end video game system can be prohibitive. This situation is worse in countries (such as China) where the average annual income of workers is currently quite low relative to that of workers in the United States. As a result, a much smaller percentage of the population owns a high-performance video game system or a high-performance PC. In such countries, "Internet cafes", in which users pay a fee to use a computer connected to the Internet, are quite common. Frequently, such Internet cafes have older-model or low-end PCs without high-performance features, such as a GPU, which might otherwise enable players to play computationally intensive video games. This is a key factor in the success of games that run on low-end PCs, such as "World of Warcraft" from the media conglomerate Vivendi, which is highly successful in China and is commonly played in Internet cafes there. In contrast, a computationally intensive game like "Second Life" is much less likely to be playable on a PC installed in a Chinese Internet cafe. Such games are virtually inaccessible to users who only have access to low-end PCs in Internet cafes.

Barriers also exist for users who are considering the purchase of a video game and would like to try out a demo version of the game first by downloading it via the Internet to their home computer. A video game demo is often a full-fledged version of the game with some features disabled, or with limits placed on the amount of gameplay. This may involve a long process (perhaps hours) of downloading gigabytes of data before the game can be installed and executed on either a PC or a console. In the case of a PC, it may also involve figuring out what special drivers are needed for the game (e.g., OpenGL or DirectX drivers), downloading the correct version, installing them, and then determining whether the PC is capable of playing the game. This latter step may involve determining whether the PC has enough processing capability (CPU and GPU), sufficient RAM, and a compatible OS (e.g., some games run on Windows XP, but not on Vista). Thus, after a lengthy process of attempting to run a video game demo, the user may well discover that the demo cannot be played, given the user's PC configuration. Worse, once the user has downloaded new drivers in order to try the demo, those driver versions may be incompatible with other games or applications the user regularly uses on the PC, so installing a demo may render previously working games or applications inoperable. Not only are these barriers frustrating for the user, but they also create obstacles for video game software publishers and video game developers in marketing their games.

Another problem that leads to economic inefficiency relates to the fact that a given PC or game console is usually designed to accommodate a certain level of performance requirements for applications and/or games. For example, some PCs have more or less RAM, slower or faster CPUs, and slower or faster GPUs, if they have GPUs at all. Some games or applications take advantage of the full computing power of a given PC or console, while many do not. If a user's choice of game or application requires less than the peak performance capability of the local PC or console, then the user may have wasted money on unused features of the PC or console. In the case of a console, the console manufacturer may have paid more than was necessary to subsidize the cost of the console.

Another problem that exists in the marketing and use of video games involves allowing a user to watch others playing a game before committing to the purchase of that game. Several prior-art approaches exist for recording video games for replay at a later time. For example, U.S. Patent No. 5,558,339 teaches recording game state information, including game controller actions, during "gameplay" in the video game client computer (owned by the same or a different user). This state information can be used at a later time to replay some or all of the game action on a video game client computer (e.g., a PC or console). A significant drawback of this approach is that, for a user to view the recorded game, the user must possess a video game client computer capable of playing the game and must have the video game application running on that computer, such that the gameplay is identical when the recorded game state is replayed. Beyond that, the video game application has to be written in such a way that there is no possible execution difference between the recorded game and the replayed game.

For example, game graphics are generally computed on a frame-by-frame basis. For many games, the game logic may sometimes take shorter or longer than one frame time to compute the graphics displayed for the next frame, depending on whether the scene is particularly complex, or whether there are other delays that slow execution (e.g., on a PC, another process may be running that takes away CPU cycles from the game application). In such a game, a "threshold" frame can eventually occur that is computed in slightly less than one frame time (say, a few CPU cycles less). When that same scene is computed again using the exact same game state information, it could easily take a few CPU cycles more than one frame time (e.g., if an internal CPU bus is slightly out of phase with the external DRAM bus and it introduces a few CPU cycles' worth of delay, even absent a large delay from another process stealing milliseconds of CPU time from the game's processing). Therefore, when the game is replayed, the frame gets computed in two frame times rather than a single frame time. Some game behaviors are based on how often the game computes a new frame (e.g., when the game samples input from the game controllers). While the game is being played, this discrepancy in the time reference for different behaviors does not impact gameplay, but it can result in the replayed game producing a different result. For example, if a basketball's ballistics are calculated at a steady 60 fps rate, but game controller input is sampled at the rate of computed frames, the rate of computed frames could be 53 fps when the game was recorded, but 52 fps when the game is replayed, and this can make the difference between whether the basketball is blocked from going into the basket or not, producing a different outcome.
Thus, using game state to record a video game requires very careful game software design to ensure that replay, using the same game state information, produces the exact same outcome.
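The replay-divergence problem described above can be sketched in a few lines. This is a minimal illustration with hypothetical game rules (not taken from the patent): physics is deterministic, but controller input is sampled once per computed frame, so the time at which a button press takes effect depends on the computed-frame rate, which may differ between recording and replay.

```python
import math

# Hypothetical rule for illustration: a ball starts moving at 10 units/s
# at the moment the press is *sampled*, not the moment it occurs, and
# input is sampled once per computed frame.

def sampled_press_time(press_time_s, frame_rate_fps):
    """Time of the first per-frame input sample at or after the press."""
    frame_index = math.ceil(press_time_s * frame_rate_fps)
    return frame_index / frame_rate_fps

def final_position(frame_rate_fps, press_time_s=0.5, speed=10.0, run_time=1.0):
    """Ball position after `run_time`, given the sampled press time."""
    t = sampled_press_time(press_time_s, frame_rate_fps)
    return speed * (run_time - t)

recorded = final_position(53)  # computed-frame rate while recording
replayed = final_position(52)  # computed-frame rate during replay
print(recorded == replayed)    # False: identical inputs, different outcome
```

Identical game state and identical inputs still diverge, because the 53 fps run samples the press at 27/53 s while the 52 fps run samples it at exactly 0.5 s, which is precisely the kind of mismatch the text describes.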

Another prior-art approach for recording video games is to simply record the video output of a PC or video game system (e.g., to a VCR, a DVD recorder, or a video capture board on a PC). The video can then be rewound and replayed, or alternatively, the recorded video can be uploaded to the Internet, typically after being compressed. A disadvantage of this approach is that when a 3D game sequence is played back, the user is limited to viewing the sequence only from the point of view from which it was recorded. In other words, the user cannot change the point of view of the scene.

Further, when compressed video of a game sequence played on a home PC or game console is made available to other users via the Internet, even if the video is compressed in real time, it may be impossible to upload the compressed video to the Internet in real time. The reason is that many homes in the world that are connected to the Internet have highly asymmetric broadband connections (e.g., DSL and cable modems typically have far higher downstream bandwidth than upstream bandwidth). Compressed high-resolution video sequences often have higher bandwidths than the network's upstream bandwidth, making it impossible to upload them in real time. Thus, there would be a significant delay after the game sequence is played (perhaps minutes or even hours) before another user on the Internet would be able to view the game. Although this delay is tolerable in certain situations (e.g., watching a player's accomplishments that occurred at a prior time), it eliminates the ability to watch a game live (e.g., a basketball tournament played by champion players) or to have "instant replay" capability as the game is played live.
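The asymmetry argument is simple arithmetic, sketched below with illustrative numbers (the bitrates are assumptions, not figures from the text): whenever the compressed stream's bitrate exceeds the upstream bandwidth, the upload falls further behind live for every minute played.

```python
# Back-of-the-envelope sketch: how far behind live an upload falls when
# the stream bitrate exceeds the upstream bandwidth of an asymmetric link.

def upload_delay_s(stream_mbps, upstream_mbps, play_time_s):
    """Extra seconds a viewer must wait after `play_time_s` of gameplay."""
    data_megabits = stream_mbps * play_time_s
    upload_time_s = data_megabits / upstream_mbps
    return max(0.0, upload_time_s - play_time_s)

# E.g., a 5 Mbps compressed HD stream over a 1 Mbps upstream link:
delay = upload_delay_s(stream_mbps=5.0, upstream_mbps=1.0, play_time_s=600)
print(delay / 60)  # -> 40.0: ten minutes of play ends up 40 minutes behind
```

With these assumed rates, live viewing and instant replay are plainly impossible; only after-the-fact viewing remains, as the paragraph above notes.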

Another prior-art approach allows a viewer with a television set to watch video games live, but only under the control of a television production crew. Some television channels, in the United States and in other countries, provide video game viewing channels on which the television audience can watch certain video game users (e.g., top-rated players participating in tournaments). This is accomplished by feeding the video output of the video game systems (PCs and/or consoles) into the video processing and distribution equipment of the television channel. This is not unlike the television channel broadcasting a live basketball game, in which several cameras provide live feeds from different angles around the basketball court. The television channel can then make use of its video/audio processing and effects equipment to manipulate the output from the various video game systems. For example, a television channel can overlay text on top of the video from a video game indicating the status of different players (just as it might overlay text during a live basketball game), and the television channel can overdub audio from a commentator who can discuss the action occurring during the games. Additionally, the video game output can be combined with video from cameras recording the actual players of the games (e.g., showing their emotional reactions to the game).

One problem with this approach is that such live video feeds must be available to the television channel's video processing and distribution equipment in real time in order for the broadcast to have the excitement of a live event. As previously discussed, however, this is often impossible when the video game system is being run from home, especially if part of the broadcast includes live video from a camera capturing real-world video of a player. Further, in a tournament situation there is the concern that a player at home might modify the game and cheat, as described above. For these reasons, such video game broadcasts on television are frequently arranged with the players and the video game systems gathered at a common location (e.g., a television studio or an arena), where the television production equipment can accept the video feeds from multiple video game systems, and potentially from live cameras.

Although such prior-art video game television channels can provide a very compelling show to the television audience, one that is an experience close to a live sporting event, for example, with the video game players presented as "athletes", both in terms of their actions in the video game world and in terms of their actions in the real world, these video game systems are often limited to situations in which the players are in close physical proximity to one another. And, because television channels are broadcast, each channel can show only one video stream, selected by the television channel's production crew. Because of these limitations, and the high cost of airtime, production equipment and production crews, such television channels typically show only top-rated players participating in top tournaments.

Additionally, a given television channel broadcasting a full-screen image of a video game to the entire television audience shows only one video game at a time. This severely limits the television viewer's choices. For example, a viewer may not be interested in the game(s) being shown at a given time. Another viewer may only be interested in watching the gameplay of a particular player who is not featured by the television channel at that time. In other cases, a viewer may only be interested in watching how an expert player handles a particular level in a game. Still other viewers may wish to control the point of view from which the video game is watched, which differs from the viewpoint chosen by the production crew, etc. In short, a television viewer may have a myriad of preferences in watching video games that are not accommodated by the particular broadcast of a television network, even if several different television channels are available. For all of the aforementioned reasons, prior-art video game television channels have significant limitations in presenting video games to television viewers.

Another drawback of prior-art video game systems and application software systems is that they are complex, and they commonly suffer from errors, crashes and/or unintended and undesired behavior (collectively, "bugs"). Although games and applications typically go through a debugging and tuning process (frequently called "software quality assurance", or SQA) before release, almost invariably, once the game or application is released to a wide audience, bugs crop up unexpectedly in the field. Unfortunately, it is difficult for the software developer to identify and track down many of the bugs after release. It can be hard for software developers even to become aware of a bug. And even when they learn of a bug, there may only be a limited amount of information available to them to identify its cause. For example, a user may call the game developer's customer service line and leave a message stating that, while playing the game, the screen began to flash, then turned solid blue, and the PC froze. That provides the SQA team with very little information useful for tracking down the bug. Some games or applications that are connected online can sometimes provide more information in certain cases. For example, a "watchdog" process can sometimes be used to monitor the game or application for "crashes". The watchdog process can gather statistics about the status of the game or application process (e.g., the state of the stack, memory usage, how far the game or application had progressed, etc.) when it crashes, and then upload that information to the SQA team via the Internet. But in a complex game or application, deciphering such information can take a very long time to accurately determine what the user was doing at the time of the crash. Even then, it may be impossible to determine what sequence of events led to the crash.
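The "watchdog" idea above can be sketched as follows. All names here are hypothetical (the text does not specify any API): the game loop is wrapped so that, on a crash, a small snapshot of the kind the text mentions (stack state, progress, a crude memory metric) is packaged for later upload to an SQA team.

```python
import json
import traceback

def run_with_watchdog(game_loop, game_state):
    """Run `game_loop(game_state)`; on a crash, return a crash report."""
    try:
        game_loop(game_state)
        return None
    except Exception:
        return json.dumps({
            "stack_trace": traceback.format_exc(),   # state of the stack
            "progress": game_state.get("level"),     # how far the game got
            "memory_objects": len(game_state),       # crude usage metric
        })

def buggy_loop(state):
    state["level"] = 3
    raise RuntimeError("screen flashed, then froze")  # simulated crash

report = run_with_watchdog(buggy_loop, {"level": 1})
print(report is not None)  # True: a report exists, yet reconstructing the
                           # sequence of events from it may still be hard
```

Even with such a report in hand, the SQA team sees only the final snapshot, which is exactly the limitation the paragraph above describes.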

Another problem with PCs and game consoles is that they must be serviced, which is a great inconvenience to the consumer. Servicing issues also impact the manufacturer of the PC or game console, since it typically has to send a special box to safely ship the broken PC or console, and then incur the cost of repair if the PC or console is under warranty. The game or application software publisher can also be impacted by the loss of sales (or of online service use) while the PC and/or console is in a state of repair.

Figure 1 illustrates a prior-art video gaming system, such as a Sony Playstation® 3, Microsoft Xbox 360®, Nintendo Wii™, Windows-based personal computer or Apple Macintosh. Each of these systems includes a central processing unit (CPU) for executing program code, typically a graphical processing unit (GPU) for performing advanced graphical operations, and multiple forms of input/output (I/O) for communicating with external devices and users. For simplicity, these components are shown combined as a single unit 100. The prior-art video gaming system of Figure 1 is also shown including an optical media drive 104 (e.g., a high-capacity disc drive), a hard drive 103 for storing video game program code and data, a network connection 105 for playing multiplayer games and for downloading games, patches, demos or other media, random access memory (RAM) 101 for storing program code currently being executed by the CPU/GPU 100, a game controller 106 for receiving input commands from the user during gameplay, and a display device 102 (e.g., an SDTV/HDTV or a computer monitor).

The prior-art system shown in Figure 1 suffers from several limitations. First, the optical drives 104 and hard drives 103 tend to have much slower access speeds compared to that of RAM 101. When working directly through RAM 101, the CPU/GPU 100 can, in practice, process far more polygons per second than is possible when the program code and data are read directly from hard drive 103 or optical drive 104, because RAM 101 generally has much higher bandwidth and does not suffer from the relatively long seek delays of disk mechanisms. But only a limited amount of RAM is provided in these prior-art systems (e.g., 256-512 MB). Therefore, a "Loading..." sequence is frequently required, in which RAM 101 is periodically filled up with the data for the next scene of the video game.

Some systems attempt to overlap the loading of program code concurrently with the gameplay, but this can only be done when there is a known sequence of events (e.g., if a car is driving down a road, the geometry for the approaching buildings by the side of the road can be loaded while the car is driving). For complex and/or rapid scene changes, this type of overlapping usually does not work. For example, in the case where the user is in the midst of a battle and RAM 101 is completely filled with data representing the objects within view at that moment, if the user rapidly moves the view to the left to see objects that are not presently loaded in RAM 101, a discontinuity in the action will result, since there will not be enough time to load the new objects from hard drive 103 or optical media 104 into RAM 101.
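The overlap-loading strategy and its failure mode can be sketched in miniature (capacity and asset names are invented for illustration): assets for the predicted next scene are prefetched into limited RAM while the current scene plays, which works on a predictable path but fails on an abrupt, unpredicted view change.

```python
from collections import OrderedDict

RAM_CAPACITY = 4  # how many assets fit in (limited) RAM at once

def prefetch(ram, predicted_assets):
    """Fill spare RAM slots with assets expected to be needed next."""
    for asset in predicted_assets:
        if asset not in ram and len(ram) < RAM_CAPACITY:
            ram[asset] = True          # stands in for a slow disk->RAM copy

def render(ram, visible_assets):
    """Return the assets that miss RAM and would cause a visible hitch."""
    return [a for a in visible_assets if a not in ram]

ram = OrderedDict({"road": True, "car": True})
prefetch(ram, ["building_1", "building_2"])   # known, predictable path
print(render(ram, ["road", "building_1"]))    # -> []: smooth playback

# Sudden look to the left: assets nobody predicted, and RAM is full.
print(render(ram, ["mountain", "village"]))   # both miss -> discontinuity
```

The second `render` call is the battle scenario from the text: every unpredicted asset misses, and the only remedy is a slow load from disk mid-action.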

Another problem with the system of Figure 1 arises due to limitations of the storage capacity of hard drives 103 and optical media 104. Although disk storage devices can be manufactured with relatively large storage capacity (e.g., 50 gigabytes or more), they still do not provide enough storage capacity for certain scenarios encountered in current video games. For example, as mentioned previously, a soccer video game might allow the user to choose among dozens of teams, players and stadiums throughout the world. For each team, each player and each stadium, a large number of texture maps and environment maps are needed to characterize the 3D surfaces in the world (e.g., each team has a unique jersey, with each jersey requiring a unique texture map).

One technique used to address this latter problem is to pre-compute texture maps and environment maps once they are selected by the user. This may involve a number of computationally intensive processes, including decompressing images, 3D mapping, shading, organizing data structures, etc. As a result, there may be a delay for the user while the video game performs these calculations. One way to reduce this delay, in principle, is to perform all of these computations, including every permutation of team, player roster and stadium, when the game is originally developed. The released version of the game would then include all of this pre-processed data stored on optical media 104, or on one or more servers on the Internet, with just the selected pre-processed data for a given team, player roster and stadium selection downloaded through the Internet to hard drive 103 when the user makes a choice. As a practical matter, however, such pre-loaded data for every permutation possible in gameplay could easily amount to terabytes of data, far exceeding the capacity of today's optical media devices. Furthermore, the data for a given team, player roster and stadium selection could easily amount to hundreds of megabytes of data or more. With a home network connection of, say, 10 Mbps, downloading this data through network connection 105 could take longer than computing the data locally.
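The download-versus-compute trade-off rests on simple arithmetic, sketched below (the 300 MB payload is an assumed figure within the "hundreds of megabytes" range the text gives; the 10 Mbps link is the text's own example):

```python
# Quick arithmetic sketch: downloading pre-computed selection data over
# a home broadband link versus computing it locally.

def download_time_s(data_megabytes, link_mbps):
    """Seconds to transfer `data_megabytes` over a `link_mbps` connection."""
    return data_megabytes * 8 / link_mbps  # 8 bits per byte

# A few hundred megabytes for one team/roster/stadium choice at 10 Mbps:
t = download_time_s(data_megabytes=300, link_mbps=10)
print(t / 60)  # -> 4.0 (minutes), possibly longer than computing locally
```

Several minutes per selection is a noticeable stall, and this is before counting any routing overhead or contention on the home link.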

Thus, with the prior-art game architecture shown in Figure 1, the user is subjected to significant delays between major scene transitions of complex games.

Another problem with prior-art approaches, such as the one shown in Figure 1, is that, over the years, video games have tended to become more advanced and to require more CPU/GPU processing power. Thus, even assuming an unlimited amount of RAM, video game hardware requirements go beyond the peak level of processing power available in these systems. As a result, users are required to upgrade their gaming hardware every few years to keep pace (or play newer games at lower quality levels). One consequence of the trend toward ever more advanced games is that home video game machines are typically economically inefficient, because their cost is usually determined by the requirements of the most demanding game they can support. For example, an Xbox 360 might be used to play a game like "Gears of War", which demands a high-performance CPU and GPU and hundreds of megabytes of RAM, or the same Xbox 360 might be used to play Pac Man, a game from the 1970s that requires only kilobytes of RAM and a very low-performance CPU. Indeed, an Xbox 360 has enough computing power to host many simultaneous Pac Man games at once.

Video game machines are also typically turned off for most of the hours in a week. According to a July 2006 Nielsen Entertainment study of active gamers aged 13 and older, on average, active gamers spend fourteen hours per week playing console video games, or only 12% of the total hours in a week. This means that the average video game console is idle 88% of the time, which is an inefficient use of an expensive resource. This is particularly significant given that video game consoles are often subsidized by the manufacturer to bring down the purchase price (with the expectation that the subsidy will be earned back through royalties on future video game software purchases).

Video game consoles also incur costs associated with almost any consumer electronic device. For example, the electronics and mechanisms of the systems need to be housed in an enclosure. The manufacturer needs to offer a service warranty. The retailer who sells the system needs to collect a margin on the sale of the system and/or on the sale of video game software. All of these factors add to the cost of the video game console, which must either be subsidized by the manufacturer, passed along to the consumer, or both.

In addition, piracy is a major problem for the video game industry. The security mechanisms utilized on virtually every major video gaming system are "cracked" over time, resulting in unauthorized copying of video games. For example, the Xbox 360 security system was cracked in July 2006, and users are now able to download illegal copies online. Games that are downloadable (e.g., games for the PC or the Mac) are particularly vulnerable to piracy. In certain regions of the world where piracy is weakly policed, there is essentially no viable market for standalone video game software, because users can buy pirated copies as readily as legal copies, at a tiny fraction of the price. Also, in many parts of the world, the cost of a game console is such a high percentage of income that, even if piracy were controlled, few people could afford a state-of-the-art gaming system.

In addition, the used game market reduces revenue for the video game industry. When a user has grown tired of a game, he can sell the game to a store, which resells the game to other users. This unauthorized but common practice significantly reduces the revenues of game publishers. Similarly, a reduction in sales on the order of 50% commonly occurs when there is a platform transition every few years. This is because users stop buying games for the older platforms when they know that the newer version of the platform is about to be released (e.g., when the Playstation 3 was about to be released, users stopped buying Playstation 2 games). Combined, the loss of sales and the increased development costs associated with new platforms can have a very significant adverse impact on the profitability of game developers.

New game consoles are also very expensive. The Xbox 360, the Nintendo Wii and the Sony Playstation 3 all retail for hundreds of dollars. A high-powered personal computer gaming system can cost up to $8000. This represents a significant investment for users, especially considering that the hardware becomes obsolete after a few years, and that many systems are purchased for children.

One approach to the aforementioned problems is online gaming, in which the game program code and data are hosted on a server and delivered to client machines on demand as compressed video and audio streamed over a digital broadband network. Some companies, such as G-Cluster in Finland (now a subsidiary of Japan's SOFTBANK Broadmedia), currently provide such services online. Similar gaming services have become available in local networks, such as the LANs of hotels, and are offered by DSL and cable television providers. A major drawback of these systems is the problem of latency, i.e., the time it takes for a signal to travel to and from the game server, which is typically located in an operator's "head end". Fast-action video games (also known as "twitch" video games) require very low latency between the time the user performs an action with the game controller and the time the display screen is updated to show the result of that action. Low latency is needed so that the user has the perception that the game is responding "instantly". Users may be satisfied with different latency intervals depending on the type of game and the skill level of the user. For example, 100 ms of latency may be tolerable for a slow casual game (like backgammon) or a slow-action role-playing game, but in a fast-action game, latency exceeding 70 or 80 ms can degrade the user's performance in the game and is thus unacceptable. For instance, in a game that requires fast reaction time, accuracy drops sharply as latency increases from 50 to 100 ms.
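The tolerances just described can be captured in a tiny lookup, using the example thresholds the text itself gives (the two game categories here are illustrative labels, not a taxonomy from the patent):

```python
# Latency tolerances per game type, per the example figures in the text.
TOLERANCE_MS = {
    "casual": 100,  # e.g., backgammon, slow-action RPGs
    "twitch": 80,   # fast-action games degrade past ~70-80 ms
}

def acceptable(game_type, round_trip_ms):
    """Is the measured round-trip latency tolerable for this game type?"""
    return round_trip_ms <= TOLERANCE_MS[game_type]

print(acceptable("casual", 100))  # True: slow games tolerate 100 ms
print(acceptable("twitch", 100))  # False: twitch play suffers past 80 ms
```

The point is that a single latency budget does not fit all titles: a hosting service must hold round-trip latency under the tightest threshold of the games it intends to stream.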

When a game or application server is installed in a nearby, controlled network environment, or one where the network path to the user is predictable and/or can tolerate bandwidth peaks, it is far easier to control latency, both in terms of maximum latency and in terms of latency consistency (for example, so that the user observes steady motion from digital video streamed through the network). Such a level of control can be achieved between a cable TV network head end and a cable TV subscriber's home, or from a DSL central office to a DSL subscriber's home, or in a commercial office local area network (LAN) environment from a server to a user. Also, it is possible to obtain specially provisioned private point-to-point connections between businesses which have guaranteed bandwidth and latency. But in a game or application system that hosts games in a server center connected to the general Internet and then streams compressed video to the user through a broadband connection, latency is affected by many factors, resulting in severe limitations in the deployment of prior-art systems.

In a typical broadband-connected home, a user may have DSL or cable modem service for broadband access. Such broadband services commonly incur a round-trip transit time of 25 ms (and at times more) for a signal between the user's home and the general Internet. In addition, there are round-trip transit latencies incurred from routing data through the Internet to a server center. The latency of transit through the Internet varies depending on the route the data is given and the delays incurred as it is routed. In addition to routing delays, round-trip latency is also a consequence of the speed of light traveling through the optical fiber that interconnects much of the Internet. For example, for every 1000 miles (1600 km), approximately 22 ms of round-trip transit time is incurred due to the speed of light through the optical fiber and other overhead.
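The speed-of-light figure above can be sanity-checked with a short calculation. This is a sketch under the common assumption that light propagates through glass fiber at roughly two thirds of its vacuum speed; the function name is illustrative, not from the patent.

```python
# Rough round-trip propagation delay through optical fiber.
# Assumption: light travels in fiber at ~2/3 the vacuum speed of light.
C_VACUUM_KM_S = 299_792        # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3           # refractive-index slowdown in glass

def fiber_rtt_ms(one_way_km: float) -> float:
    """Round-trip propagation time in milliseconds, fiber only."""
    speed = C_VACUUM_KM_S * FIBER_FACTOR
    return 2 * one_way_km / speed * 1000

# 1000 miles ~= 1600 km: pure propagation is ~16 ms; routing,
# serialization and other overhead bring the observed figure to ~22 ms.
print(round(fiber_rtt_ms(1600), 1))  # -> 16.0
```

Propagation alone accounts for about 16 of the 22 ms; the remainder is the "other overhead" the text refers to.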

Additional latency can be incurred by the data rate of the stream through the Internet. For example, if a user has DSL service sold as "6 Mbit/s DSL service," in practice the user will probably get less than 5 Mbit/s of downstream throughput at best, and will probably see the connection degrade periodically due to various factors, such as congestion during peak load periods at the digital subscriber line access multiplexer (DSLAM). A similar issue can occur, reducing the data rate of a cable modem used for a connection sold as "6 Mbit/s cable modem service" to far less than the stated rate, if congestion occurs in the local shared coaxial cable looped through the neighborhood, or elsewhere in the cable modem system network. If data packets at a steady rate of 4 Mbit/s are streamed in one direction in User Datagram Protocol (UDP) format from a server center through such connections, and everything works well, the packets will pass through without incurring additional latency. But if there is congestion (or other impediments) and only 3.5 Mbit/s is available to stream data to the user, then in a typical situation either packets will be dropped, resulting in lost data, or packets will queue up at the point of congestion until they can be sent, thereby introducing additional latency. Different points of congestion have different queuing capacities to hold delayed packets, so in some cases packets that cannot make it through the congestion point are immediately dropped. In other cases, several megabits of data are queued up and eventually sent out. But in almost all cases, queues at points of congestion have capacity limits, and once those limits are exceeded, the queues will overflow and packets will be dropped.
Accordingly, to avoid incurring additional latency (or, worse, packet loss), it is necessary to avoid exceeding the data rate capacity of the channel from the game or application server to the user.
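The queueing-then-dropping behavior described above can be illustrated with a toy model. The numbers (4 Mbit/s in, 3.5 Mbit/s out, a 2 Mbit queue) follow the example in the text, but the queue capacity and function name are illustrative assumptions, not figures from the patent.

```python
# Toy model of a congestion point: packets arrive at 4 Mbit/s but the
# bottleneck drains at only 3.5 Mbit/s, with a finite queue. Excess bits
# first accumulate as queueing delay, then get dropped once the queue is full.
def simulate(seconds, in_mbps=4.0, out_mbps=3.5, queue_cap_mbit=2.0):
    queue = 0.0      # megabits waiting at the congestion point
    dropped = 0.0    # megabits lost to queue overflow
    for _ in range(seconds):
        queue += in_mbps - out_mbps          # net inflow per second
        if queue > queue_cap_mbit:
            dropped += queue - queue_cap_mbit
            queue = queue_cap_mbit
    delay_ms = queue / out_mbps * 1000       # latency added by the full queue
    return round(delay_ms), round(dropped, 1)

# After 10 s the queue is full: every packet is ~571 ms late and
# half a megabit per second is being dropped.
print(simulate(10))  # -> (571, 3.0)
```

The model shows why both failure modes in the text occur: a sustained 0.5 Mbit/s excess first manifests as growing latency, then, once the queue limit is reached, as steady packet loss.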

Latency is also incurred by the time required to compress video in the server and decompress video in the client device. Latency is further incurred while a video game running on the server is calculating the next frame to be displayed. Currently available video compression algorithms suffer from either high data rates or high latency. For example, motion JPEG (MJPEG) is an intraframe-only lossy compression algorithm that is characterized by low latency. Each frame of video is compressed independently of every other frame. When a client device receives a frame compressed with the MJPEG algorithm, it can immediately decompress the frame and display it, resulting in very low latency. But because each frame is compressed separately, the algorithm is unable to exploit similarities between successive frames, and as a result, intraframe-only video compression algorithms suffer from very high data rates. For example, 640×480-pixel video at 60 fps (frames per second) compressed with MJPEG may require 40 Mbit/s (megabits per second) or more of data. Such high data rates for such low-resolution video windows would be prohibitively expensive in many broadband applications (and certainly for most consumer Internet-based applications). Furthermore, because each frame is compressed independently, artifacts in the frames that may result from lossy compression are likely to appear in different places in successive frames. This can result in what appears to the viewer as moving visual artifacts when the video is decompressed.

Other compression algorithms, such as MPEG2, H.264, or VC9 from Microsoft, as they are used in prior-art configurations, can achieve high compression ratios, but at the cost of high latency. Such algorithms use interframe as well as intraframe compression. Periodically, these algorithms perform an intraframe-only compression of a frame. Such a frame is known as a key frame (typically called an "I-frame"). Then, these algorithms typically compare the I-frame with both preceding and succeeding frames. Rather than compressing the preceding and succeeding frames independently, the algorithm determines what has changed in the image relative to the I-frame, and then stores those changes as so-called "B-frames" for the changes preceding the I-frame and "P-frames" for the changes following the I-frame. This results in much lower data rates than intraframe-only compression. But it typically comes at the cost of higher latency. An I-frame is usually much larger than a B-frame or a P-frame (often 10 times larger), and as a result, it takes proportionally longer to transmit at a given data rate.

For example, consider a situation where the I-frames are 10 times the size of the B-frames and P-frames, and there are 29 B-frames + 30 P-frames = 59 interframes for every single I-frame, or 60 frames in total for each "Group of Frames" (GOP). So, at 60 fps, there is one 60-frame GOP each second. Suppose the transmission channel has a maximum data rate of 2 Mbit/s. To achieve the highest-quality video in the channel, the compression algorithm would produce a 2 Mbit/s data stream, and given the above ratios, this would result in 2 Mbit/(59+10) = 30,394 bits per interframe and 303,935 bits per I-frame. When the compressed video stream is received by the decompression algorithm, in order for the video to play steadily, each frame needs to be decompressed and displayed at a regular interval (e.g., 60 fps). To achieve this result, if any frame is subject to transmission latency, all of the frames need to be delayed by at least that latency, so the worst-case frame latency defines the latency of every video frame. The I-frames introduce the longest transmission latencies since they are the largest, and an entire I-frame must be received before the I-frame can be decompressed and displayed (or before any interframe dependent on the I-frame can be). Given that the channel data rate is 2 Mbit/s, transmitting an I-frame will take 303,935/2 Mbit = 145 ms.
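The frame-size arithmetic in the example above can be reproduced directly (the "2 Mbit" in the example is evidently treated as 2 × 2^20 bits, since 2 × 2^20 / 69 ≈ 30,394; the variable names here are illustrative):

```python
# Reproducing the GOP bit-budget example: a 60-frame GOP (one I-frame that is
# 10x the size of each of the 59 interframes) carried over a 2 Mbit/s channel.
CHANNEL_BPS = 2 * 2**20          # "2 Mbit" interpreted as 2 * 2^20 bits
I_FRAME_WEIGHT = 10              # an I-frame is ~10x a P- or B-frame
FRAMES_PER_GOP = 60

pb_frame_bits = CHANNEL_BPS / (FRAMES_PER_GOP - 1 + I_FRAME_WEIGHT)
i_frame_bits = I_FRAME_WEIGHT * pb_frame_bits
i_frame_xmit_ms = i_frame_bits / CHANNEL_BPS * 1000

print(round(pb_frame_bits))    # -> 30394  bits per interframe
print(round(i_frame_bits))     # -> 303935 bits per I-frame
print(round(i_frame_xmit_ms))  # -> 145    ms to transmit the I-frame

# If every frame were I-frame-sized, the peak rate would be ~18.2 Mbit/s,
# matching the peak-rate figure discussed below.
print(round(i_frame_bits * 60 / 1e6, 1))  # -> 18.2
```

This makes the key point of the example concrete: even though the average rate fits the 2 Mbit/s channel exactly, the single oversized I-frame alone occupies 145 ms of channel time.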

An interframe video compression system as described above, using a large percentage of the bandwidth of the transmission channel, is subject to long latencies due to the large size of an I-frame relative to the average size of a frame. Or, to put it another way, while prior-art interframe compression algorithms achieve a lower average per-frame data rate than intraframe-only compression algorithms (e.g., 2 Mbit/s vs. 40 Mbit/s), they nonetheless suffer from a high peak per-frame data rate (e.g., 303,935×60 = 18.2 Mbit/s) because of the large I-frames. Bear in mind, though, that the above analysis assumes that the P-frames and B-frames are all much smaller than the I-frames. While this is generally true, it does not hold for frames with high image complexity uncorrelated with the previous frame, high motion, or a scene change. In such situations, the P-frames or B-frames can become as large as I-frames (and if a P-frame or B-frame gets larger than an I-frame, a sophisticated compression algorithm will typically "force" an I-frame and replace the P-frame or B-frame with an I-frame). So, peaks of I-frame-sized data rates can occur at any moment in a digital video stream. Thus, with compressed video, when the average video data rate approaches the data rate capacity of the transmission channel (as is frequently the case, given the high data rate demands of video), the high peak data rates from I-frames or large P-frames or B-frames result in a high frame latency.

Of course, the above discussion only characterizes the compression algorithm latency created by large B-, P-, or I-frames in a GOP. If B-frames are used, the latency will be even higher. The reason is that before a B-frame can be displayed, all of the B-frames after it and the I-frame must be received. So, in a group-of-pictures (GOP) sequence such as BBBBBIPPPPPBBBBBIPPPPP, in which there are 5 B-frames before each I-frame, the first B-frame cannot be displayed by the video decompressor until the subsequent B-frames and the I-frame are received. So, if video is being streamed at 60 fps (i.e., 16.67 ms/frame), 16.67×6 = 100 ms will be required to receive the five B-frames and the I-frame before the first B-frame can be decompressed, regardless of channel bandwidth, and that is with only 5 B-frames. Compressed video sequences with 30 B-frames are quite common. And, at a 2 Mbit/s channel bandwidth, the latency impact caused by the I-frame size adds substantially to the latency impact due to waiting for B-frames to arrive. Thus, on a 2 Mbit/s channel, with a large number of B-frames it is quite easy to exceed 500 ms of latency or more using prior-art video compression technology. If B-frames are not used (at the cost of a lower compression ratio for a given quality level), the B-frame latency is not incurred, but the latency caused by the peak frame sizes described above is still incurred.
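The reordering delay imposed by B-frames can be expressed as a one-line formula (the function name is illustrative): a run of N B-frames forces the decoder to wait for those N frames plus the anchor frame that follows them before the first B-frame can be displayed.

```python
# Decode-order delay introduced by B-frames: before the first B-frame in a
# run can be displayed, every following B-frame plus the anchor I-frame
# (or P-frame) must already have been received.
def b_frame_delay_ms(num_b_frames: int, fps: float = 60.0) -> float:
    frame_time_ms = 1000.0 / fps               # 16.67 ms per frame at 60 fps
    return (num_b_frames + 1) * frame_time_ms  # +1 for the anchor frame

print(round(b_frame_delay_ms(5)))   # -> 100  ms, with only 5 B-frames
print(round(b_frame_delay_ms(30)))  # -> 517  ms, with a 30 B-frame run
```

Note that this delay is independent of channel bandwidth, which is why, combined with the I-frame transmission time, prior-art streams can easily exceed 500 ms of latency.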

The problem is exacerbated by the very nature of many video games. Video compression algorithms utilizing the GOP structure described above have been largely optimized for live video or motion picture material intended for passive viewing. Typically, the camera (whether a real camera, or a virtual camera in the case of computer-generated animation) and the scene are relatively steady, simply because if the camera or scene moves around too jerkily, the video or movie material is (a) typically unpleasant to watch, and (b) if it is being watched, the viewer is usually not closely following the action when the camera lurches suddenly (for example, if the camera is bumped while shooting a child blowing out the candles on a birthday cake, and suddenly swings away from the cake and back, the viewers are typically focused on the child and the cake, and disregard the brief interruption when the camera suddenly moves). In the case of a video interview or a video teleconference, the camera may be held in a fixed position and not move at all, resulting in very few data peaks. But 3D high-action video games are characterized by constant motion (e.g., consider a 3D racing game, where the entire frame is in rapid motion for the duration of the race, or consider first-person shooters, where the virtual camera is constantly moving around jerkily). Such video games can result in frame sequences with large and frequent peaks, where the user may need to see clearly what is happening during those sudden motions. As such, compression artifacts are far less tolerable in 3D high-action video games. Thus, the video output of many video games, by their very nature, produces a compressed video stream with very high and frequent peaks.

Given that users of fast-action video games have little tolerance for high latency, and given all of the above causes of latency, to date there have been limitations on server-hosted video games that stream video over the Internet. Further, users of applications that require a high degree of interactivity suffer similar limitations if the applications are hosted on the general Internet and stream video. Such services require a network configuration in which the hosting servers are set up directly in a head end (in the case of cable broadband) or central office (in the case of digital subscriber lines (DSL)), or within a LAN (or on a specially provisioned private connection) in a commercial setting, so that the route and distance from the client device to the server is controlled to minimize latency, and peaks can be accommodated without incurring latency. LANs (typically rated at 100 Mbit/s-1 Gbit/s) and leased lines with adequate bandwidth can typically support peak bandwidth requirements (for example, a peak bandwidth of 18 Mbit/s is a small fraction of the capacity of a 100 Mbit/s LAN).

Peak bandwidth requirements can also be accommodated by residential broadband infrastructure if special accommodations are made. For example, on a cable TV system, digital video traffic can be given dedicated bandwidth which can handle peaks, such as large I-frames. And, on a DSL system, a higher-speed DSL modem can be provisioned, allowing for high peaks, or a specially provisioned connection can be provided which can handle higher data rates. But conventional cable modem and DSL infrastructure attached to the general Internet has far less tolerance for the peak bandwidth requirements of compressed video. So, online services that host video games or applications in server centers located a long distance from the client devices, and then stream the compressed video output over the Internet through conventional residential broadband connections, suffer from significant latency and peak bandwidth limitations, particularly for games and applications which require very low latency (e.g., first-person shooters and other multi-user interactive action games, or applications requiring fast response time).

Brief description of drawings

The present disclosure will be understood more fully from the detailed description that follows and from the accompanying drawings, which, however, should not be taken to limit the disclosed subject matter to the specific embodiments shown, but are for explanation and understanding only.

Figure 1 depicts a prior-art video gaming system architecture.

Figs. 2a-2b depict a high-level system architecture according to one embodiment.

Figure 3 illustrates the actual, rated, and required data rates for communication between a client and a server.

Fig. 4a depicts a hosting service and a client used according to one embodiment.

Fig. 4b illustrates exemplary latencies associated with communication between the client and the hosting service.

Fig. 4c depicts a client device according to one embodiment.

Fig. 4d depicts a client device according to another embodiment.

Fig. 4e depicts an example block diagram of the client device of Fig. 4c.

Fig. 4f depicts an example block diagram of the client device of Fig. 4d.

Figure 5 depicts an example form of video compression which may be employed according to one embodiment.

Fig. 6a depicts an example form of video compression which may be employed in another embodiment.

Fig. 6b illustrates the peak data rate associated with transmitting a low-activity, low-complexity video sequence.

Fig. 6c illustrates the peak data rate associated with transmitting a high-activity, high-complexity video sequence.

Figs. 7a-7b depict example video compression techniques employed in one embodiment.

Fig. 8 depicts additional example video compression techniques employed in one embodiment.

Figs. 9a-9c depict example techniques employed in one embodiment for reducing the peak data rate.

Figs. 10a-10b depict one embodiment which efficiently packs image fragments within packets.

Figs. 11a-11d depict embodiments which employ forward error correction techniques.

Fig. 12 depicts one embodiment which uses multi-core processors for compression.

Figs. 13a-13b depict geographical positioning of, and communication between, hosting services according to various embodiments.

Fig. 14 illustrates exemplary latencies associated with communication between the client and the hosting service.

Fig. 15 depicts an example hosting service server center architecture.

Fig. 16 depicts an example screenshot of one embodiment of a user interface which includes a plurality of live video windows.

Fig. 17 depicts the user interface of Fig. 16 following the selection of a particular video window.

Fig. 18 depicts the user interface of Fig. 17 following zooming of the particular video window to full screen size.

Fig. 19 depicts example collaborative user video data overlaid on the screen of a multiplayer game.

Fig. 20 depicts an example user page for a game player on the hosting service.

Fig. 21 depicts an example 3D interactive advertisement.

Fig. 22 depicts an example sequence of steps for producing a photoreal image with a textured surface from surface capture of a live performance.

Fig. 23 depicts an example user interface page which allows selection of linear media content.

Fig. 24 is a graph which illustrates the amount of time that elapses before a web page becomes live, as compared with the connection speed.

Description of illustrative embodiments

In the following description, specific details are set forth, such as device types, system configurations, communication methods, etc., in order to provide a thorough understanding of the present disclosure. However, persons skilled in the relevant art will appreciate that these specific details may not be needed to practice the embodiments described.

Figs. 2a-2b provide a high-level architecture of two embodiments in which video games and applications are hosted by a hosting service 210 and accessed by client devices 205 located at user premises 211 ("user premises" meaning any location the user happens to be, including outdoors when using a mobile device) over the Internet 206 (or other public or private network) under a subscription service. The client devices 205 may be general-purpose computers, such as Microsoft Windows-based or Linux-based PCs or Macintosh computers from Apple, Inc., with a wired or wireless connection to the Internet and an internal or external display device 222; or they may be dedicated client devices, such as a television set-top box (with a wired or wireless connection to the Internet) that outputs video and audio to a monitor or TV set 222; or they may be mobile devices, presumably with a wireless connection to the Internet.

Any of these devices may have their own user input devices (e.g., keyboards, buttons, touch screens, track pads, light pens, video capture cameras and/or motion-tracking cameras, etc.), or they may use external input devices 221 (e.g., keyboards, mice, game controllers, light pens, video capture cameras and/or motion-tracking cameras, etc.), connected with wires or wirelessly. As described in greater detail below, the hosting service 210 includes servers of various levels of performance, including those with high-powered CPU/GPU processing capability. During play of a game or use of an application on the hosting service 210, a home or office client device 205 receives keyboard and/or controller input from the user, and then it transmits the controller input through the Internet 206 to the hosting service 210, which executes the gaming program code in response and generates successive frames of video output (a sequence of video images) for the game or application software (e.g., if the user presses a button which would direct a character on the screen to move to the right, the game program would then create a sequence of video images showing the character moving to the right). This sequence of video images is then compressed using a low-latency video compressor, and the hosting service 210 then transmits the low-latency video stream through the Internet 206. The home or office client device then decodes the compressed video stream and renders the decompressed video images on a monitor or TV. Consequently, the computing and graphical hardware requirements of the client device 205 are significantly reduced. The client 205 only needs the processing power to forward the keyboard/controller input to the Internet 206 and decode and decompress a compressed video stream received from the Internet 206, which virtually any personal computer today is capable of doing in software on its CPU (e.g., a dual-core Intel Corporation Core Duo CPU running at approximately 2 GHz is capable of decompressing 1280×720 HDTV encoded using compressors such as H.264 and Windows Media VC9). And, in the case of any of the client devices, dedicated chips can perform video decompression for such standards in real time at far lower cost and with far less power consumption than a general-purpose CPU, such as would be required for a modern PC. Notably, to perform the function of forwarding controller input and decompressing video, home client devices 205 do not require any specialized graphics processing units (GPUs), optical drives, or hard drives, such as in the prior-art video game system depicted in Figure 1.

As games and application software become more complex and more photo-realistic, they will require higher-performance CPUs and GPUs, more RAM, and larger and faster disk drives, and the computing power of the hosting service 210 may be continually upgraded; but the end user will not be required to update the home or office client platform 205, since its processing requirements will remain constant for a given display resolution and frame rate with a given decompression algorithm. Thus, the hardware limitations and compatibility issues seen today do not exist in the system depicted in Figs. 2a-2b.

Further, because the game and application software executes only in servers in the hosting service 210, there is never a copy of the game or application software (either in the form of optical media or as downloaded software) in the user's home or office ("office" as used herein, unless otherwise qualified, shall include any non-residential setting not associated with a permanent residence, including schoolrooms, for example). This significantly mitigates the likelihood of a game or application software being illegally copied (pirated), and also mitigates the likelihood of a valuable database that might be used by a game or application software being pirated. Indeed, for games requiring specialized servers (e.g., requiring very expensive, large, or noisy equipment) or application software that is impractical to run in a home or office, even if a pirated copy of the game or application software were obtained, it would not be operable in the home or office.

In one embodiment, the hosting service 210 provides software development tools to the game or application software developers 220 (which refers generally to software development companies, game or movie studios, or game or application software publishers) who design video games, so that they may design games capable of being executed on the hosting service 210. Such tools allow developers to exploit features of the hosting service that would not normally be available in a standalone PC or game console (e.g., fast access to very large databases of complex geometry ("geometry," unless otherwise qualified, is used herein to refer to polygons, textures, rigging, lighting, behaviors, and other components and parameters that define 3D datasets)).

Different business models are possible under this architecture. Under one model, the hosting service 210 collects a subscription fee from the end user and pays a royalty to the developers 220, as shown in Fig. 2a. In an alternate implementation, shown in Fig. 2b, the developers 220 collect a subscription fee directly from the user and pay the hosting service 210 for hosting the content, application, or game. These underlying principles are not limited to any particular business model for providing online gaming or application hosting.

Characteristics of compressed video

As discussed previously, one significant problem in providing video game or application software services online is latency. A latency of 70-80 ms (from the point at which an input device is actuated by the user to the point at which a response is displayed on the display screen) is at the upper limit for games and applications requiring fast response time. However, this is very difficult to achieve in the context of the architecture depicted in Figs. 2a and 2b due to a number of practical and physical constraints.

As indicated in Figure 3, when a user subscribes to an Internet service, the connection is typically rated at a nominal maximum data rate 301 to the user's home or office. Depending on the provider's policies and routing equipment capabilities, that maximum data rate may be more or less strictly enforced, but as a rule the actual available data rate is lower for one or more of many different reasons. For example, there may be too much network traffic at the DSL central office or on the local cable modem loop, or there may be noise on the cabling causing dropped packets, or the provider may establish a maximum number of bits per month per user. Currently, the maximum downstream data rate for cable and DSL services typically ranges from several hundred kilobits per second (kbit/s) to 30 Mbit/s. Cellular services are typically limited to hundreds of kbit/s of downstream data. However, the speed of broadband services and the number of users subscribing to broadband services will increase dramatically over time. Currently, some analysts estimate that 33% of US broadband subscribers have a downstream data rate of 2 Mbit/s or more. For example, some analysts predict that by 2010, over 85% of US broadband subscribers will have a data rate of 2 Mbit/s or more.

As indicated in Figure 3, the actual maximum available data rate 302 may vary over time. Thus, in a low-latency online game or application software context, it is sometimes difficult to predict the actual available data rate for a particular video stream. If the data rate 303 required to sustain a given level of quality at a given number of frames per second (fps) at a given resolution (e.g., 640×480 at 60 fps) for a given amount of scene complexity and motion rises above the actual maximum available data rate 302 (as indicated by the peak in Figure 3), several problems can occur. For example, some Internet services will simply drop packets, resulting in lost data and distorted/lost images on the user's screen. Other services will temporarily buffer (i.e., queue up) the additional packets and provide them to the client at the available data rate, resulting in increased latency, an unacceptable result for many video games and applications. Finally, some Internet service providers will interpret the increased data rate as a malicious attack, such as a denial-of-service attack (a well-known technique used by hackers to disable network connections), and will cut off the user's Internet connection for a specified period of time. Thus, the embodiments described herein take steps to ensure that the data rate required for a video game does not exceed the maximum available data rate.
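One way to enforce the constraint just described (and the behavior summarized in the abstract, where a frame that would exceed the detected maximum data rate is skipped and the client simply continues to display the previous frame) can be sketched as a per-frame bit-budget check. This is a minimal illustrative sketch, not the patent's implementation; the class and method names are assumptions.

```python
# Hedged sketch of a rate-capping policy: if transmitting the next
# compressed frame would push the stream past the detected maximum
# channel rate, the frame is skipped and the client keeps showing
# the previously received frame.
class RateCappedSender:
    def __init__(self, max_bps: float, fps: float = 60.0):
        self.budget_per_frame = max_bps / fps  # bits allowed per frame slot
        self.carryover = 0.0                   # unused budget from prior slots

    def should_send(self, frame_bits: float) -> bool:
        available = self.budget_per_frame + self.carryover
        if frame_bits <= available:
            self.carryover = available - frame_bits
            return True                        # transmit this frame
        self.carryover = available             # skip: client repeats last frame
        return False

sender = RateCappedSender(max_bps=2_000_000)
sizes = [30_000, 30_000, 300_000, 30_000]      # an oversized I-frame mid-stream
print([sender.should_send(s) for s in sizes])  # -> [True, True, False, True]
```

In this toy run, the 300,000-bit frame would blow the 2 Mbit/s budget, so it is dropped rather than queued, avoiding the added latency and packet loss described above.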

The architecture of the hosting service

Fig. 4a depicts the architecture of the hosting service 210 according to one embodiment. The hosting service 210 may either be located in a single server center, or may be distributed across a plurality of server centers (to provide lower-latency connections to users who have lower-latency paths to certain server centers than to others, to provide load balancing among users, and to provide redundancy in the event one or more server centers fail). The hosting service 210 may eventually include hundreds of thousands or even millions of servers 402, serving a very large user base. A hosting service control system 401 provides overall control for the hosting service 210 and directs the routers, servers, video compression systems, billing and accounting systems, etc. In one embodiment, the hosting service control system 401 is implemented on a distributed Linux-based processing system connected to RAID arrays used to store the databases for user information, server information, and system statistics. In the foregoing descriptions, the various actions implemented by the hosting service 210, unless attributed to other specific systems, are initiated and controlled by the hosting service control system 401.

The hosting service 210 includes a number of servers 402, such as those currently available from Intel, IBM, Hewlett Packard, and others. Alternatively, the servers 402 may be assembled in a custom configuration of components, or may eventually be integrated so that an entire server is implemented as a single-chip integrated circuit. Although this diagram shows a small number of servers 402 for the sake of illustration, in an actual deployment there may be as few as one server 402, or as many as millions of servers 402, or more. The servers 402 may all be configured in the same way (as an example of some of the configuration parameters: with the same CPU type and performance; with or without a GPU, and if with a GPU, with the same GPU type and performance; with the same number of CPUs and GPUs; with the same amount and type/speed of RAM; and with the same RAM configuration), or various subsets of the servers 402 may have the same configuration (e.g., 25% of the servers configured a certain way, 50% a different way, and 25% yet another way), or every server 402 may be different.

In one embodiment, the servers 402 are diskless, i.e., rather than having their own local mass storage (be it optical or magnetic storage, or semiconductor-based storage such as Flash memory, or other mass storage serving a similar function), each server has access to shared mass storage through a fast backplane or network connection. In one embodiment, this fast connection is a storage area network (SAN) 403 connected to a series of redundant arrays of independent disks (RAID) 405, with the connections between devices implemented using Gigabit Ethernet. As is well known to those skilled in the art, a SAN 403 may be used to combine many RAID arrays 405 together, resulting in extremely high bandwidth, approaching or potentially exceeding the bandwidth available from the RAM used in current gaming consoles and PCs. And, while RAID arrays based on rotating media, such as magnetic media, frequently have significant seek-time access latency, RAID arrays based on semiconductor storage can be implemented with much lower access latency. In another configuration, some or all of the servers 402 provide some or all of their own mass storage locally. For example, a server 402 may store frequently-accessed information, such as its operating system and a copy of a video game or application, on low-latency local Flash-based storage, but it may utilize the SAN to access RAID arrays 405 based on rotating media with higher seek latency, for less-frequent access to large databases of geometry or game state information.

Additionally, in one embodiment, the hosting service 210 employs low-latency video compression logic 404, which is described in detail below. The video compression logic 404 may be implemented in software, hardware, or any combination thereof (certain embodiments of which are described below). The video compression logic 404 includes logic for compressing audio as well as visual material.

In operation, while playing a video game or using an application at the user premises 211 through a keyboard, mouse, game controller, or other input device 421, control signal logic 413 on the client 415 transmits control signals 406a-406b (typically in the form of UDP packets), representing the button presses (and other types of user inputs) actuated by the user, to the hosting service 210. The control signals from a given user are routed to the appropriate server 402 (or servers, if multiple servers are responsive to the user's input device). As illustrated in FIG. 4a, control signals 406a may be routed to the servers 402 via the SAN. Alternatively or in addition, control signals 406b may be routed directly to the servers 402 over the hosting service network (e.g., an Ethernet LAN). Regardless of how they are transmitted, the server or servers execute the game or application software in response to the control signals 406a-406b. Although not illustrated in FIG. 4a, various networking components such as firewall(s) and/or gateway(s) may handle incoming and outgoing traffic at the edge of the hosting service 210 (e.g., between the hosting service 210 and the Internet 410) and/or at the edge of the user premises 211, between the Internet 410 and the home or office client 415. The graphics and audio output of the executed game or application software, i.e., new sequences of video images, are provided to the low-latency video compression logic 404, which compresses the sequences of video images according to low-latency video compression techniques, such as those described herein, and transmits a compressed video stream, typically with compressed or uncompressed audio, back to the client 415 through the Internet 410 (or, as described below, through an optimized high-speed network service that bypasses the general Internet).
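As a minimal sketch of how such a control-signal packet might be formed before being sent as a UDP datagram, consider the following. The field layout (user identifier, sequence number, button bitmask) is an illustrative assumption for this sketch, not the actual wire format used by the system described here.

```python
import struct

def pack_control_signal(user_id: int, seq: int, button_mask: int) -> bytes:
    """Build a hypothetical control-signal payload: a 4-byte user id
    (identifying which user the control signals come from), a 4-byte
    sequence number, and a 2-byte button bitmask, in network byte order."""
    return struct.pack("!IIH", user_id, seq, button_mask)

pkt = pack_control_signal(user_id=42, seq=7, button_mask=0b1010)
# The payload would then be sent as a single UDP datagram toward the
# hosting service, e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(pkt, (hosting_service_addr, port))
```

Keeping each input event in one small datagram matches the low-latency goal: there is no connection setup or retransmission delay, at the cost of tolerating occasional loss.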
Low-latency video decompression logic 412 on the client 415 then decompresses the compressed video and audio streams, renders the decompressed video stream, and typically plays the decompressed audio stream, on a display 422. Alternatively, the audio can be played on speakers separate from the display 422, or not at all. Note that, although the input device 421 and the display 422 are illustrated in FIG. 4a and FIG. 2b as free-standing devices, they may be integrated within client devices such as portable computers or mobile devices.

The home or office client 415 (previously described as home or office client 205 in FIG. 2a and FIG. 2b) may be a very inexpensive, low-power device with very limited computing or graphics performance, and it may have very limited local mass storage, or none at all. In contrast, each server 402, coupled to the SAN 403 and multiple RAID arrays 405, can be an exceptionally high-performance computing system, and indeed, if multiple servers are used cooperatively in a parallel-processing configuration, there is almost no limit to the amount of computing and graphics performance that can be brought to bear. And, because of the low-latency video compression 404 and low-latency video decompression 412, that computing power of the servers 402 is perceptually provided to the user. When the user presses a button on the input device 421, the image on the display 422 is updated in response to the button press with no meaningful perceptual delay, as if the game or application software were running locally. Thus, with a home or office client 415 that is a very low-performance computer, or just an inexpensive integrated circuit implementing the low-latency video decompression and the control signal logic 413, a user is provided with effectively arbitrary computing power from a remote location that appears to be available locally. This gives users the ability to run the most advanced, processor-intensive (typically new) video games and the highest-performance applications.

FIG. 4c shows a very basic and inexpensive home or office client device 465. This device is an embodiment of the home or office client 415 of FIG. 4a and FIG. 4b. It is approximately 5 centimeters long. It has an Ethernet jack 462 that interfaces with an Ethernet cable with Power over Ethernet (PoE), from which it derives its power and its connectivity to the Internet. It is able to perform network address translation (NAT) within a network supporting NAT. In an office environment, many new Ethernet switches have PoE and bring PoE directly to an Ethernet jack in the office. In such a situation, all that is required is an Ethernet cable from the wall jack to the client 465. If the available Ethernet connection does not carry power (e.g., in a home with a DSL or cable modem, but no PoE), inexpensive wall "bricks" (i.e., power supplies) are available that accept an unpowered Ethernet cable and output Ethernet with PoE.

The client 465 contains the control signal logic 413 (of FIG. 4a), which is coupled to a Bluetooth wireless interface that interfaces with Bluetooth input devices 479, such as a keyboard, mouse, game controller and/or microphone and/or headset. Also, one embodiment of the client 465 is capable of outputting video at 120 fps, coupled with a display 468 able to support 120 fps video and to signal (typically through infrared) a pair of shuttered glasses 466 to alternately shutter one eye, then the other, with each successive frame. The effect perceived by the user is that of a stereoscopic 3D image that "jumps out" of the display screen. One such display 468 that supports this operation is the Samsung HL-T5076S. Because the video stream for each eye is separate, in one embodiment two independent video streams are compressed by the hosting service 210, the frames are interleaved in time, and the compressed frames are decompressed as two independent decompression processes within the client 465.
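The time-interleaving of the two eye streams described above can be sketched as follows. This is a toy illustration of the interleave/de-interleave pattern only; the labels and the even/odd slot assignment are assumptions, not the service's actual framing.

```python
# Frames arrive time-interleaved: left-eye frames on even slots,
# right-eye frames on odd slots (an assumed convention for this sketch).
incoming = ["L0", "R0", "L1", "R1", "L2", "R2"]

# The client splits the interleaved sequence into two independent
# streams, each fed to its own decompression process; the shutter
# glasses alternate eyes in the same rhythm.
left_stream = incoming[0::2]
right_stream = incoming[1::2]
```

Each eye therefore sees an effective 60 fps stream when the combined stream runs at 120 fps.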

The client 465 also contains low-latency video decompression logic 412, which decompresses the incoming video and audio and outputs them through an HDMI (High-Definition Multimedia Interface) connector 463, which plugs into an SDTV (standard definition television) or HDTV (high definition television) 468, providing the TV with video and audio, or into a monitor 468 that supports HDMI. If the user's monitor 468 does not support HDMI, an HDMI-to-DVI (Digital Visual Interface) adapter can be used, but the audio will be lost. Under the HDMI standard, the display capabilities 464 (e.g., supported resolutions, frame rates) are communicated from the display 468, and this information is then passed back through the Internet connection 462 to the hosting service 210, so that it can stream compressed video in a format suitable for the display.

FIG. 4d shows a home or office client device 475, which is identical to the home or office client device 465 shown in FIG. 4c, except that it has more external interfaces. Also, the client 475 can accept either PoE for power, or it can run off an external power supply adapter (not shown) that plugs into a wall outlet. Using the client 475's USB input, a video camera 477 provides compressed video to the client 475, which is uploaded by the client 475 to the hosting service 210 for the uses described below. A low-latency compressor utilizing the compression techniques described below is built into the camera 477.

In addition to having an Ethernet connector for its Internet connection, the client 475 also has an 802.11g wireless interface to the Internet. Both interfaces are able to use NAT within a network that supports NAT.

Also, in addition to having an HDMI connector to output video and audio, the client 475 has a Dual Link DVI-I connector, which includes analog output (and, with a standard adapter cable, will provide VGA output). It also has analog outputs for composite video and S-video.

For audio, the client 475 has left/right RCA jacks for analog stereo audio, and for digital audio it has a TOSLINK optical output.

In addition to the Bluetooth wireless interface to input devices 479, it also has a USB jack for interfacing with input devices.

FIG. 4e shows one embodiment of the internal architecture of the client 465. All or some of the devices shown in the diagram can be implemented in a field-programmable gate array, in a custom ASIC, or in several discrete devices, either custom-designed or off-the-shelf.

Ethernet with PoE 497 attaches to the Ethernet interface 481. Power 499 is derived from the Ethernet with PoE 497 and is connected to the rest of the devices in the client 465. Bus 480 is a common bus for communication among the devices.

A control CPU 483 (almost any small CPU, such as a MIPS R4000-series CPU running at 100 MHz with embedded RAM, is adequate), running a small client control application from Flash memory 476, implements the protocol stack for the network (i.e., Ethernet), also communicates with the hosting service 210, and configures all of the devices in the client 465. It also handles the interfaces with the input devices 469 and sends packets back to the hosting service 210 with user controller data, protected by forward error correction if necessary. Also, the control CPU 483 monitors the packet traffic (e.g., noting whether packets are lost or delayed, and timestamping their arrival). This information is sent back to the hosting service 210 so that it can constantly monitor the network connection and adjust what it sends accordingly. The Flash memory 476 is initially loaded at the time of manufacture with the control program for the control CPU 483, and also with a serial number that is unique to the particular client 465 unit. This serial number allows the hosting service 210 to uniquely identify the client 465 unit.
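A software sketch of the kind of packet bookkeeping the control CPU 483 might perform follows. The use of per-packet sequence numbers, and the specific loss/reorder heuristic, are assumptions made for illustration; the text only states that lost or delayed packets are noted and arrival times recorded.

```python
class PacketMonitor:
    """Hypothetical tracker for lost and reordered packets, keyed by an
    assumed per-packet sequence number; arrival times are recorded so the
    statistics can be reported back to the hosting service."""

    def __init__(self):
        self.expected_seq = 0
        self.lost = 0
        self.reordered = 0
        self.arrivals = []  # (seq, arrival_time) pairs

    def on_packet(self, seq: int, arrival_time: float):
        self.arrivals.append((seq, arrival_time))
        if seq == self.expected_seq:
            self.expected_seq += 1
        elif seq > self.expected_seq:
            # A gap: the intervening packets are presumed lost
            # (a simplification; they might merely be late).
            self.lost += seq - self.expected_seq
            self.expected_seq = seq + 1
        else:
            # A packet with an already-passed sequence number arrived late.
            self.reordered += 1

mon = PacketMonitor()
for seq, t in [(0, 0.000), (1, 0.016), (3, 0.050), (2, 0.051)]:
    mon.on_packet(seq, t)
# After this sequence: one presumed loss (seq 2 was missing when seq 3
# arrived) and one reordered late arrival (seq 2 itself).
```

Reports derived from such counters would let the hosting service adapt its sending rate to the observed connection quality.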

The Bluetooth interface 484 communicates with the input devices 469 wirelessly through an antenna inside the client 465.

The video decompressor 486 is a low-latency video decompressor configured to implement the video decompression described herein. A large number of video decompression devices exist, either available off-the-shelf or as intellectual property (IP) designs that can be integrated into an FPGA or a custom ASIC. One company offering IP for an H.264 decoder is Ocean Logic of Manly, NSW, Australia. The advantage of using an IP design is that the compression techniques used in this description do not have to conform to compression standards. Some standard decompressors are flexible enough to accommodate the compression techniques described herein, but some may not be. But with an IP design, there is complete flexibility to redesign the decompressor as required.

The output of the video decompressor is coupled to the video output subsystem 487, which couples the video to the video output of the HDMI interface 490.

The audio decompression subsystem 488 is implemented either using a standard audio decompressor that is commercially available, or it can be implemented as IP, or the audio decompression can be implemented within the control processor 483, which could, for example, implement the Vorbis audio decompressor.

The device implementing the audio decompression is coupled to the audio output subsystem 489, which couples the audio to the audio output of the HDMI interface 490.

FIG. 4f shows one embodiment of the internal architecture of the client 475. As can be seen, the architecture is identical to that of the client 465 except for the additional interfaces and optional external DC power from a power supply adapter that plugs into a wall outlet and, if used, replaces the power that would otherwise come from the Ethernet PoE 497. Functionality that is in common with the client 465 will not be repeated below, but the additional functionality is described as follows.

The CPU 483 communicates with, and configures, the additional devices.

The WiFi subsystem 482 provides wireless Internet access as an alternative to Ethernet 497, through its antenna. WiFi subsystems are commercially available from a wide range of manufacturers, including Atheros Communications of Santa Clara, Calif.

The USB subsystem 485 provides an alternative to Bluetooth for wired USB input devices 479. USB subsystems are quite standard and readily available for FPGAs and ASICs, and they are frequently built into off-the-shelf devices performing other functions, such as video decompression.

The video output subsystem 487 produces a wider range of video outputs than within the client 465. In addition to providing HDMI 490 video output, it provides DVI-I 491, S-video 492, and composite video 493. Also, when the DVI-I 491 interface is used for digital video, the display capabilities 464 are passed back from the display to the control CPU 483, so that it can notify the hosting service 210 of the capabilities of the display 478. All of the interfaces provided by the video output subsystem 487 are quite standard interfaces and readily available in many forms.

The audio output subsystem 489 outputs audio digitally through digital interface 494 (S/PDIF and/or TOSLINK) and in analog form through stereo analog interface 495.

Round-trip latency analysis

Of course, for the benefits of the previous section to be realized, the round-trip latency between a user's action using the input device 421 and seeing the consequence of that action on the display 420 should be no more than 70-80 ms. This latency must take into account all of the factors in the path from the input device 421 at the user premises 211 to the hosting service 210 and back again to the user premises 211, to the display 422. FIG. 4b illustrates the various components and networks through which signals must travel, and above these components and networks is a timeline that lists, in order, illustrative latencies that can be expected in a practical implementation. Note that FIG. 4b is simplified so that only the critical-path routing is shown. Other routing of data used for other features of the system is described below. Double-headed arrows (e.g., arrow 453) indicate round-trip latencies, single-headed arrows (e.g., arrow 457) indicate one-way latencies, and "~" denotes an approximate measure. It should be pointed out that there will be real-world situations where these latencies cannot be achieved, but in a large number of cases in the US, using DSL and cable modem connections to the user premises 211, these latencies can be achieved under the circumstances described in the next section. Also note that, while cellular wireless connectivity to the Internet will certainly work in the illustrated system, most current US cellular data systems (e.g., EVDO) incur very high latencies and would not be able to achieve the latencies shown in FIG. 4b. However, these underlying principles may be implemented on future cellular technologies that are capable of this level of latency.

Starting with the input device 421 at the user premises 211: once the user actuates the input device 421, a user control signal is sent to the client 415 (which may be a standalone device, such as a set-top box, or it may be software or hardware running in another device, such as a PC or a mobile device), and is packetized (in UDP format in one embodiment), and the packet is given a destination address to reach the hosting service 210. The packet will also contain information indicating which user the control signals come from. The control signal packet(s) are then forwarded through the firewall/router/NAT (network address translation) device 443 to the WAN interface 442. The WAN interface 442 is the interface device provided to the user premises 211 by the user's ISP (Internet service provider). The WAN interface 442 may be a cable or DSL modem, a WiMax transceiver, a fiber transceiver, a cellular data interface, an Internet Protocol-over-powerline interface, or any other of the many interfaces to the Internet. Further, the firewall/NAT/router device 443 (and potentially the WAN interface 442) may be integrated into the client 415. An example of this would be a mobile phone that includes software implementing the functionality of the home or office client 415, as well as the means to route and connect to the Internet wirelessly through some standard (e.g., 802.11g).

The WAN interface 442 then routes the control signals to what is referred to herein as the "point of presence" 441 of the user's Internet service provider (ISP), which is the facility providing the interface between the WAN transport connected to the user premises 211 and the general Internet or private networks. The point of presence's characteristics vary depending on the nature of the Internet service provided. For DSL, it typically will be a telephone company central office where a DSLAM is located. For cable modems, it typically will be a cable multi-system operator (MSO) head end. For cellular systems, it typically will be a control room associated with the cellular tower. But whatever the nature of the point of presence, it then routes the control signal packet(s) to the general Internet 410. The control signal packet(s) are then routed to the WAN interface 441 of the hosting service 210, most likely through a fiber transceiver interface. The WAN 441 then routes the control signal packets to routing logic 409 (which may be implemented in many different ways, including Ethernet switches and routing servers), which evaluates the user's address and routes the control signal(s) to the correct server 402 for the given user.

The server 402 then takes the control signals as input for the game or application software executing on the server 402 and uses them to compute the next frame of the game or application. Once the next frame is generated, the video and audio are output from the server 402 to the video compressor 404. The video and audio may be output from the server 402 to the compressor 404 through various means. To start with, the compressor 404 may be built into the server 402, so the compression may be implemented locally within the server 402. Or, the video and/or audio may be output in packetized form through a network connection, such as an Ethernet connection, to a network that is either a private network between the server 402 and the video compressor 404, or a shared network, such as the SAN 403. Or, the video may be output through a video output connector of the server 402, such as a VGA or DVI connector, and then captured by the video compressor 404. Also, the audio may be output from the server 402 either as digital audio (e.g., through an S/PDIF or TOSLINK connector) or as analog audio, which is digitized and encoded by audio compression logic within the video compressor 404.

Once the video compressor 404 has captured the video frame, and the audio generated during that frame time, from the server 402, it compresses the video and audio using the techniques described below. Once the video and audio are compressed, they are packetized with an address to send them back to the user's client 415, and they are routed to the WAN interface 441, which then routes the video and audio packets through the general Internet 410, which then routes the video and audio packets to the user's ISP point of presence 441, which routes the video and audio packets to the WAN interface 442 at the user premises, which routes the video and audio packets to the firewall/NAT/router device 443, which then routes the video and audio packets to the client 415.

The client 415 decompresses the video and audio, and then displays the video on the display 422 (or the client's built-in display) and sends the audio to the display 422, or to separate amplifier/speakers, or to an amplifier/speakers built into the client.

For the user to perceive the process just described as being free of lag, the round-trip delay needs to be less than 70 or 80 ms. Some of the latencies on the described round-trip path are under the control of the hosting service 210 and/or the user, while others are not. Nonetheless, based on analysis and testing of a large number of real-world scenarios, the following are approximate measurements.

The one-way latency 451 to send the control signals is typically less than 1 ms, and round-trip routing 452 through the user premises is typically accomplished in approximately 1 ms using readily available consumer-grade firewall/NAT/router switches over Ethernet. User ISP connections vary widely in their round-trip latency 453, but with DSL and cable modem providers there is typically a delay of between 10 and 25 ms. The round-trip latency on the general Internet 410 can vary greatly depending on how the traffic is routed and whether there are any failures along the route (these issues are discussed below), but typically the general Internet provides fairly optimal routes, and the latency is largely determined by the speed of light through optical fiber, given the distance to the destination. As also discussed below, the authors have established approximately 1000 miles (1600 km) as a rough upper limit on the distance at which a hosting service 210 is expected to be located away from the user premises 211. At 1000 miles (1600 km), i.e., 2000 miles (3200 km) round trip, the practical transit time for a signal through the Internet is approximately 22 ms. The WAN interface 441 of the hosting service 210 is typically a commercial-grade, high-speed fiber interface with negligible latency. Thus, the general Internet latency 454 is typically between 1 and 10 ms. One-way routing latency 455 of less than 1 ms through the hosting service 210 can be achieved. The server 402 will typically compute a new frame for a game or application in less than one frame time (which at 60 fps is 16.7 ms), so 16 ms is a reasonable maximum one-way latency 456 to use. In an optimized hardware implementation of the video and audio compression algorithms described herein, the compression 457 can be completed in 1 ms.
In less optimized versions, the compression may take as much as 6 ms (of course, even less optimized versions could take longer, but such implementations would impact the overall round-trip latency and would require other latencies to be shorter (e.g., the allowable distance through the general Internet could be reduced) to maintain the 70-80 ms latency target). The round-trip latencies of the Internet 454, the user ISP 453, and the user premises routing 452 have already been considered, so what remains is the video decompression 458 latency, which, depending on whether the video decompression 458 is implemented in dedicated hardware or in software on the client device 415 (e.g., a PC or a mobile device), can vary with the size of the display and the performance of the decompressing CPU. Typically, the decompression 458 takes between 1 and 8 ms.

Thus, by adding together all of the worst-case latencies seen in practice, we can determine the worst-case round-trip latency that a user of the system shown in FIG. 4a can expect to experience. It is: 1+1+25+22+1+16+6+8=80 ms. And, indeed, in practice (with the caveats discussed below), this is roughly the round-trip latency seen using prototype versions of the system shown in FIG. 4a, using off-the-shelf Windows PCs as client devices and home DSL and cable modem connections within the United States. Of course, scenarios better than the worst case can result in much lower latencies, but they cannot be relied upon in developing a widely-used commercial service.
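The worst-case budget above can be tallied as a simple sum; the labels mirror the reference numerals of FIG. 4b, and the values are the worst-case figures from the preceding paragraphs.

```python
# Worst-case latencies from the text, in milliseconds. One-way and
# round-trip components are mixed, exactly as in the text's own sum.
latency_budget_ms = {
    "control signal send (451)": 1,
    "user premises routing (452)": 1,
    "user ISP round trip (453)": 25,
    "general Internet, 1000 mi round trip (454)": 22,
    "hosting service routing (455)": 1,
    "frame computation on server (456)": 16,
    "video compression, less optimized (457)": 6,
    "video decompression (458)": 8,
}
total_ms = sum(latency_budget_ms.values())
# total_ms == 80, i.e., at the upper edge of the 70-80 ms target.
```

The sum makes clear why each component's budget is tight: a single component exceeding its allocation pushes the total past the perceptual threshold.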

Achieving the latencies listed in FIG. 4b over the general Internet requires the video compressor 404 and the video decompressor 412 of FIG. 4a in the client 415 to generate a packet stream with very particular characteristics, such that the packet sequence generated through the entire path from the hosting service 210 to the display 422 is not subject to delays or excessive packet loss and, in particular, consistently falls within the constraints of the bandwidth available to the user over the user's Internet connection through the WAN interface 442 and the firewall/router/NAT 443. Further, the video compressor must create a packet stream that is sufficiently robust that it can tolerate the inevitable packet loss and packet reordering that occur in normal Internet and network transmissions.

Low-latency video compression

To accomplish the foregoing goals, in one embodiment a new approach to video compression is taken that reduces the latency and the peak bandwidth requirements for transmitting video. Prior to describing these embodiments, an analysis of current video compression techniques will be provided with respect to FIG. 5 and FIGS. 6a-6b. Of course, these techniques may be employed in accordance with the underlying principles if the user is provided with sufficient bandwidth to handle the data rates they require. Note that audio compression is not addressed herein, other than to state that it is implemented simultaneously and in synchrony with the video compression. Prior art audio compression techniques exist that satisfy the requirements of this system.

FIG. 5 illustrates one particular prior art technique for compressing video, in which each individual video frame 501-503 is compressed by compression logic 520 using a particular compression algorithm to generate a sequence of compressed frames 511-513. One embodiment of this technique is "motion JPEG", in which each frame is compressed according to the still-image compression algorithm of the Joint Photographic Experts Group (JPEG), based upon the discrete cosine transform (DCT). Various other types of compression algorithms may be employed, however, while still complying with these underlying principles (e.g., wavelet-based compression algorithms such as JPEG-2000).

One problem with this type of compression is that it reduces the data rate of each frame, but it does not exploit similarities between successive frames to reduce the data rate of the overall video stream. For example, as illustrated in FIG. 5, assuming a frame size of 640×480×24 bits/pixel = 640×480×24/8/1024 = 900 Kilobytes/frame (KB/frame), for a given image quality, motion JPEG may only be able to compress the stream by a factor of 10, resulting in a data stream of 90 KB/frame. At 60 frames/second, this would require a channel bandwidth of 90 KB × 8 bits × 60 frames/second = 42.2 Mbps, which would be far too high a bandwidth for nearly all home Internet connections in the US today, and far too high a bandwidth for many office Internet connections. Indeed, given that it would require a constant data stream at such a high bandwidth, and it would only be serving one user, even in an office LAN environment it would consume a large fraction of a 100 Mbps Ethernet LAN's bandwidth and heavily burden the Ethernet switches supporting the LAN. Thus, motion-video compression of this kind is inefficient when compared to other compression techniques (such as those described below). Furthermore, single-frame compression algorithms like JPEG and JPEG-2000, which use lossy compression, produce compression artifacts that may not be noticeable in still images (e.g., an artifact within dense foliage in a scene may not appear to be an artifact, because the eye does not know exactly how the dense foliage should look). But once the scene is in motion, an artifact may stand out, because the eye detects that the artifact changes from frame to frame, despite the fact that the artifact lies in an area of the scene where it would be invisible in a still image. This results in the perception of "background noise" in the sequence of frames, similar in appearance to the "snow" noise visible during marginal analog TV reception. Of course, this type of compression may still be used in certain embodiments described herein, but, generally speaking, for a given perceptual quality, avoiding background noise in the scene requires a high data rate (i.e., a low compression ratio).
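The bandwidth arithmetic above can be reproduced directly. This sketch uses 1024-based (binary) units throughout, which is how the text's figures of 900 KB/frame and 42.2 Mbps work out.

```python
width, height, bpp, fps = 640, 480, 24, 60
mjpeg_ratio = 10  # assumed 10:1 motion-JPEG compression, per the text

# Raw frame size: 640 * 480 pixels at 24 bits/pixel, in kilobytes.
uncompressed_kb_per_frame = width * height * bpp / 8 / 1024   # 900.0 KB/frame

# After 10:1 compression.
compressed_kb_per_frame = uncompressed_kb_per_frame / mjpeg_ratio  # 90.0 KB/frame

# Required channel bandwidth at 60 fps, in (binary) megabits per second.
channel_mbps = compressed_kb_per_frame * 1024 * 8 * fps / 2**20
```

Even at 10:1, the stream needs roughly 42 Mbps of sustained bandwidth per user, which is the core inefficiency of intraframe-only compression.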

Other types of compression, such as H.264, Windows Media VC9, MPEG2 and MPEG4, are all more efficient at compressing a video stream because they exploit the similarities between successive frames. These techniques all rely upon the same general approach to compressing video. Thus, although the H.264 standard will be described, the same general principles apply to various other compression algorithms. A large number of H.264 compressors and decompressors are available, including the x264 open source software library for compressing H.264 and the FFmpeg open source software library for decompressing H.264.

Figures 6a and 6b depict an illustrative prior-art compression technique, in which a sequence of uncompressed video frames 501-503, 559-561 is compressed by compression logic 620 into a sequence of "I-frames" 611, 671, "P-frames" 612-613, and "B-frames" 670. The vertical axis in Figure 6a generally indicates the resulting size of each of the encoded frames (although the frames are not drawn to scale). As described above, video coding using I-frames, B-frames and P-frames is well understood by those skilled in the art. Briefly, I-frame 611 is a DCT-based compression of the complete uncompressed frame 501 (similar to a compressed JPEG image, as described above). P-frames 612-613 are generally much smaller than I-frame 611 because they exploit data from the previous I-frame or P-frame; that is, they contain data indicating the changes relative to the previous I-frame or P-frame. B-frames 670 are similar to P-frames, except that B-frames use the frame in the following reference frame, as well as potentially the frame in the preceding reference frame.

For the following discussion, it will be assumed that the desired frame rate is 60 frames/second, that each I-frame is approximately 160 Kbits, that the average P-frame and B-frame is 16 Kbits, and that a new I-frame is generated every second. With this set of parameters, the average data rate would be: 160 Kbits + 16 Kbits×59 = 1.1 Mbit/s. This data rate is well within the maximum data rate of many present-day high-speed Internet connections to homes and offices. This technique also tends to mitigate the background-noise problem of intra-frame-only encoding, because the P- and B-frames track differences between frames, so compression artifacts do not tend to appear and disappear from frame to frame, thereby reducing the background-noise problem described above.
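The average-rate figure above can be checked with a short calculation (an illustrative sketch of the stated assumptions, not part of the patent's method):

```python
# One second of video under the stated assumptions: one 160-Kbit
# I-frame followed by 59 P/B-frames of 16 Kbits each, at 60 fps.
I_FRAME_KBITS = 160
PB_FRAME_KBITS = 16
FPS = 60

kbits_per_second = I_FRAME_KBITS + PB_FRAME_KBITS * (FPS - 1)
print(kbits_per_second)                  # 1104 Kbits/s
print(round(kbits_per_second / 1000, 1)) # ~1.1 Mbit/s
```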

One problem associated with the above types of compression is that, although the average data rate is relatively low (e.g., 1.1 Mbit/s), the transmission of a single I-frame may take several frame periods. For example, using prior-art techniques, a 2.2 Mbit/s network connection (e.g., DSL or cable modem with a 2.2 Mbit/s peak of available maximum data rate 302 in Figure 3a) would typically be adequate to stream video at 1.1 Mbit/s with a 160-Kbit I-frame every 60 frames. This could be accomplished by having the decompressor queue up 1 second of video before decompressing the video. In 1 second, 1.1 Mbits of data are transmitted, which is easily accommodated by the 2.2 Mbit/s available maximum data rate, even allowing for the possibility that the available data rate might dip by as much as 50%. Unfortunately, this prior-art approach results in 1 second of latency for the video because of the 1-second video buffer at the receiver. Such a delay is adequate for many prior-art applications (e.g., linear playback of video), but it is far too long a latency for fast-action video games, which cannot tolerate more than 70-80 ms of latency.

Even if an attempt were made to eliminate the 1-second video buffer, it would still not reduce the latency sufficiently for fast-action video games. For example, the use of B-frames, as described previously, inevitably requires the reception of all of the B-frames preceding an I-frame as well as the I-frame itself. Assuming that the 59 non-I-frames are roughly divided between P- and B-frames, at least 29 B-frames and an I-frame must be received before any B-frame can be displayed. Accordingly, regardless of the available channel bandwidth, this inevitably entails a delay of 29+1=30 frames of 1/60 second duration each, or 500 ms of latency. Clearly, that is far too long.

Accordingly, another approach would be to eliminate B-frames and use only I- and P-frames. (One consequence of this is that the data rate will increase for a given quality level, but for consistency in this example we shall continue to assume that each I-frame is 160 Kbits and the average P-frame is 16 Kbits, and thus the data rate is still 1.1 Mbit/s.) This approach eliminates the unavoidable latency introduced by B-frames, since the decoding of each P-frame depends only on the previously received frame. The problem that remains with this approach is that an I-frame is so much larger than an average P-frame that, on a low-bandwidth channel, as is typical in most homes and in many offices, the transmission of the I-frame adds substantial latency. This is depicted in Figure 6b. The video stream data rate 624 is below the available maximum data rate except for the I-frames, where the peak data rate 623 required for the I-frames far exceeds the available maximum data rate 622 (and even the rated maximum data rate 621). The data rate required by the P-frames is less than the available maximum data rate. Even if the available maximum data rate were to remain steady at its 2.2 Mbit/s peak, transmitting an I-frame would take 160 Kbits/2.2 Mbit/s = 71 ms, and if the available maximum data rate 622 dropped by 50% (to 1.1 Mbit/s), transmitting an I-frame would take 142 ms. Accordingly, the latency for transmitting an I-frame falls somewhere between 71 and 142 ms. This latency is additive to the latencies identified in Figure 4b, which in the worst case add up to 70 ms, so the total round-trip latency, from the point when the user actuates input device 421 until the image appears on display 422, would be 141-222 ms, which is far too high. And if the available maximum data rate drops below 2.2 Mbit/s, the latency increases further.
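The I-frame transmission times above can be reproduced with a short calculation (an illustrative sketch; treating 1 Mbit as 1024 Kbits is an assumption made here so the result matches the 71/142 ms figures in the text):

```python
# Time to push a 160-Kbit I-frame through the channel rates discussed
# above (1 Mbit assumed to be 1024 Kbits, matching the text's figures).
I_FRAME_KBITS = 160

def transmit_ms(rate_mbits):
    rate_kbits_per_s = rate_mbits * 1024
    return I_FRAME_KBITS / rate_kbits_per_s * 1000

print(round(transmit_ms(2.2)))  # ~71 ms at the full 2.2 Mbit/s rate
print(round(transmit_ms(1.1)))  # ~142 ms if the rate drops by 50%
```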

Note also that, in general, there are severe consequences to "jamming" an ISP's pipe with a peak data rate 623 that far exceeds the available data rate 622. The equipment of different ISPs behaves differently, but the following behaviors are fairly common among DSL and cable modem ISPs when receiving packets at a data rate much higher than the available data rate 622: (a) delaying the packets by queueing them up (introducing latency), (b) dropping some or all of the packets, (c) disabling the connection for a period of time (most likely because the ISP is concerned it is a malicious attack, such as a denial-of-service attack). Thus, transmitting a packet stream at full data rate with characteristics such as those shown in Figure 6b is not a viable option. The peaks 623 could be queued up at the hosting service 210 and sent at a data rate below the available maximum data rate, but that introduces the unacceptable latency described in the preceding paragraph.

In addition, the video stream data rate sequence 624 shown in Figure 6b is a very "tame" video stream data rate sequence, of the sort one would expect to result from compressing the video of a video sequence that does not change very much and has very little motion (e.g., as would be typical of video teleconferencing, where the cameras are in fixed positions and move very little, and the objects in the scene, such as seated people talking, exhibit little motion).

The video stream data rate sequence 634 shown in Figure 6c is a sequence typical of what one would expect from video with much more action, such as might be generated in a motion picture, a video game, or some application software. Note that, in addition to the I-frame peaks 633, there are also P-frame peaks such as 635 and 636, which are quite large and in many cases exceed the available maximum data rate. Although these P-frame peaks are not quite as large as the I-frame peaks, they are nonetheless far too large to be carried by the channel at full data rate, and, as with the I-frame peaks, these P-frame peaks must be transmitted slowly (thereby increasing latency).

A high-bandwidth channel (e.g., a 100 Mbit/s LAN, or a private 100 Mbit/s connection) in a network may be able to tolerate large peaks, such as I-frame peaks 633 or P-frame peaks 636, and in principle low latency could be maintained. But such networks are frequently shared among many users (e.g., in an office environment), and such "peaky" data would affect the performance of the LAN, particularly if the network traffic were routed to a private shared connection (e.g., from a remote data center to an office). To begin with, bear in mind that this example is of a relatively low-resolution video stream of 640×480 pixels at 60 frames/second. HDTV streams of 1920×1080 at 60 fps are readily handled by modern computers and displays, and 2560×1440-resolution displays at 60 fps are increasingly available commercially (e.g., Apple, Inc.'s 30" display). A high-action video sequence at 1920×1080 at 60 fps may require 4.5 Mbit/s using H.264 compression for a reasonable quality level. If we assume that the I-frames peak at 10× the nominal data rate, that would result in 45 Mbit/s peaks, as well as smaller, but still considerable, P-frame peaks. If several users are receiving video streams over the same 100 Mbit/s network (e.g., a private network connection between an office and a data center), it is easy to see how the peaks from several users' video streams could happen to align, overwhelming the network's bandwidth, and potentially overwhelming the bandwidth of the backbone switches supporting the users on the network. Even in the case of a Gigabit Ethernet network, if enough peaks from enough users align at once, it can overwhelm the network or the network switches.
And once 2560×1440-resolution video becomes more commonplace, the average video stream data rate may be 9.5 Mbit/s, perhaps resulting in a 95 Mbit/s peak data rate. Needless to say, a 100 Mbit/s connection between a data center and an office (which today is an exceptionally fast connection) would be completely swamped by the peak traffic from a single user. Thus, even though LANs and private network connections can tolerate peaky streaming video, streaming video with high peaks is undesirable and may require special planning and accommodation by an office's IT department.

Of course, for standard linear video applications these issues are not a problem, because the data rate is "smoothed" at the point of transmission, the data for each frame stays below the available maximum data rate 622, and a buffer in the client stores a sequence of I-, P- and B-frames before they are decompressed. Thus, the data rate on the network remains close to the average data rate of the video stream. Unfortunately, this introduces latency, even if B-frames are not used, which is unacceptable for low-latency applications such as video games and applications requiring fast response time.

One prior-art solution for mitigating video streams that have high peaks is to use a technique often referred to as "Constant Bit Rate" (CBR) encoding. Although the term CBR would seem to imply that all frames are compressed to have the same bit rate (i.e., size), what it usually refers to is a compression paradigm in which a maximum bit rate across a certain number of frames (in our case, 1 frame) is allowed. For example, in the case of Figure 6c, if a CBR constraint were applied to the encoding that limited the bit rate to, say, 70% of the rated maximum data rate 621, then the compression algorithm would limit the compression of each of the frames such that any frame that would normally be compressed using more than 70% of the rated maximum data rate 621 would be compressed with fewer bits. The result is that frames that would normally require more bits to maintain a given quality level are "starved" of bits, and the image quality of those frames is worse than the image quality of other frames that do not require more bits than 70% of the rated maximum data rate 621. This approach can produce acceptable results for certain types of compressed video where (a) little motion or scene change is expected and (b) the users can accept periodic quality degradation. A good example of an application well suited to CBR is video teleconferencing, since there are few peaks, and if the quality briefly degrades (for example, if the camera pans, resulting in significant scene motion and large peaks, during which there may not be enough bits for high-quality image compression, which would result in degraded image quality), it is acceptable to most users. Unfortunately, CBR is poorly suited to many other applications that have scenes of high complexity or a great deal of motion, and/or that require a reasonably constant level of quality.

The low-latency compression logic 404 employed in one embodiment uses several different techniques to address the range of problems associated with streaming low-latency compressed video while maintaining a high level of quality. First, the low-latency compression logic 404 generates only I-frames and P-frames, thereby alleviating the need to wait several frame periods to decode each B-frame. In addition, as illustrated in Figure 7a, in one embodiment the low-latency compression logic 404 subdivides each uncompressed frame 701-760 into a series of "tiles" and individually encodes each tile as either an I-frame or a P-frame. The group of compressed I-frames and P-frames is referred to in this description as "R-frames" 711-770. In the specific example shown in Figure 7a, each uncompressed frame is subdivided into a 4×4 matrix of 16 tiles. However, these underlying principles are not limited to any particular subdivision scheme.

In one embodiment, the low-latency compression logic 404 divides the video frame into a number of tiles and encodes (i.e., compresses) one tile from each frame as an I-frame (i.e., the tile is compressed as if it were a separate video frame 1/16th the size of the full image, and the compression used for this "mini" frame is I-frame compression), and the remaining tiles as P-frames (i.e., the compression used for each "mini" 1/16th frame is P-frame compression). Tiles compressed as I-frames and as P-frames shall be referred to as "I-tiles" and "P-tiles", respectively. With each successive video frame, the tile to be encoded as an I-tile is changed. Thus, in a given frame period, only one tile of the tiles in the video frame is an I-tile, and the remainder of the tiles are P-tiles. For example, in Figure 7a, tile 0 of uncompressed frame 701 is encoded as I-tile I0 and the remaining tiles 1-15 are encoded as P-tiles P1 through P15 to produce R-frame 711. In the next uncompressed video frame 702, tile 1 is encoded as I-tile I1 and the remaining tiles 0 and 2 through 15 are encoded as P-tiles P0 and P2 through P15 to produce R-frame 712. Thus, the I-tiles and P-tiles for the tiles are progressively interleaved in time over successive frames. The process continues until an R-frame 770 is generated with the last tile in the matrix encoded as an I-tile (i.e., I15). The process then starts over, generating another R-frame such as frame 711 (i.e., encoding an I-tile for tile 0), etc. Although not illustrated in Figure 7a, in one embodiment the first R-frame of the video sequence of R-frames contains only I-tiles (i.e., so that subsequent P-tiles have reference image data from which to compute motion).
Alternatively, in one embodiment, the startup sequence uses the same I-tile pattern as normal, but does not include P-tiles for those tiles that have not yet been encoded with an I-tile. In other words, certain tiles are not encoded with any data until the first I-tile arrives, thereby avoiding startup peaks in the video stream data rate 934 of Figure 9a, as explained in further detail below. Moreover, as described below, various different sizes and shapes can be employed for the tiles while still complying with these underlying principles.
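The rotating I-tile schedule just described can be sketched as follows (a hypothetical helper, not the patent's implementation; the function name and representation are assumptions):

```python
# Sketch of the rotating I-tile schedule for the 4x4 grid of 16 tiles:
# for frame n, tile (n mod 16) is encoded as an I-tile and the other
# 15 as P-tiles. (In one embodiment the very first frame would instead
# be all I-tiles, to seed the P-tile references.)
TILES = 16

def tile_types(frame_index):
    i_tile = frame_index % TILES  # the single I-tile rotates each frame
    return ["I" if t == i_tile else "P" for t in range(TILES)]

print(tile_types(0))   # frame 0: I-tile at position 0, P-tiles elsewhere
print(tile_types(16))  # the cycle repeats after 16 frames
```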

The video decompression logic 412 running on the client 415 decompresses each tile as if it were a separate video sequence of small I- and P-frames, and then renders each tile to the frame buffer driving display 422. For example, I0 and P0 from R-frames 711-770 are used to decompress and render tile 0 of the video image. Similarly, I1 and P1 from R-frames 711-770 are used to reconstruct tile 1, and so on. As mentioned above, decompression of I-frames and P-frames is well known in the art, and decompression of I-tiles and P-tiles can be accomplished by having multiple instances of a video decompressor running on the client 415. Although it might seem that the multiplication of processes increases the computational burden on the client 415, it actually does not, because the tiles themselves are proportionately smaller relative to the number of additional processes, so the number of pixels displayed is the same as if there were one process using conventional full-size I- and P-frames.

This R-frame technique significantly mitigates the bandwidth peaks normally associated with I-frames, illustrated in Figures 6b and 6c, because any given frame is mostly made up of P-tiles, which are typically smaller than I-frames. For example, assuming again that a typical I-frame is 160 Kbits, then the I-tile of each of the frames illustrated in Figure 7a would be roughly 1/16th of that, or 10 Kbits. Similarly, assuming a typical P-frame is 16 Kbits, then the P-tiles for each of the tiles illustrated in Figure 7a would be roughly 1 Kbit. The end result is an R-frame of roughly 10 Kbits + 15×1 Kbit = 25 Kbits. So each 60-frame sequence would be 25 Kbits×60 = 1.5 Mbit/s. Thus, at 60 frames/second, this would require a channel capable of sustaining a bandwidth of 1.5 Mbit/s, but with far smaller peaks, because the I-tiles are distributed across the entire 60-frame interval.
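The R-frame sizing above can be sketched as follows (an illustrative calculation of the stated assumptions):

```python
# One 10-Kbit I-tile plus fifteen 1-Kbit P-tiles per R-frame, streamed
# at 60 frames/second, per the assumptions in the text.
I_TILE_KBITS, P_TILE_KBITS, TILES, FPS = 10, 1, 16, 60

r_frame_kbits = I_TILE_KBITS + (TILES - 1) * P_TILE_KBITS
print(r_frame_kbits)               # 25 Kbits per R-frame
print(r_frame_kbits * FPS / 1000)  # 1.5 Mbit/s sustained
```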

Note that in the previous examples, with the same assumed data rates for I-frames and P-frames, the average data rate was 1.1 Mbit/s. The reason is that in the previous examples a new I-frame was introduced only once every 60 frame periods, whereas in this example the 16 tiles that make up an I-frame cycle through in 16 frame periods, and as such the equivalent of an I-frame is introduced every 16 frame periods, resulting in a slightly higher average data rate. In practice, though, introducing more frequent I-frames does not increase the data rate linearly. This is because a P-frame (or a P-tile) primarily encodes the difference from the preceding frame to the following one. So, if the preceding frame is quite similar to the following frame, the P-frame will be very small; if the preceding frame is very different from the following frame, the P-frame will be very large. But because a P-frame is largely derived from the previous frame, rather than from the actual frame, the resulting encoded frame may contain more errors (e.g., visual artifacts) than an I-frame with an equivalent number of bits. And when one P-frame follows another, an accumulation of errors can occur that gets worse over a long sequence of P-frames. Now, a sophisticated video compressor will detect the fact that the image quality is degrading after a sequence of P-frames and, if necessary, it will allocate more bits to subsequent P-frames to bring the quality back up or, if that is the most efficient course of action, it will replace a P-frame with an I-frame. So, when long sequences of P-frames are used (e.g., 59 P-frames, as in the prior examples above), particularly when the scene has a great deal of complexity and/or motion, typically more bits are needed for the P-frames the further they are removed from an I-frame.

Or, to look at P-frames from the opposite point of view, P-frames that closely follow an I-frame tend to require fewer bits than P-frames that are further removed from an I-frame. So, in the example shown in Figure 7a, no P-tile is more than 15 frames removed from the preceding I-tile, whereas in the prior example a P-frame could be 59 frames removed from an I-frame. Thus, with more frequent I-frames, the P-frames are smaller. The exact relative sizes will, of course, vary depending on the nature of the video stream, but in the Figure 7a example, if an I-tile is 10 Kbits, the P-tiles might average only 0.75 Kbits in size, resulting in 10 Kbits + 15×0.75 Kbits = 21.25 Kbits, or, at 60 frames/second, a data rate of 21.25 Kbits×60 = 1.3 Mbit/s, roughly 16% higher than the 1.1 Mbit/s data rate of a stream with an I-frame followed by 59 P-frames. Once again, the relative results of these two video compression approaches will vary depending on the video sequence, but generally speaking, we have found empirically that using R-frames requires about 20% more bits for a given quality level than using I/P-frame sequences. But, of course, R-frames dramatically reduce the peaks, which makes the video sequences usable with far less latency than I/P-frame sequences.
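The comparison above can be sketched as follows (an illustrative calculation; note that the exact ratio comes out near 15.5%, which the text rounds to "roughly 16%"):

```python
# R-frame stream with P-tiles averaging 0.75 Kbits, versus the
# conventional I/P stream (one 160-Kbit I-frame plus 59 16-Kbit
# P-frames per second), both at 60 fps.
r_frame_kbits = 10 + 15 * 0.75     # 21.25 Kbits per R-frame
r_rate_kbits = r_frame_kbits * 60  # Kbits per second
ip_rate_kbits = 160 + 59 * 16      # conventional 60-frame GOP

print(r_rate_kbits)  # 1275.0 Kbits/s, i.e. about 1.3 Mbit/s
print(round((r_rate_kbits / ip_rate_kbits - 1) * 100, 1))  # ~15.5% higher
```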

R-frames can be configured in a variety of different ways, depending upon the nature of the video sequence, the reliability of the channel, and the available data rate. In an alternative embodiment, a different number of tiles than 16 in a 4×4 configuration is used. For example, 2 tiles may be used in a 2×1 or 1×2 configuration, 4 tiles may be used in a 2×2, 4×1 or 1×4 configuration, 6 tiles may be used in a 3×2, 2×3, 6×1 or 1×6 configuration, or 8 tiles may be used in a 4×2 (as shown in Figure 7b), 2×4, 8×1 or 1×8 configuration. Note that the tiles do not need to be square, nor must the video frame be square, or even rectangular. The tiles can take whatever shape best suits the video stream and the application used.

In another embodiment, the cycling of the I- and P-tiles is not locked to the number of tiles. For example, in an 8-tile 4×2 configuration, a 16-cycle sequence can still be used, as illustrated in Figure 7b. Sequential uncompressed frames 721, 722, 723 are each divided into 8 tiles, 0-7, and each tile is compressed individually. From R-frame 731, only tile 0 is compressed as an I-tile, and the remaining tiles are compressed as P-tiles. For the following R-frame 732, all of the 8 tiles are compressed as P-tiles, and then for the following R-frame 733, tile 1 is compressed as an I-tile and the other tiles are all compressed as P-tiles. And so the sequencing continues for 16 frames, with an I-tile generated only every other frame, so the last I-tile is generated for tile 7 during the 15th frame period (not shown in Figure 7b), and during the 16th frame period R-frame 780 is compressed using all P-tiles. Then the sequence begins again, with tile 0 compressed as an I-tile and the other tiles compressed as P-tiles. As in the prior embodiment, the very first frame of the entire video sequence would typically be all I-tiles, to provide a reference for the P-tiles from that point forward. The cycling of I-tiles and P-tiles need not even be an even multiple of the number of tiles. For example, with 8 tiles, each frame with an I-tile can be followed by 2 frames with all P-tiles before another I-tile is used. In yet another embodiment, certain tiles may be sequenced with I-tiles more often than other tiles if, for example, certain areas of the screen are known to have more motion, requiring more frequent I-tiles, while other areas are more static (e.g., showing a score for a game), requiring less frequent I-tiles.
Moreover, although each frame is illustrated in Figures 7a-7b with a single I-tile, multiple I-tiles may be encoded in a single frame (depending on the bandwidth of the transmission channel). Conversely, certain frames or frame sequences may be transmitted with no I-tiles (i.e., only P-tiles).
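The staggered 8-tile, 16-frame cycle of Figure 7b can be sketched as follows (a hypothetical helper, not the patent's implementation; the function name and representation are assumptions):

```python
# Sketch of the Fig. 7b schedule: 8 tiles, a 16-frame cycle, an I-tile
# only on every other frame, and all-P frames in between.
TILES, CYCLE = 8, 16

def staggered_tile_types(frame_index):
    pos = frame_index % CYCLE
    if pos % 2 == 1:       # odd positions: all-P frames (e.g. R-frame 732)
        return ["P"] * TILES
    i_tile = pos // 2      # even positions: I-tile at tile 0, 1, ..., 7
    return ["I" if t == i_tile else "P" for t in range(TILES)]

print(staggered_tile_types(0))   # I-tile at tile 0 (R-frame 731)
print(staggered_tile_types(1))   # all P-tiles (R-frame 732)
print(staggered_tile_types(14))  # I-tile at tile 7, last in the cycle
```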

The reason the approaches described in the preceding paragraphs work well is that, while it would seem that not distributing the I-tiles across every single frame would result in large peaks, the behavior of the system is not that simple. Since each tile is compressed separately from the other tiles, as the tiles get smaller, the encoding of each tile can become less efficient, because the compressor for a given tile is unable to exploit similar image features and similar motion from the other tiles. Thus, dividing the screen into 16 tiles will generally result in a less efficient encoding than dividing the screen into 8 tiles. But, if the screen is divided into 8 tiles and it causes the data of a full I-frame to be introduced every 8 frames instead of every 16 frames, that results in a much higher overall data rate. So, by introducing a full I-frame every 16 frames instead of every 8 frames, the overall data rate is reduced. Also, by using 8 larger tiles instead of 16 smaller tiles, the overall data rate is reduced, which also mitigates to some degree the data peaks caused by the larger tiles.

In another embodiment, the low-latency video compression logic 404 of Figures 7a and 7b controls the allocation of bits to the various tiles of the R-frames, either pre-configured through settings, based on known characteristics of the video sequence to be compressed, or automatically, based on an ongoing analysis of the image quality in each tile. For example, in some racing video games, the front of the player's car (which is relatively motionless in the scene) takes up a large part of the lower half of the screen, whereas the upper half of the screen is entirely filled with the oncoming roadway, buildings and scenery, which is almost always in motion. If the compression logic 404 allocates an equal number of bits to each tile, then the tiles in the bottom half of the screen (tiles 4-7) in uncompressed frame 721 of Figure 7b will generally be compressed with higher quality than the tiles in the upper half of the screen (tiles 0-3) in uncompressed frame 721 of Figure 7b. If this particular game, or this particular scene of the game, is known to have such characteristics, then the operators of the hosting service 210 can configure the compression logic 404 to allocate more bits to the tiles at the top of the screen than to the tiles at the bottom of the screen. Alternatively, the compression logic 404 can evaluate the quality of the compression of the tiles after the frames are compressed (using one or more of many compression quality metrics, such as peak signal-to-noise ratio (PSNR)), and if it determines that over a certain time window certain tiles are consistently producing better quality results, then it gradually allocates more bits to the tiles whose quality is degrading, until the various tiles reach a similar level of quality. In an alternative embodiment, the compression logic 404 allocates bits to achieve higher quality in a particular tile or group of tiles.
For example, providing higher quality in the center of the screen than at the edges may yield a better overall perceptual appearance.
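The quality-feedback idea above can be sketched as a simple rebalancing loop (a hypothetical illustration, not the patent's implementation; the function, the step factor, and the renormalization scheme are all assumptions):

```python
# Shift the per-tile bit budget toward tiles whose windowed PSNR lags
# the average, while keeping the total budget fixed.
def rebalance_bits(budgets_kbits, psnr_by_tile, step=0.05):
    """budgets_kbits and psnr_by_tile are parallel per-tile lists."""
    avg_psnr = sum(psnr_by_tile) / len(psnr_by_tile)
    total = sum(budgets_kbits)
    # Lagging tiles get proportionally more bits, leading tiles fewer.
    raw = [b * (1 + step) if q < avg_psnr else b * (1 - step)
           for b, q in zip(budgets_kbits, psnr_by_tile)]
    scale = total / sum(raw)  # renormalize to the fixed total budget
    return [r * scale for r in raw]

# Tile 0 has noticeably lower PSNR, so it gains bits:
budgets = rebalance_bits([1.0, 1.0, 1.0, 1.0], [30.0, 42.0, 41.0, 43.0])
print(budgets[0] > max(budgets[1:]))  # True
```

Iterating such a loop over a time window gradually equalizes tile quality, as the text describes.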

In one embodiment, to improve the resolution of certain regions of the video stream, the video compression logic 404 uses smaller tiles to encode areas of the video stream with relatively more scene complexity and/or motion than areas of the video stream with relatively less scene complexity and/or motion. For example, as illustrated in Figure 8, smaller tiles are employed around a moving character 805 in one area of one R-frame 811 (potentially followed by a series of R-frames with the same tile sizes (not shown)). Then, when the character 805 moves to a new area of the image, smaller tiles are used around this new area within another R-frame 812, as illustrated. As mentioned above, various different sizes and shapes can be employed as "tiles" while still complying with these underlying principles.

While the cyclic I/P-tiles described above substantially reduce the peaks in the data rate of a video stream, they do not eliminate the peaks entirely, particularly in the case of rapidly-changing or highly complex video imagery, such as occurs with motion pictures, video games, and some application software. For example, during a sudden scene transition, a complex frame may be followed by another complex frame that is completely different. Even though several I-tiles may have preceded the scene transition by only a few frame periods, they do not help in this situation, because the new frame's material bears no relation to the previous I-tiles. In such a situation (and in other situations where, even though not everything changes, much of the image changes), the video compressor 404 will determine that many, if not all, of the P-tiles are more efficiently coded as I-tiles, and what results is a very large peak in the data rate for that frame.

As discussed earlier, it is simply the case that, for most consumer-grade Internet connections (and many office connections), it is just not feasible to "jam" data that exceeds the available maximum data rate, shown as 622 in Figure 6c, along with the rated maximum data rate 621. Note that the rated maximum data rate 621 (e.g., "6 Mbps DSL") is essentially a sales number for users considering the purchase of an Internet connection, but generally it does not guarantee a level of performance. For the purposes of this application, it is irrelevant, since the sole concern is the available maximum data rate 622 at the time the video is streamed through the connection. Consequently, in Figures 9a and 9c, as we describe a solution to the peaking problem, the rated maximum data rate is omitted from the graph, and only the available maximum data rate 922 is shown. The video stream data rate must not exceed the available maximum data rate 922.

To address this, the first thing the video compressor 404 does is determine a peak data rate 941, which is a data rate the channel is able to handle steadily. This rate can be determined by a number of techniques. One such technique is to gradually send an increasingly higher data rate test stream from the hosting service 210 to the client 415 in Figures 4a and 4b, and have the client provide feedback to the hosting service as to the level of packet loss and latency. When the packet loss and/or latency begin to rise sharply, that is an indication that the available maximum data rate 922 is being approached. Afterwards, the hosting service 210 can gradually reduce the data rate of the test stream until the client 415 reports that, for a reasonable period of time, the test stream has been received with an acceptable level of packet loss and the latency is nearly minimal. This establishes the peak maximum data rate 941, which will then be used as the peak data rate for streaming video. Over time, the peak data rate 941 will fluctuate (e.g., if another user in the household starts to make heavy use of the Internet connection), and the client 415 will constantly monitor it for rising latency or packet loss, indicating that the available maximum data rate 922 is dropping below the previously established peak data rate 941, and if so, it will lower the peak data rate 941.
Similarly, if over time the client 415 finds that the packet loss and latency remain at optimal levels, it can request that the video compressor slowly increase the data rate to see whether the available maximum data rate has increased (e.g., whether another user in the household has stopped using the Internet connection heavily), and again wait until packet loss and/or increased latency indicate that the available maximum data rate 922 has been exceeded; again a lower level for the peak data rate 941 can then be found, though perhaps one higher than the level before the test for an increased data rate. Thus, by using this technique (and others like it), a peak data rate 941 can be found and periodically adjusted as needed. The peak data rate 941 establishes the maximum data rate that the video compressor 404 may use to stream video to the user. The logic for determining the peak data rate may be implemented at the user premises 211 and/or at the hosting service 210. At the user premises 211, the client device 415 performs the calculations to determine the peak data rate and transmits this information back to the hosting service 210; at the hosting service 210, a server 402 performs the calculations to determine the peak data rate based on statistics received from the client 415 (e.g., packet loss, latency, maximum data rate, etc.).
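The probing procedure described above (ramp the test stream up until the channel shows distress, then back off until loss and latency are acceptable again) can be sketched as follows. This is a minimal illustration, not the specification's implementation: the step sizes, thresholds, and the `send_test_stream()` helper (which returns measured packet loss and latency for a trial rate) are all assumptions.

```python
# Hypothetical sketch of the peak-data-rate probe. All names and values
# (probe step sizes, loss/latency thresholds) are illustrative assumptions.

def estimate_peak_rate(send_test_stream, initial_kbps=1000, step_kbps=500,
                       loss_limit=0.01, latency_limit_ms=50):
    """Ramp the test-stream rate up until loss/latency spike, then back
    off until the channel is stable; return the stable (peak) rate."""
    rate = initial_kbps
    # Phase 1: increase until the channel shows distress.
    while True:
        loss, latency_ms = send_test_stream(rate)
        if loss > loss_limit or latency_ms > latency_limit_ms:
            break                     # available maximum rate approached
        rate += step_kbps
    # Phase 2: decrease until loss and latency are acceptable again.
    while rate > step_kbps:
        rate -= step_kbps
        loss, latency_ms = send_test_stream(rate)
        if loss <= loss_limit and latency_ms <= latency_limit_ms:
            return rate               # established peak data rate
    return rate
```

In a real deployment the "sufficient period of time" requirement would mean holding each trial rate for a while before accepting it, rather than a single probe per rate.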

Fig. 9a depicts an illustrative video stream data rate 934 containing substantial scene complexity and/or motion, generated using the cyclic I/P-tile compression technique described previously and illustrated in Figs. 7a, 7b and 8. The video compressor 404 has been configured to output compressed video at an average data rate below the peak data rate 941, and note that, for almost all of the time, the video stream data rate remains below the peak data rate 941. A comparison of data rate 934 with the video stream data rate 634 shown in Fig. 6c, created using I/P/B or I/P frames, shows that the cyclic I/P-tile compression produces a far smoother data rate. Still, at the 2× peak frame 952 (which approaches the 2× peak data rate 942) and the 4× peak frame 954 (which approaches the 4× peak data rate 944), the data rate exceeds the peak data rate 941, which is unacceptable. In practice, even with high-action video from rapidly changing video games, peaks exceeding the peak data rate 941 occur in fewer than 2% of frames, peaks exceeding the 2× peak data rate 942 are rare, and peaks exceeding the 3× peak data rate 943 almost never occur. But when they do occur (e.g., during a scene transition), the data rate they require is necessary to produce a good-quality video image.

One way to solve this problem is simply to configure the video compressor 404 so that its maximum output data rate is the peak data rate 941. Unfortunately, the resulting video quality during the peak frames is poor because the compression algorithm is "starved" of bits. The result is compression artifacts whenever there are sudden transitions or fast motion, and, in time, the user comes to realize that artifacts always appear abruptly whenever there are sudden changes or fast motion, which can become quite annoying.

Although the human visual system is quite sensitive to the visual artifacts that appear during sudden transitions or fast motion, it is not very sensitive to detecting a drop in frame rate in such situations.

In fact, when such sudden transitions occur, it appears that the human visual system is preoccupied with tracking the changes, and it does not notice if the frame rate briefly drops from 60 fps to 30 fps and then immediately returns to 60 fps. And, in the case of a very dramatic transition, such as a sudden scene change, the human visual system does not notice if the frame rate drops to 20 fps or even 15 fps and then immediately returns to 60 fps. So long as such frame-rate drops occur only rarely, to a human observer the video appears to run continuously at 60 fps.

This property of the human visual system is exploited by the techniques illustrated in Fig. 9b. Server 402 (Figs. 4a and 4b) produces uncompressed video output at a steady frame rate (60 fps in one embodiment). The timeline shows each frame 961-970 output in 1/60th of a second. Each uncompressed video frame, starting with frame 961, is provided to the low-latency video compressor 404, which compresses the frame in less than a frame time, producing compressed frame 1 981 for the first frame. The data produced for compressed frame 1 981 may be larger or smaller, depending on many factors, as previously described. If the data is small enough that it can be transmitted to the client 415 at the peak data rate 941 within one frame time (1/60th of a second) or less, it is transmitted during transmit time (xmit time) 991 (the length of the arrow indicates the duration of the transmit time). In the next frame time, server 402 produces uncompressed frame 2 962; it is compressed into compressed frame 2 982, and it is transmitted to the client 415 during transmit time 992, which is less than a frame time at the peak data rate 941.

Then, in the next frame time, server 402 produces uncompressed frame 3 963. When it is compressed by the video compressor 404, the resulting compressed frame 3 983 is more data than can be transmitted at the peak data rate 941 within one frame time. So it is transmitted during transmit time 993 (2× peak), which takes up all of that frame time and part of the next frame time. Next, during the following frame time, server 402 produces another uncompressed frame 4 964 and outputs it to the video compressor 404, but the data is ignored, as illustrated at 974. This is because the video compressor 404 is configured to ignore further uncompressed video frames that arrive while it is still transmitting a prior compressed frame. Of course, the client 415's video decompressor will fail to receive frame 4, but it simply continues to display frame 3 on the display device 422 for two frame times (i.e., briefly reduces the frame rate from 60 fps to 30 fps).

For the next frame 5, server 402 outputs uncompressed frame 5 965, which is compressed into compressed frame 5 985 and transmitted within one frame time during transmit time 995. The client 415's video decompressor decompresses frame 5 and displays it on the display device 422. Next, server 402 outputs uncompressed frame 6 966, and the video compressor 404 compresses it into compressed frame 6 986, but this time the resulting data is very large. The compressed frame is transmitted during transmit time 996 (4× peak) at the peak data rate 941, but it takes almost 4 frame times to transmit the frame. During the next 3 frame times, the video compressor 404 ignores 3 frames from server 402, and the client 415's decompressor holds frame 6 steadily on the display device 422 for 4 frame times (i.e., briefly reduces the frame rate from 60 fps to 15 fps). Then finally, server 402 outputs frame 10 970, the video compressor 404 compresses it into compressed frame 10 987, it is transmitted during transmit time 997, the client 415's decompressor decompresses frame 10 and displays it on the display device 422, and once again the video resumes at 60 fps.
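The frame-dropping behavior just walked through (a frame arriving while a prior compressed frame is still in flight is simply ignored, so the client holds the last frame on screen) can be modeled in a few lines. This is an illustrative sketch under assumed figures, not the patent's implementation; the 5 Mbps peak rate is an assumption for the example.

```python
# Illustrative model of the frame-dropping behavior: frames produced while a
# prior compressed frame is still being transmitted are dropped, briefly
# lowering the displayed frame rate instead of exceeding the peak data rate.

FRAME_PERIOD = 1.0 / 60          # 60 fps source output
PEAK_RATE = 5_000_000 / 8        # assumed 5 Mbps peak data rate, in bytes/s

def simulate(frame_sizes_bytes):
    """Return, per source frame, whether it was transmitted or dropped."""
    busy_until = 0.0             # time the channel finishes the current frame
    decisions = []
    for i, size in enumerate(frame_sizes_bytes):
        now = i * FRAME_PERIOD   # frame i emerges from the game at this time
        if now < busy_until:
            decisions.append("dropped")   # still transmitting a prior frame
        else:
            busy_until = now + size / PEAK_RATE
            decisions.append("sent")
    return decisions
```

With the assumed 5 Mbps peak rate, the per-frame budget is about 10.4 KB; a 2×-sized frame occupies roughly two frame times, so exactly one subsequent frame is dropped, mirroring the frame 3/frame 4 example above.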

Note that, although the video compressor 404 drops video frames from the video stream generated by server 402, it does not drop the audio data, regardless of the form in which the audio arrives; it continues to compress the audio data while video frames are being dropped and to transmit it to the client 415, which continues to decompress the audio data and provide the audio to whatever device the user is using for audio playback. Thus the audio continues unabated during periods when frames are dropped. Compressed audio consumes a relatively small percentage of bandwidth compared to compressed video and, as a result, has no major impact on the overall data rate. Although it is not illustrated in any of the data-rate diagrams, there is always data-rate bandwidth reserved for the compressed audio stream within the peak data rate 941.

The example just described for Fig. 9b was chosen to illustrate how the frame rate drops during data-rate peaks, but what it does not illustrate is that, when the cyclic I/P-tile techniques described previously are used, such data-rate peaks and the consequent dropped frames are rare, even during sequences of high scene activity/complexity such as occur in video games, motion pictures, and certain application software. Consequently, the reduced frame rates are infrequent and brief, and the human visual system does not detect them.

If the frame-rate-reduction mechanism just described is applied to the video stream data rate illustrated in Fig. 9a, the resulting video stream data rate is illustrated in Fig. 9c. In this example, 2× peak 952 has been reduced to flattened 2× peak 953, and 4× peak 954 has been reduced to flattened 4× peak 955, and the entire video stream data rate 934 remains at or below the peak data rate 941.

Thus, using the techniques described above, a high-action video stream can be transmitted with low latency through the general Internet and through a consumer-grade Internet connection. Further, in an office environment, on a LAN (e.g., 100 Mbps Ethernet or 802.11g wireless) or on a private network (e.g., a 100 Mbps connection between a data center and an office), a high-action video stream can be transmitted without peaks, such that multiple users (e.g., each streaming 1920×1080 at 60 fps at 4.5 Mbps) can use the LAN or a shared private data connection without overlapping peaks overwhelming the network or the network switch backplane.

Data rate adjustment

In one embodiment, the hosting service 210 initially assesses the available maximum data rate 622 and the latency of the channel to determine an appropriate data rate for the video stream, and then dynamically adjusts the data rate in response. To adjust the data rate, the hosting service 210 may, for example, modify the image resolution and/or the number of frames per second of the video stream to be sent to the client 415. The hosting service may also adjust the quality level of the compressed video. When changing the resolution of the video stream, e.g., from 1280×720 down to 640×360, the video decompression logic 412 on the client 415 can scale up the image to maintain the same image size on the display screen.
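The adjustment described above amounts to selecting the highest resolution/frame-rate combination whose stream fits the currently measured channel. A minimal sketch follows; the ladder of settings and its bitrate figures are illustrative assumptions, not values from the specification.

```python
# Hypothetical rate-adjustment ladder: pick the best (resolution, fps)
# whose estimated stream rate fits the measured channel rate.

LADDER = [  # (width, height, fps, estimated_kbps) -- assumed figures
    (1280, 720, 60, 5000),
    (1280, 720, 30, 3000),
    (640, 360, 60, 1500),
    (640, 360, 30, 900),
]

def choose_stream_settings(channel_kbps):
    """Return the highest-quality (w, h, fps) whose bitrate fits the channel."""
    for w, h, fps, kbps in LADDER:
        if kbps <= channel_kbps:
            return (w, h, fps)
    return LADDER[-1][:3]       # fall back to the lowest rung
```

Lowering the resolution rung corresponds to the client-side upscaling mentioned above, which keeps the on-screen image size constant.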

In one embodiment, in the situation where the channel signal drops out, the hosting service 210 pauses the game. In the case of a multiplayer game, the hosting service reports to the other users that the user has dropped out of the game and/or pauses the game for the other users.

Dropped or delayed packets

In one embodiment, if data is lost due to packet loss between the video compressor 404 and the client 415 in Fig. 4a or 4b, or is received out of order, arriving too late to decompress the frame and meet the latency requirements for the decompressed frame, the video decompression logic 412 is able to mitigate the visual artifacts. In a streaming I/P-frame implementation, a lost/delayed packet impacts the entire screen, potentially causing the screen to freeze completely for a period of time or show other screen-wide visual artifacts. For example, if a lost/delayed packet causes the loss of an I-frame, then the decompressor will lack a reference for all of the P-frames that follow, until a new I-frame is received. If a P-frame is lost, it will impact the P-frames for the entire screen that follow. Depending on how long it is before an I-frame appears, this will have a longer or shorter visual impact. With interleaved I/P-tiles, as illustrated in Figs. 7a and 7b, a lost/delayed packet is much less likely to impact the entire screen, since it will only impact the tiles contained in the affected packet. If each tile's data is sent within an individual packet, then if a packet is lost, it will only impact one tile. Of course, the duration of the visual artifact will depend on whether an I-tile packet is lost and, if a P-tile is lost, how many frames pass until an I-tile appears. But, given that different tiles on the screen are updated with I-tiles very frequently (potentially every frame), even if one tile on the screen is impacted, other tiles may not be.
Further, if some event causes a loss of several packets at once (e.g., a power spike next to a DSL line that briefly disrupts the data flow), then some of the tiles will be impacted more than others, but because some tiles will be quickly renewed by a subsequent I-tile, they will be only briefly impacted. Also, with a streaming I/P-frame implementation, not only are the I-frames the most critical frames, they are also extremely large, so if there is an event that causes a dropped/delayed packet, there is a higher probability that an I-frame will be impacted (i.e., if any part of an I-frame is lost, it is unlikely the I-frame can be decompressed at all) than a much smaller I-tile. For all of these reasons, using I/P-tiles results in far fewer visual artifacts when packets are dropped/delayed than with I/P-frames.

One embodiment seeks to reduce the impact of lost packets by intelligently packaging the compressed tiles within TCP (Transmission Control Protocol) or UDP (User Datagram Protocol) packets. For example, in one embodiment, tiles are aligned with packet boundaries whenever possible. Fig. 10a illustrates how tiles might be packed within a sequence of packets 1001-1005 without implementing this feature. Specifically, in Fig. 10a, tiles cross packet boundaries and are packed inefficiently, so that the loss of a single packet results in the loss of multiple tiles. For example, if packet 1003 or 1004 is lost, three tiles are lost, resulting in visual artifacts.

In contrast, Fig. 10b illustrates tile packing logic 1010 for intelligently packing tiles within packets to reduce the impact of packet loss. First, the tile packing logic 1010 aligns tiles with packet boundaries. Thus, tiles T1, T3, T4, T7 and T2 are aligned with the boundaries of packets 1001-1005, respectively. The tile packing logic also attempts to fit the tiles within the packets in the most efficient manner possible, without crossing packet boundaries. Based on the size of each tile, tiles T1 and T6 are combined in one packet 1001; tiles T3 and T5 are combined in one packet 1002; tiles T4 and T8 are combined in one packet 1003; tile T7 is added to packet 1004; and tile T2 is added to packet 1005. Thus, under this scheme, a single packet loss will result in the loss of no more than 2 tiles (rather than 3 tiles, as illustrated in Fig. 10a).
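The key property of the packing logic above is that a tile is never split across packets, so a lost packet loses only the tiles wholly inside it. One simple variant of that idea can be sketched as follows; the packet capacity and tile sizes are illustrative, and this greedy sequential scheme is an assumption standing in for the best-fit arrangement of Fig. 10b.

```python
# Minimal sketch of boundary-aligned tile packing: each packet starts on a
# tile boundary, and a tile that does not fit entirely in the current packet
# starts a new one, so no tile ever crosses a packet boundary.

def pack_tiles(tile_sizes, packet_capacity):
    """Return a list of packets, each a list of tile indices, preserving
    tile order and never splitting a tile across packets."""
    packets = [[]]
    used = 0
    for idx, size in enumerate(tile_sizes):
        if used + size > packet_capacity and packets[-1]:
            packets.append([])      # start a new, tile-aligned packet
            used = 0
        packets[-1].append(idx)
        used += size
    return packets
```

Preserving tile order here also matches the additional benefit noted below in the text: successive lost packets tend to affect screen areas that are not adjacent.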

One additional benefit of the embodiment illustrated in Fig. 10b is that the tiles are transmitted in the order in which they appear within the image. This way, if successive packets are lost due to the same event disrupting the transmission, the lost tiles will affect areas that are not near each other on the screen, producing less noticeable artifacting on the display.

In one embodiment, forward error correction (FEC) techniques are used to protect certain portions of the video stream from channel errors. As is known in the art, FEC techniques such as Reed-Solomon and Viterbi generate error-correcting information and attach it to the data transmitted over the communications channel. If an error occurs in the underlying data (e.g., in an I-frame), FEC can be used to correct the error.

FEC codes increase the data rate of the transmission, so ideally they are used only where they are most needed. If data is being sent whose loss would not result in a very noticeable visual artifact, it may be preferable not to use FEC codes to protect that data. For example, a P-tile that immediately precedes an I-tile would, if lost, create a visual artifact on the screen (i.e., the tile on the screen would not be updated) for only 1/60th of a second. Such a visual artifact is barely detectable by the human eye. The further back P-tiles are from a following I-tile, the more noticeable their loss becomes. For example, if the tile cycling pattern is an I-tile followed by 15 P-tiles before an I-tile is available again, then if the P-tile immediately following an I-tile is lost, that tile will show an incorrect image for 15 frame times (at 60 fps, that is 250 ms). The human eye readily detects a 250 ms disruption in the stream. So the further back a P-tile is from a new I-tile (i.e., the more closely a P-tile follows the preceding I-tile), the more noticeable the artifact. As previously discussed, however, in general, the more closely a P-tile follows an I-tile, the smaller the data for that P-tile. Thus, the P-tiles following an I-tile are not only the most critical to protect from loss, they are also the smallest. And, in general, the smaller the data to be protected, the smaller the FEC code needed to protect it.

Accordingly, as illustrated in Fig. 11a, in one embodiment, because of the importance of I-tiles in the video stream, only I-tiles are provided with FEC codes. Thus, FEC 1101 contains the error-correcting code for I-tile 1100, and FEC 1104 contains the error-correcting code for I-tile 1103. In this embodiment, no FEC is generated for the P-tiles.

In one embodiment, illustrated in Fig. 11b, FEC codes are also generated for the P-tiles that are most likely to cause visual artifacts if lost. In this embodiment, FEC codes 1105 provide error-correcting codes for the first 3 P-tiles, but not for the P-tiles that follow. In another embodiment, FEC codes are generated for the P-tiles with the smallest data size (which will tend to self-select the P-tiles occurring soonest after an I-tile, which are the most critical to protect).
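The selective-protection rule just described (protect I-tiles, plus the first few P-tiles after each I-tile) can be sketched in a few lines. This is an illustration only: the constant is the "first 3 P-tiles" figure from the text, and a single XOR parity block stands in for a real FEC code such as Reed-Solomon.

```python
# Illustrative selective FEC: decide which tiles get protection, and use a
# toy XOR parity block in place of a real error-correcting code.

PROTECTED_P_TILES = 3   # protect the first 3 P-tiles after each I-tile

def needs_fec(tile_type, p_index):
    """tile_type is 'I' or 'P'; p_index counts P-tiles since the last I-tile."""
    return tile_type == "I" or p_index < PROTECTED_P_TILES

def xor_parity(payload_a, payload_b):
    """Toy parity over two equal-length payloads: if either payload is lost,
    XOR-ing the survivor with the parity recovers it."""
    return bytes(a ^ b for a, b in zip(payload_a, payload_b))
```

A production system would use a proper erasure code with configurable overhead rather than pairwise XOR, but the selection policy (protect what is both most damaging and cheapest to cover) is the point being illustrated.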

In another embodiment, rather than transmitting an FEC code with a tile, the tile is transmitted twice, each time in a different packet. If one packet is lost/delayed, the other packet is used.

In one embodiment, illustrated in Fig. 11c, FEC codes 1111 and 1113 are generated for audio packets 1110 and 1112, respectively, transmitted from the hosting service concurrently with the video. It is particularly important to maintain the integrity of the audio in a video stream, because distorted audio (e.g., clicking or hissing) results in a particularly unpleasant user experience. The FEC codes help ensure that the audio content is rendered at the client computer 415 without distortion.

In another embodiment, rather than transmitting an FEC code with the audio data, the audio data is transmitted twice, each time in a different packet. If one packet is lost/delayed, the other packet is used.

In addition, in one embodiment, illustrated in Fig. 11d, FEC codes 1121 and 1123 are used for user input commands 1120 and 1122, respectively (e.g., button presses), transmitted upstream from the client 415 to the hosting service 210. This is important because a missed button press or mouse movement in a video game or an application could result in an unpleasant user experience.

In another embodiment, rather than transmitting an FEC code with the user input command data, the user input command data is transmitted twice, each time in a different packet. If one packet is lost/delayed, the other packet is used.

In one embodiment, the hosting service 210 assesses the quality of the communication channel with the client 415 to determine whether to use FEC and, if so, what portions of the video, audio and user commands FEC should be applied to. Assessing the "quality" of the channel may include evaluating packet loss, latency, etc., as described above. If the channel is particularly unreliable, the hosting service 210 may apply FEC to all I-tiles and P-tiles, to audio and to user commands. In contrast, if the channel is reliable, the hosting service 210 may apply FEC only to audio and user commands, or may apply FEC to neither audio nor video, or may not use FEC at all. Various other permutations of FEC application may be employed while still complying with these underlying principles. In one embodiment, the hosting service 210 continually monitors the conditions of the channel and changes the FEC policy accordingly.
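One way to picture the adaptive policy above is as a mapping from measured channel statistics to the set of stream components to protect. The thresholds below are purely illustrative assumptions; the specification does not give numeric cutoffs.

```python
# Hypothetical adaptive-FEC policy driven by measured channel statistics.
# Threshold values are illustrative, not from the specification.

def fec_policy(loss_rate, latency_ms):
    """Return the set of stream components to protect with FEC."""
    if loss_rate > 0.05 or latency_ms > 100:      # very unreliable channel
        return {"i_tiles", "p_tiles", "audio", "input"}
    if loss_rate > 0.01:                          # moderately lossy
        return {"i_tiles", "audio", "input"}
    if loss_rate > 0.001:                         # mostly clean
        return {"audio", "input"}
    return set()                                  # reliable channel: no FEC
```

Continuous monitoring then simply means re-evaluating this mapping as the loss and latency estimates are refreshed.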

In another embodiment, referring to Figs. 4a and 4b, when a packet is lost or delayed, resulting in the loss of tile data, or if, perhaps because of particularly severe packet loss, FEC is unable to correct the lost tile data, the client 415 assesses how many frames remain before a new I-tile will be received and compares that to the round-trip latency from the client 415 to the hosting service 210. If the round trip is shorter than the number of frames before a new I-tile is due to arrive, then the client 415 sends a message to the hosting service 210 requesting a new I-tile. This message is routed to the video compressor 404, and rather than generating a P-tile for the tile whose data had been lost, it generates an I-tile. Given that the system illustrated in Figs. 4a and 4b is designed to provide a round-trip latency that is typically under 80 ms, this results in the tile being corrected within 80 ms (at 60 fps, frames are 16.67 ms in duration; in full frame times, an 80 ms latency results in the tile being corrected within 83.33 ms, which is 5 frame times: a noticeable disruption, but far less noticeable than, for example, a 250 ms disruption for 15 frames). When the compressor 404 generates such an I-tile out of its usual cyclic order, if the I-tile would cause the bandwidth of that frame to exceed the available bandwidth, then the compressor 404 delays the cycles of the other tiles so that the other tiles receive P-tiles during that frame time (even if one tile would normally be due for an I-tile during that frame), and then, starting with the next frame, the usual cycling continues, and the tile that normally would have received an I-tile in the preceding frame receives an I-tile.
Although this action briefly delays the phase of the tile cycling, it is normally not visually noticeable.
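The decision the client makes above reduces to a simple comparison: request a fresh I-tile only if a round trip to the hosting service beats waiting for the tile's next cyclically scheduled I-tile. A minimal sketch, with illustrative names and units:

```python
# Sketch of the lost-tile recovery decision: is requesting a new I-tile
# from the hosting service faster than waiting for the scheduled one?

FRAME_MS = 1000 / 60   # 16.67 ms per frame at 60 fps

def should_request_i_tile(frames_until_scheduled_i, round_trip_ms):
    """True if the round-trip time, expressed in frame times, is shorter
    than the wait for the next cyclically scheduled I-tile."""
    round_trip_frames = round_trip_ms / FRAME_MS
    return round_trip_frames < frames_until_scheduled_i
```

With the text's figures, an 80 ms round trip is about 4.8 frame times, so a tile 15 frames away from its next I-tile is worth requesting, while one only 3 frames away is not.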

Implementation of hardware video and audio compression/decompression

Fig. 12 illustrates one particular embodiment in which a multi-core processor and/or multi-processor 1200 is used to compress 8 tiles in parallel. In one embodiment, a dual-processor, quad-core Xeon CPU computer system running at 2.66 GHz or higher is used, with each core implementing the open-source x264 H.264 compressor as an independent process. However, various other hardware/software configurations may be used while still complying with these underlying principles. For example, each of the CPU cores can be replaced with an H.264 compressor implemented in an FPGA. In the example illustrated in Fig. 12, cores 1201-1208 are used to concurrently process the I-tiles and P-tiles as eight independent threads. As is well known in the art, current multi-core and multi-processor computer systems are inherently capable of multithreading when integrated with multithreading operating systems, such as Microsoft Windows XP Professional Edition (64-bit or 32-bit) and Linux.

In the embodiment illustrated in Fig. 12, since each of the 8 cores is responsible for just one tile, it operates largely independently of the other cores, each running a separate instantiation of x264. A PCI Express x1-based DVI capture card, such as the Sendero Video Imaging IP Development Board from Microtronix of Oosterhout, The Netherlands, is used to capture uncompressed video at 640×480, 800×600 or 1280×720 resolution, and the FPGA on the card uses DMA (direct memory access) to transfer the captured video through DVI into system RAM. The tiles are arranged in a 4×2 arrangement 1205 (although they are illustrated as square tiles, in this embodiment they have 160×240 resolution). Each instantiation of x264 is configured to compress one of the 8 tiles of 160×240, and they are synchronized such that, after an initial I-tile compression, each core enters into a cycle, each one frame out of phase with the others, to compress one I-tile followed by seven P-tiles, as illustrated in Fig. 12.
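The staggered-phase scheduling described above (eight independent compressors, with exactly one tile taking an I-tile in any given frame) can be sketched as follows. This is an illustration only: `compress_tile()` is a stand-in for an x264 instantiation, and the thread pool merely models the eight independent workers.

```python
# Illustrative sketch of 8-way tile parallelism with staggered I-tile phases:
# in any frame, exactly one tile is compressed as an I-tile and the other
# seven as P-tiles. compress_tile() is a stand-in, not x264.

from concurrent.futures import ThreadPoolExecutor

NUM_TILES = 8

def compress_tile(tile_index, frame_number):
    # A tile takes an I-tile when its index matches the frame number mod 8,
    # so the I-tile position rotates one tile per frame.
    kind = "I" if frame_number % NUM_TILES == tile_index else "P"
    return (tile_index, kind)

def compress_frame(frame_number):
    """Compress all 8 tiles of one frame in parallel workers."""
    with ThreadPoolExecutor(max_workers=NUM_TILES) as pool:
        return list(pool.map(lambda i: compress_tile(i, frame_number),
                             range(NUM_TILES)))
```

Because the I-tile position rotates each frame, the per-frame output size stays close to constant, which is what smooths the data rate relative to whole-frame I/P compression.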

In each frame time, the resulting compressed tiles are combined into a packet stream, using the techniques previously described, and then the compressed tiles are transmitted to a destination client 415.

Although not illustrated in Fig. 12, if the data rate of the combined 8 tiles exceeds the peak data rate 941, then all 8 x264 processes are suspended for as many frame times as are needed until all of the data for the combined 8 tiles has been transmitted.

In one embodiment, the client 415 is implemented as software on a PC running 8 instantiations of FFmpeg. A receiving process receives the 8 tiles, and each tile is routed to an FFmpeg instantiation, which decompresses the tile and renders it at the appropriate tile location on the display device 422.

The client 415 receives keyboard, mouse or game controller input from the PC's input device drivers and transmits it to the server 402. The server 402 then applies the received input device data to the game or application running on the server 402, which is a PC running Windows using a 2.16 GHz Intel Core Duo CPU. The server 402 then produces a new frame and outputs it through its DVI output, either from a motherboard-based graphics system or through the DVI output of an NVIDIA 8800GTX PCI Express card.

Simultaneously, the server 402 outputs the audio produced by the game or application through its digital audio output (e.g., S/PDIF), which is coupled to the digital audio input on the dual quad-core Xeon-based PC that implements the video compression. The Vorbis open-source audio compressor is used to compress the audio simultaneously with the video, using whatever core is available for the process thread. In one embodiment, the core that completes compressing its tile first performs the audio compression. The compressed audio is then transmitted along with the compressed video and is decompressed on the client 415 using a Vorbis audio decompressor.

Distribution of hosting service server centers

Light through glass, such as optical fiber, travels at some fraction of the speed of light in a vacuum, and so the exact propagation speed of light in optical fiber can be determined. But, in practice, allowing for routing delays, transmission inefficiencies and other overhead, the authors have observed that optimal latencies on the Internet reflect transmission speeds closer to 50% of the speed of light. Thus, an optimal 1000-mile (1600 km) round-trip latency is approximately 22 ms, and an optimal 3000-mile (4800 km) round-trip latency is approximately 64 ms. Thus, a single server on one coast of the US would be too far away to serve clients on the other coast (which may be as much as 3000 miles (4800 km) away) with the desired latency. However, as illustrated in Fig. 13a, if the server center 1300 of the hosting service 210 is located in the center of the US (e.g., Kansas, Nebraska, etc.), such that the distance to any point in the continental US is approximately 1500 miles (2400 km) or less, the round-trip Internet latency can be as low as 32 ms. Referring to Fig. 4b, note that, although the worst-case latency allowed for the user ISP 453 is 25 ms, the authors have typically observed latencies closer to 10-15 ms with cable modem and DSL systems. Also, Fig. 4b assumes a maximum distance of 1000 miles (1600 km) from the user premises 211 to the hosting center 210.
Thus, with a typical user ISP round-trip latency of 15 ms and a maximum Internet distance of 1500 miles (2400 km) with a round-trip latency of 32 ms, the total round-trip latency, from the moment a user actuates input device 421 to the moment the response is seen on display device 422, is 1+1+15+32+1+16+6+8 = 80 ms. So, a response time of 80 ms can typically be achieved over an Internet distance of 1500 miles (2400 km). This would allow any user premises in the continental US with a sufficiently short user ISP latency 453 to access a single, centrally located server center.
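The latency figures quoted above can be checked arithmetically. This is purely a worked verification of the numbers in the text, under its stated assumption of propagation at roughly 50% of the speed of light; the stage labels in the comment are paraphrases of the budget components.

```python
# Worked check of the latency budget: at ~50% of the speed of light,
# 1500 one-way miles yield about a 32 ms round trip, and the stage-by-stage
# budget in the text sums to 80 ms end to end.

SPEED_OF_LIGHT_MPS = 186_282            # miles per second, in vacuum
EFFECTIVE_SPEED = SPEED_OF_LIGHT_MPS * 0.5   # observed ~50% of c

def round_trip_ms(one_way_miles):
    return 2 * one_way_miles / EFFECTIVE_SPEED * 1000

# Stage budget from the text (ms): input device, client, user ISP,
# Internet (1500 mi), server routing, compression, decompression, display.
stages = [1, 1, 15, 32, 1, 16, 6, 8]
total = sum(stages)                     # 80 ms end-to-end
```

The same formula reproduces the other figures in the text: about 22 ms for 1000 miles and about 64 ms for 3000 miles.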

In another embodiment, illustrated in Fig. 13b, the server centers HS1-HS6 of the hosting service 210 are strategically positioned around the United States (or another geographical region), with certain larger hosting service server centers positioned close to high-population centers (e.g., HS2 and HS5). In one embodiment, the server centers HS1-HS6 exchange information through network 1301, which may be the Internet, a private network, or a combination of both. With multiple server centers, service can be provided at lower latency to users whose user ISP latency 453 is high.

While distance on the Internet is certainly a factor contributing to round-trip latency through the Internet, other factors sometimes come into play that are largely unrelated to distance. Sometimes a packet stream is routed through the Internet to a distant location and back, resulting in latency from the long loop. Sometimes routing equipment on the path is not operating properly, resulting in transmission delays. Sometimes a path is overloaded with traffic, which introduces delay. And sometimes there is a failure that prevents the user's ISP from routing to a given destination at all. Thus, while the general Internet usually provides connections from one point to another with fairly reliable and optimal routes and with latencies largely determined by distance (especially with long-distance connections that result in routing outside the user's local area), such reliability and latency are by no means guaranteed, and often cannot be achieved, from a user's premises to a given destination on the general Internet.

In one embodiment, when a user's client 415 first connects to the hosting service 210 to play a video game or use an application, the client communicates upon startup with each of the available hosting service server centers HS1-HS6 (for example, using the techniques described above). If the latency is low enough for a particular connection, that connection is used. In one embodiment, the client communicates with all of the hosting service server centers, or a subset of them, and the server center with the lowest-latency connection is selected. Either the client may select the server center with the lowest-latency connection, or the server centers may identify the lowest-latency server center and provide that information (e.g., an Internet address) to the client.
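The selection step described above can be sketched as follows. This is an illustrative sketch, not the disclosed implementation; the function name, parameter names, and the use of a latency threshold are assumptions made here for clarity.

```python
def choose_server_center(latencies_ms, max_acceptable_ms):
    """Given measured round-trip latencies (ms) to each server center,
    return the center with the lowest latency that is still acceptable,
    or None if no server center meets the latency requirement."""
    acceptable = {center: ms for center, ms in latencies_ms.items()
                  if ms <= max_acceptable_ms}
    if not acceptable:
        return None
    # Pick the server center with the smallest measured latency.
    return min(acceptable, key=acceptable.get)
```

For example, a client that measures 42 ms to HS1, 25 ms to HS2, and 31 ms to HS5 would select HS2; if every measurement exceeded the game's latency requirement, no center would be selected and the client might fall back to viewing-only modes.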

If a particular hosting service server center is overloaded, and/or the user's game or application can tolerate the latency of a connection to another, less loaded server center, the client 415 can be redirected to the other server center. In this situation, the game or application being executed for the user is suspended on a server 402 in the overloaded server center, and the application or game state data is transferred to a server 402 in the other hosting service server center. The game or application is then resumed. In one embodiment, the hosting service 210 waits until the game or application reaches a natural pause point (e.g., between levels in a game, or after the user initiates a "save" operation in an application) before performing the transfer. In yet another embodiment, the hosting service 210 waits until user activity ceases for a specified period of time (e.g., 1 minute) and then initiates the transfer at that time.

As described above, in one embodiment, the hosting service 210 subscribes to an Internet bypass service 440 in order to provide guaranteed latency to its clients. Internet bypass services, as used herein, are services that provide private-network routes from one point on the Internet to another with guaranteed characteristics (e.g., latency, data rate, etc.). For example, if the hosting service 210 were receiving a large amount of traffic from users on AT&T's DSL service in San Francisco, rather than routing through AT&T's San Francisco central office, the hosting service 210 could lease a high-bandwidth private data connection from a service provider (perhaps AT&T itself or another provider) between the San Francisco central office and one or more server centers of the hosting service 210. Then, if routes from all hosting service server centers HS1-HS6 through the general Internet to a San Francisco user on AT&T DSL resulted in too high a latency, the private data connection could be used instead. Although private data connections are generally more expensive than routes through the public Internet, so long as they remain a small percentage of the hosting service 210's connections to users, the overall cost impact will remain low, and users will experience more consistent service.

Server centers often have two tiers of backup power in the event of a power failure. The first tier is typically backup battery power (or an alternative immediately available energy source, such as a flywheel kept spinning and attached to a generator), which provides power immediately when the power mains fail and keeps the server center running. If the power failure is brief and the mains return quickly (e.g., within a minute), then the batteries are all that is needed to keep the server center running. But if the power failure lasts longer, then generators (e.g., diesel) are typically started, which take over from the batteries and can run for as long as they have fuel. Such generators are extremely expensive because they must be capable of producing as much power as the server center normally draws from the mains.

In one embodiment, each of the hosting service server centers HS1-HS6 shares user data so that if one server center has a power failure, it can pause the games and applications in process, transfer the application or game state data from each server 402 to servers 402 in other server centers, and then notify each user's client 415 to direct its communications to the new server 402. Given that such situations occur rarely, it may be acceptable to transfer a user to a hosting service server center that cannot provide optimal latency (i.e., the user will simply have to tolerate higher latency for the duration of the power failure), which greatly widens the options for relocating users. For example, given the different time zones in the US, East Coast users may be going to bed at 11:30PM while West Coast users at 8:30PM are at peak video game usage. If a power failure occurred at that time in a West Coast hosting service server center, there might not be enough West Coast servers 402 in the other hosting service server centers to handle all of the users. In this situation, some of the users could be transferred to East Coast hosting service server centers with available servers 402, and the only consequence for those users would be higher latency. Once the users have been transferred away from the server center that lost power, the server center can begin an orderly shutdown of its servers and equipment so that all equipment is powered off before the batteries (or other immediate backup power) are exhausted. In this way, the server center can avoid the cost of a generator.

In one embodiment, during times of heavy load on the hosting service 210 (either due to peak user load or because one or more server centers have failed), users are transferred to other server centers based on the latency requirements of the game or application they are using. Accordingly, users of games or applications that require low latency are given preference for the limited supply of low-latency server connections available.

Hosting service features

Fig. 15 shows an embodiment of the components of a server center for hosting service 210 that are used in the description of the features that follows. As with the hosting service 210 depicted in fig.4a, the components of this server center are controlled and coordinated by a hosting service control system 401 unless otherwise specified.

Inbound Internet traffic 1501 from user clients 415 is directed to inbound routing 1502. Typically, inbound Internet traffic 1501 enters the server center via a high-speed fiber-optic connection to the Internet, but any network connection means of adequate bandwidth, reliability and low latency will suffice. Inbound routing 1502 is a network (the network may be implemented as an Ethernet network, a fiber channel network, or through any other means) of switches and routing servers supporting the switches, which takes arriving packets and routes each packet to the appropriate app/game ("application/game") server 1521-1525. In one embodiment, a packet delivered to a particular app/game server represents a subset of the data received from the client and/or may be translated or modified by other networking components (e.g., gateways and routers) within the data center. In some cases, packets are routed to more than one server 1521-1525 at a time, for example if a game or application is running on multiple servers in parallel. RAID arrays 1511-1512 are connected to the inbound routing network 1502 such that the app/game servers 1521-1525 can read from and write to the RAID arrays 1511-1512. Further, a RAID array 1515 (which may be implemented as multiple RAID arrays) is also connected to the inbound routing 1502, and data from RAID array 1515 can be read by the app/game servers 1521-1525.
The inbound routing 1502 may be implemented in a wide range of prior art network architectures, including a tree structure of switches with the inbound Internet traffic 1501 at its root; a mesh structure interconnecting all of the various devices; or a series of interconnected subnets, with concentrated traffic among intercommunicating devices segregated from concentrated traffic among other devices. One type of network configuration is a SAN which, although typically used for storage devices, can also be used for general high-speed data transfer among devices. Also, the app/game servers 1521-1525 may each have multiple network connections to the inbound routing 1502. For example, a server 1521-1525 may have one network connection to a subnet attached to the RAID arrays 1511-1512 and another network connection to a subnet attached to other devices.

The app/game servers 1521-1525 may all be configured the same, some differently, or all differently, as previously described in relation to servers 402 in the embodiment depicted in fig.4a. In one embodiment, each user, when using the hosting service, typically uses at least one app/game server 1521-1525. For the sake of simplicity of explanation, we shall assume a given user is using app/game server 1521, but multiple servers could be used by one user, and multiple users could share a single app/game server 1521-1525. The user's control input, sent from client 415 as previously described, is received as inbound Internet traffic 1501 and is routed through inbound routing 1502 to app/game server 1521. App/game server 1521 uses the user's control input as control input to the game or application running on the server, and computes the next frame of video and the audio associated with it. App/game server 1521 then outputs the uncompressed video/audio 1529 to shared video compression 1530. The app/game server may output the uncompressed video via any means, including one or more Gigabit Ethernet connections, but in one embodiment the video is output through a DVI connection, and the audio and other compression and communication channel state information is output through a Universal Serial Bus (USB) connection.

The shared video compression 1530 compresses the uncompressed video and audio from the app/game servers 1521-1525. The compression may be implemented entirely in hardware, or in hardware running software. There may be a dedicated compressor for each app/game server 1521-1525, or, if the compressors are fast enough, a given compressor can be used to compress the video/audio from more than one app/game server 1521-1525. For example, at 60 fps a video frame time is 16.67 ms. If a compressor is able to compress a frame in 1 ms, then that compressor could be used to compress the video/audio from as many as 16 app/game servers 1521-1525 by taking input from one server after another, with the compressor saving the state of each video/audio compression process and switching context as it cycles among the video/audio streams from the servers. This results in substantial cost savings in compression hardware. Since different servers will be completing frames at different times, in one embodiment the compressor resources are in a shared pool 1530 with shared storage means (e.g., RAM, flash) for storing the state of each compression process, and when a frame from a server 1521-1525 is complete and ready to be compressed, a control means determines which compression resource is available at that time, and provides that compression resource with the state of the server's compression process and the frame of uncompressed video/audio to compress.
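The pooled, state-swapping behavior described above can be modeled in a short sketch. This is a software analogy only (the disclosure describes hardware or hardware running software); the class and field names are hypothetical, and the XOR against the previous frame merely stands in for real inter-frame compression using the prior frame as a reference.

```python
from dataclasses import dataclass

@dataclass
class CompressionState:
    """Per-server compression context saved in shared storage between
    frames (hypothetical fields standing in for the real state)."""
    server_id: int
    previous_frame: bytes = b""   # stand-in for the reference frame buffer
    frames_compressed: int = 0

class SharedCompressorPool:
    """Sketch of a pool of compression resources shared by many app/game
    servers: when a server completes a frame, any idle compressor is loaded
    with that server's saved state, compresses the frame, and the state is
    written back to shared storage for the next frame."""
    def __init__(self, num_compressors):
        self.free = list(range(num_compressors))
        self.states = {}          # shared RAM/flash holding per-server state

    def compress_frame(self, server_id, frame):
        if not self.free:
            raise RuntimeError("no compression resource available")
        compressor = self.free.pop()   # claim any idle compressor
        state = self.states.setdefault(server_id, CompressionState(server_id))
        # Stand-in for real compression: difference the frame against the
        # previous (reference) frame, padded with zeros on the first frame.
        reference = state.previous_frame.ljust(len(frame), b"\0")
        compressed = bytes(a ^ b for a, b in zip(frame, reference))
        state.previous_frame = frame
        state.frames_compressed += 1
        self.free.append(compressor)   # compressor returns to the pool
        return compressed
```

Note how a single pool serves any number of servers: each call pairs an idle compressor with the requesting server's saved state, which is the context-switching economy the paragraph describes.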

Note that part of the state for each server's compression process includes information about the compression itself, such as the previous frame's decompressed frame buffer data, which may be used as a reference for P tiles; the resolution of the video output; the compression quality; the tiling structure; the allocation of bits per tile; and the audio format (e.g., stereo, surround sound, Dolby® AC-3). But the compression process state also includes communication channel state information regarding the peak data rate 941 and whether a previous frame is currently being output (as illustrated in fig.9b) (and, as a result, whether the current frame should be ignored), and, potentially, whether there are channel characteristics, such as excessive packet loss, that should be considered in the compression decisions (e.g., in terms of the frequency of I tiles, etc.). As the peak data rate 941 or other channel characteristics change over time, as determined by an app/game server 1521-1525 monitoring data sent from each user's client 415, the app/game server 1521-1525 sends the relevant information to the shared hardware compression 1530.

The shared hardware compression 1530 also packetizes the compressed video/audio using means such as those previously described, and, if appropriate, applies FEC codes, duplicates certain data, or takes other steps to adequately ensure that the video/audio data stream is received by the client 415 and that the compressed data can be decompressed with as high a quality and reliability as feasible.

Some applications, such as those described below, require the video/audio output of a given app/game server 1521-1525 to be available at multiple resolutions (or in other multiple formats) simultaneously. If the app/game server 1521-1525 so notifies the shared hardware compression 1530 resource, then the uncompressed video/audio 1529 of that app/game server 1521-1525 is simultaneously compressed in different formats, at different resolutions, and/or with different packet/error-correction structures. In some cases, compression resources can be shared among multiple compression processes compressing the same video/audio (e.g., in many compression algorithms there is a step whereby the image is scaled to multiple sizes before applying compression; if images of different sizes are required, this step can serve several compression processes at once). In other cases, separate compression resources are required for each format. In any case, the compressed video/audio 1539 in all of the various resolutions and formats required for a given app/game server 1521-1525 (be it one or many) is output at once to outbound routing 1540. In one embodiment, the compressed video/audio 1539 is output in UDP format, so it is a unidirectional stream of packets.

The outbound routing network 1540 comprises a series of routing servers and switches that direct each compressed video/audio stream to its intended user(s) or other destinations through the outbound Internet traffic 1599 interface (which typically connects to a fiber interface to the Internet), and/or back to the delay buffer 1515, and/or back to the inbound routing 1502, and/or out through a private network (not shown) for video distribution. Note that (as described below) the outbound routing 1540 may output a given video/audio stream to multiple destinations at once. In one embodiment this is implemented using Internet Protocol (IP) multicast, in which a given UDP stream intended for multiple destinations at once is broadcast, and the broadcast is repeated by the routing servers and switches in the outbound routing 1540. The multiple destinations of the broadcast may be multiple users' clients 415 via the Internet, multiple app/game servers 1521-1525 via the inbound routing 1502, and/or one or more delay buffers 1515. Accordingly, the output of a given server 1521-1522 is compressed into one or multiple formats, and each compressed stream is directed to one or multiple destinations.
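The one-stream-to-many-destinations fan-out described above can be sketched in software. The real mechanism is IP multicast replicated by switches and routing servers; this sketch only models the subscription and replication behavior, and the class, method, and destination names are hypothetical.

```python
class OutboundRouting:
    """Sketch of the outbound routing fabric: one compressed stream may be
    delivered to many destinations at once (user clients, delay buffers,
    or app/game servers reached back through inbound routing)."""
    def __init__(self):
        self.subscriptions = {}   # stream_id -> set of destinations

    def subscribe(self, stream_id, destination):
        self.subscriptions.setdefault(stream_id, set()).add(destination)

    def unsubscribe(self, stream_id, destination):
        self.subscriptions.get(stream_id, set()).discard(destination)

    def route(self, stream_id, packet, deliver):
        """Replicate a packet to every current subscriber, calling
        deliver(destination, packet) once per copy, analogous to the
        switches repeating a multicast UDP stream."""
        for destination in self.subscriptions.get(stream_id, ()):
            deliver(destination, packet)
```

A single `route` call with two subscribers (say, a client and a delay buffer) results in two deliveries of the same packet, which is the property the paragraph relies on.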

Further, in another embodiment, if multiple app/game servers 1521-1525 are used simultaneously by one user (e.g., in a parallel processing configuration to create the 3D output of a complex scene) and each server produces part of the resulting image, the video output of the multiple servers 1521-1525 can be combined by the shared hardware compression 1530 into a combined frame, and from that point forward it is handled as described above, as if it came from a single app/game server 1521-1525.

Note that, in one embodiment, a copy (at least at the resolution of the video viewed by the user, or higher) of all video generated by the app/game servers 1521-1525 is recorded in delay buffer 1515 for at least some number of minutes (15 minutes in one embodiment). This allows each user to "rewind" the video from each session in order to review previous work or exploits (in the case of a game). Accordingly, in one embodiment, each compressed video/audio output stream 1539 being routed to a user's client 415 is also being multicast to the delay buffer 1515. When the video/audio is stored in the delay buffer 1515, a directory in the delay buffer 1515 provides a cross-reference between the network address of the app/game server 1521-1525 that is the source of the delayed video/audio and the location in the delay buffer 1515 where the delayed video/audio can be found.
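The delay buffer's recording and cross-referencing behavior can be sketched as a bounded per-source ring buffer. This is an illustrative model only; the class and method names are hypothetical, and a real buffer sized for 15 minutes of 60 fps video would hold 15 × 60 × 60 = 54,000 frames per stream.

```python
from collections import deque

class DelayBuffer:
    """Sketch of delay buffer 1515: keeps the most recent `capacity_frames`
    of each compressed stream, keyed by the source app/game server's
    network address (serving as the directory cross-reference)."""
    def __init__(self, capacity_frames):
        self.capacity = capacity_frames
        self.streams = {}   # server_address -> deque of recent frames

    def record(self, server_address, frame):
        buf = self.streams.setdefault(
            server_address, deque(maxlen=self.capacity))
        buf.append(frame)   # oldest frame is evicted once full

    def rewind(self, server_address, frames_back):
        """Fetch a frame from the recent past, as a DVR-style 'rewind'
        would; returns None if the request falls outside the buffer."""
        buf = self.streams.get(server_address)
        if buf is None or frames_back >= len(buf):
            return None
        return buf[len(buf) - 1 - frames_back]
```

With a capacity of 3 frames, recording four frames evicts the first, so rewinding past the third-most-recent frame yields nothing, mirroring the 15-minute limit described above.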

Live games, instantly viewable and instantly playable

App/game servers 1521-1525 can be used not only for running a given application or video game for a user, but also for creating front-end user applications for the hosting service 210 that support navigation through the hosting service 210 and other features. A screen shot of one such front-end user application is shown in Fig. 16, a "Game Finder" screen. This particular user interface screen allows a user to watch 15 games being played live (or delayed) by other users. Each of the "thumbnail" video windows, such as 1600, is a live video window in motion showing video from one user's game. The view shown in a thumbnail may be the same view the player sees, or it may be delayed (e.g., if a user is playing a combat game, the user may not want other users to see where he is hiding, and he may choose to delay any view of his gameplay by a period of time, say 10 minutes). The view may also be a camera view of the game that differs from any player's view. Through menu selections (not shown in this illustration), a user may choose a selection of games to view at once based on a variety of criteria. As a small sampling of exemplary choices, the user may select a random selection of games (such as that shown in Fig. 16), all games of one kind (all being played by different players), only the top-rated players of a game, players at a given level in a game, lower-ranked players (e.g., if a player is learning the basics), players who are "buddies" (or rivals), the games with the most viewers, etc.

Note that, in general, each user decides whether others may watch the video from his game or application and, if so, which users may watch it, when they may watch it, and whether it is viewable only with a delay.

The app/game server 1521-1525 that is generating the user interface screen shown in Fig. 16 acquires the 15 video/audio feeds by sending a message to the app/game server 1521-1525 for each user whose game it is requesting. The message is sent through the inbound routing 1502 or another network. The message includes the size and format of the video/audio requested, and identifies the user viewing the user interface screen. A given user may choose to select "privacy" mode and not permit any other users to view the video/audio of his game (either from his point of view or from another point of view), or, as described in the previous paragraph, a user may choose to allow viewing of the video/audio from his game, but delayed. A user's app/game server 1521-1525 that receives the request and agrees to allow its video/audio to be viewed acknowledges as much to the requesting server, and it also notifies the shared hardware compression 1530 of the need to generate an additional compressed video stream in the requested format or at the requested screen size (assuming the format and screen size differ from those already being generated), and it also indicates the destination for the compressed video (i.e., the requesting server). If the requested video/audio is only delayed, then the requesting app/game server 1521-1525 is so notified, and it acquires the delayed video/audio from the delay buffer 1515 by looking up the video/audio's location in the directory in the delay buffer 1515 along with the network address of the app/game server 1521-1525 that is the source of the delayed video/audio. Once all of these requests have been generated and handled, up to 15 live thumbnail-sized video streams are routed from the outbound routing 1540 to the inbound routing 1502, to the app/game server 1521-1525 generating the user interface screen, where they are decompressed and displayed by the server.
Delayed video/audio streams may be in too large a screen size, and if so, the app/game server 1521-1525 will decompress the streams and scale the video down to thumbnail size. In one embodiment, requests for audio/video are sent to (and managed by) a central "management" service similar to the hosting service control system of fig.4a (not shown in Fig. 15), which then redirects the requests to the appropriate app/game server 1521-1525. Moreover, in one embodiment, no request may be required at all, because the thumbnails are "pushed" to the clients of those users that allow it.

The audio from 15 games all mixed simultaneously might create a cacophony of sounds. The user may choose to mix all the sounds together this way (perhaps just to get a sense of the "din" created by all the action being viewed), or the user may choose to listen to the audio from just one game at a time. The selection of a single game is accomplished by moving the yellow selection box 1601 to a given game (the yellow box can be moved using the arrow keys on a keyboard, by moving a mouse, by moving a joystick, or by pushing directional buttons on another device such as a mobile phone). Once a single game is selected, only the audio from that game plays. Game information 1602 is also shown. In the case of this game, for example, the publisher logo ("EA") and the game logo, "Need for Speed Carbon", along with an orange horizontal bar, indicate in relative terms the number of people playing or viewing the game at that particular moment (many, in this case, so the game is "Hot"). Further, "Stats" are provided, indicating that there are 145 players actively playing 80 different instantiations of Need for Speed (i.e., it can be played either as an individual-player game or as a multiplayer game), and that there are 680 viewers (of whom this user is one). Note that these statistics (and other statistics) are collected by the hosting service control system 401 and stored on RAID arrays 1511-1512, both for keeping logs of the hosting service 210 operation and for appropriately billing users and paying the publishers who provide content. Some of the statistics are recorded through actions by the service control system 401, and some are reported to the service control system 401 by the individual app/game servers 1521-1525.
For example, the app/game server 1521-1525 running this Game Finder application sends messages to the hosting service control system 401 when games are being viewed (and when viewing ceases), so that the system may update its statistics of how many games are in view. Some of the statistics are made available to front-end user applications, such as this Game Finder application.

If the user clicks an activation button on his input device, he will see the thumbnail video in the yellow box zoom up, while continuing to play live, to full screen size. This effect is shown in process in Fig. 17. Note that video window 1700 has grown in size. To implement this effect, the app/game server 1521-1525 requests from the app/game server 1521-1525 running the game that a copy of the game's video stream, at full screen size (at the resolution of the user's display 422), be routed to it. The app/game server 1521-1525 running the game notifies the shared hardware compressor 1530 that a thumbnail-sized copy of the game is no longer needed (unless another app/game server 1521-1525 requires such a thumbnail), and then directs it to send a full-screen-size copy of the video to the app/game server 1521-1525 zooming the video. The user playing the game may or may not have a display 422 of the same resolution as that of the user zooming up the game. Further, other viewers of the game may or may not have displays 422 of the same resolution as the user zooming up the game (and may have different audio playback means, e.g., stereo or surround sound).
Thus, the shared hardware compressor 1530 determines whether a suitable compressed video/audio stream meeting the requirements of the user requesting the video/audio stream is already being generated; if one exists, it notifies the outbound routing 1540 to route a copy of the stream to the app/game server 1521-1525 zooming the video, and if not, it compresses another copy of the video suitable for that user and instructs the outbound routing to send the stream back to the inbound routing 1502 and to the app/game server 1521-1525 zooming the video. That server, now receiving a full-screen version of the selected video, decompresses it and gradually scales it up to full size.
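The reuse-or-compress decision just described can be sketched as a small lookup keyed on the source server and the requested format. The function and variable names here are hypothetical; the point is only the branch between routing a copy of an existing stream and instructing the compressor to produce a new one.

```python
def route_or_compress(existing_streams, requested_format, game_server, requester):
    """Sketch of the decision above: if a compressed stream in the requested
    format already exists for this game server, add the requester as another
    routing destination; otherwise register a newly compressed stream.
    `existing_streams` maps (game_server, format) -> set of destinations."""
    key = (game_server, requested_format)
    if key in existing_streams:
        existing_streams[key].add(requester)   # reuse: one more destination
        return "routed_existing"
    existing_streams[key] = {requester}        # new compression resource used
    return "compressed_new"
```

Two viewers requesting the same full-screen format from the same game server therefore consume a single compression resource, with the outbound routing fanning the one stream out to both.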

Fig. 18 illustrates how the screen looks after the game has completely zoomed up to full screen and the game is shown at the full resolution of the user's display 422, as indicated by the image pointed to by arrow 1800. The app/game server 1521-1525 running the Game Finder application sends messages to the other app/game servers 1521-1525 that had been providing thumbnails, indicating that they are no longer needed, and messages to the hosting service control system 401 indicating that the other games are no longer being viewed. At this point, the only display it generates is an overlay 1801 at the top of the screen, which provides information and menu controls to the user. Note that as this game has progressed, the audience has grown to 2,503 viewers. With so many viewers, there are bound to be many viewers with displays 422 of the same or similar resolution (each app/game server 1521-1525 has the ability to scale the video to adjust the fit).

If the game shown is a multiplayer game, the user may decide to join the game at some point. The hosting service 210 may or may not allow the user to join, for a variety of reasons. For example, the user may have to pay to play the game and may choose not to; the user may not have sufficient ranking to join that particular game (e.g., he would not be competitive with the other players); or the user's Internet connection may not have low enough latency to allow him to play (e.g., there is no latency constraint for viewing games, so a game being played far away (indeed, on another continent) can be viewed without latency concerns, but for a game to be played, the latency must be low enough for the user to (a) enjoy the game and (b) be on equal footing with other players who may have lower-latency connections). If the user is permitted to play, then the app/game server 1521-1525 that had been providing the Game Finder user interface for the user requests that the hosting service control system 401 initiate (i.e., locate and start up) an app/game server 1521-1525 suitably configured for the particular game, to load the game from a RAID array 1511-1512; then the hosting service control system 401 instructs the inbound routing 1502 to transfer the control signals from the user to the app/game server now hosting the game, and it instructs the shared hardware compression 1530 to switch from compressing the video/audio from the app/game server that had been hosting the Game Finder application to compressing the video/audio from the app/game server now hosting the game.
The vertical sync of the Game Finder app/game server and of the new app/game server hosting the game are not synchronized, and as a result there is likely to be a time difference between the two syncs. Because the shared video compression hardware 1530 begins compressing video once an app/game server 1521-1525 completes a video frame, the first frame from the new server may be completed sooner than a full frame time after the old server's last frame, which may be before the prior compressed frame has completed its transmission (e.g., consider transmit time 992 of fig.9b: if uncompressed frame 3 963 had been completed half a frame time earlier, it would have impinged upon transmit time 992). In such a situation, the shared video compression hardware 1530 ignores the first frame from the new server (e.g., just as frame 4 964 is ignored 974), the client 415 holds the last frame from the old server for an extra frame time, and the shared video compression hardware 1530 begins compressing the next frame-time video from the new app/game server hosting the game. Visually, to the user, the transition from one app/game server to the other is seamless. The hosting service control system 401 then notifies the app/game server 1521-1525 that had been hosting the Game Finder to switch to an idle state until it is needed again.
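The drop rule in this paragraph (and the corresponding behavior summarized in the abstract: a frame that would exceed the channel's capacity is skipped and the client keeps displaying the previous frame) can be modeled as follows. This is a simplified, hypothetical sketch: it treats "less than one frame period since the last transmitted frame" as the condition for skipping, which generalizes the single-frame case described above.

```python
def frames_to_transmit(completion_times_ms, frame_period_ms):
    """Sketch of the frame-drop rule: a frame whose compression completes
    while the previous frame is presumed still in transmission (i.e., less
    than one frame period after the last transmitted frame) is skipped, and
    the client simply continues displaying the prior frame. Returns the
    completion times of the frames that are actually transmitted."""
    transmitted = []
    last_sent = None
    for t in completion_times_ms:
        if last_sent is not None and t - last_sent < frame_period_ms:
            continue          # drop this frame; client holds the old one
        transmitted.append(t)
        last_sent = t
    return transmitted
```

For instance, with a 16 ms frame period, a frame completing at 24 ms (only 8 ms after the frame sent at 16 ms, as when an unsynchronized new server finishes early) is dropped, and transmission resumes with the next frame.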

The user is then able to play the game. What is exceptional is that the game plays perceptually instantly (since it is loaded onto the app/game server 1521-1525 from a RAID array 1511-1512 at gigabit/second speed), and the game is loaded onto a server exactly suited for the game, with an operating system exactly configured for the game with ideal drivers and registry configuration (in the case of Windows), and with no other applications running on the server that might compete with the game's operation.

In addition, as the user progresses through the game, each segment of the game is loaded into the server at gigabit/second speed (i.e., 1 gigabyte loads in 8 seconds) from the RAID array 1511-1512, and because of the vast storage capacity of the RAID array 1511-1512 (since it is a shared resource among many users, it can be very large, yet still be cost-effective), geometry setup or other game-segment setup can be pre-computed, stored on the RAID array 1511-1512, and loaded extremely rapidly. Moreover, because the hardware configuration and computational capability of each application/game server 1521-1525 is known, pixel and vertex shaders can be pre-computed.
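The load-time arithmetic above can be checked with a short sketch (a hypothetical helper, not part of the described system):

```python
def load_time_seconds(segment_bytes, link_bits_per_sec):
    """Time to stream a game segment from the RAID array at the given link rate."""
    return segment_bytes * 8 / link_bits_per_sec

GIGABIT = 1_000_000_000  # bits/second

# A 1 gigabyte game segment over a 1 gigabit/second connection loads in
# 8 seconds, as stated in the text.
t = load_time_seconds(1_000_000_000, GIGABIT)
print(t)  # 8.0
```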

Accordingly, the game starts almost immediately, it runs in an ideal environment, and subsequent segments load almost immediately.

But beyond these advantages, the user is able to watch others playing the game (through the Game Finder, described earlier, and by other means), both to decide whether the game is interesting and, if so, to pick up tips from watching others. And the user is able to demo the game instantly, without having to wait for a lengthy download and/or installation, and the user is able to play the game instantly, perhaps on a trial basis for a small fee, or on a longer-term basis. And the user is able to play the game on a Windows PC, a Macintosh, on a television set, at home, while traveling, and even on a mobile phone with a low-latency wireless connection. And this can all be accomplished without ever physically owning a copy of the game.

As mentioned earlier, the user may decide not to allow others to view his gameplay, to allow his gameplay to be viewed after a delay, to allow his gameplay to be viewed by selected users, or to allow his gameplay to be viewed by all users. Regardless, the video/audio will be stored, in one embodiment, for 15 minutes in the delay buffer 1515, and the user will be able to "rewind" and view his prior gameplay, and pause it, play it back slowly, fast-forward through it, and so on, just as he would be able to do when watching TV with a digital video recorder (DVR). Although in this example the user is playing a game, the same "DVR" capability is available if the user is using an application. This can be helpful in reviewing prior work, and in other applications, as detailed below. Further, if the game was designed with rewind capability based on recorded game-state information, such that the camera view can be changed and so on, then this "3D DVR" capability will also be supported, but it will require the game to be designed with support for it. The "DVR" capability using delay buffer 1515 works with any game or application, albeit limited to the video that was generated while the game or application was used; but in the case of games with 3D DVR capability, the user can control a "fly through" in 3D of a previously played segment, have the delay buffer 1515 record the resulting video, and have the game state of the game segment recorded. Accordingly, a particular "fly through" is recorded as compressed video, but since the game state is also recorded, a different fly-through of the same game segment can be made at a later time.
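A minimal sketch of the 15-minute delay buffer 1515 and its "rewind" behavior, assuming a fixed frame rate and treating each compressed frame as an opaque item; all names are illustrative, not the actual implementation:

```python
from collections import deque

class DelayBuffer:
    """Ring buffer holding the last N seconds of compressed frames,
    supporting the DVR-like 'rewind' viewing described above."""

    def __init__(self, seconds=15 * 60, fps=60):
        self.fps = fps
        self.frames = deque(maxlen=seconds * fps)  # oldest frames fall off

    def push(self, compressed_frame):
        self.frames.append(compressed_frame)

    def rewind(self, seconds_back):
        """Return the frame from `seconds_back` seconds ago, if still retained."""
        idx = len(self.frames) - 1 - seconds_back * self.fps
        return self.frames[idx] if idx >= 0 else None

buf = DelayBuffer(seconds=10, fps=60)
for i in range(600):            # 10 seconds of frames
    buf.push(f"frame-{i}")
print(buf.rewind(5))            # the frame from 5 seconds back -> frame-299
```

Because `deque(maxlen=...)` discards the oldest entry on overflow, the buffer naturally keeps only the most recent window, matching the 15-minute retention described in the text.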

As described below, each user on the hosting service 210 has a User Page, where he can post information about himself and other data. Among other things, users are able to post video segments from gameplay that they have saved. For example, if the user has overcome a particularly difficult challenge in a game, he can "rewind" to just before the point of his great accomplishment in the game, and then instruct the hosting service 210 to save a video segment of some duration (e.g., 30 seconds) on his User Page for other users to watch. To implement this, it is simply a matter of the application/game server 1521-1525 that the user is using copying the video stored in the delay buffer 1515 to the RAID array 1511-1512 and then indexing that video segment on that user's User Page.

If the game has the 3D DVR capability, as described above, then the game-state information required for the 3D DVR can also be recorded by the user and made available on that user's User Page.

In the event that a game is designed to allow "spectators" (i.e., users that are able to travel through the 3D world and observe the action without participating in it) in addition to active players, the Game Finder enables users to join games as spectators as well as players. From an implementation point of view, there is no difference to the hosting service 210 whether a user is a spectator or an active player. The game is loaded onto an application/game server 1521-1525, and the user controls the game (e.g., controlling a virtual camera that views the world). The only difference is the user's experience of the game.

Collaboration of multiple users

Another feature of the hosting service 210 is the ability for multiple users to collaborate while viewing live video, even when using widely disparate display devices. This is useful both when playing games and when using applications.

Many PCs and mobile phones are equipped with video cameras and have the ability to perform real-time video compression, particularly when the image is small. Also, small cameras are available that can be attached to a television, and real-time compression is not difficult to implement, either in software or using one of the many hardware compression devices for compressing video. Also, many PCs and all mobile phones have microphones, and headsets with microphones are commercially available.

Such cameras and/or microphones, combined with local video/audio compression capability (particularly employing the low-latency video compression techniques described herein), enable a user to transmit video and/or audio from the user premises 211 to the hosting service 210, together with the input-device control data. When such techniques are employed, the capability illustrated in Fig. can be achieved: a user can have his video and audio 1900 appear on the screen within another user's game or application. This example is a multiplayer game where teammates collaborate in a car race. A user's video/audio can be selectively viewed/heard only by his teammates. And, since there is effectively no latency when using the techniques described above, the players can talk or make gestures to each other in real time without perceptible delay.

This video/audio integration is accomplished by having the compressed video and/or audio from a user's camera/microphone arrive as inbound Internet traffic 1501. Then the inbound routing 1502 routes the video and/or audio to the application/game servers 1521-1525 that are permitted to view/hear it. Then, the users of the respective application/game servers 1521-1525 that elect to use the video and/or audio decompress it and integrate it as desired to appear within the game or application, such as illustrated by 1900.

The example of Fig. shows how such collaboration is used in a game, but such collaboration can also be an immensely powerful tool for applications. Consider a situation where a large building is being designed for New York City by architects in Chicago for a real-estate developer based in New York, but the decision involves a financial investor who is traveling and happens to be at the airport in Miami, and a decision needs to be made about certain design elements of the building in terms of how it fits in with the buildings near it, so as to satisfy both the investor and the real-estate developer. Suppose the architectural firm has a high-resolution monitor with a camera attached to a PC in Chicago, the real-estate developer has a laptop with a camera in New York, and the investor has a mobile phone with a camera in Miami. The architectural firm can use the hosting service 210 to host a powerful architectural design application capable of highly realistic 3D rendering, with access to a large database of the buildings in New York City as well as a database of the building under design. The architectural design application executes on one, or, if it requires a great deal of computational power, on several of the application/game servers 1521-1525. Each of the 3 users at disparate locations connects to the hosting service 210, and each has a simultaneous view of the video output of the architectural design application, but each view is appropriately sized by the shared hardware compression 1530 for the given device and network connection characteristics that each user has (e.g., the architectural firm may see a 2560×1440, 60 fps image through a 20 Mbps commercial Internet connection, the real-estate developer in New York may see a 1280×720, 60 fps image over a 6 Mbps DSL connection on his laptop, and the investor may see a 320×180, 60 fps image over a 250 Kbps cellular data connection on her mobile phone).
Each party hears the voices of the others (the conference calling is handled by any of many widely available conference calling software packages on the application/game server(s) 1521-1525) and, through actuation of a button on a user input device, a user is able to make video of himself appear on screen using his local camera. As the meeting proceeds, the architects are able to show what the building looks like as they rotate it and fly by it next to another building in the area, with extremely photorealistic 3D rendering, and the identical video is visible to all parties, at the resolution of each party's display device. It does not matter that none of the local devices used by any party is capable of handling 3D animation with such realism, let alone downloading or even storing the vast database required to render the surrounding buildings in New York City. From the point of view of each of the users, despite the distance apart and the disparate local devices, they simply have a seamless experience with an incredible degree of realism. And when one party wants his face to be seen, to better convey his emotional state, he can do so. Further, if either the real-estate developer or the investor wants to take control of the architectural program and use his own input device (be it a keyboard, mouse, keypad or touch screen), he can do so, and it will respond with no perceptible latency (assuming his network connection has reasonably low latency). For example, in the case of the mobile phone, if the mobile phone is connected to a WiFi network at the airport, it will have very low latency. But if it uses a cellular data network available today in the US, it will probably suffer a noticeable lag.
Still, for most purposes of the meeting, where the investor is watching the architects control the building fly-by, or where the parties are video teleconferencing, even cellular latency should be acceptable.
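The per-client sizing of the shared output stream described in the example above can be sketched as a simple table-driven selection; the thresholds mirror the three viewers in the example, and all names here are hypothetical:

```python
# Hypothetical mapping from a client's measured downlink rate to the output
# resolution the shared compression hardware 1530 might target.
PROFILES = [
    (20_000_000, (2560, 1440)),  # 20 Mbps commercial link
    (6_000_000, (1280, 720)),    # 6 Mbps DSL
    (250_000, (320, 180)),       # 250 Kbps cellular link
]

def pick_resolution(downlink_bps):
    """Choose the largest profile the client's downlink can sustain."""
    for min_bps, resolution in PROFILES:
        if downlink_bps >= min_bps:
            return resolution
    return None  # below the minimum supported rate

print(pick_resolution(6_000_000))   # (1280, 720)
print(pick_resolution(250_000))     # (320, 180)
```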

Finally, at the end of the collaborative conference call, the real-estate developer and the investor will have made their comments and signed off from the hosting service; the architectural firm will be able to "rewind" the video of the conference that was recorded in the delay buffer 1515 and review the comments, facial expressions and/or actions applied to the 3D model of the building made during the meeting. If there are particular segments they want to save, those segments of video/audio can be moved from the delay buffer 1515 to the RAID array 1511-1512 for archival storage and later playback.

Also, from a cost perspective, if the architects need to use the computing power and the large database of New York City for only a 15-minute conference call, they need pay only for the time during which those resources are used, rather than owning high-powered workstations and purchasing an expensive copy of a large database.

Extensive public video services

The hosting service 210 enables an unprecedented opportunity for establishing extensive public video services on the Internet. Fig. shows an illustrative User Page for a gamer on the hosting service 210. As with the Game Finder application, the User Page is an application that runs on one of the application/game servers 1521-1525. All of the thumbnails and video windows on this page show constantly moving video (if the segments are short, they loop).

Using a video camera or by uploading video, the user (whose username is "KILLHAZARD") is able to post a video of himself 2000 that other users can view. The video is stored on the RAID array 1511-1512. Also, when other users come to KILLHAZARD's User Page, if KILLHAZARD is using the hosting service 210 at the time, live video 2001 of whatever he is doing will be shown (assuming he permits users viewing his User Page to watch him). This is accomplished by the application/game server 1521-1525 hosting the User Page application requesting from the service control system 401 whether KILLHAZARD is active, and if so, which application/game server 1521-1525 he is using. Then, using the same methods used by the Game Finder application, a compressed video stream of suitable resolution and format is sent to the application/game server 1521-1525 running the User Page application, and it is displayed on the screen. If a user selects the window with KILLHAZARD's live gameplay and then appropriately clicks on his input device, the window zooms up (again using the same methods as the Game Finder application), and the live video fills the screen at the resolution of the watching user's display device 422, appropriate for the characteristics of the watching user's Internet connection.

A key advantage of this over prior-art approaches is that a user viewing the User Page is able to see, live, a game being played that the user does not own, and for which he may well not have a local computer or game console capable of playing it. It offers a great opportunity for the user to see the user shown on the User Page "in action" playing a game, and it is an opportunity to learn about a game that the viewing user might want to try or get better at.

Camera-recorded or uploaded video clips from KILLHAZARD's buddies 2002 are also shown on the User Page, and underneath each video clip is text indicating whether the buddy is online playing a game (e.g., six_shot is playing the game "Eragon", and MrSnuggles99 is offline, etc.). By clicking on a menu item (not shown), the buddy video clips switch from showing recorded or uploaded videos to live video of whatever the buddies who are currently playing games on the hosting service 210 are doing at that moment in their games. Accordingly, it becomes a Game Finder, grouped by buddies. If a buddy's game is selected and the user clicks on it, it will zoom up to full screen, and the user will be able to watch the game played full screen, live.

Again, the user viewing the buddy's game does not own a copy of the game, nor the local computing/game console resources to play the game. The game viewing is effectively instantaneous.

As previously described above, when a user plays a game on the hosting service 210, the user is able to "rewind" the game and find a video segment he wants to save, and then save the video segment to his User Page. These are called "Brag Clips". The video segments 2003 are all Brag Clips 2003 saved by KILLHAZARD from previous games that he has played. Number 2004 shows how many times a Brag Clip has been viewed, and when the Brag Clip is viewed, users have an opportunity to rate it, and the number of orange keyhole-shaped icons 2005 indicates how high the rating is. The Brag Clips 2003 loop constantly when a user views the User Page, along with the rest of the video on the page. If the user selects and clicks on one of the Brag Clips 2003, it zooms up to present the Brag Clip 2003, along with DVR controls that allow the clip to be played, paused, rewound, fast-forwarded, stepped through, etc.

The Brag Clip 2003 playback is implemented by the application/game server 1521-1525 loading the compressed video segment that was stored on the RAID array 1511-1512 when the user recorded the Brag Clip, decompressing it, and playing it back.

Brag Clips 2003 can also be "3D DVR" video segments (i.e., a game-state sequence from the game that can be replayed and allows the user to change the camera viewpoint) from games that support such capability. In this case, the game-state information is stored, in addition to a compressed-video recording of the particular "fly through" the user made when the game segment was recorded. When the User Page is being viewed, and all of the thumbnails and video windows are constantly looping, a 3D DVR Brag Clip 2003 constantly loops the Brag Clip 2003 that was recorded as compressed video when the user recorded the "fly through" of the game segment. But when a user selects a 3D DVR Brag Clip 2003 and clicks on it, in addition to the DVR controls that allow the compressed-video Brag Clip to be played, the user is able to click on a button that gives him 3D DVR capability for the game segment. He will be able to control a camera "fly through" of the game segment on his own, and, if he wishes (and the user who owns the User Page so allows), he will be able to record an alternative "fly through" Brag Clip in compressed-video form, which will then be available to other viewers of the User Page (either immediately, or after the owner of the User Page has had a chance to review the Brag Clip).

This 3D DVR Brag Clip 2003 capability is enabled by activating, on another application/game server 1521-1525, the game that is about to replay the recorded game-state information. Since the game can be activated almost instantaneously (as previously described), it is not difficult to activate it, with its play limited to the game state recorded for the Brag Clip segment, and then allow the user to perform a camera "fly through" while recording the compressed video to a delay buffer 1515. Once the user has completed performing the "fly through", the game is deactivated.
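The 3D DVR replay depends on the game re-running deterministically: restarting it from the saved game state and re-applying the recorded inputs reproduces the same segment, so only the camera need differ between fly-throughs. A toy sketch of that idea (all names invented, not the actual game engine):

```python
def replay_segment(initial_state, recorded_inputs, step):
    """Re-run a game segment from its saved state by re-applying the
    recorded inputs through a deterministic step function."""
    state = dict(initial_state)
    for inp in recorded_inputs:
        state = step(state, inp)
    return state

# Toy deterministic step: a character's position advances by each input.
def step(state, inp):
    return {"x": state["x"] + inp}

final = replay_segment({"x": 0}, [1, 2, 3], step)
print(final)  # {'x': 6} -- identical on every replay, so only the camera varies
```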

From the user's point of view, performing a "fly through" with a 3D DVR Brag Clip 2003 is no more effort than operating the DVR controls of a linear Brag Clip 2003. He may know nothing about the game, or even how to play the game. He is just a virtual-camera operator peering into a 3D world during a game segment recorded by another.

Users will also be able to overdub their own audio onto Brag Clips, either recorded from microphones or uploaded. In this way, Brag Clips can be used to create custom animations using characters and actions from games. This animation technique is commonly known as "machinima".

As users progress through games, they will achieve differing skill levels. The games played report these accomplishments to the service control system 401, and the skill levels will be shown on User Pages.

Interactive animated ads

Online advertising has transitioned from text, to still images, to video, and now to interactive segments, typically implemented using animation thin clients such as Adobe Flash. The reason animation thin clients are used is that users are impatient with delays that stand between them and the product or service being pitched to them. Also, thin clients run on very low-performance PCs, and as such, the advertiser can have a high degree of confidence that the interactive ad will work properly. Unfortunately, animation thin clients such as Adobe Flash are limited in the degree of interactivity and the duration of the experience (to mitigate download time).

Fig. shows an interactive advertisement in which the user is to select the exterior and interior colors of a car while the car rotates in a showroom, with real-time ray tracing showing how the car looks. Then the user chooses an "avatar" to drive the car, and then the user can take the car for a drive, either on a race track or through an exotic locale such as Monaco. The user can select a larger engine, or better tires, and can then see how the changed configuration affects the car's ability to accelerate or hold the road.

Of course, the advertisement is effectively a sophisticated 3D video game. But to play such an advertisement on a PC or video game console might require a 100-megabyte download and, in the case of a PC, might require the installation of special drivers, and it might not run at all if the PC lacks adequate CPU or GPU computing capability. Thus, such advertisements are impractical in prior-art configurations.

In the hosting service 210, such advertisements launch almost instantly and run perfectly, no matter what the capabilities of the user's client 415 are. So, they launch more quickly than thin-client online ads, with a far richer experience and a high degree of reliability.

Streaming of geometry during real-time animation

The RAID arrays 1511-1512 and the inbound routing 1502 can provide data rates so fast, and with latencies so low, that it is possible to design video games and applications that rely upon the RAID arrays 1511-1512 and the inbound routing 1502 to reliably deliver geometry on the fly in the midst of game play, or in an application during real-time animation (e.g., a fly-through with a complex database).

With prior-art systems, such as the video game system shown in Fig. 1, the available mass storage devices, particularly in practical home devices, are far too slow to stream geometry in during game play, except in situations where the required geometry is somewhat predictable. For example, in a driving game where there is a specified roadway, the geometry for the buildings coming into view can be reasonably well predicted, and the mass storage devices can seek in advance to the location of the upcoming geometry.

But in a complex scene with unpredictable changes (e.g., in a battle scene with complex characters all around), if the RAM of the PC or video game system is completely filled with geometry for the objects currently in view, and the user then suddenly turns his character around to see what is behind him, then, if the geometry has not been pre-loaded into RAM, there may be a delay before it can be displayed.

In the hosting service 210, the RAID arrays 1511-1512 can stream data at speeds in excess of Gigabit Ethernet, and with a SAN network, it is possible to achieve 10 gigabits/second over 10 Gigabit Ethernet or other network technologies. At 10 gigabits/second, a gigabyte of data loads in under a second. In a 60 fps frame period (16.67 ms), approximately 170 megabits (21 MB) of data can be loaded. Rotating media, of course, even in a RAID configuration, will still incur latencies greater than a frame period, but flash-based RAID mass storage will eventually be as large as rotating-media RAID arrays and will not incur such high latency. In one embodiment, massive RAM write-through caching is used to provide very low-latency access.
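The per-frame-period figure above follows directly from the link rate; a small sketch of the arithmetic (the helper name is hypothetical):

```python
def bytes_per_frame_period(link_bits_per_sec, fps=60):
    """How much geometry data can stream in during one frame period."""
    return link_bits_per_sec / fps / 8

# At 10 gigabits/second and 60 frames/second (a 16.67 ms frame period),
# roughly 21 MB can be delivered per frame, matching the figure in the text.
mb = bytes_per_frame_period(10_000_000_000) / 1_000_000
print(round(mb, 1))  # 20.8
```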

Accordingly, with a sufficiently fast network and sufficiently low-latency mass storage, geometry can be streamed into the application/game servers 1521-1525 as fast as the CPUs and/or GPUs can process the 3D data. So, in the example given previously, where the user turns his character around suddenly and looks back, the geometry for all of the characters behind him can be loaded before the character completes the rotation, and thus, to the user, it will seem as if he or she is in a photorealistic world that is as real as live action.

As discussed earlier, one of the last frontiers in photorealistic computer animation is the human face, and because of the sensitivity of the human eye to imperfections, the slightest error in a photoreal face can cause a negative reaction in the viewer. Fig. shows a live performance captured using Contour™ Reality Capture technology (subject of the co-pending applications: "Apparatus and method for capturing the motion of a performer," Ser. No. 10/942609, filed September 15, 2004; "Apparatus and method for capturing the expression of a performer," Ser. No. 10/942413, filed September 15, 2004; "Apparatus and method for improving marker identification within a motion capture system," Ser. No. 11/066954, filed February 25, 2005; "Apparatus and method for performing motion capture using shutter synchronization," Ser. No. 11/077628, filed March 10, 2005; "Apparatus and method for performing motion capture using a random pattern on capture surfaces," Ser. No. 11/255854, filed October 20, 2005; "System and method for performing motion capture using phosphor application techniques," Ser. No. 11/449131, filed June 7, 2006; "System and method for performing motion capture by strobing a fluorescent lamp," Ser. No. 11/449043, filed June 7, 2006; "System and method for three dimensional capture of stop-motion animated characters," Ser. No. 11/449127, filed June 7, 2006, the rights to each of which are owned by the applicant of the present CIP application), which results in a very smooth captured surface, and then a high-polygon-count tracked surface (i.e., the polygon motion precisely follows the motion of the face). Finally, when the video of the live performance is mapped onto the tracked surface to produce a textured surface, a photoreal result is produced.

Although current GPU technology is able to render the number of polygons in the tracked surface and texture, and to light the surface in real time, if the polygons and textures change every frame period (which produces the most photoreal results), all of the available RAM of a modern PC or video game console will quickly be consumed.

Using the streaming geometry techniques described above, it becomes practical to continuously feed geometry into the application/game servers 1521-1525 so that they can continuously animate photoreal faces, making it feasible to create video games with faces that are almost indistinguishable from live-action faces.

Integration of linear content with interactive features

Movies, television programs and audio material (collectively, "linear content") are widely available to home and office users in many forms. Linear content can be acquired on physical media, such as CD, DVD, HD-DVD and Blu-ray media. It can also be recorded by DVRs from cable TV and satellite broadcasts. And it is available as pay-per-view (PPV) content via satellite and cable TV, and as video-on-demand (VOD) on cable TV.

Increasingly, linear content is available through the Internet, both as downloaded and as streaming content. Today, there really is no one place to experience all of the features associated with linear media. For example, DVDs and other video optical media typically have interactive features not available elsewhere, like director's commentaries, "making of" featurettes, etc. Online music sites have cover art and song information generally not available on CDs, but not all CDs are available online. And Web sites associated with television programs often have extra features, blogs and sometimes comments from the actors or creative staff.

Further, with many motion pictures or sports events, there are often video games that are released (in the case of motion pictures) often together with the linear media, or (in the case of sports) that can be closely tied to real-world events (e.g., the trading of players).

The hosting service 210 is well suited for the delivery of linear content in linking together the disparate forms of related content. Certainly, delivering motion pictures is no more demanding than delivering highly interactive video games, and the hosting service 210 is able to deliver linear content to a wide range of devices, in homes or offices, or to mobile devices. Fig. shows an illustrative user interface page for the hosting service 210 that shows a selection of linear content.

But unlike most linear content delivery systems, the hosting service 210 is also able to deliver the related interactive components (e.g., the menus and features on DVDs, the interactive overlays on HD-DVDs, and the Adobe Flash animation (as explained below) on Web sites). Thus, the limitations of the client device 415 no longer introduce limitations as to which features are available.

Further, the hosting system 210 is able to link linear content with video game content dynamically and in real time. For example, if a user is watching a Quidditch match in a Harry Potter movie and decides she would like to try playing Quidditch, she can just click a button and the movie will pause and immediately she will be transported to the Quidditch segment of a Harry Potter video game. After playing the Quidditch match, another click of a button, and the movie will resume instantly.

With photoreal graphics and production technology where photographically captured video is indistinguishable from live-action characters, when a user makes a transition from a Quidditch game in a live-action movie to a Quidditch game in a video game on a hosting service as described herein, the two scenes are virtually indistinguishable. This provides entirely new creative options for directors of both linear content and interactive (e.g., video game) content, as the lines between the two worlds become indistinguishable.

Utilizing the hosting service architecture shown in Fig., the viewer can be given control of a virtual camera in a 3D movie. For example, in a scene that takes place within a train car, it would be possible to allow the viewer to control the virtual camera and look around the car while the story progresses. This assumes that all of the 3D objects ("assets") in the car are available, as well as an adequate level of computing power capable of rendering the scenes in real time, as well as the original movie.

And even for non-computer-generated entertainment, very exciting interactive features can be offered. For example, the 2005 motion picture "Pride and Prejudice" had many scenes in ornate old English mansions. For certain mansion scenes, the user can pause the video and then control the camera to take a tour of the mansion, or perhaps of the surrounding area. To implement this, a camera with a fisheye lens can be carried through the mansion while its position is tracked, much like prior-art QuickTime VR of Apple, Inc. is implemented. The various frames can then be transformed so the images are not distorted, and then stored on the RAID array 1511-1512 along with the movie, and played back when the user chooses to go on a virtual tour.

Turning to sporting events, a live sporting event, such as a basketball game, can be streamed through the hosting service 210 for users to watch, just as they would watch it on regular TV. After the users have watched a particular game, a video game of that game (eventually, with players looking as photoreal as the real players) can come up, with the players starting in identical positions, and the users (perhaps each taking control of one player) can replay the game to see if they can do better than the real players.

The hosting service 210 described herein is extremely well-suited to support this futuristic world, because it can bring to bear computing power and mass storage resources that are impractical to install in a home or in most office settings, and its computing resources are always up to date with the latest available computing hardware, whereas in a home setting there will always be homes with older-generation video game consoles and PCs. And, in the hosting service 210, all of this computing complexity is hidden from the user, so even though users may be using very sophisticated systems, from the user's point of view it is as simple as changing channels on a television. Further, users can access all of this computing power, and the experiences that computing power can bring, from any client 415.

Multiplayer games

When a game is a multiplayer game, it can communicate both with the app/game servers 1521-1525 through the inbound routing network 1502 and, with a network bridge to the Internet (not shown), with servers or game machines that are not running in the hosting service 210. When playing a multiplayer game with computers on the general Internet, the app/game servers 1521-1525 will have the benefit of extremely fast access to the Internet (compared to a game running on a server at home), but they will be limited by the capabilities of the other computers playing the game on slower connections, and also potentially limited by the fact that game servers on the Internet are designed to accommodate the least common denominator, which may be home computers on relatively slow consumer Internet connections.

But when a multiplayer game is played entirely within the server center of the hosting service 210, a world of difference can be achieved. Each app/game server 1521-1525 hosting a game for a user will be interconnected with the other app/game servers 1521-1525, and with any servers hosting the central control for the multiplayer game, with extremely high-speed, extremely low-latency connectivity and vast, very fast storage arrays. For example, if Gigabit Ethernet is used for the inbound routing network 1502, the app/game servers 1521-1525 communicate among themselves, and with any servers hosting the central control for the multiplayer game, at gigabit/second speeds with perhaps only 1 ms of latency or less. In addition, the RAID arrays 1511-1512 will be able to respond very rapidly and then transfer data at gigabit/second speeds. As an example, if a user customizes a character in terms of look and accoutrements such that the character has a large amount of geometry and behavior data unique to that character, with prior-art systems limited to a game client running in a home on a PC or game console, if that character were to come into view of another user, that user would have to wait through a long, slow download until all of the geometry and behavior data loads onto his computer. Within the hosting service 210, that same download can be served over Gigabit Ethernet from the RAID arrays 1511-1512 at gigabit/second speed. Even if the home user had an 8 Mbps Internet connection (which is extremely fast by today's standards), Gigabit Ethernet is on the order of 100 times faster. So, what would take a minute over a fast Internet connection takes less than one second over Gigabit Ethernet.
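As a rough illustration of the bandwidth difference just described, the following sketch computes idealized download times for a hypothetical 60 MB custom-character asset (the asset size is an assumption for illustration, not a figure from this disclosure) over an 8 Mbps home connection versus Gigabit Ethernet:

```python
def transfer_time_seconds(size_bytes: float, rate_bits_per_sec: float) -> float:
    """Idealized transfer time, ignoring protocol overhead and latency."""
    return size_bytes * 8 / rate_bits_per_sec

# A hypothetical 60 MB asset of character geometry and behavior data.
asset_bytes = 60 * 1024 * 1024

home = transfer_time_seconds(asset_bytes, 8e6)   # 8 Mbps home connection
gige = transfer_time_seconds(asset_bytes, 1e9)   # Gigabit Ethernet in the server center

print(f"home: {home:.1f} s, server center: {gige:.2f} s, ratio: {home / gige:.0f}x")
```

The idealized ratio is 125x, consistent with the statement that Gigabit Ethernet is on the order of 100 times faster; real transfers would also incur protocol and queuing overhead.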

Groups of top players and tournaments

The hosting service 210 is extremely well-suited for tournaments. Because no game is running in a local client, there is no opportunity for users to cheat. Also, because of the ability of the outbound routing 1540 to multicast UDP streams, the hosting service 210 can broadcast major tournaments to thousands of people in the audience at once.

In fact, when certain video streams are so popular that thousands of users are receiving the same stream (e.g., showing the view of a major tournament), it may be more efficient to send the video stream to a content delivery network (CDN), such as Akamai or Limelight, for mass distribution to many client devices 415.

A similar level of efficiency can be gained when a CDN is used to show Game Finder pages of groups of top players.

For major tournaments, celebrity sports commentators can be used to provide commentary during certain matches. Although a large number of users will be watching a major tournament, a relatively small number will be playing in it. The audio from the celebrity commentator can be routed to the app/game servers 1521-1525 hosting the players in the tournament and hosting any spectator-mode copies of the game in the tournament, and that audio can be overdubbed on top of the game audio. Video of the celebrity commentator can also be overlaid on the game, perhaps just on spectator views.

Faster loading web pages

The World Wide Web's primary transport protocol, Hypertext Transfer Protocol (HTTP), was conceived and defined in an era when only businesses had high-speed Internet connections, and the consumers who were online used dial-up modems or ISDN. At the time, the "gold standard" for a fast connection was a T1 line, which provided a 1.5 Mbps symmetric data rate (i.e., with equal data rates in both directions).

Today, the situation is completely different. The average home connection speed through a cable modem or DSL connection in much of the developed world has a far higher downstream data rate than a T1 line. In fact, in some parts of the world, fiber-to-the-curb technologies bring data rates of 50 to 100 Mbps into the home.

Unfortunately, HTTP was not architected (nor implemented) to effectively take advantage of these dramatic speed improvements. A web site is a collection of files on a remote server. In very simple terms, HTTP requests the first file, waits for the file to be downloaded, then requests the second file, waits for that file to be downloaded, and so on. In fact, HTTP allows for more than one "open connection", i.e., more than one file to be requested at a time, but because of agreed-upon standards (and a desire to prevent web servers from being overloaded) only very few open connections are permitted. On top of that, because of the way web pages are constructed, browsers often are not aware of multiple files that may be available for immediate download (i.e., only after parsing a page does it become apparent that a new file, such as an image, needs to be downloaded). Thus, the files on a web site are essentially loaded one by one. And, because of the request-and-response protocol used by HTTP, there is roughly (accessing typical web servers in the US) a 100 ms latency associated with each file downloaded.

With relatively low-speed connections, this does not introduce much of a problem, because the download time for the files themselves dominates the waiting time for the web pages. But as connection speeds grow, especially with complex web pages, problems begin to arise.

In the example shown in Fig., a typical commercial web site is presented (this particular web site was from a major athletic shoe brand). The web site has 54 files on it. The files include HTML, CSS, JPEG, PHP, JavaScript and Flash files, and include video content. A total of 1.5 MB must be loaded before the page is live (i.e., the user can click on it and begin to use it). There are a number of reasons for the large number of files. For one thing, it is a complex and sophisticated web page, and for another, it is a web page that is assembled dynamically based on information about the user accessing the page (e.g., what country the user is from, what language, whether the user has made purchases before, etc.), and depending on all of these factors, different files are downloaded. Still, it is a very typical commercial web page.

Fig. shows the amount of time that elapses before the web page is live as the connection speed grows. At a 1.5 Mbps connection speed 2401, using a conventional web server with a conventional web browser, it takes 13.5 seconds for the web page to be live. At a 12 Mbps connection speed 2402, the load time is reduced to 6.5 seconds, or about twice as fast. But at a 96 Mbps connection speed 2403, the load time is only reduced to about 5.5 seconds. The reason for this is that at such a high download speed, the time to download the files themselves is minimal, but the latency of roughly 100 ms per file still remains, resulting in 54 files × 100 ms = 5.4 seconds of latency. Thus, no matter how fast the connection to the home is, this web site will always take at least 5.4 seconds to be live. Another factor is server-side queuing: every HTTP request is added to the back of a queue, so on a busy server this has a significant impact because, for every small item to be fetched from the web server, the HTTP request must wait its turn.
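The timing figures above can be reproduced with a simple model (a sketch; the fully serial, one-request-at-a-time assumption is a simplification of real browser behavior):

```python
def page_active_time(num_files, total_bytes, rate_bps, rtt_s=0.1):
    """Crude model: one serial round trip per file plus total transfer time."""
    latency = num_files * rtt_s            # 54 files x 100 ms = 5.4 s floor
    transfer = total_bytes * 8 / rate_bps  # time to move the bytes themselves
    return latency + transfer

# 54 files totaling 1.5 MB, as in the example web page.
for mbps in (1.5, 12, 96):
    t = page_active_time(54, 1.5e6, mbps * 1e6)
    print(f"{mbps:5.1f} Mbps -> {t:.1f} s")
```

At 1.5 Mbps the model gives about 13.4 s, at 12 Mbps about 6.4 s, and at 96 Mbps about 5.5 s, matching the behavior described: past a certain speed, the per-file latency floor dominates.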

One way to solve these problems is to discard or redefine HTTP. Or, perhaps better, to have the web site's owner consolidate its files into a single file (e.g., in Adobe Flash format). But, as a practical matter, this company, as well as many others, has a heavy investment in its web site architecture. Further, while some homes have 12-100 Mbps connections, the majority of homes still have slower speeds, and HTTP does work well at slow speeds.

One alternative is to host web browsers on app/game servers 1521-1525, and to host the files for the web servers on the RAID arrays 1511-1512 (or potentially in RAM or on local storage on the app/game servers 1521-1525 hosting the web browsers). Because of the very fast interconnect through the inbound routing 1502 (or to local storage), rather than 100 ms of latency per file using HTTP, there will be negligible latency per file using HTTP. Then, instead of the user in his home accessing the web pages through HTTP, the user can access the web pages through the client 415. Then, even with a 1.5 Mbps connection (because the video of such a web page does not require much bandwidth), the web page will be live in less than 1 second per line 2400. Essentially, there will be no latency before the web browser running on an app/game server 1521-1525 displays a live page, and there will be no detectable latency before the client 415 displays the video output from the web browser. As the user mouses around and/or types on the web page, the user's input information is sent to the web browser running on the app/game server 1521-1525, and the web browser responds accordingly.

One disadvantage of this approach is that if the compressor is constantly transmitting video data, bandwidth is used even when the web page is static. This can be remedied by configuring the compressor to transmit data only when (and if) the web page changes, and then only to transmit data for the parts of the page that have changed. While there are some web pages with flashing banners, etc., that are perpetually changing, such web pages tend to be annoying, and usually web pages are static unless there is a reason for something to be moving (e.g., a video clip). For such web pages, it is likely that less data will be transmitted using the hosting service 210 than with a conventional web server, because only the actually displayed images will be transmitted: no thin-client executable code, and no large objects that may never be viewed, such as rollover images.
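One possible way to detect "the parts of the page that have changed" is to divide the framebuffer into tiles and compare each tile between successive frames; only changed tiles would then be compressed and sent. The following is a minimal sketch (the 16-pixel tile size and the 1-byte-per-pixel framebuffer are assumptions for illustration):

```python
import hashlib

TILE = 16  # tile edge in pixels (an assumed parameter)

def tile_digest(buf, width, height, tx, ty):
    """Hash of one tile of a 1-byte-per-pixel framebuffer."""
    h = hashlib.md5()
    for row in range(ty, min(ty + TILE, height)):
        start = row * width + tx
        h.update(bytes(buf[start:start + min(TILE, width - tx)]))
    return h.digest()

def dirty_tiles(prev, curr, width, height):
    """(x, y) coordinates of tiles whose pixels changed between two frames.

    Only these tiles would need to be compressed and transmitted; for a
    static page, nothing is transmitted at all."""
    return [(tx, ty)
            for ty in range(0, height, TILE)
            for tx in range(0, width, TILE)
            if tile_digest(prev, width, height, tx, ty)
               != tile_digest(curr, width, height, tx, ty)]

prev = bytearray(64 * 64)      # blank 64x64 frame
curr = bytearray(prev)
curr[5 * 64 + 40] = 255        # one pixel changes, inside the tile at (32, 0)
print(dirty_tiles(prev, curr, 64, 64))  # -> [(32, 0)]
```

A real compressor would more likely compare tiles directly or use motion estimation, but the principle of sending only changed regions is the same.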

Thus, using the hosting service 210 to host legacy web pages, web page load times can be reduced to the point where opening a web page is like changing channels on a television: the web page is effectively live instantly.

Debugging of game and application software

As mentioned previously, video games and applications with real-time graphics are very complex applications and, typically, when they are released into the field, they contain bugs. Although software developers do get feedback from users about bugs, and they may have some means of passing back machine state after a crash, it is very difficult to identify exactly what caused a game or real-time application to crash or to perform improperly.

When a game or application runs in the hosting service 210, the video/audio output of the game or application is constantly recorded in the delay buffer 1515. Further, a watchdog process runs on each app/game server 1521-1525 and regularly reports to the hosting service control system 401 that the app/game server 1521-1525 is running smoothly. If the watchdog process fails to report, the server control system 401 will attempt to communicate with the app/game server 1521-1525 and, if successful, will collect whatever machine state is available. Whatever information is available, along with the video/audio recorded by the delay buffer 1515, will be sent to the software developer.
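The watchdog reporting scheme might be sketched as follows (a hypothetical illustration; the 5-second timeout and the server names are assumptions, not values from this disclosure):

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds without a report before a server is suspect (assumption)

class WatchdogMonitor:
    """Tracks per-server heartbeats, as the hosting control system 401 might."""

    def __init__(self):
        self.last_seen = {}

    def heartbeat(self, server_id, now=None):
        """Record a 'running smoothly' report from an app/game server."""
        self.last_seen[server_id] = time.monotonic() if now is None else now

    def unresponsive(self, now=None):
        """Servers whose watchdog has not reported within the timeout."""
        now = time.monotonic() if now is None else now
        return [s for s, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT]

mon = WatchdogMonitor()
mon.heartbeat("app-game-1521", now=0.0)
mon.heartbeat("app-game-1522", now=4.0)
print(mon.unresponsive(now=6.0))  # -> ['app-game-1521']
```

On detecting an unresponsive server, the control system would then attempt to collect machine state and forward it, with the delay-buffer recording, to the developer.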

Thus, when the developer of the game or application software gets notified of a crash by the hosting service 210, it gets a frame-by-frame record of what led up to the crash. This information can be extremely valuable in tracking down bugs and fixing them.

Note also that when an app/game server 1521-1525 crashes, the server is restarted at the most recent restartable point, and a message is provided to the user apologizing for the technical difficulty.

Sharing resources and reducing expenses

The system shown in Figs. 4a and 4b provides a variety of benefits both for end users and for game and application developers. For example, typically, home and office client systems (e.g., PCs or game consoles) are in use for only a small fraction of the hours of the week. According to an October 5, 2006 press release by Nielsen Entertainment, "Active Gamer Benchmark Study" (http://www.prnewswire.com/cgi-bin/stories.pl?ACCT=104&STORY=/www/story/10-05-2006/0004446115&EDATE=), active gamers spend an average of 14 hours per week playing video game consoles and about 17 hours per week on handhelds. The report also states that for all game activity (including console, handheld, and PC gaming) active gamers average 13 hours per week. Considering the higher figure, console game play, and that there are 24×7=168 hours in a week, this implies that in an active gamer's home a video game console is in use for only 17/168=10% of the hours of the week. In other words, 90% of the time the video game console is idle. Given the high cost of video game consoles, and the fact that manufacturers subsidize such devices, this is a very inefficient use of an expensive resource. PCs within businesses are also typically used for only a fraction of the hours of the week, especially non-portable desktop PCs, which are often required for high-end applications such as Autodesk Maya. Although some businesses operate at all hours and on holidays, and some PCs (e.g., portables brought home to work on in the evening) are used at all hours and on holidays, most business activity tends to be concentrated from about 9 AM to 5 PM, in a given business's time zone, from Monday through Friday, minus holidays and break times (e.g., lunch), and since most PC usage occurs while the user is actively engaged with the PC, it follows that desktop PC utilization tends to follow these hours of operation. Assuming a PC is utilized constantly from 9 AM to 5 PM, five days a week, that implies the PC is utilized 40/168=24% of the hours of the week.
High-performance desktop PCs are very expensive investments for businesses, and this reflects a very low level of utilization. Schools that teach on desktop computers may use the computers for an even smaller fraction of the week, and, although it varies depending on the hours of teaching, most teaching occurs during the daytime hours of Monday through Friday. So, in general, PCs and video game consoles are utilized for only a small fraction of the hours of the week.
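The utilization arithmetic in the two paragraphs above can be checked directly:

```python
HOURS_PER_WEEK = 24 * 7          # 168 hours in a week

console_play = 17                # hours/week of console play (Nielsen figure)
pc_work = 8 * 5                  # 9 AM to 5 PM, five days a week

console_util = console_play / HOURS_PER_WEEK
pc_util = pc_work / HOURS_PER_WEEK
print(f"console: {console_util:.0%} used, {1 - console_util:.0%} idle")  # 10% / 90%
print(f"desktop PC: {pc_util:.0%} used")                                 # 24%
```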

Notably, because many people work at businesses or at school during daytime hours on Monday through Friday, excluding holidays, these people generally do not play video games during those hours, and so when they do play video games it is generally at other times, such as evenings, weekends, and holidays.

Given the hosting service configuration shown in Fig. 4a, the usage patterns described in the two preceding paragraphs result in very efficient utilization of resources. Obviously, there is a limit to the number of users that can be served by the hosting service 210 at a given time, particularly if the users require real-time responsiveness for complex applications like sophisticated 3D video games. But, unlike a video game console in the home or a PC used by a business, which typically sit idle most of the time, a server 402 can be reused by different users at different times. For example, a high-performance server 402 with a high-performance dual CPU and dual GPU and a large amount of RAM can be utilized by businesses and schools from 9 AM to 5 PM on weekdays, and then be utilized by gamers playing sophisticated video games in the evenings, on weekends, and on holidays. Similarly, low-performance applications can be utilized by businesses and schools on a low-performance server 402 with a Celeron CPU, no GPU (or a very low-end GPU), and limited RAM during business hours, and a low-performance game can utilize a low-performance server 402 during non-business hours.

Further, with the hosting service arrangement described herein, resources are shared among effectively thousands, if not millions, of users. In general, only a small percentage of the total user base of an online service uses the service at a given time. If we consider the Nielsen video game usage statistics listed previously, it is easy to see why. If active gamers play console games only 17 hours per week, and if we assume that peak usage of games is during the typical non-work, non-school hours of evenings (5 PM to midnight, 7×5 days = 35 hours/week) and weekends (8 AM to midnight, 16×2 = 32 hours/week), that is 35+32=67 peak hours per week for 17 hours of gameplay. The exact peak user load on the system is difficult to estimate for many reasons: some users will play during off-peak times, there may be certain daytime periods with clustered peaks of users, and the peak periods can be affected by the type of game played (e.g., children's games will probably be played earlier in the evening), etc. But, given that the average number of hours a gamer plays is far less than the number of hours of the day when the gamer is likely to be playing a game, only a fraction of the users of the hosting service 210 will be using it at a given time. For the sake of this analysis, we shall assume the peak load is 12.5%. Thus, only 12.5% of the compute, compression, and bandwidth resources are in use at a given time, resulting in only 12.5% of the hardware cost to support a given user playing a given level of performance of game, due to the reuse of resources.
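As a sketch, the peak-window arithmetic works out as follows (note that the 12.5% peak-load figure used in the analysis is an assumption, not a computed result):

```python
evening_peak = 7 * 5       # 5 PM to midnight on weekdays: 35 hours/week
weekend_peak = 16 * 2      # 8 AM to midnight on weekends: 32 hours/week
peak_window = evening_peak + weekend_peak   # peak hours per week
play_hours = 17            # average weekly console gameplay (Nielsen figure)

# If the 17 play hours were spread evenly over the peak window, at most
# about a quarter of users would be online in any one peak hour; the
# analysis conservatively assumes a 12.5% peak load.
print(peak_window, round(play_hours / peak_window, 2))  # -> 67 0.25
```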

Moreover, given that some games and applications require more computing power than others, resources can be allocated dynamically based on the games being played or the applications being used. So, a user selecting a low-performance game or application will be allocated a low-performance (less expensive) server 402, and a user selecting a high-performance game or application will be allocated a high-performance (more expensive) server 402. Indeed, a given game or application may have lower-performance and higher-performance sections, and the user can be switched from one server 402 to another server 402 between sections of the game or application to keep the user on the lowest-cost server 402 that meets the game's or application's needs. Note that the RAID arrays 405, which are far faster than a single disk, are available even to low-performance servers 402, which thereby get the benefit of the faster disk transfer rates. So, the average cost of a server 402 across all games being played or applications being used is much less than the cost of the most expensive server 402 that runs the highest-performance game or application, and yet even the low-performance servers 402 derive disk-performance benefits from the RAID arrays 405.

Further, a server 402 in the hosting service 210 may be nothing more than a PC motherboard without disk or peripheral interfaces other than a network interface, and in time may be integrated down to a single-chip integrated circuit with just a fast network interface 403 to the SAN. Also, the RAID arrays 405 are likely to be shared among far more users than there are disks, so the disk storage cost per active user will be far less than that of a single disk drive. All of this equipment is likely to reside in a rack in an environmentally controlled server room. If a server 402 fails, it can be readily repaired or replaced at the hosting service 210. In contrast, a PC or game console in the home or office must be a sturdy, standalone appliance able to survive reasonable wear and tear from being banged or dropped, requires a housing, has at least one disk drive, must survive adverse environmental conditions (e.g., being crammed into an overheated AV cabinet with other gear), requires a service warranty, has to be packaged and shipped, and is sold by a retailer who will likely collect a retail margin. In addition, a PC or game console must be configured to meet the peak performance of the most computationally intensive anticipated game or application to be used at some point in the future, even though lower-performance games or applications (or sections of games or applications) may be what is run most of the time. And, if a PC or console fails, getting it repaired is an expensive and time-consuming process (adversely impacting the manufacturer, user, and software developer).

Thus, given that the system shown in Fig. 4a provides an experience comparable to that of a local computing resource, it is much less expensive to provide a given level of computing capability to a user in the home, office, or school through the architecture shown in Fig. 4a than to provide it locally.

Eliminating the need for upgrades

Further, users need no longer worry about upgrading PCs and/or consoles to play new games or to handle new, higher-performance applications. Any game or application on the hosting service 210, regardless of what type of server 402 is required for that game or application, is available to the user, and all games and applications run nearly instantly (i.e., loading rapidly from the RAID arrays 405 or from local storage on the servers 402) and with the latest updates and bug fixes (i.e., software developers can pick an ideal server configuration for the server(s) 402 running a given game or application, then configure the server(s) 402 with optimal drivers, and over time provide updates, bug fixes, etc., to all copies of the game or application in the hosting service 210 at once). Indeed, after the user starts using the hosting service 210, the user is likely to find that games and applications keep providing a better experience (e.g., through updates and/or bug fixes), and it may be the case that a year later the user discovers that a new game or application has become available on the service 210 that utilizes computing technology (e.g., a higher-performance GPU) that did not even exist a year before; so it would have been impossible for the user to buy, a year before, the technology that plays the game or runs the application a year later. Since the computing resource that plays the game or runs the application is invisible to the user (i.e., from the user's point of view, the user simply selects a game or application, which begins running nearly instantly, much as if the user had changed TV channels), the user's hardware will have been "upgraded" without the user even being aware of the upgrade.

Eliminating the need for backups

Another major concern for users in businesses, schools, and homes is backups. Information stored on a local PC or video game console (e.g., in the console's case, a user's game rankings and achievements) can be lost if a disk fails or if there is an inadvertent erasure. There are many applications available that provide manual or automatic backups for PCs, and game console state can be uploaded to an online server for backup, but local backups are typically copied to another local disk (or other non-volatile storage device) that has to be stored somewhere safe and kept organized, and backups to online services are often limited by the slow upstream speed available through the typical low-cost Internet connection. With the hosting service 210 of Fig. 4a, the data stored in the RAID arrays 405 can be configured using prior-art RAID configuration techniques, well known to those skilled in the art, such that if a disk fails, no data is lost, and a technician at the server center housing the failed disk will be notified, whereupon he will replace the disk, which will then be automatically updated so that the RAID array is once again failure-tolerant. Further, since all of the disk drives are near one another, with fast local networks between them through the SAN 403, it is not difficult in a server center to arrange for all of the disk systems to be backed up on a regular basis to secondary storage, which can be kept at the server center or relocated offsite. From the point of view of the users of the hosting service 210, their data is always secure, and they never have to think about backups.

Access to demo versions

Users frequently want to try out games or applications before buying them. As described previously, there are prior-art means for demoing games and applications (the verb form of "demo" means to try out a demonstration version, which is also called a "demo", as a noun), but each of them suffers limitations and/or inconveniences. Using the hosting service 210, it is easy and convenient for users to try out demos. Indeed, all the user does is select the demo through a user interface (such as one described below) and try it out. The demo will load almost instantly onto a server 402 appropriate for the demo, and it will run just like any other game or application. Whether the demo requires a very high-performance server 402 or a low-performance server 402, and no matter what type of home or office client 415 the user is using, from the user's point of view the demo will just work. The software publisher of the game or application demo can control exactly what demo the user is permitted to try and for how long, and, of course, the demo can include user interface elements that offer the user the opportunity to gain access to the full version of the game or application demonstrated.

Since demos are likely to be offered below cost or free of charge, some users may try to use demos repeatedly (particularly game demos, which may be fun to play over and over). The hosting service 210 can employ various techniques to limit demo use for a given user. The most straightforward approach is to establish a user ID for each user and limit the number of times a given user ID is allowed to play a demo. A user, however, may set up multiple user IDs, especially if they are free. One technique for addressing this problem is to limit the number of times a given client 415 is allowed to play a demo. If the client is a standalone device, the device will have a serial number, and the hosting service 210 can limit the number of times a demo can be accessed by a client with that serial number. If the client 415 runs as software on a PC or other device, a serial number can be assigned by the hosting service 210 and stored on the PC and used to limit demo usage, but given that PCs can be reprogrammed by users, and the serial number erased or changed, another option is for the hosting service 210 to keep a record of the Media Access Control (MAC) address of the PC's network adapter (and/or other machine-specific identifiers, such as hard drive serial numbers, etc.) and limit demo usage to it. Given that the MAC addresses of network adapters can be changed, however, this is not a foolproof method. Another approach is to limit the number of times a demo can be played from a given IP address.
Although IP addresses may be periodically reassigned by cable modem and DSL providers, in practice this does not happen very frequently, and if it can be determined (e.g., by contacting the ISP) that an IP address is within a block of IP addresses allocated to residential DSL or cable modem accesses, then a small number of demo uses can typically be established for a given home. Also, there may be multiple devices in a home behind a NAT router sharing the same IP address, but typically in a residential setting there will be a limited number of such devices. If the IP address is within a block serving businesses, then a larger number of demos can be established for a business. But, in the end, a combination of all of the previously mentioned approaches is the best way to limit the number of demos on PCs. Although there may be no foolproof way to keep a determined and technically adept user from repeatedly playing demos, creating a large number of barriers can create a sufficient deterrent such that abusing the demo system is not worth the trouble for most PC users, and rather, they will use the demos as intended: to try out new games and applications.
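The combination of identifier-based limits described above might be sketched as follows (a hypothetical illustration; the three-play limit, identifier names, and IP address are assumptions):

```python
MAX_DEMO_PLAYS = 3  # plays allowed per identifier (an assumed limit)

class DemoLimiter:
    """Counts demo launches against several identifiers at once, so that
    resetting any single one (a new free user ID, a changed MAC address)
    is not enough to evade the limit."""

    def __init__(self):
        self.counts = {}

    def allow(self, user_id, client_serial=None, ip_address=None):
        """Permit the launch only if no supplied identifier is exhausted."""
        ids = [i for i in (user_id, client_serial, ip_address) if i]
        if any(self.counts.get(i, 0) >= MAX_DEMO_PLAYS for i in ids):
            return False
        for i in ids:
            self.counts[i] = self.counts.get(i, 0) + 1
        return True

lim = DemoLimiter()
for _ in range(3):
    print(lim.allow("alice", ip_address="198.51.100.7"))  # True, True, True
print(lim.allow("alice2", ip_address="198.51.100.7"))     # False: the home IP is exhausted
```

As the text notes, a business IP block could simply be given a larger per-identifier limit.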

Benefits for schools, companies and other institutions

Businesses, schools, and other institutions that utilize the system shown in Fig. 4a benefit in particular. Businesses and schools have substantial costs associated with installing, maintaining, and upgrading PCs, particularly when it comes to PCs running high-performance applications such as Maya. As stated previously, PCs are generally utilized for only a fraction of the hours of the week, and, as in the home, the cost of a PC with a given level of performance capability is far higher in an office or school environment than in a server center environment.

In the case of larger businesses or schools (e.g., large universities), it may be practical for the IT departments of such entities to set up server centers and maintain computers that are accessed remotely via LAN-grade connections. A number of solutions exist for remote access of computers over a LAN or through a private high-bandwidth connection between offices. For example, with Microsoft's Windows Terminal Server, or through virtual network computing applications like VNC from RealVNC, Ltd., or through thin-client means from Sun Microsystems, users can gain remote access to PCs or servers, with a range of quality in graphics response time and user experience. Further, such self-managed server centers are typically dedicated to a single business or school and, as such, are unable to exploit the overlap of usage that is possible when disparate applications (e.g., entertainment and business applications) utilize the same computing resources at different times of the week. So, many businesses and schools lack the scale, resources, or expertise to set up on their own a server center that has a LAN-speed network connection to each user. Indeed, a large percentage of schools and businesses have the same Internet connections (e.g., DSL, cable modems) as homes.

Yet such organizations may nevertheless have a need, either continuously or periodically, for very high-performance computing. For example, a small architectural firm may have only a small number of architects with relatively modest computing needs for design work, but it may periodically require very high-performance 3D computing (e.g., when creating a 3D "fly-through" of a new architectural design for a client). The system shown in Fig. 4a is extremely well suited to such organizations. These organizations require nothing more than the same sort of network connection that is offered to homes (e.g., DSL, cable modems) and that is typically very inexpensive. They can either use inexpensive PCs as the client 415, or dispense with PCs altogether and use inexpensive dedicated devices that simply implement the control-signal logic 413 and the low-latency video decompression 412. These features are particularly attractive to schools, which may have problems with theft of PCs or with damage to the delicate components inside PCs.

This arrangement solves a number of problems for such organizations (and many of these advantages are also shared by home users who do general-purpose computing). For one thing, the operating cost (which ultimately must be passed back in some form to the users in order to have a viable business) can be much lower because (a) the computing resources are shared with other applications that have different peak usage times during the week, (b) the organizations can gain access to (and incur the cost of) high-performance computing resources only when needed, and (c) the organizations do not have to provide resources for backing up or otherwise maintaining the high-performance computing resources.

Elimination of piracy

In addition, games, applications, interactive movies, etc. can no longer be pirated as they are today. Because each game is executed at the service center, users are never given access to the underlying program code, so there is nothing to pirate. Even if a user were able to copy the source code, the user would not be able to execute it on a standard game console or home computer. This opens up markets in places such as China where standard video games are not made available. The resale of used games is likewise impossible.

For game developers, there are fewer market discontinuities than exist today. The hosting service 210 can be gradually updated over time as gaming requirements change, in contrast to the current situation in which a completely new generation of console technology forces users and developers to upgrade, and in which the game developer is dependent on the timely delivery of the hardware platform.

Interactive streaming video

The above descriptions provide a wide range of applications enabled by the novel underlying concept of low-latency interactive streaming video (which, as the term is used herein, implicitly includes audio together with the video) based on general Internet technologies. Prior-art systems that provide streaming video over the Internet allow only applications that can be implemented with high-latency interaction. For example, the basic controls for linear video playback (e.g., pause, rewind, fast forward) work adequately at high latency, and it is possible to select among linear video feeds. And, as stated earlier, the nature of some video games allows them to be played with high latency. But the high latency (or low compression ratio) of prior-art approaches to streaming video severely limits the potential applications of streaming video or narrows its deployment to specialized network environments, and even in such environments the prior-art techniques impose a substantial burden on the network. The technology described herein opens the door to a wide range of applications made possible by low-latency interactive streaming video over the Internet, in particular those enabled through consumer-grade Internet connections.

Indeed, with client devices as small as the client 465 of Fig. 4c sufficient to provide an enhanced user experience backed by a practically arbitrary amount of computing power, an arbitrary amount of fast storage and extremely fast networking among powerful servers, a new era of computing becomes possible. Moreover, because the bandwidth requirements do not grow as the computing power of the system grows (i.e., the bandwidth requirements are tied only to display resolution, quality and frame rate), once broadband Internet connectivity is ubiquitous (e.g., through widespread low-latency wireless coverage), reliable, and of sufficiently high bandwidth to meet the needs of all users' displays 422, the question will be whether thick clients (such as PCs or mobile phones running Windows, Linux, OSX, etc.) or even thin clients (such as Adobe Flash or Java) are necessary at all for typical consumer and business applications.

The advent of interactive streaming video leads to a rethinking of assumptions about the architecture of computing systems. An example of this is the hosting service 210 server-center embodiment shown in Fig. 15. The output of the delay buffer and/or group video 1550 forms a feedback loop in which the multicast interactive streaming video output of the app/game servers 1521-1525 is fed back into the app/game servers 1521-1525, either in real time via path 1552 or after a selectable delay via path 1551. This enables a wide range of practical applications (such as those depicted in the figures described above) that would be either impossible or infeasible with prior-art local or server-based computing architectures. But, as a more general architectural feature, what feedback loop 1550 provides is recursion at the level of interactive streaming video, because video can be looped back indefinitely as an application requires it. This enables a wide range of application possibilities never before available.
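The selectable-delay feedback path described above can be sketched as a simple frame buffer. This is a minimal illustrative model, not the patent's implementation: the class and method names are assumptions, and only the live/delayed split mirrors paths 1552 and 1551.

```python
from collections import deque

class DelayBuffer:
    """Feedback buffer for compressed video frames: output is fed back
    to the app/game servers either live (cf. path 1552) or after a
    selectable delay (cf. path 1551). Names are illustrative only."""

    def __init__(self, delay_frames):
        self.delay_frames = delay_frames
        self.buffer = deque()

    def push(self, frame):
        """Accept the newest compressed frame from the video output."""
        self.buffer.append(frame)

    def live(self):
        """Real-time feedback path: the most recent frame."""
        return self.buffer[-1] if self.buffer else None

    def delayed(self):
        """Delayed feedback path: the frame from delay_frames ago,
        or None until enough frames have accumulated."""
        if len(self.buffer) <= self.delay_frames:
            return None
        # discard frames older than the selected delay
        while len(self.buffer) > self.delay_frames + 1:
            self.buffer.popleft()
        return self.buffer[0]
```

Because the delayed output can itself be pushed back into the buffer, the same structure also illustrates the recursion property noted above: video can be looped back as many times as an application requires.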

Another key architectural feature is that the video streams are unidirectional UDP streams. This enables a practically arbitrary degree of multicasting of interactive streaming video (by contrast, bidirectional streams, such as TCP/IP streams, would create increasing traffic congestion in the networks from their back-and-forth communication as the number of users grew). Multicasting is an important capability within the server center because it allows the system to respond to the growing need of Internet users (and, indeed, of the world's population) for one-to-many or even many-to-many communication. Again, the examples discussed herein, such as the figure illustrating the combined use of recursion and multicasting of interactive streaming video, are only the visible tip of a very large iceberg of possibilities.
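The unidirectional-UDP point can be illustrated with a short sketch. The multicast group address, port and helper names are illustrative assumptions, not details from the patent: a single sender socket pushes frame packets one way to a multicast group, and no per-viewer acknowledgement traffic is generated.

```python
import socket
import struct

MCAST_GROUP = "239.1.1.1"   # illustrative multicast group address
MCAST_PORT = 5004           # illustrative port

def make_multicast_sender(ttl=1):
    """One unidirectional UDP socket can reach any number of
    subscribers; unlike TCP, no return path per viewer is needed."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", ttl))
    return sock

def send_frame(sock, frame_bytes, max_payload=1400):
    """Split one compressed frame into UDP-sized packets and send
    them one way; receivers never acknowledge them."""
    packets = 0
    for off in range(0, len(frame_bytes), max_payload):
        sock.sendto(frame_bytes[off:off + max_payload],
                    (MCAST_GROUP, MCAST_PORT))
        packets += 1
    return packets
```

Adding a viewer costs the sender nothing: the network replicates the single outbound stream, which is what makes one-to-many and many-to-many distribution scale.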

In one embodiment, the various functional modules illustrated herein, and the associated steps, may be performed by specific hardware components that contain hardwired logic for performing the steps, such as an application-specific integrated circuit ("ASIC"), or by any combination of programmed computer components and custom hardware components.

In one embodiment, the modules may be implemented on a programmable digital signal processor ("DSP") such as a Texas Instruments TMS320x architecture (e.g., a TMS320C6000, TMS320C5000, etc.). Various other DSPs may be used while still complying with these underlying principles.

Embodiments may include the various steps set forth above. The steps may be embodied in machine-executable instructions that cause a general-purpose or special-purpose processor to perform certain steps. Various elements that are not relevant to these underlying principles, such as computer memory, hard drives and input devices, have been left out of the figures to avoid obscuring the pertinent aspects.

Elements of the disclosed subject matter may also be provided as a machine-readable medium for storing machine-executable instructions. The machine-readable medium may include, but is not limited to, flash memory, optical disks, CD-ROMs, DVD-ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media or other types of machine-readable media suitable for storing electronic instructions. For example, the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).

It should also be understood that elements of the disclosed subject matter may be provided as a computer program product which may include a machine-readable medium having stored thereon instructions which may be used to program a computer (e.g., a processor or other electronic device) to perform a sequence of operations. Alternatively, the operations may be performed by a combination of hardware and software. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media or other types of media suitable for storing electronic instructions. For example, elements of the disclosed subject matter may be downloaded as a computer program product, wherein the program may be transferred from a remote computer or electronic device to a requesting process by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).

Additionally, although the disclosed subject matter has been described in conjunction with specific embodiments, numerous modifications and alterations are well within the scope of the present disclosure. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

1. A computer-implemented method of performing video compression for an online video game or application, comprising:
executing video games and applications on a hosting service in response to user input received from a plurality of client devices, the video games and applications generating uncompressed video;
detecting a maximum data rate of a communication channel between the hosting service and a client by transmitting a feedback signal from the client to the hosting service, the feedback signal indicating a packet loss rate or a latency of packets transmitted from the hosting service to the client;
compressing the uncompressed video using a low-latency video compressor to generate a low-latency compressed video stream containing a sequence of compressed video frames;
transmitting the low-latency compressed video stream from the hosting service to the client,
wherein the low-latency compressed video stream is compressed and streamed to the client quickly enough that the user perceives the video game or application as responding instantly to the user input;
detecting that the maximum data rate would be exceeded if a particular frame of the sequence of frames were transmitted from the hosting service to the client over the communication channel; and
instead of transmitting the frame that would exceed the maximum data rate, ensuring that the client continues to display the previous frame of the sequence.

2. The method of claim 1, further comprising selecting a set of compression parameters for compressing the video frames based on the maximum data rate.

3. The method of claim 1, wherein forward error correction (FEC) is used to protect the video stream, the associated audio and/or the user control-signal input received from the client.

4. The method of claim 3, wherein FEC codes are used to protect some portions of the video stream but not other portions.

5. The method of claim 3, wherein FEC codes are used to protect the data most likely to cause visual artifacts if lost.

6. The method of claim 1, further comprising reducing peak bandwidth by distributing large peaks in the bandwidth of the compressed video over successive frame periods.

7. The method of claim 6, wherein distributing large peaks in the bandwidth of the compressed video over successive frames further comprises transmitting compressed frames that contain more data than can be transmitted at the maximum data rate within one frame period over one or more additional frame periods.

8. The method of claim 7, wherein, during said additional frame periods, the client continues to display the frame previously decompressed from the compressed data.

9. The method of claim 8, further comprising ignoring frames received during said additional frame periods.

10. The method of claim 6, wherein the video frame rate is temporarily reduced during the additional frame periods.

11. The method of claim 6, wherein the additional frame period is one frame period.

12. The method of claim 6, wherein the additional frame period is more than one frame period.

13. The method of claim 1, further comprising dynamically adjusting the data rate of the video stream.

14. The method of claim 13, wherein the data rate is adjusted in response to the detected maximum data rate of the channel.

15. The method of claim 13, wherein the data rate is adjusted by modifying the frame rate.

16. The method of claim 13, wherein the data rate is adjusted by modifying the compression quality of the video stream.

17. The method of claim 13, wherein the data rate is adjusted by modifying the image resolution.

18. The method of claim 17, wherein, when the image resolution is modified, the client scales the image up to maintain the same image size on the display screen.

19. The method of claim 1, wherein, when the communication channel to a client fails completely, one or more servers pause the game being played by that client.

20. The method of claim 19, wherein, if the client is dropped and the user of that client is playing a multiplayer game, the other users are notified that the user has left the game.

21. The method of claim 20, wherein the game is paused for the other users while the user is dropped.

22. The method of claim 1, further comprising initially determining the maximum data rate by sending a test stream at progressively higher data rates through the outbound routing of the network until packet loss and/or increased latency indicate that the bandwidth of the communication channel has been exceeded.

23. The method of claim 22, further comprising:
determining that the bandwidth of the communication channel has decreased, based on a detected increase in packet loss and/or an increase in latency; and
in response, reducing the peak data rate until the packet loss and/or latency reach acceptable levels.

24. The method of claim 22, further comprising:
determining that the bandwidth of the communication channel may have increased, based on ongoing measurements of packet loss and/or latency; and
in response, increasing the peak data rate until the packet loss and/or latency reach an unacceptable level.

25. The method of claim 1, further comprising transmitting the data packets in the order in which they appear on the screen.

26. The method of claim 1, wherein the latency is less than 80 ms.
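The peak-handling behaviour recited in claims 1 and 6 through 12 can be sketched as a simplified per-frame-period scheduling model. This is an illustration only: the function and label names are assumptions, and a real implementation would operate on packets and timers rather than whole frames.

```python
def plan_transmission(frame_sizes, max_bytes_per_period):
    """For each frame period, decide whether to send the frame, spread
    an oversized frame over additional periods (claim 7), or have the
    client repeat its previously decompressed frame while incoming
    frames are ignored (claims 8 and 9). Purely illustrative model."""
    schedule = []
    backlog = 0  # bytes of an oversized frame still being drained
    for size in frame_sizes:
        if backlog > 0:
            # additional frame period: keep draining the peak; the
            # client keeps displaying the previous frame and the new
            # frame generated this period is ignored
            backlog -= max_bytes_per_period
            schedule.append("repeat_previous")
        elif size <= max_bytes_per_period:
            schedule.append("send")
        else:
            # peak frame: transmit over this and subsequent periods
            backlog = size - max_bytes_per_period
            schedule.append("send_spread")
    return schedule
```

For example, with a channel budget of 100 bytes per frame period, a 300-byte peak frame occupies its own period plus two additional periods, during which the client simply repeats the last decompressed frame.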



 

Same patents:

FIELD: information technology.

SUBSTANCE: quantisation parameter for each macroblock is selected by limiting the range of all possible quantisation parameters to a particular range of possible quantisation parameter values, wherein the range is a subset of possible quantisation parameters and the range is based on the value of the predicted quantisation parameter.

EFFECT: higher video coding speed and fewer quantisation parameters which are verified for each video macroblock.

44 cl, 5 dwg

FIELD: information technologies.

SUBSTANCE: method and system are provided to use conversion of more than 8×8 and non-rectangular conversion and for generation of a syntactic element "conversion size" indicating conversion for decoding of video information. The syntactic element "conversion size" is generated with the help of a coder, being based on the size of the predicted video block and on content of the video block. Additionally, the syntactic element "conversion size" may be generated in according with the set of rules for selection from 4×4, 8×8 and large sizes of conversion during the coding process. The decoder performs reverse conversion, being based on the syntactic element "conversion size" and rules, used by the coder, at the same time the syntactic element "conversion size" is transmitted to the decoder as a part of a coded bit stream of video information.

EFFECT: improved efficiency of video coding.

16 cl, 7 dwg

FIELD: information technology.

SUBSTANCE: image encoding method, involving: dividing a current picture into a unit having a predetermined size; determining internal prediction mode to be applied to the current unit to be encoded, according to the size of the current unit; and performing internal prediction with respect to the current unit according to the determined internal prediction mode, wherein the internal prediction mode includes a prediction mode for performing prediction using a drawn line having an angle tan-1(dy/dx), where dx and dy are integers, near each pixel in the current unit.

EFFECT: high efficiency of encoding video.

15 cl, 44 dwg, 5 tbl

FIELD: information technology.

SUBSTANCE: disclosed is a video encoding and decoding method for altered time compression based on fragmented links and not full reference images. In a typical video frame sequence, only portions (i.e. mosaic element) of each frame includes moving objects. Furthermore, in each frame, moving objects are inherently enclosed in specific regions which are common for all frames in the video frame sequence. Such common regions of movement are identified and images are extracted from the identified regions of video frames. Since these images can only display portions of frames, these images are named "fragments", and when compensating for movement, in order to generate predicted frames, these fragments and not the whole frame are used as reference images.

EFFECT: high efficiency of encoding video.

40 cl, 18 dwg

FIELD: information technologies.

SUBSTANCE: decoding stage is carried out to decode the first identifying information, which indicates whether signals of appropriate colour components are independently coded, and to decode the second identifying information in the case when the first identifying information indicates that signals of appropriate colour components are coded independently, the second identifying information indicates whether all colour components are intra-coded, besides, in the case when the first identifying information indicates that signals of the appropriate colour components are coded independently, and the second identifying information indicates that all images of appropriate colour components are intra-coded, the decoding stages generates a decoded image, in which performance of deblocking filtration is off at each border between blocks serving as elements of conversion and quantisation.

EFFECT: improved optimality and efficiency of decoding in case when coded signals of a dynamic image do not have a difference in respect of a number of counts between colour components.

15 dwg

FIELD: information technologies.

SUBSTANCE: video recording device comprises a compression facility, which compresses video data in process of speed control implementation, including variation of a quantisation step so that the bit transfer speed is reduced within the previously determined period to the target speed of bit transfer, a recording facility, which records compressed data onto a recording medium, and a facility for calculation of full speed, which calculates full speed of transferring bits of the result of video data compression with the help of the compression facility from the start of compression to the current moment. The compression facility comprises a facility for limitation of a quantisation step, which limits the varied quantisation step previously for a certain upper limit, which is lower than the upper limit varied in the compression facility, and a facility for fixation of the quantisation step, which fixes the quantisation step to the previously determined upper limit, when the calculated full speed of bit transfer exceeds the target speed of bit transfer.

EFFECT: compression of video data so that bit transfer speed is reduced to target bit transfer speed together with saving minimum quality of an image, and recording of compressed video data.

6 cl, 3 dwg

FIELD: radio engineering, communication.

SUBSTANCE: digital multimedia coder/decoder is proposed, which uses a method of flexible quantisation, making it possible to vary quantisation by different measurements of coded digital multimedia data, including spatial channels, channels of frequency subranges and colour channels. A codec efficiently applies an alarm circuit to alarm various shifts of flexible quantisation combinations for initial usage scenarios. When selection of a quantiser is available, the codec efficiently codes the current quantiser, determining a subset of quantisers, and indexes the current quantizer from the set.

EFFECT: provision of flexible quantisation by different measurements of coded digital multimedia data.

18 cl, 15 dwg

FIELD: physics.

SUBSTANCE: invention relates to digital television and particularly to compression of a digital video stream in a television communication channel. A field sequence is divided into groups of three types: O-fields, which act as the reference; N-even fields, which are encoded via prediction based on the previous field within one frame; M-odd fields, which are coded with prediction based on the previous odd field from another frame. To improve visual quality of the image according to the disclosed method, odd and even fields change places in the entire video sequence or group of frames. Frames with a higher image definition are formed as a result. To increase efficiency of compressing a digital stream, the disclosed method employs cutting of readings in N-fields horizontally and vertically with subsequent restoration thereof at the receiving side, wherein two neighbouring N-fields form a full half-frame (field).

EFFECT: communication channel bandwidth reduction, as well as high efficiency of digital conversion of a video signal, lying in additional cutting of the volume of digital information and increasing visual quality of the image, with sufficiently easy implementation.

13 dwg, 1 tbl

FIELD: information technology.

SUBSTANCE: disclosed is an apparatus for encoding video and a corresponding method of applying orthogonal transformation to a prediction error signal between a video signal for a target encoding region and a predicted signal for the video signal, and sampling the obtained orthogonal transformation coefficient using a given sampling interval so as encode the coefficient. Prediction error power is calculated, which is the prediction error signal power. For input information such as the calculated prediction error power, the given sampling interval and the upper limit for the volume of the code generated for the target encoding region, it is determined whether the upper limit of the volume of the code, generated when performing sampling using a predetermined sampling interval, is exceeded or not, and the encoding process is changed based on the determination result.

EFFECT: providing a video encoding device which does not require repeated encoding or encoding which corresponds to two or more encoding modes, and performs encoding, the volume of the generated code of which does not exceed the upper limit, without waiting for the result of measuring the volume of the generated code.

5 cl, 20 dwg

FIELD: information technology.

SUBSTANCE: prediction error signals is calculated by determining values of the difference between sample values of a predicted data block and sample values of the initial input block. A modified error prediction signal is generated by replacing difference values which are overshoots with difference values which are not overshoots, wherein the difference value which is an overshoot depends on the expected value of the amplitude of the difference value. Further, transform coding is performed over the prediction error signal in order to create a first presentation of the first error signal component and spatial coding is performed over the prediction error signal in order to create a second presentation of the first prediction error signal component. The first and second presentations are merged and coded.

EFFECT: efficient spatial presentation of those prediction error signals of the same image block which are not well correlated with basic functions of the applied transformation.

33 cl, 11 dwg

FIELD: information technology.

SUBSTANCE: codec encodes conversion coefficients through composite coding of nonzero coefficients with subsequent series of coefficients with zero values (dwg. 14). When nonzero coefficients are last in their unit, the last indicator is replaced for the value of the series in the symbol of that coefficient (1435). Initial nonzero coefficients are indicated in a special symbol which jointly codes the nonzero coefficient together with initial and subsequent series of zeroes (1440). The codec enables several coding contexts by detecting interruptions in the series of nonzero coefficients and coding nonzero coefficients on any side of that interruption separately (1460). The codec also reduces the size of the code table by indicating in each symbol whether a nonzero coefficient has an absolute value greater than 1, and whether the series of zeroes have positive values (1475), and separately codes the level of coefficients and the length of the series outside the symbols (1490).

EFFECT: high efficiency of compressing conversion coefficients and rate of coding and decoding.

25 cl, 28 dwg

FIELD: physics, communications.

SUBSTANCE: invention concerns digital broadband data transfer systems, particularly decoding of frame error correction of multiple-protocol encapsulation (MPE-FEC) in handheld digital video broadcasting system (DVB-H). Invention claims method and device for MPE-FEC frame decoding in DVB-H. Filtration of packet identification (PID) is performed in TS packet received over wireless network to identify TS packet, and table ID is detected by data heading information intended for identification of section data type. If section data are MPE section, then the frame is buffered. If after IP datagram storage for last MPE section some part remains in data area, then remaining part is augmented with zeros. If section data are MPE-FEC section, then frame is buffered based on parity data obtained from MPE-FEC section.

EFFECT: efficient method of MPE-FEC frame decoding in handheld DVB-H for reception of transport stream (TS) packet and Internet protocol (IP) datagram reconstruction.

18 cl, 12 dwg

FIELD: processing of digital images, possible use in systems for capturing and compressing images, for example, photo-video-cameras.

SUBSTANCE: for known method of compression of digital images, including serial usage of operations for dividing unprocessed digital data, received from image capturing device, on a set of channels, with their following direct color transformation, wavelet transformation and quantization, suggested are changed rules of direct color transformation of channels, allowing more complete correlation between digital channels of image, resulting in possible compression of data to lesser size. Also, during processing of Byer's mosaic images suggested compression method allows transition to YCbCr color representation system, for which quantization coefficients are known. Usage of these coefficients results in production of compressed image of lesser size in comparison to quantization of original R, G, B channels. Suggested also is device for realization of method.

EFFECT: increased degree of compression of digital images.

2 cl, 3 dwg

FIELD: video communications, in particular, technology for masking decoder errors.

SUBSTANCE: in accordance to one variant of invention, system and method decode, order and pack video information to video data packets for transfer via communication line with commutated channels, due to which system conceals errors, caused by loss of video data packets, when system receives, unpacks, orders and decodes data packets. In accordance to another variant, system and method decode and pack video information so that adjacent macro-blocks may not be positioned in same data packets. Also, system and method may provide information, accompanying packets of video data for simplification of decoding process. Advantage of described scheme is that errors caused due to data loss are distributed spatially across whole video frame. Therefore, areas of data, surrounding lost macro-blocks, are decoded successfully, and decoder may predict movement vectors and spatial content with high degree of precision.

EFFECT: improved quality of image.

4 cl, 10 dwg

FIELD: engineering of systems for encoding moving image, namely - methods for encoding moving image, directed at increase of encoding efficiency with use of time-wise remote supporting frames.

SUBSTANCE: in the method in process of encoding/decoding of each block of B-frame in direct prediction mode movement vectors are determined, using movement vector of shifted block in given frame, utilized for encoding/decoding B-frame, and, if type of given frame is time-wise remote supporting frame, one of movement vectors, subject to determining, is taken equal to movement vector of shifted block, while another one of movement vectors, subject to determining, is taken equal to 0.

EFFECT: increased encoding efficiency in direct prediction mode, decreased amount of information bits for frame, wherein a change of scene occurs.

2 cl, 6 dwg

FIELD: engineering of systems for encoding moving images, namely, methods for encoding moving images, directed at increasing efficiency of encoding with use of time-wise remote supporting frames.

SUBSTANCE: method includes receiving index of supporting frame, standing for supporting frame, pointed at by other block, providing movement vector for determining movement vector of current block, and determining movement vector of current block with utilization of supporting frame index, denoting a supporting frame.

EFFECT: increased efficiency of encoding in direct prediction mode, decreased number of information bits for frame, in which scene change occurs.

3 cl, 6 dwg

FIELD: television.

SUBSTANCE: device has blocks: first interface block, providing receipt of data about switching of programs by subscriber, electronic watch block, first memory block for archiving data about time of viewing of each selected program, second memory block, containing electronic addresses of broadcast companies, block for rearranging data about viewing time, processor, forming packet of data about which TV program and time of its viewing, third interface block, providing output along phone network of data about viewing time of each TV program to server of company, which broadcast current TV program.

EFFECT: higher efficiency.

1 dwg
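The data flow of the device above (log program switches with timestamps, accumulate per-program viewing time, form packets addressed to each broadcaster's server) can be sketched in software. All names and the packet layout are illustrative assumptions, not the patented hardware blocks.

```python
import time
from collections import defaultdict

class ViewingLogger:
    def __init__(self, broadcaster_addresses):
        self.addresses = broadcaster_addresses  # program -> server address
        self.totals = defaultdict(float)        # program -> seconds viewed
        self.current = None                     # program now on screen
        self.started = None                     # when it was switched to

    def switch_to(self, program, now=None):
        """Record a program switch; closes out the previous program."""
        now = time.time() if now is None else now
        if self.current is not None:
            self.totals[self.current] += now - self.started
        self.current, self.started = program, now

    def packets(self):
        """One packet per viewed program, addressed to its broadcaster."""
        return [
            {"program": p, "seconds": s, "server": self.addresses.get(p)}
            for p, s in self.totals.items()
        ]

log = ViewingLogger({"News": "news.example", "Sports": "sports.example"})
log.switch_to("News", now=0)
log.switch_to("Sports", now=600)   # News was watched for 600 s
log.switch_to(None, now=900)       # Sports was watched for 300 s
print(log.packets())
```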

FIELD: television.

SUBSTANCE: the device comprises a scaling unit, two delay registers, a pixel-block forming unit, a buffer register, a motion vector calculation unit, two subtracters, a demultiplexer, an enlargement unit, a pulsation filtering unit, a unit of mathematical detectors, a multiplexer and a reverse scaling unit; their interaction makes it possible to detect and remove from programs certain series of TV frames that are harmful to the viewer, specifically pulsations of brightness and color signals at a frequency of 6-13 Hz.

EFFECT: higher efficiency.

1 dwg
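An illustrative sketch, not the patented circuit, of the detection step: estimate the pulsation frequency of a per-frame mean-brightness sequence and flag it if it falls in the harmful 6-13 Hz band. The zero-crossing estimator is an assumption standing in for the patent's hardware detectors.

```python
def pulsation_frequency(brightness, fps):
    """Estimate the dominant pulsation frequency (Hz) of a per-frame
    brightness series by counting zero crossings around its mean."""
    mean = sum(brightness) / len(brightness)
    centered = [b - mean for b in brightness]
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    duration = len(brightness) / fps          # sequence length in seconds
    return crossings / (2 * duration)         # two crossings per cycle

def is_harmful(brightness, fps, lo=6.0, hi=13.0):
    """True if the estimated pulsation frequency lies in the 6-13 Hz band."""
    return lo <= pulsation_frequency(brightness, fps) <= hi

# A 12 Hz brightness flicker sampled at 48 frames per second.
frames = [192, 192, 64, 64] * 12
print(is_harmful(frames, fps=48))  # True
```

Frames flagged this way would then be routed through the pulsation filtering path rather than passed to the display unchanged.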

The invention relates to radio engineering and is intended for extracting a compressed clock signal in a device for separating the signal and the clock signal, essentially matched to the synchronizing signal of the encoding device

The invention relates to the fields of electronics, communications, computer science, television, interactive television, video telephony and video conferencing

FIELD: information technology.

SUBSTANCE: the online video game or application hosting system has a hosting service comprising one or more servers running one or more twitch video games or applications to provide one or more streams of low-latency compressed interactive streaming video, transmitted over a network which includes public network components to at least one client device located remotely from at least one of the servers. The system performs operations for receiving control signals and providing them to the servers, running one or more twitch video games or applications on the servers, providing streams of low-latency compressed interactive streaming video, and decompressing the low-latency compressed interactive streaming video. Said operations are performed with such low latency that a user interacting with at least one of the one or more twitch video games or applications has the perception that the video game or application responds instantly to the control signals.

EFFECT: reproduction of low-latency compressed interactive streaming video regardless of the computing power of the client device.

25 cl, 40 dwg
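The main claim of this document combines the low-latency stream above with a frame-drop rule: if transmitting a specific frame would exceed the detected maximum data rate of the channel, that frame is skipped and the client continues displaying the previous frame. A minimal sketch, assuming a token-bucket style per-frame budget (the budgeting scheme and all names are illustrative, not the claimed implementation):

```python
def frames_to_send(frame_sizes_bits, max_rate_bps, fps):
    """Return indices of compressed frames that fit the channel budget.

    Skipped frames are simply not transmitted; the client keeps the
    previously displayed frame of the sequence on screen.
    """
    budget_per_frame = max_rate_bps / fps   # bits available per frame time
    credit = 0.0
    sent = []
    for i, size in enumerate(frame_sizes_bits):
        credit += budget_per_frame
        if size <= credit:                  # fits: transmit, spend credit
            credit -= size
            sent.append(i)
        # else: drop frame i to avoid exceeding the maximum data rate
    return sent

# A 5 Mbit/s channel at 60 fps gives roughly 83,333 bits per frame interval;
# the oversized second frame is dropped.
print(frames_to_send([80_000, 200_000, 60_000, 60_000], 5_000_000, 60))
# [0, 2, 3]
```

Dropping rather than queueing the oversized frame is what keeps latency short: a queued frame would arrive late and stall every frame behind it.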
