Three-dimensional video game system

FIELD: information technology.

SUBSTANCE: a) the three-dimensional video game system is capable of displaying the left and right sequence through two independent channels, a VGA channel or a video channel, with a display device sharing memory in immersion mode; b) the system contains a video game engine that manages and verifies the plausibility of image perspectives, assigns textures, lighting, positions, movement and all other aspects associated with each object participating in the game, creates left and right background buffers, renders images and displays the information in the working buffers; c) the system processes in real time the data associated with the xyz coordinates of the image of an object; the amount of random access memory (RAM) for the left and right buffers is increased, which makes it possible to recognize and select the corresponding background buffer, whose information is transferred to the working buffer or to an additional independent display device sharing memory in immersion mode.

EFFECT: solves the problem of technology incompatibility in the display of three-dimensional images.

11 cl, 13 dwg

 

Technical field

The present invention relates to the display of three-dimensional video images and, in particular, to a hardware and software structure designed for viewing three-dimensional (3D) images that can easily be integrated into existing television, personal computer and video game equipment.

Description of the prior art

Constant efforts are being made to improve on-screen images in the visual man-machine interface for a wide range of applications: military systems, biomedical research, medical imaging, genetics, airport security, entertainment, gaming, computing and other on-screen display systems.

Three-dimensional (3D) information is a key factor for success in critical tasks that require providing the user with reliable information.

Stereoscopic vision systems are based on the ability of the human eyes to see the same object from two different angles (left and right). The brain merges the two images, producing depth perception, which the brain then converts into distance, surface and volume.

Known technical solutions have made several attempts to obtain three-dimensional images, for example using the following technologies:

- Red-blue polarization;

- Vertical-horizontal polarization;

- Glasses for multiplexed images;

- System of a virtual three-dimensional reality;

- Volumetric displays;

- Autostereoscopic displays.

All of the above technologies suffer from image incompatibility, side effects and a lack of compatibility with existing technologies, namely:

Red-blue polarization systems require a special projector and a large white screen for viewing; after a few minutes of viewing, side effects begin to appear, particularly headache, dizziness and other symptoms associated with images displayed on the screen with a three-dimensional effect. This technology was long used in cinema systems, but because of the above problems such systems have essentially disappeared from the market. The side effects are caused by a significant difference between the content perceived by the left eye and the right eye (one eye perceives information through the blue polarization, the other through the red polarization), which creates excessive strain on the optic nerve and the brain. In addition, the two images are displayed on the screen simultaneously. Viewing with this technology requires an external screen and colored polarized glasses. If the user does not put on the red-blue glasses, the volume effect does not appear and only a blurred double image is visible.

Horizontal-vertical polarization systems combine two images produced by a stereoscopic camera with two lenses; the left and right images receive horizontal and vertical polarization respectively. These systems are used in some new cinemas, such as Disney and IMAX 3D theaters. This technology requires expensive production systems and is aimed at a limited audience in specially equipped venues, which reduces the market size and scope. Interest in three-dimensional (3D) content has grown particularly over the last three years; as an example, productions featuring Tom Hanks and "Titanic" were created with three-dimensional content using IMAX 3D technology. After a few minutes of viewing, this technology also produces significant side effects for the user; it requires an external screen and polarized glasses, and if the user does not put on these glasses, only a blurred image is visible.

Systems that use shutter-glass technology for multiplexed images switch between the left and right images by blocking one of them so that for a short period of time it cannot reach the corresponding eye. This blocking is synchronized with the images displayed on the screen (monitor or TV). If the user does not put on the glasses, only a blurred image is visible, and after a few minutes the side effects also appear. Currently this technology is offered (among others) by BARCO SYSTEMS to companies such as Mercedes Benz®, Ford® and Boeing® in the form of a "space" designed to create three-dimensional images through multiplexing (using shutter glasses) in order to evaluate models before they are assembled on the production line.

Three-dimensional virtual reality (VR3D) systems are computer-based systems that create computer-generated scenes interacting with the user through positioning interfaces such as data gloves and position sensors. The computer-generated images use vectors, polygons and monocular depth cues to simulate depth and volume, calculated by software, and the images are rendered on a helmet-type display device located in front of the eyes; the user is immersed in a computer-generated scene that exists only in the computer and not in the real world. The scene generated by the computer is called "virtual reality". Such systems require very expensive computers, such as SGI Oxygen® or SGI Onyx Computers®, which are not available to a standard user. This technology is used to create serious games and simulations; the left-right sequence is generated through the same VGA or video channel, and the software contains special commands for switching the video images while they are output to the screen at a frequency of 60 Hz. The software or video game program interacts directly with the graphics card.

There is a technology called I-O SYSTEMS, in which multiplexed images are displayed on binocular screens using left-right multiplexing and image switching at a frequency of 80 to 100 Hz, but even so flicker is present.

Only a few manufacturers, such as Perspectra Systems, create volumetric display systems®. These systems use the ability of the human eye to retain an image for a few milliseconds together with rotation of the display at a very high frequency; depending on the viewing angle, the corresponding image can then be seen on the device, including by switching color pixels off, and thanks to the high rotation frequency the eye perceives a "floating" image. These systems are very expensive (a "sphere" costs approximately 50,000 U.S. dollars) and require special, dedicated hardware and software. This technology is used in military applications.

Autostereoscopic displays are monitors with semi-cylindrical lenses, running from top to bottom, superimposed only on the foreground and background images; there is no real third dimension, only its imitation in two perspective planes. Currently Philips® and SEGA® are working on this three-dimensional technology with the aim of obtaining some technological advantage. The results are very modest, and the resolution is reduced to less than 50%. This technology is not compatible with the existing technological infrastructure and requires users to completely replace their monitors. In applications not created specifically for this technology, the screen displays a blurred image, which makes them completely incompatible with the current infrastructure. For the viewer to see a three-dimensional image, he must be located at a distance of about 16 inches (40.64 cm), which varies with the size of the monitor; in addition, the viewer must look at the center of the screen perpendicular to it and fix his gaze at a focal point in front of the actual screen. With even a small deviation of the line of sight or viewing angle, the volumetric image is lost.

In the prior art there are several patents associated with the development of this technology, namely:

U.S. patents No. 6593929 of July 15, 2003 and No. 6556197 of April 29, 2003, issued to Timothy Van Hook and others, are aimed at creating a low-cost video game system that can be used to simulate a three-dimensional world and project it onto a two-dimensional screen, the image being based on observation points changed in real time by the user with game controllers.

U.S. patent No. 6591019 of July 8, 2003, issued to Claude Comair and others, uses a method for compressing and restoring transformation matrix data in a computer-generated three-dimensional graphics system; the method converts a transformation matrix of real numbers into a matrix of integers by searching for zeros in the matrix. The compressed matrix occupies much less memory, so three-dimensional animation can be efficiently reconstructed in real time.

U.S. patent No. 6542971 of April 1, 2003, issued to David Reed, describes a memory access system and method in which, instead of auxiliary memory, the system has memory attached to the main memory, which writes and reads data entered from one or more peripheral devices.

U.S. patent No. 6492987 of December 10, 2002, issued to Stephen Morein, describes a method and apparatus for processing non-visible objects; first, the geometric properties of at least one element of an object are compared with characteristic geometric properties for a group of pixels. While the elements of the object are being displayed, new geometric properties are determined and the data is refreshed with the new values.

U.S. patent No. 6456290 of September 24, 2002, issued to Vimal Parikh and others, offers a graphics system interface for user applications and training programs. Its features include a unique vertex representation, in which a graphics string may contain vertex state information, the setting of the projection matrix and commands directed to the frame buffer.

Every video game is a set of programs written in a computer language. The goal is to simulate a non-existent world and to place the player or user in that world, while the main effort is concentrated on improving the visual characteristics, on controllability, on analyzing images and making decisions in a competitive and advanced (difficulty-level) environment, and on a large number of scenarios with artistic content. As for the game engine, most games are divided into the following structures: the video game itself and game libraries with the corresponding graphics engine and audio engine; the graphics engine contains the source code for two-dimensional and three-dimensional images, and the audio engine contains the code for effects and music. Each of these game engine blocks is executed in a cyclic mode, the cycle being called the game loop, and each of these engines and libraries is responsible for its own operation.

For example:

Graphics engine: displays the image as a whole.

Source code for two-dimensional images: static images, "background images" and "sprites" that appear when the video game image is displayed.

Source code for three-dimensional images: dynamic images, vector images processed in real time and treated as independent objects with xyz coordinates inside the computer-generated world.

Audio engine: plays sound.

Effects code: played during special events such as explosions, crashes, jumps, etc.

Music code: background music, usually played in accordance with the video game environment.

Executing all these blocks in a cyclic mode makes it possible to check the current positions, conditions and game metrics; the information obtained is then used to act on the elements of the video game.

The difference between game programs created for game consoles and for computers is that the original personal computers (PCs) of the IBM PC type were not designed for games, and, ironically, the best games use technology compatible with IBM PC type computers. Comparing the personal computers (PCs) of the early video game era with present-day processing capabilities, those PCs were completely archaic and suitable only for low-level processing (assembly language), and the first games used the graphics card and the computer speaker directly. The situation has changed: the processing power and graphics capabilities of modern central processing units (CPU - central processing unit), together with the emergence of cards specially designed to accelerate graphics processing (GPU - graphics processing unit), have left the characteristics of the so-called supercomputers of the eighties far behind.

In 1996 a graphics accelerator system was introduced, called a HARDWARE ACCELERATOR, containing graphics processors capable of performing high-speed mathematical and matrix operations and thereby reducing the load on the main central processing unit (CPU), through data channels specialized for such cards and a programming layer called the HAL (hardware abstraction layer), which makes it possible to process in real time the information associated with xyz coordinates using coordinate matrices and matrix operations such as addition, scalar multiplication and floating-point matrix comparison.
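The following minimal sketch (not part of the patent; the Vec4 and Mat4 names are illustrative) shows the kind of xyz coordinate and matrix arithmetic described above, here a single point transformed by a 4x4 translation matrix:

#include <cstdio>

struct Vec4 { float x, y, z, w; };   // a point in homogeneous xyz coordinates
struct Mat4 { float m[4][4]; };      // a row-major 4x4 transformation matrix

// Multiply a point by a transformation matrix - the operation a graphics accelerator performs in bulk.
static Vec4 transform(const Mat4& a, const Vec4& v) {
    return {
        a.m[0][0] * v.x + a.m[0][1] * v.y + a.m[0][2] * v.z + a.m[0][3] * v.w,
        a.m[1][0] * v.x + a.m[1][1] * v.y + a.m[1][2] * v.z + a.m[1][3] * v.w,
        a.m[2][0] * v.x + a.m[2][1] * v.y + a.m[2][2] * v.z + a.m[2][3] * v.w,
        a.m[3][0] * v.x + a.m[3][1] * v.y + a.m[3][2] * v.z + a.m[3][3] * v.w
    };
}

int main() {
    Mat4 translate = {{{1, 0, 0, 5}, {0, 1, 0, 0}, {0, 0, 1, -2}, {0, 0, 0, 1}}}; // shift x by +5, z by -2
    Vec4 p = {1.0f, 2.0f, 3.0f, 1.0f};
    Vec4 q = transform(translate, p);
    std::printf("(%.1f, %.1f, %.1f)\n", q.x, q.y, q.z);  // prints (6.0, 2.0, 1.0)
    return 0;
}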

Brief description of the invention

The aim of the present invention is to solve the problem of incompatibility of technologies in the display of three-dimensional images.

Another objective of the present invention is to offer a multi-purpose technology that enables the end user to view video, view computer graphics, play video games and run simulations using the same device.

The next objective of the present invention is to offer a technology that eliminates the side effects produced by viewing three-dimensional images generated with existing technologies, even after several hours of constant use.

Another objective of the present invention is to offer deep software integration through the creation of a pair of buffers corresponding to the left eye and the right eye, and hardware integration with an additional independent display device that shares memory with the digital video processors in immersion mode.

Another aim of the present invention is the physical display of the image from the two working buffers generated by the graphics processor or graphics accelerator.

Another purpose of this invention is to give the brain the perception of depth and volume from a highly realistic image, even when the image is created by computer graphics software.

Another objective of the present invention is to propose the TDVision® algorithm, intended for creating highly realistic computer images.

The next objective of the present invention is to make changes in the current technological base so as to create a new digital image-forming process using optical technology, in order to achieve the perception of a real image by setting the image from the right camera.

Another objective of the present invention is to achieve compatibility with digital media, in which a computer plays recordings from DVD, a portable computer plays movies, video can be transmitted over the Internet, and personal computers and game consoles can be used within the Internet infrastructure.

Another aim of the present invention is to propose a new algorithm in assembly language, together with analog and digital equipment, designed to achieve the best fit between 3D equipment and existing technologies.

Another objective of the present invention is to propose a three-dimensional visual computer system for creating stereoscopic images using animation, rendering, and software modeling.

Brief description of drawings

Figure 1 shows the flow chart of a TDVision® video game.

Figure 2 shows the basic structure of a video game based on the known state of the art.

Figure 3 shows the most important three-dimensional element required to build an object located at a certain position in space.

Figure 4 shows the overall procedure for developing a video game program based on the functional technologies OpenGL (Open Graphic Library) and the DirecTX API (Application Programming Interface).

Figure 4a shows the block diagram of the algorithm for creating the left and right buffers and additionally recognizing whether TDVision technology is used.

Figure 4b shows the flowchart of the subroutine for setting the image from the right camera, recording the image in the right background buffer as a function of the right camera vector; it also recognizes whether the format is the TDVision technological format.

Figure 5 shows a general block diagram of the modifications to the graphics adapter required for implementing TDVision technology which, in turn, enables data exchange, contains the programming language and allows the data associated with the set of images to be processed.

Figure 6 shows the block diagram of the algorithm that allows information to be recorded in a TDVision background buffer and presented on the screen in DirecTX 3D format.

Figure 7 shows the sequence of displaying on the screen using the OpenGL format.

Figure 8 presents a flowchart of the procedure for displaying information on the screen using the left and right background buffers with the OpenGL algorithm.

Figure 9 shows the changes that need to be made in the graphics card used for TDVision technology.

Detailed description of the invention

A video game is a process that starts by providing multiple independently associated logic states, which include a set of programmable options, where each selected programmable option corresponds to different characteristics of the image. Common program commands can be compiled into code for multiple computing devices without necessarily generating object code for each device.

Computing devices such as personal computers, laptop computers, video game consoles, etc. contain central processing units, system memory, video processing circuits, audio processing circuits and ports for peripherals. Usually the central unit executes a program to generate the geometric data related to the image to be displayed on the screen and passes the geometric data to the video circuit, which generates the pixel data stored in the frame memory, from where the information is transmitted to the display device; the above-mentioned elements as a whole are called the video game engine (figure 2).

Some game engines are licensed to third parties; as an example, the program Quake III Arena contains the game engine QUAKE ENGINE, and this engine was provided under license to the game VOYAGER ELITE FORCE, which uses the "quake" engine. In this way, game developers can focus on the metrics of the game instead of developing a game engine from scratch. The original video games used only two-dimensional images, called "sprites", which were the protagonists of the games.

Most video games and related technologies allow objects to be modeled in a three-dimensional environment or world, where each object is characterized by a position in xyz coordinates and is surrounded by objects with the same characteristics, and they act together in a world with the origin at the point (0, 0, 0).

Three-dimensional graphics first appeared in video game consoles, which do not belong to the category of computers, as physical graphics components of these devices, and the corresponding technologies were later included in personal computer hardware. Another part is situation analysis, commonly known as artificial intelligence applied to games; this component analyzes the situation, positions, conflicts, risks and benefits, and on the basis of this analysis generates a response action for each object participating in the game.

The background buffer is used as a memory in which the image to be displayed on the screen is temporarily recorded without being output to the video card. If the image were sent directly to the video output, the screen would flicker; therefore the information is recorded and quickly processed in the background buffer. The background buffer is usually located inside the physical random access memory (RAM) of the graphics card or graphics accelerator.

A typical procedure within a video game algorithm can be the following (a minimal illustrative sketch is given after the list):

1. Displaying the main screen.

2. Loading characteristics, objects, textures and sounds into memory.

3. Creating memory areas, called duplicate or background buffers, for temporary data.

4. Displaying a background image.

5. Recording the image corresponding to each element involved in the game.

6. Removing all items from memory (double buffer).

7. Checking the information entered by the user and updating the player's position.

8. Processing enemy positions using artificial intelligence (AI).

9. Moving each principal object to its new position.

10. Checking conflicts between objects.

11. Moving to the next frame of the animation.

12. Recording the objects in the background memory buffer.

13. Transferring the data from the background buffer to the screen.

14. Returning to step 5, unless the user wants to stop the game (step 15).

15. Removing all objects from memory.

16. End of the game.
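The sketch below (ours, not the patent's; all type and function names are illustrative placeholders) shows how such a loop is commonly organized around a background buffer that is cleared, drawn into and then transferred to the screen on each frame:

#include <cstdio>
#include <vector>

struct Object { float x, y, z; };            // any element involved in the game

struct BackBuffer {                          // stands in for the duplicate/background buffer
    std::vector<Object> contents;
    void clear() { contents.clear(); }                     // step 6: empty the double buffer
    void draw(const Object& o) { contents.push_back(o); }  // step 12: record an object
    void present() const {                                 // step 13: transfer to the screen
        std::printf("frame presented with %zu objects\n", contents.size());
    }
};

int main() {
    Object player{0.0f, 0.0f, 0.0f};
    std::vector<Object> enemies{{5.0f, 0.0f, 0.0f}, {-3.0f, 0.0f, 2.0f}};
    BackBuffer back;

    for (int frame = 0; frame < 3; ++frame) {   // stands in for "repeat until the user quits"
        back.clear();                           // step 6
        player.x += 0.1f;                       // step 7: apply user input (stubbed)
        for (auto& e : enemies) e.x -= 0.05f;   // steps 8-9: AI and movement (stubbed)
        back.draw(player);                      // step 12
        for (const auto& e : enemies) back.draw(e);
        back.present();                         // step 13
    }
    return 0;                                   // steps 15-16: cleanup and end of the game
}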

Most often a video game console uses the following items: the CPU, or central processor, which in the game loop processes data, user input from the keyboard, mouse or a gaming device such as a gamepad or joystick, and the game's artificial intelligence data.

The GPU, or graphics processor, performs polygon modeling, texture mapping, transformation and lighting simulation.

The audio DSP, or digital signal processor, handles background music, sound effects and three-dimensional positional sounds.

The graphics engine is the section of the game responsible for managing viewing angles and checking their plausibility, and for assigning textures (metal, leather, etc.), lighting, position, movement and all other aspects associated with each object involved in the video game, whether intended for a video game console or a personal computer (PC). This set of images is processed depending on the assigned starting point coordinates and the calculated values of distance, depth and angular position. The procedure is performed in two steps, but it is a complex process because of the mathematical operations it contains, in particular the object translation procedure (offset from the origin) and the object rotation procedure (angle of rotation relative to the current position).
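A brief illustration (ours, not taken from the patent) of the two operations named above, applied to a single point: the point is shifted so that the pivot lies at the origin, rotated about the vertical axis, and shifted back.

#include <cmath>
#include <cstdio>

struct Point { float x, y, z; };

// Rotate p about a vertical axis through pivot c by angle a (radians): offset, rotate, offset back.
static Point rotateAboutPivot(Point p, Point c, float a) {
    float dx = p.x - c.x, dz = p.z - c.z;               // offset from the origin (translation)
    Point r;
    r.x = c.x + dx * std::cos(a) + dz * std::sin(a);    // rotation relative to the current position
    r.y = p.y;
    r.z = c.z - dx * std::sin(a) + dz * std::cos(a);
    return r;
}

int main() {
    Point p{2.0f, 0.0f, 0.0f};
    Point pivot{1.0f, 0.0f, 0.0f};
    Point q = rotateAboutPivot(p, pivot, 3.14159265f / 2.0f);  // rotate by 90 degrees
    std::printf("(%.2f, %.2f, %.2f)\n", q.x, q.y, q.z);        // approximately (1.00, 0.00, -1.00)
    return 0;
}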

It is important to note that the minimum elements of an image (figure 3) consist of minimal control elements called "vertices", each representing a single point in xyz space. The minimum allowable geometric element is the triangle, composed of at least three points in space; from this triangular base, larger objects are formed consisting of thousands of smaller triangles, as for example in the game "Mario Sunshine". Such a representation is called a mesh ("Mesh"), and a texture, a color and even the graphical display characteristics can be associated with each mesh object or even with each triangle. This information characterizes three-dimensional graphics. It is also important to note that even if graphics created from xyz vectors are, by their nature, called three-dimensional, the final output displayed to the user on the screen is in general two-dimensional: the engine generates a flat image whose content is based on the three-dimensional vectors, which the user sees as if they were in front of him; the programmed characteristics of depth and light only give the image structure, but to the human brain the objects do not appear to occupy a volume in space.
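An illustrative data layout (ours, not from the patent) of the elements just described: a vertex as a single xyz point, a triangle as three vertex indices, and a mesh as a list of triangles with an associated texture identifier.

#include <cstdio>
#include <vector>

struct Vertex   { float x, y, z; };   // a single control point in xyz space
struct Triangle { int a, b, c; };     // indices into the vertex list

struct Mesh {                          // "mesh": triangles plus shared display attributes
    std::vector<Vertex>   vertices;
    std::vector<Triangle> triangles;
    int textureId;                     // texture/color association mentioned above (illustrative)
};

int main() {
    Mesh quad;
    quad.vertices  = {{0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0}};
    quad.triangles = {{0, 1, 2}, {0, 2, 3}};   // two triangles form a larger object
    quad.textureId = 7;
    std::printf("%zu vertices, %zu triangles\n",
                quad.vertices.size(), quad.triangles.size());
    return 0;
}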

Initially, to perform acceleration and complex mathematical operations, it was necessary to address the graphics card directly, which meant that supporting a different video card required rewriting the game almost completely. Given this problem, the company Silicon Graphics® focused on a software layer (OpenGL®) that interacts directly with the hardware and provides a number of useful functions and routines which, regardless of the hardware, communicate with it only in the graphical aspects. Microsoft® also developed a group of similar functions, called DirecTX 3D, which is very similar to OpenGL® but with more extensive functionality, since it also covers the management of sound and, among other things, the area of online games.

These functions and sets of subroutines are called the graphics application programming interface (Graphics Applications Programming Interface - GRAPHICS API). These application programming interfaces (APIs) can be accessed from different programming languages, such as C, C++, Visual.Net, C# and Visual Basic.

In each of the current virtual reality systems, the left-right sequence passes through a single VGA or video channel; moreover, these systems require software that includes special commands for interchanging the video images in the background buffer while the information is displayed on the screen, using the well-known displacement algorithm based on offsets and angles, as in simulation.

In addition to the functions provided by the OpenGL® and DirecTX® API technologies, there is a set of graphics processing functions within the application programming interface provided by Windows®, called the WINDOWS API.

The development of a video game program based on these technologies is reflected in figure 4, which presents the video game program developed in the present invention by the TDVision® Corporation. Figure 4 shows the block diagram of the algorithm, which begins with the execution of the program with an adequate video game metric (40), followed by the program developed in any suitable programming language (such as C, C++, Visual Basic and others) (41), the source code of the video game (42), which specifies the game logic and introduces the characteristics of the objects, sounds, events, etc. (43); (44) is the selector of events, which operate using the Windows API (45), OpenGL (46) or DirecTX (47), and finally the image is transmitted to the video display (48).

All of this applies to software and, interestingly, Microsoft® has ensured that for many DirecTX functions, even when some features initially require special hardware, the DirecTX API is able to emulate the hardware in software, as if it were actually present.

In the present invention the use of OpenGL® and DirecTX® technologies is maximized and optimized, resulting in software with certain special characteristics, algorithms and processes that meet the technical requirements set by the TDVision company and used in the present invention.

As for the hardware, the drivers of each card can be analyzed through the hardware abstraction layer (HAL) and the direct interface; for TDVision technology it is necessary to analyze the minimum technical conditions and requirements, as well as the possible changes in the technology that will make it possible to achieve a true three-dimensional image in the 3DVisor systems of the TDVision company.

As far as the display on the screen is concerned, the information generated by the software and stored in the graphics device context, or in the form of an image surface, is transferred directly to the last cascade of the graphics card, which converts the digital signal into an analog or digital video signal (depending on the monitor), and the image is then displayed on the screen.

The currently known methods of displaying on the screen are:

Analog monitor with a digital computer signal.

Digital monitor.

Analog monitor with a TV signal.

Three-dimensional virtual reality system.

The type(s) of output depend(s) on the video card, which must be connected to a compatible monitor.

Figure 4a shows an algorithm for creating memory for temporary graphics (the left and right background buffers), normally an external memory area: the left buffer is created at stage (400), and at (401) it is detected whether TDVision technology is used; if yes, the right buffer is created at step (402) and the algorithm ends at step (403); if TDVision technology is not used, the algorithm ends at step (403), because there is nothing to recognize.

Figure 4b shows the block diagram of the algorithm for recognizing and displaying the images from the left and right cameras: the image from the left camera is set at step (410), the image is recorded in the left background buffer (411) depending on the camera position, the image is displayed on the left screen (412), and then at step (413) it is detected whether the format is TDVision; if so, the position coordinates of the right image are calculated (414), the image is recorded in the right background buffer as a function of the position of the left camera (415), the image is then displayed on the right screen (416), and the algorithm ends at step (417); if at the recognition stage (413) this is not necessary, because the image is issued in a known format, the algorithm proceeds to the final stage (417) and ends, as there is no need to calculate the other coordinates and display parallel information. What is new in the present invention is the graphics processor shown in figure 5 (HARDWARE GPU) and the graphics engine (GRAPHICS ENGINE, SOFTWARE).
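A minimal sketch (ours, not the patent's code; all names are illustrative) of the figure 4a/4b flow: the left buffer is always created, while the right buffer and right camera are set up only when TDVision content is recognized. Here the right camera is given a simple lateral offset of about 6.5 cm; the patent computes its exact position with the SETXYZTDV transformation described further below.

#include <cstdio>
#include <vector>

struct Buffer { std::vector<unsigned> pixels; };
struct Camera { float x, y, z; };

static bool isTDVisionContent() { return true; }   // stands in for checking the TDVision flag

int main() {
    Buffer left, right;
    left.pixels.resize(1024 * 768);                // figure 4a: the left buffer is always created
    bool tdv = isTDVisionContent();                // recognize whether TDVision technology is used
    if (tdv) right.pixels.resize(1024 * 768);      // only then allocate the right buffer

    Camera leftCam{0.0f, 0.0f, -10.0f};            // figure 4b: set the left camera viewpoint
    // renderTo(left, leftCam); presentLeft(left);  // draw into the left back buffer and show it
    if (tdv) {
        Camera rightCam = leftCam;
        rightCam.x += 0.065f;                      // simplified offset by the interocular distance
        // renderTo(right, rightCam); presentRight(right);
        std::printf("stereo pair rendered (right camera at x = %.3f)\n", rightCam.x);
    } else {
        std::printf("single view rendered\n");
    }
    return 0;
}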

The modifications on the software side are as follows:

- Increasing the amount of RAM (random access memory) for the left and right background buffers.

- Using an additional independent display device with the display buffer and shared memory in immersion mode, so that it uses the appropriate background buffer.

In this case the RAM for the background buffers and the working buffer of the video card must be sufficient to support the left and right channels simultaneously. At least 32 MB is required to support four buffers with a color depth of 1024×768×4 bytes each. Additionally, the signal output must be double (two VGA ports), or the card must be able to work with multiple monitors, as for example in the case of the ATI RADEON 9500®, equipped with two selectable on-screen display systems: one VGA port and one S-Video port. A graphics card with dual output has been created, designed to work with a second display and providing a display frequency of 60 frames per second per left-right channel for connection to the 3DVisor system, equipped with SVGA, S-Video, RCA or DVideo outputs.
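As a quick check of the figure quoted above (our arithmetic, not stated in the patent): one 1024×768 buffer at 4 bytes per pixel occupies 3 MB, so the four buffers by themselves take about 12 MB; the 32 MB minimum presumably leaves room for the working buffers, textures and other data held on the card.

#include <cstdio>

int main() {
    const long bytesPerBuffer = 1024L * 768L * 4L;   // 3,145,728 bytes, i.e. 3 MB
    const long fourBuffers = 4L * bytesPerBuffer;    // about 12 MB for four buffers
    std::printf("%ld bytes (%.1f MB) for four 1024x768 buffers at 4 bytes per pixel\n",
                fourBuffers, fourBuffers / (1024.0 * 1024.0));
    return 0;
}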

The computing system, with the changes made to its configuration, is illustrated in figure 5 and contains a central processing unit (CPU) (50), a memory driver (51) and extended memory (52); this memory is used to support the audio channel driver (53) and the speakers (54); in addition there is an input/output driver (55), which in turn controls the disk ports (56) and the interactive elements of user interaction (57), such as the mouse, keyboard, gamepad and joystick; on the other hand, the graphics driver communicates directly with the monitor (59) and with the 3DVISORS stereoscopic vision visors (59b).

With specific regard to the graphics hardware (HAL), the changes necessary to bring the system into compliance with the requirements of TDVision technology are applied to the system (500), which transmits the information to the graphics drivers (501) working with the graphics hardware support (502); this requires a series of physical changes to conform with the requirements of TDVision technology. To implement TDVision technology using OpenGL and DirecTX, changes must be made in parts of the software section of the video games and, as mentioned above, in some hardware sections.

As for the software, some special features must be introduced into the standard operation of the algorithm, together with the ability to call the TDVision subroutine, as shown in figure 6:

- Loading information about the surface (600).

- Loading information about the meshes (601).

- Creating the TDVision background buffer (602): the left background buffer is created in memory and, if TDVision technology is used, the right background buffer is also created in memory.

- Applying the initial coordinates (603).

- Defining the game logic (604).

- Verification and artificial intelligence (605).

- Calculating the positions (606).

- Checking for conflicts (607).

- Recording the information in the TDVision background buffer and displaying it on the screen (608): the image from the right camera must be set, the image is recorded in the right background buffer as a function of the current vector of the right camera, and the image is displayed on the right screen (working buffer). If TDVision technology is used, the left pair of coordinates is calculated, the image from the left camera is set, the image is recorded in the left background buffer as a function of the current vector of the left camera, and the image is output on the right screen (working buffer), which requires modification of the hardware.

Thus a pair of buffers is obtained, corresponding to the right and left eyes, which, when used in the loop, are given the rendering vector coordinates of the right camera (the current one) and of the left camera (the complement, computed by the SETXYZTDV function shown below).

It is important to note that these on-screen output buffers, or working buffers, are first assigned to the video display surface (device context) or to a given surface (the surface), but in order to display the information on the screen in the 3Dvisor system of the TDVision company, for compatibility with TDVision technology, two outputs must be physically present: a right output (standard VGA) and a left output (additional VGA, additional digital or S-Video). The example uses DirecTX, but the same process and concept can be applied to the OpenGL format.

Figure 7 shows the structure of the algorithm (70) describing the display sequence of each channel of the communication interface with graphics applications, where the image (71) is constructed efficiently using trigonometry (72) through operations on vertices (77) and, using operations on pixels or picture elements (75) together with (73), the display grid (74) and the memory that assigns textures to the image (76), is transferred to the frame memory (70F) by the operations (79). The Windows software (700) communicates with (702) and with the graphics language of the card (701), which in turn contains a library of graphical information that can advantageously be used when supplied at (703), (704).

Figure 8 presents, for TDVision technology, the OpenGL algorithm (80) for displaying the left and right images of an object: the background buffer is erased (81), a pointer to the background buffer is obtained (82), the background buffer is closed (83), the scene is rewritten (84), the background buffer is opened (85), the background buffer pointer is unlocked (86), and the image is transferred to the left display surface; at stage (800) it is recognized whether TDVision technology is applied, and if so, the memory is erased (801), a pointer to the background buffer is obtained (802), the background buffer is closed (803), the coordinates of the new perspective are obtained (804), the scene is re-entered (805), the memory is opened (806), the background buffer pointer is recorded a second time (807) and the image is sent to the right display surface (808).
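One standard OpenGL way to realize such a left/right pair of background buffers is quad-buffered stereo with GL_BACK_LEFT and GL_BACK_RIGHT; the sketch below (ours, assuming a stereo-capable driver and the classic GLUT library, not the patent's exact mechanism) renders the same scene twice with a horizontally offset viewpoint and then presents both buffers.

#include <GL/glut.h>   // classic GLUT / freeglut

static void drawScene(float eyeOffsetX) {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-eyeOffsetX, 0.0f, -5.0f);       // shift the view for this eye
    glBegin(GL_TRIANGLES);                        // a single triangle stands in for the scene
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}

static void display(void) {
    const float eyeSeparation = 0.065f;           // about 6.5 cm, per the description above
    glDrawBuffer(GL_BACK_LEFT);                   // left background buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene(0.0f);
    glDrawBuffer(GL_BACK_RIGHT);                  // right background buffer (new perspective)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene(eyeSeparation);
    glutSwapBuffers();                            // present both buffers
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO);
    glutCreateWindow("stereo back buffer sketch");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}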

Figure 9 shows the diagram of the modifications (90) that need to be made in the graphics card to ensure compliance with TDVision technology, namely: the left standard background buffer (91) precedes the left standard primary buffer (92), which in turn is connected to the VGA output for the monitor (95); there must be another VGA output so that it can take the data from the right primary buffer (94), which in turn has its own preceding background buffer (93) related to TDVision technology. Both the left and right buffers can be connected to the 3DVisor system (96) with a dual VGA input for receiving and displaying the information transmitted by the background buffers (91) and (93).

In these software modifications the following DirecTX API functions are used:

To create a TDVision background buffer:

SUB CREATEBACKBUFFERTDV()

' Left buffer: device bound to the left window handle (hWndL)
Set d3dDevice = _
    d3d.CreateDevice(D3DADAPTER_DEFAULT, _
    D3DDEVTYPE_HAL, hWndL, _
    D3DCREATE_SOFTWARE_VERTEXPROCESSING, _
    d3dpp)

If GAMEISTDV Then
    ' Right buffer: second device bound to the right window handle (hWndR)
    Set d3dDeviceRight = _
        d3d.CreateDevice(D3DADAPTER_DEFAULT, _
        D3DDEVTYPE_HAL, hWndR, _
        D3DCREATE_SOFTWARE_VERTEXPROCESSING, _
        d3dpp2)
End If

END SUB

Recording the image in the TDVision background buffer:

SUB DRAWBACKBUFFERTDV()

' Draw the left scene
d3dDevice.BeginScene
d3dDevice.SetStreamSource 0, polyl_vb, Len(polyl.v1)
d3dDevice.DrawPrimitive D3DPT_TRIANGLELIST, 0, 1
d3dDevice.EndScene

' Copy the background buffer to the working buffer (the screen)
d3dDevice.Present ByVal 0, ByVal 0, 0, ByVal 0

' Verify whether it is a TDVision program by checking the flag
If GAMEISTDV Then
    ' Calculate the coordinates of the right camera
    SETXYZTDV
    ' Draw the right scene
    d3dDevice2.BeginScene
    d3dDevice2.SetStreamSource 0, poly2_vb, Len(polyl.v1)
    d3dDevice2.DrawPrimitive D3DPT_TRIANGLELIST, 0, 1
    d3dDevice2.EndScene
    d3dDevice2.Present ByVal 0, ByVal 0, 0, ByVal 0
End If

END SUB

Modification of the camera's xyz vector:

' Assign the computed position to the right camera vector and rebuild its view matrix
vecCameraSource.z = z_position
D3DXMatrixLookAtLH matView, vecCameraSource, vecCameraTarget, CreateVector(0, 1, 0)
d3dDevice2.SetTransform D3DTS_VIEW, matView

vecCameraSource.x = x_position
D3DXMatrixLookAtLH matView, vecCameraSource, vecCameraTarget, CreateVector(0, 1, 0)
d3dDevice2.SetTransform D3DTS_VIEW, matView

vecCameraSource.y = y_position
D3DXMatrixLookAtLH matView, vecCameraSource, vecCameraTarget, CreateVector(0, 1, 0)
d3dDevice2.SetTransform D3DTS_VIEW, matView

Thus a pair of buffers is obtained, corresponding to the left and right eyes, which, after evaluation by the game loop, receive the rendering vector coordinates of the right and left cameras (the complement is calculated by the SETXYZTDV function), using the usual coordinate-transformation expressions.

It is important to note that these on-screen output buffers, or working buffers, are first assigned to the device context or to a given surface, but in order to display the information in the 3Dvisor system of the TDVision company, and for compatibility with TDVision technology, two video outputs must be physically present: a right output (standard VGA) and a left output (additional VGA, additional digital or S-Video).

The example was presented using DirecTX, but the same procedure and concept can be applied to the OpenGL format, as shown in figure 8.

In this case the RAM of the background buffers and the working buffer of the graphics card must be large enough to support the left and right channels simultaneously. This requires a minimum of 32 MB to support four background buffers with a color depth of 1024×768×4 bytes each. As mentioned above, the signal output must be double (two VGA ports) or must be able to work with multiple monitors, as in the case of the ATI RADEON 9500®, equipped with two on-screen display systems from which a choice can be made: one VGA port and one S-Video port.

A graphics card has been created with dual output, providing a frame rate of 60 frames per second per left-right channel for connection to the 3DVisor system, equipped with SVGA, S-Video, RCA or DVideo outputs.

Thus it is possible to obtain images corresponding to the camera's point of view at the left and right angles, and the device will recognize the information, displaying it on two different and independent video outputs without multiplexing, in real time. Currently all technologies use multiplexing and software modeling techniques; with the approach proposed in the present invention, real information can be obtained using the 3Dvisor system, where the image can be displayed in two different views, and the brain will give it the volume it occupies in space, without any blurring on the screen, an effect that is observed in the known technologies.

The method of calculating the coordinates of the secondary camera (SETXYZTDV) makes it possible to create a three-dimensional computer visual system for generating stereoscopic images through animation, rendering and software modeling. The method obtains the spatial coordinates (x, y, z) used to assign two computer-generated virtual imaging cameras with the aim of creating a stereoscopic image, through any program that simulates the third dimension and generates an image by moving the object or by moving a virtual camera that simultaneously observes the computer-generated object, such as Autocad, Micrografix Simply 3D, 3Dmax Studio, Point, Dark Basic, Maya, Marionette, Blender, Excel, Word, Paint, Power, Corel Draw, Photo-paint, Photoshop, and so on; all these programs, however, are intended to display the image from only one camera and one fixed or floating viewpoint.

Additional three-dimensional modeling and animation capabilities are introduced into the above programs through the expressions for coordinate transformation, namely:

x = h cos φ - u sin φ

y = h sin φ + u cos φ

The exact position of the second, or secondary, camera is calculated; it is directly associated with the first camera, and by using it the two images are obtained simultaneously at different angles, simulating a person's ability to see objects in stereoscopic perspective. In these procedures, algorithms calculate in real time the position of the additional camera in order to place it correctly and to achieve the simulation and display of the image from the second camera; this is done using the coordinate-transformation expressions and by moving the camera to the origin, where the angle and the distance between the additional camera and the object are calculated; the main camera, the object and the secondary camera are then moved back to the calculated positions. Seven parameters must be known, namely the three coordinates (Xp, Yp, Zp) of the main camera in the original coordinate system, a fourth parameter equal to the distance between the eyes (from 6.5 to 7.0 cm), and the three position coordinates of the object observed by the cameras.

The output parameters are the coordinates of the auxiliary camera observing the same object, that is, (Xs, Ys, Zs), which are calculated using the following steps (a minimal numerical sketch is given after the list):

- Obtaining the coordinates of the main camera in the original coordinate system (Xp, Yp, Zp).

- Obtaining the coordinates of the object (xt, yt, zt).

- Only the "x" and "z" coordinates are transformed, because the position and/or height of the camera remains constant (no visual deviation for the observer).

- The coordinates of the main camera are moved to the position (0, ys, 0).

- The object is also moved.

- The slope of the line connecting the camera and the object is calculated.

- An angle is formed between the axis and the vector connecting the main camera and the object.

- The quadrant in which it is located is classified, for applying special conditions when calculating the angle, using the inverse tangent function.

- The new coordinates are obtained by rotating the whole coordinate system about the axis by the same angle between the axis and the vector; the result is a new coordinate system in which the object lies on the "z" axis and the main camera is at the origin of the new coordinate system.

- The coordinates of the secondary camera are obtained by placing it at a distance equal to the average distance between human eyes.

- These coordinates are rotated back by the same initial angle.

- The "x" and "z" displacements that were originally subtracted to move the main camera to the origin are added back.

- Finally, these two new coordinates Xs and Zs are assigned to the additional camera; the previously determined "y" coordinate, which defines the height, remains the same, and the final point coordinates (Xs, Yp, Zs) are assigned to the secondary camera.
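The sketch below (ours, not the patent's SETXYZTDV code; the function and variable names are illustrative) follows the steps above: the main camera is moved to the origin, the angle between the z axis and the camera-object vector is found with the inverse tangent, the secondary camera is placed at the interocular distance in the rotated system, and the rotation and offsets are then undone.

#include <cmath>
#include <cstdio>

struct Coord { double x, y, z; };

// Compute the secondary (right) camera position from the main camera, the observed object
// and the eye separation; the height (y) is left unchanged, as described above.
static Coord secondaryCamera(Coord cam, Coord obj, double eyeSeparation) {
    double ox = obj.x - cam.x;                  // move the main camera to the origin (x and z only)
    double oz = obj.z - cam.z;
    double phi = std::atan2(ox, oz);            // angle to the z axis; atan2 handles the quadrant
    // In the rotated system the object lies on the z axis and the secondary camera sits at
    // (eyeSeparation, 0); rotating back by phi gives its offset in the original system.
    double sx = eyeSeparation * std::cos(phi);
    double sz = -eyeSeparation * std::sin(phi);
    return {cam.x + sx, cam.y, cam.z + sz};     // add back the original x and z displacements
}

int main() {
    Coord mainCam{0.0, 1.7, 0.0};
    Coord object{3.0, 1.0, 4.0};
    Coord rightCam = secondaryCamera(mainCam, object, 0.065);
    std::printf("secondary camera at (%.3f, %.3f, %.3f)\n",
                rightCam.x, rightCam.y, rightCam.z);   // about (0.052, 1.700, -0.039)
    return 0;
}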

This procedure can also be implemented in languages such as Delphi, C, C++, Visual C++, Omnis, etc., and the result will be the same.

In its generalized form, this algorithm can be used in any application requiring real-time calculation of the position of the secondary camera.

This algorithm is designed for use with any existing software that works in two dimensions but is enhanced for stereoscopic video systems.

Although specific embodiments of the present invention have been illustrated and described, it is obvious to those skilled in the art that several modifications and changes are possible that do not depart from the scope of the present invention. The appended claims are intended to cover the above information so that all such changes and modifications fall within the scope of the present invention.

1. A three-dimensional video game system capable of processing the video data stream through a graphics engine, processing code corresponding to the technical requirements of three-dimensional graphics, characterized in that it includes:

receiving commands from OpenGL® or DirecTX® programs or from the application inside the game engine;

a pair of buffers, or physical memory lines, corresponding to the left eye and the right eye;

displaying the image from the left virtual camera in the left background buffer, or setting the left image depending on the camera position;

calculating the coordinate position for the right viewpoint;

displaying the image in the background buffer as a function of the position of the left virtual camera; creating separate memory areas for temporary graphics, or left and right background buffers, with additional memory introduced when configuring the right buffer, located in a contiguous memory area different from the left buffer; providing an additional independent display device in the display buffer, sharing memory in immersion mode, so that the corresponding background buffer can be recognized and used;

a GPU in addition to the graphics engine, in which the random access memory (RAM) is increased for the independent left or right background buffer; a graphics card with dual video output;

generating the left or right image in different channels - VGA or a video channel.

2. The three-dimensional video game system according to claim 1, characterized by an increase in the background buffer, or in the amount of physical random access memory (RAM) of the graphics card, so as to support both output buffers and to capture temporarily and quickly the set of images of a scene, without producing an output signal on the video card, and the stereoscopic pairs in the application before they are displayed.

3. The three-dimensional video game system according to claim 1, characterized in that when the left buffer is installed, it is recognized whether TDVision® technology is used and the right buffer is set up in memory; the viewpoint of the right camera is set, the image is recorded in the right background buffer as a function of the vector of the right camera and the image is displayed in the working buffer; the left pair of coordinates is calculated; the viewpoint of the left camera is set, the image is recorded in the left background buffer as a function of the vector of the left camera through three-dimensional rotation and translation of the axes, and the information is displayed in real time on the TDVision® screen with the aim of obtaining the perception of depth, volume or distance and surface.

4. The three-dimensional video game system according to claim 1, characterized in that the additional data stream is a real-time video data stream; the graphics engine contains a control for displaying the digital video stream in real time, with the left and right images on the respective monitors; the video stream is displayed on an independent VGA or additional display device (in the working buffer) sharing memory in immersion mode, and the system is capable of displaying the left and right sequence through an independent channel.

5. The three-dimensional video game system according to claim 1, characterized in that the amount of random access memory (RAM) of the left background buffer is increased to values greater than 32 MB.

6. The three-dimensional video game system according to claim 1, characterized in that the amount of random access memory (RAM) of the right background buffer is increased to values greater than 32 MB.

7. The three-dimensional video game system according to claim 1, characterized in that it has the ability to recognize and use the appropriate background buffer for independently displaying a full-screen image.

8. A three-dimensional video game system using video game software written in any programming language, using source code and game logic that responds to user actions and events through a series of graphics processing functions within a software interface such as OpenGL® or DirecTX®, transferring the image to the display surface, characterized in that the TDVision® software includes:

a routine that loads information about the surface; loading information about the grid;

creating the left and right background buffers by means of routines;

applying the initial coordinates;

applying the game logic;

validation of the artificial intelligence;

calculating positions; checking conflicts;

writing the information into the left and right background buffers and displaying it on the screen;

the evaluation loop;

obtaining the vector coordinates of the corresponding visualization of the left and right cameras;

displaying the information in real time through a series of graphics processing functions within a software interface such as OpenGL® or DirecTX®, and transferring the images to the display surface.

9. The three-dimensional video game system according to claim 8, characterized by the steps of recording in the TDVision® background buffer, including:

creating the image visible from the left viewpoint;

recording in the left background buffer as a function of the camera position;

displaying the image in the left working buffer;

recognizing whether the format used is the TDVision® technology format;

calculating the right pair of coordinates;

recording in the background buffer as a function of the left camera position; displaying the information in the working buffer.

10. The three-dimensional video game system according to claim 8, characterized by using an algorithm to display the image from the background buffer on the screen, comprising the following steps:

clearing the background buffer;

getting a pointer to the background buffer;

closing the background buffer;

re-recording the scene;

opening the background buffer;

unlocking the pointer to the background buffer;

recognizing whether the format is the TDVision® format;

displaying the image on the left screen;

and, if the format is the TDVision® format:

clearing the background buffer;

getting a pointer to the background buffer;

closing the background buffer;

obtaining the coordinates of the pointer;

re-recording the scene; opening the background buffer;

unlocking the pointer to the background buffer;

displaying the image on the right screen.

11. The three-dimensional video game system according to claim 8, characterized by software that calculates the coordinates of the additional stereoscopic camera, which makes it possible to obtain a computer system of three-dimensional vision for generating stereoscopic images; it obtains the spatial coordinates (x, y, z) by means of the coordinate-transformation equations and assigns them to the two virtual visualization cameras intended to achieve stereoscopic vision; it calculates the exact position of the secondary camera, directly connected with the first camera located at the origin, creating two simultaneous complete images at the different angles that provide human stereoscopic vision.



 

Same patents:

FIELD: technical vision systems, possible use in building to measure movements of products, for example, during examination of shape forming of cylindrical rib covers and other similar products.

SUBSTANCE: method for examining cylindrical products includes producing a video image with following computer processing of each recorded frame and analysis of resulting parameters and is characterized by installation of video image source on the side of product end, placing within limits of observation angle of its cross-section, and video recording is performed from one immovably fixed point, and video recording is performed continuously in process of forming of longitudinal ellipsoid ribs and shaping the product as a cylinder, simultaneous computer processing is performed for each recorded frame of movements and deformations of points of longitudinal ellipsoid ribs and of cylindrical product as a whole for evaluation purposes.

EFFECT: increased precision of measurement of deformations and large movements, simplified examination of products.

6 dwg

FIELD: method for cutting off a line for removing unneeded links of line with nodes and structure of links in three-dimensional image.

SUBSTANCE: in accordance to method it is determined whether nodes of line of three-dimensional image, subject to being projected onto a plane, inside or outside displayed volume, by comparing values along axis z for these nodes to values along axis z for cross-section plane relatively to position of center of projection, while cut-off plane is positioned between location of center of projection and projection plane, calculated are points of intersection of cut-off plane and links, connecting nodes of line, new line is built using calculated intersection points and nodes, located inside displayed volume. Also stated are methods of displaying of three-dimensional image with usage of line cut-off.

EFFECT: decreased volume and increased efficiency of calculations, and also increased element displaying speed during displaying of three-dimensional images.

6 cl, 22 dwg

FIELD: technology for processing images of moving objects, possible use, in particular, in theatric art, show business when registration/recording is necessary or repeated reproduction of scenic performance.

SUBSTANCE: method includes inserting enumeration system for each object and performing projection of enumerated objects onto plane, while projection is displayed in form of graph with trajectories of movement of enumerated objects in each staging.

EFFECT: spatial-temporal serial graphic display of scenic action for its further identification and repeated reproduction.

2 dwg

FIELD: technology for encoding and decoding of given three-dimensional objects, consisting of point texture data, voxel data or octet tree data.

SUBSTANCE: method for encoding data pertaining to three-dimensional objects includes following procedures as follows: forming of three-dimensional objects data, having tree-like structure, with marks assigned to nodes pointing out their types; encoding of data nodes of three-dimensional objects; and forming of three-dimensional objects data for objects, nodes of which are encoded into bit stream.

EFFECT: higher compression level for information about image with depth.

12 cl, 29 dwg

FIELD: computer network communication means.

SUBSTANCE: method includes conversion of speech to electric digital signal, transfer of said signal to sound-playing device, conversion of person face to electric digital signal, recognition of face, its characteristic areas and their movement parameters, transfer of information along communication channels to graphic information output device, control of shape changes and space direction of artificial three-dimensional object and its characteristic areas. Method additionally includes detecting errors in face recognition and accompanying parameters by detecting mismatches between configurations of face areas and characteristics of movement thereof for speaking person in electric digital signals, and correction of mistakes before visualization of artificial three-dimensional object by forming control commands on basis of previously recorded shape signs and orientation of three-dimensional object and its characteristic areas for speech characteristics.

EFFECT: higher reliability and precision.

3 cl, 1 dwg

The invention relates to the field of processing and displaying information, the means of data transformation in computer graphics, namely the perceptual system perspectives

The invention relates to computer animation of images

The invention relates to the representation of three-dimensional objects obtained using photos of real objects

The invention relates to devices and methods for creating images

The invention relates to stereoscopic video and can be used to obtain three-dimensional images from any composite or component video source (VCR, camcorder, DVD, video games) using standard video displays, both of the television type (any system: PAL, SECAM, NTSC) and computer monitors (any standard: VGA, SVGA, XGA and others)

FIELD: information technology.

SUBSTANCE: the invention relates to animation systems, and particularly to a communication protocol for supporting information and time synchronisation between several animation systems. Proposed is a communication protocol which controls asynchronous data exchange between a high-level animation system and a low-level animation system. The high-level animation system has a variable, average frame rate and is optimised for interactivity. The low-level animation system has a constant, high frame rate and is optimised for a high frequency of frame updating. The communication protocol contains messages which the high-level animation system can send to the low-level animation system in order to describe an animation and how the animation should change over a given period of time. As a result, the low-level system can display information at a high update rate even if animation data are not received from the high-level system for each frame (a reduced illustrative sketch follows this abstract).

EFFECT: more efficient exchange of required data in an animation system.

26 cl, 9 dwg
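
A much reduced sketch of the idea behind such a protocol (the message fields and the linear interpolation are assumptions, not the actual protocol of the invention): the high-level system occasionally sends a message saying where a parameter should go and over what time, and the low-level system keeps producing frames by evaluating the last message it received, even when no new message arrives for a given frame.

    # Illustrative sketch: sparse "animate" messages from the high-level system,
    # evaluated every frame by the low-level system.
    import time
    from dataclasses import dataclass

    @dataclass
    class AnimateMessage:
        start_value: float     # value at send_time
        target_value: float    # value the parameter should reach
        send_time: float       # when the message was issued
        duration: float        # how long the change should take, in seconds

    def evaluate(message, now):
        """Low-level system: compute the current value for this frame."""
        t = min(max((now - message.send_time) / message.duration, 0.0), 1.0)
        return message.start_value + t * (message.target_value - message.start_value)

    # The high-level system sends one message; the low-level system keeps rendering.
    msg = AnimateMessage(start_value=0.0, target_value=10.0,
                         send_time=time.monotonic(), duration=2.0)
    for _ in range(5):                       # stand-in for the per-frame render loop
        print(evaluate(msg, time.monotonic()))
        time.sleep(0.1)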

FIELD: physics; image processing.

SUBSTANCE: the invention relates to stand-based advertising, marketing and exhibition activities and to hardware and software systems for said purposes. The system for controlling the movement of people is made in the form of a control computer and at least one interactive programmable stand with an interactive information imaging station and a system for 3D display and identification of viewers-participants. The stand has display apparatus, video cameras, a module for detecting contact with the screen, a module for capturing movement with a source of infrared or other radiation and an infrared camera or other device, and an analysis and control module connected to the 3D display module, which determines at its output the characteristics of viewers-participants and the sphere of their possible interests for playing an online game with them, which makes it possible to direct and change the movement of viewers-participants in public places. At least one screen can be fitted on the floor, and the 3D display system can have multiple screens and can be fitted with at least one mirror. The system can have several stands joined into a network for tracking viewers-participants.

EFFECT: wider functional capabilities.

6 cl, 2 dwg

FIELD: physics; image processing.

SUBSTANCE: the invention relates to advertising, marketing, exhibition and other activities in which an individual effect on the audience depending on sex and age is needed. The advertisement display system is made in the form of interactive 3D display and identification systems. The system has a computer and display apparatus for forming a virtual image on screens, a video camera and a module for capturing movement with a radiation source and a camera for capturing the movement of viewers-participants. The capturing module can transmit information to an analysis and control module connected to the 3D display module. A signal is generated which determines the control effects on the virtual image. Information reaches the input of the 3D display module in the form of a set of one or more regions of an image corresponding to the same participant, providing for determination of the sex and age characteristics of participants for playing an online game with them. The 3D display system can have multiple screens and can be fitted with a mirror. The screen can be fitted on the floor.

EFFECT: wider functional capabilities.

6 cl, 2 dwg

FIELD: physics, image processing.

SUBSTANCE: the invention relates to image processing methods. Images are linked by forming a square grid. Elements of the square grid are then matched, and incorrect matches are eliminated using a verification procedure during which the initial order of the conjugate points on the grid is detected. A collineation (projective transformation) between all the images is then estimated, and the resulting image is subsequently formed using an adaptive blending procedure (a reduced illustrative sketch follows this abstract).

EFFECT: the method enables creation of a full-scale landscape panorama by automatically compensating for the projective distortions of the images.

3 cl, 8 dwg
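
In very reduced form, and only as an illustration (RANSAC verification and plain averaging stand in for the patented verification and adaptive-blending procedures; OpenCV is assumed to be available for estimating the projective transform), the pipeline for one pair of colour images could be sketched as:

    # Reduced sketch of the stitching pipeline: matched grid points ->
    # projective (collineatory) transform -> warp -> blend.
    import cv2
    import numpy as np

    def stitch_pair(img_a, img_b, pts_a, pts_b):
        """pts_a, pts_b: Nx2 float32 arrays of conjugate grid points (a -> b)."""
        H, inliers = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
        h, w = img_b.shape[:2]
        warped = cv2.warpPerspective(img_a, H, (w, h))
        # Simple stand-in for adaptive blending: average where both images overlap.
        overlap = ((warped.sum(axis=-1, keepdims=True) > 0) &
                   (img_b.sum(axis=-1, keepdims=True) > 0))
        blended = np.where(overlap,
                           (warped.astype(np.uint16) + img_b) // 2,
                           np.maximum(warped, img_b))
        return blended.astype(np.uint8)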
