RussianPatents.com

Display data management technology. RU patent 2511637.

IPC classes for Russian patent RU 2511637, Display data management technology:

G06T3/00 - Geometric image transformation in the plane of the image, e.g. from bit-mapped to bit-mapped creating a different image
Other patents in the same IPC class:
Method of generating integer orthogonal decorrelating matrices of given dimensions and apparatus for realising said method / 2509364
Method involves, based on a selected generating matrix, generating integer orthogonal decorrelating matrices whose dimensions are selected equal to prime numbers, performing Kronecker multiplication or multiple-scaled combination of two or more elementary matrices, after which the obtained matrices are stored as integer orthogonal decorrelating matrices of given dimensions.
Multi-layered slide transitions / 2501089
System has a component for dividing a slide into separate layers, a component for transitioning the separate slide layers independently of the corresponding layers of the next slide, a processor which executes instructions associated with the division or transition component.
Method of compressing graphics files / 2498407
The method of compressing graphics files includes operations for changing the geometrical dimensions of initial frames of a graphic image, with subsequent decompression of the frames and a qualitative estimate of parameters. A target peak signal-to-noise ratio is predetermined and a variable iter is initialised to zero. A two-dimensional wavelet transformation is performed on the initial frame of the graphic image A(l,h); its wavelet coefficients form a matrix Y(l,h), which is compressed and then decompressed. A zero matrix is then formed and its elements are replaced with the corresponding elements of the decompressed matrix. The reconstructed image is formed by performing an inverse two-dimensional wavelet transformation on the zero matrix with the changed elements, after which the peak signal-to-noise ratio, which characterises the quality of the reconstructed frame compared with the initial frame, is determined. If the calculated ratio is greater than the given value, the described operations are repeated, incrementing the current value of the variable iter by one and replacing the matrix of the initial graphic image A(l,h) with the formed matrix Y(l,h).
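The PSNR criterion that drives the iteration above can be computed as follows (a minimal Python sketch, not from the patent; frames are given as flat lists of pixel values for simplicity):

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized frames,
    given as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames: infinite PSNR
    return 10.0 * math.log10(peak ** 2 / mse)
```

A compression loop of the kind described would compare `psnr(initial, reconstructed)` against the predetermined threshold and keep iterating while the ratio stays above it.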
Method of incorporating additional information into digital graphic images (versions) / 2495494
Additional information which identifies a digital graphic image is incorporated into a digital graphic image, and its separate pixels are colour coded. According to the present invention, the width and height of the image are determined when incorporating additional information into digital graphic images. A text line intended to describe the identified digital graphic image is input into the corresponding text field; if the line is not empty, in accordance with the length of the input text description, a matrix is formed, which determines the position of separate pixels relative to a selected point in the digital graphic image, into which the incorporated text information is coded.
Model-based playfield registration / 2480832
Method, apparatus and system for model-based playfield registration. An input video image is processed. The processing of the video image includes extracting key points relating to the video image. Further, it is determined whether enough key points relating to the video image were extracted; a direct estimation of the video image is performed if enough key points have been extracted, and a homography matrix of the final video image is then generated based on the direct estimation.
Device and method to process images, device to input images and software / 2462757
A composition sample setting section (33) sets a composition sample matching an input image, on the basis of the number of identified areas of the input image that draw attention and the scenes of the input images. On the basis of the composition sample set by the composition sample setting section (33), a composition analysis section (34) defines a "cutting" area on the input image that is optimal for the image obtained by "cutting" the input image following the composition sample. This invention may be applied, for instance, in an image processing device which tunes the composition of the input image.
Method of determining spatial shift of images / 2460137
Method involves converting an image to an equal number of rows and columns; selecting feature space for the first and second images; generating feature matrices, each element of which is a vector of pixel feature values; a plurality of elements is selected from the feature matrix of the first image; for each element, a plurality of elements is selected from the feature matrix of the second image; images are formed, which are matrices of scalar elements; matrices of estimate values are formed; the matrices are approximated; the spatial shift of the second image relative to the first image is determined.
Image forming device and method, program for realising said method and data medium storing program / 2454721
Image C, having exactly the same resolution as image B, is formed by magnifying image A. The presence or absence of a point in image B corresponding to each pixel location in image C, as well as the location of the relevant corresponding point, is evaluated, and each pixel location in image C for which a corresponding point was found is assigned image data from the corresponding location in image B. The formation of image data at each pixel location in image C for which it was evaluated that no corresponding point exists is facilitated by using the image data assigned in accordance with the evaluation results.
Image generating method and apparatus, image generating program and machine-readable medium storing said program / 2440615
Presence or absence of a point in the first colour signal X of the second image B, which corresponds to each pixel position of the first colour signal X of the first image A is evaluated, and the position of the relevant corresponding point is also evaluated. For each of the evaluated pixel position in the colour signal Y of image A, image information of the corresponding position in the colour signal Y of the second image B is assigned. The colour signal Y is generated in the pixel position on image A, for which evaluation on the absence of the corresponding point was obtained, through image interpolation using image information of the colour signal Y assigned to pixels having the corresponding points.
Image forming apparatus and method, programme for realising said method and data medium storing said programme / 2438179
Image data are formed in image C by using image A and image B, having a higher bit depth than image A. Image C, having the same bit depth as image B, is formed by increasing bit depth of image A through superposition of hue maps. Presence or absence of points on image B corresponding to each pixel position in image C, as well as the position of the relevant corresponding point is determined. Each pixel position on image C, for which it was determined that the corresponding point exists, is assigned image data from the corresponding position on image B. Possibility of forming image data at each pixel position on image C, for which, during evaluation of the corresponding point, it was determined that the corresponding point does not exist, is facilitated by using image data assigned according to the evaluation result, consisting in that the corresponding point exists.
Method for compressing and restoring messages / 2261532
Beforehand, at the transmitting and receiving sides, an identical random square matrix of size m×m elements and two pairs of random key matrices of sizes N×m and m×N elements are generated. From k frames of colour images with a sound signal, k matrices of quantised samples of the colour moving image, of size M×M×k elements, and a Z-digit sound vector are formed. The resulting matrices are converted to digital form based on representing each of them as a product of three matrices: a random rectangular matrix of size N×m elements, a random square matrix of size m×m elements and a random rectangular matrix of size m×N elements. The elements of the rectangular matrices of sizes N×m and m×N elements are transferred into the digital communication channel. Restoration of the images is performed in reverse order.
Creation of sequence of stereoscopic images from sequence of two-dimensional images / 2287858
A two-dimensional image is analysed to determine its scene type. Depending on the scene type, a deformation corresponding to that type is chosen. The chosen deformation is used to transform the two-dimensional image and to place it into at least one view channel. In addition, different transition functions are defined; these functions provide a smooth transition, without any interference, from one deformation to every new deformation.
Method for creating matrix image of an object / 2305320
In accordance with the invention, the original optical image of the object is projected by means of an objective lens onto a block of light-sensitive elements, and an original matrix image of the object is formed. For part of the elements of the original matrix image, a distance is determined that unambiguously corresponds to the distance from the objective to the object section. A final matrix image of the object is then formed, such that the mutual position of the elements of the final matrix image matches, as precisely as possible, the mutual position, altered according to a given scale, of the axonometric projections corresponding to the elements of the final matrix image of the object sections visible on the side of the objective, onto a projection plane perpendicular to the projection line, along parallel projection lines, at least one of which passes through the part of the object visible through the objective and intersects the external surface of the outermost lens of the objective (the one most remote from the block of light-sensitive elements). Each element of the final matrix image displays information about the averaged brightness of the colour component corresponding to that element of the final matrix image of the object section.
Electronic generator of video signals and estimation method / 2305912
The electronic video signal generator contains several single video signal generators with several pixels each. The pixels of at least one single video signal generator are read only partially, so that separate zones of the image are selectively masked to decrease the amount of video data subject to processing. Prior to the partial reading of the pixels of at least one single video signal generator, all pixels of that generator are read to determine the position of an image element, after which a decision is made about which pixels should not be read. A capability is also included for changing the masking of partial zones depending on the movement of the object being filmed, and for outputting an individual signal from each pixel into a data processing block.
Method for interpolation of images / 2310911
In accordance with the invention, pre-processing of the image is performed to remove lossy-compression artifacts, if such artifacts are present. For each pair of coordinates of the original image, the coordinates of the sixteen closest neighbours are determined and the values of edge direction map elements are computed; these may take values from one to eight, depending on the values of partial derivatives in eight directions. Morphological operations are performed over the edge direction map; a set of coordinates at which brightness values must be computed is calculated; for each pair of computed coordinates, one-dimensional interpolation is performed in the direction determined by the value of the corresponding element in the edge direction map; the previous step is repeated for each colour component; and final image processing is performed to reduce artifacts and improve boundaries.
Method for generating an electronic image for conduction of an electronic game and device for realization of said method / 2317586
In accordance to the invention, image field is generated by means of generation of random "k" numbers along horizontal axis, generation of random "m" numbers along vertical axis of image field, generation of random "n" numbers along additional component of image field, generation of random "d", "w", "l" numbers respectively for direction, width and length of image elements zone and generation of corresponding image matrix. Fixed image is compared to given prize image.
Method of automatic photograph framing / 2329535
An automatic photograph framing method is suggested. For landscape orientation of the image, a horizontal-line uniformity analysis is performed, whereas for portrait orientation, a vertical-line uniformity analysis is performed. The analysis is based on clustering line fragments using their texture attributes. Based on the analysis results, the number g(i) of different clusters that include rectangular fragments covering the line is determined. The number g(i) is used to estimate the uniformity of the image line, and based on this estimate a preliminary location of the image cutting frame is defined. Then the locations of faces in the image are determined, and the position of the frame is corrected by finding the maximal y-coordinate yt and the minimal y-coordinate yb of the rectangles circumscribed around the detected faces, with subsequent alignment of the image cutting frame with the centre of the vertical interval whose ends are at the points yt and yb.
Data controlled actions for network forms / 2419842
The system includes one or more processors and one or more computer-readable data media which store computer-readable instructions. The one or more processors make the system perform operations involving: receiving from a server over the network a loaded copy of the network form data, which is configured to service a copy of the network form data; and displaying the network form in a visual presentation available for editing. The visual presentation of the network form available for editing is obtained in a computing device from the copy of the loaded data of said network form.
Image super-resolution method and nonlinear digital filter for realising said method / 2431889
Obtaining super-resolution of images involves exposing several frames, obtaining initial images by reading from a sensor, equalising said images, forming a magnified image and filtration thereof with a filter. The initial image is obtained from a digital sensor in form of a continuous sequence of frames via high-speed photography, where the frame frequency is inversely proportional to the value of the scanned part of the light-sensitive region of the sensor. The magnified image is formed by merging initial low-resolution images and resolution is increased using a nonlinear filter by applying the filter to the magnified image. The magnified image is formed by merging the clearest frames of the initial images.
Method of superimposing images obtained using different photosensors, and apparatus for realising said method / 2435221
Method is realised using a device having a visible spectrum television camera or an infrared television camera, a beam splitter, series-connected first image pixel-by-pixel reading unit, pixel recording switch, memory for storing the superimposed image and second image pixel-by-pixel reading unit. Light flux reflected from the objects under analysis is formed into separate images which are then recorded into a common image. Mutual alignment of the objects is determined from the common image. The light flux is first split into two streams. A visible spectrum image is formed from the first stream and an infrared spectrum image is formed from the second stream. Pixels of the first and second images are read and alternately recorded into the memory for storing the common image for said pixels. Pixels of the first and second images are successively recorded on odd rows and pixels of the second and first images are successively recorded on even rows.

FIELD: physics, computer technology.

SUBSTANCE: invention relates to display devices. The device has a mechanism for obtaining graphics data, which generates frame data from input graphics data, and a frame buffer control unit which determines whether the connected display device has a frame buffer. If the connected display device has a frame buffer, said control unit bypasses the operation of storing frame data in a local frame buffer and transmits the frame data to the connected display device.
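The bypass decision described above can be sketched as a small model (hypothetical Python; the class and method names are illustrative assumptions, not taken from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class Display:
    has_frame_buffer: bool            # does the display carry its own frame buffer?
    received: list = field(default_factory=list)

    def receive(self, frame):
        self.received.append(frame)   # frame stored in the display's own buffer

class FrameBufferController:
    """Routes generated frame data either to the remote (display-side)
    frame buffer or to the conventional local frame buffer."""

    def __init__(self, local_buffer):
        self.local_buffer = local_buffer

    def route_frame(self, frame, display):
        if display.has_frame_buffer:
            # Bypass the local store and transmit directly to the display.
            display.receive(frame)
            return "remote"
        # Conventional path: keep the frame in the local frame buffer.
        self.local_buffer.append(frame)
        return "local"
```

In this sketch the "remote" path models the claimed effect: the processing platform avoids a local frame-buffer write when the display can store frames itself.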

EFFECT: high efficiency of the device for processing graphic data owing to use of a remote frame buffer.

20 cl, 8 dwg

 

Background of the invention

Computer applications typically use the output of a graphics device to provide information to users. Such information can be presented in the form of text, charts, images (moving or still), etc. Such information is typically shown on a display device connected to a processing platform (for example, a personal computer) via a data interface.

Graphics data generation typically involves a graphics processing pipeline that produces display content in the form of frame (or image) data based on application directives. After generation, such frame data is typically stored in a frame buffer in the processing platform.

After such storage, the frame data can be transferred to the display device via a conventional display interface. Examples of such conventional interfaces include video graphics array (VGA), digital visual interface (DVI), high-definition multimedia interface (HDMI), DisplayPort (DP) and analogue TV formats. The display device then controls the display of the received frame data. This is based on timing synchronisation, which can be managed by logic located in the processing platform or in the display device.

As systems become more and more complex, there is a need to develop technologies for efficient and flexible management of display data.

Brief description of drawings

In the drawings, like reference numbers generally indicate identical, functionally similar and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) of the reference number. The present invention will be described with reference to the accompanying drawings, in which:

figure 1 shows an approximate scheme of an operating environment;

figure 2 shows an approximate scheme of an embodiment of a graphics processing pipeline;

figure 3 shows a block diagram of a processing sequence;

figure 4 shows a diagram representing the mapping between graphics pipelines and frame buffers;

figures 5 and 6 show block diagrams of processing sequences;

figure 7 shows a scheme of an embodiment of a display frame storage device;

figure 8 shows a block diagram of a processing sequence.

Summary of the invention

Embodiments provide techniques for generating and outputting display data. For example, embodiments provide elements for storing frame data in the display device. Embodiments also provide elements for isolating different user contexts into different frame buffers. Embodiments further provide efficient techniques for preserving frame data across power-state transitions. In addition, embodiments provide techniques for flexibly and dynamically distributing multiple display contents to a physical display.

For example, a device may include a graphics data mechanism designed to generate frame data from input image data, and a frame buffer management module designed to determine whether a connected display device includes a frame buffer. When the connected display device includes a frame buffer, the frame buffer management module bypasses the operation of storing the frame data in a local frame buffer and transmits the frame data to the display device.

The frame data may include difference data between the current frame and a previous frame. In addition, the frame buffer management module can choose between a compressed transfer format and an uncompressed transfer format based on one or more characteristics of the data interface (for example, without limitation, a USB interface and/or a LAN) to the connected display device. These one or more characteristics may include the throughput of the data interface. In addition, the frame buffer management module can encrypt frame data to be sent to the display device. A data interface module is designed to transmit the frame data via the data interface to the display device.
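The throughput-based choice between compressed and uncompressed transfer could, for illustration, look like the following heuristic (an assumption-laden sketch, not the patent's method; the frame size, frame rate and compression-ratio parameters are all hypothetical):

```python
def choose_transfer_format(frame_bytes: int, fps: float, link_bps: float,
                           compression_ratio: float = 3.0) -> str:
    """Pick a transfer format from the link bandwidth.

    frame_bytes: size of one uncompressed frame in bytes
    fps: frames per second to be transmitted
    link_bps: usable data-interface throughput in bits per second
    compression_ratio: assumed average reduction from compression
    """
    required_bps = frame_bytes * 8 * fps            # raw bandwidth needed
    if required_bps <= link_bps:
        return "uncompressed"                       # link carries raw frames
    if required_bps / compression_ratio <= link_bps:
        return "compressed"                         # compression makes it fit
    raise ValueError("link too slow even with compression")
```

For example, a 100 kB frame at 30 fps needs 24 Mbit/s raw, which fits an assumed 100 Mbit/s link uncompressed but would require compression on a 10 Mbit/s link.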

A method can include: generating frame data from input image data; determining that the display device includes a frame buffer; and transmitting the frame data to the display device for storage in the frame buffer. Such transmission may include bypassing the operation of storing the frame data in a local frame buffer. The frame data may include difference data between the current frame and a previous frame. Transmitting the frame data may include transmitting it to the display device via a data interface.

Furthermore, the method can encrypt frame data intended for transmission via the data interface. The method can also choose between a compressed transfer format and an uncompressed transfer format based on one or more characteristics of the data interface. The one or more characteristics of the data interface can include its throughput.

An article of manufacture may contain a machine-readable medium containing instructions that, when executed on a device, cause the device to perform: generating frame data from input image data; determining that the display device includes a frame buffer; and transmitting the frame data to the display device for storage in the frame buffer. Such transmission may include bypassing the operation of storing the frame data in a local frame buffer.

A system may include a processing platform and a display device. The processing platform includes a graphics data mechanism designed to generate frame data from input image data, and a frame buffer management module designed to determine whether the display device includes a frame buffer. When the display device includes a frame buffer, the frame buffer management module bypasses the operation of storing the frame data in a local frame buffer and transmits the frame data to the display device.

In addition, a system can include a processing platform and a display device. The processing platform includes a first graphics processing pipeline designed to generate first frame data for a first set of one or more applications, and a second graphics processing pipeline designed to generate second frame data for a second set of one or more applications. The display device includes a first frame buffer and a second frame buffer. The first graphics processing pipeline transmits the first frame data to the first frame buffer, and the second graphics processing pipeline transmits the second frame data to the second frame buffer.

The display device can contain a physical display and a user interface. The user interface can accept a user's selection of one of the first and second frame buffers. The physical display shows the frame data from the selected frame buffer.
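The dual-buffer arrangement above can be modelled as follows (a hypothetical Python sketch; buffer names and method names are illustrative assumptions, not from the patent):

```python
class DisplayDevice:
    """Display with two frame buffers; the user picks which one is shown."""

    def __init__(self):
        self.buffers = {"first": None, "second": None}
        self.selected = "first"          # default output buffer

    def store(self, buffer_name, frame):
        self.buffers[buffer_name] = frame   # each pipeline writes its own buffer

    def select(self, buffer_name):
        self.selected = buffer_name         # user choice via the user interface

    def scan_out(self):
        return self.buffers[self.selected]  # physical display shows the selection
```

Two pipelines (for example, serving two operating systems) would each call `store` on their own buffer, while `select` switches which context the physical display presents.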

The first set of one or more applications can correspond to a first operating system, and the second set of one or more applications can correspond to a second operating system.

The system can additionally include a data interface between the processing platform and the display device. The processing platform can transfer the first frame data via the data interface using a first connection, and can transmit the second frame data via the data interface using a second connection. The first and second connections can be isolated from each other. The data interface can be a USB interface or a LAN.

Furthermore, a method may include: generating first frame data for a first set of one or more applications; generating second frame data for a second set of one or more applications; transmitting the first frame data to a first frame buffer in the display device; transmitting the second frame data to a second frame buffer in the display device; and, based on a user's selection, outputting either the first frame data or the second frame data to the physical display. The first set of one or more applications can correspond to a first operating system, and the second set of one or more applications to a second operating system. Transmitting the first frame data can use a first connection of the data interface, and transmitting the second frame data can use a second connection of the data interface. The first and second connections can be isolated from each other.

The display device may include a volatile storage medium designed to preserve frame data (for example, in one or more frame buffers); a non-volatile storage medium; and a management module designed to save the frame data to the non-volatile storage medium upon a transition to a lower power state. This lower power state can be an inactive state. The transition to a lower power state can be based on a directive received from the connected processing platform. The management module can restore the frame data to the volatile storage medium upon a transition from the lower power state to a higher power state. The volatile storage medium can contain dynamic random access memory (DRAM), and the non-volatile storage medium can contain flash memory.
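The save-on-suspend / restore-on-wake behaviour can be sketched like this (illustrative Python; the dictionaries stand in for the DRAM frame buffers and the flash memory, and the state names are assumptions):

```python
class FrameStore:
    """Preserves frame data across power-state transitions."""

    def __init__(self):
        self.volatile = {}        # stands in for DRAM frame buffers
        self.nonvolatile = {}     # stands in for flash memory
        self.state = "active"

    def enter_low_power(self):
        # Save frame data before the volatile medium loses it.
        self.nonvolatile = dict(self.volatile)
        self.volatile.clear()
        self.state = "inactive"

    def wake(self):
        # Restore frame data into the volatile medium on wake-up.
        self.volatile = dict(self.nonvolatile)
        self.state = "active"
```

In the patent's scheme, `enter_low_power` would be triggered by a directive from the processing platform; the sketch only models the data movement, not the directive channel.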

An additional method may include: storing frame data in a frame buffer, where the frame buffer is included in the display device and contains a volatile storage medium; and, upon a transition to a lower power state (for example, an inactive state), saving the frame data to a non-volatile storage medium included in the display device. Furthermore, the method may include receiving from the processing platform a directive on the transition to the lower power state. The method can also include a transition from the lower power state to a higher power state and, based on this transition, restoring the frame data to the frame buffer.

Yet another method includes: receiving a user's selection to output one or more frame data streams to a physical display; based on the user's selection, allocating one or more storage regions in a frame buffer on a local storage medium, where the one or more storage regions correspond to the one or more frame streams; receiving the one or more frame data streams from a processing platform; storing the one or more received frame data streams on the storage medium in accordance with the allocation; and outputting the one or more received frame data streams to the physical display in accordance with the user's selection.

Allocating the one or more storage regions in the frame buffer on the local storage medium includes generating a frame mapping table (FMT). Furthermore, the method can include storing the FMT on the local storage medium. The method can optionally include: defining a resolution for each of the one or more frame data streams; and indicating to the processing platform the resolution for each of the one or more frame data streams.

Receiving the one or more frame data streams from the processing platform may include receiving each of the one or more frame data streams in accordance with its defined resolution. Furthermore, receiving the one or more frame data streams from the processing platform may include receiving them via a data interface. In addition, each of the one or more data streams can be received through a corresponding connection in the data interface.
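A frame mapping table of the kind described could, for illustration, map each selected stream to an offset and size in a linear frame buffer (a hypothetical layout; the 4-bytes-per-pixel RGBA assumption and back-to-back packing are mine, not the patent's):

```python
def build_frame_mapping_table(stream_resolutions):
    """Allocate one storage region per selected frame stream.

    stream_resolutions: mapping of stream id -> (width, height)
    Returns a per-stream table of buffer offsets and sizes.
    """
    fmt, offset = {}, 0
    for stream_id, (width, height) in stream_resolutions.items():
        size = width * height * 4          # assumed RGBA, 4 bytes per pixel
        fmt[stream_id] = {"offset": offset, "size": size,
                          "resolution": (width, height)}
        offset += size                     # regions laid out back to back
    return fmt
```

Incoming frame data for a stream would then be written at that stream's `offset`, and scan-out for the user-selected streams would read from the corresponding regions.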

The properties described above are illustrative. Thus, embodiments are not limited to these properties. Other properties of the embodiments will be clear from the following description and the accompanying drawings.

A reference in the description to "one embodiment" or "an embodiment" means that a specific property or feature described in connection with the embodiment is included in at least one embodiment. Thus, the appearance of the phrase "in one embodiment" or "in an embodiment" in various places in this description does not necessarily always refer to the same embodiment. In addition, various properties, structures or features can be combined in any appropriate manner in one or more embodiments.

Figure 1 shows a scheme of an approximate operating environment 100 in which the techniques described here can be used. Environment 100 may include various elements. For example, figure 1 shows environment 100 including a processing platform 101, a display device 103 and an interface 105. These elements can be embodied in any combination of hardware and/or software.

Processing platform 101 may include one or more operating systems (OS). For example, figure 1 shows processing platform 101 in which operating systems 108a and 108b run. However, any number of operating systems can be used. These operating systems can run many computer applications. For example, figure 1 shows processing platform 101 running applications 102a-d. Each of these applications can run in a corresponding one of OS 108a-b.

Applications 102a-d use graphics data that is output to one or more displays (such as display device 103). Examples include various business-related applications (such as word processors, spreadsheet applications, presentation applications, e-mail applications, messaging applications, etc.), video applications and/or other applications.

In particular, graphics processing pipelines a-b perform graphics operations in response to directives received and processed through a graphics API. Approximate operations include rendering and outputting images (frames) to display device 103. As described above, graphics processing pipelines a-b can be embodied in any combination of hardware and/or software. Thus, in embodiments, graphics processing pipelines a-b can be implemented using one or more graphics processing unit (GPU) modules.

Figure 1 shows that display device 103 includes a physical display 109, a storage medium 110, a control module 112, a user interface 113, and a non-volatile storage medium 114.

Physical display 109 provides visual output to a user. In embodiments, this output is in the form of sequential images or frames. Accordingly, exemplary physical displays include light-emitting diode (LED) displays, liquid crystal displays (LCDs), plasma displays, and cathode ray tube (CRT) displays. Embodiments, however, are not limited to these examples.

Each of the frames output by physical display 109 may comprise multiple pixels. Data representing these pixels (for example, color and/or intensity values) is stored in a frame buffer within storage medium 110. Such data may be referred to as "frame data". Thus, by storing frame data, the frame buffer "drives" physical display 109.
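The relationship between pixels, frame data, and a frame buffer can be sketched as follows. This is an illustrative model only; the names `FrameBuffer`, `set_pixel`, and the RGB-tuple representation are assumptions for illustration, not part of the patent:

```python
# Illustrative sketch: a frame buffer holding per-pixel frame data.
# All names here (FrameBuffer, set_pixel, get_pixel) are hypothetical.

class FrameBuffer:
    def __init__(self, width, height):
        self.width = width
        self.height = height
        # Frame data: one (r, g, b) tuple per pixel, row-major order.
        self.pixels = [(0, 0, 0)] * (width * height)

    def set_pixel(self, x, y, color):
        self.pixels[y * self.width + x] = color

    def get_pixel(self, x, y):
        return self.pixels[y * self.width + x]

fb = FrameBuffer(640, 480)
fb.set_pixel(10, 20, (255, 0, 0))  # store a red pixel at (10, 20)
```

In this model, whatever the buffer holds is what the physical display shows; updating the buffer updates the screen.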

Embodiments of display device 103 can provide multiple frame buffers. For example, figure 1 shows display device 103 including frame buffers 111a and 111b. However, any number of frame buffers can be used. Such frame buffers can be included in storage medium 110. Storage medium 110 may comprise volatile random access memory (RAM) (for example, dynamic RAM). However, other types of storage media, such as non-volatile storage devices, can be used.

In embodiments, display device 103 receives frame data from processing platform 101. More specifically, graphics processing pipelines 106a-b (through interface 105) can provide frame data 120 to display device 103. Upon receipt, display device 103 stores the frame data in frame buffers 111a and/or 111b. In turn, such stored frame data can be output by physical display 109 in accordance with the techniques described herein.

As described above, display device 103 includes a control module 112, which directs various operations of display device 103. Such operations include the receipt, storage, and output of frame data received from processing platform 101. Accordingly, control module 112 can handle data transfer across interface 105, display timing, power management, and so forth. In addition, control module 112 can handle user interactions through user interface 113 or through management channel 122 from virtualized software stacks.

User interface 113 allows a user to interact with display device 103. Such interaction may include the performance of various user operations described herein. Such operations include, but are not limited to, selecting a frame buffer of display device 103 for output, selecting a single-frame-buffer output mode or a multiple-frame-buffer output mode, and applying and removing power for display device 103. User interface 113 can be implemented in various ways. For example, user interface 113 can include various buttons, keys, dials, and/or other input devices. Additionally or alternatively, user interface 113 can include various menus and/or touch-screen elements provided through physical display 109.

Embodiments of display device 103 may include non-volatile storage medium 114 (for example, a flash-type storage device). As described in greater detail below, non-volatile storage medium 114 can preserve the contents of frame buffers 111a-b when display device 103 transitions between power states (for example, from a higher power state to a lower power state). Alternatively, embodiments may provide such features by implementing frame buffers 111a-b as non-volatile memory, so that they are always available and retain their contents.

Interface 105 is coupled between processing platform 101 and display device 103. In particular, interface 105 allows processing platform 101 to provide frame data 120 to display device 103. Interface 105 also allows processing platform 101 and display device 103 to exchange management information 122 with each other.

Interface 105 can be implemented in various ways. For example, embodiments of interface 105 may include a "plug and play" type interface, such as a universal serial bus (USB) interface (for example, USB 1.0, USB 2.0, USB 3.0, and so forth). However, various other serial and/or parallel interfaces can be used. In addition, interface 105 can be implemented with a local area network (LAN) (such as Ethernet). Moreover, interface 105 can be implemented with a wireless network. Examples of wireless networks include IEEE 802.11 wireless LANs (WiFi), IEEE 802.16 WiMAX networks, and wireless personal area networks (WPANs) (for example, 60 GHz WPANs). However, embodiments are not limited to these examples.

In embodiments, frame buffers 111a-b of display device 103 appear as the "display" to processes and/or operating systems of processing platform 101. Thus, in embodiments, these processes and/or operating systems have no data regarding physical display 109 of display device 103. Moreover, in embodiments, the user or an independent software stack controls how a frame buffer of display device 103 actually appears on physical display 109. For example, embodiments provide the user or software with "flip" or "flipping" functions across the various frame buffers. Alternatively or additionally, embodiments allow users or software to display different frame buffers in different independent regions of physical display 109 so that multiple frames can be viewed simultaneously.

In embodiments, the elements of figure 1 can be implemented in a computer system. For example, processing platform 101 can be a personal computer and display device 103 can be a corresponding monitor. Embodiments, however, are not limited to this arrangement.

In addition, the elements of figure 1 can include one or more processors (such as microprocessors). For example, processing platform 101 can contain any combination of microprocessors. As an example, the operations described herein (such as the operations of OSs 108a-b, applications 102a-d, and the graphics processing pipelines) can be provided by one or more central processing units (CPUs) and/or one or more graphics processing units (GPUs). Such CPUs and/or GPUs can operate in accordance with instructions (for example, software) stored on a storage medium. Such a storage medium (which can be included in processing platform 101) may include memory (volatile or non-volatile), disk storage, and so forth. Accordingly, display device 103 may also include one or more processors to provide the features described herein. These processors can execute instructions (for example, software) stored on a storage medium. Such a storage medium (which can be included in display device 103) may include memory (volatile or non-volatile), disk storage, and so forth. Further details regarding such implementations are provided below.

Operations of various embodiments are further described with reference to the following figures and accompanying examples. Some of the figures may include a logic flow. Although such figures presented herein may include a particular logic flow, it can be appreciated that the logic flow merely provides an example of how the general functionality described herein can be implemented. Further, a given logic flow does not necessarily have to be executed in the order presented unless otherwise indicated. In addition, a given logic flow may be implemented by a hardware element (or elements), by a software element (or elements) executed by one or more processors, or by any combination thereof. Embodiments are not limited to this context.

A traditional graphics/video processing pipeline includes rendering display content into frame buffer memory within the processing platform, and then providing the rendered data (frame data) to the display device through a conventional display interface. Examples of such conventional interfaces include video graphics adapter (VGA), digital visual interface (DVI), high-definition multimedia interface (HDMI), DisplayPort (DP), and analog TV formats. In turn, the display device then drives its display using the received frame data. This is based on timing that can be controlled by logic within the processing platform or within the display device.

Upon receipt, the display device uses its internal logic to expand the delta and store the uncompressed data in its own frame buffer. In addition, the display device can handle various operations such as local screen refresh, scaling, rotation, and turning the display on/off. Because these techniques employ common interfaces, multiple display devices can be supported (limited by the frame buffer size of the processing platform and by its processing bandwidth).

Unfortunately, such read-and-compare operations (which the processing platform performs) require a significant amount of the processing platform's processing capacity. For example, video with a high rate of applied activity may require essentially all of the processing capability of a typical processing platform (for example, a personal computer).

Thus, this approach has various disadvantages. One disadvantage is that the processing platform has no knowledge of the output from the frame buffer and does not take advantage of the remote frame buffer. As a result, the processing platform expends bandwidth and processing power by building and maintaining its own frame buffer in its local storage device.

Another disadvantage of this approach is that rendering to and reading back the processing platform's frame buffer (for example, by third-party frame-grabbing software) is an opening in the security system. Through it, third-party software can capture any frame content (such as content containing personal data) and send it anywhere. For this reason, such frame grabbing will likely be prohibited in the future.

In embodiments, such shortcomings are overcome through the advantages of displays that have integrated frame buffers. For example, in embodiments, the processing platform can have a graphics processing pipeline (or pipelines) that assumes a remotely located frame buffer, includes this remote frame buffer as part of the overall graphics processing pipeline flow, and ensures reliable retention of content from the time of its generation until display. The remote frame buffer can be connected through any digital interface, and if content encryption is desired, an encryption method suitable for the interface can be adopted.

By incorporating the concept of a remotely located frame buffer, the processing platform need only supply the delta data from frame to frame, thus eliminating the need for the frame-to-frame comparison operations described above. Further embodiments may determine whether or not to compress such data before sending it across the interface. Such a determination may be based on the available bandwidth of the interface. For example, compression can be used for USB 1.0 and USB 2.0, but not for USB 3.0.
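The delta concept described above can be sketched as follows: only the pixels that changed since the previous frame are transmitted, and the display device expands the delta into its own frame buffer. This is a minimal illustration under assumed names (`frame_delta`, `apply_delta`, frames as flat lists), not the patent's implementation:

```python
# Illustrative sketch of frame-to-frame delta extraction (platform side)
# and delta expansion (display-device side). Names are hypothetical.

def frame_delta(prev_frame, curr_frame):
    """Return (index, new_value) pairs for pixels that changed."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev_frame, curr_frame))
            if p != c]

def apply_delta(frame, delta):
    """Expand a received delta into an existing frame buffer."""
    frame = list(frame)
    for i, value in delta:
        frame[i] = value
    return frame

prev = [0, 0, 0, 0]
curr = [0, 9, 0, 7]
delta = frame_delta(prev, curr)       # [(1, 9), (3, 7)]
restored = apply_delta(prev, delta)   # equals curr
```

Only the delta crosses the interface, which is what makes lower-bandwidth links viable.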

Figure 2 shows a diagram of an exemplary implementation 200, which can be incorporated into a graphics processing pipeline, such as any of the graphics processing pipelines of figure 1. Such an implementation may include various elements. By way of illustration (and not limitation), figure 2 shows an image data generation engine 201, a frame buffer management module 214, a frame buffer 216, and an interface data module 218. These elements can be embodied in any combination of hardware and/or software.

Image data generation engine 201 generates frame data from input graphics data 220. Image data generation engine 201 can be implemented in various ways. Figure 2 shows image data generation engine 201 as including a transformation module 202, a lighting module 204, a setup module 206, a rasterization module 208, a texture module 210, and a pixel processing module 212.

As shown in figure 2, image data generation engine 201 receives input graphics data 220 at transformation module 202. Input graphics data 220 contains representations of scenes in model space (for example, three-dimensional space). In embodiments, input graphics data 220 can be received from one or more applications (for example, through a graphics API).

Transformation module 202 converts coordinates in input graphics data 220 from model space to screen space. In addition, transformation module 202 performs operations such as clip testing and any clipping operations.

Lighting module 204 computes vertex colors based on predefined material colors and the vertex normals. If necessary, it can attach a color to each vertex. Setup module 206 computes the edge slopes of primitives, as well as gradients (changes in the X and Y directions) of the depth, color, and texture parameters.

Rasterization module 208 finds all valid pixel samples for the primitives and computes the correct depth, color, and texture values at each sample point. Texture module 210 looks up one or more texture component values from a texture storage device and performs texture-specific blending operations on the color of the incoming pixel.

Pixel processing module 212 performs operations that occur once per pixel, such as depth comparison, pixel blending, and other similar operations. As a result, pixel processing module 212 provides frame buffer management module 214 with pixel data representing the content of one screen image.

Frame buffer management module 214 performs operations involving the storage of the pixel data. For example, frame buffer management module 214 determines whether to store the pixel data locally in frame buffer 216, or whether to transfer the pixel data to a frame buffer in the display device (such as display device 103).

Interface data module 218 provides access to an interface that connects to a display device (such as interface 105 of figure 1). For example, interface data module 218 can provide the interface with pixel data (received either from frame buffer 216 or directly from frame buffer management module 214). In addition, interface data module 218 can receive (through the interface) management information provided by the display device. Such management data can indicate whether the display device has a frame buffer (or buffers). In embodiments, the management data can be in a data structure that describes the capabilities of the display device, such as extended display identification data (EDID). However, other data formats can be used.

As shown in figure 2, interface data module 218 provides such management information to frame buffer management module 214. For example, figure 2 shows interface data module 218 providing frame buffer management module 214 with a frame buffer indicator 222 and an interface bandwidth indicator 224. Frame buffer indicator 222 indicates whether the display device includes a frame buffer. Interface bandwidth indicator 224 indicates the throughput (bandwidth) of the interface between the processing platform and the display device. In embodiments, such an indicator can specify the interface type (for example, USB 1.0, USB 2.0, USB 3.0, and so forth). However, other bandwidth indicators can be used.

On the basis of such indicators, frame buffer management module 214 determines how to handle the generated frame data. For example, if the connected display device does not have an integrated frame buffer, it stores the generated frame data in local frame buffer 216. Accordingly, such locally stored frame data is transmitted to the display device in accordance with a standard display interface technology (for example, VGA, DVI, HDMI, DP, TV, etc.).

In contrast, if the connected display device does have an integrated frame buffer, frame buffer management module 214 transfers the frame data (for example, a "delta" or difference from the previous frame) to the display device (via interface data module 218 and the connected interface) for storage in its integrated frame buffer. Such frame data is transmitted based on the interface bandwidth. For example, for interfaces with higher throughput, the frame data (e.g., the delta) can be transmitted in an uncompressed format. However, for interfaces with lower bandwidth, the frame data (e.g., the delta) can be transmitted in a compressed format. Embodiments, however, are not limited to this example.
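The routing decision made from indicators 222 and 224 can be sketched as follows. The bandwidth classes mirror the USB examples given in the text; the function and constant names are assumptions for illustration:

```python
# Illustrative sketch of the decision in frame buffer management module 214:
# route frame data locally or remotely, compressed or uncompressed.
# Names and the bandwidth classification are assumptions.

LOW_BANDWIDTH = {"USB 1.0", "USB 2.0"}
HIGH_BANDWIDTH = {"USB 3.0", "Ethernet"}

def route_frame_data(has_remote_buffer, interface_type):
    """Return (destination, format) for the generated frame data."""
    if not has_remote_buffer:
        # No integrated frame buffer in the display device: keep the data
        # in the local frame buffer and scan it out over a standard
        # display interface (VGA, DVI, HDMI, DP, etc.).
        return ("local", "uncompressed")
    if interface_type in LOW_BANDWIDTH:
        return ("remote", "compressed")
    return ("remote", "uncompressed")

assert route_frame_data(False, "USB 2.0") == ("local", "uncompressed")
assert route_frame_data(True, "USB 2.0") == ("remote", "compressed")
assert route_frame_data(True, "USB 3.0") == ("remote", "uncompressed")
```

The same decision appears again in the figure 3 flow (blocks 304-310), where the transfer format is chosen from the measured interface throughput.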

In addition, frame buffer management module 214 can encrypt frame data intended for transmission across the interface (via interface data module 218). Various encryption technologies can be employed, such as the standard encryption mechanism(s) used by the interface between the processing platform and the display device. Accordingly, the display device (for example, its control module) can decrypt such data upon receipt.

Figure 3 shows a diagram of a logic flow 300, which can be performed by one or more embodiments. This flow involves a processing platform that has one or more graphics processing pipelines, and a display device that has one or more frame buffers. Thus, this flow can be performed using the elements shown in figure 1. However, embodiments are not limited to this context. Although figure 3 shows a particular sequence, other sequences can be used. Moreover, the depicted operations can be performed in various parallel and/or sequential combinations.

At block 304, the processing platform determines the throughput (bandwidth) of the interface. Based on this, the processing platform selects a format for transmitting frame data at block 306. For example, for lower bandwidth, the processing platform can decide to use a compressed format for frame data transmission. However, for higher throughput, the processing platform can decide to use an uncompressed format for frame data transmission.

In embodiments, the bandwidth may be determined from the type of interface being used. For example, USB 1.0 and USB 2.0 can be regarded as lower-bandwidth interfaces (leading to a compressed transfer format). In contrast, USB 3.0 and LAN (for example, Ethernet) can be regarded as higher-bandwidth interfaces, resulting in an uncompressed transfer format. However, embodiments are not limited to these examples.

Thus, at block 307, the processing platform generates frame data and transmits it across the interface for display in accordance with the selected transfer format.

As described above, block 308 is performed if the display device does not have an integrated frame buffer. Accordingly, at block 308, the processing platform generates the frame data and stores it in its own frame buffer. In the context of figure 2, this can involve frame buffer management module 214 storing the frame data generated by implementation 200 in frame buffer 216.

Following this, at block 310, the processing platform transmits the locally stored frame data to the display device in accordance with a standard display interface technology (for example, VGA, DVI, HDMI, DP, TV, etc.).

As described above, embodiments may provide multiple frame buffers within a display device (such as display device 103). For example, figure 1 shows display device 103 having frame buffers 111a and 111b. Complete isolation can be maintained between such multiple frame buffers. Thus, each frame buffer in the display device can be assigned to a different process (or processes) or operation (or operations). For example, each frame buffer can be assigned to a particular OS or to one or more particular applications. Moreover, as described above, a processing platform (such as processing platform 101) may include multiple graphics processing pipelines. In embodiments, each of these multiple pipelines can be directed to a corresponding frame buffer of the display device.

Figure 4 shows a diagram representing an exemplary allocation of frame data in the context of figure 1. In particular, figure 4 shows graphics processing pipeline 106a providing a frame data stream 420a to frame buffer 111a, and graphics processing pipeline 106b providing a frame data stream 420b to frame buffer 111b. With reference to figure 1, these frame data streams can be part of frame data 120 and management information 122. In embodiments, frame data streams 420a and 420b can be carried across different connections (for example, isolated connections) provided by interface 105. Alternatively, the same connection can be used for all frame data. However, within such a connection, the frame data is tagged for its respective frame buffer at the source (for example, in the processing platform). The tagging scheme employed can be determined by user selection at the display device. Thus, in embodiments, isolation can be achieved by the display device (for example, by the control module in the display device).

Frame data streams 420a and 420b may correspond to different user contexts (for example, different processes or operations of processing platform 101). For example, frame data stream 420a may correspond to processes (for example, applications) of a first operating system running on processing platform 101, while frame data stream 420b may correspond to processes (for example, applications) of a second operating system running on processing platform 101. Alternatively, frame data stream 420a may correspond to a first group of one or more applications, while frame data stream 420b may correspond to a second group of one or more applications. These applications can be distributed in any manner among one or more operating systems. In embodiments, the applications or operating systems may have information about their respective frame buffers.

Embodiments may provide isolation between frame data streams. This isolation can be provided through multiple connections provided by the interface. For example, in the context of USB, the interfaces can dedicate one or more USB channels to each frame data stream (and its corresponding management information). However, other interface formation and/or isolation technologies can be used. In addition, each frame buffer of display device 103 can be "pinned" to a particular frame data stream so that it cannot be reassigned without the user's permission.
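The source-side tagging and display-side demultiplexing described above can be sketched as follows. The tag values, the `PINNED` mapping, and the function names are assumptions for illustration; the patent leaves the concrete tagging scheme open:

```python
# Illustrative sketch: tagging frame data at the source so a shared
# connection can still route each frame to its assigned (pinned) buffer.
# All names and tag values are hypothetical.

PINNED = {"os1": "buffer_111a", "os2": "buffer_111b"}  # user-approved mapping

def tag_frame(source, payload):
    """Platform side: label frame data with its destination buffer."""
    return {"buffer": PINNED[source], "payload": payload}

def demultiplex(frames):
    """Display-device side: sort tagged frames into their frame buffers."""
    buffers = {}
    for frame in frames:
        buffers.setdefault(frame["buffer"], []).append(frame["payload"])
    return buffers

stream = [tag_frame("os1", "frame-A"), tag_frame("os2", "frame-B")]
routed = demultiplex(stream)
```

Because the mapping is fixed by user selection at the display device, a stream cannot silently claim another stream's buffer, which is the isolation property the text describes.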

This approach advantageously gives the user a convenient way to switch back and forth between different content shown on the display device. In addition, this approach provides a reliable way to protect personal content. For example, this approach makes it possible for a secure OS and a user to share a monitor (display device). Moreover, this approach can provide the ability to switch quickly between environments without complex switching or display memory copying.

Figure 5 illustrates a logic flow 500, which may represent operations performed by one or more embodiments. This flow involves a processing platform and a display device. Thus, this flow can be performed by the elements shown in figure 1. However, embodiments are not limited to this context. Although figure 5 shows a particular sequence, other sequences can be used. Moreover, the depicted operations can be performed in various parallel and/or sequential combinations.

At block 502, a user selection assigns processing platform operations to multiple frame buffers in the display device. For example, the user can select the processes of a first operating system for a first frame buffer and the processes of a second operating system for a second frame buffer.

At block 504, isolated connections to the frame buffers are established for each selection. For example, one or more USB channels can be established for each selection. Alternatively, one or more Internet Protocol (IP) tunnels can be established for each selection.

Thus, at block 506, the frame data for the selected operations is passed through the corresponding connection(s) to the display device. Such data is received and stored in the respective frame buffers in the display device at block 508.

At block 510, the user selects a frame buffer for output on the display. In embodiments, this may involve the user interacting with the user interface of the display device. However, embodiments are not limited to this example. Based on this selection, the display device outputs the contents of the selected frame buffer at block 512. As indicated in figure 5, operation may return to block 510, where the user changes the frame buffer selected for output at block 512.

As described above, embodiments can provide a display device having frame buffer(s) and a non-volatile storage medium. For example, figure 1 shows display device 103 including frame buffers 111a-b and non-volatile storage medium 114. This feature provides efficient handling of power states. For example, on a transition to lower power (for example, a power suspend), the display device can save the contents of its frame buffer(s) to the non-volatile storage medium. Conversely, on a return to a higher power state, the display device can write the saved contents back into its frame buffers.

Such features advantageously overcome the shortcomings of conventional approaches. For example, conventional approaches do not provide such non-volatile saving in the display device. Moreover, for display devices that do have frame buffers, conventional approaches do not maintain a frame buffer in the corresponding processing platform. Thus, for the display device to enter such a low power state (for example, a power suspend state), the processing platform would have to read the contents of the frame buffer back from the display device and save that content to its own non-volatile storage medium. Unfortunately, this may require excessive amounts of energy and time.

At block 602, the processing platform determines that a power state change is desired (for example, a system standby event). In particular, the processing platform determines a transition from a first (higher) power state to a second (lower) power state. This determination can be based on any one or more triggering conditions (for example, user inactivity, low battery, and so forth). In embodiments, the system standby event can be a transition to the S3 or S4 sleep state in accordance with the Advanced Configuration and Power Interface (ACPI) specification. Embodiments, however, are not limited to these exemplary states.

Based on this determination, the processing platform sends (at block 604) a directive to change the power state to the display device. In the context of figure 1, this directive may be exchanged through interface 105 as management information 122.

The display device receives this directive at block 606. In turn, the display device determines that operating power to its frame buffer(s) is to be suspended. Accordingly, at block 608, the display device saves the contents of the buffer(s) (for example, frame data) to its non-volatile storage medium. After that, the display device can enter the directed power state at block 610. In the context of figure 1, the performance of blocks 606-610 can be controlled by control module 112.

At block 612, the processing platform determines that a further power state change is to occur. In particular, the processing platform determines that a transition from the second, lower power state to a higher power state is to occur. In embodiments, this could be a return to the first power state. Alternatively, it may be a transition to another, higher power state.

Accordingly, at block 614, the processing platform passes a directive to the display device. With reference to figure 1, this directive can be transmitted through interface 105 as management information 122.

The display device receives this directive at block 616. Upon receipt, the display device restores operating power to its frame buffer(s) at block 618. In turn, at block 620, the display device copies the saved contents from the non-volatile storage device back into its frame buffer(s). Thus, the frame data is written back into the frame buffer(s). In the context of figure 1, the performance of blocks 616-620 can be controlled by control module 112.
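The save-and-restore behavior of blocks 606-620 can be sketched as a small state machine. The class and attribute names, and the dictionary stand-in for flash storage, are assumptions for illustration only:

```python
# Illustrative sketch of blocks 606-620: on a low-power directive the
# display device saves its frame buffer contents to non-volatile storage;
# on wake-up it copies them back. All names are hypothetical.

class DisplayDevice:
    def __init__(self):
        self.frame_buffers = {"111a": ["frame data"], "111b": ["frame data"]}
        self.nonvolatile = {}          # stands in for flash storage 114
        self.power_state = "on"

    def enter_low_power(self):         # blocks 606-610
        self.nonvolatile = {k: list(v) for k, v in self.frame_buffers.items()}
        self.frame_buffers = {}        # power to the buffers is suspended
        self.power_state = "suspend"

    def exit_low_power(self):          # blocks 616-620
        self.frame_buffers = {k: list(v) for k, v in self.nonvolatile.items()}
        self.power_state = "on"

dev = DisplayDevice()
dev.enter_low_power()
dev.exit_low_power()                   # contents survive the power cycle
```

The key point from the text is that the round trip happens entirely inside the display device, so the processing platform never has to read the frame buffer back across the interface.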

As described herein, embodiments may provide a display device having integrated frame buffers. These can be implemented in local frame storage of the display device. In addition, in embodiments, the display device (e.g., display device 103) can have a frame map table (FMT) in its local frame storage. The FMT is able to manage this memory in such a way that it can be divided into multiple independent frame buffers that are presented on a single physical display.

This feature advantageously allows a large display to be used selectively, either providing a single viewing experience or providing a "split screen" with the feel of multiple independent displays. For example, the user can divide the screen in half (for example, either horizontally or vertically) to obtain the perception of independent displays. Moreover, the user can dynamically switch between the single-display experience and the multiple-independent-display experience.

It has been reported that each monitor added in a business setting increases productivity by 50%. Thus, this feature advantageously gives the user the opportunity to obtain a multiple-monitor experience using a single monitor. Moreover, such a feature advantageously saves desk space.

In addition, the FMT may include tag fields associated with blocks of the display device's display memory, which protect the blocks from one another and provide the ability to allocate memory to various operating systems, applications, and/or users.

In embodiments, the FMT is manipulated through the user interface on the display device. This differs from a windowing environment because FMT management does not depend on user interfaces provided by the corresponding processing platforms (for example, processing platform 101).

Such user selections (for example, multiple monitors versus a single monitor) lead to different output resolutions. Thus, to provide these features, the display device provides resolution information (based on the user's selection) to the processing platform through the interface connecting these elements (for example, interface 105). Such resolution information may be provided in various formats, such as EDID data structure(s).

As described above with reference to figure 1, display device 103 includes storage medium 110, which provides first frame buffer 111a and second frame buffer 111b. Figure 7 shows a diagram of an alternative arrangement that employs an FMT. In particular, figure 7 shows a storage medium 110' that can be used in a display device, such as display device 103.

As shown in figure 7, storage medium 110' includes a frame buffer space 702, which can be flexibly allocated to provide one or more frame buffers. Such allocation flexibility is determined by FMT 704. For example, figure 7 shows FMT 704 allocating frame buffer space 702 into a first frame buffer section 706 and a second frame buffer section 708. However, in embodiments, FMT 704 can allocate frame buffer space 702 into a single frame buffer section or into any number of frame buffer sections.
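The FMT's role of carving a fixed frame buffer space into tagged, independent sections can be sketched as follows. The class name, the `(offset, length, owner)` record layout, and the section names are assumptions; the patent specifies only that the FMT divides the space and tags blocks by owner:

```python
# Illustrative sketch of a frame map table (FMT) dividing a fixed frame
# buffer space (702) into independent, owner-tagged sections (706, 708).
# Field names and the record layout are hypothetical.

class FrameMapTable:
    def __init__(self, total_pixels):
        self.total = total_pixels
        self.sections = {}        # name -> (offset, length, owner tag)
        self.next_free = 0

    def allocate(self, name, length, owner):
        if self.next_free + length > self.total:
            raise MemoryError("frame buffer space exhausted")
        self.sections[name] = (self.next_free, length, owner)
        self.next_free += length

# Split one 1920x1080 frame buffer space into two half-screen sections.
fmt = FrameMapTable(1920 * 1080)
fmt.allocate("section_706", 960 * 1080, owner="os1")
fmt.allocate("section_708", 960 * 1080, owner="os2")
```

The owner tag corresponds to the FMT tag fields described earlier: a section allocated to one OS or application cannot be claimed by another, and the table can equally describe a single full-screen section.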

Fig. 8 shows a logic flow diagram 800, which may represent operations performed in one or more embodiments. This flow involves a processing platform and a display device having one or more frame buffers on a non-volatile storage medium. Thus, this flow may be performed by the elements of Fig. 1. However, embodiments are not limited to this context. Although Fig. 8 shows a particular sequence, other sequences may be used. In addition, the operations may be performed in various parallel and/or serial combinations.

In block 802, the display device accepts a user's selection to output one or more display data streams on its physical display. For example, the user's selection may indicate a single frame data stream (the experience of a single monitor). Alternatively, the user's selection may indicate the output of multiple frame data streams (the experience of multiple monitors). In addition, the user may indicate output formats (for example, side by side, tiled, one above the other, and so forth). This selection may be made through the user interface of the display device (for example, through the user interface 113 of Fig. 1).
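The selection described above can be modelled as a small structure; the names below (`Layout`, `DisplaySelection`) are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass
from enum import Enum

class Layout(Enum):
    SIDE_BY_SIDE = "side_by_side"   # streams placed next to each other
    TILED = "tiled"                 # streams arranged in a grid
    ABOVE_BELOW = "above_below"     # streams stacked vertically

@dataclass
class DisplaySelection:
    num_streams: int = 1            # 1 = single-monitor experience
    layout: Layout = Layout.SIDE_BY_SIDE

    def is_multi_monitor(self):
        # More than one frame data stream gives the multi-monitor experience.
        return self.num_streams > 1
```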

Based on the user's selection, the display device allocates (in block 804) frame buffer storage sections in its storage medium. For example, in the context of Fig. 7, such sections would be within the frame buffer space 702. In addition, this allocation may include generating an FMT.

In block 806, the display device sends information regarding the user's selection to the processing platform. This information may indicate the selected frame data streams. In addition, this information may indicate resolution information (based on the user's selection). In embodiments, such information may be sent through the interface between the processing platform and the display device (for example, interface 105).

In block 807, the processing platform receives the information sent in block 806. Based on this information, the processing platform generates the selected frame data stream(s) in accordance with the indicated characteristics (for example, in accordance with the resolution value(s)). In addition, the processing platform sends the selected frame data streams through the interface. As described herein, each frame data stream may be sent over a corresponding connection (for example, an isolated connection) provided by the interface.

The display device receives the selected frame data streams in block 808. In turn, the display device stores the frame data in the corresponding allocated section(s) of its storage medium in block 810.

Based on this, the display device outputs the selected frame data stream(s) in accordance with the user's selection in block 812.
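Blocks 802 through 812 can be sketched end to end as follows. The classes and method names are hypothetical stand-ins for the elements of Fig. 1, and real pixel data is replaced by strings:

```python
class ProcessingPlatform:
    def generate_streams(self, selection):
        # Block 807: produce one frame per selected stream at the
        # indicated resolution (strings stand in for real pixel data).
        return [f"frame@{w}x{h}" for (w, h) in selection["resolutions"]]

class DisplayDevice:
    def __init__(self, platform):
        self.platform = platform
        self.sections = {}

    def run(self, selection):
        # Block 802: accept the user's selection (passed in as an argument).
        # Block 804: allocate one storage section per selected stream.
        self.sections = {i: None for i in range(len(selection["resolutions"]))}
        # Blocks 806/807: send the selection to the platform, which
        # generates the corresponding frame data streams.
        streams = self.platform.generate_streams(selection)
        # Blocks 808/810: receive each stream and store it in its section.
        for i, frame in enumerate(streams):
            self.sections[i] = frame
        # Block 812: output the stored frames per the user's selection.
        return list(self.sections.values())
```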

As described herein, various embodiments may be implemented using hardware elements, software elements, or any combination thereof. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (such as transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs), logic gates, registers, semiconductor devices, chips, chipsets, and so forth.

Examples of software may include software components, programs, applications, computer programs, application software, system software, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application programming interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.

Some embodiments may be implemented, for example, using a machine-readable medium or article that may store an instruction or a set of instructions that, when executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software.

The machine-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disk (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled, and/or interpreted programming language.

Some embodiments may be described using the expressions "coupled" and "connected" along with their derivatives. These terms are not intended as synonyms for each other. For example, some embodiments may be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but still cooperate or interact with each other.

While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. Accordingly, it will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

1. An apparatus, comprising: a graphics engine to generate frame data from input image data; and a frame buffer management module to determine whether a connected display device includes a frame buffer; wherein, when the connected display device includes a frame buffer, the frame buffer management module bypasses storage of the frame data in a local frame buffer and sends the frame data to the display device.

2. The apparatus of claim 1, wherein the frame data comprises data indicating a difference between a current frame and a previous frame.

3. The apparatus of claim 1, wherein the frame buffer management module selects between a compressed transfer format and an uncompressed transfer format based on one or more characteristics of a data interface to the connected display device.

4. The apparatus of claim 3, wherein the one or more characteristics of the data interface include a bandwidth of the data interface.

5. The apparatus of claim 3, further comprising a data interface module to send the frame data to the display device over the data interface.

6. The apparatus of claim 3, wherein the data interface is a universal serial bus (USB) interface.

7. The apparatus of claim 3, wherein the data interface is a local area network (LAN) interface.

8. The apparatus of claim 1, wherein the frame buffer management module is to encrypt frame data to be sent to the display device.

12. The method of claim 9, further comprising: encrypting frame data to be sent over the data interface.

13. The method of claim 9, further comprising: selecting between a compressed transfer format and an uncompressed transfer format based on one or more characteristics of the data interface.

14. The method of claim 9, wherein the one or more characteristics of the data interface include a bandwidth of the data interface.

15. A machine-readable storage medium containing instructions that, when executed by a machine, cause the machine to perform: generating frame data from input image data; determining that a display device includes a frame buffer; and sending the frame data to the display device for storage in the frame buffer; wherein the sending comprises bypassing an operation of storing the frame data in a local frame buffer.

16. The medium of claim 15, wherein the frame data comprises data indicating a difference between a current frame and a previous frame.

17. The medium of claim 15, wherein the instructions, when executed by the machine, further cause the machine to perform: sending the frame data to the display device over a data interface.

18. The medium of claim 15, wherein the instructions, when executed by the machine, further cause the machine to perform: encrypting frame data to be sent over the data interface.

19. The medium of claim 15, wherein the instructions, when executed by the machine, further cause the machine to perform: selecting between a compressed transfer format and an uncompressed transfer format based on one or more characteristics of the data interface.

20. A graphics data processing system, comprising: a processing platform and a display device; wherein the processing platform includes a graphics engine to generate frame data from input image data, and a frame buffer management module to determine whether the display device includes a frame buffer; and wherein, when the display device includes a frame buffer, the frame buffer management module bypasses storage of the frame data in a local frame buffer and sends the frame data to the display device.
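As an illustration of the bandwidth-based format selection recited in claims 3 and 4 (and 13 and 14), the sketch below compares the raw frame-stream data rate against the interface bandwidth; the function and its parameters are assumptions for illustration, not the patented method:

```python
def choose_transfer_format(width, height, fps, link_bytes_per_sec,
                           bytes_per_pixel=4):
    """Return 'uncompressed' when the raw frame stream fits within the
    data interface bandwidth, otherwise 'compressed'."""
    # Raw rate of an uncompressed stream: pixels per frame times
    # bytes per pixel times frames per second.
    raw_rate = width * height * bytes_per_pixel * fps
    return "uncompressed" if raw_rate <= link_bytes_per_sec else "compressed"
```

For example, a 1920x1080 stream at 60 fps (about 498 MB/s raw) exceeds a roughly 60 MB/s USB 2.0-class link and would be sent compressed, while a 640x480 stream at 30 fps (about 37 MB/s) fits uncompressed.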

 
