Markup language and object model for vector graphics

FIELD: computer engineering.

SUBSTANCE: the system comprises a markup language, a graphics object model, a type converter, a parser/translator, a presenter system, an application programming interface for visuals, and a display interface.

EFFECT: enables developers of computer programs to interact in an organized manner with the scene graph data structure in order to create graphics.

27 cl, 31 dwg

 

The present invention is related to the following copending U.S. patent applications: No. 10/184,795, entitled "Multi-Level Graphics Processing System and Method"; No. 10/184,796, entitled "Generic Parameterization for a Scene Graph"; and No. 10/185,775, entitled "Intelligent Caching Data Structure for Immediate Mode Graphics", each filed June 27, 2002; and a U.S. patent application entitled "Visual and Scene Graph Interfaces" (attorney docket No. 3470), filed concurrently herewith. Each related application is assigned to the assignee of the present patent application.

Field of the invention

The invention relates generally to computer systems and, more particularly, to the processing of graphics and other video information for display on computer systems.

Prior art

The traditional immediate mode model of accessing graphics on computer systems has reached its limits, in part because memory and bus speeds have not kept pace with improvements in main processors and/or graphics processors. In general, the present (e.g., WM_PAINT) model for preparing a frame requires too much data processing to keep up with the hardware refresh rate when complex graphics effects are desired. As a result, when complex graphics effects are attempted with conventional graphics models, instead of the complete changes that lead to the intended visual effects being ready for the next frame, the changes may be added over different frames, causing results that are visually undesirable.

The above-mentioned U.S. patent applications Nos. 10/184,795, 10/184,796 and 10/185,775 describe a new model for controlling graphics output. This new model provides a number of significant improvements in graphics processing technology. For example, U.S. application No. 10/184,795 is generally directed to a multi-level graphics processing system and method, in which a higher-level component (e.g., of an operating system) performs computationally intensive aspects of building a scene graph, updating animation parameters and traversing the scene graph's data structures, at a relatively low operating rate, and passes simplified data structures and/or graphics commands to a low-level component. Because the high-level processing greatly simplifies the data, the low-level component can operate at a faster rate (relative to the high-level component), for example at a rate corresponding to the refresh rate of the graphics subsystem, processing the data into constant output for the graphics subsystem. When animation is used, instead of redrawing an entire scene with changes, the low-level processing may interpolate parameter intervals as necessary to obtain instantaneous values which, when rendered, provide a slightly altered scene for each frame, thereby providing smooth animation.

U.S. application No. 10/184,796 describes a parameterized scene graph that provides mutable (animated) values and parameterized graph containers, whereby program code that is to draw graphics (e.g., an application program or operating system component) can selectively change certain aspects of the scene graph description while leaving other aspects intact. The code can also reuse already-built portions of the scene graph, possibly with different parameters. As can be appreciated, the ability to easily change the appearance of displayed items via parameterization and/or the reuse of existing parts of a scene graph provides substantial gains in overall graphics processing efficiency.

U.S. application No. 10/185,775 generally describes a caching data structure and corresponding mechanisms for storing visual information via objects and data in a scene graph. The data structure is generally associated with mechanisms that intelligently control how its visual information is populated and used. For example, unless specifically requested by the application program, most of the information stored in the data structure has no external references, which enables that information to be optimized or otherwise processed. As can be appreciated, this provides efficiency and conservation of resources; for example, the data in the cache data structure can be converted into a different format that is more compact and/or reduces the need for subsequent repeated processing, such as a bitmap format or other post-processing result.

While the above-mentioned improvements provide substantial advantages in graphics processing technology, there still needs to be a way to effectively and easily use this improved graphics model and its other related improvements. What is needed is a comprehensive yet straightforward way for programs to take advantage of the many features and graphics processing capabilities provided by the improved graphics model, and thereby output complex graphics in an efficient manner.

The invention

Briefly, the present invention provides an element object model and a vector graphics markup language for accessing that element object model in a manner that allows program code to consistently interact with a scene graph data structure to produce graphics. The vector graphics markup language comprises an interchange format for expressing vector graphics via the element object model. When interpreted, the markup is parsed into data, including elements in an element tree, that is translated into the objects of a scene graph data structure. At the element tree level, a property system and a presenter system are provided to give rich programmability features, including inheritance characteristics and event handling, making it straightforward for scene designers to build possibly complex scenes. In general, the vector graphics elements correspond to shape elements and other elements, including image and video elements, that correlate with scene graph objects of the scene graph object model. The properties and other resources of the vector graphics elements likewise correlate with similar properties and resources of the scene graph object model.

The vector graphics system can thus be programmed at the element level, in which each drawing shape is represented as an element at the same level as the rest of the programmable elements in a page/screen, allowing interaction with presenters, events and properties. The vector graphics system also provides a resource-level programming mechanism, by which scene designers can essentially shortcut the element tree and presenter system and program directly at the visual API level that interfaces with the scene graph data structure. This provides a more efficient and lightweight way to output the appropriate object, although with some loss of element-level programmability. In one implementation, when a fill of the "visual paint" type is programmed, the parser can directly call the API level with resource-level data to create a corresponding visual paint object (which is also a correlation between the element object model and the scene graph object model). In this two-tiered system, element-level vector graphics are parsed into created elements that require later translation into objects, while resource-level vector graphics are parsed and stored directly in an efficient manner. At the same time, elements and parts of the element tree can reference the data or objects so created at the resource level. To this end, elements, including visual paint elements, may be named. The scene designer thus has the ability to balance efficiency against programmability as needed.

The element class hierarchy includes a shape class, an image class, a video class and a canvas class. Elements of the shape class include rectangle, polyline, polygon, path, line and ellipse. Each element may include or be associated with fill (property) data, stroke data, clipping data, transform data, filter effect data and mask data. Shapes correspond to geometry (of the scene graph object model) that is drawn with inherited and cascaded presentation properties, which are used to construct the pen and the brush needed to draw the shapes. The image class is more specific than a shape and can include more raster image data, while the video class allows video (or similar media) to be played in a displayed element. The canvas class may act as a container for shapes, keeping shapes lightweight.
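By way of a non-normative illustration, the class hierarchy just described can be sketched as follows. The Python class and property names below merely mirror the terms in the text (shape, image, video, canvas, fill, stroke, and so on); they are assumptions for illustration, not the actual API of any shipped system.

```python
# Illustrative sketch of the element class hierarchy described above.
# All names are hypothetical stand-ins for the classes named in the text.
class Element:
    """Base element carrying the drawing-related properties listed in the text."""
    def __init__(self, fill=None, stroke=None, clip=None,
                 transform=None, effect=None, mask=None):
        self.fill = fill            # fill (property) data
        self.stroke = stroke        # stroke data
        self.clip = clip            # clipping data
        self.transform = transform  # transform data
        self.effect = effect        # filter effect data
        self.mask = mask            # mask data

class Shape(Element): ...
class Rectangle(Shape): ...
class Polyline(Shape): ...
class Polygon(Shape): ...
class Path(Shape): ...
class Line(Shape): ...
class Ellipse(Shape): ...

class Image(Shape):
    """More specific than a shape; may also carry raster image data."""
    def __init__(self, source=None, **kw):
        super().__init__(**kw)
        self.source = source

class Video(Element):
    """Plays video or similar media in a displayed element."""

class Canvas(Element):
    """Lightweight container for shapes."""
    def __init__(self, children=None, **kw):
        super().__init__(**kw)
        self.children = list(children or [])
```

The point of the sketch is only the shape of the hierarchy: every drawable kind derives from a common element base, so fill/stroke/clip/transform properties are uniformly available.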

In one embodiment, the markup code is interpreted by a parser/translator, which in general adds element-level elements to an element tree/property system and attaches presenters to those elements. The presenter system then takes the element tree with its attached presenters, translates the data into objects (via a builder), and makes calls to a visual API level that interfaces with the scene graph and creates the scene graph objects.
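A minimal sketch of that interpretation pipeline, under stated assumptions: the parser turns markup into an element tree, a presenter is attached per element, and each presenter issues calls against a visual API layer. The names (DrawingContext, PRESENTERS, parse_and_translate) are hypothetical, and a plain XML parser stands in for the markup parser.

```python
# Hypothetical sketch: markup -> element tree -> presenters -> visual API calls.
import xml.etree.ElementTree as ET

class DrawingContext:
    """Stand-in for the visual API layer: records the drawing calls made."""
    def __init__(self):
        self.calls = []
    def draw_rectangle(self, fill):
        self.calls.append(("rect", fill))
    def draw_ellipse(self, fill):
        self.calls.append(("ellipse", fill))

# One presenter per element type (assumed names, for illustration only).
PRESENTERS = {
    "Rectangle": lambda el, ctx: ctx.draw_rectangle(el.get("Fill")),
    "Ellipse": lambda el, ctx: ctx.draw_ellipse(el.get("Fill")),
}

def parse_and_translate(markup, ctx):
    root = ET.fromstring(markup)            # parser: markup -> element tree
    for el in root.iter():                  # walk the element tree
        presenter = PRESENTERS.get(el.tag)  # attach/look up a presenter
        if presenter:
            presenter(el, ctx)              # presenter -> visual API call

markup = '<Canvas><Rectangle Fill="Red"/><Ellipse Fill="Blue"/></Canvas>'
ctx = DrawingContext()
parse_and_translate(markup, ctx)
# ctx.calls now holds [("rect", "Red"), ("ellipse", "Blue")]
```

The separation mirrors the text: the parser only builds the tree, while the presenters are what actually talk to the visual API level.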

The markup language provides distinct ways to describe an element, including a simple string format and a complex object notation (complex property syntax). For the simple string format, the parser/translator and/or presenters use a type converter to convert the string into an appropriate visual API object. When a fill attribute is too complex to fit in a single string, complex property syntax, which may be inline in the markup, is used to describe the property set. Because the element level and the API level share the same rendering model, many of the objects are essentially the same, which makes parsing/translation highly efficient and provides other benefits. A resource instance may also be located elsewhere (e.g., in the markup or in a file) and referenced by name. In this way, a scene designer can reuse an element of the element tree throughout a scene, including elements described via complex property syntax. Other benefits and advantages will become apparent from the following detailed description taken in conjunction with the drawings.
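To make the two forms concrete, the following hedged sketch contrasts a simple string attribute run through a type converter with the same property expressed in complex property syntax. The element and class names (SolidColorBrush, LinearGradientBrush, the dotted Rectangle.Fill property element, parse_fill) follow the style the text describes but are assumptions for illustration only.

```python
# Illustrative sketch: simple string format vs. complex property syntax.
from dataclasses import dataclass
import xml.etree.ElementTree as ET

@dataclass
class SolidColorBrush:
    color: str

@dataclass
class LinearGradientBrush:
    stops: list

def parse_fill(element):
    """Return a brush object for a Fill given either as a simple string
    attribute (type-converter path) or as a nested property element."""
    simple = element.get("Fill")
    if simple is not None:
        return SolidColorBrush(color=simple)           # type converter path
    complex_prop = element.find("Rectangle.Fill")      # complex property syntax
    if complex_prop is not None:
        grad = complex_prop.find("LinearGradientBrush")
        stops = [s.get("Color") for s in grad.iter("GradientStop")]
        return LinearGradientBrush(stops=stops)
    return None

simple_markup = '<Rectangle Fill="Red" />'
complex_markup = """
<Rectangle>
  <Rectangle.Fill>
    <LinearGradientBrush>
      <GradientStop Color="Blue" />
      <GradientStop Color="Green" />
    </LinearGradientBrush>
  </Rectangle.Fill>
</Rectangle>
"""
```

Both markups describe a fill; the first is compact enough for a type converter, while the second carries structure (gradient stops) that cannot fit in one string.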

Brief description of drawings

Figure 1 is a block diagram of an illustrative computing system environment suitable for implementing the present invention.

Figure 2 is a generalized block diagram of a layered graphics architecture suitable for implementing the present invention.

Figure 3 is a representation of a scene graph of visuals and associated components for processing the scene graph, for example by traversing the scene graph to provide graphics commands and other data, in accordance with an aspect of the present invention.

Fig. 4 is a representation of a scene graph of validation visuals, drawing visuals and associated drawing primitives, constructed in accordance with an aspect of the present invention.

Fig. 5 is a representation of visual classes of the object model, in accordance with an aspect of the present invention.

Fig. 6 is a representation of various other objects of the object model, in accordance with an aspect of the present invention.

Fig. 7 is a diagram of transformation of a visual's data, in accordance with an aspect of the present invention.

Fig. 8A and 8B are representations of transformations of a visual's data in a geometry scale and a non-uniform scale, respectively, in accordance with an aspect of the present invention.

Fig. 9A-9C are block diagrams of surface visual objects and other visuals and components, in accordance with an aspect of the present invention.

Fig. 10A and 10B are diagrams of HWnd visual objects, in accordance with an aspect of the present invention.

Fig. 11 is a diagram of a layered visual object, in accordance with an aspect of the present invention.

Fig. 12 is a representation of geometry classes of the object model, in accordance with an aspect of the present invention.

Fig. 13 is a representation of a PathGeometry structure, in accordance with an aspect of the present invention.

Fig. 14 is a representation of a scene graph of visuals and drawing primitives, showing illustrative graphics produced by the primitives, in accordance with an aspect of the present invention.

Fig. 15 is a representation of brush classes of the object model, in accordance with an aspect of the present invention.

Fig. 16 is a representation of rendered graphics generated from data in a linear gradient brush object, in accordance with an aspect of the present invention.

Fig. 17 is a representation of rendered graphics generated from data in a radial gradient brush object, in accordance with an aspect of the present invention.

Fig. 18 is a representation of rendered graphics obtained with various stretch values, in accordance with an aspect of the present invention.

Fig. 19 is a representation of rendered graphics obtained with various tile values, in accordance with an aspect of the present invention.

Fig. 20 is a flow diagram generally representing logic for interpreting a visual, including a brush object, to generate graphics, in accordance with an aspect of the present invention.

Fig. 21 is a representation of a grid and a transformed grid obtained from data in a visual brush object, in accordance with an aspect of the present invention.

Fig. 22 is a representation of a grid and a transformed grid, with rendered graphics drawn from a visual, in accordance with an aspect of the present invention.

Fig. 23 is a representation of a rendered nine-grid brush object, in accordance with an aspect of the present invention.

Fig. 24 is a representation of transform classes of the object model, in accordance with an aspect of the present invention.

Fig. 25 is a representation of element classes of the element object model, in accordance with an aspect of the present invention.

Fig. 26 is a representation of components for interpreting markup language code to interact with the visual API level, in accordance with an aspect of the present invention.

Fig. 27 is a representation of clipping via a path geometry, in accordance with an aspect of the present invention.

Detailed description

Illustrative operating environment

Figure 1 shows an example of a suitable computing system environment 100 in which the invention may be implemented. The computing system environment 100 is only one example of a suitable computing environment and does not impose any limitation on the scope of use or functionality of the invention. Nor should the computing environment 100 be interpreted as having any dependency or requirement relating to any one component or combination of components illustrated in the illustrative operating environment 100.

The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments and/or configurations that may be suitable for implementing the present invention include, but are not limited to, personal computers, server computers, handheld or laptop devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.

The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and so forth, which perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located on both local and remote computer storage media, including memory storage devices.

With reference to Figure 1, an illustrative system for implementing the invention includes a general purpose computing device in the form of a computer 110. Components of the computer 110 may include, but are not limited to, a processor 120, a system memory 130, and a system bus 121 that couples various system components, including the system memory, to the processor 120. The system bus 121 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus (also called the Mezzanine bus).

The computer 110 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer 110, and include both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the computer 110. Communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. Communication media include, for example, wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF (radio frequency), infrared and other wireless media. Combinations of any of the above are also included within the scope of computer-readable media.

The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory, such as read-only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system (BIOS) 133, containing the basic routines that help to transfer information between elements within the computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to, and/or presently being operated on by, the processor 120. By way of example, and not limitation, Figure 1 illustrates an operating system 134, application programs 135, other program modules 136, and program data 137.

The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, Figure 1 shows a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152, and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156, such as a CD-ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the illustrative operating environment include magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface, such as interface 140, and the magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150.

The drives and their associated computer storage media, discussed above and shown in Figure 1, provide storage of computer-readable instructions, data structures, program modules and other data for the computer 110. In Figure 1, for example, the hard disk drive 141 is illustrated as storing an operating system 144, application programs 145, other program modules 146, and program data 147. Note that these components can either be the same as, or different from, the operating system 134, application programs 135, other program modules 136, and program data 137. The operating system 144, application programs 145, other program modules 146, and program data 147 are given different reference numerals here because, at a minimum, they may be different copies. A user may enter commands and information into the computer 110 through input devices such as a keyboard 162 and a pointing device 161, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processor 120 through a user input interface 160 that is coupled to the system bus 121, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190. The monitor 191 may also be integrated with a touch-screen panel 193 or the like, capable of inputting digitized input, such as handwriting, into the computer system 110 via an interface, such as a touch-screen interface 192. Note that the monitor and/or touch-screen panel can be physically coupled to the housing in which the computing device 110 is incorporated, such as in a tablet-type personal computer, wherein the touch-screen panel 193 essentially serves as the tablet 164.
In addition, computers such as the computing device 110 may also include other peripheral output devices, such as speakers 197 and a printer 196, which may be connected through an output peripheral interface 195.

The computer 110 may operate in a networked or distributed environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 is shown in Figure 1. The logical connections depicted in Figure 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks/buses. Such networking environments are commonplace in homes, offices, enterprise-wide computer networks, intranets and the Internet.

When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160 or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in a remote memory storage device. By way of example, and not limitation, Figure 1 shows remote application programs 185 as residing on the memory device 181. It will be appreciated that the network connections shown are illustrative, and other means of establishing a communications link between the computers may be used.

Graphics architecture

In accordance with one aspect of the present invention, program code, such as an application or operating system component, sends drawing instructions and other information (e.g., bitmaps) to graphics components in order to render graphical output on the system display. To this end, the present invention provides a markup language, along with a set of shape elements and other elements, a grouping and compositing system, and integration with a general property system in an object model, to enable programs to populate a scene graph with data structures, drawing primitives (commands) and other graphics-related data. When processed, the scene graph results in graphics (graphic images) being displayed on the screen.

Figure 2 provides an overview of a layered architecture 200 suitable for implementing the present invention. As shown in Figure 2, program code 202 (e.g., an application program or operating system component, and the like) may be developed to output graphics data in one or more of various ways, including via imaging 204, via vector graphic elements 206, and/or via function/method calls placed directly at a visual application programming interface (API) layer 212. Direct interaction with the API layer is further described in the aforementioned copending patent application entitled "Visual and Scene Graph Interfaces".

In general, imaging 204 provides the program code 202 with a mechanism for loading, editing and saving images, such as bitmaps. These images may be used by other parts of the system, and there is also a way to use the primitive drawing code to draw to an image directly.

In accordance with an aspect of the present invention, vector graphic elements 206 provide another way to draw graphics, consistent with the rest of the object model (described below). Vector graphic elements 206 may be created via a markup language, which an element/property system 208 and a presenter system 210 process to make appropriate calls to the visual API layer 212. As described below with reference to Fig. 26, in general the vector graphic elements 206 are parsed into objects of the object model from which a scene graph is drawn, which may be provided to the scene graph via an element-level path through the element/property system 208 and the presenter system 210, or may be provided in a more efficient manner at a resource level, as also described below.

In one implementation, the graphics layer architecture 200 includes a high-level compositing and animation engine 214, which includes or is otherwise associated with a caching data structure 216. The caching data structure 216 contains a scene graph comprising hierarchically arranged objects that are managed according to a defined object model, as described below. In general, the visual API layer 212 provides the program code 202 (and the presenter system 210) with an interface to the caching data structure 216, including the ability to create objects, open and close objects to provide data to them, and so forth. In other words, the high-level compositing and animation engine 214 exposes a unified media API layer 212 by which developers may express intentions about graphics and media to display graphics information, and provide the underlying platform with enough information so that the platform can optimize the use of the hardware for the program code. For example, the underlying platform will be responsible for caching, resource negotiation and media integration.

In one embodiment, the high-level compositing and animation engine 214 passes an instruction stream, and possibly other data (e.g., pointers to bitmaps), to a fast, low-level compositing and animation engine 218. As used herein, the terms "high-level" and "low-level" are similar to those used in other computing scenarios, wherein, in general, the lower a software component is relative to higher components, the closer that component is to the hardware. Thus, for example, graphics information sent from the high-level compositing and animation engine 214 may be received at the low-level compositing and animation engine 218, where the information is used to send graphics data to the graphics subsystem 222 that includes the hardware.

The high-level compositing and animation engine 214, in conjunction with the program code 202, builds a scene graph to represent a graphics scene provided by the program code 202. For example, each item to be drawn may be loaded with drawing instructions, which the system can cache in the scene graph data structure 216. As described below, there are a number of various ways to specify this data structure 216 and what is drawn. Further, the high-level compositing and animation engine 214 integrates with timing and animation systems 220 to provide declarative (or other) animation control (e.g., animation intervals) and timing control. Note that the animation system allows animated values to be passed essentially anywhere in the system, including, for example, at the element/property level 208, inside the visual API layer 212, and in any of the other resources. The timing system is exposed at the element and visual levels.

The low-level compositing and animation engine 218 manages the compositing, animating and rendering of the scene, which is then provided to the graphics subsystem 222. The low-level engine 218 composes the renderings for the scenes of multiple applications, and with rendering components implements the actual rendering of graphics to the screen. Note, however, that at times it may be necessary and/or advantageous for some of the rendering to happen at higher levels. For example, while the lower layers service requests from multiple applications, the higher layers are instantiated on a per-application basis, whereby it is possible, via the imaging mechanisms 204, to perform time-consuming or application-specific rendering at the higher levels, and pass references to a bitmap to the lower layers.

Scene graph object model

As described below, the rendering model is shared by the higher-level vector graphic elements 206 and the lower-level objects created by the visual API layer 212 and used in the scene graph data structure 216. This provides a significant amount of correlation between the higher-level elements of the present invention and the lower-level objects. The following describes one implementation of the scene graph object model.

Figs. 3 and 4 show illustrative scene graphs 300 and 400, respectively, whose base object is referred to as a visual. In general, a visual is an object that represents a virtual surface to the user and has a visual representation on the display. As shown in Figure 5, a base class visual provides the base functionality for the other visual types; that is, the visual class 500 is an abstract base class from which the visual types (e.g., 501-506) derive.

As shown in Figure 3, a top-level (root) visual 302 is connected to a visual manager object 304, which also has a relationship (e.g., via a handle) with a window 306 (HWnd) or similar unit into which graphics data is output for the program code. The visual manager 304 manages the drawing of the top-level visual (and any children of that visual) into that window 306. In Figure 6, the visual manager is shown as one of a number of other objects 620 of the object model of the graphics system described herein.

To draw, the visual manager 304 processes (e.g., traverses or transmits) the scene graph as scheduled by a dispatcher 308, and provides graphics instructions and other data to the low-level component 218 (Fig. 2) for its corresponding window 306, as generally described in U.S. patent applications Nos. 10/184,795, 10/184,796 and 10/185,775. The scene graph processing will ordinarily be scheduled by the dispatcher 308 at a rate that is relatively slower than the refresh rate of the lower-level component 218 and/or the graphics subsystem 222. Fig. 3 shows a number of child visuals 310-315 arranged hierarchically below the top-level (root) visual 302, some of which are represented as having been populated via drawing contexts 316, 317 (shown as dashed boxes to express their temporary nature) with associated instruction lists 318 and 319, respectively, containing, e.g., drawing primitives and other visuals. The visuals may also contain other property information, as shown in the following example visual class:

A transformation, set by the Transform property, defines the coordinate system for the sub-graph of a visual. The coordinate system before the transformation is called the pretransformation coordinate system, and the one after the transformation is called the posttransformation coordinate system; that is, a visual with a transformation is equivalent to a visual with a transformation node as its parent. Fig. 7 provides an overview of a transformation, identifying the pretransformation and posttransformation coordinate systems relative to a visual. To get or set the transformation of a visual, the Transform property may be used.

Note that the coordinate transforms may be applied in a uniform way to everything, as if in a bitmap. This does not mean that transformations always apply to bitmaps, but that what is rendered is affected by transforms equally. By way of example, if the user draws a circle with a round pen that is one inch wide, and then applies a scale of two in the X direction, the pen will be two inches wide at the left and right and only one inch wide at the top and bottom. This is sometimes referred to as a compositing or bitmap transform (as opposed to a skeleton or geometry scale, which affects the geometry only). Fig. 8A is a representation of scaling transformation, with a non-transformed image 800 appearing on the left and a transformed image 802 with a non-uniform scale appearing on the right. Fig. 8B is a representation of scaling transformation, with the non-transformed image 800 appearing on the left and a transformed image 804 with geometry scaling appearing on the right.
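The uniform, bitmap-like application of a transform described above can be sketched numerically. The following Python fragment (names are ours, not part of the described API) applies a non-uniform scale to a pen's offset vectors, reproducing the one-inch-pen example:

```python
# Sketch of how a non-uniform scale affects a stroked outline when the
# transform is applied "like a bitmap", i.e. to everything rendered, pen
# included (hypothetical helper, not the patent's API).

def apply_scale(sx, sy, point):
    """Apply a scale transform to a 2D point or offset vector."""
    x, y = point
    return (sx * x, sy * y)

# A one-inch-wide pen offsets the outline by half an inch on each side.
# Under a 2x horizontal scale, a horizontal pen offset doubles...
left_right_offset = apply_scale(2.0, 1.0, (0.5, 0.0))   # (1.0, 0.0)
# ...but a vertical pen offset is unchanged.
top_bottom_offset = apply_scale(2.0, 1.0, (0.0, 0.5))   # (0.0, 0.5)

# Resulting stroke widths: two inches left-to-right, one inch top-to-bottom,
# matching the circle-and-pen example in the text.
print(2 * left_right_offset[0], 2 * top_bottom_offset[1])  # 2.0 1.0
```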

With respect to coordinate transformation of a visual, TransformToDescendant transforms a point from the reference visual to a descendant visual; the point is transformed from the posttransformation coordinate space of the reference visual to the posttransformation coordinate space of the descendant visual. TransformFromDescendant transforms a point from the descendant visual, up the parent chain, to the reference visual; the point is transformed from the posttransformation coordinate space of the descendant visual to the posttransformation coordinate space of the reference visual. The CalculateBounds method returns the bounding box of the content of the visual in posttransformation coordinate space. Note that there may be an alternative version of the API in which more specific control is permitted over how the transform on a visual is interpreted during a coordinate transformation. For example, the transform on the reference and descendant visuals may or may not be taken into account. In this alternative there are thus four options: coordinates can be transformed from pretransformation space to pretransformation space, from pretransformation space to posttransformation space, from posttransformation space to pretransformation space, and from posttransformation space to posttransformation space. The same concept applies to hit testing; e.g., a hit test may be started in pretransformation or posttransformation coordinate space, and the hit results may be reported in pretransformation or posttransformation coordinate space.
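As a rough sketch of the TransformToDescendant/TransformFromDescendant semantics, the following Python fragment models a visual chain whose transforms are restricted to scale-then-translate for simplicity (the Visual class and its fields are illustrative assumptions, not the described API):

```python
# TransformToDescendant inverts each child's transform going down the chain;
# TransformFromDescendant applies the transforms going back up.
# Each visual's transform maps child space to parent space:
# parent point = (sx*x + tx, sy*y + ty).

class Visual:
    def __init__(self, sx=1.0, sy=1.0, tx=0.0, ty=0.0, parent=None):
        self.sx, self.sy, self.tx, self.ty = sx, sy, tx, ty
        self.parent = parent

    def _chain_to(self, descendant):
        """Visuals from self (exclusive) down to descendant (inclusive)."""
        chain = []
        v = descendant
        while v is not self:
            chain.append(v)
            v = v.parent
        chain.reverse()
        return chain

    def transform_to_descendant(self, descendant, point):
        """Self's posttransformation space -> descendant's posttransformation space."""
        x, y = point
        for v in self._chain_to(descendant):
            # entering a child's space undoes that child's transform
            x, y = (x - v.tx) / v.sx, (y - v.ty) / v.sy
        return (x, y)

    def transform_from_descendant(self, descendant, point):
        """Descendant's posttransformation space -> self's posttransformation space."""
        x, y = point
        for v in reversed(self._chain_to(descendant)):
            x, y = v.sx * x + v.tx, v.sy * y + v.ty
        return (x, y)

root = Visual()
child = Visual(sx=2.0, tx=10.0, parent=root)
print(root.transform_to_descendant(child, (12.0, 0.0)))    # (1.0, 0.0)
print(root.transform_from_descendant(child, (1.0, 0.0)))   # (12.0, 0.0)
```

The round trip through the two methods returns the original point, which is the invariant the pair of services provides.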

The Clip property sets (and gets) the clipping region of a visual. Any Geometry (the Geometry class is described below) can be used as a clipping region, and the clipping region is applied in posttransformation coordinate space. In one implementation, the default setting for the clipping region is null, i.e., no clipping, which can be thought of as an infinitely large clipping rectangle from (-∞,-∞) to (+∞,+∞).

The Opacity property gets/sets the opacity value of a visual, such that the content of the visual is blended on the drawing surface based on the opacity value and the selected blending mode. The BlendMode property can be used to set (or get) the blending mode that is used. For example, an opacity (alpha) value may be set between 0.0 and 1.0, with linear alpha blending as the mode, e.g., Color = alpha * foreground color + (1.0 - alpha) * background color. Other services, such as special-effect properties, e.g., blur, monochrome, and so on, may be included in a visual.
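The linear alpha blend formula just given can be evaluated directly. This small Python sketch (our names; colors as per-channel floats in [0, 1]) computes Color = alpha * foreground + (1 - alpha) * background:

```python
def blend(alpha, foreground, background):
    """Linear alpha blend, applied per channel:
    Color = alpha * foreground + (1 - alpha) * background."""
    return tuple(alpha * f + (1.0 - alpha) * b
                 for f, b in zip(foreground, background))

red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)
print(blend(1.0, red, blue))   # (1.0, 0.0, 0.0) - fully opaque: foreground only
print(blend(0.0, red, blue))   # (0.0, 0.0, 1.0) - fully transparent: background
print(blend(0.5, red, blue))   # (0.5, 0.0, 0.5) - an even mix
```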

The various services (including transform, opacity and clip) can be pushed and popped on a drawing context, and push/pop operations can be nested, as long as each pop call is matched with the corresponding push call. For example, the sequence PushTransform(...); PushOpacity(...); PopTransform(...) is illegal, because PopOpacity needs to be called before PopTransform.
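The pairing rule is exactly proper nesting, and can be modeled with a simple stack. The following Python sketch (hypothetical, not the described API) rejects the illegal sequence above:

```python
class DrawingContextSketch:
    """Minimal model of the push/pop pairing rule: a pop must match the most
    recent unmatched push of the same kind (proper nesting)."""

    def __init__(self):
        self._stack = []

    def push(self, kind):   # kind: "Transform", "Opacity" or "Clip"
        self._stack.append(kind)

    def pop(self, kind):
        top = self._stack[-1] if self._stack else "<none>"
        if top != kind:
            raise ValueError(f"Pop{kind} does not match Push{top}")
        self._stack.pop()

dc = DrawingContextSketch()
dc.push("Transform"); dc.push("Opacity")
dc.pop("Opacity"); dc.pop("Transform")    # legal: properly nested

dc.push("Transform"); dc.push("Opacity")
try:
    dc.pop("Transform")                   # illegal: PopOpacity must come first
except ValueError as e:
    print("rejected:", e)
```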

The PushTransform method pushes a transformation. Subsequent drawing operations are executed with respect to the pushed transformation. PopTransform pops the transformation pushed by the matching PushTransform call:

void PushTransform(Transform transform);

void PushTransform(Matrix matrix);

void PopTransform().

Similarly, the PushOpacity method pushes an opacity value. Subsequent drawing operations are rendered on a temporary surface with the specified opacity value and then composited into the scene. PopOpacity pops the opacity pushed by the matching PushOpacity call:

void PushOpacity(float opacity);

void PushOpacity(NumberAnimationBase opacity);

void PopOpacity().

The PushClip method pushes a clipping geometry. Subsequent drawing operations are clipped to the geometry. The clipping is applied in posttransformation space. PopClip pops the clipping region pushed by the matching PushClip call:

void PushClip(Geometry clip);

void PopClip().

Note that push operations can be arbitrarily nested, as long as the pop operations are matched with the pushes; for example, PushTransform(...); PushOpacity(...); PopOpacity(...); PopTransform(...) is a valid sequence of operations.

Hit testing is performed in posttransformation coordinate space, and returns the identity of each hit-testable visual that is hit, e.g., when a pen or mouse click is detected. An alternate version of the interface allows a hit test to start in pretransformation coordinate space relative to the visual where the hit test is initiated. Visuals that are hit are returned in right-to-left, deepest-to-shallowest order. Hit testing may be controlled with various flags, including HitTestable, which determines whether the visual is hit-testable (the default is true), and HitTestFinal, which determines whether hit testing stops when this visual is hit, i.e., if a visual is hit and its HitTestFinal property is true, hit testing aborts and returns the results collected up to that point (the default is false). Another flag is HitTestIgnoreChildren, which determines whether the descendants of a visual are considered when hit testing is performed on it (the default is false).
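The traversal order and the effect of the three flags can be sketched as follows. In this Python fragment, the tree, the rectangular bounds model, and all names are illustrative assumptions, not the described API:

```python
# Children are visited right-to-left (topmost first), results are collected
# deepest first, HitTestable=False skips a visual, HitTestFinal=True stops
# the whole test, and HitTestIgnoreChildren=True prunes a subtree.

class Visual:
    def __init__(self, name, bounds, children=(), hit_testable=True,
                 hit_test_final=False, hit_test_ignore_children=False):
        self.name, self.bounds = name, bounds          # bounds: (x0, y0, x1, y1)
        self.children = list(children)
        self.hit_testable = hit_testable
        self.hit_test_final = hit_test_final
        self.hit_test_ignore_children = hit_test_ignore_children

    def contains(self, pt):
        x0, y0, x1, y1 = self.bounds
        return x0 <= pt[0] <= x1 and y0 <= pt[1] <= y1

def hit_test(visual, pt, results=None):
    """Returns (results, stopped); results are ordered deepest-to-shallowest."""
    if results is None:
        results = []
    if not visual.hit_testable or not visual.contains(pt):
        return results, False
    if not visual.hit_test_ignore_children:
        for child in reversed(visual.children):        # right-to-left
            results, stopped = hit_test(child, pt, results)
            if stopped:
                return results, True
    results.append(visual.name)                        # deepest first
    return results, visual.hit_test_final

leaf = Visual("leaf", (0, 0, 5, 5))
root = Visual("root", (0, 0, 10, 10), children=[leaf])
print(hit_test(root, (1, 1))[0])   # ['leaf', 'root']
print(hit_test(root, (8, 8))[0])   # ['root']
```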

A proxy visual (ProxyVisual) is a visual that may be added to the scene graph more than once. Since any visual referred to by a proxy visual may be reached from the root by multiple paths, the read services (TransformToDescendant, TransformFromDescendant and HitTest) do not work through a proxy visual. In essence, there is one canonical path from any visual to the root of the visual tree, and that path does not include any proxy visuals.

As represented in Fig. 5, various types of visuals are defined in the object model, including container visuals (ContainerVisual) 501, drawing visuals (DrawingVisual) 502, validation visuals (ValidationVisual) 503, surface visuals (SurfaceVisual) 504 and HWnd visuals (HwndVisual) 505. The table below sets forth example methods of a drawing visual:

A drawing visual is a container for graphical content (e.g., lines, text, images, and so on). Note that it is possible to add a visual into a drawing visual, but in some implementations this is not allowed. The drawing visual 502 includes an Open method, which returns an IDrawingContext that can be used to populate the drawing visual with, e.g., other visuals and drawing primitives, as described below. In one implementation, for various reasons also described below, a drawing visual may be opened only once to populate its drawing context; in other words, such a drawing visual is immutable. After the drawing visual has been populated, the visual is closed using a Close method, e.g., on the drawing context. Note that an Open call may clear any content (children) of the visual; however, one alternative implementation provides an OpenForAppend method, which opens the current visual in an append mode. In other words, an OpenForAppend call works like Open, except that the existing content of the drawing visual is not cleared on opening.

Below is an example of how the drawing context is used to populate the visual:

ContainerVisual cv1 = new ContainerVisual();
DrawingVisual dv1 = new DrawingVisual();
//Open the drawing context. The context will be automatically
//closed when the using block exits. Opening also clears
//any content that may already be in dv1.
using (IDrawingContext dc = dv1.Open())
{
dc.DrawLine(new Pen(Brushes.Blue), new Point(...),
new Point(...));
}
//Add dv1 to the children collection of cv1

cv1.Children.Add(dv1);
//Add another arbitrary visual to cv1
cv1.Children.Add(someOtherVisual);
//Create another visual drawing
DrawingVisual dv2 = new DrawingVisual();
using (IDrawingContext dc = dv2.Open())
{
//Specify a new coordinate system, where everything is twice as large.
dc.PushTransform(new Scale(2.0, 2.0));
//Draw a line in the coordinate system with the new scale.
dc.DrawLine(new Pen(Brushes.Red), new Point(...),
new Point(...));
//Return to the original coordinate system.
dc.PopTransform();
dc.DrawLine(new Pen(Brushes.Green), new Point(...),
new Point(...));
}
//Add dv2 to the children collection of cv1.
cv1.Children.Add(dv2);

In general, a validation visual (ValidationVisual) 503 is conceptually similar to a drawing visual, except that a validation visual is populated when the system requests that it be filled, rather than when the program code wants to fill it. For example, as described in U.S. patent application No. 10/185,775, the high-level compositing and animation engine 214 (Fig. 2) may invalidate scene graph data as resources are needed, such as when part of the scene graph is not visible, e.g., when some portion is off the screen, clipped, and so forth. If the invalidated data is needed later, the calling program code 202 will be called back to redraw (validate) the invalidated portion of the scene graph. To this end, one typical usage scenario is for the program code to subclass ValidationVisual and override the OnValidate method. When the system calls OnValidate, a drawing context is passed in, and the program uses that drawing context to repopulate the validation visual.

The example below shows one way to implement a simple validation visual, e.g., one that draws a line in a certain color. The line color can be changed by calling the SetColor method. To force the validation visual to update, SetColor calls Invalidate to force the graphics subsystem to revalidate the validation visual:

public class MyValidationVisual : ValidationVisual
{
public override void OnValidate(IDrawingContext dc)
{
dc.DrawLine(m_color, ...);
}
public void SetColor(Color newColor)
{
m_color = newColor;
Invalidate(); //Force the validation visual
//to redraw to reflect
//the color change.
}
private Color m_color;
}

The example above shows one way a validation visual may be used.

Fig. 4 shows an example scene graph 400 in which container visuals and drawing visuals are connected in a scene graph, with associated data in the form of drawing instructions (e.g., in corresponding drawing contexts). A container visual (ContainerVisual) is a container for visuals, and container visuals can be nested in one another. The children of a container visual can be manipulated through a VisualCollection returned by the Children property of the container visual. The order of the visuals in the visual collection determines the order in which the visuals are rendered, i.e., visuals are rendered from the lowest index to the highest index, from back to front (in painting order). For example, with suitable parameters, and with three drawing visuals representing red, green and blue rectangles arranged hierarchically under a container visual, the following code would result in three rectangles being drawn (offset to the right and down), with the red rectangle in the back, the green rectangle in the middle and the blue rectangle in front:
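As an illustration of the back-to-front rendering order just described, here is a minimal Python sketch (hypothetical names, not the described API) in which a container paints its children from index 0 upward, so that later children land on top of earlier ones:

```python
# Children render from lowest index to highest: index 0 is the background,
# the last child is the foreground. Nested containers recurse in order.

class ContainerVisual:
    def __init__(self):
        self.children = []

    def render(self, painted=None):
        """Return the paint order; the last-painted child is frontmost."""
        if painted is None:
            painted = []
        for child in self.children:            # index 0 first = background
            if isinstance(child, ContainerVisual):
                child.render(painted)
            else:
                painted.append(child)
        return painted

cv = ContainerVisual()
cv.children.extend(["red rectangle", "green rectangle", "blue rectangle"])
print(cv.render())
# ['red rectangle', 'green rectangle', 'blue rectangle']
# red painted first (back), blue painted last (front)
```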

Fig. 5 shows another type of visual object, the surface visual (SurfaceVisual) 504. In general, as represented in Fig. 3, a SurfaceVisual object 315 references an in-memory surface (bitmap) 322 that program code 202 (Fig. 2) can access. The client program code 202 can supply its own surface memory, or it can request that the memory be allocated by the surface object.

Program code 202 has the option to open the surface visual and obtain a drawing context 323, into which the program code 202 can write pixel data 324 and the like and place those pixels directly onto the surface. This is represented in Fig. 3 by the dashed line between the surface object 322, the drawing context 323 (shown as a dashed box to reflect its temporary nature) and the pixel data 324.

Program code 202 also has the option to create a surface visual manager 330 and associate a visual subgraph 332 with the surface visual 315. This option is represented in Fig. 3 by the dashed line between the surface object 322 and the surface visual manager 330. Note that the visual subgraph 332 can also nest other surface visuals, as also shown in Fig. 3. The surface visual manager 330 (also shown as a type of other object in the set 620 of Fig. 6) traverses the visual subgraph 332 to update the surface visual's bitmap 322. Further, note that this traversal is scheduled by the dispatcher 308, and, for efficiency, may be throttled to control how often this bitmap 322 is updated. The surface visual manager 330 does not have to traverse the visual subgraph 332 each time, and/or at the same rate at which the top-level visual manager 304 traverses the rest of the scene graph. With respect to surfaces, as further described below with reference to Figs. 9A-9C, in general the present graphics model allows composing a set of visuals to a surface, immediate-mode rendering of vector and bitmap primitives into a surface, composing a surface onto the desktop or onto another surface, and controlling which surface in a surface list is used to compose into or to draw into. A surface list is defined as a collection of one or more surfaces (i.e., frames/buffers) of physical (system or video) memory used to store compositions of visuals, graphical drawings, or both. One surface of the surface list may be set as the current back buffer, where drawing and/or composing is done, and one surface of the surface list is set as the current primary, or front, buffer, which is used to compose onto another render target.

Surfaces can be used in a number of ways. For example, Fig. 9A shows composing to a surface. In Fig. 9A, a surface visual manager object 900 is connected with a surface list 902 as the render target of a visual tree 904. During each compositing cycle, the visuals are composed onto the surface of the surface list that currently serves as the active back buffer. The surface being composed to can include a surface owned by the client/high-level compositing engine 214 (Fig. 2), for in-process compositing scenarios; a surface owned by the low-level compositing engine 218, for scenarios where the client does not need the bits but the low-level engine 218 needs them to compose the surface onto another render target; or a cross-process surface, for scenarios where the client needs access to the surface bits, but the low-level engine 218 also needs the surface for other compositing work.

The composing is controlled by a timing service attached to the visual manager. One example of a timing service is a manual mode, an example use of which is shown below:

//create a manual timing service and attach it to the
//visual manager
TimingService timingService =
new ManualTimingService(visualManager);
//compose the visual tree to the current back buffer of the
//surface
visualManager.Render();
foreach (Tick tick in timingService)
{
//advance the back buffer to the next frame of the surface list
surfaceList.NextFrame();
//advance the time of the visual tree
timingService.Tick(tick);
//compose the visual tree to the current back buffer of the
//surface
visualManager.Render();
}

Another way to use a surface is immediate-mode rendering to the surface via a context. Attaching a surface list to a visual (the surface visual) enables immediate-mode rendering to the surface of the surface list that currently serves as the active back buffer. This rendering is done by obtaining a drawing context from the surface visual and executing drawing commands in that context, as described above. Note that the drawing context locks the surface so that no other compositing operations can be performed on it. Each drawing command is executed immediately, and vectors and other surfaces can be drawn (blended) onto the surface. However, other visuals cannot be drawn onto the surface; rather, they can be composed onto the surface by associating the surface with a visual manager, as described above (e.g., in Fig. 9A).

//attach the surface list to a visual
SurfaceVisual surfaceVisual = new SurfaceVisual(surfaceList);
//enable immediate-mode rendering to the back buffer of the
//surface (and lock it)

BaseDrawingContext dc = surfaceVisual.Open();
//draw a line (immediately) into the current back buffer of the
//surface
dc.DrawLine(pen, startPoint, endPoint);
//unlock the surface - immediate-mode rendering is done
surfaceVisual.Close(dc);

Another way to use a surface is to compose a surface onto another render target. To this end, once a surface list is attached to a surface visual, the surface visual can be attached as a node in a visual tree, and the surface of the surface list that currently serves as the primary, or front, buffer can be composed onto another surface or onto the desktop.

This is illustrated in Fig. 9B and in the example below:

//attach the surface list to a visual
SurfaceVisual surfaceVisual = new SurfaceVisual(surfaceList);
//add the surface visual to a visual tree for composing
//onto another render target
rootVisual.Add(surfaceVisual);

Live composing to/from a surface is represented in Fig. 9C, where the above-described capabilities are combined so that composing onto the back-buffer surface of a surface list and composing the front-buffer surface of that list (e.g., onto the desktop) occur simultaneously. Note that, to eliminate the undesirable visual effect known as tearing, the surface list should contain at least two surfaces: a front-buffer surface and a back-buffer surface. A surface used as in Fig. 9C is likely to be owned by the low-level engine 218, or to be a cross-process surface, to make the composing performed in the low-level engine 218 more efficient.

Surfaces are constructed as independent objects, as set forth in the following examples of constructors:

public class Surface
{
//create and allocate a blank surface without initial data
public Surface(int width,
int height,
int dpi,
PixelFormat pixelFormat,
SurfaceFlags flags)
//create a surface using the allocated memory
public Surface(int width,
int height,
int dpi,
PixelFormat pixelFormat,
IntPtr pixels, //managed memory for the
//surface
int stride)
//create the source (i.e. clone)
public Surface(Surface sourceSurface,
SurfaceFlags flags)
//create from a file or URL
public Surface(String filename,
SurfaceFlags flags)
//create from the stream
public Surface(System.IO.Stream stream,
SurfaceFlags flags)
//create from an HBITMAP (which cannot be selected into an HDC)
public Surface(HBITMAP hbitmap, HPALETTE hPalette)
//create from HICON
public Surface(HICON hicon)
//read-only properties
public int Width {get; }
public int Height {get; }
public int Dpi {get; }

public PixelFormat Format {get; }
public int Stride {get; }
public IntPtr Buffer {get; }
}
public class SurfaceList
{
//Create an empty list of surfaces (no initial data).
public SurfaceList(int width,
int height,
int dpi,
PixelFormat pixelFormat,
int numSurfaces,
SurfaceFlags flags)
//Create a list of surfaces that uses the specified
//the surface. All surfaces must have the same
//properties (w, h, dpi, etc)
public SurfaceList(Surface []surfaces)
//change the front buffer to the first-in-line
//back buffer
public void Flip()

//advance the back buffer to the next surface
public void Next()
public int FrontBufferIndex {get; set;}
public int BackBufferIndex {get; set;}
public Surface GetFrontBuffer()
public Surface GetBackBuffer()
public Surface GetSurface(int surfaceIndex)
}
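The Flip/Next operations declared above can be sketched as an index rotation over the surface list. The following Python fragment is a simplified reading of the described semantics (class and field names are ours, not the actual implementation):

```python
# One surface is the front (primary) buffer being composed elsewhere; another
# is the back buffer being drawn into. Flip promotes the back buffer to front
# and advances the back index; Next just advances the back index.

class SurfaceListSketch:
    def __init__(self, num_surfaces):
        self.surfaces = list(range(num_surfaces))
        self.front = 0
        self.back = 1 % num_surfaces

    def flip(self):
        """Make the current back buffer the new front buffer."""
        self.front, self.back = self.back, (self.back + 1) % len(self.surfaces)

    def next(self):
        """Advance the back buffer to the next surface in the list."""
        self.back = (self.back + 1) % len(self.surfaces)

sl = SurfaceListSketch(2)     # at least two surfaces, as the text requires
print(sl.front, sl.back)      # 0 1
sl.flip()
print(sl.front, sl.back)      # 1 0
```

With only one surface the front and back buffer would coincide, which is the tearing hazard the text warns about; hence the two-surface minimum.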

Once constructed, a surface and/or a surface list can be attached to a surface visual object or to a visual manager object.

//Create a surface visual
public SurfaceDrawingVisual(Surface surface)
public SurfaceDrawingVisual(SurfaceList surfaceList)
//Create a visual manager with a surface render target
public VisualManager(Surface surface)
public VisualManager(SurfaceList surfaceList)

Further, a surface can receive data from a decoder, and/or send its data to an encoder for writing to a particular file format. Surfaces can also receive/send data from/to effect interfaces. A surface can be constructed for any pixel format from the full set of supported surface format types. However, some adjustments may be made to the specified pixel format; for example, if the specified pixel format is less than 32 bits per pixel, the format will be promoted to 32 bits per pixel. Whenever bits are requested from a surface in its original format, the surface will be copied to a buffer of the requested pixel format using a format-conversion filter.
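The described promotion to 32 bits per pixel, with conversion back to the requested format on read-back, can be sketched as follows (Python; the packing details and helper names are illustrative assumptions, not the actual filter):

```python
# A surface specified at less than 32bpp is stored promoted to 32bpp; when
# bits are requested in the original format, a conversion copy is made.

def promote_to_32bpp(pixels_24bpp):
    """Expand packed 24bpp RGB triples to 32bpp RGBA with opaque alpha."""
    out = bytearray()
    for i in range(0, len(pixels_24bpp), 3):
        out += pixels_24bpp[i:i + 3] + b"\xff"
    return bytes(out)

def convert_to_24bpp(pixels_32bpp):
    """Format-conversion copy back to the requested 24bpp format."""
    out = bytearray()
    for i in range(0, len(pixels_32bpp), 4):
        out += pixels_32bpp[i:i + 3]
    return bytes(out)

original = b"\x10\x20\x30\x40\x50\x60"      # two 24bpp pixels
stored = promote_to_32bpp(original)          # what the surface keeps internally
assert convert_to_24bpp(stored) == original  # round trip on request
print(len(original), len(stored))            # 6 8
```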

Fig. 5 shows yet another visual, the HWnd visual (HwndVisual) 505, which positions a Win32 child HWnd in the scene graph. More particularly, legacy programs will still operate via the WM_PAINT method (or the like) that draws to a child HWnd (or similar object) based on prior graphics technology. To support such programs in the new graphics processing model, the HwndVisual allows the HWnd to be contained in the scene graph and moved as the containing parent visual is repositioned, as represented in Fig. 10A. As a result of limitations with existing HWnds, however, when rendered, a child HWnd can only be on top of other windows, and cannot be rotated or scaled like the other visuals described above. Some clipping is possible, as represented in Fig. 10B, where the dashed lines indicate the HWnd's displayed rectangle being clipped during relative movement with respect to its parent visual.

Other types of visuals 506 are also feasible, and the present object model is extensible to allow others to be developed. For example, as represented in Fig. 11, a layered visual 1100 enables an application developer to separately control the information in a visual via multiple data streams, providing a finer granularity of control relative to visuals having a single data stream. Note that a similar granularity of control can be accomplished by having (e.g., three) separate child visuals under a single parent visual; however, that requires the program code to work with multiple visuals, which is more complicated than working with a single layered visual having indexes to multiple layers.

By way of example, as in Fig. 11, background data, content data and border data are contained in a single layered visual, but are separated from one another by being indexed with a layer value, e.g., 0, 1 or 2, respectively. Layers can be inserted, including appended at either end, and/or deleted, and the layering order (e.g., left to right as shown) defines an implied Z-order for display. Note that, for security, child content and other data in a layered visual cannot be enumerated.

Other types of visuals include container visuals, and redirected child HWnd visuals, in which the window content is drawn to a bitmap and incorporated into a surface visual. Three-dimensional visuals enable a connection between the two-dimensional and three-dimensional worlds; for example, a camera-like view is possible via a two-dimensional visual having a view into a three-dimensional world.

Many of the resource objects are immutable once created, that is, once they are created they cannot be changed, for various reasons including simplifying threading, preventing corruption by others, and simplifying the interaction with elements and APIs. Note that this generally simplifies the system. It should be noted, however, that it is feasible to have a system in which such objects are mutable, but this would require, for example, managing a dependency graph. For example, while it is possible to have a system in which such objects are mutable, if program code changed a clip set on a visual, the visual would need to be re-rendered, which would require a notification/registration mechanism; e.g., if a new clip were assigned to a visual, the visual would register itself with the clip for notifications (e.g., a clip-changed notification). Thus, in one implementation, for simplification purposes, resource objects are immutable.

These resource objects can be defined with a constructor, which is a straightforward, generic way to create an object, or by using a companion builder object, as described below. For instance, to create a SolidColorBrush (brush objects are described below), a constructor may be used:

Brush MyBrush = new SolidColorBrush(Colors.Red).

The user can also use the static members of the Brushes class to get a set of predefined colors.

Because immutable objects cannot be changed, in order to effectively change an object, the user needs to create a new object and replace the old object with it. To this end, many of the resource objects in the system may utilize the builder pattern, in which immutable objects are created with a builder class, which is a companion class that is mutable. The user creates an immutable object to mirror the parameters set on the builder, creates a new builder for that object, and initializes it from the immutable object. The user then changes the builder as necessary. Once done, the user can build a new object, by changing the builder and reusing it to create another immutable object. Note that having immutable objects with set properties is desirable, and that immutable objects cannot be changed, but only replaced, raising a property-change event.

Thus, instead of using a constructor to create a SolidColorBrush as described above, a SolidColorBrushBuilder may be used:

SolidColorBrushBuilder MyBuilder = new SolidColorBrushBuilder();

MyBuilder.Color = Colors.Red;

Brush MyBrush = MyBuilder.ToBrush().
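The builder round trip just shown can be sketched generically. In this Python fragment, the class and method names mirror the C# example but are otherwise our own; it demonstrates that the original immutable object is never modified, only replaced:

```python
# Immutable object + mutable companion builder, as described in the text.

class SolidColorBrush:
    """Immutable: the color is fixed at construction."""
    def __init__(self, color):
        self._color = color

    @property
    def color(self):
        return self._color

    def to_builder(self):
        """A new builder initialized from this immutable object."""
        return SolidColorBrushBuilder(self._color)

class SolidColorBrushBuilder:
    """Companion class that allows changes, then builds a new immutable."""
    def __init__(self, color=None):
        self.color = color          # freely mutable

    def to_brush(self):
        return SolidColorBrush(self.color)

brush = SolidColorBrush("Red")
builder = brush.to_builder()
builder.color = "Blue"               # change the builder, not the brush
new_brush = builder.to_brush()       # the replacement object
print(brush.color, new_brush.color)  # Red Blue
```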

Most objects that take static values can also take animation objects. For instance, the DrawingContext has an override on DrawCircle that takes a PointAnimationBase for the center of the circle. In this way, the user can specify animation information at the primitive level. For resource objects there is an animation collection in addition to the base value. These are composited, whereby if the user wanted to animate the above example, the user could specify the following example line before the brush is built:

MyBuilder.ColorAnimations.Add(new ColorAnimation(...)).

Note that an object with animation parameters is still immutable, because its animation parameters are static. However, when the scene graph is processed (e.g., traversed), the meaning of the animation parameters changes over time, giving the appearance of animated, not static, data.
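This distinction can be sketched as follows: the animation object itself is a fixed function of time (so the resource stays immutable), yet sampling it during traversal at different times yields different values (the class and its interpolation scheme below are illustrative assumptions):

```python
# A hypothetical color animation: the object never changes after
# construction, but its evaluated value depends on the traversal time t.

class ColorAnimationSketch:
    """Linear interpolation between two colors over t in [0, 1]."""
    def __init__(self, start, end):
        self.start, self.end = start, end    # fixed at construction

    def value_at(self, t):
        t = min(max(t, 0.0), 1.0)
        return tuple(s + (e - s) * t for s, e in zip(self.start, self.end))

anim = ColorAnimationSketch((1.0, 0.0, 0.0), (0.0, 0.0, 1.0))  # red -> blue
print(anim.value_at(0.0))   # (1.0, 0.0, 0.0)
print(anim.value_at(1.0))   # (0.0, 0.0, 1.0)
```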

As described above, visuals can be drawn on by populating their drawing contexts with various drawing primitives, including Geometry, ImageData and VideoData. Furthermore, there is a set of resources and classes that are shared through this entire stack, including pens, brushes, geometry, transforms and effects. IDrawingContext exposes the set of drawing operations that can be used to populate a DrawingVisual or a ValidationVisual. ISurfaceDrawingContext, a base interface of IDrawingContext, can be used to populate a SurfaceVisual. In other words, the drawing context exposes a set of drawing operations; for each drawing operation there are two methods, one that takes constants as arguments, and one that takes animators as arguments. The DrawLine method draws a line with the specified pen from the start point to the end point.

The DrawRoundedRectangle method draws a rounded rectangle with the specified brush and pen; the brush and the pen can be null.

The DrawGeometry method draws a path with the specified brush and pen; the brush and the pen can be null.

The DrawRectangle method draws a rectangle with the specified brush and pen; the brush and the pen can be null.

The DrawSurface method draws a surface.

Geometry is a type of class (Fig. 12) that defines a vector graphics skeleton, without stroke or fill. Each geometry object is a simple shape (LineGeometry, EllipseGeometry, RectangleGeometry), a complex single shape (PathGeometry), or a list of such shapes, GeometryList, with a combine operation specified (e.g., union, intersection, and so forth).

As represented in Fig. 13, a PathGeometry is a collection of Figure objects. In turn, each Figure object is composed of one or more Segment objects, which actually define the figure's shape. A Figure is a sub-section of a Geometry that defines a segment collection. This segment collection is a single connected series of two-dimensional Segment objects. The Figure can be either a closed shape with a defined area, or just a connected series of segments that define a curve, but no enclosed area.

The filled area of a PathGeometry is defined by taking the contained figures that have their Filled property set to true, and applying a FillMode to determine the enclosed area. Note that the FillMode enumeration specifies how the intersecting areas of Figure objects contained in a Geometry are combined to form the resulting area of the Geometry. Under the alternate (even-odd) rule, to determine whether a point is inside the canvas, conceptually draw a ray from that point to infinity in any direction and count the number of path segments from the shape that the ray crosses; if this number is odd, the point is inside, and if even, it is outside. Under the winding rule, conceptually draw a ray from the point to infinity in any direction and then examine the places where a segment of the shape crosses the ray; starting with a count of zero, add one each time a segment crosses the ray from left to right and subtract one each time a segment crosses the ray from right to left; if the resulting count is zero, the point is outside the path, otherwise it is inside. When drawing a geometry (e.g., a rectangle), a brush or a pen can be specified, as described below. Furthermore, the pen object also has a brush object. A brush object defines how to graphically fill a plane, and there is a class hierarchy of brush objects. This is represented in Fig. 14 by the filled rectangle 1402 that results when the visual including the rectangle and the brush instructions and parameters is processed.
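The two fill rules can be implemented directly with a ray cast. The following Python sketch (our helper names; polygons given as vertex lists with the closing edge implied) applies both rules to the same point:

```python
# Cast a ray in the +x direction from the point and count crossings:
# the even-odd rule uses crossing parity, the winding rule uses the
# signed (direction-aware) crossing total.

def crossings(poly, pt):
    """Return (signed, unsigned) crossings of a rightward ray from pt."""
    x, y = pt
    signed = unsigned = 0
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                  # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                       # crossing right of the point
                unsigned += 1
                signed += 1 if y2 > y1 else -1    # direction of the crossing
    return signed, unsigned

def inside_even_odd(poly, pt):
    """Alternate rule: an odd number of crossings means inside."""
    return crossings(poly, pt)[1] % 2 == 1

def inside_winding(poly, pt):
    """Winding rule: a nonzero signed crossing count means inside."""
    return crossings(poly, pt)[0] != 0

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(inside_even_odd(square, (2, 2)), inside_winding(square, (2, 2)))  # True True
print(inside_even_odd(square, (5, 2)), inside_winding(square, (5, 2)))  # False False
```

The two rules only disagree on self-intersecting or overlapping figures, which is exactly where the FillMode choice matters.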

As described below, certain brush types (for example, gradients and nine grids) determine their own size. When used, the size for these brushes is obtained from the bounding box; for example, when the GradientUnits/DestinationUnits of a Brush is set to ObjectBoundingBox, the bounding box of the primitive being drawn is used. If these properties are set to UserSpaceOnUse, the coordinate space at the time of use is employed.

A pen (Pen) object holds a brush along with the properties Width, LineJoin, LineCap, MiterLimit, DashArray and DashOffset, as represented in the example below:

As noted above, the graphics object model of the present invention includes a brush object model, which is generally directed toward the concept of covering a plane with pixels. Examples of brush types are represented in the hierarchy depicted in Fig., and, under a base Brush class, include a solid color brush (SolidColorBrush), a gradient brush (GradientBrush), an image brush (ImageBrush), a visual brush (VisualBrush, which can reference a visual) and a nine-grid brush (NineGridBrush). GradientBrush includes "linear gradient" (LinearGradient) and "radial gradient" (RadialGradient) objects. As described above, brush objects are immutable.

Below is an example of a class Builder brush (BrushBuilder):

Note that brush objects, when used, may detect how they relate to the coordinate system and/or to the bounding box of the shape on which they are used. In general, information such as size can be deduced from the object on which the brush is drawn. More particularly, many brush types use a coordinate system for specifying some of their parameters. This coordinate system can either be defined as relative to the simple bounding box of the shape to which the brush is applied, or it can be relative to the coordinate space that is active at the time the brush is used. These are known as the ObjectBoundingBox and UserSpaceOnUse modes, respectively.

A "solid color brush" (SolidColorBrush) object fills the specified plane with a solid color. If the color has an alpha component, it is combined multiplicatively with the corresponding opacity attribute of the base Brush class. The following is an example of a SolidColorBrush object:

"Gradient brush" (GradientBrush) objects, or simply gradients, provide a gradient fill and are drawn by specifying a set of gradient stops, which specify the colors along some sort of progression. The gradient is drawn by performing linear interpolation between the gradient stops in an RGB color space with a gamma of 2.2; interpolation over other gammas or other color spaces (HSB, CMYK and so forth) are also feasible alternatives. Two types of gradient objects exist, namely linear and radial gradients.

In general, gradients are composed of a list of gradient stops. Each of these gradient stops contains a color (with an alpha value) and an offset. If no gradient stops are specified, the brush is drawn as solid transparent black, as if there were no brush specified at all. If only one gradient stop is specified, the brush is drawn as a solid color, the one color specified. Like other resource classes, the gradient stop class (an example is shown in the table below) is immutable.
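The behavior described above — transparent black for no stops, a solid color for one stop, and linear interpolation between adjacent stops otherwise — can be sketched as follows. This is an illustrative sketch only (gamma correction is omitted for brevity, and the function name is invented for illustration):

```python
def sample_gradient(stops, t):
    """Sample a gradient at position t in [0,1].
    stops: list of (offset, (r, g, b, a)) pairs."""
    if not stops:
        return (0.0, 0.0, 0.0, 0.0)          # transparent black
    stops = sorted(stops)
    if len(stops) == 1 or t <= stops[0][0]:
        return stops[0][1]                    # solid / clamp at first stop
    if t >= stops[-1][0]:
        return stops[-1][1]                   # clamp at last stop
    # Linear interpolation between the two stops bracketing t
    for (o1, c1), (o2, c2) in zip(stops, stops[1:]):
        if o1 <= t <= o2:
            f = 0.0 if o2 == o1 else (t - o1) / (o2 - o1)
            return tuple(a + f * (b - a) for a, b in zip(c1, c2))
```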

There is also a collection class, described in the following example:

As represented in the table below, GradientSpreadMethod specifies how the gradient should be drawn outside of the specified vector or space. There are three values, including "pad", in which the edge colors (first and last) are used to fill the remaining space, "reflect", in which the stops are replayed in reverse order to fill the space, and "repeat", in which the stops are repeated in order until the space is filled:
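The three spread behaviors can be sketched as a mapping of a raw gradient coordinate (which may fall outside the specified gradient vector) into the [0,1] stop range. This is an illustrative sketch only, not the actual implementation:

```python
def apply_spread(t, method):
    """Map a raw gradient coordinate t into [0,1] per the spread method."""
    if method == "pad":
        # Clamp: edge colors fill the remaining space
        return min(max(t, 0.0), 1.0)
    if method == "repeat":
        # Stops repeat in order: keep only the fractional part
        return t % 1.0
    if method == "reflect":
        # Stops replay in reverse order on every other repetition
        t = t % 2.0
        return 2.0 - t if t > 1.0 else t
    raise ValueError(f"unknown spread method: {method}")
```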

Fig. shows examples of the GradientSpreadMethod values. Each shape has a linear gradient going from white to gray; the solid line represents the gradient vector.

The LinearGradient brush specifies a linear gradient along a vector. The individual stops specify color stops along that vector. An example is shown in the table below:

public class System.Windows.Media.LinearGradient: GradientBrush
{
//Sets the gradient with two colors and gradient vector,
//specified for the fill of the object to which it is applied
//the gradient. It is assumed that the property GradientUnits
//takes the value ObjectBoundingBox
public LinearGradient(Color color1, Color color2,
float angle);
public BrushMappingMode GradientUnits { get; }
public Transform GradientTransform { get; }
public GradientSpreadMethod SpreadMethod { get; }
//The gradient vector
public Point VectorStart { get; }
public PointAnimationCollection VectorStartAnimations
{ get; }
public Point VectorEnd { get; }
public PointAnimationCollection VectorEndAnimations { get; }
//Stop gradient
public GradientStopCollection GradientStops { get; }
}
public class System.Windows.Media.LinearGradientBuilder:
GradientBrushBuilder
{
public LinearGradientBuilder();
public LinearGradientBuilder(Color color1, Color color2,
float angle);
public LinearGradientBuilder(LinearGradient lg);
//GradientUnits: default - ObjectBoundingBox
public BrushMappingMode GradientUnits { get; set; }
//GradientTransform: default - identical
//convert
public Transform GradientTransform { get; set; }
//SpreadMethod: default - Pad
public GradientSpreadMethod SpreadMethod { get; set; }
//The gradient vector
//Vector defaults to (0,0) - (1,0)
public Point VectorStart { get; set; }
public PointAnimationCollectionBuilder VectorStartAnimations
{ get; set; }
public Point VectorEnd { get; set; }
public PointAnimationCollectionBuilder VectorEndAnimations
{ get; set; }
//Stop gradient
public void AddStop(Color color, float offset);
public GradientStopCollectionBuilder GradientStops
{ get; set; }
}

The RadialGradient is similar in programming model to the linear gradient. However, whereas the linear gradient has a start and end point to define the gradient vector, the radial gradient defines the gradient behavior with a circle along with a focal point. The circle defines the end point of the gradient, i.e. a gradient stop at 1.0 defines the color at the circle's circumference. The focal point defines the center of the gradient; a gradient stop at 0.0 defines the color at the focal point.

Fig. shows a radial gradient from white to gray. The outer circle represents the gradient circle, while the dot denotes the focal point. In this example gradient, SpreadMethod is set to Pad:

public class System.Windows.Media.RadialGradient: GradientBrush
{
//Sets the gradient with two colors.
//Assumes that the property GradientUnits has the value
//ObjectBoundingBox, the center at (0.5,0.5), a radius of 0.5
//and the focal point at (0.5,0.5)
public RadialGradient(Color color1, Color color2);
public BrushMappingMode GradientUnits { get; }
public Transform GradientTransform { get; }
public GradientSpreadMethod SpreadMethod { get; }
//Definition of the gradient
public Point CircleCenter { get; }
public PointAnimationCollection CircleCenterAnimations
{ get; }
public float CircleRadius { get; }
public FloatAnimationCollection CircleRadiusAnimations
{ get; }
public Point Focus { get; }
public PointAnimationCollection FocusAnimations { get; }
//Stop gradient
public GradientStopCollection GradientStops { get; }
}
public class System.Windows.Media.RadialGradientBuilder:
GradientBrushBuilder
{
public RadialGradientBuilder();
public RadialGradientBuilder(Color color1, Color color2);
public RadialGradientBuilder(RadialGradient rg);
//GradientUnits: default - ObjectBoundingBox
public BrushMappingMode GradientUnits { get; set; }
//GradientTransform: default - identical
//convert
public Transform GradientTransform { get; set; }
//SpreadMethod: default - Pad
public GradientSpreadMethod SpreadMethod { get; set; }
//Definition of the gradient
public Point CircleCenter { get; set; }//default:
//(0.5, 0.5)
public PointAnimationCollectionBuilder
CircleCenterAnimations { get; set;}
public float CircleRadius { get; set;}//default: 0.5
public FloatAnimationCollectionBuilder
CircleRadiusAnimations { get; set;}
public Point Focus { get; set; } //default: (0.5,0.5)
public PointAnimationCollectionBuilder
FocusAnimations { get; set; }
//Stop gradient
public void AddStop(Color color, float offset);
public GradientStopCollectionBuilder GradientStops
{ get; set; }
}

Another brush object represented in Fig. is a "visual brush" (VisualBrush) object. Conceptually, the visual brush provides a way to have a visual drawn in a repeated, tiled fashion as a fill. Visual paint objects also provide a mechanism for the markup language to work directly with the API level at the resource level, as described below. As an example of such a fill, Fig. represents a visual brush referencing a visual (and any child visuals) that specifies a single circular shape 1420, with that circular shape filling a rectangle 1422. Thus, the visual brush object may reference a visual that specifies how that brush is to be drawn, which introduces a type of multiple use for visuals. In this manner, a program may use an arbitrary graphics "metafile" to fill an area via a brush or pen. Since this is a compressed form of storing and using arbitrary graphics, it serves as a graphics resource. The following is an example of a visual brush object:

The contents of a visual brush have no intrinsic bounds and effectively describe an infinite plane. These contents exist in their own coordinate space, and the space that is being filled by the visual brush is the local coordinate space at the time of application. The content space is mapped into the local space based on the "view box" (ViewBox), "viewport" (ViewPort), Alignment and Stretch properties. The view box is specified in content space, and this rectangle is mapped into the viewport rectangle (specified via the Origin and Size properties).

The viewport defines the location where the contents will eventually be drawn, creating the base tile for this brush. If the value of DestinationUnits is UserSpaceOnUse, the Origin and Size properties are considered to be in local space at the time of application. If instead the value of DestinationUnits is ObjectBoundingBox, the Origin and Size are considered to be in the coordinate space where (0,0) is the top-left corner of the bounding box of the object being brushed and (1,1) is the bottom-right corner of the same box. For example, consider filling a RectangleGeometry being drawn from (100,100) to (200,200). In such an example, if DestinationUnits is ObjectBoundingBox, an Origin of (0,0) and a Size of (1,1) would describe the entire content area. If Size is empty, this brush renders nothing.

The ViewBox is specified in content space. This rectangle is transformed to fit within the viewport as determined by the Alignment and Stretch properties. If Stretch is None, no scaling is applied to the contents. If Stretch is Fill, the view box is scaled independently in X and Y to be the same size as the viewport. If Stretch is Uniform or UniformToFill, the logic is similar, but the X and Y dimensions are scaled uniformly, preserving the aspect ratio of the contents. If Stretch is Uniform, the view box is scaled to have the more constrained dimension equal to the viewport's size. If Stretch is UniformToFill, the view box is scaled to have the less constrained dimension equal to the viewport's size. In other words, both Uniform and UniformToFill preserve the aspect ratio, but Uniform ensures that the entire view box is within the viewport (potentially leaving portions of the viewport uncovered by the view box), while UniformToFill ensures that the entire viewport is filled by the view box (potentially causing portions of the view box to fall outside the viewport). If the view box is empty, no Stretch is applied. Note that alignment still occurs in this case, positioning the "point" view box.
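The scaling behavior of the four Stretch values can be summarized by computing the scale factors applied to the view box; the following sketch is illustrative only (names are invented, not part of the described API):

```python
def stretch_scale(content_w, content_h, viewport_w, viewport_h, stretch):
    """Return (sx, sy): scale factors mapping the ViewBox into the viewport."""
    if stretch == "None" or content_w == 0 or content_h == 0:
        return 1.0, 1.0                # contents not scaled
    sx = viewport_w / content_w
    sy = viewport_h / content_h
    if stretch == "Fill":
        return sx, sy                  # independent scaling, may distort
    if stretch == "Uniform":
        s = min(sx, sy)                # whole ViewBox fits inside the viewport
        return s, s
    if stretch == "UniformToFill":
        s = max(sx, sy)                # viewport fully covered, ViewBox may be clipped
        return s, s
    raise ValueError(f"unknown stretch: {stretch}")
```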

Fig. provides representations of a single tile 1800 of graphics rendered with various stretch settings, including tile 1800 with Stretch set to None. Tile 1802 is a representation of Stretch set to Uniform, tile 1804 of Stretch set to UniformToFill, and tile 1806 of Stretch set to Fill.

Once the viewport is determined (based on DestinationUnits) and the view box's size is determined (based on Stretch), the view box needs to be positioned within the viewport. If the view box is the same size as the viewport (if Stretch is Fill, or if it just happens to be the same with one of the other three Stretch values), the view box is positioned at the Origin so as to coincide with the viewport. Otherwise, HorizontalAlignment and VerticalAlignment are considered. Based on these properties, the view box is aligned in the X and Y dimensions. If HorizontalAlignment is Left, the left edge of the view box is positioned at the left edge of the viewport. If it is Center, the center of the view box is positioned at the center of the viewport, and if Right, the right edges are aligned. The same process is repeated for the Y dimension.

If the view box is (0,0,0,0), it is considered unset, and the ContentUnits property is therefore considered. If ContentUnits is UserSpaceOnUse, no scaling or offset occurs, and the contents are drawn into the viewport with no transform. If ContentUnits is ObjectBoundingBox, the content origin is aligned with the viewport Origin, and the contents are scaled by the width and height of the object's bounding box.

When filling a space with a visual brush, the contents are mapped into the viewport as described above and clipped to the viewport. This forms the base tile for the fill, and the remainder of the space is filled in based on the brush's TileMode value. Finally, the brush's transform, if set, is applied; it occurs after all the other mapping, scaling, offsetting and so forth. The TileMode property is an enumerated type used to describe if and how a space is filled by its brush. A brush that can be tiled has a tile rectangle defined, and this tile has a base location within the space being filled. The rest of the space is filled based on the TileMode value. Fig. provides a representation of example graphics with various TileMode settings, including "None" 1900, "Tile" 1902, "FlipX" 1904, "FlipY" 1906 and "FlipXY" 1908. The topmost, leftmost tile in each of the example graphics comprises the base tile.
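The tiling behavior can be sketched by mapping a point in the filled space back to coordinates within the base tile. This is a minimal illustrative sketch, not the described implementation:

```python
def tile_source(x, y, tile_w, tile_h, mode):
    """Map a point in the filled space to a point in the base tile,
    or None when nothing is painted there (TileMode None)."""
    if mode == "None":
        # Only the base tile is drawn; outside it nothing is painted.
        if 0 <= x < tile_w and 0 <= y < tile_h:
            return x, y
        return None
    ix, fx = divmod(x, tile_w)       # tile index and offset within tile, x
    iy, fy = divmod(y, tile_h)       # tile index and offset within tile, y
    # Odd-indexed tiles are mirrored for the flip modes
    flip_x = mode in ("FlipX", "FlipXY") and int(ix) % 2 == 1
    flip_y = mode in ("FlipY", "FlipXY") and int(iy) % 2 == 1
    sx = tile_w - fx if flip_x else fx
    sy = tile_h - fy if flip_y else fy
    return sx, sy
```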

Fig. represents the process for generating the pixels for this brush. Note that the logic described in Fig. is only one possible way to implement the logic, and it should be understood that other ways, including more efficient ones, are feasible. For example, there are likely more efficient ways of processing the data, e.g. such that the contents are not drawn on each repetition, with the tile drawn and cached instead. However, Fig. provides a straightforward description.

In general, each time the contents of the pattern are drawn, a new coordinate system is created. The origin and offset of each repetition are specified by the Origin and Size properties, as filtered through the DestinationUnits and Transform properties.

A coordinate frame is set up based on the DestinationUnits property. To this end, if at step 2000 it is determined that the DestinationUnits property is UserSpaceOnUse, the current coordinate system at the time the brush was used becomes the starting coordinate system, at step 2002. If instead at step 2000 the property is determined to be ObjectBoundingBox, the bounding box of the geometry to which this brush is applied is used, at step 2004, to set a new coordinate frame such that the upper-left corner of the bounding box maps to (0,0) and the lower-right corner of the bounding box maps to (1,1). In either case, at step 2006 the Transform property is applied to this coordinate frame, which essentially defines the grid.

Fig. represents a visual brush grid defined for the tiles in a visual brush. The first circle is through a simple grid, and the second circle has a Skew transform in the X direction of 47.

At step 2008, the visual is drawn into each cell of the grid, as represented in Fig., where the visual draws the appropriate data. If at step 2010 it is determined that a ViewBox is specified, the visual is fitted into the grid cell at step 2012, in accordance with the ViewBox, Stretch, HorizontalAlign and VerticalAlign attributes. The DestinationUnits and Transform properties are used to apply the correct transform such that the visual lines up in the grid cell.

If no ViewBox is specified, a new coordinate system is established at step 2014 for drawing the content.

The coordinate frame is set such that its origin coincides with the Origin point of the particular grid cell being drawn.

At step 2018, a clip is applied based on the Size property, so that this tile will not draw outside the bounds of the cell. The Origin and Size are modified appropriately based on the DestinationUnits property.

The coordinate system is then modified based on the SourceUnits property. To this end, if at step 2020 it is determined that the SourceUnits property is ObjectBoundingBox, the appropriate scaling transform is applied at step 2022; otherwise, if it is UserSpaceOnUse, no new transform is applied. The Transform property is applied at step 2024, and the content is drawn at step 2026.

Note that if any component of Size is zero, nothing is drawn, and if Stretch is None, the transform for the view box is set up such that one unit in the new coordinate frame equals one unit in the old coordinate frame. The transform essentially becomes an offset based on the Alignment attributes and the size of the view box. As described above at steps 2010 and 2012, Stretch and the alignment properties apply only when a ViewBox is specified. The ViewBox specifies a new coordinate system for the contents, and Stretch helps to define how those contents are mapped into the view box. The alignment options align the view box, not the contents. Thus, for example, if the view box is set to "0 0 10 10" and something is drawn at (-10,-10) and aligned to the upper-left corner, that thing will be clipped out.

According to Fig., an image brush can be thought of as a special case of a visual brush. Although a program can create a visual, put an image into it and attach it to a visual brush, the API for doing so would be cumbersome. Since there is no necessary content coordinate frame, the ViewBox and ContentUnits property members no longer apply.

public class System.Windows.Media.VisualBrush: Brush
{
public VisualBrush (Visual v);
public BrushMappingMode DestinationUnits { get; }
public BrushMappingMode ContentUnits { get; }
public Transform Transform { get; }
public ViewBox Rect { get; }
public Stretch Stretch { get; }
public HorizontalAlign HorizontalAlign { get; }
public VerticalAlign VerticalAlign {get; }
public Point Origin { get; }
public PointAnimationCollection OriginAnimations { get; }
public Size Size { get; }
public SizeAnimationCollection SizeAnimations { get; }
//Visual
public Visual Visual { get; }
}
public class System.Windows.Media.VisualBrushBuilder:
BrushBuilder
{
public VisualBrushBuilder();
public VisualBrushBuilder(Visual v);
public VisualBrushBuilder(VisualBrush vb);
//DestinationUnits: default - ObjectBoundingBox
public BrushMappingMode DestinationUnits { get; set; }
//ContentUnits: default - ObjectBoundingBox
public BrushMappingMode ContentUnits { get; set; }
//Transform: default - identical transformation
public Transform Transform { get; set; }
//ViewBox: default (0,0,0,0) is not installed, and
//ignored
public ViewBox Rect { get; set; }
//Stretch: None by default - and ignored
//because ViewBox attribute is not specified
public Stretch Stretch { get; set;}
//HorizontalAlign: default - Center and ignored
public HorizontalAlign HorizontalAlign { get; set; }
//VerticalAlign: default - Center and ignored
public VerticalAlign VerticalAlign {get; set;}
//Origin: the default (0,0)
public Point Origin { get; set; }
public PointAnimationCollectionBuilder OriginAnimations
{ get; set; }
//Size: default (1,1)
public Size Size { get; set; }
public SizeAnimationCollectionBuilder SizeAnimations
{ get; set; }
//Visual: default null - nothing drawn
public Visual Visual { get; set; }
}
public class System.Windows.Media.ImageBrush: Brush
{
public ImageBrush(ImageData image);
public BrushMappingMode DestinationUnits { get; }
public Transform Transform { get; }
public Stretch Stretch { get; }
public HorizontalAlign HorizontalAlign { get; }
public VerticalAlign VerticalAlign {get; }
public Point Origin { get; }
public PointAnimationCollection OriginAnimations { get; }
public Size Size { get; }
public SizeAnimationCollection SizeAnimations { get; }
public ImageData ImageData { get; }
}
public class System.Windows.Media.ImageBrushBuilder:
BrushBuilder
{
public ImageBrushBuilder();
public ImageBrushBuilder(ImageData image);
public ImageBrushBuilder(ImageBrush ib);
//DestinationUnits: default - ObjectBoundingBox
public BrushMappingMode DestinationUnits { get; set; }
//Transform: default - identical transformation
public Transform Transform { get; set; }
//Stretch: None by default
public Stretch Stretch { get; set;}
//HorizontalAlign: default Center
public HorizontalAlign HorizontalAlign { get; set; }
//VerticalAlign: default Center
public VerticalAlign VerticalAlign {get; set;}
//Origin: the default (0,0)
public Point Origin { get; set; }
public PointAnimationCollectionBuilder OriginAnimations
{ get; set; }
//Size: default (1,1)
public Size Size { get; set; }
public SizeAnimationCollectionBuilder SizeAnimations
{ get; set; }
//ImageData: default null - nothing drawn
public ImageData ImageData { get; set; }
}

A nine-grid brush (NineGridBrush) is similar to an image brush (ImageBrush), except that the image is warped based on the size. In essence, the nine-grid brush may be thought of as a custom type of Stretch, in which certain parts of the image stretch while others (e.g. the borders) do not. Thus, whereas the Size of the image in an image brush causes simple scaling, the nine-grid brush produces a non-uniform scale to the desired size. The units for the non-scaled areas are user units when the brush is applied, meaning that ContentUnits (if it existed for the nine-grid brush) would be set to UserUnitsOnUse. The Transform property of the brush can be used effectively. Note that the border members count in from the edges of the image.

By way of example, Fig. represents a nine-grid image being enlarged from a first instance 2302 to a second instance 2304, with four types of areas. As represented in Fig., to keep the border the same, the areas marked "a" expand horizontally, the areas marked "b" expand vertically, the areas marked "c" expand both horizontally and vertically, and the areas marked "d" do not change in size.
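The non-uniform nine-grid scaling can be sketched as a per-axis mapping from a destination pixel back to the source image: the leading and trailing border bands keep their size, while the middle band is stretched linearly. This is a minimal illustrative sketch (names and the symmetric-border assumption are invented for illustration):

```python
def nine_grid_source(x, y, src_w, src_h, dst_w, dst_h,
                     left, top, right, bottom):
    """Map a destination pixel (x, y) to a source-image pixel for a
    nine-grid stretch: borders stay unscaled, the middle stretches."""
    def axis(v, src, dst, lo, hi):
        if v < lo:                       # leading border band: unscaled
            return v
        if v >= dst - hi:                # trailing border band: unscaled
            return src - (dst - v)
        # middle band: stretched linearly between the two borders
        scale = (src - lo - hi) / (dst - lo - hi)
        return lo + (v - lo) * scale
    return (axis(x, src_w, dst_w, left, right),
            axis(y, src_h, dst_h, top, bottom))
```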

As generally described above, the graphics object model of the present invention includes a Transform ("transform") object model, which includes the types of transforms represented in the hierarchy of Fig. under a base Transform class. These various types of components that make up a transform may include a "transform list" (TransformList), "translate transform" (TranslateTransform), "rotate transform" (RotateTransform), "scale transform" (ScaleTransform), "skew transform" (SkewTransform) and "matrix transform" (MatrixTransform). Individual properties can be animated; for example, a program developer can animate the "angle" (Angle) property of a rotate transform.

Matrices for two-dimensional computations are represented as a 3×3 matrix. For the needed transforms, only six values are required instead of a full 3×3 matrix. These are named and defined as follows:

When a matrix is multiplied with a point, it transforms that point from the new coordinate system to the previous coordinate system:

Transforms can be nested to any level. Applying each new transform is the same as multiplying it onto the right of the current transform matrix:
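The six-value affine representation and the right-multiplication of nested transforms can be sketched as follows; this is an illustrative sketch (function names invented), storing the six values m11, m12, m21, m22, dx, dy of a 3×3 matrix whose last column is fixed at (0, 0, 1):

```python
def multiply(a, b):
    """Compose two transforms: applying `a` then `b` (row-vector
    convention, i.e. right-multiplication of nested transforms)."""
    a11, a12, a21, a22, adx, ady = a
    b11, b12, b21, b22, bdx, bdy = b
    return (a11 * b11 + a12 * b21, a11 * b12 + a12 * b22,
            a21 * b11 + a22 * b21, a21 * b12 + a22 * b22,
            adx * b11 + ady * b21 + bdx, adx * b12 + ady * b22 + bdy)

def transform_point(m, p):
    """Transform a point by the six-value matrix (row-vector convention)."""
    m11, m12, m21, m22, dx, dy = m
    x, y = p
    return (x * m11 + y * m21 + dx, x * m12 + y * m22 + dy)
```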

Most places in the API do not take a Matrix directly, but instead use the Transform class, which supports animation.

Markup language and object model for vector graphics

In accordance with an aspect of the present invention, a markup language and an element object model are provided to enable user programs and tools to interact with the scene graph data structure 216 without requiring specific knowledge of the details of the API level 212 (Fig. 2). In general, a vector graphics markup language is provided, which comprises an interchange format along with a simple authoring format for expressing vector graphics via an element-based object model. Via this language, markup (e.g. HTML- or XML-type content) may be programmed. Then, to build the scene graph, the markup is parsed and translated into the appropriate objects of the visual API level described above. At this higher operating level, an element tree, a property system and a presenter system are provided to handle much of the complexity, providing scene designers with straightforward tools to design even very complex scenes.

In general, the vector graphics system provides a set of shape elements and other elements that integrate with a general property system, a grouping system and a two-level (element-level and resource-level) approach, such that the user can program in a way that matches flexibility and performance needs. In keeping with one aspect of the present invention, the element object model for dealing with vector graphics correlates with the scene graph object model. In other words, the vector graphics system and the visual API level share a set of resources at the element object model level; e.g. the Brush object is used when drawing at the visual API, and it is also the type of the Fill property on a Shape. Thus, in addition to having elements that correlate with the scene graph objects, the markup language shares a number of primitive resources (e.g. brushes, transforms and so forth) with the visual API level. The vector graphics system also exposes and extends the animation capabilities of the visual API level, which are largely shared between the levels.

In addition, as described below, the vector graphics system can be programmed to different profiles, or levels, including an element level and a resource level. At the element level, each drawing shape is represented as an element at the same level as the rest of the programmable elements in a page/screen. This means that the shapes interact in a full way with the presenter system, events and properties. At the resource level, the vector graphics system operates in a pure resource format, similar to a traditional graphics metafile. The resource level is efficient, but has somewhat limited support for cascaded properties, eventing and fine-grained programmability. The scene designer thus has the ability to balance efficiency with programmability as needed.

In keeping with one aspect of the present invention, the vector graphics system at the resource level also correlates with the visual API level, in that the resource-level markup, in one implementation, is expressed as a visual brush. When the resource markup is parsed, a "visual" object is created. The visual object is set into a VisualBrush ("visual brush"), which may be used by shapes, controls and other elements at the element level.

Fig. is a representation of the element class hierarchy 2500. The classes of the markup language object model of the present invention are represented via shadowed boxes, and include a shape class 2502, an image class 2504, a video class 2506 and a canvas class 2508. Elements of the shape class include rectangle 2510, polyline 2512, polygon 2514, path 2516, line 2518 and ellipse 2520. Note that in some implementations a circle element may not be present, as indicated by the dashed box 2522 in Fig.; however, for purposes of the various examples herein, the circle element 2522 will be described. Each element may include or be associated with fill (property) data, stroke data, clipping data, transform data, filter effect data and mask data.

As described below, shapes correspond to geometry that is drawn with inherited and cascaded presentation properties. The presentation properties are used to construct the pen and the brush needed to draw the shapes. In one implementation, shapes are full presenters, like other control elements. However, in other implementations, the canvas class 2508 may be provided as a container for shapes, and shapes can only be drawn when in a canvas element. For example, to keep shapes lightweight, shapes may not be allowed to have attached presenters. Instead, the canvas has an attached presenter and draws the shapes. Canvas elements are described in more detail below.

As also described below, the image class is more specific than a shape and, for example, can include border data, which may be complex. For example, a border can be specified as one color on the top and a different color on the sides, with possibly various thickness values and other properties set. Position, size, rotation and scale may be set for an image or similar boxed element, such as text or video. Note that the image and video elements can exist and be shown outside of a canvas, and they also inherit from the boxed element, e.g. to get the background, border and padding support of that element.

The video element allows video (or similar media) to be played in a displayed element. In this manner, the vector graphics system provides a markup interface to the API level that is seamlessly consistent across multimedia, including text, two-dimensional graphics, three-dimensional graphics, animation, video, still images and audio. This allows designers familiar with one medium to easily integrate other media into applications and documents. The vector graphics system also enables multimedia to be animated in the same way as other elements, again allowing designers to use multimedia like other elements, without sacrificing the core intrinsic uniqueness of each individual media type. For example, a designer can use the same naming scheme for rotating, scaling, animating, compositing and other effects across different media types, whereby designers may easily create very rich applications, while also allowing a very efficient rendering and compositing implementation to be built underneath.

Fig. represents one implementation in which the markup code 2602 is interpreted by a parser/translator 2604. In general, the parser/translator 2604 adds elements to an element tree / property system 208 (also represented in Fig. 2) and attaches presenters to those elements. The presenter system 210 then takes the element tree 210 with the attached presenters and translates the data into objects and calls to the visual API level 212. Note that not all elements need to be in the element tree, only those with attached presenters. In general, an element is an object at the element level that participates in the property system, eventing and layout/presentation. The parser finds tags and decides whether those tags help to define an element or a resource object. In the special case of a visual brush, the same tags may be interpreted as elements or also interpreted as resource objects, depending on the context in which those tags appear, e.g. depending on whether they appear in complex property syntax or not.

According to one aspect of the present invention, the markup language provides distinct ways to describe a resource, including a simple string format and a complex object notation. For the simple string format, the parser/translator 2604 uses a type converter 2608 to convert the string into an appropriate visual API object. For example, in the following line of markup, the value of the Fill property can be converted into a brush object by the type converter 2608:

<Circle CenterX="10" CenterY="10" Radius="5" Fill="Red" />

It is evident that the conversion of such an inline line of tag-based markup with simple string parameters into a 'brush' object is straightforward, and provides the scene designer with the easiest way to add a shape and its attributes to a scene.
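As a minimal illustration of such a type converter, the following Python sketch uses a hypothetical named-color table and a (kind, payload) tuple standing in for the real brush object model; none of these names are from the actual API:

```python
# Hypothetical named-color table; a real converter would know many more.
NAMED_COLORS = {"Red": "#ff0000", "Green": "#008000", "Blue": "#0000ff"}

def convert_fill(value):
    """Type-converter sketch: turn a simple string attribute such as
    Fill="Red" into a (kind, payload) description of a solid brush."""
    if value.startswith("#"):
        return ("solid", value.lower())       # literal #rrggbb color
    if value in NAMED_COLORS:
        return ("solid", NAMED_COLORS[value])
    raise ValueError(f"no simple-string conversion for {value!r}")
```

For example, `convert_fill("Red")` returns `("solid", "#ff0000")`, mirroring how the Fill attribute of the circle above would be turned into a solid red brush.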

However, it may happen that the fill attribute is too complex to fit into a simple string. In that case, complex property syntax is used to set the property. For example, the following complex property syntax fills a circle with a gradient rather than a solid color, specifying the colors at various gradient stops (which can range from 0 to 1):

A resource instance can not only be embedded inline in the markup, but can also be located elsewhere (for example, in the markup or in a file, which can be local or on a remote network and loaded appropriately) and referenced by a name (for example, a text name, a reference, or another suitable identifier). In this way, a scene designer can reuse an element in the element tree throughout a scene, including elements described with complex property syntax.

The parser handles markup in complex property syntax by consulting the type converter 2608 as necessary, and also by matching the specified parameters to the properties of the object, thereby simplifying the work of the scene designer. Thus, the parser does not just set up the objects, it also sets attributes on the objects. Note that the parser actually instructs a builder to create the objects, because the objects themselves are immutable.
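That parse-time builder pattern can be sketched as follows (the class and method names are illustrative, not the actual API): the parser populates a mutable builder attribute by attribute, and build() emits the immutable object:

```python
from dataclasses import dataclass

@dataclass(frozen=True)          # the built object is immutable
class SolidColorBrush:
    color: str

class BrushBuilder:
    """Mutable builder the parser can populate attribute by attribute."""
    def __init__(self):
        self._color = "Black"    # illustrative default

    def set_color(self, color):
        self._color = color
        return self

    def build(self):
        # Emit the immutable object once all attributes are known.
        return SolidColorBrush(self._color)
```

Because the brush itself is frozen, all mutation happens on the builder before `build()` is called, which matches the note above that the parser drives a builder rather than mutating objects directly.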

Since the element level and the API level share the same rendering model, many of the objects are essentially the same. This makes parsing/translation highly efficient, and also allows programming languages of different types (for example, C#-like languages) to easily convert the markup to their own syntax, and vice versa. Note that, as shown in Fig., another such programming language 2610 can add elements to the element tree 208, or can interact directly with the visual API level 212.

As also shown in Fig., and in accordance with an aspect of the present invention, the same markup 2602 can be used for programming at the element level and at the resource level. As described above, the element level gives the scene designer full programmability, use of the property system that provides inheritance (for example, style-sheet-like features), and event handling (for example, whereby an element can have code attached to it that changes its appearance, position, and so forth in response to a user-interface event). However, the present invention also provides a resource-level mechanism by which scene designers can essentially shortcut the element tree and the presenter system and program directly at the visual API level. For many types of static shapes, images, and the like, for which element-level features are not needed, this provides a more efficient and lightweight way to output the corresponding objects. To this end, the parser recognizes when a fill of the 'visual brush' type is present, and directly calls the API level 212 with resource-level data 2612 to create the object. In other words, as shown in Fig., element-level vector graphics are parsed into created elements that are later translated into objects, whereas resource-level vector graphics are parsed and stored directly in an efficient manner.

For example, the following markup is drawn directly from the object model for the 'linear gradient' object, and fills an outer circle with a visual brush. The contents of that visual brush are the inner markup. Note that this syntax is commonly used to express various brushes, transforms, and animations:

Note that although these visual-brush objects are efficiently stored at the resource data level (as are the objects created from them), they can be referenced by elements and parts of the element tree 208, as generally shown in Fig. To do so, these visual brush resources should be given a name (for example, a text name, a reference, or another suitable identifier), and can then be referenced like other resources described using complex property syntax.

Returning to the explanation of the canvas mentioned above, in one alternative implementation shapes may be kept lightweight and, therefore, may be required to be contained in a canvas. In this alternative implementation, as content is rendered, it is rendered onto an infinite, device-independent canvas that is associated with a coordinate system. The 'canvas' element can thus position content according to absolute coordinates. The canvas element can also optionally specify a viewport, which defines the clipping, a transform, a preferred aspect ratio, and how the viewport maps into the parent space. If no viewport is specified, the 'canvas' element specifies only a grouping of drawing operations, and can set a transform, opacity, and other compositing attributes.

Below is an example of markup for a sample canvas:

Note that in one implementation, when coordinates are given without units, they are treated as 'logical pixels' at 96 dpi, and in the above example the line will be 200 pixels long. In addition to the coordinates, other properties include width, height, horizontal and vertical alignment, and ViewBox (of type Rect; unset by default, or (0,0,0,0), meaning no adjustment is performed and the stretch and alignment properties are ignored). As described above in general terms with reference to Fig. 20, other properties include stretch, which when unspecified preserves the original size, or can be set to 1) Fill, in which the aspect ratio is not preserved and the content is scaled to fill the bounds set by top/left/width/height, 2) Uniform, which scales uniformly until the image fits within the bounds set by top/left/width/height, or 3) UniformToFill, which scales uniformly to fill the bounds set by top/left/width/height, clipping as necessary.
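The logical-pixel rule above can be stated as a one-line conversion (a sketch of the unit rule, not part of the markup system itself): a unitless value is 1/96 inch, so its device-pixel size scales with the output DPI:

```python
def logical_to_device_pixels(value, device_dpi=96.0):
    """A unitless coordinate is a 'logical pixel' of 1/96 inch; on a
    96-dpi device one logical pixel equals one device pixel."""
    return value * device_dpi / 96.0
```

At 96 dpi the 200-unit line in the example is 200 device pixels; on a hypothetical 192-dpi device it would span 400 device pixels while remaining the same physical length.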

To correlate further with the lower-level object model, the transform property establishes a new coordinate system for the element's children, while the clip property limits the region in which content can be drawn on the canvas; by default, the clipping path is defined as the bounding box. The ZIndex property can be used to specify the rendering order of nested elements within a canvas panel.

ViewBox specifies a new coordinate system for the contents, for example, by redefining the extent and origin of the viewport. Stretch helps specify how the contents map into the viewport. The value of the ViewBox attribute is a list of four unitless numbers <min-x>, <min-y>, <width> and <height>, separated for example by whitespace and/or commas, and is of type Rect. The ViewBox Rect value specifies the rectangle in user space that maps to the bounding box. It works the same as inserting a scaleX and scaleY. The stretch property (when set to something other than none) provides additional control for preserving the aspect ratio of the graphics. An additional transformation is applied to the children of the given element to achieve the specified effect.

In the example above, the effective result of the rectangle in the sample markup, for each stretch rule, will be:

None - from (100,600) to (200,650)
Fill - from (100,100) to (900,700)
Uniform - from (100,?) to (900,?); the new height is 400, and the content will be centered based on HorizontalAlign and VerticalAlign.
UniformToFill - from (?,100) to (?,700); the new width is 1200, and the content will again be centered based on HorizontalAlign and VerticalAlign.

If there is a transform on the canvas, it is essentially applied above (e.g., in the tree) the mapping to the ViewBox. Note that this mapping will stretch any of the elements in the canvas, such as boxes, text, and so forth, not just shapes. Further, note that if a ViewBox is specified, the canvas no longer sizes to its contents but has the specified size. If width and height are also specified, the alignment properties are used to fit the ViewBox into the specified width and height.
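The four stretch rules can be sketched as scale computations (a simplified model; the alignment/centering and clipping steps are omitted). Using the sample numbers above, with content of 100x50 and a viewport of 800x600, Uniform yields a scale of 8 (new height 400) and UniformToFill a scale of 12 (new width 1200), matching the list above:

```python
def stretch_scale(content_w, content_h, viewport_w, viewport_h, mode):
    """Return the (sx, sy) scale a stretch rule applies to the content."""
    if mode == "None":
        return (1.0, 1.0)                    # keep original size
    sx = viewport_w / content_w
    sy = viewport_h / content_h
    if mode == "Fill":
        return (sx, sy)                      # aspect ratio not preserved
    if mode == "Uniform":
        s = min(sx, sy)                      # fit inside the bounds
        return (s, s)
    if mode == "UniformToFill":
        s = max(sx, sy)                      # fill the bounds, may clip
        return (s, s)
    raise ValueError(f"unknown stretch mode {mode!r}")
```

The min/max choice is the whole difference between Uniform and UniformToFill: both preserve the aspect ratio, but one guarantees the content fits and the other guarantees the bounds are covered.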

The 'Clip' (clipping) attribute can be applied to each element in the object model. On some elements, notably shapes, it is exposed directly as a common language runtime property, whereas on others (e.g., most controls) this property is set via a DynamicProperty.

In general, the clipping path limits the region in which content can be drawn, as generally represented in Fig., where a button is shown in unclipped form 2702 and with a clipping path specified 2704 (the dashed line represents the clipping path). Conceptually, anything drawn outside the region bounded by the currently active clipping path is not drawn. A clipping path can be thought of as a mask in which the pixels outside the clipping path are black with an alpha value of zero, and the pixels inside the clipping path are white with an alpha value of one (with the possible exception of anti-aliasing along the edge of the silhouette).
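The mask analogy can be sketched for the simplest case of a rectangular clipping path (ignoring anti-aliasing): the mask alpha is 1 inside the path and 0 outside, and a drawn pixel is multiplied by that alpha:

```python
def clip_mask_alpha(x, y, clip_rect):
    """Alpha of the clip mask at pixel (x, y) for a rectangular clip
    given as (left, top, right, bottom): 1.0 inside, 0.0 outside."""
    left, top, right, bottom = clip_rect
    inside = left <= x < right and top <= y < bottom
    return 1.0 if inside else 0.0
```

Multiplying a source pixel's alpha by this mask value reproduces the behavior described above: pixels outside the clip contribute nothing, pixels inside are unchanged.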

The clipping path is specified by a Geometry object, either inline or, more commonly, in a resources section. The clipping path is then used and/or referenced via the 'Clip' property on an element, as shown in the example below:

Note that animating the 'Clip' property is similar to animating transforms:

A path is drawn by specifying the 'Geometry' data and the rendering properties, such as Fill, Stroke, and StrokeWidth, on the path element. The following is an example of markup for a path:

The path 'Data' string is of type Geometry. A more verbose and complete way to specify a drawn path is via the complex property syntax described above. The markup (for example, as below) is fed directly into the builder classes for Geometry described above:

The path data string can also be described using the following notation system, which expresses the grammar of path data strings:

*: 0 or more
+: 1 or more
?: 0 or 1
(): grouping
|: separates alternatives
double quotes surround literals

The following is path data string information described with the above notation system (note that in one implementation, the FillMode property can be specified here, instead of at the element level):
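Since the grammar itself is not reproduced in the text above, the following sketch assumes an SVG-style flavor of that grammar (single-letter commands followed by whitespace/comma-separated numbers) and tokenizes a path data string into command groups:

```python
import re

def parse_path_data(data):
    """Tokenize a path data string such as "M 10,10 L 100 10 Z" into
    [command, number, ...] groups. This assumes a simplified SVG-like
    grammar; it is an illustration, not the patent's own grammar."""
    tokens = re.findall(r"[A-Za-z]|-?\d+(?:\.\d+)?", data)
    groups, current = [], None
    for tok in tokens:
        if tok.isalpha():              # a command letter starts a group
            current = [tok]
            groups.append(current)
        else:
            current.append(float(tok)) # numeric parameter of the command
    return groups
```

A grammar-driven parser of this kind is what lets the same string form be handed either to a type converter (simple case) or expanded into full geometry objects (complex property syntax case).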

The 'image' element (Fig.) indicates that the contents of a complete file are to be rendered into a given rectangle within the current user coordinate system. The image (indicated by the 'image' tag) can refer to raster image files such as PNG or JPEG files, or to files with the MIME type 'image/wvg', as described in the following example:

The table below presents information on some illustrative properties for images:

Name | Type | R/RW | Default | Description
Top | BoxUnit | | | Coordinate of the top edge of the image
Left | BoxUnit | | | Coordinate of the left edge of the image
Width | BoxUnit | | | The width of the image
Height | BoxUnit | | | The height of the image
Source | ImageData | | | The source of the image
Dpi | Float | | 96 (?) | The target DPI used for sizing
HorizontalAlign | enum { Left (?), Center (?), Right (?) } | | Center |
VerticalAlign | enum { Top (?), Middle (?), Bottom (?) } | | Middle |
Stretch | enum Stretch { None, Fill, Uniform, UniformToFill } | | None | None: original size is preserved. Fill: aspect ratio is not preserved, and the content is scaled to fill the bounds set by tlbh (top/left/width/height). Uniform: scales uniformly until the image fits within the bounds set by tlbh. UniformToFill: scales uniformly to fill the bounds set by tlbh.
ReadyState | enum { MetaDataReady, Loading, Loaded, LoadError } | | |
LoadCounter | Int | Read | Null | Counter that increments while ReadyState is Loading
Name | String | | | Customizable text for the image

The shapes described above correspond to geometry that is drawn with inherited and cascaded presentation properties. The following table lists example shape properties for the basic shape elements described above (rectangle, ellipse, line, polyline, polygon). Note that these basic shapes can have stroke properties and fill properties, can be used as clipping paths, have inheritance characteristics, and apply at both the element level and the resource level:

Name | Type | R/RW | Default | Description
Fill | Brush | RW | null |
FillOpacity | Float | RW | 1.0 |
Stroke | Brush | RW | null |
StrokeOpacity | Float | RW | 1.0 |
StrokeWidth | BoxUnit | RW | 1 pixel | The width of the stroke; 1 pixel = 1/96 inch
FillRule | enum { EvenOdd, NonZero } | RW | EvenOdd | FillRule specifies the algorithm used to determine which parts of the canvas are included inside the shape.
StrokeLineCap | enum { Butt, Round, Square, Diamond } | RW | Butt | StrokeLineCap specifies the shape to be used at the ends of open subpaths when they are stroked.
StrokeLineJoint | enum { Miter, Round, Bevel } | RW | Miter | StrokeLineJoin specifies the shape to be used at the corners of paths (or other vector shapes) when they are stroked.
StrokeMiterLimit | Float | RW | 4.0 | The limit on the ratio of MiterLength to StrokeWidth; the value must be >= 1.
StrokeDashArray | PointList | RW | null | StrokeDashArray controls the pattern of dashes and gaps used to stroke paths. <dasharray> contains a list of <number> items, separated by whitespace or commas, that specify the lengths of alternating dashes and gaps in user units. If an odd number of values is provided, the list is repeated to yield an even number of values; thus, the dash array 5 3 2 is equivalent to the dash array 5 3 2 5 3 2.
StrokeDashOffset | Point | RW | | StrokeDashOffset specifies the distance into the dash pattern at which to start the dash.
Transform | Transform | RW | null | Transform establishes a new coordinate system for the element's children.
Clip | Geometry | RW | null | Clip limits the region to which paint can be applied on the canvas; the default clipping path is defined as the bounding box.
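The odd-length StrokeDashArray rule in the table can be stated in a few lines of Python (a sketch of the rule only, not the real property implementation):

```python
def normalize_dash_array(dashes):
    """If the dash/gap list has an odd number of entries, repeat it once
    so the stroking pattern has an even number of values."""
    dashes = list(dashes)
    return dashes * 2 if len(dashes) % 2 else dashes
```

For example, `normalize_dash_array([5, 3, 2])` yields `[5, 3, 2, 5, 3, 2]`, the equivalence given in the table.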

The following is an example of the markup syntax for a rectangle:

The rectangle has the following properties in the object model (note that rectangles are read/write, have default values equal to zero, support inheritance, and apply at both the element level and the resource level):

Name | Type | Description
Top | BoxUnit | The coordinate of the top side of the rectangle
Left | BoxUnit | The coordinate of the left side of the rectangle
Width | BoxUnit | The width of the rectangle
Height | BoxUnit | The height of the rectangle
RadiusX | BoxUnit | For rounded rectangles, the X-axis radius of the ellipse used to round the corners of the rectangle. If a negative X-axis radius is specified, the absolute value of the radius is used.
RadiusY | BoxUnit | For rounded rectangles, the Y-axis radius of the ellipse used to round the corners of the rectangle. If a negative Y-axis radius is specified, the absolute value of the radius is used.

The following is an example of the markup syntax for the circle:

The circle has the following properties in the object model (note that circles are read/write, have default values equal to zero, support inheritance, and apply at both the element level and the resource level):

Name | Type | Description
CenterX | BoxUnit | The X-coordinate of the center of the circle
CenterY | BoxUnit | The Y-coordinate of the center of the circle
Radius | BoxUnit | The radius of the circle

The following is an example of the markup syntax for an ellipse:

The ellipse has the following properties in the object model (note that ellipses are read/write, have default values equal to zero, support inheritance, and apply at both the element level and the resource level):

Name | Type | Description
CenterX | Coordinate | The X-coordinate of the center of the ellipse
CenterY | Coordinate | The Y-coordinate of the center of the ellipse
RadiusX | Length | The radius of the ellipse along the X-axis. If a negative X-axis radius is specified, the absolute value of the radius is used.
RadiusY | Length | The radius of the ellipse along the Y-axis. If a negative Y-axis radius is specified, the absolute value of the radius is used.

The following is an example of the markup syntax for the line:

The line has the following properties in the object model (note that lines are read/write, have default values equal to zero, support inheritance, and apply at both the element level and the resource level):

Name | Type | Description
X1 | BoxUnit | The X-coordinate of the start of the line. The default value is 0.
Y1 | BoxUnit | The Y-coordinate of the start of the line. The default value is 0.
X2 | BoxUnit | The X-coordinate of the end of the line. The default value is 0.
Y2 | BoxUnit | The Y-coordinate of the end of the line. The default value is 0.

A 'polyline' specifies a set of connected straight line segments. Typically, a polyline defines an open shape.

The following is an example of the markup syntax for a polyline:

The polyline has the following properties in the object model (note that polylines are read/write, have null default values, support inheritance, and apply at both the element level and the resource level):

Name | Type | Description
Points | PointCollection | The points that make up the polyline. Coordinate values are specified in the user coordinate system.

The 'polygon' element defines a closed shape comprising a set of connected straight line segments. The following is an example of the markup syntax for a polygon:

The polygon has the following properties in the object model (note that polygons are read/write, have null default values, support inheritance, and apply at both the element level and the resource level):

Name | Type | Description
Points | PointCollection | The points that make up the polygon. Coordinate values are specified in the user coordinate system. If an odd number of coordinates is provided, the element is in error.

The grammar for specifying the points in a polyline or polygon is described by the following notation system:

*: 0 or more
+: 1 or more
?: 0 or 1
(): grouping
|: separates alternatives
double quotes surround literals

The following describes how points are specified in a polyline or polygon using the above notation system:
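Although the grammar itself is not reproduced in the text above, the whitespace- and/or comma-separated points list can be sketched as follows (the odd-coordinate error check follows the polygon description above):

```python
import re

def parse_points(text):
    """Parse a polyline/polygon Points value such as "10,20 30,40" into
    (x, y) pairs. An odd number of coordinates is an error."""
    nums = [float(n) for n in re.split(r"[\s,]+", text.strip()) if n]
    if len(nums) % 2:
        raise ValueError("odd number of coordinates")
    return list(zip(nums[0::2], nums[1::2]))
```

For example, `parse_points("10,20 30,40")` yields `[(10.0, 20.0), (30.0, 40.0)]`, while an odd-length list such as "10,20 30" is rejected.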

Conclusion

As can be seen from the foregoing detailed description, a system, method, and element/object model are provided that give program code various mechanisms for interacting with a scene graph. The system, method, and object model are straightforward to use, yet powerful, flexible, and extensible.

Although the invention admits of various modifications and alternative constructions, certain illustrated embodiments are shown in the drawings and have been described above in detail. However, it should be understood that the particular forms disclosed are not intended to limit the invention; on the contrary, the invention covers all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.

1. A system implemented on a computer in a computing environment for coordinated interaction of computer program developers with a scene graph data structure to create graphics, the computer-implemented system comprising:

a markup language containing graphics commands, the graphics commands containing a string format and an object representation, the object representation containing a graphics class;

an object model of graphics containing a class of graphical elements, the element class containing a shape class, an image class, a video class, and a canvas class, the element class being integrated with a common property system;

a type converter configured to convert graphics commands in the string format into a visual API object;

a parser/translator,

the parser/translator being configured to interpret graphics commands, the graphics commands containing direct code calls, the code calls addressing the object model, and the graphics commands being written using the markup language; and

the parser/translator being further configured to call the type converter, which is configured to convert graphics commands in the string format into a visual API object; and

the parser/translator being further configured to interpret markup code and, when interpreting the markup code, to add members of the class of graphical elements to an element tree;

a presenter system configured to translate the element tree into graphics calls to a visual application programming interface;

the visual application programming interface being configured to interact with the presenter system, to interact with the parser/translator, and to interact with direct code calls from programming languages; and

the visual application programming interface being further configured, in response to requests from the presenter system and the parser/translator, to create objects in the scene graph; and

a display interface configured to display the graphical objects of the scene graph.

2. The system according to claim 1, characterized in that the elements of the element object model correlate with objects of the scene graph object model.

3. The system according to claim 1, wherein the markup includes inline text containing a string that specifies a property of an element, and the translator communicates with the type converter to convert the string into a property of an object.

4. The system according to claim 1, characterized in that the markup contains inline text containing complex property syntax, the property syntax specifying a set of vector graphics attributes.

5. The system according to claim 4, characterized in that the inline text is identified by a reference that refers to another location in the markup.

6. The system according to claim 4, characterized in that the inline text is identified by a reference that references a file.

7. The system according to claim 4, characterized in that the inline text is identified by a reference that corresponds to a file that can be downloaded from a remote location on a network.

8. The system according to claim 1, characterized in that the markup contains inline text containing complex property syntax corresponding to a graphics resource.

9. The system of claim 8, wherein the graphics resource describes a 'visual brush' object, and the translator provides resource-level data for communicating directly with the visual application programming interface level to create a visual brush object corresponding to the element described by the complex property syntax.

10. The system according to claim 9, characterized in that the resource-level data is identified by a reference that refers to another location in the markup.

11. The system according to claim 9, characterized in that the resource-level data is identified by a reference that references a file.

12. The system according to claim 9, characterized in that the resource-level data is identified by a reference that refers to a file that can be downloaded from a remote location on a network.

13. The system according to claim 1, characterized in that one of the elements of the element object model includes an 'image' element.

14. The system according to claim 1, characterized in that the 'shape' element class includes a 'polyline' element.

15. The system according to claim 1, characterized in that the 'shape' element class includes a 'polygon' element.

16. The system according to claim 1, characterized in that the 'shape' element class includes a 'path' element.

17. The system according to claim 1, characterized in that the 'shape' element class includes a 'line' element.

18. The system according to claim 1, characterized in that the 'shape' element class includes an 'ellipse' element.

19. The system of claim 16, characterized in that the 'shape' element class includes a 'circle' element.

20. The system according to claim 1, characterized in that the 'shape' element of the shape class contains fill properties.

21. The system according to claim 1, characterized in that the 'shape' element of the shape class contains stroke properties.

22. The system according to claim 1, characterized in that the 'shape' element of the shape class contains clipping properties.

23. The system according to claim 1, characterized in that the 'shape' element of the shape class contains transform properties.

24. The system according to claim 1, characterized in that the 'shape' element of the shape class contains effect data.

25. The system according to claim 1, characterized in that the 'shape' element of the shape class contains opacity data.

26. The system according to claim 1, characterized in that the 'shape' element of the shape class contains blend-mode data.

27. The system according to claim 1, characterized in that the parser/translator requests instantiation of at least one builder to create the objects.



 

Same patents:

The invention relates to methods of information exchange in computer networks

FIELD: technologies of data processing in microprocessor systems, in particular, generation of visual data displays in automated expert systems, possible use in systems for visual analysis and prediction of variable multi-parameter states of systems or processes, including individual conditions of certain person.

SUBSTANCE: in known method for color-code display from a set of all parameters on basis of one or more topic signs, subsets of parameters are grouped and ranked, with which methods of color code display are used separately, while in accordance to ranks of subsets, width of strips of parameters of subsets is formed and/or position of strips of subsets is determined on diagram relatively to strips of other subsets with their possible isolation.

EFFECT: less time needed for faster and improved quality monitoring of object states and improved ergonomics of visualization results.

8 cl, 2 dwg

The invention relates to the field of physical optics and can be used in optical astronomy

The invention relates to computing, and in particular to systems, data mining

The invention relates to computer technology and can be used in data mining systems, including processing and analysis of geological and geophysical information and other data obtained in the study of natural or socio-economic objects or phenomena

FIELD: technologies of data processing in microprocessor systems, in particular, generation of visual data displays in automated expert systems, possible use in systems for visual analysis and prediction of variable multi-parameter states of systems or processes, including individual conditions of certain person.

SUBSTANCE: in known method for color-code display from a set of all parameters on basis of one or more topic signs, subsets of parameters are grouped and ranked, with which methods of color code display are used separately, while in accordance to ranks of subsets, width of strips of parameters of subsets is formed and/or position of strips of subsets is determined on diagram relatively to strips of other subsets with their possible isolation.

EFFECT: less time needed for faster and improved quality monitoring of object states and improved ergonomics of visualization results.

8 cl, 2 dwg

FIELD: computer engineering.

SUBSTANCE: the system contains markup language, object model of graphics, converter of types, analyzer-translator, system of presenters, interface for applied programming of visuals and indication interface.

EFFECT: ensured organized interaction of computer program developers with data structure of scene graph for creation of graphics.

27 cl, 31 dwg

FIELD: physics, processing of images.

SUBSTANCE: invention is related to methods of television image processing, namely, to methods of detection and smoothing of stepped edges on image. Method consists in the fact that pixels intensity values (PIV) of image are recorded in memory; for every line: PIV of the current line is extracted; PIV of line that follows the current line is extracted; dependence of pixel intensity difference module dependence (PIDMD) is calculated for the mentioned lines that correspond to single column; PIDMD is processed with threshold function for prevention of noise; "hill" areas are determined in PIDMD; single steps are defined out of "hill" areas; PIV of line that is next nearest to the current line is extracted; for current line and line next nearest to the current line operations of "hill" areas definition are repeated; for every part of image line that is defined as single step, availability of stepped area is checked in image in higher line, if so, these two stepped areas are defined as double stepped area (DSA); parts of DSA lines are shifted in respect to each other, and DSA is divided into two single steps; values of line pixels intensity are extracted for the line that is located in two lines from the current line, and operations of "hill" areas definition are repeated; single steps are smoothened by averaging of pixel intensity values.

EFFECT: improvement of quality of image stepped edges correction.

2 dwg

FIELD: physics, measurement.

SUBSTANCE: invention concerns methods of electromagnetic signal processing for tool of modelling and visualisation of stratified underground fractions surrounding the tool. Electromagnetic signals corresponding to current position of tool measurement point are obtained for measurement during drilling, and multilayer model is generated by the electromagnetic signals. Histogram describing multilayer model uncertainty is used to generate multiple colour tone values, representing formation property forecasts for depth level over/under the tool, and corresponding multiple saturation values. Screen diagram is generated and displayed. Screen diagram uses colours for visualisation of formation property forecast for depth levels over and under the tool for further positions of measurement point. New column in screen diagram is generated for current measurement point. Colours of new column are based on multiple colour tone and saturation values obtained from histogram. Saturation values of new column represent uncertainties of respective forecasts.

EFFECT: modeling and visualisation of underground fraction properties during well drilling.

25 cl, 10 dwg

FIELD: information technologies.

SUBSTANCE: method for reproduction of diagram related to document includes conversion of object-diagram into description of diagram on the basis of figures, where specified object-diagram, describes this diagram with application of diagram elements, and specified description of diagram on the basis of figures describes this diagram with application of figures; and saving object-diagram in specified document so that access to initial data contained in diagram is possible. System includes object-diagram, describing diagram with application of diagram elements; graphical module capable of reproducing figures, module of diagram creation, generating description of diagram on the basis of figures, based on specified object-diagram, where specified description of diagram on the basis of figures describes this diagram with application of figures, which may be reproduced by specified graphical module.

EFFECT: consistent rendering and processing of charts across various applications; unified, high-quality rendering of charts.

25 cl, 5 dwg
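A minimal sketch of the conversion described above, assuming a simple bar chart: the chart object keeps the raw series (so the initial data stays accessible in the document), while the shape-based description reduces it to rectangles that a generic graphics module can render. All class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ChartObject:
    """Chart described with chart elements: named series of raw values."""
    series: dict  # name -> list of values (the initial data stays accessible)

@dataclass
class Shape:
    kind: str
    x: float
    y: float
    w: float
    h: float

def to_shape_description(chart, width=100.0, height=50.0):
    """Convert the chart object into a shape-based description: one rectangle
    per bar, sized against the chart's maximum value."""
    values = [v for vs in chart.series.values() for v in vs]
    peak = max(values)
    bar_w = width / len(values)
    shapes = []
    for i, v in enumerate(values):
        bar_h = height * v / peak
        shapes.append(Shape("rect", i * bar_w, height - bar_h, bar_w, bar_h))
    return shapes

chart = ChartObject(series={"sales": [10, 30, 20]})
shapes = to_shape_description(chart)                   # rendered by the graphics module
document = {"chart_object": chart, "shapes": shapes}   # chart object saved alongside
```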

FIELD: printing industry.

SUBSTANCE: a background area is detected in a bitmap image and its background type is determined; a command related to displaying the background is saved into a metafile; multicoloured areas are detected in the bitmap image and saved into the metafile as commands related to displaying bitmap fragments; single-coloured areas are detected in the bitmap image, and commands related to displaying the single-coloured areas are saved into the metafile.

EFFECT: high display quality of a bitmap image converted into a metafile, with a considerable reduction in saved data volume compared to the memory required to store the initial digital bitmap image.

7 cl, 17 dwg
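The region classification described above can be sketched roughly as follows, using one-row "areas" for brevity: the dominant colour becomes a background command, single-coloured rows become flat-fill commands, and multicoloured rows are kept as bitmap fragments. The command vocabulary is invented for illustration.

```python
def bitmap_to_metafile(pixels):
    """Classify each row of a tiny bitmap and emit metafile display commands:
    the most common colour overall is recorded first as the background,
    single-coloured rows become one flat-fill command each, and multicoloured
    rows are stored as raw bitmap fragments."""
    flat = [c for row in pixels for c in row]
    background = max(set(flat), key=flat.count)
    commands = [("fill_background", background)]
    for y, row in enumerate(pixels):
        colours = set(row)
        if colours == {background}:
            continue  # already covered by the background command
        if len(colours) == 1:
            commands.append(("fill_area", y, row[0]))
        else:
            commands.append(("draw_fragment", y, tuple(row)))
    return commands

image = [
    ["white", "white", "white"],
    ["red", "red", "red"],
    ["white", "blue", "green"],
]
meta = bitmap_to_metafile(image)
```

The saved-volume benefit is visible even here: two of the three rows collapse into one-token commands instead of full pixel data.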

FIELD: physics, computer engineering.

SUBSTANCE: the invention relates to automated drawing means. The method includes identifying a previously drawn object within a grid with a first gridline spacing; determining a dimensional unit of said object; and automatically adjusting the gridline spacing from the first to a second spacing as a function of the dimensional unit, where the first spacing differs from the second and where at least some of the identifying, determining or adjusting steps are carried out by a computer processing unit.

EFFECT: faster drawing by dynamically adapting the gridline spacing to the object currently being drawn.

20 cl, 9 dwg
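One plausible reading of the automatic adjustment described above: derive the object's dimensional unit from its size and snap the gridline spacing to a convenient 1-2-5 series. The heuristic below is an assumption for illustration, not the claimed method.

```python
import math

def adjusted_grid_spacing(object_size, target_cells=10):
    """Automatically pick a 'nice' gridline spacing (1, 2 or 5 times a power
    of ten) so the identified object spans roughly target_cells grid cells."""
    raw = object_size / target_cells            # ideal spacing for this object
    magnitude = 10.0 ** math.floor(math.log10(raw))
    for step in (1, 2, 5, 10):
        if raw <= step * magnitude:
            return step * magnitude
    return 10 * magnitude  # unreachable; kept for clarity

# A 12 mm bolt head and a 450 mm panel each get a grid on their own scale:
fine = adjusted_grid_spacing(12)     # 2 mm grid
coarse = adjusted_grid_spacing(450)  # 50 mm grid
```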

FIELD: physics, video.

SUBSTANCE: the invention relates to methods of presenting a collection of images. The computer-implemented method for dynamic visualisation of a collection of images in the form of a collage includes obtaining an image from the collection. The method further includes adjusting the parameters of dynamic visualisation and analysing the distribution of colours in local areas of the image and the collage. The method also includes modifying the image by adding decorative elements whose appearance depends on the distribution of colours in the local areas of the image and the collage, and modifying the collage by changing the appearance of the decorative elements in the image.

EFFECT: improved visual quality of a collection of images owing to automated modification of the decorative elements' display depending on the colours of the images.

6 cl, 5 dwg
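The colour-dependent decoration step described above might, in the simplest case, average the local colours of the image and of the collage and blend them, so the decorative element harmonises with both. The function names and the blending rule in this sketch are assumptions.

```python
def decoration_colour(image_region, collage_region):
    """Choose a decorative-element colour from the average colours of the
    local image area and the surrounding collage area, so the decoration
    blends with both."""
    def mean_rgb(pixels):
        n = len(pixels)
        return tuple(sum(p[i] for p in pixels) / n for i in range(3))
    img = mean_rgb(image_region)
    col = mean_rgb(collage_region)
    return tuple(round((a + b) / 2) for a, b in zip(img, col))

frame_colour = decoration_colour(
    [(200, 180, 40), (220, 160, 60)],   # colours near the image border
    [(20, 20, 20), (40, 40, 40)],       # colours of the collage background
)
```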

FIELD: physics.

SUBSTANCE: the invention relates to automatic acquisition of clinical MRI image data. The method comprises: acquiring a first survey image with a first field of view, the first survey image having a first spatial resolution; locating a first region of interest and at least one anatomic landmark in the first survey image, a step comprising creating a three-dimensional volume (202), determining (132) a set of contours (204) in the three-dimensional volume, identifying one or more anatomic landmarks (206) in the three-dimensional volume, and automatically segmenting the three-dimensional volume (208); determining the position and orientation of the first region of interest using the anatomic landmark, the position and orientation being used to plan a second survey image; acquiring the second survey image with a second field of view, the second survey image having a second spatial resolution higher than the first; creating geometry planning for the anatomic region of interest using the second survey image; and acquiring a diagnostic image of the anatomic region of interest using the geometry planning.

EFFECT: fast and accurate planning of diagnostic scanning.

15 cl, 4 dwg
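The two-survey planning loop described above can be outlined as a skeleton in which the scanner and image-analysis steps are injected as callables: a coarse survey locates the landmarks, their pose schedules a finer survey, and the refined geometry drives the diagnostic scan. The numbers and stand-in functions below are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Scan:
    field_of_view: float   # mm
    resolution: float      # mm per voxel (smaller is finer)

def plan_diagnostic_scan(acquire, find_landmarks, locate_roi):
    """Skeleton of the planning loop; the callables stand in for the scanner
    and the contour/segmentation steps."""
    survey1 = acquire(Scan(field_of_view=450.0, resolution=4.0))
    landmarks = find_landmarks(survey1)        # contours + segmentation inside
    pose = locate_roi(survey1, landmarks)      # position/orientation of the ROI
    survey2 = acquire(Scan(field_of_view=pose["extent"], resolution=1.5))
    geometry = locate_roi(survey2, landmarks)  # refined geometry planning
    return acquire(Scan(field_of_view=geometry["extent"], resolution=0.8))

# Toy stand-ins: the real steps involve image segmentation on the scanner.
diagnostic = plan_diagnostic_scan(
    acquire=lambda scan: scan,
    find_landmarks=lambda image: ["sternum"],
    locate_roi=lambda image, lm: {"extent": image.field_of_view / 2},
)
```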

FIELD: physics, computer engineering.

SUBSTANCE: the invention relates to creating a processed set of image data. The system comprises: a plurality of sets of parameter data, wherein a set of parameter data corresponds to a clinically classified population of patients and represents a transfer function, the set of parameter data including a statistical distribution of measured characteristics of the clinically classified population; a selection unit, in the form of a computer hardware processing unit, for selecting a set of parameter data from the plurality of sets; and an image processing subsystem, in the form of said computer hardware processing unit, for applying the transfer function represented by the selected set of parameter data to at least part of a set of image data characteristic of a patient, to obtain a processed set of image data.

EFFECT: high accuracy of processing a set of image data of a patient.

12 cl, 5 dwg
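A toy sketch of the selection-plus-transfer-function flow described above, assuming the transfer function is a simple intensity window whose centre and width come from the population's statistical distribution (mean and standard deviation); all names and the windowing rule are assumptions, not the claimed system.

```python
def select_parameter_set(parameter_sets, patient_class):
    """Selection unit: pick the parameter set whose clinical classification
    matches the patient."""
    return parameter_sets[patient_class]

def apply_transfer_function(image, params):
    """Image-processing step: a linear window whose centre and width are
    derived from the population's measured mean and standard deviation."""
    centre, width = params["mean"], 4 * params["sd"]
    lo = centre - width / 2
    return [min(1.0, max(0.0, (v - lo) / width)) for v in image]

parameter_sets = {
    "adult": {"mean": 100.0, "sd": 25.0},
    "paediatric": {"mean": 60.0, "sd": 15.0},
}
params = select_parameter_set(parameter_sets, "adult")
processed = apply_transfer_function([0.0, 100.0, 300.0], params)
```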

FIELD: computer engineering.

SUBSTANCE: the system contains a markup language, a graphics object model, a type converter, a parser-translator, a system of presenters, an application programming interface for visuals, and a display interface.

EFFECT: organised interaction of computer program developers with the scene-graph data structure for creating graphics.

27 cl, 31 dwg
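To illustrate how a markup language, type converter and parser-translator can cooperate to build a scene graph, here is a minimal sketch: element tags become visual objects, nesting becomes graph structure, and a type converter turns attribute strings into typed values. The element names and conversion rules are invented for illustration and are not the patented markup.

```python
import xml.etree.ElementTree as ET

class Visual:
    """Scene-graph node built from markup; children form the graph structure."""
    def __init__(self, kind, **props):
        self.kind, self.props, self.children = kind, props, []

def parse_markup(text):
    """Parser-translator: turn element tags into scene-graph objects and run
    a type converter on string attribute values (e.g. "10,20" -> (10.0, 20.0))."""
    def convert(value):
        if "," in value:
            return tuple(float(p) for p in value.split(","))
        try:
            return float(value)
        except ValueError:
            return value  # keep strings such as colour names
    def build(elem):
        node = Visual(elem.tag, **{k: convert(v) for k, v in elem.attrib.items()})
        node.children = [build(child) for child in elem]
        return node
    return build(ET.fromstring(text))

scene = parse_markup(
    '<Canvas><Rectangle TopLeft="10,20" Width="100" Fill="Blue"/></Canvas>'
)
```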

FIELD: technological processes.

SUBSTANCE: the invention relates to components and subsystems of a computer system for maintaining and providing graphical user interface presentations. A graphics animation system is disclosed that supports time variation of the properties of elements within a graphical image. The animation system uses an image structure to maintain a set of image elements, which include variable property values. It also uses a property system supporting properties of the image elements maintained by the image structure; these include dynamic properties, which may vary in time and accordingly affect the presentation of the corresponding element in the graphical image. The animation system contains animation classes from which instances of animation objects are created; at run time an instance is associated with a property of an image element. The animation object instances provide time-varying values that affect the values assigned to dynamic properties.

EFFECT: more efficient and flexible execution of animation in graphical user interface images.

38 cl, 7 dwg
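The binding of animation-object instances to element properties described above can be sketched as follows: an animation instance yields a time-varying value, and reading a dynamic property at time t returns that value instead of the static one. The class shapes and the linear interpolation are illustrative assumptions.

```python
class Animation:
    """Animation class: instances are associated with an element property and
    produce a time-varying value between two endpoints."""
    def __init__(self, start, end, duration):
        self.start, self.end, self.duration = start, end, duration

    def value_at(self, t):
        progress = min(max(t / self.duration, 0.0), 1.0)
        return self.start + (self.end - self.start) * progress

class Element:
    """Image element whose dynamic properties may be driven by animations."""
    def __init__(self, **static_props):
        self.props = static_props
        self.animations = {}

    def animate(self, prop, animation):
        self.animations[prop] = animation  # bind animation instance to property

    def property_at(self, prop, t):
        if prop in self.animations:
            return self.animations[prop].value_at(t)
        return self.props[prop]

button = Element(opacity=1.0, x=0.0)
button.animate("x", Animation(start=0.0, end=100.0, duration=2.0))
```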

FIELD: physics; computer engineering.

SUBSTANCE: the invention concerns display devices. The method of displaying bottomhole assembly (BHA) equipment using vector drawing includes parsing and interpreting initial BHA data to produce data packets corresponding to BHA components; assembling the BHA using vector-drawing constructors from a vector-drawing library, where the vector-drawing constructors represent the BHA components; and displaying the BHA at a chosen scale. The system contains a processor and a memory storing a program that implements the method of displaying the bottomhole assembly equipment.

EFFECT: representation of downhole and surface measurements with an animated drawing.

25 cl, 23 dwg
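A rough sketch of the pipeline described above: raw BHA data is parsed into per-component data packets, which are then handed to a drawing constructor that stacks scaled rectangles top-down. The data format, field names and shapes are invented for illustration.

```python
def parse_bha_data(raw):
    """Parse and interpret raw BHA description lines into data packets, one
    per component (name; outer diameter in inches; length in feet)."""
    packets = []
    for line in raw.strip().splitlines():
        name, od, length = line.split(";")
        packets.append({"name": name, "od": float(od), "length": float(length)})
    return packets

def draw_bha(packets, scale=10.0):
    """Assemble the display list: each packet is handed to a vector-drawing
    constructor (here, a rectangle per component) stacked top-down at the
    chosen scale."""
    shapes, top = [], 0.0
    for p in packets:
        h = p["length"] * scale
        shapes.append({"shape": "rect", "label": p["name"],
                       "y": top, "width": p["od"] * scale, "height": h})
        top += h
    return shapes

bha = parse_bha_data("drill bit;8.5;1.2\nmud motor;6.75;30.0")
display_list = draw_bha(bha, scale=2.0)
```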

FIELD: physics; image processing.

SUBSTANCE: the present invention relates to simulation of the movements of a virtual mannequin. The proposed graphic interface system allows two windows (10, 20) to be displayed on a screen. The first window (10) shows the general image of the mannequin (100) and allows a part of the mannequin's body to be selected directly on the screen using a selection device (a mouse, for example). The selected part of the body then appears in the second window (20) in magnified form, together with symbols (120) indicating all the degrees of freedom provided for that part of the body. An operator can act directly on the degree-of-freedom symbols (120) to lock or unlock them, as well as to directly control the kinematics of the model.

EFFECT: easier direct control of the kinematics and of the degrees of freedom of a virtual mannequin.

10 cl, 7 dwg

FIELD: physics; image processing.

SUBSTANCE: the present invention relates to a multifactor system and method of moving a virtual mannequin (10) in a virtual environment. Mannequin (10) is defined by a general position and several degrees of freedom of articulation. The method provides for: applying an attraction factor (32), acting on several degrees of freedom of articulation of mannequin (10), to move it towards a target; and applying a displacement factor (21), acting on the general position of mannequin (10) depending on parameters defining its surrounding environment, to prevent mannequin (10) from colliding with elements of the environment. The method also provides for an ergonomic factor (34), acting on several degrees of freedom of articulation of mannequin (10), for automatic adjustment of the position of mannequin (10) when moving towards the target.

EFFECT: provision of an optimum level of comfort of the positions occupied by the mannequin, without the need for further tests.

21 cl, 11 dwg
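The three factors described above can be combined in a single per-step update, sketched here in one dimension with made-up gains: attraction reduces joint error toward the target, the ergonomic factor nudges joints toward a comfortable rest pose, and the displacement factor moves the general position while being repelled by a nearby obstacle. This is an assumption about the combination, not the claimed method.

```python
def step_towards(position, joints, target, obstacle, rest_pose, dt=0.1):
    """One update of the mannequin state combining the three factors."""
    # Attraction factor: reduce joint error toward the target configuration.
    joints = [j + 0.5 * (t - j) * dt for j, t in zip(joints, target["joints"])]
    # Ergonomic factor: blend the joints toward the comfortable rest posture.
    joints = [j + 0.2 * (r - j) * dt for j, r in zip(joints, rest_pose)]
    # Displacement factor: move the general position toward the target's
    # position while being repelled by the obstacle (repulsion grows nearby).
    away = position - obstacle
    position += (0.5 * (target["position"] - position) + 1.0 / away) * dt
    return position, joints

pos, joints = step_towards(
    position=0.0,
    joints=[0.0, 1.0],
    target={"joints": [1.0, 1.0], "position": 5.0},
    obstacle=-1.0,
    rest_pose=[0.0, 0.0],
)
```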
