Method and device for suspending execution of a thread until a specified memory access occurs

FIELD: techniques for suspending execution of a thread until a specified memory access occurs.

SUBSTANCE: in one embodiment, a processor contains a set of execution units capable of executing multiple threads. A first thread includes an instruction that specifies a monitored address. Suspend logic suspends execution of the first thread, and a monitor causes resumption of the first thread in response to an access to the monitored address.

EFFECT: increased processor performance.

5 cl, 14 dwg

 

RELATED APPLICATIONS

This application is related to application serial number 10/039,777 entitled "Suspending Execution of a Thread in a Multi-threaded Processor"; application serial number 10/039,656 entitled "Coherency Techniques for Suspending Execution of a Thread Until a Specified Memory Access Occurs"; and application serial number 10/039,650 entitled "Instruction Sequences for Suspending Execution of a Thread Until a Specified Memory Access Occurs", all of which are filed on the same day as the present application.

The technical field to which the invention relates

This invention relates to the field of processors. More specifically, the present invention relates to multithreaded processors and techniques for suspending the processing of one thread in a multithreaded processor.

Prior art

A multithreaded processor is capable of simultaneously processing multiple different instruction sequences. The primary motivation for executing multiple instruction streams within a single processor is to increase processor utilization. Highly parallel architectures have developed over the years, but it is often difficult to extract sufficient parallelism from a single instruction stream to keep multiple execution units busy. Simultaneous multithreading processors attempt to execute multiple instruction threads simultaneously on the various execution resources in an attempt to better utilize those resources. Multithreading can be particularly advantageous for programs that encounter long latencies or that often wait for some event to occur. When one thread is waiting on a long-latency task, or waiting for a specified event, a different thread may be processed in the meantime.

Many different techniques have been proposed to control when a processor switches between threads. For example, some processors detect particularly long-latency events, such as L2 cache (second-level cache) misses, and switch threads in response. Although detection of such long-latency events may be effective in some circumstances, such event detection is unlikely to find all of the points at which switching threads would be efficient. In particular, event-based thread switching may fail to detect points in a program where delays are intended by the programmer.

In fact, the programmer is often in the best position to determine when it would be efficient to switch threads, so as to avoid wasteful semaphore spin-wait loops or other resource-consuming delay techniques. Thus, allowing a program to control thread switching may let programs operate more efficiently. Explicit program instructions that affect thread selection may be most useful to this end. For example, a "Pause" instruction is described in U.S. patent application No. 09/489,130, filed 21.01.2000. The Pause instruction allows a thread to be temporarily suspended either until a count is reached or until an instruction has passed through the processor pipeline. Different techniques may be devised to give programmers the ability to make more efficient use of processor resources in a multithreaded processor.

List of figures

The present invention is illustrated as an example, but not limitation, in the figures of the accompanying drawings.

Figure 1 - illustration of one embodiment of a multithreaded processor having a memory access monitor for tracking memory accesses.

Figure 2 - flowchart illustrating the operation of the multithreaded processor of Figure 1 according to one embodiment.

Figure 3 is a more detailed illustration of one embodiment of a multithreaded processor.

Figure 4 - illustration of partitioning, sharing, and duplication of resources according to one embodiment.

Figure 5 is a flowchart illustrating the suspension and resumption of thread execution according to one embodiment.

Figure 6a - flowchart illustrating activation and operation of the monitor logic according to one embodiment.

Figure 6b - flowchart illustrating the enhancement of the observability of write operations according to one embodiment.

Figure 7 is a flowchart illustrating the operation of the monitor according to one embodiment.

Figure 8 - illustration of a system according to one embodiment.

Figures 9a-9c illustrate various embodiments of program sequences utilizing the disclosed techniques and processor instructions.

Figure 10 illustrates an alternative embodiment which allows the monitored address to be cached.

Figure 11 is an illustration of various design representations or formats for simulation, emulation, and fabrication of a design using the disclosed techniques.

Detailed description

The following description discloses techniques for suspending execution of a thread until a specified memory access occurs. In the following description, numerous specific details, such as logic implementations, opcodes, means of specifying operands, resource partitioning/sharing/duplication choices, types and interrelationships of system components, and logic partitioning/integration choices, are set forth in order to provide a thorough understanding of the invention. However, those skilled in the art will appreciate that the invention may be practiced without such specific details. In other instances, control structures, gate-level circuits, and full software instruction sequences are not shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.

The disclosed techniques may allow a programmer to implement a waiting mechanism in one thread while letting other threads use processing resources. The monitor may be configured so that a thread can be suspended until a specified memory access, such as a write to a specified memory location, occurs. Thus, a thread may be resumed upon a specified event without executing a routine that wastes processor resources, such as a semaphore spin-wait loop. In some embodiments, partitions previously dedicated to the suspended thread may be relinquished to other threads while the thread is suspended. These and/or other disclosed techniques may improve overall processor throughput.

Figure 1 illustrates one embodiment of a multithreaded processor 100 having a memory access monitor 110 for tracking memory accesses. In some embodiments, the "processor" may be formed as a single integrated circuit. In other embodiments, multiple integrated circuits may together form the processor, and in yet other embodiments hardware and software (e.g., binary translation routines) may together form the processor. In the embodiment of Figure 1, a bus/memory controller 120 provides instructions for execution to a preprocessor 130. The preprocessor 130 directs the retrieval of instructions from the various threads according to instruction pointers 170. The instruction pointer logic is duplicated to support multiple threads.

The preprocessor 130 feeds instructions into thread-partitionable resources 140 for further processing. The thread-partitionable resources 140 include logically separated partitions dedicated to particular threads when multiple threads are active within the processor 100. In one embodiment, each separate partition contains only instructions from the thread to which that partition is dedicated. The thread-partitionable resources 140 may include, for example, instruction queues. In single-threaded mode, the partitions of the thread-partitionable resources 140 may be combined to form a single large partition dedicated to the one thread.

The processor 100 also includes replicated state 180. The replicated state 180 includes state variables sufficient to maintain the context of a logical processor. With replicated state 180, multiple threads can execute without contention for state variable storage. Additionally, register allocation logic may be replicated for each thread. The replicated state-related logic works with the appropriate resource partitions to prepare incoming instructions for execution.

The thread-partitionable resources 140 pass instructions to shared resources 150. The shared resources 150 operate on instructions without regard to their origin. For example, the scheduler and the execution units may be thread-unaware shared resources. The partitionable resources 140 may feed instructions from multiple threads to the shared resources 150 by alternating between the threads in a fair manner that ensures continued progress for each active thread. Thus, the shared resources may execute the instructions provided to them on the appropriate state without concern for the thread mix.

The shared resources 150 may be followed by another set of thread-partitionable resources 160. The thread-partitionable resources 160 may include retirement resources such as a re-order buffer and the like. Accordingly, the thread-partitionable resources 160 may ensure that execution of instructions from each thread concludes properly and that the appropriate state for that thread is updated appropriately.

As previously mentioned, it may be desirable to spare programmers the need for semaphore spin-wait loops that require constant polling of a memory location or even constant execution of instructions. Thus, the processor 100 of Figure 1 includes the memory access monitor 110. The memory access monitor 110 is programmable with information describing a memory access cycle for which the monitor 110 may be activated to watch. Accordingly, the monitor 110 includes a monitored cycle information register 112, which is compared against bus cycle information received from the bus/memory controller 120 by comparison logic 114. If a match occurs, a resume-thread signal is generated to mark a suspended thread for resumption. Memory access information may be obtained from internal and/or external buses of the processor.

The monitored cycle information register 112 may contain details specifying the type of cycle and/or the address which should trigger the resumption of a thread. In one embodiment, the monitored cycle information register 112 stores a physical address, and the monitor watches for any bus cycle that indicates an actual or potential write to that physical address. Such a cycle may be an explicit write cycle, and/or may be a read-for-ownership or an invalidating cycle by another bus agent attempting to take exclusive ownership of a cacheable line so that it can write to that line without an external bus transaction. In any case, the monitor may be programmed to trigger on various transactions in different embodiments.

Operation of the embodiment of Figure 1 may be further explained with reference to the flowchart of Figure 2. In one embodiment, the instruction set of the processor 100 includes a MONITOR opcode (instruction) that specifies the monitored transaction information. At step 200, the MONITOR opcode is received as part of the instruction sequence of a first thread (T1). As indicated by step 210, in response to the MONITOR opcode, the processor 100 activates the monitor 110 to watch for the specified memory access. The triggering memory access may be specified by an implicit or explicit operand. Therefore, execution of the MONITOR opcode may specify the monitored address implicitly, because the monitored address may be stored in advance in a register or other location as an implicit operand. As indicated by step 215, the monitor tests whether the specified cycle has been detected. If not, the monitor continues to watch memory accesses. If the triggering cycle is detected, then a monitored-event-pending indicator is set, as indicated by step 220.

Execution of the MONITOR opcode triggers activation of the monitor 110. The monitor 110 may begin to operate in parallel with other operations in the processor. In one embodiment, the MONITOR instruction itself only sets up the monitor 110 with the proper memory cycle information and activates the monitor 110, without unmasking monitored events. In other words, in this embodiment, after execution of the MONITOR opcode, monitored events may accumulate, but may not be recognized unless they are explicitly unmasked.

Thus, at step 225, triggering of a memory wait is indicated as a separate event. In some embodiments, a memory wait (MWAIT) opcode may be used to trigger the recognition of monitored events and the suspension of thread T1. Using two separate instructions to set up and trigger the thread suspension may give a programmer added flexibility and allow more efficient programming. In an alternative embodiment, however, the memory wait is triggered by the first opcode, which also sets up the monitor 110. In either case, one or more instructions activate the monitor and enable recognition of monitored events.

In embodiments where separate opcodes are used to activate the monitor 110 and to trigger the recognition of monitored events, it may be advantageous to perform a check to ensure that the monitor has been activated before suspending the thread, as indicated by step 230. Additionally, by checking whether a monitored event is already pending (not shown), suspension of T1 may be avoided, and operation may continue at step 250. Assuming that the monitor 110 has been enabled and no monitored event is already pending, T1 may be suspended, as indicated by step 235.

With T1 suspended, the processor enters an implementation-dependent state which allows other threads to more fully utilize the processor's resources. In some embodiments, the processor may relinquish some or all of the partitions of the partitionable resources 140 and 160 that were dedicated to T1. In other embodiments, different permutations of the MONITOR opcode or associated settings may indicate which resources, if any, are to be relinquished. For example, when the programmer anticipates a shorter wait, the thread may be suspended but maintain its resource partitions. Throughput is still enhanced because the shared resources may be used exclusively by the other threads during the thread suspension period. When a longer wait is anticipated, relinquishing all partitions associated with the suspended thread allows the other threads to have additional resources, potentially increasing their throughput. The additional throughput, however, comes at the cost of the overhead associated with removing and adding partitions when threads are respectively suspended and resumed.

T1 remains suspended until a monitored event is pending. As previously discussed, the monitor 110 operates independently to detect and signal monitored events (steps 215-220). If, at step 240, the processor detects that a monitored event is pending, then T1 is resumed, as indicated by step 250. No active processing of instructions in T1 is needed for the monitored event to awaken T1. Rather, T1 remains suspended, and the enabled monitor 110 signals the event to the processor. The processor handles the event, recognizes that the event indicates that T1 should be resumed, and performs the appropriate actions to resume T1.

Thus, the embodiments of Figures 1 and 2 provide techniques that allow a thread suspended by a program to be resumed when a specified memory access occurs. In one embodiment, other events also cause T1 to be resumed. For example, an interrupt may cause T1 to resume. Such an embodiment advantageously allows the monitor to be less than perfect, in that it may miss (not detect) certain memory accesses or other conditions that should cause the thread to resume. As a result, T1 may at times be awakened unnecessarily. However, this embodiment reduces the likelihood that T1 will remain permanently frozen due to a missed event, simplifying hardware design and validation. The unnecessary awakenings of T1 may be only a minor inconvenience, since a loop may be constructed to have T1 double-check whether the condition it was awaiting truly occurred, and if not, to suspend itself again.

In some embodiments, the thread-partitionable resources, the replicated resources, and the shared resources may be arranged differently. In some embodiments, there may not be partitionable resources on both ends of the shared resources. In some embodiments, the partitionable resources may not be strictly partitioned, but rather may allow some instructions to cross partitions, or may allow partitions to vary in size depending on the thread executing in that partition or on the total number of threads being executed. Additionally, different mixes of resources may be designated as shared, duplicated, and partitioned resources.

Figure 3 illustrates further details of one embodiment of a multithreaded processor. The embodiment of Figure 3 includes, among other things, coherency-related logic 350, one embodiment of the monitor 310, and one specific embodiment of thread suspend and resume logic 377. In the embodiment of Figure 3, the bus interface 300 includes a bus controller 340, event detection logic 345, the monitor 310, and the coherency-related logic 350.

The bus interface 300 provides instructions to a preprocessor 365, which performs micro-operation (uop) generation, generating uops from macro-instructions. Execution resources 370 receive uops from the preprocessor 365, and postprocessor logic 380 retires the various uops after they are executed. In one embodiment, out-of-order execution is supported by the preprocessor, the postprocessor, and the execution resources.

Various details of operation are discussed in more detail with respect to Figures 8-9. Briefly, the MONITOR opcode may enter the processor through the bus interface 300 and be prepared for execution by the preprocessor 365. In one embodiment, a special MONITOR uop is generated for execution by the execution resources 370. The MONITOR uop may be treated by the execution units similarly to a store operation, with the monitored address being translated by address translation logic 375 into a physical address that is provided to the monitor 310. The monitor 310 communicates with the thread suspend and resume logic 377 to cause the resumption of threads. The thread suspend and resume logic may perform the partitioning and recombination of resources as the number of active threads changes.

For example, Figure 4 illustrates the partitioning, duplication, and sharing of resources according to one embodiment. Partitioned resources may be partitioned and annealed (recombined for re-use by other threads) as the number of active threads in the machine decreases and increases. In the embodiment of Figure 4, the duplicated resources include instruction pointer logic in the instruction fetch portion of the pipeline, register renaming logic in the rename portion of the pipeline, state variables (not shown, but referenced at various stages in the pipeline), and an interrupt controller (not shown, generally asynchronous to the pipeline). The shared resources in the embodiment of Figure 4 include schedulers in the scheduling stage of the pipeline, a pool of registers in the register read and write portions of the pipeline, and execution resources in the execute portion of the pipeline. Additionally, a trace cache and an L1 (first-level) data cache may be shared resources populated according to memory accesses without regard to thread context. In other embodiments, thread context may be considered in caching decisions. The partitioned resources in the embodiment of Figure 4 include two queues in the queuing stages of the pipeline, a re-order buffer in the retirement stage of the pipeline, and a store buffer. Thread-selection multiplexing logic alternates between the various duplicated and partitioned resources to provide reasonable access to both threads.

For the purposes of example, assume that the partitioning, sharing, and duplication shown in Figure 4 are utilized in conjunction with the embodiment of Figure 3 in the further description of the operation of the processor embodiment of Figure 3. In particular, further details of the operation of the embodiment of Figure 3 will now be discussed with respect to the flowchart of Figure 5. The processor is assumed to be executing in a multithreaded mode with at least two threads active.

At step 500, the preprocessor 365 receives the MONITOR opcode during execution of a first thread (T1). In one embodiment, a special monitor uop is generated by the preprocessor 365. The monitor uop is passed to the execution resources 370. The monitor uop has an associated address that indicates the address to be monitored (the monitored address). The associated address may be in the form of an explicit or implicit operand (i.e., the associated address may be taken from a predetermined register or other storage location). The associated address "indicates" the monitored address in the sense that it conveys enough information to determine the monitored address (possibly in conjunction with other registers or information). For example, the associated address may be a linear address having a corresponding physical address that is the appropriate monitored address. Alternatively, the monitored address may be given in virtual address format, or could be designated as a relative address, or specified in another known or convenient address-specifying manner. If virtual address operands are used, it may be desirable to allow general protection faults to be recognized as break events.

The monitored address may indicate any convenient unit of memory for monitoring. For example, in one embodiment, the monitored address may indicate a cache line. However, in alternative embodiments, the monitored address may indicate a portion of a cache line, a specific or selected-size portion or block of memory, which may bear different relationships to the cache line sizes of different processors, or a single address. The monitored address thus may indicate a unit that includes the data specified by the operand (and more data), or may indicate specifically the address of a desired unit of data.

In the embodiment of Figure 3, the monitored address is provided to the address translation logic 375 and passed along to the monitor 310, where it is stored in a monitored address register 335. In response to the MONITOR opcode, the execution resources 370 then enable and activate the monitor 310, as indicated by step 510 and as detailed further in Figure 6. As will be discussed below with respect to Figure 6, it may be advantageous to fence any store operations that occur after the MONITOR opcode, to ensure that stores are processed and therefore detectable before any thread suspension occurs. Thus, in this embodiment, some operations may need to occur as a result of activating the monitor 310 before any subsequent instructions can be undertaken. However, step 510 is shown as occurring in parallel with step 505 because, in this embodiment, the monitor 310 continues to operate in parallel with other operations until a break event occurs, once it is activated by the MONITOR opcode.

At step 505, a memory wait (MWAIT) opcode is received in thread 1 and issued for execution. Execution of the MWAIT opcode unmasks monitored events in the embodiment of Figure 5. In response to the MWAIT opcode, a test is performed, as indicated by step 515, to determine whether a monitored event is pending. If no monitored event is pending, then a test is performed at step 520 to check whether the monitor is active. For example, if MWAIT is executed without a preceding MONITOR having been executed, the monitor 310 would not be active. If either the monitor is inactive or a monitored event is pending, then execution of thread 1 continues at step 580.

If the monitor 310 is active and no monitored event is pending, then execution of thread 1 is suspended, as indicated by step 525. The thread suspend/resume logic 377 includes pipeline flush logic 382, which drains the processor pipeline in order to clear all instructions, as indicated by step 530. Once the pipeline has been drained, partition/anneal logic 385 causes all partitioned resources associated exclusively with thread 1 to be relinquished for use by other threads, as indicated by step 535. These relinquished resources are annealed to form a set of larger resources for the remaining active threads to utilize. For example, in the two-thread example of Figure 4, all instructions related to thread 1 are drained from both queues. Each pair of queues is then combined to provide a larger queue to the second thread. Similarly, more registers from the register pool are made available to the second thread, more entries from the store buffer are freed for the second thread, and more entries in the re-order buffer are made available to the second thread. In essence, these structures are returned to single, dedicated structures of twice the size. Of course, different proportions may result from implementations using different numbers of threads.

At steps 540, 545, and 550, various events are tested to determine whether thread 1 should be resumed. Notably, these tests are not performed by instructions executed as part of thread 1. Rather, these operations are performed by the processor in parallel with its processing of other threads. As will be discussed in further detail with respect to Figure 6, the monitor itself checks whether a monitor write event has occurred and indicates so by setting an event-pending indicator. The event-pending indicator is provided via an EVENT signal to the suspend/resume logic 377 (e.g., microcode). In one embodiment, microcode may recognize the monitored event at an appropriate instruction boundary (step 540), since this event was unmasked by the MWAIT opcode at step 505. The event detection logic 345 may detect other events, such as interrupts, that are designated as break events (step 545). Additionally, an optional timer may be used to periodically exit the memory wait state, to ensure that the processor does not become frozen due to some particular sequence of events (step 550). If none of these events signals an exit from the wait state, then thread 1 remains suspended.

If thread 1 is to be resumed, the thread suspend/resume logic 377 is again activated upon detection of the appropriate event. Again, the pipeline is flushed, as indicated by step 560, to drain instructions from the pipeline so that resources can once again be partitioned to accommodate the soon-to-be-awakened thread 1. At step 570, the appropriate resources are re-partitioned, and thread 1 is resumed at step 580.

Figure 6a illustrates further details of the activation and operation of the monitor 310. At step 600, front-end fetching for thread 1 is stopped to prevent further instructions of thread 1 from entering the machine. At step 605, the associated address operand is converted from a linear address to a physical address by the address translation logic 375. At step 610, the observability of writes to the monitored address is enhanced. In general, the purpose of this operation is to force caching agents to make write operations, which would affect the information stored at the monitored address, visible to the monitor 310. One particular implementation is discussed in more detail with respect to Figure 6b. At step 615, the physical address for monitoring is stored, although this address may be stored earlier or later in this sequence.

Next, as indicated by step 620, the monitor is enabled. The monitor watches bus cycles for writes to the physical address that is the monitored address stored in the monitored address register 335. Further details of the monitoring operation are discussed below with respect to Figure 7. After the monitor is enabled, a store fence operation is executed, as indicated by step 625. The store fence ensures that all stores in the machine are processed before execution of the MONITOR opcode completes. With all stores drained before the MONITOR completes and leaves the machine, the likelihood of entering the memory wait state erroneously is reduced. The store fence, however, is a precaution, and may be a time-consuming operation.

The store fence is optional because the MONITOR/MWAIT mechanism of this embodiment was designed as a multiple-exit mechanism. In other words, various events such as certain interrupts, system timers or on-board timers, etc., may also cause an exit from the memory wait state. Thus, it is not guaranteed in this embodiment that the only reason the thread will be resumed is a change of the monitored data value. Accordingly (see also Figures 9a-c below), in this embodiment the software should double-check whether the particular value stored in memory has changed. In one embodiment, some events, including assertion of the INTR, NMI, and SMI interrupts, machine check interrupts, and faults, are break events, and others, including powerdown events, are not. In one embodiment, assertion of the A20M pin is also a break event.

As indicated by step 630, the monitor continues to test whether occurring bus cycles indicate, or appear to indicate, a write to the monitored address. If such a bus cycle is detected, the monitored-event-pending indicator is set, as indicated by step 635. After execution of the MWAIT opcode (step 505, Figure 5), this event-pending indicator is treated as an event and causes the thread to resume at steps 560-580 of Figure 5. Additionally, events that change the address translation may cause thread 1 to resume. For example, events that cause a flush of the translation lookaside buffer may trigger the resumption of thread 1, since the translation used to generate the monitored address from a linear to a physical address may no longer be valid. For example, in a processor compatible with the Intel x86 architecture, writes to the control registers CR0, CR3, and CR4, as well as to certain machine-specific registers, may cause an exit from the memory wait state.

As noted above, Fig. 6b illustrates in more detail the enabling of the monitoring of writes to the monitored address (step 610 of Fig. 6a). In one embodiment, the processor flushes the cache line associated with the monitored address from all internal caches of the processor, as indicated by step 650. As a result of this flush, any subsequent write to the monitored address reaches the bus interface 300, enabling detection by the monitor 310, which is included in the bus interface 300. In one embodiment, the MONITOR micro-operation is modeled after, and has the same fault model as, the CLFLUSH cache line flush instruction from the x86 instruction set. The MONITOR micro-operation translates the address from linear to physical and flushes the internal caches much as CLFLUSH does; however, the bus interface recognizes the difference between MONITOR and CLFLUSH and handles the MONITOR micro-operation accordingly.

Next, as indicated by step 655, the coherency-related logic 350 of the bus interface 300 activates the read line generation logic 355 to generate a read line transaction on the processor bus. The read line transaction to the monitored address ensures that no other cache on the processor bus stores data at the monitored address in either the shared or the exclusive state (according to the well-known MESI protocol). In other protocols, other states may be used; in any case, the transaction is designed to reduce the likelihood that another agent will be able to write to the monitored address without the transaction being observable by the monitor 310. In other words, writes, or transactions indicating writes, are subsequently broadcast so that they can be detected by the monitor. Once the read line operation has completed, the monitor 310 begins to monitor transactions on the bus.

As additional transactions occur on the bus, the coherency-related logic continues to preserve the observability of the monitored address by attempting to prevent other agents from obtaining the cache line associated with the monitored address for exclusive ownership. According to one bus protocol, this can be accomplished by the snoop response generation logic 360, which asserts the HIT# signal during the snoop phase of any read of the monitored address, as indicated by step 660. Asserting HIT# prevents other caches from moving beyond the shared state of the MESI protocol into the exclusive and then, possibly, the modified state. As a result, as indicated by step 665, no agent in the chosen coherency domain (the portion of memory that is kept coherent) can hold the data in the modified or exclusive state (or their equivalents). The processor effectively appears to have the cache line of the monitored address cached, even though it has been flushed from the internal caches in this embodiment.
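The snoop rule described above (assert HIT# on any read of the monitored line so that other caches stop at the shared state) can be sketched as follows. This is a hedged model: the function names are invented, and a 64-byte cache line is assumed:

```c
#include <stdbool.h>

typedef enum { MESI_INVALID, MESI_SHARED, MESI_EXCLUSIVE, MESI_MODIFIED } mesi_t;

/* Sketch of step 660: while the monitor is armed, report a snoop hit for
 * any address falling in the monitored 64-byte line (an assumption);
 * 'monitored_line' must be line-aligned. */
bool snoop_hit_for(unsigned long long snoop_addr,
                   unsigned long long monitored_line,
                   bool monitor_armed) {
    return monitor_armed && (snoop_addr & ~0x3FULL) == monitored_line;
}

/* A remote cache that saw HIT# during the snoop phase must install the
 * line Shared, not Exclusive, so it cannot later write it silently. */
mesi_t fill_state_after_read(bool hit_asserted) {
    return hit_asserted ? MESI_SHARED : MESI_EXCLUSIVE;
}
```

Keeping every remote copy in the shared state is what guarantees that any later write must appear on the bus as a coherency transaction the monitor can see.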

Referring now to Fig. 7, the steps associated with step 620 of Fig. 6a are described more fully. In particular, Fig. 7 explains the operation of the monitor 310 in more detail. At step 700, the monitor 310 receives request and address information for a bus transaction from the bus controller 340. As indicated by step 710, the monitor 310 examines the bus cycle type involved and the address(es). In particular, the cycle compare logic 320 determines whether the bus cycle is a specified cycle. In one embodiment, the address compare circuit 330 compares the address of the bus transaction with the monitored address stored in the monitored address register 335, and the write detect logic 325 decodes the cycle type information from the bus controller 340 to detect whether a write has occurred. If a write to the monitored address has occurred, the monitor event pending indicator is set, as indicated by step 720. A WRITE DETECTED signal is passed to the thread suspend/resume logic 377 to signal the event (and it will be serviced, assuming monitor events have been enabled by execution of MWAIT). Finally, the monitor 310 is halted, as indicated by step 730. Halting the monitor saves power, but is not critical so long as false monitor events are masked or otherwise not generated. The monitor event indicator may also be reset at this point. Typically, servicing a monitor event also masks the recognition of further monitor events until MWAIT is executed again.
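The per-transaction decision of Fig. 7 amounts to a small state machine. A hedged C model follows; the struct layout and all names are invented here, not the patent's hardware:

```c
#include <stdbool.h>

/* Illustrative model of the monitor's decision (Fig. 7): compare the
 * cycle type and address, latch a pending event on a write to the
 * monitored address, and report whether a snoop hit must be signaled
 * for a read. */
typedef enum { BUS_READ, BUS_WRITE, BUS_OTHER } bus_cycle_t;

struct monitor {
    unsigned long long monitored_addr; /* register 335          */
    bool armed;                        /* monitoring enabled    */
    bool event_pending;                /* set at step 720       */
};

/* Returns true if a HIT#-style signal should be asserted (step 740). */
bool monitor_observe(struct monitor *m, bus_cycle_t cycle,
                     unsigned long long addr) {
    if (!m->armed || addr != m->monitored_addr)
        return false;                  /* not our line: back to step 700 */
    if (cycle == BUS_WRITE) {
        m->event_pending = true;       /* WRITE DETECTED -> logic 377 */
        m->armed = false;              /* step 730: halt the monitor  */
        return false;
    }
    return cycle == BUS_READ;          /* keep others out of E/M state */
}
```

Note that a read of the monitored address leaves the monitor armed, while a write both latches the event and halts it, matching steps 720-740.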

In the case of a read of the monitored address, the coherency-related logic 350 is activated. As indicated by step 740, a signal (such as HIT#) is asserted to prevent another agent from gaining exclusive ownership, which would allow it to perform future writes without coherency broadcasts. The monitor 310 remains active and then returns to step 700, unaffected by the read of the monitored address. Additionally, if a transaction is neither a read nor a write to the monitored address, the monitor remains active and returns to step 700.

In some embodiments, the MONITOR instruction is limited so that only certain types of accesses may be monitored. These accesses may be ones chosen as indicative of efficient programming techniques, or may be chosen for other reasons. For example, in one embodiment, the memory access must be a cacheable store to write-back memory that is naturally aligned. A naturally aligned element is an N-bit element that starts at an address divisible by N. When naturally aligned elements are used, a single cache line needs to be accessed (as opposed to two cache lines, as would be needed if the data were split between two cache lines) in order to write to the monitored address. As a result, using naturally aligned memory addresses can simplify bus watching.
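The natural-alignment property can be checked with simple arithmetic. A sketch in C, treating sizes in bytes and assuming a 64-byte cache line (both assumptions, not mandated by the text):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* A naturally aligned element of 'size' bytes starts at an address
 * divisible by its size. */
bool is_naturally_aligned(uintptr_t addr, size_t size) {
    return size != 0 && (addr % size) == 0;
}

/* True if an element of 'size' bytes at 'addr' straddles two cache
 * lines, assuming 64-byte lines. Naturally aligned power-of-two
 * elements no larger than a line never do. */
bool spans_two_lines(uintptr_t addr, size_t size) {
    return (addr / 64) != ((addr + size - 1) / 64);
}
```

Since a naturally aligned store stays inside one line, the monitor only has to watch a single line's worth of bus traffic.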

Fig. 8 illustrates one embodiment of a system that uses the multithreading memory wait techniques disclosed herein. In the embodiment of Fig. 8, a set of N multithreaded processors, processors 805-1 through 805-N, are coupled to a bus 802. Other embodiments may use a single processor, or a combination of multithreaded and single-threaded processors. Other known or otherwise available system arrangements may also be used. For example, the processors may be connected in a point-to-point fashion, and components such as the memory interface may be integrated into each processor.

In the embodiment of Fig. 8, a memory interface 815 coupled to the bus is coupled to a memory 830 and a media interface 820. The memory 830 contains an operating system 835 that supports multiple processors, as well as instructions for a first thread 840 and instructions for a second thread 845. The stored thread instructions include an idle loop according to the disclosed techniques, various versions of which are shown in Figs. 9a-9c.

To perform these various functions, appropriate software may be provided in any of a variety of machine-readable media. The media interface 820 provides an interface to such software. The media interface 820 may be an interface to a storage medium (for example, a magnetic disk drive, an optical drive, a tape drive, volatile memory, non-volatile memory and the like) or to a transmission medium (for example, a network interface or other digital or analog communication interface). The media interface 820 may read software routines from a medium (for example, from the storage medium 792 or the transmission medium 795). Machine-readable media are any media that can store, at least temporarily, information for reading by a machine interface. This may include signals (over wired, optical, or over-the-air media) and/or physical storage media 792, such as various types of memory and disk devices.

Fig. 9a illustrates an idle loop according to one embodiment. At step 905, the MONITOR instruction is executed with address 1 as its operand, i.e. the monitored address. The MWAIT instruction is executed at step 910 within the same thread. As previously discussed, the MWAIT instruction causes the thread to be suspended, assuming the other conditions are properly met. When a break event occurs at step 915, the routine proceeds to step 920 to determine whether the value stored at the monitored address has changed. If the value at the monitored address has changed, execution of the thread continues, as indicated by step 922. If the value has not changed, a false wake-up has occurred. The wake-up event is false in the sense that the exit from MWAIT took place without a write to the monitored address having occurred. If the value has not changed, the loop returns to step 905, where the monitor is set up again. This software loop allows the monitor design to permit false wake-ups.
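The Fig. 9a loop shape can be simulated in plain C. This is a hedged sketch: `mwait_sim`, `idle_until_changed` and the scripted wake-up counter are invented stand-ins, since the real MONITOR and MWAIT instructions are privileged and cannot be executed from ordinary user code:

```c
#include <stdbool.h>

/* Simulated monitored location ("address 1") and a scripted supply of
 * break events: the write arrives only after a set number of wake-ups. */
static volatile int monitored_value;
static int wakeups_left_before_write;

static void mwait_sim(void) {             /* stands in for steps 905-915 */
    if (--wakeups_left_before_write == 0)
        monitored_value = 1;              /* the awaited write arrives   */
}

/* Runs the Fig. 9a loop shape and returns how many false wake-ups were
 * tolerated before the value actually changed. */
int idle_until_changed(int initial, int spurious_wakeups) {
    int false_wakeups = 0;
    monitored_value = initial;
    wakeups_left_before_write = spurious_wakeups + 1;
    while (monitored_value == initial) {  /* step 920: double-check      */
        mwait_sim();                      /* re-arm monitor and wait     */
        if (monitored_value == initial)
            false_wakeups++;              /* false wake-up: loop again   */
    }
    return false_wakeups;
}
```

The point of the sketch is the re-check after every wake-up: the loop only leaves the idle state when the monitored value has really changed, regardless of how many spurious break events occur.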

Fig. 9b illustrates an alternative idle loop. In the embodiment of Fig. 9b, one additional check is added to further reduce the likelihood that the MWAIT instruction will miss a write to the monitored memory address. Again, the flow of Fig. 9b begins with the MONITOR instruction with address 1 as its operand, as indicated by step 925. Additionally, at step 930, the software routine reads the value at the monitored address. At step 935, the program double-checks that the memory value has not changed relative to the value indicating that the thread should be put into the idle state. If the value has changed, thread execution continues, as indicated by step 952. If the value has not changed, MWAIT is executed, as indicated by step 940. As previously discussed, the thread is suspended until a break event occurs at step 945. Again, since false break events are permitted, whether the value has changed is checked once more at step 950. If the value has not changed, the loop returns to reactivate the monitor on address 1, returning to step 925. If the value has changed, execution of the thread continues at step 952. In some embodiments, it may not be necessary to execute the MONITOR instruction again after a false wake-up before MWAIT is executed again to suspend the thread.

Fig. 9c illustrates another example of a software sequence using the MONITOR and MWAIT instructions. In the example of Fig. 9c, the loop does not enter the wait state unless both of two separate tasks within the thread have no work to do. A constant value CV1 is kept in a work location WL1 when there is work to be done by a first routine. Similarly, a second constant value CV2 is kept in WL2 when there is work to be done by a second routine. To make do with a single monitored address, WL1 and WL2 are chosen to be memory locations in the same cache line. Alternatively, a single work location may be used to hold status indicators for multiple tasks. For example, each of one or more bits within a single byte or other unit may represent a different task.

As indicated by step 955, the monitor is set up to monitor WL1. At step 960, it is checked whether WL1 stores the constant value, indicating that there is work to be done. If so, the work associated with WL1 is performed, as indicated by step 965. If not, then at step 970 it is checked whether WL2 stores the value CV2, indicating that there is work to be done with respect to WL2. If so, the work associated with WL2 is performed, as indicated by step 975. If not, the loop may proceed to determine whether to invoke processor power management, at step 980. For example, if a chosen amount of time has elapsed, the logical processor may be placed into a reduced power consumption state (for example, one of the C states defined according to the Advanced Configuration and Power Interface (ACPI) Specification, Version 1.0b (or later), published 08.02.99, available at www.acpi.info as of the filing of this application). If so, processor power management is invoked at step 985. In each of the cases 965, 975 and 985 where there was work to be done, the thread performs the work and then returns to make the same determinations again after setting up the monitor at step 955. In an alternative embodiment, the loop back from blocks 965, 975 and 985 may be made to block 960 as long as the monitor remains active.

If no work was found at steps 965, 975 and 985, MWAIT is executed, as indicated by step 990. The suspended thread state caused by MWAIT is eventually exited when a break event occurs, as indicated by step 995. At this point, the loop returns to step 955 to set up the monitor and thereafter to determine whether WL1 or WL2 indicates that there is work to do. If no work is to be performed (for example, in the case of a false wake-up), the loop returns to MWAIT at step 990 and suspends the thread again until a break event occurs.
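The two-task dispatch of Fig. 9c can be sketched as follows. The constant values, the struct layout, and the choice to clear a flag once its work is taken are all illustrative assumptions; the patent only requires WL1 and WL2 to share a cache line so one monitored address covers both:

```c
#include <stdbool.h>

#define CV1 0x11  /* hypothetical "work pending" constants */
#define CV2 0x22

struct work_line {        /* both work cells within one cache line */
    unsigned char wl1;
    unsigned char wl2;
};

/* One pass of the Fig. 9c checks: returns 1 if WL1's task was
 * dispatched (steps 960/965), 2 if WL2's was (steps 970/975), or 0 if
 * there is no work and the caller should proceed toward MWAIT
 * (steps 980/990). */
int dispatch(struct work_line *w) {
    if (w->wl1 == CV1) {
        w->wl1 = 0;       /* consume the flag (a sketch choice) */
        return 1;
    }
    if (w->wl2 == CV2) {
        w->wl2 = 0;
        return 2;
    }
    return 0;             /* no work: idle via MWAIT */
}
```

Only when both cells report no work does the thread fall through to MWAIT, so a single monitored line wakes it for either task.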

Figure 10 illustrates one alternative embodiment of a processor that allows the monitored value to be cached in the L1 cache. The processor of Figure 10 includes execution units 1005, an L1 cache 1010, and write-combining buffers 1020 between the L1 cache and an inclusive L2 cache 1030. The write-combining buffers 1020 include a snoop port 1044, which guarantees the coherency of the internal caches with the rest of memory via transactions received through the bus interface 1040 from the bus 1045. Since coherency-affecting transactions reach the write-combining buffers 1020 through the snoop port 1044, the monitor can be located at the L1 cache level and still receive enough information to determine when a memory write event takes place on the bus 1045. Thus, the memory line corresponding to the monitored address can be kept in the L1 cache. The monitor is able to detect both writes to the L1 cache from the execution units and writes from the bus 1045 via the snoop port 1044.

In another alternative embodiment, a MONITOR instruction with two operands is supported. One operand specifies the memory address, as previously discussed. The second operand is a mask that specifies which of the various events that otherwise would not break the memory wait state should break that particular memory wait. For example, one bit of the mask may indicate that masked interrupts should be allowed to break the memory wait state despite the fact that the interrupts are masked (for example, allowing a wake-up event even when the IF bit of the EFLAGS register masks interrupts). Presumably, one of the instructions executed after the memory wait is broken then unmasks that interrupt, so that it is serviced. Other events that otherwise would not break the memory wait state can be enabled to break it, or, conversely, events that normally break the memory wait state can be disabled. As discussed for the first operand, the second operand may be explicit or implicit.
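The effect of the second (mask) operand can be sketched with one invented bit. The bit assignment and function name are hypothetical; the text does not fix an encoding:

```c
#include <stdbool.h>

/* Hypothetical mask bit: break the memory wait on an interrupt even
 * while interrupts are masked (e.g. EFLAGS.IF = 0). */
#define WAKE_ON_MASKED_INTR (1u << 0)

/* Decides whether an interrupt ends the memory wait, given the second
 * operand's mask. Unmasked interrupts always break the wait; masked
 * ones only do so if the mask bit requests it. */
bool breaks_wait(unsigned mask, bool is_intr, bool intr_masked) {
    if (is_intr && intr_masked)
        return (mask & WAKE_ON_MASKED_INTR) != 0;
    return is_intr;
}
```

Further bits could, by the same pattern, promote or demote other event classes, which is the generalization the paragraph describes.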

Fig. 11 illustrates various design representations or formats for simulation, emulation and fabrication of a design using the disclosed techniques. Data representing a design may represent the design in a number of ways. First, as is useful in simulations, the hardware may be represented using a hardware description language (HDL) or another functional description language, which essentially provides a computerized model of how the designed hardware is expected to perform. The hardware model 1110 may be stored in a storage medium 1100, such as a computer memory, so that the model may be simulated using simulation software 1120, which applies a particular test suite 1130 to the hardware model 1110 to determine whether it functions as intended. In some embodiments, the simulation software is not recorded, captured, or contained on the medium.

Additionally, a circuit-level model with logic and/or transistor gates may be produced at some stages of the design process. This model may similarly be simulated, sometimes using dedicated hardware simulators that form the model using programmable logic. Taken a step further, this type of simulation may be an emulation technique. In any case, reconfigurable hardware may represent another embodiment, which may involve a machine-readable medium storing a model that employs the disclosed techniques.

Furthermore, most designs, at some stage, reach a level of data representing the physical placement of the various devices in the hardware model. Where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for the masks used to produce the integrated circuit. Again, this data representing the integrated circuit embodies the disclosed techniques, in that the circuitry and logic in the data can be simulated or fabricated to perform these techniques.

In any representation of the design, the data may be stored in any form of machine-readable medium. The medium may be an optical or electrical wave 1160, modulated or otherwise generated to transmit such information, a memory 1150, or a magnetic or optical storage medium 1140, such as a disk. A set of bits describing the design, or a particular part of the design, is an article that may be sold in its own right or used by others for further design or fabrication.

Thus, techniques are disclosed for suspending execution of a thread until a specified memory access occurs. While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are given merely to illustrate and not to limit the scope of the invention, and that this invention is not limited to the specific constructions and arrangements shown and described, since various other modifications may be implemented by those skilled in the art upon studying this disclosure.

1. A processor comprising: a plurality of execution units for executing a plurality of threads, including a first thread having a first instruction with an associated address operand that specifies a monitored address; suspend logic for suspending execution of said first thread; and a monitor for monitoring, in response to the first instruction, memory accesses to the monitored address, the monitor being configured to cause resumption of the first thread in response to a memory access to the monitored address.

2. The processor according to claim 1, wherein the monitor causes the resumption in response to the memory access only if the memory access indicates an actual or potential write to the monitored address.

3. The processor according to claim 1 or 2, wherein the monitor causes the resumption of the first thread in response to a match between the address of the memory access and the monitored address, if the first thread is suspended and monitor events are unmasked.

4. The processor according to claim 1, further comprising event detect logic for causing resumption of the first thread in response to an event other than said memory access.

5. The processor according to claim 4, wherein said event is an interrupt or a fault.

6. The processor according to claim 1, wherein said associated address operand is an implicit operand.

7. The processor according to claim 6, wherein said associated address operand is stored in a predetermined register.

8. The processor according to claim 1, wherein the suspend logic suspends execution of the first thread in response to a second instruction, which unmasks the events signaled by the monitor.

9. The processor according to claim 8, wherein said second instruction activates the monitor only if the first instruction has been executed.

10. The processor according to claim 1, wherein the suspend logic suspends execution of said first thread in response to the first instruction.

11. The processor according to claim 1, further comprising coherency logic for enhancing the observability of store operations to said monitored address.

12. The processor according to claim 1, further comprising coherency logic for ensuring that no cache within a coherency domain stores the information at said monitored address in a modified or exclusive state.

13. The processor according to claim 11 or 12, wherein said coherency logic flushes the cache line associated with said monitored address from all internal caches and generates a bus transaction, corresponding to a read of the cache line associated with said monitored address, with respect to other processors coupled to this processor.

14. The processor according to claim 13, wherein the coherency logic causes generation of a processor bus cycle to prevent the execution of a write transaction to said monitored address by any other bus agent without a broadcast of the write transaction.

15. The processor according to claim 13, further comprising signal generation logic responsive to a read of said monitored address by another bus agent.

16. The processor according to claim 15, wherein the signal generation logic asserts a hit signal during a snoop phase for a bus transaction that involves the monitored address.

17. The processor according to claim 16, further comprising address translation logic for translating said associated address operand into the monitored address, which is a physical address.

18. The processor according to claim 17, wherein the monitored address is selected from the set consisting of physical addresses, virtual addresses, relative addresses and linear addresses.

19. The processor according to claim 1, further comprising a plurality of shared resources partitioned so as to allocate a portion of each shared resource to each active thread of said plurality of threads when multiple threads are active, wherein the suspend logic relinquishes any of said partitions allocated to the first thread in response to the suspension of execution of said first thread.

20. The processor according to claim 19, wherein the monitor causes re-partitioning of said plurality of shared resources in order to accommodate execution of the first thread in response to the memory access to the monitored address.

21. The processor according to claim 20, wherein the plurality of shared resources comprises an instruction queue, a reorder buffer, a pool of registers and a plurality of store buffers.

22. The processor according to claim 21, further comprising a plurality of duplicated resources, said plurality of duplicated resources being duplicated for each of the plurality of threads and comprising a set of processor state variables, an instruction pointer and register renaming logic.

23. The processor according to claim 22, further comprising a plurality of jointly used resources available for use by any of said plurality of threads, said plurality of jointly used resources comprising a plurality of execution units, a cache and a scheduler.

24. A system for processing machine instructions, comprising: a processor according to any one of claims 1 to 23; and a memory for storing instructions for said plurality of threads.

25. The system according to claim 24, further comprising a multithreading-capable operating system and a media interface.

26. A processor comprising: a front end for receiving a first instruction and a second instruction, the first instruction having an operand that specifies a monitored address; a plurality of execution resources for executing the first instruction and the second instruction and for entering an implementation-dependent state in response to the second instruction if the first instruction has been executed; and a monitor for causing exit from the implementation-dependent state in response to a memory access to the monitored address.

27. The processor according to claim 26, wherein the plurality of execution resources enters the implementation-dependent state in response to the second instruction if the first instruction has been executed and no break event has occurred after execution of the first instruction but before execution of the second instruction.

28. The processor according to claim 26 or 27, wherein said operand is an implicit operand of a predetermined register and specifies a linear address, the processor further comprising address translation logic for translating said linear address in order to obtain the monitored address, which is a physical address.

29. The processor according to claim 26, further comprising coherency logic for ensuring that no cache of another processor coupled to the processor stores the information at said monitored address in a modified or exclusive state.

30. The processor according to claim 29, further comprising signal generation logic responsive to a read of the monitored address by another processor.

31. The processor according to claim 26, wherein the first instruction and the second instruction are instructions from a first thread of a plurality of threads, and said implementation-dependent state comprises an unpartitioned state in which resources partitioned for the first thread are relinquished.

32. A processor comprising: a plurality of thread-partitionable resources for receiving instructions; a plurality of shared resources for executing the instructions together with said plurality of thread-partitionable resources; thread suspend logic for suspending a first thread in response to an instruction in the first thread, said thread suspend logic relinquishing the partitions of said plurality of thread-partitionable resources associated with the first thread in addition to suspending the first thread; and a monitor for causing the processor to re-partition the plurality of thread-partitionable resources and to resume execution of the first thread in response to an access to a memory address specified by the first thread.

33. The processor according to claim 32, wherein the access is to a memory address specified by a first instruction executed in said first thread, and the monitor is unmasked, for signaling monitor events so as to cause resumption of the thread, by the instruction in response to which the thread suspend logic suspends the first thread.

34. The processor according to claim 32 or 33, wherein the plurality of shared resources comprises an instruction queue, a reorder buffer, a pool of registers and a plurality of store buffers, and said processor further comprises a plurality of duplicated resources, said plurality of duplicated resources being duplicated for each of said plurality of threads and comprising a plurality of processor state variables, an instruction pointer and register renaming logic.

35. A method of processing machine instructions, comprising the steps of: receiving a first opcode in a first thread of execution, said first opcode having an associated address operand that specifies a monitored address; monitoring memory accesses to the monitored address in response to the first opcode; suspending said first thread; detecting a memory access to the monitored address; and resuming said first thread in response to the detection of the memory access to the monitored address.

36. The method according to claim 35, wherein the step of suspending the first thread comprises the steps of: receiving a second instruction in the first thread; and suspending the first thread in response to the second instruction.

37. The method according to claim 35 or 36, wherein the memory access is a write access.

38. The method according to claim 37, further comprising the step of translating said associated address operand into a monitored physical address, wherein the step of detecting the memory access to the monitored address comprises the step of detecting a write access to the monitored physical address.

39. The method according to claim 38, further comprising the step of preventing other bus agents from obtaining exclusive ownership of the information stored at the monitored address.

40. The method according to claim 35, wherein the detecting step comprises the steps of: receiving cycle information for an external bus transaction; and detecting a write to the monitored address.

41. The method according to claim 35, further comprising the step of resuming said first thread in response to an event other than a memory access to the monitored address.

42. The method according to claim 41, wherein said event is an interrupt.

43. The method according to claim 42, wherein said interrupt is a masked interrupt specified by a second operand, which is nevertheless to be treated as a break event.

44. The method according to claim 35, further comprising the step of translating a linear address associated with the first opcode into a physical address.

45. The method according to claim 44, further comprising the step of signaling when another bus agent reads from said physical address.



 

Same patents:

FIELD: engineering of computers for controlling memory, in particular, external memory controllers.

SUBSTANCE: memory control device for operation in memory controller network contains memory controller being an owner unit, capable of controlling the blocking of certain data area during execution of input-output outputs, and component for exchanging messages, providing for transmission of at least one message with blocking request, permission of blocking, blocking removal request and blocking removal signal, and also input-output component, while any image of aforementioned data area, received by instant copying thereof, is maintained as coherent relatively to data area itself, and input-output component may position previous direct confirmation, that this data area remains coherent to any such image, to cash-memory, and may perform input-output operations on basis of aforementioned previous direct confirmation. Method describes operation of aforementioned device. Software product for computer is realized on machine-readable carrier and contains a program recorded thereon, realizing operations of aforementioned method.

EFFECT: expanded functional capabilities.

3 cl, 3 dwg

FIELD: computer engineering, possible use in data exchange systems and local computing networks.

SUBSTANCE: device contains N≥2 client blocks, clock impulse generator, N client time controllers, OR element, AND-NOT element, selector-multiplexer, two N-input AND-NOT elements, two priority encoders, main wait time controller.

EFFECT: increased probability of timely servicing of clients under conditions of real functioning process of data exchange systems, with continuous dynamics of change of modes of different priority requests from clients.

4 cl, 7 dwg

FIELD: engineering of interrupt processing mechanisms in computer systems.

SUBSTANCE: system contains processor with multiple contexts for execution of commands stored in memory. In response to common interrupt logical processors of processor with multiple contexts compete for receiving access to jointly utilized register. First logical processor gaining access to aforementioned jointly utilized register processes common interrupt. Remaining logical processors return from interrupt.

EFFECT: increased productiveness of system.

4 cl, 5 dwg

FIELD: engineering of information processing systems.

SUBSTANCE: the system contains a master system for information processing, an interface, a central communication device, a client system for information processing, and an object model. According to the method, each master system sends to the central communication device the elements of its data array that are defined in the corresponding master representation; the master representation of a connected master system contains those data array elements for which that system has data priority.

EFFECT: simplified specification and development of interfaces between technical applications.

2 cl, 6 dwg

FIELD: method and device for data processing with preservation of the return state in a data processing device.

SUBSTANCE: the device has a data processing unit with multiple operating modes, each of which has its own dedicated memory stack. The method describes the operation of this device. A data carrier contains a program that configures a data processing device to perform the stages of the method.

EFFECT: reduced code size and reduced interrupt processing latency.

3 cl, 16 dwg
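The per-mode stacks can be sketched as follows: on entry to a mode, the return state is pushed onto that mode's dedicated stack and restored from it on exit. The state layout (a program counter plus the previous mode) is an illustrative assumption:

```python
class ModalProcessor:
    """Sketch: each operating mode has a dedicated stack, and the return
    state is pushed onto the stack of the mode being entered, as the
    abstract describes. State layout here is illustrative."""

    def __init__(self, modes):
        self.stacks = {m: [] for m in modes}  # one memory stack per mode
        self.mode = modes[0]
        self.pc = 0

    def enter(self, new_mode):
        # Save the return state on the *new* mode's stack, then switch.
        self.stacks[new_mode].append((self.pc, self.mode))
        self.mode = new_mode

    def leave(self):
        # Restore the return state from the current mode's stack.
        self.pc, self.mode = self.stacks[self.mode].pop()
```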

FIELD: computer science.

SUBSTANCE: the device has an n-bit request register; request time-limiting units, each consisting of a counting timer and an OR element; an OR element; an AND element; and a key cascade.

EFFECT: higher reliability and speed of operation.

1 dwg

The invention relates to computing and is used in automatic control systems for technological processes.

The invention relates to computing, in particular to priority devices, and can be used to control access of multiple subscribers to a shared resource.

The invention relates to the field of computer technology, in particular to devices for priority servicing of requests.

FIELD: means for suspending execution of a thread until a specified memory access occurs.

SUBSTANCE: in one embodiment, a processor contains a set of execution units capable of executing a set of threads. The first thread includes an instruction that specifies an address to be monitored. Suspend logic pauses execution of the first thread, and a monitor causes the first thread to resume in response to an access to the monitored address.

EFFECT: increased processor performance.

5 cl, 14 dwg
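The mechanism above corresponds to a two-step pattern: one instruction arms a monitor on an address, and a second suspends the thread until that address is accessed. A software analogy using a condition variable (the patent describes hardware; all names here are illustrative):

```python
import threading

class AddressMonitor:
    """Software sketch of the monitor/suspend mechanism: a thread arms a
    monitor on an address, suspends, and is resumed when that address is
    written. Illustrative only; the patent describes processor hardware."""

    def __init__(self):
        self._cond = threading.Condition()
        self._armed = {}  # address -> True while a waiter is suspended

    def monitor(self, address):
        # First step: record the address to be tracked.
        with self._cond:
            self._armed[address] = True

    def wait(self, address):
        # Second step: suspend until the monitored address is accessed.
        with self._cond:
            while self._armed.get(address):
                self._cond.wait()

    def store(self, memory, address, value):
        # A store to an armed address triggers resumption of the waiter.
        memory[address] = value
        with self._cond:
            if address in self._armed:
                self._armed[address] = False
                self._cond.notify_all()
```

In the described processor these two steps are separate instructions, so the waiting thread consumes no execution resources while suspended; the condition variable here is only an analogy for that behaviour.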

FIELD: methods for automatically executing a program associated with a data file when the data file and the program to be executed reside on different computer units.

SUBSTANCE: in the methods, the program to be executed is accessed through a graphical representation of a data file type in a network that includes a client system and a set of server systems. The client system receives a scheme that defines associations between a set of executable programs and a corresponding set of data file types. Graphical representations of data files are displayed; information about the selection of a graphical representation of a data file is received from the server system, and on that basis the program to be executed is selected and executed.

EFFECT: increased system performance due to distributed execution of programs.

9 cl, 19 dwg, 3 tbl
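The scheme the client receives is essentially a table associating data file types with executable programs. A minimal sketch, with the file types and program names invented for illustration:

```python
# Sketch of the scheme described above: a client-side table associating
# data file types with the programs to execute for them. The entries
# are illustrative, not from the patent.
scheme = {
    ".txt": "text-editor",
    ".csv": "spreadsheet",
    ".png": "image-viewer",
}

def select_program(selected_file, scheme):
    """Pick the program to execute for the selected data file,
    based on its file type."""
    for suffix, program in scheme.items():
        if selected_file.endswith(suffix):
            return program
    return None  # no association defined for this file type
```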

FIELD: method and system for providing user interface information to a client.

SUBSTANCE: according to the invention, the access system contains a registration mechanism. A client environment for automatic processing of the user interface receives registration information from the client and, after receiving it, transmits user interface information. A server for automatic processing of the user interface receives the registration information from the client environment and notifies the user interface processor about the registration; it also receives user interface information from the user interface processor. The server contains a filtering device for filtering out information of no interest to the client and a notification device for notifying the client about information that is of interest to it.

EFFECT: provides filtering and coordination of excessive and disorienting notifications.

2 cl, 11 dwg
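The registration-plus-filtering behaviour can be sketched as follows: clients register the event kinds they care about, the filtering device drops everything else, and the notification device delivers the rest. Event kinds and client identifiers are illustrative:

```python
class UIAutomationServer:
    """Sketch of the server described above: clients register interests,
    incoming user interface events are filtered, and only events a
    client registered for are delivered to it. Names are illustrative."""

    def __init__(self):
        self.registrations = {}  # client -> set of event kinds of interest

    def register(self, client, kinds):
        self.registrations[client] = set(kinds)

    def notify(self, event_kind, payload, delivered):
        # Filtering device: drop events no client registered for.
        # Notification device: deliver the rest to interested clients.
        for client, kinds in self.registrations.items():
            if event_kind in kinds:
                delivered.append((client, event_kind, payload))
```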

FIELD: telecommunications.

SUBSTANCE: the device contains a set of central processor units that are assigned a common external IP address in a packet data telecommunication network. IP messages addressed to a network element are received, and received IP messages that contain first messages are identified. A first value is identified in a first message, and the first message is transmitted to a central processor unit on the basis of the identified first value if that value is not equal to zero.

EFFECT: ensured load balancing for the central processor units when several traffic types are used.

3 cl, 3 dwg
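The dispatch rule can be sketched as: read the first value from a first message and, when it is non-zero, use it to choose a central processor unit. The modulo mapping from value to unit is an assumption made for illustration:

```python
def dispatch_ip_message(message, cpus):
    """Sketch of the dispatch rule above: a 'first value' is read from a
    first message and, when non-zero, selects the central processor unit
    that receives the message. The field name and the modulo mapping are
    illustrative assumptions."""
    first_value = message.get("first_value", 0)
    if first_value == 0:
        return None  # zero value: no unit selected by this rule
    # Map the identified value onto one of the processor units.
    return cpus[first_value % len(cpus)]
```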
