High-speed communication system and high-speed communication method. RU patent 2510981.
FIELD: physics, communications.

SUBSTANCE: The present invention relates to a high-speed communication system. The high-speed communication system includes a plurality of nodes located in a communication channel and a plurality of connections established between said nodes. The plurality of nodes exchange among themselves a plurality of pieces of operating-characteristic model information, the pieces of operating-characteristic model information representing the communication operating characteristics achievable by each of the plurality of connections. Each of the plurality of nodes controls communication on the basis of any one of the plurality of pieces of operating-characteristic model information.

EFFECT: less data accumulates in each queue of the plurality of nodes present in the communication channel, and operating characteristics are improved.

10 cl, 6 dwg
TECHNICAL FIELD

The present invention relates to a high-speed communication system and a high-speed communication method in which data are transmitted across a plurality of nodes.

BACKGROUND ART

The TCP (Transmission Control Protocol)/IP (Internet Protocol) connection scheme is the representative communication protocol used on the Internet. In TCP/IP communication, the transmitting terminal decides on a data transfer rate so as to adapt to the speed of the line over which communication is carried out, transmits as much data as possible within the window size (information, such as the number of data segments or the amount of data, that the receiving terminal is able to receive) reported by the receiving terminal, and thereby communicates. It is known that the data transfer rate selected by the transmitting terminal decreases in inverse proportion to the network latency and to the square root of the packet loss rate. In other words, on a network with high latency or a high packet loss rate, TCP delivers poor operating performance.

There is a scheme in which data are transferred to the final destination device by being relayed successively between terminals and nodes over TCP connections, as shown in Fig. 1. In this communication scheme, each of the plurality of nodes in the network decides on the TCP data transfer rate on its own, and the network latency and packet loss are partitioned among the network sections between nodes. For this reason, the network latency and packet loss seen by each TCP connection are reduced and, thus, the TCP communication performance improves. Prior art intended to improve the operating characteristics of radio communication is disclosed in Patent Document 1.

PRIOR ART DOCUMENT

[Patent Document 1] Japanese Unexamined Patent Application, First Publication No. 2008-199332.

SUMMARY OF THE INVENTION

Problem to be Solved by the Invention

However, in this communication scheme, because each TCP connection operates independently, a large amount of data can accumulate in the queue of a node that relays the data. Consider, as an example, the case in which a relay node sends data received from one network over TCP connection A to another network over TCP connection B. In this case, when the throughput of TCP connection A is greater than the throughput of TCP connection B, the amount of data accumulated in the node's queue increases. When the amount of data accumulated in the node's queue almost reaches the upper limit of the accumulated amount, the node notifies the node that is the source of the data, using the TCP advertised window, that the amount of data it can receive must be restricted, and communication is then suspended. However, with this kind of control, action is taken only just before the amount of data accumulated in the node's queue reaches the upper limit, and it is therefore impossible to prevent a large amount of data from accumulating in the queue. Moreover, because this method of reducing the throughput of the transfer source by means of the advertised window requires time for the signal to be transmitted, performance is reduced by the delay even when communication resumes.
Similarly, there is a method in which, when the amount of data accumulated in the queue exceeds the upper limit of the accumulated amount, a signal for temporarily suspending communication, such as the pause signal specified in IEEE 802.1, is transmitted to the transmission-source node. However, when this method is applied on communication lines that have delay, communication performance is degraded excessively because of the delay between the time at which communication is temporarily suspended and the time at which the line is resumed. For this reason it is usually not used. In a TCP connection, when a packet is lost in the network, the data are retransmitted. When data retransmission is carried out, the throughput can decrease and, thus, the node's queue can grow. In ordinary TCP control, the rate at which the data transfer rate is increased is decided depending on the network latency. For this reason, in network sections whose delays differ from one another, there is a difference in data transfer rate and, thus, there is the problem that a large amount of data is stored in the node's queue.

An exemplary object of the present invention is to provide a high-speed communication system and a high-speed communication method that can reduce the amount of data accumulated in each of the queues of the plurality of nodes present in the communication channel and improve operating performance.

MEANS FOR SOLVING THE PROBLEM

To achieve the above object, the high-speed communication system of the present invention includes a plurality of nodes located in a communication channel and a plurality of connections established between the plurality of nodes; the plurality of nodes exchange a plurality of pieces of operating-characteristic model information among themselves, the plurality of pieces of operating-characteristic model information representing the communication operating characteristics achievable by each of the plurality of connections; and each of the plurality of nodes controls communication on the basis of any one of the plurality of pieces of operating-characteristic model information.

In the high-speed communication system of the present invention, it is preferable that the plurality of nodes include a start-point node, a destination node, and a relay node located between the start-point node and the destination node. The start-point node may transmit to the destination node a control start signal indicating that communication is to be carried out; the destination node, having detected the control start signal, may calculate the operating-characteristic model information of the destination node on the basis of the detection of the control start signal; and the relay node may calculate the operating-characteristic model information of the relay node on the basis of the detection of the control start signal. The start-point node may transmit, toward the destination node, an operating-characteristic model notification signal in which operating-characteristic model information is stored; the relay node may compare the operating-characteristic model information stored in the operating-characteristic model notification signal with the operating-characteristic model information calculated by the relay node; and, when the relay node transmits the operating-characteristic model notification signal toward the destination node, it may transmit the operating-characteristic model notification signal storing the operating-characteristic model information that represents the lower value of the comparison.
The destination node may then store the operating-characteristic model information stored in the received operating-characteristic model notification signal and may transmit, toward the start-point node, an operating-characteristic model decision signal in which that operating-characteristic model information is stored; the relay node may retransmit the operating-characteristic model decision signal toward the start-point node; the start-point node may store the operating-characteristic model information stored in the operating-characteristic model decision signal; and the start-point node and the relay node may perform TCP control on the basis of the operating-characteristic model information stored in the operating-characteristic model decision signal.

According to an aspect of the present invention, communication is carried out through the above process without increasing the amount of data accumulated in a node's queue, and communication can therefore be carried out at high speed. Moreover, according to this aspect of the present invention, even when the TCP connections before and after a node differ from each other in speed, the increase in the amount of data accumulated in the node can be reduced, so communication can be carried out at high speed. Moreover, according to this aspect of the present invention, control is performed so that the degree of increase and decrease of the data transfer rate becomes the same before and after a node, and the increase in the amount of data accumulated in the node can thereby be reduced, so communication can be carried out at high speed. Moreover, according to this aspect of the present invention, even when communication is suspended by means of the control signal, the resumption of communication is not delayed by transmission delay when communication resumes, and degradation of communication performance can therefore be prevented.

BRIEF DESCRIPTION OF DRAWINGS

Fig. 1 is a block diagram illustrating the configuration of the high-speed communication system according to the first exemplary embodiment of the present invention. Fig. 2 is a diagram illustrating the functional blocks of the transmitting terminal shown in Fig. 1. Fig. 3 is a diagram illustrating the functional blocks of the receiving terminal shown in Fig. 1. Fig. 4 is a diagram illustrating the functional blocks of the relay node shown in Fig. 1. Fig. 5 is a diagram illustrating the sequence of operations in the high-speed communication system according to the first exemplary embodiment of the present invention. Fig. 6 is a diagram illustrating the state of the queue at each node according to the first exemplary embodiment of the present invention.

EMBODIMENTS OF THE PRESENT INVENTION

The high-speed communication system according to the first exemplary embodiment of the present invention is described below with reference to the drawings. Fig. 1 is a block diagram illustrating the configuration of the high-speed communication system according to the first exemplary embodiment. The high-speed communication system shown in Fig. 1 consists of a transmitting terminal 101, a receiving terminal 102, and relay nodes 103 and 104. Fig. 1 presents an example in which the transmitting terminal 101 communicates with the receiving terminal 102. The data transmitted by the transmitting terminal 101 are passed to the relay node 103. The relay node 103 transmits the data to the relay node 104. The relay node 104 passes the data to the receiving terminal 102.
Data are transmitted between a terminal and a node adjacent to it, or between adjacent nodes, using TCP connections. While transmitting data to the relay node 103, which lies downstream in the data transmission, the transmitting terminal 101 notifies it of the achievable average throughput B. The node 103 compares the average throughput B received from the transmitting terminal 101 with the average throughput achievable by its own node, and then transmits the lower throughput B to the relay node 104 downstream. The relay node 104 similarly compares the average throughput B received from the relay node 103 with the average throughput of its own node and then transmits the lower throughput B to the receiving terminal 102 downstream. Through this process, the receiving terminal 102 can be notified of the lowest average throughput B among the TCP sections on the path from the transmitting terminal 101 to the receiving terminal 102.

Similarly, while transmitting data to the node 103 downstream, the transmitting terminal 101 notifies it of the throughput increase rate ΔC of its own terminal. The node 103 compares the increase rate ΔC received from the transmitting terminal 101 with the throughput increase rate ΔC of its own node and then passes the lower increase rate ΔC to the relay node 104 downstream. The relay node 104 similarly compares the increase rate ΔC received from the relay node 103 with the throughput increase rate ΔC of its own node and then passes the lower increase rate ΔC to the receiving terminal 102 downstream. Through this process, the receiving terminal 102 can be notified of the lowest throughput increase rate ΔC among the TCP sections on the path from the transmitting terminal 101 to the receiving terminal 102.

The receiving terminal 102 transmits the information about the average throughput B and the throughput increase rate ΔC received from the relay node 104 back to the relay node 104. The relay node 104 transmits the information about the average throughput B and the throughput increase rate ΔC to the relay node 103. The relay node 103 transmits the information about the average throughput B and the throughput increase rate ΔC to the transmitting terminal 101. Through this process, every node and terminal present in the communication channel can learn the lowest throughput B and the lowest throughput increase rate ΔC in the communication channel. Each node and terminal present in the data transmission channel controls communication using the average throughput B (the lowest average throughput in the channel) and the throughput increase rate ΔC notified from the final destination of the data transmission.

In ordinary communication technology, what is implemented is notification of the line speed available when connecting to the communication line, that is, the maximum throughput. In the exemplary embodiment of the present invention, by contrast, the network is divided into multiple TCP sections, and each of the transmitting terminal, the receiving terminal and the relay nodes estimates the average throughput achievable in its TCP section from the state of the network quality and exchanges that information. Through this process, the throughput that can actually be used in the communication channel becomes known, and control is performed using this information so that a large amount of data does not accumulate in the queue of each terminal and node.
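The downstream propagation of the lowest achievable values and the upstream feedback described above can be summarised by a short sketch in Python. It is only an illustrative reading of the process, not the claimed implementation; the data layout, field names and numerical values are assumptions introduced for readability.

    # Sketch: each hop keeps the smaller of the notified value and its own
    # achievable value (average throughput B, throughput increase rate dC),
    # so the receiving terminal learns the minimum over all TCP sections;
    # that minimum is then fed back upstream so every node controls
    # communication with the same values.

    def propagate_downstream(chain):
        """chain: per-node dicts with the locally estimated 'B' and 'dC'."""
        notified = dict(chain[0])
        for node in chain[1:]:
            notified["B"] = min(notified["B"], node["B"])
            notified["dC"] = min(notified["dC"], node["dC"])
        return notified  # what the receiving terminal is finally notified of

    def feed_back_upstream(chain, notified):
        for node in chain:  # the receiving terminal returns the values hop by hop
            node["target_B"], node["target_dC"] = notified["B"], notified["dC"]

    # transmitting terminal 101, relay nodes 103 and 104, receiving terminal 102
    chain = [{"B": 80e6, "dC": 1.2e6}, {"B": 40e6, "dC": 0.6e6},
             {"B": 55e6, "dC": 0.9e6}, {"B": 70e6, "dC": 1.0e6}]
    feed_back_upstream(chain, propagate_downstream(chain))

The only rule used here is that every comparison keeps the lower value, which is what lets the slowest TCP section set the pace for the whole communication channel.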
Moreover, according to the exemplary embodiment of the present invention, in addition to the average throughput B estimated in each TCP section as described above, each terminal and each relay node exchanges the throughput increase rate ΔC, and communication control is carried out using this information. TCP is a protocol that dynamically adapts the throughput according to the state of network congestion. The throughput increase rate ΔC means the rate of increase of the throughput per unit time. In TCP Reno, which is one derived TCP method, the throughput is basically increased by one packet per round-trip time (RTT), the time during which a signal travels between the terminals or between a terminal and a node. Therefore, the increase rate of the throughput per unit time can be calculated by the following formula: ΔC = PacketSize/RTT. In CUBIC TCP, which is another derived TCP method, the throughput is decided according to the packet loss rate regardless of the RTT. Thus, even across different derived TCP methods, by obtaining the increase rate of the throughput per unit time, exchanging it between the terminals and nodes, and making each node operate with the minimum average throughput and the minimum throughput increase rate, the rate at which data are queued can be adjusted. As a result, the amount of data accumulated in the queue can be reduced.

Moreover, in the exemplary embodiment of the present invention, in addition to the above process, information about the time required to retransmit a packet when a packet is lost in the network, about the reduction in throughput when the throughput is reduced, and about the amount of data accumulated in the queue of each terminal and node is exchanged between nodes and between a terminal and a node. For example, suppose the node 104 detects packet loss on the TCP connection from the relay node 104 to the receiving terminal 102. In this case, the node 104 sends a control signal S used to notify the neighbouring relay node 103 at the preceding stage of the occurrence of the packet loss. After receiving the control signal S, the relay node 103 waits, without sending a packet, for "the time required for the node 104 to retransmit the packet to the receiving terminal 102", which is included in the control signal S, and then resumes communication with the throughput given as "the throughput at the time of resumption of communication", which is also included in the control signal S. Through this process, the amount of data accumulated in the queue of the relay node 103 can be reduced, and communication can be maintained without the node 104 having to suspend data transmission.

In this exemplary embodiment of the present invention, control is performed so that no more than the required, but still a sufficient, amount of data accumulates in the queue of each node, taking into account the exchanged operating-characteristic model information of the communication (the throughput B, the throughput increase rate ΔC, and the like) and the amount of data present in the queue.
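The reaction of the relay node 103 to the control signal S can likewise be sketched. The patent text does not define a message format, so the two fields and the callback used below are assumptions; only the wait-then-resume behaviour follows the paragraph above.

    import time

    # Sketch of the packet-loss notification: the node that detected the
    # loss (node 104) tells its upstream neighbour (node 103) how long the
    # retransmission will take and at what throughput to resume.

    def make_control_signal_s(retransmit_time_s, resume_throughput_bps):
        # illustrative field names, not taken from the patent
        return {"wait_s": retransmit_time_s, "resume_B": resume_throughput_bps}

    def on_control_signal_s(signal, resume_sending):
        """Upstream node: hold transmission for the announced retransmission
        time, then continue at the announced throughput instead of having
        the downstream node suspend the connection."""
        time.sleep(signal["wait_s"])
        resume_sending(signal["resume_B"])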
Fig. 2 is a diagram illustrating the functional blocks of the transmitting terminal. As shown in Fig. 2, the transmitting terminal 101 includes an input/output processing block 11, an IP processing block 12, a TCP transmission block 13, an application processing block 14, a data memory block 15 and a segment memory block 16. The input/output processing block 11 transfers a packet received from the network to the IP processing block 12 and sends to the network a packet transferred from the IP processing block 12. The IP processing block 12 determines the destination address of a packet input from the input/output processing block 11 and sends the packet to the TCP transmission block 13, packetizes a TCP segment transferred from the TCP transmission block 13, and passes it to the input/output processing block 11. The TCP transmission block 13 receives a control signal from the communication partner, decides on the TCP processing method according to it, performs TCP processing of the data transferred from the application to generate a TCP segment, and passes the segment to the IP processing block 12. The application processing block 14 reads data from the data memory block 15 and transfers them to the TCP transmission block 13.

The TCP transmission block 13 includes a control-signal reception block 131, an operation-model decision block 132, a first congestion-window decision block 133, a second congestion-window decision block 134 and a data block 135. The control-signal reception block 131 receives a control signal from the communication partner and passes it to the operation-model decision block 132. The operation-model decision block 132 decides on the operation method of the second congestion-window decision block 134 on the basis of the control signal and notifies the second congestion-window decision block 134 of the operation method. The first congestion-window decision block 133 performs ordinary TCP congestion-window control, and the second congestion-window decision block 134 performs adjusted congestion-window control according to the operation method specified by the operation-model decision block 132. The data block 135 stores the data transferred from the application processing block 14 in the segment memory block 16, decides on data transmission in accordance with the congestion window of the first congestion-window decision block 133 or of the second congestion-window decision block 134, generates a TCP segment and passes it to the IP processing block 12.

Fig. 3 is a diagram illustrating the functional blocks of the receiving terminal. As shown in Fig. 3, the receiving terminal 102 includes an input/output processing block 21, an IP processing block 22, a TCP reception block 23 and an application processing block 24. The input/output processing block 21 receives a packet from the network, passes it to the IP processing block 22, and sends to the network a packet transferred from the IP processing block 22. The IP processing block 22 determines the destination address of a packet input from the input/output processing block 21, sends the packet to the TCP reception block, packetizes an ACK transferred from the TCP reception block 23, and passes it to the input/output processing block 21. The TCP reception block 23 receives a packet from the IP processing block 22, performs TCP reception processing, transfers a control signal to the IP processing block 22 as necessary, and transfers the data to the application processing block. The application processing block 24 receives data from the TCP reception block 23 and stores them in the data memory block 25.

The TCP reception block 23 includes a data reception block 231, an ACK transmission block 232 and a control-signal transmission/reception block 233. The data reception block 231 extracts a segment from a packet transferred from the IP processing block 22, stores it in a segment memory block 234, converts the segments into data (orders or concatenates the segments), transfers the data in accordance with a request from the application, refers to the remaining capacity of the queue in the segment memory block 234, and notifies the ACK transmission block 232 of it. The ACK transmission block 232 receives the notification from the data reception block 231, generates an ACK and passes it to the IP processing block 22. The control-signal transmission/reception block 233 transmits or receives control signals.
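The division of labour inside the TCP transmission block 13 described above can be pictured with a skeletal class. It mirrors the block structure only loosely; the attribute and method names are invented for illustration, not taken from the patent.

    # Skeleton of TCP transmission block 13: the operation-model decision
    # block (132) chooses whether the ordinary congestion-window control
    # (block 133) or the adjusted control driven by the exchanged model
    # information (block 134) governs the data block (135).

    class TcpTransmissionBlock13:
        def __init__(self):
            self.use_adjusted = False   # selection made by block 132
            self.model_info = None      # B, dC, FRT, BD from control signals

        def on_control_signal(self, signal):   # blocks 131 and 132
            self.model_info = signal
            self.use_adjusted = True

        def active_congestion_window(self, ordinary_cwnd, adjusted_cwnd):
            # block 135 transmits according to whichever window is active
            return adjusted_cwnd if self.use_adjusted else ordinary_cwnd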
Fig. 4 is a diagram illustrating the functional blocks of the relay nodes. The relay nodes 103 and 104 perform TCP reception of an incoming packet, carry out the process of converting it into data, and transfer the data to the TCP transmission block 34 through the data transfer block 35, thereby passing the packet on. The differences between the functions of the relay nodes 103 and 104 and the functions of the transmitting terminal 101 or the receiving terminal 102 are described here. The relay nodes 103 and 104 include an input/output processing block 31, an IP processing block 32, a TCP reception block 33, a TCP transmission block 34 and a data transfer block 35. The input/output processing block 31, the IP processing block 32, the TCP reception block 33 and the TCP transmission block 34 carry out the same processing as the corresponding blocks of the transmitting terminal 101 or the receiving terminal 102, but differ in the following respects. Namely, when packet loss occurs, the control-signal transmission/reception block 341 of the TCP transmission block 34 of the relay nodes 103 and 104 notifies the control-signal transmission/reception block 333 of the TCP reception block 33 of the occurrence of the packet loss. Moreover, the control-signal transmission/reception block 333 of the TCP reception block 33 performs the processing of notifying the communication partner that is the transfer source about the packet loss, about the current amount of data waiting to be sent that is present in the TCP transmission block 34, and about the throughput at the time of resuming communication.

Fig. 5 presents the processing sequence of the high-speed communication system. The processing sequence of the high-speed communication system according to the present exemplary embodiment is described in detail below, using as an example the case in which data are transmitted in the high-speed communication system of Fig. 1. The data transmitted by the transmitting terminal 101 are transferred, in order, to the relay node 103, the relay node 104 and the receiving terminal 102.

First, the transmitting terminal 101 transmits a control start signal to the receiving terminal 102 (phase S101). The control start signal is transmitted to the receiving terminal 102 through the relay nodes 103 and 104. Then the transmitting terminal 101, which issued the control start signal, calculates the average throughput B1 achievable on the TCP connection between the transmitting terminal 101 and the relay node 103 (phase S102). The average throughput B1 is calculated by formula (1), where, for k = 103 representing the relay node 103, W_k is the smaller of the maximum size of the congestion window of the transmitting terminal 101 and the maximum size of the receive window of the relay node 103, which is the next node; d is the RTT between the transmitting terminal 101 and the relay node 103, which is the next node; C is a constant; and p is the packet loss rate. Preferably, d and p are calculated from statistics in advance and updated during communication as necessary. The transmitting terminal 101 calculates the throughput increase rate ΔC as follows (phase S103): ΔC = PacketSize/d, where "PacketSize" is the size of a segment. The transmitting terminal 101 also obtains the time FRT (hereinafter referred to as the estimated communication resumption time) needed to re-establish communication by retransmitting a packet, on the assumption that packet loss occurs at its own terminal, as follows (phase S103): FRT = d, where d is the RTT between the transmitting terminal 101 and the relay node 103.
The transmitting terminal 101 calculates the coefficient BD (hereinafter the throughput reduction factor), by which, in the case of packet loss, the throughput is reduced relative to the throughput before the occurrence of the packet loss when communication is resumed by retransmitting the packet, as follows (phase S104): BD = 1/2. The relay node 103, the relay node 104 and the receiving terminal 102, which received the control start signal transmitted from the transmitting terminal 101 in phase S101, likewise calculate the average throughput B achievable on the TCP connection with the node or terminal that is the destination of their transmission, the throughput increase rate ΔC of that TCP connection, the estimated communication resumption time FRT and the throughput reduction factor BD, in the same manner as the transmitting terminal 101. The corresponding pieces of information calculated by the transmitting terminal 101 are expressed as the average throughput B1, the throughput increase rate ΔC1, the estimated communication resumption time FRT1 and the throughput reduction factor BD1. The corresponding pieces of information calculated by the receiving terminal 102 are expressed as the average throughput B2, the throughput increase rate ΔC2, the estimated communication resumption time FRT2 and the throughput reduction factor BD2. The corresponding pieces of information calculated by the relay node 103 are expressed as the average throughput B3, the throughput increase rate ΔC3, the estimated communication resumption time FRT3 and the throughput reduction factor BD3. The corresponding pieces of information calculated by the relay node 104 are expressed as the average throughput B4, the throughput increase rate ΔC4, the estimated communication resumption time FRT4 and the throughput reduction factor BD4.

Then the transmitting terminal 101 generates an operating-characteristic model notification signal holding the calculated average throughput B1, throughput increase rate ΔC1, estimated communication resumption time FRT1 and throughput reduction factor BD1, and transmits the operating-characteristic model notification signal to the relay node 103 (phase S105). After receiving the operating-characteristic model notification signal, the relay node 103 compares the average throughput B1 stored in the notification signal with the average throughput B3 calculated at its own node (phase S106). The relay node 103 then updates the information stored in the notification signal to the average throughput B representing the lower value of the comparison, together with the throughput increase rate ΔC, estimated communication resumption time FRT and throughput reduction factor BD calculated by the device (either the transmitting terminal 101 or the relay node 103) that calculated that average throughput B. The relay node 103 then sends the updated operating-characteristic model notification signal to the relay node 104 (phase S107). The relay node 104 performs the same process. In other words, the relay node 104 takes the average throughput B from the operating-characteristic model notification signal received from the relay node 103, compares that average throughput B with the average throughput B4 of its own node (phase S108), and selects the lower average throughput B. The relay node 104 then updates the information stored in the notification signal to the average throughput B representing the lower value of the comparison, together with the throughput increase rate ΔC, estimated communication resumption time FRT and throughput reduction factor BD calculated by the device that calculated that average throughput B.
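The per-node calculations of phases S102 to S104 and the lower-value update performed by the relay nodes can be written down as a small sketch. Formula (1) itself is not reproduced in the text above, so the min(W/d, ...) expression below is only the standard TCP throughput model that matches the stated parameters W_k, d, C and p and should be read as an assumption; the remaining lines follow the text (ΔC = PacketSize/d, FRT = d, BD = 1/2, and a relay keeps whichever set of values carries the lower average throughput B).

    import math

    def node_model(W, d, p, packet_size, C=1.0):
        """Operating-characteristic model of one node (phases S102 to S104).

        W: smaller of the own maximum congestion window and the next node's
        maximum receive window; d: RTT to the next node; p: packet loss rate.
        The expression for B is an assumed reading of formula (1)."""
        loss_limited = C * packet_size / (d * math.sqrt(p)) if p > 0 else float("inf")
        B = min(W / d, loss_limited)   # assumption: standard TCP throughput model
        dC = packet_size / d           # throughput increase rate (phase S103)
        FRT = d                        # estimated communication resumption time
        BD = 0.5                       # throughput reduction factor (phase S104)
        return {"B": B, "dC": dC, "FRT": FRT, "BD": BD}

    def relay_update(notified, own):
        # phases S106 to S108: keep the whole set whose average throughput B
        # is the lower one, so dC, FRT and BD stay consistent with that B
        return notified if notified["B"] <= own["B"] else own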
The relay node 104 sends the updated operating-characteristic model notification signal to the receiving terminal 102 (phase S109). The receiving terminal 102 stores the average throughput B, the throughput increase rate ΔC, the estimated communication resumption time FRT and the throughput reduction factor BD in its memory block. The receiving terminal 102 generates an operating-characteristic model decision signal holding the average throughput B, the throughput increase rate ΔC and the throughput reduction factor BD from among the pieces of information stored in memory, and passes it to the relay node 104 (phase S110). The relay node 104 stores the average throughput B, the throughput increase rate ΔC and the throughput reduction factor BD stored in the operating-characteristic model decision signal in its memory block and transmits the decision signal to the relay node 103 (phase S111). Similarly, the relay node 103 stores the average throughput B, the throughput increase rate ΔC and the throughput reduction factor BD recorded in the decision signal in its memory block and transmits the operating-characteristic model decision signal to the transmitting terminal 101 (phase S112). The transmitting terminal 101 stores the average throughput B, the throughput increase rate ΔC and the throughput reduction factor BD stored in the decision signal in its memory block.

Then the transmitting terminal 101, which includes the operation-model decision block 132 in the TCP transmission block 13, and the relay nodes 103 and 104, which include the operation-model decision block 342 in the TCP transmission block 34, decide on the operation of the TCP control as set forth below, based on the information obtained from the operating-characteristic model decision signal (hereinafter, the pieces of information obtained from the decision signal are referred to as the average throughput Bx, the throughput increase rate ΔCx and the throughput reduction factor BDx) (phase S113). First, the operation-model decision blocks (132 and 342) of the transmitting terminal 101 and the relay nodes 103 and 104 calculate the following items (a), (b) and (c): (a) maximum window size = average throughput Bx × 2; (b) increase of the congestion window when one ACK packet is received = 2 × throughput increase rate ΔCx / d, where d is the RTT between the terminal and the node or the RTT between the nodes; (c) reduction of the congestion window at the time of packet loss = throughput reduction factor BDx. Each first congestion-window decision block (133, 343) performs ordinary TCP operation, for example as follows: maximum window size = an optional setting for each terminal; increase of the congestion window when one ACK packet is received = the equivalent of one packet; reduction of the congestion window at the time of packet loss = 1/2. Examples of the TCP control include TCP control that decides on its operation based on the detection of packet loss, TCP control that decides on its operation based on the increase in delay, and TCP control that takes both packet loss and delay into account; any of these operations may be carried out as the TCP control. Each terminal or relay node performs TCP control based on the parameters that have been decided.
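Items (a) to (c) of phase S113 can be transcribed literally as follows; the function and parameter names are assumptions, and the units of Bx are left exactly as in the text above.

    def tcp_parameters_from_decision(Bx, dCx, BDx, d):
        """Phase S113, items (a) to (c), transcribed from the text.

        Bx, dCx, BDx: average throughput, throughput increase rate and
        throughput reduction factor taken from the operating-characteristic
        model decision signal; d: RTT to the neighbouring terminal or node."""
        max_window = Bx * 2                   # (a) maximum window size
        cwnd_step_per_ack = 2 * dCx / d       # (b) congestion-window growth per ACK
        cwnd_factor_on_loss = BDx             # (c) reduction at packet loss
        return max_window, cwnd_step_per_ack, cwnd_factor_on_loss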
Fig. 6 is a diagram illustrating the state of the queue at each node. Each of the relay nodes 103 and 104 and the receiving terminal 102 performs the TCP operation decided through the above process. Moreover, each of the relay nodes 103 and 104 and the receiving terminal 102 notifies the node or terminal upstream of it, by means of a hangup determination signal, according to the amount of data pending transmission that has accumulated in the queues of the TCP reception blocks (23 and 33), as follows. In this description, i = 0, 1, 2 and 3. As for node (i), node (0) represents the transmitting terminal 101, node (1) represents the relay node 103, node (2) represents the relay node 104, and node (3) represents the receiving terminal 102. Moreover, the queue q(i) of node (i) is represented as q(0), q(1), q(2) and q(3). The notification is sent from the subsequent node (i), which comes after the preceding node (i-1), to the preceding node (i-1).

(A) The case q(i) < m × average throughput B(i-1) × FRT(i-1). This is the case in which the amount of data stored in the queue q(i) of the subsequent node (i) is smaller than the amount of data obtained by multiplying m (m is a constant) by the amount of data sent when communicating at the average throughput B(i-1) between the nodes during the RTT between the preceding node (i-1) and the subsequent node (i). In this case, since there is room for data transmission, a hangup determination signal representing ordinary congestion-window control, that is, control by the first congestion-window decision block (133 or 343), is transmitted from the subsequent node (i) to the preceding node (i-1).

(B) The case m × average throughput B(i-1) × FRT(i-1) ≤ q(i) < X. This is the case in which the amount of data stored in the queue q(i) of the subsequent node (i) is greater than or equal to the amount of data obtained by multiplying m (m is a constant) by the amount of data sent when communicating at the average throughput B(i-1) between the nodes during the RTT between the preceding node (i-1) and the subsequent node (i), and is less than the constant X, which is the upper limit representing the suspension of communication. In this case, since there is no room for data transmission, a hangup determination signal representing control by the second congestion-window decision block (134 or 344), which controls the reduction of the congestion window, is transmitted from the subsequent node (i) to the preceding node (i-1).

(C) The case X < q(i). This is the case in which the amount of data stored in the queue q(i) of the subsequent node (i) is greater than the upper limit X representing the suspension of communication. In this case, a hangup determination signal representing that data transmission should be suspended for FRT(i), that is, for the RTT between the subsequent node (i) and the next node (i+1), is transmitted from the subsequent node (i) to the preceding node (i-1).

The constant m is set to a numerical value approximately in the range of 2 to 4, and the constant X is set to a numerical value approximately in the range of 5 to 10. Each node that has received a hangup determination signal then controls communication on the basis of the information carried by the hangup determination signal.
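The three queue states (A) to (C) map directly onto the hangup determination signal sent upstream, as in the sketch below. The constants m and X are the ones named above; treating X, like m, as a multiple of the data delivered at the inter-node average throughput during one RTT is an assumption, since the text gives only its numerical range.

    def hangup_signal(q, B_link, rtt_link, FRT_next, m=3, X=8):
        """Cases (A) to (C): what the subsequent node (i) tells node (i-1).

        q: data pending transmission in node (i)'s queue; B_link, rtt_link:
        average throughput and RTT of the connection from node (i-1) to
        node (i); FRT_next: RTT from node (i) to node (i+1). m (about 2-4)
        and X (about 5-10) are the constants from the text; reading X as a
        multiple of B_link * rtt_link is an assumption."""
        unit = B_link * rtt_link               # data sent during one RTT
        if q < m * unit:
            return {"window_control": "ordinary"}                   # case (A)
        if q < X * unit:
            return {"window_control": "reduced"}                    # case (B)
        return {"window_control": "suspend", "duration": FRT_next}  # case (C)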
Furthermore, each node or terminal detects the occurrence of packet loss. When the occurrence of packet loss is detected, the node (i) that detects the packet loss transmits a control signal S as indicated below, and the node (i-1) that receives this control signal S operates as specified below.

(D) When the node (i) that detects the packet loss is communicating using the second congestion-window decision block, a hangup determination signal representing that communication should be suspended for FRT(i), that is, for the RTT between the subsequent node (i) and the next node (i+1), is transmitted from the subsequent node (i) to the preceding node (i-1).

(E) The node (i-1) suspends communication for the time FRT(i) indicated by the hangup determination signal. Then, after FRT(i) expires, the node (i-1) multiplies the congestion window by the throughput reduction factor BD, decides on the new size of the congestion window, and resumes communication.

Through the above process, the high-speed communication system according to the present exemplary embodiment changes the method of TCP control in accordance with the amount of data accumulated in a node's queue. Consider the case in which, because of a rather large backlog of data pending transmission remaining accumulated in the node (i), the data pending transmission at the node (i) does not fall to 0 even when packet loss occurs at the node (i-1) and communication is suspended for FRT(i-1), that is, for the RTT between the node (i) and the preceding node (i-1). In this case, the node (i-1) carries out the TCP operation using the second congestion-window decision block (134 or 344), which limits the throughput. For example, consider a case in which the TCP connection from the relay node 104 to the receiving terminal 102 is the bottleneck. In this case, the relay node 104 performs the TCP operation using the first congestion-window decision block (343). Moreover, every other relay node and the transmitting terminal 101 perform control so as to provide essentially the same throughput as the connection between the relay node 104 and the receiving terminal 102, through the TCP operation of the second congestion-window decision block (134 or 344). Similarly, for example, consider a case in which the TCP connection from the transmitting terminal 101 to the relay node 103 is the bottleneck. In this case, the transmitting terminal 101 performs the TCP operation using the first congestion-window decision block (133), and every other node performs control so as to provide essentially the same throughput as the connection between the transmitting terminal 101 and the relay node 103, through the TCP operation of the second congestion-window decision block (344).

The high-speed communication system according to the second exemplary embodiment is described next. The high-speed communication system according to the second exemplary embodiment is an example of a case in which, in the configuration described in the first exemplary embodiment, the first congestion-window decision block 343 of the relay node 103 performs processing other than the processing described in the first exemplary embodiment. In this description it is assumed that the relay node 103 is the node (i). The first congestion-window decision block 133 of the node (i) is set as specified in (1) to (3) below so as to satisfy conditions (I) and (II).

(I) The average throughput B(i) at the node (i) = the average throughput B(i+1) at the node (i+1).
(II) The throughput reduction factor BD(i) when packet loss occurs at the node (i) = the throughput reduction factor BD(i+1) when packet loss occurs at the node (i+1).
(1) Maximum window size = (average throughput B(i+1) × d(i+1)) × (d(i)/d(i+1)).
(2) Increase of the congestion window when one ACK packet is received = 2 × throughput increase rate ΔC of the node (i+1) / d(i+1), where d is the RTT between a node and the node that follows it.
(3) Reduction of the congestion window at the node (i) at the time of packet loss = the throughput reduction factor BD(i+1) when packet loss occurs at the node (i+1).
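Settings (1) to (3) of the second exemplary embodiment, together with the resumption step of case (E), can be transcribed as a short sketch; the function names are illustrative, and the literal form of setting (1) is kept even though it simplifies to B(i+1) × d(i).

    def matched_link_settings(B_next, dC_next, BD_next, d_own, d_next):
        """Settings (1) to (3) for node (i), using the values of node (i+1)."""
        max_window = (B_next * d_next) * (d_own / d_next)   # (1)
        cwnd_step_per_ack = 2 * dC_next / d_next            # (2)
        cwnd_factor_on_loss = BD_next                       # (3)
        return max_window, cwnd_step_per_ack, cwnd_factor_on_loss

    def resume_after_suspension(cwnd, BD):
        # case (E): after waiting FRT(i), node (i-1) scales its congestion
        # window by the throughput reduction factor BD and resumes sending
        return cwnd * BD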
Through this setting, even when the delay of the communication channel (i) from the node (i) to the node (i+1) is very large and the delay of the communication channel (i+1) from the node (i+1) to the node (i+2) is small, communication can be carried out on each of them with the same throughput as the other. This is effective when the communication channel from the node 103 to the node 104 is a communication channel that has no packet loss but has a very long delay, as in the case of sections with a submarine cable. This is because in ordinary TCP the congestion window is increased by one packet per RTT, which causes a very slow increase in throughput. This problem can be solved by the exemplary embodiment of the present invention.

The high-speed communication system has been described above. According to the process described above, communication is carried out without increasing the amount of data accumulated in a node's queue, and communication can therefore be carried out at high speed. Moreover, according to the exemplary embodiments of the present invention, even when the TCP connections before and after a node differ from each other in speed, the increase in the amount of data accumulated in the node can be reduced, so communication can be carried out at high speed. Moreover, according to the exemplary embodiments of the present invention, control is performed in such a way that the degree of increase and decrease of the data transfer rate becomes the same before and after a node, and the increase in the amount of data accumulated in the node can thereby be reduced, so communication can be carried out at high speed. Moreover, according to the exemplary embodiments of the present invention, even when communication is suspended by a control signal, the resumption of communication is not delayed by transmission delay when communication resumes. Accordingly, a reduction of communication performance can be prevented.

The transmitting terminal, the relay nodes and the receiving terminal each include an internal computer system. The steps of each process described above are stored in a machine-readable storage medium in the form of a program, and the process described above is carried out by a computer reading and executing this program. Here, examples of the machine-readable storage medium include a magnetic disk, an optical disk, a CD-ROM, a DVD-ROM and a semiconductor memory. The computer program may also be delivered to a computer via a communication line, and the computer receiving it may execute the program. Some of the functions described above may be realized by means of the program.

The present invention has been described with reference to exemplary embodiments, but the present invention is not limited to the exemplary embodiments described above. A person skilled in the art may make various changes to the configuration or details of the present invention without departing from the essence of the present invention. This application is based upon and claims priority from Japanese Patent Application No. 2010-032872, filed February 17, 2010, the description of which is incorporated herein in full by reference.

INDUSTRIAL APPLICABILITY

The present invention can be applied to a high-speed communication system and a high-speed communication method that allow the transmission of data across a plurality of nodes.
According to the high-speed communication system and the high-speed communication method, communication can be carried out without increasing the amount of data accumulated in a node's queue and, thus, communication can be carried out at high speed.

DESCRIPTION OF THE REFERENCE NUMERALS

101 - transmitting terminal; 102 - receiving terminal; 103, 104 - relay node; 11, 21, 31 - input/output processing block; 12, 22, 32 - IP processing block; 13, 34 - TCP transmission block; 23 - TCP reception block; 14, 24, 35 - application processing block

CLAIMS

1. A high-speed communication system comprising a plurality of nodes located in a communication channel and a plurality of connections established between the plurality of nodes, wherein the plurality of nodes exchange a plurality of pieces of operating-characteristic model information among themselves, the plurality of pieces of operating-characteristic model information representing the communication operating characteristics achievable by each of the plurality of connections, and each of the plurality of nodes controls communication on the basis of any one of the plurality of pieces of operating-characteristic model information.

2. The high-speed communication system of claim 1, in which each of the plurality of nodes estimates the average throughput between nodes, included in the achievable communication operating characteristics; and each of the plurality of nodes exchanges operating-characteristic model information representing the average throughput between nodes, decides on a target average throughput, and controls communication on the basis of the target average throughput.

3. The high-speed communication system of claim 1, in which each of the plurality of nodes estimates the throughput increase rate between nodes, included in the achievable communication operating characteristics; and each of the plurality of nodes exchanges operating-characteristic model information representing the throughput increase rate between nodes, decides on a target throughput increase rate, and controls communication on the basis of the target throughput increase rate.
4. The high-speed communication system according to any one of claims 1 to 3, in which the plurality of nodes includes a start-point node, a destination node and a relay node located between the start-point node and the destination node; the start-point node transmits to the destination node a control start signal notifying that communication is to be carried out; the destination node, having detected the control start signal, calculates the operating-characteristic model information of the destination node on the basis of the detection of the control start signal, and the relay node calculates the operating-characteristic model information of the relay node on the basis of the detection of the control start signal; the start-point node transmits, toward the destination node, an operating-characteristic model notification signal in which operating-characteristic model information is stored; the relay node compares the operating-characteristic model information stored in the operating-characteristic model notification signal with the operating-characteristic model information calculated by the relay node and, when the relay node transmits the operating-characteristic model notification signal toward the destination node, transmits the operating-characteristic model notification signal storing the operating-characteristic model information that represents the lower value; the destination node stores the operating-characteristic model information stored in the received operating-characteristic model notification signal and transmits, toward the start-point node, an operating-characteristic model decision signal in which that operating-characteristic model information is stored; the relay node retransmits the operating-characteristic model decision signal toward the start-point node; the start-point node stores the operating-characteristic model information stored in the operating-characteristic model decision signal; and the start-point node and the relay node perform TCP control on the basis of the operating-characteristic model information stored in the operating-characteristic model decision signal.

5. The high-speed communication system according to claim 4, in which the start-point node, the relay node and the destination node include a preceding node and a subsequent node that is adjacently connected to it; the subsequent node transmits to the preceding node a hangup determination signal representing ordinary congestion-window control in the case in which the amount of data accumulated in the queue of the subsequent node is less than the amount of data obtained by multiplying a constant by the amount of data sent when communicating at the average throughput between the preceding node and the subsequent node during the RTT between the preceding node and the subsequent node; and the preceding node performs TCP control on the basis of the hangup determination signal.

6. The high-speed communication system according to claim 4, in which the start-point node, the relay node and the destination node include a preceding node and a subsequent node that is adjacently connected to it; the subsequent node transmits to the preceding node a hangup determination signal representing control that reduces the congestion window in the case in which the amount of data accumulated in the queue of the subsequent node is equal to or greater than the amount of data obtained by multiplying a constant by the amount of data sent when communicating at the average throughput between the preceding node and the subsequent node during the RTT between the preceding node and the subsequent node, and is less than a constant that is the upper limit representing the suspension of communication; and the preceding node performs TCP control on the basis of the hangup determination signal.
7. The high-speed communication system according to claim 4, in which the start-point node, the relay node and the destination node include a preceding node and a subsequent node that is adjacently connected to it; the subsequent node transmits to the preceding node a hangup determination signal representing that data transmission is to be suspended for the RTT between the subsequent node and the node that follows the subsequent node, in the case in which the amount of data accumulated in the queue of the subsequent node is greater than the upper limit representing the suspension of communication; and the preceding node performs TCP control on the basis of the hangup determination signal.

8. The high-speed communication system according to claim 4, in which the subsequent node transmits to the preceding node a hangup determination signal representing that data transmission is to be suspended for the RTT between the subsequent node and the node that follows it, in the case in which the subsequent node detects packet loss and the subsequent node is performing control that reduces the congestion window in the TCP control at the subsequent node; and the preceding node performs TCP control on the basis of the hangup determination signal.

9. The high-speed communication system of claim 8, in which the preceding node suspends communication for the RTT between the subsequent node and the node that follows it, decides on a new congestion-window size by multiplying the congestion window by the throughput reduction factor after that time has elapsed, and resumes communication.