Here, the arrows represent packets crossing the network. The first packet carries the request from the client to the server. The two packets from the server carry the response containing the HTML results from running request.asp (3000 bytes long).
As shown in the previous section, Command Initial Response Time is calculated by finding the time difference between the request packet arriving at the server and the first response packet leaving the server. Command Processing Time includes this initial response time (TimeB – TimeA) plus the time taken to transmit all of the response packets (TimeC – TimeB).
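The arithmetic above can be sketched in a few lines of Python. The timestamp values below are illustrative, not taken from an actual trace; only the 3000-byte response and the TimeA–TimeC labels come from the example.

```python
# Hypothetical timestamps (in seconds) for the single-burst example above.
time_a = 0.010  # request packet arrives at the server (TimeA)
time_b = 0.045  # first response packet leaves the server (TimeB)
time_c = 0.052  # last response packet leaves the server (TimeC)

# Command Initial Response Time: server "think time" before the first byte.
initial_response_time = time_b - time_a

# Command Processing Time: the initial response time plus the time to
# transmit the full response (here, the 3000-byte output of request.asp).
command_processing_time = initial_response_time + (time_c - time_b)
```

Note that for a single-burst response, Command Processing Time collapses to TimeC – TimeA.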
The following example shows a more complicated interaction between a client and server, in which the response data leaving the server must be sent in more than one burst. Each burst of data consists of two data packets.
Since the Command Processing Time metric deals with what occurs at the back end, it includes the Command Initial Response Time, but does not include time required for client-side activities (e.g., creating and sending an ACK). Thus, Command Processing Time here is Command Processing Time 1 + Command Processing Time 2.
This calculation holds true for commands that return even more content than shown in the above example. It simply means more bursts are sent from the server, in which case the processing time for each burst is measured and the results are added together.
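A minimal sketch of the per-burst summation, assuming hypothetical (start, end) server-side timestamp pairs for each burst; the client-side gaps between bursts are deliberately excluded, as described above:

```python
# Each pair is (time the server becomes able to send, time the last packet
# of that burst leaves the server). Values are illustrative only.
burst_times = [
    (0.010, 0.045),  # burst 1: request arrives -> last packet of burst 1 sent
    (0.060, 0.070),  # burst 2: client ACK arrives -> last packet of burst 2 sent
]

# Command Processing Time sums the server-side time of every burst,
# however many bursts the response requires.
command_processing_time = sum(end - start for start, end in burst_times)
```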
The TCP protocol limits how much data the server can send in a burst without receiving an acknowledgement (ACK) from the client. This maximum amount of data the server can send is called the TCP window size. This window size is negotiated between the client and server when the TCP connection is established, and is used for congestion control on the network. (In the example used in the previous section, the TCP window size was 3000 bytes.)
As the server sends data, it reduces the TCP window size by the amount of data that was sent. As the client ACKs data, the server can increase its current window size by the amount of data ACKed. So long as the server continually receives ACKs from the client, its TCP window will remain “open,” allowing it to continue to send data. However, if congestion or other delays on the network hold up the ACKs, the server will keep sending data until its window size reaches 0, at which point the window is “closed.” The server must then stop sending data and wait for an ACK from the client.
Obviously, larger windows allow the server to burst more data at once, but they also increase the chance of congestion, which can lead to dropped packets and force the server to resend data. Smaller windows decrease the chance of causing congestion but result in less efficient communication between client and server.
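The window accounting described above can be modeled with a toy class. This is a deliberately simplified sketch, assuming the 3000-byte window and 1500-byte packets from the earlier example; real TCP stacks track the window in more sophisticated ways.

```python
class TcpWindow:
    """Toy model of the sliding-window accounting described above."""

    def __init__(self, size):
        self.available = size  # bytes the server may still send un-ACKed

    def send(self, nbytes):
        """Server sends data, shrinking the available window."""
        if nbytes > self.available:
            raise RuntimeError("window closed: must wait for an ACK")
        self.available -= nbytes

    def ack(self, nbytes):
        """Client ACKs data, reopening that much of the window."""
        self.available += nbytes

# 3000-byte negotiated window, 1500-byte data packets.
w = TcpWindow(3000)
w.send(1500)
w.send(1500)   # available is now 0: the window is "closed"
w.ack(3000)    # the client's ACK reopens the full window
```

Attempting a `send` while the window is closed raises an error in this model, mirroring the point at which a real server must stop and wait for an ACK.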
While the Command Processing Time metric (and, by extension, Command Initial Response Time) sheds light on the timings at the back end, the Command Client Time metric determines how much time is spent on the client side during client-server communication with multiple bursts.
The following example represents a common interaction between a client and server, in which a large file sent by the server is split into several 1500-byte packets. Whenever the server sends data, the TCP protocol requires the client to routinely send an acknowledgement (ACK) after it has received two full data packets.
The three previous sections explained the calculations behind the initial response time of the server, the server request-processing time, and the client acknowledgement time. The Command Completion Time metric is the sum of all three of these actions:
Command Completion Time includes the entire delay from when the first packet in the client’s request arrives until the last packet in the server’s response is sent. Here, the delay is equal to the Command Processing Time.
In this example, the total Command Processing Time is split because the client-bound data is sent over more than one burst (TimeB – TimeA, and TimeD – TimeC). The Command Client Time is the total amount of time required for the client to acknowledge the first burst, which includes both travel time and client-side processing time (TimeC – TimeB).
Again, Command Completion Time includes the entire delay from when the first client packet in the request arrives at the server (A), until the last packet in the server’s response is sent (D). This includes all server-side processing time (i.e., Command Processing Time 1 + Command Processing Time 2), as well as all client time (i.e., Command Client Time).
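Putting the three metrics together for the two-burst example, using the TimeA–TimeD labels named above (the timestamp values themselves are illustrative):

```python
# Illustrative values for the instants labeled A-D in this example.
time_a = 0.010  # first packet of the client's request arrives at the server (A)
time_b = 0.045  # last packet of the first burst leaves the server (B)
time_c = 0.060  # client's ACK for the first burst arrives at the server (C)
time_d = 0.070  # last packet of the second, final burst leaves the server (D)

# Server-side work only: the two burst processing times.
command_processing_time = (time_b - time_a) + (time_d - time_c)

# Travel time plus client-side ACK processing for the first burst.
command_client_time = time_c - time_b

# Command Completion Time is the sum of all server and client time,
# i.e. the whole span from A to D.
command_completion_time = command_processing_time + command_client_time
```

Because the client time exactly fills the gap between the two bursts, the sum equals TimeD – TimeA, the full delay from the first request packet to the last response packet.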