Hi all, with this post I’m starting a new category on network performance. In recent years we can see a growing demand for speed, and speed really means a faster user experience. As a business, there are a few advantages we can gain from this faster user experience:
- Faster sites lead to better user engagement.
- Faster sites lead to better user retention.
- Faster sites lead to higher conversions.
So we can treat speed as a feature too. But it is not easy to provide this feature for every application or every network; there are a few limitations. Let’s look at the two most important terms used when talking about speed. The performance of all network traffic depends on these two:
- Latency : The time between the source sending the packet and the destination receiving the packet.
- Bandwidth : The maximum amount of data that can be transferred per unit of time. Also known as the maximum throughput.
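A simple back-of-the-envelope model ties these two together: the time to deliver a message is roughly the latency plus the message size divided by the bandwidth. A minimal sketch in Python (the sizes and rates below are just example values, not measurements):

```python
# Rough delivery-time model: total time = latency + size / bandwidth.
def transfer_time(size_bits, bandwidth_bps, latency_s):
    """Estimate the time to deliver a message of size_bits bits."""
    return latency_s + size_bits / bandwidth_bps

# Example: a 1 MB file over a 10 Mbps link with 50 ms of latency.
size = 8 * 1_000_000  # 1 MB expressed in bits
print(transfer_time(size, 10_000_000, 0.050))  # ~0.85 seconds
```

Note how, for large transfers, bandwidth dominates the total time, while for small messages the fixed latency cost dominates.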
Now let’s have a closer look at the latency of a network. As we discussed earlier, latency is the time between a sender sending a message or packet and the destination receiving it. Though it seems like a simple term, there is a lot behind it. Latency is made up of several delay components. Let’s see what they are:
- Propagation Delay : Amount of time required for a message to travel from the sender to receiver, which is a function of distance over speed with which the signal propagates.
- Transmission Delay : Amount of time required to push all the packet’s bits into the link, which is a function of the packet’s length and data rate of the link.
- Processing Delay : Amount of time required to process the packet header, check for bit level errors, and determine the packet’s destination.
- Queuing Delay : Amount of time the incoming packet is waiting in the queue until it can be processed.
So simply we can say,
Total Latency = Propagation Delay + Transmission Delay + Processing Delay + Queuing Delay
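The formula above can be sketched directly in code; the delay values used here are hypothetical examples, not measurements from a real network:

```python
def total_latency(propagation, transmission, processing, queuing):
    """Sum the four delay components (all values in seconds)."""
    return propagation + transmission + processing + queuing

# Hypothetical values: 30 ms propagation, 1 ms transmission,
# 0.5 ms processing, 2 ms queuing.
print(total_latency(0.030, 0.001, 0.0005, 0.002))  # ~0.0335 s
```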
Propagation Delay depends on the distance from source to destination and the medium used to transfer the data. Transmission Delay depends on the available data rate of the transmitting link. When a packet arrives, its header has to be examined to find the destination, check for bit-level errors, and so on. This takes some time too, and it’s known as Processing Delay. Finally, the Queuing Delay: it comes into play when the receiver is slower than the sender. When the sender is sending packets faster than the receiver can handle, there has to be a mechanism to hold packets and release them at a speed the receiver can control. This is known as queuing, and the time a packet spends waiting there is the Queuing Delay.
As we all know, the fastest possible speed is the speed of light: about 299,792,458 meters per second, or 186,282 miles per second, in a vacuum. But when we send a signal, it has to travel through a medium, not a vacuum, and that slows it down. The ratio of the speed of light in a vacuum to the speed with which the signal travels in a material is known as the Refractive Index of the material. The higher this index, the slower the signal. The refractive index of optical fiber is about 1.4-1.6, so the speed of light in optical fiber is around 200,000,000 meters per second.
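We can use this to estimate propagation delay over fiber. As an example, the great-circle (straight-line) distance from New York to London is roughly 5,570 km; a minimal sketch:

```python
# Propagation delay = distance / signal speed in the medium.
SPEED_OF_LIGHT = 299_792_458   # m/s in a vacuum
REFRACTIVE_INDEX_FIBER = 1.5   # typical value for optical fiber

speed_in_fiber = SPEED_OF_LIGHT / REFRACTIVE_INDEX_FIBER  # ~2e8 m/s

distance = 5_570_000  # New York -> London, roughly, in meters
delay = distance / speed_in_fiber
print(f"{delay * 1000:.1f} ms one-way")  # ~27.9 ms
```

A real fiber route is longer than the straight-line distance, so actual propagation delays are higher than this lower bound.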
Here the distance is the shortest, straight-line distance, and no real optical fiber run is laid out like that. Now let’s see how people react to delay in a network:
- Delay of 100-200 ms : Lag
- Delay of 300 ms : Sluggish
- Delay of 1,000 ms : Users have already performed a mental context switch while waiting for the response
So now it’s clear why we should aim for a low response time in our network.
The traceroute command (tracert on Windows) can be used to identify the routing path of a packet and the latency of each network hop in an IP network.
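Traceroute reports latency per hop; for a rough end-to-end number, one simple approach is to time a TCP handshake, since it takes one full round trip. A minimal sketch (the host and port in the usage comment are placeholders):

```python
import socket
import time

def tcp_connect_rtt(host, port, timeout=3.0):
    """Measure round-trip latency as the TCP handshake time (a rough proxy)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close it immediately
    return time.perf_counter() - start

# Usage (placeholder host): tcp_connect_rtt("example.com", 80)
```

This measures more than pure network latency (it includes OS overhead on both ends), but it is a handy zero-dependency estimate.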
Now let’s have a closer look at the next factor, Bandwidth. As I mentioned earlier, Bandwidth is the maximum amount of data we can transfer per unit of time, also known as the maximum throughput.
This visualization was taken from the Akamai web site.
Also, you can test your local ISP’s upstream and downstream speeds using Ookla’s Speedtest web site.
As nowadays we use more and more sites for streaming video, network traffic has risen by a huge margin. As a solution we can increase the bandwidth, and to do that we have to consider the following:
- Add more fibers into your fiber optic links.
- Deploy more links across the congested routes.
- Improve the WDM (wavelength-division multiplexing) techniques to transfer more data through existing links.
- Reduce round trips, move the data closer to the client.
- Build applications that can hide the latency through caching, prefetching, and a variety of similar techniques.
- Make the distance shorter from sender to the receiver.
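As an example of the caching idea above, here is a small sketch using Python’s built-in `lru_cache`. The `fetch` function is hypothetical and just simulates a 50 ms round trip; real code would use an HTTP client:

```python
import functools
import time

@functools.lru_cache(maxsize=128)
def fetch(url):
    """Hypothetical stand-in for a network request."""
    time.sleep(0.05)  # simulate a 50 ms round trip
    return f"response for {url}"

fetch("/index.html")  # first call pays the round trip
fetch("/index.html")  # repeat call is served from the cache, no round trip
```

The same principle underlies browser caches and CDNs: every round trip you avoid is latency the user never sees.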
Using these techniques we can make our network much faster than before. This will help retain our customers, as the network can provide a good response time. Hope you now have a clear idea about latency and bandwidth and how they matter in networking. See you soon with another interesting topic. Thank you!