Network latency: decisive for the performance of applications

The time it takes to transmit data between two points is called network latency. In other words, it is the time that elapses between sending a request and receiving the response.

Here is an example: the input from a device must be sent to a server via the Internet. The server processes the request and then sends the response back to the end device, again via the Internet. The network latency, or ping, indicates the time taken for this entire round trip in milliseconds (ms).
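This round trip can also be timed in code. The following Python sketch is a minimal illustration, not a real ping: the helper names are invented, and a loopback echo server stands in for the remote server, so the measured value only demonstrates the technique.

```python
import socket
import threading
import time

def run_echo_server(server_sock):
    """Accept one connection and echo the data back (stands in for a remote server)."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

def measure_latency_ms(host, port):
    """Send a small request and time the full round trip, like a ping."""
    with socket.create_connection((host, port)) as sock:
        start = time.perf_counter()
        sock.sendall(b"ping")
        sock.recv(1024)  # block until the response arrives
        return (time.perf_counter() - start) * 1000.0  # elapsed time in ms

# Demo against a loopback server; a real measurement would target a remote host.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

latency = measure_latency_ms("127.0.0.1", port)
print(f"round-trip time: {latency:.2f} ms")
```

On the loopback interface the result is a fraction of a millisecond; over the Internet, the distance and every hop along the path add to it.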

There are many factors that can influence network latency:

- Physical distance between transmitter and receiver
- Signal interference
- Network overload
- Outdated hardware or software

Consequences of high network latency
If the network latency is too high, this can have serious consequences for the functionality of various applications. Excessively long processing times caused by high latency significantly impair the performance of software and end devices.

The IP network becomes slower than expected, which increases the round-trip time (RTT). Applications may stop responding or take a very long time to load. High network latency can also cause the loss of data packets, and content delivery networks (CDNs) can no longer deliver content reliably.

Private individuals experience this phenomenon as disconnections, quality losses and delayed audio/video streaming. In Industry 4.0, high latency can cause massive damage to products and technology in automatically operating machines.


Reducing latency in the network

There are various methods for reducing the delay in data processing. These include:

- Using the latest hardware and software
- Configuring the network correctly
- Using a faster connection
- Increasing the bandwidth
- Using your own servers instead of external providers
- Reducing data volumes via temporary storage (caching)
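The last point, reducing data volumes via temporary storage, can be illustrated with a small cache. In this Python sketch the fetch function and its payloads are invented for illustration; a real version would transfer data over the network, and the cache would save that transfer on repeated requests:

```python
import functools

FETCH_COUNT = {"n": 0}  # counts simulated network transfers

@functools.lru_cache(maxsize=128)
def fetch_resource(url):
    """Stand-in for an expensive network fetch; cached results skip the wire."""
    FETCH_COUNT["n"] += 1
    return f"payload for {url}"

# The first call hits the 'network'; the second is served from local storage.
fetch_resource("https://example.com/data")
fetch_resource("https://example.com/data")
print(FETCH_COUNT["n"])  # → 1
```

Only one transfer takes place for two requests, so both the data volume and the perceived latency of the second request drop.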
 
However, the most effective method of reducing network latency is probably shortening the communication path, which tools such as ping and traceroute can help analyse. If the distance from the source to the destination is long, the data must cover that distance. And that takes time!

It therefore makes sense to work with edge computing, which shortens the data transmission path to a minimum. Data can be processed on site and returned to the end device almost in real time. Network latencies, which can add up to several seconds depending on the network structure, are largely eliminated in this way.

Finally, the use of Quality of Service (QoS) mechanisms can help. Particularly important applications are given high priority. This reduces latency for the prioritised application, but can increase it for another application with lower priority.
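The prioritisation behind QoS can be sketched with a simple priority queue. In this Python example the scheduler class, the packet labels and the priority values are all invented for illustration; lower numbers mean higher priority, so latency-critical traffic is forwarded first:

```python
import heapq
import itertools

class QoSScheduler:
    """Minimal priority scheduler: lower priority number = forwarded first."""
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # keeps FIFO order within one priority

    def enqueue(self, priority, packet):
        heapq.heappush(self._queue, (priority, next(self._counter), packet))

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

sched = QoSScheduler()
sched.enqueue(2, "bulk download chunk")
sched.enqueue(0, "VoIP frame")          # latency-critical, highest priority
sched.enqueue(1, "video stream segment")

order = [sched.dequeue() for _ in range(3)]
print(order)  # → ['VoIP frame', 'video stream segment', 'bulk download chunk']
```

The VoIP frame jumps the queue even though it arrived second, while the bulk download waits: exactly the trade-off described above, lower latency for one application at the cost of another.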
