System Design Fundamentals

System Efficiency

System efficiency measures how quickly a system responds and how much work it can handle. The two metrics used to measure system efficiency are latency and throughput.

Throughput

Throughput refers to how much data can be processed within a specific period of time.

It’s a measure of the quantity of data sent or received per unit of time, commonly expressed in megabits per second (Mb/s).

For example, a system might process 1TB of data per hour.
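To make that figure comparable with the Mb/s unit mentioned above, here is a small sketch converting a hypothetical throughput of 1TB per hour into megabits per second:

```python
# Convert a throughput of 1 TB/hour into megabits per second (Mb/s).
TB_IN_BITS = 1_000_000_000_000 * 8  # 1 TB = 10^12 bytes = 8 * 10^12 bits
SECONDS_PER_HOUR = 3600

throughput_mbps = TB_IN_BITS / SECONDS_PER_HOUR / 1_000_000  # bits/s -> Mb/s
print(f"{throughput_mbps:.0f} Mb/s")  # roughly 2222 Mb/s
```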

In a client-server system, client throughput is the number of responses per unit of time a client receives for the requests it makes, while server throughput is the number of requests per unit of time (usually per second) a server can process.
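Server throughput can be estimated by timing how many requests a handler gets through in a given window. The sketch below uses a trivial stand-in function in place of real request handling (the names `measure_throughput` and `handle_request` are illustrative, not from any library):

```python
import time

def measure_throughput(handler, n_requests=10_000):
    """Estimate requests per second by timing n_requests calls to handler."""
    start = time.perf_counter()
    for _ in range(n_requests):
        handler()
    elapsed = time.perf_counter() - start
    return n_requests / elapsed

def handle_request():
    # Trivial stand-in for real request-processing work.
    sum(range(100))

print(f"{measure_throughput(handle_request):,.0f} requests/second")
```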

Latency

Latency is a measure of delay, typically expressed in milliseconds (ms).

In a client-server system, there are two types of latency:

  1. Network Latency - It’s the amount of time it takes for data/packets to travel from a client to the server. The time can be measured as one way or as a round trip.

  2. Server Latency - It’s the time the server takes to process a request and generate a response.
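Round-trip latency is just the elapsed time between sending a request and receiving its response. Here is a minimal sketch, using a simulated server call (`fake_server_call` is a hypothetical stand-in that sleeps for about 5 ms instead of crossing a real network):

```python
import time

def measure_round_trip(send_request):
    """Return round-trip latency in milliseconds for one request."""
    start = time.perf_counter()
    send_request()  # client -> server -> client
    return (time.perf_counter() - start) * 1000  # convert seconds to ms

def fake_server_call():
    # Simulate a server that takes ~5 ms to respond.
    time.sleep(0.005)

latency_ms = measure_round_trip(fake_server_call)
print(f"round trip: {latency_ms:.1f} ms")
```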

Why are latency and throughput important?

If the latency is high, responses are slow to arrive. If the throughput is low, few requests are processed per unit of time.

High latency and low throughput impair the performance of a system. There are systems, such as games, where latency matters a lot. If the latency is high, a user will experience lag, which drastically impairs the user experience.

When making database queries, one can improve server latency and throughput by using an in-memory cache. The following latency tests show why.
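The caching idea can be sketched in a few lines: store each query result in memory the first time it is computed, and serve repeats from the dictionary instead of the database. The function names and the 20 ms delay below are illustrative assumptions, not a real database driver:

```python
import time

query_cache = {}  # maps query string -> cached result

def slow_db_query(sql):
    """Hypothetical stand-in for a real database call."""
    time.sleep(0.02)  # simulate ~20 ms of disk-bound work
    return f"rows for: {sql}"

def cached_query(sql):
    """Serve repeated queries from memory instead of hitting the database."""
    if sql not in query_cache:
        query_cache[sql] = slow_db_query(sql)
    return query_cache[sql]

cached_query("SELECT * FROM users")  # first call: pays the ~20 ms cost
cached_query("SELECT * FROM users")  # repeat call: answered from memory
```

In a real system, the same role is typically played by a dedicated cache such as Redis or Memcached sitting in front of the database.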

Latency Tests

Latency tests carried out across key storage media (in-memory cache, SSD, HDD) and network calls reveal the following:

  1. Reading 1MB sequentially from an in-memory cache takes 250 microseconds.

  2. Reading 1MB sequentially from an SSD takes 1,000 microseconds or 1 millisecond.

  3. Reading 1MB sequentially from disk (HDDs) takes 20,000 microseconds or 20 milliseconds.

  4. Sending 1MB of data over the network from California to the Netherlands and back to California takes 150,000 microseconds (150 milliseconds).

Note

1000 nanoseconds = 1 microsecond

1000 microseconds = 1 millisecond

1000 milliseconds = 1 second

Therefore, reading from an in-memory cache is 80 times faster than reading from an HDD (20,000 / 250 = 80)!
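The same ratio arithmetic can be applied across the whole table above:

```python
# Sequential 1MB read latencies from the table above, in microseconds.
cache_us = 250
ssd_us = 1_000
hdd_us = 20_000
network_us = 150_000  # round trip California -> Netherlands -> California

print(f"SSD vs cache: {ssd_us / cache_us:.0f}x slower")          # 4x
print(f"HDD vs cache: {hdd_us / cache_us:.0f}x slower")          # 80x
print(f"network vs cache: {network_us / cache_us:.0f}x slower")  # 600x
```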
