A data-intensive application is typically built from standard building blocks that provide commonly needed functionality. For example, many applications need to:
In this book, we focus on three concerns that are important in most software systems:
The things that can go wrong are called faults, and systems that anticipate faults and can cope with them are called fault-tolerant or resilient.
Note that a fault is not the same as a failure. A fault is usually defined as one component of the system deviating from its spec, whereas a failure is when the system as a whole stops providing the required service to the user.
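One common way to keep a component fault from becoming a system failure is to retry the faulty operation. The sketch below is a minimal illustration in Python, assuming a hypothetical `fetch_user` operation that raises `IOError` on a transient fault; the function names and parameters are not from the book:

```python
import random
import time

def call_with_retries(operation, max_attempts=3, base_delay=0.1):
    """Retry a flaky operation with exponential backoff and jitter.

    A failed attempt is a fault in one component; only when every
    attempt fails does the caller observe a failure.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except IOError:
            if attempt == max_attempts - 1:
                raise  # out of retries: the fault surfaces as a failure
            # Exponential backoff with jitter to avoid retry storms.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# Hypothetical usage: fetch_user raises IOError on a transient fault.
# user = call_with_retries(lambda: fetch_user(user_id=42))
```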
When we think of the causes of system failure, hardware faults quickly come to mind.
Our first response is usually to add redundancy to the individual hardware components in order to reduce the failure rate of the system. Disks may be set up in a RAID configuration, servers may have dual power supplies and hot-swappable CPUs, and datacenters may have batteries and diesel generators for backup power. When one component dies, the redundant component can take its place while the broken component is replaced.
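The same idea carries over into software: when a redundant copy of a component exists, requests can fail over to it. A minimal sketch, assuming two interchangeable components that expose a `get(key)` method and raise `ConnectionError` when their hardware has died (the interface is an assumption for illustration, not a real library API):

```python
def read_with_failover(primary, standby, key):
    """Read key from the primary component, falling back to a standby.

    Both arguments are assumed to expose a get(key) method that raises
    ConnectionError when the underlying component has died.
    """
    try:
        return primary.get(key)
    except ConnectionError:
        # The redundant component takes over while the broken one is
        # replaced; the caller never sees the fault.
        return standby.get(key)
```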
Another class of fault is a systematic error within the system.
There is no quick solution to the problem of systematic faults in software. Lots of small things can help: carefully thinking about assumptions and interactions in the system; thorough testing; process isolation; allowing processes to crash and restart; and measuring, monitoring, and analyzing system behavior in production.
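For example, the crash-and-restart approach can be implemented with a small supervisor that runs a worker in its own process and restarts it whenever it dies. A sketch, assuming a hypothetical `run_worker` entry point:

```python
import multiprocessing
import time

def supervise(target, restart_delay=1.0):
    """Run target in its own process and restart it whenever it exits.

    Process isolation keeps a crashing worker from taking down the
    supervisor, and each restart discards any corrupted in-process state.
    """
    while True:
        worker = multiprocessing.Process(target=target)
        worker.start()
        worker.join()  # returns when the worker exits or crashes
        time.sleep(restart_delay)  # brief pause before restarting

# Hypothetical usage, where run_worker is the worker's entry point:
# supervise(run_worker)
```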
Humans design and build software systems, and the operators who keep the systems running are also human. Even when they have the best intentions, humans are known to be unreliable.
How do we make our systems reliable, in spite of unreliable humans? The best systems combine several approaches:
Scalability is the term we use to describe a system’s ability to cope with increased load.
Load can be described with a few numbers which we call load parameters. The best choice of parameters depends on the architecture of your system: it may be requests per second to a web server, the ratio of reads to writes in a database, the number of simultaneously active users in a chat room, the hit rate on a cache, or something else.
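As a concrete example, two of those load parameters can be derived from a simple request log. The sketch below assumes each log record is a `(unix_timestamp, http_method)` tuple; that record format is an assumption made for the example:

```python
def load_parameters(requests):
    """Derive two load parameters from a request log.

    requests: a list of (unix_timestamp, http_method) tuples.
    Returns (requests per second, read/write ratio).
    """
    if not requests:
        return 0.0, 0.0
    timestamps = [t for t, _ in requests]
    duration = (max(timestamps) - min(timestamps)) or 1
    reads = sum(1 for _, method in requests if method == "GET")
    writes = len(requests) - reads
    return len(requests) / duration, reads / max(writes, 1)
```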
Once you have described the load on your system, you can investigate what happens when the load increases. You can look at it in two ways:
Both questions require performance numbers:
We need to think of response time not as a single number, but as a distribution of values that we can measure.
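In practice that distribution is usually summarized with percentiles: the median (p50) tells you how long the typical request takes, while p95 or p99 expose the slow tail. A minimal sketch of computing them from measured response times, using the nearest-rank method (one of several conventions):

```python
import math

def percentile(response_times, p):
    """Nearest-rank percentile of a list of measured response times.

    p is in (0, 100]; p=50 gives the median, p=99 the response time
    that 99% of requests beat.
    """
    ordered = sorted(response_times)
    rank = math.ceil(p / 100 * len(ordered))  # nearest-rank method
    return ordered[max(rank, 1) - 1]

times_ms = [32, 40, 41, 45, 50, 62, 75, 120, 200, 980]
print(percentile(times_ms, 50))   # median: the typical request
print(percentile(times_ms, 99))   # tail latency: the slowest 1%
```

Note how a single slow outlier barely moves the median but dominates the high percentiles, which is why tail latencies are worth tracking separately.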