One of the simplest ways to effect fail-over is to manually modify the DNS records for a host: if a server fails, a DNS lookup for the host can be made to return the IP address of another machine. DNS can also be used to provide scalability by assigning multiple IP addresses to a single hostname. Modern DNS servers such as BIND 8.x [14] will deterministically issue the IP addresses assigned to mail.bigisp.com in a round-robin fashion [5]. Unfortunately this approach has a number of fundamental problems [1], [11].
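The round-robin behaviour described above can be sketched as follows: the server returns the full set of A records on every lookup, but rotates their order by one position each time, so clients that use the first address are spread evenly across the pool. This is a minimal simulation, assuming hypothetical addresses; it is not BIND itself.

```python
from itertools import cycle

# Hypothetical pool of A records assigned to one hostname
# (the addresses are illustrative, not from the original text).
A_RECORDS = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]

def round_robin_resolver(records):
    """Yield the record list rotated by one position per lookup,
    mimicking round-robin ordering of multiple A records."""
    n = len(records)
    for start in cycle(range(n)):
        yield [records[(start + i) % n] for i in range(n)]

lookups = round_robin_resolver(A_RECORDS)
print(next(lookups))  # first lookup: list starts at .10
print(next(lookups))  # second lookup: list starts at .11
```

A client that simply connects to the first address it receives will therefore hit a different server on successive (uncached) lookups.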
Problems 2 and 3 are more likely to arise on a site with a very large number of end users, such as a large corporate network or an Internet Service Provider (ISP), because users tend to make more assumptions about the static nature of the network they are connected to than about other networks. They are correspondingly unlikely to be problems on large Internet sites that are not provided primarily for the users of a particular ISP.
Problems 1 and 4 are inherent in any DNS-based solution. While there is no good way around the granularity problem, both it and the TTL problem are generally mitigated by lowering the TTL, provided that well-connected, powerful DNS servers are available to handle the consequently higher number of DNS requests.
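The cost of lowering the TTL can be estimated with a back-of-envelope calculation: if each active client caches the answer for one TTL and then re-queries, load on the authoritative servers scales roughly as the number of active clients divided by the TTL. The client count below is a hypothetical figure for illustration only.

```python
def dns_queries_per_second(active_clients, ttl_seconds):
    """Rough estimate of authoritative-server query load: each active
    client re-queries about once per TTL, so load ~ clients / TTL."""
    return active_clients / ttl_seconds

# Hypothetical site with 100,000 active clients: dropping the TTL
# from a day to a minute multiplies query load by over a thousand.
for ttl in (86400, 3600, 60):
    print(f"TTL {ttl:>6}s -> ~{dns_queries_per_second(100_000, ttl):.1f} qps")
```

This is why the text stresses that the DNS servers themselves must be well connected and powerful before the TTL is lowered aggressively.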
Problem 5 can be mitigated by a more intelligent DNS server that takes into account feedback from the servers about their load, availability and other metrics. This allows load to be distributed to servers in proportion to their ability to serve clients.
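One way such a DNS server could use load feedback is weighted random selection: each answer is chosen with probability inversely related to the server's reported load, so lightly loaded machines receive more lookups. This is a sketch under assumed feedback semantics (load reported as a fraction from 0.0 idle to 1.0 saturated); the addresses and figures are hypothetical.

```python
import random

def pick_server(load_reports):
    """Choose a server with probability proportional to its spare
    capacity (1 - reported load). load_reports maps server IP to a
    load fraction between 0.0 (idle) and 1.0 (saturated)."""
    weights = {ip: 1.0 - load for ip, load in load_reports.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for ip, w in weights.items():
        r -= w
        if r <= 0:
            return ip
    return ip  # fallback for floating-point edge cases

# Hypothetical feedback from three servers:
reports = {"192.0.2.10": 0.9, "192.0.2.11": 0.5, "192.0.2.12": 0.1}
print(pick_server(reports))
```

Over many lookups the nearly idle server (load 0.1) is returned far more often than the nearly saturated one (load 0.9), which is exactly the distribution the feedback is meant to achieve.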
Despite all of these problems, an intelligent DNS server is arguably one of the most robust, easiest to implement and most transparent methods of distributing traffic to multiple servers. This is particularly true when the servers are geographically distributed; each geographically separated server could itself be the point of contact for a web farm that uses a technology such as layer 4 switching to manage traffic within a point of presence.