between endpoints from many sources is needed for the network to function.
As further degrees of interoperation between the network and the processing resources of the data centre, both at the infrastructure edge and at the regional or national scale, are used to support next generation use cases, the functions of these three upper layers of the OSI model will increasingly be used to provide that underlying network and data centre infrastructure with intelligence about the specific characteristics of the application in use. That intelligence can then be used to make nuanced decisions about how infrastructure resources are allocated and used for optimal cost and user experience.
3.4 Ethernet
Ethernet is an example of a layer 2 protocol and is the most commonly used layer 2 protocol today. A basic understanding of some of the characteristics of Ethernet is useful in the context of infrastructure edge computing, as the protocol is so widely used within infrastructure edge data centres, between them, and between other facilities of both similar and larger scale.
Ethernet uses broadcast communication to perform network endpoint discovery. This means that when an Ethernet endpoint receives a frame with a destination MAC address for which it has no existing entry in its switching table, a request is sent to all other Ethernet endpoints on that segment of the network, asking for the location of the endpoint which has that destination MAC address assigned to one of its interfaces. The protocol was designed in this way for implementation simplicity and low cost, both of which have helped Ethernet become established as the dominant layer 2 protocol today. The drawback is the inefficiency of this behaviour in a larger network, where the volume of broadcast traffic can be substantial enough to impact the performance of network endpoints and ultimately of the network itself.
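One simplified way to model the switching-table side of this behaviour is a learning switch that floods frames whose destination MAC address has not yet been learned. The sketch below is illustrative only; the port numbers and MAC addresses are placeholders, and real Ethernet switching involves details (VLANs, table ageing, spanning tree) that are omitted here.

```python
# Hypothetical sketch of Ethernet-style MAC learning and flooding.
# It only illustrates why unknown destinations generate broadcast-like
# traffic on a segment; it is not a real switch implementation.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)          # e.g. {1, 2, 3, 4}
        self.mac_table = {}              # destination MAC -> port

    def receive(self, frame, in_port):
        # Learn: the source MAC is reachable via the ingress port.
        self.mac_table[frame["src"]] = in_port

        out_port = self.mac_table.get(frame["dst"])
        if out_port is None:
            # Unknown destination: flood to every port except the ingress.
            return sorted(self.ports - {in_port})
        return [out_port]

switch = LearningSwitch(ports=[1, 2, 3, 4])
print(switch.receive({"src": "aa:aa:aa:aa:aa:aa", "dst": "bb:bb:bb:bb:bb:bb"}, in_port=1))
# -> [2, 3, 4]  (flooded, destination not yet learned)
print(switch.receive({"src": "bb:bb:bb:bb:bb:bb", "dst": "aa:aa:aa:aa:aa:aa"}, in_port=2))
# -> [1]        (forwarded directly, source was learned earlier)
```

As more destinations are learned, less traffic is flooded; the cost of the initial flooding is what grows with the size of the segment.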
The protocol supports frame sizes between 64 and 1518 bytes as standard, and some equipment can be configured to support so‐called jumbo frames of up to 9600 bytes. Jumbo frames are useful for use cases which need to lower the overhead of the large number of Ethernet frame headers involved in carrying bulk data, or for protocols such as those of storage area networks (SANs), which natively use data segmentation sizes closer to a jumbo frame.
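To put that header overhead in perspective, the sketch below compares how many frames, and how many bytes of Ethernet framing, are needed to carry the same payload at a standard and a jumbo payload size. It assumes 18 bytes of framing per frame (14-byte header plus 4-byte FCS) and ignores preamble and inter-frame gap, so the figures are illustrative only.

```python
import math

FRAMING_BYTES = 14 + 4   # Ethernet II header + FCS; preamble/IFG ignored

def framing_overhead(payload_bytes, mtu):
    frames = math.ceil(payload_bytes / mtu)
    return frames, frames * FRAMING_BYTES

payload = 100 * 1024 * 1024          # 100 MiB of application data
for mtu in (1500, 9000):             # standard vs. a common jumbo payload size
    frames, overhead = framing_overhead(payload, mtu)
    print(f"MTU {mtu}: {frames} frames, {overhead} bytes of framing")
```

Moving from a 1500-byte to a 9000-byte payload cuts the number of frames, and so the framing overhead, by roughly a factor of six for the same amount of data.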
The most common type of traffic encountered on a modern network today is an Ethernet frame that encapsulates an Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6) packet, using TCP or UDP as its transport layer protocol, carrying some application data from a source endpoint to its destination endpoint. This combination of protocols is used for a wide range of use cases and across almost every scale of network in common use today.
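The layering described above can be made concrete with a packet construction library. The sketch below uses scapy, which is assumed to be installed; the MAC addresses, IP addresses, and ports are placeholders taken from documentation ranges.

```python
# Minimal sketch of the common encapsulation stack using scapy
# (pip install scapy). Addresses, MACs, and ports are placeholders.
from scapy.all import Ether, IP, UDP, Raw

frame = (
    Ether(src="aa:aa:aa:aa:aa:aa", dst="bb:bb:bb:bb:bb:bb")  # layer 2
    / IP(src="192.0.2.10", dst="198.51.100.20")              # layer 3 (IPv4)
    / UDP(sport=40000, dport=53)                             # layer 4
    / Raw(load=b"application data")                          # payload
)

frame.show()        # prints each layer and its fields
print(len(frame))   # total size of the resulting frame in bytes
```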
3.5 IPv4 and IPv6
Both IPv4 and IPv6 are examples of layer 3 protocols. They are the most commonly encountered layer 3 protocols, and they provide a method for the end‐to‐end addressing of endpoints across the network using a globally unique address space. When each endpoint has a globally unique identifier, data can be addressed to a specific endpoint without ambiguity; this allows data to be transmitted between endpoints which reside on different networks, even at a worldwide scale.
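For reference, Python's standard ipaddress module handles both address families and shows the difference in representation between them; the addresses below come from the documentation ranges and are placeholders only.

```python
import ipaddress

# Documentation/example addresses; real deployments use globally unique ones.
v4 = ipaddress.ip_address("192.0.2.10")
v6 = ipaddress.ip_address("2001:db8::10")

print(v4.version, v4)                   # 4 192.0.2.10
print(v6.version, v6)                   # 6 2001:db8::10
print(len(v4.packed), v4.packed.hex())  # 32-bit address -> 4 bytes
print(len(v6.packed), v6.packed.hex())  # 128-bit address -> 16 bytes
```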
In the context of both the internet and infrastructure edge computing, both versions of the Internet Protocol (IP), IPv4 and, to a growing extent, IPv6, are ubiquitous. Any application, endpoint, or piece of infrastructure must support these protocols; no real competitor currently exists, nor is one likely to emerge for some time, due to the ubiquity of both IPv4 and IPv6 and their integration into billions of devices and applications across the world. In addition, many of the issues with these protocols have been tempered by the industry using various means, so few see a pressing need to replace them.
IPv6 adoption, although behind that of its earlier cousin IPv4 today, is growing across the world; the share of global internet traffic transmitted over IPv6 is expected to reach parity with, and then exceed, the share transmitted over IPv4 as measured on a daily basis. One of the growth areas for IPv6 is expected to be the widespread deployment of city‐scale IoT, where potentially millions of devices must be able to connect with remote applications operating in other networks, requiring these devices to have globally unique IP addresses. This need, combined with the global exhaustion of the IPv4 address space, looks set to drive the future adoption of IPv6, although IPv4 address conservation mechanisms such as network address translation (NAT) remain in use and will continue to be for many years ahead.
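The scale of the difference between the two address spaces, which underpins the exhaustion argument above, is simple arithmetic; the back-of-the-envelope figures below are exact counts of addressable values, not of practically assignable addresses.

```python
ipv4_total = 2 ** 32     # roughly 4.3 billion addresses in total
ipv6_total = 2 ** 128    # roughly 3.4 * 10**38 addresses in total

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:,}")
print(f"IPv6 addresses per IPv4 address: {ipv6_total // ipv4_total:,}")   # 2**96
```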
3.6 Routing and Switching
Both routing and switching are vast topics, each with significant history and many unique intricacies. The focus of this book is not on either of these fields, but they are closely related to any discussion of network design and operation. This section therefore describes some of the key points of routing and switching for modern networks so that it can be referred to in later chapters, as many of the same core principles apply to the new networks being designed, deployed, and operated to support infrastructure edge computing as well.
3.6.1 Routing
Routing is the process by which a series of network endpoints uses layer 3 information, as well as other characteristics of the data in transit and of the network itself, to deliver data from its source to its destination. There are two primary approaches to performing this process.
One approach is referred to as hop‐by‐hop routing. Using this routing methodology, the onus for directing data in transit on to the optimal path towards its destination is placed on each router (a term referring to an endpoint which makes a routing decision, based on layer 3 data and other knowledge of the network, in order to determine where to send data in transit) in the path. Each of these routers uses its own local knowledge of the state of the network to make its routing decisions.
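A very rough sketch of the per-router decision in hop-by-hop routing is a longest-prefix-match lookup against a purely local routing table; each router holds its own table and computes no global path. The prefixes and next-hop names below are hypothetical.

```python
import ipaddress

# Hypothetical local routing table: (prefix, next hop).
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"),  "next-hop-A"),
    (ipaddress.ip_network("10.1.0.0/16"), "next-hop-B"),
    (ipaddress.ip_network("0.0.0.0/0"),   "default-gateway"),
]

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    # Longest prefix match: the most specific route containing the destination wins.
    matches = [(net, hop) for net, hop in ROUTES if dest in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(next_hop("10.1.2.3"))     # next-hop-B (the /16 is more specific than the /8)
print(next_hop("10.9.9.9"))     # next-hop-A
print(next_hop("203.0.113.5"))  # default-gateway
```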
Another approach is resource reservation. This approach aims to reserve a specific path through the network for data in transit. Although this approach may seem preferable (and is in some cases), historically it has been challenging to implement, as resource reservation across a network requires additional state to be maintained for each traffic flow at each network endpoint in the path from source to destination to ensure that the resource allocation is operating as expected. In cases where the entire network path between the source and destination is under the control of a single network operator, this methodology is more likely to be successful; the operator can be aware of all of the resources available along the path, unlike a path which involves multiple network operators who may not provide that level of transparency or may not wish to allocate available resources to the traffic.
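The per-flow state that makes resource reservation harder to scale can be sketched as follows. The node names, capacities, and flow identifiers are invented for illustration, and real reservation protocols (RSVP, for example) involve far more machinery than this.

```python
# Hypothetical sketch of per-flow bandwidth reservation along a fixed path.
# Every node on the path must hold state for every reserved flow, which is
# the scaling cost discussed above.

class Node:
    def __init__(self, name, capacity_mbps):
        self.name = name
        self.capacity = capacity_mbps
        self.reservations = {}                 # flow id -> reserved Mbps

    def reserve(self, flow_id, mbps):
        if sum(self.reservations.values()) + mbps > self.capacity:
            return False
        self.reservations[flow_id] = mbps
        return True

def reserve_path(path, flow_id, mbps):
    accepted = []
    for node in path:
        if not node.reserve(flow_id, mbps):
            # Roll back on failure so no partial reservation is left behind.
            for n in accepted:
                del n.reservations[flow_id]
            return False
        accepted.append(node)
    return True

path = [Node("edge-router", 1000), Node("core-router", 400), Node("exit-router", 1000)]
print(reserve_path(path, "flow-1", 300))   # True: every node accepts the flow
print(reserve_path(path, "flow-2", 300))   # False: core-router lacks capacity; rolled back
```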
Both of these approaches seek to achieve best path routing, where traffic is sent from its source to its destination using the combination of network endpoints and links which results in the optimal balance of resource usage, cost, and performance. If both approaches have the same aim, why are there two approaches to begin with? First, the definition of what makes a particular path from source to destination the best path is not always as simple as the lowest number of hops or the lowest latency links in the network; once factors such as cost are introduced, business logic and related considerations begin to influence the routing process, which is where resource reservation becomes more favourable in many cases. Second, there is a trade‐off between the additional complexity and state required by a single system which oversees the network to identify and reserve specific paths and the enhanced functionality or performance that such a system can provide compared to a hop‐by‐hop routing approach. Across a large network such as the internet, it is not uncommon for traffic to pass through a number of networks, many of which use hop‐by‐hop routing alongside others which use resource reservation internally.
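A minimal illustration of why the best path is not simply the shortest one: the sketch below runs Dijkstra's shortest path algorithm over a small, invented topology twice, once weighting links by latency alone and once by a blended latency-plus-cost metric, which selects a different path. The topology, latencies, and costs are all hypothetical.

```python
import heapq

# Hypothetical topology: each link carries (latency_ms, cost_per_gb).
LINKS = {
    ("A", "B"): (5, 9),  ("B", "D"): (5, 9),    # fast but expensive path
    ("A", "C"): (12, 1), ("C", "D"): (12, 1),   # slower but cheap path
}

def neighbours(node):
    for (u, v), metrics in LINKS.items():
        if u == node:
            yield v, metrics
        elif v == node:
            yield u, metrics

def best_path(src, dst, weight):
    # Standard Dijkstra shortest path with a pluggable link weight function.
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        total, node, path = heapq.heappop(queue)
        if node == dst:
            return total, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, metrics in neighbours(node):
            if nxt not in seen:
                heapq.heappush(queue, (total + weight(metrics), nxt, path + [nxt]))
    return None

latency_only = lambda m: m[0]
blended      = lambda m: m[0] + 2 * m[1]   # latency plus a cost penalty

print(best_path("A", "D", latency_only))   # prefers the low-latency A-B-D path
print(best_path("A", "D", blended))        # the cheaper A-C-D path wins once cost matters
```

Changing only the weight function changes which path is "best", which is exactly the point at which business logic enters the routing process.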
One consideration is the case where a router receives traffic for which it does not have a specific route towards the destination. In this case, a router will typically have a default route configured. This is a catch‐all route for destination