Routing with link-state protocols in dense network topologies can result in suboptimal convergence times due to the overhead associated with flooding. This can be addressed by decreasing the flooding topology so that it is less dense.¶
This document discusses the problem in some depth and presents an architectural solution. Specific protocol changes for IS-IS, OSPFv2, and OSPFv3 are described in this document.¶
This document is not an Internet Standards Track specification; it is published for examination, experimental implementation, and evaluation.¶
This document defines an Experimental Protocol for the Internet community. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are candidates for any level of Internet Standard; see Section 2 of RFC 7841.¶
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc9667.¶
Copyright (c) 2024 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
In recent years, there has been increased focus on how to address the dynamic routing of networks that have a bipartite (also known as spine-leaf or leaf-spine), Clos [Clos], or Fat-tree [Leiserson] topology. Conventional Interior Gateway Protocols (IGPs; i.e., IS-IS [ISO10589], OSPFv2 [RFC2328], and OSPFv3 [RFC5340]) underperform, redundantly flooding information throughout the dense topology. This leads to overloaded control plane inputs and thereby creates operational issues. For practical considerations, network architects have resorted to applying unconventional techniques to address the problem, e.g., applying BGP in the data center [RFC7938]. However, some network architects feel that using an Exterior Gateway Protocol (EGP) as an IGP is suboptimal, perhaps only because of the configuration overhead.¶
The primary issue that is demonstrated when conventional IGPs are applied is the poor reaction of the network to topology changes. Normal link-state routing protocols rely on a flooding algorithm for state distribution within an area. In a dense topology, this flooding algorithm is highly redundant and results in unnecessary overhead. Each node in the topology receives each link state update multiple times. Ultimately, all of the redundant copies will be discarded, but only after they have reached the control plane and have been processed. This creates issues because significant Link State Database (LSDB) updates can become queued behind many redundant copies of another update. This delays convergence as the LSDB does not stabilize promptly.¶
In a real-world implementation, the packet queues leading to the control plane are necessarily of finite size, so if the flooding rate exceeds the update processing rate for long enough, then the control plane will be obligated to drop incoming updates. If these lost updates are of significance, this will further delay the stabilization of the LSDB and the convergence of the network.¶
This is not a new problem. Historically, when routing protocols have been deployed in networks where the underlying topology is a complete graph, there have been similar issues. This was more common when the underlying link-layer fabric presented the network layer with a full mesh of virtual connections. This was addressed by reducing the flooding topology through IS-IS Mesh Groups [RFC2973], but this approach requires careful configuration of the flooding topology.¶
Thus, the root problem is not limited to massively scalable data centers. It exists with any dense topology at scale.¶
Link-state routing protocols were conceived when links were very expensive and topologies were sparse. The fact that those same designs are suboptimal in a dense topology should not come as a huge surprise. Technology has progressed to the point where links are cheap and common. This represents a complete reversal in the economic fundamentals of network engineering. The original designs are to be commended for continuing to provide correct operation to this point and optimizations for operation in today's environment are to be expected.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
These words may also appear in this document in lower case as plain English words without their normative meanings.¶
In a dense topology, the flooding algorithm that is the heart of conventional link-state routing protocols causes a great deal of redundant messaging. This is exacerbated by scale. While the protocol can survive this combination, the redundant messaging is unnecessary overhead and delays convergence. Thus, the problem is how to provide routing in dense, scalable topologies with rapid convergence.¶
A solution to this problem must meet the following requirements:¶
Provide a dynamic routing solution. Reachability must be restored after any topology change.¶
Provide a significant improvement in convergence.¶
The solution should address a variety of dense topologies. Just addressing a complete bipartite topology such as K5,8 is insufficient (see [Bondy]). Multi-stage Clos topologies must also be addressed, as well as topologies that are slight variants. Addressing complete graphs is a good demonstration of generality.¶
There must be no single point of failure. The loss of any link or node should not unduly hinder convergence.¶
The workload for flooding should be evenly distributed. A hot spot, where one node has an extreme workload, would be a performance limitation and a vulnerability for resiliency.¶
Dense topologies are subgraphs of much larger topologies. Operational efficiency requires that the dense subgraph not operate in a radically different manner than the remainder of the topology. While some operational differences are permissible, they should be minimized. Any change to any node outside of the dense subgraph is not acceptable. These situations occur when massively scaled data centers are part of an overall larger wide-area network. Having a second protocol operating just on this subgraph would add much more complexity at the edge of the subgraph where the two protocols would have to interoperate.¶
The combination of a dense topology and flooding on the physical topology is suboptimal for network scaling. However, if the flooding topology is decoupled from the physical topology and restricted to a greatly reduced portion of that topology, the result can be efficient flooding and the resilience of existing protocols. A node that supports flooding on the decoupled flooding topology is said to support dynamic flooding.¶
With dynamic flooding, the flooding topology is computed within an IGP area with the dense topology either centrally on an elected node, termed the Area Leader, or in a distributed manner on all nodes that support dynamic flooding. If the flooding topology is computed centrally, it is encoded into and distributed as part of the normal LSDB. This is the centralized mode of operation. If the flooding topology is computed in a distributed fashion, this is the distributed mode of operation. Nodes within such an IGP area would only flood on the flooding topology. On links outside of the flooding topology, normal database synchronization mechanisms, i.e., OSPF database exchange and IS-IS Complete Sequence Number PDUs (CSNPs), would apply, but flooding may not. The detailed behavior of the nodes participating in the IGP is described in Section 6. New link-state information that arrives from outside of the flooding topology suggests that the sender has no flooding topology information or that it is operating on old information about the flooding topology. In these cases, the new link-state information should be flooded on the flooding topology as well.¶
The flooding topology covers the full set of nodes within the area, but excludes some of the links that standard flooding would employ.¶
Since the flooding topology is computed before topology changes, the effort required to compute it does not factor into the convergence time and can be done when the topology is stable. In the case of centralized mode, the speed of the computation and its distribution is not a significant issue.¶
Graph theory defines the "degree" of a node to be the number of edges that are attached to the node. To keep the flooding workload scalable and distributed, there should be no nodes in the flooding topology that have a much higher degree than other nodes.¶
If a node does not have any flooding topology information when it receives new link-state information, it should flood according to standard flooding rules. This situation will occur when the dense topology is first established but is unlikely to recur.¶
Link-state protocols are intentionally designed to be asynchronous with nodes acting independently. During the flooding process, different nodes will have different information, resulting in transient conditions that can temporarily produce suboptimal forwarding. These periods of transient conditions are known as "transients."¶
When centralized mode is used and if there are multiple flooding topologies being advertised during a transient, then nodes should flood link-state updates on all of the flooding topologies. Each node should locally evaluate the election of the Area Leader for the IGP area and first flood on its flooding topology. The rationale behind this is straightforward: if there is a transient and there has been a recent change in Area Leader, then propagating topology information promptly along the most likely flooding topology should be the priority.¶
During transients, loops may form in the flooding topology. This is not problematic, as the standard flooding rules would cause duplicate updates to be ignored. Similarly, during transients, the flooding topology may become disconnected. Section 6.8.11 discusses how such conditions are handled.¶
In a complete graph, this approach is appealing because it drastically decreases the flooding topology without the manual configuration of mesh groups. By controlling the diameter of the flooding topology, as well as the maximum node degree in the flooding topology, convergence time goals can be met, and the stability of the control plane can be assured.¶
Similarly, in a massively scaled data center (where there are many opportunities for redundant flooding), this mechanism guarantees that flooding is redundant, with each leaf and spine well connected, while ensuring that no update takes too many hops and that no node bears an undue portion of the flooding effort.¶
In a network where only a portion of the nodes support dynamic flooding, the remaining nodes will continue to perform standard flooding. This is not an issue for correctness, as no node can become isolated.¶
Flooding that is initiated by nodes that support dynamic flooding will remain within the flooding topology until it reaches a legacy node, where standard flooding is resumed. Standard flooding will be bounded by nodes supporting dynamic flooding, which can help limit the propagation of unnecessary flooding. Whether or not the network can remain stable in this condition is very dependent on the number and location of the nodes that support dynamic flooding.¶
During incremental deployment of dynamic flooding, an area will consist of one or more sets of connected nodes that support dynamic flooding and one or more sets of connected nodes that do not, i.e., nodes that support standard flooding. The flooding topology is the union of these sets of nodes. Each set of nodes that does not support dynamic flooding needs to be part of the flooding topology and such a set of nodes may provide connectivity between two or more sets of nodes that support dynamic flooding.¶
A single node within the dense topology is elected as an Area Leader.¶
A generalization of the mechanisms used in existing Designated Router (OSPF) or Designated Intermediate-System (IS-IS) elections is used for leader election. The elected node is known as the Area Leader.¶
In the case of centralized mode, the Area Leader is responsible for computing and distributing the flooding topology. When a new Area Leader is elected and has distributed new flooding topology information, then any prior Area Leaders should withdraw any of their flooding topology information from their LSDB entries.¶
In the case of distributed mode, the distributed algorithm advertised by the Area Leader MUST be used by all nodes that participate in dynamic flooding.¶
Not every node needs to be a candidate to be the Area Leader within an area, as a single candidate is sufficient for correct operation. However, for redundancy, it is strongly RECOMMENDED that there be multiple candidates.¶
There is a great deal of flexibility in how the flooding topology may be computed. For resilience, it needs to at least contain a cycle of all nodes in the dense subgraph. However, additional links could be added to decrease the convergence time. The trade-off between the density of the flooding topology and the convergence time is a matter for further study. The exact algorithm for computing the flooding topology in the case of the centralized computation need not be standardized, as it is not an interoperability issue. Only the encoding of the resultant topology needs to be documented. In the case of distributed mode, all nodes in the IGP area need to use the same algorithm to compute the flooding topology. It is possible to use private algorithms to compute flooding topology, so long as all nodes in the IGP area use the same algorithm.¶
While the flooding topology should be a covering cycle, it need not be a Hamiltonian cycle where each node appears only once. In fact, in many relevant topologies, this will not be possible (e.g., K5,8). This is fortunate, as computing a Hamiltonian cycle is known to be NP-complete.¶
A simple algorithm to compute the topology for a complete bipartite graph is to simply select unvisited nodes on each side of the graph until both sides are completely visited. If the numbers of nodes on each side of the graph are unequal, then revisiting nodes on the less populated side of the graph will be inevitable. This algorithm can run in O(N) time, so it is quite efficient.¶
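A minimal, non-normative sketch of this construction in Python is shown below; the node labels and the edge-list representation are purely illustrative.¶

   # Non-normative sketch of the O(N) covering-cycle construction for a
   # complete bipartite (spine-leaf) graph.  Node names and the edge-list
   # representation are illustrative only.

   def bipartite_flooding_cycle(spines, leaves):
       """Return a list of edges forming a cycle that covers all nodes."""
       small, big = sorted((spines, leaves), key=len)
       walk = []
       for i, node in enumerate(big):
           # Alternate sides, revisiting the smaller side as needed.
           walk.append(small[i % len(small)])
           walk.append(node)
       walk.append(small[0])             # close the cycle
       return list(zip(walk, walk[1:]))  # consecutive pairs are the edges

   # Example: 4 spines and 6 leaves yield a 12-edge covering cycle.
   print(bipartite_flooding_cycle(["s1", "s2", "s3", "s4"],
                                  ["l1", "l2", "l3", "l4", "l5", "l6"]))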
While a simple cycle is adequate for correctness and resiliency, it may not be optimal for convergence. At scale, a cycle may have a diameter that is half the number of nodes in the graph. This could cause an undue delay in link-state update propagation. Therefore, it may be useful to have a bound on the diameter of the flooding topology. Introducing more links into the flooding topology would reduce the diameter but at the trade-off of possibly adding redundant messaging. The optimal trade-off between convergence time and graph diameter is for further study.¶
Similarly, if additional redundancy is added to the flooding topology, specific nodes in that topology may end up with a very high degree. This could result in overloading the control plane of those nodes, resulting in poor convergence. Thus, it may be preferable to have an upper bound on the degree of nodes in the flooding topology. Again, the optimal trade-off between graph diameter, node degree, convergence time, and topology computation time is for further study.¶
If the leader chooses to include a multi-access broadcast LAN segment as part of the flooding topology, all of the adjacencies in that LAN segment should be included as well. Once updates are flooded on the LAN, they will be received by every attached node.¶
Complete bipartite graph topologies have become popular for data center applications and are commonly called leaf-spine or spine-leaf topologies. This section discusses some flooding topologies that are of particular interest in these networks.¶
A minimal flooding topology on a complete bipartite graph is one in which the topology is connected and each node has at least degree two. This is of interest because it guarantees that the flooding topology has no single point of failure.¶
In practice, this implies that every leaf node in the flooding topology will have a degree of two. As there are usually more leaves than spines, the degree of the spines will be higher, but the load on the individual spines can be evenly distributed.¶
This type of flooding topology is also of interest because it scales well. As the number of leaves increases, it is possible to construct flooding topologies that perform well. Specifically, for N spines and M leaves, if M >= N(N/2-1), then there is a flooding topology that has a diameter of 4.¶
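For example, applying this bound with N = 8 spines, any such fabric with M >= 8 * (8/2 - 1) = 24 leaves admits a minimal flooding topology with a diameter of 4.¶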
A Xia topology on a complete bipartite graph is one in which all spine nodes are biconnected through leaves with degree two, but the remaining leaves all have degree one and are evenly distributed across the spines.¶
Constructively, one can create a Xia topology by iterating through the spines. Each spine can be connected to the next spine by selecting any unused leaf. Since leaves are connected to all spines, all leaves will have a connection to both the first and second spine and one can therefore choose any leaf without loss of generality. Continuing this iteration across all of the spines, selecting a new leaf at each iteration will result in a path that connects all spines. Adding one more leaf between the last and first spine will produce a cycle of N spines and N leaves.¶
At this point, M-N leaves remain unconnected. These can be distributed evenly across the remaining spines and connected by a single link.¶
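A non-normative sketch of this construction follows; it assumes a complete bipartite graph (so any unused leaf can be chosen at each step), and the names are illustrative only.¶

   # Non-normative sketch of constructing a Xia topology: a cycle through
   # all N spines via N degree-two leaves, plus a single link for each
   # remaining leaf, distributed evenly across the spines.

   def xia_topology(spines, leaves):
       n = len(spines)
       cycle_leaves, extra_leaves = leaves[:n], leaves[n:]
       edges = []
       for i, leaf in enumerate(cycle_leaves):
           # Connect consecutive spines through a distinct leaf; the last
           # leaf closes the cycle back to the first spine.
           edges.append((spines[i], leaf))
           edges.append((leaf, spines[(i + 1) % n]))
       for j, leaf in enumerate(extra_leaves):
           edges.append((spines[j % n], leaf))
       return edges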
Xia topologies represent a compromise that trades off increased risk and decreased performance for lower flooding amplification. Xia topologies will have a larger diameter. For N spines, the diameter will be N + 2.¶
In a Xia topology, some leaves are singly connected. This represents a risk in that convergence may be delayed in some failures. However, there may be some alternate behaviors that can be employed to mitigate these risks. If a leaf node sees that its single link on the flooding topology has failed, it can compensate by performing a database synchronization check with a different spine. Similarly, if a leaf determines that its connected spine on the flooding topology has failed, it can compensate by performing a database synchronization check with a different spine. In both of these cases, the synchronization check is intended to ameliorate any delays in link-state propagation due to the fragmentation of the flooding topology.¶
The benefit of this topology is that flooding load is easily understood. Each node in the spine cycle will never receive an update more than twice. For M leaves and N spines, a spine never transmits more than (M/N + 1) updates.¶
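For example, with N = 4 spines and M = 32 leaves, no spine transmits more than 32/4 + 1 = 9 copies of any given update.¶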
If two nodes are adjacent in the flooding topology and there is a set of parallel links between them, then any given update MUST be flooded over only one of those links. The selection of the specific link is implementation-specific.¶
There are a variety of ways that the flooding topology could be encoded efficiently. If the topology was only a cycle, a simple list of the nodes in the topology would suffice. However, this is insufficiently flexible, as it would require a slightly different encoding scheme as soon as a single additional link is added. Instead, this document chooses to encode the flooding topology as a set of intersecting paths, where each path is a set of connected links.¶
Advertisement of the flooding topology includes support for multi-access broadcast LANs. When a LAN is included in the flooding topology, all edges between the LAN and nodes connected to the LAN are assumed to be part of the flooding topology. To reduce the size of the flooding topology advertisement, explicit advertisement of these edges is optional. Note that this may result in the possibility of "hidden nodes" or "stealth nodes", which are part of the flooding topology but are not explicitly mentioned in the flooding topology advertisements. These hidden nodes can be found by examination of the LSDB where connectivity between a LAN and nodes connected to the LAN is fully specified.¶
Note that while all nodes MUST be part of the advertised flooding topology, not all multi-access LANs need to be included. Only those LANs that are part of the flooding topology need to be included in the advertised flooding topology.¶
Other encodings are certainly possible. This document has attempted to make a useful trade-off between simplicity, generality, and space.¶
Correct operation of the flooding topology requires that all nodes that participate in the flooding topology choose local links for flooding that are part of the calculated flooding topology. Failure to do so could result in an unexpected partition of the flooding topology and/or suboptimal flooding reduction. As an aid to diagnosing problems when dynamic flooding is in use, this document defines a means of advertising the Local Edges Enabled for Flooding (LEEF). The protocol-specific encodings are defined in Sections 5.1.6 and 5.2.8.¶
The following guidelines apply:¶
Advertisement of LEEF is optional.¶
As the flooding topology is defined in terms of edges (i.e., pairs of nodes) and not in terms of links, the advertisement SHOULD indicate that all such links have been enabled in cases where parallel adjacencies to the same neighbor exist.¶
LEEF advertisements MUST NOT include edges enabled for temporary flooding (Section 6.7).¶
LEEF advertisements MUST NOT be used either when calculating a flooding topology or when determining what links to add temporarily to the flooding topology when the flooding topology is temporarily partitioned.¶
The following TLVs/sub-TLVs are added to IS-IS:¶
A sub-TLV that an IS may include in its Link State PDU (LSP) to indicate its preference for becoming the Area Leader.¶
A sub-TLV that an IS may include in its LSP to indicate that it supports dynamic flooding and the algorithms that it supports for distributed mode, if any.¶
A TLV to advertise the list of system IDs that compose the flooding topology for the area. A system ID is an identifier for a node.¶
A TLV to advertise a path that is part of the flooding topology.¶
A TLV that requests flooding from the adjacent node.¶
The IS-IS Area Leader Sub-TLV allows a system to:¶
Indicate its eligibility and priority for becoming the Area Leader.¶
Indicate whether centralized or distributed mode is to be used to compute the flooding topology in the area.¶
Indicate the algorithm identifier for the algorithm that is used to compute the flooding topology in distributed mode.¶
Intermediate Systems (nodes) that are not advertising this sub-TLV are not eligible to become the Area Leader.¶
The Area Leader is the node with the numerically highest Area Leader priority in the area. In the event of ties, the node with the numerically highest system ID is the Area Leader. Due to transients during database flooding, different nodes may not agree on the Area Leader. This is not problematic, as subsequent flooding will cause the entire area to converge.¶
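The selection rule can be summarized by the following non-normative sketch, where the candidates are the reachable nodes advertising this sub-TLV and system IDs are compared as equal-length strings (an illustrative representation only).¶

   # Non-normative sketch of Area Leader selection: highest priority
   # wins; the numerically highest system ID breaks ties.  System IDs
   # are equal-length strings here, so string order matches numeric order.

   def elect_area_leader(candidates):
       """candidates: iterable of (priority, system_id) tuples."""
       return max(candidates, key=lambda c: (c[0], c[1]), default=None)

   # Example: the tie at priority 200 is broken by the higher system ID.
   print(elect_area_leader([(200, "0000.0000.0001"),
                            (200, "0000.0000.0002"),
                            (100, "0000.0000.0009")]))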
The IS-IS Area Leader Sub-TLV is advertised as a sub-TLV of the IS-IS Router Capability TLV (242) [RFC7981] and has the following format:¶
Algorithm: A numeric identifier in the range 0-255 that identifies the algorithm used to calculate the flooding topology. The following values are defined:¶
The IS-IS Dynamic Flooding Sub-TLV allows a system to:¶
Indicate that it supports dynamic flooding. This is indicated by the advertisement of this sub-TLV.¶
Indicate the set of algorithms that it supports.¶
In incremental deployments, understanding which nodes support dynamic flooding can be used to optimize the flooding topology. In distributed mode, knowing the capabilities of the nodes can allow the Area Leader to select the optimal algorithm.¶
The IS-IS Dynamic Flooding Sub-TLV is advertised as a sub-TLV of the IS-IS Router Capability TLV (242) [RFC7981] and has the following format:¶
The IS-IS Area Node IDs TLV is only used in centralized mode.¶
The IS-IS Area Node IDs TLV is used by the Area Leader to enumerate the node IDs (System ID + pseudonode ID) that it has used in computing the area flooding topology. Conceptually, the Area Leader creates a list of node IDs for all nodes in the area (including pseudonodes for all LANs in the topology) and assigns an index to each node, starting with index 0. Indices are implicitly assigned sequentially, with the index of the first node being the Starting Index and each subsequent node's index being the previous node's index + 1.¶
Because the space in a single TLV is limited, more than one TLV may be required to encode all of the node IDs in the area. This TLV may be present in multiple LSPs.¶
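A non-normative sketch of the index assignment, and of splitting the list across multiple TLVs, is shown below; the per-TLV capacity used here is a hypothetical value, not taken from this specification.¶

   # Non-normative sketch: node indices are implicit, derived from the
   # Starting Index of each TLV plus the position within that TLV.
   # The per_tlv capacity is a hypothetical illustration.

   def build_area_node_id_tlvs(node_ids, per_tlv=30):
       """node_ids: ordered list of (system_id, pseudonode_id) tuples."""
       return [{"starting_index": start,
                "node_ids": node_ids[start:start + per_tlv]}
               for start in range(0, len(node_ids), per_tlv)]

   def index_to_node_id(tlvs):
       """Receiver side: rebuild the index -> node ID mapping."""
       mapping = {}
       for tlv in tlvs:
           for offset, nid in enumerate(tlv["node_ids"]):
               mapping[tlv["starting_index"] + offset] = nid
       return mapping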
The IS-IS Area Node IDs TLV has the following format:¶
If multiple IS-IS Area Node IDs TLVs with the L bit set are advertised by the same node, the TLV that specifies the smaller maximum index is used and the other TLVs with the L bit set are ignored. TLVs that specify node IDs with indices greater than that specified by the TLV with the L bit set are also ignored.¶
The IS-IS Flooding Path TLV is only used in centralized mode.¶
The IS-IS Flooding Path TLV is used to denote a path in the flooding topology. The goal is an efficient encoding of the links of the topology. A single link is a simple case of a path that only covers two nodes. A connected path may be described as a sequence of indices (I1, I2, I3, ...), denoting a link from the system with index I1 to the system with index I2, a link from the system with index I2 to the system with index I3, and so on.¶
If a path exceeds the size that can be stored in a single TLV, then the path may be distributed across multiple TLVs by the replication of a single system index.¶
Complex topologies that are not a single path can be described using multiple TLVs.¶
The IS-IS Flooding Path TLV contains a list of system indices relative to the systems advertised through the IS-IS Area Node IDs TLV. At least 2 indices must be included in the TLV. Due to the length restriction of TLVs, this TLV can contain 126 system indices at most.¶
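A non-normative sketch of splitting a long path across multiple Flooding Path TLVs, replicating the boundary index as described above, follows; only the 126-index limit is taken from this specification.¶

   # Non-normative sketch: a path is a list of node indices.  When it is
   # split across TLVs, the last index of one TLV is replicated as the
   # first index of the next so that no link is lost.

   MAX_INDICES = 126

   def flooding_path_tlvs(path):
       if len(path) < 2:
           raise ValueError("a path must contain at least 2 indices")
       tlvs, start = [], 0
       while start < len(path) - 1:
           tlvs.append(path[start:start + MAX_INDICES])
           start += MAX_INDICES - 1   # replicate the boundary index
       return tlvs

   def links_from_tlvs(tlvs):
       """Receiver side: recover the set of links (index pairs)."""
       return {tuple(sorted(p)) for tlv in tlvs for p in zip(tlv, tlv[1:])}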
The IS-IS Flooding Path TLV has the following format:¶
The IS-IS Flooding Request TLV allows a system to request an adjacent node to enable flooding towards it on a specific link in the case where the connection to the adjacent node is not part of the existing flooding topology.¶
A node that supports dynamic flooding MAY include the IS-IS Flooding Request TLV in its IS-IS Hello (IIH) Protocol Data Units (PDUs).¶
The IS-IS Flooding Request TLV has the following format:¶
Circuit flooding scope MUST NOT be sent in the Flooding Request TLV and MUST be ignored if received.¶
When the TLV is received in a level-specific LAN-Hello PDU (L1-LAN-IIH or L2-LAN-IIH), only levels that match the PDU type are valid. Levels that do not match the PDU type MUST be ignored on receipt.¶
When the TLV is received in a Point-to-Point Hello (P2P-IIH), only levels that are supported by the established adjacency are valid. Levels that are not supported by the adjacency MUST be ignored on receipt.¶
If flooding was disabled on the received link due to dynamic flooding, then flooding MUST be temporarily enabled over the link for the specified Circuit Types and flooding scopes received in the IS-IS Flooding Request TLV. Flooding MUST be enabled until the Circuit Type or Flooding Scope is no longer advertised in the IS-IS Flooding Request TLV or the TLV no longer appears in IIH PDUs received on the link.¶
When flooding is temporarily enabled on the link for any Circuit Type or Flooding Scope due to receiving the IS-IS Flooding Request TLV, the receiver MUST perform standard database synchronization for the corresponding Circuit Types and flooding scopes on the link. In the case of IS-IS, this results in setting the Send Routeing Message (SRM) flag for all related LSPs on the link and sending CSNPs.¶
So long as the IS-IS Flooding Request TLV is being received, flooding MUST NOT be disabled for any of the Circuit Types or flooding scopes present in the IS-IS Flooding Request TLV, even if the connection between the neighbors is removed from the flooding topology. Flooding for such Circuit Types or flooding scopes MUST continue on the link and be considered temporarily enabled.¶
In support of advertising which edges are currently enabled in the flooding topology, an implementation MAY indicate that a link is part of the flooding topology by advertising a bit value in the Link Attributes sub-TLV defined by [RFC5029].¶
The following bit-value is defined by this document:¶
This section defines new Link State Advertisements (LSAs) and TLVs for both OSPFv2 and OSPFv3.¶
The following LSAs and TLVs/sub-TLVs are added to OSPFv2/OSPFv3:¶
A TLV that is used to advertise the preference for becoming the Area Leader.¶
A TLV that is used to indicate the support for dynamic flooding and the algorithms that the advertising node supports for distributed mode, if any.¶
An OSPFv2 Opaque LSA and OSPFv3 LSA to advertise the flooding topology for centralized mode.¶
A TLV to advertise the list of Router IDs that comprise the flooding topology for the area.¶
A TLV to advertise a path that is part of the flooding topology.¶
A bit in the Link-Local Signaling (LLS) Type 1 Extended Options and Flags that requests flooding from the adjacent node.¶
The usage of the OSPF Area Leader Sub-TLV is identical to that of the IS-IS Area Leader Sub-TLV described in Section 5.1.1.¶
The OSPF Area Leader Sub-TLV is used by both OSPFv2 and OSPFv3.¶
The OSPF Area Leader Sub-TLV is advertised as a top-level TLV of the Router Information (RI) LSA that is defined in [RFC7770] and has the following format:¶
The usage of the OSPF Dynamic Flooding Sub-TLV is identical to that of the IS-IS Dynamic Flooding Sub-TLV described in Section 5.1.2.¶
The OSPF Dynamic Flooding Sub-TLV is used by both OSPFv2 and OSPFv3.¶
The OSPF Dynamic Flooding Sub-TLV is advertised as a top-level TLV of the RI LSA that is defined in [RFC7770] and has the following format:¶
The OSPFv2 Dynamic Flooding Opaque LSA is only used in centralized mode.¶
The OSPFv2 Dynamic Flooding Opaque LSA is used to advertise additional data related to dynamic flooding in OSPFv2. OSPFv2 Opaque LSAs are described in [RFC5250].¶
Multiple OSPFv2 Dynamic Flooding Opaque LSAs can be advertised by an OSPFv2 router. The flooding scope of the OSPFv2 Dynamic Flooding Opaque LSA is area-local.¶
The format of the OSPFv2 Dynamic Flooding Opaque LSA is as follows:¶
The opaque type used by the OSPFv2 Dynamic Flooding Opaque LSA is 10. The opaque type is used to differentiate the various types of OSPFv2 Opaque LSAs as described in Section 3 of [RFC5250]. The LS Type is 10. The LSA Length field [RFC2328] represents the total length (in octets) of the Opaque LSA including the LSA header and all TLVs (including padding).¶
The Opaque ID field is an arbitrary value used to maintain multiple Dynamic Flooding Opaque LSAs. For OSPFv2 Dynamic Flooding Opaque LSAs, the Opaque ID has no semantic significance other than to differentiate Dynamic Flooding Opaque LSAs originated from the same OSPFv2 router.¶
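For illustration, the Link State ID of an OSPFv2 Opaque LSA carries the Opaque Type in its first octet and the Opaque ID in the remaining three octets [RFC5250]; a non-normative sketch of composing it for this LSA follows (the helper name is illustrative).¶

   # Non-normative sketch: compose the Link State ID of an OSPFv2 Opaque
   # LSA from the Opaque Type (10 for Dynamic Flooding) and the 24-bit
   # Opaque ID, per RFC 5250.

   DYNAMIC_FLOODING_OPAQUE_TYPE = 10

   def opaque_link_state_id(opaque_id):
       assert 0 <= opaque_id < 2**24
       return (DYNAMIC_FLOODING_OPAQUE_TYPE << 24) | opaque_id

   # Example: Opaque ID 1 -> Link State ID 0x0a000001.
   print(hex(opaque_link_state_id(1)))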
The format of the TLVs within the body of the OSPFv2 Dynamic Flooding Opaque LSA is the same as the format used by the Traffic Engineering Extensions to OSPF [RFC3630].¶
The Length field defines the length of the value portion in octets (thus a TLV with no value portion would have a length of 0 octets). The TLV is padded to a 4-octet alignment; padding is not included in the length field (so a 3-octet value would have a length of 3 octets, but the total size of the TLV would be 8 octets). Nested TLVs are also 32-bit aligned. For example, a 1-octet value would have the length field set to 1, and 3 octets of padding would be added to the end of the value portion of the TLV. The padding is composed of zeros.¶
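A non-normative sketch of this encoding rule (a 2-octet Type and 2-octet Length header, followed by the value and zero padding to a 4-octet boundary, as in [RFC3630]):¶

   # Non-normative sketch of the TE-style TLV encoding: the Length field
   # counts only the value octets; the value is then zero-padded to a
   # 4-octet boundary.

   import struct

   def encode_tlv(tlv_type, value):
       padding = (-len(value)) % 4
       return struct.pack("!HH", tlv_type, len(value)) + value + b"\x00" * padding

   # Example: a 3-octet value has Length = 3 but a total TLV size of 8.
   print(len(encode_tlv(1, b"\x01\x02\x03")))   # prints 8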
The OSPFv3 Dynamic Flooding LSA is only used in centralized mode.¶
The OSPFv3 Dynamic Flooding LSA is used to advertise additional data related to dynamic flooding in OSPFv3.¶
The OSPFv3 Dynamic Flooding LSA has a function code of 16. The flooding scope of the OSPFv3 Dynamic Flooding LSA is area-local. The U bit will be set indicating that the OSPFv3 Dynamic Flooding LSA should be flooded even if it is not understood. The Link State ID (LSID) value for this LSA is the Instance ID. OSPFv3 routers MAY advertise multiple OSPFv3 Dynamic Flooding LSAs in each area.¶
The format of the OSPFv3 Dynamic Flooding LSA is as follows:¶
In OSPF, TLVs are defined to advertise indices associated with nodes and Broadcast / Non-Broadcast Multi-Access (NBMA) networks. Due to identifier differences between OSPFv2 and OSPFv3, two different TLVs are defined as described in the following sub-sections.¶
The OSPF Area Router ID TLVs are used by the Area Leader to enumerate the Router IDs that it has used in computing the flooding topology. This includes the identifiers associated with Broadcast/NBMA networks as defined for Network LSAs. Conceptually, the Area Leader creates a list of Router IDs for all routers in the area and assigns an index to each router, starting with index 0. Indices are implicitly assigned sequentially, with the index of the first node being the Starting Index and each subsequent node's index being the previous node's index + 1.¶
This TLV is a top-level TLV of the OSPFv2 Dynamic Flooding Opaque LSA.¶
Because the space in a single OSPFv2 opaque LSA is limited, more than one LSA may be required to encode all of the Router IDs in the area. This TLV MAY be advertised in multiple OSPFv2 Dynamic Flooding Opaque LSAs so that all Router IDs can be advertised.¶
The OSPFv2 Area Router IDs TLV has the following format:¶
If multiple OSPFv2 Area Router ID TLVs with the L bit set are advertised by the same router, the TLV that specifies the smaller maximum index is used and the other TLVs with L bit set are ignored. TLVs that specify Router IDs with indices greater than that specified by the TLV with the L bit set are also ignored.¶
Each entry in the OSPFv2 Area Router IDs TLV represents either a node or a Broadcast/NBMA network identifier. An entry has the following format:¶
This TLV is a top-level TLV of the OSPFv3 Dynamic Flooding LSA.¶
Because the space in a single OSPFv3 Dynamic Flooding LSA is limited, more than one LSA may be required to encode all of the Router IDs in the area. This TLV MAY be advertised in multiple OSPFv3 Dynamic Flooding LSAs so that all Router IDs can be advertised.¶
The OSPFv3 Area Router IDs TLV has the following format:¶
If multiple OSPFv3 Area Router ID TLVs with the L bit set are advertised by the same router, the TLV that specifies the smaller maximum index is used and the other TLVs with the L bit set are ignored. TLVs that specify Router IDs with indices greater than that specified by the TLV with the L bit set are also ignored.¶
Each entry in the OSPFv3 Area Router IDs TLV represents either a router or a Broadcast/NBMA network identifier. An entry has the following format:¶
ID Type: 1 octet. The following values are defined:¶
The Originating ID Entry takes one of the following forms, depending on the ID Type.¶
For a Router:¶
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                     Originating Router ID                     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+¶
The length of the Originating ID Entry is (4 * Number of IDs) octets.¶
For a Designated Router:¶
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                     Originating Router ID                     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                          Interface ID                         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+¶
The length of the Originating ID Entry is (8 * Number of IDs) octets.¶
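A non-normative sketch of packing these two entry forms (Router IDs and Interface IDs represented as 32-bit integers; the function names are illustrative):¶

   # Non-normative sketch of the two Originating ID Entry forms: 4 octets
   # per router entry, 8 octets per Designated Router entry (Router ID
   # followed by Interface ID).

   import struct

   def pack_router_entries(router_ids):
       return b"".join(struct.pack("!I", rid) for rid in router_ids)

   def pack_dr_entries(entries):
       """entries: list of (originating_router_id, interface_id) pairs."""
       return b"".join(struct.pack("!II", rid, ifid) for rid, ifid in entries)

   # Example: two router entries occupy 8 octets; one DR entry also 8.
   print(len(pack_router_entries([0x0A000001, 0x0A000002])),
         len(pack_dr_entries([(0x0A000003, 7)])))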
The OSPF Flooding Path TLV is a top-level TLV of the OSPFv2 Dynamic Flooding Opaque LSAs and OSPFv3 Dynamic Flooding LSA.¶
The usage of the OSPF Flooding Path TLV is identical to that of the IS-IS Flooding Path TLV described in Section 5.1.4.¶
The OSPF Flooding Path TLV contains a list of Router ID indices relative to the Router IDs advertised through the OSPF Area Router IDs TLV. At least 2 indices must be included in the TLV.¶
Multiple OSPF Flooding Path TLVs can be advertised in a single OSPFv2 Dynamic Flooding Opaque LSA or OSPFv3 Dynamic Flooding LSA. OSPF Flooding Path TLVs can also be advertised in multiple OSPFv2 Dynamic Flooding Opaque LSAs or OSPFv3 Dynamic Flooding LSAs if they all cannot fit in a single LSA.¶
The OSPF Flooding Path TLV has the following format:¶
A single new option bit, the Flooding Request (FR) bit, is defined in the LLS Type 1 Extended Options and Flags field [RFC5613]. The FR bit allows a router to request an adjacent node to enable flooding towards it on a specific link in the case where the connection to the adjacent node is not part of the current flooding topology.¶
A node that supports dynamic flooding MAY include the FR bit in its OSPF LLS Extended Options and Flags TLV.¶
If the FR bit is signaled for a link on which flooding was disabled due to dynamic flooding, then flooding MUST be temporarily enabled over the link. Flooding MUST be enabled until the FR bit is no longer advertised in the OSPF LLS Extended Options and Flags TLV or the OSPF LLS Extended Options and Flags TLV no longer appears in the OSPF Hellos.¶
When flooding is temporarily enabled on the link for any area due to receiving the FR bit in the OSPF LLS Extended Options and Flags TLV, the receiver MUST perform standard database synchronization for the area corresponding to the link. If the adjacency is already in the FULL state, the mechanism specified in [RFC4811] MUST be used for database resynchronization.¶
So long as the FR bit is being received in the OSPF LLS Extended Options and Flags TLV for a link, flooding MUST NOT be disabled on the link, even if the connection between the neighbors is removed from the flooding topology. Flooding MUST continue on the link and be considered as temporarily enabled.¶
In support of advertising the specific edges that are currently enabled in the flooding topology, an implementation MAY indicate that a link is part of the flooding topology. The OSPF Link Attributes Bits TLV is defined to support this advertisement.¶
The following bits are defined:¶
The OSPF Link Attributes Bits TLV has the following format:¶
This section specifies the detailed behavior of the nodes participating in the IGP.¶
Some terminology to be used in the following sections:¶
The flooding topology MUST include all reachable nodes in the area.¶
If a node's reachability changes, the flooding topology MUST be recalculated. In centralized mode, the Area Leader MUST advertise a new flooding topology.¶
If a node becomes disconnected from the current flooding topology but is still reachable, then a new flooding topology MUST be calculated. In centralized mode, the Area Leader MUST advertise the new flooding topology.¶
The flooding topology SHOULD be biconnected to provide network resiliency, but this does incur some amount of redundant flooding. Xia topologies (Section 4.4.2) are an example of an explicit decision to sacrifice resiliency to avoid redundancy.¶
Any capable node MAY advertise its eligibility to become the Area Leader.¶
Nodes that are not reachable are not eligible to become the Area Leader. Nodes that do not advertise their eligibility to become the Area Leader are not eligible. Amongst the eligible nodes, the node with the numerically highest priority is the Area Leader. If multiple nodes all have the highest priority, then the node with the numerically highest system identifier (in the case of IS-IS) or Router ID (in the case of OSPFv2 and OSPFv3) is the Area Leader.¶
If the Area Leader operates in centralized mode, it MUST advertise algorithm 0 in its Area Leader Sub-TLV. For dynamic flooding to be enabled, it also MUST compute and advertise a flooding topology for the area. The Area Leader may update the flooding topology at any time. However, it should not destabilize the network with undue or overly frequent topology changes. If the Area Leader operates in centralized mode and needs to advertise a new flooding topology, it floods the new flooding topology on both the new and old flooding topologies.¶
If the Area Leader operates in distributed mode, it MUST advertise a nonzero algorithm in its Area Leader Sub-TLV.¶
When the Area Leader advertises algorithm 0 in its Area Leader Sub-TLV and does not advertise a flooding topology, dynamic flooding is disabled for the area. Note this applies whether the Area Leader intends to operate in centralized mode or distributed mode.¶
Note that once dynamic flooding is enabled, disabling it risks destabilizing the network due to the issues discussed in Section 1.¶
If the Area Leader advertises a nonzero algorithm in its Area Leader Sub-TLV, all nodes in the area that support dynamic flooding and support the algorithm advertised by the Area Leader MUST compute the flooding topology based on the Area Leader's advertised algorithm.¶
Nodes that do not support the advertised algorithm MUST continue to use standard IS-IS/OSPF flooding mechanisms. Nodes that do not support the flooding algorithm advertised by the Area Leader MUST be considered as dynamic flooding incapable nodes by the Area Leader.¶
If the value of the algorithm advertised by the Area Leader is from the range 128-254 (private distributed algorithms), it is the responsibility of the network operator to guarantee that all nodes in the area agree on the dynamic flooding algorithm corresponding to the advertised value.¶
The use of LANs in the flooding topology differs depending on whether the area is operating in centralized mode or distributed mode.¶
As specified in Section 4.5, when a LAN is advertised as part of the flooding topology, all nodes connected to the LAN are assumed to be using the LAN as part of the flooding topology. This assumption is made to reduce the size of the flooding topology advertisement.¶
In distributed mode, the flooding topology is NOT advertised; thus, the space consumed to advertise it is not a concern. Therefore, it is possible to assign only a subset of the nodes connected to the LAN to use the LAN as part of the flooding topology. Doing so may further optimize flooding by reducing the amount of redundant flooding on a LAN. However, support of flooding by a subset of the nodes connected to a LAN requires some modest but backward-compatible changes in the way flooding is performed on a LAN.¶
The Designated Intermediate System (DIS) for a LAN MUST use the standard flooding behavior.¶
Non-DIS nodes whose connection to the LAN is included in the flooding topology MUST use the standard flooding behavior.¶
Non-DIS nodes whose connection to the LAN is NOT included in the flooding topology behave as follows:¶
Received CSNPs from the DIS are ignored.¶
Partial Sequence Number PDUs (PSNPs) are NOT originated on the LAN.¶
An LSP that is received on the LAN and is newer than the corresponding LSP present in the Link State PDU Database (LSPDB) is retained and flooded on all local circuits that are part of the flooding topology (i.e., do not discard newer LSPs simply because they were received on a LAN that the receiving node is not using for flooding).¶
An LSP received on the LAN that is older or the same as the corresponding LSP in the LSPDB is silently discarded.¶
LSPs received on links other than the LAN are NOT flooded on the LAN.¶
NOTE: If any node connected to the LAN requests the enablement of temporary flooding, all nodes MUST revert to the standard flooding behavior on the LAN.¶
The Designated Router (DR) and Backup Designated Router (BDR) for LANs MUST use the standard flooding behavior.¶
Non-DR/BDR nodes with a connection to a LAN that is included in the flooding topology use the standard flooding behavior on that LAN.¶
Non-DR/BDR nodes with a connection to a LAN that is NOT included in the flooding topology behave as follows:¶
LSAs received on the LAN are acknowledged to the DR/BDR.¶
LSAs received on interfaces other than the LAN are NOT flooded on the LAN.¶
NOTE: If any node connected to the LAN requests the enablement of temporary flooding, all nodes revert to the standard flooding behavior.¶
NOTE: The sending of LSA Acknowledgements by nodes NOT using the LAN as part of the flooding topology eliminates the need for changes on the part of the DR/BDR, which might include nodes that do not support the dynamic flooding algorithm.¶
Nodes that support dynamic flooding MUST use the flooding topology for flooding when possible and MUST NOT revert to standard flooding when a valid flooding topology is available.¶
In some cases, a node that supports dynamic flooding may need to add local links to the flooding topology temporarily, even though the links are not part of the calculated flooding topology. This is termed "temporary flooding" and is discussed in Section 6.8.1.¶
In distributed mode, the flooding topology is calculated locally. In centralized mode, the flooding topology is advertised in the area LSDB. Received link-state updates, whether received on a link that is in the flooding topology or on a link that is not in the flooding topology, MUST be flooded on all links that are in the flooding topology except for the link on which the update was received.¶
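The rule above can be summarized by the following non-normative sketch; the link and update abstractions are illustrative only.¶

   # Non-normative sketch: flood an update on every local link in the
   # flooding topology except the link on which it was received
   # (received_on is None for self-originated updates).

   def flood_update(update, flooding_links, send, received_on=None):
       for link in flooding_links:
           if link is not received_on:
               send(link, update)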
In centralized mode, new information in the form of new paths or new node ID assignments can be received at any time. This may replace some or all of the existing information about the flooding topology. There may be transient conditions where the information that a node has is inconsistent or incomplete. If a node detects that its current information is inconsistent, then the node may wait for an implementation-specific amount of time, expecting more information to arrive that will provide a consistent, complete view of the flooding topology.¶
In both centralized and distributed mode, if a node determines that some of its adjacencies are to be added to the flooding topology, it should add those and begin flooding on those adjacencies immediately. If a node determines that adjacencies are to be removed from the flooding topology, then it should wait for an implementation-specific amount of time before acting on that information. This serves to ensure that new information is flooded promptly and completely, allowing all nodes to receive updates in a timely fashion.¶
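A non-normative sketch of this asymmetric behavior follows; the hold-down value is an arbitrary illustration, since the specification leaves the delay implementation specific.¶

   # Non-normative sketch: adjacencies added to the flooding topology are
   # enabled immediately; adjacencies removed from it stay enabled for an
   # implementation-specific hold-down (arbitrary value used here).

   import time

   REMOVAL_HOLDDOWN = 30.0   # seconds; illustrative only

   def apply_new_flooding_topology(enabled, new_links, pending_removals):
       now = time.monotonic()
       for link in new_links - enabled:
           enabled.add(link)                 # enable additions at once
           pending_removals.pop(link, None)
       for link in enabled - new_links:
           pending_removals.setdefault(link, now + REMOVAL_HOLDDOWN)
       for link, deadline in list(pending_removals.items()):
           if now >= deadline:               # act only after the hold-down
               enabled.discard(link)
               del pending_removals[link]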
This section explicitly considers a variety of different topological events in the network and how dynamic flooding should address them.¶
When temporary flooding is enabled on the link, the flooding needs to be enabled in both directions. To achieve that, the following steps MUST be performed:¶
The request for temporary flooding MUST be withdrawn on the link when all of the following conditions are met:¶
Any change in the flooding topology MUST result in an evaluation of the above conditions for any link on which temporary flooding was enabled.¶
Temporary flooding is stopped on the link when both adjacent nodes stop requesting temporary flooding on the link.¶
If a local link is added to the topology, the protocol will form a normal adjacency on the link and update the appropriate LSAs for the nodes on either end of the link. These link state updates will be flooded on the flooding topology.¶
In centralized mode, the Area Leader may choose to retain the existing flooding topology or modify the flooding topology upon receiving these updates. If the Area Leader decides to change the flooding topology, it will update the flooding topology in the LSDB and flood it using the new flooding topology.¶
In distributed mode, any change in the topology, including the link addition, MUST trigger the flooding topology recalculation. This is done to ensure that all nodes converge to the same flooding topology, regardless of the time of the calculation.¶
Temporary flooding MUST be enabled on the newly added local link as long as at least one of the following conditions is met:¶
Note that in this case there is no need to perform a database synchronization as part of the enablement of the temporary flooding because it was part of the adjacency bring-up itself.¶
If multiple local links are added to the topology before the flooding topology is updated, temporary flooding MUST be enabled on a subset of these links per the conditions discussed in Section 6.8.12.¶
If a node is added to the topology, then at least one link is also added to the topology. Section 6.8.2 applies.¶
A node that has a large number of neighbors is at risk of introducing a local flooding storm if all neighbors are brought up at once and temporary flooding is enabled on all links simultaneously. The most robust way to address this is to limit the rate of initial adjacency formation following bootup. This reduces unnecessary redundant flooding as part of initial database synchronization and minimizes the need for temporary flooding, as it allows time for the new node to be added to the flooding topology after only a small number of adjacencies have been formed.¶
In the event a node elects to bring up a large number of adjacencies simultaneously, a significant amount of redundant flooding may be introduced as multiple neighbors of the new node enable temporary flooding to the new node, which initially is not part of the flooding topology.¶
If a link that is not part of the flooding topology fails, then the adjacent nodes will update their LSAs and flood them on the flooding topology.¶
In centralized mode, the Area Leader may choose to retain the existing flooding topology or modify the flooding topology upon receiving these updates. If it elects to change the flooding topology, it will update the flooding topology in the LSDB and flood it using the new flooding topology.¶
In distributed mode, any change in the topology, including the failure of the link that is not part of the flooding topology, MUST trigger the flooding topology recalculation. This is done to ensure that all nodes converge to the same flooding topology, regardless of the time of the calculation.¶
If there is a failure on the flooding topology, the adjacent nodes will update their LSAs and flood them. If the original flooding topology is biconnected, the flooding topology should still be connected despite a single failure.¶
If the failed local link represented the only connection to the flooding topology on the node where the link failed, the node MUST enable temporary flooding on a subset of its local links. This allows the node to send its updated LSAs and receive link-state updates from other nodes in the network before the new flooding topology is calculated and distributed (in the case of centralized mode).¶
In centralized mode, the Area Leader will notice the change in the flooding topology, recompute the flooding topology, and flood it using the new flooding topology.¶
In distributed mode, all nodes supporting dynamic flooding will notice the change in the topology and recompute the new flooding topology.¶
If a node is deleted from the topology, then at least one link is also removed from the topology. Section 6.8.4 and Section 6.8.5 apply.¶
If the flooding topology changes and a local link that was not part of the flooding topology is now part of the flooding topology, then the node MUST:¶
If the flooding topology changes and a local link that was part of the flooding topology is no longer part of the flooding topology, then the node MUST remove the link from the flooding topology.¶
The node MUST keep flooding on such a link for a limited amount of time to allow other nodes to migrate to the new flooding topology.¶
If the removed local link represented the only connection to the flooding topology on the node, the node MUST enable temporary flooding on a subset of its local links. This allows the node to send its updated LSAs and receive link-state updates from other nodes in the network before the new flooding topology is calculated and distributed (in the case of centralized mode).¶
Every time there is a change in the flooding topology, a node MUST check if any adjacent nodes are disconnected from the current flooding topology. Temporary flooding MUST be enabled towards a subset of the disconnected nodes per Sections 6.8.12 and 6.7.¶
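A non-normative sketch of this check, treating the flooding topology as an undirected graph and finding which neighbors are no longer reachable over it, is shown below (the data structures are illustrative).¶

   # Non-normative sketch: after a flooding topology change, determine
   # which adjacent nodes cannot be reached over the flooding topology,
   # so that temporary flooding can be enabled towards a subset of them.

   from collections import deque

   def disconnected_neighbors(self_id, neighbors, flooding_edges):
       adj = {}
       for a, b in flooding_edges:
           adj.setdefault(a, set()).add(b)
           adj.setdefault(b, set()).add(a)
       seen, queue = {self_id}, deque([self_id])
       while queue:
           for nxt in adj.get(queue.popleft(), ()):
               if nxt not in seen:
                   seen.add(nxt)
                   queue.append(nxt)
       return [n for n in neighbors if n not in seen]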
The failure of the Area Leader can be detected by observing that it is no longer reachable. In this case, the Area Leader election process is repeated and a new Area Leader is elected.¶
To minimize disruption to dynamic flooding if the Area Leader becomes unreachable, the node that has the second-highest priority for becoming Area Leader (including the system identifier / Router ID tiebreaker if necessary) SHOULD advertise the same algorithm in its Area Leader Sub-TLV as the Area Leader and (in centralized mode) SHOULD advertise a flooding topology. This SHOULD be done even when the Area Leader is reachable.¶
In centralized mode, the new Area Leader will compute a new flooding topology and flood it using the new flooding topology. To minimize disruption, the new flooding topology SHOULD have as much in common as possible with the old flooding topology. This will minimize the risk of excess flooding with the new flooding topology.¶
In the distributed mode, the new flooding topology will be calculated on all nodes that support the algorithm that is advertised by the new Area Leader. Nodes that do not support the algorithm advertised by the new Area Leader will no longer participate in dynamic flooding and will revert to standard flooding.¶
In the event of multiple failures on the flooding topology, it may become partitioned. The nodes that remain active on the edges of the flooding topology partitions will recognize this and will try to repair the flooding topology locally by enabling temporary flooding towards the nodes that they consider disconnected from the flooding topology until a new flooding topology becomes connected again.¶
Nodes where a local failure was detected update their LSAs and flood them on the remainder of the flooding topology.¶
In centralized mode, the Area Leader will notice the change in the flooding topology, recompute the flooding topology, and flood it using the new flooding topology.¶
In distributed mode, all nodes that actively participate in dynamic flooding will compute the new flooding topology.¶
Note that this is very different from an area partition because there is still a connected network graph between the nodes in the area. The area may remain connected, and forwarding may still function correctly.¶
As discussed in the previous sections, some events require the introduction of temporary flooding on edges that are not part of the current flooding topology. This can occur regardless of whether the area is operating in centralized mode or distributed mode.¶
Nodes that decide to enable temporary flooding also have to decide whether to do so on a subset of the edges that are currently not part of the flooding topology or on all the edges that are currently not part of the flooding topology. Doing the former risks a longer convergence time as it may miss vital edges and not fully repair the flooding topology. Doing the latter risks introducing a flooding storm that destabilizes the network.¶
It is recommended that a node rate limit the number of edges on which it chooses to enable temporary flooding. Initial values for the number of edges on which to enable temporary flooding and the rate at which additional edges may subsequently be enabled are left as an implementation decision.¶
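A non-normative sketch of such a rate limit is shown below; the initial allowance, refill rate, and burst size are arbitrary illustrative values.¶

   # Non-normative sketch: a token bucket limiting how many edges a node
   # enables for temporary flooding.  All numbers are illustrative.

   import time

   class TemporaryFloodingLimiter:
       def __init__(self, initial_edges=2, edges_per_second=0.5, burst=4):
           self.tokens = float(initial_edges)
           self.rate = edges_per_second
           self.burst = burst
           self.last = time.monotonic()

       def may_enable_edge(self):
           now = time.monotonic()
           self.tokens = min(self.burst,
                             self.tokens + (now - self.last) * self.rate)
           self.last = now
           if self.tokens >= 1.0:
               self.tokens -= 1.0
               return True
           return False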
The following code points have been assigned in the "IS-IS Sub-TLVs for IS-IS Router CAPABILITY TLV" registry (IS-IS TLV 242).¶
Type | Description | Reference |
---|---|---|
27 | IS-IS Area Leader | RFC 9667 (Section 5.1.1) |
28 | IS-IS Dynamic Flooding | RFC 9667 (Section 5.1.2) |
IANA has assigned code points from the "IS-IS Top-Level TLV Codepoints" registry, one for each of the following TLVs:¶
Type | Description | Reference |
---|---|---|
17 | IS-IS Area Node IDs | RFC 9667 (Section 5.1.3) |
18 | IS-IS Flooding Path | RFC 9667 (Section 5.1.4) |
19 | IS-IS Flooding Request | RFC 9667 (Section 5.1.5) |
IANA has extended the "IS-IS Neighbor Link-Attribute Bit Values" registry to contain an "L2BM" column that indicates if a bit may appear in an L2 Bundle Member Attributes TLV. All existing rows have the value "N" for "L2BM". The following explanatory note has been added to the registry:¶
The "L2BM" column indicates applicability to the L2 Bundle Member Attributes TLV. The options for the "L2BM" column are:¶
Y - This bit MAY appear in the L2 Bundle Member Attributes TLV.¶
N - This bit MUST NOT appear in the L2 Bundle Member Attributes TLV.¶
IANA has allocated a new bit-value from the "IS-IS Neighbor Link-Attribute Bit Values" registry.¶
Value | L2BM | Name | Reference |
---|---|---|---|
0x4 | N | Local Edge Enabled for Flooding (LEEF) | RFC 9667 |
The following code points have been assigned in the "OSPF Router Information (RI) TLVs" registry:¶
Value | TLV Name | Reference |
---|---|---|
17 | OSPF Area Leader | RFC 9667 (Section 5.2.1) |
18 | OSPF Dynamic Flooding | RFC 9667 (Section 5.2.2) |
The following code points have been assigned in the "Opaque Link-State Advertisements (LSA) Option Types" registry:¶
Value | Opaque Type | Reference |
---|---|---|
10 | OSPFv2 Dynamic Flooding Opaque LSA | RFC 9667 (Section 5.2.3) |
The following code point has been assigned in the "OSPFv3 LSA Function Codes" registry:¶
Value | LSA Function Code Name | Reference |
---|---|---|
16 | OSPFv3 Dynamic Flooding LSA | RFC 9667 (Section 5.2.4) |
IANA has assigned a new bit in the "LLS Type 1 Extended Options and Flags" registry:¶
Bit Position | Description | Reference |
---|---|---|
0x00000020 | Flooding Request bit | RFC 9667 (Section 5.2.7) |
The following code point has been assigned in the "OSPFv2 Extended Link TLV Sub-TLVs" registry:¶
Type | Description | Reference | L2 Bundle Member Attributes (L2BM) |
---|---|---|---|
21 | OSPFv2 Link Attributes Bits Sub-TLV | RFC 9667 (Section 5.2.8) | Y |
The following code point has been assigned in the "OSPFv3 Extended LSA Sub-TLVs" registry:¶
Type | Description | Reference | L2 Bundle Member Attributes (L2BM) |
---|---|---|---|
10 | OSPFv3 Link Attributes Bits Sub-TLV | RFC 9667 (Section 5.2.8) | Y |
A new registry has been created: "OSPF Dynamic Flooding LSA TLVs". New values can be allocated via IETF Review or IESG Approval.¶
The "OSPF Dynamic Flooding LSA TLVs" registry defines top-level TLVs for the OSPFv2 Dynamic Flooding Opaque LSA and OSPFv3 Dynamic Flooding LSAs. It has been added to the "Open Shortest Path First (OSPF) Parameters" registry group.¶
The following initial values have been allocated:¶
Type | Description | Reference |
---|---|---|
0 | Reserved | RFC 9667 |
1 | OSPF Area Router IDs | RFC 9667 (Section 5.2.5) |
2 | OSPF Flooding Path | RFC 9667 (Section 5.2.6) |
Types in the range 32768-33023 are Reserved for Experimental Use; these will not be registered with IANA and MUST NOT be mentioned by RFCs.¶
Types in the range 33024-65535 are Reserved. They are not to be assigned at this time. Before any assignments can be made in the 33024-65535 range, there MUST be an IETF specification that specifies IANA Considerations that cover the range being assigned.¶
A new registry has been created: "OSPF Link Attributes Sub-TLV Bit Values". New values can be allocated via IETF Review or IESG Approval.¶
The "OSPF Link Attributes Sub-TLV Bit Values" registry defines Link Attribute bit-values for the OSPFv2 Link Attributes Sub-TLV and OSPFv3 Link Attributes Sub-TLV. It has been added to the "Open Shortest Path First (OSPF) Parameters" registry group. This registry contains a column "L2BM" that indicates if a bit may appear in an L2 Bundle Member Attributes (L2BM) Sub-TLV. The following explanatory note has been added to the registry:¶
The "L2BM" column indicates applicability to the L2 Bundle Member Attributes sub-TLV. The options for the "L2BM" column are:¶
Y - This bit MAY appear in the L2 Bundle Member Attributes sub-TLV.¶
N - This bit MUST NOT appear in the L2 Bundle Member Attributes sub-TLV.¶
The following initial value is allocated:¶
Bit Number | Description | Reference | L2 Bundle Member Attributes (L2BM) |
---|---|---|---|
0 | Local Edge Enabled for Flooding (LEEF) | RFC 9667 (Section 5.2.8) | N |
IANA has created a registry called "IGP Algorithm Type For Computing Flooding Topology" in the existing "Interior Gateway Protocol (IGP) Parameters" registry group.¶
The registration policy for this registry is Expert Review.¶
Values in this registry come from the range 0-255.¶
The initial values in the "IGP Algorithm Type For Computing Flooding Topology" registry are as follows:¶
Value | Description |
---|---|
0 | Reserved for centralized mode |
1-127 | Unassigned. Individual values are to be assigned according to the "Expert Review" policy defined in [RFC8126]. The designated experts should require a clear, public specification of the algorithm and comply with [RFC7370]. |
128-254 | Reserved for Private Use |
255 | Reserved |
This document introduces no new security issues. Security of routing within a domain is already addressed as part of the routing protocols themselves. This document proposes no changes to those security architectures.¶
An attacker could become the Area Leader and introduce a flawed flooding algorithm into the network thus compromising the operation of the protocol. Authentication methods as described in [RFC5304] and [RFC5310] for IS-IS, [RFC2328] and [RFC7474] for OSPFv2, and [RFC5340] and [RFC4552] for OSPFv3 SHOULD be used to prevent such attacks.¶
The authors would like to thank Sarah Chen, Tony Przygienda, Dave Cooper, Gyan Mishra, and Les Ginsberg for their contributions to this work. The authors would also like to thank Arista Networks for supporting the development of this technology.¶
The authors would like to thank Zeqing (Fred) Xia, Naiming Shen, Adam Sweeney, Acee Lindem, and Olufemi Komolafe for their helpful comments.¶
The authors would like to thank Tom Edsall for initially introducing them to the problem.¶
Advertising Local Edges Enabled for Flooding (LEEF) is based on an idea proposed by Huaimo Chen, Mehmet Toy, Yi Yang, Aijun Wang, Xufeng Liu, Yanhe Fan, and Lei Liu. The authors wish to thank them for their contributions.¶