Internet2 QoS Working Group Draft
August, 1999
Editor: Ben Teitelbaum <ben@internet2.edu>
This document specifies the architectural requirements for the QBone - an interdomain testbed for differentiated services (DiffServ) that seeks to provide the higher-education community with end-to-end services in support of emerging advanced networked applications. The emphasis is on specifying the minimum requirements for a network to participate in the QBone and support interdomain DiffServ services. Implementation techniques will be explored in other documents. This document is a product of the Internet2 QoS Working Group, which is overseeing the QBone testbed initiative. It should be regarded as a work in progress and open to constructive criticism from all Internet2 QBone participants.
The QBone architecture seeks to remain consistent with the emerging IETF standards for DiffServ, which have been described in a number of recent Internet Drafts, many in last call. It is not the intent of this document to reiterate or elaborate upon these emerging standards, but rather to clarify what subset of them will be implemented by the QBone and specify QBone requirements that are outside the scope of the IETF's work. Where this document references Internet Drafts, which are themselves works in progress, the references should be considered as being to the most recent versions of these drafts.
Consistent with the terminology defined in [DSARCH], each network participating in the QBone will be considered a "DS domain" and the union of these networks - the QBone itself - a "DS region". QBone participants must cooperate to provide one or more interdomain services besides the default, traditional best effort IP service model. The first such service to be implemented is the Virtual Leased Line (a.k.a. "Premium") Service described in [2BIT]. Every QBone DS domain must support the expedited forwarding (EF) per-hop behavior (PHB) [EF] and configure its traffic classifiers and conditioners (meters, markers, shapers, and droppers) to provide a VLL service to EF aggregates.
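As a rough illustration of the conditioning functions named above, the following Python sketch shows a token-bucket meter that drops non-conforming packets from an EF aggregate. The class, its name, and its parameters are purely illustrative assumptions, not anything mandated by this architecture.

    # Hypothetical token-bucket meter/dropper for an EF aggregate, of the kind
    # a DS boundary node might apply when conditioning traffic for a VLL
    # service. Illustrative only; not mandated by this document.
    class TokenBucketPolicer:
        def __init__(self, rate_bps: float, bucket_bytes: int):
            self.rate = rate_bps / 8.0        # token refill rate, bytes/second
            self.depth = float(bucket_bytes)  # bucket depth, bytes
            self.tokens = float(bucket_bytes)
            self.last = 0.0                   # time of previous update, seconds

        def conforms(self, pkt_bytes: int, now: float) -> bool:
            """True if the packet conforms to the profile (forward as EF);
            False means the conditioner would drop (or re-mark) it."""
            self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if pkt_bytes <= self.tokens:
                self.tokens -= pkt_bytes
                return True
            return False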
Additionally, the QBone must support an integrated measurement infrastructure with hooks to support end-to-end debugging and auditing by users, network operators, and implementers. Both active and passive measurement data will be collected and shared openly among participants. The Internet2 Measurements Working Group is working to provide additional guidance in this area.
Another area where the QBone architecture will provide a "value-add" on top of the base DiffServ architecture is in interdomain operations. At a minimum, we must specify a common set of operational practices and procedures to be followed by network operators. As the project progresses, it should be expected that network operators will begin to rely on automated tools to make admission control decisions and configure network devices. This document uses the term "bandwidth broker" (BB) to refer to the abstraction that automates this admission control and configuration functionality. The discussion of bandwidth brokers in [2BIT] suggests that BBs must eventually communicate among themselves to negotiate interdomain reservation setup. The QBone architecture must admit the use of prototype inter-BB signaling protocols as they are developed.
A high-level architectural requirement of the QBone is contiguity. Unlike IPv6 and MBone technology, quality of service cannot be implemented as an overlay network service based on address aggregation only. The QBone is necessarily a contiguous set of DS domains - which is to say: a DS Region. Within the QBone, each participating network is a DS Domain that interoperates with other QBone networks to provide the end-to-end QBone services described in Section 3.
Each QBone network must have a well-defined administrative boundary, across which it peers with neighboring QBone DS domains. Bilateral service level specifications (SLSes) exist between adjacent QBone DS domains. These SLSes specify how traffic is classified, policed, and forwarded by DS boundary nodes. Although SLSes are necessarily bilateral and may contain any number of arcane arrangements, within the QBone there are certain minimal features of any SLS that are required to implement each QBone service. These SLS requirements are described in detail in Section 3.
Section 4 specifies the requirements for the QBone Measurement Architecture, including what measurement metrics must be collected and how they are to be disseminated.
Section 5 describes the process through which interdomain reservations are established through the QBone testbed infrastructure.
QPS exploits the Expedited Forwarding (EF) per-hop forwarding behavior [EF]. EF requires that the "departure rate of the aggregate's packets from any DiffServ node must equal or exceed a configurable rate" and that the EF traffic "SHOULD receive this rate independent of the intensity of any other traffic attempting to transit the node". EF may be implemented by any of a variety of queuing disciplines. Services like QPS are built from EF through careful conditioning of EF aggregates so that the arrival rate of EF packets at any node is always less than that node's configured minimum departure rate for any interval of time equal to or greater than the time to send an MTU-sized packet at the configured peakRate (where the terms in bold are defined below). Note that QPS is defined with an explicit jitter bound that imposes configuration requirements more stringent than those required by RFC2598 [EF].
Initiators of QPS reservations request and contract for a peak rate peakRate of EF traffic, at a specified maximum transmission unit MTU, to be delivered with a specified jitter bound jitter. Each QPS reservation is also parameterized by a DS-domain-to-DS-domain route route and a specified time interval {startTime, endTime}. In summary, a QPS reservation {source, dest, route, startTime, endTime, peakRate, MTU, jitter} is an agreement to provide the transmission assurances of the QBone Premium Service (see below) starting at startTime and ending at endTime across the chain of DS-domains route between source source and destination dest for EF traffic ingressing at source and conforming to a "CBR-like" traffic profile parameterized by a token bucket profiler with token rate peakRate and a bucket depth of one MTU-sized packet (where the terms in bold are defined below).
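For concreteness, a QPS reservation might be represented as the following record. The Python class below is purely illustrative; it mirrors the parameters just listed, and the field types are assumptions, not part of any QBone protocol.

    from dataclasses import dataclass

    # Illustrative representation of the QPS reservation tuple described above.
    @dataclass
    class QPSReservation:
        source: str        # IPv4 network prefix, e.g. "10.1.0.0/16"
        dest: str          # IPv4 network prefix
        route: list[str]   # ordered list of canonical QBone domain names
        startTime: str     # "YYYYMMDDHHMM", UTC
        endTime: str       # "YYYYMMDDHHMM", UTC
        peakRate: int      # bits per second
        MTU: int           # bytes
        jitter: int        # microseconds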
The parameter jitter is a worst case bound on instantaneous packet delay variation, or IPDV (defined in section 4.2.1.2), with the caveat that the bound does not apply to EF packets with different IP routes. Measurable IPDV should occur primarily due to synchronization between EF packets contending for the EF forwarding resources of network elements along the QPS path. Any computation of a QBone domain's worst case IPDV must include an analysis of possible synchronization between converging EF aggregates in the domain. At any node, an EF packet may find itself synchronized with EF packets converging from other circuits and may additionally find itself synchronized with a non-EF packet whose transmission on an output interface may have already begun at the time of the EF packet's arrival. To make appropriate admissions control decisions for QPS reservation requests, it is important that a QBone domain understand the worst case variation in delay that an EF packet might experience. In practice, the worst case jitter will be very rare and QPS users may find measurements of 99.5th percentile IPDV a more relevant empirical gauge of jitter. The appendix contains an example computation of the worst case IPDV for a QPS reservation across a typical Internet2 path; the bound computed is better than 10ms, which is sufficient to meet the jitter requirements of most advanced Internet2 applications.
The initial design of the QBone Measurement Architecture (QMA) is focused on verifying the service goals of the QBone Premium Service (QPS) and helping with the debugging, provisioning, and understanding of EF behavior aggregates. The initial QPS goals are low loss and low delay variation, so measuring loss and delay variation is to be prioritized. To help with provisioning and understanding the use of EF service level specifications, the amount of bandwidth reserved relative to the instantaneous EF load on ingress and egress links will be measured. To help with debugging and isolation of faults, metrics are required to be measured domain-to-domain, with measurement points at interdomain interfaces. It is also recommended that end-to-end measurements be taken using the same metrics.
Both active and passive measurements will be used to answer the question: Is the EF PHB working as expected? Active measurements are measurements made by injecting traffic into the network and measuring the properties of the injected traffic. Passive measurements observe properties of traffic by means that do not interfere with the traffic itself. The IETF IPPM definitions of one-way packet loss and one-way packet delay are examples of metrics commonly obtained through active measurement, while "bytes transmitted on link L" is an example of a metric that must be obtained through passive observation of traffic. Note that passive measurements do not necessarily make a distinction between production traffic and injected traffic.
Figure 4.1 shows the definitions used to measure the capacity and bandwidth of a link. The link bandwidth is the total bandwidth of a given link, in bits per second. The contracted (EF) capacity or EF commitment is the total fraction of link bandwidth contracted to Premium service in bits per second. Reservation load is the current bandwidth of the link reserved for EF. This ultimately will be dynamic. The reservation load is the result of reservations made through bandwidth brokers, and is less than or equal to the EF commitment. For the initial phase of the QBone, the reservation load should be static and equal to the EF commitment. Finally, the EF load is the link bandwidth actually used by EF-marked packets, in bits per second. This will always be less than or equal to the reservation load.
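These definitions imply a simple ordering that measurements can check. The sketch below is illustrative only, with hypothetical function and relation names; it tests the ordering for one polled sample.

    def check_link_invariants(link_bw, ef_commitment, reservation_load, ef_load):
        """All arguments in bits/second. Returns the violated relations in
        EF load <= reservation load <= EF commitment <= link bandwidth
        (empty list if the sample is consistent)."""
        violations = []
        if ef_load > reservation_load:
            violations.append("EF load exceeds reservation load")
        if reservation_load > ef_commitment:
            violations.append("reservation load exceeds EF commitment")
        if ef_commitment > link_bw:
            violations.append("EF commitment exceeds link bandwidth")
        return violations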
Required metrics include metrics like one-way packet loss, one-way delay variation, and routing information, which are obtained through active measurement, as well as metrics like link utilization, which are generally obtained through SNMP polling of MIBs or through passive measurement devices that eavesdrop on traffic. Finally, there are some required metrics that will come from configuration information (and eventually from the bandwidth broker reservation system); these include link bandwidths, per-interface EF commitments, and EF reservation load.
Suggested metrics include EF and BE interface discards, one-way packet delay, and burst throughput. EF and BE interface loss statistics should be reported if they are available in proprietary MIBs. One-way packet delay shows both minimum latencies and queuing (indicated by values greater than the minimum) along a path, and can be used to compute one-way delay variation. Burst throughput tests consist of sending short EF and BE bursts end-to-end, in order to test the policing of EF traffic and to verify that BE bursts do not affect the EF traffic.
Future metrics include routing metrics and application-specific metrics. Routing metrics may be derived from routing protocols themselves, such as autonomous system (AS) path stability, and may be useful for understanding the sensitivity of interdomain QoS reservations to routing and for designing signaling protocols and resource allocation algorithms to cope with route changes. Application-specific metrics are metrics that indicate how well applications would perform over specific paths. The application-specific metrics envisioned so far fall into two types: derived and active end-to-end. In the first case, application-specific metrics could be derived from "network base metrics", such as loss and delay variation. Thus, they would take the base metrics and produce some indication of a benchmark application's performance (e.g. MPEG-1 picture quality). In the second case, application-specific metrics could consist of minimal applications themselves: something that one runs, perhaps at two endpoints, and that gives an indication of how well the application of interest would perform.
In addition to the loss and delay measurements, traceroute measurements must be performed in parallel to verify that EF packets are being routed as intended, and to give an indication of path stability. Since traceroutes require resources of all intermediate points, they should be performed at a lower frequency than the other active tests, initially at ten-minute intervals.
Conforming EF traffic should not be dropped and should not experience significant delay variation due to queuing. Thus, these active tests may be used to verify the basic QPS service goals. If packets are lost or variation is introduced, the position of the measurement machines sending active test traffic will assist in isolating the problem to a single QBone domain. Interface load and loss metrics and information on the reservation load should further assist in the isolation of faults and deepen our understanding of DiffServ provisioning issues.
Figure 4.2: Measurement points
All measurements are to be taken at or as close to interdomain boundary routers as possible. Figure 4.3 illustrates an example QBone measurement node configuration. Actual configurations will vary significantly according to local issues.
When possible, active measurement probes should have direct interfaces to boundary routers. These probes are the sources and sinks of dedicated measurement paths terminating at the measurement node.
Passive measurement probes must observe the traffic on inter-domain links without perturbing it in any respect. This is commonly accomplished through the use of a splitter.
Passive measurement probes may additionally be located on intra-domain links to assist in deriving metrics such as EF and BE interface discards. Passive measurement equipment may also be used to measure probe flows created by active measurement equipment.
SNMP-based polling agents that extract MIBs to support QBone utilization metrics may be located anywhere.
Figure 4.3: Example QBone Measurement Node Configuration
Figure 4.4: Measurement paths
An explicit level of global clock synchronization is not required. However, to achieve reasonably consistent accuracy of timestamps across the QBone measurement infrastructure, it is strongly recommended that participants report timestamps as closely synchronized with UTC as possible.
For QBone measurements, the initial "reasonable period of time" is two minutes or more. Two minutes allows more than ample time to account for NTP-level clock drift among the measurement instruments, is the default TCP timeout, and is long enough that for most advanced applications the packet is as good as lost.
Type-P represents the type of test packets sent. Since loss measurements should be taken continuously, small packets are required so as to not disrupt user EF traffic. The Type-P of the measurement packets should be UDP packets of 100 bytes or less, with EF specified in the DS-byte as appropriate.
This metric requires a Type-P-One-way-Loss-Poisson-Stream, with a lambda of twice per second. A Poisson stream attempts to avoid synchronizing the test traffic with other network events. A high lambda is desirable to see more detail (in particular the results of shorter bursts), however, it is important that the test traffic does not perturb user traffic, and, in addition, that any packet bursts that occur as a result of the Poisson distribution do not exceed the test stream's reservation profile.
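A minimal sketch of such a stream generator follows, assuming only that exponentially distributed inter-arrival gaps yield a Poisson process; the function name and default values are illustrative, not normative.

    import random

    # Illustrative send-time generator for a Poisson test stream with
    # lambda = 2 packets/second, as described above. Any bursts this
    # produces must still fit within the test stream's reservation profile.
    def poisson_send_times(lam: float = 2.0, duration_s: float = 60.0):
        """Yield packet send times (seconds) for a Poisson process of rate lam."""
        t = 0.0
        while True:
            t += random.expovariate(lam)  # exponential inter-arrival gap
            if t >= duration_s:
                return
            yield t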
Note that a Type-P-One-way-Loss-Poisson-Stream is derivable from a Type-P-One-way-Delay-Poisson-Stream, so a single stream generated using measurement machines with closely synchronized time-of-day references can be used for EF and BE losses, IPDV, and one-way delay.
When measuring EF and BE path losses, care must be taken to prevent synchronization of the Poisson processes driving the test streams.
Pay attention to the error analysis section of the IPPM One-way loss document, and report any details of "Type-P" that have been left unspecified here (for example, what UDP ports are used) so that the results of the metrics can be understood in the future.
The purpose of the one-way packet loss measurement is to estimate the packet loss along EF-enabled paths in the QBone. This can be done more or less continuously and can provide important background information which can be used to interpret other measurements taken. One use for such simultaneous one-way BE and EF packet loss measurements is to measure the effect the various levels of EF reservation have on best effort packet loss. Further, correlations between EF and best effort loss can be calculated. The correlations can point out problems with the isolation of EF traffic and consequently possible problems in PHB implementation.
The Type-P-One-Way-IPDV metric makes use of methods similar to those used in [DELAY], but, being a differential measurement, the IPDV metric does not require clock synchronization of distant measurement points. For more information on measurements with un-synchronized clocks, please refer to [IPDV].
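The following sketch illustrates why synchronization is unnecessary: a constant offset between the sender's and receiver's clocks cancels when consecutive one-way delays are differenced. The function is hypothetical, not part of [IPDV].

    # Illustrative IPDV computation. send_times come from the sender's clock,
    # recv_times from the receiver's clock; a constant offset between the two
    # clocks cancels in the difference, so only relative clock stability matters.
    def ipdv_series(send_times, recv_times):
        """Return IPDV values (same units as the timestamps)."""
        delays = [r - s for s, r in zip(send_times, recv_times)]  # offset-shifted one-way delays
        return [delays[i] - delays[i - 1] for i in range(1, len(delays))]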
Only the collection of EF IPDV measurements is required within the QBone. There is no need at this time to measure IPDV for BE packet streams.
The general procedure for taking IPDV measurements is as follows:
Type-P represents the type of test packets sent. Since loss measurements should be taken continuously, small packets are required so as to not disrupt user EF traffic. The Type-P of the measurement packets should be UDP packets with size 100 bytes or less, with EF specified in the DS-byte as appropriate.
This metric requires a Type-P-One-way-Delay-Poisson-Stream, with a lambda of twice per second. A Poisson stream attempts to avoid synchronizing the test traffic with other network events. A high lambda is desirable to see more detail (in particular the results of shorter bursts), however, it is important that the test traffic does not perturb user traffic, and, in addition, that any packet bursts that occur as a result of the Poisson distribution do not exceed the test stream's reservation profile.
IPDV measurement may be combined with loss measurement. A comparison of EF and BE IPDV in this case may be used to evaluate performance differences between EF and BE measurement flows if no losses occur.
Traceroute should be run between measurement machines directly connected to the border routers of each QBone domain. For example, in Figure 4.4, traceroutes should be run from QBone Domain 1 to QBone Domain 2 (necessary only if there is a DMZ between these domains where other routers might sit) and from the measurement point in Domain 1 to the furthest measurement point in Domain 2. In addition, Domain 2 should run traceroute between its two ingress/egress points. Note that in this case, Domain 2 should run traceroute from each of its border routers to the other, since the routes in each direction may be different. Further, any QBone domains exchanging traffic with one another should run traceroute during their experiments.
Since traceroute exploits IP TTL (time-to-live) expiration and generally results in an exception to normal packet forwarding that consumes non-negligible resources, it is recommended that traceroute be run no more frequently than once every 10 minutes. Nodes initiating traceroute measurements should randomize the starting times in an interval around the 10 minute start time so that the traceroutes in the QBone are not synchronized.
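A minimal sketch of such randomized scheduling follows; the function name and the size of the jitter window are assumptions for illustration.

    import random

    # Illustrative scheduling of traceroute runs: once per 10-minute interval,
    # with a random offset so that runs across the QBone do not synchronize.
    def traceroute_start_times(interval_s: float = 600.0,
                               jitter_s: float = 120.0, runs: int = 6):
        """Yield start times (seconds) randomized within +/- jitter_s
        of each nominal 10-minute start time."""
        for k in range(runs):
            yield k * interval_s + random.uniform(-jitter_s, jitter_s)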
Metrics that could be derived from traceroute measurement include frequency of path change and average length of time a path stays in place. This is important planning information, and it is also needed for experiments where one must know what path a data stream took in order to interpret other measurements correctly.
Load measurements can be taken with an external passive device with the capability of filtering packets (to distinguish EF and BE counts) or they can be taken by polling the appropriate MIB counters every minute. BE and EF statistics should be collected simultaneously so that the loads can be correlated with each other. The timing of the poll should be as accurate as possible so that the load over the interval can be accurately computed in bits/second.
The distinction between traffic types does not appear in the standard MIB-2 interface fields (you can only get the information for the total interface), and so vendor-specific MIBs must be used. One example of a vendor-specific MIB is the Cisco CAR MIB. This MIB has statistics defined for various configured rate limits that can be used to separate EF and BE traffic.
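Assuming a byte counter polled once per minute (the specific MIB objects and the polling machinery are outside this sketch), load in bits per second might be computed as follows; the wrap handling assumes at most one rollover of a 32-bit counter per interval.

    # Illustrative load computation from two polls of a byte counter, e.g.
    # from a vendor-specific MIB that separates EF and BE traffic.
    def load_bps(count_prev: int, count_now: int, interval_s: float,
                 counter_bits: int = 32) -> float:
        """Bits/second over the polling interval, allowing one counter wrap."""
        modulus = 1 << counter_bits
        delta = (count_now - count_prev) % modulus  # bytes sent in the interval
        return delta * 8.0 / interval_s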
The purpose of the load measurements is, first, to see what fraction of the link's capacity (as opposed to the reserved capacity) the EF traffic is using, and second, to use simultaneous EF Load and BE Load measurements to see whether the EF traffic is sufficiently insulated from spikes in the BE traffic.
The link bandwidth measurement is triggered by
EF commitment measurements are triggered by
Initially, the QBone will use "static" reservations and manually configured SLSes. Consequently, the EF reservation load will also be configured manually. In later implementations, the EF reservation load may be available from bandwidth brokers or directly from edge routers.
EF reservation load measurements are triggered by
The reservation load metric is useful for comparison with the actual load generated by the EF traffic, and also for understanding the reservation dynamics, which will have an effect on the SLSes as well as on provisioning.
As with load, standard MIB-2 loss fields do not distinguish between behavior aggregates. Vendor-specific MIBs may be available to distinguish between aggregates. Alternatively, it may be possible to capture these loss measurements external to the routers with passive measurement devices like protocol analyzers. If measuring through polling MIBs, these polls should be combined with the polls for the EF and BE load measurements if possible, in order to synchronize all 8 measurements in time, as well as to minimize the polling traffic overhead.
The purpose of the interface loss measurements is to be able, first, to check the quality of the EF and BE services on all measured interfaces, and second, to compare these measurements with the active "sampling" measurements provided by the path loss measurements. These measurements can be correlated with the passive load measurements in order to see that the EF and BE aggregates are properly isolated, and to observe the effect of EF traffic on BE loss.
For QBone measurements, a "reasonable period of time" is two minutes or more. Two minutes is the default TCP timeout, and is long enough that for most advanced applications the packet is as good as lost.
As with the packet loss measurements above, Type-P represents the type of test packets sent. Since delay measurements should be taken continuously, small packets are required so as to not disrupt user EF traffic. The Type-P of the measurement packets should be UDP packets of 100 bytes or less, with EF specified in the DS-byte as appropriate.
This metric requires a Type-P-One-way-Delay-Poisson-Stream, with a lambda of twice per second. A Poisson stream attempts to avoid synchronizing the test traffic with other network events. A high lambda is desirable to see more detail (in particular the results of shorter bursts), however, it is important that the test traffic does not perturb user traffic, and, in addition, that any packet bursts that occur as a result of the Poisson distribution do not exceed the test stream's reservation profile.
Note that a Type-P-One-way-Loss-Poisson-Stream is derivable from a Type-P-One-way-Delay-Poisson-Stream, so a single stream generated using measurement machines with closely synchronized time-of-day references can be used for EF and BE losses, IPDV, and one-way delay.
When measuring One-way delay of EF and BE paths simultaneously, care must be taken to prevent synchronization of the Poisson processes driving the test streams. For example, if the two packets are always sent out back-to-back, the second might get consistently worse treatment (if it's always waiting in a queue) or consistently better treatment (if the first causes routers to cache information along the path). Since these cases are difficult to disambiguate, prevent synchronization of the two streams.
Pay attention to the metric reporting and error analysis sections of the IPPM One-way Delay document, and report any details of "Type-P" that have been left unspecified here (for example, what UDP ports are used) so that the results of the metrics can be understood in the future.
The purpose of the one-way delay measurement is to estimate the delay along EF-enabled paths in the QBone. This can be done more or less continuously and can provide important background information which can be used to interpret other measurements taken. One use for one-way EF packet delay measurements is to detect queuing within the EF path, since in the absence of routing changes, delays greater than the minimum generally indicate queuing is occurring along a path.
Each QBone domain must provide a web site for disseminating and presenting measurements taken at points within the domain. Both summary plots and raw measurement data are to be made available through this web interface. By standardizing on a simple namespace and several simple reporting styles, it will be straightforward for QBone domains to create a rich mesh of links to each other's sites, without rigidly specifying all aspects of how an individual domain must present its measurements.
Canonical Name | Metric | Data Type | Units |
efPathLoss | EF Path Loss | comma-delimited unsigned integer tuple (e.g. "100, 92") | EF (packets sent, packets received) per 1 minute |
bePathLoss | BE Path Loss | comma-delimited unsigned integer tuple | BE (packets sent, packets received) per 1 minute |
efInterfaceLoss | EF Interface Loss | comma-delimited unsigned integer 4-tuple | EF (packets sent, packets received, bytes sent, bytes received) per 1 minute |
beInterfaceLoss | BE Interface Loss | comma-delimited unsigned integer 4-tuple | BE (packets sent, packets received, bytes sent, bytes received) per 1 minute |
efDV | EF Delay Variation | unsigned integer | microseconds |
efLoad | EF Load | unsigned integer pair | bits per second, packets per second |
beLoad | BE Load | unsigned integer pair | bits per second, packets per second |
efTrace | EF Traceroute | string of dot-notated, comma-delimited IP addresses (e.g. "a1.b1.c1.d1, a2.b2.c2.d2, ...") | NA |
beTrace | BE Traceroute | string of dot-notated, comma-delimited IP addresses (e.g. "a1.b1.c1.d1, a2.b2.c2.d2, ...") | NA |
linkBW | Link Bandwidth | unsigned integer ordered pair | bits per second, timestamp |
efCom | EF Commitment | unsigned integer ordered pair | bits per second, timestamp |
efRes | EF Reservation Load | unsigned integer ordered pair | bits per second, timestamp |
All reports are to begin at 00:00:00 UTC for the specified day (see Section 4.3.3) and have a duration of one day. All clocks used in the QBone measurement infrastructure must be accurate to within 100 milliseconds of UTC.
Three types of reports are to be made available: image reports, page reports, and file reports.
Figure 4.5: MRTG Load Example
The record format for summarized reports is:
    <metric data value, typed as shown in 4.3.1, or "-1" for missing data>\n
The record format for unsummarized reports is:
    <timestamp in milliseconds since midnight UTC>,<metric data value, typed as shown in 4.3.1, or "-1" for missing data>\n
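As an illustration, the following hypothetical helpers emit records in these two formats; the sample values are invented.

    # Illustrative writers for the two record formats above. The value is the
    # metric's data type from Section 4.3.1 rendered as text, or "-1" if missing.
    def summarized_record(value: str) -> str:
        return f"{value}\n"

    def unsummarized_record(ms_since_midnight_utc: int, value: str) -> str:
        return f"{ms_since_midnight_utc},{value}\n"

    # Example: a summed efPathLoss tuple and a raw timestamped tuple.
    print(summarized_record("1500, 1499"), end="")
    print(unsummarized_record(4500123, "100, 92"), end="")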
The following table lists the reports required by the QBone architecture. For each report, the summary interval and summary function are given. Summary intervals have been chosen based on an educated guess of what is reasonable for a QBone domain to collect and archive.
Metric | Image Reports (*.gif) / Page Reports (*.html) | File Reports (*.txt)
(Each entry is given as Summary Interval | Summary Function; an interval of "none" denotes unsummarized raw data.)

efPathLoss
  Image/Page: 5 minutes | percentage of EF packets lost
  File: 5 minutes | summed efPathLoss tuples; none | timestamped efPathLoss tuples

bePathLoss
  Image/Page: 5 minutes | percentage of BE packets lost
  File: 5 minutes | summed bePathLoss tuples; none | timestamped bePathLoss tuples

efInterfaceLoss
  Image/Page: 5 minutes | percentage of EF packets lost; 5 minutes | percentage of EF bytes lost
  File: 5 minutes | percentage of EF packets lost; 5 minutes | percentage of EF bytes lost; none | timestamped efInterfaceLoss 4-tuples

beInterfaceLoss
  Image/Page: 5 minutes | percentage of BE packets lost; 5 minutes | percentage of BE bytes lost
  File: 5 minutes | percentage of BE packets lost; 5 minutes | percentage of BE bytes lost; none | timestamped beInterfaceLoss 4-tuples

efDV
  Image/Page: 5 minutes | milliseconds of IPDV at the 50th, 90th, and 99.5th percentiles
  File: 5 minutes | milliseconds of IPDV at the 50th, 90th, and 99.5th percentiles

efLoad
  Image/Page: 5 minutes | sum of EF bps
  File: 5 minutes | sum of EF bps; none | raw 1 minute EF load measurements (no timestamps needed)

beLoad
  Image/Page: 5 minutes | sum of BE bps
  File: 5 minutes | sum of BE bps; none | raw 1 minute BE load measurements (no timestamps needed)

efTrace
  Image/Page: optional
  File: none | timestamped raw traceroutes

beTrace
  Image/Page: optional
  File: none | timestamped raw traceroutes

linkBW
  Image/Page: 24 hours | link bandwidth in bps
  File: 24 hours | link bandwidth in bps

efCom
  Image/Page: 24 hours | mean EF commitment in bps
  File: 24 hours | mean EF commitment in bps

efRes
  Image/Page: 24 hours | mean EF reservation in bps
  File: 24 hours | mean EF reservation in bps
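As an illustration of the efPathLoss summarization above, the following sketch sums five 1-minute (sent, received) tuples and reports the percentage of EF packets lost; the function name and sample values are hypothetical.

    # Illustrative 5-minute summary for efPathLoss: five 1-minute tuples are
    # summed, and the image/page report shows the percentage of packets lost.
    def percent_lost(tuples):
        """tuples: iterable of (packets_sent, packets_received) pairs."""
        sent = sum(s for s, _ in tuples)
        received = sum(r for _, r in tuples)
        return 100.0 * (sent - received) / sent if sent else 0.0

    print(percent_lost([(120, 119), (118, 118), (121, 120), (119, 119), (122, 121)]))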
Each measurement point is at an inter-domain interface specified by the triple: {<source domain>, <dest domain>, <first hop>}. The domain names are the canonical domain names published by the respective QBone domains. The <first hop> is the IP address of the first-hop router in the destination domain to disambiguate multihoming. <first hop> may be "default" if there is a unique peering between the two domains.
For static reports and objects:
<root URL>/<source domain>/<dest domain>/<first hop>/<date>/<type>.<aggregation>.{html | gif | txt}
For dynamic and CGI-based reports and objects:
<root URL>/cgi-bin/getStat.cgi?source=<source domain>&dest=<dest domain>&first=<first hop>&date=<date>&type=<type>&agg=<aggregation>&report={html | gif | txt}
<root URL> | Root where all measurement reports hosted by a given domain may be found (e.g. "http://qbone.umn.edu") |
<source domain> | Canonical name for the source QBone domain (e.g."minnesota") |
<dest domain> | Canonical name for the destination QBone domain (e.g."duke") |
<first hop> | IP address of the first-hop router in the destination domain or "default" |
<date> | YYYYMMDD where year is common era and all fields are zero-padded (e.g. Valentine's Day = "19990214") |
<type> | Canonical name of a measurement data type (see below) |
<aggregation> | Summary aggregation interval in minutes |
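A minimal sketch of building the static report URL from these components, using the example values from the table above; the function name is hypothetical.

    # Illustrative construction of the static report URL described earlier:
    # <root URL>/<source domain>/<dest domain>/<first hop>/<date>/<type>.<aggregation>.<ext>
    def static_report_url(root, source, dest, first_hop, date, rtype, agg, ext):
        return f"{root}/{source}/{dest}/{first_hop}/{date}/{rtype}.{agg}.{ext}"

    print(static_report_url("http://qbone.umn.edu", "minnesota", "duke",
                            "default", "19990214", "efPathLoss", "5", "txt"))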
In order to foster research into providing quality of service over differentiated service networks, all measurements taken within the QBone as mandated by the architecture document will be public. No restrictions will be placed on web site access, and raw data and summaries should be made available to bona fide network researchers.
source | IPv4 network prefix (A.B.C.D/x) |
dest | IPv4 network prefix (A.B.C.D/x) |
route | {D1, D2, ...}, where each DX is the canonical name of the QBone domain X, as described in Section 4.3.3 |
startTime | ISO-8601 date/time notation: YYYYMMDDHHMM where time is given as Universal Time (UTC) |
endTime | ISO-8601 date/time notation: YYYYMMDDHHMM where time is given as Universal Time (UTC) |
peakRate | bits per second |
MTU | bytes |
jitter | microseconds |
The worst case IPDV will occur when one EF packet is forwarded with no queuing delay, followed by a second EF packet that experiences maximum queuing delay. The maximum queuing delay at any hop is related to the fan-in of converging EF traffic, which, in the worst case, may be perfectly synchronized. Table A.1 below shows the IPDV bound (in milliseconds) for this example assuming different MTU sizes.
[Table A.1: worst-case IPDV bound, in milliseconds, by MTU size]
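A minimal sketch of this style of worst-case computation, assuming (as one reading of the text above) that the per-hop worst case is the fan-in of synchronized EF packets plus one non-EF packet already in transmission, each contributing one MTU serialization time. The hop values below are hypothetical and do not reproduce the Internet2 path of Table A.1.

    # Sketch of a worst-case IPDV bound along a path. hops is a list of
    # (fan_in, link_rate_bps) pairs; each hop contributes the serialization
    # time of (fan_in EF packets + 1 non-EF packet) at MTU size.
    def worst_case_ipdv_ms(hops, mtu_bytes):
        """Upper bound on IPDV in milliseconds for the given path."""
        bound_s = 0.0
        for fan_in, rate_bps in hops:
            serialization = mtu_bytes * 8.0 / rate_bps  # one MTU on this link
            bound_s += (fan_in + 1) * serialization     # EF fan-in + one non-EF packet
        return bound_s * 1000.0

    # Example: four hops at OC-3 rates (155 Mb/s) with fan-in 3, 1500-byte MTU.
    print(worst_case_ipdv_ms([(3, 155_000_000)] * 4, 1500))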