Yeti DNS Project Phase-2
--A Live IPv6-only Root DNS Server System Testbed
Introduction
The Yeti project is a live testbed. This page collects all information about Yeti work in one place, including Yeti documents, results from experiments, software tools, and observations made during operation of the testbed.
1. Project & Testbed Description
Problem statement of Yeti DNS Project
The problem statement describes why the Yeti project was started and the questions that it aims to answer.
RFC8483 Yeti DNS Testbed
RFC8483 describes the motivation for the Yeti project and the Yeti testbed infrastructure, and reports the technical and operational experiences of some users of the Yeti testbed.
How to become a Yeti operator
This document explains how new Yeti root servers are added.
How Yeti DMs work: Yeti-DM-Setup
How Yeti DMs work: Yeti-DM-Sync
These documents cover the setup of the Yeti Distribution Masters (DM), the components of Yeti that convert the IANA root zone into the Yeti root zone and then distribute it to the Yeti root servers.
Data sharing document
This describes the rules for protecting privacy in the Yeti project.
Yeti Health monitoring
The monitoring framework of Yeti is described here.
DSC/Data server backup plan
The server that collects Yeti data is operated redundantly, as described in this document.
2. Yeti operational findings
During the setup and operation of the Yeti project, a number of issues were discovered and documented that are unrelated to specific experiments but were nevertheless interesting.
Notably, every name server implementation used in the project received fixes or changes as a result of this work, including BIND 9, NSD, Knot, PowerDNS, and Microsoft DNS.
IPv6 Fragmentation Issues
IPv6 fragmentation is a concern for DNS because of large responses. The Yeti testbed produces responses larger than 1900 bytes, so it is affected.
Through tests and discussion on the Yeti mailing list, we have identified two issues regarding IPv6 fragmentation, affecting not only the DNS root but DNS in general.
One issue is that the stateless model of UDP-based applications like DNS makes it difficult to use ICMP/ICMPv6 signaling. More information: APNIC article on IPv6 fragmentation.
Another issue regarding IPv6 fragmentation is the interaction between the TCP MSS and the IPV6_USE_MIN_MTU socket option: a TCP segment may be fragmented into two IP packets, one of which may be dropped along the path. Please see TCP and MTU in IPv6, presented by Akira Kato at the 2016 Yeti workshop.
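As an illustration of the sender-side alternative to Path MTU Discovery, a UDP server can ask the kernel to send at the IPv6 minimum MTU. The following is a minimal sketch in Go, assuming a BSD-derived system where the RFC 3542 IPV6_USE_MIN_MTU option is exposed by golang.org/x/sys/unix (Linux does not provide this option); the port number is arbitrary.

```go
package main

import (
	"log"
	"net"

	"golang.org/x/sys/unix"
)

func main() {
	// A UDP/IPv6 listener (port 5353 so no privileges are needed).
	conn, err := net.ListenUDP("udp6", &net.UDPAddr{Port: 5353})
	if err != nil {
		log.Fatal(err)
	}
	raw, err := conn.SyscallConn()
	if err != nil {
		log.Fatal(err)
	}
	// RFC 3542 IPV6_USE_MIN_MTU: send on this socket using the IPv6
	// minimum MTU (1280 bytes), fragmenting large responses at the
	// sender instead of relying on Path MTU Discovery, whose ICMPv6
	// signals a stateless UDP server cannot easily act on.
	err = raw.Control(func(fd uintptr) {
		if e := unix.SetsockoptInt(int(fd), unix.IPPROTO_IPV6,
			unix.IPV6_USE_MIN_MTU, 1); e != nil {
			log.Printf("setsockopt IPV6_USE_MIN_MTU: %v", e)
		}
	})
	if err != nil {
		log.Fatal(err)
	}
}
```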
An experiment on the IPv6 fragmentation issue in the Yeti testbed:
Scoring the Yeti DNS Root Server System
Root Server Glue and BIND 9
It was discovered that BIND 9 does not return glue for the root zone unless it is also configured as authoritative for the zones containing the name servers themselves. This works for the IANA root, since the root servers are authoritative for the root zone and the ROOT-SERVERS.NET domain. For Yeti this is not the case, since there is no single Yeti zone: all of the Yeti name servers are managed under independent delegations. (A small query sketch after the links below shows how to observe this behavior.)
This issue was discussed on the Yeti discuss list:
http://lists.yeti-dns.org/pipermail/discuss/2015-May/000013.html
The BIND 9 team rejected the idea of changing the BIND 9 behavior:
https://lists.isc.org/pipermail/bind-workers/2015-May/003317.html
However the Yeti project produced a patch for BIND 9 which some
operators now use:
http://lists.yeti-dns.org/pipermail/discuss/2015-June/000089.html
The discussion of the root naming issue is presented in the Yeti experience I-D:
https://tools.ietf.org/html/draft-song-yeti-testbed-experience-03#section-4.1
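A quick way to observe the glue behavior is to send a priming-style query and inspect the additional section. This is a minimal sketch using the github.com/miekg/dns library; the server address is a placeholder, so substitute a real Yeti root server from the hints file.

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion(".", dns.TypeNS)
	m.SetEdns0(4096, true)

	// Placeholder address; substitute a Yeti root server from the hints.
	r, _, err := new(dns.Client).Exchange(m, "[2001:db8::53]:53")
	if err != nil {
		log.Fatal(err)
	}
	// With the unpatched BIND 9 behavior described above, the additional
	// section carries no AAAA glue for the listed name servers. (Note
	// that the EDNS OPT pseudo-record also appears in the additional
	// section.)
	fmt.Printf("NS in answer: %d, records in additional: %d\n",
		len(r.Answer), len(r.Extra))
}
```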
Delay observed in Yeti root zone updates
It was observed that one server on the Yeti testbed, running Bundy 1.2.0 on FreeBSD 10.2-RELEASE, had a bug in SOA updates, with delays of more than 10 hours. To better understand the issue, a monitoring test was run on the zone-update behavior and DM selection of each Yeti root server. The finding was that more than half of the Yeti root servers suffered delays of more than 20 minutes on the testbed. One possible reason is that a server fails to pull the zone from one DM and turns to another DM, which introduces a delay due to IPv6 fragmentation; more investigation is needed to pin down the problem.
More information on this issue: http://yeti-dns.org/yeti/blog/2017/03/26/Monitoring-on-Yeti-Root-Zone-Update.html
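The monitoring essentially compares the SOA serial seen at each Yeti root server. A minimal sketch of that check in Go, using github.com/miekg/dns (the server addresses are placeholders; in practice one iterates over all Yeti root servers from the hints file):

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

// serialOf asks one server for the root SOA and returns the zone serial,
// the value compared across servers to measure propagation delay.
func serialOf(server string) (uint32, error) {
	m := new(dns.Msg)
	m.SetQuestion(".", dns.TypeSOA)
	r, _, err := new(dns.Client).Exchange(m, server)
	if err != nil {
		return 0, err
	}
	for _, rr := range r.Answer {
		if soa, ok := rr.(*dns.SOA); ok {
			return soa.Serial, nil
		}
	}
	return 0, fmt.Errorf("no SOA in answer from %s", server)
}

func main() {
	// Placeholder addresses for two Yeti root servers.
	for _, s := range []string{"[2001:db8::1]:53", "[2001:db8::2]:53"} {
		serial, err := serialOf(s)
		if err != nil {
			fmt.Println(s, "error:", err)
			continue
		}
		fmt.Println(s, "serial:", serial)
	}
}
```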
dnscap Losing Packets
One of the Yeti participants noticed that
dnscap, a tool written to
capture DNS packets on the wire, was dropping packets:
http://lists.yeti-dns.org/pipermail/discuss/2015-June/000046.html
Workarounds were found for Yeti, although the dnscap developers
continued to research and eventually discovered a fix included in DNSCAP release 1.1.0:
https://lists.dns-oarc.net/pipermail/dnscap-users/2016-October/000014.html
- "Use helper library `pcap-thread` when capturing to solve missing packets during very low traffic"
Later, another packet-loss bug was found in dnscap v1.3; it was fixed in a later release of dnscap.
https://github.com/DNS-OARC/dnscap/issues/580
libpcap Corrupting pcap output
One of our systems had a full disk and ended up with corrupted pcap files from dnscap. We tracked that down to an issue in libpcap, where the underlying writes were not being properly checked. A fix was made and submitted to the upstream library. While not a perfect solution, it is the best that can be done with the underlying I/O API as well as the published API within libpcap:
https://github.com/the-tcpdump-group/libpcap/pull/494
RFC 5011 Hold-Down Timer in BIND 9 and Unbound
The first KSK roll on the Yeti project was not carefully planned, but rather handled like a KSK roll for any zone. Because of this, we encountered problems with DNS resolvers configured to use RFC 5011. RFC 5011 automatically updates the trust anchors for a zone, but requires that a new KSK be in place for 30 days before it is trusted. What ended up happening was that BIND 9 continued to function, because it does not use the 30-day hold-down timer, while Unbound stopped working, because it does (per the recommendation in the RFC).
http://lists.yeti-dns.org/pipermail/discuss/2015-July/000127.html
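The hold-down logic at the center of this incident is simple. Roughly, a resolver following RFC 5011 applies a rule like the following sketch; this is a simplification, since the real protocol also tracks key revocation and a separate remove hold-down time.

```go
package main

import (
	"fmt"
	"time"
)

// addHoldDown is the RFC 5011 "add hold-down" time: a newly observed
// key may only become a trust anchor after being continuously present
// for 30 days.
const addHoldDown = 30 * 24 * time.Hour

// trustable reports whether a key first seen at firstSeen may be
// trusted at time now. Unbound enforces this rule; BIND 9 at the time
// did not, which is why the two resolvers behaved differently.
func trustable(firstSeen, now time.Time) bool {
	return now.Sub(firstSeen) >= addHoldDown
}

func main() {
	firstSeen := time.Now().Add(-10 * 24 * time.Hour) // key seen 10 days ago
	fmt.Println("trustable:", trustable(firstSeen, time.Now())) // false
}
```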
Corner Case: Knot Name Compression & Go dns Library
A Yeti participant discovered problems in both a popular Go DNS library and a popular authoritative DNS server, Knot, when using Knot to serve the Yeti root zone and querying the server with a Go program:
https://github.com/miekg/dns/issues/234
The problem was fixed by the developers of these open source projects. There were no reports of this affecting end-users.
NSD Glue Truncation
NSD is configured by default to send minimal responses and must be re-compiled in order to send complete glue:
http://lists.yeti-dns.org/pipermail/discuss/2016-May/000522.html
3. Yeti experiment and findings
The Yeti root testbed is designed for experiments, and findings are expected. All experiments and findings are summarized in section 4 of draft-song-yeti-testbed-experience. Some important experiments related to root key management are introduced here with detailed reports:
Multiple ZSK Experiment
The Multi-ZSK (MZSK) experiment was designed to test operating the Yeti root using more than a single ZSK. The goal was to have each distribution master (DM) use a separate ZSK, signed by a single KSK. This allows each DM to operate independently, each maintaining its own secret key material.
A description of the MZSK experiment and the results can be found in Yeti Project Report: Multi-ZSK Experiment (MZSK).
Of particular interest, the experiment triggered a problem with IXFR for some software, the results of which are documented in An IXFR Fallback to AXFR Case.
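One visible effect of MZSK is that the root DNSKEY RRset contains several ZSKs at once. A resolver-side check can count them by the DNSKEY flags field, as in this sketch using github.com/miekg/dns (the server address is a placeholder):

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion(".", dns.TypeDNSKEY)
	m.SetEdns0(4096, true)

	// Placeholder Yeti root server address.
	r, _, err := new(dns.Client).Exchange(m, "[2001:db8::53]:53")
	if err != nil {
		log.Fatal(err)
	}
	var zsk, ksk int
	for _, rr := range r.Answer {
		if k, ok := rr.(*dns.DNSKEY); ok {
			switch k.Flags {
			case 256: // ZONE flag only: a ZSK
				zsk++
			case 257: // ZONE + SEP flags: a KSK
				ksk++
			}
		}
	}
	// Under MZSK, zsk > 1: one ZSK per distribution master.
	fmt.Printf("DNSKEY RRset: %d ZSK(s), %d KSK(s)\n", zsk, ksk)
}
```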
Big ZSK Experiment
The Big ZSK experiment was designed to test operating the Yeti root with a 2048-bit ZSK. This was based on Verisign's announcement that it was going to increase the ZSK size of the IANA root to 2048 bits.
A description of the BGZSK experiment and the results can be found in the Yeti DNS Project GitHub repository.
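The practical impact of a larger ZSK is size: the key and every signature it makes grow with the modulus, pushing responses further toward fragmentation. A small sketch using github.com/miekg/dns to generate both key sizes locally and compare the DNSKEY public key data:

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	for _, bits := range []int{1024, 2048} {
		key := &dns.DNSKEY{
			Hdr: dns.RR_Header{Name: ".", Rrtype: dns.TypeDNSKEY,
				Class: dns.ClassINET, Ttl: 172800},
			Flags:     256, // a ZSK
			Protocol:  3,
			Algorithm: dns.RSASHA256,
		}
		if _, err := key.Generate(bits); err != nil {
			log.Fatal(err)
		}
		// The base64 public key grows with the modulus, and every
		// RRSIG made with the key grows correspondingly.
		fmt.Printf("%d-bit ZSK: public key is %d base64 characters\n",
			bits, len(key.PublicKey))
	}
}
```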
KSK Roll Experiment
Since ICANN planned to start the KSK rollover on September 19, 2017, the Yeti KSK roll experiment was designed to perform a KSK roll for the Yeti root and observe the effects. One major goal was to deliver useful feedback before the IANA KSK roll. A significant result was that DNSSEC failures occur if a new view is added to a BIND server after the KSK roll has started.
For more information:
The Yeti KSK rollover experiment plan is documented in:
https://github.com/BII-Lab/Yeti-Project/blob/master/doc/Experiment-KROLL.md
A detailed report is published in:
https://github.com/BII-Lab/Yeti-Project/blob/master/doc/Report-KROLL.md
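During a roll, a resolver operator can check whether the new KSK is visible by computing DS digests of the SEP-flagged keys and comparing them with the configured trust anchor. A sketch with github.com/miekg/dns (the server address is a placeholder):

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion(".", dns.TypeDNSKEY)
	m.SetEdns0(4096, true)

	// Placeholder Yeti root server address.
	r, _, err := new(dns.Client).Exchange(m, "[2001:db8::53]:53")
	if err != nil {
		log.Fatal(err)
	}
	for _, rr := range r.Answer {
		// The SEP bit (value 1) marks key-signing keys.
		if k, ok := rr.(*dns.DNSKEY); ok && k.Flags&1 == 1 {
			fmt.Printf("KSK tag %d, DS digest %s\n",
				k.KeyTag(), k.ToDS(dns.SHA256).Digest)
		}
	}
}
```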
4. Yeti Data Analysis
So far we have only preliminary analysis of the Yeti traffic collected from the Yeti servers, briefly introduced in the presentation Davey Song gave at the 2016 Seoul workshop. We expect more information to be extracted from the traffic in 2017, as more resources are devoted to this work.
5. Software Tools
BII has written a number of programs during the course of running the Yeti project.
PcapParser: Easy Handling of Large DNS Captured Packets
In Yeti, as with many DNS operations, we use the pcap format to store captured DNS traffic. Because Yeti is concerned with large packets, we often have DNS messages that are fragmented or sent over TCP. These are hard to analyze, so we wrote a tool to convert them into defragmented, UDP-only pcap files.
http://dnsv6lab.net/2016/09/06/DNS-pcap-fragments/
As part of this work we are pushing the IPv6 defragmentation code back into the underlying gopacket library.
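As a rough illustration of what PcapParser deals with, this sketch uses gopacket to count IPv6 fragments in a capture; the file name is a placeholder, and PcapParser itself goes further and reassembles the fragments:

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/gopacket"
	"github.com/google/gopacket/layers"
	"github.com/google/gopacket/pcap"
)

func main() {
	// Placeholder capture file of Yeti DNS traffic.
	handle, err := pcap.OpenOffline("yeti.pcap")
	if err != nil {
		log.Fatal(err)
	}
	defer handle.Close()

	frags := 0
	src := gopacket.NewPacketSource(handle, handle.LinkType())
	for pkt := range src.Packets() {
		// A fragment extension header marks the packets that
		// must be reassembled before the DNS message can be decoded.
		if pkt.Layer(layers.LayerTypeIPv6Fragment) != nil {
			frags++
		}
	}
	fmt.Println("fragmented IPv6 packets:", frags)
}
```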
ymmv: Easy and Safe Comparison of IANA and Yeti Traffic
Yeti needs as much traffic as possible, ideally real user query data. Because it is often unacceptable to use Yeti in production environments, the ymmv program was created: it sends to Yeti servers the same queries a resolver sends to the IANA root servers, compares the replies, and notes any differences.
http://dnsv6lab.net/2016/10/13/ymmv/
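The core idea fits in a few lines: send the same query to an IANA root server and a Yeti root server and diff the replies. A simplified sketch with github.com/miekg/dns; the Yeti address is a placeholder, a.root-servers.net's IPv6 address stands in for the IANA side, and ymmv itself mirrors live resolver traffic rather than a single query:

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

// exchange sends the query to one server and returns the reply.
func exchange(m *dns.Msg, server string) *dns.Msg {
	r, _, err := new(dns.Client).Exchange(m, server)
	if err != nil {
		log.Fatal(err)
	}
	return r
}

func main() {
	m := new(dns.Msg)
	m.SetQuestion("example.com.", dns.TypeA)

	// a.root-servers.net for the IANA side; the Yeti address is a
	// placeholder.
	iana := exchange(m, "[2001:503:ba3e::2:30]:53")
	yeti := exchange(m.Copy(), "[2001:db8::53]:53")

	if iana.Rcode != yeti.Rcode || len(iana.Ns) != len(yeti.Ns) {
		fmt.Println("difference: rcode or referral differs")
	} else {
		fmt.Println("replies agree on rcode and referral size")
	}
}
```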
6. Other Resources
2016 Seoul Yeti DNS Workshop materials (slides, pictures, audio)
Yeti experiment and findings at 2016 DNS-OARC Workshop
Meeting notes of the 2015 Yeti DNS workshop in Yokohama
“One name space, Many circles” by Paul Vixie, 2015
“Potential Root services futures” by David Conrad, 2015
A summary for the Workshop on DNS Future Root Service Architecture, 2014
Video: Day 1
Video: Day 2