George Michaelson 0:00 You're listening to PING, a podcast by APNIC, discussing all things related to measuring the Internet. I'm your host, George Michaelson. This time I'm talking to Shumon Huque from Salesforce about a mechanistic approach to keeping future code points in the DNS workable. A common problem in all protocols with future extensibility is the definition of flag or option fields in the packet structure. These might be two or more bits wide, maybe as much as 32 or 64 bits, but for the time being there are only one or two values expected, and perhaps special reserved values like all zeros or all ones. What often happens is that people coding intermediate systems don't realize the protocol has future extensibility, and code their firewall and filter lists to recognize the patterns they see passing on the wire. They reject ones that aren't expected. The consequence is that the future becomes more cloudy, with some novel systems unable to be seen, especially by users behind the commodity firewalls that operate in home routers, which are upgraded very infrequently. This leads naturally to the concept of greasing: deliberately sending packets with variant values in these fields, so that things aren't seen as static in the current definition, and the possibility of future values passing the filters is kept alive. Shumon and his collaborator Mark Andrews from ISC have been exploring this space with a draft in the DNSOP Working Group. Shumon, welcome to PING. Shumon 2:30 Yeah, thank you. Pleasure to be with you, George. George Michaelson 2:33 So can you just explain to people a little bit about who you are and what you do? Shumon 2:39 Sure. I'm a software engineer based just outside of Washington, DC, where I'm speaking to you today from. I currently work for Salesforce, and these days mostly specialize in DNS, the domain name system, and some surrounding infrastructure areas like networking, security, and cloud platforms. 
I'm fairly involved in some external industry fora, like the IETF, DNS-OARC, and also sometimes ICANN. And prior to Salesforce, I worked for a number of years at Verisign, the well-known DNS company. George Michaelson 3:11 Oh, fairly well known, yes. We've had Duane on PING several times. I think we all know Verisign. Shumon 3:17 Absolutely, yes. And I know Duane very well. And before Verisign, I worked for a number of years as a technologist at the University of Pennsylvania. George Michaelson 3:24 I spoke to you at the IETF and asked why Salesforce had this interest in DNS. You pointed out that they actually do quite a lot of virtual hosting and provision of service in the cloud for companies, and there is quite a strong DNS tie-in to that. Shumon 3:40 Absolutely, yeah. I didn't realize it until I joined the company, but Salesforce makes tremendous use of DNS for all sorts of things, including, you know, tenant-specific DNS records for instances of our cloud platform that we host for our customers, and so on and so forth. So we have a very large, very diverse, very complex DNS infrastructure that requires some expertise to manage. Some of the components are outsourced to commercial DNS providers, and there are in-house components as well. George Michaelson 4:09 Today I'd like to talk to you about something that has got this wonderful name, DNS grease. Now this conjures up images that I am not sure we want to explore in too much depth, but can you give people an idea? What is DNS grease about? Shumon 4:29 I can. So I'll explain what the term means a little bit later. But essentially what it is, is a method that exercises the regular use of unallocated protocol extension points, sometimes called code points, to prevent ossification around their currently observed usage patterns, right? 
George Michaelson 4:52 So when people are constructing new protocols, they have to design fields purposefully inside the bit structure that's going to go out on the wire, that's going to be the message. And you tend to have this pattern where you say, well, we'll start with the assumption that these three bits are enough to encode eight different values. And we know right now that we'll use zero for the normal case, and we'll use all ones for "I have to extend into another range of things". What are we going to do with the other six? And at the time you're developing the protocol, you might only have two things that you want to signal there. So you've set all zeros, all ones, and then maybe the one and the two values, but the others haven't yet been set, have they? Shumon 5:33 So that's basically the idea: when you're designing the protocol, you haven't anticipated all the ways that these protocol fields could be used, and will be used, in the future. And you want to make sure that different systems in the network get used to the idea that these protocol parameters may have new values in the future, and that some new protocol elements may start to be used. [George: Yeah]. And we don't want current systems to block their usage by hard-coding notions of what was already out there when the original protocol was devised, right? George Michaelson 6:06 Put like that, it's kind of, well, this is weird, because you've got a spec document and it says in a textual paragraph, "future use may be made of the bits in this field". Like, how could somebody not write a system that was open-minded enough to understand that potential? But the thing is, it's not only people reading protocol specs who implement systems, is it? Shumon 6:25 Yeah, that's very true. So I'm going to get to that point in a little bit. But before I forget, I wanted to talk about the grease metaphor you asked. 
You asked about it, and it has a very weird... George Michaelson 6:34 It is a very strange name. Shumon 6:37 So the idea is, it's a metaphor, right? My understanding is it comes from greasing joints in mechanical systems to keep the parts running smoothly, so they don't rust up and stop moving, right? So in the same way, we want to grease the protocol parameters so that middleboxes and other things don't make assumptions, and the protocol can continue to evolve, right? [George: Yeah]. So to your question about why anyone would want to block things, right? There are a number of reasons. Historically, the evolution of the DNS protocol has encountered some significant barriers. And it's not just DNS; other protocols have encountered similar barriers, and there are various reasons for this, right? To give a few: inability to keep systems updated, inertia. There are faulty implementations of DNS software that have become entrenched in the ecosystem, [George: Yeah] and there are classes of devices, typically called, I'm sure you're well aware of these, middleboxes, that have actively blocked the deployment of new protocol features, often intentionally, right? The question is, why would they do that? It's because things like middleboxes, firewalls, and security appliances often operate under what in my view is an unduly restrictive philosophy. George Michaelson 7:56 If we think about prior history here, when people started implementing basic home routers and firewalls, there was an awful lot of ICMP bombing going on in the Internet at large. And somehow, inside that community of people trying to construct defensive boundaries for the home, they decided, without really a wide-scale conversation, that ICMP was bad, and they just arbitrarily said, well, we're not going to permit ICMP packet flows because we don't see a use for this. 
And they completely ignored all the purposeful use of ICMP to do things like send source quench, or to do other behavioral monitoring of the system. They just came up with the rule "ICMP bad", and they blocked it. So if you think about a middlebox that's doing an analysis of protocol flow, and someone says you must permit port 53, they're not just going to say, oh, we'll allow any UDP on port 53. They're going to go into the packet and look for what they think they understand are the types of pattern they should see. And if they've never seen flags in a certain field having a different value, it's kind of an aha moment: "Aha, that must be a bad actor". And so they have this mentality that the right thing to do is to be as prescriptive as possible about what they expect to see. Shumon 9:12 The ICMP blockage is a prime example of harm caused by what was originally probably a good goal, right? You want systems to be able to defend themselves, but some of these actors go way beyond what they needed to do and cause all sorts of collateral damage. Essentially, what they do is allow the set of known patterns of traffic and block everything else. And the rules about what to allow are sometimes hard-coded, or, as they say, "ossified", into equipment, hardware, software, firmware, etc. And that is no good for the evolution and extensibility of any protocol, including the DNS. George Michaelson 9:53 I had no idea that it also reflected on older code, but thinking about it, that makes perfect sense, because if you start to include system extensions that include types of traffic that weren't previously encoded, you've got to have two patterns in your software: either you ignore those packets, or, if you're in an intermediary role, you pass them on in the hope that that's the right thing to do. And ignore is very definitely one of the defensive strategies. 
So I can kind of see a body of older code here just looking at things, saying "Ah, I don't know what that is", and chucking it away, when it should have just said, I don't know what this is, but I'm not going to assume it's harmful traffic. It should have been written to pass it on. Shumon 10:32 Yeah, I agree with that. So that's certainly the other operating philosophy: basically, defenses are at the edges of the network. I'm sure you know this very well; it's the original architectural model of the Internet, the end-to-end architectural model, right? Which was, George Michaelson 10:50 Dumb core, smart edge. Shumon 10:52 Dumb core, smart edge, right? And the edges, the protocols that they ran, knew how to defend themselves. In the early days, that principle was kind of universally agreed upon and seemed unchallengeable, right? But gradually over time, as the Internet grew, the principle, I think, succumbed to the realities of how infrastructure is actually deployed in the network, market dynamics and market pressures, and so on. George Michaelson 11:17 Turns out we needed more smarts in the middle than we thought. But sometimes, Shumon 11:21 I know, yeah, so that's, George Michaelson 11:22 sometimes smarts isn't exactly, Shumon 11:24 there's a whole market in the middle of the network, right? And the Internet today has way too many competing stakeholders with their own agendas, making it, you know, really impossible for a single coherent set of architectural principles to actually exist. George Michaelson 11:39 Let's say you've not been oiling the engine, and out of the blue a bunch of people wake up and decide it's time to start doing some greasing, get some grease on the joints in this engine, make it a bit more flexible for future use. How did you go about doing this? 
Shumon 11:53 So Mark Andrews of ISC and myself wrote an Internet-Draft, a proposed specification document, that describes how to apply greasing techniques to the DNS. And first of all, what you want to understand is the mechanism by which the DNS can evolve, right? So there is a mechanism that exists today called EDNS, the extension mechanism for DNS. It's quite old; it was originally devised in 1999, so it's almost 25 years old. George Michaelson 12:27 It's even numbered EDNS0, because they realized when they wrote this spec there might one day need to be a future extension mechanism that was different. That would be the mythical EDNS1, [Shumon: that's true], but that's a whole different, Shumon 12:41 that was an interesting finding in our greasing experiments. But basically, EDNS is a framework for incrementally extending and evolving the capabilities of the DNS via, you know, miscellaneous things, flags, options, other parameters, and for advertising and negotiating those capabilities, right? So what we tried to do is understand, okay, what are the protocol elements in the original DNS protocol and in the EDNS framework, what sort of bit patterns are in use today, and what the rest of the unallocated, unused bit patterns are, and then propose something. So there's a whole bunch of them. In the DNS header there are seven flags, and only one of them is unused, the mythical Z flag. There are the resource record types, many of which are unused. There are opcodes. George Michaelson 13:32 That's quite a large bit field, isn't it? And we're effectively using a registry at IANA to record the types of record that we could have, but we have by no means consumed that space. 
Shumon 13:42 Absolutely not. There are 16 bits, so there are 65,536 possible values. George Michaelson 13:45 But there are other header fields as well, which flag capabilities and behavior across this system, and not all of them are used. Shumon 13:52 So for example, there's the EDNS header, which has 16 flag bits; only one of them, actually two of them, have been used. A protocol that I worked on allocated the second one. So there are still 14 left. There are EDNS options; that option space is also vast, it's 16 bits. And there's the EDNS version, which is eight bits, of which, as you mentioned, only one value has been defined so far, right? So the idea, George, the theory of operation, is that DNS resolver implementations are proposed to periodically advertise unallocated code points, essentially at random, in the requests that they send out, right? George Michaelson 14:30 So this is a request-response protocol, and you're putting the onus on the resolver, the component of the system that, on behalf of its users' software, asks questions out to servers. You're saying that what they should do, at a small random background rate, is take some of these optional unset bits and set them. Shumon 14:50 And then what should happen is that correctly implemented DNS servers should ignore these unused or unallocated values, and they should interoperate fine. George Michaelson 15:04 So the pattern here is not "drop the query if you see these bits set". The pattern in a well-written server is: hmm, that's weird, they're setting a bit that I don't understand or need, fine, but they have a well-formed question, I'll answer their question. It's like a complete end-to-end test. Firstly, does the question get all the way through? And secondly, does the server answer it? Shumon 15:26 Yeah, absolutely. 
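The client-side mechanism Shumon describes, a resolver setting an unallocated EDNS flag bit and attaching an unallocated EDNS option at random, can be sketched roughly as follows. This is an illustrative sketch only, not the draft's actual algorithm; the ranges treated as unallocated here are assumptions based on the IANA registries at the time of writing.

```python
import random
import struct

def build_greased_query(qname: str) -> bytes:
    """Build a DNS A query with EDNS(0), greasing one currently
    unallocated EDNS flag bit and one unallocated EDNS option code."""
    header = struct.pack("!HHHHHH",
                         random.getrandbits(16),  # query ID
                         0x0100,                  # flags: RD set
                         1, 0, 0, 1)              # QDCOUNT=1, ARCOUNT=1 (OPT)
    # QNAME in wire format, then QTYPE=A(1), QCLASS=IN(1)
    qname_wire = b"".join(
        bytes([len(label)]) + label.encode()
        for label in qname.rstrip(".").split(".")
    ) + b"\x00"
    question = qname_wire + struct.pack("!HH", 1, 1)
    # EDNS flags: DO is the top bit; grease one of the 14 low-order
    # flag bits, which are unallocated at the time of writing.
    grease_flag = 1 << random.randrange(14)
    # Option codes below ~20 are assigned and 65001-65535 reserved,
    # so draw a "grease" code from the middle of the space (assumption).
    grease_opt = random.randrange(20000, 65000)
    rdata = struct.pack("!HH", grease_opt, 0)  # empty option data
    opt_rr = (b"\x00"                      # root owner name
              + struct.pack("!HHIH",
                            41,            # TYPE = OPT
                            4096,          # CLASS = UDP payload size
                            grease_flag,   # TTL = ext-rcode/version/flags
                            len(rdata))
              + rdata)
    return header + question + opt_rr
```

A correctly implemented server should answer such a query normally, ignoring the unknown flag and option, which is exactly the end-to-end property greasing exercises.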
To some extent this depends on the definition of the protocol, but the EDNS framework says, right, you ignore things that you don't understand, and you interoperate. George Michaelson 15:37 This was a really good choice of something to practice greasing on. And did you suggest that it try all of the available bit fields, or did you narrow it down to a subset of things to test? Shumon 15:48 Yeah, that is a very good question. So we're still in the midst of some experiments, and there are, I think, some differing opinions about what portion of the unallocated space to use for greasing operations, right? The simple strategy would be to choose anything at random in the entire range. But then there's a risk: you need to consider what happens when one of those unallocated code points gets allocated for a specific function in the future, and whether it will cause any breakage, right? So one strategy for dealing with that is to use random code points, but have end-of-test dates defined for a lot of them, right? [George: Yeah]. The other possibility, which other protocol communities that have experimented with greasing have done, TLS is the prototypical example I like to cite, is that from their unallocated space they reserve a range, or multiple ranges, for greasing operations. George Michaelson 16:48 Oh, wow. So you actually formally mark out test spaces to continue the process, with no risk of it colliding with future use. Shumon 16:57 Correct, yeah, that's safer. On the other hand, it also has a potential problem, in that you might train middleboxes to recognize that space and ossify around it, right? George Michaelson 17:07 Maybe we're going to revisit that and decide we'll use those bits and ignore all the other ones, because they're the only ones we've tested can work. So you're in the midst of running this experiment. 
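The TLS approach Shumon mentions reserves fixed ranges for greasing: RFC 8701 defines sixteen 16-bit GREASE values of the form 0xNANA (0x0A0A, 0x1A1A, through 0xFAFA), used for cipher suites, extensions, and other code points. A minimal sketch of generating and recognizing those reserved values:

```python
# The sixteen 16-bit GREASE values reserved by RFC 8701: each has
# identical high and low bytes, and the low nibble of each byte is 0xA.
GREASE_16BIT = [(0x10 * i + 0x0A) * 0x0101 for i in range(16)]

def is_grease(code_point: int) -> bool:
    """True if a 16-bit TLS code point falls in a reserved GREASE slot."""
    hi, lo = code_point >> 8, code_point & 0xFF
    return hi == lo and (lo & 0x0F) == 0x0A
```

Because these slots are reserved up front, a newly allocated real code point can never collide with a greased one, which is the safety property the DNS draft has to weigh against the risk of middleboxes learning the reserved range.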
Has this been added to everyday DNS code that is run at scale across the Internet? Or is this something that people add to specific local instances and then measure what goes on? How does this work in practice? Shumon 17:29 Yeah, so that's a process in development, gradually. What we're trying to do with the specification is encourage DNS resolver implementers, and sometimes other DNS actors, to implement these greasing operations. Most typical DNS software today doesn't do greasing. You could certainly do greasing outside the normal operation of a DNS server. ISC actually has a tool called ednscomp that you can configure to spray all sorts of random stuff at a DNS server. George Michaelson 18:00 Because this is initiated on the client side, you can write software to ask questions. You could replay data streams from things like the DNS-OARC DITL captures and see how many of them successfully complete, and that might give you an indication. So are there people doing this kind of active experimentation? Shumon 18:19 There are. ISC does it regularly with the ednscomp tool. What we would like to do is have this greasing function more tightly integrated with DNS resolvers, and maybe DNS authoritative servers too, and that is a process that is in progress. ISC actually has a branch of the BIND server that does greasing, and having it integrated into a DNS resolver is actually better in many senses, because you are interrogating the state of the network live, in real time, right? So you discover problems that are actually out there as the queries are issued. George Michaelson 18:53 It's the shift from experimenting with new software to active experimentation in the everyday transactions of the DNS. It gives you a much higher confidence signal that the systems used in real life will behave correctly, doesn't it? Shumon 19:08 And that's the reason. George Michaelson 19:09 So you've hinted that this actually isn't exclusive to the DNS. 
You've mentioned TLS, for instance, where this has also been picked up as an idea. Why is that? What was the trigger that brought this idea to the surface in TLS? Shumon 19:24 Yeah, so I'm not plugged into the TLS community fully enough to know all the detailed rationale. They were the first protocol community that wrote up greasing for their own protocol; I think it was RFC 8701. And I think this was related to the attempt to revise the TLS protocol and produce TLS version 1.3. [George: Right]. And what they found is, when they attempted to deploy it, certain features didn't work, right? Friction and intolerance on the part of middleboxes, right? George Michaelson 19:59 That's kind of interesting, because DNS is one of the oldest protocols at scale in use in the Internet. The actual first deployment was in the early 80s, and we have been carrying the legacy of a UDP-only system into a world of UDP, TCP, added cryptography, complete transport independence, the extension mechanism. TLS, well, it's not exactly a baby protocol, but it is young in some respects. It's only been around for a decade and a half; it's not actually an incredibly long-lived protocol yet. Shumon 20:29 But on the other hand, it's important enough that middleboxes and security appliances [George: oh yeah] try very hard to infer characteristics of TLS sessions and figure out what they want to block, right? So one of the design goals of TLS 1.3 was to make it more resistant to these kinds of middlebox interference, right? So, [George: yeah], what they had to do, if I remember from the TLS greasing RFC, is grease things like TLS cipher suites, TLS extensions, and ALPN (Application-Layer Protocol Negotiation) identifiers, right? George Michaelson 21:02 Those are the things which are actually most likely to change across the life of a given revision of TLS. 
Protocols evolve: new algorithms come into play as a function of the deprecation of older algorithms, as NIST and other people advise they're no longer sufficiently secure. So you can guarantee, for a long-lived protocol, there is going to be a future algorithm, right? Shumon 21:22 Yeah, so that's the idea. And speaking of other protocols, not just TLS and other application protocols, I think this kind of protocol ossification by middleboxes affects many layers of the protocol stack. For example, go one layer down, to layer-four transports, right? I think I'm not wrong in stating that there have been attempts to design new transport protocols that haven't really taken off, like SCTP and DCCP. George Michaelson 21:51 Not successful. Shumon 21:52 And I think the reason they're not successful is because of middleboxes and ossification in the interior of the, George Michaelson 21:59 network, compared to QUIC, which deliberately said, we'll go up and be a session layer. We'll use UDP; it's just UDP packets. Shumon 22:08 You're absolutely right. I think QUIC took some lessons [George: right] from the experiences of attempts to design new transport protocols, and they decided, "screw it, we're gonna just put everything over UDP and solve the problem that way". George Michaelson 22:22 Right. So you mentioned DANE also has a component of greasing in it as well. Can you explain that? Shumon 22:29 I guess I could quickly explain what DANE is, in case your audience doesn't know. It's basically a security protocol, a framework for associating domain names with cryptographic keying material like X.509 certificates and even raw public keys. And in theory it could provide, in my view at least, a better alternative to the ubiquitous Web PKI authentication system, right? So you don't have to trust 300 CAs with unlimited scope. 
Everything is namespace-constrained, and you can issue your own certificates. George Michaelson 23:01 It moves the control point for deciding which certificate authorities you trust, and which naming constructs you expect to see, out of this rather opaque thing where your browser vendor ships you the trust anchors, into a framework that is homed in the DNS. So when you go looking for a DNS name, you're able to use the DNS as a trust point. I'm guessing that means DANE depends on DNSSEC to get verified contents, and that helps you bootstrap the certificates that are used to build the TLS association between you and the website. But DANE is quite a complex protocol, isn't it? Shumon 23:36 It critically depends on DNSSEC for security and authenticity, right? The reason there's a relation to greasing here is that what has been discovered is, if a web client or a web browser wants to do DANE authentication of a server, it needs to be able to issue DNSSEC-enabled queries, right? It needs to be able to, for example, set the DNSSEC OK bit and receive DANE responses with signatures, [George: Yep]. And experience has shown, I think Firefox themselves did a measurement a while ago to see how well this works in practice, and it turns out there was a small but non-trivial breakage rate for DNSSEC-enabled queries. It was something like three or four per cent, but that's way too high for the web, right? George Michaelson 24:25 If you're talking about the level of transactions in a global Internet, and you're told that randomly three per cent of them simply won't complete, that's really unacceptable. Shumon 24:35 Yeah, that's a showstopper immediately. So there was an effort, which I was involved in a number of years ago, to solve this problem and figure out how to successfully allow a web client to perform DANE and DNSSEC-enabled authentication of a TLS server. 
And because of the middlebox problem, what we devised was a mechanism to transport the entire chain of DNSSEC records and signatures needed to authenticate the TLS server's DANE records, and transmit them inside the TLS handshake, right? So the benefit there is that middleboxes won't be able to see or intercept any of this stuff, because it's encrypted and it's part, George Michaelson 25:21 because it's inside the protected component of the TLS exchange. Shumon 25:25 And so that's the main reason. There are other auxiliary reasons, such as latency reduction, and, you know, end stations don't need to go out and do all the lookups themselves. George Michaelson 25:33 It sounds like a situation analogous to what some people who operate secure web servers may know. When you request a certificate, you typically get two things back from the CA: you get one, that's your certificate, and you get this other thing called the full chain. And the full chain is the package of all the intermediate elements in the X.509 hierarchy that's signed over you. It's like a freebie. I think the metaphor here is that you're meant to give it when you initiate the web binding, saying, here's all the stuff between the trust anchor you've got and me. Now you don't have to go look for it, and it shortcuts all that looking. You're doing an analogous thing here. You're providing all of that intermediate state as a blob, and then people are able to use that. They still have to independently get the trust; you cannot include the trust anchor, but everything else you can just give to them and say, for the good path, here are all the things you need. Shumon 26:26 Right. So in one shot you get the entire authentication chain. You still need the trust anchor to do the authentication locally, right? 
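As a rough illustration of the DANE mechanics being discussed, the record a client looks up, and the check it performs, can be sketched like this. The sketch covers only one combination from RFC 6698 (selector 0, the full certificate, with matching type 1, SHA-256), and the helper names are invented for this example:

```python
import hashlib

def tlsa_name(host: str, port: int, proto: str = "tcp") -> str:
    """Owner name of the TLSA record for a service, per RFC 6698:
    e.g. _443._tcp.www.example.com for an HTTPS server."""
    return f"_{port}._{proto}.{host.rstrip('.')}"

def tlsa_matches(cert_der: bytes, assoc_data: bytes,
                 selector: int = 0, matching_type: int = 1) -> bool:
    """Check a server certificate (DER bytes) against TLSA association
    data. Only selector 0 (full cert) / matching type 1 (SHA-256) here."""
    if selector != 0 or matching_type != 1:
        raise NotImplementedError("sketch covers selector 0 / type 1 only")
    return hashlib.sha256(cert_der).digest() == assoc_data
```

In the chain-extension design described above, the TLSA record, its RRSIGs, and the supporting DNSKEY/DS chain would all arrive inside the TLS handshake, so the client runs a check like this locally without issuing the DNS queries itself.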
George Michaelson 26:33 So if we imagine a future where more of the protocol substrate of the Internet at large is encrypted, there's potential that there would be less need of greasing, because the middleboxes wouldn't be able to have so much impact, right? Shumon 26:45 Basically, going back to the effects of middleboxes, right? There are all sorts of tussles and battles happening all the time, and one of them is related to encryption, right? As you know, the IETF is increasingly trying to encrypt every single protocol, right? [George: Yeah]. If and when encrypted and authenticated transports are common on the DNS resolution path, then perhaps middleboxes will have less capability to interfere with DNS traffic and operations, right? So I think that's probably true. George Michaelson 27:19 Yeah, I think that's true for DNS, but then the problem simply recurses into what their impact is on the underlying substrate of fully encrypted data flow. Are middleboxes now going to make optimistic or incorrect assumptions about the header states and the packet lengths and the duration of encrypted sessions, such that they interfere with higher protocols? Shumon 27:42 Yeah, that's certainly a danger. They could do that. I mean, the optimistic view is: it's encrypted, I don't know what to do with it, I'll just let it through, right? George Michaelson 27:50 So are you thinking of doing some level of greasing test here, or are you thinking of the applicability of greasing? Shumon 27:55 I was thinking of the applicability of greasing, right? So one argument is that if we encrypt everything, maybe greasing is less important. But I don't think that's the case, because I feel that greasing would still continue to be useful for identifying not necessarily middleboxes, but deficient DNS actors themselves, like proxies, load balancers, even authoritative servers, because often they have incorrectly implemented some part of the DNS protocol. 
And greasing, I think, is still extremely important to identify those kinds of breakages. George Michaelson 28:29 And those actors are also endpoints, where a transport between two actors would be encrypted, but they necessarily decrypt and unpack the payload in order to perform their functional role. And that's a moment where they are looking at the PDUs of DNS, potentially not recognizing things, and dropping the end-to-end exchange that's passing through them. Shumon 28:50 Exactly, you hit the nail on the head, right? The DNS actors, presumably, [George: yeah], are the endpoints of the encryption themselves, right? So they can actually see what's going on in the protocol. George Michaelson 29:01 That's kind of a slightly different meaning of grease in some ways, because the first sense of grease is more about bit-twiddling inside existing protocols, and this is much more at the higher-layer construct level, greasing the wheels of how to perform the protocol exchange. So is there another way of looking at grease? I mean, could we actually consider grease as being open to future extension in the broadest possible sense? Shumon 29:24 I think you can. So the other connection I wanted to make between grease and the TLS chain extension was the continuous operation of greasing, for example checking that certain types of DNS records can be queried and responses received with signatures. This is part of the suite of greasing operations that we envision being performed by DNS actors as part of DNS greasing. So over time, right now, as I mentioned, there's a three to four per cent breakage rate, you can imagine, for example, stub resolvers even doing greasing operations, collecting telemetry, and then at some future date we can tell, okay, the breakage rate is low enough, the DANE-enabled queries will actually work, right? 
So I think there is a very important use case for greasing, to collect telemetry around this problem and figure out when we can solve it down the road. George Michaelson 30:21 And it's not as if we don't know that in DNS a fairly significant change is just around the corner. There's this conversation taking place at the moment about a new structural record form to move some of the back-end administration out of registry functions and into mainline DNS: this idea of the DELEG record. So is grease applicable to DELEG, possibly? Shumon 30:44 Yeah, so funny you mention that. Originally I hadn't thought of it, but it turned out it is applicable, because of some experiments that I conducted to interrogate the viability of DELEG in the early days. I wasn't thinking about greasing at the time, but in retrospect, the experiments that I did, in conjunction with my friend Roy Arends of ICANN, were a form of greasing, only greasing in the other direction, from the server. So what we tried was server-initiated greasing, right? This may seem a little bit more fraught, George, from the perspective of, you know, safety. For example, a resolver can conduct a greasing operation, and if it fails, it can retry without the grease bits. George Michaelson 31:28 But if you give the answer with grease added, how do you know whether the answer was received or not? You can't know. Shumon 31:35 Exactly. If I'm a responder and I send a greased response, I don't necessarily know what the impact on the querier was. Did they accept the greased response and everything was hunky-dory, or did it fail catastrophically and cause an outage to the downstream application? George Michaelson 31:52 Wow, you'd really have to think hard about doing this experiment. This is not an immediately obvious experiment. Shumon 31:58 It isn't, right? 
But I think what you could do is, instead of doing it unilaterally to your population of DNS clients, you can do greasing operations on the server side as part of a targeted test, right? Which is the test that Roy and I did to determine the viability of DELEG records. We had some specially instrumented DNS servers that injected, basically, an unexpected new record type in a DNS referral response, [George: yeah]. Then we were also conducting the measurements on the other side, using a kind of testbed of DNS resolvers and RIPE Atlas measurement nodes, right? So those kinds of experiments, I think, are much safer with server-side greasing. George Michaelson 32:42 So you were constructing a namespace that no other dependency in the public Internet would ever have need to use. And you're using Atlas, which is sitting on around 10,000 probes, to ask questions for domain names that are served by this modified server. And because of the surface of Atlas, you're able to understand which Atlas probes do or don't see the responses with those things set, and that is down to middleware, because, you know, Atlas is able to receive them, and you know the server is sending them. So you've already run this experiment. Shumon 33:14 We have run this experiment, yeah. We actually have a report that we wrote that I could share with you later. George Michaelson 33:20 We'll put the URL of that into the podcast web page. Shumon 33:23 Yeah, absolutely. And the experiment was actually successful. We actually demonstrated that, putting this unsolicited record in, you know, almost all resolvers dealt with it fine. They just ignored it. George Michaelson 33:35 Well, that's good. That's a really nice safety signal for future upgrade of the protocol. So are we actually getting to a point where, alongside the security considerations and the IANA considerations, all protocols maybe should have a grease component in their specification?
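[Editor's note: the server-side version — slipping a record of a brand-new type into a response and seeing whether resolvers cope — leans on RFC 3597, which requires unknown record types to be handled as opaque data. A rough sketch of the injection step follows; the type number (from the private-use range) and helper names are this sketch's own assumptions, not details of the actual experiment.]

```python
import struct

def encode_name(name: str) -> bytes:
    """DNS wire encoding of a domain name (no compression)."""
    return b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"

def encode_rr(name: str, rtype: int, ttl: int, rdata: bytes) -> bytes:
    """Wire-encode one resource record; RDATA is opaque, per RFC 3597."""
    return encode_name(name) + struct.pack("!HHIH", rtype, 1, ttl, len(rdata)) + rdata

def inject_unknown_rr(response: bytes, name: str,
                      rtype: int = 65280, rdata: bytes = b"\x00\x01") -> bytes:
    """Append an RR of an unallocated type to a response and bump ARCOUNT.

    TYPE65280 sits in the private-use range (65280-65534) and stands in here
    for "a type the resolver has never seen". Appending at the end assumes an
    uncompressed message laid out in section order, so the additional
    section is last.
    """
    arcount = struct.unpack("!H", response[10:12])[0]
    patched = response[:10] + struct.pack("!H", arcount + 1) + response[12:]
    return patched + encode_rr(name, rtype, 300, rdata)
```

A resolver that copes, as almost all did in the experiment, simply ignores the extra record; one that doesn't is the ossification the measurement is hunting for.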
They should actively solicit bit fields, PDUs, patterns that should be tested on a regular basis. Shumon 33:59 Yeah, so I don't know if there's general agreement about your suggestion there, but I think there is a critical mass of people, at least in the standards community in the IETF, who think that this is a good idea: to do greasing, as you suggest, generally, for any protocol. In fact, the IAB, the Internet Architecture Board, has a program called [George: oh right] — I think it's called EDM: evolvability, deployability and maintainability of protocols, yeah — and one of their charters is to figure out how to apply greasing techniques generally, to all protocols. George Michaelson 34:33 This is typical IETF. Here's a great word for what you're doing: grease. They have to invent an acronym, and they pick an acronym that is exactly the same as electronic dance music. Why? Who knows why the IAB does what the IAB does? Shumon 34:49 Yeah, it's funny that you mention that, right? So when I was mentioning the metaphor, I almost mentioned that, because we're the IETF, we had to come up with some crazy acronym. George Michaelson 34:59 What does "grease" mean? Shumon 35:00 I know... okay, I actually do know what it is, but I have to look up my notes. So as usual, the IETF turned "grease" into an acronym with a very ungainly expansion. So I'll tell you what it is. It's "generate random extensions and sustain extensibility". George Michaelson 35:19 Oh, that's beautiful. Shumon 35:21 That's a mouthful, but that's what it means. George Michaelson 35:23 I bet that was a long bar conversation. Shumon, this has been absolutely wonderful. This is really, really fascinating stuff. And I think what you're doing is building resilience back into the protocol and future-proofing things. Thank you for doing this, and thank you for coming on ping and talking about it. Shumon 35:40 Yep, thank you for inviting me, George. It was a pleasure talking to you.
George Michaelson 35:45 If you've got a story or research to share here on ping, why not get in contact by email to ping@apnic.net or via the APNIC social media channels. Also remember the measurement@apnic.net mailing list on Orbit is there to discuss and share relevant collaborative opportunities, grants and funding opportunities, jobs and graduate placements, or to seek feedback from the community for your own measurement projects. Be sure to check out the APNIC website for all your resource and community needs. Until next time.