
RIPE 80
Plenary session
12 May 2020
2 p.m.


CHAIR: Hello. And welcome to the 1400 CEST session of RIPE 80. I should be appearing on screen, but I'm not sure if I am.

BRIAN NISBET: This is Brian, anyway. Hello. One of the co‑chairs of this particular session. So, we have two talks in this session, a long one and a short one, as you will have seen us doing, and this is the system we're working on for the Plenary for this RIPE meeting. Before I begin, I will remind you all to please rate the talks, because it's important we get the feedback, especially as we do all of this for the first time. And please remember, if you have a sudden wish to be part of the Programme Committee, then you can still self‑nominate for that. You have 90 minutes left; you can mail pc [at] ripe [dot] net with a bio, a photograph and a statement of intent, and the candidates will be announced later today.

So, without further ado, we'll move on to our first talk. So this is Doug Madory from Oracle talking about visualising major routing incidents in 3D.

DOUG MADORY: As Brian said, I am Doug Madory. This is my talk visualising routing incidents in 3D. And this is as much a visualisation technique as an analytical approach to looking at routing incidents. And we'll go through this in this talk.

So, the scourge of route leaks continues. We had a couple last month. There was a small one this past weekend; route leaks are something that's going to be with us for a long time. They have been with us since the beginning of the Internet. And here are a couple of headlines that captured a couple of incidents that took place in June of last year, in 2019. The one on the left is the one a lot of people are probably familiar with, the route leak that went through Verizon, most notably impacting CloudFlare. The other is the Safe Host leak, from a week or so earlier. And so, in each of these cases, when we talk about these major incidents, we try to use some sort of measure to discuss the scope or scale of the incident, and it often comes down to simply prefix count.

And this talk tries to argue why that may be inadequate or misleading.

So, in the example on the left, we have a quotation from an article, a pretty common kind of discussion of these incidents, where you get more than 20,000 prefixes involved; the one on the right, more than 70,000 prefixes. But here is the weakness of this one‑dimensional measure of a leak.

Not every leaked route is accepted by the same number of ASs in the Internet. So that's a measure of propagation. Many of the leaked routes don't go that far.

Another dimension that's often missing is the duration. Not every leaked route is in circulation for the same amount of time, and so, as we will see in a few examples we'll go through, there is often a long tail of prefixes that didn't propagate far and weren't in circulation for very long, but that still get included in the prefix count metric. And one could argue this serves either to inflate the number or to distract from trying to capture the leak's magnitude.

And I would admit that I am as guilty as anybody, I have written a lot of these over the years and we do use this prefix count as a metric for the size of the incident. And I guess I'm here presenting this to absolve myself of my sins, talking about using the same thing that I'm criticising on this slide.

So here is a better way that I would propose when we talk about routing incidents. The other dimensions that we want to try to capture are propagation and duration, to improve our understanding of the shape of these incidents. To do this visually, we can use a 3‑dimensional space where along one axis is prefixes, the number of prefixes that were touched in any way. Another dimension is duration: how long was the leaked version of each route in circulation, by individual prefix? And then, by individual prefix, what was the propagation? For us, we measured that by the percentage of our BGP sources that picked up the leaked version of the route at that moment in time.

And so, if you imagine an incident in which all the prefixes in the leak were, for the duration of the leak, propagated to a hundred percent of the Internet, you'd arrive at a solid box in this 3D space. But, as we'll see, that really never happens.
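
To make those three dimensions concrete, here is a minimal sketch, in Python, of how such a surface could be assembled from BGP observations. This is not the presenter's actual tooling; the input format, the bin size and the function name are assumptions for illustration only.

```python
from collections import defaultdict

def propagation_surface(observations, total_peers, bin_seconds=60):
    """observations: iterable of (timestamp, peer_id, prefix) tuples, one per
    BGP peer that carried the leaked version of the route in that time bin.
    Returns {prefix: {time_bin: fraction_of_peers}}."""
    peers_seen = defaultdict(set)                      # (prefix, bin) -> peers
    for ts, peer, prefix in observations:
        peers_seen[(prefix, int(ts // bin_seconds))].add(peer)

    surface = defaultdict(dict)
    for (prefix, t_bin), peers in peers_seen.items():
        surface[prefix][t_bin] = len(peers) / total_peers   # propagation in [0, 1]
    return surface

# Prefixes would then be sorted by their aggregate propagation (the sum over
# time bins) to produce the long-tail ordering along one axis of the plot.
```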

So in this visualisation, what we're looking at is this 3‑dimensional view of the route leak from June 2019, the route leak that went through Verizon. As I mentioned, in the coverage it was classified as more than 20,000 prefixes. If I was to write it up, we saw maybe almost 29,000 prefixes touched in some way. People might be confused by that: one source says 20,000, another source says 29,000. But the difference is likely due to just a few AS sources that saw something others didn't, routes that weren't widely propagated. Let me walk through this graphic here.

Up and down on the left is peer percentage; the darker red are leaked routes that propagated very far, propagated to just about the whole Internet.

And then along the bottom are the prefixes, where we sort by the aggregate amount of propagation seen per prefix. And then time is on the right, going out. So the part of the graph that's closest to the viewer is furthest back in time, and it proceeds forward in time as you go deeper into the graphic.

And so, I think, instantly, when you look at this, you can take away some idea of how long this route leak was in circulation, and also how many prefixes were getting widely propagated. Because in this case it's actually not that many out of the total number. In fact, it was only a few hundred that were widely propagated. Let's dig into this particular incident a bit more.

In the aftermath of this routing leak, there was a lot of discussion about how RPKI could have helped, and I don't disagree with any of those conclusions. But it's worth it to run through the numbers just so we go at this with open eyes of exactly how RPKI would have helped in this case.

So, if we were to take the 29,000 prefixes that we saw leaked and run them through a validator, you end up having the vast majority, about 27,000, show up as unknowns. That's due to a lack of ROAs, an ongoing challenge; things are getting better there, but that's where we're at, at least as of the data from June 2019.

The next most popular category was RPKI valid. That may come as a surprise, given that this is a route leak, but you have to consider that, in this particular incident, a route optimiser was involved which took routes from one transit provider and sent them on to another transit provider. There was no false origination, so the AS paths were reasonable and the origins were preserved. So we're not going to get false origins out of a route leak like this one. The only way RPKI is going to help is in this next category of invalid lengths. We had about 120 prefixes that would have been marked as invalid due to length and could have been dropped had the upstreams been doing origin validation. And the last category was invalid ASNs. In this case, these are prefixes that are just normally invalid, and so, again, there is another challenge for RPKI: we have a certain set of prefixes out there that are always misconfigured and always invalid. Some of those came through this leak.
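
As a rough illustration of how those four buckets fall out of origin validation, in the spirit of RFC 6811, here is a hedged Python sketch against a hypothetical ROA table. It is not a substitute for a real validator, and the ROA entry and function name are made up for the example.

```python
import ipaddress

# Hypothetical ROA table: (prefix, maxLength, authorised origin ASN)
ROAS = [
    (ipaddress.ip_network("203.0.113.0/24"), 24, 64500),
]

def rpki_origin_state(prefix, origin_asn, roas=ROAS):
    """'unknown' if no ROA covers the prefix, 'valid' if a covering ROA matches
    origin and length, otherwise one of the two 'invalid' buckets from the talk."""
    prefix = ipaddress.ip_network(prefix)
    covering = [r for r in roas
                if r[0].version == prefix.version and prefix.subnet_of(r[0])]
    if not covering:
        return "unknown"
    if any(asn == origin_asn and prefix.prefixlen <= max_len
           for _, max_len, asn in covering):
        return "valid"
    if any(asn == origin_asn for _, _, asn in covering):
        return "invalid-length"   # right origin, but announcement is too specific
    return "invalid-asn"          # no covering ROA authorises this origin
```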

So, back to the invalid lengths. The real damage that was done in this leak was due to more specifics that were generated, and not every prefix that came through this leak was a more specific. In fact, it was only 263 or so that were more specifics. And of those, a lot of the prefixes were CloudFlare's. To their credit, they had set the max prefix length in the ROAs for many of the routes, not all but many of them, to an exact match of the routed prefix. But for anyone who had left some wiggle room in the max prefix length, maybe a take‑away from this is: if you are hoping to use RPKI to defend against the scenario of a route optimiser announcing more specifics without changing the origin, you are going to be relying on that max prefix length being an exact match. If you leave it one bit longer, that leaves room for a route optimiser, or some other more specific, to get past an RPKI validator.
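
To illustrate that wiggle-room point with made-up numbers: a hypothetical ROA for a /20 with headroom in its maxLength still validates a leaked /24 more specific as long as the origin is preserved, whereas pinning maxLength to the routed prefix length makes the more specific invalid. A small sketch:

```python
def leaked_more_specific_is_valid(announced_len, roa_len, roa_max_len, origin_preserved=True):
    # A covered announcement is RPKI-valid when the origin matches and its
    # length does not exceed the ROA's maxLength.
    return origin_preserved and roa_len <= announced_len <= roa_max_len

# ROA for a /20 with maxLength 24: a leaked /24 more specific with the origin
# preserved (as in the June 2019 optimiser leak) still validates.
print(leaked_more_specific_is_valid(announced_len=24, roa_len=20, roa_max_len=24))  # True

# Same ROA with maxLength pinned to the routed prefix (/20): the /24 is now
# invalid by length and would be dropped by networks doing origin validation.
print(leaked_more_specific_is_valid(announced_len=24, roa_len=20, roa_max_len=20))  # False
```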

So we want to keep working towards reducing this set of RPKI unknowns through the creation of more ROAs, and in this case, just due to the nature of the leak, we had 13 times more valids than invalids. Having said all that, I would agree that the major providers that were impacted were the ones with more specifics, and had there been filtering in place we probably wouldn't be talking about this incident today.

All right. So, I have been writing blogs about these kinds of incidents for many years now and, in fact, to me, it seems a little bit formulaic to write a route leak blog; you need to know how many prefixes were involved, what time it started, which ASs were involved and maybe some highlights of notable networks that were impacted. That is the formula for a route leak blog. And since it is somewhat formulaic, I thought why don't we make a site that generates this analysis in a way that we can just kind of generate the same thing every time and answer the same questions every time? So people who know me often reach out to me personally and ask did you see this thing? Who was affected? Were my prefixes affected? To what extent, things like that. And to answer those, this is a way we can get the knowledge to a wider audience.

What we hope to do with this site, which went live earlier this year, is to continue publishing what we call autopsies of major route leaks soon after they occur. They need to be significant, and we came up with somewhat arbitrary lines for what's significant: more than 100 prefixes and seen by more than 10% of our BGP sources.

That way, although on any given day there is some low level of noise of routing misconfigurations going on, we capture the bigger events, because these tend to be more disruptive.

As far as "soon" goes, this is not a realtime thing; there is a human in the loop, usually me, to verify that what we're seeing is legit. We are trying to get that timeline down, but it will probably never be fully automated. The hope is that the product that comes out is still useful for people who are wondering who was affected, what happened, when did it start, and all the other questions that arise around these incidents.

And then also we have gone back in time to capture some notable incidents from the past. One can sift through them; there are over 100 there, and there could be a lot more if we spent more time uncovering all the route leaks of past years.

But, let's see...

Feel free to take a gander at that site. And here are some features that we developed that I hope make it more useful.

So beyond just that 3‑dimensional shape, it helps to decompose the visualisation and the incident by a couple of dimensions. One is an ASN filter, which allows the viewer to click on one of the most affected ASs so the graphic gets redrawn for that origin, with some basic stats as to how much of the incident affected that particular origin. In the example on the right, you can see highlighted on the ASN filter what we're looking at, the most affected origins, and there is an absolute impact, a unitless metric that is essentially a summing of all of the data points in the visualisation. We are adding up all of the peer propagation percentages across time and across the prefixes, either for everything or for a particular origin. Or, if you flick over to the country filter, it looks at how much different countries were impacted, because occasionally the geography is really an important part of the story of understanding who or what was impacted in an incident.

So, in the example on the screen, we have got an absolute impact of 64,388; that's a number we arrive at by doing a basic integration, adding up the volume under the surface of the curve. Of that, in this particular incident, CloudFlare represented about 10% of the relative impact: out of that 64,388, the aggregate propagation of the routes originated by CloudFlare ended up being about 10% of the total aggregated propagation. We keep the way these metrics are compiled standardised across incidents, so the hope is that, in case there is value in comparing one with another, we could have some sense of how one incident compares to another, or how the impact on one AS compares to another... (inaudible)... the hope is to be able to use these across incidents.
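
The following is a minimal sketch of how an absolute and relative impact of that sort could be computed. This is my reading of the description, not the site's actual code, and the data structure is the one assumed in the earlier surface sketch.

```python
def absolute_impact(surface):
    # surface: {prefix: {time_bin: propagation_fraction}}, as built earlier.
    # Summing every data point approximates the volume under the 3D surface.
    return sum(frac for bins in surface.values() for frac in bins.values())

def relative_impact(surface, prefixes_of_interest):
    # Share of the total contributed by one origin's (or one country's) prefixes.
    total = absolute_impact(surface)
    subset = {p: bins for p, bins in surface.items() if p in prefixes_of_interest}
    return absolute_impact(subset) / total if total else 0.0

# e.g. if one origin's prefixes account for roughly 10% of the summed
# propagation, relative_impact(...) returns roughly 0.10.
```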

Of course, we don't have a way to normalise the fact that the routing table has grown significantly over the last 20 years or so, but, that aside, it still, I think, could be informative.

Let's see!

So let's use this approach to visit some famous route leaks in the past.

One of my big pet peeves is the China Telecom leak of April 2010. In the coverage, there were a lot of headlines along these lines: China swallowed 15% of Internet for 18 minutes. When this came out, the folks in this community, in RIPE and NANOG, kind of dismissed this type of summary out of hand, and for various reasons, but I will tell you that it never seems to go away. Especially in security circles, this is still quoted as a fact, and I even encounter it occasionally in Internet measurement papers from people who ought to know better. It really isn't true, and it's not true for a lot of reasons, but let's get into the routing problems with it.

So, obviously the biggest issue is that routes don't equal traffic. That is clearly the biggest fallacy in this assertion. But let's stipulate that routes equal traffic and restate the claim as 15% of routes being hijacked. Even corrected that way, it's still not true. If it were true, then the leaked versions of the 50,000 or so prefixes that were involved would have had to have been propagated to the entire Internet for the duration of the incident. And when we do this 3‑dimensional analysis, we ought to arrive at some sort of solid box and, as we will see, it never looks like that.

So this is what we see when we run this through: you can see it was AS23724 that did it, and it was an origination leak. When we look at who was affected here, you have got a distribution with some prefixes over to the left in the red that propagated very far, and then a very long tail of all the others. If you were to dig into which prefixes propagated the farthest, almost all of them were Chinese routes; the routes that people get up in arms about, the ones you would be worried about, were mostly in that long tail, and so the leaked versions weren't propagated very far. That's not to say there was zero impact, but it's probably not nearly as much as the press reports would lead you to believe.

In fact, if we use the country filter instead of the AS filter, we can delve into which prefixes were impacted geographically and where they were. The curves look completely different when we isolate the Chinese routes versus the US routes, for example. In the one on the left we're looking at the absolute impact of the incident and then the absolute impact of prefixes that were geolocated to China versus the US. The Chinese routes accounted for about 74% of the aggregate propagation in the leak, so the vast majority; next up were the US routes, with a larger prefix count but only 8% of the total propagation of the leak.

Not only that, this whole curve accounted for only about 4.6% of what I call the theoretical max: the most volume that could exist in those 18 minutes with all of those prefixes at 100% propagation. This curve, when you count all the countries, only accounts for about 4.6% of that volume. So it's actually a lot less, and if we were to try to correct the statement, the assertion in the headline from a couple of slides ago gets a lot weaker: we'd see something like 0.07 of the claimed propagation, whether you count propagation for all prefixes or just the US ones.

On the bottom right, we are looking at US prefixes. There were a few that were widely propagated, and I remember in April 2010, digging into this incident just after it occurred and trying to understand where the impact was. Even in the top ten prefixes by propagation, there were a few US routes in there. I thought that sounded a bit suspicious. When I looked a little further into it, these were prefixes that were heavily prepended; they only had a single upstream and were prepended six times to it. So their AS path was really long, and when this leak came along and announced a drastically shorter AS path, almost the entire Internet believed the leaker over the folks that were doing the prepending. That's another talk, but it overlaps with this topic here.
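
A toy illustration of why that happens, using hypothetical AS paths: when the usual earlier tie-breakers are equal, BGP best-path selection prefers the shorter AS path, so a heavily prepended legitimate announcement loses to a leaked one with a short path.

```python
def preferred(paths):
    # Toy tie-break on AS-path length only; real BGP compares local preference,
    # origin, MED and more before reaching this step.
    return min(paths, key=len)

# Hypothetical paths as seen from one BGP peer:
legitimate = [64496, 64497] + [64510] * 7   # single upstream, origin prepended six times
leaked     = [64496, 23724]                 # the leaker, reachable via a much shorter path

print(preferred([legitimate, leaked]))      # the shorter, leaked path wins
```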

So if we keep going back and looking at some other incidents in the past.

So another famous one is the Indosat leak that basically leaked the whole table. If we look at this analysis, there were close to half a million routes; that was about the size of the routing table at the time. But which ones were widely propagated? It was actually a very small amount, in fact only about 8,000, and they were mostly Indonesian routes getting leaked by Indosat. So the impact was largely constrained to Indosat, and there was this very, very long tail of prefixes that only some ASs saw as being leaked.

It lasted about two‑and‑a‑half hours, but should we call this a full table leak or an 8,000 prefix leak? Because, really, the actual impact was constrained down to about 8,000 prefixes in Indonesia.

The following year, there was another large routing leak, this time Telekom Malaysia, TM Net. This was a little different. Not as many prefixes leaked, but you can see that, even though there is still a long tail, it isn't as binary; it's not simply either widely propagated or not at all. We have got a bit of a gradient here, and in that gradient there are quite a lot of prefixes that were impacted and propagated to some extent. And if we take this absolute impact, assuming you agree that it is a useful number, and compare the two incidents: the Indosat leak, the previous one, had twice as many prefixes, but the Telekom Malaysia one had six times as much overall propagation when we look at the absolute impact, and that's due to the fact that a lot more prefixes were propagated to a more significant degree than in the previous incident.

So, having run a bunch of these, there are a few observations worth compiling here.

I think these won't surprise people, but obviously the widely propagated part of the leak is generally the most damaging, and often it's a lot smaller than the total prefix count. And we ought to be focussing on that part of it as we discuss what would be the best filtering, RPKI, or whatever techniques we're going to use to mitigate these ongoing route leaks. So, the leaks that get widely propagated: number one, they contain more specifics. Sometimes these are generated by route optimisers, and oftentimes not. When folks like me who watch this stuff see a bunch of more specifics come out, we assume, oh, it's a route optimiser; that's the theory that is immediately raised. But I'd say that, probably more often than not, the presence of a lot of more specifics in a route leak is the result of traffic engineering. The case in point here is the Level 3 route leak from November 2017 that introduced tens of thousands of more specifics to the global routing table.

There was no route optimiser involved in any way. What was happening was that a lot of large networks were trying to influence the return path of traffic from Level 3 to their network, and the way they would do that was over their private peering with Level 3: they would announce a lot of more specifics to try to ensure that the return path of the traffic would come across that link.

You can debate whether that's the best way to do it, but it's not uncommon. And when there is a route leak, like there was in that case, and those more specifics get announced to the wider Internet, they cause a lot of disruption.

Another one that maybe isn't discussed as much is routes that intentionally have limited propagation; I think sometimes we refer to these as regional routes. In the case of the Safe Host route leak in June last year, there were a lot of European mobile operators that announced more specifics just in parts of western Europe for their traffic engineering purposes. But most of the Internet, you know, 80% of our sources located around the world, wouldn't see those more specifics. As soon as there is a leak of a regional route, there is a huge vacuum that just gets filled up: there is no existing route to contend with, so the leaked route is an automatic winner and gets widely circulated. In that case too, there was a lot of disruption as a result.

And then my other pet topic these days is routes that are excessively prepended. I am not anti‑prepending, but there are quite a lot of prefixes that have just one upstream and are heavily prepended to it, and when another origin comes along inadvertently announcing their route, the illegitimate source wins over the legitimate one, because the legitimate one has unfortunately not cleaned up prepending that maybe served some purpose in the past but has long since outlived it.

So, wrap‑up:

This is an argument for trying to include in some way these notions of propagation and duration when we discuss routing incidents.

There are flaws in talking about just a simple prefix count when we talk about the scale and the impact of routing incidents.

And we ought to come up with some kind of cut‑off. One suggestion is to refer only to prefixes that are seen by, say, 1% of our peers, or by 10 peers or more, so that we don't have some small set of ASs that see a lot of leaked routes but ultimately do not suggest a large operational impact.
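
One way such a cut‑off could look, as a hedged sketch: the 1% and 10‑peer thresholds are simply the figures floated above, not an agreed standard, and the function name is hypothetical.

```python
def operationally_significant(prefix_peers, total_peers, min_fraction=0.01, min_peers=10):
    # prefix_peers: {prefix: set of peers that carried the leaked route}.
    # Keep prefixes seen by at least 1% of peers, or by 10 or more peers,
    # and drop the long tail of barely propagated routes.
    return {p: peers for p, peers in prefix_peers.items()
            if len(peers) >= min_peers or len(peers) / total_peers >= min_fraction}
```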

A more esoteric suggestion would be to adopt some sort of Richter scale, where we take this notion of integrating the volume under the surface, or the area under the curve, as a way to compare one incident to another as far as its scale. Of course, the assumption there is that every AS is the same. That's not true, but in this type of work you have to make those assumptions. You also make the assumption that every prefix is equivalent. That also, obviously, is not true when you get into it, but if you're doing Internet‑wide BGP analysis, you have to make those assumptions.

I don't disagree with all the talk of how RPKI would have been helpful in these incidents. Obviously we need to keep making progress on increasing the number of ROAs, and I think people ought to consider revisiting their max prefix length if they are hoping to use RPKI to defend against a leak that involves a route optimiser or traffic‑engineered more specifics, because if you leave just one bit of headroom in your max prefix length, then RPKI is not going to do much for you. The leaked route is going to end up being valid in the event that the route leak preserves the origin, as happened in the case in June of last year.

And so then please check out the site. We're going to keep publishing these route leak autopsies and hopefully this serves to inform our discussion around route leaks. And if you do encounter anyone saying China Telecom hijacked 15% of Internet, feel free to send them my way.

All right. That's it. Thank you very much.

(Virtual applause)

BRIAN NISBET: I think, unfortunately, Jelte is having some sound issues. So, that ran a little long and we don't really have much time for questions, Doug, because we are very tight today, as we mentioned, and we have only got 15‑minute coffee breaks between sessions. So, to those of you who asked questions, I'm sorry we don't have time to go through them, but I think Doug is available by e‑mail, or possibly in other locations as well, to talk through some of those questions. So sorry about that.

So, the second presentation we have for this session is new observations on MANRS from Andrei Robachevsky.

ANDREI ROBACHEVSKY: Thank you, Brian, and hi everyone. So, MANRS has been presented to this community a few times at its various stages, but I appreciate this opportunity to give you an update on where we are with this effort, which is more than six years old.

Well, I mean, there is no question there is a problem. I think Doug's presentation was a great illustration that the problem is there: leaks do propagate, maybe not always as far as the press or media outlets try to present, but they do, and they do have an impact, and the numbers are not insignificant. We also know that when things happen they cause damage; those are some examples, and Doug, again, presented many more.

So, why is routing security so hard? I mean, we have been fighting this problem for decades with limited success. What we have here can be described in the social sciences as a collective action problem. While we all understand the common goal, that we are better off implementing routing security, because of conflicting interests it's hard to achieve.

Routing security is sort of, you know, group immunity, and this is a concept that we are all learning about now, and we understand how hard it is to achieve.

So, to extend this analogy a little bit more, you can say that MANRS may be the vaccine that we're looking for.

So what does this vaccine consist of? What is it made of?

Norms. This is a concept that can help in some of these cases, to help overcome the collective action problem, or what is sometimes termed the tragedy of the commons. So we need to have agreement on the values and the behaviours that support them.

So if we look at the routing system, I think we all agree on a common value: a resilient and secure global routing system. And the behaviours have been known for decades. I mean, there are practices that were published back in 2006, 2007, and many operators implement them, but not all, and not even 60%.

So, from behaviours to norms: there are a few things that need to happen to make that transition.

First, those norms have to be widely accepted as a good practice.

They should not be just the least common denominator, but they should not be set too high either.

And they should be visible and measurable.

So, here we come to this effort which is based on this concept.

It basically has two facets. One is the baseline: the expected behaviours that could improve routing security on a global scale. It is distilled from common operational practices, BCPs and so forth, but optimised for low cost and low risk of deployment. So, from this perspective, it has a better potential of becoming a norm, where it becomes socially unacceptable not to implement these things when you interact with your peers.

Another thing norms need is visible support from the community, and that's why MANRS builds a visible community of like‑minded operators. That's the second facet.

So, what we want to achieve is to scale this up. Again, back to this group immunity: one person being immune doesn't stop the virus. We need a higher percentage of network operators to be following those practices to make a significant impact. Let's look at who can make this impact.

And there are roughly four groups of operators, from enterprises and transit providers to IXPs and CDN and Cloud providers, that can contribute to better routing security.

So, network operators, ISPs, were a natural choice when we launched this initiative back in November 2014. We started with a baseline consisting of four simple practices called actions. We'll not go into detail; you can see them. But basically, you will recognise them: as I said, it's a baseline, a minimum requirement to implement.

Now, scaling up, we're looking for other partners, and the next partner that comes to mind immediately is the IXPs. IXPs have their own communities with a common operational objective; that's the kind of community MANRS is looking for as well. And MANRS can offer them a reference point with global presence, a reference point to help them build safe neighbourhoods.

So, in answering the question of how IXPs can contribute to routing security, a task force was formed, and this task force came up with five actions. That's a separate, slightly different set of actions for IXPs.

The most important one, I think, is action 1, which instructs IXPs that run route servers to implement strict filtering at that point.

So, this programme was launched in April 2018, and I'll show you some numbers later; we have uptake there as well.

Now, finally, we're looking at other stakeholders, if you will, and those were CDN and Cloud providers. This programme was launched on March 31, and it not only brings benefit to the routing system, where we would like to leverage the peering power of these networks, but also brings some benefits to the Cloud providers themselves.

Again, a task force was formed; many well‑known names are on this task force. That was more than a year of work, trying to converge on practices that would be applicable for CDN and content providers but would also have an impact on routing security. As a result, six actions were developed and consulted with the community; there was a call for comments. We got some comments and feedback, thank you for that, incorporated them, and launched the programme on the 31st March this year, with 8 big CDN and Cloud providers joining this effort. So very happy to announce that as well.

Now, let's look at the adoption. We started back in November 2014, as I said, with nine operators launching the ISP programme, and now we have a quite steep, mostly exponential, curve, reaching more than 360 ISPs, with more than 500 networks (autonomous systems), 52 IXPs, and 8 CDN and Cloud providers that launched the programme in March.

So those numbers need to be constantly updated because we have pretty high influx of applications. Very happy about that.

This is the demography of participants. You see Brazil dominating there with more than 26%, and that's due to the programme on a more secure Internet that was started two years ago, where they go around the country and promote routing security, not MANRS specifically but some of the same practices, training people, and that stimulates uptake in MANRS as well.

Now, this is speculative, of course, and I'm not attributing this graph to MANRS; maybe it's also speculation whether efforts like MANRS help, but this is a nice graph. We see some decrease in routing incidents from one vantage point, data taken from bgpstream.com, but the data about MANRS participants is real, taken from our own database, so you see these counter curves, which are nice.

Now, finally, I would like to say a few words about measuring MANRS readiness, because I think, for norms to be effective, they have to have teeth; they have to be open and transparent in a way that people can see who adheres to those norms and who doesn't.

So, that's why, almost a year ago now, we launched a tool called the MANRS Observatory, which measures MANRS readiness not only for participating networks but for all networks on the Internet, because we are using passive measurements.

MANRS Observatory doesn't measure by itself, right; it uses third‑party data and calculates metrics using our measurement framework.

It doesn't measure impact; it measures MANRS readiness, in a sense MANRS performance. This is an open tool and you can go and look at it. It doesn't give you, as a public visitor, per‑network granularity, but MANRS participants can get that granularity, so they can see the MANRS readiness of every network on the Internet on the MANRS Observatory.

So, to wrap this up: why join MANRS? Well, there are many reasons. The best one is to improve your security posture and reduce the number and impact of routing incidents.

You don't want your customer to become your transit provider, for instance, so filtering makes absolute sense.

Meet the expectations of the operator community. I think, as those norms become more of a social expectation, you need to live up to them.

And, of course, and we see this more and more, people use MANRS as a competitive differentiator, because it is a signal to communicate your security posture to your potential customers or peers.

Joining us is very simple. Just go to the MANRS website, fill out the appropriate form, depending on who you are and which of the different classes of networks participating in MANRS you fall into, and submit it. That's the beginning of the dialogue. We'll discuss it and check it to the best of our ability, and if you fulfil the requirements, you can join MANRS.

So, thank you. I think that completes my presentation, hopefully well on time. Thank you.

CHAIR: Thank you.

JELTE JANSEN: Thank you, Andrei. We have a few questions from Michael Kafka from NetCat concerning the involvement of IXPs: do you know about or see IXPs doing ROV or route filtering against the IRR?

ANDREI ROBACHEVSKY: Well, many IXPs are actually using IXP Manager for the route server, and there is a pretty standard process of filtering: first you apply the RPKI check and then fall back to IRR if RPKI is unknown. So, IRR filtering is pretty widely used, as far as I can see.
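
The decision logic described here, sketched in Python; this is my reading of the common route server policy rather than IXP Manager's actual implementation, and the function and parameter names are hypothetical.

```python
def route_server_accepts(route, rpki_state, in_irr):
    # rpki_state(prefix, origin) -> "valid" | "invalid" | "unknown"
    # in_irr(prefix, origin)     -> True if a matching IRR route object exists
    state = rpki_state(route["prefix"], route["origin_asn"])
    if state == "invalid":
        return False          # drop RPKI-invalid announcements outright
    if state == "valid":
        return True           # accept RPKI-valid announcements
    return in_irr(route["prefix"], route["origin_asn"])  # unknown: fall back to IRR
```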

JELTE JANSEN: And at the risk of running into the coffee break a little bit, Gordon Gidofalvy from KTH Stockholm asks: "Do MANRS members usually implement MANRS practices by the time they sign up, or do you see a gradual deployment of these practices?"

ANDREI ROBACHEVSKY: It's a requirement that you conform to the actions before you join. When an application is submitted, we do an audit. We are also using the MANRS Observatory as a tool now to support this audit. You can check only so much from the outside; we don't do on‑site audits, so, in many cases, we rely on what the ISP or the participant tells us. But you have to be compliant before you can join.

Now, what happens sometimes is that networks join and then their security deteriorates, and that's also why we are promoting the MANRS Observatory inside the MANRS community, where we discuss ongoing conformance to MANRS and how to keep it up.

JELTE JANSEN: As the participants can see, there is a poll now. Please fill that in. One more question from Gerardo Viviers: is it possible to use the Observatory for one's own AS, as a user, before becoming a MANRS participant?

ANDREI ROBACHEVSKY: Yes. Well, we have this notion of an aspirant account. When you submit your application and you are not yet fully conforming, but you are aspiring to become a MANRS member, we can issue you an account, which is called an aspirant account, so that you can check your own AS and how it looks from the outside.

JELTE JANSEN: Thank you. Thank you to Doug as well. Applause from the audience. We are going to have a short break. The next session will be at 1500 hours. Please rate all the talks and please vote for the Programme Committee. Thank you all and see you soon.

(Virtual applause)



LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.