
RIPE 80 ‑‑ virtual meeting.
Plenary session
12 May 2020
1pm.



CHAIR: Hello everybody and welcome to the next session. It is 1300 here in Berlin and it is time for us to start. My name is Peter Hessler and I am on the RIPE Programme Committee. I will be chairing this session with Wolfgang Tremmel, and coming up in a few minutes we have a presentation about route flap damping in the wild. We ask you to use the chat window for regular chat. If you have any questions for the presenters, please open the Q&A window and ask your questions in there. Please try to keep them short and concise so we can have as many questions answered as possible. Due to time constraints, we may not be able to get to all questions; if we do not, please contact the presenter after the session and they can discuss this with you as well.

So, please remember to rate the talks. This is available in the meeting plan: you need to log in, and then next to the name of each talk there will be a rate button. Click that and rate as you like.

So, Clemens, the floor is yours.

CLEMENS MOSIG: Hi. You should be able to hear me. And you should be able to see my slides now. Is that the case? Yes, okay. Great.

Thanks for the introduction. In this presentation, I will talk about BGP route flap damping in the wild. This is joint work with Randy Bush, Cristel Pelsser, Thomas Schmidt and this guy.

There are a lot of routers on the Internet. Most are working well, but some are misconfigured or malfunctioning. They may send unwanted BGP announcements and withdrawals. We call them bad boys. An example is BGP optimisers, which can also lead to flaps.

You know that each BGP update triggers the BGP decision process, and with too many updates, routers are overwhelmed. To prevent this, route flap damping was introduced back in the nineties to reduce CPU load on routers so that the neighbouring routers would not have to recalculate paths very often. Route flap damping is one of the two BGP optimisers used to dampen those routes. But not only bad boys create many updates. There is also the BGP convergence process.

If one router sends a BGP withdrawal to its peers, which is totally normal, the neighbours will announce their alternative paths to their peers, and so on. And thus the withdrawal converges. But during that period, a lot of announcements with alternative paths may arrive at a router somewhere else on the Internet. This may trigger route flap damping, and the dampening router may entirely withdraw the prefix. In fact, route flap damping configurations suppressed 14% of all prefixes in a study that used BGP data from a tier 1 provider and an IXP. But you may think: in the above case the prefix is withdrawn anyway, so what's the deal if it's withdrawn again?

If the left router re-announces its prefix, the propagation time will be very long because its prefix is being suppressed by some router.

Route flap damping was and still is controversial.

In our talk, we'll first go through the route flap damping history, and then explain how we measured route flap damping and show you how many ASes are damping and which configurations are used.

After seeing the introduction, you may think route flap damping shouldn't be recommended. Let's go back in time. Route flap damping had already been implemented and used in Cisco routers in the early nineties. The main reason was high CPU utilisation because of frequent best path selection. This was made into an RFC in 1998. In 2002, Mao et al. published that just one BGP update can cause a series of BGP updates in a topology, as we have demonstrated in the introduction.

Depending on the topology, this amplification effect causes routers with a route flap damping default configuration to suppress, for up to one hour, the propagation of a route that has been withdrawn and then re-announced.

They also found that with adjusted route flap damping parameters different from the default, the effect is reduced.

These findings did not just affect the implementations: in 2006, the RIPE Routing Working Group recommended disabling route flap damping entirely because of its drawbacks.

Then, five years later in 2011, Pelsser published how one could adjust the damping parameters such that only the heavy hitters, the bad boys, are being damped. This elegant solution did not require any software updates or code adjustments, but only adjusting a single parameter in the configs.

RIPE now recommends the use of route flap damping with adjusted parameters. The latest IETF best current practice also recommends route flap damping with these adjusted parameters.

As you can see, this is quite the mess. So, let's get some clarity. And that is where we come in, opening the books again and measuring route flap damping.

Let's sum up why it is important to know who is using route flap damping on the Internet. First, there have been many different recommendations in the past, and we don't know what's in use or whether anyone follows the recommendations. As we have seen in the beginning, default parameters can cause harm and affect Internet reachability, so it's useful to know who is using them.

Also, if you try to measure noisy prefixes or actively send a lot of BGP updates, for example to measure deployment of RPKI, route flap damping is a speed bump and thus impacts BGP measurements.

As route flap damping can be helpful but can also affect reachability, we decided to measure route flap damping on the Internet. And to measure route flap damping, we need to recap how it works. Route flap damping works based on a penalty which is incremented with every route change. Each router keeps a penalty for each prefix and peer pair. This plot shows how the penalty may behave for a prefix over time.

The red dots indicate where updates are received by the router, and each BGP update increments the penalty by a constant, which is 1,000 in this plot. Now, you can see that in between the updates, in between the red dots, the penalty drops again. This is because the penalty always decreases based on a configured half life.

If the penalty surpasses a configured suppress threshold, the route gets damped, or, in other words, withdrawn. Note that the router does not discard further updates but just doesn't use them.

At the end of the blue area, the router stops receiving updates and therefore the penalty can now go much lower. Once the penalty decreases below the reuse threshold, the route is considered usable again and can be re-announced. The reuse threshold defines this point. We call this announcement the readvertisement.
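To illustrate this penalty mechanism, here is a minimal Python sketch. The constant of 1,000 per update matches the plot above, but the half life and the suppress and reuse thresholds are illustrative assumptions, not any vendor's defaults:

import math

PENALTY_PER_UPDATE = 1000
HALF_LIFE = 900            # seconds; penalty halves every 15 minutes (assumed)
SUPPRESS_THRESHOLD = 6000  # illustrative value
REUSE_THRESHOLD = 750      # illustrative value

def decay(penalty, dt):
    # exponential decay: the penalty halves every HALF_LIFE seconds
    return penalty * math.pow(0.5, dt / HALF_LIFE)

def simulate(update_times, horizon, step=60):
    # yield (time, penalty, suppressed) for one prefix/peer pair
    penalty, suppressed, last = 0.0, False, 0
    updates = sorted(update_times)
    for t in range(0, horizon, step):
        penalty = decay(penalty, t - last)
        last = t
        while updates and updates[0] < t + step:   # apply updates in this step
            updates.pop(0)
            penalty += PENALTY_PER_UPDATE
        if penalty >= SUPPRESS_THRESHOLD:
            suppressed = True                      # route is damped (withdrawn)
        elif penalty <= REUSE_THRESHOLD:
            suppressed = False                     # route can be readvertised
        yield t, penalty, suppressed

# example: one update every two minutes for an hour, then silence
for t, p, s in simulate(list(range(0, 3600, 120)), horizon=4 * 3600):
    print(f"{t:6d}s  penalty={p:8.1f}  suppressed={s}")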

So, you can see that we have a time where the router receives updates and a time in which it does not receive any updates. If this router received an update every, let's say, five minutes, it would constantly keep the prefix withdrawn because the penalty would never decrease below the reuse threshold, and thus we would never see the readvertisement from that AS, which would not be useful because we want to measure route flap damping.
We utilise BGP beacons which alternate between announcements and withdrawals at very high frequency. We are using our own test prefixes for that, so we do not interfere with anyone else's traffic.

We send the signal and hope that someone damps us.

To target different route flap damping configurations we are using different update intervals. So a small update interval may trigger more sensitive configurations, while a larger one may trigger fewer.

We are also using a burst-break sending pattern. In the burst, we send announcements and withdrawals with a specific update interval, and in the break we do not send any updates. As explained before, this gives the routers time to drop their penalty below the reuse threshold for all prefixes, so that they announce them again and we can see the ASes that do route flap damping.
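A rough Python sketch of what such a burst-break schedule could look like; the interval lengths used here are illustrative assumptions, not the exact values from our measurements:

from datetime import datetime, timedelta

def beacon_schedule(start, update_interval_min, burst_min, break_min, cycles):
    # Alternate announcements and withdrawals during each burst, then stay
    # silent during the break so damping routers can drop their penalty
    # below the reuse threshold again.
    events = []
    t = start
    for _ in range(cycles):
        burst_end = t + timedelta(minutes=burst_min)
        announce = True
        while t < burst_end:
            events.append((t, "announce" if announce else "withdraw"))
            announce = not announce
            t += timedelta(minutes=update_interval_min)
        t = burst_end + timedelta(minutes=break_min)   # break: no updates at all
    return events

# example: 2-minute update interval, 2-hour bursts, 2-hour breaks, 3 cycles
for when, action in beacon_schedule(datetime(2020, 5, 12, 0, 0), 2, 120, 120, 3):
    print(when.isoformat(), action)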

With this sending pattern, routers that are using route flap damping produce a very recognisable pattern. Let's see how this looks in the real world.

What you see in this plot are the views from a single vantage point, a route collector peer, of two different paths. Each vertical green line is an announcement. So we had a route collector, and the plots show when we receive announcements for a specific path, for the specific peer.

The blue areas are the bursts and the white areas are the breaks. The left plot shows received announcements throughout the entire burst. Everything looks normal. This is what we expect when route flap damping is not deployed on this path. This is basically exactly what we are sending from the Beacon router.

In the right plot, you can see that the signal suddenly stops after half of each burst. This means the route got damped.

What you can also see is that there is another advertisement in the break, in the white area. We call this the readvertisement, because this is the point in time where the penalty drops below the reuse threshold and the route is readvertised again.

So the left plot shows what a path without route flap damping looks like, and the right plot shows the pattern route flap damping produces.

And based on this pattern, we label paths with route flap damping true or false. On a path labelled route flap damping true, we know that at least one AS does route flap damping, because someone must have caused this pattern.

Ideally, the direct peer of our beacon is also a vantage point, a peer of the route collector, because then we would know exactly which AS does route flap damping and which does not, because the AS paths would just be two hops long. But unfortunately there are about 60,000 ASes on the Internet and we just cannot set up this many beacons and this many vantage points.

In our study, we sent our beacons from seven different locations in seven different networks around the world. We also tried to place our beacons more or less equally spread around the world.

And, of course, thanks a lot to all the network operators who helped us host the beacons as well as the ones who shared the route flap damping configurations with us.

Using the public route collector projects Isolario, RIPE RIS and RouteViews, we picked up our beacon signals at 700 routers on the Internet. Now we have a set of paths labelled route flap damping true and a set of paths labelled route flap damping false, and we labelled these based on the update pattern I explained earlier. But we want to know which exact AS uses route flap damping on a given path.

The challenge is that on a route flap damping path, the damping AS could be anywhere between the route collector and the beacon. There could also be two damping ASes on a given route flap damping path. We don't know.

The second challenge is that an AS may only dampen a subset of neighbours. For example, an AS may only apply route flap damping to its peers or providers, to its customers, or to both groups.

Also, BGP, in general, hides information, as only the best paths are exported.

Therefore, we introduced metrics based on which we can infer route flap damping for individual ASes.

Our heuristic is based on three principles.

First of all, the signal ratio. If an AS is on one or more route flap damping paths, and on more route flap damping than non-route-flap-damping paths, it may indicate that this AS is using route flap damping.

Second, when a router dampens our prefix and sends a withdrawal to its neighbours, the neighbours may announce their alternative paths for our prefixes, and based on these alternative paths we can then derive the damper on the damped path.


Third, ASs that use route flap damping should be seen in fewer announcements towards the end of the burst as they start damping.

And using these three principles we pinpoint the ASs on route flap damping paths.
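As a simplified illustration of the first principle, the signal ratio, here is a Python sketch that counts how often each AS appears on damped versus non-damped paths. The minimum path count and ratio threshold are assumptions for illustration and not our exact heuristic:

from collections import defaultdict

def signal_ratio(labelled_paths, min_paths=5, ratio_threshold=0.8):
    # labelled_paths: iterable of (as_path, is_damped), where as_path is a
    # list of ASNs seen between a route collector peer and the beacon
    damped = defaultdict(int)
    total = defaultdict(int)
    for as_path, is_damped in labelled_paths:
        for asn in set(as_path):
            total[asn] += 1
            if is_damped:
                damped[asn] += 1
    suspects = {}
    for asn, n in total.items():
        if n >= min_paths:                       # only judge ASes with enough data
            ratio = damped[asn] / n
            if ratio >= ratio_threshold:
                suspects[asn] = ratio
    return suspects

# toy example with private-use ASNs
paths = [
    ([64500, 64510, 64520], True),
    ([64500, 64510, 64530], True),
    ([64501, 64511, 64520], False),
]
print(signal_ratio(paths, min_paths=2, ratio_threshold=0.8))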

We performed our measurements with six different update intervals. One minute, two minutes, three minutes, five, ten, fifteen. And as one would expect, we found more ASs doing route flap damping for smaller update intervals where we sent more updates across the same time period.

For the highest frequency, we found that 8% of all measured ASes are using route flap damping. As our heuristics do several checks and limit themselves to drawing conclusions only about ASes for which we have enough data, the 8% should be seen as a lower bound.

We found small ISPs as well as tier 1 providers doing route flap damping.

We also found that most ASes use vendor default configurations. The problem is that these default configurations are two decades old and have not been updated with the findings by Mao and Pelsser. We know that changing the default parameters in the vendor software would probably surprise the network operators who use them, so we understand why these have not been changed by the vendors. But we urge you to change your configuration.

We have also checked the WHOIS records for instances of route flap damping and we found dozens. Most are referencing RIPE-229, which is outdated and from 2001.

But take this with a grain of salt, because it could be that the WHOIS entries have just not been maintained.

After pinpointing the route flap damping ASes, we started validating our findings by asking network operators from the route flap damping ASes. We received replies from 27, and we had only one false positive. Most of these ASes told us they use vendor default parameters. So, the networking community thought that route flap damping is rarely used. But, in reality, route flap damping is not uncommon, and we found at least three tier 1 providers using it. We also expected that if route flap damping is used, the ASes would follow the recommendations by the IETF and RIPE. But, in fact, most ASes just use the default parameters that were deprecated 20 years ago.

So, what have we learned today?

Route flap damping is used from tier 1 providers to small ISPs; all kinds of networks use it.

At least 8% of all measured ASs are using route flap damping.

And most of them use harmful default configurations.

To sum up our talk, we have some recommendations for you.

First, you should check whether your routers have route flap damping enabled unintentionally, because we came across two ASes, of which one was a tier 1 provider ‑‑ we're not going to mention names ‑‑ that didn't know that route flap damping was enabled.

Also, please check whether the WHOIS entries are up to date so that other network operators know what you're doing, because it could be that the WHOIS entries we found and used for cross-checking our results have just not been maintained.

Please, if you use route flap damping, consider changing your config to the recommended parameters after reading RIPE-580, or just disable route flap damping. We actually came across quite a few ASes who did not really use route flap damping: they kept it in reserve but didn't need it. But whether route flap damping is needed at all is a different topic.

And if you want to help us, send us your route flap damping configuration. That's it from me and thank you for listening.

CHAIR: Thank you, Clemens. I am Wolfgang Tremmel from DE‑CIX and I am moderating this Q&A part. Are there any questions? As remarked before, if you have a question, please use the question window and keep your question short and to the point.

Okay, I have a question from Erik Bais, and the question goes: Are you planning to provide recommended configs for some commonly used vendors?

CLEMENS MOSIG: No, I am not planning that, because the recommended parameters are already written down in the IETF best current practice and in RIPE-580. These are very nice documents and they recommend exact parameters.

CHAIR: I also see that the poll has started so the participants should now see a poll window and you have basically three minutes to click your answers and I will then tell you the results later.

In the meantime, the next question is from Ayane Satomi from Batangas State university. The question goes:

Why do different ASes use the legacy configuration? Does it mean most ASes basically just plug and deploy it with no regard to configuring it?

CLEMENS MOSIG: So we don't know how the ASes chose their configurations. I assume ‑‑ so we found that these ASes just use the default configuration, so yes, you can assume that they enabled route flap damping and thought, well, the vendor default configurations are the defaults, so they may be good. So, let's just use them.

CHAIR: Okay, the next question is from Dr. Bahaa Al‑Musawi with no affiliation. The question goes: Thanks for your interesting presentation. My question is: what significant information could we get from finding out which AS is using route flap damping, rather than looking at the behaviour of BGP and finding out the total behaviour of BGP traffic prefixes?

CLEMENS MOSIG: So, if you know ‑‑ so, there are studies, for example, a study about BGP zombies. They also actively measured a BGP phenomenon on the Internet and they also used BGP beacons. Zombies are stale routes which have been withdrawn but are still in the RIB table. They thought that maybe the whole reason for these BGP zombies is route flap damping. So, if they knew who does route flap damping, they could maybe correlate their results with our findings. In general, just knowing who is doing route flap damping is great to maybe troubleshoot your measurement or to find the reason why a measurement gives these or those results.

CHAIR: Thank you. Next question is from Geoff Huston, APNIC: You appear to imply that flapping is related to path exploration. What about the interaction of RFD and MRAI timers? Shouldn't MRAI timers suppress the path exploration updates?

CLEMENS MOSIG: So, MRAI is ‑‑ to explain for everyone: MRAI is another BGP optimiser, and it defines the minimum amount of time that must be left between two consecutive updates about a specific prefix. And yes, it does reduce the number of updates that occur in the convergence process. But the paper from 2002 that found out that just one BGP withdrawal can trigger route flap damping also assumed default MRAI configurations, and even with a default MRAI configuration one could still trigger route flap damping.

CHAIR: We now close the queue for questions. I still have one question left and this is from Lars Prehn. He says: Good talk. You mentioned in your motivation the importance of your work for measurements that actively send BGP announcements. How can I interpret your results in this context? How do the changes you suggest affect the current situation?

CLEMENS MOSIG: So, the results ‑‑ the current result is that many ASes are using the vendor default configuration, and I said in the beginning that with these vendor default configurations a single withdrawal can trigger route flap damping. Also, up to 14 percent of prefixes were suppressed, and this was shown in a study in 2010, I think. This means that if you have the default configuration enabled, for example towards your providers or your peers, your customers may experience unreachability issues, because you are simply withdrawing the prefix. But these prefixes are actually not flapping enough that they should be withdrawn.

CHAIR: Okay. Thank you, Clemens, and we have the results of the poll live now. 24% have route flap damping enabled, 76% have not. And of those, 58% are using the vendor parameters, 18% more strict and 31% less strict, and all of those are using route flap damping, so basically that's even. And I got the remark that I should mention the number of people who answered the poll, but I don't see that number. So, if perhaps the host can give me this number in the chat, then I can read it out. But I do not see the total number of people who have answered. Well, if we get that number, we can give it after the next talk.

Oh, I just got it: 105 people have answered the poll. So that's all right. Okay. Thank you, Clemens, thanks for presenting, and if people have further questions, I guess you are available in some of the chat channels. I would now like to introduce the next speaker, and the next speaker is Daniel Kopp. Daniel Kopp is a colleague of mine. He also works for DE‑CIX in Frankfurt, and he has been a member of the DE‑CIX research and development team since 2015. He participates in several international research activities about measurement and computer security, he has been a presenter at RIPE meetings before, and he has also presented at NANOG, DENOG and the IETF. He is going to talk today about the effectiveness of a DDoS booter services takedown.

DANIEL KOPP: Thank you. So, this presentation is about DDoS attacks and especially about booter services. It was published as a research paper at a conference at the end of last year, and it was done together with people from DE‑CIX, BENOCS, the University of Twente, the BTU and the MPI.

So, I hope you will use the comments section. Maybe you can report on the biggest DDoS attack you have ever seen in your network, or about things that are right or wrong in this presentation.

So, booter services, to get everybody on board: they have been known for a while and probably a lot of people know this stuff already. It's basically a service that criminals provide where you can start a DDoS attack against any network as easily as with just one click. You can find these services really easily; lately they promote themselves on different video platforms and show how big the DDoS attack will be, and they are quite cheap.

Basically, when you order there, they have different service levels, you normally pay in cryptocurrency, and you get something like a 30-day flat rate where you can run DDoS attacks for 30 days.

This is a screenshot of one service where you can see the attacks you can start. They have different layer 4 high-bandwidth attacks, but also more targeted layer 7 attacks against websites, for example. Otherwise, the service plans differ by the number of concurrent attacks you can start and the length of the attacks. Usually it starts with five-minute attacks, so that's maybe why you will mostly see really short five-minute attacks in your networks. For everything longer you have to pay more. The basic prices start at about $10 for maybe five to twelve gigabits of traffic, that's what they promise, and if you want more, they have the so-called premium VIP service where they claim to give you 80 to 100 gigabits. This is already a year old now, so maybe this has already changed to higher levels of traffic.

And because these services have been around for quite some time, have been popular and have crippled a lot of networks, some law enforcement agencies of course got together, and from time to time we see announcements that they kicked some services off the Internet.

So, what we did in this research effort was, we wanted to ask ourselves the question: how dangerous are these booter services? For this, we created our own measurement setup and we attacked, basically, ourselves. We bought some services and we saw what we got.

The second thing we wanted to have a look at is, in the networks that we had a look into ‑‑ that was an IXP, a tier 1 ISP and a tier 2 ISP ‑‑ we wanted to see how many attacks we could see on a daily or hourly basis.

Last, we wanted to have a look at a major takedown at the end of 2018, where they took down 15 booter services, and we wanted to see what the effect was. Can we see a decline of attacks over that time?

So here you can see the services that we ordered. It was not a random sample; we used the Alexa top ranking and picked some popular booters, and these were later taken down by the FBI. Then we saw which attacks were effective. Basically, these were the protocols that we noticed had some effect: it was NTP, DNS, CLDAP, and Memcached, which was really popular at the time. Moreover, we chose one VIP service, which was more expensive, at around $180.

So here you can see the results from our self-attack. You see that the biggest attack for the non-VIP service, the standard service, was NTP, which peaked at around 7 gigabits, and Memcached was a lot lower, but it also depended on the time when you started a DDoS attack. You can also see that NTP used around 1,000 reflectors and Memcached a lot fewer. What we also noticed was that with the NTP traffic attacking ourselves ‑‑ we were located at an IXP ‑‑ we got around 80% over transit and most of the traffic seemed to come from Asia, whereas when we attacked ourselves with Memcached, 80% came via the IXP, probably because at the time there were some networks here in Germany, or in Europe, that also had problems with Memcached.

But we learned so far that NTP gave the most significant attacks and was also the one that was working all of the time.

Then we ordered the more expensive service, where they claimed they would provide 80 gigabits of attack traffic, and what we learned was that NTP was around 20 gigabits, and Memcached was lower again at around 30 gigabits. So, still, NTP was the most significant attack for us.

So, we continued to have a look at how many DDoS attacks we see over time. For this we had the IXP data for around three-and-a-half months, the tier 1 ISP for around three weeks and the tier 2 ISP for around four-and-a-half months.

And basically we concentrated on NTP traffic, and we identified the attack traffic by the packet size. We saw roughly a 50:50 split in the distribution of NTP packet sizes in the network, between NTP queries and NTP monlist responses, which we considered to be the attacks, so we used a filter of 200 bytes to say: this is a DDoS attack.
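A minimal Python sketch of this kind of packet-size filter, assuming flow records with a source port, byte and packet counters; the field names and the example values are assumptions for illustration:

ATTACK_MIN_PACKET_SIZE = 200  # bytes; ordinary NTP client/server packets are smaller

def is_ntp_attack_flow(flow):
    # flow: dict with 'src_port', 'bytes' and 'packets' keys (assumed schema)
    if flow["src_port"] != 123:          # only consider NTP traffic
        return False
    avg_packet_size = flow["bytes"] / max(flow["packets"], 1)
    return avg_packet_size >= ATTACK_MIN_PACKET_SIZE

flows = [
    {"src_port": 123, "bytes": 9000, "packets": 100},    # ~90 B: normal queries
    {"src_port": 123, "bytes": 440000, "packets": 1000},  # ~440 B: monlist-like
]
print([is_ntp_attack_flow(f) for f in flows])   # [False, True]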

So then we plotted the attacks that we can see. Here we have the number of sources on the Y axis and the size of the DDoS attacks on the X axis, and each dot is one attack, basically one unique attack target.

So we found 300,000 destinations, and the biggest attacks that we observed, of more than 100 gigabits, were for 224 victims. We also saw a bigger one with 600 gigabits over this whole period, but we also noticed there seemed to be some anomalies; especially, this one seems to be just one source with a high volume of traffic. So, we continued with a really strict filtering: we said it should be more than 1 gigabit and the traffic should come from more than 10 NTP sources, otherwise we considered it odd traffic. With this filtering, this leaves us with around 70,000 targets, and we continued to use these filter criteria to look at how the DDoS attacks were distributed over time, and especially over this takedown event in December 2018.
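A rough Python sketch of this strict per-target filter (more than 1 gigabit and more than 10 NTP sources); the record layout is an assumption, and the aggregation simply sums the per-source average rates:

from collections import defaultdict

MIN_BITS_PER_SECOND = 1_000_000_000   # 1 Gbit/s
MIN_SOURCES = 10

def filter_attacks(records):
    # records: iterable of dicts with 'dst', 'src', 'bytes', 'duration_s'
    per_target = defaultdict(lambda: {"sources": set(), "bps": 0.0})
    for r in records:
        agg = per_target[r["dst"]]
        agg["sources"].add(r["src"])
        agg["bps"] += 8 * r["bytes"] / max(r["duration_s"], 1e-9)
    return {
        dst: agg for dst, agg in per_target.items()
        if agg["bps"] > MIN_BITS_PER_SECOND and len(agg["sources"]) > MIN_SOURCES
    }

# toy example: 20 reflectors sending to one target for a minute
example = [
    {"dst": "192.0.2.1", "src": f"198.51.100.{i}", "bytes": 2_000_000_000, "duration_s": 60}
    for i in range(20)
]
print(list(filter_attacks(example).keys()))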

So, here we have a timeline. You will see the number of attacks per hour. And here at 20th December we had a takedown event.

So usually we see around 60 attacks per hour to distinct targets, and around the day of the takedown this really declines to a lower level of just around 30. But it doesn't last long. It seemed that the decline just lasted around six days and after that it went up again.

So we dug a little bit deeper into the data and we wanted to look for significant statistical change in other vectors. These are the two things we saw; beyond this, there was not a lot more. We saw that at the IXP, the traffic going towards Memcached reflectors really went down right at the event. And at the tier 2 ISP, we saw the same thing for NTP: the traffic towards the reflectors just went down right at this takedown event.

And otherwise, towards the end of our measurement period, we saw that it was going up a little bit again, but the period is too short to say for sure.

So, beyond the traffic perspective, we also had a look at the domain perspective. We had around 140 million domains in the com, net and org zones and we were searching for the websites of these booter services; for this we searched for keywords like "booter", "stresser" and "DDoS". We were looking for other booter web pages and how many are available. We found around 60 of these other booter services, so we learned that the seized booter services were just a small portion of what is available to customers, and also that the seized booter services did not appear to be the most popular according to the Alexa rankings. We also saw one booter service that just became available again under another domain that had already been registered way in advance, so probably this booter service had already prepared for the event that its website might be taken down.
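A tiny Python sketch of such a keyword scan over a domain list; the keyword list here is an illustrative assumption, not the exact set used in the study:

KEYWORDS = ("booter", "stresser", "ddos")

def find_booter_domains(domains):
    # return domains whose name contains one of the keywords
    hits = []
    for domain in domains:
        name = domain.lower()
        if any(k in name for k in KEYWORDS):
            hits.append(domain)
    return hits

print(find_booter_domains(["example.com", "superbooter.net", "ntp-stresser.org"]))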

So even our credentials still worked so we were able to look at the same web page again.

So, in conclusion:

We learned that there are a lot of booter services, that they are user friendly, cheap and popular, and that it's really easy to launch DDoS attacks.

Basically, you get what you pay for, but at the lower bandwidths; at least this is what we learned. And the NTP DDoS attacks were the most prominent at the time we did the measurement.

It might change over time, but in this time it was.

Also, the attack sizes would have been quite critical for most small or medium networks. Beyond that, we saw a lot of DDoS attacks over time: as I said, with this really strict filtering of 1 gigabit from ten sources, we saw around 60 per hour just for NTP. The number is probably a lot higher, because there is probably a lot more DDoS traffic that is just below 1 gigabit.

And for the law enforcement action, we learned that it got quiet afterwards for some of them and attacks went down for short times, so it seemed to have an effect, but rather a small effect on the overall attacks, and some effect on distinct measures of traffic.

So this brings me to the end of my presentation. And I am happy to go into a discussion, and report about your experiences with DDoS attacks. So thank you.

(Virtual applause)

CHAIR: Okay. Thank you, Daniel. So, if anyone has any questions, please put them in the Q&A part. We still have a few minutes before the end of this session.

We have a question from Toma Gavrichenkov of Qrator Labs. He says: "Thank you for your talk. Regarding the domain perspective, I believe a significant portion, if not most, of the booters can be on TLDs other than com or net, namely .cc, .info, .biz and others. Then, also, there is a separate command and control structure from the NTP... Memcached infrastructure. I, too, did not think the procedure had any significant long-term effect on layer 7 attacks. However, as those attacks do not bring in significant traffic to Internet resources, the change in those might have gone under your radar. Perhaps it's an area for more research."

DANIEL KOPP: Yeah, we are continuing in this regard; especially, we would like to learn what the targets of these attacks are. It's difficult to find out, actually. At the moment, it looks to me like there are a lot of targets in the ISP area, but it's difficult to know, because maybe it's just because these dynamic IP addresses are changing all the time.

CHAIR: Next question is from Nikos Kostopoulos, and the question is: "Thanks for the presentation. Will you share datasets derived from your research online?"

DANIEL KOPP: I mean, the paper is online, but for the datasets, we don't have any plans to share them at the moment. But if there are other research groups going in a similar direction, we are happy to, you know, discuss and move forward together, or something, yeah.

CHAIR: Thank you. Next question is from Harshiva Matcha from Twitch: "Is it possible to disclose the ASN ISP IP information of booter services?"

DANIEL KOPP: This is difficult. It would be super interesting to find out more about the infrastructure of these booter services. It's probably very distributed, they are probably also using hijacked systems, and they are hiding behind a bunch of DDoS protection services, so it's really difficult to know the real source of these services. Of course they have to use systems to send the booter traffic to the reflectors, but yeah, it would be very interesting and we're trying to get more insight into that. If we get data from networks, we will probably write another paper or give a presentation at RIPE.

CHAIR: A related question would be: "Do you know the country of jurisdiction as well?"

DANIEL KOPP: The country where it's coming from?

CHAIR: Yeah, the country of jurisdiction. So in which countries are the hosting services and the command and control?

DANIEL KOPP: So, what we recorded when we attacked ourselves is basically just the reflectors that are being used, so they are misused. We didn't see the ones that are sending the traffic towards the reflectors. That would be the interesting part, right? So this is the difficult part, basically. We can't tell where the infrastructure really is.

CHAIR: Roland Dobbins says: "The overall majority of DDoS attacks are related to online gaming and broadband access providers. Is this also what you see?"

DANIEL KOPP: I don't know, we didn't look into that, but, yeah, probably this is where you can also get a lot of money if you do some blackmailing and stuff. But probably also online gaming, like you want to kick another kid out of the game, and things like that, yeah, for sure.

CHAIR: Thank you. Next question is from Boris: You received up to 20 gigabits measuring from a well connected network. Do you expect less well connected ASes to receive similar amounts of traffic?

DANIEL KOPP: Yes, yes. I think ‑‑ I mean, we would have been able to measure a lot more than 20 gigabits, but of course for any small network, the 20 gigabits would still have tried to reach that network and would overload networks on the way if there is a bottleneck.

CHAIR: Okay. Thank you very much.

It looks like that we are done with the questions. So, thank you very much, Daniel.

I would like to remind everyone to please rate the talks. Once you log into the ripe80.ripe.net website and go to the Plenary, you have a "rate" button; please click it for each talk and rate as you like. Additionally, nominations for the Programme Committee are open for approximately another hour and 15 minutes, so if you are interested, please check out the Programme Committee page at ripe80.ripe.net and, if you wish to be nominated, send in your nomination details.

I believe that is all the open administrative business that we need to do. So, thank you very much and we'll see you again in 15 minutes, at 1400 hours.

(Virtual applause)

LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.