
Database Working Group
Thursday, 14 May 2020
4 p.m. (CEST)

WILLIAM SYLVESTER: Welcome everybody to the Database Working Group. Before we get started, I want to thank everybody for showing up. It looks like we have over 300 attendees so far signed in for the Database Working Group. So, thanks everyone for being here, and, with that, we have a great plan here today.

Before we get started, I wanted to thank Rob and Alvero and Oleg for helping out with all of our technical stuff, as well as Oleg, who is going to scribe for us. So, with that, I want to turn the floor over and Edward will take over from here.

ED SHRYANE: Welcome everyone. I am Ed Shryane. I am a senior technical analyst with the RIPE NCC. This is the operational update from the RIPE database for the RIPE NCC.

This is the current lineup for the database team. We have been working from home since mid‑March and this is how we interact every day. I think it's been going pretty well between us since then.

Some statistics first. During 2019, there was a 56% increase in queries. We handled 24 billion in total, about 1,000 a second, and we handled 43 million updates. And recently there has been no downtime for the WHOIS query or update service.

We have had four releases ‑ two major releases, two minor releases ‑ since RIPE 79. WHOIS 1.95: we applied the personal data limit to version queries. This was a bug where, if you were temporarily blocked due to excessively querying for personal data, you were still able to query personal data through version queries. So now all of our query flags are consistent with the GDPR and the personal data limit.

WHOIS 1.96. As we announced at the last RIPE meeting, we deprecated the lower attribute in aut‑num objects, since it didn't have any meaning any more ‑‑

WILLIAM SYLVESTER: Your slides are not updating.

ED SHRYANE: Can you see my WHOIS releases slide?

WILLIAM SYLVESTER: No.

ED SHRYANE: Can you see that?

WILLIAM SYLVESTER: I do see that.

ED SHRYANE: You see my screen?

WILLIAM SYLVESTER: We see a sliver of your screen, there we see your whole screen. We see it now.

ED SHRYANE: Aut‑num ‑‑ great. Okay. Sorry about that.

Back to the WHOIS releases since the last RIPE meeting.

In April, we released 1.97, which opened the NRTM service to everybody. It's part of NWI‑9, and now there is no membership requirement or separate agreement needed; you are just bound by the regular WHOIS database terms and conditions.

We fixed the security issue. Thank you to Cynthia for reporting that.

There was a crashing bug where we didn't handle extremely large objects properly. We have now properly fixed that by limiting the object size to 5 megabytes, which should be more than anyone would ever need.
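
The 5‑megabyte cap can be illustrated with a simple client‑side check; a minimal sketch, assuming a made‑up constant and helper name (this is not the actual WHOIS implementation):

```python
# Sketch of the object size cap mentioned above; constant and helper
# names are illustrative, not the real WHOIS code.
MAX_OBJECT_BYTES = 5 * 1024 * 1024  # the 5-megabyte limit from the talk

def check_object_size(rpsl_text: str) -> None:
    """Reject an update whose serialised object exceeds the size cap."""
    size = len(rpsl_text.encode("utf-8"))
    if size > MAX_OBJECT_BYTES:
        raise ValueError(f"object is {size} bytes, over the {MAX_OBJECT_BYTES} byte limit")

check_object_size("person: Test Person\nnic-hdl: TP1-TEST\n")  # a small object passes
```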

Finally, we added rate limiting, to help protect against denial‑of‑service attacks. That was already there for port 43, but we have added it now for HTTP and the new open NRTM service also.
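
Rate limiting of this kind is often implemented as a token bucket; a minimal sketch under that assumption (the rates here are invented, not the NCC's actual limits):

```python
import time

# Sketch of per-client rate limiting, as described for port 43, HTTP and NRTM.
# A simple token bucket; the rate and capacity values are illustrative.
class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)  # allow 5-query bursts, 10/s sustained
```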

Finally, there was a point release last week ‑ thank you to Marco for bringing this up last December. We fixed updates signed with elliptic curve public keys, so they are now fully supported.

We had supported those keys as key CERT objects but they weren't fully working as signed updates.

WILLIAM SYLVESTER: Your slides still aren't updating. Can you just make your middle slide larger, maybe? For anyone playing bingo, I hope you just won.

ED SHRYANE: I don't actually see the entire screen.

WILLIAM SYLVESTER: I can see your entire screen now. Can you try changing slides? We can at least see the slides change. Try to change one more real quick.

ED SHRYANE: What I'll do is open the PDF. Is that better?

WILLIAM SYLVESTER: We only see Keynote.

ED SHRYANE: I'll try... it must be just very, very slow. Apologies.

That covered the WHOIS releases.

I'll move on to the database cleanups.
We cleaned up ‑‑ removed the lower attribute from aut‑num objects. We removed an obsolete remark from the 19,000 INETNUM and 15 aut‑num objects with the legacy status. And we implemented an internal API to support GDPR requests for personal data. It allows users to search for an e‑mail address or a person's name, looks at current data and also historical data, and returns a full list. For now it's internal, but it supports our staff in investigating requests.
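
The internal GDPR lookup described above can be sketched as a search over current and historical object versions; the data model here (a list of (version, text) tuples) is an assumption for illustration only:

```python
# Sketch of the internal GDPR lookup: given an e-mail address or a name,
# find the object versions (current and historical) that mention it.
# The version-list data model is an assumption, not the real internal API.
def find_personal_data(term: str, versions: list[tuple[int, str]]) -> list[int]:
    """Return the version numbers whose RPSL text mentions the search term."""
    needle = term.lower()
    return [v for v, text in versions if needle in text.lower()]

history = [
    (1, "person: Jane Doe\ne-mail: jane@example.net\n"),
    (2, "person: Jane Doe\ne-mail: jd@example.net\n"),
]
assert find_personal_data("jane@example.net", history) == [1]  # only the old version
```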

We implemented the 2018‑06 non‑auth route cleanup. We deployed this in January. Since then it has been responsible for two thirds of all objects deleted in the non‑auth source, so I think it has been a real success. The numbers are low, but I expect this to increase as adoption increases. Of the 64,000 objects remaining, 80% of the routes are in the AFRINIC region, so, in particular for AFRINIC, once RPKI take‑up increases, I think that number will decrease.
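
Under 2018‑06, a non‑auth route is removed when a valid RPKI ROA proves it invalid (wrong origin AS, or a prefix more specific than the ROA allows). A rough sketch of that validation logic, with invented ROA and route data:

```python
import ipaddress

# Sketch of RPKI origin validation as used by the 2018-06 cleanup idea.
# ROAs are (prefix, max_length, origin_as) tuples; all data is illustrative.
def rpki_invalid(route_prefix: str, origin_as: int,
                 roas: list[tuple[str, int, int]]) -> bool:
    """True if the route is covered by a ROA but matches none (RPKI invalid)."""
    route = ipaddress.ip_network(route_prefix)
    covered = False
    for roa_prefix, max_len, roa_as in roas:
        roa = ipaddress.ip_network(roa_prefix)
        if route.subnet_of(roa):
            covered = True
            if roa_as == origin_as and route.prefixlen <= max_len:
                return False  # valid under this ROA
    return covered  # invalid only if some ROA covers it and none matched

roas = [("192.0.2.0/24", 24, 64496)]
assert rpki_invalid("192.0.2.0/24", 64511, roas)        # wrong origin: invalid
assert not rpki_invalid("198.51.100.0/24", 64511, roas)  # not covered: unknown, kept
```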

Moving on to numbered work items. We have made progress on three of them. NWI‑8, where we synchronise users from the LIR Portal to the default maintainer: it's now in use by nearly 300 LIRs. We have deferred work on Phase 2, the authentication groups, for now.

For NWI‑9, which is the in‑band notification mechanism for updates: we have opened the NRTM service, as I mentioned, and we have had nearly four times the number of distinct NRTM clients in April. So I think that's a really good response to opening it.

And we have deferred any work on the second part ‑‑ we need a decision on what the next generation NRTM service should look like.

Denis will cover those in his presentation later.

Finally, NWI‑10, definition of country. We added the country attribute to the organisation object last December. Work is in progress on the implementation, where we're going to populate the country code from the organisation's legal address, and we'll be able to share more information on that soon.

Next is locked person objects, as I have presented at past RIPE meetings. We plan to do a cleanup of locked persons. We found that 90% of references to locked persons are from IPv4 assignments. We can't clean up locked persons until references are updated, so we're going to reassign those locked persons to the referring object's maintainer. Since RIPE 79, we have implemented the code changes, announced the implementation plan and published an article that explains what we're doing, and in the last couple of weeks we e‑mailed the affected LIRs and have taken their feedback into account. That's been very useful.

Remaining work:
We plan to announce again what we have left to do. We're going to e‑mail the affected LIRs and also the maintainers of the affected objects, including the specific person objects that we're updating.

We plan to perform the cleanup shortly after this RIPE meeting. And subsequently we want to ask the LIRs to review the contact details on those assignments, now that they can update the locked persons they are responsible for themselves.

Database website improvements.
We have optimised the query page. Using analytics, which we find really helpful, we determined that 70% of users of our website just visit the query page once and then leave. They don't come back. So we have been focussing on loading this page quickly for those users.

We have reduced the page size to less than a megabyte, and using distributed testing tools we have found that it generally takes less than one second to load the page. So hopefully the experience is a lot better for most users.

In My Resources, we added associated route(6) and domain objects to the more‑specific object screens there, so users don't have to flip over to the search page to find related objects when they are in My Resources.

We improved "forgot maintainer password". It's now more likely that users will get an automated reset e‑mail to their e‑mail address and won't need to follow the manual process. That's good for users ‑ they won't have this back and forth trying to prove who they are ‑ and it also means there will be fewer tickets for RIPE NCC staff.

We're now supporting compressed responses for My Resources and the IP Analyser. Certain users have a lot of resources, it takes time to transfer all this data to them, and compressed responses really help with that.
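
Compressed responses pay off here because resource listings are highly repetitive; a small illustration with synthetic data (the payload is invented, not real registry output):

```python
import gzip
import json

# Illustration of why compressed responses help users with many prefixes:
# repetitive JSON compresses very well. The resource list is synthetic.
resources = {"ipv4": [f"10.{i}.0.0/16" for i in range(1000)]}
raw = json.dumps(resources).encode("utf-8")
compressed = gzip.compress(raw)
print(f"ratio: {len(compressed) / len(raw):.2f}")  # well below 1 for data like this
```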

And finally, this year the RIPE NCC is working on a common website template, so we will take advantage of that and finally be able to support mobile browsers on the website.

Okay, just quickly a few proposed changes.

Firstly, client certificate authentication. This was suggested on our GitHub page. The idea is to allow clients to supply an X.509 certificate over HTTPS for WHOIS updates. We'll use a standard feature of TLS negotiation to specify the certificate, and since we use the public key, we don't reveal any credentials when authenticating the update. We reuse existing WHOIS functionality, because we already support X.509 certificates. And it's opt‑in: you must choose to configure your maintainer to use this.
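
On the client side, presenting an X.509 certificate during the TLS handshake might look like this sketch; the file names are placeholders and this is not the NCC's actual interface:

```python
import ssl

# Rough sketch of the client side of the proposed opt-in scheme: the client
# presents an X.509 certificate during the TLS handshake, so no password
# travels with the update. The certificate/key paths are placeholders.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")  # placeholder paths
# An HTTPS request made with this context would then present the certificate
# during TLS negotiation, and the server authenticates the update by public key.
```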

So we have tested that, and it works. We are waiting for a security audit just to confirm our implementation is not going to open any holes. We hope to have it available for testing soon, and we'll announce this to the Database Working Group.

Secondly, the domain object cleanup. We found that there were over 1,000 domain objects ‑ the numbers are small, about 1,000 out of 800,000 domain objects ‑ that don't have any nserver attributes. This goes back to 2012, when we changed the rules but didn't clean up existing data, so there is a bunch of domain objects that are now inconsistent for various reasons.

They are listed there ‑ early registration, how they were maintained, an earlier consistency cleanup, and miscellaneous. So there is ancient history in there that we need to clean up so they conform to the current rules. We'll announce that to the Working Group, notify the affected maintainers ‑ the domain object maintainers and also the equivalent resource maintainers ‑ and then subsequently delete those domain objects.
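
The consistency check itself is simple; a sketch that flags domain objects with no nserver attribute, using deliberately simplified line‑prefix parsing (the sample objects are invented):

```python
# Sketch of the domain cleanup check described above: flag domain objects
# that have no "nserver:" attribute. RPSL parsing is simplified to prefixes.
def missing_nserver(rpsl_text: str) -> bool:
    """True if the domain object has no nserver attribute at all."""
    return not any(line.startswith("nserver:") for line in rpsl_text.splitlines())

good = "domain: 2.0.192.in-addr.arpa\nnserver: ns1.example.net\n"
bad = "domain: 2.0.192.in-addr.arpa\nremarks: no name servers\n"
assert not missing_nserver(good)
assert missing_nserver(bad)
```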

Upcoming work, just to summarise: these are our provisional priorities and we may not get it all done, but we'll do our best.

Our priorities are Cloud migration ‑ there is a separate presentation on that ‑ and numbered work items. I'd like to hear some feedback from the community on the open numbered work items; once we have functional details and an implementation plan, we can pick those up.

We have the RIPE database cleanups I already mentioned, usability improvements for our website, and RDAP ‑ we want to improve that further and integrate with the other RIRs.

And there is some ongoing work around the default maintainer and forgot maintainer password.

And finally, I'd like to mention that the certified professionals project has been live since 1 April, and the very first certification is for the RIPE database. LIRs were sent three vouchers, two valid to the end of the year, to take advantage of this course. The RIPE 80 goodie bag also contains a voucher with limited validity, until 22 May. Progress so far: there have already been 270 exams, with a 74% pass rate.

So that's the operational update from me. Any questions?

WILLIAM SYLVESTER: Any questions? I don't see anything on chat.

ED SHRYANE: Okay. I will go ahead with my Cloud migration presentation.

WILLIAM SYLVESTER: Sure.

ED SHRYANE: Again I have a PDF, I will double‑check that you can see this.

WILLIAM SYLVESTER: Yeah. I see the cover.

ED SHRYANE: So, in the activity plan for this year, the RIPE NCC has included a Cloud strategy, and I will make some quotations from the activity plan in case some of our users have not seen it.

"As part of the strategy for investigating Cloud storage solutions, we will assess the suitability of moving some RIPE database elements onto third‑party Cloud infrastructure and share our findings with the RIPE community."

So, to be clear: for the RIPE database, we're planning a functional review of the feasibility of a Cloud migration and we plan to implement a pilot. We're not moving the production WHOIS to the Cloud at this point, but we will assess the suitability and share our findings.

So more extracts from the activity plan.

To summarise: "Most of our services run on a complex technical infrastructure that we maintain entirely in‑house."

So that means the servers, the network, the configuration, everything is done internally.

In 2020, we are going to look at how a Cloud migration would benefit us. And we will discuss potential changes further with the community before any significant actions are taken.

So these changes need to be cost‑effective. We need to ensure the same high level of resilience, and there will be costs involved; they are listed in the activity plan under notable investments.

Apologies for the wall of text on these slides, but I thought it was really relevant to put those in the presentation.

So, I'll move on. It's just the main points from the activity plan.

So next, I'll just summarise the current in‑house infrastructure that we have and the drawbacks.

So, our current infrastructure has limitations. WHOIS has been set up like this for a very long time. Everything is done in‑house. We have two data centres in Amsterdam and our servers are split between them.

There is a large capital expenditure to maintain and upgrade them, and there will be ongoing costs in the future to keep upgrading them.

One downside we have already seen is that physical hardware is inflexible. It cannot easily respond to changing requirements. We have had issues in the past where new features have needed us to expand our servers, or where limitations of the frameworks that we use have exposed the limits of our infrastructure ‑ slow performance, or lack of memory. For high traffic as well, we find that the number of servers we have is limited and we can't easily scale.

There is a high maintenance cost to this. Doing everything in‑house is very time‑consuming and unproductive. It's not our core business ‑ we prefer to focus on WHOIS functionality and serving the database rather than applying patches to servers.

There is an opportunity cost here, so, time spent on patching and tweaking servers is a missed opportunity for improving WHOIS.

And finally, we have needed additional infrastructure for disaster recovery, so that in case the worst happens in Amsterdam we have infrastructure elsewhere to fail over to.

Moving on to Cloud migration.
And just to go through some considerations for Cloud migration.

First of all, data protection. It's a primary concern for us. There is a lot of personal data in the RIPE database ‑ as was presented at previous meetings, it's over 2 million person objects now ‑ and we also have a lot of credentials and passwords in the database. We have implemented the GDPR recommendations presented at RIPE 76, but during any migration we must continue to protect personal data.

Where the data is located must be covered by the GDPR and the Cloud provider also must comply with the GDPR. And as part of this, we will ask our legal team to review our Cloud migration plan.

Information security:
It's already a concern for us with our existing infrastructure, and a data breach is a possibility no matter where your infrastructure is, but we must ensure that we don't open any gaps during a migration. We must minimise the risk of a data breach ‑ there is so much personal data and so many credentials in the database.

We will take measures to protect data. Cloud providers generally provide auditing and anomaly detection to help with that, and there are technical measures we can implement, such as encrypting data at rest and storing secrets securely.

So, even if you do get access to our servers, there won't be anything useful there. And we will ask for a security review of our Cloud migration plan.

Scaleability:
This is one attractive advantage of a Cloud migration. A Cloud environment allows us to react to changing load levels, both increasing and decreasing. We will have better denial‑of‑service protection: a Cloud provider can do better mitigation than an in‑house solution, as they have a better overview of where the traffic is coming from. And it's something we have been hit with in the past.

Also, a content delivery network would be very useful for us. Currently our website is all served from the same location, whereas if you can distribute content closer to the user, it's faster for the user and it also takes traffic away from our servers.

Reliability. It goes without saying that we would need our reliability to be as good as or better than it currently is. A Cloud solution would allow us to provide enough query servers for failover and also allow us to fail over the main servers.


Also, it allows us to distribute servers geographically, so for disaster recovery we can switch to a different data centre.

Finally, in case the Cloud provider goes down, we plan to fail over to internal servers, so we will continue to run some infrastructure internally.

Costs. Costs are a real concern for a Cloud migration. Cloud offers many features and services, but you generally pay for what you use, and that can be an advantage or a disadvantage.

During the transition, we may have double costs. We aim to remain within our existing operational costs, but we can expect an increase before we can downsize our own internal infrastructure.

Additionally, there will be a cost in terms of additional workload for the team in getting familiar with the Cloud environment.

A couple of things we have in mind to keep Cloud costs under control: we would have an internal cost review before going live, and ongoing cost reviews to optimise our service, so we're not paying for something we're not using. We would also weigh the tradeoff between rolling our own solution and paying for a provided Cloud service.

Those are a couple of the things that we were thinking of for a functional review of the Cloud migration.

So, in summary:
Next steps:
So in the next couple of months ‑ we're aiming for before RIPE 81 ‑ we would prepare a detailed functional assessment of a Cloud migration, really going through everything that I have just discussed, including a legal review, an information security review, a cost review, and an architecture review as well. That's also important, because we can move our existing solution as‑is or we can rearchitect; that's something we need to review.

Secondly, we need to implement a test environment in the Cloud ‑ a realistic test environment, something like release candidate. We wouldn't be putting any personal data or credentials at risk, and it would allow us and the community to validate the approach.

Finally, we'd like to share the findings with the RIPE community, and hopefully we can do that at RIPE 81, but keeping in mind that it's four months away, we will see how much we can get done before then.

Thank you for your time. Any questions?

WILLIAM SYLVESTER: So, on the Q&A from your first presentation, Cynthia had a comment saying: "Thank you for fixing the huge update bug", and she verified in the testing instance that it does seem to be fixed.

ED SHRYANE: That's really good to hear, I am really glad to get bug reports from the community and it's something we were able to fix in the next release, so thank you.

WILLIAM SYLVESTER: Robert Stacks asks: "Can you please go into detail about the hardware ‑ how many physical systems are we talking about currently? And is virtualisation already being used?"

ED SHRYANE: The answer to the second part is yes, we are using virtualisation. We use it extensively for our test environments; release candidate is also virtualised, as is our production test environment. Everything apart from the production RIPE databases is virtualised, and it's been like that for many years. So we do have some experience in doing this already.

Secondly, on the physical infrastructure, I think it's better to share that with the community on the mailing list. I haven't prepared anything in the slides, so maybe I can share some details on the mailing list, if that's okay.

LUISA VILLA: Great. And then from IRC, John Bond:

"Sorry if I missed it, but is there a short list of Cloud providers being considered?"

ED SHRYANE: We haven't made a decision, and I wouldn't like to call out anyone in particular, but we are looking at Amazon and Google Cloud at the moment, I think they are already being used internally within the RIPE NCC, so I think it would make sense to take advantage of the in‑house experience there already. But we haven't made any decision, so in the coming months that will become clear, hopefully.

WILLIAM SYLVESTER: And then Cynthia asks: "Have you considered Open Sourcing the web updates?"

ED SHRYANE: Yes, definitely. I think that would be really useful, so our community can reuse some of our functionality. It's also better for us, as developers, to be working in the open. It allows for auditing as well, so users can see how something works and more easily report bugs. It's definitely something we're considering.

WILLIAM SYLVESTER: And Daniel from Twitch: "How do you plan to maintain the internal but scaled down in the future infrastructure in the unlikely event of a Cloud provider outage? Do you plan to use a hybrid Cloud solution?"

ED SHRYANE: Yes, exactly. That's the idea. Our traffic will normally be served through the Cloud, but for disaster recovery we would fall back to virtualised infrastructure internally. So hopefully we can keep the costs down, but in a disaster recovery situation it will be available.

WILLIAM SYLVESTER: Great. Well, I think, with that, no more questions. Ed, thank you very much. Next is Michael with PII privacy leak in description attributes.

MICHAEL KAFKA: Let me clean up my desk a little bit before I share my screen. You don't need to see everything.

Unfortunately, my Zoom window is gone now, but you can see my screen?

WILLIAM SYLVESTER: Yes, we can see your screen.

MICHAEL KAFKA: Wonderful. What I want to talk about is privacy concerns, GDPR concerns, in the database. The talk was written at the last minute ‑ the last version was actually finished yesterday.

The RIPE database is there for a good reason, obviously. It's publicly available, accessible via a couple of protocol access methods without authentication or authorisation, and it's quite obvious why. We have different object types in the database ‑ organisations, persons, roles, IRTs ‑ that contain PII and contact info. These are governed by mature and robust data protection mechanisms; Ed just mentioned a couple of new mechanisms deployed. What I want to talk about is PII in other objects and attributes which are currently not sufficiently covered by the data protection and privacy policies.

So, the whole story started a couple of months ago when I updated my private Internet access contract, and I found out that my service provider was automatically publishing my static IP address, full name, zip code, etc., which is definitely not what I was comfortable with.

So I complained to my provider. They updated it immediately, within less than half an hour: the object was updated, and the personal information was replaced with some generic name pointing to the provider. But Ed also mentioned the history ‑ the information is still available in the history, again without authentication, without any specific protection mechanism, unfortunately.

This does not affect only my ISP. We asked around among a couple of our friends in Germany and Austria and got a very similar picture. I started to search manually on the web interface of the database, and within 15 or 20 minutes I found a lot of INETNUM objects with a static IP address, the full name and very often the street address, city and zip code, from several countries ‑ not only Austria, but a couple of countries off the top of my head where I found these attributes. This seems to be a quite common issue. Actually, because of the number of cases, it could even constitute a GDPR high‑risk factor; we'll see later what that could possibly mean.
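
A search like the one described can be automated; a deliberately narrow sketch that flags descr/netname lines containing things that look like e‑mail addresses or phone numbers (the patterns and sample object are illustrative, not a complete PII detector):

```python
import re

# Sketch of an automated version of the manual search described above:
# scan descr/netname attributes for content that looks like personal data.
# Only e-mail and phone patterns are checked, as a narrow illustration.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # e-mail address
    re.compile(r"\+\d[\d ()-]{6,}\d"),        # international phone number
]

def suspicious_attributes(rpsl_text: str) -> list[str]:
    """Return descr/netname lines that appear to contain PII."""
    hits = []
    for line in rpsl_text.splitlines():
        if line.startswith(("descr:", "netname:")):
            if any(p.search(line) for p in PII_PATTERNS):
                hits.append(line)
    return hits

obj = "inetnum: 192.0.2.0 - 192.0.2.255\ndescr: Jane Doe jane@example.net\n"
assert suspicious_attributes(obj) == ["descr: Jane Doe jane@example.net"]
```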

Our assumption is that these objects are created by some automatic scripts somewhere in the background. We can also assume that these scripts have possibly been in use for many years ‑ the suffix of the netname points back to a very old product name, so we can assume they were created and have been in use for many years. That also lets us assume that the existence of the script has been forgotten and is not included in the modelling of the business processes of an LIR, which in turn means that the LIR is not capable of fully exercising data subjects' rights ‑ for example, but not limited to, the right of access, Article 15 GDPR.

We do not, I emphasise we do not assume malicious intent here. That's most likely just negligence.

So, controllership, according to the GDPR, is quite important when it comes to liability. Of course the LIR is a controller, because the information is created by the LIR. The processing is definitely permitted ‑ there is no question about it ‑ but the publication is completely unnecessary.

The RIPE NCC is also a controller ‑ thanks again to Athina and many others who yesterday took a few minutes to exchange thoughts on that.

Definitely this is a joint controllership according to Article 26, which means that the RIPE NCC, together with the LIR, is liable for PII published in the description and netname attributes.

So, when I look at data forwarding and data protection, there are also a couple of other regulations possibly violated.

Let's assume we have one of the LIRs who is not fully aware of the situation. This LIR could not possibly fulfil the right of access for the data subject, Article 15, because if the LIR is not aware that the data has been published, it cannot provide access to the data. Very similar is the situation with Articles 13/14, the right of information for the data subject: how is this data or information processed, and where has it been forwarded to?

For example, queries to the RIPE database are anonymous, without any logging, without any authorisation. We have NRTM, we have database downloads. We assume that the right of information is violated and cannot be fulfilled. And the data protection cannot be guaranteed in this situation, because we do have quite good mechanisms when it comes to person objects, role objects, etc. ‑ with rate limiting, and stripping of information from the history and from NRTM ‑ but no such mechanisms are in place for INETNUM or INET6NUM objects, especially the description and netname attributes.

So, our conclusion when it comes to PII and GDPR:
The publication of end‑customer information ‑ I emphasise, of a natural person ‑ is not lawful. It's a different situation when it comes to company names; the GDPR specifically protects natural persons. So this is not lawful.

The processing is justified by Article 6, fulfilling a contract. But the publishing by the LIR of PII of end customers ‑ I emphasise, of end customers, who do not have a contractual relation with the RIPE NCC ‑ is a violation of the data minimisation principle, Article 5.

We also see that the data breach notification, which is a proactive duty of the controller, is endangered. Article 33 GDPR requires a controller to notify the authorities if PII leaks into public, and Article 34 applies in high‑risk cases. We didn't examine that in detail yet, but it might be a high‑risk case, which probably means Article 34 ‑ a duty to inform the data subject ‑ might come into the discussion. Again, because the controllers ‑ the LIR, and possibly also the RIPE NCC ‑ are most likely not aware exactly which type of data constitutes a breach of privacy law.

We also see that Article 17 GDPR allows a data subject to request the removal of information if it's not necessary and not fully lawful. This cannot be fully exercised, because the LIR does not have full control of the database: the LIR cannot update the attribute, the history is under the control of the RIPE NCC, and currently there are no immediate interfaces for the LIR to have influence over the information in the history. Athina referred me to the data removal procedure. This covers the removal of information ‑ specifically e‑mail addresses, telephone numbers, names ‑ but does not explicitly cover and describe what to do when an end customer is affected, an end customer in the description and netname attributes.

So, from the perspective of the GDPR, there are additional, unnecessarily complicated steps for a data subject to finally get all data deleted. So I believe the INETNUM and INET6NUM objects with the description attribute and the netname attribute are in many cases ‑ I emphasise not everyone does it; it's just a few service providers and LIRs who publish this information ‑ in violation of the GDPR, and the split control over the database between the LIR and the RIPE NCC also makes it more complicated to execute the right to be forgotten.

So we have included our contact information. You are welcome to reach out to us if you want to know more, and I hope we still have a minute or so for Q&A.

WILLIAM SYLVESTER: We have I guess a comment here from Maria Stafyla, senior legal counsel at the RIPE NCC:

"Hi Michael, thank you for your presentation. We wanted to provide clarification on a few points. 1. The description attribute is never intended to contain personal data, especially in an Inetnum object."

MICHAEL KAFKA: We know about that situation.

WILLIAM SYLVESTER: "Then it can include a short description in relation to that object. 2. The person who inserts personal data into the RIPE database is responsible for ensuring the relevant legal conditions are met. However, we do have a process that allows for the removal of that data."
And there is a link on the RIPE website.

MICHAEL KAFKA: There is a process, and this is the problem: the end customer of a service provider has a direct contract with the LIR and must take some additional steps, which are quite complicated, to actually remove that information. So we recommend having a direct possibility for the LIR to also remove that information.

WILLIAM SYLVESTER: Just to finish up her comment: "Upon receiving a request for personal data to be removed, we'll first ask the party responsible for maintaining the information to replace it with something that is more appropriate or accurate. If this is unsuccessful, we'll remove the relevant data ourselves" ‑ as in, the RIPE NCC will.

We're a little pressed for time. I know that we're right up against the coffee break. We wanted to hit NWIs, so, Michael, thank you very much.

MICHAEL KAFKA: Thank you for inviting me.

WILLIAM SYLVESTER: Sorry, real quick, there is ‑‑ if you want to submit a request, you can e‑mail NCC [at] ripe [dot] net. "Regarding historical contact information of resource holders, there are already some safeguards in place; recent changes brought database queries into alignment with the related FTP files, where personal data are filtered out... etc. Our goal is to strike a balance between satisfying the community's requirements for historical registry information and privacy restrictions."

MICHAEL KAFKA: I just see some comments in the chat I'd like to comment on.

WILLIAM SYLVESTER: Let's make it quick.

MICHAEL KAFKA: On PII and private personal data: there is a difference, of course, but I think both are affected.

WILLIAM SYLVESTER: And George Michaelson from APNIC: "Thank you for this presentation, which I think represents a problem for the NRTM feed receiver which maintains history state."

MICHAEL KAFKA: I suggest we continue this discussion on the RIPE mailing list.

WILLIAM SYLVESTER: That would be great.

MICHAEL KAFKA: I am subscribed, so see you there. Goodbye. Thank you.

WILLIAM SYLVESTER: Real quick, Denis, we're up on our NWIs.

DENIS WALKER: Hi. I am Denis Walker, one of the co‑chairs of the Database Working Group. I want to quickly run through some NWIs ‑ the ones that are currently outstanding ‑ and some things that are not yet NWIs.

NWI‑3 is the AFRINIC IRR homing. Now that we have RIPE non‑auth, the question the community needs to answer is: do we actively need to promote this project any more, or can we now effectively close it?

We won't have time to discuss these online now, but please make some comments on the mailing list on these issues.

NWI‑8, the LIR SSO authentication groups. We suggested implementing this in two stages. Phase 1 has already been done ‑ that's the default maintainer group. Phase 2 was user‑defined authentication groups.

Nobody objected to this, but really only the proposer actually said that it was a good idea. So, if we want to go ahead with this and call on the RIPE NCC to give resources to implement it, then we need a few more people to actually say: yes, this is a good idea, it would be useful, let's do it.

NWI‑9, the in‑band notification mechanism.
As Ed said, NRTM is now open to anyone. But for the next generation of the NRTM protocol, we have two versions being proposed: one is a variant of the RPKI RRDP protocol and the other uses HTTP WebSockets. These are very different technologies. They connect differently ‑ one connects intermittently and does a catch‑up; the other is permanently connected. The question is, how do we move forward with these two proposals on the table? Do you want the RIPE NCC to evaluate the two technical options and come up with some suggestions on each of them, or does someone in the community want to evaluate them? Do people feel that one is infinitely better than the other and therefore that's the one we should go with? We need some direction now from the community about how we actually move forward with these two alternative technologies.
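
The "connect intermittently and catch up" model can be sketched with serial‑based deltas, which is roughly how NRTM‑style mirroring works; the feed structure below is a stand‑in purely for illustration, not either proposed protocol:

```python
# Sketch of serial-based catch-up mirroring: the client remembers the last
# serial it applied and fetches only newer deltas on each (re)connect.
# The delta feed is a plain dict here, standing in for a real protocol.
class MirrorClient:
    def __init__(self):
        self.serial = 0    # last serial applied locally
        self.objects = {}  # mirrored key -> RPSL text

    def catch_up(self, deltas: dict[int, tuple[str, str, str]]) -> None:
        """Apply all deltas newer than our serial: serial -> (op, key, text)."""
        for serial in sorted(s for s in deltas if s > self.serial):
            op, key, text = deltas[serial]
            if op == "ADD":
                self.objects[key] = text
            elif op == "DEL":
                self.objects.pop(key, None)
            self.serial = serial

feed = {1: ("ADD", "AS64496", "aut-num: AS64496\n"),
        2: ("DEL", "AS64496", "")}
client = MirrorClient()
client.catch_up(feed)
assert client.serial == 2 and client.objects == {}
```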

Some things are not yet NWIs. It was asked if we could find previous users of an IP address. This obviously involves some serious privacy concerns. Maybe we could present some organisation details and filter out the personal data. But again, we need to know: how important an issue is this? How many people really want to be able to find a previous user of an IP address? Is it important?

So, we want some feedback on that.

International domain names. There is an idea of using Punycode for this. Again, give us some feedback.
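
For reference, Python's built‑in idna codec already produces the ASCII‑compatible (Punycode) form of an internationalised label, which is the kind of representation a domain object could store:

```python
# Punycode illustration for internationalised domain names: the idna codec
# converts a Unicode label to its ASCII-compatible encoding and back.
label = "münchen"
ascii_form = label.encode("idna").decode("ascii")
print(ascii_form)  # xn--mnchen-3ya
assert ascii_form.encode("ascii").decode("idna") == "münchen"
```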

API keys: there was a proposal to have API key mechanisms extended to syncupdates and the RESTful API. Again, give us some feedback. Sorry I have had to rush through this, but we're out of time. Go to the mailing list and please let us know what you think of all these points.

WILLIAM SYLVESTER: We'll follow this up on the mailing list. George Michaelson had one comment: "Huge advantage of RRDP is that it is well‑tuned to CDN distribution."

With that, thank you for attending the Database Working Group. We look forward to seeing you next time and hope everyone is safe and well. And let's take all these discussions onto the mailing list. Take care.

(Coffee break)