Internet resilience is the ability of a network to maintain an acceptable level of service at all times. The Internet plays a critical role in society and the COVID-19 pandemic has reinforced the importance of reliable and stable Internet connectivity. However, not all countries have Internet infrastructure that is robust enough to provide an acceptable level of service to users.

In Africa, Internet resilience has not been sufficiently measured to date. So, as part of the Internet Society’s Measuring the Internet project, we want to find out how well African countries cope with Internet outages or disruptions and how resilient networks in Africa really are.

We’re going to seek these answers through the Measuring Internet Resilience in Africa (MIRA) project, by evaluating the capability of a country to provide continuous, stable, and reliable Internet connectivity.

 

How the MIRA Project Measures Internet Resilience in Africa

The MIRA project is a joint initiative between African Network Information Centre (AFRINIC) and the Internet Society. The project uses Internet measurements gathered by measurement devices, called MIRA pods, located within African countries in order to:

  • Determine levels of Internet resilience in African countries over time by recording specific metrics, including throughput and latency (the time it takes to reach various Internet destinations).
  • Increase the number of Internet measurement vantage points in Africa, i.e., the places from which measurements are taken.
  • Make the data available to everyone, everywhere on the Internet Society Pulse platform.
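The exact software running on the MIRA pods is not described here, but as a rough, hypothetical illustration of the latency metric listed above, the sketch below measures TCP connect time to a destination and summarizes the samples. The host, port, and summary fields are placeholders, not the project's actual methodology:

```python
import socket
import statistics
import time

def tcp_connect_latency_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Measure one TCP connection set-up time to host:port, in milliseconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0

def summarize(samples_ms: list) -> dict:
    """Summarize latency samples the way a resilience dashboard might."""
    return {
        "min": min(samples_ms),
        "median": statistics.median(samples_ms),
        "max": max(samples_ms),
    }

# Example (requires network access):
# samples = [tcp_connect_latency_ms("example.org") for _ in range(5)]
# print(summarize(samples))
```

Repeating such measurements from many vantage points over time is what makes trends in resilience visible.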

 

Who Can Use the Data from the MIRA Project?

The data presented will be freely available to all and can be used by anyone to gain insight into the availability and resilience of the African Internet, including:

  • Network operators and Internet Service Providers (ISPs) seeking to improve their services.
  • National Regulatory Authorities (NRAs) that define the legal and operational environments for the Internet.
  • Researchers and engineers aiming to quantify and improve Internet resilience and performance in Africa.
  • Internet users, researchers, and engineers seeking to learn more about the Internet landscape in Africa.

 

What Will Be Measured?

Internet resilience encompasses many underlying components, ranging from the resilience of physical Internet infrastructure (such as undersea or terrestrial cables) to market resilience and quality of service (QoS), which includes performance, uptime, and available bandwidth.

As part of the MIRA project, we will measure:

  • The availability and diversity of the physical Internet infrastructure.
  • The quality of service of the network from the user’s perspective.
  • The availability and efficiency of the peering infrastructure, including the number of IXPs and ISPs.
  • The availability and performance of the DNS ecosystem (a key component of Internet performance and resilience).

 

What We’ve Done So Far

MIRA is already collecting – or preparing to collect – throughput and latency measurements in Benin, Burkina Faso, Congo DRC, Kenya, Madagascar, Nigeria, Rwanda, Tunisia, and South Africa using measurement data from a third party, M-Lab. We’ll soon be adding data from the RIPE NCC’s RIPE Atlas. These measurements are being carried out in these countries by dedicated Raspberry Pi devices that we call MIRA Pods. The initial data will be available shortly. 

 

Where Can I Find MIRA Data?

The data is available on the Internet Society’s Pulse platform so that everyone can easily find the data they need about the state of Internet resilience in the first set of countries in which we’re carrying out measurements.

 

How Can I Participate?

To get a robust overview of the Internet’s resilience in Africa, it’s important to increase the number of vantage points, i.e., the networks from which measurements can be carried out. We are gradually rolling out the measurement infrastructure and will need help from volunteers who can host lightweight probes – the MIRA Pods mentioned above – on their home networks. The probes need to be in home networks in order to capture the real-world experiences of Internet users.

 

How Can I Find out More about MIRA?

If you would like to learn more about the technology and methodology behind the project, please read the detailed project overview.

For technical details about the MIRA project and the measurement infrastructure, visit our page on GitHub.

If you would like to host a MIRA Pod on your network, please contact us at pulse@isoc.org for more details.

You can find out more about the project in the Internet Resilience section on the Internet Society Pulse platform.

 


 

By Kevin Chege

Director, Internet Development

Internet Society

 

 

 

 

IPv6 Extension Headers

IPv6 Extension Headers have been part of the core IPv6 specification, as detailed in RFC 8200. They carry supplementary information about how IPv6 packets are processed or routed, such as Hop-by-Hop Options. Extension headers are not inserted or deleted by nodes along a packet's delivery path.

With one exception, they are also not examined or processed until the packet reaches the node (or each of the set of nodes, in the case of multicast) identified in the Destination Address field of the IPv6 header. The exception is the Hop-by-Hop Options header, which may be examined or processed by any node along the path.

The Hop-by-Hop Options header, when present, must immediately follow the IPv6 header. Its presence is indicated by the value zero in the Next Header field of the IPv6 header (taken from RFC 8200).
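To make the header-chain mechanics concrete, here is a small stdlib-only Python sketch that walks an IPv6 extension header chain, starting from the Next Header field of the fixed 40-byte IPv6 header. The protocol numbers follow the IANA registry (0 = Hop-by-Hop Options, 6 = TCP); the packet bytes are synthetic, not captured traffic:

```python
HOP_BY_HOP = 0   # Next Header value 0 signals a Hop-by-Hop Options header
EXT_HEADERS = {0: "Hop-by-Hop Options", 43: "Routing", 60: "Destination Options"}
UPPER_LAYER = {6: "TCP", 17: "UDP", 58: "ICMPv6"}

def walk_header_chain(packet: bytes) -> list:
    """Return the names of the headers following the fixed IPv6 header."""
    next_hdr = packet[6]   # Next Header field of the IPv6 header
    offset = 40            # fixed IPv6 header length in bytes
    chain = []
    while next_hdr in EXT_HEADERS:
        chain.append(EXT_HEADERS[next_hdr])
        ext_len = packet[offset + 1]   # Hdr Ext Len: 8-octet units after the first 8
        next_hdr = packet[offset]      # Next Header field of this extension header
        offset += (ext_len + 1) * 8
    chain.append(UPPER_LAYER.get(next_hdr, f"protocol {next_hdr}"))
    return chain

# Synthetic packet: 40-byte IPv6 header (Next Header = 0), then an 8-byte
# Hop-by-Hop Options header whose own Next Header is 6 (TCP), padded with PadN.
pkt = bytearray(40)
pkt[6] = HOP_BY_HOP
pkt += bytes([6, 0] + [1, 0] * 3)
print(walk_header_chain(bytes(pkt)))  # -> ['Hop-by-Hop Options', 'TCP']
```

Every router or middlebox that needs the transport ports has to repeat this walk, which is exactly the processing cost the operational issues below stem from.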

Implementations of these headers in several IPv6 deployments have led to security issues, such as the 'Cisco IOS Software IPv6 Virtual Fragmentation Reassembly Denial of Service Vulnerability' or the IPv6 Hop-by-Hop Options use-after-free.

 

IETF Work on IPv6 Extension Headers

Several IPv6 engineers came together to write the Internet-Draft 'Operational Implications of IPv6 Packets with Extension Headers'.

The document highlights the issues with IPv6 Extension Headers and explains why they are intentionally dropped on the public Internet.

 

Packet forwarding constraints

Many consumer-grade routers have limits on the length of the packet header chain they can process. When IPv6 Extension Headers push the header chain beyond those limits, the result can be reduced throughput or dropped packets.

 

ECMP and Hash-based Load-Sharing

Routers that perform hash-based load sharing, such as Equal-Cost Multi-Path (ECMP) routing, need fields beyond the fixed IPv6 header to compute their hash. The IPv6 Flow Label would be helpful for this purpose, as it would avoid having to process several IPv6 Extension Headers before making a forwarding decision.

Unfortunately, a significant share of deployed routers (approximately 20-30%) fail to use the IPv6 Flow Label properly, and in those cases packets carrying IPv6 Extension Headers are often dropped.
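To illustrate why the Flow Label helps, here is a toy Python sketch of hash-based path selection: a router that hashes on the (source, destination, flow label) triple from the fixed IPv6 header never needs to walk the extension header chain to find the transport ports. The hashing scheme is purely illustrative, not any vendor's algorithm:

```python
import zlib

def pick_path_flow_label(src: str, dst: str, flow_label: int, n_paths: int) -> int:
    """ECMP-style path choice using only fields from the fixed IPv6 header."""
    key = f"{src}|{dst}|{flow_label}".encode()
    return zlib.crc32(key) % n_paths

# All packets of one flow hash to the same path, with no extension header parsing.
path = pick_path_flow_label("2001:db8::1", "2001:db8::2", 0x12345, 4)
print(path)
assert all(
    pick_path_flow_label("2001:db8::1", "2001:db8::2", 0x12345, 4) == path
    for _ in range(10)
)
```

A router that instead hashes on the 5-tuple must first locate the TCP/UDP header behind an arbitrary chain of extension headers, which is precisely the work many devices refuse to do.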

 

Security issues with IPv6 Extension Headers

Many routers apply a default-deny security policy, which means that packets with IPv6 Extension Headers are often dropped by default. Additionally, Access Control Lists are often written in a way that allows IPv6 Extension Headers to cause a security bypass, so for security reasons such packets are usually dropped. Finally, because of the processing cost of these additional headers, many firewalls prefer to drop them, as they can be used as a denial-of-service vector.

 

Conclusion

The IETF draft 'Operational Implications of IPv6 Packets with Extension Headers' offers network engineers an excellent in-depth discussion of the operational implications of IPv6 Extension Headers.

In this blog, we gave an overview for network and systems engineers. We encourage readers to go through the Internet-Draft and, if possible, offer suggestions to the IETF v6ops Working Group.

 

 

 


 

About Author

Logan Velvindron
Infrastructure Security Engineer 

He experiments with new security technologies that have business value. He is also a contributor to the Internet Engineering Task Force as a specification developer, a member of the Security Area/Internet of Things Directorate, and a Working Group chair. He has managed and led several IETF hackathons at both the African Internet Summit and IETF meetings.

 

 

 

 

 

AFRINIC has implemented the Lame Delegation policy proposal in its entirety, specifically fulfilling the requirements of Section 10.7 of the Consolidated Policy Manual (CPM).

The planned go-live date is 29th April 2021.

 

Definition of 'LAME'

An authoritative DNS name server is considered lame when it does not adequately respond to queries for a domain, for which it is the designated Start of Authority (SOA).

For the purposes of this policy implementation, a DNS nameserver is considered lame if, when queried using a standard DNS client or library, it does not respond with an authoritative answer for the specific domain.

No differentiation is made here between the following behaviours of a DNS server:

  • Not responding at all.
  • Responding in some way, but not for the specific domain queried.
  • Responding for the correct domain, but without the authority bit set.

All the above variations result in a 'lame' delegation.
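The authority bit mentioned above lives in the flags field of the DNS message header. As a minimal illustration (stdlib only, exercised on hand-crafted response bytes rather than live queries), this Python sketch checks whether a raw DNS response carries the AA bit; a real lameness checker would also send the query and verify the answer matches the delegated domain:

```python
import struct

AA_BIT = 0x0400  # Authoritative Answer flag in the DNS header (RFC 1035)

def is_authoritative(response: bytes) -> bool:
    """True if the AA bit is set in a raw DNS response message."""
    if len(response) < 12:          # shorter than a DNS header: no usable answer
        return False
    (flags,) = struct.unpack("!H", response[2:4])
    return bool(flags & AA_BIT)

# Hand-crafted 12-byte DNS headers: one authoritative (QR + AA), one not (QR only).
authoritative = struct.pack("!HHHHHH", 0x1234, 0x8400, 1, 1, 0, 0)
lame = struct.pack("!HHHHHH", 0x1234, 0x8000, 1, 1, 0, 0)
print(is_authoritative(authoritative), is_authoritative(lame))  # -> True False
```

A server exhibiting any of the three behaviours listed above would fail this check: no response at all, a response for the wrong domain, or a response without the AA bit.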

 

What does this implementation entail?

DNS lameness tests will be run on a monthly basis on all nameservers registered as "nserver" records within "domain" objects in the AFRINIC WHOIS database.

These checks will run from at least three different geographical locations, and a single recorded authoritative answer from any one test location is sufficient to NOT consider a nameserver “lame”.

Once a given ‘nserver’ record has been determined to be lame for a given domain, and reasonable attempts have been made to contact the responsible person(s), the nserver attribute must then be removed from the given domain object. A ‘remarks’ line will be added to the domain object in the database recording this.

In the event that all nameserver records are lame for a given domain, the domain object will be removed in its entirety.

 

The complete "lameness" checking approach is as follows:

Time | Status | Action
Day 0 | Lame delegation is first detected | Lame delegation recorded; nameserver re-tested for lameness every day
Day 3 | If delegation is still lame | Initial notification sent to the registered admin-c, zone-c, and tech-c contacts
Day 10 | If delegation is still lame | First reminder sent to the registered admin-c, zone-c, and tech-c contacts
Day 11 | If delegation is still lame | A remark is added to the domain object identifying the lame nameserver(s)
Day 17 | If delegation is still lame | Second reminder sent to the admin-c, zone-c, and tech-c contacts
Day 24 | If delegation is still lame | Last and final reminder sent to the admin-c, zone-c, and tech-c contacts
Day 30 | If reverse DNS delegation is still lame | The “nserver” record is removed from the domain object(s) for which it is lame; any "domain" object left with zero "nserver" records is removed from the WHOIS database

 

The lameness checks will continue to run on a daily basis and where a nameserver is no longer detected as lame, the corresponding remark will be removed.

 

What is the impact on members whose domains have lame delegations?

The negative impact of reverse DNS Lame delegations will affect the users of the network in question as well as any third parties relying on DNS records from the affected domain.

DNS lookups for deleted domains will be answered with an NXDOMAIN response, meaning the domain is not listed in any AFRINIC DNS zone.

 

What can a member do to resolve Lame Delegation?

AFRINIC highly recommends taking the necessary steps to correct the DNS lameness before deletion takes place, either by making the nameservers authoritative for the relevant zones or by editing the list of nameservers in the "nserver" attributes of the relevant "domain" objects.

Objects with lame delegations can be updated by logging in to https://my.afrinic.net

  • Go to the "Resources" tab
  • Select "Reverse Delegation" in the drop-down list
  • Click the expand icon adjacent to the IP prefix
  • Select the edit icon

You may also do it through the WHOIS web interface on the AFRINIC website https://www.afrinic.net/whois or through e-mail, where the domain objects can be submitted to auto-dbm@afrinic.net.

The implementation is designed to minimize the possibility of false positives. We recommend that members use the lameness checker at https://afrinic.net/whois/lame to check for false positives, as well as to validate any updates made or any newly registered reverse domain delegation.

Statistics on lame DNS delegations are available here: https://stats.afrinic.net/lamerdns/

For any inquiries, you can contact hostmaster@afrinic.net.

 


 

Further reference:

Consolidated Policy Manual section 10.7 at https://afrinic.net/policy/manual#lame

How to resolve Lame Delegation? https://afrinic.net/support/whois/resolve-lame-delegation

DNS troubleshooting best practices are recommended in RFC 1912 at https://www.ietf.org/rfc/rfc1912.txt

 

 

 

 

 

AFRINIC will be participating in the virtual event OSIANE 2021 organised by the nongovernmental organisation PRATIC (Promotion, Reflection and Analysis on Information and Communication Technologies).

AFRINIC CEO Eddy Kayihura will be part of a panel in the session entitled "Towards a High-Quality Internet in Central Africa".

AFRINIC is a proud sponsor of OSIANE 2021 which is being held alongside the Central Africa Peering Forum (CAPF).

 

 

 

Resource Public Key Infrastructure (RPKI) is a framework for cryptographically signing records that associate a Border Gateway Protocol (BGP) route announcement with the correct originating Autonomous System Number (ASN).

But if you are just getting started learning about RPKI or simply wish to read up on it, you will soon realize there is no one single authoritative Request for Comment (RFC) on the topic. In fact, there are more than 40 RFCs about RPKI found in different categories.

 

Figure 1 — There are more than 40 RFCs about RPKI.

 

The fact that it is not possible to find all the information about RPKI in one place makes it difficult to understand RPKI from scratch.

To give a bit more context, the Internet Engineering Task Force (IETF) is the premier Internet standards body, developing open standards through open processes. The IETF works on a broad range of networking technologies organized into IETF Areas. The IETF Security Area, with more than 20 active Working Groups, provides a focal point for security-related technical work.

RPKI is a framework that was first defined in RFC 6480 (An Infrastructure to Support Secure Internet Routing) in 2012. Different working groups under the IETF Security Area have contributed to the topic, and there are now more than 40 RPKI-related RFCs.

So, if you want to read about RPKI, the questions are many: where should you start? What RFC should you read first? What can you learn from the various RFCs? Should you read all of them?

To help you find useful information efficiently, we try to answer all these questions with a new tool: the RPKI RFCs Graph (source available on GitHub).

This graph shows the dynamics of all the RPKI-related RFCs and gives you a brief summary of each. The RFCs are represented in an interactive graph where you can see their relationships to each other.

 

Figure 2 — The RPKI RFCs Graph

Figure 2 shows:

  • Three categories of RFCs: PROPOSED STANDARD (STANDARD), BEST CURRENT PRACTICE (BCP), and INFORMATIONAL.
  • RPKI-related RFCs in blue, RPKI-related RFCs with briefs in yellow, and other RFCs in grey.
  • Links following UPDATE (green) or OBSOLETE (red) relationships between RFCs.
  • 4 BCPs, 7 INFORMATIONAL, and 52 STANDARD RFCs.
  • In addition to the list of RFCs in the screenshot above, some RFCs we added by following UPDATE or OBSOLETE relationships where available. For instance, RFC 8212 (not RPKI-related) updated RFC 4271. Reading RFC 4271 alone is a good start, but will only give partial information about BGP-4.
  • Filtering options.

 

In Figure 2, we can also see that non-RPKI RFCs (RFC 8654, RFC 8212, RFC 7705, RFC 7607, RFC 7606, RFC 6793, RFC 6608, and RFC 6286) update RFC 4271, the BGP-4 specification. This shows that reading RFC 4271 alone will not be sufficient; updates are found in non-RPKI RFCs. From the same figure, it is clear that reading RFC 1771 is of little value, since it has been obsoleted by RFC 4271.
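That pruning logic can be sketched in a few lines of Python. The edge list below is a tiny, hand-picked subset of the relationships just described, not the graph's full dataset:

```python
# OBSOLETES edges: (newer RFC, obsoleted RFC); UPDATES edges: (updater, updated).
obsoletes = [(4271, 1771)]
updates = [(8654, 4271), (8212, 4271), (6793, 4271)]

def reading_list(rfcs: set) -> dict:
    """Drop obsoleted RFCs and attach the updates still worth reading."""
    dead = {old for _, old in obsoletes}
    return {
        rfc: sorted(new for new, updated in updates if updated == rfc)
        for rfc in rfcs - dead
    }

# RFC 1771 is pruned as obsoleted; RFC 4271 keeps its list of updating RFCs.
print(reading_list({1771, 4271}))  # -> {4271: [6793, 8212, 8654]}
```

Applied to the full RFC metadata, the same traversal is what lets the graph surface a current, non-redundant reading list.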

The interactive graph allows these filters:

  • Tooltip: Enable/disable RFC metadata information.
  • MUST read: According to our classification, there are six RPKI RFCs that MUST be read.
  • SHOULD read: These RFCs are useful, but you can read them after reading those in the MUST group.
  • MAY read: These are the less important ones.

 

Figure 3 shows RFC 6484 metadata with the ‘tooltip filtering option’ activated:

Figure 3 — RFCs Graph showing RFC 6484 metadata

 

The graph also shows isolated RFCs (RFCs without a relationship to any other RFC). As expected, the BCP and INFORMATIONAL categories comprise isolated RFCs; only STANDARD RFCs present relationships. In this version of the graph, RFCs with summaries are marked in yellow. For instance, clicking on RFC 6811 shows its brief, as pictured below (Figure 4).

 

 

 

The brief is structured with the following components:

  • Title: RFC title.
  • Targets: Can be relying parties, vendors, RIRs, and more.
  • Terminology: New concepts and acronyms used in the RFC.
  • Text of the brief.

An RFC targeting a vendor will be less important to a Regional Internet Registry, for instance. This work focuses on relying parties, so our classification was made from the point of view of a relying party.

We hope you find this tool useful when navigating the many RPKI-related RFCs. If you have any comments or suggestions, please leave us a comment below.

 

This blog post has been republished from https://blog.apnic.net/2021/03/15/which-rpki-related-rfcs-should-you-read/ published on March 15, 2021. 

 


 

About the author

Alfred Arouna is a research engineer at Simula and a MANRS 2020 Fellow.

 

 

 

 

 

Co-authored by CAIDA’s Roderick Fanou, Postdoctoral Scholar; Ricky Mok, Assistant Research Scientist; Bradley Huffaker, Technical Manager; and Kc Claffy, Founder and Director.

The underlying physical infrastructure of the Internet includes a mesh of submarine cables, generally shared by network operators who purchase capacity from the cable owners.

As of late 2020, over 400 submarine cables interconnected continents worldwide and constituted the oceanic backbone of the Internet. Although they carry more than 99% of international traffic, little academic research has isolated the end-to-end performance changes induced by their launch.

It is generally assumed that the deployment of undersea cables improves performance, at least for economies around the cable. But by how much, and what happens to traffic from and towards neighbouring economies?

To study this, we looked at the South Atlantic Cable System (SACS), which was launched in mid-September, 2018. It was the first transatlantic cable traversing the southern hemisphere and provided an ideal opportunity to examine what happened to traffic between different Internet regions pre and post-launch.

Figure 1 – This image shows the Angola Cables network, which includes the SACS cable. The cable stretches for 6,165 km, has a capacity of 40 Tbps, and comprises four fibre pairs (Source: Angola Cables).

SACS connects Angola in Africa to Brazil in South America. In our paper, ‘Unintended consequences: Effects of submarine cable deployment on Internet routing‘, we shed empirical light on how it affected traffic patterns by investigating the operational impact of SACS on Internet routing. Last year, we presented our results at the Passive and Active Measurement Conference (PAM) 2020, where it was awarded ‘best paper’.

Here, we summarize the contributions of our study, including our methodology, and some findings.

 

How did we measure the change in performance?

Our methodology quantifies the end-to-end communication performance changes from a new submarine cable deployment on Internet paths.

Our approach relies on existing subsea maps/databases and public measurement infrastructures.

Our method has four steps:

  1. Collect candidate IP paths that could have crossed the cable
  2. Identify router IP interfaces on both sides of the cable based on those candidate IP paths
  3. Search for corresponding paths (between same endpoint pairs) in historical traceroute datasets
  4. Annotate collected paths with the necessary information for analysis such as hostnames, ASes, IP geolocations, and round-trip time (RTTs) differences between consecutive hops

 

Collecting candidate IP paths

Identifying which Internet paths are passing through a newly deployed cable is quite challenging. To accurately identify IPs on both sides of the cable, we need samples of IP paths crossing it in both directions, which we can obtain by running measurements after the cable launch.

Our first step involves executing, in both directions, traceroutes between vantage points (VPs) located within two networks, denoted AS1 and AS2, that are topologically close to the respective ends of the cable.

From this, we get candidate IP paths containing IP addresses of routers traversed by packets from AS1 to AS2 or vice-versa via the cable as well as the round-trip times (RTTs) from the respective source IP addresses to each of them. We selected the networks hosting vantage points (VPs) as well as the active VPs within those networks, using existing measurement platforms (CAIDA Ark and RIPE Atlas) and publicly available sea cable databases/maps.

 

Identifying router interfaces at both ends of the cable

Using the speed of light constraint and the known length of the cable, we were able to deduce the minimum RTT to cross the cable. This gave us a threshold that we could use to narrow down the candidate IP paths, finding matching traceroutes containing RTT bumps greater or equal to the inferred minimum threshold.
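As a back-of-the-envelope check, that threshold can be computed from the cable length and the propagation speed of light in optical fibre, commonly approximated as two-thirds of c. The 6,165 km length is the SACS figure quoted in the Figure 1 caption; the exact threshold used in the paper may differ:

```python
C_KM_PER_S = 299_792.458            # speed of light in vacuum, km/s
FIBRE_SPEED = C_KM_PER_S * 2 / 3    # approximate propagation speed in fibre

def min_rtt_ms(cable_length_km: float) -> float:
    """Lower bound on the RTT contribution of one crossing of the cable."""
    return 2 * cable_length_km / FIBRE_SPEED * 1000.0

print(round(min_rtt_ms(6165), 1))   # SACS: roughly 62 ms round trip
```

Any traceroute hop pair straddling the cable must show an RTT bump of at least this size, which is what makes the threshold useful for filtering candidate paths.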

We looked for cases where the locations of those IP interfaces, according to geolocation databases Netacuity and Maxmind, match the countries linked by the new subsea cable. Then, we inferred matching pairs of potential IPs on each side of the cable and looked for router aliases of those IPs.

 

Searching for corresponding paths in historical traceroute datasets

Using existing measurement platforms RIPE Atlas and CAIDA Ark, we looked for historical traceroutes containing any of the identified pairs separated by an RTT bump, greater or equal to the minimum threshold needed to cross the studied cable. We then grouped them into two sets, depending on whether they were run pre or post-cable launch.

 

Annotating collected paths with the necessary information for analysis

We annotated these IP paths with hostnames, ASes, locations, and RTT differences between consecutive hops.

Finally, we used three metrics to evaluate end-to-end performance and AS paths, before and after the cable launch:

  • The RTTs to the common IP hops closest to the traceroute destinations determine the time that packets took to travel from a source interface to a common IP close to a given destination network, measured before and after the cable launch
  • The AS-centrality of transit ASes represents the percentage of paths for which an AS played a role in transit
  • The length of AS paths crossing the studied cable operator’s network post-event, which we compared to the length of the AS paths serving the corresponding source IP destination prefixes, pre-event
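The AS-centrality metric above can be sketched as a simple count over AS paths: the percentage of paths in which a given AS appears in a transit position (neither origin nor destination). The sample paths below are invented for illustration; only AS37468 (Angola Cables) is a real ASN from the study:

```python
def as_centrality(paths: list, asn: int) -> float:
    """Percentage of AS paths in which `asn` appears in a transit position."""
    transit = sum(1 for p in paths if asn in p[1:-1])
    return 100.0 * transit / len(paths)

# Invented AS paths, each listed source AS ... destination AS.
paths = [
    [64500, 37468, 64510],   # AS37468 in transit
    [64501, 37468, 64511],   # AS37468 in transit
    [64502, 64520, 64512],   # a different transit AS
    [37468, 64521, 64513],   # AS37468 as source, so not transit here
]
print(as_centrality(paths, 37468))  # -> 50.0
```

Comparing this percentage pre- and post-launch shows how much transit the cable operator's network gained.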

 

So what did we discover?

 

Comparing RTTs before and after SACS

We started our analysis by comparing RTTs before and after SACS deployment. For the same source VP and destination prefix, we built a set of common IP hops in the traces before and after SACS, and selected the IP closest to the destination as a point of comparison.

Using the RTTs from VPs toward IP hops from the traces pre and post-SACS, we plotted the box plots of Figure 2, clustering RTTs by continent and measurement platform.

Figure 2 – Box plots of minimum RTTs from Ark and Atlas VPs to the common IP hops closest to the destination IPs. The red line of every box plot represents the median of these minimum RTTs; we marked the 75th and 25th percentile as well as the interquartile range (IQR).

 

What was the impact of SACS on latency?

Although the median latency across the whole dataset for paths crossing SACS post-launch did not change much (median RTT drops of 2-3 ms), this aggregate hides significant decreases and increases in latency on paths from/to specific regions.

Interestingly, paths from South America experienced a median latency decrease of 38%, which was quite significant compared to paths from Oceania-Australia (8% decrease), and those from Africa (3%).

At the economic level, we found predictable performance improvements (RTT decrease) for paths going from Africa to Brazil, or from South America to Angola. However, we found an asymmetrical RTT reduction; the decrease of the median RTT from Africa to Brazil (73ms) was a third of that from South America to Angola (226ms). We also noted some unpredicted and unreported performance degradations. For example, we saw packets sub-optimally routed through SACS for paths going from North America to Brazil or Africa/Europe to Angola, leading to latency increases.

 

Comparing Transit Structure

We provide an in-depth inspection of the transit structure pre and post-SACS, an analysis of the impact on AS path lengths, and a validation of our results in the paper.

 

What are the contributions and key findings of this study?

In summary, the key contributions of this study can be listed as follows:

  • We introduced a reproducible method to investigate the impact of a cable deployment on macroscopic Internet topology and performance
  • We applied our methodology to the case of SACS, the first trans-Atlantic cable from South America to Africa
  • We discovered that the RTT decrease for IP paths going from Africa to Brazil was roughly a third of that noticed on paths from South America to Angola
  • Further, we discovered surprising performance degradations to/from some regions and analyzed the root-causes of these unintended consequences

From the findings of this paper, we suggest that to avoid suboptimal routing post-activation of cables in the future, ASes could inform BGP neighbours to allow time for changes, ensure optimal iBGP configurations post-activation, and use measurement platforms to verify path optimality.

Our code and data are published to facilitate reproducibility. This codebase can be extended to other cable use-cases.

 

 

This blog post has been republished from https://blog.apnic.net/2021/02/22/unintended-consequences-of-submarine-cable-deployment-on-internet-routing published on February 22, 2021.

 

 


  

About the author


Roderick Fanou 

After obtaining his PhD in Telematics Engineering from IMDEA Networks Institute and Universidad Carlos III de Madrid, Spain, in 2017, Roderick Fanou joined CAIDA (University of California, San Diego), US, in March 2018, where he worked as a Postdoctoral Scholar until March 2021. During his stay, he contributed to the MANIC and PANDA projects alongside Amogh Dhamdhere (in 2018) and Kc Claffy. His research involved assisting with the design and development of new applications, as well as the integration of existing codebases that measure interdomain congestion, topology, and performance, to enable large-scale scientific projects.

The study presented by this post is one of the outcomes of his collaboration with the CAIDA team.  

 

 

 

Last November we asked you for input through our anonymous satisfaction survey, so we could use it to guide our product roadmap for 2021. Today, we are sharing what you told us through the survey and how we’ll be improving PeeringDB and your experience of it in 2021.

We had over 200 responses to the survey. Respondents identified themselves as connected with organizations operating on every continent and in every part of our industry. 99% of respondents described themselves as very or somewhat satisfied with PeeringDB overall.

When we asked about specific service categories, we were told that Network Configuration Data and Search and Discovery capabilities were the most important. These service categories had lower, though still high, levels of satisfaction, with 95% and 96% of respondents describing themselves as very or somewhat satisfied with these aspects of PeeringDB.

Although we saw higher satisfaction with the User Experience and Web Interface, at 97%, this service category had both the most responses and the most divided feedback. One user described the current web interface as “clean and simple” while others said it was “showing its age.”

Documentation quality was also an area with lower specific satisfaction, at 93%. One comment homed in on a key problem, noting: "Needs a top-level overview document/intro. Or if it exists, I need to find it."

 

We have used your feedback to guide our product roadmap for 2021. The four key focus areas will be:

  • Improving geographic search
  • Developing a structured framework for user documentation
  • Improving the web site’s responsiveness
  • Introducing a communications framework to alert users to developments and support future tooling

Our first step towards accomplishing this has been to add database support for facility coordinates. All new facilities will be located by latitude and longitude, with street addresses serving as human-friendly search terms rather than authoritative data. This is a major project and we will share more on this work in a future blog post.

Another key change is the publication of our first HOWTO document. This document is designed to help new networks register with PeeringDB using our website. We will be publishing more documents in this series and developing a broader documentation framework to support API and web users equally.

If you have an idea to improve PeeringDB, you can share it on our low-traffic mailing lists or create an issue directly on GitHub. If you find a data quality issue, please let us know at support@peeringdb.com.

 


 

About PeeringDB

PeeringDB is a freely available, user-maintained database of networks and the go-to location for interconnection data. The database facilitates the global interconnection of networks at Internet Exchange Points (IXPs), data centres, and other interconnection facilities, and is the first step in making interconnection decisions.

  

About the Author

 

Leo Vegoda is developing PeeringDB’s product roadmap. He was previously responsible for organizational planning and improvement in ICANN’s Office of the COO, and Internet Number Resources in the IANA department, as well as running Registration Services at the RIPE NCC.