Domain industry news


French Acquire the .Best New gTLD - Interview with the New Owner

Thu, 2018-07-19 19:26

This is an interview with Cyril Fremont, the first French entrepreneur to have acquired a new generic Top-Level Domain (gTLD).

We have long waited for innovation in the new gTLD industry, and reading between the lines of this interview, one will understand that the reason behind this acquisition is not to sell domain names the way registries do it in 2018.

While ".Best" domains remain open to all, this registry is planning innovative projects that will be launched in the near future, possibly with some big surprises.

Jean Guillon: Why acquire the .Best new gTLD and no other?

Cyril Fremont: The title says it all, "it's the best." The .Best TLD is one of the few that adds value to the domain name. It is simple: people are always looking for the best. Everyone wants the best and there are more than 200 million best-related searches every month.

Jean Guillon: How will the .Best strategy be different from other TLDs?

Cyril Fremont: There will be two steps in our new .Best strategy:

1) Lower the price tag for registrars and users. Today, ".Best" domain names are still considered as Premiums and they are too expensive — so we will quickly change that.

2) We want the .Best new gTLD to be part of a global strategy that will benefit the internet community.

Jean Guillon: OK Cyril, that's the official speech but now, what's the truth?

Cyril Fremont: lol, OK, challenge me.

Jean Guillon: What's the relation between the .Best new gTLD that you have just acquired and the ICO (Initial Coin Offering) that I just heard about?

Cyril Fremont: We are creating a new social network where users will be rewarded in our BESTCOIN cryptocurrency to elect the Best of everything. The business model is very simple: Get paid to review.

Jean Guillon: How do ".Best" domain names become an innovation in such a project?

Cyril Fremont: The first innovation is that each user will have their own domain name and will be responsible for their published content. All data and reviews will be fully decentralized on users' websites (to fully comply with GDPR).

The other innovation is that contributors will be rewarded in our CryptoCurrency (BESTCOIN) for their content.

And the Best innovation for the community is that we will change the game between buyers and sellers. Buying with a .Best domain will give you benefits because of the counter-power. Imagine you book a hotel with a .best email and get a free upgrade. This is what we are building.

Jean Guillon: OK, this is one big project but you said that more are coming so what's your vision? Isn't the .Best registry a tool to create a more global project for the community?

Cyril Fremont: That's exactly what it is, Jean. Our ambition is to create massive projects for consumers and not only focus on selling domain names through registrars the classic way.

As a registry in 2018, you need to think outside the box. The new gTLD first-mover advantage has gone.

While it is true that domain registrars are key in the domain name selling process, registries need to help them: registries need to bring the domain buyers to registrars.

All the projects that we are working on add real value to the Internet community and so, "by ricochet" as the French say, to the global ICANN ecosystem: registrars, back-end and escrow providers.

Plus, if you look at the internet economy, you will see that the influencers are actually driving the traffic. Look at Instagram: more than 1 billion users! How many active new gTLD domains are there at this time? 23 million. That's great, but this is not our target. You have to be more customer-centric to be successful. You have to put the influencers at the center of all your registry strategies and economics.

Jean Guillon: OK, I understand your vision, but can we expect technical innovation?

Cyril Fremont: Yes, but we just finalized the .Best acquisition so I can't say more for the time being. I'll see you at NAMESCON 2019 in Vegas.

Jean Guillon: Who is .Best: can you tell me more about the team?

Cyril Fremont: Premlead is the major shareholder and is helped by a team of investors that supported not only this first acquisition but also the next ones. Moreover, Premlead was already an active buyer of domain names, including .Best premium names such as hotels.best, restaurants.best, ...

Jean Guillon: Is China an objective?

Cyril Fremont: In one word: it is 60% of the new gTLD market.

Jean Guillon: I heard about "big numbers" and even the crazy number of 100 Million!

Cyril Fremont: Well, that number actually corresponds to two of our objectives.

1) First, we just launched an ICO (www.coin.best) where we started selling our crypto token in presale for $100 million.

2) Secondly, we have this target of creating a Best social network of 100M users by 2022.

Jean Guillon: I must admit that I fell from my chair when I heard about this, since the .COM registry has just over 134,000,000 domains registered and it took quite some years to achieve this. Can you tell me more about this?

Cyril Fremont: First, considering a social network means a lot of users. Look at Instagram and Facebook: we are not talking about 100 million, they have more than 1 billion users each. Do you understand the overall idea behind this? And by the way, the ex-Head of Facebook who drove the fast growth of the social network for six years just joined us as an advisor.

Jean Guillon: The .Best registry in the hands of a French person, isn't this a little bit strange?

Cyril Fremont: Totally! (lol) But domain names ending in ".Best" are not just for the French; it's a word used worldwide. Best is among the 250 most used terms in the world. Also, French history is linked to excellence: luxury, beauty, aerospace, ... In most industries, the French are the best, and aren't we the new soccer world champions?

Jean Guillon: Last question, do you have plans to acquire other new gTLDs?

Cyril Fremont: This is your Best question, Jean. You can tell all registries: if any one of them would like to sell, come to us, speak to us. We can guarantee them the Best proposal!

Jean Guillon: Sorry, I have one more last question. How much did it cost you to acquire the .Best new gTLD?

Cyril Fremont: As I said, when it comes to the Best, you don't count!

Written by Jean Guillon, New generic Top-Level Domains' specialist


More under: Domain Names, Registry Services, New TLDs

Categories: News and Updates

ICANN at a Crossroads: GDPR and Human Rights

Thu, 2018-07-19 10:08

The European Data Protection Board certainly has been keeping its records straight. Its 27 May statement starts with the following:

"WP29 has been offering guidance to ICANN on how to bring WHOIS in compliance with European data protection law since 2003."

All internet users have dealings with the Internet Corporation for Assigned Names and Numbers, yet the vast majority have never heard of ICANN. Responsible for deciding how the Domain Name System (DNS) is run, ICANN may be a technical standard-setting body, but its policies and activities acquire political nuances more often than not. At its core, there is a distinction between ICANN the organisation, incorporated in California, and the ICANN community, a multistakeholder group of volunteers who develop the policies that are subsequently implemented by the organisation.

Fifteen years ago, and only a few years after ICANN was established, European data protection regulators had already spotted the flaws in ICANN's WHOIS service, a public database of registrants' contact details. At the end of 2017, mere months before the European General Data Protection Regulation (GDPR) came into effect, ICANN had yet to devise a plan to make its WHOIS registrant database compliant. However, this is no longer the era of paltry fines for violating data protection laws, when compliance was at best optional.

Data protection as a human right

Here it's important to recall the diverse origins of data protection law. At the EU level, the 1995 Data Protection Directive aimed to harmonize the regulation of automated data processing in order to fulfill the EU's goal of free movement of goods and services (see recitals 7 and 8). In parallel, data protection began to be conceived as a human right, a notion that took a more concrete form with the Treaty of Lisbon and the 2009 European Union Charter of Fundamental Rights. Today's GDPR, which replaces the old directive, explicitly relies on the EU's human rights framework for its rationale (see recital 1 and following).

Unlike traditional human rights legislation, the GDPR contains concrete provisions for direct enforcement. That is, it grants entitlements to individuals against other legal persons beyond the state, i.e. companies. In addition, it contemplates hefty fines for violations (up to 4% of global annual turnover for business entities), which is not an enforcement mechanism usually associated with human rights. This stick is what triggered the compliance rush witnessed over the past year, and the numerous subscription confirmation emails received from organisations long forgotten.

The GDPR is also interesting in that it creates an extremely specific and detailed bundle of rights to the benefit of EU citizens and residents against any data controller and processor, wherever they may be located. The EU thus acted according to a highly pragmatic conceptualisation of "online jurisdiction" similar to that of the Canadian courts in the 2017 Equustek case. In this high-profile copyright infringement case, the Canadian Supreme Court ruled that Google had to delist the incriminated website from its search results on a worldwide basis, not only under the google.ca domain. If a full de-listing meant applying Canadian law beyond its borders, so be it (it is worth noting that the order failed at the enforcement level in the US). With the GDPR, the EU adopts a similar perspective: individuals must be protected, even if it means potentially reaching out to every single data controller and processor in the world.

Extraterritoriality in cyberspace?

The application of laws based on residency, citizenship, or other non-territorial bases isn't new. Tax law, notably from the US, is often applied in a similar way. The internet makes such an application of law even more salient, as individuals create and manage legal relationships across territories at an unprecedented scale. This can be unsettling for the "territorial" states, hence the observed trend toward extraterritoriality. States seek to have their laws apply to individuals irrespective of their physical location, particularly when dealing with internet-related issues, as a means of obtaining immediate legal effectivity. Regardless of whether GDPR's alleged extraterritoriality is good or bad, it can be said that states, the EU, and courts will most likely favour an interpretation of "online jurisdiction" which maximizes their power and their perceived efficiency at enforcing their own laws.

An overly cynical (and factually wrong) conclusion would be that ICANN, as a non-profit California corporation, is not subject to human rights law, as such laws only create legal relations between governments and individuals. This would stem from an understanding of human rights law as a solely vertical arrangement between states and individuals, which disregards how an entity like ICANN can interfere with "horizontal" human rights entitlements, like those put into place by the GDPR. Recent events show that enforcing corporate respect for human rights is not some civil society pipe dream: a German court has already ruled that ICANN's last-minute GDPR compliance plan is not quite compliant.

Human rights at ICANN, beyond the Bylaw

ICANN has found itself in a double bind: on one side, an expansive understanding of jurisdiction is gaining ground around the world; on the other, a set of human rights norms, previously constrained to treaties and the often staid world of public international law, is finding a new horizontality. The standard for personal data protection has been decidedly raised, prompting us to rethink what human rights compliance means. ICANN's global mission is tied to the functioning of the internet, but its operations can severely interfere with individuals' exercise of human rights, as well as the commitments of governments to uphold these rights.

Developing a high-level commitment, as ICANN did with its 2017 Human Rights Bylaw, is a first step. However, viable solutions must, at the same time, go deeper. Indeed, the operationalisation of ICANN's human rights bylaw must pass through a refocusing of the lens, away from international treaties and into the low-level application of human rights norms at the transnational and national level. Rather than biding time before fines mandate action, the ICANN community should carry out sustained research and documentation of ICANN's concrete interference with human rights, both existent and potential. The multistakeholder community should also put in place the necessary efforts to go beyond the mere human rights bylaw and into real compliance assessment, an ever-evolving activity that requires constant attention and monitoring.

In a 17 May letter, European commissioners asked ICANN, through its CEO, to "show leadership and demonstrate that the multi-stakeholder model actually delivers." Be it taunting or encouraging, this challenge underscores the current need for intentional, proactive leadership from both the ICANN organisation and its community. Beyond enhancing its accountability, proactively identifying and preventing human rights violations might just prevent further debacles the next time a human rights law (not so) suddenly becomes applicable to ICANN. As California adopts its own improved data protection law, that time may come sooner than expected.

Special thanks to Collin Kurre from Article19 for her thoughtful suggestions

Written by Raphaël Beauregard-Lacroix


More under: Domain Names, ICANN, Internet Governance, Law, Policy & Regulation, Privacy, Whois

Categories: News and Updates

Protests in Iraq Lead to a Two-Day Internet Shutdown by the Government

Wed, 2018-07-18 22:31

Widespread protests in Iraq against the government have led to a state of emergency in which the government has ordered disconnection of the fiber backbone of Iraq that carries traffic for most of the country. Doug Madory, Director of Internet Analysis at Oracle Dyn, who has been monitoring the event, writes in a blog post today: "Government-directed Internet outages have become a part of regular life in Iraq. Just yesterday, the government ordered its latest national outage to coincide with this year's last 6th grade placement exam. The first government-directed outage in Iraq that we documented occurred in the fall of 2013 and revolved around a pricing dispute between the Iraqi Ministry of Communications (MoC) and various telecommunications companies operating there. While the intention of this outage was to enforce the MoC's authority, it served mainly to reveal the extent to which Iraqi providers were now relying on Kurdish transit providers operating outside the control of the central government."


More under: Access Providers, Broadband, Censorship

Categories: News and Updates

Google to Deploy Its First Private Trans-Atlantic Subsea Cable

Wed, 2018-07-18 01:16

Google today announced plans to launch its latest private subsea cable project dubbed Dunant. The cable will cross the Atlantic Ocean from Virginia Beach in the U.S. to the French Atlantic coast, the company says. From today's blog post: "Dunant adds network capacity across the Atlantic, supplementing one of the busiest routes on the internet, and supporting the growth of Google Cloud. We're working with TE SubCom to design, manufacture and lay the cable for Dunant, which will bring well-provisioned, high-bandwidth, low-latency, highly secure cloud connections between the U.S. and Europe."

In the future, Google says it plans to continue investing in both private and consortium cables. "Cables are often built to serve a very specific route," says Jayne Stowell, Strategic Negotiator of Global Infrastructure at Google. "When we build privately, we can choose this route based on what will provide the lowest latency for the largest segment of customers. In this case, we wanted connectivity across the Atlantic that was close to certain data centers."


More under: Broadband, Cloud Computing

Categories: News and Updates

The Uncertainty of Measuring the DNS

Wed, 2018-07-18 00:51

The period around the end of the nineteenth century and the start of the twentieth century saw a number of phenomenal advances in the physical sciences. There was J.J. Thomson's discovery of the electron in 1897, Max Planck's quantum hypothesis in 1900, Einstein's ground-breaking papers on Brownian motion, the photoelectric effect and special relativity in 1905, and Ernest Rutherford's study of the nucleus published in 1911 to mention but a few of the fundamental discoveries of the time. One of the more intriguing developments in physics is attributed to German physicist Werner Heisenberg, who observed a fundamental property of quantum systems that there is a limit to the precision of measurement of complementary variables, such as position and momentum. This "uncertainty principle" is not simply a statement about the process of measurement or the accuracy of measurement tools, but a more fundamental property that arbitrarily high precision knowledge of correlated variables is impossible in such systems.

In this article, I'd like to explore a similar proposition related to the behavior of the Internet's Domain Name System. It's nowhere near as formally stated as Heisenberg's Uncertainty Principle, and cannot be proved formally, but the assertion is very similar, namely that there is a basic limit to the accuracy of measurements that can be made about the behavior and properties of the DNS.

This assertion may appear to be somewhat absurd, in that the DNS is merely the outcome of a set of supposedly simple deterministic behaviors that are defined by the hardware and software that operate the infrastructure of the DNS. This leads to the supposition that if we had access to a sufficiently large measurement system, then we could observe and measure the behavior of a broad cross-section of DNS elements and infer from these observations the behavior of the entire system. There is, of course, a different view, based on elements of complexity theory, that it is possible to construct complex systems from a collection of deterministically behaving elements, in the same way that brains are constructed from the simple element of a single neuron. Complex systems are distinguished from merely complicated by the proposition that the system exhibits unpredicted and unpredictable emergent behaviors. There is an opinion that the DNS fits within this categorization of complex systems, and this introduces essential elements of uncertainty into the behavior of the system.

Why should we ask this question about whether there are inherent limits of the accuracy and precision of broad-scale measurement of the DNS?

In September of 2017 the planned rollout of the Key Signing Key of the Root Zone, scheduled for the 11th of October that year, was suspended. At issue was the release of an initial analysis of some data concerning the extent to which the new KSK was not being 'learned' by DNS resolvers. The measurement signal appeared to indicate that some resolvers had not been able to follow the key learning process described in RFC 5011, and between 5% and 10% of the measured signal was reporting a failure to trust the new KSK value. The conservative decision was taken to suspend the KSK roll process and take some time to assess the situation. (I wrote up my perspective of these events at the time.)

It is now being proposed that the process of rolling the KSK be resumed, and the Board of ICANN has asked a number of ICANN committees to provide their perspective on this proposed action.

The key question here is: "Is this plan safe?" Here it all depends on an interpretation of the concept of "safe", and in this respect, it appears that "safe" is to be measured as the level of disruption to users of the Internet. A simple re-phrasing of the question would be along the lines of "Will users experience a disruption in their DNS service?" But this question is too absolutist — it assumes that the DNS either works for every user or fails for every user. This is just not the case, and now we need to head into notions of "relative safety" and associated notions of "acceptable damage". A rephrased question would be: "What is the estimated population of users who are likely to be impacted by this change, and how easily could this be mitigated?" But perhaps the underlying issue here is the determination of: "What is an acceptable level of impact?"

In terms of data-driven decision making, this is a fine statement of the issue. The setting of the notion of "acceptability" appears to be some form of policy-based decision. While zero impact is a laudable objective, in a widely distributed, diverse environment without any elements of centralized control, some level of impact has to be accepted.

It appears that we need to define what is an acceptable level of impact of the change in order to understand whether it is safe to proceed or not. But to do this we need to define the notion of "impact", and this necessarily involves some form of measurement of the DNS. Which leads us to the question of how we can measure the DNS, and the related question of the level of uncertainty associated with any such measurement.

DNS Behaviours

A simple model of the DNS is shown in Figure 1 — there are a set of end clients who use stub resolvers, a set of agents who perform name resolution for these stub resolvers, otherwise known as recursive resolvers, and a collection of servers that serve authoritative information about names in the DNS.

Figure 1 - A simple (and simplistic) model of DNS Name Resolution

In this very simple model, stub resolvers ask recursive resolvers queries, and these recursive resolvers resolve the name by performing a number of queries to various authoritative servers. Recursive resolvers perform two tasks when resolving a name. The first is to establish the identity of the authoritative servers for the domain of the name being resolved, and the second is to query one of these servers for the name itself.
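
As a rough illustration of these two tasks (not from the original article), here is a sketch in JavaScript using Node.js's built-in dns.promises API; the example name is a placeholder, and step 1 here leans on the system's recursive resolver, where a real recursive resolver would walk down iteratively from the root.

    // Sketch: the two tasks a recursive resolver performs.
    const { Resolver } = require('dns').promises;

    async function resolveLikeARecursive(name) {
      const resolver = new Resolver();

      // Task 1: establish the authoritative servers for the domain.
      const nsHosts = await resolver.resolveNs(name);       // names of the zone's servers
      const nsAddrs = await resolver.resolve4(nsHosts[0]);   // turn one NS host into an address

      // Task 2: ask one of those servers for the name itself.
      const authResolver = new Resolver();
      authResolver.setServers([nsAddrs[0]]);
      return authResolver.resolve4(name);
    }

    resolveLikeARecursive('example.com').then(console.log).catch(console.error);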

Caching

The issue with this approach to name resolution is that it is just too slow. The essential element that makes the DNS useful is local caching. When a resolver receives a response, it stores this information in a local cache of answers. If it subsequently receives the same query, it can use this cached information immediately rather than wait for a query to be performed. The implication of this action is that authoritative name servers do not see all the DNS queries that are originated by end users. What they see are the queries that are local cache misses in recursive and stub resolvers. The inference of this observation is that measurements of DNS traffic as seen by authoritative servers are not necessarily reflective of the query volume generated by end clients of the DNS. The related observation is that the caching behavior of recursive resolvers is not deterministic. While the local resolver may hold a response in its local cache for a time interval that is defined by the zone administrator, this is not a firm directive. The local cache may fill up and require flushing, or local policies may hold a response in the local cache for more or less time than the suggested cache retention time.
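
A minimal sketch of the caching behavior just described (names and the upstream lookup function are hypothetical): answers are kept for their TTL, so repeat queries within that window never reach the authoritative server, and local policy can override the zone's suggested TTL.

    // Sketch of a TTL-bounded resolver cache (illustrative only).
    // Only cache misses fall through to lookupUpstream(), which is why
    // authoritative servers never see the full end-user query stream.
    const cache = new Map();   // key -> { answer, expiresAt }

    async function cachedResolve(qname, qtype, lookupUpstream) {
      const key = `${qname}/${qtype}`;
      const entry = cache.get(key);

      if (entry && entry.expiresAt > Date.now()) {
        return entry.answer;                      // cache hit: no upstream query at all
      }

      const { answer, ttl } = await lookupUpstream(qname, qtype);  // cache miss
      // Local policy may cap or extend the zone's suggested TTL.
      const effectiveTtl = Math.min(ttl, 86400);
      cache.set(key, { answer, expiresAt: Date.now() + effectiveTtl * 1000 });
      return answer;
    }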

When a caching resolver receives a query that is not in its local cache, it will need to resolve that name by making its own queries. But this is not the only reason why a caching resolver will generate queries. When a locally cached entry is about to expire, the resolver may choose to refresh its cache automatically and make the necessary queries to achieve this. In this case, these cache refresh queries seen at the authoritative servers are not reflective of end-user activity but reflect the dynamics of DNS cache management.

We now have a number of variables in cache behavior. There are the variations in local cache policies that affect cache retention times, the variations in cache sizes that may force premature cache expiration, and the issues relating to local efforts to perform automated cache renewal. There are the differences in cache management across the various resolver implementations. In addition, there is the wide diversity in cache directives as published by zone administrators.

The overall impact of resolver caching is that measurements of DNS activity based on observations of queries made to authoritative servers do not have a known and predictable relationship with end-user activity. It appears intuitively likely that a higher volume of queries seen at an authoritative server may well relate to a higher level of user activity related to that name, but such a statement cannot be bounded with any greater level of certainty.

Forwarders, Resolver Farms and Load Balancers

The simple model of DNS infrastructure of stub resolvers, recursive resolvers and authoritative servers may be fine as a theoretical abstraction of DNS name infrastructure, but of course, the reality is far messier.

There are DNS forwarders, which present to their clients as recursive resolvers but in fact forward all received queries on to another recursive resolver, or, if they have a local cache, pass all cache misses onward and use the local cache wherever possible (Figure 2). In many implementations, resolvers may be configured to only use forwarding for certain domains, or only after a defined timeout, or under any one of a number of configurable conditions.

Figure 2 - DNS Forwarding

There are DNS "Resolver Farms" where a number of recursive resolvers are used. They mimic the actions of a single DNS resolver, but each individual resolver in the farm has its own local cache. Rather than a single cache and a single cache management policy, there are now multiple caches in parallel. Given a set of user queries being directed into such a resolver farm, the sequence of cache hits and misses is far harder to predict, as the behavior is strongly influenced by the way in which the resolver farm load balancer operates.

DNS Query Load Balancers are themselves yet another source of variation and unpredictability of DNS behavior. Load balancers sit in front of a set of resolvers and forward incoming queries to resolvers. The policies of load balancing are highly variable. Some load balancers attempt to even the load and pass an approximately equal query volume to each managed resolver. Others argue that this results in a worse performance of the collection of caches, and instead use a model of successively loading each resolver up to its notional query capacity in an effort to ensure that the local caches are running 'hot'.
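
The two balancing policies described above can be sketched as follows (the resolver interface with send() and load() is a hypothetical placeholder); which policy a farm uses changes how "hot" each member cache runs and therefore what the authoritative servers see.

    // Sketch of two load-balancer dispatch policies over a resolver farm.

    // Policy 1: spread queries evenly -- every member cache stays lukewarm.
    function roundRobinDispatch(resolvers) {
      let next = 0;
      return (query) => resolvers[next++ % resolvers.length].send(query);
    }

    // Policy 2: fill each resolver up to its notional capacity before using
    // the next one, keeping the first caches running "hot".
    function fillFirstDispatch(resolvers, capacity) {
      return (query) => {
        const target = resolvers.find((r) => r.load() < capacity) || resolvers[0];
        return target.send(query);
      };
    }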

The combination of load balancers and resolver farms can lead to anomalous behaviors. It has been observed that a single incoming query triggered each resolver in a resolver farm to make independent queries to the authoritative servers, although this is neither an anticipated nor a common situation.

The overall picture of DNS resolution can be complex when resolvers, forwarders, load balancers and resolver farms are all placed into the picture. The client will "see" only those resolvers that are locally configured as service points for its local DNS stub resolver. An authoritative server will only "see" those resolvers that are configured to address queries directly to authoritative servers. In between these two edges is a more complex picture of query handling that is not directly visible to observers outside the participating resolver set (Figure 3).

Figure 3 - DNS query handling

If you consider Figure 3 from the perspective of what each party "sees", then the client has direct visibility to just two of the twelve resolvers that can handle the client's queries. All the other resolvers are occluded from the client. If you look at the view from the authoritative server, then nine of the recursive resolvers are visible to that server. In this figure, two resolvers cannot be directly seen from either perspective and are only visible to certain resolvers as either a forwarding destination or a query source.

It's also the case that different measurement systems produce different perspectives even when they appear to occupy the same observer role. For example, when analyzing the root key trust data and comparing the set of reporting resolvers that generated a trust anchor signal to the resolvers that asked a query in response to a measurement ad, we observed that the two sets of 'visible' resolvers had very little in common. Both measurement systems saw some 700,000 distinct IP addresses of resolvers, yet only 33,000 addresses were represented in both data sets (Study of DNS Trusted Key data in April 2018).

Applying these observations to the larger environment of the Internet, it is clear that there is no coherent picture of the way in which all active DNS elements interact with each other. Even the view from end clients of the resolvers that they pass queries to, combined with the view from authoritative servers, does not produce the complete picture of DNS query and cache management on the Internet. The inference from this observation for the task of DNS measurement is that many aspects of overall DNS behavior are not directly observable. Our measurements are of subsets of the larger whole, and they point to suppositions of general behaviors rather than universally observed behavior.

Timers, timers and timers

The process of DNS name resolution operates asynchronously. When a client passes a query to a resolver it does not have any way to specify a time to live for the query. It cannot say to the resolver "answer this on or before time x". Instead, the client operates its own re-query timer, and once this timer expires without a response the client may repeat the query to the same resolver, or send the query to a different resolver, or both.

A similar situation occurs with the interaction between a resolver and the authoritative servers for a zone. The resolver will send a query to a chosen server and set a timer. If no response is received by the time the timer expires, it will repeat the query, either to the same server, or to another listed authoritative server, or even both.

Interactions between recursive resolvers and forwarders are also potential candidates for query replication, where a non-responsive forwarding resolver may trigger the resolver client to re-query towards another forwarder if multiple forwarders are configured.

The DNS protocol itself can also be unhelpful in this respect. A security-aware recursive resolver cannot directly signal a failure to validate a DNS response, and instead will signal the failure using the same response code as the failure of the server to generate a response (a SERVFAIL response, RCODE 2). This response is normally interpreted as a signal for the resolver client to try a different resolver or authoritative server in an effort to complete the name resolution.
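
A sketch of the client-side behavior described above: a timer-driven retry against an alternate server, with a SERVFAIL answer treated much like a timeout (the query() helper and the timings are hypothetical placeholders).

    // Sketch: stub-resolver retry logic. query(server, qname) is a hypothetical
    // helper that resolves with a response object or rejects on network error.
    const TIMEOUT_MS = 2000;

    function withTimeout(promise, ms) {
      return Promise.race([
        promise,
        new Promise((_, reject) => setTimeout(() => reject(new Error('timeout')), ms)),
      ]);
    }

    async function resilientQuery(servers, qname) {
      for (const server of servers) {              // try each configured server in turn
        try {
          const resp = await withTimeout(query(server, qname), TIMEOUT_MS);
          if (resp.rcode === 'SERVFAIL') continue;  // validation or server failure: try the next one
          return resp;
        } catch (e) {
          continue;                                 // timeout or network error: re-query elsewhere
        }
      }
      throw new Error(`no usable response for ${qname}`);
    }

This is how a single client query can fan out into several queries inside the DNS, as the article goes on to note.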

It is possible for a single query to generate a cascade of internal queries within the DNS as a result of these interactions. As to whether these queries are visible to the authoritative servers or not depends on the cache state of the various recursive resolvers on the various query paths.

The DNS Query Protocol

The DNS query protocol is itself a source of considerable indeterminism in behavior. When a DNS query is passed on by a recursive resolver, the passed-on query is an entirely new query. There is no "query hop count" or any other cumulative information in these sequences of queries. When a server receives a query, its only context is the source IP address of the DNS transaction, and this IP address may or may not have any discernible relationship with the agent that initiated the query in the first place.

There is no timestamp in the query part of the DNS on-the-wire protocol, so no way that a server can understand the 'freshness' of the query. Similarly, there is no "drop dead" time in a query, where the client can indicate that it will have no further interest in a response after a given time.

There has been a recent change to the DNS query with the inclusion of the EDNS(0) Client Subnet option in queries. The intent was to allow large-scale distributed content system operators to steer clients to the "closest" service delivery point by allowing their authoritative servers to match the presumed location of the client with the locations of the available content servers. This introduces a new range of variability in behavior relating to local cache management and the definition of what constitutes a cache hit or miss in these cases.
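
One way to picture that cache-management change (a conceptual sketch, not any particular resolver's implementation): with EDNS(0) Client Subnet, a cache entry is no longer keyed by name and type alone but also by the client-subnet scope returned with the answer, so the same name can legitimately carry several cached answers at once.

    // Conceptual sketch: an ECS-aware cache key.
    // Without ECS the key is (qname, qtype); with ECS the answering server's
    // scope prefix is added, multiplying the possible cache entries per name.
    function cacheKey(qname, qtype, clientSubnet, scopePrefixLen) {
      if (!clientSubnet || scopePrefixLen === 0) {
        return `${qname}/${qtype}`;                  // one answer fits all clients
      }
      // e.g. "www.example.com/A/198.51.100.0/24" -- placeholder values
      return `${qname}/${qtype}/${clientSubnet}/${scopePrefixLen}`;
    }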

A DNS query carries no explanation or rationale. Is this a query that has been passed to the DNS by an end application? Is this query caused by a resolver performing a cache refresh? Is the query a consequential query, such as is made when a resolver performs CNAME resolution or DNSSEC validation, for example?

In a similar vein, a DNS response has only a sparse set of response codes. Would we be in a position to understand more about the way the DNS functions if we had a greater level of insight into why particular responses are being generated, what sources were used, and similar diagnostic information? Or would this be just more DNS adornment without a compelling use case?

Challenges in Measuring the DNS

While there are a number of approaches to DNS measurement, all of these approaches have their own forms of potential measurement bias and uncertainty.

It is possible to use a collection of client endpoints and have them all perform a DNS query and measure the time taken to respond. However, it is entirely unclear what is being measured here. The query will be passed into the DNS and each of the resolvers in the query resolution path has the opportunity to answer from its own cache instead of either forwarding the query or performing a resolution against the authoritative servers. Given the variability of cache state and the variability of the location of authoritative servers for the query name, measurements of resolution time of query names appear to have little to tell us in terms of "performance" of DNS resolution, apart from the somewhat obvious conclusion that a cache-based service is generally faster than resolution via query to authoritative servers.

We could look at the DNS traffic from the perspective of a recursive resolver, looking at the queries and responses as seen at such a vantage point. It's not entirely clear what is being seen here. The queries passed into such resolvers may come from other recursive resolvers who are using this resolver as a forwarder target. They may be using this resolver all the time or only using it in a secondary capacity when the primary forwarder has failed to respond in time. The queries may come from stub resolvers in end client systems, and again it's unclear whether queries to this resolver form the primary query stream from the client or if this resolver is being used as a secondary service in the event of a timeout of a response from the primary service. It is unclear to what extent caching by the recursive resolver's clients is also altering the inherent signal in the query stream. While a view from a recursive resolver can allow the observer to form some views as to the nature and volume of some DNS activities, it does not provide a clear view into other aspects of the DNS. For example, a view from a recursive resolver may not be able to provide a reliable indicator relating to the uptake of DNSSEC validation across the Internet.

We could move further along the DNS resolution path and look at the behavior of the DNS from the perspective of authoritative servers. The same issues are present, namely, the interaction with resolvers' caches implies that the authoritative server only receives cache miss and cache refresh queries from resolvers. There is also the issue that each DNS name tends to collect its own profile of users, and it is unclear to what extent such users for a particular DNS domain or set of domains are representative of the broader population of users.

As well as measurements using passive collection techniques, there are various active measurement techniques that involve some form of injection of specific DNS queries into the DNS and attempting to observe the resolution of these queries at various vantage points, either at recursive resolvers, root servers or at selected authoritative servers. Conventionally, active measurement using query injection requires some form of control over the endpoint that will allow the monitoring application access to DNS queries and responses. This normally requires the use of customized probe systems with their own DNS libraries, which introduce their own variability.

The Uncertainty of DNS Measurement

None of these measurement techniques offer an all-embracing accurate view of the behavior of the DNS. This is not to say that we are blind to the way the DNS works. Far from it. We probably believe that we have a good insight as to the way the DNS behaves. But our view is not necessarily that accurate and many aspects of DNS behavior raise further questions that delve into unknown aspects of this distributed system.

A number of studies have concluded that in excess of 90% of queries directed to the DNS Root Servers are for non-existent names. (For example, "A Day at the Root of the Internet", by Castro et al, ACM SIGCOMM, September 2008, estimated that 98% of query traffic seen at the DNS root should not be there at all!) How can this be? How can the DNS system be so liberally populated with aberrant behaviors that cause an endless torrent of queries that attempt to resolve unresolvable names? Or is this, in fact, a very small proportion of the overall query load, but the cumulative effects of caching in resolvers are masking the more conventional queries? The problem now is that we have no precise view of the overall query load in the DNS, and the calculation of the extent to which queries for non-existent names populate this overall query load is not measurable to any high level of certainty.

Another example is the observation that some studies have found that a very large proportion of queries in the DNS are in fact "fake" queries. They are "fake" in that the query is being made without any visible action on the part of an end client, and the query appears to be an outcome of some behaviors within the DNS itself rather than client-side initiated queries. Our experiences with DNS measurement, using unique timestamped labels and an online ad distribution system to see the queries from end users, appear to support this observation, where some 40% of queries seen at our authoritative servers reflect an original query that was made days, months or even years in the past! (http://www.potaroo.net/ispcol/2016-03/zombies.html) Why is this happening? How are these queries being generated? What happens to responses? If this is visible at authoritative servers, what proportion of such repeat queries are being answered from resolvers' caches? Again, measurement efforts to answer such questions have a very high level of uncertainty associated with the measurement.
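
The measurement technique mentioned here can be sketched roughly as follows: each ad impression queries a unique, timestamped label under a zone the experimenters control, so any later repeat of that exact label at the authoritative server is, by construction, not a fresh end-user action (the zone name below is a hypothetical placeholder).

    // Sketch: generating unique, timestamped measurement labels and, on the
    // authoritative side, estimating the age of a repeated query.
    const crypto = require('crypto');

    function makeLabel() {
      const nonce = crypto.randomBytes(6).toString('hex');
      return `u${nonce}-t${Date.now()}.measurement.example`;   // placeholder zone
    }

    // At the authoritative server: how old is the timestamp embedded in this query?
    function queryAgeMs(qname) {
      const match = qname.match(/-t(\d+)\./);
      return match ? Date.now() - Number(match[1]) : null;
    }

    // A label seen again days or years later would report a correspondingly large age.
    const label = makeLabel();
    console.log(label, queryAgeMs(label)); // freshly generated, age ~0 ms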

Which brings us back to the proposed roll of the DNS Root Zone Key Signing Key.

The question of the extent of anticipated user impact is a really good question from the perspective of risk assessment. Our measurements with the existing RFC 8145 trust anchor reporting tool are simply unable to offer a reasonable assessment of the user impact of this key roll.

One possible response is to offer the view that if we wait then more resolvers will support this signal mechanism, which will imply that the signal will be more inclusive of the entire DNS environment, which means that the results of the analysis will have a lower level of uncertainty. There is no particular pressing need to roll the key now, tomorrow or any other particular date, and if deferral has no visible downside and the potential of garnering more information about potential risk, then deferral of the procedure looks like a sound conservative decision.

The opposite view is that the DNS simply cannot provide measurements relating to the entirety of the DNS that have an arbitrarily high level of certainty, irrespective of the measurement approach. Waiting for some new measurement approach that will eliminate the uncertainties of risk assessment looks just like pointless procrastination. No cryptographic key should be considered to be eternal, and key rotations, planned or unplanned, should be anticipated. In such a situation it appears to be better to make key rotations a regularly scheduled operational event rather than an erratic, hasty, event-driven, once-in-a-blue-moon debacle. That way, code that does not incorporate support for KSK rolls will not survive in the network for more than one or two regularly scheduled key rolls, and the incorporation of this process becomes a part of the DNS ecosystem. If we are going to roll the key at regularly scheduled intervals, we need to start with the first roll, and procrastination over this step appears to be more harmful than just simple execution of the procedure.

Within the bounds of some approximate level of certainty, we understand the population of potentially impacted users to be small, and certainly far smaller than the uncertainty levels associated with measurement exercises. Rolling the KSK as per the prepared procedure looks like a reasonable operational decision in such circumstances.

Written by Geoff Huston, Author & Chief Scientist at APNIC


More under: DNS, DNS Security

Categories: News and Updates

Cuba Starts Providing Internet on Mobile Phones

Tue, 2018-07-17 18:15

Cuba, one of the least connected countries, has started providing internet on the mobile phones of select users as it aims to roll out the service nationwide by year-end. Sarah Marsh reporting in Reuters: "Journalists at state-run news outlets were among the first this year to get mobile internet, provided by Cuba's telecoms monopoly, as part of a wider campaign for greater internet access that new President Miguel Diaz-Canel has said should boost the economy and help Cubans defend their revolution." According to a leaked 2015 government document, Cuba plans to connect half of the homes and 60 percent of phones by 2020.


More under: Access Providers, Mobile Internet

Categories: News and Updates

Researchers Warn Buried Internet Cables at Risk as Sea Levels Rise

Mon, 2018-07-16 20:48

The results of a study presented today at a meeting of internet network researchers indicate that critical communications infrastructure could be submerged by rising seas in as little as 15 years. The study, which only evaluated risk to infrastructure in the United States, was shared with academic and industry researchers at the Applied Networking Research Workshop, a meeting of the Association for Computing Machinery, the Internet Society and the Institute of Electrical and Electronics Engineers. "Most of the damage that's going to be done in the next 100 years will be done sooner than later," says Paul Barford, the study's senior author and a UW-Madison professor of computer science. At risk are the buried fiber optic cables, data centers, traffic exchanges and termination points that are the nerve centers, arteries and hubs of the vast global information network. "That surprised us. The expectation was that we'd have 50 years to plan for it. We don't have 50 years."


More under: Access Providers, Broadband

Categories: News and Updates

Former ICANN Senior Vice President Kurt Pritz to be Named Chair of Whois Group

Mon, 2018-07-16 20:11

"Former ICANN senior vice president Kurt Pritz is expected to be named chair of the group tasked with reforming Whois in the post-GDPR world," reports Kevin Murphy in Domain Incite. "I'm told that choice was made by GNSO Council's leadership and selection committee… and will have to be confirmed by the full Council when it meets this Thursday. Pritz would chair the GNSO's first-ever Expedited Policy Development Process working group, which is expected to provide an ICANN community response to ICANN org's recent, top-down Temporary Specification for Whois."


More under: Domain Names, ICANN, Whois

Categories: News and Updates

ICANN Issues Notice of Breach of Registry Agreement to .Pharmacy TLD Operator

Thu, 2018-07-12 23:35

The National Association of Boards of Pharmacy ("NABP"), the operator of the .Pharmacy top-level domain, is in breach of its Registry Agreement with ICANN, according to a letter issued by the agency today. NABP has been accused of failing to operate the TLD in a transparent manner consistent with general principles of openness and non-discrimination. The letter also indicates that NABP has failed to publish on its website primary contact information for handling inquiries related to malicious conduct in the TLD. Failure to comply by August 11 may result in the termination of NABP's contract, ICANN warns.


More under: ICANN, Policy & Regulation, Registry Services, New TLDs

Categories: News and Updates

A Progressive Web Apps World?

Thu, 2018-07-12 23:27

The browser is now a full-fledged platform for apps. The major benefits of using the browser as a platform include ease of universal deployment and avoiding concepts such as having to install software. It's also a very flexible and powerful environment.

Increasingly consumer electronics "devices" are software applications. In the April 2014 issue of CE Magazine, I wrote about HTML5 as a programming environment. Today's PWAs (Progressive Web Apps) go further. They take advantage of HTML5 and also capabilities of the JavaScript environment.

This is happening with PWAs, or Progressive Web Apps. The term progressive refers to the approach of taking advantage of capabilities that are available in the application environment rather than having rigid requirements. This includes running on a range of devices instead of focusing on narrow categories like "mobile first". Moving across a disparate array of environments should include taking advantage of the opportunities that are available, such as a larger screen, and adapting to the limits of a small screen. Perhaps in the future we'll treat the screen as an optional interaction surface given the availability of voice, touch and other techniques.

Instead of lumping a set of disparate concepts under "mobile" we can think of them separately:

  • The screen is an interaction surface. It may be a small wall-mounted device or a large screen on a pad we carry around.
  • Mobility is not simply a characteristic of a device but should be thought of as the mobility of a person who can interact on any available surface rather than being limited to a device they are carrying. Applications too can be mobile and available where needed rather than on a particular device.
  • We can introduce place as its own attribute. Touching a surface in the living room would turn on the light in that room. When we place a phone call we may be trying to reach a person. Or perhaps we are trying to reach anyone who happens to be home at the time.
  • There are many interaction modes with touch being one. Cameras can enable rich gesturing or even the use of facial expressions. Voice is another. And so many more.

We've come a long way from the original iPhone with its then rigid specifications. PWAs now give us a chance to explore new possibilities.

While a PWA can be treated like a standard application on a device, the ability to "just run" from a URL makes it easy to use the application on any device with a browser. This allows one to travel light.

Today airlines are removing screens from their planes and expecting travelers to carry their own devices, and some airlines hand out tablets. Perhaps it will make sense to bring back the larger screens as connected devices with browsers.

Apps

Unlike desktop applications which have had full access to the capabilities of the hardware (even more so in the earlier days of personal computers), a PWA starts out with essentially no access beyond the content from its original site. This reflects the cautious safety-first model for the web.

These capabilities are becoming increasingly more available to browser applications. They can now get the users' location (if permission is granted) and support Bluetooth devices (again, with permission). Soon they will be able to process payments. Increasingly we can think of PWAs as true applications and not just cached web pages.
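
For example, the permission-gated capabilities mentioned here look roughly like this in browser JavaScript (a sketch; the "#connect" button is a hypothetical element, and both calls must run from a secure context with the user's consent):

    // Geolocation: the browser asks the user before handing over a position.
    navigator.geolocation.getCurrentPosition(
      (pos) => console.log('lat/long:', pos.coords.latitude, pos.coords.longitude),
      (err) => console.warn('location denied or unavailable:', err.message)
    );

    // Web Bluetooth: must be triggered by a user gesture (e.g. a button click),
    // and again the browser mediates the device chooser.
    document.querySelector('#connect').addEventListener('click', async () => {
      const device = await navigator.bluetooth.requestDevice({ acceptAllDevices: true });
      console.log('user picked device:', device.name);
    });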

Google and others are using PWAs as an opportunity to enforce hypertext transfer protocol secure (https) encryption. Service workers require https.

Central to the structure of a PWA is the Service Worker [1] — typically a file in the top level (or another) directory of the application. A common name is "/sw.js". It is distinct from the rest of the application and, instead of a top-level "window" object, it has a "self" object. It's a kind of web worker [2] which communicates with the rest of the application by sending messages rather than directly referencing shared objects.

The service worker has limited access to files in or below its file directory. This is part of a larger approach of treating the URL path as a file system path, but sites can get creative in interpreting the string. This cautious approach also limits access to platform capabilities such as the native file system.
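
A minimal sketch of the structure just described: the page registers "/sw.js" (over https, as noted below), and the two sides talk by posting messages rather than sharing objects.

    // In the page: register the service worker and send it a message.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js').then((reg) => {
        if (reg.active) reg.active.postMessage({ type: 'hello', from: 'page' });
      });
    }

    // In /sw.js: there is no "window" here -- the global object is "self".
    self.addEventListener('install', () => self.skipWaiting());

    self.addEventListener('message', (event) => {
      // Communication is by messages, not shared objects.
      console.log('page said:', event.data);
    });

    self.addEventListener('fetch', (event) => {
      // A place to serve cached responses; a pass-through is shown for brevity.
      event.respondWith(fetch(event.request));
    });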

Notification is a key new capability which relies on the service worker being available. Even when the browser is closed, the JavaScript engine is still running. The notification mechanism allows data to be updated in the background so you can quickly see the current state of orders or reservations even if you aren't connected at the moment.

The app can also use the platform notification mechanism to alert the user of changes. To prevent denial-of-service attacks, the notifications are routed through the browser supplier's servers.
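
A sketch of the local display step of that mechanism (the title and message text are illustrative; actual push delivery from a server goes through the browser vendor's push service):

    // Ask permission and show a notification via the service worker registration.
    async function notifyOrderUpdate(message) {
      const permission = await Notification.requestPermission();
      if (permission !== 'granted') return;

      const reg = await navigator.serviceWorker.ready;
      reg.showNotification('Order update', {
        body: message,           // e.g. "Your reservation is confirmed"
        tag: 'order-status',     // collapse repeated alerts of the same kind
      });
    }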

There is also a local data store, IndexedDB, which is an advancement over the browser's local storage. In addition to having larger capacity and transactions, IndexedDB is asynchronous. Asynchronicity is a key feature of HTML5 programming, and the promise mechanism has improved the readability of asynchronous programs. This style can make apps very responsive while still having the simplicity of single threading.
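
A sketch of that point about asynchrony: IndexedDB's event-based API is commonly wrapped in a promise so it reads like the rest of an HTML5 app (the "app-db" and "orders" names are made up for the example).

    // Wrapping IndexedDB's event callbacks in a Promise.
    function openDb() {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open('app-db', 1);
        req.onupgradeneeded = () => req.result.createObjectStore('orders', { keyPath: 'id' });
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    async function saveOrder(order) {
      const db = await openDb();
      return new Promise((resolve, reject) => {
        const tx = db.transaction('orders', 'readwrite');   // transactional, unlike localStorage
        tx.objectStore('orders').put(order);
        tx.oncomplete = () => resolve();
        tx.onerror = () => reject(tx.error);
      });
    }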

I expect that more platform capabilities such as calendar providers will become available to PWAs. These concepts are being pioneered on mobile devices but also make sense on desktop devices allowing more integration with work flows.

Looking ahead

Because PWAs are constrained by the browser environment it's hard to directly share among PWAs from different sources or across browsers. I would like to have the ability to share resources among browser apps and among local machines rather than relying on distant servers.

In theory some capabilities can be provided by third parties with local https servers and, perhaps, we'll see offerings such as document stores. There is still a need for a more traditional file system which just stores collections of bytes without knowing what they mean. This is similar to access to a file system on today's SD cards, which allows multiple programs to store image files so that camera applications and photo processing programs can share access to the same files and create new formats. How should access to such objects be managed? How do I selectively share access and limit others' ability to view a picture even when they have physical access to the device?

Today we do have browser machines such as the Chromebooks, but their capabilities are limited by the available apps. The same hardware can be used as a Windows Netbook which is far more capable. Over time it may make more sense to have such browser-based machines without worrying about the underlying operating systems.

PWA Engines

PWAs offer the ability to write once and run everywhere. Smart TVs are an example of a market that is currently fragmented as various vendors build their own smart TVs along with boxes from Amazon, Roku, Apple and others.

Having a browser box could provide a standard platform which brings the benefits of the huge investment in web technology to this market. Concepts like URLs would be used to bookmark content (formerly known as TV shows) so they can be shared. Capabilities such as windowing (Picture in Picture) can be implemented locally and flexibly.

Devices like home lighting and climate controllers can be implemented as applications using generic hardware and become available anywhere in the house. Voice services like Alexa and Google Home can take advantage of place and monitor conditions such as temperature. (Alas, yes, they can also listen in ... proceed cautiously).

PWA and IoT

The cost of a browser-capable computer keeps dropping. But for many applications there is no need to run a full browser engine on the device itself.

Such devices still benefit from being able to use the expertise developed for browser-based applications. NodeJS supports running JavaScript on a wide variety of machines. Onion.IO sells a complete system including WiFi for $7.50 retail (as of November 2017).

We need to think about an Internet of Things built of fully capable devices. There will be sensors and other devices that may be resource constrained but those devices can be considered as peripherals to those fully capable devices. Similarly, the programming environment of JavaScript is far more resilient than a low-level language such as C.

Rather than mobile first, we can take a local-first approach to development. Thus, a door "bell" (it's no longer a bell, but a two-way communicating device) might take advantage of a face recognition service but can still function if the recognition service is not available.

A PWA application should communicate directly with the door device rather than relying on a distant server merely to authorize the door to open.
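
A rough sketch of that local-first idea: try the remote recognition service with a short timeout, but let the door device keep working when it is unreachable (recognizeRemotely() and ringLocally() are hypothetical helpers for illustration).

    // Local-first design: the remote face-recognition call is an enhancement,
    // not a dependency.
    async function onDoorbellPress(snapshot) {
      let visitor = null;
      try {
        visitor = await Promise.race([
          recognizeRemotely(snapshot),                                   // optional cloud service
          new Promise((_, reject) => setTimeout(reject, 1500, new Error('timeout'))),
        ]);
      } catch (e) {
        // Service unreachable or slow: fall back to plain local behavior.
      }
      ringLocally(visitor ? `Visitor: ${visitor.name}` : 'Someone is at the door');
    }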

The Internet protocols are not quite there. The DNS (Domain Name System) depends on distant services and the IP address is issued by a "provider". We need to shift to thinking of the Internet in terms of peer connectivity rather than as something that one accesses.

About JavaScript

JavaScript started out as a low-performance scripting language designed in one week. But it built upon a long heritage of important design principles. The focus was on safety and resilience. Objects are defined dynamically rather than having static class definitions.

The surprise is that today JavaScript has become a high-performance programming environment thanks to clever ideas like dynamic compilation. Tools like TypeScript assist programmers by allowing them to provide hints to the IDE. In addition, there is a whole set of tools to facilitate sharing of packages. An event-driven, single-threaded approach removes much of the complexity of high-performance applications.

With platforms like NodeJS JavaScript isn't just for the web. Progressive Web Apps are just one class of applications written using the language and associated tools.

A PWA World?

Not quite. While I am excited about PWAs and see many venues, such as TV-V2, where they are vital, there is still a need for a variety of approaches to programming and software development. There will continue to be a need to program applications ahead of what is available in the PWA environment, and a need for developers to create their own extensions.

Google, Microsoft, and others are embracing PWAs. For Microsoft there is a recognition that they can make a lot of money providing services using their Azure platform (and, for Amazon, AWS).

For me PWAs are exciting because they bring back some of the excitement of writing and sharing applications without all the complexities of applications meant for wide market sales.

Today's PWAs are built on the current web, which is optimized for content distribution and commerce. As we explore new use cases and applications for this engine, I expect to see much innovation, including the development of more peer technologies rather than a focus on delivering services. That said, the current technologies and protocols are already a strong basis for delivering capabilities.

Consumer electronics devices will increasingly use PWAs either internally or as an interface. More than that, PWAs empower prosumers to wield software like they did soldering irons in the past.

[1] https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API
[2] https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API

Written by Bob Frankston, Independent Internet Professional

Follow CircleID on Twitter

More under: Mobile Internet, Web

Categories: News and Updates

DomainTools Sued for Misusing New Zealand's .NZ Domain Name Registration Information

Thu, 2018-07-12 19:54

Domain Name Commission Limited ("DNCL"), New Zealand's overseer for the country's .NZ domain, has filed a lawsuit against the domain name service company DomainTools. According to the filing, DNCL states that "DomainTools's activities undermine the protections that DNCL promises to provide to .nz registrants and violate the TOU governing use of the .nz WHOIS service." It continues: "The products and services that DomainTools offers to its customers are built on practices that infringe .nz registrants' privacy rights and expectations by harvesting their registration information in bulk from the registry where it is maintained; using high-volume queries and technical measures designed to evade the restrictions that protect .nz WHOIS servers against that form of abuse; and storing and retaining registrant data, including detailed personal contact information, even after the registrant has chosen to withhold their data from the registry. These activities cause irreparable harm to DNCL's reputation and integrity, divert resources from DNCL's mission, interfere with its contractual relationships with .nz domain name registrars, and harm the goodwill DNCL receives from individual registrants of .nz domain names."

Follow CircleID on Twitter

More under: Domain Names, Law, Privacy, Whois

Categories: News and Updates

Anti-Phishing Working Group Proposes Use of Secure Hashing to Address GDPR-Whois Debacle

Tue, 2018-07-10 21:49

The Anti-Phishing Working Group (APWG), in a letter to ICANN, has expressed concern that the redaction of WHOIS data as defined by GDPR for all domains is "over-prescriptive". APWG, an international coalition of private industry, government and law-enforcement actors, says such a redaction of WHOIS data will "hinder legitimate anti-spam, anti-phishing, anti-malware and brand protection activities, particularly efforts to identify related domains that are under unified (e.g., cyber attacker's) control." Kevin Murphy, reporting in Domain Incite, says: "The hashing system may also be beneficial to interest groups such as trademark owners and law enforcement, which also look for registration patterns when tracking down abuse registrants. The proposal would create implementation headaches for registries and registrars — which would actually have to build the crypto into their systems — and compliance challenges for ICANN."
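The general idea behind a hashing approach, sketched here under my own assumptions rather than APWG's exact specification, is that a redacted contact field can be published as a one-way hash: two domains registered with the same email address yield the same token, so related registrations can still be grouped without revealing the address itself.

```typescript
// Sketch of correlating registrations via hashed contact data (Node.js crypto).
// The salt handling, normalization and field choice are illustrative assumptions.
import { createHash } from "node:crypto";

function hashContact(email: string, registrySalt: string): string {
  return createHash("sha256")
    .update(registrySalt + email.trim().toLowerCase())
    .digest("hex");
}

const salt = "per-registry-secret";
const a = hashContact("attacker@example.com", salt);
const b = hashContact("Attacker@Example.com ", salt);
console.log(a === b); // true: same registrant, same token, no address exposed
```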

Follow CircleID on Twitter

More under: Cybercrime, Cybersecurity, DNS, Domain Names, ICANN, Internet Governance, Policy & Regulation, Privacy, Registry Services, Whois

Categories: News and Updates

Doug Madory Reports on Shutting Down the BGP Hijack Factory

Tue, 2018-07-10 19:24

A lengthy email to the NANOG mailing list last month concerning suspicious routing activities of a company called Bitcanal initiated a concerted effort to kick a bad actor off the Internet. Doug Madory, Director of Internet Analysis at Oracle Dyn, in a post today reports on some of the details behind this effort: "When presented with the most recent evidence of hijacks, transit providers GTT and Cogent, to their credit, immediately disconnected Bitcanal as a customer.  With the loss of international transit, Bitcanal briefly reconnected via Belgian telecom BICS before being disconnected once they were informed of their new customer’s reputation."

Follow CircleID on Twitter

More under: Cybercrime, Cybersecurity, Networks

Categories: News and Updates

China Has Nearly 3 Times the Number of Internet Users as the US, and the Gap Will Only Widen

Tue, 2018-07-10 18:39

China has 772 million internet users, compared to 292 million in the United States. While US internet penetration has reached 89%, China's is only 55% and growing fast. A new report titled "China Internet Report 2018", discussed today at the Rise Conference in Hong Kong, attempts to comprehensively break down China's thriving tech industry, identifying the big players in each field and laying out four significant emerging trends. The report is the result of a collaboration between Abacus, 500 Startups, and the South China Morning Post. Xinmei Shen, reporting in Abacus News, has highlighted the top ten takeaways from the China Internet Report, including the following:

— "China’s internet giants are doing everything. From streaming video to self-driving cars, the big three (Baidu, Alibaba and Tencent) are present in almost every tech sector, either by investing in startups or by building it themselves."

— "Government policy continue to actively shape China’s tech industry. State watchdogs have banned cryptocurrency trading, called out companies for invading user privacy, and even put a stop to quiz apps that ask “inappropriate” questions. Any trend can disappear overnight — if Chinese authorities want it to."

Follow CircleID on Twitter

More under: Broadband, Cloud Computing, Mobile Internet, Web

Categories: News and Updates

An Insider's Guide to the IPv4 Market - Updated

Tue, 2018-07-10 16:40

This article was co-authored by Marc Lindsey, President and Co-founder of Avenue4 LLC and Janine Goodman, Vice President and Co-founder of Avenue4 LLC.

On September 24, 2015, the free supply of IPv4 numbers in North America dwindled to zero. Despite fears to the contrary, IPv4 network operators have been able to support and extend their IP networks by purchasing the IPv4 address space they need from organizations with excess unused supply through the IPv4 market. The IPv4 market has proved to be an effective means of redistributing previously allocated IPv4 numbers, and could provide enough IPv4 addresses to facilitate the Internet's growth for several more years while the protracted migration to IPv6 is underway. Although the market has matured since we wrote our last "Insider's Guide" in May 2015, some old inefficiencies and impediments persist and new challenges have been exposed.

IPv6 Migration Hasn't Solved the Problem

IPv6 provides up to 3.4 x 10^38 unique IP addresses — easily enough to support the expected growth of the Internet for the foreseeable future. However, incompatibility between IPv4 and IPv6 has hobbled the transition. Despite its generous capacity, not enough network operators and end users are actually moving to IPv6 to justify retiring IPv4 networks. By the end of June 2018, IPv6 traffic represented less than a quarter of the total global Internet traffic, according to Google's IPv6 adoption statistics. Migration to IPv6 is simply not occurring fast enough to accommodate continued Internet expansion.
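For reference, the address-count figure is simply the 128-bit address length at work:

$$2^{128} \approx 3.4 \times 10^{38} \qquad \text{versus} \qquad 2^{32} \approx 4.3 \times 10^{9} \ \text{for IPv4}$$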

The CGN Band-Aid

As the world slowly migrates to IPv6, some network operators are leaning heavily on carrier-grade Network Address Translation (CGN) to slow their IPv4 address consumption while continuing to extract value from their existing IPv4 network infrastructure. CGNs allow many endpoints served by a single carrier to share a smaller number of unique public IPv4 addresses (a simplified sketch follows below). But CGNs have considerable drawbacks:

  • Designing, procuring, implementing and operating CGNs give rise to capital and operating expenses invested in short-term solutions.
  • CGNs can frustrate law enforcement seeking to identify bad actors, and impede proper functioning of Internet security and other Web-enabled applications that depend on unique endpoint IP address mapping.
  • Extensive use of CGNs adds additional complexity to the networks that use them, and that complexity can compromise network performance, availability, reliability and scalability.
  • Industry experts believe end-point obfuscation caused by CGNs stunts the development and deployment of important Internet innovations such as IoT.

CGNs are not a viable long-term solution.
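A toy model of the address sharing described above, independent of any vendor's implementation (the addresses and port-block size are invented; 100.64.0.0/10 is the shared address space commonly used inside CGNs):

```typescript
// Toy model of CGN address sharing: each subscriber's private endpoint is mapped
// to one shared public IPv4 address plus a dedicated block of ports.
interface Mapping {
  privateAddr: string;
  publicAddr: string;
  portRange: [number, number];
}

const PUBLIC_ADDR = "203.0.113.7";   // documentation-range address, illustrative only
const PORTS_PER_SUBSCRIBER = 1024;   // invented block size

function allocate(subscribers: string[]): Mapping[] {
  return subscribers.map((privateAddr, i) => {
    const start = 1024 + i * PORTS_PER_SUBSCRIBER;
    return {
      privateAddr,
      publicAddr: PUBLIC_ADDR,
      portRange: [start, start + PORTS_PER_SUBSCRIBER - 1],
    };
  });
}

console.log(allocate(["100.64.0.10", "100.64.0.11", "100.64.0.12"]));
// One public address now fronts many subscribers, which is also why per-endpoint
// attribution and some applications break, as the list above notes.
```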

Workings of the Private Market

Meanwhile, IPv4 number trading between private parties is very active in North America, Europe and the Asia Pacific region. The IPv4 market has created powerful financial incentives for entities to free up excess inventory and sell it to organizations that still need more IPv4 numbers to operate and grow their networks.

The first widely publicized sale occurred in 2011 soon after the Internet Assigned Numbers Authority (IANA) exhausted its IPv4 supply. Microsoft purchased Nortel's IPv4 inventory of 666,624 legacy numbers for $7.5 million. Since then, the sale, lease or other conveyance of IP numbers has accelerated. At the end of 2017, ARIN recorded nearly 39 million numbers transferred between private parties. In the first half of 2018, over 24 million numbers were transferred — putting 2018 on track to reach the highest volume of ARIN registration transfers ever.
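For a sense of unit pricing, that first publicized sale works out to roughly

$$\frac{\$7{,}500{,}000}{666{,}624\ \text{addresses}} \approx \$11.25\ \text{per address.}$$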

Despite the IPv4 marketplace's success, it operates inefficiently. There are no established standards of conduct, little transparency, and even less accountability. Many participants in the market struggle to define, from a legal perspective, what is being bought and sold. Contract terms and conditions can vary significantly, often derived from other industries and not always fit for the nuances of IPv4 transactions. There are no accepted means to establish market value. Market trades are handled via ad hoc bilateral negotiations. And total transaction costs (e.g., registry charges, legal fees, escrow account charges, and broker/sales agent commissions) are not always apparent.

Notwithstanding these inefficiencies, buyers and sellers who follow key market-proven tips and practices can trade and transfer numbers successfully.

1. Define Your Preferred Deal Structure Early

Before participating in the marketplace, both buyers and sellers should understand the specific exclusive rights in IPv4 numbers that can be conveyed, identify and assess the trade-offs of the available deal approaches, and select the approach that best fits their business objectives. For both buyers and sellers, flexibility is vital to maximizing the value of their transactions.

One-time asset sales agreements are common. But large deals can also include options, installment payments and phased delivery, and other creative, value-enhancing features such as allowing the buyer to pay a portion of the purchase price in the form of credits that the seller can use to offset purchases of unrelated enterprise services supplied by the buyer.

2. Select the Right Advisors

The burgeoning IPv4 broker industry is an artifact of the new market. There are a slew of brokers now offering services specifically to IPv4 market participants. This pool of participants will keep increasing. There are no meaningful barriers to entry for IPv4 brokers. There also is no self-regulatory or independent body to enforce minimal qualifications, experience or codes of conduct. Many (incorrectly) assume that brokers appearing on the ARIN, RIPE and APNIC facilitator lists have been vetted by the RIRs to meet minimum experience, skill and ethical standards. The RIRs do not perform any such vetting. In fact, they expressly disclaim responsibility for the quality or suitability of their registered facilitators. Consequently, assessing the qualifications and ethics of prospective brokers, and managing the quality of their performance once engaged, is critical to success for market participants.

Buyers or sellers planning to enlist the assistance of a broker, advisor or other form of intermediary should test the broker's experience and understanding of the industry by interviewing multiple brokers and researching the credentials and backgrounds of the firms' professionals. Getting references in this business can be tough; no one wants to go on record.

The brokerage or market services agreement should clearly describe when the intermediary earns its fees and when those fees are payable. The service agreement should also disclose whose interest the broker represents, including whether it receives any form of compensation or fees from other parties in the deal. Setting and documenting expectations up front will help avoid disputes down the line.

3. Perform Due Diligence

Immediately upon initiating trade discussions, the prospective buyer and seller should sign a mutual confidentiality agreement and conduct due diligence.

Buyer due diligence begins when the buyer obtains from the seller or its broker the specific designation of the available IP address range. The buyer should verify that the prospective seller is an active organization in good standing by checking the records of the secretary of state where the company is organized, examining corporate credit ratings and, for publicly traded companies, reviewing recent SEC filings. Buyers should also confirm that (a) the selling entity is, in fact, the current registrant or a legal successor of the listed registrant by investigating the RIR registration records for the IPv4 space being sold, and (b) the individual purporting to represent the seller is authorized to act on the seller's behalf.

A buyer should require its seller to disclose material facts about the block for sale, including whether (i) any of the numbers are currently in use, (ii) any third-parties have made a competing claim to control the block, or (iii) there are any known inaccuracies in the RIR registration records. Buyers also may want to analyze the reputation and prior usage of the numbers in the available block. Thorough due diligence may involve even more comprehensive written questions presented by the buyer to seller.

Seller due diligence is less intensive but still necessary. Eventually, trades become public knowledge when the RIR registration records are updated. Many sellers care about their reputations and prefer to conduct business with organizations that have compatible values. IPv4 transactions are no different.

Sellers should know their buyer's business before proceeding to the contract phase, and should also establish criteria to filter out potential buyers with whom the seller will not conduct business as part of its go-to-market strategy. In addition, the seller's due diligence should examine the potential buyer's ability to fulfill the payment terms of the contract. This financial assessment will determine when it may be prudent to require an up-front deposit or employ an escrow as part of the payment terms. Sellers should also verify the authority of the people claiming to represent potential buyers.

Where registration transfer is a closing condition, examining the buyer's prior transfer experience and its ability to successfully register the quantity of numbers purchased with the relevant RIR is important. For buyers in the ARIN or APNIC region, sellers should consider, and weight favorably, offers from buyers that have been pre-qualified by the applicable RIR to receive the block sizes in the deal. Pre-approval will alleviate some uncertainty in the registration transfer approval process.

No transaction is without some risk. The objective of properly conducted due diligence is to identify — before the agreement is signed — transactions that present unreasonable or readily avoidable risks. If due diligence reveals that these risks are within acceptable parameters for both buyer and seller, the parties can then enter into contract negotiations to fairly allocate reasonable risks between them.

4. Focus on the Terms and Conditions that Matter

The uncertainty or misperception about the legal rights attached to IPv4 numbers causes some buyers to define in their asset purchase agreements the rights they believe they will acquire, relying on their experience with traditional tangible property-based asset purchase arrangements or merger and acquisition deals.

Buyers, for example, seek guarantees that they will receive good and clear "title" to the IPv4 numbers. Recognizing that ownership of IP numbers is not settled law, sellers, who may otherwise believe they possess title to their numbers, should resist contractually overcommitting to convey title — at least until the question of ownership is resolved by the courts. On a related point, some buyers demand sellers represent and warrant that they will convey to the buyer unconstrained exclusive use rights. But IP numbers are part of the Internet Protocol, which relies on the operation of interconnected global registries, servers and networks. And any range of IP numbers can be used without constraint on private networks. Sellers' promises to convey exclusive rights in their IPv4 numbers should, therefore, be limited to the right to register in the RIR system, and use for Public Internet routing, the IPv4 numbers being traded.

Some additional key contract terms to focus on include:

  • Clear termination triggers that can only be invoked prior to the closing of the transaction
  • Appropriate limits on liabilities and disclaimers for both parties
  • Descriptions of any responsibilities and liabilities that survive the closing, and if some survive, for how long
  • Representations and warranties tailored for the manageable risks and conditions specific to IPv4 sales and transfers
  • Scope and duration of each party's post-closing obligations
  • Delineation of the duties, liabilities and costs that are assumed by the buyer once the IPv4 addresses are sold versus those that are retained by the seller
  • Terms that secure the transfer and payment (e.g., third-party escrow) for the in-scope address space

Ready to Navigate the Market

The private IPv4 market is evolving. It has been used effectively to redistribute millions of underutilized IPv4 assets, yet buyers and sellers continue to face obstacles. Market participants can achieve their business objectives by employing market-proven tips and best practices. Network operators that lack the resources or insight to navigate the challenges of today's market remain at risk of being competitively disadvantaged.

Written by Marc Lindsey, President and Co-founder at Avenue4 LLC

Follow CircleID on Twitter

More under: IP Addressing, IPv6

Categories: News and Updates

Application Fees for New gTLDs Could Be Artificially Kept High

Tue, 2018-07-10 03:59

"It's possible that application fees for new gTLDs could be artificially propped up in order to discourage gaming," reports Kevin Murphy in Domain Incite: "In the newly published draft policy recommendations for the next new gTLD round, ICANN volunteers expressed support for keeping fees high 'to deter speculation, warehousing of TLDs, and mitigating against the use of TLDs for abusive or malicious purposes'. ... I wonder how much of a deterrent to warehousing an artificially high application fee would be; deep-pocketed Google and Amazon appear to have warehoused dozens of TLDs they applied for in the 2012 round."

Follow CircleID on Twitter

More under: ICANN, New TLDs

Categories: News and Updates

European Data Regulators Throw ICANN Back to the Drawing Board for a Third Time on Whois Privacy

Sun, 2018-07-08 05:42

In a letter to ICANN, the chair of the European Data Protection Board (EDPB) makes it plain that even the organization's "interim" plan is fundamentally flawed, reports Kieren McCarthy in the Register. "European data regulators have torn up the latest proposal by internet overseer ICANN over its Whois data service, sending the hapless organization back to the drawing board for a third time. ... In what is perhaps the greatest blow to ICANN's credibility, the EDPB undercuts ICANN's legal appeal to a ruling it lost last month in German court, stating clearly that it cannot force people to provide additional "admin" and "technical" contacts for a given domain name — something some were hoping would act as an effective workaround to the privacy law."

Follow CircleID on Twitter

More under: DNS, Domain Names, ICANN, Internet Governance, Policy & Regulation, Privacy, Whois

Categories: News and Updates

Cuban "Technological Sovereignty" - a Walled Garden Strategy?

Tue, 2018-07-03 21:34

ToDus, a messaging application described as a "Cuban WhatsApp" and Apklis, a distribution site for Android mobile apps, were featured at the First Computerization Workshop held recently at the Universidad de Ciencias Informáticas (UCI).

One might ask, why do we need a Cuban WhatsApp and Apklis when we already have WhatsApp itself and the Google Play Store?

ToDus seems to duplicate WhatsApp's features. Users can send messages, photos and other files to individuals or groups of up to 250 members and, like WhatsApp, it is secure — messages are encrypted and stored on users' phones, not on toDus servers. (ToDus users cannot speak with each other using this version of the program, but that feature will be added). Since toDus is a free app, I believe it could be listed on the Google Play Store as well as on Apklis.

The key difference between toDus and Apklis on the one hand, and WhatsApp and the Play Store on the other, is that the former run on Cuba's national intranet, not the global Internet. One could argue that this duplication is done to lower operating costs or improve performance. I don't know how Cuba's international access is priced, but it seems that the marginal cost of international traffic for a chat app used by 11 million people would be very small and the latency difference imperceptible. (If Cuba is trying to save on communication cost or cut latency, they would be way better off pursuing an undersea cable between Havana and Florida).

Yadier Perdomo, Director of Networks at UCI, may have alluded to a more significant motivation when he stated that toDus "guarantees technological sovereignty, something that similar products, such as WhatsApp and Messenger (from Facebook), do not do." I am not sure what he means by "technological sovereignty," but it seems consistent with an overall effort to focus on domestic as opposed to global communication and services. Furthermore, ETECSA is rolling out 3G mobile connectivity (and experimenting with 4G) and evidently planning to charge less for access to the national intranet than for the global Internet.

Does this point to a strategy of encouraging a Cuban "walled garden" that favors intranet communication and services (and El Paquete Semanal) over Global Internet communication and services?

That policy would have two negative side effects. For one, it would create two classes of Cuban users — the relatively poor mass population that predominantly uses mobile phones on the intranet, and elite users with access to the global Internet using computers as well as mobile phones. The Internet-enabled users would have access to more information and more powerful applications and be better able to create content.

Second, while Cuba can create and support a simple application like toDus on its own, it lacks the scale and resources to create complex, mass-data-dependent applications — Cuba's Ecured will never be as comprehensive as Wikipedia, its Mapa service will never be as useful as Google Maps, there can never be a Cuban equivalent of Google Translate, etc.

One might justify favoring the intranet over the Internet as an interim step toward what the Cubans call "the computerization of society," but it is a drain on resources in the short run and a dead end in the long run. Cubans should focus on things in which they have a comparative advantage — as the saying goes, "do what you do best and link to the rest."

Well, that brings me to the end of this post, but I want to add two miscellaneous tidbits — kind of a PS:

1. If you go to the Internet portal of Cuba's intranet, you see links to toDus and Apklis. Both are broken, but the Apklis link is to https://www.apklis.cu/es/ not to https://www.apklis.cu. What's up with that?

2. The names and logos of Apklis and toDus are completely goofy. The toDus logo is evidently a reference to the Cuban tody bird, and I cannot guess the rationale behind the Apklis butterfly. One thing is clear — neither name nor logo says anything about the corresponding service, though one might guess that Apklis has something to do with APKs.

As of now, the Internet portal of Cuba's intranet is unreachable and versions cached at the Internet Archive do not have links to toDus and Apklis.

Written by Larry Press, Professor of Information Systems at California State University

Follow CircleID on Twitter

More under: Access Providers, Censorship, Internet Governance, Policy & Regulation

Categories: News and Updates

Trump's Tweets Flouting the Cybercrime Treaty Curbs on Racist and Xenophobic Incitement

Tue, 2018-07-03 01:57

The existence of the 2001 Cybercrime Convention is generally well known. The treaty has now been ratified or acceded to by 60 countries worldwide, including the United States. Less well known is the existence of the Additional Protocol to the Convention "concerning the criminalization of acts of a racist and xenophobic nature committed through computer systems." The Additional Protocol has 30 ratifications/accessions — not including the United States, which asserts that the First Amendment to its Constitution would preclude adherence to its provisions.

Next week, Cybercrime Convention signatories and legal experts will gather in Strasbourg for an annual ensemble of meetings and workshops to review the state of the instrument and its implementation. One significant contemporary development that deserves substantive treatment at the meeting is the failure to apply the Additional Protocol to the incessant, pervasive racist and xenophobic Trump tweets and the significant resulting global harm. Trump is the ultimate virtual elephant trampling in the meeting room.

The Additional Protocol

Although the national and international law needed to provide adequate legal responses to propaganda of a racist and xenophobic nature had its origins following World War II, concern over the use of computer systems did not arise until the 1990s. The emergence of heavily promoted, globally interconnected and unregulated DARPA internets in the mid-90s, coupled with the marketplace demise of more regulated and secure OSI internets, resulted in a rapidly scaling array of cybersecurity challenges. One of those challenges was the ability of highly motivated groups promoting racism and xenophobia to organize and propagate their material via DARPA internets.

Developments began unfolding in 1997. In June of that year, the EU Council of Ministers established the European Monitoring Centre on Racism and Xenophobia. In October 1997, the Heads of State and Government of the Council of Europe on the occasion of their Second Summit met to seek common responses to the developments of "new information technologies."

A few weeks later, in November 1997, the UNHCR held a seminal workshop in Geneva, the "Seminar on the role of Internet with regard to the provisions of the International Convention on the Elimination of All Forms of Racial Discrimination." Especially chilling was an NGO presentation by the Paris-based Centre Simon Wiesenthal of statistics on the exponentially increasing hate sites and groups organizing via DARPA internet technology.

By 2001, the problems were significantly worse, and those meeting to produce the Cybercrime Convention found that "the emergence of international communication networks like the Internet provide certain persons with modern and powerful means to support racism and xenophobia and enables them to disseminate easily and widely expressions containing such ideas." This concern resulted in an explicit Additional Protocol to the Cybercrime Convention that defined racist and xenophobic material, proscribed its dissemination, set out measures to be taken at the national level, and applied a number of the Cybercrime Convention's provisions.

"racist and xenophobic material" means any written material, any image or any other representation of ideas or theories, which advocates, promotes or incites hatred, discrimination or violence, against any individual or group of individuals, based on race, colour, descent or national or ethnic origin, as well as religion if used as a pretext for any of these factors. (Art. 2)

distributing, or otherwise making available, racist and xenophobic material to the public through a computer system. (Art.3)

Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offenses under its domestic law, when committed intentionally and without right… (Art. 4)

Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offenses under its domestic law, when committed intentionally and without right, aiding or abetting the commission of any of the offenses established in accordance with this Protocol, with intent that such offense be committed. (Art. 7)

The associated Explanatory Report provides further history and amplification on the provisions.

Rather little, however, was done for more than a decade. A cursory informal survey of the Council of Europe site finds a significantly rising concern over manifestations of racism and xenophobia beginning around 2016 and becoming exponentially worse over the past two years. Plainly, the chief executive of one of the Convention's more prominent signatories, by leading a rather expansive global resurgence of racism and xenophobia, presented a challenge that was unanticipated and included an unprecedented affront to legal systems and norms of behavior. Now, the ultimate question for those assembling in Strasbourg in 2018 is whether they can simply ignore what has been occurring over the past eighteen months.

Trump's Promotion of Racism and Xenophobia

It is relatively well established that Donald Trump has been engaging, on a massive scale, in actions proscribed by Art. 3 of the Additional Protocol, aided and abetted through social media. There are hundreds of articles on his actions that unfold every day in highly respected publications.

Some investigators have even compiled extensive lists of evidence. See, e.g., New York Times, "Donald Trump's Racism: The Definitive List."

It is not apparent, however, that any responsive actions have actually been taken by the Additional Protocol signatories pursuant to Arts. 4 and 7, notwithstanding the ease with which Trump's offensive traffic can be blocked. Although the European Commission has sought to apply its own recommendations to control proscribed online content, it is not apparent that it has ever addressed Trump's racist and xenophobic tweets, much less sought to proscribe them.

Perhaps more concerning is that the social media service most extensively employed by Trump asserts an affirmative defense that "world leaders" are allegedly exempt from the Convention's Additional Protocol provisions. See Twitter, Inc, "World Leaders on Twitter."

The matter has, however, risen to such prominence that it was addressed in a Washington Post editorial several months ago with respect to domestic law. See The Washington Post, "The 3 loopholes that keep Trump's tweets on Twitter."

Resulting Harm by Inaction

Trump's flouting of the Cybercrime Convention's Additional Protocol provisions on racism and xenophobia is plainly reprehensible. The damage to the global rule of law and to the sense of acceptable conduct by a national leader is profound and long-lasting. The harm to society globally is equally grave — giving rise to destabilizing hate groups and terrorism in countries throughout the world. See "Palgrave Hate Studies Cyber Racism and Community Resilience." See also Simon Wiesenthal Center's "2017 Digital Terrorism & Hate Report Card: Social Media Giants Fail to Curb Online Extremism."

The inaction has even spurred the emergence of an entirely new market for racist and xenophobic products.

One additional disconcerting development and serious consequence, however, is that the Cybercrime Convention signatories and the Octopus community have largely ignored the profound problem posed when one of their own signatories goes rogue, with a chief executive who is the de facto leader of a global racist and xenophobic movement through Twitter. When even the most prominent public figures in the United States are profoundly embarrassed by Trump's racist and xenophobic behaviour — which is presently uncontrollable domestically — there is a continuing hope that international forums might step up, speak out in defence of their own treaty provisions, and call for responsive action by signatories. Will they?

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC

Follow CircleID on Twitter

More under: Cybercrime, Internet Governance

Categories: News and Updates

Caribbean Peering Forum Brings Dream of Better Internet Closer

Mon, 2018-07-02 23:34

The dream of a faster, safer, more affordable Internet in the Caribbean sometimes seems elusive. One group of Internet pioneers is taking steps to make it a reality.

The Caribbean Peering and Interconnection Forum, or simply CarPIF, is an annual event that brings together the people responsible for delivering Internet services to the region, including internet service providers, internet exchange point operators, content delivery networks, data centre managers and other computer network professionals.

"CarPIF provides a very important service to the local Internet community, as it the only regional forum where the diverse group responsible for building, managing and securing the Internet across the Caribbean come together to discuss how to improve internet services and maximise the value to businesses and consumers alike," said Bevil Wooding, Caribbean Outreach Liaison with the American Registry for Internet Numbers (ARIN), and one of the co-founders of CarPIF.

"CarPIF is where the economic underpinnings of the traffic exchange and peering relationships that define the Internet are discussed using Caribbean data and Caribbean examples, to a Caribbean audience," Wooding said.

At peering forums around the world, network operators broker deals with each other and with content providers such as Google, Facebook, Netflix and Akamai, which serve up some of the most popular content on the Internet. It was no different at the CarPIF event. The main meeting hall of Belize City's Radisson hotel was abuzz with the sound of over fifty CarPIF delegates engaged in conversation, meeting contacts, drumming up leads and striking new deals.

But the event involved more than the swapping of business cards. The two-day program featured high-profile speakers from across the region and around the world. From the outset, a tone of Caribbean collegiality was set by event moderators Wooding and Shernon Osepa, Regional Affairs Manager for Latin America and the Caribbean at the Internet Society, and one of the co-founders of CarPIF. Presenter after presenter openly shared their real-world experience and exchanged practical insights to help improve and advance the Internet in the Caribbean.

Keynote speaker John Curran, CEO of ARIN, regaled participants with stories from the earliest days of the Internet, illustrating how the network connections that make up the global Internet ultimately rely on an underlying fabric of social relationships.

Etienne Sharp, coordinator of the Belize internet exchange point, sat on a panel with local internet service providers, openly discussing some of the challenges affecting their technical interconnection and business relationship. Riyad Mohammed, a representative of the ttix2 internet exchange point in south Trinidad, vividly described some growing pains of the team behind the region's newest internet exchange.

The flow of the conversation also pivoted to explore avenues for new business. Jamaican-born Stephen Lee, Program Director at the Caribbean Network Operators Group (CaribNOG) and CEO of US-based technology services firm Arkitechs, gave a practical overview of local content development opportunities available for Caribbean entrepreneurs. Peter Harrison, CTO of Silicon Valley-based data centre firm Colovore, gave insights into the nuts and bolts of managing high-performance colocation facilities.

Nico Scheper, coordinator of the AMS-IX Caribbean internet exchange point in Curacao, shared valuable data on the positive impact of internet exchange points on Internet speed in the Caribbean over the last decade. And Arturo Servin, Manager of Content Delivery and Interconnection Strategy for Latin America and the Caribbean at Google, gave expert insight into how Google balances business interests with other considerations in choosing the locations for its caches.

"CarPIF has really grown over the years into a significant event on the Caribbean calendar, for operators as well as for international content providers," Osepa said. "We were pleased to see another strong turnout in Belize. More importantly, the issues impacting access and affordability tackled at this year's event should result in tangible benefit to consumers."

Kevon Swift, Head of Strategic Relations and Integration at the Latin American and Caribbean Internet Addresses Registry (LACNIC), remarked, "The representatives from four major Internet organisations, ARIN, LACNIC, ICANN and the Internet Society, are all here, working together to build a stronger Caribbean Internet."

The event was held from June 13 to 14, with the support of several other Internet organisations, including CaribNOG, the Caribbean Telecommunications Union and Packet Clearing House. The local host was Belize's Public Utilities Commission (PUC).

Delivering closing remarks, PUC Chairman John Avery thanked all the event supporters.

"The PUC has derived significant benefit from CarPIF, and we plan to use the information and insights gained to improve internet service and affordability for Belize. I think the Forum lives up to its goal of connecting the Caribbean, both its networks and its people," he said.

Written by Gerard Best, Development Journalist

Follow CircleID on Twitter

More under: Access Providers, Broadband, Regional Registries

Categories: News and Updates
