Domain industry news

Latest posts on CircleID

Ethos Capital to Acquire .ORG Top-Level Domain

Wed, 2019-11-13 19:01

The Internet Society and Public Interest Registry (PIR) have reached an agreement with Ethos Capital, under which Ethos Capital will acquire PIR and all of its assets from the Internet Society. PIR is the nonprofit corporation that operates the .ORG top-level domain. The transaction is expected to close during the first quarter of next year. The transaction will provide the Internet Society with sustainable funding and resources to advance its mission, said Andrew Sullivan, President and Chief Executive Officer of the Internet Society. PIR assures that all of its domain operations and educational initiatives will continue, and "there will be no disruption of service or support to the .ORG Community or other generic top-level domains operated by the organization."

More under: Domain Names, Registry Services

Cybersecurity Standards Practices as Cyber Threats

Wed, 2019-11-13 16:29

One of the most embarrassing and pernicious realities in the world of cybersecurity is that some industry cybersecurity standards practices are themselves cyber threats. How so?

Most industry and intergovernmental standards bodies serve as a means of assembling the constantly evolving collective knowledge of participant experts and packaging the resulting specifications and best practices as freely available online documents for a vast, diverse universe of users. In many cases, these materials have the force and effect of law through governmental bodies that reference them as compulsory requirements for an array of cybersecurity products and services provided to end-users.

Unfortunately, a few remaining outlier standards organizations attempt to exploit the cybersecurity marketplace by significantly restricting the availability of their standards and charging exorbitant prices that deter use. This behavior is often coupled with lobbying co-opted government authorities to reference the specifications as mandatory requirements, flying in the face of longstanding juridical norms. In some cases where such references have created artificial demand, prices reach an astronomical seven dollars per page for a single user simply to view specifications that are often trivial and useless, yet mandated by some governmental authority or certification group. The result is that the cybersecurity standards practices themselves become cyber threats, because the needed specifications are unavailable to end-users who cannot or will not pay seven dollars per page for a standard.

The most extreme of these bodies is the Geneva-based, private International Organization for Standardization (ISO), which, together with its regional and national partners, continues the practice of enticing participants to contribute their cybersecurity intellectual property for free. That intellectual property is then resold by the organization's secretariats at usurious prices reflecting whatever the cybersecurity market will bear. That some of the participants are also government employees who are contributing government IPR, and then effectively serving as marketing arms for secretariats selling the pricey products, makes the practice all the more unacceptable.

In a recent proposal to the European Union on cybersecurity normative standards, the entire bundle of proposed ISO/IEC specifications amounts to $5,000 per individual user license. That amount potentially recurs every five years, the asserted maintenance period for the standards. For the proposed bundle of 31 documents, the per-page price varies wildly between 0.68 and 6.77 Swiss francs, with an average of 2.63 francs ($2.65) per page for downloadable PDF files. The 6.77-franc ($6.81) per-page rate applies to ISO/IEC 30111, the standard on how to process and resolve potential vulnerability information in a product or online service. And these 31 standards include only the explicitly mandated specifications themselves; secondary normative references requiring still further ISO/IEC standards can significantly add to the cost.
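
As a rough illustration of what those figures imply, here is a quick back-of-envelope check; the script and its USD/CHF exchange rate are illustrative assumptions chosen to match the quoted per-page average, not figures from the proposal itself.

```python
# Back-of-envelope check on the bundle figures quoted above. The USD/CHF
# rate is an assumption chosen to match the quoted $2.65-per-page average.
bundle_usd = 5000          # proposed per-user license cost of the bundle
documents = 31             # documents in the bundle
avg_chf_per_page = 2.63    # quoted average page rate in Swiss francs
usd_per_chf = 1.007        # assumed exchange rate

print(round(bundle_usd / documents))                 # ~161 dollars per document
pages = bundle_usd / (avg_chf_per_page * usd_per_chf)
print(round(pages), round(pages / documents))        # ~1888 pages, ~61 pages per document
```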

The Institute of Electrical and Electronics Engineers (IEEE) engages in similar behavior, and even leverages its association with members of the engineering profession. Its price per page varies between $0.56 and $3.48 for common cybersecurity standards.

Over the years, most standards bodies that once sold their cybersecurity standards have ceased the practice, realizing that meaningful standards making in the ICT sector effectively requires getting freely available standards to as many people and entities as quickly as possible. Where public safety or security are factors, or where the specification is referenced as a regulatory requirement, freely available standards are essential, and the converse is inexcusable.

Several years ago, the American Bar Association advanced an initiative on public ownership of the law and adopted a resolution calling for public availability of standards that are the subject of regulatory enactments. However, the ISO/IEC national body in the U.S., ANSI, mounted a fierce lobbying effort asserting its right to pursue a whatever-the-market-will-bear business model and claiming that doing otherwise would put it out of business, without ever providing supporting financial data.

Today, as cybersecurity becomes ever more critical, government authorities worldwide need to seriously question the arguments of the few remaining standards bodies that seek the largesse of a cybersecurity regulatory imprimatur while maintaining a business model that is a clear detriment to end-users. Attempting to extract revenues from a cybersecurity standards marketplace is clearly very different from selling standards developed for a closed manufacturing community of physical "widgets." Today, there are many other cybersecurity standards bodies with acceptable business models for governmental authorities to choose from.

So, in sum, while charging whatever the market will bear for cybersecurity specifications may be ill-considered as a private standards organization business practice, it is ultimately the organization's choice. However, such bodies should not seek a helping hand from regulatory authorities to prop up a broken business model at the expense of diminished cybersecurity.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC

More under: Cybersecurity, Internet Governance, Policy & Regulation

Cybersecurity Workforce Needs to Grow 145% to Close Skills Gap Worldwide, Says New Study

Mon, 2019-11-11 21:53

The cybersecurity workforce needs to grow by 145% to close the skills gap and better defend organizations worldwide, according to a report released by (ISC)², a nonprofit membership association of certified cybersecurity professionals. The organization has, for the first time, estimated the current cybersecurity workforce at 2.8 million professionals and concluded that 4.07 million additional trained professionals are needed to close the skills gap worldwide. In the U.S. market, the current cybersecurity workforce is estimated at 804,700, and the shortage of skilled professionals is 498,480. Other key findings:

"65% of organizations report a shortage of cybersecurity staff; a lack of skilled/experienced cybersecurity personnel is the top job concern among respondents (36%)"

"62% of large organizations with more than 500 employees have a CISO; that number drops to 50% among smaller organizations."

More under: Cybersecurity

Do Cable Companies Have a Wireless Advantage?

Fri, 2019-11-08 18:52

The big wireless companies have been wrangling for years with the issues associated with placing small cells on poles. Even with new FCC rules in their favor, they are still getting a lot of resistance from communities. Maybe the future of urban/suburban wireless lies with the big cable companies. Cable companies have a few major cost advantages over the wireless companies, including the ability to bypass the pole issue.

The first advantage is the ability to deploy mid-span cellular small cells. These are cylindrical devices that can be placed along the coaxial cable between poles. I could not find a picture of these devices, and the picture accompanying this article is of a strand-mounted fiber splice box, but it's a good analogy since the strand-mounted small cell device is approximately the same size and shape.

Strand-mounted small cells provide a cable company with a huge advantage. First, they don't need to go through the hassle of getting access to poles, and they avoid paying the annual fees to rent space on poles. They also avoid the issue of fiber backhaul since each unit can get broadband using a DOCSIS 3.1 modem connection. The cellular companies don't talk about backhaul a lot when they discuss small cells, but since they don't own fiber everywhere, they will be paying a lot of money to other parties to transport broadband to the many small cells they are deploying.

The cable companies also benefit because they could quickly deploy small cells anywhere they have coaxial cable on poles. In the future, when wireless networks might need to be very dense, the cable companies could deploy a small cell between every pair of poles. If the revenue benefits of providing small cells are great enough, this could even prompt the cable companies to expand the coaxial network to nearby neighborhoods that might not otherwise meet their density tests, which for most cable companies mean building only where there are at least 15 to 20 potential customers per linear mile of cable.

The cable companies have another advantage over the cellular carriers in that they have already deployed a vast WiFi network comprised of customer WiFi modems. Comcast claims to have 19 million WiFi hotspots. Charter has a much smaller footprint of 500,000 hotspots but could expand that count quickly if needed. Altice is reportedly investing in WiFi hotspots as well. The big advantage of WiFi hotspots is that their broadband capacity can be tapped to act as landline backhaul for cellular data and even voice calls.

The biggest cable companies are already benefitting from WiFi backhaul today. Comcast just reported to investors that they added 204,000 wireless customers in the third quarter of 2019 and now have almost 1.8 million wireless customers. Charter is newer to the wireless business and added 276,000 wireless customers in the third quarter and now has almost 800,000 wireless customers.

Both companies are buying wholesale cellular capacity from Verizon under an MVNO contract. Any cellular minutes or data they can backhaul over WiFi don't have to be purchased from Verizon. If the companies build small cells, they would further free themselves from the MVNO arrangement, which is another cost savings.

A final advantage for the cable companies is that they are deploying small cell networks where they already have a workforce to maintain the network. Both AT&T and Verizon have laid off huge numbers of workers over the last few years and no longer have fleets of technicians in all of the markets where they need to deploy cellular networks. These companies are faced with adding technicians as their networks expand from a few big-tower cell sites to vast networks of small cells.

The cable companies don't have nearly as much spectrum as the wireless companies, but they might not need it. They will likely buy spectrum in the upcoming CBRS auction and the other mid-range spectrum auctions over the next few years, and they can use the 80 MHz of free CBRS spectrum that's available everywhere.

These advantages equate to a big cost advantage for the cable companies. They save on speed to market and avoid paying for pole-mounted small cells. Their networks can provide the needed backhaul for practically free. They can offload a lot of cellular data through the customer WiFi hotspots. And the cable companies already have a staff to maintain the small cell sites. At least in the places that have aerial coaxial networks, the cable companies should have higher margins than the cellular companies and should be formidable competitors.

Written by Doug Dawson, President at CCG Consulting

More under: Access Providers, Broadband, Mobile Internet, Wireless

Colombian Government Releases Action Plan for the Selection of .CO Domain Registry Operator

Wed, 2019-11-06 16:45

In light of the approaching expiration of the .CO top-level domain registry operator contract next year, Colombia's Ministry of Information Technology and Communications (MinTIC) today released an action plan (Spanish) for the .CO operator selection process. The country-code domain has so far been managed by Neustar, and the contract now appears to be open for bidding by other registry operators. Colombia's government is holding an information session at the ICANN Montreal event today at noon (Room 514A).

More under: Domain Names, Registry Services

Trump's Strange WRC-19 Letter

Wed, 2019-11-06 14:14

The 2019 World Radiocommunication Conference (WRC-19) is underway. It is the latest in a continuum of treaty-making gatherings that began in 1903, devoted to the now 116-year-old art of globally carving up the radio spectrum among designated uses, as instantiated in the Radio Regulations treaty. Not unexpectedly, the event includes the designation of 5G spectrum that flows from the requirements long set in 3GPP and GSMA.

In a kind of odd blast-from-the-past, Donald Trump addressed a letter to the ITU Secretary General and the WRC-19 delegates (embarrassingly misspelling the name of the ITU). President Calvin Coolidge did something similar in person on 4 October 1927 when the WRC that year cobbled together the first set of Radio Regulations — meeting across from the White House in Washington DC. Like Coolidge (at Herbert Hoover's urging), Trump also supported "harmonizing the use of spectrum globally" and multilateral diplomacy. Nice touch.

Where it gets a little strange, however, is that WRC-19 is in Sharm El-Sheikh, Egypt, not Washington DC, and it is President Abdel Fattah el-Sisi who has the honour of welcoming delegates.

It gets stranger still when Trump wanders off the reservation rambling on about 5G security and "maintaining American leadership in 5G..." asserting that "we intend to cooperate with like-minded nations to promote security in all aspects of 5G networks worldwide." Somebody apparently failed to tell him that 5G security work is done in other international venues like 3GPP SA3 and GSMA where the U.S. government has been all but absent — leaving the leadership to other nations by default.

Indeed, had he read an actual primer on 5G, he would have known that the real innovation represented by 5G, and the related security challenges, revolve around network and services virtualisation, not spectrum planning at WRC-19. The disconnect here is so utterly profound that it is yet another embarrassment to the nation.

If Trump really wants to cooperate with other nations on 5G security, precedents and models like SDNS exist. He needs to enable the use of U.S. national security community resources to begin shaping the needed 5G platforms alongside those already engaged in the work today.

If he really hops to it and reallocates the money from the failed wall, the next SA3 (5G security) meeting is coming up in two weeks in Reno! However, you need to do more than show up. You need to demonstrate actual leadership by understanding what is occurring, contributing working materials, and engaging with the hundreds of other technical experts there, and then keeping it up every 60 days.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC

More under: Policy & Regulation, Wireless

What to Expect From SpaceX Starlink Broadband Service Next Year and Beyond

Wed, 2019-11-06 14:07

Last May, SpaceX founder Elon Musk tweeted "6 more launches of 60 sats needed for minor coverage, 12 for moderate" and SpaceX President and Chief Operating Officer Gwynne Shotwell recently said they planned to be offering service in parts of the US in mid-2020, which would require six to eight 60-satellite launches. The first of those launches will be in the middle of this month on a thrice-flown Falcon 9 booster. (They will also need customer terminals, and Elon Musk has used a prototype to post a tweet from his home.)

Six to eight launches would bring them up to Musk's "minor" coverage by mid-2020 and, if they maintain the same launch rate, they will achieve "moderate" coverage around the end of the year. But, what is meant by "minor" and "moderate" coverage? A simulation by Mark Handley, a professor at University College London, provides an approximation of the answer.

The first Starlink "shell" will have 24 orbital planes. Each orbital plane will have 66 satellites at an inclination of 53 degrees and an altitude of 550 km. Handley ran simulations of the first six and first twelve orbital planes — corresponding roughly to the SpaceX plan for 2020. Snapshots of the coverage area "footprints" from the two simulations are shown below:

Coverage with six and twelve 66-satellite orbital planes

The blue areas — around 50 degrees north and south latitude — are regions with continuous 24-hour coverage by at least one satellite. With six orbital planes, there will be continuous connectivity in the northern US and Canada and much of western Europe and Russia, but only southern Patagonia and the South Island of New Zealand in the sparsely populated south. Note that the financial centers of London and (just barely) New York will have continuous coverage, but, since these early satellites will not have inter-satellite laser links (ISLLs), SpaceX would have to route traffic between them through an undersea cable.

Coverage is continuous around 50 degrees north and south.

(At this point, you should stop reading and watch the video (6m 36s) of the simulation which shows the footprints moving across the surface of the planet as it rotates).

With 12 orbital planes, all of the continental US and most of Europe, the Middle East, China, Japan, and Korea will be covered. Shotwell says that once they have 1,200 satellites in orbit, they will have global coverage (with the exception of the polar regions) and capacity will be added as they complete the 550 km shell with 1,584 satellites. That should occur well before the end of 2021 since she expects to achieve a launch cadence of 60 satellites every other week.
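
Those numbers are easy to cross-check; the sketch below uses the figures quoted here, plus my own assumption of roughly twelve 60-satellite launches by the end of 2020 drawn from the launch counts discussed earlier, not any SpaceX statement.

```python
# Sanity check on the 550 km shell size and the "well before the end of 2021"
# completion claim, using figures quoted in the article. The ~12 launches of
# 60 satellites by the end of 2020 is an assumption, not a SpaceX figure.
planes, sats_per_plane = 24, 66
shell_total = planes * sats_per_plane
print(shell_total)                      # 1584, matching the quoted shell size

launched_by_end_2020 = 12 * 60
remaining = shell_total - launched_by_end_2020
weeks_needed = remaining / 60 * 2       # one 60-satellite launch every other week
print(remaining, round(weeks_needed))   # 864 satellites, ~29 weeks (mid-2021)
```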

Shotwell also said they planned to include ISLLs by late 2020, implying that around half of the satellites in this first shell will have them. Those ISLLs will give SpaceX an advantage over terrestrial carriers for low-latency long-distance links, a market Musk hopes to dominate. ISLLs will also reduce the need for ground stations. (Maybe they can lease ground-station service from SpaceX competitor Amazon in the interim.)
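
To see why ISLLs appeal for low-latency long-haul traffic, here is a rough one-way propagation comparison for a London to New York path; the distances, detour factor, and fibre refractive index below are illustrative assumptions of mine, not figures from SpaceX or the article.

```python
# Rough one-way propagation delay, London to New York: terrestrial fibre
# vs. free-space laser links via satellites at 550 km. All path figures
# below are simplified assumptions.
C = 299_792                       # km/s, speed of light in vacuum
GREAT_CIRCLE_KM = 5_570           # approximate surface distance
FIBRE_KM = GREAT_CIRCLE_KM * 1.2  # cables rarely follow the great circle
FIBRE_SPEED = C / 1.47            # light in fibre (refractive index ~1.47)

SAT_ALTITUDE_KM = 550
SPACE_PATH_KM = GREAT_CIRCLE_KM + 2 * SAT_ALTITUDE_KM  # crude up-along-down path

fibre_ms = FIBRE_KM / FIBRE_SPEED * 1000
space_ms = SPACE_PATH_KM / C * 1000
print(f"fibre ~{fibre_ms:.0f} ms, laser path ~{space_ms:.0f} ms one way")
# roughly 33 ms vs. 22 ms: the ISLL latency advantage in a nutshell
```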

All of this is cool, but what will it cost the user?

It sounds like SpaceX is serious about pursuing the consumer market from the start. When asked about price recently, Shotwell said millions of people in the U.S. pay $80 per month to get "crappy service." She did not commit to a price, but homes, schools, community centers, etc. with crappy service would pay that for good service, not to mention those with no service. Some customers may pay around $80 per month, but the price at a given location will be a function of SpaceX capacity, the price/demand curve for Internet service, and competition from terrestrial and other satellite service providers, so prices will vary within the U.S. and globally. In nations where Starlink service is sold by partner Internet service providers, those partners will share in pricing decisions.

Since the marginal cost of serving a customer is near zero as long as there is sufficient capacity, we can expect lower prices in a poor, sparsely-populated region than in an affluent, densely-populated region. Dynamic pricing is also a possibility since SpaceX will have real-time demand data for every location. "Dynamic pricing of a zero marginal cost, variable-demand service" sounds like a good thesis topic. It will be interesting to see their pricing policy.

National governments will also have a say on pricing and service. While the U. S. will allow SpaceX to serve customers directly, other nations may require that they sell through Internet service providers and some — maybe Russia — may ban Starlink service altogether.

The price and quality of service also impact long-run usage patterns and applications. Today, the majority of users in developing nations access the Internet using mobile phones, which limits the power and range of applications they can use. Affordable satellite broadband would lead to more computers in homes, schools, and businesses and reduce the cost of offering new Internet services, impacting the economy and culture and leading to more content and application creation, as opposed to content consumption.

Looking further into the future, SpaceX has FCC approval for around 12,000 satellites and they recently requested spectrum for an additional 30,000 from the International Telecommunication Union. Their next-generation reusable Starship will be capable of launching 400 satellites at a time, and they will have to run a regular shuttle service to launch 42,000 satellites as well as replacements since the satellites are only expected to have a five-year lifespan. (One can imagine Starships dropping off new satellites then picking up obsolete satellites and returning them to Earth).
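
Running a "regular shuttle service" is easy to quantify; the back-of-envelope sketch below rests only on the fleet size, launch capacity, and five-year lifespan quoted above.

```python
# Steady-state launch cadence implied by a 42,000-satellite fleet with a
# five-year design life and ~400 satellites per Starship launch.
fleet = 42_000
lifetime_years = 5
sats_per_launch = 400

replacements_per_year = fleet / lifetime_years              # 8,400 satellites per year
launches_per_year = replacements_per_year / sats_per_launch
print(launches_per_year, round(52 / launches_per_year, 1))  # 21 launches, ~1 every 2.5 weeks
```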

This sounds rosy. As we said in the NSFNet days, what could possibly go wrong? SpaceX seems to have a commanding lead over its would-be competitors. Might they one day become a dominant Internet service provider in a nation or region and abuse that position? Also, before they launch 42,000 satellites — or even 12,000 — SpaceX better come up with a foolproof plan for debris avoidance and mitigation. I hope they have a vice-president in charge of unanticipated side-effects.

Update Nov 5, 2019

Speaking at an investment conference, Shotwell said that a single Starship-Super Heavy launch should be able to place at least 400 Starlink satellites in orbit. Doing so would reduce the per-satellite cost to 20% of today's 60-satellite launches.

Written by Larry Press, Professor of Information Systems at California State University

More under: Access Providers, Broadband, Wireless

DNS Wars

Tue, 2019-11-05 20:55

The North American Network Operators' Group (NANOG) is now quite an institution for the Internet, particularly in the North American Internet community. It was an offshoot of the Regional Techs meetings, which were part of the National Science Foundation Network (NSFNET) framework of the late 80s and early 90s. NANOG has thrived since then and is certainly one of the major network operational forums in today's Internet — if not the preeminent forum for network operators for the entire Internet.

The 77th NANOG meeting was held in Austin, Texas at the end of October, and the organizers invited Farsight's Paul Vixie to deliver a keynote presentation. These are my thoughts in response to that presentation; they are my interpretation of Paul's talk, with more than a few of my own opinions thrown in for good measure!

The DNS

The DNS is a fundamental part of the infrastructure of the Internet. Along with IP addresses, the namespace was considered to be the shared glue that essentially defined the Internet as a single cohesive network. Oddly enough, the issues with address exhaustion in IPv4 created a schism in the address framework that led to the adoption of a client/server architecture for the Internet that increased the reliance on the namespace as the consistent common framework for the Internet.

We use "the DNS" interchangeably to refer to a number of quite distinct concepts. It's a structured namespace, a distributed database, the protocol we use to query this database, and the servers and services we use to make it all work. Little wonder that wherever you look on the Internet, you will find the DNS.

Paul started his presentation with a description of the 1648 Peace of Westphalia, which is a rather odd place to start recounting the evolution of the DNS. This was a diplomatic congress between sovereign states. The principle of sovereign rights was established through this arrangement, and the modern definition of a nation determined which parties had a seat at this particular table. A nation was defined as a geographic territory with defended intact borders, a principle that one could characterize as recognition of the rule of the strongest within bounded domains. This physical definition of a nation and the associated concept of national sovereignty has carried forward to the national structures of today's world order. The realm of sovereignty encompassed land and sea, and subsequently expanded to air, space (or at least the bits of it close to the earth), and now some aspects of the realm defined by information technology.

Let's park that concept and return to the evolution of the DNS. The original model of naming for the networks of the 1980s was a convenient way for a computer to name other computers connected to the same network. The initial method still exists in most systems as hosts.txt, which is a simple list of names and the corresponding protocol address of each named computer.

The initial distribution of names was via flooding of a common copy of the hosts file. Pretty obviously, this does not scale, and the frustrations with this naming model drove much of the design of the DNS. The DNS is a hierarchical name structure, where every nodal point in the namespace can also be a delegation point. A delegation is completely autonomous, in that an entity that is delegated control of a nodal point in the namespace can populate it without reference to any other delegated operator of any other nodal point. The implementation of the matching namespace as a database follows the same structure, in that an authoritative server is responsible for answering all queries that relate to its nodal point in the database. Client systems that query these authoritative services also use a form of hierarchy, but for somewhat different reasons. End systems are usually equipped with a stub resolver service that can be queried by applications. They typically pass all queries to a recursive resolver. The recursive resolver takes on the role of traversing the database structure, resolving names by exposing the delegation points and discovering the authoritative servers for each of these zone delegations. It uses the same DNS protocol query and response mechanism for this traversal as it uses once it finds the terminal zone that can provide the desired answer.

There are perhaps three reasons why we believed that this was a viable approach.

  • The first is that all resolvers cache the answers, and all answers come with a suggested cache time. Extensive caching within the resolver system suppresses queries.
  • The second is that we used an exceptionally lightweight protocol with stateless anonymous queries. UDP is exceptionally efficient, and the deliberate excision of any details of who was asking, or why, was intended to ensure that the answers were not customizable. This meant that cached answers could be used to reply to future queries without the risk of breaking some implicit context associated with an answer.
  • The third reason is that we aligned the resolvers with the service infrastructure. Your local ISP operated the DNS recursive resolver. This meant that the resolver's cache was nearby, and near has a much better chance of being faster than remote.

Ever since then, we've been testing out these reasons and discovering how we can break these assumptions!
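
As a concrete illustration of that query model, here is a minimal sketch using the dnspython library (my choice of tooling; nothing in the talk prescribes it): a stub-style lookup through a recursive resolver, followed by the kind of raw, stateless UDP transaction that the second point above relies on.

```python
import dns.message
import dns.query
import dns.resolver

# A stub-style lookup: the recursive resolver does the traversal and hands
# back an answer together with the TTL that governs how long it may be cached.
answer = dns.resolver.resolve("www.example.com", "A")
print(answer.rrset, "TTL:", answer.rrset.ttl)

# The underlying transaction: one anonymous UDP datagram out, one back,
# with nothing in the query that identifies who is asking or why.
query = dns.message.make_query("www.example.com", "A")
response = dns.query.udp(query, "8.8.8.8", timeout=3)
print(response.answer)
```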

Episode 1 – The Root Wars

It appears that the first DNS War was fought over the transition of DNS names in .com, .net and .org from a USG-contracted service that was offered without cost to customers to a charged service. To some, the transition of the DNS into a commercial monopoly raised serious issues. Why was one entity allowed to reap the considerable financial rewards of the now-booming DNS while all potential commercial competition was locked out?

Various efforts were made in the mid-90s to compete with the monopoly incumbent operator, Network Solutions, by standing up alternate root servers that contained more top-level domains. The most prominent of these was an effort called AlterNIC, but it was not alone. When one of AlterNIC's founders hijacked the InterNIC website for three days in 1997, it led to civil lawsuits followed by a US Federal wire fraud prosecution.

The pressure for competition in the DNS did not go away, but the path through alternate root servers slowly faded out. The alternate path, namely competition in name registration services and the controlled release of additional names in the root zone, was the path that was ultimately followed. It is still debated today whether these moves achieved their intended objectives of enhancing competition in the namespace without adding confusion and entropy to the DNS, and the somewhat robotic continuation of expanding the root zone by ICANN appears more and more nonsensical as the years roll on.

Episode 2 – Sitefinder and Zone Contents

The next episode of the DNS Wars was Network Solutions' Sitefinder debacle. Network Solutions administered the .com zone as the registry operator. They added delegation entries to this zone according to orders passed to them by registrars. Search was gaining in popularity (and in intrinsic value), and it was noted that users were often confusing search terms with domain names.

Network Solutions decided to exploit this by synthesizing a wildcard in the domain, effectively directing all queries for names that did not exist in .com to a search engine rather than conforming with DNS standards and responding with NXDOMAIN. After some drama and much legal posturing, the wildcard was withdrawn.

It's hard to tell now if the outrage at the time was about the seizure of as-yet undelegated domain names or this implicit seizure of search. In retrospect, the latter was the more valuable heist.

But the NXDOMAIN substitution issues did not go away.

Episode 3 – Open Resolver Wars

The browser vendors had decided that a single input element would be used for both search terms and URLs. The result was a substantial cross leakage of search terms and DNS queries. Search engines gained valuable insights into popular, but as yet undelegated, domain names, an asset that was previously the exclusive property of DNS registry operators, and DNS operators could capture a search session by substituting a search engine pointer in place of an NXDOMAIN response.

The emerging monopoly of Google search was not exactly uncontested, and when Google managed to obtain a default position in some heavily used browsers, there was a reaction to try and redirect users to an alternative search engine. The DNS was co-opted in this effort, and OpenDNS tried to achieve this with a recursive resolver that performed NXDOMAIN redirection into a search engine, in a reprise of Sitefinder. For a short period, OpenDNS also redirected the domain name www.google.com to a different search engine. Within a few weeks, Google launched its public DNS on quad 8 and based the service on the absolute integrity of both positive and negative responses in the DNS. A 'trustable' DNS that undertook never to lie.

Oddly enough, the result is that Google's public DNS offering is now totally dominant in the open resolver space. If this was a three-way struggle between infrastructure-based DNS, Open Resolvers and Google's Open Resolvers, then it looks like Google won that round.

Episode 4 – Client Subnet Wars

The next episode of DNS struggles has been the Client Subnet wars. The deliberate excision of any details of who was asking, or why, was subverted by attaching a Client Subnet record to the query.

This privacy-destroying initiative was largely due to Akamai. For reasons best known only to Akamai, this particular content distribution network was not a keen fan of using the routing system to steer the client to the closest instance of a replicated server set through anycast. For them, anycast was seen as suboptimal. Instead, they used the DNS. On the assumption that every client used an infrastructure-based DNS resolver, the location of the resolver that a client used for queries and the client's own location were close enough to be treated as the same. When a query came to Akamai's DNS servers, the source IP address was used to calculate the client's location, and a response was generated that pointed to the Akamai server set calculated to be closest to the user. No hand-offs, no additional round trip times, no more overhead. And if the recursive resolver cached the Akamai response, then so much the better, as all clients of this recursive resolver would obtain the same response from the Akamai DNS servers in any case.

Open Recursive Resolvers are not necessarily located close to their clients. The result was that Akamai's location-derived response was at times wildly inaccurate, and the Akamai content service was abysmally slow because a remote server was being used.

It seems odd in retrospect, as there are many ways that this could have been solved, but the mechanism that was brought to the IETF for standardization was to use the extension mechanism for DNS, EDNS(0), and record the subnet of the client in this field. Recursive resolvers were meant to perform a local cache lookup on the combination of the query name and the client subnet, and a cache miss required a query to the authoritative server with the client subnet information still attached to the query.
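
For the curious, here is what attaching a Client Subnet option looks like from the client side; this is a minimal sketch using the dnspython library (again my choice of tooling, not anything the specification mandates), with the documentation prefix 192.0.2.0/24 standing in for a real client subnet.

```python
import dns.edns
import dns.message
import dns.query

# Attach an EDNS Client Subnet option disclosing a /24 to the server.
ecs = dns.edns.ECSOption("192.0.2.0", 24)
query = dns.message.make_query("www.example.com", "A", use_edns=0, options=[ecs])
response = dns.query.udp(query, "8.8.8.8", timeout=3)

# A server that honours ECS echoes the option back with a scope prefix,
# indicating how widely its (possibly tailored) answer may be cached.
for option in response.options:
    if isinstance(option, dns.edns.ECSOption):
        print(option)
```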

There are a whole set of reasons why this is a completely insane approach. It destroys DNS privacy, in that authoritative servers are now aware of the identity of the end client. The notion of what is a "subnet" and what is a client address is evidently too hard a concept to grasp for some implementors, and the full client IP address is seen all too often in the ECS field of the query. The CDN does not provide the recursive resolvers with a map of its servers' locations so that the recursive resolver can optimise its local cache, as that would of course be an unacceptable leak of the CDN's privacy; but in the warped world of CDNs it's quite acceptable to undermine the individual user's privacy, as that just doesn't matter. The local recursive resolver cache is now under pressure, as it has to add the client subnet as a lookup key into the local cache, so local caching becomes less efficient. There is also the consideration that if the server realizes that the client is poorly served, it is perfectly capable of redirecting the client to a closer server; any delay in the steerage function is likely to be more than compensated for by the benefits of using this closer server.

It's hard to see Client Subnet as an optimization, and far easier to interpret this technology as a deliberate effort to pervert privacy in the DNS and deploy the DNS as one more tool in the ongoing effort to improve mechanisms of user surveillance and increase the efficiency of monetizing Internet users.

Episode 5 – Today's DoH/DoT Wars

And now we have the DNS-over-something Wars. Without a doubt, this is now a complex issue, and the motivations of the actors are sometimes not easy to discern. At its heart is the observation that almost every Internet transaction starts with a DNS lookup, and if I were able to observe all your DNS queries as they took place, then I would probably be in a position to assemble a comprehensive, up-to-date profile of you and your activities. In terms of surveillance data, the DNS can be seen as the data motherlode. The Snowden material showed that such data is not just of commercial interest, but also a topic of keen interest to state actors. The IETF embarked on a DNS privacy path. As if this weren't enough, the DNS is now also the control point for many, if not most, cybersecurity functions.

Pushing the recursive resolver deeper into the network means that the DNS conversation between the client stub resolver and the recursive resolver may transit a far longer path across the network, and that lengthened path opens up an unencrypted query and response to a larger set of actors who could inspect, or possibly alter, the DNS transaction. The Snowden papers described some NSA activity along these lines.

The first outcome of the DNS Privacy Working Group was the definition of stub-to-resolver encryption using TLS. The IETF decided to use TCP port 853 for this method, leaving port 53 for unencrypted DNS over TCP. The TLS setup may look like a heavy price to pay, but when you consider that a stub resolver will normally keep a single session open with a recursive resolver for an extended period, and that TCP Fast Open allows fast session re-establishment, this starts to look pretty much the same as DNS over UDP in terms of performance, and the encryption secures the stub-to-recursive conversation against observation and interception.

Then came the specification of DNS over HTTPS. It's not a new idea, and there are xxx over HTTPS implementations for many values of xxx, including IP itself! But there is a difference between hacking away at the code and standardizing the approach. HTTPS is commonly seen as the new substrate of the Internet as it passes through firewalls relatively easily. The content can be readily masked in opaque padding and jitter generators, the combination of TLS 1.3 and encrypted SNI (ESNI) really does hide most of the meaningful, visible parts of any session, and it resists middleware inspection and alteration. Many vital functions and services use port 443, so it is simply not an option to block this port completely. Why prefer DoH over DoT? DoH achieves everything that DoT delivers but also embeds itself in all other traffic in a manner that can make it all but impossible to detect. This is about content hostels, where a single IP address is used by thousands of different content domains. And combine that with ESNI in TLS 1.3, where the distinguishing name is not shared in the clear, then it is pretty clear that DoH can be used in a manner that evades most common (and cheap) forms of middleware detection and potentially some of the even more expensive detectors as well.
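
From the client side the two transports look almost identical; the sketch below uses the dnspython library (my choice of tooling; the DoH call additionally needs the httpx or requests package installed), with Cloudflare's public resolver as an example endpoint.

```python
import dns.message
import dns.query

query = dns.message.make_query("www.example.com", "A")

# DNS over TLS: the same query/response model, carried over TLS on port 853.
dot_response = dns.query.tls(query, "1.1.1.1", port=853, timeout=5)
print(dot_response.answer)

# DNS over HTTPS: the query rides inside ordinary-looking HTTPS on port 443,
# which is what makes it so hard to separate from the rest of web traffic.
doh_response = dns.query.https(query, "https://cloudflare-dns.com/dns-query", timeout=5)
print(doh_response.answer)
```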

Even so, is this level of security going to be enough in any case? To put it another way, there is a theory that if the DNS is too complex for the Chinese Communist Party then they will stop filtering the DNS. There is also a theory that this is complete nonsense!

But if you are motivated enough to hide in the packet crowd, why not run an entire VPN session over port 443 with TLS 1.3 and ESNI? Hiding just your DNS queries is not enough if you want to conceal the entirety of your network activity, and constructing a secure environment from a distinct and separate set of tools is often far less secure than the more comprehensive approach offered by a modern VPN with current TLS behaviors.

So why DoH at all? It doesn't appear to be solving a technology or a performance issue that is not already competently addressed by DoT. But there are compelling drivers behind DoH, and they appear in the commercial landscape of today's Internet. The major issue is the tension between applications and everyone else! If much of the value of services on the Internet is based on knowledge of the end user's behaviors and preferences, then applications are hardly motivated to share their user-driven activity with anyone else. Using the platform's stub resolver is a leak, using the service provider's recursive resolver is a leak, and using transmissions in the clear is obviously a leak. If an application wants to limit the extent of information publication to itself and its mothership, then it needs to avoid common infrastructure and drive itself through the network using secure channels. DoH can do this readily. And where all this is played out is in the world of mobile devices, where the value of the market sector and of the services and transactions that occur in it dominates all others. Today's networks act as both a data collection field and a platform for the delivery of data-steered ads. Everything else is incidental.

Episode 6 – Resolverless DNS Wars

Not only is the DNS used extensively in this manner, but the web community has also been energized to bypass these mechanisms altogether, and now we are contemplating a future network that features "resolverless" DNS. Like server push for HTML, resolverless DNS can make the DNS faster by preloading resolution outcomes before the application needs to use them.

Currently, this is the topic of an IRTF research group item where the content itself can push DNS outcomes, but the pragmatic observation is that there are few impediments to this approach in the browser world. Push is already well established as a means of improving the time to load for content, and there is little difference in pushing style sheets, content, scripts and DNS resolution outcomes.

Interestingly, a protected session, such as TLS, is considered to be good enough for push, and DNSSEC validation of the pushed content is not considered necessary by resolverless DNS proponents. This strikes me as irresponsibly naive; if content can push DNS answers, then the recipient should be absolutely required to validate the veracity of the pushed data.

But perhaps the position makes more sense if you view this as a major divorce, where the web is separating itself from the Internet and wanting to sever all forms of inter-dependence with the rest of the Internet. Why share any of that user data when you can keep it all? So, when we talk about applications ingesting Internet infrastructure functions into their own space, perhaps we are not really talking about applications in a generic sense but instead are focussing entirely on the web platform, browsers and their ecosystem of HTML-based applications.

It's challenging to predict how this will play out, but perhaps there is already one emergent factor that we need to consider, and that is the Peace of Westphalia and the concept of a nation being defined by its adequately defended borders.

The IT Corporate Nation State

In a world where one corporate entity provides the operating system for some 90% of all handheld computers (Google with Android), where the same corporate entity's browser is used by more than 70% of users (Google with Chrome), and where a single open resolver service is used as the preferred resolver by some 10% of users (Google with Quad 8), then if the Internet were regarded as a distinct realm of human activity on a peer level with the realms defined by land and sea, Google's ability to assert sovereign rights over huge swathes of the information technology space, based on its ability to defend its assets, must be admitted. By that reasoning, the Westphalian model of nation-states applies here as well, and regulation is necessarily replaced with negotiation.

When the Mozilla Foundation announced its intention to ship the next version of its Firefox browser with a default setting that both enabled DoH and directed DoH to Cloudflare's Open Recursive Resolver service, the United Kingdom called for a "summit meeting" with Mozilla. This was not the enacting of legislation, the adoption of a regulation or any other measure that is conventionally available to a nation-state, but a meeting of a different nature. Is this the resurgence of quasi nation-states such as the Honourable East India Company, a joint-stock company that ran its own army (twice the size of the British Army at its peak in 1803), fought its wars and established and defended its borders in a manner that was fully consistent with the actions of any other nation-state?

Part of the new world order is that the space defined by the actions of applications is well beyond the traditional domain of communications regulation, and even beyond the domain of the regulation of trade and commerce. Applications use communications as a service, but they do not define it. This is a new space, and sovereign nations are finding it extremely challenging to assert that they have primacy when they cannot defend their borders and cannot unilaterally enforce their will. Is the new definition of information technology nationhood the ability to impose the national will on end-users irrespective of physical land and sea borders?

The Internet rode a wave of the deregulation of telecommunications. What deregulation meant was that enterprises were no longer confined to offering a standard service at a highly regulated price. Deregulation meant that companies were driven by user expenditure and user preference, and accordingly, user preference became the subject of intense scrutiny. But, as in the supermarket retail industry, knowing what the customer prefers is one thing; knowing how customer preferences are shaped and influenced is an entirely different realm. Such intense scrutiny and acquired knowledge allow the enterprise to both shape preferences and then meet them. The Internet has been changed irrevocably from being a tool that allows computers to communicate to a tool that allows enterprises to monetize users in a highly efficient and effective manner.

We have reached a somewhat sad moment when it is clear that the DNS has been entirely co-opted into this regime. Sadder still to think that if this is a new realm of national sovereignty, then our existing nation-state world order is simply not able to engage with the new IT corporate nation-states in any manner that can curb their overarching power to defend their chosen borders. The 1648 Peace of Westphalia has much to teach us, and not all of the lesson is pleasant.

I have to thank Paul Vixie for his NANOG talk, and for prompting this look at the evolution of the DNS and the Internet through this particular lens.

Written by Geoff Huston, Author & Chief Scientist at APNIC

More under: DNS, DNS Security, Internet Governance

Considerations on the High-Level Panel's "Internet Governance Forum Plus" Model

Mon, 2019-11-04 20:55

The Fourteenth Annual Meeting of the Internet Governance Forum (IGF) will convene in Berlin three weeks from tomorrow. One of the highlights of the meeting could be the main session on Internet Governance and Digital Cooperation that is to be held on Day 1, 26 November 2019. The session is to consider some of the recommendations contained in the June report from the UN Secretary-General's High-level Panel on Digital Cooperation, most notably the panel's proposal to revamp the IGF so that it could serve as an institutional home for initiatives on "digital cooperation." Inevitably, there are potential benefits to the proposed "IGF+" model, but there are also ambiguities and potential shortcomings that would need to be discussed and thought through. This post addresses some of the latter issues. The text is also a chapter in a book of short comments that is to be released at the IGF — Wolfgang Kleinwächter, Matthias C. Kettemann, and Max Senges, (eds.): Towards a Global Framework for Cyber Peace and Digital Cooperation: An Agenda for the 2020s. Hamburg: Verlag Hans-Bredow-Institut, 2019. The complete volume is to be posted online in the coming days.

* * *

The UN Secretary-General's High-level Panel on Digital Cooperation released its report in June 2019. The report is to be discussed at the Internet Governance Forum (IGF) in Berlin in November 2019.[1] The report proposes consideration of what it calls three possible architectures of global digital cooperation: an Internet Governance Forum Plus; a Distributed Co-Governance Architecture that would assemble transnational policy networks under an umbrella "network of networks," apparently operating outside the United Nations system; and a Digital Commons Architecture that would promote the UN's Sustainable Development Goals by assembling multi-stakeholder "tracks," each of which could be "owned" by a lead organization such as a UN agency, an industry or academic consortium or a multi-stakeholder forum.

In addition, the report invites all stakeholders to commit to a Declaration of Digital Interdependence. It also recommends a multi-stakeholder alliance, involving the UN, to create a platform for sharing digital public goods; the creation of regional and global digital "help desks" to assist governments and stakeholders in understanding digital issues and developing their capacities; a Global Commitment on Digital Trust and Security; and the marking of the UN's 75th anniversary in 2020 with a Global Commitment for Digital Cooperation.

It is important that the UN Secretary-General has taken a strong interest in digital issues and convened an effort to inject new ideas into the global governance discussion. Insofar as some of the panel's proposals are reasonably anodyne and focused on normative declarations and information-sharing, they may navigate the waters of inter-state rivalries to adoption. However, it could prove more difficult to attract the necessary buy-in and commitment to a new operational model for global digital cooperation.

The report's schematic presentation of the three alternative models may present hurdles to an inclusive and systematic assessment of their merits and feasibility. Indeed, just three of the report's forty-seven pages are devoted to specifying what are really the panel's main "deliverables." This was an interesting choice, inter alia, because probing questions about the models were raised in some of the outreach meetings conducted during the panel's work.[2] In any event, the final product does not offer much more detail than the initial sketches that were shared.

One could argue that in some cases, it makes sense to frame a proposal for international cooperation in general terms and then pursue elaboration and a sense of collective ownership in the public vetting stage. After all, the 2005 report of the Working Group on Internet Governance did not provide extensive detail in proposing the creation of the IGF. But the IGF was pitched as primarily a space for dialogue and collective learning, which is a less demanding construct than a complex operational system intended to engineer new types of collaborative outcomes that include policies and norms. In addition, the historical context is very different today from that of the World Summit on the Information Society (WSIS), and the models go beyond the issues debated and the multi-stakeholder processes undertaken since that time. As such, one also could argue that more functional and political explanation of the models would have helped to facilitate the international community's engagement.

To illustrate the challenges ahead, this brief chapter highlights some of the issues raised by one of the models: the Internet Governance Forum Plus. All three models merit analysis, but space limitations allow room to assess just one, and as this volume is a contribution to an IGF meeting, the choice seems apt. Moreover, the IGF+ might be viewed by some actors as the most viable of the three, since the IGF already has a UN mandate, an institutional form of sorts, and governmental and stakeholder support. In contrast, the other two models could require heavy lifting to get off the ground, especially in the midst of a recession in international cooperation that has extended even to the Universal Postal Union.

The one-page IGF+ model has four main components. First, there would be an Advisory Group based on the IGF's current Multi-stakeholder Advisory Group (MAG). It is not clear what the advantage would be in dropping "multi-stakeholder" from the group's name. The report also explicitly limits its role to preparing annual meetings and identifying policy issues to be explored. One can imagine concerns being expressed on one or both of these points.

Second, there would be a Cooperation Accelerator that would catalyze issue-centered cooperation across a wide range of institutions, organizations and processes. The Accelerator would "identify points of convergence among existing IGF coalitions, and issues around which new coalitions need to be established; convene stakeholder-specific coalitions to address the concerns of groups such as governments, businesses, civil society, parliamentarians, elderly people, young people, philanthropy, the media, and women; and facilitate convergences among debates in major digital and policy events at the UN and beyond."[3]

This is a demanding mandate that would be difficult to fulfill. The old adage that everyone wants more coordination, but nobody wants to be coordinated, is relevant here. Given the diversity of actors' interests and orientations in the broad digital policy space, the case for pursuing such cooperation and convergence would have to be compelling. Making that case would require a well-functioning team of actors with knowledge of diverse issue-areas, significant political skills, the contacts and local knowledge needed to organize diverse transnational coalitions with different agendas, and sufficient status to be able to facilitate convergence among governments and stakeholders in multiple UN settings "and beyond." The report says that the Accelerator "could consist of members selected for their multi-disciplinary experience and expertise," but the status of those members and the process for their selection are not indicated. Assessing candidates for these roles and getting support for the selections made could prove challenging. After all, just populating the MAG has proven controversial at times, and it is (apparently) just a conference program committee.

Third, there would be a Policy Incubator that would help nurture policies and norms for public discussion and adoption. This ambitious structure "should have a flexible and dynamic composition involving all stakeholders concerned by a specific policy issue." While their precise status and modalities of selection are not mentioned, presumably these stakeholders would need serious expertise as well, since their mandate would be even more substantive than that of the Accelerator. The group would "incubate policies and norms for public discussion and adoption," something that is often difficult even in more well-established and supported international institutions. And in response to requests from actors (who presumably would meet criteria that exclude, e.g., trolls and promoters of purely private agendas), the Incubator would "look at a perceived regulatory gap, it would examine if existing norms and regulations could fill the gap and, if not, form a policy group consisting of interested stakeholders to make proposals to governments and other decision-making bodies. It would monitor policies and norms through feedback from the bodies that adopt and implement them."

It is interesting to consider how this mechanism might operate in relation to the established patterns of (dis)agreement among governments and stakeholders on Internet governance and wider digital issues. For example, consider the question of identifying and filling policy gaps. The UN Working Group on Enhanced Cooperation on Public Policy Issues Pertaining to the Internet spent years locked in divisive debates about whether there were any gaps and "orphaned issues" that required new cooperation before it closed down without an agreement. Moreover, regulation is a complex arena that is heavily institutionalized across governments and involves specialized and expert agencies. If the requests do not come from the entities with responsibilities regarding the gap, they may not welcome an IGF-based group approaching to say, "we hear that you have a gap and are here to help."

More generally, some actors might perceive the proposed Cooperation Accelerator and the Policy Incubator as insufficiently "bottom-up" in approach. Accelerator members would identify points of agreement among extant coalitions, consider whether new ones are needed, convene actors, and facilitate the convergence of their preferences. Incubator stakeholders would receive requests to look at gaps and then assemble groups to develop responses. Finding the right balance here would take some refinement, and managing such processes could draw the IGF onto terrain that requires careful treading.

Fourth, there would be an Observatory and Help Desk that would direct requests for help on digital policy to appropriate entities and engage in related activities. Sharing knowledge and information should be a tractable challenge that is well suited to an international mechanism. This author is among those who believe that it would be useful to institutionalize an informational "clearing house" function that utilizes both technological tools and human support.[4] Indeed, as Wolfgang Kleinwächter has noted, the IGF already performs a diffuse kind of clearing house function by bringing together suppliers and demanders of knowledge and information on a wide range of issues, so one could argue that this would be a quite natural fit.[5]

That said, the panel was more ambitious in imagining not just a mechanism for aligning informational supply and demand, but rather a "help desk" that ministers and others would want to call on for rather more. The report proposes an IGF unit with the capacity to "direct requests for help on digital policy (such as dealing with crisis situations, drafting legislation, or advising on policy) to appropriate entities… coordinate capacity development activities provided by other organizations; collect and share best practices; and provide an overview of digital policy issues, including monitoring trends, identifying emerging issues and providing data on digital policy." All this could require a significant bureaucratic unit, and some of these tasks could be sensitive and are already performed by other international organizations. In parallel, the panel separately recommends "the establishment of regional and global digital help desks to help governments, civil society and the private sector to understand digital issues and develop capacity to steer cooperation related to social and economic impacts of digital technologies," so the IGF unit would need to coordinate with those entities as well.[6] There are some operational and political issues to be worked through here.

Turning from the four new units to the broader vision, it should be noted that the IGF+ proposal does not address the questions of IGF improvements that have been much debated over the years. A great many suggestions have been made by researchers, civil society advocates, the private sector and governments, as well as the Working Group on Improvements to the Internet Governance Forum and the UN's 2016 retreat on advancing the IGF mandate. The report does include a footnote mentioning some of this activity but does not engage with the issues, as envisioning a "plus" layer is its sole focus.

Irrespective of what happens with the "plus," continuing attention is needed to improving the rest of the IGF. Indeed, the shape and dynamics of the host body would presumably impact the "fit" and operation of the proposed add-ons. Should the IGF remain an annual event that is mostly devoted to workshops, supplemented by some bits of intersessional activity like the national and regional IGFs, dynamic coalitions, and best practice forums? Or, for example, might it be worth considering having meetings focused on one or two themes per year in a NETmundial-style configuration, e.g., globally participatory preparatory processes and efforts to agree on normative outcomes that could inform decision-making institutions? The WGIG report and the Tunis Agenda mandate included the option of adopting recommendations, but concerns about "WSIS-style negotiations" and the political fragility of the new process made such a model too controversial to be considered in the IGF's early years. Perhaps by now, conditions have matured enough to consider such an option. Maybe some of the other long-standing challenges could be addressed seriously in tandem, such as enhancing the involvement of governments, especially from the developing countries.

Finally, it merits note that the High-Level Panel was tasked with mapping out options for "digital cooperation," which is broader, more inchoate, and perhaps even more contestable than "Internet governance." Several considerations follow from this. First, not all of the digital issues of concern today may need additional forms of international cooperation, much less governance. Artificial intelligence, blockchain, robotics, 3D printing and so on may raise policy concerns, but determining the most suitable responses to these requires case-by-case consideration with potential forms following functions. Second, where international cooperation is needed, pursuing it in the IGF is only sensible with respect to clear Internet governance dimensions of the issues.

Third, the fact that "digital" issues are important would not justify changing the name and focus of the IGF, as some actors seem to contemplate. On the one hand, even though the Internet Assigned Numbers Authority transition has reduced the political heat level, Internet governance remains a substantial and complex arena with many outstanding questions that require the international community's attention. On the other hand, Internet governance should not be subsumed under a broader "digital governance" rubric alongside very different issues. If careful analysis determines that we need new mechanisms for issues that are not about Internet governance, then these should be developed. Perhaps the High-Level Panel's second and third models could figure prominently in such a process, but that is a different conversation. In the meanwhile, hopefully, the Berlin IGF and related discussions will be sufficient to determine whether the IGF+ model should serve as an important part of strengthening the IGF and enhancing its utility.

[1] The Age of Digital Interdependence: Report of the UN Secretary-General's High-level Panel on Digital Cooperation. The United Nations, 2019. https://www.un.org/en/pdfs/DigitalCooperation-report-for%20web.pdf

[2] In contrast, the panel did an online Call for Contributions well before the report's release that did not delve into the models and therefore elicited rather little comment on them. See the inputs offered at https://digitalcooperation.org/responses/

[3] All quotes pertaining to the IGF+ model are from page 24 of the High-Level Panel's report.

[4] For a discussion, see, William J. Drake and Lea Kaspar, "Institutionalizing the Clearing House Function," in, William J. Drake and Monroe Price (eds.), Internet Governance: The NETmundial Roadmap. Los Angeles: USC Annenberg Press, pp. 88-104. Efforts to launch something akin to this have included e.g., the European Commission-backed Global Internet Policy Observatory, the NETmundial Initiative, and (most successfully) the Geneva Internet Platform's Digital Watch Observatory.

[5] See Wolfgang Kleinwächter, "Multistakeholderism and the IGF: Laboratory, Clearinghouse, Watchdog," in William J. Drake. (ed.), Internet Governance: Creating Opportunities for All – The Fourth Internet Governance Forum, Sharm el Sheikh, Egypt, 15-18 November 2009. The United Nations, 2010, pp. 76-91.

[6] The Age of Digital Interdependence, p. 5.

Written by William J. Drake, International Fellow and Lecturer

Follow CircleID on Twitter

More under: Internet Governance, Policy & Regulation

Categories: News and Updates

Challenging Domain Names for Abusive Registration: UDRP and ACPA

Fri, 2019-11-01 19:40

There are predatory domain name registrants, and there are registrants engaged in the legitimate business of acquiring, monetizing and reselling domain names. That there are more of the first than the second is evident from proceedings under the Uniform Domain Name Dispute Resolution Policy (UDRP). "Given the human capacity for mischief in all its forms, the Policy sensibly takes an open-ended approach to bad faith, listing some examples without attempting to enumerate all its varieties exhaustively." Worldcom Exchange, Inc v. Wei.com, Inc., D2004-0955 (WIPO January 5, 2005). But it is also evident that the Policy is even-handed, and that some of the "mischief" comes from mark owners. Metamark (UK) Limited v. Andrew Longton / Metamark Corporation, FA190900 1864151 (Forum September 30, 2019) (METAGLIDE and <metamark.com>, in which the Complainant not only failed to prove the domain name was identical or confusingly similar to its mark, but the domain name was registered 20 years before the mark came into existence).

The question to be answered in UDRP proceedings, and no less so in actions under the Anticybersquatting Consumer Protection Act (ACPA), is whether a challenged registrant knowingly registered a domain name corresponding to a mark with the unlawful purpose of taking advantage of its goodwill and reputation. Mark owners may be irked that certain words and combinations are already registered, but they forget there are competing interests in the cyber marketplace, and getting there first is a time-honored practice. It is not unlawful to have registered (before the existence of a mark) or to register domain names (after it) identical or confusingly similar to marks if there is neither intention to target nor knowledge of the mark. Sarah Lonsdale & Stuart Clark t/a RocknCrystals v. Domain Admin / This Domain is For Sale, HugeDomains.com, D2019-1584 (WIPO September 6, 2019) (<rockncrystals.com>).

The UDRP jurisprudence that has developed over the past twenty years confirms three points: a) that a mark owner's exclusive rights are no greater than the law allows, b) that the facts will be weighed (as one would expect in an adjudicative proceeding) to determine the lawfulness of domain name registrations, and c) that the law is no less protective of Respondent than it is of Complainant. In FPK Services LLC DBA HealthLabs.com v. Contact Privacy Inc. Customer 1241257718 / Michael Gillam, D2019-1483 (WIPO October 10, 2019), Complainant's mark predated <healthlab.com>, but "there is nothing in the record to indicate that Respondent was aware of Complainant or its alleged mark at the time the Domain Name was acquired in 2017." One of the factors Panels take into account is the strength or weakness of the mark; as the Panel points out, descriptive marks are not inherently distinctive absent proof of secondary meaning.

Both the UDRP and the ACPA are crafted to combat cybersquatting and, to some extent, have overlapping jurisdictions, although there are good reasons for filing a claim in federal court, namely the opportunity to plead in the alternative for trademark infringement. The UDRP is not a trademark court, and even for cybersquatting, it should not be assumed that the outcomes will be the same in both fora. A Panel's judgment applying UDRP law may be different from a Judge's under the ACPA.

Take, for example, a claim mislabeled as cybersquatting, which is more likely actionable (if actionable at all) for trademark infringement. That which is outside the scope of the UDRP can be within the scope of the ACPA or, if not that, of the Lanham Act § 43(a). The Panel in Ascension Health Alliance v. Prateek Sinha, Ascension Healthcare Inc., D2018-2775 (WIPO January 25, 2019) (<ascensionhealthcare.com>) suggests the claim is in the wrong forum: "[a]lthough Complainant may have the starting ingredients of an ordinary, trademark infringement case against Respondent, the Complainant has not demonstrated to the satisfaction of the Panel that Respondent is not making a bona fide offering of services." See also Trivago N.V. v. Adam Smith, D2019-1957 (WIPO October 20, 2019) (TRIVAGO and <traveltrow.com>) ("Complainant [may very well have] a valid trademark infringement or unfair competition cause of action against Respondent in a court of law.").

The reason for different results begins with different evidentiary requirements. Under the UDRP, a trademark complainant prevails only on proof of bad faith registration and bad faith use; bad faith use alone is insufficient (the conjunctive model). In contrast, the ACPA is satisfied on either/or proof: bad faith registration or bad faith use or trafficking in bad faith (the disjunctive model), with the result that mark owners can lose in the UDRP and prevail in the ACPA. Two cases illustrating this point are Newport News Holdings Corporation v. Virtual City Vision, Incorporated, d/b/a Van James Bond Tran, 650 F.3d 423 (4th Cir. 2011) for <newportnews.com>; and Bulbs 4 E. Side, Inc. v. Ricks, 199 F.Supp.3d 1151 (S.D. Tex., Houston Div. August 10, 2016) for <justbulbs.com>.
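To make the contrast concrete, here is a minimal sketch that reduces the two tests to boolean predicates; the function and parameter names are illustrative shorthand of my own, not statutory language.

```python
# Illustrative only: the UDRP's conjunctive test versus the ACPA's disjunctive
# test, as described above, reduced to boolean predicates. Names are shorthand.

def udrp_actionable(bad_faith_registration: bool, bad_faith_use: bool) -> bool:
    # UDRP: the complainant must prove bad faith registration AND bad faith use.
    return bad_faith_registration and bad_faith_use

def acpa_actionable(bad_faith_registration: bool, bad_faith_use: bool,
                    bad_faith_trafficking: bool) -> bool:
    # ACPA: registration OR use OR trafficking in bad faith is enough.
    return bad_faith_registration or bad_faith_use or bad_faith_trafficking

# Good faith registration followed by later bad faith use (the pattern in the
# Newport News matter discussed below): no UDRP claim, but an ACPA claim.
print(udrp_actionable(False, True))         # False
print(acpa_actionable(False, True, False))  # True
```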

In the earlier UDRP Newport News proceeding, the Respondent had successfully argued it had rights or legitimate interests because it was using the domain name in good faith to "disseminate city information in an effort to increase tourism and other visitor traffic to the city"; but years after the UDRP decision, the Respondent changed its use to compete with Plaintiff. That conduct would not have been actionable in a new UDRP proceeding but became actionable under the ACPA. The "just bulbs" Plaintiff was unsuccessful in two UDRP complaints before it prevailed on summary judgment on the ACPA claim; its trademark infringement motion was denied on a finding of genuine issues of material fact.

There is also another related difference in the jurisprudence applied in UDRP proceedings and court actions. Under the UDRP, a renewal of registration of a domain name arguably used in bad faith but registered in good faith is not actionable, while under the ACPA and the Lanham Act, it is. Under the UDRP, renewal is simply regarded as a continuation of the registrant's holding, not a new registration. (Bad faith is measured from the registration of the domain name by the challenged registrant.) In Tergus Pharma, LLC v. Domain Administrator, DomainMarket.com, D2019-1787 (WIPO September 24, 2019) (<tergus.com>), the Panel noted that

the clear consensus view of WIPO UDRP panels is that the mere renewal of the domain name registration is not the relevant point in time to assess if there was bad faith in the "registration" of the domain name for purposes of the Policy, paragraph 4(a)(iii).

This is so even though the Respondent appears subsequently to be using the domain name in bad faith: "[F]or whatever reasons, [Respondent] mentions the Complainant on the web page advertising the Domain Name on [its] website. Far from enhancing the value of the Domain Name, this may serve to warn a prudent bidder that it could be buying a lawsuit or a UDRP action." The UDRP consensus is described in WIPO Overview 3.0 at section 3.9.

Contrast the treatment of renewal as bad faith under the ACPA (a direct action in federal court). In Jysk Bed'N Linen v. Dutta-Roy, 810 F.3d 767, 777 (11th Cir. 2015), the court noted that "[i]n a sense, the cybersquatter muddies the clear pool of the trademark owner's goodwill and then profits off the resulting murkiness." Re-registration with knowledge of the trademark was a key factor in determining bad faith in that case:

[w]hen Dutta-Roy re-registered bydesignfurniture.com under his own name rather than Jysk's, he was expressing his intent or ability to infringe on Jysk's trademark. He admitted that he never had used the domain names in the bona fide offering of any goods or services. His demand for money can be looked at in two ways, and they are two sides of the same coin. First, the amount of money demanded could show how much he believes the domain name smudges the goodwill of the trademark — that is, how much money Jysk would lose out on if Dutta-Roy were to use the domain names to misdirect Jysk's customers. Second, the amount of money demanded could show how much value he believes Jysk puts on the domain names. In either case, bad-faith intent abounds.

And it concluded:

It would be nonsensical to exempt the bad-faith re-registration of a domain name simply because the bad-faith behavior occurred during a noninitial registration, thereby allowing the exact behavior that Congress sought to prevent.

There can be no safe harbor for a domain name holder under 15 U.S.C. § 1125(d)(1)(B)(ii) where it has no legal basis for re-registering the domain name.

The benefit of filing a complaint in federal court is that plaintiffs are not confined to cyber-piracy claims; they can plead in the alternative for relief under the Lanham Act, § 43(a). As noted in the Ascension Health Alliance and Trivago cases, it is not as though Complainants are entirely wrong in challenging unlawfully registered domain names, but their remedy may lie in federal court under the Lanham Act as a backup to their cyber-piracy claims.

This brings us to a more recent federal case that illustrates the benefit. In ZP No. 314 v. ILM Capital, 1:16-cv-00521-B (S.D. Alabama September 30, 2019), the court found Plaintiff was entitled to relief for trademark infringement but not on its ACPA claim. (My thanks to Evan Brown for bringing this case to my attention in one of his blog posts.) In an earlier decision on competing summary judgment motions, reported at 335 F.Supp.3d 1242 (2018), the Court concluded that Plaintiff stated a claim under the ACPA:

As a preliminary matter, the undersigned notes that the parties dispute whether Defendants' re-registration of the subject domain names in March 2017 and May 2018 constitutes an actionable offense under the ACPA… The Eleventh Circuit firmly resolved this issue when it held that, "[t]he plain meaning of register includes a re-registration[,]" such that re-registration falls under the purview of the ACPA. Jysk, 810 F.3d at 777 ("It would be nonsensical to exempt the bad-faith re-registration of a domain name simply because the bad-faith behavior occurred during a noninitial registration, thereby allowing the exact behavior that Congress sought to prevent.")

But there is a difference between having an actionable claim for bad faith use and proving the elements of it at trial. For an ACPA claim, the "only element requiring proof at trial was bad faith intent to profit" (15 U.S.C. § 1125(d)(1)(A)). The phrase "intent to profit" is not found in the UDRP, although it is implicit in Paragraph 4(b)(iv): "by using the domain name, you have intentionally attempted to attract, for commercial gain, Internet users to your web site or other on-line location, by creating a likelihood of confusion with the complainant's mark as to the source, sponsorship, affiliation, or endorsement of your web site or location or of a product or service on your web site or location."

In ZP No. 314 (trial decision, page 35) the Court distinguishes "bad faith intent to profit" from mere bad faith:

Without question, the factors enumerated above [referring to the statutory nine factors of the ACPA] strongly suggest bad faith on the part of Defendants. This finding is bolstered by evidence from which it reasonably can be inferred that Defendants' conduct in registering domain names that are identical or confusingly similar to the marks of ZP, their direct competitor, was not an isolated occurrence, but appears to be Defendants' mode of operation.

"Mere bad faith" is sufficient for a UDRP award, but not in court: "proving mere bad faith is not enough" because "[a] defendant is liable only where a plaintiff can establish that the defendant had a 'bad faith intent to profit.'" 15 U.S.C. §1125(d) (emphasis in original), citing Southern Grouts & Mortars, Inc. V. 3M Company, 575 F.3d 1235, 1246 (11th Cir. July 23, 2009). Under the UDRP, "mere bad faith" is sufficient if coupled with bad faith registration.

Having resolved the ACPA claim by dismissing it, the court then turned to the § 43(a) claim. It found that the "only question that remained at trial was whether Defendants' use of the marks after July 2017 constituted 'use in commerce.'" Here, the court distinguishes between cyber-piracy and trademark infringement (page 24):

In the present case, the court previously found as a matter of law that the domain names at issue were confusingly similar to ZP's marks and that ZP had acquired secondary meaning (i.e., had a protectable interest in the marks) after July 2017… Therefore, the only question that remained at trial [the 43(a) claim] was whether Defendants' use of the marks after July 2017 constituted "use in commerce."

And the court found that Plaintiff proved it was (page 27):

Based on the foregoing, the court finds that Defendants' use and re-registration of the eight infringing domain names after July 2017 (when ZP had obtained trademarks on "One Ten" and "One Ten Student Living"), which included "parking" eight infringing domain name webpages with ZP's marks prominently displayed at the top of the page, with click through links to various other vendors' goods and services, constituted use in commerce under common law and the Lanham Act.

In both Jysk and ZP No. 314, the factual circumstances are outside the scope of the UDRP because in Jysk the bad faith follows a re-registration of the domain name and in ZP there is insufficient evidence in the summary judgment submission to support cybersquatting.

Written by Gerald M. Levine, Intellectual Property, Arbitrator/Mediator at Levine Samuel LLP

Follow CircleID on Twitter

More under: Domain Names, Brand Protection, Law, UDRP

Categories: News and Updates

More Privacy for Domain Registrants – Heightened Risk for Internet Users

Thu, 2019-10-31 23:06

A recent exchange on CircleID highlighted a critical need for data to inform the debate on the impact of ICANN's post-GDPR WHOIS policy that resulted in the redaction of domain name registrant contact data. A bit of background: in my original post, I made the point that domain name abuse had increased post-GDPR. A reader who works with a registrar (according to his bio) commented:

"Can you back up that statement with data? Our abuse desk has actually seen a reduction in abuse complaints."

That question spurred an investigation by the data engineers at AppDetex to answer these questions:

  • What abuse data exists?
  • Is it indicative of GDPR having had an effect on abuse?
  • And has the abuse impacted internet users as a result?

Our goal was to assemble domain name abuse data from a market basket of very large consumer-focused brands who have looked for, found, and then attempted to mitigate abuse of the domain name system related to their own brands. The abuse encompassed a variety of categories ranging from innocuous to insidious and included malicious attacks that sought to defraud users or spread malware.

To understand the effect of GDPR, we examined the number of attempted mitigations of abuse and the success or failure of those mitigation attempts (compliance) in the quarter preceding, and then two quarters after the implementation of ICANN's Temporary Specification for gTLD Registration Data and the resulting wholesale redaction of registrant contact data in registrars' publicly available WHOIS databases.

We inspected both the number of cases of brand abuse that warranted mitigation and the rate of compliance, for a few reasons. Abuse, if left unchecked and available to the public on the internet, can ensnare people who are lured into mistakenly giving up their credentials, transacting with those who have bad intent, downloading malware, or falling victim to any number of other crimes or misdemeanors.

After evaluating thousands of mitigations during the quarter before and the two quarters after GDPR took effect, we found an increase of nearly 15% in the number of attempted mitigations following the implementation of the Temporary Specification. More disturbing, and indicative of harm, in the quarters following the implementation of the Temporary Specification we found substantial decreases in successful mitigations (compliance). Two quarters after the implementation, that decrease in compliance totaled 38% (compliance being measured during the 30 days following an initial mitigation attempt and qualified as the removal of content or a drop or transfer of the domain name). This means that the life of abusive domains began to increase immediately after the implementation of the Temporary Specification, exposing billions of internet users to scams for much longer periods than before.
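As a rough illustration of how such a compliance rate is computed, the sketch below applies the 30-day, content-removed-or-domain-dropped test described above to a set of mitigation records; the record structure and the sample counts are hypothetical, not AppDetex data.

```python
# Hypothetical illustration of the compliance measurement described above:
# a mitigation attempt is "compliant" if, within 30 days, the content was
# removed or the domain was dropped or transferred. All sample data is invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MitigationCase:
    days_to_resolution: Optional[int]  # None if never resolved
    outcome: Optional[str]             # "content_removed", "domain_dropped", "domain_transferred"

COMPLIANT_OUTCOMES = {"content_removed", "domain_dropped", "domain_transferred"}

def compliance_rate(cases):
    compliant = sum(
        1 for c in cases
        if c.days_to_resolution is not None
        and c.days_to_resolution <= 30
        and c.outcome in COMPLIANT_OUTCOMES
    )
    return compliant / len(cases)

# Invented pre- and post-Temporary Specification quarters:
pre_spec = [MitigationCase(10, "domain_dropped")] * 80 + [MitigationCase(None, None)] * 20
post_spec = [MitigationCase(12, "content_removed")] * 50 + [MitigationCase(None, None)] * 50
change = (compliance_rate(post_spec) - compliance_rate(pre_spec)) / compliance_rate(pre_spec)
print(f"relative change in compliance: {change:+.0%}")
```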

This change followed a very stable period of successful enforcement and mitigation rates. Before the implementation of GDPR, those rates had remained relatively consistent, having leveled off from the changes observed immediately after the launch of new generic top-level domains in the months following October 2013.

Another significant change in mitigation is that fewer domain name registrants and registrars are part of the solution. It's now much more difficult to reach registrants due to both the wholesale redaction of registrant contact data and the lack of clarity over when that data should be revealed to those seeking to abate abuse. This means that ISPs are removing content at the behest of brands while, due to inaction by registrants and registrars, the abusive domain names remain registered and might again be used in malicious schemes.

Were these changes a result of the redaction of registrant contact information? Likely, as both brand rights holders and security professionals are finding it more difficult to pursue mitigation of abuse. In fact, MarkMonitor, in their blog, reported that it takes 12% more effort to abate abuse, and IBM X-Force reported large drops in the blocking of abusive domains as a result of GDPR.

Can we expect more of this? Again, it's likely. The anonymity of the domain naming system as a result of GDPR-related redactions and the use of privacy and proxy services (as mentioned by Russ Pangborn in his recent blog) leave room for bad actors to act with impunity. To put it bluntly, it's easy for criminals to be brazen when their identity is hidden, and they are not held accountable for their crimes.

The sad thing is that it's not the brands or security professionals that suffer the brunt of the damage. It's the poor souls who don't know how to discern a good site from a bad site, the unlucky ones who mistakenly visit a site and have their credit card "skimmed", and the rest of us who suffer any number of other insults to our well-being while ICANN policymakers debate the definition of abuse and the responsibilities of contracted parties in abating it.

Isn't it time for the US and other local lawmakers to take up the cause of consumer protection in the domain name space and mandate a change for the better?

Written by Frederick Felman, Chief Marketing Officer at AppDetex

Follow CircleID on Twitter

More under: Cybersecurity, Domain Names, ICANN, Privacy, New TLDs, Whois

Categories: News and Updates

Monetising Solutions for the Telcos

Thu, 2019-10-31 21:01

Developments in the telecommunications industry and the broader digital economy have opened up many new markets over the last few decades. Telecoms has changed from a more or less standalone, horizontally-organized industry to one that has become a key facilitator in a range of vertical markets.

The keyword that is used to indicate that change is "smart." We are talking about smart transport, smart energy, smart cities and so on. Essentially, what this means is that information and communication technologies (ICT) are increasingly being strategically added to and embedded in these industries.

The technological developments have been mindboggling: broadband, mobile communications, cloud computing, data management, storage, AI and analytics. Combined, these have created the ideal environment for the development of technology platforms on which social and economic transformations can be developed. These platforms are often called "labs" — places where innovation, sharing, collaboration and piloting can take place.

The telecoms industry was right at the forefront of the digital explosion, but for a long time, telcos concentrated on protecting their very lucrative incumbent voice businesses.

And so companies such as Google, Apple, Facebook, Amazon, and many others in the internet market had free rein to develop over-the-top (OTT) business models, using the existing telecoms infrastructure to build their own platforms from which to distribute their own services to end-users.

Despite what could be called "missed opportunities" for telcos, they were able to maintain a strong market position in the basic telecoms market (connectivity). The massive increase in OTT services also stimulated a far greater use of the telecoms network. In most cases, telcos remain strong and healthy players in the connectivity market. However, it has become a low-margin utility service. Within their current business models, there is little room for them to develop more value-added products with opportunities for premium-based revenue models.

There are various obvious scenarios for the telcos to pursue:

  • Current model of an integrated telco: a strong focus on technology and engineering, combined with good customer relationships;
  • The wholesale model: full control over the network, intermediaries between vendors and OTT retail providers; and
  • Platform: a more virtual telco model based on first-class infrastructure with a strong focus on innovation and new services and strong relationships with customers, partners and developers.

I would like to concentrate on the third option.

The nature of the telecoms business, its culture, and its business models are not well suited to the more vertical approach that platform-based models can provide.

For example, let's look at the massive transformations that are taking place in transport, cities and energy. What is needed is a holistic approach to these developments. Telcos could take control of such a platform, rather than just being a supplier to some of the underlying elements of new smart models.

Looking around the globe, we see the car industry, cities and energy companies trying to take charge of the platform. As they often lack in-house ICT skills, the success of these platforms is a hit-and-miss situation. In other cases, IT companies are taking charge (such as Cisco, IBM and Huawei) or companies such as PWC and Accenture. The problem with these latter organizations is that their clients have become increasingly wary of proprietary solutions.

So far, very few telcos have taken a leading position in such developments. Key reasons are that their financial, technology and business models are not well-suited to starting a platform and taking risks involved in setting them up. Instead, we see IT companies taking the lead, like Google (Alphabet), for example, in Smart City Toronto.

Their business models are much better suited to such opportunities, and they are prepared to take risks and accept that several investments may fail. However, this allows them to learn on the job. They know that the total value of the platform markets that will be developed over the next 10-20 years will be in the trillions of dollars.

Perhaps Spain's Telefonica has gone the furthest of all the telcos. While still not adopting the full platform approach, they are taking the lead in a range of international smart city projects. KPN in the Netherlands is another example of a leading participant, but again not a full platform operator.

Of course, telcos quickly become partners in such projects, but most of the time, they are relegated to providing basic telecoms services. Often, these services are tendered for by the project leader, and competition makes sure that the margins for the telcos remain rather subdued.

Looking at the very upbeat messages that the telcos are sending out regarding 5G, the situation will become even more complex. In order to deliver the applications that the technology promotes, such as Internet of Things (IoT) and the much-promoted connected car business, platforms will require cooperation between telcos. Such applications can't rely on one supplier alone. You cannot have a driverless solution that only uses the Telstra network or the Optus one.

Telcos are not used to partnering with competitors. Often the message is "let's partner, but you have to do it my way." Car manufacturers in Europe have already indicated that they are not going to build the roadside IoT platforms and are looking at the telcos to collaborate. So who will develop the "build it and they will come" business model?

If the telcos do want to monetize their network better, they will have to move up the value chain, and this will require a totally different business model. Most likely, this will require setting up structurally separated new companies, each individually specialized, based on the markets they are selecting. The platform would largely be built around a virtual "telco" model, mainly operating in the cloud. They should be open to external developers and partners, securing an ongoing development of new and innovative offerings.

In such a model, the telcos' unique skill sets allow them to take a greater controlling role. Rather than being asked to be a partner, they should set up the ecosystem for the platform, select the partners, develop the financial models around the platform, and be in control. Their independent position also allows them to scale this business model and replicate it where opportunities arise.

There is no doubt that such an approach holds significant risks. Some initiatives will fail. Of course, such a model should be thoroughly assessed through scenario design, but that shouldn't lead to procrastination. If done well, the rewards will be substantial.

The telcos arguably have the deepest insight into customers' behavior, but if they are to move up the value chain, they will need to use this insight to move out of partnerships and establish themselves in a controlling position.

Written by Paul Budde, Managing Director of Paul Budde Communication

Follow CircleID on Twitter

More under: Telecom

Categories: News and Updates

Will Legacy TLDs Have a Long Legacy?

Wed, 2019-10-30 22:56

We all live in a world where the rapid pace of innovation can be both exciting and challenging.

From keeping up with the latest consumer technologies, such as new mobile apps, social media platforms, and digital assistants like Alexa, to business-driven innovations like the Internet of Things (IoT) and Artificial Intelligence, the one certain thing we all face is change.

In the Top-Level Domain (TLD) arena, can the same be said about legacy TLDs? With .com, .org and .net having a lasting hold in the domain arena, it makes us wonder how much new TLDs will threaten their legacy. In other words, will new TLDs eventually eclipse more traditional TLDs?

According to a recent Dreamhost blog post, younger people are more likely to gravitate towards, and trust, new TLDs when compared to their legacy counterparts. The article also pointed out how the rising popularity of new TLDs shows how "we are in for a change."

For the fans of legacy TLDs, many will point to the recent numbers provided by Statistica that show how .com is still leading the way. Though, according to Verisign, new TLD registration was up over 11 percent in 2018 compared to 2017. This means that one in every five new domains in 2018 was a new TLD.

While the numbers show that .com is still on top, the slow-rising sea change is that new TLDs are providing more dynamic offerings for specialized brands, which are growing in scope. As legacy operators continue to make up for lost volume by raising prices, end-users will continue to look at other viable options. According to estimates, there are 1.5 million significant brands in the marketplace — all ripe for using new TLDs.

In addition, legacy TLDs lack specialization; they could stand for anything. On one side of the coin, this is good for selling on volume. But wouldn't an environmental nonprofit benefit more from a .earth or a .green domain than from a .org, and certainly more than from a .com?

While there is still room for new TLDs to gain traction, we could soon be hitting a tipping point where all companies, brands, nonprofits, artists (and the like) realize the true brand value and differentiation that comes from these specialized domains.

When this happens, a rapid-pace-of-innovation-like-change may be something that all legacy TLDs will need to face.

Written by Matt Langan, Founder, L&R Communications

Follow CircleID on Twitter

More under: Domain Names, New TLDs

Categories: News and Updates

.COM Contract Amendment Coming Soon for Public Comment

Wed, 2019-10-30 20:26

Last Thursday, during VeriSign's Q3 2019 quarterly earnings call, CEO Jim Bidzos offered statements that seemed to be carefully calibrated to satisfy Wall Street's curiosity about protracted negotiations with ICANN on a Third Amendment to the .com Registry Agreement while also appearing to distance the company from the soon-to-be forthcoming product of that year-long effort.

As I've written previously, this Third Amendment is necessary because of the First Amendment to the .com Registry Agreement, which extends the current agreement's term, including the wholesale registration price cap, until 2024 — a circumstance made inconvenient late last year when the National Telecommunications and Information Administration (NTIA) amended its Cooperative Agreement with VeriSign to remove the 2012 price restriction and grant pre-approval, beginning in 2020, for increases that don't exceed 7% annually in four out of every six years of a .com Registry Agreement term.

Some industry bloggers, such as DomainNameWire's Andrew Alleman and DomainIncite's Kevin Murphy, seemed to hint recently that this Third Amendment may not be the clean pass-through of price increases that VeriSign likely prefers — a possibility that I raised earlier when I suggested that ICANN will replicate the very lucrative innovation that is already incorporated into the .NET registry agreement. In that case, they invented a "special development fund" to which VeriSign annually remits $0.75 for every .NET registration — in addition to the $0.25 per domain name registration that it and every other registry operator already contributes to ICANN's budget. These funds are deposited into ICANN's general treasury, where they are neither sequestered, audited, nor otherwise accounted for.

This breakthrough innovation has, so far, generated nearly $200 million in additional and unaccountable slush funds for ICANN's general treasury with no muss, no fuss, and, indeed, nary a peep from the much-vaunted stakeholder community that's been saddled with accountability backstop duty since the U.S. Government's Obama-era abandonment of its historical role as guarantor of the Internet's root.

There is, of course, nothing to stop ICANN from merely reusing what has previously proven to be a successful method of levying tolls that generate increased revenue. After all, the precedent exists, and ICANN has demonstrated time and again that it is remarkably impervious to stakeholder outcry; community attempts at accountability seem to roll off ICANN like water off a duck's back.

But, like as not, it's quite possible that ICANN has been hard at work pursuing its own kind of permissionless innovation that pushes the envelope in the name of progress.

What might that look like?

Well, the simplest way to innovate here is to make the tolls "automagically" progressive — meaning that every time VeriSign gives itself a raise, ICANN also shares in the good fortune. What seems most probable, if this path were taken, would be to determine the toll amount — let's say $0.15 per .com registration per year (this leaves VeriSign $0.40 of its next price increase which, between monopolists, probably seems grudgingly "fair") — and to peg an additional $0.15 to every price increase that VeriSign takes. This way, ICANN's share of recurring toll revenues grows in concert with the profit margin of its largest ratepayer and fellow monopolist collaborator.

This would equate to around $20 million of increased revenue in conjunction with Price Increase #1, another $20 million for Price Increase #2, and so on, so that by 2024, assuming that VeriSign takes all of its allowable price increases (which past behavior suggests it will), ICANN will be making $80 million more in revenue — per year — than it does today.

The other virtue of this approach is that, because VeriSign always takes its price increases, ICANN would essentially be able to forecast its revenue increasing by $20 million per year in four out of every six years, in perpetuity. This is a staggering sum of inefficiently allocated resources — excuse me, revenue — and it doesn't even account for the massive borrowing power that comes with this type of consistent and predictable revenue growth.
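To make the arithmetic behind the $20 million and $80 million figures explicit, here is a back-of-the-envelope sketch; the roughly 140 million .com registration base is my own rounded assumption, while the $0.15 toll pegged to each price increase is the hypothetical figure posited above.

```python
# Back-of-the-envelope sketch of the hypothetical progressive toll described
# above. The ~140 million .com registration base is an assumption for
# illustration; the $0.15 toll per price increase is the figure posited in the text.
COM_REGISTRATIONS = 140_000_000      # assumed, and treated as flat for simplicity
TOLL_PER_PRICE_INCREASE = 0.15       # hypothetical toll pegged to each increase

for increase_number in range(1, 5):  # four allowable increases through 2024
    cumulative_toll = increase_number * TOLL_PER_PRICE_INCREASE
    annual_revenue = cumulative_toll * COM_REGISTRATIONS
    print(f"After price increase #{increase_number}: "
          f"${annual_revenue / 1e6:.0f}M per year in additional ICANN revenue")
# Roughly $21M per year after the first increase and ~$84M per year after the
# fourth, consistent with the ~$20M and ~$80M figures above.
```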

During last week's earnings call, VeriSign's CEO went out of his way to stress that the contract negotiation is an ICANN process and otherwise distanced the company to the point that one could be forgiven for concluding that VeriSign was merely a passive observer to the negotiation for the contract from which more than 90% of its revenue is derived.

But this isn't exactly true when two parties are negotiating a contract, is it? Rather, that's a point that is stressed by a party that has been outmaneuvered by the other and wants to signal to investors and posterity that they did not accede to the terms in the agreement gladly, but that the deal isn't so bad as to necessitate rejecting it.

As the saying goes, success has a million parents, but failure is an orphan. Regardless, it seems laughable that ICANN wouldn't have taken this opportunity to secure its financial future by ensuring that VeriSign's rising tide lifts, if not all boats, then at least ICANN's. The question is whether ICANN focused on pecuniary self-interest or whether it also secured public interest concessions — like intellectual property rights protection mechanisms and anti-abuse provisions — as it recently did with .ORG in exchange for agreeing to greater pricing flexibility for the registry operator.

There are likely to be more developments during or shortly after next week's ICANN meeting in Montreal. Stay tuned.

Written by Greg Thomas, Founder of The Viking Group LLC

Follow CircleID on Twitter

More under: Domain Names, ICANN, Internet Governance, Policy & Regulation, Registry Services

Categories: News and Updates

Doing Our Part for a Safer, Stronger DNS

Wed, 2019-10-30 19:20

Public Interest Registry is the industry leader in DNS Anti-Abuse efforts on the Internet. Since our inception, we have worked to empower people and organizations that use the Internet to make the world a better place. Whether a .ORG is the foundation of an individual voice, a global non-profit, or any organization that is part of the mission-driven .ORG community, we are proud to have earned the trust of so many dedicated users.

Evolution and innovation are critical to the success of the Internet. Just as the Internet landscape keeps evolving, PIR does as well. PIR, like many other registries and registrars, historically took the position that it only would take action on "Technical Abuse of the Domain Name System" and only in very limited circumstances. With the ongoing discussions around "Abuse" on the Internet, we found ourselves wondering if that is a sufficiently thoughtful approach to such an important issue.

We recognize our unique role at PIR. .ORG is the most trusted domain name extension on the Internet. More socially conscious voices use the Internet, and .ORG domains in particular, as a platform to innovate and expand their footprint online. This makes their organizations targets for nefarious Abuse activities. We have a responsibility to protect our .ORG community, and that is a responsibility we take seriously.

That is just one reason why PIR helped to coordinate the efforts behind the "Framework to Address Abuse." Eleven registries and registrars planted a flag marking when we think a registry or registrar must act, as well as when it should act regardless of contractual requirements. We are proud to join our other participating registry and registrar friends and partners in this important effort. I encourage everyone to read the framework, talk to one of us at ICANN Montreal, and join us in these efforts.

In that same vein, we recently rolled out the third iteration of our "Quality Performance Index," or QPI. QPI is a registrar incentive program that creates an indexed "quality score" for a registrar's domains, factoring in abuse rates (first and foremost), renewal rates, domain usage, SSL certificates, and DNSSEC. Only registrars meeting our threshold QPI score are eligible to participate in our incentive programs. QPI has been well received by registrars and the community as a whole and was recently featured as a "Registry Best Practice" by the Governmental Advisory Committee's Public Safety Working Group.
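PIR has not published the QPI formula, so purely as an illustration of how an index like this might combine those inputs, here is a sketch with invented weights and an invented eligibility threshold; none of the numbers below come from PIR.

```python
# Illustrative only: PIR has not published the QPI formula. This sketch shows
# one way a registrar "quality score" could weight the inputs listed above,
# with the abuse rate counting most heavily. All weights and thresholds are invented.

def quality_score(abuse_rate, renewal_rate, usage_rate, ssl_rate, dnssec_rate):
    """All inputs are fractions between 0 and 1 for a registrar's portfolio."""
    weights = {"abuse": 0.50, "renewal": 0.20, "usage": 0.15, "ssl": 0.10, "dnssec": 0.05}
    score = (
        weights["abuse"] * (1.0 - abuse_rate)      # lower abuse rate, higher score
        + weights["renewal"] * renewal_rate
        + weights["usage"] * usage_rate
        + weights["ssl"] * ssl_rate
        + weights["dnssec"] * dnssec_rate
    )
    return round(score * 100, 1)

ELIGIBILITY_THRESHOLD = 70.0  # invented cutoff for incentive-program eligibility
registrar_score = quality_score(abuse_rate=0.002, renewal_rate=0.75,
                                usage_rate=0.60, ssl_rate=0.40, dnssec_rate=0.05)
print(registrar_score, registrar_score >= ELIGIBILITY_THRESHOLD)
```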

PIR's Anti-Abuse Principles

We have spent the last several months developing a set of seven core Anti-Abuse Principles that will guide our efforts to build a cleaner, safer, more trusted .ORG domain. These principles present a roadmap and are not intended to serve as a "green light" for PIR to become a content regulator. In fact, in 2019 so far, we have suspended a total of 28,675 domain names under our Anti-Abuse Program; of that number, only eight were suspended because of the content associated with the domain (six for containing Child Sexual Abuse Materials ("CSAM") and two for distribution of opioids online). We believe these seven principles are the next step in an ongoing, iterative process to create a safer, cleaner, and more trusted space for the .ORG community.

1. Registrants can expect to enjoy the benefits of registration, including free expression – Everyone who visits or registers a .ORG site should be able to express themselves and their views freely as long as they don't breach legal requirements, PIR's Anti-Abuse Policy or these principles.

2. Due process must be observed in each decision; this includes having a publicly available appeal process – Abuse mitigation can work only if it is seen to be fair and to follow basic principles of due process, including notice through the registrar to the registrant (subject to ordinary limits like law enforcement demands), an opportunity to be heard, an opportunity to cure or correct any Abuses, and the ability to appeal decisions taken.

3. We will act transparently with regards to Abuse – Every quarter, PIR will publish its Abuse and takedown numbers, including DNS Abuse (like phishing, malware, botnets, etc.), civil court takedown orders and Website Content Abuse suspensions.

4. We should do what is right, even when it is hard – PIR is fortunate to serve as steward for the trusted .ORG space where so many are doing so much good online. We cannot and won't do the bare minimum on Abuse, but we will be forward leaning and thoughtful in all cases. This approach may make waves and may prove challenging at times, but if what we are doing on a given complaint is right when all factors are weighed, it should be done. Would it be easy and legally conservative to not take action against a domain name that hosts CSAM or incites violence absent a court order? Yes, but we don't think it is the right thing to do.

5. Actions will be proportionate and with a clear understanding of collateral damage – PIR as a registry can't remove individual pieces of content on a website. Instead, we can only take down the entire domain name, along with any and all postings, threads, third-level domains, email, and all other content associated with the website attendant to the domain. Hypothetically, if GöransList.ORG, a domain with a popular forum/posting site, had a handful of posts with illegal content among the millions and millions of posts on that site, PIR suspending the entire domain name would not be proportionate or appropriate. Suspending that domain would effectively remove millions of legitimate pieces of content and affect not just the registrant but end-users worldwide. Acting at the DNS level to address Website Content Abuse can cause immense collateral damage.

6. We must factor in the scale of harms in making decisions on Abuse – We must weigh all factors, and when online harms are severe enough, the strong action of suspending a domain name may be, in some circumstances, an appropriate response. In cases involving CSAM, human trafficking, or other Abuse that poses a threat to human safety, we will not hesitate, consistent with due process and these principles, to act swiftly.

7. Action based on illegality must be apparent on its face – When the Abuse takes the form of illegal content, we will work with trusted experts, such as the Internet Watch Foundation and the National Center for Missing and Exploited Children, to evaluate the facts and take appropriate action. In some cases, the illegality will be clear from the nature of the Abuse, but in others, it may require a more nuanced analysis and corresponding caution on our part. In many cases, only the courts can make a final determination of what is illegal and what is not.

Protecting the .ORG community is our number one priority when it comes to fighting DNS Abuse. Our Anti-Abuse Principles make our approach to Anti-Abuse efforts more thoughtful and comprehensive. PIR welcomes any views, concerns, and suggested improvements from the public, and especially from the .ORG community, regarding these principles. As stewards of the .ORG community and of the Internet, we are doing more, and we are doing it right now.

Written by Jon Nevett, CEO, Public Interest Registry

Follow CircleID on Twitter

More under: Cybersecurity, DNS, DNS Security, Domain Names, Registry Services

Categories: News and Updates

Internet Society Seeks Nominations for 2020 Board of Trustees

Wed, 2019-10-30 17:54

Are you passionate about working toward a stronger, open Internet available to everyone? Do you have experience in Internet standards, technology, development or public policy? If so, please consider applying for a seat on the Internet Society Board of Trustees.

The Internet Society serves a pivotal role in the world as a leader on Internet policy, technical, economic, and social matters, and as the organizational home of the Internet Engineering Task Force (IETF). Working with members, chapters, and other partners around the world, the Internet Society promotes the continued evolution and growth of the open Internet for everyone.

The Board of Trustees provides strategic direction, inspiration, and oversight to advance the Society's mission. Trustees also serve as members of the Internet Society Foundation board.

In 2020:

  • the Internet Society's chapters will elect two Trustees;
  • its Organization Members will elect one Trustee, and
  • the IETF will select one Trustee.

Membership in the Internet Society is not required to nominate someone (including yourself) to stand for election or to serve on the Board. Following an orientation program, all new Trustees will begin 3-year terms commencing with the Society's annual general meeting in August 2020.

Nominations close at 15:00 UTC on December 6, 2019.

Find out more by reading the Call for Nominations and other information available at: https://www.internetsociety.org/board-of-trustees/elections/

Written by Dan York, Author and Speaker on Internet technologies - and on staff of Internet Society

Follow CircleID on Twitter

More under: Internet Governance

Categories: News and Updates

"lo" and Behold

Tue, 2019-10-29 21:22

Room 3420 in Boelter Hall at the University of California, Los Angeles, where the "infant internet took its first breath of life" 50 years ago today.

Happy 50th, Internet! On October 29, 1969, at 10:30 p.m., Leonard Kleinrock, a professor of computer science at UCLA, along with his graduate student Charley Kline, sent a transmission from UCLA's computer to another computer at Stanford Research Institute via ARPANET, the precursor to the internet. The message text was the word "login"; however, on the very first attempt, only the letters "l" and "o" were transmitted before the system crashed. The first transmitted message was thus "lo," as in "lo and behold," says Professor Kleinrock jokingly, remembering the day fifty years later — the moment the "infant internet took its first breath of life". The first permanent ARPANET link was eventually established on November 21, 1969. And the rest, as they say, is history!

Follow CircleID on Twitter

More under: Internet Protocol, Web

Categories: News and Updates

Self-Serving Internet Regulation – Shining a Light into the Shadows

Tue, 2019-10-29 20:14

On August 13, 2019, PharmacyChecker.com filed a lawsuit casting a white-hot spotlight on Big Pharma front groups using shadow regulation and the spread of misinformation to restrict Internet access to safe and affordable medicines.

Case 7:19-cv-07577: PharmacyChecker.com LLC, Plaintiff, vs. National Association of Boards of Pharmacy (NABP), Alliance for Safe Online Pharmacies (ASOP), Center for Safe Internet Pharmacies Ltd. (CSIP), LegitScript LLC, and Partnership for Safe Medicines, Inc. (PSM), Defendants.

The filing states: "PharmacyChecker.com brings this action… arising from a conspiracy among the defendants...and their constituent members to suppress competition in the markets for online pharmacy verification services and comparative drug pricing information. The defendants, who are a network of overlapping nonprofit organizations and private firms that are funded or backed by pharmaceutical manufacturers and large pharmacy interests, are using shadow regulation — private agreements with key internet gatekeepers — to manipulate and suppress the information available to consumers seeking information about lower-cost, safe prescription medicine."

The Campaign for Personal Prescription Importation (CPPI), the Electronic Frontier Foundation (EFF), Roger Bate, and others have been calling out these mounting issues for years.

I've shared my thoughts in this article on how, as a global Internet community, we must protect human rights where they intersect with digital technology while opposing those who use the Internet to restrict access to safe and affordable medications. This lawsuit takes aim directly at the defendants, who are being called out for spreading fabrications and misinformation to protect U.S. pharmaceutical profits.

CPPI fully agrees with those who believe rogue websites selling fraudulent medication are a serious public health threat and cautions consumers when choosing an online pharmacy. Consumers should purchase their prescription medicines from licensed, legitimate online pharmacies that adhere to appropriate, well-defined safety protocols, and only on the basis of presenting a valid prescription.

PharmacyChecker.com brings focus to the fact that Big Pharma's front groups — posing as consumer watchdogs — seek profit protection for U.S. pharmaceutical companies by spreading fabrications and misinformation rather than addressing the obvious health hazard of restricted access to medication. It is well-documented that Big Pharma opposes potential state and federal importation reform legislation by means of ASOP's and PSM's media scare tactics (here is one example). Meanwhile, NABP and LegitScript insert themselves into 'choke points' with social media networks, search engines, payment processors, and shipping companies to keep information about, and access to, safe, licensed foreign pharmacies from reaching consumers.

Unfortunately, that's not the worst of it.

The PharmacyChecker.com lawsuit cracks open and fully exposes Big Pharma's shadow regulation. As EFF's Mitch Stoltz wrote in this recent blog post, "This is a classic example of shadow regulation: agreements between the Internet's gatekeepers and special interests seeking to control others' speech. Shadow regulation is pernicious because it bypasses democratic accountability. And when it happens in industries with little competition, it avoids accountability through the market, as well." Stoltz also comments in the same post, "Pharmacy Checker's antitrust suit is an important test case for fighting back against unaccountable private speech policing."

Shadow regulation, in the hands of a virtual monopoly — achieved by NABP as the exclusive Registry for the .Pharmacy domain name — enables NABP to unilaterally distribute a so-called 'Not Recommended List' being used by search engines such as Bing, credit card processors, and others. Devoid of scrutiny, NABP unfairly and unjustly sequesters all too many legitimate pharmacy service websites — along with PharmacyChecker.com — in a net supposedly intended for rogues, while allowing thousands of unsafe and risky websites to go unchecked.

As Roger Bate wrote in this December 18, 2018 blog post, "Bing has bought into the dangerous self-interested arguments of the [NABP] that any overseas site, even if linked to a legitimate foreign pharmacy, is illegitimate for simply taking business from a U.S. pharmacy. Using the NABP list of sites it deems unacceptable has led to a ludicrous and dangerous outcome."

This represents a colossal disservice to millions of patients seeking to access safe, affordable medicines, and it is the essence of the court filing.

As an advocacy organization, CPPI believes that access to safe and affordable prescription medications is a human right and, therefore, must be protected through public cyber-policymaking, transparent Internet governance, and updates to outmoded laws.

It is important to keep the bright light shining on this case so that we can finally rid the Internet of a shadow regulation conspiracy, as detailed in the lawsuit.

As CPPI salutes PharmacyChecker.com for standing up to Big Pharma, we remind everyone it is up to all of us to ensure that the Internet cannot be used as a tool for censorship, particularly when it comes as a direct attack on our human right to good health.

Written by Tracy Cooley, Executive Director, Campaign for Personal Prescription Importation

Follow CircleID on Twitter

More under: Censorship, Domain Names, Internet Governance, Policy & Regulation, Registry Services

Categories: News and Updates

EFF: For ISPs to Retain Power to Censor the Internet, DNS Needs to Remain Leaky

Tue, 2019-10-29 20:00

EFF's Senior Legislative Counsel, Ernesto Falcon, argued in a post on Monday that major ISPs in the U.S., the likes of Comcast, AT&T, and Verizon, are aggressively influencing legislators to stop the deployment of DNS over HTTPS (DoH), "a technology that will give users one of the biggest upgrades to their Internet privacy and security since the proliferation of HTTPS." He writes:

"The reason the ISPs are fighting so hard is that DoH might undo their multi-million dollar political effort to take away user privacy. DoH isn't a Google technology — it's a standard, like HTTPS. They know that."

"The major ISPs have also pointed out that centralization of DNS may not be great for user privacy in the long run. That's true, but that would not be an issue if everyone adopted DoH across the board."

Follow CircleID on Twitter

More under: Access Providers, DNS, DNS Security, Internet Governance, Policy & Regulation, Privacy

Categories: News and Updates

Data, Applications, and the Meaning of the Network

Mon, 2019-10-28 19:39

Two things seem to be universally true in the network engineering space right now. The first is that network engineers are convinced their jobs will not exist within the next five years, or that the only network engineers left will be "in the cloud." The second is a mad scramble to figure out how to add value to the business through the network. These two movements are, of course, mutually exclusive visions of the future. If there is truly no way to add value to a business through the network, then it only makes sense to outsource the whole mess to a utility-level provider.

The result, far too often, is that the folks working on the network run around as if they have been in the hot aisle so long that their hair is on fire. This result, however, seems less than ideal.

I will suggest there are alternate solutions available if we just learn to think sideways and look for them. Burning hair is not a good look (unless it is an intentional part of some larger entertainment). What sort of sideways thinking am I looking for? Let's go back to basics and ask a question that might be a bit dangerous to ask: do applications really add business value? They certainly seem to. After all, when you want to know or do something, you log into an application that either helps you find the answer or provides a way to get it done.

But wait: what underlies the application? Applications cannot run on thin air (although I did just read someplace that applications running on "the cloud" add business value, implying applications running on-premises do not). They must have data or information in order to do their jobs (like producing reports, or allowing you to order something). In fact, one of the major problems developers face when switching from one application to another for the same task is figuring out how to transfer the data.
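To make that point concrete, here is a deliberately toy sketch (every field name is hypothetical, not drawn from any real product) of the schema-mapping work that a switch between applications forces on developers. The data survives the move; only its container changes, which is exactly why the mapping cannot be skipped.

    # Toy example: all field names are hypothetical.
    OLD_TO_NEW_FIELDS = {
        "cust_name": "customer.full_name",
        "cust_phone": "customer.phone",
        "order_total": "order.amount_usd",
    }

    def migrate_record(old_record):
        # Translate one record from the old application's flat schema into
        # the new application's nested schema.
        new_record = {}
        for old_field, new_path in OLD_TO_NEW_FIELDS.items():
            section, field = new_path.split(".")
            new_record.setdefault(section, {})[field] = old_record.get(old_field)
        return new_record

    print(migrate_record({"cust_name": "Ada", "cust_phone": "555-0100", "order_total": 42.0}))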

This seems to imply that data, rather than applications, is at the heart of the business. When I worked for a large enterprise, one of my favorite points to make in meetings was that we are not a widget company… we are a data company. I normally got blank looks from both the IT and the business folks sitting in the room when I said this, but just because the folks in the room did not understand it does not mean it is not true.

What difference does this make? If the application is the center of the enterprise world, then the network is well and truly a commodity that can, and should, be replaced with the cheapest version possible. If, however, data is at the heart of what a business does, then the network and the application are equal partners in information technology. It is not that one is "more important," while the other is "less important;" rather, the network and the applications just do different things for and to one of the core assets of the business — information.

After all, we call it information technology, rather than application technology. There must be some reason "information" is in there — maybe it is because information is what really drives value in the business?

How does changing our perspective in this way help? After all, we are still "stuck" with a view of the network that is "just about moving data," right? And moving data is about as exciting as moving, well… water through pipes, right?

No, not really.

Once information is the core, the network and applications become "partners" in drawing value out of data in a way that adds value to the business. Applications and the network are both "fungible," in that either can be replaced with something newer, more effective, or better, but neither is really more important than the other.

This topic is a part of my talk at NXTWORK 2019 — if you've not yet registered to attend, right now is a good time to do so.

Written by Russ White, Infrastructure Architect at Juniper Networks

Follow CircleID on Twitter

More under: Cloud Computing, Networks

Categories: News and Updates
