News and Updates

Missed connections: another smart domain sales idea

Domain Name Wire - Mon, 2019-06-17 16:51

A great idea to close deals at previously negotiated prices.

I like it when domain name companies try new and interesting things. They don’t always work, but this industry needs more creative thinking.

The latest comes from Uniregistry. The company took some of its uncompleted sales and is offering them to the public (with permission of the seller).

The list includes domains for which the buyer and seller agreed on a price but the buyer never paid. Each domain is listed with the final agreed price.

If a buyer wants one of the domains, they should open a new inquiry at the accepted price to purchase the domain.

Uniregistry VP of Sales Jeffrey Gabriel notes that “This is not a starting point, and there will not be negotiations.”

From a buyer's perspective, I like that you get to skip the negotiations and get domains at fair prices. No wasted time.

Before posting this story, I secured one of the domains on the list: I like this name as a brandable similar to my own name.

Here are some of the domains:

  • (sorry, too late)
  • $4,000
  • $8,000
  • $5,000
  • $25,000

© 2019. This is copyrighted content. Domain Name Wire full-text RSS feeds are made available for personal use only, and may not be published on any site without permission. If you see this message on a website, contact copyright (at) Latest domain news at Domain Name Wire.

Related posts:
  1. Sells for $17,000
  2. 3 Companies That Still Want to Buy Your Domain Names
  3. Median sales price at Uniregistry stays steady as sales grow
Categories: News and Updates

Is UN SG Guterres Driving the Bus That Macron Threw UN IGF & Multistakeholderism Under at IGF Paris?

Domain industry news - Mon, 2019-06-17 16:40

A fresh & transparent, community-led, bottom-up public debate has now become unavoidable and undeferrable. "...we need limited and smart regulation" were the clear and unambiguous words of UN Secretary-General Antonio Guterres at the launch of the final report of his UN Panel on Digital Cooperation last week in New York. Last November, I wrote about President Macron throwing down the gauntlet at UN IGF Paris, challenging the IGF and Multistakeholderism to become more relevant. And last week, I wrote the post Can ICANN Survive Today's Global Geo-Political Challenges under its existing narrow mandate?

So what's the risk of all this breaking the single internet root, and how is it all tied together, you might ask?

Macron called for the UN IGF and Multistakeholderism to be reinvented. He called for factoring multilateralism into the IGF's non-decision-making body. He called for the IGF to move beyond the mere talk-shop lip service it has offered in the 13 years since its birth, which is the very reason it was allowed to be born.

Now, UN Secretary-General Antonio Guterres is calling with unambiguous clarity that "...We need limited and smart regulation".

So after Macron threw Multistakeholderism under the bus at IGF Paris, here comes UN Secretary-General Antonio Guterres driving that bus. Conclusion: to quote UN High-Level Panelist Vint Cerf's famous words, "adapt or die", the IGF will either be adapted or it will die.

This only means that the days of IGF being a mere talk-shop are numbered. Somehow decisions will need to be taken, and regulation by governments and treaties by international bodies will follow to make the internet, cyberspace, and the world safer.

So where does the role of ICANN fit into all this, given:

  • Its current narrow mandate covering only names and numbers,
  • Legitimate future mandates, if any,
  • ICANN's continued championing and financial support of a talk-shop IGF & Multistakeholderism, if the IGF remains unchanged,
  • ICANN's continued championing and financial support of a decision-making IGF & Multistakeholderism, if the IGF is adapted to be infused with "Multilateralism",
  • How the ICANN community will transparently address, debate and discuss these issues to guide the board on its continued support of the old and/or new IGF?

The singularity of the internet root itself may be at risk, so it is time for all of these questions, and perhaps many others, to be discussed and debated publicly and transparently as soon as possible at every available forum, starting at ICANN 65 in Marrakesh.

I have already brought this directly to the ICANN leadership's attention, and I am doing so here by calling for a special extraordinary public session during next week's ICANN Marrakesh meeting. I invite the leadership and the ICANN board to join me, along with the ICANN community and global stakeholders via live streaming, to discuss and debate these critical issues openly and transparently.

Let's showcase how the ICANN bottom-up model and its community can provide the leadership needed to inspire much-needed change, transparently and democratically, with open, un-doctored debates and discussions for the world to see and emulate.

Written by Khaled Fattal, Group Chairman, MLi Group & Producer, "Era of the Unprecedented"

Follow CircleID on Twitter

More under: ICANN, Internet Governance, Policy & Regulation

Categories: News and Updates

Proactive domain sales with Joe Uddeme – DNW Podcast #240

Domain Name Wire - Mon, 2019-06-17 15:30

Learn about changes to the domain market and how to sell your domains proactively.

Want to learn a process for figuring out end user buyers for your domain names? On today’s show, Joe Uddeme of Name Experts LLC walks us through how he does outbound lead qualifying and marketing. He also gives his take on what’s happened in the domain market since he was last on the podcast in 2014, including the impact of changing habits and new TLDs on “exact match” domains.

Also: About those subpoenas, Uniregistry’s big numbers, DigitalTown’s domain exit, and NamesCon Austin details.

This week’s sponsor: DNAcademy. Use code DNW to save $50.

Subscribe via Apple Podcasts to listen to the Domain Name Wire podcast on your iPhone or iPad, view on Google Play Music, or click play above or download to begin listening. (Listen to previous podcasts here.)


Related posts:
  1. How James Booth went from zero to $5 million in 6 months – DNW Podcast #94
  2. Building web businesses with Peter Askew – DNW Podcast #163
  3. Domain investing with Shane Cultra – DNW Podcast #164
Categories: News and Updates

Clinton Sparks wins domain name back

Domain Name Wire - Mon, 2019-06-17 13:10

Grammy-nominated Clinton Sparks gets domain name back after it expired.

Clinton Sparks? Not really. Just an attempt to game the search engines.

DJ, producer and songwriter Clinton Sparks has won the rights to the domain name, which expired after a dispute with his agent.

Sparks and his Get Familiar Music Company registered the domain in 2002 and the agent let it lapse in 2017.

A Ukrainian SEO expert then registered the domain name and put up a fake profile of a University of Pennsylvania professor (see image). The profile picture is actually of an American University professor.

The fake profile on links to two sites: and an academic essay writing service. It seems that the domain owner registered this domain based on its SEO value, and then used the fake site to bolster its backlink profile. The site was designed to confuse search engines.

With the help of attorney John Berryhill, Sparks and Get Familiar Music filed a UDRP complaint with the National Arbitration Forum. Berryhill rarely represents Complainants, but he chooses interesting cases when he does.


Related posts:
  1. Seinfeld Isn’t Laughing about
  2. Manny Ramirez Asks Arbitrator for His (Domain) Name
  3. High traffic Google typo hit with UDRP
Categories: News and Updates

Notorious Hacker Group XENOTIME Expands Its Targeting Beyond Oil and Gas to Electric Utility Sector

Domain industry news - Sun, 2019-06-16 00:20

XENOTIME, the notorious group behind what is regarded as the most dangerous malware targeting industrial control systems, has expanded its targeting beyond oil and gas to the electric utility sector. Dragos, an industrial systems cybersecurity firm, identified a change in XENOTIME's behavior in February 2019, when the group began probing the networks of electric utility organizations in the US and elsewhere using tactics similar to those of its operations against oil and gas companies. Below are a couple of noteworthy paragraphs from the Dragos report published on Friday.

Background: "The 2017 TRISIS malware attack on a Saudi Arabian oil and gas facility represented an escalation of attacks on ICS. TRISIS targeted safety systems and was designed to cause loss of life or physical damage. Following that attack, XENOTIME expanded its operations to include oil and gas entities outside the Middle East."

The cause for concern: "While none of the electric utility targeting events has resulted in a known, successful intrusion into victim organizations to date, the persistent attempts, and expansion in scope is cause for definite concern. XENOTIME has successfully compromised several oil and gas environments which demonstrate its ability to do so in other verticals."


More under: Cyberattack, Cybersecurity, Malware

Categories: News and Updates

Greece Announces Plans to Install Free Public Wi-Fi Nationwide

Domain industry news - Fri, 2019-06-14 23:42

Greece's Hellenic Telecommunications and Post Commission (EETT) has announced plans to install 3,000 public Wi-Fi hotspots around the nation beginning next year, in both open-air and enclosed public spaces. "The initiative is mostly funded by EU programs ESIF and ERDF with a total budget of 14.8 million euros and is implemented by the Greek ministry of Digital Policy, Telecommunications, and Media," said Wi-Fi Now.

WiFi4EU: In addition to the above project, Greece has also been a beneficiary of the WiFi4EU initiative, which provides municipalities with €15,000 grants to install Wi-Fi equipment in public spaces. The Greek government says that the new nationwide public Wi-Fi project is not linked to WiFi4EU but complementary to it.

"This is the second time the Greek government is attempting to set up a nationwide free public Wi-Fi network." Greece launched a similar program in 2004 to build 600 hotspots, but the project was never completed. (Wi-Fi Now)


More under: Access Providers, Mobile Internet, Wireless

Categories: News and Updates

Clothing company fails to upgrade domain name through UDRP

Domain Name Wire - Fri, 2019-06-14 16:42

Clothing company goes after shorter domain name through UDRP.

This clothing company tried to upgrade its domain from to through a UDRP. It failed.

A kids and women’s clothing company has lost a cybersquatting dispute against the domain, which is owned by someone with the last name Jones.

The Complainant, Little Jonesies LLC, uses the domain name

David Jones of Idaho registered the domain in January 2016.

Little Jonesies LLC filed trademarks for the term in 2017 but claimed first use in commerce in 2015.

Interestingly, the company didn’t register its own domain name until April 2016, after was registered. It might have just been using third-party ecommerce platforms like Etsy without its own domain name.

In its complaint, the clothing seller didn’t address the fact that the registrant’s last name is Jones, which is a decent tip-off that it might have rights or legitimate interests in the domain.

Jones didn’t make a formal reply in the case but said he registered the domain with plans to start a business. He could have just as easily said he registered it because he has kids, and that would suffice as a legitimate interest.

The Complainant also failed to provide any evidence that the domain was used in bad faith. It resolved to an error page at the time the complaint was filed.

Panelist David H. Bernstein did not consider reverse domain name hijacking. Thorpe North & Western LLP represented the Complainant.


Related posts:
  1. Another Creative Agency Tries to Grab Generic Domain Name
  2. hit with UDRP
  3. Mike Mann overturns UDRP decision in court
Categories: News and Updates

DigitalTown let its domain names expire

Domain Name Wire - Fri, 2019-06-14 13:05

Company drops new top level domains as it becomes a shell company.

DigitalTown, Inc. filed its delayed 10-K yesterday. It covers the fiscal year ending February 28, 2019.

The annual report discloses that the Company is no longer acquiring or renewing its domain name portfolio. It had $0 in renewal fees during the fiscal year, so many of these domains have already expired. At one point, it had 13,000 .city domain names.

DigitalTown also appears to have wound down most of its “Domain Marketing Development Obligations” in which new top level domain registries paid DigitalTown to register their domain names.

The company has sold off all of its revenue-generating assets and is effectively a shell company. It owes lots of money to creditors, which it hopes to pay off with nearly worthless stock. Shares (OTC: DGTW) last traded yesterday at $0.0006. (Hey, it was up 50% on the day!)

Many of the acquisitions the company made in recent years were paid with stock. Assuming the owners didn’t liquidate this stock earlier, they have been wiped out.



Related posts:
  1. Rob Monster exits DigitalTown, George Nagy takes over CEO role
  2. Another CEO leaves DigitalTown
  3. Former ICANN CFO joins DigitalTown, Ciacco now CEO
Categories: News and Updates

The UN Panel on Digital Cooperation: An Agenda for the 2020s

Domain industry news - Thu, 2019-06-13 22:31

The UN Panel on Digital Cooperation presented last week in New York its final report, and an old question is back on the international agenda:  Could the global Internet be ordered by a reasonable arrangement among stakeholders which would maximize the digital opportunities and minimize the cyber risks by keeping the network free, open and safe?

Since the days of the UN World Summit on the Information Society (WSIS, 2002–2005), dozens of commissions, task forces and working groups have proposed declarations, compacts and frameworks that meanwhile fill a whole library. Some of those documents were useful, such as the Tunis Agenda (2005) and the NetMundial Declaration (2014); others are forgotten. The Internet Governance Ecosystem is a very dynamic space in a permanent state of change. A quarter of a century ago, the Internet was seen mainly as a technical issue with some political and economic implications. Nowadays, it is a political and economic issue with some technical components. And global digitalization does not stop, with artificial intelligence, the Internet of Things and 5G on the horizon.

In this light, UN Secretary-General Antonio Guterres did the right thing in July 2018 when he appointed a high-level panel of 20 experts with a mandate to look into the latest digital developments, analyze the illnesses of today's cyberspace and propose how to cure some of the Internet's weaknesses. The group was led by an American woman, Melinda Gates of the Bill & Melinda Gates Foundation, and a Chinese man, Jack Ma of Internet giant Alibaba. Now, after one year of discussion, the group has come forward with another proposal: a Declaration of Digital Interdependence. Like many of its predecessors, the final report presents an excellent diagnosis. Whether the recommended therapy will meet the same high standard, however, is another matter.

Moving forward into a digital disaster?

The Internet world is vulnerable. This is part of its history. The fathers of the networks wanted one thing above all: to send data from A to B without limits or borders. Security was not a priority. In this respect, the Internet pioneers did not differ from the pioneers of the automobile world. Only when the number of car crashes escalated, thousands of people died, and considerable economic damage arose were legally binding traffic rules introduced, safety barriers built on highways, and vehicles equipped with seat belts, airbags, and catalytic converters. Yet still, 1.2 million people die on the roads every year.

The Internet is not about life and death. The Internet is about power and money. But even there, dysfunctions of the network can cause great damage, divide societies and drive the global economy into a ruinous downward spiral. And the consequences of the pollution of our mental environment, of incitement, censorship, and surveillance, can be seen in the recent cultural decline of our political debates. Can democracy survive the Internet? Nathaniel Persily of Stanford University asked that question as early as 2017, and it was and remains a good one.

With nearly five billion people online and trillions of objects connected, the one world we live in, with its 193 jurisdictions, is a global village, regardless of the recent waves of neo-nationalism and the building of new borders. And since everything is connected to everything, the windows of vulnerability grow with each further growth of the network. No one can predict exactly what the consequences of deploying an autonomous, Internet-based weapon system would be in a hybrid war. Nobody knows what will happen if sand gets into the free flow of data, now seen as the oil of the 21st century. And nobody knows what will happen if IP addresses and domain names are confused and servers and routers no longer do what the internet protocols tell them. The UN panel's wake-up call is very clear: if we let everything go, mankind is marching into a digital disaster that could have worse consequences than climate change.

For a new multilateralism

The experts — including French Nobel laureate Jean Tirole, former Swiss President Doris Leuthard, Estonia's ex-Foreign Minister Marina Kaljurand, the father of the Internet Vint Cerf and former ICANN CEO Fadi Chehade — give five recommendations: Everyone should be online by 2030 and enabled to benefit from the advantages of the digital age. Human rights, security and trust in cyberspace should be strengthened, and appropriate mechanisms for global digital cooperation created. The implementation of the recommendations should be based on nine universal values, among them respect, humanity, transparency, sustainability and harmony. Everyone should commit to a "Declaration of Digital Interdependence." And for the envisaged "mechanisms of digital cooperation," three models are put up for discussion.

That sounds good, but it also seems as if the wheel were being reinvented a little. If you look more closely, however, you must pay tribute to the group for putting forward, in these turbulent times, proposals that can indeed shake the foundations of the stalemate in international politics, at least in the medium term. Yes, the devil is in the details, but in the 47-page report, the innovation is also in the nuances.

The report sends a clear message that cyberspace needs some rules, and the language of "smart regulation" is an interesting one. The group makes clear that the time of traditional international treaties, negotiated behind closed doors, is over. Of course, UN Secretary-General Guterres argued in favor of "Multilateralism" when he presented the report in New York, and he rejected any form of "Unilateralism" that carries the danger of fragmenting the Internet. But he also made it clear that the future of multilateralism must no longer be a matter for governments only, but also a matter for all non-state actors from business, civil society, and the technical community. His engagement for such an "innovative multilateralism" is reflected in the report, which states clearly that "multilateralism" and "multi-stakeholderism" coexist and complement each other.

This statement reflects the truth of the Internet Governance Ecosystem. The reality, however, is that many governments still prefer to bargain with each other. Of course, the multi-stakeholder principle is not new: it was launched at the UN World Summit on the Information Society in 2005. But most governments have not yet gone beyond the lip service with which they support the model "in principle" while ignoring it when it comes to hard decisions. The "sharing of decision making," as proposed by the WSIS definition of Internet Governance 15 years ago, is more the exception than the rule in Internet policymaking. Which government likes to share its power?

It is therefore difficult to imagine, in the current world situation with its technology wars among cyber superpowers, that such a participatory Internet Governance model as envisaged by the report has a realistic chance of being implemented soon. However, one can also read the proposals as an agenda for the 2020s. History tells us that the political pendulum swings backward and forward, and it is useful to have, in bad times, a plan for the good times. Nobody can exclude that the wind currently blowing in the direction of political confrontation will turn in a different direction in the next decade. And the 2020s will be a decade of growing digital interdependence.

The proposals for a new global mechanism for digital cooperation are of a similar caliber. The report offers three options: (1) a distributed co-governance architecture, (2) a Digital Commons Architecture and (3) an extension of the Internet Governance Forum, called IGF Plus.

The IGF was created by the UN World Summit in 2005 as a multi-stakeholder discussion platform with no decision-making capacity. Over the years, it became useful as a reservoir of collective wisdom and a place for the clarification of many factual issues. However, it remained a paper tiger because there were no procedural channels to carry ideas that emerge at the IGF into intergovernmental negotiations.

In 2005, when the IGF was established, the ITU was nearly the only intergovernmental organization with a special interest in Internet issues. In the meantime, however, there is a multitude of intergovernmental Internet negotiations: the UN is dealing with autonomous weapon systems, state behavioral norms, and confidence-building measures in cyberspace. The WTO has started talks on digital trade. The UN Human Rights Council discusses freedom of expression and privacy in the digital age. The ILO, WIPO, UNESCO, OECD, Council of Europe, OSCE, NATO and many other intergovernmental bodies have digital and cyber issues on their agendas. Even the G7 and G20 are now discussing rules and norms for the development and use of artificial intelligence. And although everything is connected to everything on the Internet, these negotiations take place in isolated interstate silos. Trade negotiators have little to do with arms-control negotiators, and governmental bureaucrats sitting in the Human Rights Council have no real clue about the future of AI.

This, of course, is a significant deficit of the current system. An IGF Plus could help bridge the existing gap between discussion and decision, interconnect these intergovernmental negotiations (probably via liaisons) and open doors for non-state actors to participate adequately in these very decentralized negotiation processes.

Towards October 2020

The UN panel was wise enough not to push for quick adoption of its recommendations. Antonio Guterres announced the kick-start of a global discussion process intended to raise awareness of the urgency of enhanced digital cooperation, and a newly appointed UN Technology Envoy is to help him with this. However, he also mentioned a deadline: October 24, 2020, the 75th anniversary of the founding of the United Nations. This would be a good date for the adoption of something like a "Multistakeholder Digital UN Call," with commitments not only from the governments of the 193 UN member states but also from the main stakeholders from the private sector, civil society, and the technical community.

By the way, on the road to the 75th UN anniversary lies the 14th IGF, scheduled for November 2019 in Berlin. This is a good opportunity to add some more concrete proposals and to test whether the world is ready and open for innovation in Internet policymaking. At the 13th IGF in Paris in November 2018, French president Macron offered several ideas, which have since produced the "Paris Call" to strengthen trust and security in cyberspace and the "Christchurch Call" to reduce the misuse of the Internet for terrorist activities. Good steps, but more are needed. Why not use the Berlin IGF to propose a "Multi-Stakeholder Pact to Protect the Public Core of the Internet"? Such a pact could become the first cornerstone of an emerging cybersecurity architecture and would add substance to the UN panel's proposal to work towards a "Global Commitment on Digital Trust and Security."

Written by Wolfgang Kleinwächter, Professor Emeritus at the University of Aarhus


More under: Internet Governance, Policy & Regulation

Categories: News and Updates

Cuba's New WiFi Regulations – Good, Bad or Meh?

Domain industry news - Thu, 2019-06-13 19:39

Cuba has legalized WiFi access to public Internet hotspots from nearby homes and small businesses, but SNET and other community networks remain illegal under the new regulations. Does this signify a significant policy change?

Soon after ETECSA began rolling out WiFi hotspots for Internet access, people began linking to them from homes and community street nets. These connections, and the imported WiFi equipment they used, were illegal but generally tolerated as long as they remained apolitical and avoided pornography. Regulations passed last month legalized some of this activity in a bid to boost connectivity by allowing Internet access from homes and from small private businesses, such as restaurants and vacation rentals, that are located close enough to a hotspot to establish a WiFi connection.

The added convenience may generate more revenue for ETECSA, and it will give the Ministry of Communications (MINCOM) some small fees and, more importantly, registration data on the local-area network operators. (If you license a connection, you have the power to rescind the license.) It will also generate some additional network traffic, which may strain network capacity. There are two WiFi frequency bands, 2.4 and 5 GHz, and a friend told me that currently only the 2.4 GHz band is being used. The new regulations allow use of the 5 GHz band as well, which will add capacity from homes and businesses to the hotspots, but backhaul capacity from the hotspots to the Internet may become more of a bottleneck and exacerbate quality-of-service problems.
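The bottleneck argument above can be sketched with some back-of-the-envelope arithmetic. All throughput figures below are hypothetical, chosen purely for illustration; the point is only that per-user throughput is capped by the slower of the shared radio and the shared backhaul link:

```python
# Rough sketch: at a hotspot, each user's effective throughput is limited
# by the smaller of the shared radio capacity and the shared backhaul link.
# All numbers are hypothetical, for illustration only.

def per_user_throughput_mbps(radio_mbps: float, backhaul_mbps: float,
                             users: int) -> float:
    """Throughput each user sees, assuming fair sharing of the bottleneck."""
    return min(radio_mbps, backhaul_mbps) / users

# Adding the 5 GHz band roughly doubles radio capacity...
only_2_4ghz = per_user_throughput_mbps(radio_mbps=50, backhaul_mbps=40, users=20)
with_5ghz = per_user_throughput_mbps(radio_mbps=100, backhaul_mbps=40, users=20)

# ...but with a 40 Mbps backhaul, users gain nothing: the bottleneck
# simply moves from the radio to the uplink.
print(only_2_4ghz, with_5ghz)  # 2.0 2.0
```

Under these made-up numbers, opening the 5 GHz band leaves per-user throughput unchanged at 2 Mbps, which is why the new regulations could exacerbate, rather than relieve, quality-of-service problems unless backhaul is upgraded too.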

So much for small networks, but what, if anything, will the impact of these regulations and their enforcement be on larger community networks, the largest of which is Havana's SNET? The new regulations bar cables that cross streets and radio transmitter power over 100 mW. SNET uses both cables and higher-powered transmitters, so if these regulations were enforced, they would put SNET and smaller community networks out of business.

However, community networks have been illegal but tolerated since their inception, so it may be that they will continue to be ignored. If that is the case, the new regulations don't really change the status quo. But what if these new regulations foreshadow a policy change? What if ETECSA were willing to collaborate with community networks, following the example of in Spain?

If that were the case, ETECSA could take steps like providing high-speed wireless or fiber Internet connections at the locations of the central SNET backbone "pillars" and allowing cables and faster wireless links to and within the second-level networks that serve up to 200 users each. It could also cooperate with SNET administrators on purchasing supplies and equipment and on network management, and it could do the same for smaller community networks outside Havana.

So, which is it: a step backward, cracking down on SNET and other community networks; a slightly positive step, adding locations from which one can access a WiFi hotspot; or a positive indication of a policy change and a step toward incorporating community networks into the recognized and supported Cuban Internet infrastructure?

We will know the answer when the new rules go into effect on July 29, but my guess is that it will be the middle choice, a slightly positive step. Cracking down on SNET would be disruptive, eliminating jobs and depriving thousands of users of services they value, and I don't think the government wants those problems. At the other extreme, full cooperation with community networks would mean ETECSA giving up control and diluting its bureaucratic and financial monopoly, which seems unlikely. That leaves "meh": much ado about not much.

But, to end on a more upbeat note: a friend tells me that he has heard that SNET community representatives are talking with the government. Could ETECSA and the Communications Ministry have different views and, if so, who is in charge?

Update Jun 15, 2019:

Two things. First, the friend I mentioned above commented on my speculation that MINCOM and ETECSA might have different views, saying "ETECSA and MINCOM are so tight together that is hard to say where one starts and the other one begins."

He also pointed out that the administrators of four of the SNET sub-nets posted a statement telling users to remain respectful and calm while the administrators negotiate with MINCOM to protect the interests of SNET and other community networks. They have had one meeting, at which they talked about spectrum, and the statement refers to the "regulatory framework," suggesting that MINCOM is open to high-speed wireless links. They say the first meeting was productive and that they will have future meetings.

This increases my confidence that SNET will survive under these new regulations and, if MINCOM allows high-speed links between the sub-nets, SNET performance will improve. It would be even better if the talks go beyond SNET's survival and move on to ways they can collaborate with ETECSA.

You can follow the negotiation progress on the SNET Facebook page.

Written by Larry Press, Professor of Information Systems at California State University


More under: Access Providers, Broadband, Mobile Internet, Policy & Regulation, Telecom, Wireless

Categories: News and Updates

Sports equipment company guilty of reverse domain name hijacking

Domain Name Wire - Thu, 2019-06-13 17:56

Panel finds Singapore company brought case in abuse of UDRP.

A World Intellectual Property Organization panel has found Sport and Fashion, Pte. Ltd., a sports equipment/apparel company, guilty of reverse domain name hijacking.

The company filed a cybersquatting complaint against the domain It markets a brand called Demix.

Canadian company St. Lawrence Cement Inc owns the domain name. It markets cement products under the Demix brand.

The cement company registered the domain in 1998, well before the Complainant had any trademark rights to the term.

Based on the panel decision, it appears the Complainant might have copied and pasted information from another UDRP filing into this one:

The Complaint appears to have been prepared with very little attention to detail and as noted above includes erroneous references to a completely unrelated trademark. Once the intended respondent was identified as being a Canadian company likely to be involved in the cement business (given its name), it ought to have been obvious to the Complainant that the Respondent was likely to have independently coined the Disputed Domain Name. If the Complainant was in any way unclear on this issue, a few minutes Internet searching would have found the Respondent’s present web sites at for example “”. Similarly a search of the Canadian Trademarks Registry would have found a number of DEMIX-related trademarks predating the Complainant’s trademark – whilst these are in various corporate names this should nevertheless have alerted the Complainant to the potential difficulties with its case.


© 2019. This is copyrighted content. Domain Name Wire full-text RSS feeds are made available for personal use only, and may not be published on any site without permission. If you see this message on a website, contact copyright (at) Latest domain news at Domain Name Wire.

Related posts:
  1. Foundation Fitness nailed for Reverse Domain Name Hijacking on
  2. A rare RDNH in an “outside the scope” UDRP
  3. Canadian company GVE Global Vision Inc tries reverse domain name hijacking
Categories: News and Updates

NamesCon Global Announces Dates and Opens Registration for 2020 Show in Austin, Texas

DN Journal - Thu, 2019-06-13 16:45
In January NamesCon Global announced the show would move from Las Vegas to Austin in 2020 - now we have the dates and location for the big event.
Categories: News and Updates

Have We Reached ‘Peak Telecom’ and What Does This Mean for 5G

Domain industry news - Thu, 2019-06-13 16:16

"Peak telecom" describes the maximum point of expansion reached by the traditional telecommunications industry before the internet commoditized it into a utility pipe.

I was reminded of this when I read the recent results of the well-known Ericsson Consumer Lab survey. The company used the survey results to counter market criticism regarding the viability of telco business models in the deployment of 5G.

It will come as no surprise that Ericsson, as a manufacturer of 5G gear, has given the report a positive spin. However, I remain skeptical about the short-term business models for the deployment of 5G. Once full deployment happens over the coming decade, I certainly can see long-term opportunities. These will revolve around content and apps as well as areas such as IoT in smart homes, cities and energy. However, the question is, will this lead to new financial opportunities for the telcos? Peak telecom questions such an outcome.

What exactly do these broader 5G opportunities mean for the telecommunications operators — the companies who have to build the infrastructure? It is here that we can see that we have reached peak telecom. For several years now, we have seen that growth in the telecom industry is rather stagnant. Profits are still being made but mostly generated by lowering costs. For example, new telecom access speeds are provided at no extra cost to the users. Basically, consumers are getting more for the same price.

There has continuously been the promise of new revenues that could be generated through a range of new telecom developments (internet, broadband, smartphones). The telcos have, however, largely failed to move into the content/app market where the new profits are being made. Companies such as Amazon, Facebook, Google, Alibaba, Tencent and Netflix have been the primary commercial beneficiaries of these developments.

The Ericsson report mentions that mobile access in congested areas and in megacities is becoming a problem and that 5G will assist here. I agree, but will customers pay extra for it?

It also mentions opportunities for 5G as an alternative to fixed broadband and as a key technology in fixed wireless networks. There will certainly be niche market opportunities here, but this is a highly price-sensitive market, and the economics of mass fixed infrastructure favor it over mobile infrastructure. Any gains here will largely substitute for a fixed service the telcos already provide, so the overall net gain for the industry will be negligible.

The report indicates that 20% of smartphone users are prepared to pay a premium for 5G. The current commercial 5G service in South Korea is charging a meager 10% premium. No doubt, in coming years, through competition even that premium will disappear.

The report indicates that consumers expect new innovations such as foldable phones, VR glasses, AI, 360-degree cameras, robotics and so on. All true, but it all depends on how affordable these products and services will be, and, again, on who will develop the next "must-have" products. Here, also, the telcos will most likely miss out.

I fully agree with the report's assessment that we have to look at 5G over a more extended period. As mentioned, there are good reasons to believe that once full deployment exists, it will open up many new business opportunities.

However, will this promise be enough for telcos to make the substantial upfront investments that are needed, without a clear indication that they can extract any significant new revenue from 5G? The more likely scenario is that the digital giants will reap the real profits of those innovations.

I stick to my argument that the key reason for the telcos to move into 5G is because of network efficiencies, which lead to lower costs. This is absolutely critical in this peak telecom market.

To end on a more positive note for the industry, there is a first-mover advantage, with short-term premium price opportunities for those who can tap the early adopters' market. There is always a group of users who simply want the newest of the new, whatever the price. The size of this market varies with how "hot" this segment judges the new product to be, and could be anywhere between 10% and 25%.

This is certainly attractive for the telcos, as it allows them to recoup some of the initial investment rapidly. For mobile products and services, this mainly means "must-have" gadgets, in particular the smartphone. The current price (in Korea) of a 5G phone is approximately US$1,500 (AU$2,153), without any outstanding features.

The lack of attractive smartphones could be another negative for some of the early adopters. Time will tell.

Written by Paul Budde, Managing Director of Paul Budde Communication


More under: Mobile Internet, Telecom, Wireless

Categories: News and Updates

NamesCon picks The Capital Factory for 2020 show in Austin

Domain Name Wire - Thu, 2019-06-13 15:40

Event chooses startup accelerator for its 2020 U.S. event.

The Capital Factory in Austin, Texas.

NamesCon has finalized the date and location details for its 2020 conference.

The event will take place January 26-29 in Austin, TX. Much of the action will take place at The Capital Factory, a business accelerator.

The Capital Factory is in a unique building that is half business offices and half hotel — The Omni. I suspect that The Omni will be the key hotel for guests.

It’s not a big space for a conference, so keynotes, partner workshops and the live domain auction will take place outside Capital Factory, perhaps in the Omni. They could also take place at the offices of companies just a few blocks away, such as GoDaddy and WPEngine.

Hosting the exhibit hall and various workshops in the incubator presents an interesting opportunity. Exhibitors and the domain industry want to reach end users, and Capital Factory is full of startups. That means exhibitors and speakers can reach a new audience in addition to domain investors.

NamesCon is calling the event “The Domain Economic Forum,” a nod to trying to pull in sponsors and guests that do more than service domain names. It wants companies that are tangentially related, too.

Early bird tickets are available for $349.


Related posts:
  1. Verisign patents “Methods and systems for creating new domains”
  2. Test driving the GoDaddy GoCentral website builder
  3. Review: Yahoo Small Business website builder
Categories: News and Updates

Use of DNS Firewalls Could Have Prevented More Than $10B in Data Breach Losses Over the Past 5 Years

Domain industry news - Thu, 2019-06-13 02:42

New research from the Global Cyber Alliance (GCA), released on Wednesday, reports that the use of freely available DNS firewalls could prevent 33% of cybersecurity data breaches from occurring. The study also indicates that DNS firewalls might conservatively have prevented $10 billion in data breach losses across the 11,079 incidents recorded over the past five years. (The data showed that 3,668 breaches involved at least one threat action that a DNS firewall could potentially have mitigated.)

And there's more: the report also references the spring of 2018, when the Council of Economic Advisers released a report estimating that "malicious cyber activity cost the U.S. economy between $57 billion and $109 billion in 2016." Also in 2018, McAfee and the Center for Strategic and International Studies estimated global losses from cybercrime at between $445 billion and $600 billion. Since their data shows that a DNS firewall could play a role in preventing one-third of breaches, the researchers point out that "it is likely it could have played a role in one-third of these losses to the extent that they arise from breaches and not denial-of-service or other non-breach attacks." That would be additional prevention of between $19 billion and $37 billion in the U.S., or between $150 billion and $200 billion globally.
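The report's headline numbers are straightforward to sanity-check: the 33% figure is the share of incidents involving a mitigable threat action, and the dollar ranges are one-third of the cited loss estimates. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the GCA report's headline figures.
breaches_total = 11_079      # incidents in the five-year data set
breaches_mitigable = 3_668   # involved a threat action a DNS firewall could block

share = breaches_mitigable / breaches_total
print(f"Mitigable share of breaches: {share:.1%}")  # ~33.1%

# One-third of the CEA's 2016 U.S. cost estimate ($57B to $109B);
# the researchers round this to the $19B-$37B range.
us_low, us_high = 57e9, 109e9
print(f"U.S. savings range: ${us_low / 3 / 1e9:.1f}B to ${us_high / 3 / 1e9:.1f}B")

# One-third of the McAfee/CSIS global estimate ($445B to $600B),
# roughly the cited $150B-$200B.
global_low, global_high = 445e9, 600e9
print(f"Global savings range: ${global_low / 3 / 1e9:.1f}B to ${global_high / 3 / 1e9:.1f}B")
```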

Advice for individuals: It is worth using a DNS firewall, says Global Cyber Alliance. GCA in collaboration with IBM and the Packet Clearing House in 2017 launched Quad9, a free DNS security service that blocks known malicious domains, preventing computers and IoT devices from connecting to malware or phishing sites.


More under: Cyberattack, Cybersecurity, DNS, DNS Security, Malware

Categories: News and Updates

A $950,000 Sale Booked by Larry Fischer and the Year's Biggest ccTLD Sale Highlight This Week's Chart

DN Journal - Thu, 2019-06-13 01:10
Larry Fischer landed another whale this week - topping our latest weekly sales chart with a name that hit nearly $1 million. We also saw the year's biggest ccTLD sale to date.
Categories: News and Updates

update and subpoenas

Domain Name Wire - Wed, 2019-06-12 19:05

Here’s why domain investors got subpoena notices from Enom.

There’s been a lot of talk in domain circles today about subpoena notices for domains at Enom.

The subpoena stems from a lawsuit that Scratch Foundation, a non-profit created at MIT, filed against the domain name

Scratch Foundation filed the case as in rem (meaning against the domain), but Ravi Lahoti stepped forward as the owner. Lahoti says he’s owned the domain name since 1998. DomainTools historical Whois records seem to corroborate his story, at least until Scratch Foundation was formed, although the domain has bounced around different entities/privacy services.

This brings us to the subpoena, which you can view here (pdf). Attorney David Weslow, who is representing Scratch Foundation, asked Enom for a list of Lahoti’s domains. The original subpoena was quite a bit broader than that, but these things are often narrowed between the lawyer and recipient. Regardless, Enom has apparently notified anyone who owns a domain that was at any point tied to Lahoti.

It seems that Weslow is trying to paint Lahoti as a serial cybersquatter.

Indeed, in a recent order (pdf), the judge wrote “Plaintiff has also brought various other anticybersquatting cases against Mr. Lahoti to the Court’s attention which demonstrate that Mr. Lahoti is a notorious cybersquatter.”

Whether or not Lahoti is a notorious cybersquatter shouldn’t matter for the claim under the Anticybersquatting Consumer Protection Act, so long as Lahoti proves that he owned the domain before Scratch Foundation had rights in the mark. This seems like it will be easy to do.

But there’s an interesting wrinkle in the lawsuit: it also claims trademark infringement. Should Scratch Foundation show that prior ads on the domain infringed its trademarks, could it win a judgment against the domain? I’m not familiar with how a judgment for trademark infringement works in an in rem case. But it’s clear that the trademark infringement claim is part of the effort to get control of the domain.

There’s a lesson here about in rem lawsuits. Lahoti asked the court to name him as the defendant instead of proceeding in rem. This might make the case easier to defend, but it also opens him up to personal liability. The court denied the request on the basis that Lahoti didn’t object to in rem jurisdiction early enough.


Related posts:
  1. owner defends domain name in court
  2. Here’s why the lawsuit was dismissed (Transcript)
  3. DomainTools appeals injunction decision in .NZ case
Categories: News and Updates

17 end user domain name sales up to €50,000

Domain Name Wire - Wed, 2019-06-12 15:27

A vegan hair care line, two soccer-related businesses and a French travel site bought domain names this week. The top sale led Sedo’s weekly public sales list; the domain is still in escrow, but at this price the buyer is undoubtedly an end user.

Here’s the list of end user domain name sales that just completed at Sedo. You can view previous lists like this here.

€50,000 – The domain is still in escrow, but this is an end user price. There’s a non-profit using the name, but this is also a good name for a creative service.
€10,000 – Techbull Media in Spain. I can’t find out much about this company online, but I found one LinkedIn profile showing someone as “head of content” at the company. A reasonable price for this domain.
€6,000 – KoelnBusiness Wirtschaftsfoerderungs-GmbH seems to be something like a chamber of commerce for Cologne.
$6,000 – Forwards to a women’s apparel and accessories ecommerce site based in Los Angeles.
$5,000 – This alphanumeric name was purchased by 123 Marketing, a Canadian website design and SEO company.
$5,000 – A health and wellness site. It looks like it’s an upgrade from an earlier name.
$5,000 – This domain forwards to a GDPR and privacy management software provider.
$4,950 – Indometal (London) Limited, a subsidiary of PT TIMAH Tbk, an Indonesian state-owned enterprise that mines, smelts and exports tin.
€3,500 – RES Touristik GmbH, a European tour operator that books trips to soccer games.
€3,500 – Rapid Pare-Brise. It forwards the domain to its much longer site. This domain is certainly an upgrade for this small French franchise of auto window repair/replacement shops.
$3,500 – Hotelgift provides gift cards that can be used at over 100,000 hotels. I suspect RestaurantGift will be the same thing for restaurants.
€3,400 – German site about casinos, gaming and gambling, including news and online casino listings.
£3,000 – An international soccer recruiting company that helps people get soccer scholarships or placed on teams.
$2,800 – Site offering the “latest news and tutorials about creating live action and motion graphics content.”
$2,500 – Mermaid + Me is a German all-natural vegan hair care line. It forwards this domain to its website.
€2,000 – Forwards to a free multilingual service that converts documents into PDF format.


Related posts:
  1. 13 end user domain name sales from last week
  2. 22 End User Domain Name Sales
  3. 22 end user domain name sales from the past week
Categories: News and Updates

Network Protocols and Their Use

Domain industry news - Wed, 2019-06-12 01:21

In June, I participated in a workshop, organized by the Internet Architecture Board, on the topic of protocol design and effect, looking at the differences between initial design expectations and deployment realities. These are my impressions of the discussions that took place at this workshop.

1 - Case Studies

In this first part of my report, I'll cover case studies of two protocol efforts, their designers' expectations, and their deployment experience: the Border Gateway Protocol (BGP) and the security extensions to the DNS (DNSSEC).

BGP – Routing protocols have been a constant in the Internet, and BGP is one of the oldest still in use. Some aspects of the original design appear ill-suited to today's environment, including the general approach of session restart when unexpected events occur, but this is a minor quibble. The primary outcome of this protocol has been its inherent scalability. BGP is a protocol designed in the late 1980s, using a routing technology described in the mid-1950s, and it was first deployed when the Internet it routed had fewer than 500 component networks (Autonomous Systems) and fewer than 10,000 address prefixes to carry. Today BGP supports a network that is approaching a million prefixes and heading toward 100,000 ASNs. Several factors contributed to this longevity: the choice of TCP as a reliable stream transport, instead of inventing a new message transport scheme; the distance-vector protocol's hop-by-hop information flow, which allows various forms of partial adoption of new capabilities without all-of-network flag days; and a protocol model that suited the business model of network interconnection.
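That hop-by-hop behavior can be illustrated with a toy model. The sketch below is a loose, hypothetical approximation of a path-vector update rule (reject any route whose AS path already contains our own AS number, prefer the shortest path); the function and variable names are my own invention, and real BGP policy is far richer than path length.

```python
# Toy path-vector update processing, loosely modeled on BGP's AS_PATH rule.
# Illustrative only; not a real BGP implementation.

def process_update(my_asn, rib, prefix, as_path):
    """Accept a route advertisement unless our own ASN already appears in
    the path (loop detection), or a shorter path is already installed."""
    if my_asn in as_path:
        return False                  # loop detected: discard silently
    best = rib.get(prefix)
    if best is None or len(as_path) < len(best):
        rib[prefix] = as_path         # install and (conceptually) re-advertise
        return True
    return False

rib = {}
process_update(64500, rib, "192.0.2.0/24", [64510, 64520])
process_update(64500, rib, "192.0.2.0/24", [64530])           # shorter path wins
looped = process_update(64500, rib, "192.0.2.0/24", [64510, 64500])
print(rib["192.0.2.0/24"], looped)   # [64530] False
```

Because each speaker only needs its neighbors' announcements, upgrades can be deployed pairwise rather than across the whole network at once.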

These days, BGP also enjoys the position of entrenched incumbent, which is itself a significant impediment to change in this area, and the protocol's behavior now determines the business models of network interaction rather than the reverse.

This is despite the obvious weaknesses in BGP today, including its insecurity, with the resultant route hijacks and route leaks, its selective instability, and the bloating effect of costless advertisement of more-specific address prefixes.

Various efforts over the past thirty years of BGP's lifetime to address these issues have been ineffectual. In each of these instances we have entertained design changes to the protocol to mitigate or even eliminate these weaknesses, but the consequent changes to the underlying cost allocation model or the business model or the protocol's performance are such that change is resisted.

Even the exhortation for network operators to filter packets with spoofed source addresses, known as BCP 38, is now twenty years old, and it is ignored by network operators to much the same extent that it was ignored twenty years ago, despite the massive damage inflicted by a continuous stream of UDP denial-of-service attacks that leverage source address spoofing.
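The check BCP 38 asks edge networks to perform is conceptually simple: only forward packets whose source address falls within the customer's own assigned prefixes. The sketch below is a toy version using made-up documentation prefixes, not an operator configuration.

```python
# Sketch of BCP 38-style source-address validation at a customer edge.
# The prefixes and addresses are illustrative (RFC 5737 documentation ranges).
import ipaddress

CUSTOMER_PREFIXES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def permit_outbound(src_ip: str) -> bool:
    """Forward only packets sourced from the customer's own address space."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(permit_outbound("198.51.100.7"))  # True: legitimate source address
print(permit_outbound("192.0.2.99"))    # False: spoofed source, drop the packet
```

The cost falls on the operator applying the filter while the benefit accrues to everyone else, which is precisely the misaligned incentive the article describes.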

The efforts to secure the protocol are almost as old as the protocol itself, and all have failed. Adding cryptographic extensions to BGP speakers and to the protocol, so as to support verifiable attestations that the data in BGP packets is in some sense "authentic" rather than synthetic, imposes a level of additional cost that network operators appear unwilling to bear.

Security itself can only add credentials to "good" information; if we want to assume that everything not verifiably "good" is necessarily "bad", then universal adoption is required. This adds the formidable barrier of universal adoption and the accompanying requirement of lowest bearable cost, as every BGP speaker must be in a position to accept these additional costs.

We have not seen the end of proposals to improve BGP, both in the area of security and in areas such as route pruning, update damping and convergence tuning. Even without knowledge of the specific mechanisms proposed in each case, it appears that such proposals are doomed to the same fate as their predecessors. In this common routing space, cost and benefit are badly aligned, and network operators appear to have little true incentive to address these issues. The economics of routing is a harsh taskmaster, and it exercises complete control over the protocols of routing.

DNSSEC – If BGP is a mixed story of long-term success in scaling with the Internet and, at the same time, of structural inability to fix some major shortcomings in the routing environment, it is interesting to compare this outcome with that of DNSSEC.

DNSSEC was intended to address a critical shortcoming of the DNS model by introducing a mechanism that allows a client of the DNS to validate that the response the DNS resolution system has provided is authentic and current. This applies to both positive and negative responses: a positive response can be verified as a faithful copy of the data served by the relevant zone's authoritative name servers, and a negative response can be verified to mean that the name really does not exist in the zone.

We have all heard of the transition of the Internet from an environment of overly credulous mutual trust and lack of skepticism over the authenticity of the data we receive from protocol transactions that occur over the Internet to one of suspicion and disbelief, based largely on the continual abuse of this original mutual trust model.

A protocol that could clearly signal when DNS responses are being altered by third parties would have an obvious role and would be valued by users. Or so we thought. DNSSEC was a protocol extension to the DNS intended to provide precisely that level of assurance, and it has been a complete and utter failure.

In terms of protocol design, stories of failure are as informative as stories of success, or even more so. In the case of DNSSEC, the story of failure stretches across twenty years of progressive refinement.

The initial approach, described in RFC 2535, had an unrealistic level of inter-dependency, such that a change in the apex root key required a complete rekeying of all parts of the signed hierarchy. Subsequent efforts were directed at fixing this "re-keying" problem.

What we have today is more robust: within the signed hierarchy, rekeying can be performed safely, although a root key roll still presents major challenges. Every endpoint in the DNS resolution environment that performs validation needs to synchronize itself with the root key state as its single "trust anchor." This single trust point is both a feature and a burden for the protocol. It eliminates many of the issues we observe in the Web PKI, where multiple trusted CAs create an environment that is only as good as the poorest-quality CA, which in turn destroys any incentive for quality in that space, since every certificate there is equally trusted.

In a rooted hierarchy of trust, all trust derives from a single trust entity, which creates a single point of vulnerability and also creates a natural point of monopoly. It is a deliberate outcome that the IANA manages the root key of the DNS in the role of trustee representing the public interest.
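The rooted hierarchy can be sketched as a toy chain-of-trust walk. The model below uses HMAC as a stand-in for DNSSEC's public-key signatures (real validation involves DNSKEY, DS and RRSIG records and much more), so it shows only the shape of the idea: trust starts at a single anchor and flows down one delegation at a time, and a wrong anchor invalidates everything beneath it.

```python
# Toy model of a single-anchor chain of trust: each zone's key is vouched for
# by its parent, and validation walks down from the one root key.
# HMAC stands in here for real public-key signatures; keys are illustrative.
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

root_key = b"root-trust-anchor"
org_key = b"org-zone-key"
example_key = b"example.org-zone-key"

# Each link: (zone name, zone key, parent's signature over that key).
chain = [
    ("org.", org_key, sign(root_key, org_key)),
    ("example.org.", example_key, sign(org_key, example_key)),
]

def validate(trust_anchor: bytes, chain) -> bool:
    key = trust_anchor
    for name, zone_key, signature in chain:
        if not hmac.compare_digest(sign(key, zone_key), signature):
            return False      # broken link: everything below is untrusted
        key = zone_key        # trust flows down one delegation level
    return True

print(validate(root_key, chain))         # True
print(validate(b"wrong-anchor", chain))  # False: nothing below validates
```

The single anchor is what makes both the monopoly concern and the root-key-roll problem structural rather than incidental.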

Yet even with this care and attention to a trusted and secure root, DNSSEC is still largely a failure, particularly in the browser space. The number of domains that use DNSSEC to sign their zone is not high, and the uptake rate is not a hopeful one. From the perspective of a zone operator, the risks of signing a zone are clearly evident, whereas the incremental benefits are far less tangible.

From the perspective of the DNS client, a similar proposition is also the case. Validation imposes additional costs, both in time to resolve and in the reliability of the response, and the benefits are again less tangible.

Perhaps two additional comments are useful here to illustrate this point. When a major US network operator first switched on DNSSEC validation in its resolvers, one prominent domain had a key issue and could not be validated. The DNSSEC model treats validation failure as grounds to withhold the response, so the name would not resolve on these resolvers. At the time, a NASA activity had generated significant public interest, and the DNS operator faced a choice between turning DNSSEC off again and manually maintaining "white lists" of names whose validation failures would be ignored, adding further costs to the decision to support DNSSEC validation in its resolution environment.

The second issue is where validation takes place. So far, the role of validating DNS responses has been placed on the recursive resolver, not the user. If a resolver successfully validates a DNS response, it sets the AD bit in its response to the stub resolver. Any man-in-the-middle sitting between the stub resolver and the recursive resolver can manipulate this response if the interaction uses unencrypted UDP. And if the zone is signed and validation fails, the recursive resolver reports a server failure, not a validation failure. In many cases (more than a third of the time) the stub resolver interprets this as a signal to re-query using a different recursive resolver, and the critical information of a validation failure, the implicit signal of DNS meddling, is simply ignored.
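The AD ("authentic data") bit is just one flag in the DNS header's second 16-bit word, which is why an on-path party can trivially set or clear it when the transaction runs over unencrypted UDP. A minimal sketch of reading it (the flag values here follow the standard DNS header layout):

```python
# Reading the AD bit from a DNS header flags word.
# AD is bit 5 of the flags field (RFC 4035), mask 0x0020.
AD_MASK = 0x0020

def authentic_data(flags: int) -> bool:
    """True if the resolver claims it validated this response via DNSSEC."""
    return bool(flags & AD_MASK)

# 0x8580 = QR + AA + RD + RA: a normal authoritative-style response.
print(authentic_data(0x8580 | AD_MASK))  # True: resolver asserts validation
print(authentic_data(0x8580))            # False: no validation claim
```

Nothing in the packet protects this bit on the stub-to-resolver hop, which is exactly the gap the article describes.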

Surely there is a market for authenticity in the namespace? The commercial success of the WebPKI, an alternative approach to the same problem, appears to support this proposition. For many years, while name registration was a low-value transaction, the provision of a domain name certificate was a far more expensive proposition, and domain holders paid. The entrance of free certificates into the CA market was not a sign that this mechanism of domain name authentication had declined in value, but an admission of the critical importance of such certificates to the overall security stance of the Internet, and a practical response to the proposition that security should not be a luxury good but accessible to all.

Protocol Failure or Market Failure? – Why has DNSSEC evidently failed? Was this a protocol failure or a failure of the business model of name resolution? The IETF's engagement with security has been variable to poor, and its failure to take a consistent stance on the architectural issues of security has been a key factor here. But perhaps this is asking too much of the IETF.

The IETF is a standardization body, like many others. Producers of technology bring their efforts to the standards body, composed of peers and stakeholders within the industry, and the outcome is intended to be a specification that serves two purposes. The first is a generic specification that allows competing producers to make equivalent products; the second is a generic behavioral model that allows others to build products that interact with the standard product in predictable ways. In both cases, the outcome supports a competitive marketplace, and the benefit to the consumer flows from the discipline of competitive markets.

But it is a stretch to add "architecture" to this role, and standards bodies tend to get into difficulties when they take a discretionary view of the technologies they standardize according to some abstract architectural vision. Two cases illustrate this for the IETF. When Network Address Translators (NATs) appeared in the early 1990s as a means of forestalling address exhaustion, the IETF deliberately declined to standardize the technology on the basis that it did not sit within the IETF's view of the Internet's architecture. Whatever the merits of this position, the outcome was far worse than many had anticipated. NATs are everywhere these days, but they exhibit all kinds of varying behavior because NAT developers had no standard IETF specification to refer to. The burden has been passed to the application space, because applications that need to understand the exact nature of the NAT (or NATs) they are behind have to use discovery mechanisms to reveal the address translation model in use in each individual circumstance.

The other case I'll use is Client Subnet in the DNS. Despite a lengthy prolog in the standard specification stating that the IETF did not believe this technology sat comfortably within its overall view of a user privacy architecture and that it should not be deployed, Client Subnet has been widely deployed, and in too many cases it carries a complete client identity. For the IETF, a refusal to standardize on architectural grounds has negative consequences if the technology is deployed anyway, and a reluctant standardization despite such architectural concerns also has negative consequences, since deployers are not necessarily sensitive to such reluctance.

Even if the IETF is unable to carry through with a consistent architectural model, why is DNSSEC a failure, and why is the WebPKI the incumbent model for web security despite its obvious shortcomings? One answer to this question is the first-adopter advantage. The WebPKI was an ad hoc response by browsers in the mid-1990s to add a greater level of confidence in the web. If domain name certificates generated sufficient trust in the DNS (and in routing, for that matter) that users could be confident the site on their screen was the site they intended to visit, then this was a sufficient and adequate answer.

Why change it? What could the use of DNSSEC add to this picture?

Not enough to motivate adoption, it would seem. In other words, the inertia of deployed infrastructure confers a first-adopter advantage. An installed base of a protocol that is good enough for most uses is often enough to resist adoption of a better protocol. And when the alternative is not clearly better, just different, the resistance to change is even greater.

Another potential answer lies in centralization and cartel behavior. The journey to get a Certification Authority into the trusted set of the few remaining significant browsers is not an easy one. The CAB Forum can be seen both as a body that safeguards end users' interests by stipulating CA behaviors as essential preconditions to being accepted as a trusted CA, and as a body that imposes barriers to entry on would-be competing CAs. From this perspective, DNSSEC and DANE can be viewed as an existential threat to the CA model, and resistance to that threat from the CAB Forum is entirely predictable. Any cartel would behave in the same manner.

A third answer lies in the business model of outsourcing. The DNS is often seen as a low maintenance function. A zone publisher has an initial workload of setting up the zone and its authoritative servers, but after that initial setup, the function is essentially static. A DNS server needs no continual operational attention to keep it responding to queries. Adding DNSSEC keys changes this model and places a higher operational burden on the operator of the zone.

CAs can be seen as a means of outsourcing this operational overhead. It is a useful question to ask why the CA market still exists, and why there are still service operators who pay CAs for their service while free CAs exist. Let's Encrypt uses a 90-day certification model, so the degree to which the name security function is effectively outsourced is limited. There is a market for longer-term certificates that are a more effective way of outsourcing this function, and the continuing existence of a large set of CAs who charge a fee points to the continuing viability of this market.

Even though DNSSEC has largely failed in this space so far, should the IETF have avoided the effort and not embarked on DNSSEC in the first place? I would argue against such a proposition.

In attempting to facilitate competition in the Internet's essential infrastructure, the IETF is essentially an advocate for competitive entrants. Dominant incumbents have no inherent need to conform to open standards, and in many situations they use their dominant position to deploy services based on technologies that are solely under their control, working to entrench their current position into the future.

Most enterprises that obtain a position allowing the extraction of monopoly rents from a market will conventionally use the current revenue stream to further secure their future monopoly position. In the IT sector, when pressed, such dominant actors have been known to use crippling Intellectual Property Rights conditions to prevent competitors from reverse engineering their products to gain entry to the market. In light of such behaviors, the IETF acts in ways similar to a venture capital fund, facilitating the entrance of competitive providers of goods and services through open standards. Like any venture capital fund, there are risks of failure as much as there are benefits of success, and the failures should not prevent the continual search for instances of success.

While I am personally not ready to write DNSSEC off as a complete failure just yet, there is still much the IETF can learn about why it has spent so many years on this effort. The broader benefits of such activities to the overall health of a diverse and competitive marketplace of goods and services on the Internet are far more important than the success or otherwise of individual protocol standardization efforts.

2 - Deployment Considerations

In this second part of my report, I'll report on tensions between initial expectations of protocol design and standardization and subsequent deployment experience.

Do we really understand the expectations of our Internet protocols?

What do we expect? Are these expectations part of a shared understanding, or are there a variety of unique and potentially clashing expectations? Do we ever look back and ask whether we built what we thought we were going to build? Did anyone talk to the deployers, operators and competitors to understand their expectations, requirements and needs? In many working groups the loudest voices and the most strongly held opinions may dominate the conversation, but this is not necessarily reflective of the broader position of interested parties, nor of the path that represents the greatest common benefit. The strongest supporters of a single domain of interoperable connectivity are often new entrants, while incumbents may have an entirely different perspective on the scope and expectations of a standardization effort.

Not only is this a consideration when embarking on standardization of a new protocol or a new tool element, but similar considerations apply to efforts to augment or change a standard protocol. Existing users may oppose the imposition of additional costs to their use of a protocol that appear to benefit new entrants unfairly. Change by its very nature will always find some level of opposition in such forums.

Perhaps one possible IETF action could be to avoid working on refinements and additions to deployed protocols, as this works against the interests of the deployed base and also sends a negative signal about the risks of early adoption of an IETF protocol. On the other hand, the IETF is not working in isolation, and the market itself would resist the adoption of protocol changes if those changes had no substantive bearing on the functionality, integrity or cost of the deployed service. In other words, if the augmentations offer no benefits to the installed base other than opening up the service realm to more competitors, it is entirely reasonable to anticipate resistance towards such changes.

A direction to the IETF to stop work on protocol refinements may well be a direction to stop working on ultimately futile efforts and instead spend its available resources in potentially more productive spaces, as the market will make such choices between sticking with an existing protocol and adopting change in any case. However, many items of work are started in the IETF with confident expectations of success, and "no" is a very difficult concept in an open collaborative environment. It does not need complete agreement, or even a rough consensus of the entire community, to embark on an activity. The more typical threshold is a cadre of enthusiasm. Whether that enthusiasm comes from individuals or from corporate actors makes no substantive difference.

This lack of critical ability to select a particular path of action and make choices between efforts has proved to be a liability at times. The standardization of numerous IPv6 transition mechanisms appeared to make a complex task far harder for many operators. The continuing efforts to tweak the IPv6 protocol appears to act against the interests of early adopters, and a sense of delay and caution has become a widespread sentiment among network and service operators.

Scale has been a constant factor in deployment considerations. Protocols that can encompass an increasing scope of deployment without imposing onerous costs on early adopters, who are otherwise forced to keep up with the growth pressures imposed by later entrants, tend to fare better than those that impose growth costs on all. The explosive growth of Usenet news imposed escalating loads on all providers, and ultimately many dropped out. The broader issue of the scalability and cost of information-flooding architectures cannot be ignored as an important lesson from this particular example.

Many protocols require adjustment to cope with growth. A good example here is the size of the Autonomous System Number field in BGP. The original 16-bit field was running out, and it was necessary to alter BGP to increase the size of this field. One option is a "flag day," where all BGP speakers shift to a new version of the protocol at once. Given the scope of the Internet, this has not been a realistic proposition for many years, and probably for many decades. The alternative is piecemeal adoption, where individual BGP speakers can choose to deploy a 32-bit ASN-capable version and interoperate seamlessly with older BGP speakers.
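The interoperation trick standardized in RFC 6793 is worth a sketch: when a 32-bit-capable speaker talks to an old 16-bit speaker, any ASN that does not fit in 16 bits is replaced in the AS_PATH by the reserved placeholder AS 23456 (AS_TRANS), while the true path travels in the optional transitive AS4_PATH attribute that old speakers pass through unexamined. The toy function below illustrates only the substitution step, not a full BGP implementation.

```python
AS_TRANS = 23456  # reserved 16-bit placeholder ASN (RFC 6793)

def as_path_for_old_speaker(as_path):
    """Substitute AS_TRANS for any ASN that does not fit in 16 bits,
    producing the AS_PATH that would be sent to a 16-bit-only peer.
    The genuine 32-bit path is carried alongside in AS4_PATH, which
    old speakers propagate without interpreting it."""
    return [asn if asn < 2**16 else AS_TRANS for asn in as_path]

path = [64512, 4200000000, 65001]      # the middle ASN needs 32 bits
print(as_path_for_old_speaker(path))   # [64512, 23456, 65001]
```

Because the substitution is purely local to sessions with old peers, individual networks could upgrade whenever they chose, which is exactly the backward-compatible, piecemeal property described above.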

In general, where change is necessary for a deployed protocol, piecemeal deployments that are backward compatible with the existing user base will have far better prospects than those which are less flexible. In the early days of designing what was to become the IPv6 protocol, there were various wish lists drawn up. "Backward compatibility" was certainly desired in this case, but no robust way of achieving this was found, and the protracted transition we are experiencing uses a somewhat different approach of coexistence, in the form of the dual stack Internet. Coexistence implies that any network cannot rid itself of a residual need for IPv4 services while any other network is still only operating an IPv4-only network. The entire transition process stalls on universal adoption, where the late adopters appear to claim some perverse form of advantage in the market through deferred cost of transition.

Is the IETF's conception of "need" and "requirement" distanced from the perspectives of operators and users? Should the IETF care when operators or users don't? Transport Layer Security (TLS) is a good illustration here. While the network was primarily a wired network, it was evident that users trusted network operators with their traffic, and efforts to encrypt traffic did not gain mainstream appeal. TLS only gained traction with the general adoption of WiFi, as the idea of eavesdropping on radio was easy to understand. And at this point, the message of the need for end-to-end encryption had a more receptive audience. Should the IETF have waited until the need was obvious, or were its early actions useful in having a standard technology specification already available when user demand was exposed?

It is hard to believe that the IETF has superior knowledge of the requirements of a market than those actors who either service that market or intend to invest in servicing that market. Having the IETF wait until it makes a clear judgment as to need runs the risk of only working on already deployed technology. At this point, the value proposition of an open and interoperable standard is one that exists for all but the original developers and early adopters.

How do standards affect deployment? HTTPS is an end-to-end protocol that can be used to drive through various forms of firewalls and proxies. Packages that embed various services into HTTPS sessions, including IP itself, have existed for years, although the lack of applicable standards has meant that their use was limited to those willing and able to install custom applications on their platforms. The recent publication of RFC 8484, which describes DNS over HTTPS (DOH), was more a case of formalizing an already well-understood tunneling concept than of presenting some new invention. The existence of an IETF standard document effectively propelled this technology into a form of legitimacy, transforming it from just another tool in the hacker's toolbox into something that some mainstream browser vendors intend to fold into their products. The standard, in this case, is seen as a precursor to widespread adoption. That should not imply that there is broad agreement about the appropriateness of the standardization, or broad agreement with the prospect of broad deployment.
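The "well-understood tunneling concept" is simple enough to show. In RFC 8484's GET form, the ordinary DNS wire-format query is base64url-encoded (padding stripped) and carried in a `dns` URL parameter of an HTTPS request. The sketch below builds such a URL from scratch using only the standard library; the resolver endpoint is a placeholder, and no network request is actually made.

```python
import base64
import struct

def doh_get_url(server: str, name: str, qtype: int = 1) -> str:
    """Build an RFC 8484 GET URL for a DNS query (qtype 1 = A record).

    The wire-format query goes into the 'dns' parameter, base64url
    encoded with padding removed. The query ID is set to 0, which
    RFC 8484 suggests to aid HTTP-level caching."""
    # DNS header: ID=0, flags=0x0100 (RD bit), one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QCLASS 1 = IN
    dns = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{server}?dns={dns}"

# Hypothetical resolver endpoint, for illustration only:
print(doh_get_url("https://dns.example/dns-query", "example.com"))
```

Everything here is plain HTTPS plus plain DNS; the standard's contribution was legitimacy and a common encoding, not new machinery.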

DOH has been a story of emerging differences of expectations. Some browser vendors appear to be enthusiastic about DOH as an enabler of faster service with greater control placed into the browser itself, lifting the name resolution function out of the platform and placing it into the application. However, the DNS community is not so clearly on board with this, seeing DOH as a potential threat to the independent integrity of the DNS as a distinct element of common and consistent Internet infrastructure. Once the name resolution function is pushed deeply into the application, what's to prevent applications from customizing their view of the namespace?

An important value of a single communications network resides within the concept of a single referential framework, where my reference to some network resource can be passed to you and still refer to the same resource. Should the IETF not work on technology standards that head down paths that could potentially lead to undermining the cohesive integrity of the common Internet namespace? Or are such deployment consequences well outside the responsibility of the IETF?

Deployment of technologies has exposed many tussles on the Internet. One of the major issues today is the tussle between applications and platforms. Today's browsers are now a significant locus of control, exercising independent decisions over transport, security, latency, and the namespace, which collectively represent independent control over the entire user experience. Why should the IETF have an opinion one way or the other on such matters? If you take the view that a role of standards is to facilitate open competition between providers, then the issue in this space lies in the inexorable diminution of competition on the Internet. It appears that if one can realize unique economies of scale, and greater scale generates greater economies, then the inevitable outcome is concentration in these markets.

One of the essential roles of the IETF is diminished through this concentration within the deployment space, and the IETF runs the risk of being relegated to rubber-stamping technologies that have been developed by incumbents.

How can the IETF measure the level of concentration in a market? If the IETF were to claim an important role in supporting competition in decentralized markets, then how exactly would it execute on this objective? What would it need to do? Is protocol design and standardization relevant or irrelevant to the industry composition of deployment that breeds centralization? Can the IETF ever design a protocol that would be impossible to leverage in a centralized manner? This resistance to concentration within the Internet appears to be an unlikely mission for the IETF. The Internet's business models leverage inputs and environments to create an advantage for incumbent at-scale operators. It would be comforting to think that the protocols used, and their properties, are mostly orthogonal to this issue.

However, there is somewhat more at play. Standardization occurs during the formative stages of a technology, and this may be associated with deployment conditions that include early adopter advantages. If such advantages exist, then the rewards to such early adopters may be disproportionately large. This engenders positive market sentiment, which motivates the early adopter to defend its unique position and discourage competition. Early adopters head to the IETF to shape emerging protocols and influence their intended entrance into the market. Their interest in the standardization process is not necessarily to generate a technology specification that opens the technology up to all forms of competitive use. Often their interest lies in the production of complex, monolithic specifications replete with subtle interdependencies and detail. Trying to position IETF work to encourage competition by producing simple, readily accessible specifications of component elements runs counter to the interests of early adopters and subsequent incumbents.

There is an entire world of economic thought on market dominance and competition, and it becomes relevant to this consideration about protocols and centrality on the Internet.

Is big necessarily bad? Is centralization necessarily bad? Or is the current environment missing some key components that would've controlled and regulated the dominant incumbents?

In many ways, it seems that we are re-living the Gilded Age of more than a century past. There is a feeling of common unease that the Internet, once seen as a force for good in our society, has been entirely captured by a small clique who are behaving in a manner consistent with a global cabal. The response to such feelings of unease over the ruthless exploitation of personal profiles in the deployment space is to seek tools or levers that might reverse this situation. The tools may include law and regulation, the passage of time, new protocols, educating users, or new vectors of competition. In many ways, this common search for a regulatory lever is mostly ineffectual, as the most effective response to market dominance often is sourced from the dominant incumbent itself.

3 - Security and Privacy

In this third part of my report, I'll report on our experience with security and privacy.

These days any form of consideration about the Internet and its technology base needs to either address the topic of security and privacy in all its forms or explicitly explain this glaring omission. Obviously, this workshop headed directly into this space, asking whether the IETF was looking at topical and current threat models, and also asking about likely evolution in this space.

Exhortations about security practices for service operators made through standards bodies are often ineffectual in isolation. RFC 2827 is almost 20 years old, and it is ignored by network operators to about the same extent as it was ignored at the time of its publication. It may be better known as BCP 38: packet filtering to prevent source address spoofing in IP packets. It's important because there is a class of DDOS attack using UDP amplification, where the UDP response is far larger than the query, and a spoofed source address directs that amplified response at the victim. It's a fine practice, and we should all be doing this form of filtering. Twenty years later the attacks persist because the filtering is just not happening. What may make such forms of advice more effective is the association of some form of liability for service operators, or explicit obligations as part of liability insurance. In isolation, advice relating to security measures is often seen as imposing cost without immediate direct benefit, and in circumstances such as this, where the defensive approach is only effective when most operators undertake the practice, benefits for early adopters are simply not present.
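The filtering BCP 38 asks for is conceptually trivial, which makes its non-deployment all the more telling. At a customer-facing port, the operator accepts a packet only if its source address falls within the address space assigned to that customer; anything else is presumed spoofed. A minimal sketch, with an assumed customer prefix for illustration:

```python
import ipaddress

def bcp38_permits(src_ip: str, customer_prefixes) -> bool:
    """BCP 38 ingress check: accept a packet arriving on a customer
    port only if its source address lies within a prefix delegated to
    that customer. Everything else is dropped as presumably spoofed."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in ipaddress.ip_network(p) for p in customer_prefixes)

assigned = ["192.0.2.0/24"]  # hypothetical customer delegation

print(bcp38_permits("192.0.2.77", assigned))    # True: legitimate source
print(bcp38_permits("198.51.100.9", assigned))  # False: spoofed source
```

The catch, as the text notes, is that the filtering operator bears the cost while the benefit accrues to everyone else, so local incentives never quite line up with the collective good.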

Another example is the standardization of Client Subnet extensions in the DNS. Despite the standard specification, RFC 7871, containing the advice that this feature should be off by default and that users be permitted to opt out, this has not happened. This is in spite of the potential for a serious privacy leak through the attribution of DNS queries to end users.
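RFC 7871 does at least describe how a resolver should limit the leak: the client address is truncated to a short prefix (no longer than /24 for IPv4 is the recommendation) before being attached to upstream queries, and a source prefix length of zero serves as the opt-out, revealing nothing. A minimal sketch of that truncation, with illustrative addresses:

```python
import ipaddress

def ecs_client_subnet(client_ip: str, source_prefix_len: int = 24) -> str:
    """Truncate a client address to the prefix a resolver would place
    in an EDNS Client Subnet option. RFC 7871 recommends no more than
    /24 for IPv4; a length of 0 is the opt-out, leaking no address."""
    if source_prefix_len == 0:
        return "0.0.0.0/0"
    net = ipaddress.ip_network(
        f"{client_ip}/{source_prefix_len}", strict=False
    )
    return str(net)

print(ecs_client_subnet("203.0.113.77"))     # 203.0.113.0/24
print(ecs_client_subnet("203.0.113.77", 0))  # 0.0.0.0/0 (opted out)
```

The complaint in the text is precisely that deployments have too often skipped this discipline and forwarded enough bits to identify the client.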

The environment of attacks escalates, as the growing population of devices allows the formation of larger pools of co-opted devices that in concert can mount massive DDOS attacks. Given our inability to prevent such attacks from recurring, the reaction has been the formation of a market in robust content hosting. As the attacks increase in intensity, the content hosting operators require larger defensive measures and economies of scale in content hosting come into play. The content hosting and associated distribution network sector is increasingly concentrated into a handful of providers. In many ways, this is a classic case of markets identifying and filling a need. The distortion of that market into a very small handful of providers is a case of economies of scale coming into play. As with the CA market, the market has now seen the entrance of zero cost actors, which has significantly lifted the barrier to further new entrants in this market. What remains now appears to be simply a process of further consolidation in the market for content hosting.

The threat model is also evolving. RFC 3552, published in 2003, explicitly assumed that the end systems that are communicating have not themselves been compromised in any way. Is this a reasonable assumption these days? Can an application assume that the platform is entirely passive and trustable, or should the application assume that the underlying platform may divulge or alter the application's information in unanticipated ways? To what extent can or should applications lift common network functionality into user space and deliberately withhold almost all aspects of a communication transaction from other concurrently running applications, from the common platform, and from the network? Do approaches like DOH and QUIC represent reasonable templates for responding to this evolved threat model? Can we build protocols that explicitly limit information disclosure when one of the ends of the communication may have been compromised?

Is protocol extensibility a vector for abuse and leakage, such as the Client Subnet DNS extension in the DNS, or the session ticket in TLS?

And where are our points of trust to allow us to validate what we receive? As already pointed out, DNSSEC is not faring well, and the major trust point is the WebPKI. Unfortunately, this system suffers from a multiplicity of indistinguishable trust, and our efforts to detect compromise have shrunk to logging, in the form of Certificate Transparency. Such a measure is not responsive in real time, and rapid attacks are still way too effective.

A single trust anchor breeds a natural monopoly at the apex, and across the diversity of the global Internet, there is much distrust in that single point, particularly when geopolitics enters the conversation. This single trust broker is a natural choke point and is one that tends to drift towards rent-seeking if operated by the private sector and distrust if operated by the public sector. Designs for trust need to take such factors into account.

The issue of security popups in the browser world can be compared with the silent discard of the response in the DNSSEC world, as they offer two different views of security management. Placing the user inside the security model leads to a lack of relevant information and an observed tendency for users to accept obviously fraudulent certificates in the absence of any better information. From that perspective, removing the user from the picture improves the efficacy of the security measure. On the other hand, there is some disquiet about the concept of removing the user from security controls entirely. Giving the user no information and no ability to recognize potentially misrepresented situations that may harm them seems to be a disservice to the user.

Do our standards promote and encode the "state of the art" as a means of shedding liability for negligence while still acknowledging that the state of the art is not infallible? Or do they purport to represent a basic tenet of security that, when correctly executed, is infallible?

The Internet of (insecure) Things is an interesting failure case, and the predatory view of the consumer often distracts from the ethos of care for the customer and the safeguarding of the customer's enduring interests. Grappling with conformance to demanding operational standards in a low-cost, highly diverse, and high-volume industry is challenging. Perhaps more so is the tendency of the IETF to develop many responses simultaneously and confront the industry with not one but many measures. Already we've seen proposals that rely on some level of manufacturer cooperation, such as embedded public/private key pairs, QR codes, MUD profiles, or boot server handshakes.

A safe mode of operation would require that the device cannot cold start, or even continue to operate, without some level of handshake. Is this realistic? Will manufacturers cooperate? Will this improve the overall security of the IoT space? Are these expectations of manufacturers realistic? Will a crowdfunded IoT toothbrush comply with all these requirements? Will these requirements impose factory costs that make the device prone to manufacturing errors and increase the cost to the consumer without any change in the perceived function and benefit of the device? An IoT toothbrush will still brush teeth irrespective of its level of conformance to some generic standard security profile. The failure in the October 2016 botnet DNS attack that used readily compromisable webcams was not a failure of information or protocol. It was a failure of markets, as there was no disincentive to bringing to market an invisibly flawed, but cheap, and otherwise perfectly functional product. We tend to see the IoT marketplace as a device market. In contrast, effective device security is an ongoing relationship between the consumer and the device manufacturer, and requires a service model rather than a single sale transaction.

The prospect of regulatory impost to provide channels to the retail market that include conformance to national profiles is nigh on certain. Will the inevitable diversity of such regional, national or even state profiles add or detract from the resultant picture of IoT security? Will we end up with a new marketplace for compliance that offers an insubstantial veneer of effective security for such devices? It's very hard to maintain a sunny optimistic outlook in this space.

Human behavior also works against such efforts. Our experience points to the observation that users of a technology care a whole lot less about authentication and validation than we had assumed. Most folks don't turn on validation of mail, validation of DNS responses, or similar, even when they have access to the tools to do so. When we observed the low authentication rates post-deployment, our subsequent efforts to convince users to adopt more secure practices were ineffectual. Posters in the Paris Metro informing users of what makes a password harder to guess really have not made an impact. In the consumer market, users don't understand security and don't value such an intangible attribute as part of a product or service.

Safeguarding privacy is a similarly complex space. The last decade has seen the rise of surveillance capitalism, where the assembly of individual profiles of consumers has become the cornerstone of many aspects of today's Internet. Many products and services are provided on the Internet free of charge to the user. The motivation to provide such free services comes from the reverse side of this market, where the tool or service is used to assist in the generation of a profile of the individual user, which is then sold to advertisers. Our digital footprint provides a rich vein of data to fuel this world of surveillance capitalism. Whether it's our browser history, logs of DNS queries, our mailboxes, search history, documents, or e-book purchases and reading patterns, all of this data can be converted into information that has monetary value. Our attitude toward this activity is not exactly consistent.

On the one hand we appear to be enthusiastic consumers of free-of-charge products and services and all too willing to dismiss reports of data mining on the basis that individually none of us have anything to hide. At the same time, we all have experienced those disconcerting incidents where the delivered ad mirrors some recent browsing topic or received email. Why is privacy a common concern when our actions appear to indicate that we are willing to trade it for the provision of goods and services?

One reason for this concern is when a principle of informed consent is violated. Protocols, products and services should not facilitate unintended eavesdropping on a user's actions and activities. When they leak personal information without such informed consent there is a reasonable reaction over what is perceived to be unacceptable surveillance. Another reason lies in the inherently asymmetric nature of the market of personal profile data. Individual users tend to undervalue their profile data, and the relationship with the consumers of such data tends to be exploitative of individual users.

The IETF's position on privacy has strengthened since the publication of RFC 7258 in May 2014, and the IETF expects to go to some lengths with information management in its protocols to contain what is now seen as gratuitous leakage. This includes measures such as query name minimization in DNS queries and encryption of the SNI field in TLS handshakes.
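Query name minimization (RFC 7816) is a good example of the kind of leakage containment meant here: rather than sending the full query name to every server in the resolution chain, the resolver reveals only as many labels as each zone's server actually needs. A minimal sketch of the sequence of names such a resolver would use:

```python
def minimized_qnames(fqdn: str) -> list:
    """Query names a QNAME-minimizing resolver (RFC 7816) would use
    while walking down from the root: each zone's server learns only
    the next label, not the full name the user asked for."""
    labels = fqdn.rstrip(".").split(".")
    # Build progressively longer suffixes, shortest (TLD) first.
    return [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]

print(minimized_qnames("www.example.com"))
# ['com', 'example.com', 'www.example.com']
```

Under the old behavior, the root and `com` servers would both have seen the full `www.example.com` query; with minimization, only the servers for `example.com` itself ever learn the leaf name.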

It would be good to think that we have finally stopped using the old security threat model of the malicious actor in the middle. We now have more complex models that describe secure and/or trusted enclaves operating within an unknown model of the surrounding environment. Is this device security, or really a case of "data security"? We need to associate semantics with that data and describe its access policies in order to safeguard elements of personal privacy.

4 - Where Now?

In this final part of my report, I'll report on some expectations for the IETF's protocol standardization activities.

The Internet faces many challenges these days, and while many of these challenges are the consequence of the Internet's initial wild and rapid success, few of them have the same intrinsically optimistic tenor as the challenges of the earlier Internet. We see an increasingly capable and sophisticated set of threats coming from well-resourced adversaries. The increasing adoption of Internet-based services in all parts of our world increases the severity of these threats. We also see increasing consolidation by a shrinking set of very large global enterprises. Social media, search, cloud services and content are all offered by a handful of service operators, and effective competition in this space is not merely an illusory veneer but has disappeared completely. The increasing dominance of many parts of the Internet by a small set of entrenched incumbents raises obvious questions about centrality of control and influence, as well as very real questions about the true nature of competitive pressure in markets that are already badly distorted.

For the IETF this poses some tough questions. Is the IETF there only to standardize those technology elements that these entrenched incumbents choose to pass over to an open standardization process to simply improve the economies and efficiency of their lines of supply while excluding some of their more important technology assets? If the IETF feels that this situation of increasing concentration and the formation of effective monopolies in many of these activity areas calls for some remedial action, then is it within the IETF's areas of capability or even within its chosen role to do anything here?

Some ten years ago the IAB published RFC 5218, on "What Makes a Successful Protocol." Much, if not all, of that document, still holds today. The basic success factor for a protocol is for it to meet a real need. Other success factors include incremental deployment capability, open code, open specification, and unrestricted access. Successful protocols have few impediments to adoption and address some previously unmet need. RFC 5218 also used a category called wild success:

"… a 'successful' protocol is one that is used for its original purpose and at the originally intended scale. A "wildly successful" protocol far exceeds its original goals, in terms of purpose (being used in scenarios far beyond the initial design), in terms of scale (being deployed on a scale much greater than originally envisaged), or both. That is, it has overgrown its bounds and has ventured out 'into the wild'."

One view is that for the IETF, success and wild success are both eminently desirable. The environment of technology standardization has elements of competitive pressure, and standards bodies want to provide an effective platform for protocol standardization: one that encourages submissions of work to the standardization process and, through its standards imprimatur, is able to label a technology as useful and usable. For the IETF to be useful at all, it needs to be able to engender further wild successes in the protocols it standardizes. So there is a certain tension between the proposition that the IETF should pursue a path that attempts to facilitate open and robust competition and eschew standardizing protocols that lead to further concentration in the market, and the position that, in order to maintain its value and relevance, the IETF should seek to associate itself with successful protocols, irrespective of the market outcomes that may result.

Some of the tentative outcomes of this workshop for me have been:

  • Technologies get deployed in surprising ways, which can have unintended consequences for threat models, surveillance capability, and user privacy.
  • The focal point of technology and service evolution is moving up the stack: applications are now taking responsibility for their own services, transport, security, naming context, and the like.
  • Perceived needs drive deployment, not virtue!
  • Interoperability continues to be important, but which interfaces require standardization?
  • With the Internet now the mainstream of communications, the supporting ecosystem is populated by more diverse actors and interests. IETF commentary could be helpful at this point, but by whom and to whom?
  • Specific subject issues, such as DDoS, IoT, spam, DNS, regulation, and centralization, are the topic of many challenging conversations, but none of these issues has an easy resolution, and none is resolvable solely within the purview of the IETF.

What should the IETF do?

It is highly likely that the IETF will adopt a conservative position on such challenging questions and simply stick to what it does best, namely, standardizing technologies within its areas of competence, and let others act as they see fit. The IETF does not define the Internet, nor is it responsible for either the current set of issues or the means of their solution, assuming that solutions might exist. The IETF is in no position to orchestrate any particular action across such a diversity and multiplicity of other actors, and it would probably be folly for the IETF to dream otherwise.

No doubt the IETF will continue to act in a way that it sees as consistent with the interests of the Internet's user community. No doubt it will continue to standardize protocols and tools that proponents in the IETF believe will improve the user experience while, at the same time, attempting to safeguard personal privacy. It is difficult to see circumstances in which the IETF would act in ways inconsistent with such broad principles.

Written by Geoff Huston, Author & Chief Scientist at APNIC




PIR Announces New .ORG Impact Awards to Recognize Achievements in the .ORG Community

DN Journal - Wed, 2019-06-12 00:27
PIR, the administrator of .ORG, has launched The .ORG Impact Awards, a new program designed to recognize .ORG website owners who are using the Internet to empower change around the world.
