News and Updates

Fighting Phishing with Domain Name Disputes

Domain industry news - Thu, 2017-09-07 16:08

I opened an email from GoDaddy over the weekend on my phone. Or so I initially thought.

I had recently helped a client transfer a domain name to a GoDaddy account (to settle a domain name dispute), so the subject line of the email — "Confirm this account" — simply made me think that I needed to take another action to ensure everything was in working order.

But quickly, my radar went off. Something was amiss:

Phishing email not from GoDaddy

  • The "to" line was blank, which meant that I had been bcc'd on the email.
  • The sender's name was "Go Daddy" (with a space that the popular registrar's actual name doesn't contain).
  • Although the body of the email contained the GoDaddy logo, the footer of the email referred to "Godaddy" (without a space but with a lowercase "D" that is not consistent with the registrar's style).
  • Upon actually reading the email, I immediately noticed the multiple grammatical errors in the first sentence: "Our records shows your account details is incomplete."

Because I was looking at the email on my phone instead of on a computer, I couldn't readily identify the link behind the prominent "Verify Now" button. But later, once I was in front of a PC, I saw that the link was not to GoDaddy at all.
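Red flags like these can be checked mechanically. Here is a minimal sketch, using only the Python standard library, that flags a blank "To" header and links whose domains don't match the claimed sender; the sample message and the `phishing_red_flags` helper are illustrative assumptions, not a real mail filter:

```python
# Heuristic phishing checks mirroring the red flags described above.
# Illustrative sketch only -- thresholds and samples are invented.
from email import message_from_string
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collect href targets so link domains can be compared to the claimed sender."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

def phishing_red_flags(raw_email: str, trusted_domain: str) -> list:
    msg = message_from_string(raw_email)
    flags = []
    if not msg.get("To"):                 # blank "To" line -> recipient was bcc'd
        flags.append("blank To header")
    parser = LinkExtractor()
    parser.feed(msg.get_payload())
    for link in parser.links:             # does each link lead where it claims?
        host = urlparse(link).hostname or ""
        if not (host == trusted_domain or host.endswith("." + trusted_domain)):
            flags.append(f"link to unexpected domain: {host}")
    return flags

raw = (
    "From: Go Daddy <support@example.net>\n"
    "Subject: Confirm this account\n"
    "Content-Type: text/html\n\n"
    '<a href="http://evil.example.com/verify">Verify Now</a>'
)
print(phishing_red_flags(raw, "godaddy.com"))
```

Checks like these catch only the clumsier scams, of course; they complement, rather than replace, a careful reader.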

Fortunately, I didn't click the link until now, as I am writing this blog post. At the moment, it leads to a web page that says, "This Account has been suspended."

Phishing for Info

If I had clicked the link when I received the email, I suspect I would have been taken to a page that looked like GoDaddy's website and would have been prompted to enter my username and password. Doing so, of course, would have disclosed that sensitive information to someone else — someone phishing for exactly that information — which would have compromised everything in my account.

Fortunately, as far as I know, I've never clicked on a phishing link — or, if I have, I've never disclosed personal credentials.

But phishing scams seem to be getting more common and more sophisticated. And if I — a savvy computer user and domain name attorney — have to think twice to avoid clicking on a deceptive link, I can only imagine how many other people (hello, Mom?) must actually click on those links without giving it a second thought.

I realize this is really not new. But it underscores the importance of domain name disputes and how companies can use the Uniform Domain Name Dispute Resolution Policy (UDRP) and other tools to combat phishing as a way to protect their customers.

Google's Phishing Fights

Just days after my "GoDaddy" experience, I read a UDRP decision involving a complaint brought by Google for the domain name <web-account-google.com>. According to the decision:

Complainant [Google] argues that Respondent engages in a phishing scheme to obtain personal information for users.... Complainant claims that the login information contained on the resolving webpage [associated with the domain name <web-account-google.com>] does not actually function, but rather Respondent uses it to obtain personal information from users.

The UDRP panel had no problem finding that this conduct constituted "bad faith" under the policy, and that Google had satisfied the UDRP's other two elements as well, and it ordered the domain name transferred to Google.

Screenshot of web page at www.web-account-google.com (captured September 5, 2017)

However, as of this writing, the UDRP decision had not yet been implemented, so, naturally, I went to see what the web page at this domain name looked like — that is, the page at www.web-account-google.com. As the image here shows, the page mimics a Google website: It contains the Google logo along with a header that says (in French), "Sign into your Google account."

Surely, any non-savvy or careless (or simply quickly moving) Internet user directed to this page could not be blamed for thinking that he or she had arrived at a real Google page. But anyone who entered his or her Google credentials would immediately be disclosing them to someone other than Google.

The consequences for any such victim could be tremendous, giving a devious person access to, among many other services offered by Google, sensitive and personal email archives.

Fortunately for Google — and its users — this phishing scam will soon come to an end when the UDRP decision is implemented.

Using the UDRP

The UDRP is a popular way to shut down some phishing scams. Indeed, nearly 1,500 domain name dispute decisions at the World Intellectual Property Organization (WIPO) and the Forum — the two most popular UDRP service providers — refer to "phishing."

Surely, the total number of phishing scams is far greater than the UDRP numbers reveal, given that many phishing scams are short-lived and disappear before a UDRP complaint can even be filed; some phishing scams don't involve domain name disputes; and trademark owners simply don't have the resources to pursue every scam.

I have no idea how many people fell victim to this fake Google website, but I can quickly see that Google had dealt with this same problem in plenty of other UDRP cases, involving such domain names as <gmaill.com>, <googledocs.net>, <gmailcustomerservices.com> and <google-spain.com>, to list just a few.

Of course, Google is just one of many trademark owners that have used the UDRP to shut down phishing scams. Other technology companies, banks, hotels, financial services firms, insurance companies, and many others have successfully invoked the UDRP to stop phishers from harming consumers.

So, too, has GoDaddy, which won a UDRP decision a couple of years ago for the domain names <service-godaddy.com> and <services-godaddy.com>, which were used as part of a phishing scheme.

While the UDRP will never eliminate phishing, it is obviously an important tool that trademark owners are using to protect their customers from deceptive scams.

Written by Doug Isenberg, Attorney & Founder of The GigaLaw Firm

Follow CircleID on Twitter

More under: Cybercrime, Cybersecurity, Cybersquatting, Domain Names, UDRP

Categories: News and Updates

Owner of eGrandstand.com files UDRP against Grandstand.com

Domain Name Wire - Thu, 2017-09-07 15:37

Company wants to drop the ‘e’ from its domain name.

A glassware and apparel company that goes by the name Grandstand has filed a UDRP against GrandStand.com, a common word domain name that was registered in 1995.

Based on DomainTools historical whois records, the current registrant has owned the domain name since at least 2001, which is when DomainTools started collecting records on the domain name. It is very likely the original owner.

The glassware company, Screen-It Graphics of Lawrence, Inc. d/b/a Grandstand, registered eGrandStand.com in or after 2004.

Grandstand.com has been resolving to “coming soon” pages without ads for many years.

Barring some sort of extenuating circumstances that aren’t apparent to me, this case is a candidate for reverse domain name hijacking.


© DomainNameWire.com 2017. This is copyrighted content. Domain Name Wire full-text RSS feeds are made available for personal use only, and may not be published on any site without permission. If you see this message on a website, contact copyright (at) domainnamewire.com.

Latest domain news at DNW.com: Domain Name Wire.

The post Owner of eGrandstand.com files UDRP against Grandstand.com appeared first on Domain Name Wire | Domain Name News & Views.

Related posts:
  1. Arena.com Owner Wins Domain Name Arbitration
  2. Louis Vuitton Loses Domain Dispute for Mobile Domain Name
  3. Panelist finds East End N.Y. Imports Inc. engaged in reverse domain name hijacking
Categories: News and Updates

CentralNic is on the premium domain hamster wheel

Domain Name Wire - Thu, 2017-09-07 14:17

The premium domain hamster wheel will be in effect later this year.

CentralNic is on a hamster wheel with premium domains.

Domain name registrar and registry company CentralNic (AIM: CNIC) reported first half 2017 financial results today.

The company reported first half revenue of £10.59 million, up 18.5% from £8.93 million in the first half of 2016. But this was down from £13.20 million in revenue in the second half of last year.

While that might suggest the company is going to see a drop in revenue this year, never fear…CentralNic has a trick up its sleeve. It’s the premium domain hamster wheel.

The reason CentralNic had a stellar second half last year was that it sold off some of its premium domains. One customer paid £3.555 million for premium domains.

The good news is that the sale made CentralNic’s 2016 look good. The bad news is that CentralNic needs to repeat that performance with what is essentially an asset sale counted as revenue this year.

In its interim report, the company says it’s working on that and premium sales are “expected to contribute significantly to profits in the second half.” Buyers: you’re in the driver’s seat on negotiations as 2017 winds down.


This is a revenue game that a handful of public companies in the domain space play with premium domains. The problem is that once they book premium domain sales as revenue, they need to continually repeat the performance each year. Also, you eventually run out of premium domains to sell.

CentralNic will also get a boost from running the .sk domain name. It expects that acquisition to close in September.

It’s unclear to me how CentralNic’s underlying businesses are doing in terms of growth versus just adding revenue streams via acquisition.

CentralNic’s wholesale business generated £1.816 million in the first half of the year. The company breaks out its top customers, with the top one generating £440k and the second just £50k. While I’ve always assumed these represent TLD companies such as XYZ and Radix, I couldn’t imagine its second largest TLD client (Radix) paying just £50k for the last six months. I reached out to the company and confirmed that these numbers are for registrars, not registries.

On the new TLD front, CentralNic has renegotiated and extended its contract with XYZ through May 2032.



The post CentralNic is on the premium domain hamster wheel appeared first on Domain Name Wire | Domain Name News & Views.

Related posts:
  1. CentralNic buys domain name registrar Internet.bs (IBS)
  2. CentralNic hits 4 million new top level domain names
  3. CentralNic acquires .SK operator for €26 million
Categories: News and Updates

Big 3-Letter .Com Sale at Sedo Tops This Week's Sales Chart - Helps .Coms Sweep 11 of Top 12 Spots

DN Journal - Thu, 2017-09-07 02:00
Yet another big 3-letter .com sale tops this week's domain sales report. We also saw the second biggest ccTLD sale of the year to date.
Categories: News and Updates

EU Presidency Pushing Other Member States for Substantial Internet Surveillance

Domain industry news - Wed, 2017-09-06 22:24

A leaked document by Statewatch reveals the current EU Presidency (Estonia) has been pushing the other Member States to strengthen indiscriminate internet surveillance and to follow in the footsteps of China regarding online censorship. Diego Naranjo reporting in EDRi: "Standing firmly behind its belief that filtering the uploads is the way to go, the Presidency has worked hard in order to make the proposal for the new copyright Directive even more harmful than the Commission's original proposal, and pushing it further into the realms of illegality. ... The proposals in this leak highlight a very dangerous roadmap for the EU Member States, if they were to follow the Presidency's lead."

More under: Censorship, Internet Governance, Policy & Regulation

Categories: News and Updates

More end user domain sales up to $75,000 including Heathrow Airport

Domain Name Wire - Wed, 2017-09-06 16:55

Heathrow Airport was among the groups buying domain names on Sedo this past week.

Heathrow Airport bought a domain name related to its expansion.

This week’s top public sale at Sedo was Netlife.com for $75,000. Although the buyer is not yet known, I’m putting my money on a Norwegian company called Netlife Research.

An augmented reality company also made a sizeable purchase, and Heathrow Airport bought a domain for its airport expansion.

Here’s the list of end user domain name sales I uncovered at Sedo from the past week:

(You can view previous lists like this here.)

NetLife.com $75,000 – Even though I’m not entirely sure who bought this domain (it’s still in escrow), this is surely an end user sale. The seller is domain investor Satoshi Shimoshita, the guy who had the valuable CM.com domain taken from him. There are lots of companies that use the name NetLife. The technical contact has moved to a domain company in Norway, so perhaps it’s the company Netlife Research which uses the domain NetlifeResearch.com. [Update: the domain has transferred and my educated guess was correct.]

MyXR.com $17,000 – The domain has Whois privacy and a landing page that says “Augmenting Soon”. In this case, XR probably stands for Extended Reality, which includes Augmented Reality.

Thermatec.com $7,500 – Thermatec Instrumentation & Controls Inc is an industrial equipment company.

MaxxLife.com $6,400 – Marketing company Complete Spectrum, Inc. bought this domain name. It might be for a client.

Daeco.com £5,500 – A South Korean company called Daeco that uses the domain Daeco.co.kr

Wefix.co.uk €5,000 – Ridown Ltd is a business incubator. Right now the domain forwards to a simple page that says “We Fix”.

MadeInTheStates.com $5,000 – Content marketing/influencer marketing company Issue Inc. They might use it for a campaign about products made in the U.S.

Omnicura.com $4,025 – Omnicura Clinics is coming soon, but we know little more than that.

Crypto.pro $3,500 – The site has already been put to use as a cryptocurrency forum.

Corka.com €3,500 – A1 Rubber manufactures a flooring surface called Corka.

HeathrowExpansion.com £2,000 – Last week I reported the sale of HeathrowExpansion.co.uk to a UK web design company. It turns out it bought the domain on behalf of Heathrow Airport. Now the airport has purchased the matching .com.

Evollove.com $2,000 – Evollove Group Limited in Hong Kong is a wedding, floral and exhibition planning company.



The post More end user domain sales up to $75,000 including Heathrow Airport appeared first on Domain Name Wire | Domain Name News & Views.

Related posts:
  1. 18 end user sales from $750 to $17,500
  2. Big-name end user domain purchases, including a .org for $55,000
  3. End user domain name sales including new TLDs and the ACLU
Categories: News and Updates

Europe and North America Energy Sector Targeted by Sophisticated Cyberattack Group

Domain industry news - Wed, 2017-09-06 15:53

The Western energy sector is being targeted by a new wave of cyberattacks capable of giving attackers the ability to severely disrupt affected operations, according to reports on Wednesday. Symantec Security Response team reports: "The energy sector has become an area of increased interest to cyber attackers over the past two years. Most notably, disruptions to Ukraine’s power system in 2015 and 2016 were attributed to a cyber attack and led to power outages affecting hundreds of thousands of people. ... The Dragonfly group appears to be interested in both learning how energy facilities operate and also gaining access to operational systems themselves, to the extent that the group now potentially has the ability to sabotage or gain control of these systems should it decide to do so."

The group behind the attacks is known as Dragonfly: "The group has been in operation since at least 2011 but has re-emerged over the past two years from a quiet period… This 'Dragonfly 2.0' campaign, which appears to have begun in late 2015, shares tactics and tools used in earlier campaigns by the group."

"The original Dragonfly campaigns now appear to have been a more exploratory phase where the attackers were simply trying to gain access to the networks of targeted organizations. The Dragonfly 2.0 campaigns show how the attackers may be entering into a new phase, with recent campaigns potentially providing them with access to operational systems, access that could be used for more disruptive purposes in future."

"The most concerning evidence of this is in their use of screen captures. In one particular instance the attackers used a clear format for naming the screen capture files, [machine description and location].[organization name]. The string 'cntrl' (control) is used in many of the machine descriptions, possibly indicating that these machines have access to operational systems."

More under: Cyberattack, Cybersecurity

Categories: News and Updates

DomainSherpa's Michael Cyger Announces Retirement from Domain Name Publishing

DN Journal - Wed, 2017-09-06 15:10
Like DomainSherpa Founder Michael Cyger's other Facebook friends, I was surprised to see a post from Michael this morning announcing his retirement from domain name publishing.
Categories: News and Updates

Top 5 Domain Name Stories from August

Domain Name Wire - Wed, 2017-09-06 13:54

A look back at the past month in the domain business.

The Verizon Center in Washington D.C. is being renamed the Capital One Arena.

The top story on Domain Name Wire last month was an arena naming deal that I scooped. Other topics include data privacy, a neo-nazi website and tough times in the domain business. Here are the top five stories ranked by pageviews:

1. Verizon Center in Washington DC likely to become Capital One Center – The most viewed story last month was my scoop about a name change for the arena in Washington D.C. Using recent domain name registrations, I concluded that it would be renamed from the Verizon Center to the Capital One Arena.

2. Will May 2018 be the death of Whois? – The European Union’s General Data Protection Regulation (GDPR) isn’t a sexy topic, but it’s going to have a major impact on domain names.

3. GoDaddy drops Uniregistry again – GoDaddy will stop offering Uniregistry domains (again). It struck a deal with Hexonet to manage Uniregistry names that are already registered with GoDaddy.

4. Google ends up with DailyStormer.com domain name – Domain name registrars and other service providers played a game of hot potato with the racist Daily Stormer website.

5. Cold winds in the domain industry? – The domain business is in a bit of a funk.

And here are the podcasts Domain Name Wire published last month:

#150 – Get ready to MERGE!
#149 – Frank Schilling talks domains
#148 – Domain name due diligence
#147 – Leverage your Domain Buys



The post Top 5 Domain Name Stories from August appeared first on Domain Name Wire | Domain Name News & Views.

Related posts:
  1. Top Domain Name News Stories of 2007
  2. Frank Schilling’s .Sexy close to 2,000 domains, .Tattoo 700
  3. New TLDs this week: Schilling launches .hiphop, Donuts charges your .creditcard
Categories: News and Updates

RIPE NCC to Hold Sixth IPv6 Focused Hackathon

Domain industry news - Wed, 2017-09-06 06:27

The Regional Internet Registry for Europe, the Middle East and parts of Central Asia (RIPE NCC) together with Comcast and Danish Network Operator's Group (DKNOG), are organizing the sixth IPv6 focused hackathon. The event is aimed at promoting IPv6 in Denmark, creating new tools for IPv6 measurement visualizations and IPv6 deployment efforts. From the announcement: "Hackathons provide great opportunities for network operators, designers, local community, RIPE Atlas developers and other enthusiastic coders and hackers in developing new and creative tools, meeting others in your field, and exchanging knowledge and experience with people very different from your everyday colleagues."

Details
Event Date: 4-5 November 2017
Location: A super-cool, top secret location in Copenhagen, Denmark

More under: IP Addressing, IPv6

Categories: News and Updates

Domain Registries to Discuss Possibility of ICANN Fee Cuts in Private Meeting This Month

Domain industry news - Wed, 2017-09-06 05:59

Heads of 20 or more gTLD registries will meet privately this month to discuss various topics, including the possibility of a reduction in ICANN fees. Kevin Murphy reporting in Domain Incite: "The Registry CEO Summit is being held in Seattle at the end of September… Jay Westerdal of Top Level Spectrum (.feedback etc) and Ray King of Top Level Design (.design etc) are organizing the event. ... 20 to 25 registry CEOs to attend. ... .CLUB Domains CEO Colin Campbell, who said he will attend, said he intends to bring proposals to the meeting around persuading ICANN to support the industry with marketing support and fee reductions."

More under: Domain Names, ICANN, Registry Services, Top-Level Domains

Categories: News and Updates

Bret Fausett joins Tucows, Statton Hammock to MarkMonitor

Domain Name Wire - Wed, 2017-09-06 00:36

Two lawyers change employers.

Bret Fausett (left) and Statton Hammock (right)

Two legal minds in the domain name industry have found new homes.

Bret Fausett has joined Tucows (NASDAQ: TCX) as Chief Legal Officer and VP, Regulatory Affairs. Fausett left his General Counsel role at Uniregistry in July after nearly six years at the company. Prior to that, he was in private practice.

Statton Hammock is now Vice-President, Global Policy and Industry Development at MarkMonitor, a Clarivate Analytics company. Hammock left Rightside in August subsequent to the company’s acquisition by Donuts. He had been at the company for five years and has prior experience at Network Solutions.



The post Bret Fausett joins Tucows, Statton Hammock to MarkMonitor appeared first on Domain Name Wire | Domain Name News & Views.

No related posts.

Categories: News and Updates

Making Sense of the Domain Name Market - and Its Future

Domain industry news - Tue, 2017-09-05 22:13

With ever more TLDs, where does it make sense to focus resources?

After four years and a quadrupling of internet extensions, what metrics continue to make sense in the domain name industry? Which should we discard? And how do you gain understanding of this expanded market?

For registries, future success is dependent on grasping the changes that have already come. For registrars, it is increasingly important to identify winners and allocate resources accordingly. The question is: how?

The biggest barrier to both these goals, ironically, may be the industry's favorite measure: the number of registrations.

Since the earliest days, registrations have been the main marker of success: Who's up? Who's down? Who's in the top 10? Top five? But even when this approach made sense, it relied on ignoring the elephant in the room: dot-com.

The Verisign dot-com beast remains six times larger than the next largest TLD. But, for a long time, the fact that most of the other gTLDs and ccTLDs (and even sTLDs) were clumped closely together made registration figures the go-to metric.

Except now, in 2017, the same extreme of scale that separates dot-com from the other TLDs exists at the opposite end of the market. There are more than a thousand new gTLDs in the root, but even the largest of them barely touch legacy gTLDs or ccTLDs in terms of numbers of registrations. It may be time to rethink how we look at the market.

Another traditional measure has been the number, or percentage, of parked domains. It used to be that if a domain owner wasn't actually using their domain to host a website, it was a sign the registration was more likely to be dropped or was purely speculative.

But do parked domains still tell that story? That parked domain is often intellectual property protection. It may be part of a planned online expansion. And while assumed to be speculative, often that parked domain is renewed again and again.

This is especially true with older registries. You could argue that in terms of a registry's inherent value, a parked domain that is held by a single owner for many years is more valuable than one with a website that changes hands every year.

Maybe we need to consider more than just whether a domain has a website attached and start digging into the history of its registration.
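One way to act on that suggestion is to measure how long the current registrant has held a domain, rather than only checking whether a website is attached. A sketch, where the whois history records and the `current_tenure_years` helper are invented for illustration:

```python
# Sketch of a registration-history metric: years the current registrant
# has held a domain, computed from (hypothetical) historical whois records.
from datetime import date

# (record_date, registrant) pairs, oldest first -- e.g. from a whois archive
history = [
    (date(2001, 3, 1), "Original Owner LLC"),
    (date(2009, 6, 15), "Original Owner LLC"),
    (date(2017, 9, 5), "Original Owner LLC"),
]

def current_tenure_years(records):
    """Years between the latest record and the first appearance of the current owner."""
    latest_owner = records[-1][1]
    first_seen = next(d for d, owner in records if owner == latest_owner)
    return (records[-1][0] - first_seen).days / 365.25

# A long single-owner tenure suggests a "sticky" registration,
# whether or not a website resolves.
print(round(current_tenure_years(history), 1))
```

A registry could aggregate a tenure figure like this across its zone to argue stability in a way raw registration counts cannot.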

Intertwined

The truth is that the domain name market has been around for a relatively long time now and has become more complex and intertwined with the larger economy than we give it credit for. The market is also unusual in that it has not grown according to demand but in fits and starts, defined by and dependent on the arcane processes and approvals of overseeing body ICANN.

Dot-com is the giant of the internet because it was the only openly commercial online space available at a time when the internet's potential was first realized by businesses and entrepreneurs. Even now the ending ".com" in many ways defines the global address system. While its growth has slowed, it still towers over every other TLD.

Then came small bursts of new gTLDs, joined by more commercialized ccTLDs, which all benefitted from the globalization of the internet. Most of them are roughly the same size: between two and five million registrations. And now comes the new wave of TLDs that has produced a third block of registries: with registrations largely ranging from one thousand to one million.

These three time periods tell a story about the domain name market: that for all its fluidity and its speed, the market is not only stable but also segmented. There is no point in Germany's dot-de dreaming of becoming the same size as dot-com, just as there is no point in dot-shop hoping to rival dot-fr in terms of numbers. Increasingly, peer comparison is going to become more important than pure numbers.

It's also not clear that there is much competition across segments — or even within them — once that initial purchase is made.

Will someone drop their dot-uk domain after they've bought their new dot-website domain? It seems unlikely. A tech business in Spain may look at a dot-es or a dot-tech. But it probably never considered Brazil's dot-br, or dot-racing (because it is a Spanish tech company, not a Brazilian racing company).

Once the decision and purchase are made, the website built and the email set up, the low cost of domain renewal reduces the likelihood of a company dropping it or moving to a new address. It is another of the peculiarities of the market: low price equals less movement. But even that is now being tested by new registries that charge variable "premium" rates for what they believe are more valuable individual names.

The secret of success

For these reasons, future success — for both registries and registrars — most likely lies in two things: high rates of renewal and future growth potential.

The domain renewal rate is increasingly a sign of the overall health of a registry. As the market changes, both that rate and any changes to it will become increasingly important in understanding whether the registry is going up or down in the market overall.

A high renewal rate signals stability and greater value in the registry. If that rate goes up relative to its peers, the competition has likely already picked off what it can. If the rate goes down, the registry may be vulnerable to other options in the market.

The renewal rate will tend to be higher and more stable in older TLDs, and lower and more variable in new gTLDs. But when compared to its peers, it can tell a larger story: too low or too variable may be a warning sign; higher or more stable could indicate a more solid registry, and one worth investing in.

The other biggest driver for success is future growth potential. And for this, it is necessary to look outside the domain name market to the real world.

When it comes to the wealth of new gTLDs — most of which are self-defining words or names — growth potential is going to come down to a combination of brand, good policies and sheer luck.

The focus on registration numbers obscures what may be the long-term successful approach in this vast market. In some cases, aggressive, short-term marketing and low pricing have seen huge, sudden increases in registrations followed by equally huge drop-offs a year later when domains come up for renewal. The largest registries in terms of numbers are also notable for their efforts to tap the vast Chinese internet market. It's an approach that currently pays off in terms of registrations, but is it sustainable?

Politics

As for more traditional registries, future growth depends as much on digital economies and politics as it does on internal policies. Germany's dot-de and the UK's dot-uk have long led the market in terms of registrations. It just so happens that they also have open registration policies and their associated countries have very large and successful digital economies.

A registry that may be interesting to watch is dot-eu, since it represents not a single country but an economic region. Recent anti-European Union sentiment — most strongly defined by the UK and Brexit and the collapse of the Greek economy, but also visible in large movements in Austria, the Czech Republic and Italy, among others — has seemingly slowed dot-eu's growth.

But despite last year's predictions, the European Union appears to have emerged stronger and, thanks to the unpredictable nature of the US presidency, its trading currency, the euro, is on the path to becoming the world's strongest currency. Does this mean that dot-eu will similarly benefit as companies see the growing value in a European trading bloc? We will have to see. But if broader sentiment is increasingly pro-EU then the answer is almost certainly yes.

In a world where you can choose from a huge array of internet extensions, new registrations will increasingly reflect what the registrant wants to say about themselves: who is their market? Are they a traditional or unconventional business? Are they defined by their product, or their country, or their region? Or are they trying to capture the online zeitgeist and ride the wave of a current trend?

For the domain name industry, it is going to be increasingly difficult to track the ebbs and flows of this global market. Which makes choosing the right metrics all the more important. Is it time to kill off the number of registrations as the industry's main measure of value? No. But it is time to start rethinking how the market is segmented and to take a broader view of what represents success.

Written by Kieren McCarthy, Executive Director at IFFOR; CEO at .Nxt

Follow CircleID on Twitter

More under: DNS, Domain Names, Registry Services, Top-Level Domains

Categories: News and Updates

Researchers Expose Over 320 Million Hashed Passwords

Domain industry news - Tue, 2017-09-05 21:32

A group of security researchers has succeeded in cracking over 320 million passwords that were made public as a hashed blacklist. CynoSure Prime, a “password research collective” reports: "Earlier this month (August 2017) Troy Hunt founder of the website Have I been pwned? released over 319 million plaintext passwords compiled from various non-hashed data breaches, in the form of SHA-1 hashes. Making this data public might allow future passwords to be cross-checked in a secure manner in the hopes of preventing password re-use, especially of those from compromised breaches which were in unhashed plaintext. ... Out of the roughly 320 million hashes, we were able to recover all but 116 of the SHA-1 hashes, a roughly 99.9999% success rate. In addition, we attempted to take it a step further and resolve as many 'nested' hashes (hashes within hashes) as possible to their ultimate plaintext forms."
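The cross-checking that Hunt's release enables is straightforward to sketch. The passwords and helper names below are illustrative, not drawn from the actual corpus; a real check would query the full hash list rather than a toy set:

```python
import hashlib

def sha1_hex(s: str) -> str:
    """Uppercase hex SHA-1 digest of a UTF-8 string, as used in the released list."""
    return hashlib.sha1(s.encode("utf-8")).hexdigest().upper()

# A tiny stand-in for the released blacklist: it contains only hashes,
# never the plaintext passwords themselves.
blacklist = {sha1_hex(p) for p in ("password", "letmein", "123456")}

def is_compromised(candidate: str) -> bool:
    """True if the candidate password appears in the hashed blacklist."""
    return sha1_hex(candidate) in blacklist

# A "nested" hash of the kind the researchers resolved: here, the SHA-1
# of an MD5 digest rather than of the plaintext itself (an assumed example).
nested = sha1_hex(hashlib.md5("password".encode("utf-8")).hexdigest())
```

This is why publishing hashes rather than plaintext is considered safe enough to share: a service can reject a compromised password by hashing the candidate and looking it up, without the list ever disclosing the passwords directly.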


More under: Cybersecurity


Security is a System Property

Domain industry news - Tue, 2017-09-05 21:09

There's lots of security advice in the press: keep your systems patched, use a password manager, don't click on links in email, etc. But there's one thing these adages omit: an attacker who is targeting you, rather than whoever falls for the phishing email, won't be stopped by one defensive measure. Rather, they'll go after the weakest part of your defenses. You have to protect everything — including things you hadn't realized were relevant. Security is a systems problem: everything matters, including the links between the components and even the people who use the system.

Passwords are a good illustration of this point. We all know the adage: "pick strong passwords". There are lots of things wrong with this and other simplistic advice with passwords, but we'll ignore most of them to focus on the systems problem. So: what attacks do strong passwords protect against?

The original impetus for this advice came from a 1979 paper by Bob Morris and Ken Thompson. (Morris later became Chief Scientist of the NSA's National Computer Security Center; Thompson is one of the creators of Unix.) When you read it carefully, you realize that strong passwords guard against exactly two threats: someone who tries to login as you, and someone who has hacked the remote site and is trying to guess your password. But strong passwords do nothing if your computer (in those days, computer terminal...) is hacked, or if the line is tapped, or if you're lured to a phishing site and send your password, in the clear, to an enemy site. To really protect your password, then, you need to worry about all of those factors and more.

It's worth noting that Morris and Thompson understood this thoroughly. Everyone focuses on the strong password part, and — if they're at least marginally competent — on password salting and hashing, but few people remember this quote, from the first page of the paper:

Remote-access systems are peculiarly vulnerable to penetration by outsiders as there are threats at the remote terminal, along the communications link, as well as at the computer itself. Although the security of a password encryption algorithm is an interesting intellectual and mathematical problem, it is only one tiny facet of a very large problem. In practice, physical security of the computer, communications security of the communications link, and physical control of the computer itself loom as far more important issues. Perhaps most important of all is control over the actions of ex-employees, since they are not under any direct control and they may have intimate knowledge about the system, its resources, and methods of access. Good system security involves realistic evaluation of the risks not only of deliberate attacks but also of casual authorized access and accidental disclosure.

(True confession: I'd forgotten that they noted the scope of the problem, perhaps because I first read that paper when it originally appeared.)
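The salting and hashing that Morris and Thompson pioneered survives, in evolved form, in modern practice. A minimal sketch using a contemporary key-derivation function — the iteration count and salt size here are illustrative choices, not theirs:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # assumed work factor; tune to your hardware

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Hash a password with a per-password random salt and a slow KDF."""
    if salt is None:
        salt = os.urandom(16)  # random salt defeats precomputed-table attacks
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

As the systems framing of this post suggests, a scheme like this protects only one link: the stored credential. It does nothing against a phished plaintext password or a compromised client.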

I bring this up now because of some excellent reporting about hacking and the 2016 election. Voting, too, is a system — it's not just voting machines that are targets, but rather, the entire system. This encompasses registration, handling of the "poll books" — which may themselves be computerized — the way that poll workers sign in voters, and more. I'll give an example, from the very first time I could vote in a presidential election: the poll workers couldn't find my registration card. I was sent off to a bank of phones to try to call the county election board. The board had far too few phone lines, so I kept getting busy signals, all the while thinking nasty thoughts about attempts to keep Yankees (I'd just moved to North Carolina) and students (I was there for grad school) from voting.

Think of all of the system pieces in just that part of the election. There was the poll worker — was she honest? There was the election book, and whatever processes, mechanisms, software, or people had gone into compiling it. There was the phone bank I was using, the phone network, the phones at the election board, the people there, and their backend systems that had a master copy of the election roll. My story had a happy ending — the poll worker kept checking, and found that my card had been misalphabetized — but if an analogous problem happened today with an electronic poll book, it's hard to see how the poll worker's diligence could have resolved it. (For other interesting systems aspects of voting, including issues with poll books, see an old blog post of mine.)

The systems aspect of voting is apparent to some, of course, including the New York Times reporters who are covering the hacking story:

Michael Wines, who covers election issues for the Times, said that what stood out to him was the vulnerability of the nation's vast Rube Goldberg election system. Elections, he explained, "are run by understaffed, underfinanced and sometimes undertrained local officials, serviced by outside contractors who may or may not be well vetted, conducted with equipment and software that may or may not be secure." [emphasis added]

Almost all security problems are system problems; beware of people who try to sell you simplistic, point solutions. It's not that these solutions are wrong; rather, they have to be examined for their role in securing the system. Consider HTTPS — encrypted — web connections. Unless you're being targeted by law enforcement or a major intelligence agency, the odds of your connection being tapped on the backbone are vanishingly small. However, it's trivial to tap someone's WiFi connection if you're on the same net as them, e.g., in a public hotspot. So — it's a good idea to encrypt web pages, but if the environment is strictly controlled LAN to controlled LAN, that should be far down on your list of security priorities. And remember: encrypting one link does not solve any of the many other vulnerable points in your system.

Written by Steven Bellovin, Professor of Computer Science at Columbia University


More under: Cyberattack, Cybercrime, Cybersecurity


Global Content Removals Based on Local Legal Violations - Where are we Headed?

Domain industry news - Tue, 2017-09-05 20:42

Excerpt from my Internet Law casebook discussing transborder content removal orders, including the Equustek case.

From the Internet's earliest days, the tension between a global communication network and local geography-based laws has been obvious. One scenario is that every jurisdiction's local laws apply to the Internet globally, meaning that the country (or sub-national regulator) with the most restrictive law for any content category sets the global standard for that content. If this scenario comes to pass, the Internet will only contain content that is legal in every jurisdiction in the world — a small fraction of the content we as Americans might enjoy, because many countries restrict content that is clearly legal in the U.S.

Perhaps surprisingly, we've generally avoided this dystopian scenario — so far. In part, this is because many major Internet services create localized versions of their offerings that conform to local laws, which allows the services to make country-by-country removals of locally impermissible content. Thus, the content on google.de might vary pretty substantially from the content on google.com. This localization undermines the 1990s utopian vision that the Internet would enable a single global content database that everyone in the world could uniformly enjoy. However, service localization has also forestalled more dire regulatory crises. So long as google.de complies with local German laws and google.com complies with local U.S. laws, regulators in the U.S. and Germany should be OK...right?

Increasingly, the answer appears to be "no." Google's response to the European RTBF rule has highlighted the impending crisis. In response to the RTBF requirement that search engines remove certain search results associated with a person's name, Google initially de-indexed results only from its European indexes, i.e., Google would scrub the results from Google.de but not Google.com. However, European users of Google can easily seek out international versions of Google's search index. An enterprising European user could go to Google.com and obtain unscrubbed search results — and compare the search results with the localized edition of Google to see which results had been scrubbed.

The French Commission Nationale de l'Informatique et des Libertés (CNIL) has deemed this outcome unacceptable. As a result, it has demanded that Google honor an RTBF de-indexing request across all of its search indexes globally. In other words, if a French resident successfully makes a de-indexing request under European data privacy laws, Google should not display the removed result to anyone in the world, even searchers outside of Europe who are not subject to European law.

The CNIL's position is not unprecedented; other governmental agencies have made similar demands for the worldwide suppression of content they object to. However, the demand on Google threatens to break the Internet. Either Google must cease all of its French operations to avoid being subject to the CNIL's interpretation of the law, or it must give a single country the power to decide what content is appropriate for the entire world — which, of course, could produce conflicts with the laws of other countries.

Google proposed a compromise of removing RTBF results from its European indexes, and if a European attempts to log into a non-European version of Google's search index, Google will dynamically scrub the results it delivers to the European searcher. As a result, if the European searcher tries to get around the European censored results, he or she will still not see the full search results. (Of course, it would be easy to bypass Google's dynamic scrubbing using VPNs). CNIL has rejected Google's compromise as still unacceptable.

If CNIL gets its way, other governments with censorious impulses will demand equal treatment. But even Google's "compromise" solution — walling off certain information from being available in a country that seeks to censor that information — will be helpful to censors. In effect, the RTBF ruling forces Google to build a censorship infrastructure that regulators can coopt for other censorious purposes. Thus, either way, the resolution to the RTBF's geography conundrum provides a preview of the future of global censorship.

The Equustek Case

The local violation/global removal debate is taking place in other venues as well. In 2017, the Canada Supreme Court ordered Google to globally remove search results based on alleged Canadian legal violations. Google Inc. v. Equustek Solutions Inc., 2017 SCC 34.

In that case, Datalink, a competitor of Equustek, sold products that allegedly infringed Equustek's intellectual property rights. After Equustek sued Datalink, Datalink relocated to an unknown location outside of Canada, putting it out of the reach of Canadian courts. Equustek asked Google to deindex Datalink's website. Google partially deindexed the site from google.ca, but Equustek sought more relief. The Canada Supreme Court ordered global deindexing of Datalink's website:

The problem in this case is occurring online and globally. The Internet has no borders — its natural habitat is global. The only way to ensure that the interlocutory injunction attained its objective was to have it apply where Google operates — globally. As Fenlon J. found, the majority of Datalink's sales take place outside Canada. If the injunction were restricted to Canada alone or to google.ca, as Google suggests it should have been, the remedy would be deprived of its intended ability to prevent irreparable harm. Purchasers outside Canada could easily continue purchasing from Datalink's websites, and Canadian purchasers could easily find Datalink's websites even if those websites were de-indexed on google.ca. Google would still be facilitating Datalink's breach of the court's order which had prohibited it from carrying on business on the Internet....

The order does not require that Google take any steps around the world, it requires it to take steps only where its search engine is controlled....

This is not an order to remove speech that, on its face, engages freedom of expression values, it is an order to de-index websites that are in violation of several court orders....

This does not make Google liable for this harm. It does, however, make Google the determinative player in allowing the harm to occur.

The court noted that Google admitted it would be easy to deindex Datalink's domain name, and the court noted that Google regularly deindexes content for other reasons, such as the DMCA online safe harbor.

The court dismissed the risk of international conflicts-of-laws because everyone apparently accepted that Datalink would violate Equustek's IP rights under other countries' laws. However, the court was surprisingly unspecific about the alleged IP violations, which apparently included trademarks and trade secrets. Due to the ambiguities about the alleged IP violations, the court avoided some subtle IP issues, such as the scope of Equustek's trademark rights (usually trademark rights don't reach beyond a country's borders, so a Canadian court could not order a defendant to stop infringing trademark rights in other countries) and the likelihood that Canadian trade secret laws and remedies differ from the laws and remedies of other countries. See Ariel Katz, Google v. Equustek: Unnecessarily Hard Cases Make Unnecessarily Bad Law, ArielKatz.org, June 29, 2017.

Because the court sidestepped the international conflicts-of-laws issue, the Equustek case's facts do not implicate the more problematic situation where Datalink's content violates Canadian law but is legal in other countries, yet a Canadian court order under Canadian law prevents the content from being available in countries where it was legal. (The CNIL-demanded rule would reach this outcome, because RTBF-scrubbed content illegal in Europe would be almost certainly legal in the U.S.). The court said that Google could challenge the injunction in Canadian courts if the injunction violates other countries' laws — but will Google really spend substantial money and time to defend third-party content by going back to a Canadian court to adjudicate the content's legitimacy?

In response to the opinion, Canadian law professor Michael Geist wrote:

What happens if a Chinese court orders it to remove Taiwanese sites from the index? Or if an Iranian court orders it to remove gay and lesbian sites from the index? Since local content laws differ from country to country, there is a great likelihood of conflicts. That leaves two possible problematic outcomes: local courts deciding what others can access online or companies such as Google selectively deciding which rules they wish to follow. The Supreme Court of Canada did not address the broader implications of the decision, content to limit its reasoning to the need to address the harm being sustained by a Canadian company, the limited harm or burden to Google, and the ease with which potential conflicts could be addressed by adjusting the global takedown order. In doing so, it invites more global takedowns without requiring those seeking takedowns to identify potential conflicts or assess the implications in other countries.

Michael Geist, Global Internet Takedown Orders Come to Canada: Supreme Court Upholds International Removal of Google Search Results, MichaelGeist.ca, June 28, 2017.

Does the Equustek ruling mean that plaintiffs (both Canadian and non-Canadian) will flock to Canadian courts to sue non-Canadian defendants solely to get global deindexing orders?

Note that the Equustek ruling (and the CNIL dispute) avoids an underlying jurisdictional issue because Google has a substantial physical presence in both Canada and Europe. Would Canada or Europe have jurisdiction over an Internet service that operates exclusively from the United States?

I encourage you to do a thought exercise: project yourself 20 years in the future. What do you think will be the state of the law on global removals based on local violations? Do you think most countries will have embraced the Equustek approach broadly? If so, do you think the Internet (however you define it) will be better or worse as a result?

* * *

After I wrote this, Google sought legal relief in US courts from the Equustek ruling. For useful perspective on Google's move, read Daphne Keller's analysis.

Written by Eric Goldman, Professor, Santa Clara University School of Law


More under: Censorship, Internet Governance, Law, Policy & Regulation


An Opinion in Defence of NATs

Domain industry news - Tue, 2017-09-05 18:22

Network Address Translation has often been described as an unfortunate aberration in the evolution of the Internet, and one that will be expunged with the completion of the transition to IPv6. I think that this view, which appears to form part of today's conventional wisdom about the Internet, unnecessarily vilifies NATs. In my opinion, NATs are far from being an aberration; instead, I see them as an informative step in the evolution of the Internet, particularly as they relate to possibilities in the evolution of name-based networking. Here's why.

Background

It was in 1989, some months after the US National Science Foundation-funded IP backbone network had been commissioned, and at a time when there was visible momentum behind the adoption of IP as a communications protocol of choice, that the first inklings of the inherently finite nature of the IPv4 address space became apparent in the Internet Engineering Task Force (IETF) [1].

Progressive iterations over the IP address consumption numbers reached the same general conclusion: that the momentum of deployment of IP meant that the critical parts of the 32-bit address space would be fully committed within 6 or so years. It was predicted that by 1996 we would have fully committed the pool of Class B networks, which encompassed one-quarter of the available IPv4 address space. At the same time, we were concerned about the pace of growth of the routing system, so stopgap measures that involved assigning multiple Class C networks to sites could've staved off exhaustion for a while, but perhaps at the expense of the viability of the routing system [2].

Other forms of temporary measures were considered by the IETF, and the stopgap measure adopted in early 1994 was the dropping of the implicit network/host partitioning of the address in classful addressing in favour of the use of an explicit network mask, or "classless" addressing. This directly addressed the pressing problem of the exhaustion of the Class B address pool: the observation at the time was that while a Class C network was too small for many sites given the recent introduction of the personal computer, a Class B network was too large, and many sites were unable to achieve reasonable levels of address utilisation. This move to classless addressing (and classless routing, of course) gained some years of breathing space before the major impacts of address exhaustion, which was considered enough time to complete the specification and deployment of a successor IP protocol [3].
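The shift from implicit class-based masks to explicit prefixes is easy to see with Python's `ipaddress` module; the prefixes below are illustrative choices:

```python
import ipaddress

# Under classful addressing, the leading bits of the address implied the mask:
# a Class B network was always, in effect, a /16.
class_b = ipaddress.ip_network("172.16.0.0/16")

# Under classless (CIDR) addressing, the mask is explicit, so a site that has
# outgrown a Class C (/24, 256 addresses) can receive, say, a /20 instead of
# being handed an entire Class B.
cidr_block = ipaddress.ip_network("172.16.0.0/20")

print(class_b.num_addresses)     # 65536
print(cidr_block.num_addresses)  # 4096
```

The /20 here wastes a sixteenth of what the Class B would have, which is the whole point: explicit masks let allocation track actual need.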

In the search for a successor IP protocol, several ideas were promulgated. The decisions around the design of IPv6 reflected a desire to make minimal changes to the IPv4 specification, while enlarging the address fields, changing some of the encoding of control functions through the use of the extension header concept, and changing the fragmentation behaviour to stop routers from performing fragmentation on the fly [4].

The common belief at the time was that the adoption of classless addressing in IPv4 bought sufficient time to allow the deployment of IPv6 to proceed. It was anticipated that IPv6 would be deployed across the entire Internet well before the remaining pools of IPv4 addresses were fully committed. This, together with a deliberate approach for hosts to prefer IPv6 for communication when both IPv4 and IPv6 were available, would mean that the use of IPv4 would naturally dwindle away as more IPv6 was deployed, and that no 'flag day' or other means of coordinated action would be needed to complete this Internet-wide protocol transition [5].

In the flurry of documents that explored concepts of a successor protocol was one paper that described a novel concept of source address sharing [6]. If a processing unit was placed on the wire, it was possible to intercept all outbound TCP and UDP packets and replace the source IP address with a different address and change the packet header checksum and then forward the packet on towards its intended destination. As long as this unit used one of its own addresses as the new address, then any response from the destination would be passed back to this unit. The unit could then use the other fields of the incoming IP packet header, namely the source address and the source and destination port addresses, to match this packet with the previous outgoing packet and perform the reverse address substitution, this time replacing the destination address with the original source address of the corresponding outgoing packet. This allowed a "public" address to be used by multiple internal end systems, provided that they were not all communicating simultaneously. More generally a pool of public addresses could be shared across a larger pool of internal systems.

It may not have been the original intent of the inventors of this address sharing concept, but the approach was enthusiastically taken up by the emerging ISP industry in the 1990s. They were seeing the emergence of the home network and were unprepared to respond to it. The previous deployment model, used by dial-up modems, was that each active customer was assigned a single IP address as part of the session start process. A NAT in the gateway to the home network could extend this "single IP address per customer" model to include households with home networks and multiple attached devices. To do so efficiently a further refinement was added, namely that the source port was part of the translation. That way a single external address could theoretically be shared by up to 65,535 simultaneous TCP sessions, provided that the NAT could rewrite the source port along with the source address [7].
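The translation table at the heart of this refinement can be sketched as a toy model. The addresses, port range, and class name here are assumed for illustration; a real NAT also tracks protocols and timeouts and rewrites packet checksums:

```python
from itertools import count

class Napt:
    """Toy NAPT table: maps (private IP, private port) to a port on one public address."""

    def __init__(self, public_addr: str):
        self.public_addr = public_addr
        self.next_port = count(40000)  # assumed external port allocation range
        self.out = {}    # (private IP, private port) -> external port
        self.back = {}   # external port -> (private IP, private port)

    def outbound(self, src_ip: str, src_port: int):
        """Rewrite an outgoing packet's source, allocating a mapping on first use."""
        key = (src_ip, src_port)
        if key not in self.out:
            ext = next(self.next_port)
            self.out[key] = ext
            self.back[ext] = key
        return self.public_addr, self.out[key]

    def inbound(self, ext_port: int):
        """Reverse-translate a reply; packets to unknown ports are dropped (None)."""
        return self.back.get(ext_port)

# 203.0.113.1 is a documentation-range address standing in for the single
# public address assigned to the customer.
nat = Napt("203.0.113.1")
```

Note the asymmetry this table creates: outbound traffic can always allocate a mapping, but inbound traffic is only deliverable if a matching outbound packet went first — which is exactly the client/server split discussed below.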

For the ensuing decade, NATs were deployed at the edge of the network, and have been used by the ISPs as a means of externalising the need to conserve IP addresses. The address sharing technology was essentially deployed by and operated by the end customer, and within the ISP network, each connected customer still required just a single IP address.

But perhaps that role is underselling the value of NATs in the evolution of the Internet. NATs provided a "firewall" between the end customer and the carrier. The telephony model shared the same end-to-end service philosophy, but it achieved this by exercising overarching control over all components of the service. For many decades telephony was a controlled monopoly that was intolerant of any form of competitive interest in the customer. The Internet did not go down this path, and one of the reasons why this didn't happen is that NATs allowed the end customer to populate their home network with whatever equipment they chose, and via a NAT, present to the ISP carrier as a single "termination" with a single IP address. This effective segmentation of the network created a parallel segmentation in the market, which allowed the consumer services segment to flourish without carrier-imposed constraint. And at the time that was critically important. The Internet wasn't the next generation of the telephone service. It was an entirely different utility service operating in an entirely different manner.

More recently, NATs have appeared within the access networks themselves, performing the address sharing function across a larger set of customers. This was first associated with mobile access networks but has been used in almost all recent deployments of access networks, as a response to the visible scarcity in the supply of available IPv4 addresses.

NATs have not been universally applauded. Indeed, in many circles within the IETF NATs were deplored.

It was observed that NATs introduced active middleware into an end-to-end architecture, and divided the pool of attached devices into clients and servers. Clients (behind NATs) had no constant IP address and could not be the target of connection requests. Clients could only communicate with servers, not with each other. It appeared to some to be a step in a regressive direction that imposed a reliance on network middleware with its attendant fragility and imposed an asymmetry on communication [8].

For many years, the IETF did not produce standard specifications for the behaviour of NATs, particularly in the case of handling of UDP sessions. As UDP has no specific session controls, such as session opening and closing signals, how was a NAT meant to maintain its translation state? In the absence of a specific standard specification different implementations of this function made different assumptions and implemented different behaviour, introducing another detrimental aspect of NATs: namely variability.

How could an application operate through a NAT if the application used UDP? The result was the use of various NAT discovery protocols that attempted to provide the application with some understanding of the particular form of NAT behaviour that it was encountering [9].
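The heart of what such discovery protocols (STUN and its relatives [9]) determine can be sketched as a simple classification: an application sends probes to two different external servers and compares the public endpoint each one reports back. The function below is a simplified illustration of that logic, not the full protocol.

```python
def classify_mapping(observations):
    """
    Classify NAT mapping behaviour from probe results.

    `observations` is a list of (destination_endpoint, mapped_public_endpoint)
    pairs, as reported back by different external servers. If the NAT presents
    the same public endpoint to every destination, its mapping is
    endpoint-independent (easy to traverse); if the mapping changes per
    destination, traversal is much harder.
    """
    mapped_endpoints = {mapped for _dest, mapped in observations}
    if len(mapped_endpoints) == 1:
        return "endpoint-independent mapping"
    return "address/port-dependent mapping"
```

For example, if two servers both see the client as `203.0.113.1:40000`, the mapping is endpoint-independent; if the second server sees a different public port, the application knows it is behind the harder kind of NAT.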

NATs in Today's Internet

Let's now look at the situation today: the Internet of early 2017. The major hiatus in the supply of additional IPv4 addresses commenced in 2011, when the central IANA pool of unallocated IPv4 addresses was exhausted. Progressively, the RIRs ran down their general allocation address pools: APNIC in April 2011, the RIPE NCC in September 2012, LACNIC in 2014 and ARIN in 2015. The intention from the early 1990s was that the impending threat of address exhaustion would be the overwhelming impetus to deploy the successor protocol. By that reasoning, the Internet would have switched to exclusive use of IPv6 before 2011. Yet that has not happened.

Today at least 90% of the Internet's connected device population still exclusively uses IPv4, while the remainder use both IPv4 and IPv6 [10]. This is an all-IPv4 network with a minority proportion also using IPv6. Estimates of the device population of today's Internet vary, but they tend to fall within a band of 15 billion to 25 billion connected devices [11]. Yet only some 2.8 billion IPv4 addresses are visible in the Internet's routing system. This implies that, on average, each announced public IPv4 address serves between 3 and 8 hidden internal devices.

Part of the reason why estimates of the total population of connected devices are so uncertain is that NATs occlude these internal devices so effectively that any conventional internet census cannot expose these hidden internal device pools with any degree of accuracy.

And part of the reason why the level of IPv6 deployment is still so low is that users, and the applications that they value, appear to operate perfectly well in a NATed environment. The costs of NAT deployment are offset by preserving the value of existing investment, both as a tangible investment in equipment and as an investment in knowledge and operational practices in IPv4.

NATs can be incrementally deployed, and they do not rely on some ill-defined measure of coordination with others to operate effectively. They are perhaps one of the best examples of a piecemeal, incremental deployment technology in which the incremental costs of deployment directly benefit the entity that deploys the technology. This is in direct contrast to IPv6 deployment, where the ultimate objective, the comprehensive replacement of IPv4 on the Internet, can only be achieved once a significant majority of the Internet's population operates in a mode that supports both protocols. Until then, deployments of IPv6 are essentially forced to operate in dual-stack mode and also support IPv4 connectivity. In other words, the incremental costs of deploying IPv6 only generate incremental benefit once others take the same decision to deploy the technology. Viewed from the perspective of an actor in this space, the pressures and costs of stretching the IPv4 address space to encompass an ever-growing Internet are a constant factor. The decision to complement that with a deployment of IPv6 is an additional cost that, in the short term, does not offset any of the IPv4 costs.

So, for many actors the question is not "Should I deploy IPv6 now?" but "How far can I go with NATs?" By squeezing some 25 billion devices into 2 billion active IPv4 addresses, we have used a compression ratio of around 14:1, or the equivalent of adding four additional bits of address space. These bits have been effectively 'borrowed' from the TCP and UDP port space. In other words, today's Internet uses a 36-bit address space in aggregate to allow these 25 billion devices to communicate.
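The "extra bits" arithmetic can be checked directly. Using the round figures from the text (estimates, not measurements), the compression ratio maps to a number of borrowed address bits via a base-2 logarithm:

```python
import math

devices = 25e9        # upper estimate of connected devices
public_v4 = 2e9       # active public IPv4 addresses, roughly
ratio = devices / public_v4
extra_bits = math.log2(ratio)           # bits borrowed from the port space
effective_bits = 32 + math.ceil(extra_bits)

print(round(ratio, 1), round(extra_bits, 2), effective_bits)
# -> 12.5 3.64 36
```

Rounding the ratio up to the next power of two gives four borrowed bits, and hence the 36-bit aggregate address space described above.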

Each additional bit doubles this pool, so the theoretical maximum of a comprehensively NATed IPv4 environment is 48 bits, fully accounting for the 32-bit address space and the 16-bit port space. This is certainly far less than IPv6's 128 bits of address space, but the current division of IPv6 into a 64-bit network prefix and a 64-bit interface identifier drops the available IPv6 address space to 64 bits. The prevalent use of a /48 as a site prefix introduces further address use inefficiencies that effectively drop the IPv6 address space to the equivalent of some 56 bits.

NATs can be pushed harder. The "binding space" for a NAT is a 5-tuple consisting of the source and destination IP addresses, the source and destination ports (96 bits between them) and a protocol identifier. This 96-bit space is a highly theoretical ceiling, but the pragmatic question is how much of it can be exploited in a cost-effective manner, such that the marginal cost of exploitation is lower than the cost of an IPv6 deployment.
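The bit budgets quoted in the last few paragraphs can be tallied directly. These are the author's round figures for the field widths, not measurements of what is practically exploitable:

```python
# Back-of-envelope bit budgets for the spaces discussed above.
ipv4_addr_bits = 32
port_bits = 16
protocol_bits = 8   # the fifth element of the 5-tuple

# One public address plus its port range: the per-address NAT ceiling.
nat_ceiling = ipv4_addr_bits + port_bits                    # 48 bits

# Source/destination addresses plus source/destination ports.
binding_fields = 2 * ipv4_addr_bits + 2 * port_bits         # 96 bits

# The full 5-tuple, if the protocol field is counted as well.
full_tuple = binding_fields + protocol_bits                 # 104 bits

print(nat_ceiling, binding_fields, full_tuple)  # -> 48 96 104
```

The gap between the 48-bit per-address ceiling and the 96-bit binding space is what "pushing NATs harder" would have to exploit.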

NATs as Architecture

NATs appear to have pushed applications to a further level of refinement and abstraction, in ways that were at one point considered desirable objectives rather than onerous limitations. The maintenance of both a unique fixed endpoint address space and a uniquely assigned name space for the Internet could be regarded as an expensive luxury when it appears that only one of these spaces is strictly necessary to ensure the integrity of communication.

The IPv4 architecture made several simplifying assumptions. One of these was that an IPv4 address was overloaded with both the unique identity of an endpoint and its network location. In an age when computers were bolted to the floor of a machine room, this seemed like a very minor assumption, but in today's world the overwhelming number of connected devices are portable devices that constantly change their location, both physically and in terms of network attachment. This places stress on the IP architecture, and the result is that IP is variously tunneled or switched in the final-hop access infrastructure to preserve the overloaded semantics of IP addresses.

NATs deliberately disrupt this relationship, and the presented client side address and port have a particular interpretation and context only for the duration of a session.

In the same way that clients now share IP addresses, services now also share addresses. Applications cannot assume that the association of a name to an IP address is a unique 1:1 relationship. Many service-identifying names may be associated with the same IP address, and in the case of multi-homed services, it can be the case that the name is associated with several IP addresses.

With this change comes the observation that IP addresses are no longer the essential "glue" of the Internet. They have changed to a role of ephemeral session tokens that have no lasting semantics. NATs are pushing us to a different network architecture that is far more flexible: a network that uses names as the essential glue that binds it together.

We are now in a phase of the Internet's evolution where the address space is no longer unique, and we rely on the name space to offer coherence to the network.

From that perspective, what does IPv6 really offer?

More address bits? Well, perhaps not all that much. The space created by NATs operates within a 96-bit vector of address and port components, and the usable space may well approach the equivalent of a 50-bit conventional address architecture. On the other hand, the IPv6 address architecture has stripped off some 64 bits for an interface identifier and conventionally uses a further 16 bits as a site identifier. The resulting space is of the order of 52 bits. It's not clear that the two pools of address tokens differ all that much in size.

More flexibility? IPv6 is a return to the overloaded semantics of IP addresses as unique endpoint tokens that provide a connected device with a static location and a static identity. This appears somewhat ironic given the observation that the Internet is increasingly composed of battery-powered mobile devices of various forms.

Cheaper? Possibly, in the long term, but not in the short term. Until we get to the "tipping point" that would allow a network to operate solely using IPv6 without any visible impact on the network's user population, every network must still provide a service using IPv4.

Permanent address-to-endpoint association? Well, not really. Not since we realised that a fixed interface identifier represented an unacceptable privacy leak. These days IPv6 clients use so-called "privacy addresses" as their interface identifier, and change this local identifier value on a regular basis.
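The core of the privacy-address mechanism (RFC 4941) is simply a randomised 64-bit interface identifier joined to the network prefix. The sketch below illustrates that idea only; real implementations also manage address lifetimes and regeneration schedules, which are omitted here.

```python
import ipaddress
import secrets


def temporary_interface_id():
    """
    Generate a random 64-bit IPv6 interface identifier in the spirit of
    RFC 4941 "privacy addresses". The universal/local bit is cleared to
    mark the identifier as not globally unique.
    """
    iid = secrets.randbits(64)
    # The u/l bit is the 0x02 bit of the first octet, i.e. bit 57 of the IID.
    iid &= ~(1 << 57)
    return iid


def address_from(prefix_hex, iid):
    """Join a 64-bit network prefix (as hex) with an interface identifier."""
    value = (int(prefix_hex, 16) << 64) | iid
    return str(ipaddress.IPv6Address(value))
```

A host regenerates the identifier periodically, so its full address changes even though the routed /64 prefix stays fixed; for example, `address_from("20010db800000001", temporary_interface_id())` yields a fresh address under the documentation prefix `2001:db8:0:1::/64` on each call.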

Perhaps we should appreciate the role of NATs in supporting the name-based connectivity environment that is today's Internet. It was not a deliberately designed outcome, but a product of incremental evolution that has responded to the various pressures of scarcity and desires for greater flexibility and capability. Rather than eschewing NATs in the architecture as an aberrant deviation in response to a short-term situation, we may want to contemplate an Internet architecture that embraces a higher level of flexibility of addressing. If the name space is truly the binding glue of the Internet, then perhaps we might embrace a view that addresses are simply needed to distinguish one packet flow from another in the network, and nothing more.

Appreciating NATs

When NATs were first introduced to the Internet, they were widely condemned as an aberration in the Internet's architecture. And in some ways, NATs have directly confronted the model of a stateless packet switching network core and capable attached edge devices.

But that model has been a myth for decades. The Internet as deployed is replete with various forms of network "middleware," and the concept of a simple stateless packet-switching network infrastructure has been relegated to the status of a historical, somewhat abstract concept.

In many ways, this condemnation of NATs was unwarranted, as we can reasonably expect that network middleware is here to stay, irrespective of whether the IP packets are formatted as IPv4 or IPv6 and irrespective of whether the outer IP address fields in the packets are translated or not.

Rather than being condemned, perhaps we should appreciate the role that NATs play in the evolution of the architecture of the Internet.

We have been contemplating what it means to have a name-based data network, where instead of using a fixed relationship between names and IP addresses, we eschew this mapping and perform network transactions by specifying the name of the desired service or resource [12]. NATs are an interesting step in this direction, where IP addresses have lost their fixed association with particular endpoints, and are used more as ephemeral session tokens than endpoint locators. This certainly appears to be an interesting step in the direction of named data networking.

The conventional wisdom is that the endpoint of this current transitioning Internet is an IPv6 network that has no further use for NATs. This may not be the case. We may find that NATs continue to offer an essential level of indirection and dynamic binding capability in networking that we would rather not casually discard. It may be that NATs are a useful component of network middleware and that they continue to have a role on the Internet well after this transition to IPv6 has been completed, whenever that may be!

References

[1] F. Solensky, "Continued Internet Growth", Proceedings of the 18th Internet Engineering Task Force Meeting, August 1990.
[2] H. W. Braun, P. Ford and Y. Rekhter, "CIDR and the Evolution of the Internet", SDSC Report GA-A21364, Proceedings of INET'93, Republished in ConneXions, September 1993.
[3] V. Fuller, T. Li, J. Yu and K. Varadhan, "Classless Inter-Domain Routing (CIDR): An Address Assignment and Aggregation Strategy", Internet Request for Comment (RFC) 1519, September 1993.
[4] S. Bradner and A. Mankin, "The Recommendation for the IP Next Generation Protocol", Internet Request for Comment (RFC) 1752, January 1995.
[5] D. Wing and A. Yourtchenko, "Happy Eyeballs: Success with Dual-Stack Hosts," Internet Request for Comment (RFC) 6555, April 2012.
[6] P. Tsuchiya and T. Eng, "Extending the IP Internet Through Address Reuse", ACM SIGCOMM Computer Communications Review, 23(1): 16-33, January 1993.
[7] P. Srisuresh and D. Gan, "Load Sharing using IP Network Address Translation (LSNAT)", Internet Request for Comment (RFC) 2391, August 1998.
[8] T. Hain, "Architectural Implications of NAT", Internet Request for Comment (RFC) 2993, November 2000.
[9] G. Huston, "Anatomy: A Look Inside Network Address Translators," The Internet Protocol Journal, vol. 7. No. 3, pp. 2-32, September 2004.
[10] IPv6 Deployment Measurement, https://stats.labs.apnic.net/ipv6/XA.
[11] Internet of Things Connected devices, 2015 – 2025
[12] L. Zhang, et al., "Named Data Networking," ACM SIGCOMM Computer Communication Review, vol. 44, no. 3, pp. 66-73, July 2014.

Written by Geoff Huston, Author & Chief Scientist at APNIC


More under: IP Addressing, IPv6

Categories: News and Updates

10 Notable NameJet Sales from August

Domain Name Wire - Tue, 2017-09-05 18:02

A low month but some notable sales.

NameJet sold 65 names for more than $2,000 each last month, for a total of only $333k. That's a lot less dollar value than usual (about half). I gather the low number is down to a couple of reasons. First, August is a slow month because a lot of people are on vacation. Second, there was the shill bidding issue in July and the auctions canceled as a result of it.

There were still a number of notable sales. Here are ones that caught my eye.

Ditan.com $27,821 – It was bought by someone in China, so my guess is this has something to do with Ditan Park, not the migraine drug.

ArganOil.com $13,255 – I had to look this one up. It's an oil that people dip bread in, and it is also used in cosmetics.

Emojis.com $26,100 – People are wasting a lot of money on emoji domain names, but the emoji ecosystem is very strong.

Miho.com $7,500 – This is a Japanese name and is also used by some companies.

Chard.com $7,255 – Someone whose last name is Chard owns it now.

Dispensers.com $4,288 – Not to be confused with Dispensaries.

MarketMakers.com $3,279 – A big financial and economic term.

YouTubd.com $2,494 – How many visits do you think this typo gets?

CSR.org $2,312 – CSR is a common shorthand for Customer Service Representative.

BitcoinTechnology.com $2,109 – It wouldn't be a sales list without a cryptocurrency domain!





Categories: News and Updates
