News and Updates

Mormon church switching from 3-letter to 19-letter domain name

Domain Name Wire - Tue, 2019-03-05 17:52

Church goes from short to long with domain name switch.

The Church of Jesus Christ of Latter-day Saints is changing its domain name to a much longer one.

Visitors will soon go to ChurchofJesusChrist.org rather than LDS.org, which was short for Latter-day Saints.

While most organizations prefer a shorter domain to a longer one, there was a special circumstance here: Jesus said to do it. Well, assuming he foresaw the internet, he would have suggested it.

In a news release about the change, the Mormon church stated:

The Church of Jesus Christ of Latter-day Saints is the name of the Church Latter-day Saints believe came by revelation from the Lord Jesus Christ Himself (see Doctrine and Covenants 115:4). “Jesus Christ directed us to call the Church by His name because it is His Church, filled with His power,” President Russell M. Nelson has said.

The Church is now making changes to many of its communication channels to reflect the faith’s full name and better convey commitment to follow Jesus Christ…

Mormon.org will eventually be merged into the ChurchofJesusChrist.org domain as well.


An Update From CIRA on IoT Security

Domain industry news - Tue, 2019-03-05 17:22

Last April, I shared information about a multistakeholder process that CIRA is part of, which seeks to identify and guide the development of policy around the Internet of Things (IoT), putting security at the heart of internet innovations in Canada.

Since the formation of this process, we've made quite a bit of progress and I'm pleased to share some of that with you. In particular, we've taken solid steps forward in CIRA's IoT Secure Home Gateway project.

Before I begin, I want to stress, once again, the threats that IoT devices pose. There are cheaply made devices, growing in popularity, which share similar software and features. Their users have little to no ability to secure or update these devices. Wondering what types of devices I'm referring to? Think baby monitors, smart light bulbs and that internet-connected singing puppy your nephew is so fond of.

IoT devices like these are a security and privacy risk to individual users. This can include bad actors taking control of your Wi-Fi-enabled webcams or internet-connected children's toys. However, the greater concern for me is when these insecure devices are taken over for the purposes of a distributed denial-of-service (DDoS) attack, whereby hundreds or thousands of devices are used to attack core internet infrastructure or services. The multistakeholder group I'm part of called this a potential IoT Zombie Apocalypse. With the scale and growth of these devices, and the threat that entails, that feels like an apt description.

The Network Resiliency Group: Three defence approaches

As part of the larger multistakeholder approach to IoT security in Canada, I'm part of the Network Resiliency Group. We're primarily concerned with the weaponization of IoT devices, whereby a device can be compromised from the internet or from other internet-connected devices and used to attack or take down major internet infrastructure.

Our work has identified three approaches to defence.

  1. Scale existing DDoS mitigation mechanisms.
  2. Directly address the insecurity of IoT devices through improved security design and lifecycle management practices, encouraged via standards, awareness, examples and regulation.
  3. Network-based defences for IoT for the home and small business.

There is a Network Resiliency Working Group final report available for those who want to dig a little deeper into our work and into each of these approaches. It's quite comprehensive and worth a read.

In the meantime, I'll focus on one part, which falls under the third approach and is near and dear to my heart: CIRA's IoT Secure Home Gateway project.

CIRA's IoT Secure Home Gateway: Innovation in securing IoT devices

CIRA has an innovation hub, called CIRA Labs. It's where ideas are sparked, brought to life and tested. Some projects turn into products and services, or become integrated into CIRA's work. Others don't. As the lead on CIRA Labs, I'm excited about a current project I'm working on all around securing IoT devices in Canadian homes.

CIRA Labs is developing a functional prototype, open source software and new standards for a next-generation secure home gateway and a home registry solution that protects IoT devices and the internet from each other through security controls.

This project started as an idea in late 2016 after the Mirai Dyn attack. We knew we had to do something to mitigate the risk of home-based large-scale DDoS attacks. We decided to embark on a project focused on turning current home gateways into secure home gateways.

The hypothesis we are testing is whether we can apply an enterprise-type security framework to home and small business networks while keeping it simple to use.

We did a quick assessment of the current state of home networks and concluded there are no standard home network security frameworks, especially to onboard the upcoming wave of new IoT devices.

Then we looked at the state of IoT and its security landscape. The major observation is that when we add a new IoT device to a home network, it is granted full access to the entire internet, at full speed, with full access to the internal network as well, and with sufficient access to the Wi-Fi keys that it can impersonate other devices. Once an IoT device is compromised, there are no facilities to detect anomalous traffic patterns and quarantine the device.

Another important aspect of today's IoT devices is their dependence on the cloud to provide their services. A requirement of the secure home gateway is to provide the users of the home network with secure remote access to it. Therefore, a secure home gateway needs a domain name so that it is reachable from the internet, and the devices within the home can benefit from being named as well. With this, a secure connection to the home network is possible.

Having the IoT vendor send all your internal home video feeds, audio feeds and personal information to a cloud in a foreign country where Canadians have no privacy rights (the U.S., or wherever the IoT vendor's servers are located) is an unnecessary privacy risk. If someone knocks on your door, the camera should stream the video, encrypted, directly to your mobile phone, not via another jurisdiction.

CIRA is working with multiple local and international partners to develop this secure home gateway solution. As the steward of the .CA ccTLD, we are the experts who can provide names as part of that solution. Our goal is to have a functional prototype and application that you can download by the end of March 2019. It's worth noting that over the course of the last year we found many IoT security and home gateway initiatives that complement our work. We tried to ensure there is no overlap and to integrate the available solutions into the secure home gateway project.

From a technology standpoint, we are betting on market adoption of the Internet Engineering Task Force (IETF) Manufacturer Usage Description (MUD) specification, currently an internet draft.

We're in phase two of this project, which will include a greater focus on building a user-friendly app so that everyone, no matter their level of technical ability, can use it. We're also looking to standardize the API between the app, the home gateway and MUD servers.

There are several other steps underway, which I encourage you to explore via our CIRA Labs GitHub, including the many challenges we're trying to address. If we're successful, we will significantly decrease the threat posed by insecure IoT devices, but we've got a lot of work left to do. You can also learn more about this project and others on the CIRA Labs webpage.

Getting MUD-dy in March

MUD is an authoritative identifier of IoT devices that allows manufacturers to expose the identity and intended use of their devices using an IETF-approved standard. These standards are key to our gateway project because they provide the instructions that say who or what can communicate with a given device. For example, if you have a smart refrigerator, MUD will help ensure that the only communication occurring is between your fridge and you, and between your fridge and its manufacturer. This cuts out any unwanted traffic to or from your smart fridge, as the sketch below illustrates.
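To make the idea concrete, here is a minimal sketch of the kind of allow-list a MUD description expresses and how a gateway might enforce it. The device names and endpoints are hypothetical, and a real MUD file is a JSON document rather than a Python table.

    # Toy MUD-style policy: each device class may talk only to named endpoints.
    # Device names and endpoints here are hypothetical.
    ALLOWED_FLOWS = {
        "smart-fridge": {"fridge-vendor.example.com", "owner-app.example.com"},
    }

    def is_permitted(device: str, peer: str) -> bool:
        """Return True if the device's policy allows traffic to or from peer."""
        return peer in ALLOWED_FLOWS.get(device, set())

    # A gateway enforcing the policy drops everything else:
    assert is_permitted("smart-fridge", "fridge-vendor.example.com")
    assert not is_permitted("smart-fridge", "ad-tracker.example.net")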

From March 23-29 the IETF will meet in Prague, and IoT security and MUD are on the agenda. I look forward to this discussion, as the biggest brains in internet engineering convene to tackle what is one of the greatest internet threats of our time. I look forward to sharing further updates as we progress with this project and our work to make IoT devices safer. Stay tuned!

Written by Jacques Latour, Chief Technology & Security Officer at CIRA


February’s top domain name stories

Domain Name Wire - Tue, 2019-03-05 17:16

Here’s a look at the top stories on Domain Name Wire last month, as ranked by views.

1. How I almost lost a domain name this week – Even professional domain owners can let something slip through the cracks. Here’s what happened and how to make sure it doesn’t happen to you.

2. Speaker hit for reverse domain name hijacking – Based on a Twitter conversation with the Complainant, it appears he got some bad legal advice.

3. Calm.com domain name is paying dividends – Here’s how a great domain name is helping a company valued at $1 billion.

4. Be careful about this expired domain name metric – Dig into this metric before buying a domain name based on it.

5. Blockchain.io domain owner fights back against Blockchain.com – Paymium responds to domain name dispute.

Also check out last month’s podcasts. Click the link to listen or subscribe on your podcast app.

#225: Rick Schwartz
#224: Blockchain and Domain Names
#223: The Email Episode
#222: NamesCon Recap


MMX reports strong start to 2019

Domain Name Wire - Tue, 2019-03-05 15:04

Domain name company touts topline growth to kick off 2019.

Top level domain name company MMX (Minds + Machines) (LSE:MMX) is reporting a strong start to 2019.

In an investor update issued today, the company said that domain registrations are up 38% year-over-year so far this year, to 1.84 million registrations. Billings are up 129%. The company credits the first-time contribution from ICM Registry domains as well as strong growth in China.

MMX is seeing a 91% renewal rate on .XXX domains, part of the ICM acquisition.

Donuts co-founder Dan Schindler is now a Special Advisor to the company, helping with its premium domain strategy. Schindler left Donuts a while back and recently resurfaced in the industry. Christa Taylor was also formally announced as the company’s Chief Marketing Officer. Taylor had consulted for the company starting last year and became CMO ahead of NamesCon this year.

The company plans to pay off its outstanding debt early this month.


Public Interest Registry (.Org) hires three new execs

Domain Name Wire - Tue, 2019-03-05 14:49

Song-Marshall, Abley and Vora join PIR executive team.

L to R: Judy Song-Marshall, Joe Abley, Anand Vora. Photos from LinkedIn.

Public Interest Registry (PIR), the non-profit that runs the .org top level domain name, has hired three domain name industry veterans to join its executive team. It is still searching for a Chief Financial Officer.

The three hires are:

Judy Song-Marshall, Chief of Staff: Judy comes from Neustar, where she was Director of Registry Services. She also ran product marketing for the company for over seven years and spent over 11 years at the company, according to her LinkedIn profile.

Joe Abley, Chief Technology Officer: Joe worked as Infrastructure Scientist at domain registry Afilias for the past year. He previously worked for Dyn and ICANN. Afilias provides the technical registry operations for PIR.

Anand Vora, Vice President of Business Affairs: Anand joins PIR from Donuts, the alma mater of PIR’s new CEO Jon Nevett. This is a round trip for Vora, who interned at PIR while earning his MBA at George Washington University and started his career at the company as a product management specialist and then channel manager for Asia.


How Instant Pot got its name (and domain)

Domain Name Wire - Mon, 2019-03-04 20:39

Inventor created program to find synonyms.

Instant Brands Inc., maker of the popular Instant Pot cooking appliance, is merging with housewares company Corelle Brands LLC.

Ottawa Citizen has the story on how Instant Pot inventor Robert Wang came up with the name for the now iconic device:

He tested recipes with his family and turned to computer science to come up with a name. After writing a program that matched synonyms for “fast” with synonyms for “cooker,” Wang arrived on Instant Pot. Luckily, the domain name was still available.
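As a rough illustration, the core of such a naming program can be only a few lines: cross two synonym lists and inspect the combinations. The synonym lists below are hand-picked assumptions; Wang's actual program presumably drew on a proper thesaurus.

    from itertools import product

    # Hypothetical synonym lists; a real run would pull these from a thesaurus.
    fast_synonyms = ["instant", "quick", "rapid", "speedy", "express"]
    cooker_synonyms = ["pot", "cooker", "chef", "kitchen"]

    candidates = [f"{a.title()} {b.title()}"
                  for a, b in product(fast_synonyms, cooker_synonyms)]
    print(candidates[:3])  # ['Instant Pot', 'Instant Cooker', 'Instant Chef']
    # The last step, checking which candidates are still available as
    # domain names, is left out here.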

The story has some other interesting info about how Wang was able to build the business thanks to Amazon.

In an upcoming episode of the DNW Podcast I will interview a company and product namer and discuss some of the tools you can use to create new names.

(Hat tip: Bill Sweetman)


Phishers Increasingly Targeting SaaS and Webmail Services, APWG Reports

Domain industry news - Mon, 2019-03-04 20:26

Most-Targeted Industry Sectors (chart from the APWG Report, 4th Quarter 2018)

According to the latest report from the Anti-Phishing Working Group (APWG), while the total number of conventional, spam-based phishing campaigns declined in 2018, users of software-as-a-service (SaaS) systems and webmail services are increasingly targeted.

The decline: "The number of confirmed phishing sites declined as 2018 proceeded. The total number of phishing sites detected by APWG in 4Q was 138,328 — down from 151,014 in Q3, 233,040 in Q2, and 263,538 in Q1. This general decline in the number of phishing campaigns as the year went on may have been a consequence of anti-phishing efforts — and/or the result of criminals shifting to more specialized and lucrative forms of e-crime than mass-market phishing."

Is it a decline: APWG points out that there is a growing concern the drop may be due to under-detection. "The detection and documentation of some phishing URLs has been complicated by phishers obfuscating phishing URLs with techniques such as Web-spider deflection schemes — and by employing multiple redirects in spam-based phishing campaigns, which take users (and automated detectors) from an email lure through multiple URLs on multiple domains before depositing the potential victim at the actual phishing site."

New targets: Phishing that targeted SaaS and Webmail services increased from 20.1 percent of all attacks in Q3 to almost 30 percent in Q4 of 2018. "Attacks against cloud storage and file hosting sites continued to drop, decreasing from 11.3 percent of all attacks in Q1 2018 to 4 percent in Q4 2018."


A Quick Look at QUIC

Domain industry news - Mon, 2019-03-04 19:06

Quick UDP Internet Connection (QUIC) is a network protocol initially developed and deployed by Google, and now being standardized in the Internet Engineering Task Force. In this article we'll take a quick tour of QUIC, looking at what goals influenced its design, and what implications QUIC might have on the overall architecture of the Internet Protocol.

QUIC is not exactly a new protocol, as the protocol appears to have been developed by Google in 2012, and initial public releases of this protocol were included in Chromium version 29, released in August 2013. QUIC is one of many transport layer network protocols that attempt to refine the basic operation of IP's Transmission Control Protocol (TCP).

Why are we even thinking about refining TCP?

TCP is now used in billions of devices and is perhaps the most widely adopted network transport protocol that we've witnessed so far. Presumably, if this protocol wasn't fit for general use, then we would have moved on and adopted some other protocol or protocols instead. But the fact is that TCP is not only good enough for a broad diversity of use cases, but in many instances, TCP is incredibly good at its job. Part of the reason for TCP's longevity and broad adoption is TCP's incredible flexibility. The protocol can support a diverse variety of uses, from micro-exchanges to gigabyte data movement, transmission speeds that vary from hundreds of bits per second to tens and possibly hundreds of gigabits per second. TCP is undoubtedly the workhorse of the Internet. But even so, there is always room for refinement. TCP is put to many different uses and the design of TCP represents a set of trade-offs that attempt to be a reasonable fit for many purposes but not necessarily a truly ideal fit for any particular purpose.

One of the aspects of the original design of the Internet Protocol suite was that of elegant brevity and simplicity. The specification of TCP [1] is not a single profile of behavior that has been cast into a fixed form that was chiseled into the granite slab of a rigid standard. TCP is malleable in a number of important ways. Numerous efforts over the years have shown that it is possible to stay within the standard definition of TCP, in that all the packets in a session use the standard TCP header fields in mostly conventional ways, but also to create TCP implementations that behave radically differently from each other. TCP is an example of a conventional sliding window positive acknowledgment data transfer protocol. But while this is what the standard defines, there are some undefined aspects of the protocol. Critically, the TCP standard does not strictly define how the sender can control the amount of data in flight across the network to strike a fair balance between this data flow across the network and all other flows that coincide across common network path elements. There is a general convention these days in TCP to adopt an approach of slowly increasing the amount of data in flight while there are no visible errors in the data transfer (as shown by the stream of received acknowledgement packets) and quickly responding to signals of network congestion (interpreted as network packet drop, as shown by duplicate acknowledgements received by the sender) by rapidly decreasing the sending rate. Variants of TCP use different controls to manage this "slow increase" and "rapid drop" behavior [2] and may also use different signals to control this data flow, including measurements of end-to-end delay, or inter-packet jitter (such as the recently published BBR protocol [3]). All of these variants still manage to fit with the broad parameters of what is conventionally called TCP.
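As a minimal sketch of the "slow increase" and "rapid drop" behavior described above: a classic additive-increase, multiplicative-decrease (AIMD) controller grows the congestion window by a constant each round trip and halves it on a loss signal. The constants here are the textbook defaults, not those of any particular TCP variant.

    def aimd_update(cwnd: float, loss_detected: bool,
                    increase: float = 1.0, decrease: float = 0.5) -> float:
        """One round trip of a textbook AIMD controller."""
        if loss_detected:
            return max(1.0, cwnd * decrease)  # rapid drop on a congestion signal
        return cwnd + increase                # slow additive increase

    cwnd = 10.0
    for loss in [False, False, True, False]:
        cwnd = aimd_update(cwnd, loss)        # 11.0, 12.0, 6.0, 7.0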

It is also useful to understand that most of these rate control variants of TCP need only be implemented on the data sender (the "server" in a client/server environment). The common assumption of all TCP implementations is that clients will send a TCP ACK packet on both successful receipt of in-sequence data and receipt of out-of-sequence data. It is left to the server's TCP engine to determine how the received ACK stream will be applied to refine the sending TCP's internal model of network capability and how it will modify its subsequent sending rate accordingly. This implies that the deployment of new variants of TCP flow control is essentially based on deployment within service delivery platforms and does not necessarily imply changing the TCP implementations in all the billions of clients. This factor of server-side control of TCP behavior also contributes to the flexibility of TCP.

But despite this considerable flexibility, TCP has its problems, particularly with web-based services. These days most web pages are not simple monolithic objects. They typically contain many separate components, including images, scripts, customized frames, style sheets and similar. Each of these is a separate web "object" and, in a browser equipped with the original implementation of HTTP, each object will be loaded in a new TCP session, even if the objects are served from the same IP address. The overheads of setting up both a new TCP session and a new Transport Layer Security (TLS) [4] session for each distinct web object within a compound web resource can become quite significant, and the temptation to re-use an already established TLS session for multiple fetches from the same server is close to overwhelming. But this approach of multiplexing a number of data streams within a single TCP session also has its attendant issues. Multiplexing multiple logical data flows across a single session can generate unwanted inter-dependencies between the flow processors and may lead to head-of-line blocking, where a stall in the transfer of the currently active stream blocks all queued fetch streams. It appears that while it makes some logical sense to share a single end-to-end security association and a single rate-controlled data flow across multiple logical data streams, TCP represents a rather poor way of achieving this outcome. The conclusion is that if we want to improve the efficiency of such compound transactions by introducing parallel behaviors into the protocol, we need to look beyond TCP.

Why not just start afresh and define a new transport protocol that addresses these shortcomings of TCP? The answer is simple: NATs!

* * *

Network Address Translation and Transport Protocols

The original design of IP allowed for a clear separation between the network element that allowed the network to accept an IP packet and forward it onto its intended destination (the "Internet" part of the IP protocol suite) and the end-to-end transport protocol that enabled two applications to communicate via some form of "session". The transport protocol field in the IPv4 packet header and the Next Header field of the IPv6 packet header use an 8-bit field to identify the end-to-end protocol. This clear delineation between the network and host parts of the IP packet header design assumes that the network has no intrinsic need to "understand" what end-to-end protocol is being used within a packet. At the network level, the protocol architecture asserts that these packets are all stateless datagrams and should be treated identically by each network switching element. Ideally, an IP packet switch will not differentiate in its treatment of packets depending on the inner end-to-end protocol. (If all this sounds somewhat dated these days, it's because it is a somewhat dated view of the network; these days many network elements reach into the supposed host part of a packet header. As a simple example, think of flow-aware traffic load balancing, where, in order to preserve packet order within a TCP stream, the load balancer will use parts of the TCP header to identify packets that belong to the same logical flow.)

There are some 140 protocols listed in the IP protocol field registry [5]. TCP and UDP are just two of these protocols (protocol values 6 and 17 respectively) and, in theory at any rate, there is room for at least 100 more. In the public Internet, the story is somewhat different. TCP and UDP are widely accepted protocols, and ICMP (protocol 1) is generally accepted, but little else. How did this happen?

NATs changed the assumption about network devices not looking inside the packet (well, to be precise, port-translating NATs changed that assumption). NATs are network devices that look inside the IP packet and re-write the port addresses used by TCP and UDP [6]. What if an IP packet contains an end-to-end transport protocol identifier value that is neither TCP nor UDP? Most NATs will simply drop the packet, on the basis of a security paradigm that "what you don't recognize is likely to be harmful." The pragmatic result is that NATs have limited an application's choice of transport protocols in the public Internet to just two: TCP and UDP.

* * *

If the aim is to deploy a new transport protocol but not confuse active network elements that are expecting to see a conventional TCP or UDP header, then how can this be achieved? This was the challenge faced by the developers of QUIC.

QUIC over UDP

The solution chosen by QUIC was a UDP-based approach.

UDP is a minimal framing protocol that allows an application to access the basic datagram services offered by IP. Apart from the source and destination port numbers, the UDP header adds a length field and a checksum that covers the UDP header and UDP payload. It is essentially an abstraction of the underlying datagram IP model with just enough additional information to allow an IP protocol stack to direct an incoming packet to an application that has bound itself to a nominated UDP port address. If TCP is an overlay across the underlying IP datagram service, then it's a small step to think about positioning TCP as an overlay on an underlying UDP datagram service.

Using our standard Internet model, QUIC is, strictly speaking, a datagram transport application. An application that uses the QUIC protocol sends and receives packets using UDP port 443.

Technically, this is a minimal change to an IP packet, adding just 8 bytes by placing a UDP header between the IP and TCP packet headers (Figure 1). The implications of this change are far more significant than these 8 bytes would suggest. However, before we consider these implications, let's look at some QUIC services.

Figure 1 – The QUIC Protocol Architecture
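The 8-byte shim itself is easy to picture: four 16-bit fields. A minimal sketch of building such a header follows; the checksum is left at zero here, whereas a real stack computes it over a pseudo-header plus the payload.

    import struct

    def udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
        """Build the 8-byte UDP header that QUIC rides inside:
        source port, destination port, length, checksum (16 bits each)."""
        length = 8 + len(payload)  # header plus payload, in bytes
        return struct.pack("!HHHH", src_port, dst_port, length, 0)

    hdr = udp_header(51000, 443, b"quic payload")
    assert len(hdr) == 8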

Not only does the use of a UDP "shim" provide essential protection of QUIC against NATs' enforcement of TCP and UDP as the only viable transport protocols, but there is also another aspect of QUIC which is equally important in today's Internet. The internal architecture of host systems has changed little since the 1970s. An operating system provides a consistent abstraction of some functions for an application. The operating system not only looks after scheduling the processor and managing memory, but also maintains the local file store and, critically for QUIC, provides the networking protocol stack. The operating system contains the device drivers that implement a consistent view of I/O devices, and also includes the implementations of the network protocol stack up to and including the transport protocol. The abstracted interface provided to the application may not be entirely identical for all operating systems, but it's sufficiently similar that, with a few cosmetic changes, an application can be readily ported to any platform with an expectation that not only will the application drive the network as expected, but do so in a standard manner such that it will seamlessly interoperate with any other application instance that is a standards-compliant implementation of the same function. Operating system network libraries not only relieve applications of the need to re-implement the network protocol stack, but assist in the overall task of ensuring seamless interoperation within a diverse environment.

However, such consistent functionality comes at a cost, and in this case, the cost is resistance to change. Adding a new transport protocol to all operating systems, and to all package variants of all operating systems is an incredibly daunting task these days. As we have learned from the massive efforts to clean up various security vulnerabilities in operating systems, getting the installed base of systems to implement change suffers from an incredibly static and resistant tail of laggards!

Applications may not completely solve this issue, but they appear to have a far greater level of agility to self-apply upgrades. This means that if an application chooses to use its own implementation of networking protocols, it has a higher degree of control over the implementation and need not await the upgrade cycle of third-party operating systems to apply changes to the code base. Web browser clients are an excellent example of this, where many functions are folded into the application to provide the desired behavior.

In the case of QUIC, the transport protocol code is lifted into the application and executed in user space rather than as an operating system kernel function. This may not be as efficient as a kernel implementation of the function, but the gain lies in greater flexibility and control by the application.

QUIC and the Connection ID

If the choice of UDP as the visible end-to-end protocol for QUIC was dictated by the inflexibility of the deployed base of NAT devices in the public Internet, and their collective inability to accommodate new protocols, the handling of UDP packets by NATs has further implications for QUIC.

NATs maintain a translation table. In the most general model, a NAT takes the 5-tuple of an incoming packet (the destination and source IP addresses, the destination and source port addresses, and the protocol field), performs a lookup into the table, and finds the associated translated fields. The packet's address headers are rewritten to these new values, checksums are recomputed, and the packet is passed onward. Certain NAT implementations may use variants of this model. For example, some NATs use only the source IP address and port address on outbound packets as the lookup key, and the corresponding destination IP address and port address on incoming packets.

Typically, the NAT will generate a new translation table entry when a triggering packet is passed from the inside to the outside and will subsequently remove the table entry when the NAT assumes that the translation is no longer needed. For TCP sessions it is possible to maintain this translation table quite accurately. New translation table entries are created in response to outbound TCP SYN connection establishment packets and removed either when the NAT sees the TCP FIN exchange or in response to a TCP RST packet or when the session is idle for an extended period.

UDP packets do not have these explicit packet exchanges to start and stop sessions, so NATs need to make some assumptions. Most NATs will create a new translation table entry when they see an outbound UDP packet that does not match any existing translation table entry. The entry will be maintained for some period of time (as determined by the NAT) and will then be removed if there are no further packets that match the session signature. Even when there are further matching UDP packets, the NAT may use an overall UDP session timer and remove the entry after some predetermined time interval.
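A toy model of that behavior is sketched below: entries are created on the first unmatched outbound packet, refreshed by traffic, and silently expired when idle. The timeout value and port range are arbitrary assumptions; real NATs choose their own.

    import time

    UDP_IDLE_TIMEOUT = 120.0  # seconds; an assumption, each NAT picks its own

    # (inside addr, inside port, dst addr, dst port) -> (public port, last seen)
    table = {}
    next_public_port = 40000

    def translate_outbound(key):
        """Create or refresh a UDP translation entry, expiring idle ones first."""
        global next_public_port
        now = time.monotonic()
        for k, (port, seen) in list(table.items()):
            if now - seen > UDP_IDLE_TIMEOUT:
                del table[k]          # silent removal; the endpoints are not told
        if key in table:
            port, _ = table[key]
        else:
            port = next_public_port   # a re-created entry may get a new public port
            next_public_port += 1
        table[key] = (port, now)
        return port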

For QUIC and NATs, this is a potential problem. The QUIC session is established between a QUIC server on UDP port 443 and the NAT-generated source address and port. However, at some point in the session lifetime, the NAT may drop the translation table entry, and the next outbound client packet will generate a new translation table entry, and that entry may use a different source address and port. How can the QUIC server recognize that this next received packet, with its new source address and source port number, is actually part of an existing QUIC session?

QUIC uses the concept of connection identifiers (connection IDs). Each endpoint generates connection IDs that will allow received packets with that connection ID to be routed to the process that is using that connection ID. During QUIC version negotiation these connection IDs are exchanged, and after that, each sent QUIC packet includes the current connection ID of the remote party.
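In other words, the session lookup key is carried in the packet itself rather than derived from the IP header. Here is a minimal sketch of a dispatcher built on that idea; the session-state field names are invented for the illustration.

    # Sessions are keyed by the connection ID carried in each packet, not by
    # the (address, port) it arrived from, so a NAT rebinding mid-session
    # does not break the lookup.
    sessions = {}  # connection_id (bytes) -> session state (dict)

    def dispatch(connection_id: bytes, src_addr, payload: bytes):
        session = sessions.get(connection_id)
        if session is None:
            return None                 # unknown connection
        session["last_src"] = src_addr  # follow the client across rebindings
        session["received"].append(payload)
        return session

    sessions[b"\x01\x02"] = {"received": [], "last_src": None}
    dispatch(b"\x01\x02", ("203.0.113.7", 40001), b"hello")
    dispatch(b"\x01\x02", ("203.0.113.7", 40777), b"again")  # new NAT port, same session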

This form of semantic distinction between the identity of a connection to an endpoint and the current IP address and port number in use is similar to the Host Identity Protocol (HIP). That protocol also used a constant endpoint identifier that allowed a session to survive changes in the endpoint IP addresses and ports.

QUIC Streams

TCP provides the abstraction of a reliable ordered byte stream to applications. QUIC provides a similar abstraction to the application, termed within QUIC as streams. The essential difference here is that TCP implements a single behavior, while a single QUIC session can support multiple stream profiles.

Bidirectional streams place the client and server transactions into a matched context, as is required for the conventional request/response transactions of HTTP/1. A client would be expected to open a bidirectional stream with a server and then issue a request in a stream which would generate a matching response from the server. It is possible for a server to initiate a bidirectional push stream to a client, which contains a response without an initial request. Control information is supported using unidirectional control streams, where one side can pass a message to the other as soon as they are able. An underlying unidirectional stream interface, used to support control streams, is also exposed to the application.

Not only can QUIC support a number of different stream profiles, but QUIC can support different stream profiles within a single end-to-end QUIC session. This is not a novel concept of course, and the HTTP/2 protocol is a good example of an application-level protocol adding multiplexing and stream framing to carry multiple data flows across a single transport data stream. However, a single TCP transport stream as used by HTTP/2 may encounter head-of-line blocking, where all overlay data streams fate-share across a single TCP session. If one of the streams stalls, then it's possible that all overlay data streams will be affected and may stall as well.

QUIC allows for a slightly different form of multiplexing where each overlay data stream can use its own end-to-end flow state, and a pause in one overlay stream does not imply that any other simultaneous stream is affected.

Part of the reason to multiplex multiple data flows between the same two endpoints in HTTP/2 was to reduce the overhead of setting up a TLS security association for each TCP session. This can be a major issue when the individual streams are each sending a small object, and it's possible to encounter a situation where the TCP and TLS handshake component of a compound web object fetch dominates both the total download time and the data volume.

QUIC pushes the security association to the end-to-end state that is implemented as a UDP data flow so that streams can be started in a very lightweight manner because they essentially reuse the established secure session state.

QUIC Encryption

As is probably clear from the references to TLS already, QUIC uses end-to-end encryption. This encryption is performed on the UDP payload, so once the TLS handshake is complete very little of the subsequent QUIC packet exchange is in the clear (Figure 2).

Figure 2 – Comparison of TCP and TLS with QUIC

What is exposed in QUIC are the public flags. This initial part of a QUIC packet consists of the connection ID, allowing the receiver to associate the packet with an endpoint without decrypting the entire packet. The QUIC version is also part of the public flag set. This is used in the initial QUIC session establishment and can be omitted thereafter.

The remainder of the QUIC packet, the private flags and the payload, is encrypted and is not directly visible to an eavesdropper. This private section includes the packet sequence number, which is used to detect duplicate and missing packets. It also includes all the flow control parameters, including window advertisements.

This is one of the critical differences between TCP and QUIC. With TCP, the control parts of the protocol are in the clear, so a network element is able to inspect the port addresses (and infer the application type), as well as the flow state of the connection. Observing a sequence of such TCP packets, even looking only at the packets flowing in one direction within the connection, would allow the network element to infer the round-trip time and the data transmission rate. And, like a NAT, manipulation of the receive window in the ACK stream would allow a network element to apply a throttle to a connection and reduce the transfer rate in a manner that would be invisible to both endpoints. Placing all of this control information inside the encrypted part of the QUIC packet ensures that no network element has direct visibility of this information, and that no network element can manipulate the connection flow.

One could take the view that QUIC enforces a perspective that was assumed in the 1980s. This is that the end-to-end transport protocol is not shared with the network. All the network 'sees' are stateless datagrams, and the endpoints can safely assume that the information contained in the end-to-end transport control fields is carried over the network in a manner that protects it from third-party inspection and alteration.

QUIC and IP Fragmentation

The short answer is "no!" QUIC packets cannot be fragmented.

The way this is achieved is by padding the QUIC HELLO packet out to the maximal packet size, and not completing the initial HELLO exchange if that maximally-sized packet is fragmented.

For IPv4, the maximum QUIC packet is 1,350 bytes. Adding 8 bytes for the UDP header, 20 bytes for the IPv4 header and 14 bytes for the Ethernet frame means that a QUIC packet on Ethernet is 1,392 bytes in size. There is no particular rationale for the choice of 1,350 other than the results of empirical testing on the public Internet.

For IPv6, the maximum QUIC packet size is reduced by 20 bytes to 1,330. The resultant Ethernet packet is still 1,392 bytes because of the larger IPv6 packet header.
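The arithmetic is easy to check; this minimal sketch just restates the header sizes from the two preceding paragraphs.

    QUIC_MAX_IPV4, QUIC_MAX_IPV6 = 1350, 1330   # QUIC packet ceilings, bytes
    UDP_HDR, IPV4_HDR, IPV6_HDR, ETH_HDR = 8, 20, 40, 14

    assert QUIC_MAX_IPV4 + UDP_HDR + IPV4_HDR + ETH_HDR == 1392
    assert QUIC_MAX_IPV6 + UDP_HDR + IPV6_HDR + ETH_HDR == 1392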

What happens if the network path has a smaller MTU than this value? The answer is in the next section.

QUIC and TCP

QUIC is not intended as a replacement for TCP. Indeed, QUIC relies on the continued availability of TCP.

Whenever QUIC encounters a fatal error, such as fragmentation of the QUIC HELLO packet, the intended response from QUIC is to shut down the connection. As QUIC itself lies in the application space, not the kernel space, the client-side application can be directly informed of this closure of the QUIC connection and it can re-open a connection to the server using a conventional TCP transport protocol.

The implication is that QUIC does not necessarily have to have a robust response for all forms of behavior, and when QUIC encounters a state where QUIC has no clear definition of the desired behavior, it is always an option to signal a QUIC failure to the application. The failure need not be fatal to the application, as such a signal can trigger the application to repeat the transaction using a conventional TCP session.

I can QUIC, do you?

Unlike other TCP services, which use a dedicated TCP port address to distinguish themselves from all other services, QUIC does not advertise itself in such a manner. That leaves a number of ways in which a server could potentially advertise itself as being accessible over QUIC.

One such possible path is the use of DNS service records (SRV) [7]. The SRV record can indicate the connection point for a named service using the name of the transport protocol and the protocol-specific service address. This may be an option for the future, but no such DNS service record has been defined for QUIC.

Instead, in keeping with QUIC's overall approach of loading up most of the service functionality into the application itself, a server that supports QUIC can signal its capability within HTTP itself. The way to do this is defined in an Internet standard for "Alternative Services" [8], which is a means to list alternative ways to access the same resources.
For example, the Google homepage, www.google.com, includes the HTTP header:

alt-svc: quic=":443"; ma=2592000; v="44,43,39"

This indicates that the same material is accessible using QUIC over port 443. The "ma" field is the time to keep this information on the local client, which in this case is 30 days, and the "v" field indicates that the server will negotiate QUIC versions 39, 43 and 44.
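Parsing such a header value is straightforward. Here is a minimal sketch that handles single-value headers like the one above; the full Alt-Svc grammar in RFC 7838 allows multiple comma-separated alternatives, which this toy parser ignores.

    import re

    def parse_alt_svc(value: str) -> dict:
        """Extract protocol, authority and parameters from a simple
        single-alternative Alt-Svc header value."""
        m = re.match(r'(?P<proto>[\w-]+)="(?P<authority>[^"]*)"(?P<params>.*)', value)
        if not m:
            return {}
        out = {"protocol": m.group("proto"), "authority": m.group("authority")}
        for param in m.group("params").split(";"):
            if "=" in param:
                k, v = param.strip().split("=", 1)
                out[k] = v.strip('"')
        return out

    print(parse_alt_svc('quic=":443"; ma=2592000; v="44,43,39"'))
    # {'protocol': 'quic', 'authority': ':443', 'ma': '2592000', 'v': '44,43,39'}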

QUIC Lessons

QUIC is a rather forceful assertion that the Internet infrastructure is now heavily ossified and more highly constrained than ever. There is no room left for new transport protocols in today's network. If what you want to do can't be achieved within TCP, then all that's left is UDP.

The IP approach to packet size adaptation through fragmentation was a powerful concept once upon a time. A sender did not need to be aware of the constraints that may apply on a path. Any network-level packet fragmentation and reassembly was invisible to the end-to-end packet transfer. This is no longer wise. Senders need to ensure that their packets can reach their intended destinations without any additional requirement for fragmentation handling.

Mutual trust is over in today's Internet. Applications no longer trust other applications. They don't trust the platform that hosts the application or the shared libraries that implement essential functions. They are no longer prepared to wait for the platform to support novel features in transport protocols. Applications no longer have any trust in the network to keep their secrets. More and more functions and services are being pulled back into the application, and whatever does pass out of the application onto the network is, as much as possible, cloaked in a privacy shroud.

There is a tension between speed, security and paranoia. An ideal outcome is one that is faster, private and secure. Where this is not achievable and the inevitable trade-offs emerge, it seems that we have some minimum security and privacy requirements that simply must be met. But once we have achieved this minimum, we are then happy to trade off incremental improvements in privacy and security for better session performance.

The traditional protocol stack model was a convenient abstraction, not a design rule. Applications do not necessarily need to bind to transport-layer sockets provided by the underlying platform. Applications can implement their own end-to-end transport if necessary.

The Internet's infrastructure might be heavily ossified, but the application space is seeing a new set of possibilities open up. Applications need not wait for the platform to include support for a particular transport protocol or await the deployment of a support library to support a particular name resolution function. Applications can solve these issues for themselves directly. The gain in flexibility and agility is considerable.

There is a price to pay for this new-found agility, and that price is broad interoperability. Browsers that support QUIC can open up UDP connections to certain servers and run QUIC, but browsers cannot assume, as they do with TCP, that QUIC is a universal and interoperable lingua franca of the Internet. While QUIC is a fascinating adaptation, with some very novel concepts, it is still an optional adaptation. For those clients and servers that do not support QUIC, or for network paths where UDP port 443 is not supported, the common fallback is TCP. The expansion of the Internet is inevitably accompanied by inertial bloat, and as we've seen with the extended saga of IPv6 deployment, it is a formidable expectation to think that the entire Internet will embrace a new technical innovation in a timeframe of months, years or possibly even decades! That does not mean that we can't think new thoughts and that we can't realize these new ideas into new services on the Internet. We certainly can, and QUIC is an eloquent demonstration of exactly how to craft innovation into a rather stolid and resistant underlying space.

Further Reading

QUIC has excited considerable interest over the past couple of years, and there are many posts to be found on the net. Here's a small sample of this online material that you may find to be of interest.

A useful consideration of the positive and negative aspects of QUIC can be found in Robin Marx's post "QUIC and HTTP/3: Too big to fail?"

A slightly older (2014) but useful technical overview of QUIC can be found in Shigeki Ohtsu's presentation to the HTTP/2 Conference in Japan.

A commentary on Cloudflare's investigations with QUIC can be found in a recent blog post: "The Road to QUIC".

References

[1] Jon Postel, "Transmission Control Protocol," RFC 793, September 1981.

[2] Geoff Huston, "Faster," The ISP Column, June 2005.

[3] Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh and Van Jacobson, "BBR: congestion-based congestion control," Communications of the ACM, Vol. 60, Issue 2, pp 58-66, February 2017.

[4] Eric Rescorla, "The Transport Layer Security (TLS) Protocol Version 1.3," RFC 8446, August 2018.

[5] IANA Protocol Numbers Registry.

[6] Geoff Huston, "Anatomy: A look Inside Network Address Translators," The Internet Protocol Journal, Vol. 7, No. 3, September 2004.

[7] Arnt Gulbrandsen, Paul Vixie and Levon Esibov, "A DNS RR for specifying the location of services (DNS SRV)," RFC 2782, February 2000.

[8] Mark Nottingham, Patrick McManus and Julian Reschke, "HTTP Alternative Services," RFC 7838, April 2016.

Written by Geoff Huston, Author & Chief Scientist at APNIC


New CEO Matthias Conrad is Planning to Step on the Accelerator at Sedo

DN Journal - Mon, 2019-03-04 17:53
For only the 3rd time in 20 years, Sedo has a new CEO. We connected with Matthias Conrad to learn more about his plans for the domain sales & monetization giant.

DNSSEC – DNW Podcast #226

Domain Name Wire - Mon, 2019-03-04 16:30

Domain Name System Security Extensions — what’s it all about?

You’ve probably heard about some recent hacks involving the domain name system. This week we’ll talk about how DNSSEC could help stem these attacks. Matt Larson, who co-hosts the Ask Mr. DNS Podcast and currently works at ICANN, will explain what DNSSEC is, what’s required for it to work, and the pros and cons.

Also: .Dev, Gandi acquired, Nominet comes to America, MarkMonitor going public and more.

This week’s sponsor: DNAcademy.com. Use code DNW for $50 off.

Subscribe via iTunes to listen to the Domain Name Wire podcast on your iPhone or iPad, view on Google Play Music, or click play above or download to begin listening. (Listen to previous podcasts here.)


New .Com Winners & Losers

Domain Name Wire - Mon, 2019-03-04 14:53

NameSilo jumps on monthly list.

ICANN has published the latest official data from Verisign (NASDAQ: VRSN) about the .com namespace. This registrar-by-registrar report covers November 2018.

The only notable movement was NameSilo, which jumped from #10 to #6 on the monthly report. It’s currently the 11th largest .com registrar. You can read an interview with NameSilo’s CEO here.

Here’s how registrars did in terms of new .com registrations.

1. GoDaddy.com* (NYSE: GDDY) 865,438 (893,103 in October)
2. Alibaba (HiChina) 312,227 (204,213)
3. Tucows** (NASDAQ:TCX) 175,148 (183,542)
4. NameCheap Inc. 157,607 (139,033)
5. Xin Net Technology Corporation 152,143 (135,590)
6. NameSilo (CSE:URL) 130,713 (68,758)
7. Endurance+ (NASDAQ: EIGI) 123,087 (130,915)
8. Web.com++ 97,702 (105,815)
9. Google Inc. (NASDAQ: GOOGL) 88,236 (92,312)
10. United Internet^ (FRA: UTDI) 66,501 (71,257)

Here’s the leaderboard of the top registrars in terms of total .com registrations as of the end of November 2018.

1. GoDaddy* 49,370,531 (49,222,603 in October)
2. Tucows** 12,659,835 (12,709,575)
3. Endurance+ 7,179,580 (7,240,990)
4. Web.com++ 6,732,478 (6,730,384)
5. Alibaba 6,192,801 (5,972,559)
6. United Internet^ 5,666,856 (5,675,949)
7. Namecheap 4,515,221 (4,452,551)
8. Xin Net Technology Corporation 2,792,514 (2,639,761)
9. Google 2,057,844 (2,005,816)
10. GMO 1,964,580 (1,975,188)

Many domain companies have multiple accreditations and I’ve tried to capture the largest ones. See the notes below.

* Includes GoDaddy and Wild West Domains
** Includes Tucows and Enom
+ Includes PDR, Domain.com, FastDomain and Bigrock. There are other Endurance registrars, but these are the biggest.
++ Includes Network Solutions and Register.com
^ Includes 1&1, PSI, Cronon, United-Domains, Arsys and world4you


WHOIS Detractors and Advocates: Today's Viewpoints Post-GDPR

Domain industry news - Sat, 2019-03-02 18:49

Opposing parties continue to debate whether WHOIS should stay after the General Data Protection Regulation (GDPR) took effect across the EU in May 2018. While the Internet Corporation for Assigned Names and Numbers (ICANN), which oversees WHOIS, is looking for ways to become GDPR compliant, experts from various fields are weighing in on the problems officials have pointed out.

In this article, let's take a closer look at the stakeholders involved in the discussion about the future of WHOIS and the issues that keep them busy these days.

Officials and GDPR Specialists

Detractors

GDPR authorities have presented several arguments against WHOIS. One concerns ICANN's proposed accreditation model, which suggests that tiered access to data should be granted only to specific user types and for specific purposes. Officials were concerned that such access would be biased towards certain groups, such as intellectual property holders, while other relevant parties might be ignored.

Detractors also mention that, at the moment, the processing and protection of sensitive details are ambiguous and require more concrete solutions to comply with GDPR.

Promoters

Meanwhile, on the opposite side of the barricade, advocates assert that shutting down WHOIS might backfire and compromise people's security. For example, domain registrars could have incentives to make the records inaccessible, which might impede investigative activities and security initiatives.

On top of that, promoters point out that even inaccurate WHOIS data is relevant to investigations, as experts can trace and connect it to other sources of information.

Cybersecurity Community

Detractors

Critics say registrants' data utilized by businesses is not a good indicator of security, as details extracted by companies might be exploited for the wrong reasons. That is why they call for ICANN to manage its priorities well to improve cybersecurity in the years ahead.

Furthermore, there are already proposals to replace WHOIS, for instance with the Registration Data Access Protocol (RDAP). Experts say RDAP, a more standardized version of WHOIS, might address WHOIS-related concerns more smoothly, including security cases. However, RDAP is not yet mature enough to address issues such as legal enforcement and comprehensiveness, to name a few.

Promoters

Many cybersecurity specialists defend WHOIS as an essential protocol for tracking perpetrators across registrars and networks, and insist that abolishing it would be a short-sighted decision. The fact is that cybercriminals often reuse registration details across multiple domains to save costs, so tracing similarities between contacts is an efficient way to reveal malicious activities.

On top of that, restricting access to WHOIS records could also significantly hurt those professionals who fight against domain squatting and infringement.

Businesses' Opinion

Detractors

Business stakeholders discuss the other side of anonymity, as many consider privacy a valuable criterion during registration. Detractors stress that if all domain owners were obliged to display their details, harassment would be more likely. Therefore, one determinant of WHOIS's fate in terms of legal approval is ICANN's capability to implement proper due process when acquiring companies' sensitive information.

Meanwhile, registrars are not keen on the idea of total transparency either, but for their own reasons. They advise users that publishing ownership data can attract spammers and scammers and want to offer privacy options to registrants for added fees.

Promoters

Business people in favor of the protocol claim that it caters to the legitimate interests of stakeholders. For instance, certain marketing and security research efforts often rely on the interconnectedness of data that WHOIS databases provide. They also highlight the value of domain records in decision-making processes, notably to verify entities and protect brands.

As the months go by, it seems that the business sector will be carefully watching how much of WHOIS is preserved to aid diverse industries.

* * *

WHOIS is not perfect, and opponents raise many relevant issues. However, it's important to keep in mind the protocol's comprehensiveness and decentralized nature as parties work toward the best solution for its future.

Written by Jonathan Zhang, Founder and CEO of Threat Intelligence Platform


Domain Registrars Given a Six-Month Deadline to Implement Registration Data Access Protocol (RDAP)

Domain industry news - Fri, 2019-03-01 18:44

ICANN issued an industry-wide six-month deadline for the deployment of the Registration Data Access Protocol (RDAP) — a replacement for the WHOIS protocol. Kevin Murphy reporting in Domain Incite: "Registration Data Access Protocol fulfills the same function as Whois, but it's got better support for internationalization and, importantly given imminent work on Whois privacy, tiered access to data. ... The registries and registrars knew it was coming and told ICANN this week that they're happy for the 180-day implementation deadline to come into effect." Domain registries and registrars are required to implement an RDAP service by 26 August 2019, says ICANN.
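Unlike port-43 WHOIS, RDAP is plain HTTP with JSON responses, so querying it needs no special tooling. A minimal sketch follows, assuming the public rdap.org bootstrap redirector, which forwards a query to the authoritative registry's RDAP server; registries' own base URLs are published via IANA.

    import json
    import urllib.request

    def rdap_lookup(domain: str) -> dict:
        """Fetch a domain's RDAP record via the rdap.org redirector."""
        url = f"https://rdap.org/domain/{domain}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    record = rdap_lookup("example.com")
    # Standard RDAP fields: the domain's LDH name and its lifecycle events.
    print(record.get("ldhName"))
    print([e.get("eventAction") for e in record.get("events", [])])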

Follow CircleID on Twitter

More under: Domain Names, ICANN, Policy & Regulation, Registry Services, New TLDs, Whois

Categories: News and Updates

The Domain Battle That Pitted Ari Goldberger Against Michael Cohen and The Trump Organization

DN Journal - Fri, 2019-03-01 18:02
In the wake of this week's developments in Washington DC, this previously untold story (with a domain twist) is as relevant today as it was when it happened.
Categories: News and Updates

Wow: Over 64,000 .Dev domain names registered

Domain Name Wire - Fri, 2019-03-01 16:13

The .Dev top level domain is off to a strong start.

Google launched the .dev top level domain name yesterday and the domain already has over 64,000 registrations.

The zone file for .dev on ICANN's reporting system currently shows only about 15,000 domains, so it's missing many of them. Still, based on my review of the zone file, it looks like last-name and first/last-name combos are popular; individual developers are buying their own names ending in .dev. Development terms and company names are also popular.
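
For readers who want to reproduce this kind of review, here is a minimal sketch that tallies how many .dev labels begin with a common first name. The zone file and the name list are placeholder inputs you would supply yourself; neither file name refers to anything ICANN or Google publishes.

  # Count .dev labels that look like personal names. "dev.zone" and
  # "first_names.txt" are placeholder file names (assumptions).
  def second_level_labels(zone_path):
      labels = set()
      with open(zone_path) as zone:
          for line in zone:
              if not line.strip() or line.startswith(";"):
                  continue                     # skip blanks and comments
              owner = line.split()[0].rstrip(".").lower()
              if owner.endswith(".dev"):
                  labels.add(owner[:-4])       # strip the ".dev" suffix
      return labels

  with open("first_names.txt") as f:
      first_names = {n.strip().lower() for n in f if n.strip()}

  labels = second_level_labels("dev.zone")
  name_like = [l for l in labels if any(l.startswith(n) for n in first_names)]
  print(len(name_like), "of", len(labels), "labels look like personal names")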

While these numbers are slightly behind Google’s early success with .app, they are a lot higher than I expected. Google’s cachet with developers and its PR capabilities are certainly driving registrations.

The company is offering a free one-year .dev domain to applicants for this year's Google I/O conference, but this represents only a small part of the 64,000 registrations.

The entire .dev top level domain is on the HSTS preload list, so .dev websites need an SSL certificate in order to load in browsers. (Learn more about this here.)
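
You can verify the practical effect by checking whether a given .dev host presents a valid certificate; the sketch below performs the TLS handshake directly. Note that the preload enforcement itself happens in the browser, not in this script, and web.dev is just one example host.

  import socket
  import ssl

  # Check that a host presents a valid TLS certificate. Preload-aware
  # browsers upgrade every .dev request to HTTPS, so a .dev site
  # without a working certificate is effectively unreachable in them.
  def has_valid_tls(host, port=443):
      context = ssl.create_default_context()   # verifies chain and hostname
      try:
          with socket.create_connection((host, port), timeout=5) as sock:
              with context.wrap_socket(sock, server_hostname=host):
                  return True
      except (ssl.SSLError, OSError):
          return False

  print(has_valid_tls("web.dev"))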

© DomainNameWire.com 2019. This is copyrighted content. Domain Name Wire full-text RSS feeds are made available for personal use only, and may not be published on any site without permission. If you see this message on a website, contact copyright (at) domainnamewire.com. Latest domain news at DNW.com: Domain Name Wire.

Related posts:
  1. Uniregistry to release over one million domains through registrar channel
  2. Domain Connect promises to make connecting domains to web services easier
  3. Donuts chooses Rightside over Google Nomulus
Categories: News and Updates

Montefiore Investment acquires domain name registrar Gandi

Domain Name Wire - Fri, 2019-03-01 13:02

Private equity firm acquires domain name registrar with 2.5 million domains.

Private equity firm Montefiore Investment has acquired domain name registrar Gandi.

In a heartfelt blog post, Gandi CEO Stephan Ramoin explained the origins of the company and its values. One thing that's unique about Gandi is that it has grown mostly by word-of-mouth; the company didn't jump into the advertising game like many registrars.

Ramoin noted that, with a private equity backer, the company will now look to acquire other businesses for the first time.

Gandi has revenues of over 37 million euros and 800,000 customers. More than half of its sales come from France, where it is located, but other regions have contributed substantially since the company created overseas subsidiaries in 2012. Gandi has more than 2.5 million domains under management.

© DomainNameWire.com 2019. This is copyrighted content. Domain Name Wire full-text RSS feeds are made available for personal use only, and may not be published on any site without permission. If you see this message on a website, contact copyright (at) domainnamewire.com. Latest domain news at DNW.com: Domain Name Wire.

Related posts:
  1. CentralNic acquires Instra for $24 million, sells $3.6M in domains, and plans to raise $15M
  2. Bad guys get Gandi.net’s password to technical provider, redirect domains
  3. Frank Schilling sparked yesterday’s active day for Rightside
Categories: News and Updates

New TLDs, five years in

Domain Name Wire - Thu, 2019-02-28 21:36

It was five years ago this month that the first domains under the new top level domain (nTLD) program started rolling out to consumers. The first Latin script general availability domains came out in early February 2014.

Remember .guru? .Plumbing? Yeah, that was a long time ago.

Here’s how I’d sum up the first five years of new top level domains.

Demand and artificial demand

Registration numbers for new TLDs have not met expectations for just about anyone who put money into them. That includes the many registries that spent millions acquiring strings, as well as ICANN itself.

I thought new TLD applicants were overly bullish, but even I was surprised when the first sunrise periods drew only a couple hundred registrations each.

Applicants should have understood the market size better. People looked at registrations in .com to predict market size for new TLD strings, but many applicants also expected some magic demand to materialize out of nowhere. The reality is that only so many people are creating a website at any given time. They'll look for a .com or ccTLD name first; only if they can't get one might they consider a new TLD. New TLD demand from people creating websites is thus a subset of total site-creator demand. Without a catalyst for more people to create websites, demand won't shoot up.

Of course, if you look at the headline registration numbers, some domains seem to be doing really, really well. .Top has 3.8 million domains in the zone and .XYZ has over 2 million.

But we all know at this point how these numbers were manufactured. A lot of registries took the “fake it til you make it” approach. They boosted their numbers through giveaways (and near giveaways).

This worked in some respects. .XYZ got a lot of attention when companies saw its numbers take off. Some companies used .xyz domains as a result.

Cheap domains have a downside, though. Spammers and criminals churn through domain names, so they like cheap ones. There's a fairly tight correlation between domain price and the quality of a namespace. Unfortunately, the bad reputation of some new TLDs has given new TLDs a bad rap overall in security circles.

The reality is that registration and usage growth is bound to be slow. I wish there were a catalyst for domain demand to shoot through the roof, but there quite simply isn't one at this time.

The earlier the better

.Guru still has over 60,000 names in its zone. It would have a fraction of that if it had come to market later.

Until .app came along, .guru had the most pre-orders of any domain at GoDaddy. I don't think anyone would look at the total pool of new TLDs and argue that .guru belongs where its numbers put it.

It had first-mover advantage among “generic” new TLDs.

Speaking of which, this might be why some people had unrealistic expectations for new TLD registration numbers. They looked at .co and .xxx and extrapolated. But these names did as well as they did because they had very little competition. They had an advantage that most new TLDs don’t have. The environment has changed.

The earlier new TLDs also had an advantage because domainer wallets weren’t tapped out.

Crazy auction prices

I understand companies paying millions of dollars to acquire strings before we had a good idea of registration volumes. But I was perplexed as contention set auction prices continued to soar even after reality set in.

How can anyone justify spending $10M-plus on a string with a plausible "real" registration base of 10k-20k domains at modest prices? Some of these acquisition costs will never be paid back. Even on a ten-year payback horizon, that's a horrible investment.
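
To put rough numbers on that, here is the back-of-the-envelope math. The registration base and per-domain registry revenue are my own assumptions, not figures from any actual deal, and the result ignores operating costs and discounting, which would only make it worse.

  # Back-of-the-envelope payback math; every input is an assumption.
  acquisition_cost = 10_000_000      # $10M winning bid for the string
  registrations = 20_000             # optimistic "real" registration base
  revenue_per_domain_year = 15       # assumed registry take per domain, $

  annual_revenue = registrations * revenue_per_domain_year    # $300,000
  years_to_recoup = acquisition_cost / annual_revenue
  print("Years to recoup the bid: %.0f" % years_to_recoup)    # ~33 years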

I realize that registries got cash infusions by losing contention set auctions, but it doesn't make sense to blow that cash (real cash in the bank!) on other strings just because you got a windfall.

What does Verisign think?

As it turns out, new TLDs had little impact on .com. But it was shocking when Verisign sued XYZ for some of the comments it made about .com and how well .xyz was doing. (Verisign lost.)

That lawsuit was the first big indication that Verisign had concerns about the impact of new TLDs on .com. Its messaging changed during the lawsuit, when it said it was actually .net that was hurt. I'm still a bit confused about what this lawsuit was all about. I've heard people say it was Verisign's effort to silence new TLD folks who were bashing .com. I don't know. I don't understand many of the decisions Verisign makes.

IDN transliterations of .com are a dud

There’s not much to say here. The idea that .com transliterations would make IDN.com domains worth a lot was flat-out wrong.

Amazon’s slow roll

Amazon made several surprising moves with new TLDs.

First is that it applied for so many. 76.

Second is that it didn’t plan to open them up to the public, at least at first. That changed when the community got upset about “closed generics”.

Third is that Amazon has done so little with its TLDs. Why hold these strings only to neuter them with over-the-top restrictions? I realize Amazon is a big company and these domains won’t move the needle, but the company could get an ROI on some of them by selling to other registries. If they don’t have plans for a string in the next few years, why not shop them?

Registry technical service costs fall…a lot

One new TLD operator told me he thinks the technical backend registry cost of first-year creates is headed to zero. He might be right.

There are so many competent companies providing registry backend technology. They bid aggressively to win contracts with cut-rate pricing.

Of course, the registry for .com domains still gets a whopping $7.85 per registration. And that might go up soon.

Donuts

No discussion of the first five years of new TLDs would be complete without mentioning Donuts.

The company applied for over 300 TLDs. It bought Rightside and now has about 240 strings in its portfolio. Last year it was acquired by private equity firm Abry Partners in a competitive process.

Let's face it: Donuts did it right. It understood how contention sets would be settled and how to play this game. Its massive portfolio approach smoothed out its "bad" TLD choices, and its overhead is spread across hundreds of strings.

Even though I’m sure Donuts didn’t hit its best case forecasts, its founders made the smartest play in this round of domain expansion.

What’s next?

We’ll see more consolidation in the new TLD space. This will speed up as more TLD operators face reality. There will also be another round with some twists and new rules. It will be a while, though.

That’s my take. What do you think?

© DomainNameWire.com 2019. This is copyrighted content. Domain Name Wire full-text RSS feeds are made available for personal use only, and may not be published on any site without permission. If you see this message on a website, contact copyright (at) domainnamewire.com. Latest domain news at DNW.com: Domain Name Wire.

Related posts:
  1. Final new TLD objection tally: Donuts 55, Amazon 24, Google 22
  2. Donuts: Verisign trying to intimidate, bully competitors with .XYZ lawsuit
  3. Minds + Machines acquires .XXX top level domain operator ICM Registry
Categories: News and Updates

ICANN LAC-i Roadshow Stirs "High Interest" in Caribbean ccTLDs

Domain industry news - Thu, 2019-02-28 20:23

The Caribbean can create a unique flavour on the Internet by using effectively managed and financially stable Country Code Top-Level Domains.

Commonly known as ccTLDs, Country Code Top-Level Domains are the two-letter codes inserted after the last dot and before the first slash of some website addresses. All ccTLDs are specifically designated for a particular country or territory, such as .gd for Grenada, .vc for St Vincent and the Grenadines, or .kn for St Kitts and Nevis. The Internet Corporation for Assigned Names and Numbers (ICANN) is the global forum responsible for developing policies for the coordination of some of the Internet's core technical elements, including top-level domains.
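
As a small illustration of where those codes sit in an address, this sketch extracts the top-level domain from a URL using only Python's standard library; the example addresses are made up.

  from urllib.parse import urlparse

  # The top-level domain is the label after the last dot of the host
  # name, which ends before the first slash of the path.
  def tld_of(url):
      host = urlparse(url).hostname
      return host.rsplit(".", 1)[-1]

  print(tld_of("https://www.example.gd/home"))   # -> "gd"
  print(tld_of("https://shop.example.vc/"))      # -> "vc"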

Speaking after ICANN's first Caribbean edition of its Latin American and Caribbean Internet Roadshow (LAC-i Roadshow), recently held in the Turks and Caicos Islands, Albert Daniels said the event yielded "very high interest" in the management and operations of the local ccTLD, .tc. Daniels is ICANN's senior manager of stakeholder engagement in the Caribbean.

"The LAC-i Roadshow facilitated a special session which explained how the establishment of a multistakeholder advisory group has been very successful in the management of a similar Caribbean ccTLD, .tt, for Trinidad and Tobago," Daniels said in a post-event statement dated February 12.

In attendance at the roadshow were government and private sector stakeholders, end users, and several representatives from two Internet service providers, Digicel and Flow.

"ICANN structures like the Country Code Names Supporting Organisation were also examined as entities which could connect Caribbean ccTLD managers with their peers in other countries globally, who would be in a position to help them manage and grow their own ccTLDs with stability and resiliency," Daniels added in a subsequent e-mail interview.

"Country code top-level domain names (like .tt, .vc, .uk, .ca, .gd or .kn) were delegated to trustees to manage on behalf of local communities, to give countries of the world their unique space on the internet and provide a platform to contribute to the development of the Internet economy at a local level. Caribbean stakeholders, therefore, have the opportunity to carve out their own unique local flavor of an internet presence utilising an effectively managed and financially stable ccTLD. Generic top-level domain names also offer companies and organisations in the Caribbean the opportunity for similar 'brand' identification," the statement said.

The LAC-i roadshow included a session intended to introduce participants to some of the ways in which Internet policy developed in the global ICANN multistakeholder community can directly impact the type of internet that is used every day for business and social activity in the Caribbean and across the world.

"A secure, stable and interoperable Internet is now relied upon every minute of every day for transacting business and interacting socially at a global level and at the finest level of local day to day life. The policies developed in the ICANN multistakeholder community impact the kind of internet that we get in the Caribbean and how this internet works for us, so we should therefore pay attention to key developments taking place in ICANN global policy development and ensure that Caribbean input with Caribbean concerns and Caribbean needs features in the global decision making that takes place in ICANN and shapes the Internet that we use in the Caribbean," Daniels said, via email.

The LAC-i Roadshow, which took place in Providenciales on February 7, was part of a larger event jointly coordinated with the Telecommunications Commission of Turks and Caicos and the American Registry for Internet Numbers.

More information about the ICANN LAC-i Roadshow is available on its website: http://icannlac.org/EN.

Written by Gerard Best, Development Journalist

Follow CircleID on Twitter

More under: Domain Names, ICANN, Internet Governance

Categories: News and Updates

Number of Chinese Internet Users Reaches 829 Million, More Than Double the Population of the US

Domain industry news - Thu, 2019-02-28 20:16

Latest update from the China Internet Network Information Center (CNNIC) reports that the total number of Chinese internet users reached 829 million at the end of 2018 — more than double the population of the US and up 7.3 percent on the previous year. Eight hundred seventeen million Chinese used a smartphone to access the internet, accounting for 98.6 percent of all netizens, according to Global Times, which received a copy of CNNIC's latest report. "There are still 562 million people in China isolated from the online world, with most living in rural regions. A low education level and insufficient internet surfing skills are the main obstacles blocking them from accessing the internet, the report said."

Follow CircleID on Twitter

More under: Access Providers, Mobile Internet, Web

Categories: News and Updates

High-Speed Fibre Makes Up One-Quarter of Fixed Broadband Internet Connections in OECD Countries

Domain industry news - Thu, 2019-02-28 20:00

The Organization for Economic Co-operation and Development (OECD) reports that the share of high-speed fiber in fixed broadband Internet connections in its member countries has risen to 25%, up from 12% eight years ago. The OECD currently consists of 36 member countries spanning North and South America, Europe and Asia-Pacific. Its latest data shows a wide range between countries, however: "The share of fiber in total broadband [ranges] from above 70% in Korea, Japan and Lithuania to below 10% in Greece, Belgium, the United Kingdom, Israel, Austria, Germany, Italy and Ireland ... The highest growth in fiber over the past year has been seen in Ireland, Belgium and Australia with fibre subscriptions up 218%, 71% and 70% respectively."

Follow CircleID on Twitter

More under: Broadband

Categories: News and Updates
