News and Updates

Domain Name Rights Coalition is reborn to protect domain registrants

Domain Name Wire - Mon, 2018-02-26 15:00

Rebirthed organization will advocate for a broad range of domain name owners.

Domain Name Rights Coalition (DNRC), which was first founded in the late 1990s to protect the interests of domain name owners, has been reborn. The group bills itself as a think tank supporting the work of the ICANN community and representatives of domain name registrants. It will advocate on behalf of domain name owners when it comes to policy, including rights protection mechanisms such as the Uniform Domain Name Dispute Resolution Policy (UDRP).

The organization represents a broader group of people than the Internet Commerce Association (ICA), which represents domain name investors. The genesis of the DNRC was to protect small businesses and ISPs that were facing big trademark interests. It was instrumental in adding safeguards for domain name owners when the UDRP was originally formed.

The DNRC’s interests will often align with the ICA’s, which means there will be another advocate at the table when it comes to the battle over rights protection mechanisms. That advocate will be backed by a broader coalition of supporters, which is good for domain name investors.

Kathy Kleiman co-founded the original DNRC and is spearheading the new organization. You can learn more about how cybersquatting policy was formed in the 1990s and what’s in store for the future on today’s DNW podcast when I interview Kleiman. It will be published at 10:30 CST.


Related posts:
  1. Money Down the Drain
  2. EA loses domain dispute for upcoming SSX launch
  3. Nintendo loses domain dispute for domain name
Categories: News and Updates

Humming an Open Internet Demise in London?

Domain industry news - Sun, 2018-02-25 20:20

In mid-March, the group dubbed by Wired Magazine 20 years ago as Crypto-Rebels and Anarchists — the IETF — is meeting in London. With what is likely some loud humming, the activists will seek to rain mayhem upon the world of network and societal security using extreme end-to-end encryption, and collaterally diminish some remaining vestiges of an "open internet." Ironically, the IETF uses what has become known as the "NRA defence": extreme encryption doesn't cause harm, criminals and terrorists do. The details and perhaps saving alternatives are described in this article.

Formally known as the Internet Engineering Task Force (IETF), the group began its life as a clever DARPA skunkworks project to get funded academics engaged in collective brainstorming of radical new ideas for DOD. It never created an actual organization — which helped avoid responsibility for its actions. During the 1990s, the IETF became embraced as a strategic home for a number of companies growing the new, lucrative market for disruptive DARPA internet products and services — coupled with continued copious funding from the Clinton Administration which also treated it as a means for promoting an array of perceived U.S. political-economic interests.

Over subsequent years, as other industry technical bodies grew and prospered, the IETF managed to find a niche value proposition in maintaining and promoting its legacy protocols. During the past few years, however, the IETF's anarchist roots and non-organization existence have emerged as a significant security liability. The zenith was reached with the "Pervasive Encryption" initiative, bringing Edward Snowden virtually to the IETF meetings, and humming to decide on radical actions that met the fancy of his acolytes.

The Pervasive Encryption initiative

The IETF began doing Snowden's bidding with the "Pervasive Encryption" initiative as their common crusade against what Snowden deemed "Pervasive Monitoring." The IETF activists even rushed to bless his mantra in the form of their own Best Current Practice, RFC 7258, which turned mitigation into a commandment.

The initiative will come to fruition at a humming session at the IETF 101 gathering in London in a few weeks. The particular object of the humming is an IETF specification designated TLS 1.3 (TLS stands for Transport Layer Security), designed to provide extremely strong, autonomous encryption for traffic between any two end-points (known as "end-to-end" or "e2e"). The specification has been the subject of no fewer than 24 draft versions and more than 25 thousand messages to reach a final stage of alleged un-breakability. In the IETF vernacular, the primary design goal of TLS 1.3 is to "develop a mode that encrypts as much of the handshake as is possible to reduce the amount of observable data to both passive and active attackers." It does so by leveraging an array of cryptologic techniques to achieve perfect "forward secrecy."
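
To make the end-to-end property concrete, here is a minimal client-side sketch, assuming Python 3.7+ with an OpenSSL build that supports TLS 1.3; the hostname is a placeholder rather than a real service. It simply insists on TLS 1.3 and reports the negotiated protocol version, illustrating that the protocol choice is made autonomously by the two end-points rather than by any network in between.

    import socket
    import ssl

    # Require TLS 1.3 and refuse anything older (assumes Python 3.7+ / OpenSSL 1.1.1+).
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_3

    host = ""  # placeholder hostname
    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # version() reports the negotiated protocol, e.g. "TLSv1.3".
            # In TLS 1.3 most of the handshake, including the server
            # certificate, is itself encrypted, which is the "reduced
            # observable data" property described above.
            print(tls.version())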

There are perceived short-term benefits for some parties from the essentially invisible traffic capability between two end-points on devices anywhere in the world, as described below. However, the impacts are overwhelmingly and profoundly adverse. Innumerable parties have raised alarms over the past two years, across multiple organizations and venues: workshops and lists within the IETF itself, vendor concerns, concerns about operational effects, major enterprise users such as Financial Data Center Operators, major anti-malware software vendors, the IEEE, the 3GPP mobile services community, the ITU-T Security Group and TSB Secretariat, a plethora of company R&D activities in the form of remedial product patents, trade press articles, and literally hundreds of research studies published in professional journals. The bottom-line view among the IETF activists, however, is "not our problem."

The use of TLS by the IETF is somewhat ironic. Transport Layer Security (TLS) actually had its origins in early OSI industry efforts in the 1980s to provide responsible security for the OSI internet. Indeed, an initial acceptable industry specification was formally published in the early 90s as a joint ITU-T/ISO (International Telecommunication Union Telecommunication Standardization Sector and International Organization for Standardization) standard that remains in effect today.

IETF crypto-activists a few years later took over the ITU-T/ISO internet TLS work to roll out their own versions to compensate for DARPA internet cyber security deficiencies. However, it was the affection for Snowden that primarily drove zealots to embark on TLS 1.3 as the crown jewel of the Pervasive Encryption initiative. A secondary but significant factor, the interest of Over-the-Top (OTT) providers in free, unfettered bandwidth to customers under the Net Neutrality political mandate, added substantial fuel to the TLS 1.3 fire. Indeed, OTT providers have pursued a TLS variant known as QUIC, which allows for multiple simultaneous encrypted streams to end-user customers. QUIC creates major operational and compliance challenges similar to TLS 1.3 and is already being blocked. So as those in London hum for TLS 1.3 anarchy, what is gained and what is lost?

What is gained with TLS 1.3?

There are several "winners." TLS 1.3 makes eavesdropping significantly more difficult. There are fewer "handshakes," so it should be faster than previous TLS versions. The platform enhances a sense of confidentiality for some individual users — especially the paranoid and those seeking increased protection for activities they want to keep unknown. Those who profess extreme privacy zeal will likely be pleased.

For those engaged in any kind of unlawful activity, TLS 1.3 is a kind of nirvana. They include those who seek to distribute and manage malware on remote machines — for either programmed attacks or for clandestine campaigns such as those manifested by Russian agents in the U.S. elections. Symantec has already presented statistics on how a considerable amount of malware is distributed via end-to-end encryption tunnels.

The platform also potentially enhances business opportunities and revenue for Over the Top (OTT) providers, and for vendors that leverage it for PR purposes. The latter includes some browser vendors and a few cloud data centre operators who cater to hosting customers for whom opaque end-to-end encryption for unaccountable activities is a value proposition.

TLS 1.3 also provides a perceived sense of satisfaction for those eternal "crypto anarchists" who have been labouring for so many years to best the government agency cryptologists and law enforcement authorities.

In a somewhat amusing, unintended way, the biggest winners may be the vendors of devices and software that detect and block TLS 1.3 traffic. They will benefit from the enormously increased market for their products.

What is lost with TLS 1.3?

TLS 1.3 (and QUIC) are already known to be highly disruptive to network operators' ability to manage or audit networks. This occurs through a number of factors, but one of the most prevalent is that it breaks the functionality of the enormous number of network "middleboxes" that are essential for network operation. The problem is exacerbated in commercial mobile networks where the operator is also attempting to manage radio access network (RAN) bandwidth.

Because encrypted e2e transport paths in potentially very large numbers are being created and managed autonomously by some unknown third parties, a network provider faces devastating consequences with respect to providing sufficient bandwidth and meeting network performance expectations. It is in effect an unauthorized taking of the provider's transport network resources.

As noted above, TLS 1.3 significantly facilitates widespread malware distribution, including agents that can be remotely managed for all kinds of tailored attacks. In the vernacular of cybersecurity, it exponentially increases the threat surface of the network infrastructure. The proliferation of Internet of Things (IoT) devices exacerbates the remotely controlled agent attack potential. Although the counter-argument is to somehow magically improve the security of all the network end-points, the ability to really accomplish this fanciful objective is ephemeral rather than real. It seems likely that most end users will view their loss of security and control of their terminal devices as much more important than any perceived loss of privacy from potential transport layer monitoring in transit networks.

A particularly pernicious result for enterprise network and data centre operators, including government agencies, is the potential for massive sensitive data exfiltration. A peripheral intruder through a TLS 1.3 encrypted tunnel into a data centre or company network could leverage their access to command substantial resources to gather and export intelligence or account information of interest. This potential result is one of the principal reasons for a continuing awareness campaign of the Enterprise Data Center Operators organization — coupled with proffering alternative options.

Most providers of network services are required to meet compliance obligations imposed by government regulation, industry Service Level Agreements, or insurance providers. The insurance impact may arise from an assessment that the potential liability of allowing TLS 1.3 traffic exposes providers to substantial tort litigation as an accessory to criminal or civil harm. The long list of compliance "by design" obligations below is likely to be significantly impeded or completely prevented by TLS 1.3 implementations:

  • Availability (including public services, specific resilience and survivability requirements, outage reporting)
  • Emergency and public safety communication (including authority to many, one to authority, access/prioritization during emergency, device discovery/disablement)
  • Lawful interception (including signaling, metadata analysis, content)
  • Retained data (including criminal investigative, civil investigative/eDiscovery, sector compliance, contractual requirements and business auditing)
  • Identity management (including access identity, communicating party identity, communicating party blocking)
  • Cyber Security (including defensive measures, structured threat information exchange)
  • Personally Identifiable Information protection
  • Content control (including intellectual property right protection, societal or organization norms)
  • Support for persons with disabilities

Lastly, the implementation of TLS 1.3 is likely to be found unlawful in most countries, a conclusion backed by longstanding treaty provisions that recognize the sovereign right of each nation to control its telecommunications and provide for national security. Furthermore, nearly every nation in the world requires that, with proper authorization, encrypted traffic must either be made available in decrypted form or the encryption keys provided to law enforcement authorities — which TLS 1.3 prevents. Few if any rational nations or enterprises are going to allow end-to-end encrypted traffic to transit their networks or communicate with end-point hosts at data centres or users without some visibility with which to assess the risk.

Myth of "the Open Internet"

The reality is that there have always been many internets running on many technologies and protocols and loosely gatewayed under diverse operational, commercial, and political control. In fact, the largest and most successful of them is the global commercial mobile network infrastructure which manages its own tightly controlled technical specifications and practices. With the rapid emergence of NFV-SDNs and 5G, internets on demand are beginning to appear.

The myth of a singular "Open Internet" has always been a chimera among Cyber Utopians and clueless politicians riding the Washington Internet lobbyhorse. The myth was begun by the Clinton Administration twenty years ago as an ill-considered global strategy to advance its perceived beneficial objectives and Washington politics. It came to backfire on the U.S. and the world in multiple dangerous ways. In reality, the humming approval of TLS 1.3 in London will likely diminish the "openness" within and among internets, but it will also properly cordon off the dangerous ones.

Thus, the perhaps unintended result of the IETF crypto zealots moving forward with TLS 1.3 will be for most operators to watch for TLS 1.3 traffic signatures at the network boundaries or end-points and either kill the traffic or force its degradation.

Innovation and a major industry standards organization to the rescue

Fortunately, there are responsible alternatives to TLS 1.3 and QUIC. For the past two years, some of the best research centres around the world have been developing the means for "fine-grained" visibility into encrypted traffic that balances both security interests and privacy concerns. Several dozen platforms have been published as major papers, have spawned innovative university programs, have led to a major standards Technical Report, and have even generated a seminal PhD thesis. A few have been patented. A number of companies have pursued proprietary solutions.

The question remained, however, which major global industry standards body would step up to the challenge of taking the best-of-breed approaches and rapidly producing new technical specifications for use. The answer came last year when the ETSI Cyber Security Technical Committee agreed to move forward with several Fine Grained Transport Layer Middlebox Security Protocols. ETSI, as both a worldwide and European body, has previously led major successful global standards efforts such as the GSM mobile standards (now spun out as 3GPP) and the NFV Industry Specification Group, so it had the available resources and industry credentials.

Considerable outreach is being undertaken to many other interested technical organizations, and a related Hot Middlebox Workshop and Hackathon are scheduled for June. The result allows the IETF to hum as it wishes, and the rest of the world can move on with responsible alternatives that harmonize all the essential requirements of network operators, data centres, end users, and government authorities.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, Cybersecurity, Internet Governance, Policy & Regulation

Categories: News and Updates

Have We Reached Peak Use of DNSSEC?

Domain industry news - Sat, 2018-02-24 23:57

The story about securing the DNS has a rich and, in Internet terms, protracted history. The original problem statement was simple: how can you tell if the answer you get from your query to the DNS system is 'genuine' or not? The DNS alone can't help here. You ask a question and get an answer. You are trusting that the DNS has not lied to you, but that trust is not always justified.

Whether the DNS responses you may get are genuine or not is not just an esoteric question. For example, many regimes have implemented a mechanism for enforcing various national regulations relating to online access to content by using DNS interception to prevent the resolution of certain domain names. In some cases, the interception changes a normal response into silence, while in other cases a false DNS answer is loaded into the response. As well as such regulatory-inspired intervention in the DNS, there is also the ever-present risk of malicious attack. If an attacker can pervert the DNS in order to persuade a user that a named resource lies behind the attacker's IP address, then there is always the potential to use DNS interception in ways that are intended to mislead and potentially defraud the user.

DNSSEC is a partial response to this risk. It allows the end client's DNS resolver to check the validity and completeness of the responses that are provided by the DNS system. If the published DNS data was digitally signed in the first place, then the client DNS can detect this, and the user can be informed when DNS responses have been altered. It is also possible to validate assertions that the name does not exist in the zone, so that non-existence of a name can be validated through DNSSEC. If the response cannot be validated, then the user has good grounds to suspect that some third party is tampering inside the DNS.
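
As a rough illustration of what validation looks like from the client side, the sketch below is an assumption-laden example rather than part of the article: it presumes the dnspython package and uses Google's public resolver at as the validating resolver. It sets the DNSSEC-OK bit on a query and then checks whether the resolver reported the answer as authenticated.

    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    # Placeholder name; the AD flag will only be set if the zone is signed
    # and the queried resolver actually performs DNSSEC validation.
    name = ""
    query = dns.message.make_query(name, dns.rdatatype.A, want_dnssec=True)
    response = dns.query.udp(query, "", timeout=5)

    if response.flags & dns.flags.AD:
        print("resolver validated the response (AD flag set)")
    else:
        print("response returned without validation")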

From this perspective, DNSSEC has been regarded as a Good Thing. When we look at the rather depressing saga of misuse and abuse of the Internet and want to do something tangible to improve the overall picture, then 'improving' the DNS is one possible action. Of course, it's not a panacea, and DNSSEC will not stop DNS interception, nor will it stop various forms of intervention and manipulation of DNS responses. What it can do is allow the client who posed the query some ability to validate the received response. DNSSEC can inform a client whether the signed response that they get to a DNS query is an authentic response.

The Costs and Benefit Perceptions of DNSSEC Signing

DNSSEC is not without its own costs, and the addition of more moving parts to any system invariably increases its fragility. It takes time and effort to manage cryptographic keys. It takes time and effort to sign DNS zones and ensure that the correct signed information is loaded into the DNS at all times. Adding digital signatures to DNS responses tends to bloat DNS messages, and these larger DNS messages add to the stress of using a lightweight datagram protocol to carry DNS queries and responses. The response validation function also takes time and effort. DNSSEC is not a 'free' addition to the DNS.

In addition, DNSSEC adds further vulnerabilities to the DNS. For example, if the validity dates on the zone's signatures are allowed to lapse, then the material in the zone that was signed with the associated key also expires, as seen by validating resolvers. More generally, if the overlay of keys and meshed digital signatures fails in any way, then validating resolvers will be unable to validate DNS responses for this zone. DNSSEC is not implemented as a warning to the user: DNSSEC will cause information to be withheld if the validating DNS resolver fails to validate the response.

Attacks intended to pervert DNS responses fall into two major categories. The first is denial, where a DNS response is blocked and withheld from the query agent. DNSSEC can't solve that problem. It may well be that the DNS "name does not exist" (NXDOMAIN) response cannot be validated, but that still does not help in revealing what resource record information is being occluded by this form of interception. The second form of attack is alteration, where parts of a DNS response are altered in an effort to mislead the client. DNSSEC can certainly help in this case, assuming that the zone being attacked is signed and the client performs DNSSEC validation.

Is the risk of pain from this second class of attack an acceptable offset against the added effort and cost to both maintain signed zones and operate DNSSEC-validating resolvers? The answer has never been an overwhelming and enthusiastic "yes." The response to DNSSEC has been far more tempered. Domain name zone administrators appear to perceive DNSSEC-signing of their zone as representing a higher level of administrative overhead, higher delays in DNS resolution, and the admission of further points of vulnerability.

The overwhelming majority of domain name zone administrators appear to be simply unaware of DNSSEC; or, even if they want to sign their zone, they cannot publish a signed zone because of limitations in the service provided by their registrar; or, if they are aware and could sign their zone, they don't appear to judge that the perceived benefit of DNSSEC-signing their zone adequately offsets the cost of maintaining the signed zone.

There are a number of efforts to try to address the combined issues of capability and perception. Some of these efforts attempt to offload the burden of zone signing and key management to a set of fully automated tools, while others use a more direct financial incentive, offering reduced name registration fees for DNSSEC-signed zones.

The metrics of signed DNSSEC zones are not easy to come by for the entire Internet, but subsections of the namespace are more visible. In New Zealand, for example, just 0.17% of the names in the .nz domain are DNSSEC-signed. It appears that this particular number is not anomalously high or low, but, as noted, solid whole-of-Internet data is not available for this particular metric.

It appears that on the publication side, the metrics of DNSSEC adoption still show a considerable level of caution bordering on skepticism.

DNSSEC Validation

What about resolution behaviour? Are there measurements to show the extent to which users pass their queries towards DNS resolvers that perform DNSSEC validation?

Happily, we are able to perform this measurement with some degree of confidence in the results. Using measurement scripts embedded in online ads and using an ad campaign that presents the scripting ad across a significant set of endpoints that receive advertisements, we can trigger the DNS to resolve names that are exclusively served by this measurement system's servers. By a careful examination of queries that are seen by the servers, it is possible to determine if the end user system is passing their DNS queries into DNSSEC-validating resolvers.

We've been doing this measurement continuously for more than four years now, and Figure 1 shows the proportion of users that pass their DNS queries through DNSSEC-validating resolvers.

* * *

Perhaps it's worth a brief digression at this point to look at exactly what "measuring DNSSEC validation" really entails.

Nothing about the DNS is as simple as it might look in the first instance, and this measurement is no exception. Many client-side stub resolvers are configured to use two or more recursive resolvers. The local DNS stub resolver will pass the query to one of these resolvers, and if there is no response within a defined timeout interval, or if the local stub resolver receives a SERVFAIL or a REFUSED code, then the stub resolver may re-query using another configured resolver. If the definition of "passing a query through DNSSEC-validating resolvers" is that the DNS system as a whole both validates signed DNS information and withholds signed DNS information if the validation function fails, then we need to be a little more careful in performing the measurement.

The measurement test involves resolving two DNS names: one is validly signed, and the other has an incorrect signature. Using this pair of tests, users can be grouped into three categories:

  1. None of the resolvers used by the stub resolver performs DNSSEC validation, and this is evident when the client is able to demonstrate resolution of both DNS names and did not query for any DNSSEC signature information.
  2. Some of the resolvers perform DNSSEC validation, but not all, and this is evident when the client is seen to query for DNSSEC signature information yet demonstrates resolution of both DNS names.
  3. All of the resolvers used by the client perform DNSSEC validation, and this is evident when the client is seen to query for DNSSEC signature information and demonstrates that only the validly-signed DNS name resolved.

The measurement we are using in Figure 1 is the third category, where we are counting end systems that have resolved the validly-signed DNS name and have been unable to resolve the invalidly-signed DNS name.
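
A small sketch of that classification logic, written from the description above rather than from APNIC's actual measurement code (the observation fields are invented for illustration):

    # Hypothetical per-client observations gathered at the measurement servers.
    def classify(resolved_good, resolved_bad, queried_dnssec):
        if resolved_good and resolved_bad and not queried_dnssec:
            return "no validating resolvers"        # category 1
        if resolved_good and resolved_bad and queried_dnssec:
            return "some validating resolvers"      # category 2
        if resolved_good and not resolved_bad and queried_dnssec:
            return "all resolvers validate"         # category 3, plotted in Figure 1
        return "unclassified"

    print(classify(resolved_good=True, resolved_bad=False, queried_dnssec=True))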

* * *

Figure 1 shows a story that is consistent with an interpretation of "peak DNSSEC" from the perspective of DNSSEC validation. When we started this measurement in late 2013, we observed that around 9% of users passed their queries to DNSSEC validating resolvers. This number rose across 2014 and 2015, and by the end of 2015, some 16% of users were sitting behind DNSSEC-validating DNS resolvers. However, that's where it stuck. Across all of 2016 this number remained steady at around 16%, then in 2017, it dropped. The first half of the year saw the number at just below 15%, but a marked drop in July 2017 saw a further drop to 13%. At the time of the planned roll of the KSK, the number dropped further to 12%, where it has remained until now.

If this number continues to drop, then we stand the risk of losing impetus with DNSSEC deployment. If fewer users validate DNS responses, then the rationale for signing a zone weakens. And the fewer the signed zones, the weaker the motivation for resolvers to perform DNSSEC validation. Up until 2016 DNSSEC was in a virtuous circle: the more validating resolvers there were, the greater the motivation to sign zones, and the more signed zones there were, the greater the motivation for resolvers to perform validation. But this same feedback cycle also works in the opposite direction, and the numbers over the past 14 months bear this out, at least on the validation side.

From the validation perspective, the use of DNSSEC appeared to have peaked in early 2016 and has been declining since then.


Given that our current perceptions of the benefits of DNSSEC appear to be overshadowed by our perceptions of the risks in turning it on, the somewhat erratic measures of DNSSEC adoption are perhaps unsurprising.

I also suspect that the planned KSK roll and the last-minute suspension of this operation in October 2017 did the overall case for DNSSEC adoption no favour. The perception that DNSSEC was a thoroughly tested and well understood technical mechanism was dealt a heavy blow by this suspension of the planned key roll. It exposed some uncertainties relating to our understanding of the DNSSEC environment in particular and of the DNS system as a whole, and while the suspension was entirely reasonable as an operationally conservative measure, the implicit signal about our current lack of thorough understanding of the way the DNS, and DNSSEC, work sent negative signals to both potential and current users of DNSSEC.

However, I can't help but think that this is an unfortunate development, as the benefits of DNSSEC are, in my view, under-appreciated. DNSSEC provides a mechanism that enables high trust in the integrity of DNS responses, and DANE (placing domain name keys in the DNS and signing these entries with DNSSEC) is a good example of how DNSSEC can be used to improve the integrity of a currently fractured structure of trust in the namespace. The use of authenticated denial in DNSSEC provides a hook to improve the resilience of the DNS by pushing the resolution of non-existent names back to recursive resolvers through the use of NSEC caching. DNSSEC is not just about being able to believe what the DNS tells you; it is also about making the namespace more trustworthy and improving the behaviour of the DNS and its resilience to certain forms of hostile attack.
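
For readers unfamiliar with DANE, the lookup itself is simple. The sketch below assumes the dnspython package and uses an invented hostname; in real use the answer is only meaningful if it is retrieved through a DNSSEC-validating resolver.

    import dns.resolver

    # DANE (RFC 6698) publishes certificate associations in TLSA records at
    # _<port>._<protocol>.<host>; the host below is a placeholder.
    name = ""
    try:
        for rdata in dns.resolver.resolve(name, "TLSA"):
            print(rdata)
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print("no TLSA record published for", name)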

It would be a sad day if we were to give up on these objectives due to lack of momentum behind DNSSEC adoption. It would be unfortunate if we were to persist with the obviously corrupted version of name certification we have today because some browser software writers are obsessed with shaving off the few milliseconds that need to be spent validating a name against the name's public key when using the DNS. It would be unfortunate if the DNS continues to be the weapon of choice in truly massive denial of service attacks because we are unable to deploy widespread NSEC caching to absorb these attacks close to the source. But a declining level of DNSSEC adoption means that these objectives appear to fade away.

I'm hoping that we have not passed the point of peak use of DNSSEC, and that the last two years have been a temporary aberration in a larger picture of progressive uptake. That would be the optimistic position.

Otherwise, we are being pushed into a somewhat more challenging environment that has strong parallels with the Tragedy of the Commons. If the natural incentives to individual actors do not gently nudge us to prefer outcomes that provide better overall security, more resilient infrastructure and a safer environment for all users, then we are all in trouble.

If the network's own ecosystem does not naturally lead to commonly beneficial outcomes, then we leave the door open to various forms of regulatory imposition. And the centuries of experience we have with such regulatory structures should not inspire us with any degree of confidence. Such regulatory strictures often tend to be inefficiently applied, selectively favour some actors at the expense of others, generally impose additional costs on consumers and, as often as not, fail to achieve their intended outcomes. If we can't build a safe and more resilient Internet on our own, then I'm not sure that we are going to like what will happen instead!

Written by Geoff Huston, Author & Chief Scientist at APNIC

Follow CircleID on Twitter

More under: Cybersecurity, DNS, DNS Security

Categories: News and Updates

Usenet, Authentication, and Engineering (or: Early Design Decisions for Usenet)

Domain industry news - Sat, 2018-02-24 18:28

A Twitter thread on trolls brought up mention of trolls on Usenet. The reason they were so hard to deal with, even then, has some lessons for today; besides, the history is interesting. (Aside: this is, I think, the first longish thing I've ever written about any of the early design decisions for Usenet. I should note that this is entirely my writing, and memory can play many tricks across nearly 40 years.)

A complete tutorial on Usenet would take far too long; let it suffice for now to say that in the beginning, it was a peer-to-peer network of multiuser time-sharing systems, primarily interconnected by dial-up 300 bps and 1200 bps modems. (Yes, I really meant THREE HUNDRED BITS PER SECOND. And someday, I'll have the energy to describe our home-built autodialers — I think that the statute of limitations has expired...) Messages were distributed via a flooding algorithm. Because these time-sharing systems were relatively big and expensive and because there were essentially no consumer-oriented dial-up services then (even modems and dumb terminals were very expensive), if you were on Usenet it was via your school or employer. If there was abuse, pressure could be applied that way — but it wasn't always easy to tell where a message had originated — and that's where this blog post really begins: why didn't Usenet authenticate requests?

We did understand the need for authentication. Without it, there was no way to perform control functions, such as deleting articles. We needed site authentication; as will be seen later, we needed user authentication as well. But how could this be done?

The obvious solution was something involving public key cryptography, which we (the original developers of the protocol: Tom Truscott, the late Jim Ellis, and myself) knew about: all good geeks at the time had seen Martin Gardner's "Mathematical Games" column in the August 1977 issue of Scientific American (paywall), which explained both the concept of public key cryptography and the RSA algorithm. For that matter, Rivest, Shamir, and Adleman's technical paper had already appeared; we'd seen that, too. In fact, we had code available: the xsend command for public key encryption and decryption, which we could have built upon, was part of 7th Edition Unix, and that's what Usenet ran on.

What we did not know was how to authenticate a site's public key. Today, we'd use a certificate issued by a certificate authority. Certificates had been invented by then, but we didn't know about them, and of course, there were no search engines to come to our aid. (Manual finding aids? Sure — but apart from the question of whether or not anything accessible to us would have indexed bachelor's theses, we'd have had to know enough to even look. The RSA paper gave us no hints; it simply spoke of a "public file" or something like a phone book. It did speak of signed messages from a "computer network" — scare quotes in the original! — but we didn't have one of those except for Usenet itself. And a signed message is not a certificate.) Even if we had known, there were no certificate authorities, and we certainly couldn't create one along with creating Usenet.

Going beyond that, we did not know the correct parameters: how long a key to use (the estimates in the early papers were too low), what was secure (the xsend command used an algorithm that was broken a few years later), etc. Maybe some people could have made good guesses. We did not know and knew that we did not know.

The next thing we considered was neighbor authentication: each site could, at least in principle, know and authenticate its neighbors, due to the way the flooding algorithm worked. That idea didn't work, either. For one thing, it was trivial to impersonate a site that appeared to be further away. Every Usenet message contains a Path: line; someone trying to spoof a message would simply have to claim to be a few hops away. (This is how the famous kremvax prank worked.)
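
To make that concrete, here is a hypothetical pair of Path: lines (the site names are invented). Each relaying site prepends its own name, but the line is just text supplied by the sender, so a forger at the injecting site can write in hops that never happened, and the receiver cannot tell the difference:

    Path: neighbor!origin!author   (genuinely relayed: origin injected it, neighbor passed it on)
    Path: neighbor!origin!author   (forged at neighbor: the same line, typed by hand, origin never saw it)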

But there's a more subtle issue. Usenet messages were transmitted via a generic remote execution facility. The Usenet program on a given computer executed the Unix command,

uux neighborsite!rnews

where neighborsite is the name of the next-hop computer on which the rnews command would be executed. (Before you ask: yes, the list of allowable remotely requested commands was very small; no, the security was not perfect. But that's not the issue I'm discussing here.) The trouble is that any knowledgeable user on a site could issue the uux command; it wasn't and couldn't easily be restricted to authorized users. Anyone could have generated their own fake control messages, without regard to authentication and sanity built into the Usenet interface. (Could uux have been secured? This is itself a complex question that I don't want to go into now; please take it on faith and don't try to argue about setgid(), wrapper programs, and the like. It was our judgment then — and my judgment now — that such solutions would not be adopted. The minor configuration change needed to make rnews an acceptable command for remote execution was a sufficiently high hurdle that we provided alternate mechanisms for sites that wouldn't do it.)

That left us with no good choices. The infrastructure for a cryptographic solution was lacking. The uux command rendered illusory any attempts at security via the Usenet programs themselves. We chose to do nothing. That is, we did not implement fake security that would give people the illusion of protection but not the reality.

This was the right choice.

But the story is more complex than that. It was the right choice in 1979 but not necessarily right later, for several reasons. The most important is that the online world in 1979 was very different than it is now. For one thing, since only a very few people had access to Usenet — mostly CS students and tech-literate employees of large, sophisticated companies — the norms were to some extent self-enforcing: if someone went too far astray, their school or employer could come down on them. For another, our projections of participation and volume were very low. In my most famous error, I projected that Usenet would grow to 50-100 sites, and 1-2 articles a day, ever. The latest figures, per Wikipedia, put traffic at about 74 million posts per day, totaling more than 37 terabytes. (I suppose it's an honor to be off by seven orders of magnitude — not many people help create a system that's successful enough to have a chance at such a lack of foresight!) On the one hand, a large network has much more need for management, including ways to deal with people and traffic that violate the norms. On the other, simply as a matter of statistics, a large network will have at the least proportionately more malefactors. Furthermore, the increasing democratization of access meant that there were people who were not susceptible to school or employer pressure.

Traffic volume was the immediate driver for change. B-news came along in 1981, only a year or so after the original A-news software was released. B-news did have control messages. They were necessary, useful — and abused. Spam messages were often countered by cancelbots, but of course cancelbots were not available only to the righteous. And online norms are not always what everyone wants them to be. The community was willing to act technically against the first large-scale spam outbreak, but other issues — a genuine neo-Nazi, posts to the newsgroup by a member of NAMBLA, trolls on the soc.motss newsgroup, and more — were dealt with by social pressure.

There are several lessons here. One, of course, is that technical honesty is important. A second, though, is that the balance between security and functionality is not fixed — environments and hence needs change over time. B-news was around for a long time before cancel messages were used or abused on a large scale, and this good mass behavior was not because the insecurity wasn't recognized: when I had a job interview at Bell Labs in 1982, the first thing Dennis Ritchie said to me was "[B-news] is a tool of the devil!" A third lesson is that norms can matter, but that the community as a whole has to decide how to enforce them.

There's an amusing postscript to the public key cryptography issue. In 1979-1981, when the Usenet software was being written, there were no patents on public key cryptography nor had anyone heard about export licenses for cryptographic technology. If we'd been a bit more knowledgeable or a bit smarter, we'd have shipped software with such functionality. The code would have been very widespread before any patents were issued, making enforcement very difficult. On the other hand, Tom, Jim, Steve Daniel (who wrote the first released version of the software — my code, originally a Bourne shell script that I later rewrote in C — was never distributed beyond UNC and Duke) and I might have had some very unpleasant conversations with the FBI. But the world of online cryptography would almost certainly have been very different. It's interesting to speculate on how things would have transpired if cryptography was widely used in the early 1980s.

Written by Steven Bellovin, Professor of Computer Science at Columbia University

Follow CircleID on Twitter

More under: Cybersecurity, Internet Protocol

Categories: News and Updates

NamesCon Registration Prices go Up Tomorrow

Domain Name News - Sat, 2013-11-30 18:57

What’s the perfect thing to do after celebrating Thanksgiving with your family? Get right back to work and plan for the new year. And to get into the domaining mood right in the new year, what’s better than a domain industry conference at a low price? Today’s the last day to get your NamesCon tickets for the event, running January 13th to 15th in Las Vegas, NV, for $199 + fees. The price doubles tomorrow to $399.

Richard Lau, the organizer of the event, told DNN: “We are at over 200 attendees already and expect to hit more than 400 at the conference. The opening party on Monday (6:30pm – 9pm) will be hosted by .XYZ at the Tropicana, and the Tuesday night party will be at the Havana Room at the Tropicana from 8pm-midnight.”

With hotel prices as low as $79 a night plus a $10 resort fee at the Tropicana, right on the Strip, this ‘no meal’ conference is shaping up to be the event for the industry in 2014.

The event has already attracted sponsors like:

Further sponsorships are available.

Keynote speakers are:

If you need another reason to attend – you can even meet DomainNameNews in person there :)

Related posts:

Categories: News and Updates

.DE Registry to add Redemption Grace Period (DENIC)

Domain Name News - Tue, 2013-11-26 19:49

As of December 3rd, 2013, DENIC, the operator of the .DE ccTLD, will introduce a Redemption Grace Period (RGP) that allows the original domain owner to recover their deleted domain for up to 30 days after deletion, the same as for gTLDs.

See the full press release after the jump.

Redemption Grace Period for .DE name space kicking off in early December

New cooling off phase to prevent unintentional domain loss

Effective 3 December 2013, the managing organization and central registry operator of the .DE top level domain, DENIC, will launch a dedicated cooling-off service (called Redemption Grace Period – RGP) which shall apply for all second-level domain names in the .DE name space. This procedure shall protect registrants against an unintentional loss of their domain(s), as a result of accidental deletion.

Under the RGP scheme, .DE domain names shall no longer be irretrievably lost, following deletion, but instead initially enter a subsequent 30-day cooling-off phase, during which they may solely be re-registered on behalf of their former registrant(s).

RGP cooling-off provisions shall allow former registrants to redeem registration of the subject domain names by having recourse to the related Restore service, through a registrar. Only if no redemption is requested during the 30-day RGP phase shall the relevant domain names become available for registration by any interested party again. Similar regulations are already applied by other top level domain registries.
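
A minimal sketch of the 30-day window described above, written for illustration only (this is not DENIC's implementation; the dates and function are invented):

    from datetime import date, timedelta

    RGP_DAYS = 30  # cooling-off window described in the press release

    def rgp_disposition(deleted_on, today, requester_is_former_registrant):
        # During cooling off, only the former registrant may restore the name,
        # via a registrar, against a Restore fee; afterwards it is released.
        if today <= deleted_on + timedelta(days=RGP_DAYS):
            return "restorable" if requester_is_former_registrant else "blocked"
        return "available to any registrant"

    print(rgp_disposition(date(2013, 12, 5), date(2013, 12, 20), True))   # restorable
    print(rgp_disposition(date(2013, 12, 5), date(2014, 2, 1), False))    # available to any registrant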

Registrars redeeming a deleted .DE domain name for the original registrant will have to pay a Restore fee and may pass on the related costs.

Deleted .DE domain names placed in cooling off, from RGP implementation, will be earmarked by a redemption period status in the DENIC lookup services (whois) accessible at

As a consequence of the above measures, the current DENIC .DE domain guidelines shall be superseded by new, amended ones from the date of RGP launch, i.e. 3 December 2013, which shall then be permanently published at guidelines.html.

Related posts:

Categories: News and Updates

Andee Hill forms backed by Gregg McNair

Domain Name News - Mon, 2013-11-25 16:00

Andee Hill, who recently left her position as Director of Business Development, has created a new licensed escrow company, Escrow Hill Limited, with the backing of entrepreneur Gregg McNair.


“When Andee told me she was thinking about forming her own escrow business I was immediately enthusiastic. I have a reputation of connecting some of the best people in our industry and Andee is at the top both professionally and as an amazing human being,” McNair said. The new company’s team includes Ryan Bogue as General Manager and Donald Hendrickson as Operations Manager. Both have worked in the business of online escrow under Hill’s direction for over fifteen years combined. Together with Hill’s experience, the new team offers over thirty years of online escrow experience!

“During my fifteen years in this business, I have handled just about every aspect of online escrow. Regardless of my title, I have always known that understanding the client’s needs and providing excellent and secure service is invaluable. I have been fortunate to work with the industry innovator from day one. I have seen what works and what doesn’t. I have been even more fortunate to have created great relationships and trust with industry leaders. At I know I can do an even better job,” Hill said.

“Gregg has earned a strong reputation for honesty, integrity and for successfully making businesses work. He also has incredible enthusiasm and a heart for helping others. All are key factors in me wanting Gregg to support my endeavor at,” Hill continued.

McNair has assumed the non-operational role of Chairman, supporting Hill and her team with whatever it takes to build the best escrow business on the planet.

Marco Rinaudo, Founder and CEO of a domain registrar that is another one of Gregg McNair’s investments, has been appointed CTO of the new company. Rinaudo, who has been a leader in the international hosting and registrar space since 1995, said, “[The new company] is formed and supported by the very best people in the industry. Our team has built the most sophisticated on-line internet escrow platform, fully automated and with more advanced security features than any other.”

See the full press release after the jump.

Andee Hill forms

AUCKLAND NZ: One aspect of the domain space that bridges the whole industry is that of escrow; and the one person better known than any other in that context is the former Director of Business Development at, Ms. Andee Hill.

Ms. Hill has established the licensed international escrow enterprise, Escrow Hill Limited with the backing of long time friend and industry entrepreneur, Gregg McNair.

“When Andee told me she was thinking about forming her own escrow business I was immediately enthusiastic. I have a reputation of connecting some of the best people in our industry and Andee is at the top both professionally and as an amazing human being,” McNair said. The new company’s dream team includes Ryan Bogue as General Manager and Donald Hendrickson as Operations Manager. Both have worked in the business of online escrow under Hill’s direction for over fifteen years combined. Together with Hill’s experience, the new team offers over thirty years of online escrow experience!

The domain industry is undergoing incredible change and is positioned to provide secure, yet flexible, state of the art products and services. will be able to meet the needs of both past and future generations of domain buyers, brokers and sellers. Hill’s reputation as an honest, discreet and hard working professional will now aspire to a new level.

“During my fifteen years in this business, I have handled just about every aspect of online escrow. Regardless of my title, I have always known that understanding the client’s needs and providing excellent and secure service is invaluable. I have been fortunate to work with the industry innovator from day one. I have seen what works and what doesn’t. I have been even more fortunate to have created great relationships and trust with industry leaders. At I know I can do an even better job,” Hill said.

“Gregg has earned a strong reputation for honesty, integrity and for successfully making businesses work. He also has incredible enthusiasm and a heart for helping others. All are key factors in me wanting Gregg to support my endeavor at,” Hill continued.

McNair has assumed the non-operational role of Chairman, supporting Hill and her team with whatever it takes to build the best escrow business on the planet.

Marco Rinaudo, Founder and CEO of a domain registrar, has been appointed CTO of the new company. Rinaudo, who has been a leader in the international hosting and registrar space since 1995, said, “[The new company] is formed and supported by the very best people in the industry. Our team has built the most sophisticated on-line internet escrow platform, fully automated and with more advanced security features than any other.”

Related posts:

Categories: News and Updates

Inaugural Heritage Auctions Domain Event in New York City – Live Results

Domain Name News - Thu, 2013-11-21 23:55

We were live blogging the results of the inaugural Heritage Auctions Domain Event in New York City today. There are no guarantees that this list is correct or complete, as these are not official or officially approved results.

The auction sold 26 out of the 68 domains for a total of $419,970. Domains that did not sell in the live auction will be available on Heritage Auction’s website for two weeks at their reserve price as a Buy it Now price.

The top 5 sales of this auction were:

  1. for $138,000
  2. for $112,125
  3. for $34,500
  4. for $23,000, for $23,000
  5. for $17,250

Please note that all domains incur a 15% bidder premium, which is reflected in our total and in the last column of the table below (for example, the $120,000 winning bid on the top lot comes to $138,000 with the premium). See the full live blogged results after the jump.


[Table: full live-blogged results with lot number, domain name, reserve, sold/pass, sale price, and price with 15% commission; auction total $419,970.] Among the no-reserve lots, sold for $3,666 ($4,215.90 with commission), for $1,600 ($1,840.00), for $3,750 ($4,312.50), for $850 ($977.50), for $325 ($373.75), for $500 ($575.00), for $500 ($575.00), for $250 ($287.50), and a lot including for $550 ($632.50); and passed.


Related posts:

Categories: News and Updates

Buenos Aires Airport closure leaves many ICANN 48 attendees stranded

Domain Name News - Fri, 2013-11-15 15:27

As the 48th ICANN meeting is set to start in Buenos Aires, many of the attendees were stranded today in Montevideo, Uruguay and other South American airports due to an airport closure in Buenos Aires. An Austral Embraer ERJ-190, operating on behalf of Aerolineas Argentinas and arriving from Rio de Janeiro (Brazil), overran the runway and only came to a halt after the nose of the aircraft had hit the localizer antenna about 220 meters/730 feet past the runway end at 5:45 local time this morning (UTC-3). None of the 96 passengers was injured, and they were all taken to the terminal. According to reports from the airport, a cold front was passing through the area at the time. The airline reports that the incident occurred due to a sudden change in wind direction and speed.

Flights into the airport resumed after about three hours, but some attendees will now only arrive tomorrow. DNN was not able to confirm whether any ICANN 48 attendees were on the flight itself.


[via AVHerald and the ICANN Social Group on Facebook, picture posted on twitter by @JuanMCornejo]


Related posts:

Categories: News and Updates

Demand Media to Spin Off Domain Registration Business into RightSide [Press Release]

Domain Name News - Tue, 2013-11-05 15:25

As already predicted by Andrew over at DNW:

Demand Media Announces Key Executives and Name for Proposed Domain Services Company

Company Will Lead Expansion of Generic Top Level Domains under Rightside Brand; Taryn Naidu Selected as Incoming CEO

November 05, 2013 09:00 AM Eastern Standard Time

SANTA MONICA, Calif.–()–Demand Media, Inc. (NYSE: DMD), a leading media and domain services company, today announced that Taryn Naidu, who currently serves as Demand Media’s Executive Vice President of Domain Services, will become the CEO and a Director of the newly formed domain services company that is proposed to be spun off from Demand Media. Demand Media also announced that it has selected the name Rightside Group, Ltd. (“Rightside”) for the spun off domain services business.

“It’s an exciting time for us, as new gTLDs start going live this year and our path to becoming an independent public company as a leader in our industry progresses.”

Rightside will be a Kirkland, WA based technology and services company for the Internet domain industry. The company will advance the way consumers and businesses define and present themselves online through a comprehensive technology platform making it possible to discover, register, develop, and monetize domain names. Rightside will play a leading role in the historic launch of new generic Top Level Domains, and the name represents a new way to navigate the Internet, while establishing the new company as the one to guide users in the right direction. It’s everything to the right of the dot – and beyond.

Taryn Naidu, who has led Demand Media’s domain services business since 2011, will become Chief Executive Officer of Rightside upon completion of the separation. Additionally, Rightside executive management will include Wayne MacLaurin as Chief Technology Officer and Rick Danis as General Counsel. David Panos will be appointed as Chairman of the Board of Directors and Shawn Colo, Demand Media’s Interim President and Chief Executive Officer, will be appointed as a Director of Rightside in connection with the separation.

“Establishing the leadership team and brand identity of the proposed new company marks an important milestone in achieving our plan to separate our business into two distinct market leaders,” said Demand Media Interim President and Chief Executive Officer Shawn Colo. “I am pleased to announce a very strong executive team led by Taryn. This team has a wealth of industry experience, has played an integral role in building the largest wholesale domain registrar and is driving the transformation of this business into one of the largest end-to-end domain name service providers in the world.”

“Rightside’s mission will be to help millions of businesses and consumers define and present themselves online. We’re able to deliver on this through our distribution network of more than 20,000 active partners, one of the leading domain services technology platforms, a large number of applications for new generic Top Level Domains (gTLDs), and a deep bench of industry talent,” said Taryn Naidu, newly designated incoming Chief Executive Officer of Rightside. “It’s an exciting time for us, as new gTLDs start going live this year and our path to becoming an independent public company as a leader in our industry progresses.”

About Rightside

Rightside plans to inspire and deliver new possibilities for consumers and businesses to define and present themselves online. The company will be a leading provider of domain name services, offering one of the industry’s most comprehensive platforms for the discovery, registration, development, and monetization of domain names. This will include 15 million names under management, the most widely used domain name reseller platform, more than 20,000 distribution partners, an award-winning retail registrar, the leading domain name auction service and an interest in more than 100 new Top Level Domain applications. Rightside will be home to some of the most admired brands in the industry, including eNom,, United TLD and NameJet (in partnership with Headquartered in Kirkland, WA, Rightside will have offices in North America and Europe. For more information please visit

About Demand Media

Demand Media, Inc. (NYSE: DMD) is a leading digital media and domain services company that informs and entertains one of the internet’s largest audiences, helps advertisers find innovative ways to engage with their customers and enables publishers, individuals and businesses to expand their online presence. Headquartered in Santa Monica, CA, Demand Media has offices in North America, South America and Europe. For more information about Demand Media, please visit

Related posts:

Categories: News and Updates
