News and Updates

Reverse domain name hijacking in AquaFX.com case

Domain Name Wire - Wed, 2018-06-27 13:08

Panel says case was a Plan B attempt to get the domain name.

A three-person National Arbitration Forum panel has found Aqua Engineering & Equipment, Inc. to have engaged in reverse domain name hijacking over the domain name AquaFX.com.

The company failed to convince the panel that it established any of the three elements required under the UDRP: that the domain was confusingly similar to a mark in which it has rights, that the domain owner lacks a legitimate interest in the domain, and that the domain was registered in bad faith.

Aqua Engineering & Equipment relied on a stylized trademark for “Aqua Fx The Leaders in Reverse Osmosis”, a mark that was registered only after the respondent registered the domain name.

The panel determined that this was a “Plan B” reverse domain name hijacking attempt.

The domain owner was represented by ESQwire.com.

In a different National Arbitration Forum decision today, panelist Calvin A. Hamilton declined to find reverse domain name hijacking for the domain name shippingquestscams.com. His reason: he found that the domain was confusingly similar to the complainant’s trademark. That’s unfortunate; panels should discourage complainants from filing complaints against non-commercial gripe sites.

Related posts:
  1. Non-Profit Urban Logic Guilty of Reverse Domain Name Hijacking
  2. Dubai Law Firm Nailed for Reverse Domain Name Hijacking
  3. Telepathy scores $40,000 from reverse domain name hijacking case
Categories: News and Updates

WP Engine acquires StudioPress, maker of the Genesis framework for WordPress

Domain Name Wire - Wed, 2018-06-27 12:48

WordPress hosting company acquires maker of popular framework for building WordPress sites.

WordPress hosting company WP Engine has acquired StudioPress.

StudioPress is the creator of the Genesis framework for WordPress site design/development. Domain Name Wire is built on the Genesis framework.

This marks the first acquisition WP Engine has publicly announced since Silver Lake Partners bankrolled the firm with $250 million earlier this year.

Silver Lake was one of the three private equity firms to invest in GoDaddy in 2011. WP Engine is a direct competitor to GoDaddy.

Related posts:
  1. VaultPress drops its price for WordPress security and backup
  2. How to reduce comment spam on your WordPress website
  3. WP Engine banks $250 million from PE that bought GoDaddy
Categories: News and Updates

Will Cisco Make a Comeback in Cuba?

Domain industry news - Wed, 2018-06-27 03:20

[Photo: Laura Quintana, Cisco Vice President of Corporate Affairs, launching Cisco networking training at the Universidad de Ciencias Informáticas]

Is the recently announced Cisco Networking Academy at the Universidad de Ciencias Informáticas a belated drop in the bucket or the first step in a significant opening?

Cisco dominated the infrastructure equipment market in Cuba and elsewhere during the early days of the Internet, but Huawei has since replaced it in Cuba.

What does this mean?

It might be a belated drop in the bucket. UCI has only 19 trained CNA instructors while the CNA curriculum is being taught by over 20,000 instructors at over 10,000 institutions.

On the other hand, this might be the first step in a significant opening. The Castros are no longer in power — might the US allow Cisco to sell equipment to Cuba and might the Cubans consider Cisco as a competitor to Huawei, SES, and other connectivity providers?

The Trump administration has cracked down on individual travel but has not curtailed the sale of communication equipment to Cuba. Trump would doubtless like to claim credit for any Cuban sales by Cisco and for Raúl Castro stepping down, and he is indifferent to human rights violations, so my guess is that the US would allow Cisco to sell to Cuba.

Similarly, competition from Cisco would enhance Cuba's bargaining position with Huawei and, while much of the CNA material is generic, some is Cisco-specific, giving Cisco an advantage. I don't know whether Cisco is charging for its training or equipment, but it may be donating them as a marketing and international public-relations expense. (In the mainframe days, IBM gave significant discounts and subsidies to universities so students would be trained on its equipment. It even built the building to house the computers at the UCLA Western Data Processing Center, where I was a student.)

It's too soon to know if this is an important first step and it will be interesting to see how events unfold. A good start would be for the US to allow Cubans access to Cisco's online CNA courses and for UCI to expand their initial internal offering and to train CNA instructors at schools and organizations like the Unión de Informáticos, ETECSA, networks like Infomed, and the Joven Clubs.

President Obama announced the Cisco-UCI CNA plan over two years ago. Two years from now, we will know whether it is significant for either Cisco or Cuba.

Written by Larry Press, Professor of Information Systems at California State University

Follow CircleID on Twitter

More under: Broadband, Networks, Policy & Regulation

Categories: News and Updates

ICANN selects Cancun, Kuala Lumpur and Hamburg for 2020 meetings

Domain Name Wire - Tue, 2018-06-26 14:30

Spring break in Mexico, anyone?

The ICANN board has selected its meeting locations for the group’s 2020 meeting slate.

The first stop will be the Cancun International Convention Center in Mexico for meeting #67 from March 7-12, 2020. You might notice that those dates coincide with some schools’ spring breaks. I predict a few snarky headlines about ICANN heading to Cancun for a Spring Break boondoggle.

The 2020 Community Forum, ICANN 68, will be held at the Kuala Lumpur Convention Center in Malaysia June 22-25, 2020. It is being hosted by Dr. Suhaidi Hassan of the Internet Society Malaysia Chapter.

ICANN 69 will be held at the Congress Center Hamburg in Hamburg, Germany, October 17-22, 2020, hosted by the eco association, DENIC and the City of Hamburg.

Related posts:
  1. Colombia Faces Uphill Battle to Host ICANN Meeting in December
  2. ICANN Travel Survey – Poll FAIL
  3. Which ICANN sessions domainers should attend
Categories: News and Updates

Supplement makers sue LegitScript

Domain Name Wire - Tue, 2018-06-26 14:17

LegitScript, which has gone after domain name registrars in the past, is sued by supplement makers.

Three supplement makers have sued LegitScript, a company that helps web service providers and banks screen for illegitimate pharmacies and questionable supplements.

SanMedica International LLC, Novex Biotech, LLC, and Carter-Reed Company, LLC allege that LegitScript is wrongfully flagging their products, which is preventing the companies from promoting those products on websites.

Domain name companies are familiar with LegitScript because the group, funded in part by pharmaceutical companies, has gone after registrars that it says were allowing illegitimate online pharmacies to register domain names with them.

In the lawsuit, the plaintiffs say that LegitScript has turned its attention to supplements after mastering the process of taking down bad online pharmacies. They believe that LegitScript uses the wrong criteria to determine whether a supplement should be added to its blacklist, which is used by companies like Facebook, Google and Visa.

The full lawsuit is here (pdf).

No related posts.

Categories: News and Updates

Internet Evolution: Another 10 Years Later

Domain industry news - Mon, 2018-06-25 23:15

Ten years ago, I wrote an article that looked back on the developments within the Internet over the period from 1998 to 2008. Well, another ten years have gone by, and it's a good opportunity to take a little time once more to muse over what's new, what's old and what's been forgotten in another decade of the Internet's evolution.

The evolutionary path of any technology can often take strange and unanticipated turns and twists. At some points, simplicity and minimalism can be replaced by complexity and ornamentation, while at other times a dramatic cut-through exposes the core concepts of the technology and removes layers of superfluous additions. The evolution of the Internet appears to be no exception and contains these same forms of unanticipated turns and twists. In thinking about the technology of the Internet over the last ten years, it appears that it's been a very mixed story about what's changed and what's stayed the same.

A lot of the Internet today looks much the same as the Internet of a decade ago. Much of the Internet's infrastructure has stubbornly resisted various efforts to engender change. We are still in the middle of the process to transition the Internet to IPv6, which was the case a decade ago. We are still trying to improve the resilience of the Internet to various attack vectors, which was the case a decade ago. We are still grappling with various efforts to provide defined quality of service in the network, which was the case a decade ago. It seems that the rapid pace of technical change in the 1990's and early 2000's has simply run out of momentum, and that the dominant activity on the Internet over the past decade was consolidation rather than continued technical evolution. Perhaps this increased resistance to change is because as the size of the network increases, its inertial mass also increases. We used to quote Metcalfe's Law to each other, reciting the mantra that the value of a network increases in proportion to the square of the number of users. A related observation appears to be that a network's inherent resistance to change, or inertial mass, is also directly related to the square of the number of users. Perhaps as a general observation, all large loosely coupled distributed systems are strongly resistant to efforts to orchestrate a coordinated change. At best, these systems respond to various forms of market pressures, but as the Internet's overall system is so large and so diverse, these market pressures manifest themselves in different ways in different parts of the network. Individual actors operate under no centrally orchestrated set of instructions or constraints. Where change occurs, it is because some sufficiently large body of individual actors sees opportunity in undertaking the change or perceives unacceptable risk in not changing. The result for the Internet appears to be that some changes are very challenging, while others look like natural and inevitable progressive steps.

But the other side of the story is about as diametrically opposed as it's possible to paint. Over the last decade, we've seen another profound revolution in the Internet as it embraced a combination of wireless-based infrastructure and a rich set of services at an unprecedented speed. We've seen a revolution in content and content provision that has not only changed the Internet but, as collateral damage, appears to be decimating the traditional newspaper and broadcast television sectors. Social media has all but replaced the social role of the telephone and the practice of letter writing. We've seen the resurgence, in a novel twist, of the old central mainframe service in the guise of the 'cloud', and the repurposing of Internet devices to support views of common cloud-hosted content in ways that mimic the function of the display terminals of a bygone past. All of these are fundamental changes to the Internet, and all of them have occurred in the last decade!

That's a significant breadth of material to cover, so I'll keep the story to the larger themes. To structure it, rather than offer a set of unordered observations about the various changes and developments of the past decade, I'll use a standard model of a protocol stack as the guiding template. I'll start with the underlying transmission media, then look at IP, the transport layer, and applications and services, and close with a look at the business of the Internet to highlight the last decade's developments.

Below the IP Layer

What's changed in network media?

Optical systems have undergone a sustained change in the past decade. A little over a decade ago, production optical systems used simple on-off keying to encode the signal into the optical channel. The speed increases in that generation of optical systems relied on improvements in the silicon control systems and the laser driver chips. The introduction of wavelength division multiplexing in the late 1990's allowed the carriers to greatly increase the carrying capacity of their optical cable infrastructure. The last decade has seen optical systems evolve into areas of polarisation and phase modulation to effectively lift the number of bits of signal per baud. These days 100Gbps optical channels are commonly supportable, and we are looking at further refinements in signal detection to lift that beyond 200Gbps. We anticipate 400Gbps systems in the near future, using various combinations of a faster basic baud rate and higher levels of phase amplitude modulation, and dare to think that 1Tbps is now a distinct near-term possibility.
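As a back-of-the-envelope illustration of how these modulation refinements lift capacity (the figures below are illustrative assumptions, not vendor specifications), a channel's bit rate is the product of its symbol rate, the bits encoded per symbol, and the number of polarisations:

    C = R_s \times \log_2 M \times N_{\mathrm{pol}}, \qquad
    32\,\mathrm{GBd} \times 2\,\mathrm{bits/symbol\ (QPSK)} \times 2\,\mathrm{polarisations} = 128\,\mathrm{Gbps}

of which roughly 100Gbps of payload remains once forward error correction overhead is deducted.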

Radio systems have seen a similar evolution in overall capacity. Basic improvements in signal processing, analogous to the changes in optical systems, have allowed the use of phase modulation to lift the data rate of the radio bearer. The use of MIMO technology, coupled with the use of higher carrier frequencies, has allowed the mobile data service to support carriage services of up to 100Mbps in today's 4G networks. The push to even higher frequencies promises speeds of up to 1Gbps for mobile systems in the near future with the deployment of 5G technology.

While optical speeds are increasing, Ethernet packet framing still persists in transmission systems long after the original rationale for the packet format died along with that bright yellow coaxial cable! Oddly enough, the Ethernet-defined minimum and maximum packet sizes of 64 and 1500 octets still persist. The inevitable result of faster transmission speeds with constant packet sizes is an upper bound on the number of packets per second that has increased more than 100-fold over the past decade, in line with the increase of deployed transmission speeds from 2.5Gbps to 400Gbps. As a consequence, higher packet processing rates are being demanded from silicon-based switches. But one really important scaling factor has not changed for the past decade: the clock speed of processors and the cycle time of memory have not moved at all. The response so far has been an increasing reliance on parallelism in high-speed digital switching applications, and these days multi-core processors and highly parallel memory systems are used to achieve performance that would be impossible in a single-threaded processing model.
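A minimal sketch of the arithmetic behind that claim, assuming worst-case minimum-size Ethernet frames (a 64-byte frame occupies 84 bytes on the wire once the 8-byte preamble and 12-byte inter-frame gap are counted):

    # Worst-case packet rates for minimum-size Ethernet frames.
    WIRE_BYTES = 64 + 8 + 12   # frame + preamble + inter-frame gap

    def peak_pps(line_rate_bps: float) -> float:
        """Upper bound on packets per second at a given line rate."""
        return line_rate_bps / (WIRE_BYTES * 8)

    for gbps in (2.5, 10, 100, 400):
        print(f"{gbps:>6} Gbps -> {peak_pps(gbps * 1e9) / 1e6:,.1f} Mpps")

The jump from 2.5Gbps to 400Gbps raises the bound 160-fold, consistent with the more-than-100-fold increase noted above.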

In 2018 it appears that we are close to achieving 1Tbps optical systems and up to 20Gbps in radio systems. Just how far and how quickly these transmission models can be pushed into supporting ever higher channel speeds is an open question.

The IP Layer

The most notable aspect of the network that appears to stubbornly resist all forms of pressure over the last decade, including some harsh realities of acute scarcity, is the observation that we are still running what is essentially an IPv4 Internet.

Over this past decade we have exhausted our pools of remaining IPv4 addresses, and in most parts of the world, the IPv4 Internet is running on some form of empty. We had never suspected that the Internet would confront the exhaustion of one of its most fundamental pillars, the basic function of uniquely addressing connected devices, and apparently shrug it off and continue on blithely. But, unexpectedly, that's exactly what's happened.

Today we estimate that some 3.4 billion people are regular users of the Internet, and there are some 20 billion devices connected to it. We have achieved this using some 3 billion unique IPv4 addresses. Nobody thought that we could achieve this astonishing feat, yet it has happened with almost no fanfare.

Back in the 1990's we had thought that the prospect of address exhaustion would propel the Internet to use IPv6, the successor IP protocol that comes with a four-fold increase in the bit width of IP addresses. By increasing the IP address pool to some esoterically large number of unique addresses (340 undecillion addresses, or 3.4 x 10^38) we would never have to confront network address exhaustion again. But this was not going to be an easy transition. There is no backward compatibility in this protocol transition, so everything has to change. Every device, every router and even every application needs to change to support IPv6. Rather than perform comprehensive protocol surgery on the Internet and change every part of the infrastructure to support IPv6, we changed the basic architecture of the Internet instead. Oddly enough, it looks like this was the cheaper option!

Through the almost ubiquitous deployment of Network Address Translators (NATs) at the edges of the network, we've transformed the network from a peer-to-peer network into a client/server network. In today's client/server Internet, clients can talk to servers, and servers can talk back to these connected clients, but that's it. Clients cannot talk directly to other clients, and servers need to wait for the client to initiate a conversation in order to talk to a client. Clients 'borrow' an endpoint address when they are talking to a server and release this address for use by other clients when they are idle. After all, endpoint addresses are only useful to clients in order to talk to servers. The result is that we've managed to cram some 20 billion devices into an Internet that has deployed just 3 billion public address slots. We've achieved this by embracing what could be described as time-sharing of IP addresses.
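A minimal sketch of that 'borrowing' mechanism, port-based address sharing (NAPT), with illustrative addresses and table sizes:

    # Many private clients share one public address; each outbound flow
    # borrows a (public address, port) slot only while it is active.
    import itertools

    PUBLIC_ADDR = "203.0.113.1"            # the one public IPv4 address
    _ports = itertools.cycle(range(1024, 65536))
    bindings = {}                          # (private_ip, port) -> public port

    def outbound(private_ip: str, private_port: int) -> tuple[str, int]:
        """Map a client's outbound flow onto the shared public endpoint."""
        key = (private_ip, private_port)
        if key not in bindings:
            bindings[key] = next(_ports)   # borrow a public port
        return PUBLIC_ADDR, bindings[key]

    def release(private_ip: str, private_port: int) -> None:
        """Return the borrowed port once the flow goes idle."""
        bindings.pop((private_ip, private_port), None)

    print(outbound("192.168.0.10", 40001))   # ('203.0.113.1', 1024)
    print(outbound("192.168.0.11", 40001))   # ('203.0.113.1', 1025)

In effect each public address multiplexes tens of thousands of concurrent flows, which is how 20 billion devices fit behind 3 billion addresses.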

All well and good, but what about IPv6? Do we still need it? If so, are we going to complete this protracted transition? Ten years later the answer to these questions remains unclear. On the positive side, there is a lot more IPv6 around now than there was ten years ago. Service providers are deploying much more IPv6 today than was the case in 2008. When IPv6 is deployed within a service provider's network, we see an immediate uptake from IPv6-equipped devices. In 2018 it appears that one-fifth of the Internet's users (a population now estimated at around one half of the planet's human population) are capable of using the Internet over IPv6, and most of this has happened in the past 10 years. On the negative side, the question must be asked: what's happening with IPv6 for the other four-fifths of the Internet? Some ISPs have been heard to make the case that they would prefer to spend their finite operating budgets on other areas that improve their customers' experience, such as increasing network capacity, removing data caps, or acquiring more on-net content. Such ISPs continue to see the deployment of IPv6 as a deferrable measure.

It seems that today we are still seeing a mixed picture for IPv6. Some service providers simply see no way around their particular predicament of IPv4 address scarcity and these providers see IPv6 as a necessary decision to further expand their network. Other providers are willing to defer the question to some undefined point in the future.

Routing

While we are looking at what's largely unchanged over the past decade we need to mention the routing system. Despite dire predictions of the imminent scaling death of the Border Gateway Protocol (BGP) ten years ago, BGP has steadfastly continued to route the entire Internet. Yes, BGP is as insecure as ever, and yes, a continual stream of fat finger foul-ups and less common but more concerning malicious route hijacks continue to plague our routing system, but the routing technologies in use in 2008 are the same as we use in today's Internet.

The size of the IPv4 routing table has tripled in the past ten years, growing from 250,000 entries in 2008 to slightly more than 750,000 entries today. The IPv6 routing story is more dramatic, growing from 1,100 entries to 52,000 entries. Yet BGP just quietly continues to work efficiently and effectively. Who would've thought that a protocol that was originally designed to cope with a few thousand routes announced by a few hundred networks could still function effectively across a routing space approaching a million routing entries and a hundred thousand networks!
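The growth figures above translate into surprisingly modest compound annual rates, which helps explain why BGP copes; a quick check of the arithmetic:

    # Implied compound annual growth of the BGP routing tables cited above.
    def cagr(start: float, end: float, years: int) -> float:
        return (end / start) ** (1 / years) - 1

    print(f"IPv4: {cagr(250_000, 750_000, 10):.1%} per year")   # ~11.6%
    print(f"IPv6: {cagr(1_100, 52_000, 10):.1%} per year")      # ~47.0%

Even the faster-growing IPv6 table, compounding at close to 50% per year, is still only a small fraction of the size of the IPv4 table.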

In the same vein, we have not made any major change to the operation of our interior routing protocols. Larger networks still use either OSPF or IS-IS depending on their circumstances, while smaller networks may opt for some distance vector protocol like RIPv2 or even EIGRP. The IETF's work on more recent routing protocols, LISP and BABEL, seems to lack any real traction with the Internet at large, and while both have interesting properties in routing management, neither has a sufficient level of perceived benefit to overcome the considerable inertia of conventional network design and operation. Again, this looks like another instance where inertial mass is exerting its influence to resist change in the network.

Network Operations

Speaking of network operation, we are seeing some stirrings of change, but it appears to be a rather conservative area, and adoption of new network management tools and practices takes time.

The Internet converged on using the Simple Network Management Protocol (SNMP) a quarter of a century ago, and despite its security weaknesses, its inefficiency, its incredibly irritating use of ASN.1, and its use in sustaining some forms of DDoS attacks, it still enjoys widespread use. But SNMP is only a network monitoring protocol, not a network configuration protocol, as anyone who has attempted to use SNMP write operations can attest.

The more recent Netconf and YANG efforts are attempting to pull this area of configuration management into something a little more usable than Expect scripts driving CLI interfaces on switches. At the same time, we are seeing orchestration tools such as Ansible, Chef, NAPALM and Salt enter the network operations space, permitting the orchestration of management tasks over thousands of individual components. These network operations management tools are welcome steps forward in the state of automated network management, but they still fall far short of a desirable endpoint.

In the same time period as we appear to have advanced the state of automated control systems to achieve the driverless autonomous car, the task of fully automated network management appears to have fallen way short of the desired endpoint. Surely it must be feasible to feed an adaptive autonomous control system with the network's infrastructure and available resources, and allow the control system to monitor the network and modify the operating parameters of network components to continuously meet the network's service level objectives? Where's the driverless car for driving networks? Maybe the next ten years might get us there.

The Mobile Internet

Before we move up a layer in the Internet protocol model and look at the evolution of the end-to-end transport layer, we probably need to talk about the evolution of the devices that connect to the Internet.

For many years the Internet was the domain of the desktop personal computer, with laptops serving the needs of those who wanted a more portable device. At the time the phone was still just a phone, and early forays by phone makers into the data world were unimpressive.

Apple's iPhone, released in 2007, was a revolutionary device. Boasting a vibrant color touch-sensitive screen, just four keys, a fully functional operating system, WiFi and cellular radio interfaces, and a capable processor and memory, its entry into the consumer market space was perhaps the major event of the decade. Apple's early lead was rapidly emulated by Microsoft and Nokia with their own offerings. Google's position was more that of an active disruptor, using an open licensing framework for the Android platform and its associated application ecosystem to empower a collection of handset assemblers. Android is used by Samsung, LG, HTC, Huawei, Sony, and Google, to name a few. These days almost 80% of mobile platforms use Android, and some 17% use Apple's iOS.

For the human Internet, the mobile market is now the Internet-defining market in terms of revenue. There is little in terms of margin or opportunity in the wired network these days, and even the declining margins of the mobile data environment represent a vague glimmer of hope for the once dominant access provider industry.

Essentially, the public Internet is now a platform of apps on mobile devices.

End to End Transport Layer

It's time to move up a level in the protocol stack and look at end-to-end transport protocols and changes that have occurred in the past decade.

End-to-end transport was the revolutionary aspect of the Internet, and the TCP protocol was at the heart of this change. Many other transport protocols require the lower levels of the network protocol stack to present a reliable stream interface to the transport protocol. It was up to the network to create this reliability, performing data integrity checks and data flow control, and repairing data loss within the network as it occurred. TCP dispensed with all of that, and simply assumed an unreliable datagram transport service from the network and pushed to the transport protocol the responsibility for data integrity and flow control.

In the world of TCP not much appears to have changed in the past decade. We've seen some further small refinements in the details of TCP's controlled rate increase and rapid rate decrease, but nothing that shifts the basic behaviors of this protocol. TCP tends to use packet loss as the signal of congestion and oscillates its flow rate between some lower rate and this loss-triggering rate.
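A toy model of that loss-triggered oscillation, the classic additive-increase, multiplicative-decrease (AIMD) sawtooth of Reno-style TCP (the path capacity figure is an arbitrary assumption for illustration):

    # Loss-based congestion control: grow the window by one packet per RTT,
    # halve it when the path's queue overflows and a loss is detected.
    PATH_CAPACITY = 100.0   # packets in flight the path can hold

    def aimd(rounds: int, cwnd: float = 10.0) -> list[float]:
        trace = []
        for _ in range(rounds):
            trace.append(cwnd)
            if cwnd > PATH_CAPACITY:   # queue overflow: packet loss
                cwnd /= 2              # multiplicative decrease
            else:
                cwnd += 1              # additive increase
        return trace

    print([round(w) for w in aimd(200)][-12:])  # sawtooth between ~50 and ~100

The flow rate never settles: it climbs until it provokes a loss, backs off, and climbs again.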

Or at least that was the case until quite recently. The situation is poised to change and change in a very fundamental way, with the debut of Google's offerings of BBR and QUIC.

The Bottleneck Bandwidth and Round-trip time control algorithm, or BBR, is a variant of the TCP flow control protocol that operates in a very different mode from other TCP protocols. BBR attempts to maintain a flow rate that sits exactly at the delay-bandwidth product of the end-to-end path between sender and receiver. In so doing, it tries to avoid the accumulation of data buffering in the network (when the sending rate exceeds the path capacity), and also tries to avoid leaving idle time in the network (where the sending rate is less than the path capacity). The side effect is that BBR tries to avoid the collapse of network buffering when congestion-based loss occurs. BBR achieves significant efficiencies on both wired and wireless network transmission systems.
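To make that operating point concrete (with illustrative numbers): BBR estimates the bottleneck bandwidth and the minimum round-trip time, and aims to keep exactly one bandwidth-delay product of data in flight:

    \mathrm{BDP} = B_{\mathrm{bottleneck}} \times \mathrm{RTT}_{\min}, \qquad
    100\,\mathrm{Mbps} \times 40\,\mathrm{ms} = 4 \times 10^{6}\,\mathrm{bits} \approx 500\,\mathrm{kB}

A sender holding about 500kB in flight on such a path keeps the pipe full without building a standing queue.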

The second recent offering from Google also represents a significant shift in the way we use transport protocols. The QUIC protocol looks like a UDP protocol, and from the network's perspective, it is simply a UDP packet stream. But in this case, looks are deceiving. The inner payload of these UDP packets contains a more conventional TCP flow control structure and a TCP stream payload. However, QUIC encrypts its UDP payload, so the entire inner TCP control is completely hidden from the network. The ossification of the Internet's transport is due in no small part to the intrusive role of network middleware that is used to discarding packets that it does not recognize. Approaches such as QUIC allow applications to break out of this regime and restore end-to-end flow management as an end-to-end function, without any form of network middleware inspection or manipulation. I'd call this development perhaps the most significant evolutionary step in transport protocols over the entire decade.

The Application Layer

Let's keep on moving up the protocol stack and look at the Internet from the perspective of the applications and services that operate across the network.

Privacy and Encryption

As we noted in looking at developments in end-to-end transport protocols, encryption of the QUIC payload is not just to keep network middleware from meddling with the TCP control state, although it does achieve that very successfully. The encryption applies to the entire payload, and it points to another major development in the past decade. We are now wary of the extent to which various forms of network-based mechanisms are used to eavesdrop on users and services. The documents released by Edward Snowden in 2013 portrayed a very active US Government surveillance program that used widespread traffic interception sources to construct profiles of user behavior and by inference profiles of individual users. In many ways, this effort to assemble such profiles is not much different to what advertising-funded services such as Google and Facebook have been (more or less) openly doing for years, but perhaps the essential difference is that of knowledge and implied consent. In the advertisers' case, this information is intended to increase the profile accuracy and hence increase the value of the user to the potential advertiser. The motivations of government agencies are more open to various forms of interpretation, and not all such interpretations are benign.

One technical response to the implications of this leaked material has been an overt push to embrace end-to-end encryption in all parts of the network. The corollary has been an effort to make robust encryption generally accessible to all, and not just a luxury feature available only to those who can afford to pay a premium. The Let's Encrypt initiative has been incredibly successful in publishing X.509 domain name certificates free of cost, and the result is that all network service operators, irrespective of their size or relative wealth, can afford to use encrypted sessions, in the form of TLS, for their web servers.
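TLS is now cheap enough to be table stakes for even the smallest service. As a minimal sketch using only Python's standard library (example.com is a placeholder host), a client can establish a certificate-validated encrypted session in a few lines:

    # Minimal TLS client: the default context validates the server's X.509
    # certificate chain (e.g. one issued by Let's Encrypt) against the
    # system trust store before any application data is exchanged.
    import socket, ssl

    HOST = "example.com"   # placeholder host for illustration

    ctx = ssl.create_default_context()   # verifies certificates and hostnames
    with socket.create_connection((HOST, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            print("negotiated:", tls.version(), tls.cipher()[0])
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls.recv(256).decode(errors="replace"))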

The push to hide user traffic from the network and network-based eavesdroppers extends far beyond the QUIC and TLS session protocols. The Domain Name System is also a rich source of information about what users are doing, as well as being used in many places to enforce content restrictions. There have been recent moves to clean up the overly chatty nature of the DNS, using query name minimization to prevent unnecessary data leaks, and the development of both DNS over TLS and DNS over HTTPS to secure the network path between a stub resolver and its recursive server. This is very much a work in progress at present, and it will take some time to see whether the results of this work will be widely adopted in the DNS environment.
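To make the DNS over TLS idea concrete: a DNS query is just a small binary message, and DoT simply carries it, length-prefixed, inside a TLS session on port 853 so that on-path observers see only ciphertext. A bare-bones sketch, assuming a public resolver that offers DoT (1.1.1.1 is one such resolver):

    # Bare-bones DNS-over-TLS query (RFC 7858), stdlib only.
    import socket, ssl, struct

    def dns_query(name: str, qtype: int = 1) -> bytes:
        """Build a minimal DNS query for an A record (qtype 1)."""
        header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # RD set
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
        return header + qname + struct.pack("!HH", qtype, 1)         # class IN

    RESOLVER = "1.1.1.1"   # assumption: a resolver known to offer DoT
    msg = dns_query("example.com")

    ctx = ssl.create_default_context()
    with socket.create_connection((RESOLVER, 853)) as raw:
        with ctx.wrap_socket(raw, server_hostname=RESOLVER) as tls:
            tls.sendall(struct.pack("!H", len(msg)) + msg)   # 2-byte length prefix
            (rlen,) = struct.unpack("!H", tls.recv(2))
            print(f"got a {rlen}-byte response, opaque to on-path observers")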

We are now operating our applications in an environment of heightened paranoia. Applications do not necessarily trust the platform on which they are running, and we are seeing efforts from the applications to hide their activity from the underlying platform. Applications do not trust the network, and we are seeing increased use of end-to-end encryption to hide their activity from network eavesdroppers. The use of identity credentials within the encrypted session establishment also acts to limit the vulnerability of application clients to be misdirected to masquerading servers.

The Rise and Rise of Content

Moving further up the protocol stack to the environment of content and applications we have also seen some revolutionary changes over the past decade.

For a small period of time, the Internet's content and carriage activities existed in largely separate business domains, tied by mutual interdependence. The task of carriage was to carry users to content, which implied that carriage was essential to content. But at the same time, a client/server Internet bereft of servers is useless, so content is essential to carriage. In a world of re-emerging corporate behemoths, such mutual interdependence is unsettling, both to the actors directly involved and to the larger public interest.

The content industry is largely the more lucrative of these two and enjoys far less in the way of regulatory constraint. There is no concept of any universal service obligation or even any effective form of price control in the services they offer. Many content service providers use internal cross funding that allows them to offer free services to the public, as in free email, free content hosting, free storage, and similar, and fund these services through a second, more occluded, transaction that essentially sells the user's consumer profile to the highest bidding advertiser. All this happens outside of any significant regulatory constraint which has given the content services industry both considerable wealth and considerable commercial latitude.

It should be no surprise that this industry is now using its capability and capital to eliminate its former dependence on the carriage sector. We are now seeing the rapid rise of the content distribution network (CDN) model, where instead of an Internet carrying the user to a diverse set of content stores, the content stores are opening local content outlets right next to the user. As all forms of digital services move into CDN hostels, and as the CDNs open outlets positioned immediately adjacent to pools of economically valuable consumers, where does that leave the traditional carriage role in the Internet? The outlook for the public carriage providers is not looking all that rosy given this increasing marginalization of carriage in the larger content economy.

Within these CDNs, we've also seen a new service model enter the Internet in the form of cloud services. Our computers are no longer self-contained systems with their own processing and storage resources, but look more and more like windows onto data stored on a common server. Cloud services follow the same pattern, where the local device is effectively a local cache of a larger backing store. In a world where users may have multiple devices, this model makes persuasive sense, as the view of the common backing store is constant irrespective of which device is being used to access the data. These cloud services also make data sharing and collaborative work far easier to support. Rather than creating a set of copies of the original document and then attempting to stitch all the individual edits back into a single common whole, the cloud model shares a document by simply altering the document's access permissions. There is only ever one copy of the document, and all edits and comments on the document are available to all.

The Evolution of Cyber Attacks

At the same time as we have seen announcements of ever-increasing network capacity within the Internet, we've seen a parallel set of announcements that note new records in the aggregate capacity of Denial of Service attacks. The current peak volume is an attack of some 1.7Tbps of malicious traffic.

Attacks are now commonplace. Many of them are brutally simple, relying on a tragically large pool of potential zombie devices that are readily subverted and co-opted to assist in attacks. Often these are simple forms of attack, such as UDP reflection attacks, where a small UDP query generates a large response. The source address of the query is forged to be the address of the intended victim, and not much more need be done: a small query stream can result in a massive attack. UDP protocols such as SNMP, NTP, the DNS and memcached have been used in the past and doubtless will be used again.
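The arithmetic that makes reflection attractive is simple; with hypothetical sizes for a query and its response:

    A = \frac{\text{bytes of reflected response}}{\text{bytes of spoofed query}}, \qquad
    \frac{3000\,\text{bytes}}{60\,\text{bytes}} = 50

At an amplification factor of 50, every megabit per second the attacker can send arrives at the victim as fifty, and the traffic appears to come from legitimate servers rather than from the attacker.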

Why can't we fix this? We've been trying for decades, and we just can't seem to get ahead of the attacks. Advice to network operators on preventing the leakage of packets with forged source addresses, RFC 2827, was published two decades ago, in 1998. Yet massive UDP-based attacks with forged source addresses persist to this day. Aged computer systems with known vulnerabilities continue to be connected to the Internet and are readily transformed into attack bots.

The picture of attacks is also becoming more ominous. Attacks were previously attributed to 'hackers', but it was quickly realized that a significant component of these hostile attacks had criminal motivations. The progression from criminal actors to state-based actors is also entirely predictable, and we are seeing an escalation of this cyber warfare arena, with investment in various forms of exploitation of vulnerabilities being seen as part of a set of desirable national capabilities.

It appears that a major problem here is that collectively we are unwilling to make any substantial investment in effective defense or deterrence. The systems that we use on the Internet are overly trusting to the point of irrational credulity. For example, the public key certification system used to secure web-based transactions is repeatedly demonstrated to be untrustworthy, yet that's all we trust. Personal data is continually breached and leaked, yet all we seem to want to do is increase the number and complexity of regulations rather than actually use better tools that would effectively protect users.

The larger picture of hostile attack is not getting any better. Indeed, it's getting very much worse. If any enterprise has a business need to maintain a service that is always available for use, then any form of in-house provisioning is just not enough to be able to withstand attack. These days only a handful of platforms are able to offer resilient services, and even then it's unclear whether they could withstand the most extreme of attacks. There is a constant background level of scanning and probing going on in the network, and any form of visible vulnerability is ruthlessly exploited. One could describe today's Internet as a toxic wasteland, punctuated with the occasional heavily defended citadel. Those who can afford to locate their services within these citadels enjoy some level of respite from this constant profile of hostile attack, while all others are forced to try and conceal themselves from the worst of this toxic environment, while at the same time aware that they will be completely overwhelmed by any large-scale attack.

It's a sobering thought that about one half of the world's population are now part of this digital environment. A more sobering thought is that many of today's control systems, such as power generation and distribution, water distribution, and road traffic control systems are exposed to the Internet. Perhaps even more of a worry is the increasing use of the Internet in automated systems that include various life support functions. The consequences of massive failure of these systems in the face of a sustained and damaging attack cannot be easily imagined.

The Internet of Billions of Tragically Stupid Things

What makes this scenario even more depressing is the portent of the so-called Internet of Things.

In those circles where Internet prognostications abound and policymakers flock to hear grand visions of the future, we often hear about the boundless future represented by this "Internet of Things". The phrase encompasses some decades of the computing industry's transition from computers as esoteric pieces of engineering affordable only by nations, to mainframes, desktops, laptops, handhelds, and now wrist computers. Where next? In the vision of the Internet of Things, we are going to expand the Internet beyond people and press on to use billions of these chattering devices in every aspect of our world.

What do we know about the "things" that are already connected to the Internet?

Some of them are not very good. In fact, some of them are just plain stupid. And this stupidity is toxic, in that their sometimes inadequate models of operation and security affect others in potentially malicious ways. Doubtless, if such devices were constantly inspected and managed we might see evidence of aberrant behavior and correct it. But these are unmanaged devices that are all but invisible: the controller for a web camera, the so-called "smart" thing in a smart television, or whatever controls anything from a washing machine to a goods locomotive. Nobody is looking after these devices.

When we think of an Internet of Things we think of a world of weather stations, webcams, "smart" cars, personal fitness monitors and the like. But what we tend to forget is that all of these devices are built upon layers of other people's software that is assembled into a product at the cheapest possible price point. It may be disconcerting to realize that the web camera you just installed has a security model that can be summarised with the phrase "no security at all", and is actually offering a view of your house to the entire Internet. It may be slightly more disconcerting to realise that your electronic wallet is on a device built from a massive compilation of open source software of largely unknown origin, with a security model that is not completely understood, but appears to be susceptible to being coerced into a "yes, take all you want".

It would be nice to think that we've stopped making mistakes in code, and from now on our software in our things will be perfect. But that's hopelessly idealistic. It's just not going to happen. Software will not be perfect. It will continue to have vulnerabilities. It would be nice to think that this Internet of Things is shaping up as a market where quality matters and consumers will select a more expensive product even though its functional behavior is identical to a cheaper product that has not been robustly tested for basic security flaws. But that too is hopelessly naive.

The Internet of Things will continue to be a marketplace where the compromises between price and quality will continue to push us on to the side of cheap rather than secure. What's going to stop us from further polluting our environment with a huge and diverse collection of programmed unmanaged devices with inbuilt vulnerabilities that will be all too readily exploited? What can we do to make this world of these stupid cheap toxic things less stupid and less toxic? Workable answers to this question have not been found so far.

The Next Ten Years

The silicon industry is not going to shut down anytime soon. It will continue to produce chips with more gates, finer tracks and more stacked layers for some years to come. Our computers will become more capable in terms of the range and complexity of the tasks they will be able to undertake.

At the same time, we can expect more from our network. Higher capacity certainly, but also greater levels of customization of the network to our individual needs.

However, I find it extremely challenging to be optimistic about security and trust in the Internet. We have made little progress in this area over the last ten years, and there is little reason to think the picture will change in the next ten. If we can't fix it, then, sad as it sounds, perhaps we simply need to come to terms with an Internet jammed full of tragically stupid things.

However, beyond these broad-brush scenarios, it's hard to predict where the Internet will head. Technology does not follow a pre-determined path. It's driven by the vagaries of an enthusiastic consumer marketplace that is readily distracted by colorful bright shiny new objects and easily bored by what we quickly regard as commonplace.

What can we expect from the Internet in the next ten years that can outdo a pocket-sized computer that can converse with me in a natural language? That can offer more than immersive 3D video in outstanding quality? That can bring the entire corpus of humanity's written work into a searchable database that can answer any of our questions in mere fractions of a second?

Personally, I have no clue what to expect from the Internet. But whatever does manage to capture our collective attention I am pretty confident that it will be colorful, bright, shiny, and entirely unexpected!

Written by Geoff Huston, Author & Chief Scientist at APNIC

Follow CircleID on Twitter

More under: Access Providers, Broadband, Cybersecurity, DNS, DNS Security, Internet Protocol, IP Addressing, IPv6, Mobile Internet, Networks

Categories: News and Updates

All About .Me – DNW Podcast #191

Domain Name Wire - Mon, 2018-06-25 15:30

Learn about the origins of the .Me domain name.

You’ve surely seen .me domain names and might even own some in your domain name portfolio. Today you’re going to learn about how the .me domain name came to be and why it has been successful when we talk to Natasa Djukanovic, the CMO of the .Me registry.

Also: Web.com takeover, RDNH bonanza, Mugshots.com update and more.

Subscribe via iTunes to listen to the Domain Name Wire podcast on your iPhone or iPad, view it on Google Play Music, or download it to begin listening. (Listen to previous podcasts here.)

Related posts:
  1. How End User Domain Buyers Think – DNW Podcast #134
  2. NamesCon recap with 15 interviews – DNW Podcast #172
  3. The challenges of new TLDs with Tobias Sattler – DNW Podcast #177
Categories: News and Updates

Another day that ends in Y, another reverse domain name hijacking

Domain Name Wire - Mon, 2018-06-25 15:20

Verdant Services of Seattle guilty of reverse domain name hijacking.

Well, it’s another day that ends in ‘y’, so I have another reverse domain name hijacking case to report.

This time it’s Verdant Services, Inc. of Seattle trying to reverse domain name hijack Verdant.com. It owns the domain VerdantServices.com.

The domain name is owned by VRDT Corporation, which goes by the name Verdant. VRDT was the company’s ticker symbol when it was previously publicly traded.

The respondent bought the domain name for $80,000 in 2012. As you can see by going to Verdant.com, the domain forwards to VRDT.com, which is a website with details about the respondent’s company.

Here’s one of the curious arguments made by the complainant:

Complainant tried to buy the Domain Name from Respondent in 2017, without success. Because Complainant sought to acquire the Domain Name, Respondent knew that Complainant existed and had an interest in the trademark corresponding to the Domain Name.

Hello, Plan B reverse domain name hijacking!

Here’s part of what World Intellectual Property Organization panelist David H. Bernstein wrote in finding Verdant Services, Inc. guilty of reverse domain name hijacking:

On its face, this Complaint did not state a claim for transfer under the Policy. Complainant’s evidence established that Respondent had made a bona fide use of the Domain Name in the past, that Respondent’s acquisition of the Domain Name in 2012 therefore could not possibly have been in bad faith, and that Respondent was not in fact defunct (since Respondent responded to Complainant’s emails as recently as November 2017). The Complaint was also legally deficient to the extent it was essentially arguing for retroactive bad faith, a concept that has firmly been rejected as set out in the WIPO Overview 3.0. Alternatively, if Complainant did not intend to argue for retroactive bad faith (since that part of the Complaint is somewhat unclear in its reasoning), then the Complaint was devoid of any allegations whatsoever establishing bad faith registration. And, the Complaint included no facts that establish bad faith use.

The weakness of the Complaint was all the more apparent once Respondent responded, with evidence not only of its good faith registration and its legitimate interest in the VERDANT mark, but also with evidence of its business interests that continue to operate under the name. These are facts that should have been readily apparent to Complainant, had Complainant investigated Respondent’s use prior to filing the Complaint. In light of the Complainant’s knowledge of Respondent’s prior bona fide use of the Domain Name and the other facts discussed above, it was improper for Complainant to file a challenge and claim that Respondent was defunct without adequate investigation of the true facts.

The complainant was represented by Carstens & Cahoon, LLP, an intellectual property law firm.

Related posts:
  1. UDRP complainant shoots self in foot with supplemental filing
  2. Telepathy scores $40,000 from reverse domain name hijacking case
  3. Insurance company Allianz tries reverse domain name hijacking a domain name
Categories: News and Updates

New TLD data leads onlookers astray

Domain Name Wire - Mon, 2018-06-25 14:49

You need context in order to understand the data.

There are many good sources for new top level domain name data out there. Sites like nTLDStats.com and namestat.org do a good job presenting data.

But without context, the data can mislead. An onlooker would think that .top is the most popular top level domain and that HiChina is absolutely crushing it with new TLD sales.

Of course, insiders know the reason .top has more domains registered and HiChina is a top seller.

Consider a post on SeekingAlpha this morning titled “GoDaddy Is Unprepared For The Future Of Domain Names”.

(Upfront, I should mention that basically anyone can write stock analysis for Seeking Alpha, and writers get paid based on the traffic their articles drive. The person who submitted this article has written only two other articles for the site, about bitcoin and a South American bank.)

The author of this article, Ben Holden-Crowther, argues that GoDaddy is overvalued. He might be correct; GoDaddy’s stock has been on a meteoric rise, and its market cap has ballooned from $4 billion in early 2016 to $12 billion today.

But the article’s thesis is that GoDaddy is overvalued because it’s behind the game in new TLDs. It points to the “missed opportunity” of GoDaddy applying for and selling its own top level domain names, and references the top new TLD charts on nTLDStats. Quote:

With 23 million domain names registered under these new gTLDs since 2013, the new extensions are clearly playing an increasingly important role in the success of a domain name business.

OK, there are 23 million registrations. But we all know most of those are giveaways or dollar domains, especially the top domains on the chart.

The writer points out that Uniregistry owns and sells its own new TLDs, cutting out the middleman. Well, ask Uniregistry how new TLDs are turning out for it compared to expectations.

The author also points out that GoDaddy is #3 in market share for new TLDs. (It’s #1 in domains overall.) Missing here is that same quality question: you can be #1 for new TLDs if you focus on selling domains for pennies.

If I had to benchmark new TLD preparedness, there are only a handful of registrars I’d put at or above GoDaddy. One is Name.com, which is owned by a major new TLD registry and has a big incentive to push new TLDs.

This Seeking Alpha article is just the latest example of how people can be misled by new TLD metrics. New TLDs will continue to exist and sales of the domains to end users will continue to grow over time. But there’s a lot of noise in the numbers. In fact, I could counter the entire article by showing that new TLD registrations have dropped from a peak of about 30 million a year ago. That ignores the context, though. The reason for the drop is free and penny domain promotions being throttled back.

Context is key when it comes to domain data. Without context, we’ll continue to see articles like the one published today on Seeking Alpha.

Related posts:
  1. GoDaddy lets customers “watch” new TLDs
  2. GoDaddy gets sixth patent for “Adwords for Top Level Domains”
  3. GoDaddy auction data provide early indication of new TLD demand and values
Categories: News and Updates

It's About Whois Display And Access

Domain industry news - Mon, 2018-06-25 00:37

The need for an access model for non-public Whois data has been apparent since GDPR became a major issue before the community well over a year ago. Now is the time to address it seriously, and not with half measures. We urgently need a temporary model for access to non-public Whois data for legitimate uses, while the community undertakes longer-term policy development efforts.

The pronounced need isn't news to ICANN. The Governmental Advisory Committee (GAC), law enforcement, security experts, IP interests and a host of others sounded the alarm some time ago. And ICANN's CEO even acknowledged it, while stopping short of addressing it. Most recently, the Security and Stability Advisory Committee issued a strongly worded advisory underlining the security and stability harms now accruing thanks to a dark Whois. Alas, we find that the system already is fragmented.

"It's not that bad," some will say. "The parade of horribles hasn't arrived." That would be misdirected thinking. Requests for non-public data may be lower than anticipated because the temporary Whois model approved by ICANN's Board raised more questions that it answered, left avenues for access unclear and fragmented, and left out measures to hold registrars and registries accountable for providing access to non-public Whois for GDRP-allowed uses. Even so, early feedback on requests for non-public WHOIS indicates that many registrars are non-responsive. It's like we're back to the wild west.

It's a good thing that ICANN has now joined the community in acknowledging critical access needs by publishing a "Framework Elements for Unified Access Model for Continued Access to Full WHOIS Data — For Discussion," but it's only a half step. Rather than stick its toe in the access water with this model for a model (for discussion, not for action), ICANN should jump in and commit to solving this problem immediately, instead of focusing on a few high-level themes with a suggestion that we all gather to talk about it.

Unfortunately for the victims of e-crime, abuse and infringement, and for the world's Internet users, this "model" is nowhere close to implementable. For example, it tries to impose a substantial amount of work and responsibility on governments and the GAC, which has already told ICANN that it will provide advice within a limited purview and is not responsible for administrative or operational activities. And it's not exactly timely. By its own specification and timetable, the model can't move ahead until at least mid-December 2018. Meanwhile, the harms continue to pile up.

So what we appear to have here is a rapidly deteriorating domain name system, with a vague model to address it that relies on bodies that don't want to run it, and the plan is to talk it over for something like the next six months.

There's a better option.

Over-applying GDPR requirements, the ICANN Board issued a Temporary Specification (Temp Spec) to deal with Whois display. GDPR's allowances for legitimate use should now drive the Board to put in place a stop-gap measure that immediately provides uniformity and predictability for access to non-public Whois. Just as ICANN sprang into action on displaying Whois data, it is now in a position to do the same for access.

This isn't to suggest that ICANN shouldn't discuss and further develop its newly released Unified Access Model with the community. But there's a problem to solve now, and solutions that have already been offered should not be ignored. I'm referring to the community access model, now at 47 pages of detail that answers many of the very questions ICANN asks in its Unified Access Model. The technical solutions highlighted in that model will probably be part of the community discussion around the Unified Access Model, and they should certainly be considered as a stop-gap for the current need for access to non-public Whois.

With available and implementable solutions today, ICANN should be driven by public interest rather than total risk aversion. It's time to move quickly to provide an immediate temporary solution to access while the community works on the Unified Access Model and EPDP.

Written by Fabricio Vayra, Partner at Perkins Coie LLP

Follow CircleID on Twitter

More under: Domain Names, ICANN, Internet Governance, Policy & Regulation, Privacy, Whois

Categories: News and Updates

Live On Monday, 25 June - DNSSEC Workshop at ICANN 62 in Panama

Domain industry news - Mon, 2018-06-25 00:17

With the DNSSEC Root Key Rollover coming up on October 11, how prepared are we as an industry? What kind of data can we collect in preparation? What is the cost-benefit (or not) of implementing DANE? What can we learn from an existing rollover of a cryptographic algorithm?

All of those questions and more will be discussed at the DNSSEC Workshop at the ICANN 62 meeting in Panama City, Panama, on Monday, June 25, 2018. The session will begin at 9:00 and conclude at 12:15 local time (EST, UTC-5). Note that Panama does not observe daylight saving time, so for viewers on US Eastern Daylight Time (UTC-4) the workshop runs one hour later on the clock, beginning at 10:00 EDT.
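For readers who want a first look at their own readiness ahead of October 11, here is a minimal sketch using the dnspython library (my choice of tooling, not part of the workshop materials); the key tag 20326 for KSK-2017 is public knowledge rather than something taken from this article:

```python
# Minimal rollover-readiness sketch, assuming dnspython >= 2.0
# (pip install dnspython); use dns.resolver.query on older versions.
import dns.resolver
import dns.dnssec

KSK_2017_TAG = 20326  # the trust anchor the October 2018 rollover switches to

for key in dns.resolver.resolve(".", "DNSKEY"):
    if key.flags == 257:  # 257 = ZONE + SEP flags, i.e. a key-signing key
        tag = dns.dnssec.key_id(key)
        note = " <- KSK-2017" if tag == KSK_2017_TAG else ""
        print(f"root KSK published with key tag {tag}{note}")
```

If your validating resolver has not picked up key tag 20326 as a trust anchor (for example, via RFC 5011 automated updates), validation will begin failing once the old key is retired.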

The agenda includes:

  • DNSSEC Workshop Introduction, Program, Deployment Around the World – Counts, Counts, Counts
  • Panel: DNSSEC Activities and Post Key Signing Key Rollover Preparation
  • DANE: Status, Cost Benefits, Impact from KSK Rollover
  • An Algorithm Rollover (case study from CZ.NIC)
  • Panel: KSK Rollover Data Collection and Analysis
  • DNSSEC – How Can I Help?
  • The Great DNSSEC/DNS Quiz

It should be an outstanding session! For those onsite, the workshop will be in Salon 4, the ccNSO room.

Lunch will follow. Thank you to our lunch sponsors: Afilias, CIRA, and SIDN.

* * *

The DNSSEC Workshop will be followed by the "Tech Day" set of presentations from 13:30 to 18:30 EST. Many of those presentations may also be of interest, and they will be streamed live at the same URL.

As this is one of ICANN's smaller "Policy Forum" meetings, there will be neither the "DNSSEC for Everybody" session nor the "DNSSEC Implementer's Gathering" that take place at the other two ICANN meetings each year. Also, as I am not able to travel to ICANN 62, I want to thank Jacques Latour for stepping in to handle the presenting and emceeing that I usually do.

Please do join us for a great set of sessions about how we can work together to make the DNS more secure and trusted!

Written by Dan York, Author and Speaker on Internet technologies - and on staff of Internet Society

Follow CircleID on Twitter

More under: DNS, DNS Security, ICANN

Categories: News and Updates

MERGE! Announces a Half-Dozen Additions to Speaker's Roster for 2018 Conference in Orlando

DN Journal - Fri, 2018-06-22 22:36
The 2nd annual MERGE! conference - coming to Orlando in September - continues to take shape with 6 new additions to the speaker's roster today.
Categories: News and Updates

Access to Safe and Affordable Prescription Medications Online is a Human Right

Domain industry news - Fri, 2018-06-22 15:19

I recently served on a panel at the Toronto RightsCon 2018 conference (Making Safe Online Access to Affordable Medication Real: Addressing the UN Human Rights resolution for access to essential medicines), where I represented the perspective of Americans struggling to afford their daily medications and desperate to have safe, affordable Internet access to their prescriptions.

These Americans may not understand the inner workings of the Internet, but they do understand its mission of providing global access to information, products, and services. They know there are "bad actors" out there, as there are in any segment of our society. They also know how to find the legitimate pharmacies, and get the medications they need, at prices they can pay.

We can usually find fair prices for the things we need in a marketplace close to home, but prescription drugs in the U.S. are not fairly priced. The global marketplace available through the Internet can provide patients with fair prices for life-saving prescription medications.

However, some people invoke "rogue pharmacies" to scare patients while simultaneously keeping the cost of prescription medications exorbitantly high. This is an ongoing, serious health crisis for many Americans who are desperate for relief and want the government to act.

The cost of prescription medications is higher in the U.S. than in any other country in the world because there are no restrictions or limitations on how much companies can charge, so these "big pharma" global giants charge 'whatever the market will bear.'

These companies can increase the price of medications for any reason… or no reason whatsoever. Many people are then forced to choose between their medications and gas, food, or even their mortgage. Or, they skip doses, split pills, or forgo medications completely.

In fact, an estimated 35 million Americans fail to adhere to their prescribed drug regimens due to cost, according to a Commonwealth Fund study.[1] In another study by the Harvard School of Public Health and Kaiser Health Foundation, 50 percent of Americans said they couldn't afford medication and became sicker as a result of not taking medicine.[2]

Once again at RightsCon, it was noted that for years, millions of Americans facing this crisis have purchased their prescriptions from licensed, legitimate Canadian pharmacies that provide a lifeline to those in need of affordable and often life-saving daily medications. But once again, misleading information, along with impractical registration criteria, seeks to erode patients' trust in licensed, legitimate online pharmacies that have chosen not to register for, or are blocked from using, a .Pharmacy domain name.

Clearly, only licensed, legitimate online pharmacies should be able to sell prescription medications upon receipt of a valid prescription and with adherence to proper safety protocols. However, neither the location of the licensed pharmacy, the domain it uses, nor the location of the patient should impact affordability or access.

After all, the Internet was created to expand freedoms, protect human rights and build a global community. Internet protocols and policies must reflect the realities of how people use the Internet today because the Internet is, in some cases, the only access patients have to affordable maintenance medications.

We believe the Internet community can and should protect access through policymaking that embraces safe, legitimate pharmacy websites regardless of their location and domain name. To do otherwise is to allow the Internet to be used as a tool for censorship.

As an advocacy organization that fights for everyday Americans, we believe that access to safe and affordable prescription medications should not be a privilege reserved for the wealthy among us. Instead, we believe it is a human right and, therefore, must be protected through cyber policymaking, effective Internet governance, and updated amendments to outmoded laws so that such policies truly meet the needs of patients.

This is a critical time for protecting our human rights at its intersection with digital technology. As a global Internet community, we must stand up to those who are using the Internet to restrict options that support and protect fair access to medicines.

All Americans deserve access to safe and affordable medications.

[1] Commonwealth Fund: http://www.commonwealthfund.org/~/media/files/publications/issue-brief/2015/jan/1800_collins_biennial_survey_brief.pdf

[2] Harvard School of Public Health and Kaiser Health Foundation: https://kaiserfamilyfoundation.files.wordpress.com/2013/01/7371.pdf

Written by Tracy Cooley, Executive Director, Campaign for Personal Prescription Importation

Follow CircleID on Twitter

More under: Domain Names, Internet Governance, Web

Categories: News and Updates

The “disclosure” on this fake domain renewal notice is hilarious

Domain Name Wire - Fri, 2018-06-22 14:33

Senders of misleading email specifically disclaim that it’s misleading.

Domain Name Wire readers have surely received lots of fake renewal notices telling them they must pay or lose their domain name, or at least misleading them into thinking that.

This all ends up in my spam folder, but when I was clearing out that folder this week I decided to open one of the emails. I read the tiny fine print at the bottom and it gave me a laugh.

Here’s the email:

If you look carefully at the email you’ll see some tiny, light grey print at the bottom. It starts out with a fairly typical email disclaimer:

PLEASE NOTE:
This Email contains information intended only for the individuals or entities to which it is addressed. If you are not the intended recipient or the agent responsible for delivering it to the intended recipient, or have received this Email in error, please notify immediately the sender of this Email at the Help Center and then completely delete it. Any other action taken in reliance upon this Email is strictly prohibited, including but not limited to unauthorized copying, printing, disclosure, or distribution.

The disclaimer buries the lede. If you read on, it says you aren't renewing your domain at all; you're buying an "optimization" service for your "webside" (sic).

We do not register or renew domain names. This is not a bill or an invoice. This is a optimization offer for your webside. You are under no obligation to pay the amount stated unless you accept this purchase offer.

Then it talks about how the email complies with CAN-SPAM. It’s the second sentence that cracks me up:

Promotional material is stricly (sic) along the guidelines oft he can-spam act of 2003. They are in no way misleading.

Here’s a hint: if you have to tell people that your email is not misleading, it probably is.

Oh, by the way, I “elected to recieve notificaton (sic) offers” according to the email.

Gmail caught this email and put it in spam, warning that the link had been used to steal information. The link uses a .top domain…imagine that!

© DomainNameWire.com 2018. This is copyrighted content. Domain Name Wire full-text RSS feeds are made available for personal use only, and may not be published on any site without permission. If you see this message on a website, contact copyright (at) domainnamewire.com. Latest domain news at DNW.com: Domain Name Wire.

Related posts:
  1. Domain Renewal Scam Picks Up Speed
  2. FTC Settles with Con Artists in Domain Name Renewal Scam
Categories: News and Updates

Regional court in Germany to reconsider Whois data GDPR case

Domain Name Wire - Fri, 2018-06-22 12:41

Court exercises its option to re-evaluate its ruling before kicking appeal up to higher court.

A German court that ruled against an injunction last month in a Whois data dispute will reconsider its decision.

ICANN filed a legal action in Bonn after domain name registrar EPAG, which is owned by Tucows (NASDAQ: TCX), informed ICANN that it would no longer collect Admin and Tech contact information on domain registrations. EPAG made this decision based on its interpretation of the General Data Protection Regulation (GDPR).

The court denied ICANN’s request for an injunction that would have forced EPAG to continue collecting this data.

ICANN subsequently appealed the decision to a higher court.

The original court has the option to re-evaluate its decision before forwarding the case to the higher court. It has exercised this option and asked EPAG to comment on ICANN’s appellate papers.

This doesn’t necessarily mean that the lower court thinks it erred in its original decision.

EPAG is due to respond to the court within two weeks.

© DomainNameWire.com 2018. This is copyrighted content. Domain Name Wire full-text RSS feeds are made available for personal use only, and may not be published on any site without permission. If you see this message on a website, contact copyright (at) domainnamewire.com. Latest domain news at DNW.com: Domain Name Wire.

Related posts:
  1. Domain investors risk being left out of Whois discussion
  2. GDPR will make domain name transfers more difficult
  3. ICANN files legal action against Tucows registrar over GDPR
Categories: News and Updates

NamesCon Registration Prices go Up Tomorrow

Domain Name News - Sat, 2013-11-30 18:57

What’s the perfect thing to do after celebrating Thanksgiving with your family? Get right back to work and plan for the new year. And what better way to get into the domaining mood for the new year than a domain industry conference at a low price? Today is the last day to get NamesCon tickets for the event, running January 13th to 15th in Las Vegas, NV, for $199 plus fees. The price roughly doubles tomorrow, to $399.

Richard Lau, the organizer of the event, told DNN: “We are at over 200 attendees already and expect to hit more than 400 at the conference. The opening party on Monday (6:30pm-9pm) will be hosted by .XYZ at the Tropicana, and the Tuesday night party will be at the Havana Room at the Tropicana from 8pm to midnight.”

With hotel prices as low as $79 a night (plus a $10 resort fee) at the Tropicana, right on the Strip, this ‘no meal’ conference is shaping up to be the event for the industry in 2014.

The event has already attracted sponsors like:

Further sponsorships are available.

Keynote speakers are:

If you need another reason to attend: you can even meet DomainNameNews in person there :)


Categories: News and Updates

.DE Registry to add Redemption Grace Period (DENIC)

Domain Name News - Tue, 2013-11-26 19:49

As of December 3rd, 2013, DENIC, the operator of the .DE ccTLD, will introduce a Redemption Grace Period (RGP) that allows the original domain owner to recover a deleted domain for up to 30 days, the same protection already offered for gTLDs.

See the full press release after the jump.

Redemption Grace Period for .DE name space kicking off in early December

New cooling off phase to prevent unintentional domain loss

Effective 3 December 2013, DENIC, the managing organization and central registry operator of the .DE top-level domain, will launch a dedicated cooling-off service (called Redemption Grace Period, or RGP) that shall apply to all second-level domain names in the .DE name space. This procedure shall protect registrants against the unintentional loss of their domain(s) as a result of accidental deletion.

Under the RGP scheme, .DE domain names shall no longer be irretrievably lost following deletion; instead, they shall initially enter a 30-day cooling-off phase, during which they may be re-registered solely on behalf of their former registrant(s).

RGP cooling-off provisions shall allow former registrants to redeem registration of the subject domain names by using the related Restore service through a registrar. Only if no redemption is requested during the 30-day RGP phase shall the relevant domain names become available for registration by any interested party again. Similar regulations are already applied by other top-level domain registries.

Registrars redeeming a deleted .DE domain name for the original registrant will have to pay a Restore fee and may pass on the related costs.

Deleted .DE domain names placed in cooling-off from RGP implementation onward will be earmarked with a redemption-period status in the DENIC lookup services (whois) accessible at www.denic.de.
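Once the service is live, that status should be easy to spot programmatically. Below is a minimal sketch that sends a raw query to DENIC's public whois over the standard port 43; the plain-domain query format and the "Status:" label are assumptions on my part, so consult DENIC's documentation for the authoritative interface:

```python
# Minimal sketch: raw whois lookup against whois.denic.de on port 43.
# The plain-domain query format and "Status:" label are assumptions here.
import socket

def denic_whois(domain):
    """Return the raw whois response text for a .DE domain."""
    with socket.create_connection(("whois.denic.de", 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

for line in denic_whois("example.de").splitlines():
    if line.strip().lower().startswith("status:"):
        print(line.strip())  # e.g. "Status: redemptionPeriod" during cooling-off
```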

As a consequence of the above measures, the current DENIC .DE domain guidelines shall be superseded by new, amended guidelines from the date of the RGP launch, i.e. 3 December 2013, which shall then be permanently published at http://www.denic.de/en/domains/general-information/domain-guidelines.html.


Categories: News and Updates

Andee Hill forms EscrowHill.com backed by Gregg McNair

Domain Name News - Mon, 2013-11-25 16:00

Andee Hill, who recently left Escrow.com, where she was Director of Business Development, has created a new licensed escrow company, Escrow Hill Limited, with the backing of entrepreneur Gregg McNair.

 

“When Andee told me she was thinking about forming her own escrow business I was immediately enthusiastic. I have a reputation of connecting some of the best people in our industry and Andee is at the top both professionally and as an amazing human being,” McNair said.

 

EscrowHill.com’s team includes Ryan Bogue as General Manager and Donald Hendrickson as Operations Manager. Both have worked in the business of online escrow under Hill’s direction for over fifteen years combined. Together with Hill’s experience the new team offers over thirty years of online escrow experience!

“During my fifteen years in this business, I have handled just about every aspect of online escrow. Regardless of my title, I have always known that understanding the client’s needs and providing excellent and secure service is invaluable. I have been fortunate to work with the industry innovator from day one. I have seen what works and what doesn’t. I have been even more fortunate to have created great relationships and trust with industry leaders. At EscrowHill.com I know I can do an even better job,” Hill said.

“Gregg has earned a strong reputation for honesty, integrity and for successfully making businesses work. He also has incredible enthusiasm and a heart for helping others. All are key factors in me wanting Gregg to support my endeavor at EscrowHill.com,” Hill continued.

McNair has assumed the non-operational role of Chairman, supporting Hill and her team with whatever it takes to build the best escrow business on the planet.

Marco Rinaudo, Founder and CEO of domain registrar Internet.bs, another of Gregg McNair’s investments, has been appointed CTO of EscrowHill.com. Rinaudo, who has been a leader in the international hosting and registrar space since 1995, said, “EscrowHill.com is formed and supported by the very best people in the industry. Our team has built the most sophisticated online internet escrow platform, fully automated and with more advanced security features than any other.”

See the full press release after the jump.

Andee Hill forms EscrowHill.com

AUCKLAND NZ: One aspect of the domain space that bridges the whole industry is that of escrow; and the one person better known than any other in that context is the former Director of Business Development at Escrow.com, Ms. Andee Hill.

Ms. Hill has established the licensed international escrow enterprise, Escrow Hill Limited with the backing of long time friend and industry entrepreneur, Gregg McNair.

“When Andee told me she was thinking about forming her own escrow business I was immediately enthusiastic. I have a reputation of connecting some of the best people in our industry and Andee is at the top both professionally and as an amazing human being,” McNair said.

EscrowHill.com’s dream team includes Ryan Bogue as General Manager and Donald Hendrickson as Operations Manager. Both have worked in the business of online escrow under Hill’s direction for over fifteen years combined. Together with Hill’s experience the new team offers over thirty years of online escrow experience!

The domain industry is undergoing incredible change and EscrowHill.com is positioned to provide secure, yet flexible, state of the art products and services. EscrowHill.com will be able to meet the needs of both past and future generations of domain buyers, brokers and sellers. Hill’s reputation as an honest, discreet and hard working professional will now aspire to a new level.

“During my fifteen years in this business, I have handled just about every aspect of online escrow. Regardless of my title, I have always known that understanding the client’s needs and providing excellent and secure service is invaluable. I have been fortunate to work with the industry innovator from day one. I have seen what works and what doesn’t. I have been even more fortunate to have created great relationships and trust with industry leaders. At EscrowHill.com I know I can do an even better job,” Hill said.

“Gregg has earned a strong reputation for honesty, integrity and for successfully making businesses work. He also has incredible enthusiasm and a heart for helping others. All are key factors in me wanting Gregg
to support my endeavor at EscrowHill.com,” Hill continued.

McNair has assumed the non-operational role of Chairman, supporting Hill and her team with whatever it takes to build the best escrow business on the planet.

Marco Rinaudo, Founder and CEO of Internet.bs, has been appointed CTO of EscrowHill.com. Rinaudo, who has been a leader in the international hosting and registrar space since 1995, said, “EscrowHill.com is formed and supported by the very best people in the industry. Our team has built the most sophisticated online internet escrow platform, fully automated and with more advanced security features than any other.”


Categories: News and Updates

Inaugural Heritage Auctions Domain Event in New York City – Live Results

Domain Name News - Thu, 2013-11-21 23:55

We live blogged the results of the inaugural Heritage Auctions domain name event in New York City today. There is no guarantee that this list is correct or complete, as these are not official or officially approved results.

The auction sold 26 of the 68 domains for a total of $419,970. Domains that did not sell in the live auction will be available on Heritage Auctions’ website for two weeks at their reserve price as a Buy It Now price.

The top 5 sales of this auction were:

  1. XZ.com for $138,000
  2. Animation.com for $112,125
  3. Hemisphere.com for $34,500
  4. AIE.com for $23,000 and BusinessPhones.com for $23,000 (tie)
  5. Numismatics.com for $17,250

Please note that all sold domains incur a 15% buyer’s premium, which is included in our total and shown in the last column of the table below. See the full live-blogged results after the jump.

 

Lot # | Domain Name | Reserve | Sold? | Sale Price | Price w/ Commission
87001 | DupontCircle.com | $7,000 | SOLD | $7,000 | $8,050.00
87002 | OJX.com | no reserve | SOLD | $3,666 | $4,215.90
87003 | CoinCompany.com | no reserve | SOLD | $1,600 | $1,840.00
87004 | DoctorateDegree.com | $10,000 | SOLD | $10,000 | $11,500.00
87005 | ChicagoWine.com | no reserve | pass
87006 | Animation.com | $95,000 | SOLD | $97,500 | $112,125.00
87007 | KCY.com | $7,000 | pass
87008 | FXTrading.com | $25,000 | pass
87009 | SellShort.com | $6,000 | SOLD | $6,000 | $6,900.00
87010 | Dayton.com | $95,000 | pass
87011 | Coins.ca | $35,000 | pass
87012 | Comics.ca | $20,000 | pass
87013 | AIE.com | $20,000 | SOLD | $20,000 | $23,000.00
87014 | ZQF.com | no reserve | SOLD | $3,750 | $4,312.50
87015 | EqualRights.com | $15,000 | pass
87016 | DVDs.com | $50,000 | pass
87017 | CommercialArt.com | $3,888 | pass
87018 | Burbank.net | $5,000 | pass
87019 | FFQ.com | $3,500 | SOLD | $5,000 | $5,750.00
87020 | Numismatics.com | $15,000 | SOLD | $15,000 | $17,250.00
87021 | Charge.me | $3,000 | pass
87022 | BusinessPhones.com | $20,000 | SOLD | $20,000 | $23,000.00
87023 | AKU.com | $20,000 | pass
87024 | Sociology.com | $40,000 | pass
87025 | SellGoldCoins.com | $2,000 | SOLD | $2,600 | $2,990.00
87026 | KFX.com | $8,000 | pass
87027 | Marilyn.com | $30,000 | pass
87028 | NL.com | $385,000 | pass
87029 | ExecriseGloves.com | $1,500 | SOLD | $1,500 | $1,725.00
87030 | Keynesian.com | $30,000 | pass
87031 | CakeMix.com | $10,000 | pass
87032 | NumismaticsBlog.com | $500 | pass
87033 | MutualFunds.com | $1,000,000 | pass
87034 | GIU.com | $6,500 | SOLD | $6,500 | $7,475.00
87035 | BulkDiapers.com | no reserve | SOLD | $500 | $575.00
87036 | Hemisphere.com | $30,000 | SOLD | $30,000 | $34,500.00
87037 | Alexandria.com | $200,000 | pass
87038 | Downline.com | $10,000 | pass
87039 | ActiveStocks.com | no reserve | SOLD | $850 | $977.50
87040 | TheCoinBlog.com | no reserve | SOLD | $325 | $373.75
87041 | Bicycle.com | $200,000 | pass
87042 | HJR.com | $9,300 | pass
87043 | FootballUniforms.com | $18,000 | pass
87044 | Suit.com | $95,000 | pass
87045 | OJQ.com | $4,500 | pass
87046 | GradedCards.com | $1,500 | SOLD | $1,500 | $1,725.00
87047 | BasketballMemorabilia.com | $1,500 | pass
87048 | VJZ.com | $4,500 | pass
87049 | SmartTVs.com | $5,000 | SOLD | $5,500 | $6,325.00
87050 | GolfLessons.com | $75,000 | pass
87051 | JazzBlog.com | no reserve | pass
87052 | MyCoinCollection.com | $500 | SOLD | $600 | $690.00
87053 | Tie.com | $100,000 | pass
87054 | UncutDiamonds.com | $3,000 | pass
87055 | QR.com | $200,000 | pass
87056 | NewTees.com | no reserve | SOLD | $250 | $287.50
87057 | WOJ.com | $6,500 | pass
87058 | FootballlEquipment.com | $2,500 | pass
87059 | ItalianSuits.com | $10,000 | pass
87060 | LuxuryBags.com | $40,000 | pass
87061 | KCJ.com | $9,300 | pass
87062 | SwissChronograph.com & SwissChronographs.com | no reserve | SOLD | $550 | $632.50
87063 | PX.net | $10,000 | pass
87064 | CurrencyExchange.com | $220,000 | pass
87065 | DiveSuits.com | no reserve | SOLD | $500 | $575.00
87066 | OpalEarrings.com | $3,500 | pass
87067 | XZ.com | $120,000 | SOLD | $120,000 | $138,000.00
87068 | PHQ.com | $3,500 | SOLD | $4,500 | $5,175.00
Auction Total: $419,970
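As a quick sanity check on the premium arithmetic, the sale prices of the 26 sold lots, each increased by the 15% buyer's premium, reproduce the reported total to within rounding:

```python
# Sale prices (before premium) of the 26 sold lots, from the table above.
hammer = [7_000, 3_666, 1_600, 10_000, 97_500, 6_000, 20_000, 3_750, 5_000,
          15_000, 20_000, 2_600, 1_500, 6_500, 500, 30_000, 850, 325,
          1_500, 5_500, 600, 250, 550, 500, 120_000, 4_500]

total_with_premium = sum(price * 1.15 for price in hammer)
print(f"${total_with_premium:,.2f}")  # $419,969.65, reported as $419,970
```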

 


Categories: News and Updates

Buenos Aires Airport closure leaves many ICANN 48 attendees stranded

Domain Name News - Fri, 2013-11-15 15:27

As the 48th ICANN meeting gets set to start in Buenos Aires, many attendees were stranded today in Montevideo, Uruguay, and at other South American airports due to an airport closure in Buenos Aires. An Austral Embraer ERJ-190, operating on behalf of Aerolíneas Argentinas and arriving from Rio de Janeiro, Brazil, overran the runway at 5:45 this morning local time (UTC-3) and only came to a halt after the aircraft's nose hit the localizer antenna about 220 meters (730 feet) past the runway end. None of the 96 passengers was injured, and all were taken to the terminal. According to the airport, a cold front was passing through the area at the time. The airline reports that the incident occurred due to a sudden change in wind direction and speed.

Flights into the airport resumed after about three hours, but some attendees will now only arrive tomorrow. DNN was not able to confirm whether any ICANN 48 attendees were on the flight itself.

 

[via AVHerald and the ICANN Social Group on Facebook, picture posted on twitter by @JuanMCornejo]

 


Categories: News and Updates
