Domain industry news

Latest posts on CircleID

The IPv4 Market Runs on All Cylinders in 2018

Fri, 2019-03-22 18:20

This post was co-authored by Marc Lindsey, President, and Janine Goodman, Vice President, of Avenue4.

2018 was a record-breaking year for the IPv4 market. The total volume of addresses traded, overall number of transactions in the ARIN region, and prices reached their highest levels to date.

Since 2014, the number of transactions has grown considerably, mostly attributable to a dramatic increase in small block trades of fewer than 4,000 addresses. The volume of addresses sold during the same period, however, tells a different story. Between 2014 and 2015, the volume of addresses traded increased seven-fold to nearly 40 million addresses. Between 2015 and 2016, this number dropped by half. Then between 2016 and 2017, trading volumes skyrocketed again, more than doubling in the intra-RIR market and then growing by another 15% overall in 2018. This volatile pattern of activity is directly correlated with the dips and surges in available large block supply.

In our 2016 report, we attributed the sharp reduction in trading volumes both to the depletion of large block supply available in the marketplace and to the decision by some large block holders to delay entering the market altogether until pricing improved.

The heavy activity in 2017 and 2018 in part reflects the market's response to this large block scarcity, which impacted unit pricing across all other sectors of the market. Large blocks were trading for $6 per number at the beginning of 2016, nearly half the per unit price one might expect to pay for a /24 (256 numbers) that same year. By the end of 2018, large blocks were trading at prices in excess of $20 per number — surpassing small and mid-sized block unit pricing.
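
The arithmetic behind those per-number prices is straightforward, since an IPv4 /n prefix always contains 2**(32 - n) addresses. A quick sketch (the $20 figure is the late-2018 large-block price quoted above, used purely for illustration, not live market data):

```python
# IPv4 block-size arithmetic behind the per-number prices quoted above.
# A /n prefix contains 2**(32 - n) addresses.

def block_size(prefix_len: int) -> int:
    return 2 ** (32 - prefix_len)

# A /24 holds 256 numbers; a /16 holds 65,536.
print(block_size(24))        # 256
print(block_size(16))        # 65536

# At roughly $20 per number, a /16 would change hands for about $1.3 million.
print(block_size(16) * 20)   # 1310720
```

This is why the shift described below toward /16-and-smaller sales still moves real money: even "small" blocks carry seven-figure price tags at 2018 unit prices.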

Escalating prices in 2018, combined with increased buyer flexibility, prompted some new large block sellers to enter the market. For the first time, three major telecommunications carriers sold from their IPv4 inventory. The sale of over 5 million numbers from Level 3 to Alibaba was the largest inter-RIR transaction ever and helped breathe some life into a previously sagging inter-RIR market.

As we've seen over the last couple of years, buyers are willing to enter into contract structures that afford sellers more time to undertake the renumbering efforts required to free up their supply. And large block buyers are increasingly willing to purchase smaller and smaller block sizes, which appeals to sellers that have substantial but fragmented unused address space. In 2018, nearly 80% of IPv4 blocks sold to large block buyers were a /16 (65,536 numbers) or smaller.

We expect the small block sector of the market to continue to thrive over the next several years. The large block sector is, however, another matter. Although we expect additional large block supply to enter the market later this year, mostly in the form of legacy /8 address space, the available large block space is dwindling and will become a much smaller source of supply starting in 2020. And no, IPv6 migration is still not materially impacting the IPv4 market.

A full analysis of the IPv4 market can be found in Avenue4's 2018 State of the Market Report.

Written by Janine Goodman, Vice President and Co-founder at Avenue4 LLC

Follow CircleID on Twitter

More under: IP Addressing

Categories: News and Updates

EURid Pauses Brexit Plans

Fri, 2019-03-22 18:15

When the UK announced its intention to withdraw from the European Union it was clear to some of us that this would cause complications with .eu and possibly other domain name extensions.

Over the past year, it's become clear that the European Commission, who mandate the .eu domain name policy, weren't interested in providing a "soft landing" for impacted registrants of .eu domain names. While their position has morphed slightly over the past year, it's very clear that they were happy to let existing registrants of .eu domain names residing in the UK, including Northern Ireland, simply lose their domains. In fact, they were pushing the .eu registry to pull all .eu domains from UK registrants on March 29th.

For a multitude of reasons, the EC's position on Brexit and .eu domains is terrible. It's the kind of approach that would undermine confidence not only in .eu domain names but in domains in general.

The original date for the UK's withdrawal from the EU is March 29th; however, the current status of Brexit is very much in flux.

Earlier this morning, EURid contacted all .eu accredited registrars to inform them that, pending clarity on the UK's situation, their Brexit plans were being put on hold.

"Due to ongoing uncertainties over the United Kingdom's withdrawal from the European Union, EURid has placed on hold any plan regarding domain names registered to individuals and undertakings located in the United Kingdom and Gibraltar. Those plans were set out in European Commission's notice to .eu stakeholders, published on 28 March 2018 (see Notice to stakeholders: withdrawal of the United Kingdom and EU rules on .eu domain names, 27 March 2018).

As soon as we receive official updates from the European Commission on how to proceed, we will amend the plans on the Brexit dedicated web page and communicate with affected stakeholders as appropriate, and as instructed by the Commission."

Hopefully, in the interim period, the EC will finally realize how bad their current approach to this situation is.

You can check the current status of EURid's Brexit plans here.

Written by Michele Neylon, MD of Blacknight Solutions

More under: Domain Names, Internet Governance, Policy & Regulation

SpaceX's Starlink Internet Service Will Target End Users on Day One

Wed, 2019-03-20 14:56

Internet users per 100 inhabitants (source)

It sounds like SpaceX is planning on offering broadband service to end users who will order the service online and set up their own ground stations.

Starting with Teledesic in 1990, would-be Low-Earth Orbit (LEO) satellite constellations have been justified to the FCC, other regulators, and the public as a means of closing the digital divide. Teledesic's goal was "providing affordable access to advanced network connections to all those parts of the world that will never get such advanced capabilities through existing technologies." Today's LEO satellite companies make the same claim, but Telesat, OneWeb and LeoSat seem to be targeting commercial markets first.

As far as I know, Telesat is the first LEO provider to sign up a customer, and that customer is OmniAccess. Headquartered in Palma de Mallorca, the "yachting capital of the Mediterranean," OmniAccess provides broadband and IPTV service to over 350 vessels — superyachts, boutique cruise lines, and prestigious research & exploration organizations. The Telesat agreement provides OmniAccess with limited exclusivity to serve the "superyacht" market — in other words, they will be connecting the yachts of Russian oligarchs.

In 2003, OneWeb founder Greg Wyler worked on 3G mobile and fiber access to homes and schools in Rwanda. In 2007, he founded O3b Networks, a medium-Earth-orbit satellite Internet service provider, to connect the "other three billion" unconnected people. OneWeb was founded in 2012 with goals of 1 billion subscribers by 2025 and the elimination of the global digital divide by 2027. Their first marketing-oriented move was to partner with Airbus, Delta, Sprint, and Airtel, establishing the Seamless Air Alliance to develop standards enabling them to provide passengers seamless, in-cabin connectivity. It looks like their first customers will be airline passengers, ships at sea and mobile phone companies.

LeoSat has focused on enterprise and government customers from the start.

Elon Musk followed a similar strategy in bootstrapping his electric car company, Tesla. He started with expensive, high-end vehicles and followed several years later with the lower-priced Models 3 and Y, but it looks like the SpaceX Starlink Internet service will focus on end users from the start.

SpaceX's sister company, SpaceX Services, filed an FCC application for "a blanket license authorizing operation of up to 1,000,000 earth stations that end-user customers will utilize to communicate with SpaceX's LEO constellation." Those end users will be individuals, libraries, schools, etc. "throughout the contiguous United States, Alaska, Hawaii, Puerto Rico, and the U.S. Virgin Islands." They assert that this license will "enable SpaceX to bring high-speed, reliable, and affordable broadband service to consumers in the United States and around the world, including areas underserved or currently unserved by existing networks." (Note that these initial satellites would have the capacity to serve Cuba and other Caribbean nations.)

Their user terminals will "employ advanced phased-array beam-forming and digital processing technologies to make highly efficient use of Ku-band spectrum resources by supporting highly directive steered antenna beams that track the system's LEO satellites." Since SpaceX plans to begin launching operational satellites in 2019 and they are already conducting successful satellite-ground communication tests, they must be confident that they can mass produce such antennas at a low cost. (Note that a former SpaceX executive has recently joined Mynaric, a German laser communications startup focused on satellite and airborne platforms. Perhaps SpaceX and Mynaric will collaborate on the antennas).

Ground stations that can track fast-moving satellites, switching seamlessly from one to another when they go out of view, will need to be easy for end users to install.

Click for more on the SpaceX, OneWeb, Telesat and LeoSat LEO Internet-service projects.

Written by Larry Press, Professor of Information Systems at California State University

More under: Access Providers, Broadband, Wireless

India’s Draft National E-Commerce Policy: A Bollywood Drama in Four Acts

Mon, 2019-03-18 23:19

This article was co-authored with Sam Lanfranco, Professor Emeritus & Senior Scholar, York University.

India's recently published Draft National e-Commerce Policy, prepared by the Indian Commerce Ministry think-tank, can be read like the script of a four-act Bollywood drama.

Act 1: A Match Made in Heaven

They were the dream couple: Princess India and Prince IT.

She was full of cultural richness and diversity, with beauty, mystique and natural resources. She also had a dark side: she harbored the world's largest number of impoverished people, with little infrastructure and sparse economic prospects.

He was young, with enormous potential. One day he would conquer all. He arrived like the sun rising after a long cold night. He had a solution to every problem. He would bring equality of access to a nearly unlimited economic playing field.

She had the people and the land he needed. He would put them on the path to prosperity. Her children would become fat and content.

She was a willing lover, giving him all he asked. She sent her children to school to learn his ways. Programs like the Digital India "Power to Empower" initiative, launched by Prime Minister Narendra Modi midyear 2015, were implemented to strengthen his hold over the land. The dream would become a reality.

The Princess had good reason to believe in her choice. Her Prince, shining and full of promise, made significant progress on some fronts. 1.23 billion of her 1.3 billion children carried Aadhaar digital biometric identity cards. Nearly all (1.21 billion) had mobile phones, almost half of them smartphones connected to the Internet. Her country became the world's fastest-growing major economy and its fifth-largest. By 2017, exported IT services garnered $154 billion in revenue and were the fastest-growing part of the economy and the largest private-sector employer. Technology start-ups mushroomed to 3,100 in 2018–19.

Their big wedding dance scene, insanely happy, had predicted this bright future!

Act 2: Disenchantment

Even matches made in heaven can fade with the passage of time. The Princess traveled her land, and something seemed not quite right. The resources that had gone into the Prince's IT efforts had produced 51 percent growth in e-commerce but captured only about 3 percent of the national retail market. Some of her children had become much richer, but they were mostly the few who had been rich before; most of her children, those supposed to prosper, were as poor as ever. What had gone wrong, she asked her people. They were quick to answer: Mother of us all who cares, we know that you and the Prince wanted to help, but the Prince has many distant relatives who have bad intentions. When we started to use the technologies, they came from abroad and destroyed our businesses. They used investor money to undercut every effort we made until we were gone. They took control of marketplaces and dictated prices that made them unimagined profits, which they took abroad to their homes.

That was not enough for them. The "price" of their technology was access to our personal data. They mined and monetized our data for their profits. The Prince and his relatives have taken our money and our souls. We have gotten little in return.

When the Princess heard this, she became furious and turned into the Hindu Goddess Kali, in her earliest guise, as a destroyer of evil forces. She was clever and vicious, but to plot her revenge she turned to those even more dangerous and fiendish than she: her bureaucrats. She asked: What can I do to make my people prosper and punish the wrongdoers?

Her bureaucrats went into their ministry. They thought and thought, and talked and talked. They came forth with a policy egg they named the "Draft National e-Commerce Policy," a policy egg pregnant with bureaucratic self-interest.

Enter the slow waltz dance of the bureaucrats, to seduce the goddess Kali.

Act 3: The Reckoning

And the bureaucrats said: Your people are right. The relatives of the Prince are greedy, unscrupulous robber barons. It is the people's data they take, and it makes them rich. They monetize data into marketable products. They monetize and sell data that is not their own. Like drug addicts, they are hooked and totally dependent on data. Day and night, they think about nothing other than how to get more data, and how to turn it into more marketable products.

They profess to collect data in the name of development, prosperity, and innovation. They love India not for what they give it, but for what they can get as India's people become one of the world's biggest sources of monetized data. The more data they control, the more they can monopolize markets and innovation. They tell the Princess that this will obstruct her children's access to innovation and economic opportunity. This will negate Prince IT's promise of equal access to nearly unfettered opportunity. Oligopolies controlled by the few will never permit access to equitable prosperity!

The Princess/Kali is reminded that data in and of itself is not a bad thing. Processed big data will be the lifeblood of future socio-economic activity. The importance of data will grow as Artificial Intelligence (AI) and the Internet of Things (IoT) populate the data cloud with clusters of data asteroids for a myriad of innovative uses.

This causes Princess India to shed Kali and return with three questions. What are one's rights with regard to the uses of one's individual data? What are the proper uses for data in the cloud? How is this done to promote equitable prosperity? Princess India begins to glimpse the light in the data cloud, and the promise of "India's Data for India's Development." Good policies will bring advantages and opportunities to all. Prince IT's marital promise will come to pass.

Princess India, convinced she would get her way, returns to full benevolent human form and asks: My wise servants, what shall I do? They reply: To control data, you need to establish who owns it, and the rights and obligations of ownership. Your subjects must know that only they own the rights to their data and that the data cannot be used without their consent. Even anonymous data need policies to regulate the use and protect rights under the law.

The Princess is told: don't be alarmed by such control in the hands of your subjects. As the world's largest democracy, India will become the world's largest digital democracy. Indian data and all that comes from it belongs to India and its citizens. The sovereign right to this data cannot be assigned to strangers, even if they are your husband's distant relatives.

Entities that collect or process data deemed private under Indian law, even if stored abroad, would be required to adhere to Indian data policies. India will be like an island with data sniffer dogs at every port. Transgressions will be caught and prosecuted to the full extent of the law.

Cross-border data flow regulations will ensure that Indian data generates value for India. Negotiated access will adhere to Indian data use policies. India's governance structures will do what is necessary under its laws and regulations to ensure that it will fulfill its holy duty to you, Princess India, to generate equitable benefits, including appropriate taxes and revenues to finance governance.

The bureaucrats further tell the Princess that proper policies and data regulations will benefit India in many ways:

  • Protecting the privacy and data ownership rights of citizens
  • Enabling proper data access for start-ups and Indian data use innovation
  • Promoting the domestic use of data for Indian economic gain
  • Controlling and pricing access to government data for legitimate uses
  • Requiring e-commerce entities operating in India to be registered in India
  • Having taxing and duty structures that level the economic playing field
  • Ensuring that taxes, duties and economic gains from India data stay in India
  • Enacting data use policies that protect national security and law and order
  • Regulating intellectual property to fight counterfeits and protect brands

The bureaucrats propose a robust administrative, regulatory and legal structure, using a multi-pronged, six-issue approach dealing with data assembly, regulatory issues, infrastructure development, e-commerce marketplaces, digital economy development, and e-commerce export promotion.

Collecting and analyzing data is also a strategic national task. Data-focused agencies need to be established or strengthened to support evidence-based data policy and to track the economy through a digital "data lens."

Issues like compulsory licensing of intellectual property and data will require extensive research and review. Such practices can run afoul of principles of data privacy and data ownership.

India's position on policies like the World Trade Organization (WTO) effort to permanently exempt electronic transmissions from duties will require extensive research and review. Such exemptions may unfairly benefit companies in rich developed countries while preventing poorer countries like India from extracting taxes on cross-border trade. This is particularly problematic when cross-border digital trade can consist of digital objects of considerable value, such as 3D printer production algorithms, AI algorithms, and the like.

The complex relationship between cross border source code flows, the terms of technology transfer, and the impacts on local industry and national security again require extensive research and review. This calls for appropriate national research funding and digital/data focused authorities with a remit to explore consequences and policies in these areas.

There are multiple emergent foreign investments and cross-border trade models. Some reflect a presence, with local supply lines, in a national marketplace. Others reflect a cross-border inventory-based model of sale and distribution. National policy has to balance foreign engagement in the Indian 'marketplace', investment restrictions, and cross-border inventory-based commerce.

Act 4: Princess India's dream: Dance of the Data Ministry. (heavy stomp!)

Content for the moment, the Princess falls into a slumber and is soon dreaming. In her dream, she sees an enormous mountain range made up of data, from which has sprung a mighty river of rupees that flows to nourish the country. But soon the river begins to dry until there is only a trickle, and the land turns to dust.

"What happened?" asked the Princess as she awoke. The land answered: "Your bureaucrats did exactly what they told you. They built an enormous, all-knowing and powerful ministry of data that controlled all data. First, they took the data to control the marketplace, but instead of creating opportunities for all, they used it to create opportunities for their own benefit, and to generate taxes and revenues. They did not care about opportunities and equitable prosperity. They forgot the people. They gave the data to the IT Prince's relatives, who had learned how best to work with bureaucratic interests within the government."

The ministry was charged with empowering the citizens, protecting their rights and maintaining their dignity. But gradually the ministry claimed those rights, imposing data governance from above and curtailing digital democracy from below.

Soon the ministry wielded more power, using artificial intelligence algorithms to extend control across all aspects of life in the land. The bureaucrats argued that AI made better, cheaper and faster decisions than citizens could with traditional governance processes. As the machines demanded more data, and the bureaucracy was given more control, the results left the poor even more marginalized. Left with little access to Prince IT's digital opportunities, and unable to sustain themselves on what little data they retained, despair permeated the land.

The Princess wept and asked: What shall I do? The country answered again: Do not leave control in the hands of the bureaucrats. Let them learn. Let us all learn that development and sustainability do not come from more data alone, but from its selective and wise uses. Help us understand that e-commerce does not mean more data manipulation so that customers buy more, or buy what others want them to buy. Put data first in the service of needs, not wants.

Let us rebuild our social fabric, where sustainable human relationships are based on trust and respect. Sustainable commerce is a beneficial relationship between humans, not a crass want-generation calculation.

Help us remember that sustainable and equitable business models are based on trust, dignity, and respect. Anything less makes a mockery of our human experience and the lessons learned. The marriage to the IT Prince should build on the shoulders of that historical experience, and not squander the Prince's promise in the pursuit of hegemonic market or political power.

With that, the Princess, fully awake, looked at the mess the bureaucrats had created. She called them together and said only two words: Think again! She continued: We are the world's biggest democracy, and that should extend to the digital sphere and be in the service of all. How do we get there from here?


Like all Bollywood dramas, this one will end with a big dance scene. Will it be an elite affair, a waltz of the oligarchs, or an engaged dance of the people? The Princess is looking to her people to decide which it will be.

Written by Klaus Stoll, Digital Citizen

More under: Cloud Computing, Cybersecurity, Intellectual Property, Internet Governance, Internet of Things, Policy & Regulation, Privacy

A Short History of DNS Over HTTP (So Far)

Mon, 2019-03-18 19:21

The IETF is in the midst of a vigorous debate about DNS over HTTP or DNS over HTTPS, abbreviated as DoH. How did we get there, and where do we go from here?

(This is somewhat simplified, but I think the essential chronology is right.)

Javascript code running in a web browser can't do DNS lookups directly; it can only trigger them implicitly, by fetching a URL whose domain the browser resolves via an A or AAAA record.

It is my recollection that the initial impetus for DoH was to let Javascript do other kinds of DNS lookups, such as SRV or URI or NAPTR records that indirectly refer to URLs the Javascript can fetch, or TXT records for various kinds of security applications. (Publish a TXT record with a given string to prove you own a domain, for example.) The design of DoH is quite simple and well suited for this. The application takes the literal bits of the DNS request and sends them as an HTTP query to a web server, in this case probably the same one the Javascript code came from. That server does the DNS query and returns the literal bits of the DNS answer as the HTTP response. This usage was and remains largely uncontroversial.
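
The "literal bits" design is easy to see in code. Below is a minimal sketch, under the RFC 8484 framing, of how a client forms a DoH GET request: build an ordinary DNS wire-format query, base64url-encode it, and put it in the URL's dns parameter. The server name dns.example and the query name are placeholders, and no network call is made here:

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Minimal DNS wire-format query (RFC 1035): header plus one question."""
    # ID 0, RD flag set, one question, no answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # qtype, class IN
    return header + question

def doh_get_url(server: str, name: str, qtype: int = 1) -> str:
    """RFC 8484 GET form: the raw query travels base64url-encoded,
    padding stripped, in the "dns" parameter of an ordinary HTTPS request."""
    wire = build_dns_query(name, qtype)
    b64 = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"https://{server}/dns-query?dns={b64}"

# The response body would be the literal bits of the DNS answer,
# delivered with Content-Type application/dns-message.
print(doh_get_url("dns.example", "www.example.com"))
```

An intermediary sees only an HTTPS request to a web server; nothing on the wire marks it as DNS, which is exactly the property the rest of the article turns on.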

About the same time someone observed that if the DoH requests used HTTPS rather than HTTP to wrap DNS requests, the same HTTPS security that prevents intermediate systems from snooping on web requests and responses would prevent snooping on DoH. This was an easy upgrade since browsers and web servers already know how to do HTTPS, so why not? Since DoH prevents snooping on the DNS requests, a browser could use it for all of its DNS requests to protect the A and AAAA requests as well, and send the requests to any DoH server they want, not just one provided by the local network.

This is where things get hairy. If the goal were just to prevent snooping, there is a service called DNS over TLS, or DoT, which uses the same security layer that HTTPS uses, but without HTTP. A key difference is that even though snooping systems can't tell what's inside either a DoT or a DoH transaction, they can tell that DoT is DNS (it runs on its own well-known port, 853), while there's no way to tell DoH from any other web request, unless it happens to be sent to a server that is known to do only DoH.

Mozilla did a small-scale experiment where the DNS requests for some of their beta users went to Cloudflare's DNS service, with an offhand comment that maybe they'd do it more widely later.

On the one hand, some people believe that the DNS service provided by their network censors material, either by government mandate or for the ISP's own commercial purposes. If they use DoH, they can see stuff without being censored.

On the other hand, some people believe that the DNS service blocks access to harmful material, ranging from malware control hosts to intrusive ad networks (mine blocks those so my users see a blue box rather than the ad) to child pornography. If they use DoH, they can see stuff that they would rather not have seen. This is doubly true when the thing making the request is not a person, but malware secretly running on a user's computer or phone, or an insecure IoT device.

The problem is that both of those are true, and there is a complete lack of agreement about which is more important, and even which is more common. While it is easy for a network to block traffic to off-network DNS or DoT servers, to make its users use its own DNS or DoT servers, it is much harder to block traffic to DoH servers, at least without blocking traffic to a lot of web servers, too. This puts network operators in a tough spot, particularly ones that are required to block some material (notably child pornography), business networks that want to limit uses of the network unrelated to the business, or networks that just want to keep malware and broken IoT devices under some control.

At this point, the two sides are largely talking past each other, and I can't predict how, if at all, the situation will be resolved.

Written by John Levine, Author, Consultant & Speaker

More under: Cybersecurity, DNS, Internet Protocol

WIPO Reports Cybersquatting Cases Grew by 12% Reaching New Records in 2018

Mon, 2019-03-18 18:58

According to a report from the World Intellectual Property Organization (WIPO), trademark owners filed a record 3,447 cases under the Uniform Domain Name Dispute Resolution Policy (UDRP) with WIPO’s Arbitration and Mediation Center in 2018.

"WIPO’s 2018 caseload covered 5,655 domain names in total." Disputes involving domain names registered in new generic Top-Level Domains (gTLDs) accounted for some 13% of the total, with disputes most commonly found in .ONLINE, .LIFE, and .APP. Representing 73% of the gTLD caseload, .COM demonstrated the continuing popularity of the legacy gTLDs.

The top three sectors of complainant activity were banking and finance (12% of all cases), biotechnology and pharmaceuticals (11%), and Internet and IT (11%).

More under: Domain Management, Domain Names, Intellectual Property, UDRP

ICANN Terminates AlpNames

Fri, 2019-03-15 22:23

AlpNames has been sent a notice of termination by ICANN. Unlike many termination notices, which specify a future date, the one AlpNames was sent took effect immediately.

As reported in multiple fora over the last few days, AlpNames had gone offline and, at the time of writing, still is. They've also become unresponsive. It's on the basis of this that ICANN decided to terminate their contract straight away.

What this means is that AlpNames has lost its ICANN "license" to sell domains. The existing domains will have to be moved to another registrar, though it's unclear who will take over the domain portfolio. The registrar's back-office operations are with LogicBoxes, so it's fairly safe to assume that the data has been escrowed and will be available to the new registrar.

So what happened?

The Gibraltar-based registrar has been sent multiple notices by ICANN since the beginning of March but did not respond. It also owes ICANN fees.

As a registrar, their track record with abuse was far from stellar. Spamhaus has been listing them as one of the worst registrars for DNS abuse on the planet for a long time. ICANN's report on "competition, consumer trust and consumer choice" calls out AlpNames:

Alpnames Ltd., based in Gibraltar, was associated with a high volume of abuse from the .science and .top domain names. The Study notes that this registrar used price promotions that offered domain name registrations for USD $1 or sometimes even free. Moreover, Alpnames permitted registrants to randomly generate and register 2,000 domain names in 27 new gTLDs in a single registration process. Registering domain names in bulk using domain generation algorithms are commonly associated with cybercrime. However, there is currently no contractual prohibition or safeguard against the bulk registration of domains.

Historically AlpNames was linked to Famous Four Media, which changed ownership in the last few months.

AlpNames has about 700 thousand names in new gTLDs. I'm not sure how big they were in legacy gTLDs or if there were any ccTLD domains under management.

Written by Michele Neylon, MD of Blacknight Solutions

More under: Domain Management, Domain Names, ICANN

Portrait of a Single-Character Domain Name

Wed, 2019-03-13 18:57

Irregularities surrounding the O.COM RSEP reveal coloring outside the lines.

Let's take some crayons and draw a picture of the current state of affairs regarding single-character domain names (SCDNs), and specifically O.COM.

During the public comment period for the current O.COM RSEP, ICANN's own Intellectual Property and Business constituencies recommended implementation of rights protections mechanisms (RPMs) for intellectual property, including Sunrise and Priority Access periods. It is curious that such hard-won protections are being so easily set aside by Verisign and ICANN.

No matter, however, because this isn't just about trademarks. This is also a simple issue of internationalized domain names (IDNs). We can forego the finer points of trademark law, because Verisign has, since at least July 2013, been unequivocal in the commitments it has made numerous times in correspondence with ICANN, in response to questions raised by financial analysts during quarterly earnings calls, and which can still be found — in living color — on their website blog today:

Use Case No. 2: John Doe does not have a registration for an second-level domain name. John Doe registers a second-level domain name in our Thai transliteration of .com but in no other TLD. That second-level domain name will be unavailable in all other transliterations of .com IDN TLDs and in the .com registry unless and until John Doe (and only John Doe) registers it in another .com IDN TLD or in the .com registry.

The blog goes on to helpfully explain that VeriSign's objective with this strategy is to avoid cost and confusion, and that it will benefit the community by creating "a ubiquitous user experience." Ubiquity appears to have a different meaning here.

Just for fun, let's apply Use Case No. 2 to the facts at hand regarding the single-character "O", replacing John Doe with First Place Internet and substituting Hebrew for Thai.

First Place Internet does not have a registration for a second-level domain name. First Place Internet registers O in the Hebrew transliteration of .com but in no other TLD. O will be unavailable in all other transliterations of .com IDN TLDs and in the .com registry unless and until First Place Internet (and only First Place Internet) registers it in another .com IDN TLD or in the .com registry.

Since it seems that there might be a number of different ways to look at this predicament, let me break it down, super-simple style:

Want this to be a trademark issue? Then First Place Internet owns USPTO Registration Number 1102618 which is active and, having been registered in 1978, is older than I am.

Want this to be an IDN.IDN issue? Then, at precisely 2018-07-31 T14:29:51Z, employing its validated Trademark Clearinghouse SMD file for its U.S. Trademark # 1102618, First Place successfully registered VeriSign's Hebrew o.קום (o.xn--9dbq2a) IDN domain name in VeriSign's Sunrise Period.

Want this to be about an open and transparent DNS? Read VeriSign's words and then get acquainted with the United States Federal Trade Commission and the U.S. Securities and Exchange Commission.

We have rules in America that intend to ensure a level playing field — that seek to even things out between a rich, powerful and dominant industry player and its competitors and consumers. First among these is something my grandfather taught me when I was a little boy (still younger than USPTO Reg. No. 1102618): a person lives up to their commitments. Years of mandatory annual compliance training provided by the publicly-traded corporations that I've had the privilege to work for reinforces the significance of commitments made publicly in correspondence to a so-called regulator, to investors and analysts during quarterly earnings calls, and to an unsuspecting public in policy stated on the corporate website.

Over the years, I've learned — sometimes the hard way — that this rule means having to do something I didn't want to when I misspoke and then had to make it right.

If this auction proceeds and Verisign is permitted to color outside the lines by welching on commitments it has made and that can still be found on their website today, then multi-stakeholder governance will have failed — not to mention any sense of fair play — and the image of an open and equitable DNS dies by the auctioneer's gavel.

Maybe it's appropriate and relevant to ask: Is Verisign's trademark — USPTO Registration Number 3060761 for "It's a Trust Thing" — dead from discontinued use?

Written by Greg Thomas, Managing Director of The Viking Group LLC

More under: Domain Names, ICANN, Intellectual Property, Internet Governance, Law, Policy & Regulation, Registry Services, New TLDs

ICANN Chair Elections Test Its Institutional Integrity

Wed, 2019-03-13 17:05

The ICANN Board will soon be considering candidates for election to the positions of ICANN Chair and Vice Chair, which compels me to remind both the Board and the ICANN community that one of the members pursuing the Chairmanship is the subject of an ongoing Australian Freedom of Information Act request, initiated over the irregularities that brought about this individual's dismissal from the .au Domain Administration. In pursuit of bringing the facts of the matter to light for all concerned, following receipt of the initial declination to release the requested information, on 7 March 2019 the Office of the Australian Information Commissioner "...concluded that aspects of the Department's decision to refuse access to the documents requested are incorrect. Consequently, [the Information Commissioner has] invited the Department to issue a revised decision pursuant to [section] 55G of the FOI Act or final submissions if it disagrees with [the Information Commissioner's] view by 14 March 2019."

Coupled with this notice from the Office of the Australian Information Commissioner is the fact that there is also an on-going police investigation into this matter, which in fact was the catalyst for the initiation of the Freedom of Information Act request in the first place.

Recently, I brought to the ICANN Board's attention the fact that the Board Governance Chair had been derelict in his duties, i.e., vetting all Board members through background checks, in the same manner as all Nominating Committee Board appointees, to ensure that the ICANN Board meets basic governance standards. To Chairman Chalaby's credit, the Board took swift action to ensure that those Board members who had not been vetted were indeed properly vetted within the very week of that ICANN meeting.

In the same way — to protect the institution of ICANN — to ensure that ICANN is kept separate and apart from what may or may not prove to be a serious, avoidable, self-inflicted wound for an institution that so many have tirelessly dedicated countless hours and effort to establish — I call on the Chair and ICANN Board to ensure that no candidate who may be standing under a cloud of any type be considered for the highest position and authority within ICANN.

As we move forward to when the ICANN Board will vote on the next Board Chair and Vice Chair, I urge the members of the Board to respect the importance of having the utmost integrity within itself, and to respect the fact that the impact of any shadow — no matter how large or small — will impact the larger volunteer community that is ICANN.

Thus, for all candidates for Vice Chair and Chair, I ask that the Board ensure such individuals are held to the highest standards of integrity; anything less is unacceptable if ICANN is to be a true steward of the Internet. In today's world, perceptions matter.

When one is a leader at the Board level within ICANN, it is not enough that the ICANN community place its faith and trust in its leaders; that trust must be validated. Any deleterious halo effect reflects negatively on all of the hundreds of volunteers, and ultimately on the organization as a whole.

So I caution the Board that a mistake made here will dramatically harm the global perception of our (ICANN's) institutional integrity.

Written by Ronald N. Andruff, President & CEO, dotSport LLC

More under: ICANN, Internet Governance, Policy & Regulation

4G Mobile Trials Have Begun in Cuba - What Is Their 3/4/5G Strategy?

Wed, 2019-03-13 03:01

Early 4G speed test (Source)

During the first month of 3G mobile service, Cuban Internet use increased substantially. At the end of January, ETECSA had 5.4 million mobile users, 35% of whom use the Internet, and it is adding 5,000 new data customers per day. According to Eliecer Samada, head of ETECSA's wireless access group, the company is now at 160% of the expected capacity.

As a result of that unexpected demand and damage due to the tornado that hit Havana in January, both data and phone service have been slow and unreliable.

To alleviate these problems, ETECSA announced last week that they were accelerating 4G mobile trials along the north coast from Mariel through Havana to Varadero. That is a distance of about 100 miles with 44 4G base stations. The trial will be open to about 10,000 high-volume users who have 4G-compatible phones and have been using at least 2.5 GB of 3G mobile data per month in that area. (ETECSA reports that 7% of 3G network users account for 52% of the traffic).

Andy García ran a speed test using his neighbor's account and recorded a download speed of 5.52 Mbps, an upload speed of 1.18 Mbps and a latency of 24.17 ms, but a few days later he observed slower rates. Armando Camacho recently reported speeds of 3.2 Mbps download and 5.8 Mbps upload, and he has posted the locations of 21 base stations in Havana. We can't draw conclusions about post-trial speeds from a few tests, but they will surely be faster than current 3G speeds and considerably slower than the US LTE speeds reported last month by Tom's Guide.

Current US 4G speeds (Source)

ETECSA expects this trial to divert enough traffic to improve 3G and voice service. If that is the case, it seems the current congestion is at the base stations rather than in backhaul from them. Regardless, I expect that backhaul capacity from faster 4G base stations will constrain 4G rollout in this and other regions.

I don't know what ETECSA's mobile deployment strategy is — what the balance will be between 3 and 4G capacity and pricing — but I have suggested that they will gain trained, demanding users if they focus on bringing the cost down as quickly as possible. That would argue for cheap or even free 3G service.

The average price of 1 GB of mobile data in Cuba is higher than that in 184 of 230 nations. (The price in ten of the 28 Caribbean nations is higher than in Cuba and India is the lowest-price nation). The source does not indicate the speeds of these services and it would be interesting to see them normalized for per-capita income as an indication of affordability, but there seems to be room for price cutting in Cuba.

Regardless of the deployment and pricing of 3 and 4G mobile Internet access in Cuba, both should be regarded as stopgap measures and plans should be made for 5G deployment.

Update Mar 21, 2019

ETECSA initially restricted 4G access to those with 2.5 GB per month data plans. 14Ymedio reports that they have now opened 4G up to those with 1.5 GB per month plans in spite of having temporarily run out of the USIM cards that are required for 4G access. (USIM cards obsoleted SIM cards, which were used in 2G phones and could be used, with the loss of some features, in 3G phones).

The article also states that they are adding 50,000 new mobile accounts per month, as opposed to the 5,000 per day reported above. They say that 40% of those users generate some sort of data traffic — for Nauta email, MMS messages or Web browsing.

Written by Larry Press, Professor of Information Systems at California State University

More under: Access Providers, Mobile Internet, Wireless

ICANN Postpones Amazon Domain Decision, Crusade Continues Between Amazon Nations and Amazon Inc.

Tue, 2019-03-12 23:22

ICANN on Monday extended the deadline to April for Amazon basin nations to reach a deal with the tech giant Amazon Inc in their seven-year battle over the .amazon domain name. Reuters reports: "[ICANN] meeting this week in Kobe, Japan, decided to put off a decision that was expected to favor use of the domain by the world's largest online retailer. Amazon basin countries Brazil, Bolivia, Peru, Ecuador, Colombia, Venezuela, Guyana and Suriname have fought the domain request since it was made in 2012, arguing that the name refers to their geographic region and thus belongs to them."

Amazon nations remain "firmly opposed" to Amazon Inc gaining exclusive control of the domain name, says Brazil's foreign ministry, which adds: "Brazil and its seven Amazon partners will continue to negotiate in good faith to try to reach a 'mutually acceptable solution' to the domain dispute."

Supporting .Amazon domain strengthens global internet cooperation, says Christian Dawson of i2Coalition: "Though we should all be sympathetic to the position of the governments of Brazil and Peru, we should also be impressed with the extensive efforts that Amazon has undertaken in order to assuage as many of those concerns as possible. They have made formal signed commitments to not use the TLDs in a confusing manner. They have promised to support future gTLD applications to represent the region using the geographic terms of the regions, including .AMAZONIA, .AMAZONICA or .AMAZONAS. They also offered to reserve for the relevant governments certain domain names that could cause confusion or touch on national sensitivities."

More under: Domain Names, ICANN, Internet Governance, New TLDs

The Pace of Domain Growth Has Slowed Considerably, Reports CENTR

Tue, 2019-03-12 19:01

The global Top-Level Domain market is currently estimated at 348 million domains across all recorded TLDs. Although the overall domain count has continued to grow in all regions and types, the Council of European National Top-Level Domain Registries (CENTR) reports that the pace of growth has slowed considerably. "As of January 2019, it has seen its lowest recorded year-on-year rate of 3.7%."

"While domain count and growth are not the only measurement of market health, they can provide an indication of general uptake and interest in domain names. At present, the indication is a continued slow-down. This may be explained by multiple factors, such as a market saturation, alternative online presence choices (e.g. social media) or even a concentration of market share to fewer TLDs."

More under: Domain Names, New TLDs

How to Track Online Malevolent Identities in the Act

Tue, 2019-03-12 17:52

Want to be a cybersleuth and track down hackers?

It may sound ambitious considering that malevolent entities are extremely clever, and tracing them requires certain skills that may not be easy to build for the typical computer user.

But then again, the best defense is offense. And learning the basics of sniffing out cybercriminals is no longer just a nice-to-have; it has become essential for survival on the Web. So where can you begin?

Place Honeypots

Hackers take great care to cover their tracks, so it's important to catch them with their hand in the cookie jar. You can do so by setting up bait — called a honeypot — to lure them out. It can take the form of a spammable domain or an easily hackable virtual machine that appears to be a legitimate target.

Once attacked, honeypots help you observe what intruders do to the system, know the tricks that they employ to infect devices, and subsequently find ways to counter them. Such forensic evidence enables law enforcers to track unsolicited access and then locate and catch perpetrators.
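The idea above can be sketched in a few lines. This is a minimal, low-interaction TCP honeypot, not a hardened deployment: the port number and log format are my own illustrative choices, and a real honeypot would emulate a service and isolate itself from production networks.

```python
import socket
import datetime

def record_attempt(peer_ip, peer_port, first_bytes):
    """Build a forensic log entry for one connection attempt."""
    return {
        "time": datetime.datetime.utcnow().isoformat() + "Z",
        "ip": peer_ip,
        "port": peer_port,
        # Hex preview of what the intruder sent (often a protocol banner).
        "preview": first_bytes[:64].hex(),
    }

def run_honeypot(bind_addr="0.0.0.0", bind_port=2222):
    """Listen on a port nothing legitimate uses and log whoever knocks."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((bind_addr, bind_port))
        srv.listen()
        while True:
            conn, (ip, port) = srv.accept()
            with conn:
                conn.settimeout(5.0)
                try:
                    data = conn.recv(1024)
                except socket.timeout:
                    data = b""
                print(record_attempt(ip, port, data))

# run_honeypot()  # blocks forever; run only on a sacrificial host
```

Any connection to that port is suspicious by definition, which is what makes even a sketch like this useful: every log entry is a lead.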

Reverse-Engineering Malware

Let's say that despite all the precautions, malware still succeeded in infiltrating your company's system. Instead of losing sleep, you can use the infection to understand how the malicious program operates and what it's been engineered to do, such as what vulnerabilities it's been designed to exploit.

This process is called reverse engineering. It involves disassembling the program to be able to analyze and retrieve valuable information on how it is used or when it was created. It is extremely helpful in finding substantial evidence such as encryption keys or other digital footprints that can lead investigators to the cybercriminals.
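Full disassembly needs dedicated tooling, but a common first pass is much simpler: scan the binary for embedded printable strings (in the spirit of the Unix `strings` utility) and filter for network indicators. The length threshold and the "interesting" heuristic below are illustrative assumptions.

```python
import re

# Runs of 6 or more printable ASCII bytes often survive in malware binaries.
PRINTABLE = re.compile(rb"[ -~]{6,}")

def extract_strings(blob):
    """Pull human-readable fragments (URLs, hostnames, keys) out of a binary."""
    return [m.group().decode("ascii") for m in PRINTABLE.finditer(blob)]

def interesting(strings_found):
    """Keep only fragments that look like network indicators of compromise."""
    hint = re.compile(r"https?://|[a-z0-9.-]+\.(com|net|org|ru|cn)", re.I)
    return [s for s in strings_found if hint.search(s)]

# Example on a fabricated byte blob standing in for a malware sample:
sample = b"\x00MZ\x90http://evil-c2.example.com/gate.php\x00junk12"
print(interesting(extract_strings(sample)))
```

A hard-coded command-and-control URL recovered this way is exactly the kind of lead that investigators can follow back to hosting providers and registrants.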

Leverage WHOIS Information

When a complaint is received over a dangerous website, the first step in the investigation is to identify the operator of the suspect domain.

This can be done by querying the domain name registry where the site has been registered. A whois database download service, for example, enables users to retrieve the WHOIS data that contains the name, location, and contact details of domain registrants. With this information in hand, security teams can report the matter to law enforcement agents who can then track down malicious operators and apprehend them on the spot.
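Once the raw WHOIS record is retrieved, the relevant fields can be picked out programmatically. A minimal sketch, assuming the common "Key: Value" line layout (field names vary between registries, so a production parser would need per-registry handling):

```python
def parse_whois(raw_text):
    """Extract key 'Field: value' pairs from a raw WHOIS record."""
    wanted = {"registrant name", "registrant organization",
              "registrant country", "registrar", "creation date"}
    record = {}
    for line in raw_text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key = key.strip().lower()
        if key in wanted and value.strip():
            record[key] = value.strip()
    return record
```

Fed the text of a WHOIS lookup for a suspect domain, this yields a small dossier — registrant, organization, country, registration date — ready to hand to law enforcement.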

Inspect Files' Metadata

Once in possession of files and devices from a suspicious entity, you can analyze the evidence that is saved in them and discover crucial details that can be followed back to the source.

Word, Excel, or PowerPoint files, for example, contain relevant information, called metadata, that can blow a hacker's cover. It includes the name of the person who created the file, the organization, the computer, and the local hard drive or network server where the document was saved.

It is also important to analyze the grammar used in comments that are embedded in the software code. Socio-cultural references, nicknames, language, and even the use of emojis — all can reveal clues on the nationalities of the criminals or their geographical location.

Go On with Tracerouting

One of the best ways to catch perpetrators is by identifying their IP addresses. However, they usually hide these IPs by spoofing or by bouncing communications from different locations. Luckily, no matter how shrewd and clever these individuals may be, malicious addresses can still be identified through an approach called tracerouting.

The technique works by showing the hostnames of all the devices along the route between your computer and a target machine. More often than not, the hostname of the last machine belongs to the hacker's Internet Service Provider. With the ISP known, investigators can then pinpoint the geographic areas where the culprit is probably situated.
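Pulling the last resolvable hop out of a traceroute run can be automated. The sketch below assumes the classic BSD/Linux `traceroute` text format ("hop hostname (ip) rtt"); hostnames and addresses in the example are fabricated.

```python
import re

# "  3  gw1.city.isp.example (198.51.100.7)  20.1 ms"
HOP = re.compile(r"^\s*\d+\s+([a-zA-Z0-9.-]+)\s+\(([\d.]+)\)")

def last_resolvable_hop(traceroute_output):
    """Return (hostname, ip) of the last hop with a reverse-DNS name."""
    last = None
    for line in traceroute_output.splitlines():
        m = HOP.match(line)
        # Skip hops that printed a bare IP instead of a hostname.
        if m and not m.group(1).replace(".", "").isdigit():
            last = (m.group(1), m.group(2))
    return last
```

ISP naming conventions often embed a city or exchange code in that final hostname, which is what lets investigators narrow down a region even before contacting the provider.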

* * *

Every time you venture online, you're exposed to malevolent entities that can harm your systems and disrupt business operations. Knowing how to trace the source of an attack can stop it in its tracks and prevent the intrusion from happening again.

Written by Jonathan Zhang, Founder and CEO of Threat Intelligence Platform

More under: Cybersecurity, Malware, Spam, Whois

Putting Cyber Threats Into Perspective

Tue, 2019-03-12 17:37

As society uses more digital technologies we are increasingly also faced with its problems.

Most of us will have some horror stories to tell about using computers, smartphones, and the internet. But this hasn't stopped us from using the technology more and more. I believe that most people would say that their lives would be worse without technology — in developed countries but equally in the developing world, where mobile phones and the internet have revolutionised the lives of hundreds of millions of individual people, resulting in great personal benefits involving, for example, employment, business, education (information) and healthcare.

And, while there are certainly also downsides, with hacks, identity theft, populism, cyberbullying, cybercrime and so on, the positives of ICT still far outweigh the negatives. Yet in recent years cybersecurity has achieved political importance that greatly exceeds its actual threat.

Despite the various and ongoing cyber threats the world seems to function quite well; and, as my colleague Andrew Odlyzko argues in his recent paper Cyber security is not (very) important, there have been many other security threats that have had a far greater impact on us than all the cyber threats combined. Think of the recent tsunamis, earthquakes, floods, epidemics, financial collapse (2008), 9/11 and so on. What about the massive damage done by guns in America or the hundreds of thousands of car casualties around the world every year? We seem to treat that as acceptable collateral damage.

In many of these cases, there is little political will to address the underlying issues like climate change, inequality and oligarchy, environmental degradation, gun control and so on.

Interestingly, many of those disasters do have some predictability and, if we wanted to, we could do much more about them. But that would require far more political attention around those more serious issues, and most politicians shy away from this. Cybersecurity seems to be an easier target.

If we look at history, we see that the collapse of societies has far more to do with those environmental issues than with technology. That is not to say that we should ignore cybersecurity. Of course not. But looking back on the last few decades cybersecurity has followed the same growth patterns as technology, and there is no reason to believe that this is going to change. We seem to be able to manage the cyber threats in the same way we can deal with other social problems such as crimes like theft and robbery, and so there is no overwhelming need to over-emphasise cyber threats.

As Andrew puts it, with all other social imperfections, we will never be able to get absolute cybersecurity. And, yes, there will be technological disasters, but it is unlikely that they will ever be on the scale of all the other disasters that humanity is facing.

So let's put this into perspective; and I would argue let's concentrate on how to address those far more dangerous developments, such as climate change, and how to look at ways ICT technology can assist humanity in finding solutions for this.

Amazingly it is here that government policies are moving backward, with relatively fewer funds being made available for innovation, research and development, education, e-health and so on.

There is also an important psychological element in cybercrime. Cyber breaches are widely reported but we must realise that vote rigging, gerrymandering and vote stacking, carried out in far more traditional ways, have a much greater impact on election outcomes than the influence of cybercrime.

Another example here is that, while many financial databases have been hacked and millions of credit cards have been captured, relatively little damage has been done, as banks have sophisticated ICT systems in place that can detect fraudulent transactions. Yet the financial damage of greedy banks nearly brought economies down in 2008.

Nevertheless, my greatest worry is still the Big Brother effect of cybersurveillance. It has the potential to further undermine our already weakening democratic structures. This has nothing to do with cybersecurity — in fact, cybersecurity can't be used to solve this problem. And, despite the fact that the issue is now being far more seriously investigated by law-makers and regulators, especially in Europe and Australia, the major issue continues to be the lack of political will to address these issues.

The ICT world with all its 'goods and bads' reflects our messy society and it is that same society that has led us to where we are now. And in many cases, our progress has been based on muddling on, with the occasional starburst.

While there are certainly many worrying signs in society today it remains our responsibility not to charge blindly in the same direction as some of our forebears did, which led to the collapse of many previous civilisations. We are now in a far better position to understand what causes those collapses and we are capable of innovation and diversifying to avoid disaster. And we — the people in the ICT industry — are in the privileged position of being able to assist societies by creating the right tools to further prosperity for all.

Written by Paul Budde, Managing Director of Paul Budde Communication

More under: Cyberattack, Cybercrime, Cybersecurity, Internet Governance, Policy & Regulation

Some Thought on the Paper: Practical Challenge-Response for DNS

Mon, 2019-03-11 20:22

This post reflects on ideas suggested in the paper: Practical Challenge-Response for DNS, 2018 by Rami Al-Dalky, Michael Rabinovich, and Mark Allman.

Because the speed of DNS is so important to the performance of any connection on the 'net, a lot of thought goes into making DNS servers fast, including optimized software that can respond to queries in milliseconds, and connecting DNS servers to the 'net through high bandwidth links. To set the stage for massive DDoS attacks based in the DNS system, add a third point: DNS responses tend to be much larger than DNS queries. In fact, a careful DNS response can be many times larger than the query.

To use a DNS server as an amplifier in a DDoS attack, then, the attacker sends a query to some number of publicly accessible DNS servers. The source of this query is the address of the system to be attacked. If the DNS query is carefully crafted, the attacker can send small packets that cause a number of DNS servers to send large responses to a single IP address, causing large amounts of traffic to the system under attack.
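The economics of the attack come down to one ratio. A minimal sketch with illustrative sizes (the specific byte counts are assumptions for the example, not measurements):

```python
def amplification_factor(query_bytes, response_bytes):
    """Bytes delivered to the victim per byte the attacker sends."""
    return response_bytes / query_bytes

# A ~60-byte query can elicit a multi-kilobyte response once large
# TXT or DNSKEY records are involved.
factor = amplification_factor(60, 3000)
print(factor)  # → 50.0: 10 Mbps of attacker upstream becomes ~500 Mbps of flood
```

Any defense that drives this ratio down toward 1 removes the attacker's incentive to reflect through the DNS at all, which is the yardstick the rest of the paper's design is measured against.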

Carrying DNS over TCP is one way to try to resolve this problem because TCP requires a three-way handshake. When the attacker sends a request with a spoofed source address, the server attempts to build a TCP session with the system which owns the spoofed address, which will fail. A key point: TCP three-way handshake packets are much smaller than most DNS responses, which means the attacker's packet stream is not being amplified (in size) by the DNS server.

DNS over TCP is problematic, however. For instance, many DNS resolvers cannot reach an authoritative DNS server using TCP because of stateful packet filters, network address translators, and other processes that either modify or block TCP sessions in the network. What about DNSSEC? This does not prevent the misuse of a DNS server; it only validates the records contained in the DNS database. DNSSEC just means the attacker can send even larger really secure DNS records towards an unsuspecting system.

Another option is to create a challenge-response system much like the TCP handshake, but embed it in DNS. The most obvious place to embed such a challenge-response system is in CNAME records. Assume a recursive DNS server requests a particular record; an authoritative server can respond with a CNAME record effectively telling the recursive server to ask someone else. When the recursive server sends the second query, presumably to a different server, it includes the response information it has in order to give the second server the context of its request.

To build a challenge-request system, the authoritative server sends back a CNAME telling the recursive server to contact the very same authoritative server. In order to ensure the three-way handshake is effective, the source IP address of the querying recursive DNS server is encoded into the CNAME response. When the authoritative server receives the second query, it can check the source address encoded in the second resolution request against the source of the packet containing the new query. If they do not match, the authoritative server can drop the second request; the three-way handshake failed.

If the source of the original request is spoofed, this causes the victim to receive a CNAME response telling it to ask again for the answer — which the victim will never respond to, because it did not send the original request. Since CNAME responses are small, this tactic removes the amplification the attacker is hoping for.
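The encode-and-verify step can be sketched concretely. The paper encodes the querier's address in the CNAME target; the HMAC tag below is my own added assumption (so a challenge can't be forged or replayed for a different name), and the label format is purely illustrative.

```python
import hashlib
import hmac

SECRET = b"authoritative-server-secret"  # would be rotated in practice

def challenge_cname(qname, client_ip):
    """Build a CNAME target encoding (and authenticating) the querier's IP."""
    tag = hmac.new(SECRET, f"{qname}|{client_ip}".encode(), hashlib.sha256)
    return f"{tag.hexdigest()[:16]}.{client_ip.replace('.', '-')}.{qname}"

def verify_challenge(challenge_qname, client_ip):
    """On the follow-up query, check the packet source against the encoding."""
    tag, encoded_ip, original = challenge_qname.split(".", 2)
    if encoded_ip != client_ip.replace(".", "-"):
        return False  # handshake failed: query arrived from a different source
    expected = hmac.new(SECRET, f"{original}|{client_ip}".encode(),
                        hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(tag, expected)
```

A spoofing attacker never sees the challenge (it goes to the victim), so it can never produce a second query that passes `verify_challenge` — which is exactly the property the three-way handshake is meant to provide.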

There is one problem with this solution, however: DNS resolvers are often pooled behind a single anycast address. Consider a resolving DNS server pool with two servers labeled A and B. Server A receives a DNS request from a host and finding it has no cache entry for the destination, recursively sends a request to an authoritative server. The authoritative server, in turn, sends a challenge to the IP address of server A. This address, however, is an anycast address assigned to the entire pool of recursive servers. For whatever reason, the challenge — a CNAME response asking the recursive server to ask at a different location — is directed to B.

If the DNS software is set up correctly, B will respond to the request. However, this response will be sourced from B's IP address, rather than A's. Remember the source of the original query is encoded in the CNAME response from the responding server. Since the address encoded in the follow-on query will not match B's address, the authoritative server will drop the request.

To solve this problem, the authors of this paper suggest a chained response. Rather than dropping the request with an improperly encoded source address, encode the new source address in the packet and send another challenge in the form of a CNAME response. Assuming there are only two servers in the pool, the next query with the encoded list of IP addresses from the CNAME response will necessarily match one of the two available source addresses, and the authoritative server can respond with the correct information.

What if the pool of recursive servers is very large — on the order of hundreds or thousands of servers? While one or two "round trips" in the form of a three-way handshake might not have too much of a performance impact, thousands could be a problem. To resolve this issue, the authors suggest taking advantage of the observation that once the packets being transmitted between the requester and the server are as large as the request itself, any amplification gain an attacker might take advantage of has been erased. Once the CNAME packet grows to the same size as a DNS request by adding source addresses observed in the three-way handshake process, the server should just answer the query. This (generally) reduces the number of round trips down to three or four before the DNS is not going to generate any more data than the attacker could send to the victim directly, and dramatically improves the performance of the scheme.
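The stopping rule the authors propose reduces to a simple size comparison. A rough sketch, where the `base_overhead` constant standing in for the fixed part of a CNAME response is an illustrative assumption:

```python
def should_answer(query_size, observed_ips, base_overhead=64):
    """Answer (rather than challenge again) once the growing CNAME reply
    is at least as large as the query: at that point another round gives
    the attacker no amplification."""
    cname_size = base_overhead + sum(len(ip) + 1 for ip in observed_ips)
    return cname_size >= query_size
```

Each failed round appends another observed source address to the challenge, so the reply grows monotonically and the loop is guaranteed to terminate within a handful of rounds even behind a very large anycast pool.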

I was left with one question after reading this paper: there are carefully crafted DNS queries that can cause very large, multipacket responses. These are not mentioned at all in the paper; this seems like an area that would need to be considered and researched more deeply. Overall, however, this seems like it would be an effective system to reduce or eliminate the use of authoritative servers in DDoS reflection attacks.

Written by Russ White, Network Architect at LinkedIn

More under: Cyberattack, Cybersecurity, DNS, DNS Security

Facebook and Privacy

Mon, 2019-03-11 20:12

Mark Zuckerberg shocked a lot of people by promising a new focus on privacy for Facebook. There are many skeptics; Zuckerberg himself noted that the company doesn't "currently have a strong reputation for building privacy protective services." And there are issues that his blog post doesn't address; Zeynep Tufekci discusses many of them. While I share many of her concerns, I think there are some other issues — and risks.

The Velocity of Content

Facebook has been criticized for being a channel where bad stuff — anti-vaxxer nonsense, fake news (in the original sense of the phrase...), bigotry, and more — can spread very easily. Tufekci called this out explicitly:

At the moment, critics can (and have) held Facebook accountable for its failure to adequately moderate the content it disseminates — allowing for hate speech, vaccine misinformation, fake news and so on. Once end-to-end encryption is put in place, Facebook can wash its hands of the content. We don't want to end up with all the same problems we now have with viral content online — only with less visibility and nobody to hold responsible for it.

Some critics have called for Facebook to do more to curb such ideas. The company itself has announced it will stop recommending anti-vaccination content. Free speech advocates, though, worry about this a lot. It's not that anti-vaxxer content is valuable (or even coherent...); rather, it's that encouraging such a huge, influential company to censor communications is very dangerous. Besides, it doesn't scale; automated algorithms will make mistakes and can be biased, and human moderators not only make mistakes, too, but find the activity extremely stressful. As someone who is pretty much a free speech absolutist myself, I really dislike censorship. That said, as a scientist, I prefer not closing my eyes to unpleasant facts. What if Facebook really is different enough that a different paradigm is needed?

Is Facebook that different? I confess that I don't know. That is, it has certain inherent differences, but I don't know if they're great enough in effect to matter, and if so, if the net benefit is more or less than the net harm. Still, it's worth taking a look at what these differences are.

Before Gutenberg, there was essentially no mass communication: everything was one person speaking or writing to a few others. Yes, the powerful — kings, popes, and the like — could order their subordinates to pass on certain messages, and this could have widespread effect. Indeed, this phenomenon was even recognized in the Biblical Book of Esther:

3:12 Then were the king's scribes called on the thirteenth day of the first month, and there was written according to all that Haman had commanded unto the king's lieutenants, and to the governors that were over every province, and to the rulers of every people of every province according to the writing thereof, and to every people after their language; in the name of king Ahasuerus was it written, and sealed with the king's ring.

3:13 And the letters were sent by posts into all the king's provinces, to destroy, to kill, and to cause to perish, all Jews, both young and old, little children and women, in one day, even upon the thirteenth day of the twelfth month, which is the month Adar, and to take the spoil of them for a prey.

3:14 The copy of the writing for a commandment to be given in every province was published unto all people, that they should be ready against that day.

3:15 The posts went out, being hastened by the king's commandment, and the decree was given in Shushan the palace. And the king and Haman sat down to drink; but the city Shushan was perplexed.

By and large, though, this was rare.

Gutenberg's printing press made life a lot easier. People other than potentates could produce and distribute fliers, pamphlets, newspapers, books, and the like. Information became much more democratic, though, as has often been observed, "freedom of the press belongs to those who own printing presses". There was mass communication, but there were still gatekeepers: most people could not in practice reach a large audience without the permission of a comparative few. Radio and television did not change this dynamic.

Enter the Internet. There was suddenly easy, cheap, many-to-many communication. A U.S. court recognized this. All parties to the case (on government-mandated censorship of content accessible to children) stipulated, among other things:

79. Because of the different forms of Internet communication, a user of the Internet may speak or listen interchangeably, blurring the distinction between "speakers" and "listeners" on the Internet. Chat rooms, e-mail, and newsgroups are interactive forms of communication, providing the user with the opportunity both to speak and to listen.

80. It follows that unlike traditional media, the barriers to entry as a speaker on the Internet do not differ significantly from the barriers to entry as a listener. Once one has entered cyberspace, one may engage in the dialogue that occurs there. In the argot of the medium, the receiver can and does become the content provider, and vice-versa.

81. The Internet is therefore a unique and wholly new medium of worldwide human communication.

The judges recognized the implications:

It is no exaggeration to conclude that the Internet has achieved, and continues to achieve, the most participatory marketplace of mass speech that this country — and indeed the world — has yet seen. The plaintiffs in these actions correctly describe the "democratizing" effects of Internet communication: individual citizens of limited means can speak to a worldwide audience on issues of concern to them. Federalists and Anti-Federalists may debate the structure of their government nightly, but these debates occur in newsgroups or chat rooms rather than in pamphlets. Modern-day Luthers still post their theses but to electronic bulletin boards rather than the door of the Wittenberg Schlosskirche. More mundane (but from a constitutional perspective, equally important) dialogue occurs between aspiring artists, or French cooks, or dog lovers, or fly fishermen.

Indeed, the Government's asserted "failure" of the Internet rests on the implicit premise that too much speech occurs in that medium, and that speech there is too available to the participants. This is exactly the benefit of Internet communication, however. The Government, therefore, implicitly asks this court to limit both the amount of speech on the Internet and the availability of that speech. This argument is profoundly repugnant to First Amendment principles.

But what if this is the problem? What if this new many-to-many communication is precisely what is causing trouble? More precisely, what if the problem is the velocity of communication, in units of people per day?

High-velocity propagation appears to be exacerbated by automation, either explicitly or as a side-effect. YouTube's recommendation algorithm appears to favor extremist content. Facebook has a similar problem:

Contrast this, however, with another question from Ms. Harris, in which she asked Ms. Sandberg how Facebook can "reconcile an incentive to create and increase your user engagement when the content that generates a lot of engagement is often inflammatory and hateful." That astute question Ms. Sandberg completely sidestepped, which was no surprise: No statistic can paper over the fact that this is a real problem.

Facebook, Twitter and YouTube have business models that thrive on the outrageous, the incendiary and the eye-catching, because such content generates "engagement" and captures our attention, which the platforms then sell to advertisers, paired with extensive data on users that allow advertisers (and propagandists) to "microtarget" us at an individual level.

The velocity, in these cases, appears to be a side-effect of this algorithmic desire for engagement. Sometimes, though, bots appear to be designed to maximize the spread of malicious content. Either way, information spreads far more quickly than it used to, and on a many-to-many basis.
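The difference velocity makes can be seen in a toy branching model. This is purely illustrative (the assumption that every recipient re-shares to a fixed number of new people each step is invented for the example), but it shows why many-to-many spread behaves so differently from one-to-one forwarding:

```python
# Toy model of sharing velocity: every recipient re-shares to
# `fanout` new people at each step (an illustrative assumption).

def people_reached(fanout: int, steps: int) -> int:
    """Total people reached after `steps` sharing rounds
    (a geometric series: 1 + fanout + fanout**2 + ...)."""
    reached, frontier = 1, 1
    for _ in range(steps):
        frontier *= fanout
        reached += frontier
    return reached

# One-to-one forwarding is a chain; many-to-many sharing explodes:
assert people_reached(1, 10) == 11          # a chain reaches 11 people in 10 steps
assert people_reached(5, 10) == 12_207_031  # 5-way re-sharing reaches ~12M
```

The point is not the specific numbers but the shape of the curve: any per-step fanout above one turns days into the dominant unit of reach.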

Zuckerberg suggests that Facebook wants to focus on smaller-scale communications:

This is different from broader social networks, where people can accumulate friends or followers until the services feel more public. This is well-suited to many important uses — telling all your friends about something, using your voice on important topics, finding communities of people with similar interests, following creators and media, buying and selling things, organizing fundraisers, growing businesses, or many other things that benefit from having everyone you know in one place. Still, when you see all these experiences together, it feels more like a town square than a more intimate space like a living room.

There is an opportunity to build a platform that focuses on all of the ways people want to interact privately. This sense of privacy and intimacy is not just about technical features — it is designed deeply into the feel of the service overall. In WhatsApp, for example, our team is obsessed with creating an intimate environment in every aspect of the product. Even where we've built features that allow for broader sharing, it's still a less public experience. When the team built groups, they put in a size limit to make sure every interaction felt private. When we shipped stories on WhatsApp, we limited public content because we worried it might erode the feeling of privacy to see lots of public content — even if it didn't actually change who you're sharing with.

What if Facebook evolves that way, and moves more towards small-group communication rather than being a digital town square? What will be the effect? Will smaller-scale many-to-many communications behave the same way?

I personally like being able to share my thoughts with the world. I was, after all, one of the creators of Usenet; I still spend far too much time on Twitter. But what if this velocity is bad for the world? I don't know if it is, and I hope it isn't — but what if it is?

One final thought on this… In democracies, restrictions on speech are more likely to pass legal scrutiny if they're content-neutral. For example, a loudspeaker truck advocating some controversial position can be banned under anti-noise regulations, regardless of what it is saying. It is quite possible that a velocity limit would be accepted — and it's not at all clear that this would be desirable. Authoritarian governments are well aware of the power of mass communications:

The use of big-character-posters did not end with the Cultural Revolution. Posters appeared in 1976, during student movements in the mid-1980s, and were central to the Democracy Wall movement in 1978. The most famous poster of this period was Wei Jingsheng's call for democracy as a "fifth modernization." The state responded by eliminating the clause in the Constitution that allowed people the right to write big-character-posters, and the People’s Daily condemned them for their responsibility in the "ten years of turmoil" and as a threat to socialist democracy. Nonetheless, the spirit of the big-character-poster remains a part of protest repertoire, whether in the form of the flyers and notes put up by students in Hong Kong's Umbrella Movement or as ephemeral posts on the Chinese internet.

As the court noted, "Federalists and Anti-Federalists may debate the structure of their government nightly, but these debates occur in newsgroups or chat rooms rather than in pamphlets." Is it good if we give up high-velocity, many-to-many communications?

Certainly, there are other channels than Facebook. But it's unique: with 2.32 billion users, it reaches about 30% of the world's population. Any change it makes will have worldwide implications. I wonder if they'll be for the best.

Possible Risks

Zuckerberg spoke of much more encryption, but he also noted the risks of encrypted content: "Encryption is a powerful tool for privacy, but that includes the privacy of people doing bad things. When billions of people use a service to connect, some of them are going to misuse it for truly terrible things like child exploitation, terrorism, and extortion. We have a responsibility to work with law enforcement and to help prevent these wherever we can". What does this imply?

One possibility, of course, is that Facebook might rely more on metadata for analysis: "We are working to improve our ability to identify and stop bad actors across our apps by detecting patterns of activity." But he also spoke of analysis "through other means". What might they be? Doing client-side analysis? About 75% of Facebook users employ mobile devices to access the service; Facebook clients can look at all sorts of things. Content analysis can happen that way, too; though Facebook doesn't use content to target ads, might it use it for censorship, good or bad?

Encryption also annoys many governments. Governments disliking encryption is not new, of course, but the more people use it, the more upset they will get. This will be exacerbated if encrypted messaging is used for mass communications; Tufekci is specifically concerned about that: "Once end-to-end encryption is put in place, Facebook can wash its hands of the content. We don't want to end up with all the same problems we now have with viral content online — only with less visibility and nobody to hold responsible for it." We can expect pressure for back doors to increase — but they'll still be a dangerous idea, for all of the reasons we've outlined. (And of course, that interacts with the free speech issue.)

I'm not even convinced that Facebook can actually pull this off. Here's the problem with encryption: who has the keys? Note carefully: you need the key to read the content — but that implies that if the authorized user loses her key, she has lost access to her own content and messages. The challenge for Facebook, then, is protecting keys against unauthorized parties — Zuckerberg specifically calls out "heavy-handed government intervention in many countries" as a threat — while also making them available to authorized users who have suffered some mishap. Matt Green calls this the "mud puddle test": if you drop your device in a mud puddle and forget your password, how do you recover your keys?

Apple has gone to great lengths to lock themselves out of your password. Facebook could adopt a similar strategy — but that could mean that a forgotten password means loss of all encrypted content. Facebook, of course, has a way to recover from a forgotten password — but will that recover a lost key? Should it? So-called secondary authentication is notoriously weak. Perhaps it's an acceptable tradeoff to regain access to your account but lose access to older content — indeed, Zuckerberg explicitly spoke of the desirability of evanescent content. But even if that's a good tradeoff — Zuckerberg says "you'd have the ability to change the timeframe or turn off auto-deletion for your threads if you wanted" — if someone else (including a government) took control of your account, it would violate another principle Facebook holds dear: "there must never be any doubt about who you are communicating with".
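The mud puddle problem can be made concrete with a short sketch. Everything here is illustrative: the XOR "cipher" is a toy, not real cryptography, and the names and parameters are invented. The point is only that when the sole key is derived from the password, resetting the password cannot recover old content:

```python
# Toy illustration of password-derived encryption with no key escrow.
# NOT real cryptography -- the XOR keystream below is for demonstration only.
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    # PBKDF2 key derivation; with no server-side escrow, this key
    # exists only for as long as the user remembers the password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Expand the key into a keystream and XOR (symmetric: same call
    # encrypts and decrypts).
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

salt = b"per-user-salt"
ciphertext = xor_stream(derive_key("correct horse", salt), b"private message")

# The original password round-trips; a "recovered" account with a new
# password does not -- the old content is gone.
assert xor_stream(derive_key("correct horse", salt), ciphertext) == b"private message"
assert xor_stream(derive_key("new password", salt), ciphertext) != b"private message"
```

Any scheme that *can* decrypt the old content after a password reset necessarily holds a second copy of the key somewhere — which is exactly the escrow that "heavy-handed government intervention" could compel.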

How Facebook handles this dilemma will be very important. Key recovery will make many users very happy, but it will allow the "heavy-handed government intervention" Zuckerberg decries. A user-settable option on key recovery? The usability of any such option is open to serious question; beyond that, most users will go with the default, and will thus inherit the risks of that default.

Written by Steven Bellovin, Professor of Computer Science at Columbia University

More under: Internet Governance, Policy & Regulation, Privacy, Web

Russians Take to the Streets to Protest Against New Internet Restrictions

Mon, 2019-03-11 18:54

Thousands of Russians in Moscow and other cities rallied on Sunday against tighter internet restrictions. The protest is reported to be one of the most prominent in the Russian capital in years. Reuters reports: "Lawmakers last month backed tighter internet controls contained in legislation they say is necessary to prevent foreign meddling in Russia's affairs. But some Russian media likened it to an online 'iron curtain' and critics say it can be used to stifle dissent. ... The legislation is part of a drive by officials to increase Russian 'sovereignty' over its Internet segment." The new bill passed in the Russian parliament in February aims to route Russian internet traffic and data through points controlled by the state and proposes building a national DNS as an alternate platform in the event the country is cut off from foreign infrastructure.

More under: Access Providers, Internet Governance, Policy & Regulation

Five Inconvenient Facts about the Migration to 5G Wireless

Sun, 2019-03-10 19:39

An unprecedented disinformation campaign purposefully distorts what consumers and governments understand about the upcoming fifth generation of wireless broadband technology. A variety of company executives and their sponsored advocates want us to believe that the United States already has lost the race to 5G global market supremacy and that it can regain it only with the assistance of a compliant government and a gullible public. Stakeholders have identified many new calamities, such as greater vulnerability to foreign government-sponsored espionage carried out by equipment manufacturers, as grounds for supporting the merger of two of only four national wireless carriers and preventing U.S. telecommunications companies from buying equipment manufactured by specific, blacklisted Chinese companies.

How do these prescriptions promote competition and help consumers? Plain and simple, they do not, but that does not stop well-funded campaigns from convincing us that less competition is better. Set out below, I offer five obvious but obscured truths.

1) Further concentration of the wireless marketplace will do nothing to maintain or reclaim global 5G supremacy.

It requires a remarkable suspension of disbelief to think that allowing Sprint and T-Mobile to merge remedies a variety of ills, rather than further eroding conditions favoring competition in an already extremely concentrated marketplace. Advocates for the merger want us to believe that it is our patriotic duty to support the combination because it will enhance the collective fortunes of wireless carriers and customers, help the U.S. regain 5G market leadership from the Chinese, and achieve greater competition, innovation and employment than what two separate companies could achieve.

Nothing has prevented Sprint and T-Mobile from acquiring the funds needed for 5G investments. Ironically, considering the rampant fear of foreign ventures doing business in the U.S. telecommunications marketplace, both companies are majority-owned by powerful foreign ventures: SoftBank (Sprint) and Deutsche Telekom (T-Mobile). Interest rates have rarely reached such low levels, and both companies have matched AT&T and Verizon in preparing for the future migration from 4G to 5G infrastructure.

A merger would combine the two mavericks in the marketplace responsible for just about every consumer-friendly pricing and service innovation over the last decade from "anytime" minutes, to bring your own device, to attractive bundling of "free" and "unmetered" content. A merged venture would reduce the number of wireless towers, total radio spectrum used to provide service and incentives for enhancing the value proposition of next-generation wireless technology.

2) Carriers Cannot Expedite 5G with Labels.

Branding handsets and service as "5G evolving" contributes to the hype without expediting the ready-for-service date. An emphasis on puffery and marketing distracts the carriers and their subscribers from the hard work needed to make 5G a reality. There are no shortcuts in spectrum planning, network design, equipment installation, and coordination between carriers and local authorities. Even before the rollout of definitive 5G standards and equipment, FCC Chairman Ajit Pai wants to limit local regulators by establishing a "shot clock" deadline on permitting and site authorizations, no matter how complicated and locality-specific.

3) Ignoring or Underemphasizing International Coordination will Backfire.

Next-generation network planning typically requires years of negotiation between and among national governments. For wireless services, the nations of the world attempt to reach consensus on which frequencies to allocate and what operational procedures and standards to recommend. This process requires patience, study, consensus building and compromise, characteristics sadly out of vogue in the current environment, newly fixated on real or perceived threats to national security, fair trade and intellectual property rights. These important matters increase the need for coordination among nations; they do not create first-to-market opportunities for nations acting unilaterally and independently of traditional inter-governmental forums.

4) Invoking Patriotism, Trade and National Security Concerns Will Harm U.S. Ventures.

Advisors to Sprint and T-Mobile are probably congratulating themselves on having come up with a creative national-security rationale for an unprecedented and ill-advised merger approval and for outlawing market entry by foreign equipment manufacturers. Their short-term objectives ignore the great likelihood of long-term harm to efficiency, innovation, employment, nimbleness and speed in market entry. Concentrating a market reduces competitive incentives by making it easier for dominant ventures to establish an industry-wide consensus on service rates and terms. Antitrust experts use the term "conscious parallelism" to describe the all-too-frequent decision by competitors to forgo the sleepless afternoons of competing and instead implicitly accept a high-margin path of least resistance.

5) Politicizing Next Generation Wireless Harms Everyone.

Planning for a major new generation of wireless technology did not always have a political element, divided along party lines. The process is tedious and incremental, perhaps too slow to accommodate the pace of change in technologies and markets. However, its primary goal is to optimize technology for the greatest good. Historically, when nations favored domestic standards and companies, markets fragmented and profit margins declined.

Incompatible transmission standards, like those currently in use by U.S. wireless carriers, have increased consumer cost and frustration because an AT&T handset will not work on the Verizon network. Incompatible standards and spectrum assignments typically harm consumers and competition by increasing the likelihood of incompatible equipment and networks.

I cannot understand how two political parties can apply the same evaluative criterion and reach totally opposite outcomes. By law, the FCC and Justice Department must consider whether the T-Mobile-Sprint merger would "substantially lessen" competition. Measuring markets and assessing market impacts should not cleave along a political fulcrum, yet it does, with predictably adverse consequences. One party sees no harm in a business initiative that concentrates a market, while the other cannot imagine how such a merger might enhance competition, or at least cause no harm.

If politics, national industrial policy and false patriotism become dominant factors in spectrum planning and next-generation networks, consumers will suffer, as will ventures that become distracted and unfocused on how to make 5G enhance the wireless value proposition.

Written by Rob Frieden, Pioneers Chair and Professor of Telecommunications and Law

More under: Cybersecurity, Networks, Policy & Regulation, Wireless

Thousands of UK Businesses, Individuals to Lose Their .EU Web Address in a No-Deal Brexit

Sat, 2019-03-09 21:35

The British government is urging close to 340,000 registered holders of .EU domains in Britain to make contingency plans, as their web addresses will disappear if the UK does not agree on a deal with Brussels. Updated government guidance, according to The Guardian's report, warns that if the UK leaves without a deal at the end of March, then domain owners based in the UK will have two months' leeway to move their principal location to somewhere within the EU or EEA. "After a year, all the British-registered .EU domains will be made available for purchase by individuals and companies who continue to reside in the EU."

Impact on European TLDs not limited to .EU: "The rights of UK residents to hold domain names in all of the following countries will also be affected post-Brexit," reports JD Supra:

.FR (France)
.HU (Hungary)
.IT (Italy)
.RE (Reunion Island)
.YT (Mayotte)
.PM (Saint Pierre and Miquelon)
.WF (Wallis and Futuna Islands)
.TF (French Southern and Antarctic Territories)

This is because the above TLDs generally require a registrant located in the EU (or in the territory of Iceland, Liechtenstein, Norway or Switzerland).

More under: Domain Names, Policy & Regulation

Supporting Dot Amazon Strengthens Global Internet Cooperation

Fri, 2019-03-08 01:28

With the backlash against tech companies gaining steam, we've seen certain contrarian members of the media taking indiscriminate aim at companies and issues without due cause. This is what happened when Financial Times columnist Gillian Tett, in a paywalled March 7th editorial, inaccurately portrayed the process involving Amazon's gTLD application for .AMAZON, an issue the i2Coalition has been engaged in for years.

While we respect that columnists have limited time to write pieces and short space to make their argument, this column argues that the story of .AMAZON is one of attempted exploitation. This is not the case. The columnist praises ICANN's global consultative process and its slow and deliberative action, and yet she goes on to immediately rush to judgment about what it should do with the gTLD itself. Rather than try to understand ICANN's processes, this columnist prefers to complain that it's simply too complicated. "[M]y heart lies on the side of the jungle," she writes, going on to say that proceeding with the .AMAZON delegation would threaten Internet cooperation.

Given looming deadlines to solve this protracted conflict in the right way, we wish to state that the opposite is true.

Amazon applied for .AMAZON and its Chinese and Japanese translations, among many others, when ICANN launched the new gTLD program seven years ago. Under the 2012 gTLD Applicant Guidebook process, the ".AMAZON" application received perfect scores, and ICANN's Geographic Names Panel, which had been consulting with governments for multiple years on the subject, said the domain was neither a prohibited geographic name nor one which required government approval.

ICANN operates under the multi-stakeholder model. The multi-stakeholder model of Internet governance serves to keep the Internet free and open, by bringing the interests of all parties to the table and ensuring that the Internet is free of undue control by any one group, including governments, who have an important but not overreaching role to play. It was under this system that the .AMAZON application was made. Amazon met all the requirements under ICANN's articles, bylaws, and guidebook, and drew top marks across the board for its application.

Amazon is a member of the i2Coalition, but our interest in this matter goes well beyond the commercial interest of a single member of our community. ICANN is seven years into a gTLD delegation process in which the rules were clearly spelled out in advance, with governments at the table when the rules were made. Attempts to change them after the fact, in ways that are not driven by consensus of the global multi-stakeholder community, are corrosive to the trust we have in the multi-stakeholder model of Internet governance. In short, ICANN needs to follow through with the .AMAZON application, and follow its own rules, to maintain the credibility of its systems.

Though we should all be sympathetic to the position of the governments of Brazil and Peru, we should also be impressed with the extensive efforts that Amazon has undertaken in order to assuage as many of those concerns as possible. They have made formal signed commitments to not use the TLDs in a confusing manner. They have promised to support future gTLD applications to represent the region using the geographic terms of the regions, including .AMAZONIA, .AMAZONICA or .AMAZONAS. They also offered to reserve for the relevant governments certain domain names that could cause confusion or touch on national sensitivities.

We strongly believe that the Internet community and the Board of ICANN now have an opportunity to show the entire multi-stakeholder community that its systems work. By upholding its Applicant Guidebook, its community-developed bylaws, and its independent dispute resolution process, the ICANN Board's approval of the .AMAZON applications will increase community trust and show that the Board takes ICANN's core principles of transparency and accountability extremely seriously. For those reasons, we call on the Board of ICANN to approve the .AMAZON applications.

Written by Christian Dawson, Executive Director, i2Coalition

More under: ICANN, Internet Governance, Policy & Regulation, New TLDs
