Domain industry news

Latest posts on CircleID

Surveillance Capitalist in Chief

Fri, 2020-05-29 20:00

Co-authored by Klaus Stoll and Professor Sam Lanfranco.

Surveillance capitalism monetizes private data collected without the consent of the individuals concerned, analyzing it and selling it to advertisers and opinion-makers. There has always been an intricate relationship between governments and surveillance capitalists. Governments have a duty to protect their citizens from the excesses of surveillance capitalism; on the other hand, governments use that data, and surveillance capitalism's services and techniques.

Donald Trump just outed himself as Surveillance Capitalist in Chief. Social media as we know it exists only because it is one of the main sources of data, revenue, and profits for surveillance capitalism. It is also Donald Trump's much-beloved and much-used bully pulpit, allowing him to reach 80.5 million people in an instant.

Why is Trump attacking it by alleging that Twitter was stifling his freedom of speech? Why has he followed that by signing a likely legally unenforceable executive order that empowers federal regulators to crack down on social media companies that allegedly censor political speech or exhibit political bias?

The inconvenient truth is that surveillance capitalism is incompatible with the truth. While pretending to serve millions, social platforms' business practices were designed not in the interest of users but as ever more effective private-data harvesters in the service of a commercial and political elite. In truth, the platforms care little about the veracity of what users say or receive. They care about their return on investment.

Section 230 of the U.S. Communications Decency Act protects social media companies from liability for the content that users post on their platforms, unlike other media, which are held accountable for their content. This does not exempt social media companies from all responsibility for the veracity of content. Everybody on the Internet, be they private individuals, corporations, or the President of the United States, has rights and responsibilities.

Trump has the right to free speech, and he is also responsible for what he says, for its veracity, and for ensuring it does not harm others. If he is unable to express his opinions responsibly, it falls to those whose platforms he uses to act responsibly and flag his content with the intent of preventing harm from falsehoods. This is part of the give and take within the freedom of speech.

Such an intervention does not limit Trump's free speech. His opinion is still fully visible and unredacted. When its veracity is questionable or false, flagging it promotes user due diligence, a wider exercise of responsible free speech, and a generally more knowledgeable public dialogue.

Trump's response to Twitter's actions is to clothe his unfettered lack of veracity in the wrappings of free speech. The irony of Trump's Executive Order is that Twitter could be required to remove such postings of questionable veracity, rather than just flag them for due diligence.

Twitter's response is a "violation" of the first principle of surveillance capitalism: Separate what is morally and ethically inseparable. Separate rights from responsibilities. Separate data ownership from privacy. Separate falsehood from consequences, all in the name of surveillance capitalism's profits.

Trump needs a social media bully pulpit that frees him from any concerns about anybody or anything except himself and his interests, to win the next election.

While Twitter pursues baby steps by flagging Trump's postings, social media must choose which path to follow. The whole Internet ecosystem must choose which path to follow. How do we protect the rights and responsibilities of free speech, promote the veracity of content, and protect user privacy?

Social media have become dominant players in this area of the Internet and have a major role to play. How does society balance the private interests of surveillance capitalism and a public good that includes free and responsible speech, veracity of content and user privacy? Surveillance capitalism, with its exploitive business model and associated use by allied political actors, will opt for their responsibility-free privileges and unbridled profits. Others, in defense of the public interest and the integrity of the individual, will fight for the Internet as free and unbiased Network of Networks dedicated to serving the common good.

The fight over the path forward will be long, costly, and turbulent. Those who demand truth and integrity in social media endanger surveillance capitalism's business model, with its storehouses of data, money, and power. Drawing on Trump's unfortunate tweet, "when the looting starts, the shooting starts": on the Internet, the looting started two decades ago, with social media warehousing and exploiting private data. Hopefully, with rightful and responsible free speech, veracity, and engaged citizenship, we can get beyond the data looting and restore dignity to the role of the Internet as a Network of Networks operating in the public good, before we end up where "the shooting starts."

Written by Klaus Stoll, Digital Citizen

Follow CircleID on Twitter

More under: Censorship, Cybercrime, Cybersecurity, Internet Governance, Law, Policy & Regulation, Privacy

Categories: News and Updates

Unintended Benefits of Trump's Twitter Tantrum

Fri, 2020-05-29 17:01

Four years ago, progressive intergovernmental organizations like the European Union became increasingly concerned about the proliferation of hate speech on social media. They adopted legal mechanisms for removing Twitter accounts like Donald Trump's. The provisions were directed at Facebook, Twitter, and YouTube. In June of 2016, a "deconstruction" of these mechanisms was presented to one of the principal global industry standards bodies with a proposal to develop new protocols to rapidly remove such accounts. The proposal recognized that tweet messages such as Trump's were essentially global malware, and used cybersecurity threat models to identify and remove the source account.

The proposal for the new takedown protocol standards was not adopted. However, the EU did proceed to extend the legal mechanisms to additional social media platforms, and in 2019 it claimed a degree of success in removing the worst hate speech identified to European authorities. Unfortunately, Trump's Twitter account was not removed despite numerous parties identifying his messages as blatantly racist and xenophobic. The messages became so egregious that they led to a U.S. House of Representatives resolution condemning them. Twitter refused, however, to abide by the EU provisions.

The European Commission has implemented a comprehensive and broad set of actions to tackle the spread and impact of online disinformation in Europe. (Image: European Commission)

During the past four years, the phenomenon of speech malware expanded into "fake speech." The EU progressively began to tackle these societal disinformation threats as well. The threats were especially relevant to elections. Legal mechanisms similar to those dealing with hate speech were put into effect.

Over time, Trump's hate speech evolved into fake speech, principally via Twitter messages. The messages became so "disinformative" that Twitter recently attempted to comply with its EU legal obligations through the labeling option. Trump on Thursday responded with a Tantrum by Executive Order against Twitter, purveying still further disinformation and asserting legal authority he does not have.

The Trump Tantrum against Twitter may have some unintended benefits for the larger world outside Trump's office. Among other things, it allows the EU to make clear that Twitter was, in fact, complying with its legal obligations. Furthermore, removing liability protection for social media companies would make laws throughout the world dealing with hate and fake speech more compelling. It also renews the possibility of developing a new cybersecurity protocol for identifying and tagging tweet disinformation. Indeed, it opens up the possibility of going beyond merely tagging Trump's messages to terminating his account. Trump could even be memorialized, in a sense, by assigning his fake tweets a unique cyber-threat identifier and reporting them worldwide for takedown remediation.

Note: The author was an invited expert to the November 1997 UNHCR conference on internet hate speech.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC

Follow CircleID on Twitter

More under: Cybersecurity, Internet Governance

Categories: News and Updates

A New U.S. National Broadband Plan?

Fri, 2020-05-29 16:22

United States Senator Edward Markey (D-Mass) introduced a bill that would require that the FCC create a new National Broadband Plan by July 2021. This plan would lay out the national goals needed for broadband going forward and also provide an update on how the COVID-19 crisis has impacted Internet access. I am not a big fan of the concept of a national plan for many reasons.

Can't Trust FCC Data. The FCC would base any analysis in a new plan on the same flawed data they are using for everything else related to broadband. At this point, the best description of the FCC's broadband data is that it is a fairy tale — and not one with a happy ending.

Gives Politicians Talking Points rather than Action Plans. A national broadband plan gives every politician talking points to sound like they care about broadband — which is a far cry from an action plan to do something about broadband. When politicians don't want to fix a problem, they study it.

Makes No Sense if Broadband is Unregulated. Why would the government create a plan for an industry over which the government has zero influence? The FCC has gifted the broadband industry with 'light-touch regulation' which is a code word for no regulation at all. The FCC canned Title II regulatory authority and handed the tiny remaining remnant of broadband regulation to the Federal Trade Commission — which is not a regulatory agency.

The Last National Broadband Plan was a Total Bust. There is no need for a National Broadband Plan if it doesn't include a requirement that the FCC actually try to tackle its recommendations. Almost nothing from the last broadband plan came to pass — the FCC and the rest of the federal government stopped even paying lip service to the last plan within a year after it was published. Consider the primary goals of the last National Broadband Plan that were to have been implemented by 2019:

  • At least 100 million homes should have affordable access to 100/50 Mbps broadband. Because the cable companies implemented DOCSIS standards in urban areas, more than 100 million people now have access to 100 Mbps download speeds. But only a tiny fraction of that number (homes with fiber) have access to the 50 Mbps upload speed goal. It's also impossible to make a case that U.S. broadband is affordable: U.S. broadband prices are almost double the rates in Europe and the Far East.
  • The U.S. should lead the world in mobile innovation and have the fastest and most extensive wireless network of any nation. U.S. wireless broadband is far from the fastest in the world — our neighbor Canada is much closer to that goal than is the U.S. Everybody acknowledges that there are giant areas of rural America without good wireline broadband, but most people have no idea that cellular coverage is also miserable in a lot of rural America.
  • Every American Community should have gigabit access to anchor institutions such as schools, libraries, and government buildings. We probably came the closest to meeting this goal, at least for schools, and over 90% of schools now have gigabit access. However, much of that gain came through poorly-aimed federal grants that paid a huge amount of money to bring fiber to anchor institutions while ignoring the neighborhoods around them — and in many cases, the fiber serving government buildings is legally blocked from being used to help anybody else.
  • Every American should have affordable access to robust broadband and the means and the skills to subscribe. A decade after the last National Broadband Plan, there are still huge numbers of rural homes with no broadband, or with broadband that barely functions. Instead of finding broadband solutions for rural America, we have an FCC that congratulates itself each year for being on the right trajectory for solving the broadband gap.
  • To ensure that America leads in the clean energy economy, every American should be able to use broadband to track and manage their real-time energy consumption. I can't come up with anything other than a facepalm for this goal.

As hard as I try, I can't think of even one reason why we should waste federal dollars to develop a new national broadband plan. Such a plan will have no teeth and will pass out of memory soon after it's completed.

Written by Doug Dawson, President at CCG Consulting

Follow CircleID on Twitter

More under: Access Providers, Broadband, Coronavirus, Policy & Regulation

Categories: News and Updates

The Costs of Trump's 5G Wall

Fri, 2020-05-29 00:12

Over the past three years, Trump and his followers around Washington have begun to erect the equivalent of his Southern Border Wall around the nation's information network infrastructure — especially for 5G. The tactics are similar: keep out foreign invaders who are virtually sneaking across the borders to steal the nation's information resources and control our Internet things. The mantras are almost identical, too. "Build The Wall" that keeps foreigners' equipment and communications out of our networks. Stop our cooperation in the international activities that foster global communication and trade because we are being treated unfairly.

Piece by piece, Trump has moved forward with his plans since assuming power — masked by incremental steps in diverse Washington agency venues and by the esoteric fog that envelops how 5G networks and cooperation work, a fog that shelters those who would make quick money from 5G network wall snake oil. Like all Trump communication, it creates an alternative reality, projecting the illusion of leadership and action while harming American telecommunications' national security and economic interests. Week after week, the bricks of Trump's 5G wall have emerged from Administration portals in the form of Executive Orders and instant emergency regulations dictated by tweets. Trump's 5G security disinformation campaign is the equivalent of his "taking bleach" cure for COVID-19 infections.

Specific examples of Trump's ignorant, xenophobic wall follies include bans on foreign network products, prohibitions on international network interoperation, withdrawal of government agencies from international 5G activities, demands for lockstep compliance by allies, and impediments to private-sector collaboration in industry and academic activities. U.S. presence in most global activities has largely ceased, replaced by faux domestic initiatives that create the illusion of replacing ongoing global work. You can see the adverse effects rather vividly in the participation and leadership at trusted, authoritative international venues dealing with 5G cybersecurity such as ETSI's annual Security Week. This is a real 5G security venue with the actual global participants in 5G developments, not Trump's fake ones.

Trump's playbook here dates back to almost identical tactics by the Harding Administration a century ago. Then, xenophobia and anti-globalism were peaking and abetted by a rampant fear of new foreign radio equipment going into rapidly emerging American wireless Internet infrastructure. Harding's tactics did not work out well, and ultimately the policies and global engagement changed significantly.

The costs of Trump's 5G wall fall upon American information industry providers, who are being shut out of extraterritorial markets and losing their effective engagement in global collaborative activities. Trump's actions have been a gift to offshore providers. The costs also fall upon American consumers, who will be served up more expensive products with inferior capabilities that fail to interoperate with the rest of the world. And lastly, Trump has made the nation itself an untrusted pariah worldwide, focused entirely on serving his boundless narcissism and unfettered tweeting of fake information.

The remaining relevant dialogue in international bodies these days is how long it will take the U.S. to recover in a post-Trump world. That will be the great challenge of 2021 and beyond. That is the blueprint for the future that deserves attention over the coming months.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC

Follow CircleID on Twitter

More under: Access Providers, Broadband, Mobile Internet, Policy & Regulation, Telecom, Wireless

Categories: News and Updates

Verisign Extends COVID-19 Wholesale Restore Fee Waiver

Thu, 2020-05-28 17:36

Verisign today announced that the waiver of the wholesale restore fee for .com and .net domain names is extended until August 1, 2020 at 03:59:59 UTC. Two months ago, Verisign, the operator of the .com and .net top-level domains, implemented a number of measures to respond to the emerging COVID-19 pandemic, which included a waiver of the one-time wholesale restore fee for .com and .net domain names. This fee is charged to restore to active status a domain name registration that has been deleted by a registrar. The company hopes that its domain name registrar partners will pass these restore fee waivers on to their customers.

Also noted by Verisign:

  • We are also expanding the waiver of the wholesale restore fees to include the .cc, .tv, and .name TLDs, as well as our four Internationalized TLDs (IDN TLDs), the Hebrew, Korean and Japanese transliterations of .com and the Korean transliteration of .net.
  • Verisign estimates that restore fee waivers have already saved several million dollars for registrants of all types, including hard-hit small businesses.

Follow CircleID on Twitter

More under: Coronavirus, Domain Management, Domain Names, Registry Services

Categories: News and Updates

CircleID Launches the First in a Series of Community Dialogues on COVID-19 and the Internet

Wed, 2020-05-27 20:45

A CircleID community dialogue series to assess challenges and implications of the coronavirus (COVID-19) pandemic on the Internet.

The COVID-19 pandemic has led to the rapid migration of the world's workforce and consumer services to virtual spaces and has amplified Internet governance and policy issues, including infrastructure, access, exponential growth in fraud and abuse, global cooperation, and data privacy, to name but a few. The need for practical, scalable, and efficient solutions has risen dramatically.

This was the context in which CircleID hosted its first community dialogue via virtual conferencing, which took place on May 7. The topic was "COVID-19 and the Internet," as this is certainly top of mind in the CircleID community.

I had the privilege of moderating this event. It included a diverse line up of industry leaders who shared how they are responding to the crisis, how their perspective on their work may have changed, and where they think they are headed in the coming months and years. The conversation was broad-ranging as panelists shared what they were thinking in real-time. The goal was to frame questions during this transformational period for Internet infrastructure and Internet-based commerce. The answers are still unfolding.

The line-up of panelists included (in speaking order):

  • Head of European Policy at Cloudflare, Caroline Greer
  • ICANN CEO Goran Marby
  • Co-Chair of the Data, Privacy & Cybersecurity Practice at Greenberg Traurig, Gretchen A. Ramos
  • Presidential Scholar and Professor of Law at the University of Utah, Jorge Contreras
  • ArkiTechs Inc. CEO Stephen Lee

Mr. Marby explained ICANN's recent initiative to identify suspect "covid" domain name registrations. This initiative drew the most questions from the viewers, as this type of preemptive program is not normally offered by ICANN. Time will tell whether this program is successful and what impact it will have on ongoing concerns about phishing, malware, spam, botnets, and acts of fraud that are pervasive in the DNS. Marby noted that, as of the panel date, of the 80,000 domain names reviewed, 7,000 were identified as potentially malicious.

Ms. Greer and Mr. Lee discussed how their companies are scaling up to meet the demands of their clients. Cloudflare is adding more staff and providing enhanced services. As expected, policymakers in Brussels are focused on COVID-19 response efforts and weathering the crisis. Mr. Lee pointed out that in less-developed regions, like the Caribbean, issues around sustainability are key. As demand for Internet access and bandwidth increases, the need for costly infrastructure increases. The challenge is to manage this increased pressure on internet resources in a time when normally strained economies are considerably more vulnerable.

Ms. Ramos observed how the transition to virtual operations has amplified questions around privacy and data security and that norms are changing. The question is, how? The world is waiting to see.

Professor Contreras described the Open COVID Pledge, a cooperative effort to share patents and copyrighted content in the fight against COVID-19. He is part of an international group of lawyers and academics that created the pledge and a model open-source license to facilitate cooperation and information exchange. Notable signatories include Amazon, Facebook, Hewlett Packard, IBM, Microsoft, and the NASA Jet Propulsion Laboratory at Caltech. The world awaits the outcomes of this endeavor. We plan to check in to see the progress of this initiative.

All the participants stressed that, while swift action was imperative, much remains to be learned about how the responses have worked, what more is needed, and how the changes implemented today will affect operations in a post-COVID-19 world. It was also noted that despite the urgency and uncertainty, a lot of positive innovation and energy has emerged from the response within the Internet community. The Internet was designed to be resilient and scalable. Its capacity and adaptability have never been tested as they have been in the last few months.

CircleID encourages you to watch and share your thoughts. They are especially interested in hearing what issues you think are ripe for more in-depth discussion. They welcome suggestions on topics, questions and potential speakers. Stay tuned…

Written by Lori Schulman, Senior Director, Internet Policy at INTA

Follow CircleID on Twitter

More under: Access Providers, Broadband, Cloud Computing, Coronavirus, Cybersecurity, DNS, Domain Names, ICANN, Brand Protection, Internet Governance, Law, Policy & Regulation, Privacy

Categories: News and Updates

Who Owns Your Connected Device?

Wed, 2020-05-27 18:11

It's been clear for years that IoT companies gather a large amount of data from customers. Everything from a smart thermometer to your new car gathers and reports data back to the cloud. California has tried to tackle customer data privacy through the California Consumer Privacy Act that went into effect on January 1.

Web companies must provide California consumers the ability to opt out of having their personal information sold to others. Consumers must be given the option to have their data deleted from the site. Consumers must be provided the opportunity to view the data collected about them. Consumers also must be shown the identity of third parties that have purchased their data. The new law broadly defines personal data, including name, address, online identifiers, IP addresses, email addresses, purchasing history, geolocation data, audio/video data, biometric data, or any effort made to classify customers by personality type or trends.

However, there is one area that the new law doesn't cover. There are examples over the last few years of IoT companies making devices obsolete and non-functional. Two cases that got much press involve Charter security systems and Sonos smart speakers.

When Charter purchased Time Warner Cable, the company decided that it didn't want to support the home security business it had inherited. Charter ended its security business line earlier this year and advised customers that the company would no longer provide alarm monitoring. Unfortunately for customers, this means their security devices become non-functional. Customers probably felt safe in choosing Time Warner Cable as a security company because the company touted that they were using off-the-shelf electronics like Ring cameras and Abode security devices — two of the most common brands of DIY smart devices.

Unfortunately for customers, most of the devices won't work without being connected to the Charter cloud because the company modified the software to only work in a Charter environment. Customers can connect some of the smart devices like smart thermostats and lights to a different hub, but customers can't repurpose the security devices, which are the most expensive parts of most systems. When the Charter service ended, homeowners were left with security systems that can't connect to a monitoring service or law enforcement. Charter's decision to exit the security business turned the devices into bricks.

In a similar situation, Sonos notified owners of older smart speakers that it would no longer support the devices, meaning no more software upgrades or security upgrades. The older speakers will continue to function but can become vulnerable to hackers. Sonos offered owners of the older speakers a 30% discount on newer speakers.

It's not unusual for older electronics to become obsolete and no longer be serviced by the manufacturer — it's something we're familiar with in the telecom industry. What is unusual is that Sonos told customers that they could not sell their older speakers without permission from the company. Sonos has this ability because the speakers communicate with the Sonos cloud, and Sonos is not going to allow the old speakers to be registered by somebody else. If I were a Sonos customer, I would also assume this to mean that the company is likely to eventually block old speakers from its cloud. The company's notification told customers that their speakers are essentially worthless bricks. This is a shock to folks who spent a lot of money on top-of-the-line speakers.

There are numerous examples of similar incidents in the smart device industry. Google shut down the Revolv smart hub in 2016, making the device unusable. John Deere has the ability to shut off farm equipment costing hundreds of thousands of dollars if farmers use somebody other than John Deere for service. My HP printer gave me warnings that the printer would stop working if I didn't purchase an HP ink-replacement plan.

This raises the question if consumers really own a device if the manufacturer or some partner of the manufacturer has the ability at some future time to shut the device down. Unfortunately, when consumers buy smart devices, they never get any warning about the manufacturer's rights to kill the devices in the future.

I'm sure the buyers of the Sonos speakers feel betrayed. People likely expect decent speakers to last for decades. I have a hard time imagining somebody taking Sonos up on the offer to buy new speakers at a discount to replace the old ones, because in a few years the company is likely to obsolete the new speakers as well. We have all gotten used to the idea of planned obsolescence. Microsoft stops supporting older versions of Windows, and users continue to use the older software at their own risk. But Microsoft doesn't shut down computers running old versions of Windows, as Charter is doing. Microsoft doesn't stop a customer from selling a computer loaded with an old version of Windows to somebody else, as Sonos is doing.

These two examples provide a warning to consumers that smart devices might come with an expiration date. Any device that continues to interface with the original manufacturer through the cloud can be shut down. It would be an interesting lawsuit if a Sonos customer sued the company for essentially stealing their device.

It's inevitable that devices grow obsolete over time. Sonos says the older speakers don't contain enough memory to accept software updates. That's probably true, but the company went way over the line when they decided to kill old speakers rather than let somebody sell them. Their actions tell customers that they were only renting the speakers and that they always belonged to Sonos.

Written by Doug Dawson, President at CCG Consulting

Follow CircleID on Twitter

More under: Internet of Things

Categories: News and Updates

Trust Has Eroded Within the Cybercriminal Underground Causing a Switch to Ecommerce Platforms

Wed, 2020-05-27 02:35

Popular underground goods and services / Trend Micro, May 26, 2020

New data released today indicates that trust has eroded among criminal interactions, causing a switch to ecommerce platforms and communication using Discord, both of which increase user anonymization. "Determined efforts by law enforcement appear to be having an impact on the cybercrime underground," says Trend Micro Research, which conducted the study. "This research helps us inform businesses early about emerging threats, such as Deepfake ransomware, AI bots, Access-as-a-Service and highly targeted SIM-swapping," says Ed Cabrera, Trend Micro's chief cybersecurity officer. "A layered, risk-based response is vital for mitigating the risk posed by these and other increasingly popular threats." The study also found that commoditization has driven prices down for many items: crypting services have dropped from US$1,000 to just $20 per month, and generic botnets from $200 to $5 per day. "Pricing for other items, including ransomware, Remote Access Trojans (RATs), online account credentials and spam services, remained stable, which indicates continued demand."

Other scenarios expected in the underground economy within the next three years:

  • Deepfake ransomware will be the evolution of sextortion
  • More cybercrime will hit Africa in the next three to five years
  • Cybercriminals will find a scalable business model that takes advantage of the IoT’s wide attack surface
  • Smart contracts in escrow offered in underground forums
  • SIM card hijacking will increase and target high-level executives

Follow CircleID on Twitter

More under: Cybercrime, Cybersecurity

Categories: News and Updates

When It Comes to Domain Name Rights Protection,

Wed, 2020-05-27 00:36

It is not surprising that the phase 1 review of domain name rights protection mechanisms is delayed, but it is a bit of a surprise that in responding to a question posed in 2020, business executives and their lawyers replied with answers first offered and rejected five years earlier.

In that time before COVID-19, the launch of the Vox Populi Registry and its dotSucks domain names drew quite a lot of attention. After all, unlike .com, .net and other legacy top-level domains that came before it, dotSucks did not hint at its meaning. And unlike many other new top-level domains, it was not limited to a practice or industry. It does not mince words.

The run-up to the launch of the registry saw a flurry of stories that ran the gamut from "(slaps his forehead) Why didn't I think of that?" to "Why, that's outrageous." In fact, the registry's business model was the first to focus on the value of a domain name. A domain name might cost $8 or $10 a year to renew, but its value to the social media company that holds it is almost incalculable. Vox Pop did the math.

In order to make the point, we talked about setting registration prices as high as $25,000. That was all part of a pre-launch marketing campaign to make the case for targeted value rather than mass-market low price. We weren't interested in flooding the Internet with names, just working with registrants who saw the value in them.

There were already companies hip to the use of colloquial language to make their marketing points. One of my favorites was Jolly Rancher, which ran a national ad campaign during the NFL season under the banner of "Being a Rookie Sucks." Another was Lagunitas Brewing Company, which chose its consumer complaint email address in the same spirit.

This is still our approach. It is why we have become sponsors and, I hope, solid citizens within groups like INTA, the International Trademark Association. And why we don't ever see Vox Populi Registry show up on those lists of registries whose names are used for fraudulent or malicious purposes. It is yet another aspect of our value.

In the current review of rights protection mechanisms, the question that triggered the nostalgic outburst was Sunrise question 2 (b): "To the extent, you have identified abuses of the Sunrise Period, if any, please describe them and specify any documentation to substantiate the identified abuse."

On the spreadsheet created by ICANN to make it easy to find and compare the many comments for each individual question, the answers to Sunrise question 2 (b) can be found, with some irony, in a column labeled BS. I am not making that up, but, whether by accident or design, I am taking it as subliminal commentary.

And just what kind of BS is in column BS?

We see again the criticism of "price gouging" by Vox Populi Registry. We hear again that its pricing is "discriminatory," that it is "pricing higher than cost recovery" and that the Sunrise list of the registry was populated by using data from the Trademark Clearinghouse.

None of that was true in 2015. None of it is true in 2020.

Ultimately, it is disappointing that five years in, with a wealth of market data showing the value of owning your mistakes, speaking the language of the customers you seek and meeting the expectations of those who want to be heard, some business executives and their lawyers continue to criticize without basis. That.Sucks.

For anyone still unsure of the value of the dotSucks platform, I recommend a couple of minutes from Jerry Seinfeld's new comedy performance, "23 Hours to Kill", on Netflix. Begin at about the six-minute mark of the show. The gist is this: "Sucks and great are pretty close. They're not that different."

That has been our point all along.

Written by John Berard, Founder, Credible Context & CEO, Vox Populi Registry


More under: Domain Management, Domain Names, Brand Protection, New TLDs

Categories: News and Updates

Public and Private Infrastructure Investment Alternatives

Tue, 2020-05-26 00:49

Electrifying rural South Dakota – A poster promoting membership in rural electric cooperatives. Ca. 1940. Courtesy of the Rural Electric Cooperative Association.

"The strategic goal of infrastructure is not to derive economic benefit from the asset itself but to generate economic benefit by maximizing the use of the asset."
Steve Song

Eric Yuan, CEO of the Zoom teleconferencing service, stated in a webinar last month that the number of daily meeting participants increased from 10 million in December 2019 to 200 million in March 2020. I've been teaching 21 students using Zoom as a result of the COVID-19 pandemic, and the audio and video are smooth and switching between speakers is seamless. Offhand, I cannot think of any technology that has scaled so well so fast.

When I teach, I use transport offered by Charter, Amazon, and others to reach Zoom's application on a server in an Amazon data center in Virginia. (Zoom has servers in 16 data centers around the world). Zoom's rapid expansion would not have been possible without the transport and application-service infrastructure provided by private investment.

It is a remarkable success story, but imperfect.

Two of my students have been unable to participate in our Zoom meetings because they cannot afford fixed Internet access at home, the campus labs are closed, and data caps limit their participation with mobile phones. I can afford home connectivity, but Charter is the only broadband provider on my block, so I must pay whatever they decide to charge me. That is the situation in Los Angeles, and there are rural areas in the US and many locations in other nations where broadband connectivity is not available at any price. Amazon has competition, but its dominant infrastructure position provides it with opportunities to "be evil" if it is not monitored.

The Federal government funded the research, development, and procurement that led to the Internet, then turned to private companies like Amazon and Charter to create the infrastructure Zoom and others use. The COVID-19 pandemic, with its attendant substitution of communication for transportation, highlights the fact that Internet access is as much a necessity today as access to sidewalks, roads, and highways.

Can publicly-owned infrastructure fill the Internet infrastructure gap?

Singapore ISP equity, June 2000

We have some municipal broadband in the US, but it is roadblocked or outlawed in 22 states, and the states with restrictions have higher Internet prices on average than the others. Public Internet infrastructure planning and investment are found in other nations as well. For example, Stockholm has provided municipal fiber as a service for over 25 years, and around the same time the Singapore government decided Internet infrastructure was strategic and took equity positions in the nascent Internet service providers. (Internet service in Sweden and Singapore costs less than half of what I pay in Los Angeles today.)

China seems to follow a semi-public strategy of funding private companies and allowing them to compete against each other while retaining political control rather than equity. They followed this strategy in developing terrestrial Internet infrastructure and applications and are doing the same with satellite broadband. Community networks, where the users own and operate the network, are another form of quasi-government ownership.

I don't mean to imply that public ownership is inherently superior to private ownership. Public ownership may lead to cronyism and bureaucracy. For example, Cuba has a bureaucratic government-monopoly Internet service provider; Cuban infrastructure and access lag behind those of other Latin American and Caribbean nations; content is controlled; and the government recently confiscated SNET, a large and successful community network (which was not connected to the Internet).

There is no simple, optimal public/private policy, and whatever we do will need to be continuously monitored and adjusted as people learn to game the system. But the proposal for the creation of a National Investment Authority (NIA) by Cornell University law professors Saule Omarova and Robert C. Hockett is a good place to begin the discussion.

The NIA would bail out citizens and critical organizations during a crisis like COVID-19 and invest in socially valuable collective goods like rural broadband, renewable energy, affordable housing, and clean water during stable times. An independent NIA governing board would set development goals and strategy, but would not make investment decisions. Those would be made by a National Infrastructure Bank (NIB) and a National Capital Management Corporation (NCMC).

The NIB would buy and securitize bonds that municipalities and other public and private actors issue, and the NCMC would seek investors in a collection of socially valuable investment funds the way a privately owned asset-management/venture-capital firm like BlackRock does.

But why would a private investor put money into an NCMC fund focused on long-term social return rather than into a fund of a private asset-management firm that seeks to maximize financial return? The government would guarantee an attractive, relatively short-term return on investments in NCMC funds. It would convert the expected long-term return to society into a reasonable short-term return to private investors.
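That conversion can be illustrated with toy compound-interest arithmetic; the rates, horizons, and amounts below are invented for the example, not taken from the NIA proposal.

```python
# Illustrative arithmetic only: a guaranteed short-term private return
# funded against a larger, slower social return. All figures invented.
def compound(principal: float, rate: float, years: int) -> float:
    """Value of `principal` compounded annually at `rate` for `years`."""
    return principal * (1 + rate) ** years

invest = 1_000_000.0
investor_gets = compound(invest, 0.04, 5)    # guaranteed short-term return
society_gets = compound(invest, 0.07, 30)    # expected long-term social return
print(f"investor after 5y:  ${investor_gets:,.0f}")
print(f"society after 30y: ${society_gets:,.0f}")
```

The guarantee works, in this toy framing, because the short-term obligation to investors is small relative to the long-run value the infrastructure is expected to create.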

The public foots the bill for bailouts today and the NIA would give us a seat at the investment-decision table. It would face political hurdles, but so did the New Deal at the time of an earlier crisis. If the NIA sounds interesting, check out this short article, podcast interview (with transcript), or this detailed paper.

Written by Larry Press, Professor of Information Systems at California State University


More under: Access Providers, Broadband, Coronavirus

Categories: News and Updates

Emerging Communications Technologies

Mon, 2020-05-25 23:20

A "New IP" framework was proposed to the ITU last year. This framework envisages a resurgence of a network-centric view of communications architectures where network-managed control mechanisms moderate application behaviors.

It's not the first time that we've seen proposals to rethink the underlying architecture of the Internet's technology (for example, the "Clean Slate" efforts in the US research community a decade or so ago), and it certainly won't be the last. However, this New IP framework is very prescriptive in terms of bounding application behaviors, and it seems to ignore the most basic lesson of the past three decades of evolution: communications services are no longer a command economy. These days the sector operates as a conventional market-based economy, and the market for diverse services is expressed in the diversity of application behaviors.

What this market-based economy implies is that ultimately what shapes the future of the communications sector, what shapes the services that are provided, and even the technologies used to generate such services are the result of consumer choices. Consumers are often fickle, entranced by passing fads, and can be both conservative and adventurous at the same time. But whatever you may think of the sanity of consumer markets, it's their money that drives this industry. Like any other consumer-focused services market, what consumers want, they get.

However, it's more than simple consumer preferences. This change in the economic nature of the sector also implies changes in investors and investment, changes in operators, changes in the collective expectations of the sector, and the way these expectations are phrased. It's really not up to some crusty international committee to dictate future consumer preferences. Time and time again, these committees with their lofty titles, such as "the Focus Group on Technologies for Network 2030," have been distinguished by their innate ability to see their considered prognostications comprehensively contradicted by reality! Their forebears in similar committees missed computer mainframes, then they failed to see the personal computer revolution, and were then totally surprised by the smartphone. It's clear that no matter what the network will look like some ten years from now, what it won't be is what this 2030 Focus Group pondering a new IP is envisaging!

I don't claim any particular ability to do any better in the area of divination of the future, and I'm not going to try. But in this process of evolution, the technical seeds of the near-term future are already visible today. What I would like to do here is describe what I think are the critically important technical seeds and why.

This is my somewhat arbitrary personal choice of technologies that I think will play a prominent role on the Internet over the next decade.

The foundation technology of the Internet, and indeed of the larger digital communication environment, is the concept of packetization, replacing the previous model of circuit emulation.

IP advocated a radical change to the previous incumbency of telephony. Rather than an active time-switched network with passive edge devices, the IP architecture advocated a largely passive network whose internal elements simply switched packets. The functionality of the service response was intended to be pushed out to the devices at the edge of the network. The respective roles of networks and devices were inverted in the transition to the Internet.

But change is hard, and for some decades many industry actors with interests in the provision of networks and network services strove to reverse this inversion of the service model. Network operators tried hard to introduce network-based service responses while handling packet-based payloads. We saw efforts to develop network-based Quality of Service approaches that attempted to support differential service responses for different classes of packet flows within a single network platform. Some twenty years later, I think we can call this effort a Grand Failure. Then there was virtual circuit emulation in MPLS, and more recent variants of loose source routing (SR) approaches. It always strikes me as odd that these approaches require orchestration across all active elements in a network when the basic functionality of traffic segmentation can be offered at far lower cost through ingress traffic grooming. But, cynically, I guess the way to sell more fancy routers is to distribute complexity across the entire network. I would hesitate to categorize any of these technologies as emerging, as they seem more like regressive measures, motivated by a desire to "value-add" to an otherwise undistinguished commodity service of packet transmission. The longevity of some of these efforts to create network-based services is a testament to network operators' resistance to accepting their role as a commodity utility rather than to any inherent value in the architectural concept of circuit-based network segmentation.

At the same time, we've made some astonishing progress in other aspects of networking. We've been creating widely dispersed fault-tolerant systems that don't rely on centralized command and control. Any student of the inter-domain routing protocol BGP, which has been quietly supporting the Internet for some three decades now, could not fail to be impressed by the almost prescient design of a distributed system for managing a complex network that is now up to nine orders of magnitude larger than the network of the early 1990s for which it was originally devised. We've created a new kind of network that is open and accessible. It was impossible to develop new applications for the telephone network, yet on the Internet, that's what happens all the time. From the vibrant world of apps down to the very basics of digital transmission, the world of networking is in a state of constant flux, and new technologies are emerging at a dizzying rate.

What can we observe about emerging technologies that will play a critical role in the coming years? Here is my personal selection of recent technical innovations that I would classify into the set of emerging technologies that will exercise a massive influence over the coming ten years.

Optical Coherence

For many decades the optical world used the equivalent of a torch. There was either light passing down the cable or there wasn't. This "on-off keying" (OOK) simple approach to optical encoding was continuously refined to support optical speeds of up to 10Gbps, which is no mean feat of engineering, but at that point it was running into some apparently hard limits in the digital signal processing that OOK uses.

But there is still headroom in the fiber for more signal. We are now turning to Optical Coherence and have unleashed a second wave of innovation in this space. Exploiting Optical Coherence is a repeat of a technique that has been thoroughly exercised in other domains. We used phase-amplitude keying to tune analogue baseband voice-circuit modems to produce 56kbps of signal while operating across a 3kHz bandwidth carrier. Similar approaches were used in the radio world, where we now see 4G systems supporting data speeds of up to 200Mbps.

The approach relies on the use of phase-amplitude and polarisation keying to wring out a data capacity close to the theoretical Shannon limit. Optical systems of 100Gbps per wavelength are now a commodity in the optical marketplace, and 400G systems are coming on stream. We will likely see Terabit optical systems in the coming years using high-density phase-amplitude modulation coupled with custom-trained digital signal processing. As with other optical systems, it's also likely that we'll see the price per unit of bandwidth on these systems plummet as production volumes increase. In today's world, communications capacity is an abundant resource, and that abundance gives us a fresh perspective on network architectures.
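The headroom that phase-amplitude keying exploits can be seen in a back-of-the-envelope application of the Shannon capacity formula, C = B·log2(1 + SNR). The bandwidths and SNR figures below are illustrative round numbers, not measurements from any particular system.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon limit: maximum error-free bit rate for a given bandwidth and SNR."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz voice channel at 40 dB SNR: roughly 40 kbps -- the modem era.
print(f"{shannon_capacity_bps(3e3, 40):,.0f} bps")
# A 50 GHz optical channel at a modest 10 dB SNR: on the order of 170 Gbps
# per wavelength, before polarisation doubles it again.
print(f"{shannon_capacity_bps(50e9, 10):,.0f} bps")
```

The same formula explains both the 56kbps modem and the 100G coherent transponder: capacity scales linearly with bandwidth but only logarithmically with signal power.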


5G

What about radio systems? Is 5G an "emerging technology"?

It's my opinion that 5G is not all that different from 4G. The real change was shifting from circuit tunneling using PPP sessions to a native IP packet forwarding system, and that was the major change from 3G to 4G. 5G looks much the same as 4G, and the fundamental difference is the upward shift in radio frequencies for 5G. Initial 5G deployments use 3.8GHz carriers, but the intention is to head into the millimeter-wave band of 24GHz to 84GHz. This is a mixed blessing in that higher carrier frequencies can assign larger frequency blocks and therefore increase the carrying capacity of the radio network, but at the same time, the higher frequencies use shorter wavelengths, and these millimeter-sized shorter wavelengths behave more like light than radio. At higher frequencies, the radio signal is readily obstructed by buildings, walls, trees and other larger objects, and to compensate for this, any service deployment requires a significantly higher population of base stations to achieve the same coverage. Beyond the hype, it's not clear if there is a sound sustainable economic model of millimeter-wave band 5G services.
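The "millimeter-sized" claim follows directly from λ = c/f. A quick sketch, using the carrier frequencies mentioned above:

```python
SPEED_OF_LIGHT = 299_792_458  # m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Free-space wavelength in millimetres for a carrier frequency in GHz."""
    return SPEED_OF_LIGHT / (freq_ghz * 1e9) * 1000

# 3.8 GHz is ~79 mm; at 24-84 GHz the wavelength shrinks to a few mm,
# which is why walls and trees start behaving like opaque obstacles.
for f in (3.8, 24.0, 84.0):
    print(f"{f:5.1f} GHz -> {wavelength_mm(f):6.1f} mm")
```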

For those reasons, I'm going to put 5G at the bottom of the list of important emerging technologies. Radio and mobile services will remain incredibly important services on the Internet, but 5G represents no radical change in the manner of use of these systems beyond the well-established 4G technology.


IPv6

It seems odd to consider IPv6 as an "emerging technology" in 2020. The first specification of IPv6, RFC1883, was published in 1995, which makes it a 25-year-old technology. But it does seem that after many years of indecision and even outright denial, the IPv4 exhaustion issues are finally driving deployment decisions, and these days one-quarter of the Internet's user devices use IPv6. This number will inexorably rise.

It's hard to say how long it will take for the other three quarters, but the conclusion looks pretty inevitable. If the definition of "emerging" is one of large-scale increases in adoption in the coming years, then IPv6 certainly appears to fit that characterization, despite its already quite venerable age!
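The scale gap that makes IPv4 exhaustion a one-way street can be illustrated with Python's standard `ipaddress` module; the prefixes below are the reserved documentation ranges, not real allocations.

```python
import ipaddress

v4 = ipaddress.ip_network("203.0.113.0/24")  # IPv4 documentation prefix (RFC 5737)
v6 = ipaddress.ip_network("2001:db8::/32")   # IPv6 documentation prefix (RFC 3849)

print(v4.num_addresses)  # 256 addresses in a /24
print(v6.num_addresses)  # 2**96 addresses in a single /32 allocation
```

A single routine IPv6 allocation contains more addresses than could ever be squeezed out of the entire 32-bit IPv4 space.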

I just hope that we will work out a better answer to the ongoing issues with IPv6 Extension Headers, particularly in relation to packet fragmentation, before we get to the point of having to rely on IPv6-only service environments.


BBR

Google's Bottleneck Bandwidth and Round-trip time TCP control algorithm (BBR) is a revolutionary control algorithm that is, in my mind, equal in importance to TCP itself. This transport algorithm redefines the relationship between end hosts, network buffers, and speed, and allows end systems to efficiently consume available network capacity at multi-gigabit speeds without being hampered by poorly designed active packet-switching elements.

Loss-based congestion control algorithms have served us well in the past, but these days, as we contemplate end-to-end speeds of hundreds of gigabits per second, such conservative loss-based control algorithms are impractical. BBR implements an entirely new perspective on both flow control and speed management, attempting to stabilize the flow rate at a fair share of the available network capacity rather than probing for loss. This is a technology to watch.
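The essence of BBR's model can be sketched as two running filters: the maximum recently observed delivery rate (the bottleneck bandwidth estimate) and the minimum recently observed round-trip time (the propagation delay estimate). Their product, the bandwidth-delay product, bounds how much data is worth keeping in flight. This is a toy illustration of that idea, not Google's implementation, which adds windowed filters, pacing gains, and probing phases.

```python
class BbrEstimator:
    """Toy model of BBR's two core filters: max delivery rate, min RTT."""

    def __init__(self) -> None:
        self.btl_bw = 0.0             # bottleneck bandwidth estimate, bytes/sec
        self.rt_prop = float("inf")   # round-trip propagation time estimate, sec

    def on_ack(self, delivered_bytes: float, interval_s: float, rtt_s: float) -> None:
        # Queue build-up lowers the delivery rate and raises the RTT,
        # so max/min filters naturally ignore samples taken from a full queue.
        self.btl_bw = max(self.btl_bw, delivered_bytes / interval_s)
        self.rt_prop = min(self.rt_prop, rtt_s)

    @property
    def bdp(self) -> float:
        """Bandwidth-delay product: the data in flight needed to fill the pipe."""
        return self.btl_bw * self.rt_prop

est = BbrEstimator()
est.on_ack(125_000, 0.01, 0.020)  # 12.5 MB/s sample at 20 ms RTT
est.on_ack(100_000, 0.01, 0.025)  # slower, queue-inflated sample: filtered out
print(est.bdp)                    # 12.5e6 * 0.02 = 250000.0 bytes
```

Pacing to the bandwidth-delay product, rather than filling buffers until packets drop, is what lets BBR run fast without the self-inflicted queuing delay of loss-based control.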


QUIC

There has been a longstanding tension between applications and networks. In the end-to-end world of TCP, the network's resources are shared across the set of active clients in a manner determined by the clients themselves. This has always been anathema to network operators, who would prefer to actively manage their network's resources and provide deterministic service outcomes to customers. To achieve this, it's common to see various forms of policy-based rate policers in networks, where the 'signature' of the packet headers can indicate the application generating the traffic, which, in turn, generates a policy response. Such measures require visibility into the inner contents of each IP packet, which is conventionally the case with TCP.

QUIC is a form of encapsulation that uses a visible outer wrapping of UDP packets and encrypts the inner transport and content payload. Not only does this approach hide the flow-control parameters from the network and the network's policy engines, it also lifts control of the data flow algorithm away from the common host operating system platform and places it in the hands of each application. This gives greater control to the application, so that the application can adjust its behavior independently of the platform it is running on.

In addition, it removes the "one size that is equally uncomfortable for all" model of data flow control used in operating-system-based TCP implementations. With QUIC, the application itself can tailor its flow-control behavior to optimize its performance within the parameters of the current state of the network path.

This shift of control from the platform to the application will likely continue. Applications want greater agility and greater control over their own behaviors and services. By using a basic UDP substrate, the host platform's TCP implementation is bypassed, and the application can then operate in a way that is entirely under its own control.
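The structural move is simple to demonstrate: an application-defined transport carried inside opaque UDP payloads. This is a minimal sketch of the idea, not QUIC's actual wire format; the 4-byte "sequence number" is an invented, application-private field that no middlebox can interpret.

```python
import socket

# Receiver: a plain UDP socket on loopback. From the network's point of
# view, only the UDP header is visible; everything inside the datagram
# is opaque application data.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))       # let the OS pick a free port
rx.settimeout(2.0)
addr = rx.getsockname()

# Sender: the "transport header" (here, just a sequence number) is defined
# by the application, so flow-control state lives in the app, not the OS.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = (42).to_bytes(4, "big")
tx.sendto(seq + b"hello", addr)

data, _ = rx.recvfrom(1500)
print(int.from_bytes(data[:4], "big"), data[4:])  # prints: 42 b'hello'
tx.close()
rx.close()
```

Everything beyond the UDP header is a private contract between the two application endpoints, which is precisely what makes it invisible to policy engines.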

Resolverless DNS

I was going to say "DNS over HTTPS" (DoH), but I'm not sure that DoH itself is a particularly novel technology, so I'm not sure it fits into this category of "emerging technologies." We've used HTTPS as a firewall-tunneling and communication privacy-enhancing technology for almost as long as firewalls and privacy concerns have existed, and software tools that tunnel IP packets in HTTPS sessions are readily available and have been for at least a couple of decades. There is nothing novel there. Putting the DNS into HTTPS is just a minor change to the model of using HTTPS as a universal tunneling substrate.

However, HTTPS itself offers some additional capabilities that plain old DNS over TLS, the secure-channel part of HTTPS, cannot intrinsically offer. I'm referring to "server push" technologies on the web. For example, a web page might refer to a custom style sheet to determine the intended visual setting of the page. Rather than having the client perform another round of DNS resolution and connection establishment to get this style sheet, the server can push the resource to the client along with the page that uses it. From the perspective of HTTP, DNS requests and responses look like any other data-object transactions, and pushing a DNS response without a triggering DNS query is, in HTTP terms, little different from, say, pushing a stylesheet.

However, this is a profound step in terms of the naming architecture of the Internet. What if names were only accessible within the context of a particular web environment, and inaccessible using any other tool, including conventional DNS queries? The Internet can be defined as a single coherent namespace. We can communicate with each other by sending references to resources, i.e., names, and this makes sense only when the resources I refer to by using a particular name are the same resources that you will refer to when you use the same name. It should not matter what application is used or what the context of the query might be; the DNS resolution result is the same. However, when content pushes resolved names to clients, it is simple for content to create its own naming context that is uniquely different from any other. There is no longer one coherent namespace, but many fragmented, potentially overlapping namespaces, and no clear way to disambiguate conflicting uses of names.
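The fragmentation risk can be made concrete with a toy model in which each application consults its own cache of pushed answers before the shared DNS. The names and addresses below are invented (drawn from documentation ranges) purely for illustration.

```python
# Toy model of resolverless DNS: per-application pushed-name caches are
# consulted before the shared namespace, so the same name can resolve
# differently in different application contexts.
SHARED_DNS = {"example.com": "192.0.2.10"}

def resolve(name: str, pushed: dict) -> str:
    """Pushed (app-local) answers win over the shared resolver."""
    return pushed.get(name, SHARED_DNS.get(name, "NXDOMAIN"))

app_a = {"example.com": "198.51.100.7"}  # a server-pushed answer in app A
app_b = {}                               # app B still uses the shared namespace

print(resolve("example.com", app_a))  # 198.51.100.7
print(resolve("example.com", app_b))  # 192.0.2.10 -- no longer one namespace
```

Once two applications can disagree about what a name means, the name itself stops being a reliable reference to pass between people.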

The driver behind many emerging technologies is speed, convenience, and tailoring the environment to each user. From this perspective, resolverless DNS is pretty much inevitable. However, the downside is that the Internet loses its common coherence, and it's unclear whether this particular technology will have a positive impact on the Internet or a highly destructive one. I guess we will see in the coming few years!

Quantum Networking

In 1936, long before we built the first of the modern-day programmable computers, the British mathematician Alan Turing devised a thought experiment of a universal computing machine and, more importantly, classified problems into "computable" problems, where a solution is achievable in finite time, and "uncomputable" problems, where a machine will never halt. In some ways, we knew even before the first physical computer that there existed a class of problems that were never going to be solved with a computer. Peter Shor performed a similar feat in 1994, devising an algorithm that performs prime factorization in finite time on a yet-to-be-built quantum computer. The capabilities (and limitations) of this novel form of mechanical processing were being mapped out long before any such machine had been built. Quantum computers are an emerging, potentially disruptive technology in the computing world.

There is also a related emerging technology, Quantum Networking, where quantum bits (qubits) are passed across quantum networks. Like many others, I have no particular insight as to whether quantum networking will be an esoteric diversion in the evolution of digital networks or whether it will become the conventional mainstream foundation for tomorrow's digital services. It's just too early to tell.

Architectural Evolution

Why do we still see constant technical evolution? Why aren't we prepared to say: "Well, that's the job done. Let's all head to the pub!"? I suspect that the pressure to continue to alter the technical platforms of the Internet comes from the evolution of the architecture of the Internet.

One view of the purpose of the original model of the Internet was to connect clients to services. We could have had each service run a dedicated access network, so that a client would need to use a specific network to access a specific service, but after trying this in a small way in the 1980s, the general reaction was to recoil in horror! So we used the Internet as the universal connection network. As long as all services and servers were connected to this common network, a connected client could access any service.

In the 1990s, this was a revolutionary step, but as the number of users grew, it outpaced the server model's ability to scale, and the situation became unsustainable. Popular services became the digital equivalent of black holes in the network. We needed a different solution, and we came up with content distribution networks (CDNs). CDNs use a dedicated network service to maintain a set of equivalent points of service delivery all over the Internet. Rather than using a single global network to access any connected service, all the client needs is an access network that connects it to the local CDN access point. The more we use locally accessible services, the less we use the broader network.
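The steering decision at the heart of this model can be sketched in a few lines: direct each client to the replica with the lowest measured latency, so that most traffic stays local. The node names and RTT figures are invented for the example; real CDNs typically steer with DNS or anycast rather than client-side probing.

```python
# Toy CDN replica selection: pick the point of delivery with the lowest RTT.
def nearest_replica(rtts_ms: dict) -> str:
    """Return the replica name with the smallest measured RTT."""
    return min(rtts_ms, key=rtts_ms.get)

# Hypothetical probe results from one client to three edge nodes:
client_probes = {"lax-edge": 4.2, "iad-edge": 61.0, "ams-edge": 140.5}
print(nearest_replica(client_probes))  # lax-edge
```

When nearly every popular object is a few milliseconds away, the long-haul paths carry only the residue, which is the economic shift the next paragraphs explore.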

What does this mean for technologies?

One implication is the weakening of the incentives to maintain a single consistent, connected Internet. If the majority of digitally delivered services desired by users can be obtained through a purely local access framework, then who is left to pay for the considerably higher costs of common global transit to access the small residual set of remote-access-only services? Do local-only services even need access to globally unique infrastructure elements?

NATs are an extreme case in point: local-only services are quite functional with local-only addresses, and the proliferation of local user names leads to a similar conclusion. It is difficult to avoid the conclusion that the pressures for Internet fragmentation increase with the rise of content distribution networks. However, if one looks at fragmentation in the same way as entropy in the physical world, then it takes constant effort to resist it. Without the constant application of effort to maintain a global system of unique identifiers, we appear to move towards networks that have only local scope.

Another implication is the rise of specific service scoping in applications. An example can be seen in the first deployments of QUIC, which was exclusively used by Google's Chrome browser when accessing Google web servers. The transport protocol, which was conventionally placed into the operating system as a common service for applications, was lifted up into the application. The old design considerations that favored a common set of operating system functions over tailored application functionality no longer apply. With the deployment of more capable end systems and faster networks, we can construct highly customized applications. Browsers already support many of the functions that we used to associate only with operating systems, and many applications appear to be following this lead. It's not just a case of wanting finer control over the end-user experience, although that is an important consideration, but also a case of each application shielding its behavior and interactions with the user from other applications, from the host operating system platform, and from the network.

If the money that drives the Internet is the money derived from knowledge of the end user's habits and desires, which certainly appears to be the case for Google, Amazon, Facebook and Netflix, and many others, then it would be folly for these applications to expose their knowledge to any third party. Instead of applications that rely on a rich set of services provided by the operating system and the network, we are seeing the rise of the paranoid application as the new technology model. These paranoid applications not only minimize their points of external reliance, but they also attempt to minimize the visibility of their behaviors as well.

Change as a Way of Life

The pressure of these emerging technologies competing with the incumbent services and infrastructure on the Internet is perhaps the most encouraging sign that the Internet is still alive and still quite some time away from a slide into obsolescence and irrelevance. We are still changing the basic transmission elements, the underlying transport protocols, the name and addressing infrastructure, and the models of service delivery.

And that's about the best signal we could have: the Internet is by no means a solved problem and poses many important technology challenges.

Where does this leave the New IP proposal?

In my view, it's going nowhere useful. I think it heads for the same fate as a long list of predecessors: yet another futile effort to adorn the network with knobs and levers in an increasingly desperate attempt to add value that no user is prepared to pay for.

The optical world and the efforts of the mobile sector are transforming communications into an abundant, undistinguished commodity, and efforts to ration it out in various ways, or to add unnecessary adornments, are misguided. The network is no longer managing applications. There is little left of any form of cooperation between the network and the application, as the failure of ECN attests. Applications are now hiding their control mechanisms from the network and making fewer and fewer assumptions about the characteristics of the network, as we see with QUIC and BBR.

So, if all this is a Darwinian process of evolutionary change, then it seems that the evolutionary attention currently lives in user space as applications on our devices. Networks are just there to carry packets.

Written by Geoff Huston, Author & Chief Scientist at APNIC

Follow CircleID on Twitter

More under: Internet Protocol, IP Addressing, IPv6, Networks, Wireless

Categories: News and Updates

Help Science Fight COVID-19

Mon, 2020-05-25 17:53

Many organizations and individuals are socially committed and voluntarily help the weak, the poor, and the sick. Others consider how they can contribute. Supporting organizations and individuals by starting an aid project, donating money, or providing human resources, can make a crucial difference.

The corona crisis is a challenge for many, if not all. Scientists around the world are experimenting with cures and vaccines, and they need help. However, you don't have to be a virologist to help science fight COVID-19. All you need is a computer and a free program called Folding@Home. It is not entirely new, but interest in it has surged again due to the current pandemic. Stanford University initiated the project in 2000 and has used it to research cancer, Alzheimer's, and Parkinson's. Since 2019, the Washington University School of Medicine in St. Louis has led the project. It is a non-profit project, and the generated results are not for sale. Researchers worldwide can access these data sets on request.

The software utilizes the home computer for a vast experiment. Researchers simulate how viruses attack the human organism. Viruses use specific proteins, and there are innumerable ways in which these proteins can change their shape. It is like a massive bunch of keys on which you have to find the ones that fit. Because no single computer could try every key, everyone can help with their PC.

By now, users provide more than 2.4 exaflops of computing power — exceeding the combined computing power of the 500 fastest supercomputers on Earth. Individual PCs can thus jointly simulate how active ingredients interact with the virus's spike protein, which is needed for docking to human host cells. These simulations limit physical tests to dozens or hundreds instead of thousands in search of a suitable active ingredient. Despite all efforts and the latest findings on COVID-19, many questions remain unanswered in finding effective drugs and vaccines. Therefore, more computing power will help.

More than 2.7 million users and over 250,000 teams worldwide have already registered. Some use their home computer, others their servers in a data center, and yet others have set up dedicated hardware. It is an illustrious group: large organizations like Amazon, AMD, Apple, CERN, Google, Hewlett Packard, Intel, Microsoft, Nvidia, Petrobras, and VMware have joined, but also smaller companies, such as domain name registrars and hosting companies.

Anyone using Windows, macOS, or Linux can download a client that runs in the background. The client supports single and multi-core CPUs as well as GPUs from Nvidia and AMD. The client downloads work units containing protein data. Work units are a fraction of the simulation between states in a Markov state model. After processing the work unit, the client returns the result to Folding@Home, which then credits the user in return. This cycle repeats automatically. All work units have deadlines, and when a user misses a deadline, work units are redistributed.
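The cycle described above (download a work unit, process it, return the result, receive credit, with missed deadlines triggering redistribution) can be sketched as a toy server model. Everything here is illustrative: the class and method names are hypothetical and bear no relation to Folding@Home's actual protocol.

```python
class WorkServer:
    """Toy model of the work-unit cycle: fetch, process, submit, credit.
    All names and the credit scheme are illustrative, not the real protocol."""

    def __init__(self, units, deadline_s):
        self.pending = list(units)   # work units awaiting assignment
        self.deadline_s = deadline_s
        self.assigned = {}           # unit -> deadline timestamp
        self.credits = {}            # client -> number of credited units

    def fetch(self, now):
        """A client downloads a work unit; the server records its deadline."""
        # Redistribute any unit whose deadline has passed
        for unit, due in list(self.assigned.items()):
            if now > due:
                del self.assigned[unit]
                self.pending.append(unit)
        if not self.pending:
            return None
        unit = self.pending.pop(0)
        self.assigned[unit] = now + self.deadline_s
        return unit

    def submit(self, client, unit, now):
        """A client returns a result; the server credits it if in time."""
        due = self.assigned.pop(unit, None)
        if due is not None and now <= due:
            self.credits[client] = self.credits.get(client, 0) + 1
            return True
        return False  # late or unknown: the unit is redistributed instead
```

In this sketch, a unit fetched at t=5 with a 10-second deadline must be submitted by t=15; a fetch at t=20 hands the same unit to the next client.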

It is a fallacy to believe that help from large companies suffices. Exploring all the variations in the search for new cures, drugs, and vaccines will take a lot of time and effort. In total, more than 200 scientific publications have been published as a direct result of Folding@Home. There is still a long way to go, and a straightforward principle applies — the more, the merrier. Join in now!

Written by Tobias Sattler, CTO united-domains

More under: Cloud Computing, Coronavirus, Data Center, Networks

Is Teleworking Here to Stay?

Fri, 2020-05-22 19:34

Travis County, TX says it plans to have large part of its workforce continue working from home permanently. (KXAN / MAY 13, 2020)

Broadband networks are stretched thin today due to the large numbers of adults and students working from home. There are many stories on the web that indicate that a lot of employees are not going to be going back to the office when the pandemic is over.

Here are two stories about a trend towards more teleworking from the dozens that a Google search uncovered. The government in Travis County, TX says that as many as 3,000 of their 5,000 employees might be asked to work from home at the end of the pandemic. This is a large county that includes Austin and the surrounding suburbs. There are about 2,000 employees who can't work from home, including law enforcement, medical examiners, and offices that work with the public like the County Clerk's office — but the government will consider sending everybody else home to work. The County says that productivity has gone up since employees went home, and the County is pleased with the noticeable difference in air pollution from fewer commuters.

An article in Marketwatch had interviews with the CEOs of six tech companies, and all thought that a significant portion of the workforce would never be brought back to the office after the end of the pandemic. For example, Stewart Butterfield of Slack recently told investors that he would expect 20% to 40% of the company's workforce to remain at home. The other CEOs voiced similar opinions. They also said their companies are likely to permanently dial back on travel and attendance at conferences. The CEOs were excited about the options created by being able to hire talented employees from across the country.

There are some obvious impacts if companies everywhere adopt this kind of thinking. It bodes poorly for expensive office space in downtown areas. There would be a big downturn in all of the businesses that serve commuters, like restaurants and parking garages, if a significant portion of workers never returns to the big city centers. There would be a drop in transit revenues and road tolls.

It also has long-term implications for broadband. While the big ISPs are all telling the world that their networks are handling the increased traffic that's pouring into and out of neighborhoods today, those working at home know better. By now, everybody has experienced video calls where some callers are pixelating or disappearing in the middle of a call. Everybody probably also has friends telling them stories of wrestling with poor broadband outside of cities — where only one family member at a time can use the broadband.

ISPs have seen a one-time spike in usage that may never fully go away. Most of the increased usage comes from people doing office work or schoolwork over the broadband network that would formerly have been done inside of a school or office server environment. People are teleconferencing now for conversations that would have happened in a conference room or cubicle.

One of the most likely outcomes of people working from home is going to be a big outcry from folks demanding faster upload connection speeds. Many of the problems experienced from working at home during COVID-19 come from the miserly upload speeds that broadband technologies other than fiber provide to a home. Cable companies, in particular, are likely to increase upload speeds — something they've purposefully kept small in order to provide as much download speed as possible. But there is a world of difference between a 100/5 Mbps connection and a 90/15 Mbps connection.

ISPs are also going to have to get used to a different demand curve. Residential broadband networks have always been busiest in the evenings when everybody is at home using the Internet for videos and gaming. During COVID we've seen some interesting shifts in broadband usage by time of day. Daytime usage is up significantly, while evening usage has not grown, and many ISPs say evening usage has decreased. The busy hour in a neighborhood may no longer be 8:00 PM.

This also means that we need to get used to the idea of Zoom and GoToMeeting because a lot of the people we deal with will be working from home. There are likely to be many societal changes that evolve from this pandemic, but it doesn't take a crystal ball to see that working from home is going to be a lot more prevalent than before.

Written by Doug Dawson, President at CCG Consulting

More under: Access Providers, Broadband, Coronavirus, Telecom

COVID Domain Registrations Surged in March

Wed, 2020-05-20 17:19

The Internet and the domain name system (DNS) have become the mainstay of the new COVID sheltered world. Afilias looked at registrations in the unrestricted domain name space, with a special focus on the popular .INFO, .PRO, .MOBI and .IO domain name extensions. The data shows that the number of website and domain registrations related to COVID and Coronavirus in these extensions is flattening after a surge in March.

Afilias has processed more than a million registrations in 2020, and the .info top-level domain (TLD) has proven to be the most popular extension to register names related to the COVID pandemic (e.g. "covid," "virus," etc.). Close to 5,000 new .info registrations are related to the ongoing pandemic and are being used to provide valuable information related to the illness.

Some examples include:
  • Up to date global stats on the pandemic and a host of other topics. Traffic to the site has surged, propelling it nearly a hundred-fold, from 6,142nd to the 74th most trafficked site on the Internet today
  • Live updates on the status of the pandemic in Pakistan
  • Country-level stats on COVID cases

About 10% of all registered names are registered to provide information about the pandemic specific to a geographic region.

Covid-related Registrations Surged Ahead of Pandemic Spread

As shown below, COVID-related registrations in the .info TLD peaked in mid-March, just as global awareness of COVID exploded worldwide. As the coronavirus spread rapidly all over the world, informational websites using the names registered in March soon became local go-to resources for a world that needed concise information.

Weekly COVID Related Registrations: Afilias TLDs
Based on 4,792 registrations with COVID-related names registered across 25 Afilias TLDs

Daily New Case Counts Worldwide

Daily New Case Counts Worldwide / Source:

Top COVID-19 related searches in the US

Top COVID-19 related searches in the US, Jan-Apr 2020 / Source: Google

"COVID" is the most popular descriptor

Website owners' registrations of .info names were dominated by three terms - COVID, Virus, and Coronavirus - together accounting for over 80% of all names registered. Of these, COVID is the most popular descriptor, with over 40% of all names including the term "COVID" in the name. Surprisingly, 15% of names were long and used a hyphen.

"COVID" and "VIRUS" Are Most Used Descriptors
Based on 4,792 registrations with COVID-related names registered across 25 Afilias TLDs

Most COVID registrations were for real use, not for scams

Over 80% of all registrants only registered one name — a striking statistic that indicates "real" use rather than malicious intent. Over 40% of all names registered in our TLDs were by registrants registering only one name. A large number of names (about 33%) were by registrants whose identity was shielded, thereby not allowing an analysis of whether most shielded names were similar to the 40% who bought only one name. 

The dearth of registrants buying hundreds of names is consistent with the absence of scams being perpetrated using these names. Separate abuse analysis shows that fewer than 20 of these names (~0.4%) were involved in domain abuse, resulting in rapid remediation. Our practice of proactively screening every incoming registration seems to have discouraged criminals from attempting to abuse names in these top-level domains.

COVID Registrants Bought Single Names
Based on 4,792 registrations with COVID-related names registered across 25 Afilias TLDs

COVID names average 12-14 characters

COVID names are about the normal length for domain names, with the distribution curve peaking at 13 characters as shown below. The shortest name is 5 characters, and the longest is 60 characters!
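Analyses like the descriptor shares and length distribution reported above are straightforward to reproduce from raw registration data. The sketch below uses a small hypothetical sample of names (not the Afilias data set) to show the idea.

```python
from collections import Counter

# Hypothetical sample standing in for the real zone data
names = ["covid19info", "coronavirushelp", "virus-tracker",
         "covid-test-centre", "stopcovid", "pandemicnews"]

descriptors = ("covid", "corona", "virus")

# Share of names containing each descriptor (a name can match several,
# e.g. "coronavirushelp" counts for both "corona" and "virus")
share = {d: sum(d in n for n in names) / len(names) for d in descriptors}

# Length distribution, analogous to the 13-character peak reported above
lengths = Counter(len(n) for n in names)
```

On this toy sample, "covid" appears in half the names, and the length histogram would show one name at 13 characters.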

COVID Names Peak at 13 Characters
Based on 4,792 registrations with COVID-related names registered across 25 Afilias TLDs

Websites like the examples in the .info top-level domain shown above illustrate the value of addresses that are both dedicated to information in times of crisis and carefully monitored to discourage potential abuse. Trend analysis of registration data can also provide perspective on the progress of the subject area (in this case, the Covid-19 pandemic).

Let us hope that the current post-peak decline in Covid-related registrations presages the decline of the pandemic itself.

Written by Ram Mohan, Chief Operating Officer at Afilias

More under: Coronavirus, Cybersecurity, Domain Management, DNS, Domain Names, New TLDs

China Will Remember the U.S. Huawei War for a Generation

Tue, 2020-05-19 19:36

Only an idiot would believe that the U.S. is blocking TSMC manufacture of Huawei cell phone chips because of security fears. This is a commercial rivalry. The U.S. wants to put China's leading technology company out of business.

We will fail, of course, at a price far higher than D.C. understands. The U.S. is ready for China's immediate countermeasures, even if Apple's stock price falls by hundreds of billions of dollars. But the long-run price will be devastating.

Giant German companies have been quietly turning away from U.S. components just in case they become the next target of U.S. wrath. When I discovered that last year, I wrote The unbelievably high cost of the war against Huawei.

This escalation means any sensible multinational manufacturer will do what is necessary to avoid becoming a pawn in battles between the U.S. and our perceived enemies. Volkswagen, Mercedes, Toyota, Honda, and BMW sell millions of cars in China. They'd be fools to be dependent on U.S. electronic parts. Their managers are not fools. They will quietly find other suppliers in Europe or Asia.

Huawei spends $20 billion a year on research and development. They can and have replaced almost everything sourced from the U.S. That's why the U.S. is going after Huawei's one external bottleneck, Taiwan's TSMC. TSMC and Samsung run the only plants in the world capable of producing advanced 5G chips.

Huawei Smartphone Components Supplier Share in Percent / By Value: excluding displays / Source: Fomalhaut Techno Solutions

It will be several years before the Chinese can catch up in chip manufacturing, especially with the U.S. blockading Chinese purchases of chipmaking equipment, including EUV machines from the Netherlands. But they will find a way.

Huawei is a $120 billion company, larger than Cisco, Nokia, and Ericsson combined. It has $35 billion cash in the bank, more than enough to tide it over until it can bypass the U.S. It is a national champion that China will protect by any means necessary.

In a year, Huawei went from almost nothing to a world-class manufacturer of the crucial cellphone radio frequency parts. It is now making optics for some of the most advanced fiber systems.

98% of the parts in Huawei phones can already be sourced outside the U.S. Mediatek, Samsung, and UNISOC can provide an alternate source of 5G phone chips. Huawei has already shifted major orders from TSMC to SMIC in China, which is rapidly expanding.

Harmony/Hong Meng is already a decent substitute for Google's Android. It can't run all the Android apps, which is hurting sales in Europe, but that will be fixed, possibly in months.

Every schoolchild in China learns about the "unequal treaties" imposed by Britain, Germany, and America after the Opium Wars. Most Americans don't know our ignominious history. Everyone in China does.

Written by Dave Burstein, Editor, DSL Prime

More under: Mobile Internet, Policy & Regulation, Telecom, Wireless

Is the Lockdown Driving Domain Registrations?

Tue, 2020-05-19 17:50

Businesses across Europe face a new and challenging situation not seen in generations: a mass lockdown of society due to the coronavirus pandemic, with thousands of businesses having been forced to send employees home. The societal impact is broad and deep; however, for ccTLD registries, beyond changes to how staff work, other business effects so far seem minimal. One aspect of registry business may, however, be changing — the volume of registrations themselves. Figures from CENTR show a spike in new registrations in the month of April 2020.

New Domain Creations – Sample: 25 ccTLDs (CENTR full members), Source: CENTR

Based on a sample of 25 ccTLDs, the number of new domains registered in April 2020 is up 20% from the same time a year earlier. The increase has even pushed up the median domain growth rate of the CENTR30 (the 30 largest CENTR member ccTLDs) — something seldom seen over the past decade.

Often domain registrations are linked to events by investors hoping to make a profit selling those domains on the secondary market, or sometimes by those with criminal motivation, such as creating fake webshops selling products that will never be delivered. It is important to consider this before exploring other explanations for the boost in new registrations.

A study conducted by CENTR in April 2020 examined newly registered domains in 12 ccTLDs between January and March 2020. The ccTLDs reported a combined total of 6,154 domains containing the term covid, corona, and/or virus. To put this into perspective, the same set of ccTLDs recorded a total of over 751K newly registered domains in the same 3-month period. The covid-related domains, therefore, represent just 0.8%. A web crawler scanned the impacted domains, finding that roughly 26% had some sort of active website. The study shows a couple of things — firstly, that the appetite for covid-themed domains appears small in ccTLDs, and secondly, that it likely does not explain the overall boost in new domains.
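The study's headline numbers are easy to verify. Below is a minimal sketch of the keyword filter and the arithmetic; the term list comes from the study, while the function name and example domains are our own illustrative assumptions.

```python
TERMS = ("covid", "corona", "virus")

def is_pandemic_related(domain: str) -> bool:
    """Flag a domain whose first label contains any of the study's terms."""
    label = domain.split(".")[0].lower()
    return any(term in label for term in TERMS)

# Reproducing the proportions quoted above
covid_related = 6154         # domains matching the terms, per the study
total_new = 751_000          # all new registrations, Jan-Mar 2020
share = covid_related / total_new        # ~0.008, i.e. about 0.8%
active = round(covid_related * 0.26)     # ~26% had an active website
```

On these figures, roughly 1,600 of the covid-related domains carried an active site.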

A more plausible explanation for the boost in new domains relates to the changing business and employment landscape. As lockdown has considerably reduced in-person custom at high street shops and cast millions of workers into precarious employment status, businesses and individuals have had to adapt. In order to cushion the impact of falling revenues, traditional high street "bricks and mortar" businesses have had to explore new and alternative ways of doing business. If a business did not have an online presence before, the pandemic has given it a compelling reason to build one now. From fitness studios conducting classes online, to theatres live-streaming shows, to countless others rapidly upgrading their sites to include payment gateways for orders, an online presence is more important than ever.

Take, for example, 'The filling station Eco store' — a small locally operated business in Galway, Ireland. The business offers "minimal waste dried groceries and eco-friendly household alternatives" and has been in operation since mid-2019. Although the owner did not previously consider a website to be a priority, it has now become a necessity for survival. A domain has since been registered in March, a website built, and the business is now taking online orders.

It is not just existing businesses upgrading their web presence. New business ventures are popping up that are capitalising on the new market demands created by the pandemic. For example, this London business selling face masks.

Will the spike in new registrations continue?

One of the key drivers of the continued selloff in stock markets around the world was a warning from a World Health Organization official who said the coronavirus might become a permanent fixture. It is becoming clear that the new business environment may become the new normal, which includes a deglobalisation of society.

Deglobalisation will move society and business to become more oriented to the local community, and limitations on physical distance will push them online more than ever. While domain names are just a small part of the online ecosystem, their role may become increasingly important. To illustrate the potential, consider that in many countries, SMEs account for well over 95% of the overall business population. Furthermore, the increase in local flavour may be an opportunity for ccTLD registries to reinforce their role as a locally focused and operated domain.

Whatever happens, the shift in the way business works is already happening. The opportunities for businesses may be substantial, and domain names represent a tried and tested avenue to the online world.

Written by Patrick Myles, Data Analyst at CENTR

More under: Coronavirus, Domain Names, Web

A Cautionary Tale of Reputation Damage: Striking the Right Balance With Brand Protection

Fri, 2020-05-15 03:18

Co-authored by Dr. David Barnett, Brand Monitoring Subject-Matter Expert; Lan Huang, CSC Domain and Brand Abuse Enforcement Expert and Alexandra Midgley, CSC Social Media Enforcement Subject-Matter Expert.

In early March 2020, a well-known European fashion brand found themselves on the receiving end of a protest campaign on social media. The background to the case was the fact that, in 2019, the brand had launched a cease and desist (C&D) action against a small, U.K.-based company in response to their use of similar product names and sale of associated clothing merchandise. This resulted in significant legal and rebranding costs for the company and is just one of several cases where the brand had targeted other small organizations.

Many observers have viewed these actions as heavy-handed, and the subsequent online commentary has generated a significant amount of negative press for the brand. The case "shine(s) a light on the potential negative PR implications when undertaking a brand enforcement program," an intellectual property expert commented. "Even where a brand is legitimately enforced, brand owners must be alive to where issues may arise in relation to smaller businesses or individual use."

This is not the only organization to take an (over) enthusiastic approach to their brand protection efforts. In 2015, the Millennium and Copthorne Hotels group sent a notice to the Village Association for Copthorne — a small village in the U.K., and the company's founding location — protesting against their infringing use of the Copthorne name in the association's web address. The hotel group eventually backed down, stating the letter was sent in error.

In another case, Scottish brewery BrewDog issued a C&D against the owners of a pub planning to name it the "Lone Wolf" — one of BrewDog's product names. BrewDog also eventually withdrew the action, following a campaign accusing the company of behaving like a "multinational corporate machine." A branding commentator at the time indicated that the backtracking by BrewDog could ultimately work in its favor, stating, "We've now got a business owner calling off his lawyers and favoring the underdog. That feels right for a challenger brand. Perhaps there's still a win available for them."

So how should brand owners address the issue of protecting their IP? Here are our top tips for getting it right.

Register your brand terms

As a minimum, CSC suggests that brands register all active brand terms in all relevant classes (i.e., product areas) and geographic jurisdictions. If a brand is able to achieve well-known trademark status, this can also open up further avenues for enforcement, making it possible to defend IP rights even in product classes where trademarks have not yet been registered.

Have a clear set of goals for your IP protection program

Just because you can launch an action in a particular case doesn't mean you should. In cases involving, for example, small companies operating in unrelated areas, with minimal risk of confusion, it may be advisable not to enforce. As with the case reported here, the risk is that an enforcement action can cause a large corporation to gain a reputation as a brand bully, and it's important to consider the risks of exacerbating an already inflamed situation. A brand owner should always be clear on the goals of their IP protection program, and be willing to answer the question — in cases where an action results in backlash — was it worth it?

Look at potential infringements case-by-case

At CSC, we advise against sending automated C&D notices; every case is different, and it's important to consider whether a notice is necessary and, if so, what the appropriate wording is. C&D language can be overly severe or imprecise, leaving room for dispute. In cases where notices should not have been sent, there's the risk of counter-claims for groundless threats — in these instances, the brand owner could be liable for any damage and costs arising from the claim.

Before taking any action on a potential infringement, it's advisable to assess the case against the following questions:

  1. Is there prominent and unauthorized use of the trademark?
  2. Is there a likelihood of confusion, i.e., is the disputed use likely to mislead a general consumer into believing that the products and services are offered by the brand owner who owns the trademark?
  3. Does the use of the trademarked name constitute bad faith or piggybacking on the brand owner's established brands and goodwill (i.e., unfair use for commercial gain)?
  4. Does the use of the trademark cause harm or damage to the brand?

If the answer is "yes" to these four questions, it may be appropriate for a brand owner to take action.
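The four questions form a simple conjunctive rule: enforcement is only on the table when every answer is "yes." A toy encoding of that rule follows; the function and parameter names are ours, and this is obviously an illustration, not a substitute for legal review.

```python
def enforcement_warranted(prominent_unauthorized_use: bool,
                          likelihood_of_confusion: bool,
                          bad_faith_or_piggybacking: bool,
                          harm_to_brand: bool) -> bool:
    """Encode the four-question test: action may be appropriate only
    when the answer to all four questions is 'yes'."""
    return all((prominent_unauthorized_use, likelihood_of_confusion,
                bad_faith_or_piggybacking, harm_to_brand))
```

A small business using a similar name with no realistic likelihood of confusion fails the second question, so under this rule no action would be taken.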

Personalize your C&Ds

If a potential infringement is identified, but bad faith cannot be definitively established, it may be best to contact the concerned parties using a personalized C&D. This should include:

  • Education on the importance of the intellectual property
  • Why and how there is a conflict of interest, and how they have infringed, specifically which aspects of the brand use are most concerning
  • How this can be mitigated without invoking costly legal battles

It's often the case that legitimate businesses are more likely to comply with infringement notifications, whereas those clearly using a trademark in bad faith are less likely to cooperate.

The general principle should be to treat the most serious cases more aggressively, escalating to using the legal route if necessary. Only consider legal action when the infringer refuses to comply without sufficient reason, or if there is a clear case of malicious intent to monetize the trademark. Less egregious offenders can be sent a softer C&D, incorporating educational information. A C&D done well can even positively boost a brand owner's image and public relations.

  1. This article was originally published on Digital Brand Insider.

Written by David Barnett, Brand Monitoring Subject-Matter Expert at CSC

More under: Domain Management, Domain Names, Brand Protection

The Upload Crisis

Wed, 2020-05-13 20:05

Carriers continue to report on the impact of COVID-19 on their networks. One of the more interesting statistics that caught my eye was when Comcast reported that upload traffic on their network was up 33% since March 1. Comcast joins the rest of big ISPs in saying that their networks are handling the increased traffic volumes.

By 'handling' the volumes, they mean that their networks are not crashing and shutting down. But I think there is a whole lot more to these headlines than what they are telling the public.

I want to start with an anecdote. I was talking to a client who is working at home, along with her husband and two teenagers. The two adults are trying to work from home, and the two kids are supposed to be online, keeping up with schoolwork. Each of them needs to create a VPN to connect to their office or school servers. They are also each supposed to be connecting to Zoom or other online services for various meetings, webinars, or classes.

These functions all rely on using the upload path to the Internet. The family found out early in the crisis that their broadband connection did not provide enough upload speed to create more than one VPN at a time or to join more than one video call. This has made their time working at home into a major hassle because they are being forced to schedule and take turns using the upload link. This is not working well for any of them since the family has to prioritize the most important connections while other family members miss out on expected calls or classes.

The family's upload connection is a choke point in the network and is seriously limiting their ability to function during the stay-at-home crisis. But the story goes beyond that. We all recall times in the past when home Internet bogged down in the evenings when everybody in the neighborhood was using broadband to watch videos or play games. Such slowdowns occurred when the download data path into the neighborhood didn't deliver enough bandwidth to satisfy everybody's request for broadband. When that download path hit maximum usage, everybody in the neighborhood got a degraded broadband connection. When the download path got overloaded, the network responded by giving everybody a little less bandwidth than they were requesting — and that resulted in pixelating video or websites that lose a connection.

The same thing is now happening with the upload links, but the upload path is a lot more susceptible to overload. For technologies like coaxial cable networks or telephone DSL, the upload path leaving the neighborhood is a lot smaller than the download path into the area. As an example, the upload link on a coaxial network is set to be no more than 10% of the total bandwidth allowed for the neighborhood. It takes a lot more usage to overload the download path into the neighborhood since that path is so much larger. On the upload path, the homes are now competing for a much smaller data path.

Consider the difference in the way that homes use the download path compared to the new way we're all using uploading. On the download side, networks get busy mostly due to streaming video. Services like Netflix stay ahead of demand by downloading content that will be viewed five minutes into the future. By doing so, the neighborhood download network can have cumulative delays of as much as five minutes before the video streams collapse and stop working. The very nature of streaming creates a buffer against failure — sort of a network insurance policy.

Homes are not using the upload links in the same way. Connecting to a school server, a work server, or a video chat service creates a virtual private network (VPN) connection. A VPN connection grabs and dedicates some minimum amount of bandwidth to the user even during times when the person might not be uploading anything. A VPN carves out a small dedicated path through the upload broadband connection provided by the ISP. There is no buffer like there is with downloading of streaming video — when the upload path gets full, there's no room for anybody else to connect.
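This "circuits busy" behaviour can be illustrated with rough numbers. In the sketch below, every figure is an assumption for a single coax node: total capacity, the roughly 10% upload cap mentioned earlier, and a fixed per-VPN reservation.

```python
# Illustrative numbers only, for a single DOCSIS-style node
TOTAL_CAPACITY_MBPS = 350      # assumed shared capacity for the node
UPLOAD_SHARE = 0.10            # upload capped near 10% of the total
VPN_RESERVATION_MBPS = 2.5     # assumed dedicated slice per VPN

def vpn_slots(total_mbps: float, upload_share: float,
              per_vpn_mbps: float) -> int:
    """How many simultaneous VPN 'circuits' fit on the upload path
    before a new connection finds no room left."""
    upload_mbps = total_mbps * upload_share
    return int(upload_mbps // per_vpn_mbps)

slots = vpn_slots(TOTAL_CAPACITY_MBPS, UPLOAD_SHARE, VPN_RESERVATION_MBPS)
```

Under these assumptions, only 14 households on the node can hold a VPN open at once; the fifteenth must wait for someone to "hang up," whereas doubling the upload share would double the available slots.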

The nearest analogy to this situation harkens back to traditional landline telephone service. We all remember times, like after 9/11, when you couldn't make a phone call because all of the circuits were busy. That's what's happening with the increased use of VPNs. Once the upload path from the neighborhood is full of VPNs, nobody else is going to be able to grab a VPN connection until somebody 'hangs up.'
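The busy-signal behavior can be sketched as a simple admission model: each VPN session reserves a fixed slice of the upload path, and once the path is spoken for, new sessions are refused until somebody disconnects. The capacities below are hypothetical:

```python
# Minimal admission-control sketch of a shared upload path. Each VPN session
# reserves a fixed amount of bandwidth; a connection attempt fails (a 'busy
# signal') when no unreserved bandwidth remains.
class UploadPath:
    def __init__(self, capacity_mbps, per_vpn_mbps):
        self.capacity = capacity_mbps
        self.per_vpn = per_vpn_mbps
        self.active = 0

    def connect(self):
        if (self.active + 1) * self.per_vpn > self.capacity:
            return False  # path is full until somebody 'hangs up'
        self.active += 1
        return True

    def disconnect(self):
        if self.active:
            self.active -= 1

# Hypothetical 100 Mbps upload path with each VPN reserving 5 Mbps:
path = UploadPath(100, 5)
results = [path.connect() for _ in range(22)]
print(results.count(True), "connected,", results.count(False), "refused")
# 20 connected, 2 refused
```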

Residential customers have historically valued download speeds over upload speeds, and ISPs have configured their networks accordingly. Many technologies allow an ISP to balance upload and download traffic, and ISPs can ease upload congestion by shifting a little more bandwidth to the upload stream. Unfortunately for cable companies, the current DOCSIS standards don't allow them to allocate more than 10% of bandwidth to the upload side, so their ability to rebalance is limited.

As I keep hearing these stories from real users, I am growing less and less impressed by the big ISPs saying that everything is well and that their networks are handling the increased load. I think there are millions of households struggling due to inadequate upload speeds. It's true, as the big ISPs are reporting, that the networks are not crashing — but the networks are not providing the connections people want to make. No big ISP is going to admit this to their stockholders — but I bet a lot of those stockholders already understand this first-hand from having troubles trying to work from home.

Written by Doug Dawson, President at CCG Consulting

Follow CircleID on Twitter

More under: Access Providers, Broadband, Coronavirus

Categories: News and Updates

What COVID-19 Means for Network Security

Wed, 2020-05-13 01:04

The COVID-19 pandemic is causing huge social and financial shifts, but so far, its impact on network security has gone under-reported. Yet with thousands of companies worldwide requiring millions of employees to work remotely, network administrators are seeing unprecedented changes in the ways that clients use their networks, along with new threats that seek to leverage the current crisis.

VPNs Show Explosive Growth

A quick Google search for "work remotely" gives an indication of one type of company that stands to benefit enormously from the massive shift to remote working: VPN providers. Almost every article on working with remote teams recommends that businesses give all their employees VPNs.

Recent reports suggest that this has already begun. Statistics from VPN provider NordVPN show the US has experienced a 65.93% growth in the use of business VPNs since 11 March, with the biggest gain being in desktop users.

This is both good and bad news for network security. It's great, of course, that users are now encrypting sensitive commercial and personal data. On the other hand, some network engineers are struggling to manage users on systems that make use of IP addresses for authentication.
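To see why IP-based authentication becomes painful, consider a service that allowlists a known office network range: once employees tunnel through a VPN, requests arrive from its egress address (or a rotating home address) instead. The addresses below are documentation-reserved examples, not real deployments:

```python
# Sketch of IP-allowlist authentication breaking down under remote work.
import ipaddress

OFFICE_NETWORK = ipaddress.ip_network("203.0.113.0/24")  # hypothetical office range

def allowed(source_ip):
    """Admit a request only if it originates inside the office network."""
    return ipaddress.ip_address(source_ip) in OFFICE_NETWORK

print(allowed("203.0.113.25"))  # at a desk in the office: True
print(allowed("198.51.100.7"))  # the same employee via a VPN egress IP: False
```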

Growth in VPN usage in countries with large COVID-19 outbreaks (by Statista)

Changes in Network Usage

Another shift caused by the current pandemic has been an unprecedented spike in voice and video traffic. Verizon has previously reported that voice usage had long been in decline due to the popularity of texting, chat, and social media. Last week, though, voice traffic increased 25%. Verizon's network report shows the primary cause is users joining conference calls, but people are also talking longer on mobile devices, with calls lasting 15% longer.

From a network security perspective, this could be a huge problem. Voice data typically requires high amounts of processing power to encrypt, and so a spike in voice traffic is going to put an extra load on existing encryption systems.

This is already apparent, in fact. With so much voice traffic flooding networks, Ookla says it has started to see a degradation of mobile and fixed-broadband performance worldwide. Comparing the week of 16 March to the week of 9 March, the mean download speed over mobile and fixed broadband decreased in both Canada and the U.S.

Responding to these changes — at least in the short term — is going to require a process of employee management, rather than technical upgrades. Network admins who are seeing huge spikes of voice data on their networks, accompanied by performance issues, should report this to executives who can remind staff that they (hopefully) have many other ways to communicate with each other.

Emerging Threats

Finally, it's becoming increasingly apparent that hackers are taking advantage of the pandemic to spread malware. Check Point's Threat Intelligence team has reported that since January 2020 there have been over 4,000 coronavirus-related domains registered globally. Of these, 3% were found to be malicious and a further 5% suspicious. Coronavirus-related domains are 50% more likely to be malicious than other domains registered in the same period, a rate also higher than that of recent seasonal themes such as Valentine's Day.
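Detection of such domains often starts with nothing fancier than keyword matching on new registrations. The toy heuristic below is only illustrative (the domain names are made up); real classifiers weigh many more signals, such as registration age, hosting, and certificate data:

```python
# Flag newly registered domains whose names contain pandemic-themed keywords.
SUSPECT_KEYWORDS = ("corona", "covid", "vaccine", "pandemic")

def looks_suspicious(domain):
    name = domain.lower()
    return any(word in name for word in SUSPECT_KEYWORDS)

new_registrations = [
    "covid19-relief-funds.example",  # hypothetical registrations
    "corona-masks-cheap.example",
    "gardening-tips.example",
]
flagged = [d for d in new_registrations if looks_suspicious(d)]
print(flagged)
# ['covid19-relief-funds.example', 'corona-masks-cheap.example']
```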

These threats come at a very vulnerable time for network administrators. Many staff are working from home, and so the corporate firewalls that can stop employees falling victim to a scam are no longer in place. In addition, the panic caused by the virus means that employees are more likely to be taken in by a seemingly innocent site or email.

Responding to these threats relies, again, on educating staff about the importance of cybersecurity when working from home. They should be taught how to secure their home systems against common forms of cyberattack and should be extremely wary of COVID-19 information that doesn't come from a trusted source.

The Future

At a broader level, these shifts could change the way that networks are planned. Some have noted that the sudden spike in remote working might not automatically disappear once the pandemic is over: instead, many firms will realize the benefits of remote working, not least for their fixed costs, and make this standard in the future.

Because of this, Tom Nolle, president of CIMI Corporation, has argued that the current shutdown "could eventually produce a major uptick for SD-WAN services," particularly for MSPs. This will mean that the short-term changes in network usage caused by Corona might not be so short-term after all. In other words, this crisis might be the new normal.

Written by Gary Stevens, Front-End Developer

More under: Coronavirus, Cybersecurity, Networks

Five Common Issues in Domain Management, and Solutions

Tue, 2020-05-12 20:26

Has your organization ever missed renewing a domain name? You really don't want to be in the news for that. Just look up "company forgot to renew domain name" and read about the historical consequences of missing vital domain name renewals. They range from failed services or infrastructure, lost revenue, lost business partners, wrecked reputation, and hefty regulatory fines, to the collapse of a business.

Even though some of these top search engine results page (SERP)[1] articles may date to before 2015, the challenge of domain management still exists today. The overseer of the domain name system (DNS), the Internet Corporation for Assigned Names and Numbers (ICANN), published reports in 2018[2] and 2019[3] that investigated the issues and challenges impacting domain name registrants. Both reports found that the majority of domain registrants struggled with domain management, including domain transfers between registrars, domain name renewals, general registration support, and issues with country code top-level domains (ccTLDs).

Collectively, these issues are a huge burden for large corporations. Oftentimes, large corporations such as Forbes 2000 companies have hundreds or thousands of domain names to manage, and these large numbers give rise to the following challenges.

Issue #1: missed renewals

These hundreds or thousands of domain names vary in expiration dates, and their renewal notices are emailed to the organization's domain administrator. Should this person move on without proper handover, the new administrator may not have the login credentials to the registrar portal. Or, the corporate credit card tied to the domain account expires without anyone noticing. In other words, should these emails and domain renewals be missed for any reason, there is no other way that a domain owner will be alerted to expiring domains or be able to respond nimbly if disaster strikes.

Solution: Set your domains to auto-renew by default so that your organization never misses a domain renewal, and opt for credit on account instead of using a credit card.
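Auto-renewal is the real fix, but a belt-and-braces check is also easy to script against an expiry-date export from the registrar portal. The portfolio and dates below are hypothetical:

```python
# Surface any domains in a portfolio that expire within a warning window,
# independent of renewal emails reaching the right inbox.
from datetime import date, timedelta

def due_for_renewal(portfolio, today, window_days=60):
    """Return domain names whose expiry date falls within the window."""
    cutoff = today + timedelta(days=window_days)
    return sorted(name for name, expiry in portfolio.items() if expiry <= cutoff)

portfolio = {
    "example.com": date(2020, 6, 15),  # hypothetical expiry dates
    "example.net": date(2021, 1, 3),
}
print(due_for_renewal(portfolio, today=date(2020, 5, 12)))
# ['example.com']
```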

Issue #2: risk and complexity of domain management and using many registrars

The complexity of domain name management arises when a single centralized team must devote resources to handling domain requests from different business units, administer the unique registry requirements of each registration, and tally what each business unit should pay for its respective portfolio of domains at the end of each year. Decentralized management is no better: each business unit handles its own domain portfolio, and domain policies are disparate and inconsistent across units.

Furthermore, when corporations use more than one registrar to manage domain portfolios, this adds a layer of complexity in management.

Multiple registrars also mean sensitive information is recorded with multiple parties. This increases the risk of account hijacking and losing control of domains if some of the registrars have weak security controls in place.

What are weak security controls? These are registrar platforms with bad security practices that hackers love, such as lenient password policies, no two-factor authentication, little to no restriction of access to critical DNS zones, poor cybersecurity awareness and training among their staff, lack of performing regular cybersecurity drills, and not enough scrutiny of the security vulnerabilities of the registrar platform software and servers being used.

Solution: Configure your registrar preferences to enable each business unit to manage their own domains under a parent account to be billed separately. In addition, you'll want to consolidate your domains with a security-conscious enterprise-class corporate domain registrar whose systems are designed with security first, and whose staff are well-versed in phishing methodology.

In particular, your registrar should provide secure access to its domain and DNS management systems through two-factor authentication, IP validation, and federated ID. A provider should also enable you to control user permissions and offer advanced domain security features such as registry and registrar locks and DNS security extensions (DNSSEC).

Issue #3: difficulty handling registry requirements and registering all the extensions you want

Different domain registry operators impose different criteria for registration, and in some cases, throughout the lifetime of the domain names. For instance, .MM (Myanmar) registrations are highly laborious and .AU (Australia) registrations require an active Australian presence or qualifying Australian trademark throughout the lifetime of the domain name. Even if enterprise systems are able to connect with their domain registrar systems, the amount of manual work involved in the management of the domain portfolio may require one individual's priority over other projects at various points throughout each year.

In addition, different registrars offer different domain endings. Most registrars offer .COM, for example, but not every registrar offers ccTLDs such as .LY and .AR. What should a corporation do when its regular registrar cannot register ccTLDs in the countries, cities, or territories where it plans to do business? It looks for a registrar that can, usually settling for whatever it can find within a tight timeframe.

Solution: Partner with an experienced corporate registrar that knows the ins and outs of each registry requirement, offers you the ability to register domain names from all around the world, and curates vital information that enables you to make the best decisions.

Issue #4: complications in transferring domain names and poor service support

Large corporations commonly transfer domain names from one registrar to another after mergers and acquisitions. Most companies consolidate domain names with one domain registrar to enforce uniform domain name policies, and standardize prices, platforms, workflows, and expected service levels. The domain administrator usually faces the burden of managing a highly tedious, painstaking, and almost impossible task, which involves coordinating manual authorization requests with every registrar for each domain transferred.

When compounded with urgent business requests for domain registrations, the domain administrator is also burdened with locating a registrar who can support the domain extension, and is responsive and sensitive enough to deal with often confidential registrations (especially in the case of a campaign or launch).

Solution: Lean on your registrar to manage the entire transfer process on your behalf, freeing you up to do what you do best. Your registrar should also provide timely, relevant information, as well as 24/7 support to help keep your reputation and brand assets safe.

Issue #5: determining options to combat infringement

Domain monitoring alerts corporations to possible domain infringements, but when detected, takedown mechanisms range from cease and desist, arbitration, to dispute resolution procedures. Which is most suitable? What are the criteria for each, and at what costs? Are there other options to prevent all these from happening in the first place?

Solution: Seek advice from an experienced provider who can assess each situation and act in your best interests. Besides domain management services, a corporate registrar should offer all-encompassing brand protection services that include monitoring, enforcement, takedown of domain names, social media handles, mobile apps, internet content, and more.

In summary, many of these domain management issues can be minimized by having clear policies and working with the right registrar. A good corporate registrar should have the breadth and depth to mitigate risk and disruption, taking some of the burden of protecting brand, reputation, and revenue off corporations so that they can focus on doing what they do best.

  1. When you conduct an internet search, the SERP lists the results that you can click on. 
  2. Report: Issues and Challenges Impacting Domain Name Registrants published 26 September 2018 
  3. Report: Issues and Challenges Impacting Domain Name Registrants published 29 April 2019 
This article was originally published on Digital Brand Insider.

Written by Connie Hon, Domain Product Manager at CSC

More under: Domain Management, Domain Names, Brand Protection
