Domain industry news

Internet Will Split Into Chinese-Led and US-Led Versions Within the Next Decade, Says Eric Schmidt

Fri, 2018-09-21 19:38

Speaking at a private event hosted by Village Global VC, tech luminary and former Google CEO Eric Schmidt predicted that the internet will split into Chinese-led and US-led versions by 2028. Lora Kolodny from CNBC reports: "Schmidt shared his thoughts at a private event in San Francisco on Wednesday night convened by investment firm Village Global VC."

Quoting Schmidt in response to a question regarding the chances of the Internet fragmenting over the years:

"I think the most likely scenario now is not a splintering, but rather a bifurcation into a Chinese-led internet and a non-Chinese internet led by America."

"If you look at China, and I was just there, the scale of the companies that are being built, the services being built, the wealth that is being created is phenomenal. Chinese Internet is a greater percentage of the GDP of China, which is a big number, than the same percentage of the US, which is also a big number."

The real danger: While China is producing fantastic products and services, its government-aligned leadership brings censorship and controls along with them, says Schmidt: "it's perfectly possible [many] countries will begin to take on the infrastructure that China has with some loss of freedom."

Follow CircleID on Twitter

More under: Censorship, Internet Governance, Policy & Regulation, Web

Categories: News and Updates

Trump Administration Says US Will Start Using Offensive Strategy Towards Cyberattacks

Thu, 2018-09-20 21:27

The Trump administration today announced that the U.S. will begin a new strategy to deter and respond to cyberattacks with offensive actions against foreign adversaries. Jacqueline Thomsen reporting in The Hill: "The U.S.'s new cyber strategy, signed by President Trump, marks the federal government officially taking a more aggressive approach to cyber threats presented from across the globe. National security adviser John Bolton said that the actions are part of an overall deterrence strategy: Launching cyberattacks against actors in, or sponsored by, other nations, he said, will prevent those adversaries from attacking the U.S. in the first place." Also noted is Bolton's comment that "not every response to a cyberattack would necessarily occur in cyberspace."

Follow CircleID on Twitter

More under: Cyberattack, Policy & Regulation

Categories: News and Updates

New York Times Sues FCC for Net Neutrality Records Concerning Possible Russian Involvement

Thu, 2018-09-20 20:07

The New York Times Co. filed a lawsuit today against the Federal Communications Commission concerning records the newspaper alleges may shed light on possible Russian participation in a public comment period before the commission rolled back Obama-era net neutrality rules. Jon Reid, reporting in Bloomberg BNA, broke the news, quoting the plaintiff: "The request at issue in this litigation involves records that will shed light on the extent to which Russian nationals and agents of the Russian government have interfered with the agency notice-and-comment process about a topic of extensive public interest: the government's decision to abandon 'net neutrality.'"

Follow CircleID on Twitter

More under: Net Neutrality

Categories: News and Updates

The .BEST New gTLD: Second Interview

Thu, 2018-09-20 19:06

This is a one-hour podcast giving all the details about what the .BEST social network is going to be and how users will be able to generate an income from it.

100,000,000 users

Yes, you read that correctly: we're talking about a potential of 100,000,000 users by 2022, and that means millions of ".best" domain names. Two ventures are identified in this major project: the .BEST registry itself, and the innovative social network it serves as the tool to develop.

Details are given

Cyril Fremont gives lots of details on how his social network, focused on reviews, is going to be different from Google Reviews, Facebook, Yelp and TripAdvisor. If you are tempted to think that, well, "that's just going to be another social network," then I strongly suggest listening carefully to the part of the interview on the decentralization of this network. I already interviewed Cyril Fremont in July 2018 and did not pay attention to this, but the fact that this social network is decentralized answers a very important legal question: registrants' data will be hosted in their country of residence. GDPR, are you listening?

Unexpected questions

Another interesting aspect of this interview is that Cyril Fremont met with six high-tech experts from different industries: questions were not written in advance, and these specialists hit him with unexpected questions such as:

  1. How will you pay people?
  2. How will you ensure that reviews won't be fake?
  3. How will you get users to come to the network?
  4. How do you finance such a project? (And you will learn about: An ongoing $10,000,000 operation to finance the bootstrap of the social network; An ICO of $20,000,000 to finance the next stage of the social network.)
  5. How is the team built around this innovative project and… a few names are given.

I suggest listening to his answers in this podcast (in French). The social network will be launched in January 2019.

Written by Jean Guillon, New generic Top-Level Domains' specialist

Follow CircleID on Twitter

More under: Blockchain, Domain Names, Registry Services, New TLDs

Categories: News and Updates

For the First Time in Recent Internet History a Subsea Cable Across South Atlantic Activated

Thu, 2018-09-20 02:32

For the first time in recent Internet history, a new submarine cable carrying live traffic across the South Atlantic was activated, directly connecting South America to Sub-Saharan Africa. Doug Madory, Oracle Dyn's Director of Internet Analysis reported in a post today: "It is hard to understate the potential for this new cable to profoundly alter how traffic is routed (or not) between the northern and southern hemispheres of the Internet. The South Atlantic was the last major unserviced transoceanic Internet route and the activation of SACS is a tremendous milestone for the growth and resilience of the global Internet."

The significance: "In addition to directly connecting Brazil to Portuguese-speaking Angola, the cable offers South America its first new submarine cable link to the outside world in 18 years that doesn't go through the United States."

Follow CircleID on Twitter

More under: Access Providers, Broadband

Categories: News and Updates

Lessons Learned from the Namejuice/DROA/DROC Outage

Wed, 2018-09-19 19:27

Last week an ICANN registrar, Namejuice, went off the air for the better part of the day — disappearing from the internet at approximately 8:30 am, taking all domains delegated to its nameservers with it, and not coming back online until close to 11 pm ET.

That was a full business day and more of complete outage for every business, domain, website, and email service that was using the Namejuice nameservers — something many of them were doing.

Over the course of the day, speculation abounded about the cause of the outage (we look at some of the theories below). None of the customers I was in communication with, nor anybody in the Reddit thread, reported receiving any communication from Namejuice about the cause of the outage or an ETA for the restoration of service. They were simply gone, and given the lack of information, there was scant basis to discern whether this was a temporary or a permanent condition.

Needless to say, as far as outages go, this wasn't handled well by the vendor. The lessons learned make for an effective case study that validated the unifying theme of my book (sorry, I'm talking up my book here).

The underlying theme of Managing Mission Critical Domains and DNS is that in today's IT landscape, there is a divide between DNS operations and domain portfolio management. That divide is an artificial one, and it leads to disconnects that can result in domain outages. Those outages can take your company down, or even entire chunks of the internet, based on the dependencies of any given domain.

The two logical realms of a domain name

Namejuice's background

Namejuice is not unknown in IT circles, owing to its alleged practice of soliciting customers via "domain slamming".

They have had their ICANN accreditation suspended at least once, CIRA de-certified them as a Canadian .CA registrar, and the Canadian Competition Bureau has issued at least one warning about the practice of domain slamming to consumers.

I mention all this because domain slamming is one of the topics covered in my book under the "common pitfalls" chapter. It relates to the various vulnerabilities companies can expose themselves to by inadvertently authorizing a transfer of their domains to a new registrar without fully understanding what that entails.

It cannot be emphasized enough that when the administrative functions of managing domain portfolios, like registering and renewing domains, are separated from the ops aspect of making sure they work on the internet, it can lead to a situation that puts your organization at risk.

Bookkeeping or accounting may have filled out the form they received in the email, thinking it was a legitimate account payable, triggering the transfer. IT goes along with it because somebody in management must have done it for a reason, right?

Then last Monday comes along and *whammo*, everything's offline, the entire company is down and there's nothing anybody can do about it.

Overview of the Outage

The outage began around 8:30 am on the morning of Sept 10, 2018, and a Reddit thread in /r/sysadmin started forming around the incident. The rumors were that the outage was caused by a power outage at Namejuice's data center in Markham. There were a few power outages throughout the GTA that morning.

The word was that somebody at ICANN had spoken to somebody at Namejuice and they were given a 1 pm ETA for the restoration of power to the data center. 1 pm came and went and nothing happened:

"We were able to get ahold of someone at ICANN. DROA.com data center has suffered a power outage and their backup generator failed. The power company is currently working to resolve the issues. We were given an ETA of 1 pm EST to when the power is restored. Hope this helps."

However, as a subsequent Redditor noted, that comment was posted from an account that was created that day and had only ever posted one comment, that one.

There are a lot of data centers in Markham. Whatever datacenter Namejuice is using couldn't get their backup generator working, which seems like one of the basics that any DC would need to get right. Even if the backup gens didn't kick in automatically, they should have been able to manually start them. Also, had the entire data center lost power without backup, then we would have expected to see reports of more outages from other outfits co-located within the same datacenter.

Weirder still is that the Namejuice outage appeared to be caused by a DNS failure. All four nameservers were completely offline and unresponsive to ping requests, yet, only two of those name servers looked to be within the Markham DC.

Chart via robtex.com/dns-lookup/droc.ca

The other two nameservers were offsite, at Digital Ocean, but they were down too. Why would they be down if the outage was caused by a power failure in Markham?
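
As an illustration of the kind of spot check that surfaces this discrepancy, here is a minimal sketch in Python using the third-party dnspython library (an assumption on my part; the zone and nameserver hostnames below are placeholders, not Namejuice's actual servers) that queries each delegated nameserver directly and reports which ones answer:

  # Minimal sketch: probe each delegated nameserver directly over UDP port 53.
  # Requires the third-party dnspython package (pip install dnspython).
  # The zone and nameserver hostnames are placeholders.
  import dns.message
  import dns.query
  import dns.resolver

  ZONE = "example.com"
  NAMESERVERS = ["ns1.example.net", "ns2.example.net",
                 "ns3.example.org", "ns4.example.org"]

  for ns in NAMESERVERS:
      try:
          # Resolve the nameserver's own A record via the local resolver.
          ip = dns.resolver.resolve(ns, "A")[0].to_text()
          # Ask that nameserver directly for the zone's SOA record.
          dns.query.udp(dns.message.make_query(ZONE, "SOA"), ip, timeout=3)
          print(f"{ns} ({ip}): responding")
      except Exception as exc:
          print(f"{ns}: NOT responding ({exc})")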

I dwell on this because it leads directly to our next lesson learned from this outage, which is also in my book, under the "Nameserver Considerations" chapter, where we look at numbering and address schemes for allocating production nameservers; see below.

Rumours of ICANN de-accreditation

At one point in the day, I received an email from a sysadmin acquaintance whose company was down (an investment fund) because their domains and DNS were impacted, and he said: "GoDaddy verified that ICANN finally took them down."

I replied quickly to this one: it had to be a flat-out false rumor. There is no way an ICANN decertification would happen in such a "band-aid moment" fashion. It would be telegraphed well in advance, announced by ICANN via its website, and all domains would have been transitioned to another registrar via a tender process.

Further, even if all that had happened, nothing ICANN does would impact the existing DNS. They don't have some magic killswitch that just shuts down a deaccredited registrar's nameservers; that would be so far outside ICANN's purview as to be draconian. The only thing that happens in such a situation is that the decertified registrar can no longer register, transfer or renew domains, and their existing customers get moved to a new registrar.

Again, absent any communications from the vendor, via some out-of-band medium like Twitter, these types of rumors were bound to circulate.

This outage was possibly caused by a DDoS attack.

By the end of the day, I was already skeptical of the power outage narrative, and I knew it wasn't an ICANN thing; the most likely remaining explanation was that Namejuice's nameservers were under a DDoS attack.

My suspicions were reinforced the next morning when the CEO of another Markham datacenter I know personally discussed the events with me. He told me that somebody who peers at 151 Front experienced some degradation on their cross-connects with Zayo associated with a DDoS on port 53 UDP (which is DNS). The Markham IPs for Namejuice are Zayo/Allstream. It started around 8:30 am.

A DDoS attack against Namejuice's DNS would explain everything, and this is where we hit the next point I cover in my book under "Nameserver Considerations": the selection of the IP space your DNS provider uses to operate its nameservers.

The path of least resistance is to simply get IP space from your upstream provider, or use the IPs assigned to you by some cloud provider and set your nameservers up on those IPs.

The problem is that when you get DDoSed, your upstreams simply null route those IP addresses to preserve the rest of their infrastructure, and there's nothing you can do about it until they decide to lift those null routes. Different places have different policies, but in general they aren't too keen to do it until they are sure the DDoS is over. Back when we were still on external IPs, one provider's policy was that once they dropped in a null route, the soonest they would even look at it again was 24 hours later. I remember at least one DDoS where we ended up renumbering nameservers.

Ideally, your DNS provider is using IPs within their own netblocks so that they are in control of their own ASN and routing announcements. That way when a DDoS comes and if your upstream provider drops your routes to save themselves, you at least still have the option to bring up your routes someplace else to get back online, like a DDoS mitigation solution.
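
As a rough illustration of that due-diligence step, the sketch below (Python, using the third-party dnspython and ipwhois packages; the nameserver hostnames are placeholders) looks up which ASN and netblock each of your provider's nameserver IPs is registered under, so you can see whether they sit in the provider's own space or in an upstream's or cloud vendor's:

  # Rough sketch: report which ASN and netblock each nameserver IP belongs to.
  # Requires the third-party dnspython and ipwhois packages.
  # The nameserver hostnames are placeholders for your provider's servers.
  import dns.resolver
  from ipwhois import IPWhois

  NAMESERVERS = ["dns1.example-provider.com", "dns2.example-provider.com"]

  for ns in NAMESERVERS:
      ip = dns.resolver.resolve(ns, "A")[0].to_text()
      rdap = IPWhois(ip).lookup_rdap()
      print(f"{ns} ({ip}): AS{rdap['asn']} ({rdap['asn_description']}), "
            f"netblock {rdap['network']['cidr']}")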

Without this, you're at the mercy of your upstream, unless, as I mentioned, you decide to completely renumber your nameserver IPs, which is a practical option when it has to be. Of course, this only works if you can quickly get all of your hosted zones over to a new location, and that new location has the capability and the willingness to deal with the DDoS which is almost certain to follow you there.

The big caveat with these tactics is that you should have all of it set up in advance: run warm spare nameservers that you aren't using in other locations, in a DDoS-mitigated DC, or have access to a reverse proxy or GRE tunnel that you can turn up when you need it.

These solutions are non-trivial to set up; I guess it just comes down to how seriously you take your clients' uptime as to whether you would do this.

The other concern, if it was a DDoS attack, is that it may happen again. Was it against a Namejuice customer who is still there? Or was it against Namejuice themselves? (Again, in my book we talk about tools like dnstop and delegation numbering schemes that provide ways to figure all this out.) There are unanswered questions around how Namejuice reacted to this possible DDoS and what, if any, DDoS mitigation capabilities they have.

Registrar transfer-locks can backfire

One aspect of this outage has had me pondering what the best practice should be for transfer locks. We always say they should be on all the time, until the time comes when you want to transfer out.

In this case, that would be bad advice. Even if people made it a habit to keep a local copy of their domain auth codes (good idea), it would have done them no good if the transfer lock was on and the registrar was down.

If you leave your lock off so that you can escape an out of commission and unresponsive registrar, you would want to use a registrar with enhanced security functions like 2FA and event notifications.
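
If you go that route, it is worth checking periodically what your lock status actually is. Below is a minimal sketch (Python standard library only; it assumes the public rdap.org bootstrap service and uses a placeholder domain) that reads a domain's EPP status codes over RDAP and reports whether a transfer lock is present:

  # Minimal sketch: read a domain's status codes over RDAP and report whether
  # a registrar transfer lock (clientTransferProhibited) appears to be set.
  # Standard library only; rdap.org is a public RDAP bootstrap redirector.
  import json
  import urllib.request

  DOMAIN = "example.com"  # placeholder domain

  with urllib.request.urlopen(f"https://rdap.org/domain/{DOMAIN}") as resp:
      data = json.load(resp)

  statuses = data.get("status", [])
  print(f"{DOMAIN} status: {', '.join(statuses) or 'none reported'}")
  if any("transfer prohibited" in s for s in statuses):
      print("A transfer lock appears to be ON")
  else:
      print("No transfer lock reported; transfers are possible")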

The other option is to leave your transfer lock on, but, as is our mantra:

Use multiple DNS providers

This is something we've been saying for years: use multiple DNS providers. Either in an active/active setup, where you mix nameservers from multiple providers in your live delegation, or active/passive, where you run hot spares that are up to date with your DNS zone and have them kick in if your primary provider goes down.

That's what our proactive nameservers do automatically. That's failover at the nameserver level, and it's something that will work even if the rest of our platform is being DDoSed, because nothing within the proactive nameserver system is public facing.

To this day, we're still the only registrar providing that as a service, but who are we to argue? All I know is that it works, and if anybody who was on Namejuice nameservers last Monday had been using it, they would have had a much different Monday.

There are other options for running multiple DNS providers, we run them down on our High Availability DNS page here.

If you're using multiple DNS providers, then you can keep your registrar transfer locks enabled, because if your registrar blows up, you'll still be online, and you can take any corrective action required once your registrar is back. In an emergency transfer (non-responsive registrar) or complete registrar failure situation, you're looking at 3 to 5 days minimum before anything can happen via ICANN, probably longer. So bear that in mind as you set your expire times, and be prepared to switch any secondaries to becoming primaries if your primaries are in danger of timing out, or use an unpublished primary under your direct control.
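
One practical way to keep an eye on a multi-provider setup is to compare SOA serials and expire timers across every delegated nameserver. The sketch below (Python with the third-party dnspython library; the zone name is a placeholder) flags providers whose copy of the zone has fallen out of sync:

  # Sketch: compare the SOA serial and expire values reported by every
  # delegated nameserver, so a stale secondary is spotted before it matters.
  # Requires the third-party dnspython package; the zone is a placeholder.
  import dns.message
  import dns.query
  import dns.resolver

  ZONE = "example.com"

  serials = {}
  for rr in dns.resolver.resolve(ZONE, "NS"):
      ns = rr.target.to_text()
      ip = dns.resolver.resolve(ns, "A")[0].to_text()
      answer = dns.query.udp(dns.message.make_query(ZONE, "SOA"), ip, timeout=3)
      soa = answer.answer[0][0]
      serials[ns] = soa.serial
      print(f"{ns}: serial={soa.serial} expire={soa.expire}s")

  if len(set(serials.values())) > 1:
      print("WARNING: serials differ; a secondary may be stale")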

The key takeaways

The lessons from all this come down to the following:

  • Have a coherent internal policy on registration and renewals so that your domains don't wind up anyplace unexpected
  • Stick with providers who are known for excellent client communications and support. The test of their mettle is how they behave during black swan events like outages.
  • Use DNS operators who run their nameservers on their own IP space and ASNs
  • Use multiple DNS providers
  • At the risk of sounding like "Scagnetti on Scagnetti", Read my book

Ultimately, it really doesn't matter what caused the outage; all we really care about in the art and science of domain portfolio management is being able to stay online whenever one of your key vendors experiences an outage.

The easyDNS mantra for 20 years is…

DNS is something nobody cares about… until it stops working.

But outages can and will happen to everyone, sooner or later. Take the time to really think about your DNS and domain portfolio before the outage hits, then you'll be ready for it.

Written by Mark Jeftovic, Co-Founder, easyDNS Technologies Inc.

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity, DDoS, DNS, Domain Names

Categories: News and Updates

New Brandsight Domain Management Survey Reveals Companies Face Challenges Managing Domain Portfolios

Wed, 2018-09-19 18:34

Brandsight recently concluded their Second Annual Domain Management Survey. Respondents to the survey were corporate domain name professionals. Of those that responded, 35% had portfolios of between 3,000 and 10,000 domains and another 30% had portfolios of more than 10,000 domains. Fifty-seven percent of respondents reported that they manage domains out of the legal department, with the remaining respondents' portfolios managed out of IT, marketing and other groups.

This year's survey revealed that for 53% of respondents, managing domain name portfolios has become more difficult. Given the impact of GDPR, along with the desire to right-size portfolios, drive traffic to relevant content, and reduce expenditures, these results confirm what we have been hearing in the market — that companies are still facing a number of domain management challenges.

Highlights from the survey include:

  • 22% of respondents spend 2-3 days each week managing their portfolio, and 32% spend 4-5 days each week, with a close correlation between portfolio size and the amount of time spent managing the portfolio
  • 53% of respondents said that managing their domain name portfolio has become more difficult over the past year
  • 88% of respondents said that dealing with the impact of GDPR and the inability to access WHOIS contact information has been a challenge for them
  • 81% of respondents said that paring back bloated portfolios has been a challenge for them
  • 97% of respondents said that ensuring the security of their domain portfolio is an important goal
  • 88% of respondents said that reducing domain management expenditures is an important goal

Clearly, domain professionals have been presented with a host of new challenges. However, as a new mechanism for accessing non-public WHOIS becomes available and companies begin to rely on technology solutions to assist with right-sizing portfolios, domain name professionals will hopefully have an easier time managing their portfolios in the coming years. Of course, that assumes there won't be any other major changes impacting domain professionals, which undoubtedly there will be.

Written by Elisa Cooper, SVP Marketing and Policy at Brandsight, Inc.

Follow CircleID on Twitter

More under: Domain Management, Domain Names

Categories: News and Updates

Microsoft Cancels Plans to Move Its Internal Wireless Network to IPv6-Only

Tue, 2018-09-18 21:47

Microsoft has backed away from a previously announced plan to move its internal wireless guest network to IPv6-only. Veronika McKillop, Network Architect at Microsoft, in a post on Monday says: "Unfortunately, we had to stop this work because we came across something that the previous internal testing had not uncovered — a team member attended a conference where Internet access was provided as IPv6-only and 99% of attendees could not get their VPN clients to connect on this network. VPN failing on IPv6-only networks (through NAT64) is, as we then found out, well documented in RFC 7269. This finding made it clear that visitors to Microsoft offices who rely on the Guest network would be heavily impacted unless their VPN gateways were IPv6-enabled."

Bottom line: The network part is easy; barring software bugs, applications are the big unknown, says McKillop. "Not just our own but the third-party applications that often claim 'IPv6 compatible' however when it comes to a real deployment, the experience is quite different."
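
For anyone debugging this class of problem, a useful first check is whether the network you are on is actually IPv6-only behind DNS64/NAT64. The sketch below (Python standard library only) uses the RFC 7050 convention of resolving ipv4only.arpa over AAAA; since that name has no real AAAA record, any answer was synthesized by a DNS64 resolver:

  # Sketch: detect a DNS64/NAT64 path by asking for AAAA records for
  # ipv4only.arpa (RFC 7050). The name has no genuine AAAA record, so any
  # IPv6 answer must have been synthesized by a DNS64 resolver.
  import socket

  try:
      results = socket.getaddrinfo("ipv4only.arpa", None, socket.AF_INET6)
      addrs = sorted({r[4][0] for r in results})
      print("DNS64/NAT64 detected; synthesized addresses:", ", ".join(addrs))
  except socket.gaierror:
      print("No AAAA answer for ipv4only.arpa; no DNS64/NAT64 on this path")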

Follow CircleID on Twitter

More under: IP Addressing, IPv6, Networks, Wireless

Categories: News and Updates

Caribbean Candidates Vie for Posts in ARIN Elections

Tue, 2018-09-18 17:02

Three Caribbean candidates — Peter Harrison, Kerrie-Ann Richards and Alicia Trotman — have been named among the final candidates to contest elections for leadership roles at the American Registry for Internet Numbers (ARIN) in October.

ARIN is one of five Internet registries worldwide that coordinate the distribution and administration of number resources. The registry serves the United States, Canada and several territories in the Caribbean.

Richards and Trotman will vie for posts on the ARIN Advisory Council. In 2017, Jamaican-born Richards and Barbadian-born Trotman made history, becoming the first Caribbean members of the ARIN advisory council since the registry was founded on April 18, 1997.

"I am running again because there is still much work to be done," Trotman said.

"The Caribbean voice matters at this level because policy decided here will affect the growth of the Internet in the region," said Richards, chairperson of education non-profit Vision for Jamaica.

"We are the only ones shortlisted from outside North America. I feel that we bring valuable perspectives to the table and added diverse insight from our Caribbean experience," she added.

Jamaican-born Harrison will contest for a seat on the ARIN Board of Trustees. Harrison is the chief technical officer and co-founder of Silicon Valley-based colocation services provider Colovore. He is also the founder of the Palisadoes Foundation, a registered non-profit that coordinates student internships in software development for Jamaican residents.

"My work with Palisadoes has many parallels with the ARIN fellowship program and I believe my broad experience would be of benefit to the ARIN and to the Caribbean," said Harrison, who has worked with hyperscale companies like Google, Netflix and eBay, as well as smaller ones in the Caribbean.

In an August 9 post, ARIN announced that Regenie Fräser, the former Secretary General of a regional trade association, had been selected to a special appointment to serve on its Board of Trustees for a one-year term "so as to provide more diversity in the Board's composition." Fräser became the first non-white and Caribbean person appointed as a trustee.

The final 2018 candidate slate for the ARIN Advisory Council also includes Brad Gorman (Verisign), Kathleen Hunter (Comcast), Rob Seastrom (ByteGrid) and Amy Potter. The final slate for the ARIN Board of Trustees includes Anna Valsami (Telstra), Cathy Chen-Rennie (Capriole Consulting) and Paul Andersen (EGATE Networks).

On October 4, during ARIN's public policy meeting in Vancouver, British Columbia, candidates will have the opportunity to address ARIN members. More information on each candidate is available on the ARIN website.

Online voting opens on October 4 at 6 pm EDT and closes on October 12 at 6 pm. All terms will begin on January 1, 2019.

Written by Gerard Best, Development Journalist

Follow CircleID on Twitter

More under: Internet Governance, IP Addressing

Categories: News and Updates

Continued Threats from Malware

Mon, 2018-09-17 16:14

As part of my job, I manage an incident response team that was engaged by a significant organization in Georgia whose network was infected by the QBOT (a.k.a. QAKBOT) malware. The customer had been infected for over a year, several teams before ours had failed to solve the problem, and they continued to get reinfected by the malware when they thought they had eradicated it. Over time it had spread to more than 1,000 computers in their ecosystem, stealing user credentials along the way. Malware is a real problem for businesses and consumers, but how many people really understand what it is? I was recently asked this same basic question and realized that even my answer as a security subject matter expert was not as clear as it could have been. So, I thought it was time to put together this article to answer not only what malware is, but what it does, how to eradicate it and what the best practices are to remain secure.

To begin with, malware is a generic industry term that refers to malicious software designed to do harm to computer systems. Many people use the terms malware and computer virus interchangeably, but technically that would be incorrect. The three most common categories malware falls into are viruses, worms and trojans. Ransomware, a specific type of malware, can result from any of these three categories but typically is the result of a trojan. A computer virus is malicious software that, when executed, replicates itself by modifying other computer programs and inserting its own code. Computer viruses typically need a human to execute them for a computer system to get infected. A computer worm is malicious software whose primary function is to infect other computers while remaining active on infected systems. A computer worm is self-replicating malware that duplicates itself (without human interaction) to spread to uninfected computers, and it does not need to attach itself to another program in order to cause damage. Lastly, a trojan is malicious software that looks legitimate but can take control of your computer. A trojan is designed to damage, disrupt, steal, or in general inflict some other harmful action on your data or network, much like other types of malware. The QBOT malware I referenced earlier is somewhat unique in that it is defined as both a trojan and a worm. It is self-replicating, spreading to other computers on its own, steals user credentials and in this case disrupted the customer's Active Directory environment on their network.

Malware can infect your computer in a number of ways. The most common ones are opening an infected email attachment, connecting to an infected data source (e.g. thumb drive, network drive, etc.) and visiting an infected website. According to Google, they identify and blacklist thousands of unsafe websites every week, which contain some sort of malicious software dangerous to their visitors (Google Transparency Report). It is estimated that nearly three-quarters of all websites have at least one vulnerability. Infected websites can have automatic malware downloads referred to as "drive-by downloads", exploit kits that search your computer for unpatched vulnerabilities, JavaScript infections that download malicious software your browser then executes, URL injections commonly embedded inside of compromised WordPress blog sites, or browser hijacks that constantly redirect you to other pages, collect personal information, or act as gateways to rootkits. This issue has even impacted well-known and reputable websites whose advertisers' and embedded third-party content became compromised without their knowledge. The truly dangerous stuff, luckily less common today, either happens somewhere in the supply chain before you receive your device or infects your machine at a level prior to your operating system loading. Some of the newest malware is known to infect your computer's BIOS or mobile device's bootloader.

Once infected, the malware is likely to spread through email, file sharing or your network to other workstations, servers, mobile devices or less protected devices like copiers and printers. Imagine everything you copy or print becoming available for sale on the internet. If connected to a network it can take advantage of existing file-transport or information-transport capabilities on the system itself, allowing it to travel unaided. If it can't find the mode of transport it wants, advanced malware is able to download additional post-exploitation modules to gain access to additional tools of the trade. Don't be surprised if you see malware utilizing older protocols like NetBIOS, which for today's operating systems is only used for file or printer sharing on a local area network. Once the new device is infected it doesn't always require human intervention to activate or launch the malware, many times simply exploiting a vulnerability on the target system. When on a file share, like a network drive, malware will typically infect files (e.g. MS Word or Excel) which it knows a human will eventually launch, activating hidden macros it has infected them with to perform its malicious intent.

To eradicate malware from your environment, most incident response teams will implement a multi-step process, but all of them should include some type of detection, analysis, containment, mitigation and lessons learned to be applied after the incident. Our customer in Georgia failed to eliminate their malware issues prior to our involvement by failing to properly perform two of these steps. They were unable to properly detect the QBOT malware due to a lack of internal monitoring capabilities and its self-mutating nature rendering their signature-based tools completely ineffective. They also failed to contain the outbreak, allowing it to reinfect systems immediately following their cleaning. There are no shortcuts. Each step in your incident response team's playbook will be important. Even basic things like changing access credentials and patching software are critical steps in your remediation plan.

If our customer in Georgia had properly segmented their network, it would have limited the propagation of exploits to a single segment and eliminated the malware's ability to move laterally around the network. Allowing unfiltered workstation-to-workstation communications (as well as other peer-to-peer communications) creates serious vulnerabilities and can allow malware to easily spread to multiple systems. If malware can establish an effective beachhead within your network, and then spread to create backdoors to maintain persistence, it will be difficult for defenders to contain and eradicate it. Monitoring for this lateral network traffic, and for external communications with command and control servers, can identify a large majority of malware infections on a network.
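
As a rough illustration of that kind of monitoring, the sketch below (Python standard library only; it assumes a hypothetical CSV flow log with src_ip, dst_ip and dst_port columns rather than any specific product's format) flags workstation-to-workstation connections inside an internal subnet, the lateral-movement pattern described above:

  # Rough sketch: flag workstation-to-workstation ("east-west") connections
  # inside an internal subnet from a flow log. The CSV layout (src_ip,
  # dst_ip, dst_port) and the subnet are hypothetical examples.
  import csv
  import ipaddress

  WORKSTATION_NET = ipaddress.ip_network("10.20.0.0/16")  # placeholder subnet

  with open("flows.csv", newline="") as fh:
      for row in csv.DictReader(fh):
          src = ipaddress.ip_address(row["src_ip"])
          dst = ipaddress.ip_address(row["dst_ip"])
          if src in WORKSTATION_NET and dst in WORKSTATION_NET:
              print(f"lateral traffic: {src} -> {dst} port {row['dst_port']}")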

Best practices to avoid getting infected by malware, and to reduce the impact if you do become infected, include development of pre-established security policies & procedures, companywide staff training, constant backups, consistent software vulnerability patching, use of a behavioral-based endpoint protection platform (EPP), proper network segmentation, encryption of data, effective monitoring of network traffic and security alerts, implementation of least-privilege based access rights for users, accounts, and computing processes, and finally network edge-based protections (e.g. UTM, NGF, DNS, etc.) to block access to malicious sites and exfiltration of data. If you are not utilizing any of these best practice items, I highly recommend contacting a qualified vendor to help. The risk is real, and after the Target breach in 2013, it is widely recognized that all levels of management can be held accountable for cybersecurity breaches.

Written by Rick Rumbarger, Technology Executive

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, Cybersecurity, Malware

Categories: News and Updates

(DNS) Security Protocols Do What They Say on the Tin

Fri, 2018-09-14 19:23

DNS-over-TLS has recently become a welcome addition to the range of security protocols supported by DNS. It joins TSIG, SIG(0) and DNSSEC to add privacy, and, in the absence of validating stub resolvers, necessary data integrity on the link between a full-service resolver and the users' stub resolver. (The authenticated source feature of TLS may also offer some additional benefits for those of a nervous disposition.) Good stuff.

What is not good stuff is when implementers suggest that any specific security protocol is capable of doing more than it says on its tin.

Protocol designers, and especially security protocol designers, are cautious people and careful to define precisely, or as precisely as the English language is capable of, the functionality of their design in its specification (in our case RFCs).

It has been suggested that ubiquitous DNS-over-TLS (stub to resolver, resolver to authoritative sources) is functionally equivalent to DNSSEC. It is not. Both DNSSEC and TLS do what they say on their tin. No more and no less.

DNSSEC is designed to ensure DNS data originates only from the authoritative source and is unchanged at the termination of the DNSSEC scope — when the DNS data is validated. It does so by digitally signing the zone (technically RRsets within the zone) using RRSIG records and by providing a verifiable chain of trust, typically via the DNS delegation hierarchy (DS records). DNSSEC can be viewed as an application-specific content security and authentication protocol. That's what it says on its tin (RFC 4033 and many others).
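
To make that description concrete, the short sketch below (Python with the third-party dnspython library; example.com and 8.8.8.8 are placeholders for a signed zone and a DNSSEC-aware resolver) fetches an RRset together with its covering RRSIG, and the DS record that anchors the delegation's chain of trust:

  # Minimal sketch: show the records that implement the mechanism described
  # above, i.e. the RRSIG covering an RRset and the DS record that anchors
  # the chain of trust at the parent. Requires the dnspython package.
  import dns.message
  import dns.query

  RESOLVER = "8.8.8.8"   # placeholder DNSSEC-aware resolver
  NAME = "example.com"   # placeholder signed zone

  for rdtype in ("A", "DS"):
      query = dns.message.make_query(NAME, rdtype, want_dnssec=True)
      response = dns.query.udp(query, RESOLVER, timeout=3)
      for rrset in response.answer:
          print(rrset)   # prints the RRset and its covering RRSIG records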

TLS provides integrity, privacy and source authentication for data supplied to the TLS software via some API (not defined by TLS) from some application (not defined by TLS). The application may obtain the data it supplies to TLS by self-creation, from RAM, from a filesystem, a remote location or by some other esoteric process, any or all of which may be vulnerable. If the data supplied by the application, for example, a web server, a DNS resolver or a mail system, is clean, corrupt, has been hacked or is otherwise maliciously modified TLS will simply ensure the clean, corrupt, hacked or otherwise modified data is delivered unchanged and confidentially to the TLS peer. TLS is a powerful and highly efficient general purpose (non-application specific) secure communications and end-entity authentication protocol. That's what it says on its tin (RFC 8446 and many others).

(There is one application-specific data content element within TLS. During the TLS handshake phase a certificate, typically an X.509 certificate, is normally supplied and validated before the connection can be established. The certificate validation process is not specified within TLS but determined by the certificate type. For example, the X.509 certificate validation process is defined by RFC 5280 and others.)

TLS plays a vital role in securing access to many services and will contribute its own unique capabilities to DNS.

The bottom line: If you want your clients to have privacy, secure last-mile communications and are content to hope the data you are sending is correct, then DNS-over-TLS is for you; If you want your clients to have privacy, secure last-mile communications and want to ensure the data you are sending is correct, then you need both DNS-over-TLS and DNSSEC.
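
To put that bottom line into practice, the two can be combined on one query path: carry the question over TLS for last-mile privacy and rely on DNSSEC validation for correctness. A minimal sketch (Python with dnspython 2.x; 1.1.1.1 on port 853 is used only as an example DNS-over-TLS resolver and example.com as a placeholder name):

  # Minimal sketch: send a query over DNS-over-TLS (privacy on the link) and
  # check the AD flag, which a validating resolver sets when the DNSSEC chain
  # of trust for the answer has been verified. Requires dnspython 2.x.
  import ssl
  import dns.flags
  import dns.message
  import dns.query

  DOT_RESOLVER = "1.1.1.1"            # example public DNS-over-TLS resolver
  TLS_NAME = "cloudflare-dns.com"     # TLS name presented by that resolver
  NAME = "example.com"                # placeholder zone

  query = dns.message.make_query(NAME, "A", want_dnssec=True)
  response = dns.query.tls(query, DOT_RESOLVER, port=853,
                           ssl_context=ssl.create_default_context(),
                           server_hostname=TLS_NAME, timeout=5)
  print("answer:", [rrset.to_text() for rrset in response.answer])
  print("DNSSEC-validated by resolver:", bool(response.flags & dns.flags.AD))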

There is, however, another reason to welcome DNS-over-TLS. TLS has been around, in one form or another (including its SSL ancestor), for about 26 years, DNSSEC for about half that period. TLS/SSL has had 5 minor surgeries and one, recent, major surgery (TLS 1.3). TLS penetration rates are high, partly driven by the inherent benefits of the protocol, partly by threat of obliteration by the search engines if not implemented. (Does that constitute a modest carrot and a very big stick?) Whatever the reasons, TLS has always taken a pragmatic approach to implementation while maintaining the highest levels of security. Perhaps the DNS community needs to review critically the implementation details of DNSSEC with the objective of radically improving its penetration rate. Learn some lessons from its new (TLS) stable mate.

DNSSEC is, arguably, the only application-specific content security protocol the Internet has. That has meant wrestling with its unique problems. But let's stop fighting the theory wars of the past (DNSSEC works) and admit we need some, perhaps major, surgery to make it practical.

Written by Ron Aitchison, Consultant, developer, trainer and author

Follow CircleID on Twitter

More under: DNS, DNS Security

Categories: News and Updates

"Seven Dirty Words" Restriction Policy Lifted from .US Domain Name Registrations

Fri, 2018-09-14 03:20

Neustar, the registry operator of the .US domain, and NTIA have reversed course, allowing the previously restricted "seven dirty words" in future .US domain name registrations. The decision came after EFF and the Cyberlaw Clinic at Harvard Law School intervened in the cancelation of a domain name containing a restricted word. The domain name registered by Mr. Rubin had been suspended by Neustar, which called it a violation of an NTIA "seven dirty words" policy — "a phrase with particular First Amendment significance," said EFF.

Cyberlaw Clinic explains in a blog post the significance of the case: "As a general rule, First Amendment law makes clear that the government cannot impose content-based restrictions on speech. The well-known case, Federal Communications Commission v. Pacifica Foundation, 438 U.S. 726 (1978), held that the Federal Communications Commission ('FCC') may regulate over-the-air broadcasts of the so-called 'seven dirty words' comedic bit made famous by George Carlin. But, that ruling is limited to broadcasts over public airwaves and is inapplicable to other forms of media distribution. It thus surprised [us] to learn that NTIA and Neustar had a policy of using the Pacifica list of seven words to police domain name registrations. NTIA and Neustar saw fit to cancel [Mr. Rubin's] registration in accordance with that policy upon noting that it incorporated the 'f-word.'"

Follow CircleID on Twitter

More under: Censorship, Domain Names, Internet Governance, Law, Policy & Regulation

Categories: News and Updates

LACNIC, Google, CaribNOG and Internet Society to Hold 'Internet Week Trinidad and Tobago'

Thu, 2018-09-13 22:31

The Latin American and Caribbean Internet Registry (LACNIC) and Google will hold a series of workshops next month as part of Internet Week Trinidad and Tobago, an event intended to advance the Internet development agenda of the wider region.

The workshops are part of a project through which LACNIC and Google seek to strengthen digital markets in Central American and Caribbean countries. This joint project seeks to enhance local connectivity and strengthen the ecosystem for entrepreneurs.

A half-day workshop will focus on marketing strategies for entrepreneurs, offering local business owners and start-up founders a package of free training on the use of Google's digital marketing tools.

A two-day workshop will focus on the nuts and bolts of Internet connectivity and traffic optimization, covering a range of technical issues such as the new Internet Protocol (IPv6), routing security (BGP and RPKI), peering models, open standards and root servers.

Another workshop, under the umbrella of LACNIC's AMPARO project, will address emerging cybersecurity issues across the region.

As part of Internet Week Trinidad and Tobago, the Caribbean Network Operators Group (CaribNOG) will hold its sixteenth regional meeting, an all-day affair held with the support of the Internet Society. CaribNOG 16 will focus on cybersecurity, outlining some of the foundational and necessary steps to secure an online environment in which Caribbean businesses can compete and collaborate internationally, across all industries.

The Internet Society will also conduct a session, continuing its capacity-building campaigns in the Caribbean region, this time with a particular focus on the issues of disaster management and mitigation.

The four-day event, held from October 2nd to 5th, is being held with the support of the Caribbean Telecommunications Union and the Internet Corporation for Assigned Names and Numbers (ICANN).

The workshop series in Trinidad and Tobago follows a similar conference held in Guatemala City last month, with the local support of the Guatemalan telecommunications regulator. Another series will be held later this year in the Dominican Republic.

Written by Gerard Best, Development Journalist

Follow CircleID on Twitter

More under: Cybersecurity, Internet Governance, IP Addressing, Regional Registries

Categories: News and Updates

Current Security Measures Not Enough to Protect Data in Lost or Stolen Laptops, Experts Warn

Thu, 2018-09-13 22:11

A weakness in modern computers allows attackers to steal encryption keys and other sensitive information, according to the latest discovery by cybersecurity firm F-Secure. Researchers from the firm are warning PC vendors and users that current security measures are not sufficient to protect data in lost or stolen laptops. Attackers do need physical access to the computer to carry out the exploit; however, F-Secure Principal Security Consultant Olle Segerdahl says that once that is achieved, an adversary can successfully perform the attack in about 5 minutes. From the report: "The weakness allows attackers with physical access to a computer to perform a cold boot attack — an attack that's been known to hackers since 2008. Cold boot attacks involve rebooting a computer without following a proper shutdown process, then recovering data that remains briefly accessible in the RAM after the power is lost. Modern laptops now overwrite RAM specifically to prevent attackers from using cold boot attacks to steal data. However, Segerdahl and his team discovered a way to disable the overwrite process and re-enable the decade-old cold boot attack."

Plan ahead: "A quick response that invalidates access credentials will make stolen laptops less valuable to attackers. IT security and incident response teams should rehearse this scenario and make sure that the company's workforce knows to notify IT immediately if a device is lost or stolen," says Olle. "Planning for these events is a better practice than assuming devices cannot be physically compromised by hackers because that's obviously not the case."

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity

Categories: News and Updates

Alphabet's Loon Balloons Can Now Cover 1000km of Internet Connectivity via 1 Access Point

Thu, 2018-09-13 19:54

Loon, formerly a Google X project and now an independent Alphabet company, reveals that it successfully transmitted data over 1,000 kilometers (621 miles) via a network of 7 balloons. This new milestone was achieved using new custom antennas that stretched what had been a 100km connection between two balloons ten times further, across seven balloons. Why does this matter? Salvatore Candido, Head of Engineering at Loon, explains: "Even with our balloons' expanded coverage area — which is 20 to 30 times greater than a traditional ground-based system — there are people who live outside the reach of one of our balloons operating adjacent to a backhaul connection on the ground. If we can extend our reach by passing that connection across a network of balloons, like a cosmic soccer team advancing the ball through the sky, we can cover far more people."

Follow CircleID on Twitter

More under: Access Providers, Broadband, Wireless

Categories: News and Updates

The UN Panel on Digital Cooperation: Reinventing the Wheel or Innovating Internet Policy Making?

Thu, 2018-09-13 18:34

The new High-Level Panel on Digital Cooperation (HLP.DC), appointed by UN Secretary General Antonio Guterres, will have its first face-to-face meeting in New York, September 25-26, 2018, just before the beginning of the 73rd UN General Assembly. The Panel, co-chaired by an American woman, Melinda Gates from the Bill & Melinda Gates Foundation, and a Chinese man, Jack Ma from Alibaba, "is expected to raise awareness about the transformative impact of digital technologies across society and the economy, and contribute to the broader public debate on how to ensure a safe and inclusive digital future for all, taking into account relevant human rights norms." Its final report will be tabled in May 2019.

Painting a picture of how the "digital future for all" should look is a big challenge. There is obviously an opportunity for the panel to conclude with some exciting new political innovations on how to stabilize a peaceful cyberspace which enables digital trade, sustainable development and economic growth, and respects human rights. But there is also a risk that the outcome will be just another report which will sooner or later disappear into the UN archives.

Internet Governance and the United Nations: A Hot Potato

Internet Governance and the United Nations is a delicate issue and a hot potato. Since 2003, the days of the UN World Summit on the Information Society (WSIS), there has been hidden intergovernmental arm-twisting behind the scenes over how to manage the global Internet. Some governments prefer an intergovernmental UN oversight mechanism; others prefer a multi-stakeholder governance model.

Just recently, in January 2018, the UNCSTD Working Group on Enhanced Cooperation (WGEC), after four years of arm-twisting, failed to reach a consensus on what to do next to enhance Internet Governance cooperation. Politically, the world is split over the question of how to manage the evolution and use of the most important infrastructure of the 21st century. And with the growing unilateralism in today's world politics, there is little hope that a common understanding will emerge soon. On the contrary, the fragmentation of opinions and approaches in world politics carries a growing risk of re-nationalization and fragmentation of the Internet.

On the other hand, the world has changed since 2003. Fifteen years ago, less than half a billion people were online. Today, we have more than four billion Internet users. In 2003, Internet Governance was a technical issue with some political implications. Now it is a political issue with a technical component. In 2003, the political controversy was mainly about the management of critical Internet resources; it was ICANN vs. ITU. Today, it is cybersecurity, cyberweapons, digital trade, eCommerce, privacy, freedom of expression and many other issues, managed by intergovernmental organizations and networks from the G7, G20, BRICS, NATO, ASEAN, OSCE and SCO to the WTO, UNESCO, WIPO, ILO, HRC and many others. Those actors had little or nothing to do with the Internet 15 years ago. Now they are key players. While the role of technical bodies like ICANN, the RIRs, IETF, IAB, IEEE, ISOC and others is as important as it was in 2003, there has been a significant power shift in the global Internet Governance ecosystem.

On the eve of the 2020s, there is no difference anymore between the "Internet world" and the "real world." The "real world" is now a world based on the Internet. In 1996, John Perry Barlow declared in Davos that cyberspace is the "new home of mind" where the "Governments of the Industrial World" and "the giants of flesh and steel" have no place. But today's reality is that the "giants of flesh and steel" and the "governments of the world" have likewise settled in cyberspace. And indeed, cyberspace is too big to be populated and managed by just one single community or by one stakeholder group. Peace, security, disarmament, trade, sustainable development and human rights, where the UN has had a mandate for policymaking since 1945, are now Internet issues.

In that respect, it makes a lot of sense that a UN panel looks into the broader implications of future digital cooperation. And it is very wise to leave this investigation into the digital future not solely in the hands of the "governments of the world" and the "giants of flesh and steel," but also not to exclude them.

One cannot ignore that the new Internet Governance complexity has produced a need to go beyond the Internet controversies of the last 15 years. The time is ripe to start new thinking about a common responsibility of all state and non-state actors in tomorrow's digital world. The time is ripe for a new and unbiased approach to the global Internet Governance dialogue. We have to leave behind us the senseless battles between "multilateralism" and "multistakeholderism." As said above, cyberspace is too big to be managed by just one group. There is space both for multistakeholder collaboration and for intergovernmental arrangements, as long as all sides follow international law, human rights and the fundamental principles of Internet Governance, as laid down, inter alia, in the NetMundial Declaration of April 2014.

When Guterres addressed the Munich Security Conference (MSC) on February 16, 2018, he made it clear that he is "one of those that defend that only through a multiple stakeholder approaches we will be able to make progress." I believe, said Guterres, that "it is necessary to bring together governments, the private sector involved in these areas, civil society, academia and research centers, in order to be able to establish at least some basic protocols to allow for the web to be an effective instrument for the good." He rejected any UN ambitions to control the Internet and added: "I don't intend that the United Nations has a leadership role on this, but I can guarantee that the United Nations would be ready to be a platform in which different actors could come together and discuss the way forward, in order to find the adequate approaches to make sure that we are able to deal with the problem of cybersecurity… especially now that artificial intelligence that is providing enormous potential for economic development, social development and for the well-being for all of us."

The UN as a facilitator, not as a manager or controller. This sounds reasonable. With all its weaknesses, the UN has a high authority and gives processes legitimacy. This gives the panel enough flexibility: it is not bound by traditional UN rules of procedures, but it benefits from the aura of the UN.

With the appointment of the new panel, Guterres has now placed the hot potato in the hands of a rather mixed multistakeholder group from around the globe, with ministers, CEOs, professors, technical experts, civil society activists, Nobel prize winners and even one of the fathers of the Internet, Vint Cerf. Guterres can now wash his hands and say good luck. But will the panel be able to deliver? Payday is May 2019, a very short timeframe for such a big problem.

On the other hand, with so many Internet reports which have been produced in the last years by high-level commissions and working groups, it should not be a problem for the panel to understand the issue. There is no need anymore "to study" the various implications. All the cards are on the table. This is a time for courageous, creative and innovative decisions. If the panel produces a short report with very clear and simple messages which embrace the new Internet Governance complexity, it can offer a way forward on how to frame the political Internet discussions in the 2020s. It can send back the hot potato to the 193 UN member states and the numerous non-state actors on a dish with a knife and a fork and good instructions on how to eat it.

The WGIG Experience

Guterres is not the first UN Secretary General to appoint an expert group to discuss Internet issues. In 2003, at the end of the first phase of the UN World Summit on the Information Society (WSIS) in Geneva, the governments of China and the US disagreed fundamentally on how the Internet should be governed. The risk was high that the whole summit would collapse. China argued in favor of "governmental leadership" for the Internet. The US preferred "private sector leadership." Both governments disagreed even about the terminology of "Internet Governance." The only thing they could agree on was to ask UN Secretary General Kofi Annan to establish a "Working Group on Internet Governance" (WGIG) with a mandate to define "Internet Governance," to identify Internet-related public policy issues and to give some recommendations to the UN member states on what to do during the 2nd phase of WSIS, scheduled for 2005.

Before the first WGIG meeting, Kofi Annan said in a speech in New York in March 2004: "The issues are numerous and complex. Even the definition of what we mean by internet governance is a subject of debate. But the world has a common interest in ensuring the security and the dependability of this new medium. Equally important, we need to develop inclusive and participatory models of governance. The medium must be made accessible and responsive to the needs of all the world's people." And he added that "in managing, promoting and protecting [the internet's] presence in our lives, we need to be no less creative than those who invented it. Clearly, there is a need for governance, but that does not necessarily mean that it has to be done in the traditional way, for something that is so very different."

Kofi Annan's call for "political innovation" was reflected already in the composition of the WGIG. The 40 members came not only from governments, as usual in the UN context, but also included stakeholders from the private sector, civil society, and the academic and technical community. And indeed, the combined wisdom of this multistakeholder group paved the way for a real political innovation: the group rejected the concept that the Internet needs "a leader." It argued that the Internet needs primarily a "trusted collaboration" and the engagement of the private sector, civil society, the technical-academic community, and governments. Policy development and decision making should be "shared" by all stakeholders in their "respective roles"; that is, no stakeholder can substitute for another stakeholder, but all stakeholders are needed to find sustainable solutions for the newly emerging problems.

Not everybody in the WGIG expected that the governments would accept the "multistakeholder approach" as an innovative proposal for policymaking in cyberspace. Intergovernmental negotiations at a "world summit" have their own rules, and governments normally treat reports from expert groups merely as "food for thought." It therefore came as a surprise that the WGIG recommendation was confirmed in the "Tunis Agenda" word for word. There was no reasonable alternative on the negotiation table. And rather than declaring "defeat," the heads of state of the 193 UN member states took the low-hanging fruit and opened the door to do "something that is so very different," as Kofi Annan had said in New York in 2004.

Nevertheless, the "governmental leadership camp," which could not stop the "opening of the door," tried to tie the journey into the new cyberspace to its traditional understanding of the sovereign rights of states. For this camp, the "respective roles" of the stakeholders included a privileged role for governments, reflected in paragraph 35 of the Tunis Agenda, which said that "authority for Internet-related public policy issues is the sovereign right of States." However, paragraph 35 was linked to paragraph 68, which said that the "development of public policy by governments" should be done "in consultation with all stakeholders." And paragraph 34 introduced language stating that multistakeholder cooperation should be based on "shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet."

What sounds like hair-splitting by wordsmiths reflected the unsettled conflict between the worldviews of "hierarchies" and "networks," of "autocracies" and "democracies." It left space for three rather different concepts of digital cooperation:

  • The traditional concept of an absolute "state sovereignty" where "one decision maker" sits on top of a hierarchy,
  • The concept of "state sovereignty linked to consultations with non-state actors" where decision making remains in the hands of the "master on the top," but has to go through an open and transparent process,
  • The multistakeholder concept where policy development and decision making is shared on more or less equal footing among all involved and affected stakeholders, based on bottom-up, open and transparent processes.

Since 2005, the Internet Governance Forum (IGF), established by the Tunis Agenda, has been the place for exercising such multistakeholder cooperation. It has not worked out badly. The multistakeholder approach to Internet Governance proved its value. The IGF itself got a limited mandate that excluded decision-making capacity, but the IGF discussions paved the way for a number of success stories by preparing such decisions through open and transparent processes.

One example is the NetMundial Declaration on Principles of Internet Governance. Another is ICANN's IANA transition. Since the first IGF in Athens in 2006, "Principles for Internet Governance" have been discussed in numerous plenaries and workshops. In 2014, NetMundial summarized, globalized and "multistakeholderized" the IGF discussions and the numerous efforts by organizations like the OECD, the Council of Europe, the Global Network Initiative (GNI), the Association for Progressive Communications (APC) and others. In 2016, the IANA transition — a controversial item at nearly every IGF — demonstrated the capacity of the multistakeholder approach to transfer the stewardship role over the IANA functions from a powerful government to an empowered community.

Looking Forward

But all this is water under the bridge. Times have changed. There is a new Internet Governance Complexity. The power balance within the Internet Governance Ecosystem has shifted. On the horizon, we see shadows of new intergovernmental cyberconflicts, digital trade wars and massive violations of individual human rights. Are the "good old days" of the Internet over?

The Internet is again at a crossroads. This is not dramatically new; for years the Internet community has been stumbling forward from one crossroads to the next. But at the next crossroads, the traffic will reach a new level. The "Internet community" is no longer sitting in the driver's seat. The new vehicles on the information superhighway are operated by the security community, the military, the police, the trade people and many other constituencies, which have their own established rules, procedures, cultures and lobby groups, and little or no knowledge of the history of yesterday's Internet battles.

This new "clash of cultures" goes far beyond "multilateralism" vs. "multistakeholderism," far beyond "bellheads" vs. "netheads" or "privacy vs. security." 15 years ago, the military people were sitting over disarmament proposals, the police were dealing with traditional crimes, WTO people negotiated trade treaties, the UN Human Rights Commission discussed violations of human rights in failed states. All those communities were sitting in their well-established silos, busy with their well-defined core business. Today, all those groups have to deal with Internet-related issues.

As on the Internet itself, where every computer is connected to every other computer, every political problem is now connected to every other problem. Measures to strengthen cybersecurity — such as the adoption of new laws — have economic implications, affect digital trade and touch individual human rights such as freedom of expression and privacy. Measures to protect human rights affect the digital economy and cybersecurity. The European GDPR is a good example: its intention, to strengthen individual privacy, has rocked the business model of many global corporations and challenges the day-to-day operations of law enforcement agencies.

The new "clash of cultures" is a "multiple clash" were military, trade, human rights, and Internet thinking comes together and it is not clear how this different approaches can be managed in a way that they can co-exist, learn from each other and coordinate their efforts to save cyberspace, to enable economic growth and sustainable development, respect human rights and allow a further and unfragmented evolution of a free, open and safe Internet. It seems that the old story of "the elephant and the seven blind men" gets a new reality check. A four-star-general, a police officer, an Internet entrepreneur, a human rights activist, a governmental bureaucrat, a professor, and an Internet user will have rather different sensors on their fingertips if they touch the Internet. But there is only one Internet.

It will indeed take a new wave of wisdom to bring this new complexity, and the resulting power shift in the Internet Governance Ecosystem, into a new balance. The growing shadows of cyberconflicts and digital wars on the horizon are wake-up calls. It is not too late to stop a downward digital spiral that could lead to a cyber catastrophe. But something has to be done to prevent the "clash" from turning into a "crash," a "digital Hiroshima."

It is interesting to note that the two co-chairs of the panel are linked to private corporations that have initiated projects aimed at avoiding cyberwar and enhancing collaboration through digital trade. Microsoft has been pushing for a couple of years for a "Digital Geneva Convention" to stabilize cyberpeace, and its "Tech Accord" is an invitation, not only to the private sector, to move forward with substantial arrangements. Alibaba has launched an "Electronic World Trade Platform" (eWTP) aimed at promoting digital trade. "It is easy to start a trade war, but difficult to manage the consequences," said Jack Ma in January 2018 at the World Economic Forum in Davos. And he added: "Don't use trade as a weapon, use trade as a means to cooperate. It will take 30 years to fix the pain."

In other words, the new UN panel is confronted with the big issues. After great progress in the first decade of the 21st century, the last couple of years have seen rather troubling developments in the cyber world. Will the panel be able to make proposals to reverse such negative trends? Will it make innovative recommendations that enable the pendulum to swing back in the 2020s? Two milestones are already fixed for the next decade: 2025 will see the WSIS+20 review of the Tunis Agenda, and 2030 is the checkpoint for the Sustainable Development Goals (SDGs).

Kofi Annan's plea for "policy innovation" was right in 2004, and it is still right in 2018. Something has been achieved in the last 15 years, and there is no need to reinvent the wheel. The WGIG definition, the IGF, the NetMundial Declaration, the London Process and the IANA transition are good starting points. But with the new challenges coming from the new political unilateralism and from technical evolutions like the Internet of Things and Artificial Intelligence, a new innovative wave of Internet policymaking is needed. The multistakeholder approach was an innovation in 2005. The world is now waiting for another political innovation.

The UN panel is not alone. As noted above, many Internet reports have been produced in recent years by high-level groups, and more are in the pipeline. The UN Secretary-General welcomed "the increased focus on the implications of digital technologies for our society and our economy through commissions, conferences and other forums. This signifies that the timing is ripe for the digital policy ecosystem to evolve to the next level of maturity. The work of all these initiatives can and should be mutually reinforcing. Wherever possible, this Panel will work with other initiatives and seek to identify synergies and complementarities." The "Global Commission on the Future of Work" and the "Global Commission on the Stability of Cyberspace" are only two such bodies that could help reach this "next level of maturity."

In my "Internet Governance Outlook 2018” I wrote: "What is needed is a holistic approach which takes into consideration all aspects, including unintended side effects. But unfortunately, the existing Internet negotiations mechanisms — with the exception of the Internet Governance Forum (IGF) — does not provide such a broad and inclusive approach. As long as the constituencies will remain in their silos, progress will be limited. And if this "silo approach" is mixed with a political unwillingness to enter into multistakeholder arrangements, not much can be expected from 2018."

I hope that I was wrong, and that the UN Panel will help turn political unwillingness into a readiness to take the next stumbling step forward into the still unknown territory of endless cyberspace.

Written by Wolfgang Kleinwächter, Professor Emeritus at the University of Aarhus

Follow CircleID on Twitter

More under: Internet Governance, Policy & Regulation

Categories: News and Updates

New Zealand's Domain Name Commission Wins Injunction in a Lawsuit Against DomainTools

Thu, 2018-09-13 03:40

New Zealand's Domain Name Commission today won a motion for a preliminary injunction in a US lawsuit against the company DomainTools. The plaintiff argued that DomainTools breached the Commission's terms of use and exposed details of domain name holders who chose to have their details kept private. Domain Name Commissioner Brent Carey said in a release today: "We look forward to presenting our full case to the Court, as we seek to permanently prevent DomainTools from ever building a secondary .nz database offshore and outside the control of the Domain Name Commission."

DomainTools argued that the lawsuit may cause an avalanche of litigation from other registries also attempting to protect the privacy of their registrants, to which Judge Lasnik responded that they may be correct.

The court papers are available here.

Follow CircleID on Twitter

More under: Domain Names, Law, Policy & Regulation, Privacy, Whois

Categories: News and Updates

Spare a Thought for Venezuela

Thu, 2018-09-13 03:13

Please spare a thought for Venezuela. The 33rd largest country in the world, home to about 34 million people, holding the largest proven oil reserves and the cheapest gasoline in the world, and richer than Germany in 1950, this once-richest country in Latin America has fallen on times so hard that 75% of the population lost an average of 11 kg (24 pounds) in one year because of food scarcity. And you might ask: "Why should I care?"

Venezuela is located on the northern coast of South America and shares land borders with Brazil, Colombia, Guyana, as well as Trinidad and Tobago. Officially called the Bolivarian Republic of Venezuela, the country was colonized by Spain in 1522 but gained full independence about 300 years later and 188 years ago, in 1830. Venezuela is a charter member of many important international organizations, including the United Nations (UN), the Organization of American States (OAS), Mercosur (the South American trade bloc), and the Organization of Petroleum Exporting Countries (OPEC), to name a few.

Venezuela is the 10th largest exporter of oil in the world, and its economy is largely based on the petroleum sector, which accounted for over 50% of the gross domestic product (GDP) of the country in 2015. Thanks to a huge government subsidy program, Venezuela has the cheapest gasoline in the world, costing $0.11 per gallon ($0.03 per litre) in 2014, even after a 6,200% increase!

Give Me Two!

Venezuela's history, although rocky, has had its glorious moments, including periods of political plurality, as well as oil-fueled economic booms that attracted immigrants from near and far (including Europe) into the country. Venezuela has also had dark moments of political turmoil, dictatorships, and economic gloom, because of various domestic and external factors.

Venezuela enjoyed both a relatively stable democracy and significant economic growth between 1958 and the early 1980s. During this period, politics was dominated by two parties, and government revenues more than doubled after the 1973 Arab-Israeli war resulted in a huge rise in oil prices. Although the boom would soon turn to bust, it lasted long enough for Venezuelans to become known for the phrase "Ta barato, dame dos" ("It's cheap, give me two") as they bought up relatively cheap goods in their travels abroad.

In the early 1980s, the global oil glut brought down crude oil prices, and with them the Venezuelan economy. The country's foreign exchange reserves ran desperately low as government debt mounted, and in desperation the government devalued the currency by 100% on "Black Friday" (February 18, 1983). In addition, the government announced an IMF-backed economic reform program which increased fuel, transport and utilities prices, and lifted the price cap on some basic goods.

As a result, the country was plunged into economic and political crises, with a deadly wave of protests, the Caracazo riots, in which an estimated 300 to 3,000 people died. The prevailing conditions also resulted in two unsuccessful coup attempts (in February and November 1992) by leftist military officers, led by Hugo Rafael Chávez Frías, to overthrow the government. Although Chávez was imprisoned after the first coup attempt, he was pardoned in March 1994, enabling him to contest, and win, the 1998 presidential elections.

The Road to Hell

The road to hell, they say, is paved with good intentions. If anything, Venezuela's current tragic plight proves the adage, given the innocuous and well-intentioned start of the Chávez-led Bolivarian Revolution.

Following his victory in the 1998 elections, Chávez embarked on a reform campaign some have termed "institutional engineering" in a bid to consolidate his power and break the 40-year hold the two leading political parties had on Venezuelan politics. Chávez also implemented "democratic socialist" economic policies, including the redistribution of wealth, land reform, the creation of worker-owned cooperatives, and the attainment of food security and food sovereignty. To implement these policies, Chávez created social programs called Bolivarian Missions and Communes, and 100,000 state-owned cooperatives.

Venezuela suffered an economic downturn in early 2002, and this, coupled with anger about Chávez's war on the elites and his alignment with Fidel Castro's Cuba, led to an abortive coup against his government in April 2002. That same month, managers at the state petroleum company, Petróleos de Venezuela (PDVSA), went on a two-month strike over Chávez's nominations for new members of the company's board of directors. Chávez had long denounced PDVSA (parent company of Citgo, a US energy company) for its association with the US and business elites. After the collapse of the strike, Chávez fired 18,000 PDVSA employees and replaced them with 100,000 of his supporters, aka Chavistas.

The failed PDVSA management strike enabled Chávez to take control of the company's coffers to fund his social programs and support his political base. By 2011, some $500 million had been siphoned from the PDVSA pension fund to finance government-backed financiers. Such lavish spending inflicted the Dutch Disease on the country and wrecked its fiscal health, despite increasing oil exports and revenues.

Chávez's policies were initially helpful but unsustainable because of the drain on the country's finances. Poverty rates decreased from 48.6% in 1999 to 32.1% in 2013, Venezuela had one of the highest literacy rates in the region, and malnutrition fell from 21% in 1998 to 6% in 2009; yet price controls led to shortages of goods. Similarly, currency controls, introduced to curb capital flight, exacerbated matters by causing a shortage of foreign exchange, a decline in the value of the Bolivar, and rising inflation.

Although oil prices increased significantly later in 2002, bringing in huge revenues, Venezuela's economy began to collapse because of what someone called RIDDS: recession, inflation, declining foreign reserves, debt, and shortages. The huge increase in government spending during the 2012 campaign to re-elect Chávez ballooned public debt and the fiscal deficit. Similarly, the decision to replace trained PDVSA personnel with revolutionaries and deprive the company of capital decreased oil production, despite increasing proven oil reserves.

Between 2007 and 2013, inflation grew at an annual rate of 27.4% (at least five times the rate for Latin America), scarcity (especially of food, medicines and essential goods) set in, and foreign reserves fell at an alarming rate of $38 million per day. In short, all indicators pointed to trouble ahead for Venezuela when Chávez died in March 2013.

Enter Maduro

Following the death of Chávez, his Vice President and former Minister of Foreign Affairs, Nicolás Maduro Moros, became Interim President. Maduro went on to win the constitutionally mandated presidential elections in April 2013 and was inaugurated as President on April 19, 2013. A former bus driver who did not complete his high school education, Maduro made up for lacking the charisma of his predecessor and mentor by being autocratic and continuing Chávez's populist programs.

Things got worse in a hurry. Annual inflation increased from 29.4% in April 2013 to 61.5% a year later. Hyperinflation (i.e., a monthly inflation rate exceeding 50%) set in in 2016, when inflation reached an annual rate of 800% and the economy contracted by almost 19%. By September 2016, 15% of Venezuelans were eating discarded food picked from dumpsites.
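For readers converting between the monthly and annual figures quoted in this article, the standard compounding relation for a constant monthly inflation rate m is

    r_annual = (1 + m)^12 - 1

so the conventional 50% monthly hyperinflation threshold corresponds to (1.5)^12 - 1 ≈ 128.7, i.e. roughly 12,875% per year.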

In 2017, inflation reached 4,000%, Venezuelans lost an average of 11 kg (24 pounds) because of food shortages, and 90% of them lived in poverty. To make matters worse, the US government slapped sanctions on Maduro in a bid to prevent him from raising additional funds. Last month, Venezuela's annual inflation rate reached a staggering 200,000%, meaning that prices had increased by that much since August 2017. In addition, corruption and crime increased, and people took to the streets to protest against the difficult conditions they were enduring.

President Maduro tried every trick in the book to arrest the country's slide into chaos: increasing the minimum wage four times between April 2014 and July 2015, changing the method of calculating inflation, introducing a cryptocurrency, the Petro, and devaluing and re-denominating the currency by cutting five zeros from it. Nothing worked.

In exasperation, Venezuelans have been voting with their feet: an estimated 1.6 million have fled the country since 2015, according to the UN agency for migration. The majority of these migrants are professionals and members of the middle class, depriving the country of much-needed human capital. The mass emigration of Venezuelans has also strained neighboring countries (especially Colombia and Peru, which have taken in almost 1 million and over 400,000 Venezuelans, respectively) and provided graphic images to a global audience. About two weeks ago, a summit of 13 Latin American countries agreed to reduce restrictions on Venezuelan migrants and appealed for more international aid to enable them to deal with the inflow of Venezuelans, which Venezuela's President Maduro said was "normal."

Who Cares?

Venezuela's economic meltdown should be of concern to many, including those in the Internet infrastructure industry. With one chicken costing 14 million Bolivars (about $2.20) and banks out of cash, it is easy to imagine how difficult it would be to register or renew a domain name, or purchase a certificate from a certificate authority, not to mention buy hardware such as servers. Such challenges affect many people and issues, including online privacy, safety, and security, both within and outside Venezuela.

Venezuela should also be a cause for concern because it has been a major source of funding for developing countries, especially through development cooperation among countries of the global South (so-called South-South cooperation). Between 2000 and 2011, Venezuela spent $7.6 billion (in 2009 USD) on development finance in other countries. In addition, Venezuela initiated the Petrocaribe alliance, which provides oil on concessionary terms to its 17 member countries, many of them from CARICOM, the 15-member Caribbean economic community.

Despite the unimaginable difficulties they face, Venezuelans continue to make significant contributions to the maintenance of the global Internet infrastructure as well as the policy development processes that underpin it. The Venezuelan ccTLD, .VE, is an important part of the Internet infrastructure, along with copies of the F and L root servers. Earlier this year, the .VE ccTLD was reported to have gained an additional 48,000 domain name registrations compared to a year earlier. This increase, however, is probably due to the high-handed methods of the Venezuelan government, which banned the use of all non-.VE domains in the country.

In partnership with the Latin America and Caribbean Network Information Centre (LACNIC), the Internet Systems Consortium (ISC), and ICANN, Venezuela installed copies of the F and L root servers in 2006 and 2015, respectively, thereby helping to increase the resilience of the Internet infrastructure in the region. Venezuela's contribution to the global Internet infrastructure also includes the ALBA-1 fiber cable, which connects Venezuela, Cuba and Jamaica to the Internet. Other international submarine cable projects Venezuela is connected to include Americas-1 South, Americas-II, Arcos-1, Atlantica-1/GlobeNet, and the PAC.

Besides hardware and hard cash, Venezuela also contributes to the global Internet by participating in ICANN and similar regional and international organizations. Venezuela has been a member of ICANN's Governmental Advisory Committee (GAC) since 2014 and is a member of the ccNSO as well as ALAC and its regional structure in Latin America and the Caribbean, LACRALO. Venezuelans also contribute to the work of Internet-related regional organizations such as LACNOG and LACNIC, work in the DNS industry, and have an Internet Society (ISOC) chapter.

What Now?

In view of the difficult economic situation in Venezuela and the contributions of the country and its people to the global Internet, it is fair to contemplate what can and should be done to support them. Clearly, support from the international community should be coordinated and tailored to the missions of the various organizations or partners providing it.

Venezuelans themselves should take the lead in developing a support program or programs. In this regard, organizations such as ICANN and ISOC, along with international development partners (e.g., the World Bank and the ITU), should work with Venezuelans, as well as regional partners (e.g., LACNIC), to develop a support program that addresses current needs and plans for the recovery of the Venezuelan economy.

Although this might be easier said than done, given that the US is hell-bent on regime change in Venezuela, the old adage that where there's a will there's a way applies. ICANN, for example, can help in a number of ways, including travel support to enable Venezuelans to attend the organization's meetings and additional slots in ICANN's Fellowship and Future Leaders programs.

ICANN can also support the Venezuelan Internet infrastructure and DNS industry players, including ISPs, registrars, the .VE ccTLD registry, and operators of root server copies. ICANN, ISOC, and the DNS industry outside Venezuela can hire Venezuelans who are or have been active in these communities but were forced to leave their country because of the difficult circumstances prevailing there.

No Condition is Permanent

Venezuela's present predicament is a good reason to recall the Nigerian proverb: no condition is permanent! As tough as things are, they will get better. Just as the United States, which once paid tribute to Barbary (aka Ottoman) corsairs based in Tripoli, Libya, can now choose when and how to lord it over that country, Venezuela will one day rise from the ashes. This is a great time to leave a lasting legacy in the minds of Venezuelans, especially the younger generation, and all hands should be on deck to help them wade through their present difficulties. So, the next time you get in the mood to sing What a Wonderful World, please spare a thought for Venezuela.

Written by Katim S. Touray, International Development Consultant, and ICT for development advocate

Follow CircleID on Twitter

More under: ICANN, Internet Governance, Policy & Regulation, Regional Registries, Web

Categories: News and Updates

Basketball Australia Switches Its Official Website to New Top-Level Domain .Basketball

Wed, 2018-09-12 23:26

As part of a plan to unify Australian basketball's digital front, Basketball Australia is moving its official website to a new TLD-based domain name at www.australia.basketball. It is also switching its four national men's and women's teams to new domains at boomers.basketball, opals.basketball, gliders.basketball and rollers.basketball. The transition follows the world governing body's move to FIBA.basketball in late 2017. "The process to obtain the .basketball domain began in 2012 when FIBA partnered with Roar Domains, a US-based company focused on the commercialization of sports-related Top Level Domains, to submit its application," reports Australasian Leisure Management.

Follow CircleID on Twitter

More under: Domain Names, New TLDs

Categories: News and Updates

Frequency of DDoS Attacks Rises by 40% While Duration of Attacks Decreases

Wed, 2018-09-12 22:58

The frequency of DDoS attacks has risen by 40% year on year, while the duration of attacks has decreased, with 77% lasting ten minutes or less, according to a new report released by Corero Network Security. The report warns that one in five organizations will be targeted again within 24 hours. Other key highlights from the report: low-volume, sub-saturating attacks continue to dominate (94% under 5 Gbps); while still infrequent, attacks over 10 Gbps have doubled; and organizations faced an average of 8 attacks per day in Q2 2018, an increase of 40% compared to the same quarter in 2017.

Increase in DDoS attacks attributed to IoT botnets: In another report, also released today by security firm NexusGuard, the company warns that the increase in attacks and their sizes is the result of attackers amassing giant botnets using insecure IoT devices: "Attackers are using vulnerabilities in these devices to rapidly build large botnets ... at one point the Mirai Satori botnet was seen from over 280,000 IP addresses over a 12 hour period and the newer Anarchy botnet was able to amass over 18,000 routers in a single day."

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, Cybersecurity, DDoS, Internet of Things

Categories: News and Updates
