Domain industry news

Baltimore Gets Hacked: Main Computer Systems Crippled, Experts Estimate Months to Recover

11 hours 46 min ago

Director of the Mayor's Office of Information Technology, Frank Johnson, gives an update on the ransomware attack that took control of the city's computer system the day prior.

On May 7, hackers breached parts of the computer systems that run Baltimore's government, taking down essential systems such as voice mail, email, a parking-fines database, and the payment systems used for water bills, property taxes, real estate transactions and vehicle citations. The hackers have demanded a ransom of about $100,000 worth of bitcoin. As of today, more than two weeks after the breach, Baltimore City Mayor Jack Young says the city won't pay, and experts estimate the city will need months to recover. Authorities have identified the malware behind the attack as "RobbinHood," a relatively new ransomware variant, according to a May 7 report from The Baltimore Sun. There have been more than 20 cyberattacks against municipalities in 2019 alone, according to NPR.

More under: Cyberattack, Cybercrime, Cybersecurity, Malware

US Huawei Ban Threatens Internet Access in Rural Areas, Some Providers May Fold

14 hours 28 min ago

Much of rural America, with its very low population density, depends on small wireless carriers for internet access, as AT&T, T-Mobile and other large providers have no interest in serving these areas. Many of these small carriers can't afford equipment from suppliers such as Ericsson and Nokia Corporation and have to rely on cheaper equipment from Huawei Technologies Co. and other Chinese companies. The upcoming ban on U.S. telecommunications networks acquiring or using equipment from Chinese suppliers has left these carriers under a cloud of uncertainty, with some fearing bankruptcy. Rural broadband carriers could be forced to rip out and replace entire networks because they wouldn't be able to import spare parts or software updates to maintain their infrastructure, Roger Entner, a telecom analyst at Recon Analytics, told the LA Times. More from the LA Times:

Replacing the network would cost $5 million to $10 million, according to the president of Pine Belt Communications, a small telecommunications company in Alabama. "And downtime from installing new equipment would probably cause Pine Belt to forgo $1 million to $3 million in roaming fees, according to Federal Communications Commission filings."

Carri Bennet, general counsel for the Rural Wireless Association, a trade group for carriers with fewer than 100,000 subscribers, estimates that 25% of the association's members use Huawei or ZTE gear in their networks.

DDoS Storm Is Coming, Warn Researchers Noting an 84% Surge in the First Quarter of 2019

Wed, 2019-05-22 21:21

Dynamics of the number of DDoS attacks in Q1 2019 (Source: Kaspersky Lab)

The number of DDoS attacks during the first three months of 2019 increased by 84% compared with the previous quarter. The most noticeable growth was in attacks lasting more than an hour, according to a new report issued by Kaspersky Lab: "These incidents doubled in quantity, and their average length increased by 487%." The geographical distribution of targets closely mirrors the geographical distribution of attacks: the Top 3 were again China (59.85%), the US (21.28%), and Hong Kong (4.21%).

One theory behind the sudden surge: "Over the last six months of the previous year, we have been observing less the redistribution of botnet capacity for other purposes and more the emergence of a market vacuum. Most likely, the supply deficit was linked to the clamping down on DDoS attacks, the closure of sites selling related services, and the arrest of some major players over the past year. Now it seems the vacuum is being filled: such explosive growth in the indicators is almost certainly due to the appearance of new suppliers and clients of DDoS services."

Most dangerous day of the week: "Saturday was the most intensive day (accounting for 16.65% of attacks), with Friday in second place (15.39%). Sundays saw a relative lull — just 11.41% of attacks. Recall that in late 2018 Thursday had the largest share of DDoS attacks (15.74%), with Sunday again the most peaceful."

More under: Cyberattack, Cybersecurity, DDoS

Qualcomm’s Licensing Practices Are Illegal, U.S. Judge Rules

Wed, 2019-05-22 19:38

A U.S. district court judge has ruled that Qualcomm violated antitrust laws and has ordered the chipmaker to change some of its licensing and negotiation practices. The case, brought to court in 2017 by the US Federal Trade Commission, accuses Qualcomm of illegally suppressing competition in the market for smartphone chips by threatening to cut off supplies and extracting excessive licensing fees. Qualcomm's licensing practices "have strangled competition" for years, said U.S. District Judge Lucy Koh in San Jose, California, who issued the decision late Tuesday night. Qualcomm told reporters it will appeal the decision and seek a stay to stop it from taking effect. "We strongly disagree with the judge's conclusions, her interpretation of the facts and her application of the law," said Don Rosenberg, Qualcomm's general counsel.

More under: Law, Mobile Internet, Wireless

Back to the Future Part IV: The Price-Fixing Paradox of the DNS

Wed, 2019-05-22 15:22

Today's TechLash should focus on causes rather than effects.

GenX-ers may remember spending a summer afternoon at the movie theater watching the somewhat corny but beloved antics of Marty McFly and Doc as they used a souped-up DeLorean to travel the space-time continuum. In Back to the Future Part II, Doc and Marty travel into the future, where the bullying, boorish Biff causes a time-travel paradox when he steals the DeLorean and takes a joyride into the past to give his younger self a sports almanac containing the final scores of decades' worth of sporting events.

When Marty and Doc return to their present time, the younger Biff has used the almanac to amass a fortune from sports betting and, in this altered reality, he's rich and powerful, runs the town with an iron fist and has married Marty's mom after McFly père died under suspicious circumstances. Basically, it's a circa-1989 Hollywood vision of the dystopia that results when the space-time continuum is warped by interference, and what results looks suspiciously similar to the Las Vegas Strip.

This warped reality that Marty and Doc find when they come back from the future is not unlike where we find ourselves today with regard to Silicon Valley's Tech Titans, who have developed into absurdly wealthy behemoths and sparked a backlash with boorish, if not outright bullying, behavior. They have managed to get a stranglehold on the global economy, and their core product decisions now determine the extent to which humanity enjoys fundamental rights such as privacy. For many, the experience of being "on the Internet" happens entirely through these companies' products.

However, this scenario was not inevitable; it is the consequence of a decision made just a few years after Back to the Future Part II was on the silver screen.

During the 1990s, the Clinton Administration was busily engaged in building an information superhighway. Part of this effort was the privatization of the addressing system of the Internet and the outsourcing of the Domain Name System's first domain name registries: .com, .net, .org, .gov, .mil, and .edu.

At some point, the U.S. Government decided domain name registries would offer domain names for annual registration with uniform and non-discriminatory pricing. Interestingly, neither the National Telecommunications and Information Administration nor the National Science Foundation (which had jurisdiction over the DNS prior to it being handed to NTIA) has been able, or willing, to produce the original Cooperative Agreement between Network Solutions and the NSF, making any assessment of the decisions taken during this time period incomplete. But it is plausible that the U.S. Government desired budget predictability for .gov domain name registrations due to annual Congressional appropriations and, for purposes of contract simplicity, replicated this approach across all of the initial registries.

Whatever the motive, this approach would prove to be fateful. Substituting convenience for common sense, the government strayed from free-market principles and enshrined a price-fixing scheme into the root zone of the Internet that disconnected the price of acquiring and maintaining a domain name from the actual costs of resolving DNS queries associated with that domain name. Instead, every registrant paid the same registration price regardless of the volume of resolutions performed for a specific domain name. In practice, this meant, and still means, that the annual wholesale registration price for google.com is the same as for a parked domain.

This had a few practical effects.

First, it insulated high-volume, web-based businesses from the actual costs of acquiring their customers, since companies like Google, Facebook, and Amazon weren't paying an annual registration fee that accounted for the resolution services consumed by their wildly popular domain names, which were generating millions and billions of DNS queries a day as people typed google.com into their browser bars. An illustrative parallel: it is as if a retailer were excused from paying rent, electricity and the other costs of maintaining a brick-and-mortar storefront, and instead had to pay only for an annual magazine subscription to Sports Illustrated for the lobby in order to enjoy full use of the leased space and have it available for massive numbers of customers to visit.

These platforms avoided the actual costs associated with customer acquisition because they benefitted from a DNS subsidized by the total volume of domain names. Some readers may be scratching their heads and thinking that this disconnect between cost and resource usage is less than ideal, but it resulted in a massive explosion of technological innovation, wealth creation and human progress. And even if these companies had been on the hook for the true costs of their resolution services, the amount wouldn't have been material enough to slay the Tech Titans. So what's the big deal?

Well, Google and the Tech Titans weren't gestating in a vacuum. This uniform, predictable, non-volatile pricing model was attractive to another segment of the population, who began registering large volumes of domain names for speculative purposes. In the wildcatting early days it was a land rush, and not every speculator had scruples. The less scrupulous domainers, as domain investors came to be known, snapped up domain names similar or identical to trademarks, company names, slogans, service marks, and all manner of intellectual property, and then ever-so-helpfully offered them for sale to the individuals or companies which held the rights, at an extortionate premium.

Thus the world was introduced to a new take on an old crime: cybersquatting. Corporations and individuals began proactively acquiring huge volumes of domain names, a practice called defensive registration, in the hopes of securing their intellectual property and protecting their brand identity without having to engage in costly enforcement actions that were, especially during the infancy and pre-teen years of the privatized Internet, of questionable efficacy.

Not all domain investors acted in bad faith, however. Many had accrued substantial portfolios of domain names, particularly in .com, and were realizing that maximizing the return on their investment in good faith, i.e., refraining from extorting exorbitant sums by selling companies back their own intellectual property, would require a strategic approach to managing their portfolios of long-term domain name investments.

The initial focus on "hunting" in cyberspace made way for "gathering," or monetizing domain name portfolios, primarily through (initially) lucrative pay-per-click advertising. PPC ads would generate more wealth than most people ever imagined, both for the domain name investors who monetized their portfolios and, especially, for the companies that controlled the market for online advertising. With hindsight, which is always 20/20, it's easy to see that this would inevitably metastasize into exactly what it did: a corrupt, self-perpetuating system.

Remember what pay-per-click ads were like in the late 1990s and early 2000s? They were those chintzy little text-only ads that were plastered on a website and usually indicated that whoever was driving the mouse had taken the wrong exit ramp off the information superhighway. They were especially prevalent on websites with domain names that were remarkably similar to that of the intended destination website, but nothing ever looked quite right, the content was a little garish, two-sizes-too-big, and practically leaped off of the page while imploring you to CLICK HERE!

A lot of people CLICKED.

Efforts to diversify, with censorious software, autonomous vehicles, broadband-bearing balloons, and probably a prototype or two of the T-9000, remain, effectively, a sideshow to the main act, and Google still makes a massive amount of money off of those ads every year.

But suppose that, instead, domain name registrations had been based on actual resolution traffic in the DNS, so that annual registration prices were calculated to reflect the actual queries made as people sought, or were driven to, websites by clicking or typing a domain name into the browser's address bar. It's not just the Tech Titans that would have developed under a vastly different set of economics:

Is it all that difficult to imagine that domain investors might not have found it so economically rational to pursue PPC revenues across large portfolios of domain names if, instead of a regulated uniform annual registration price, they had faced pricing that varied by domain name and was based upon usage of DNS resolution services, i.e., the number of times that a domain name must be looked up by the network to ascertain the corresponding machine-readable numerical IP address?
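
To make the alternative concrete, here is a toy calculation contrasting the two pricing models. Every figure (the flat fee, the per-query rate, the query volumes) is an invented assumption for illustration and reflects no actual registry economics.

    # Toy comparison of flat-fee vs. resolution-based domain pricing.
    # Every number here is an invented assumption for illustration.

    FLAT_ANNUAL_FEE = 8.97             # assumed uniform wholesale price, USD/year
    PRICE_PER_MILLION_QUERIES = 0.50   # assumed usage-based rate, USD

    portfolios = {
        "high-traffic site": 5_000_000_000,  # assumed DNS queries per year
        "typical small site": 2_000_000,
        "parked domain": 50_000,
    }

    for name, queries in portfolios.items():
        usage_fee = queries / 1_000_000 * PRICE_PER_MILLION_QUERIES
        print(f"{name:>18}: flat ${FLAT_ANNUAL_FEE:.2f} vs. usage ${usage_fee:,.2f}")

Under the flat model all three registrants pay the same; under the usage model the parked domain pays pennies while the high-traffic site pays thousands, which is precisely the realignment of price and cost the question above is asking about.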

If there hadn't been a pool of domain investors motivated to monetize long-term assets for Google to tap into, might it have chosen a different direction for revenue generation, one that didn't rely so heavily on developing and driving users toward freeware "honeypots" in order to shear the herd's personal information and package it into ever-more complex, clever and invasive fleeces to be sold to the highest bidder?

If you're old enough to remember the 1990s and early 2000s, think back to the early days when the Tech Titan was Big Blue owning the mainframe, or Microsoft, the Evil Empire, sparking outrage and protests because a .doc wouldn't open in Lotus or WordPerfect.

Remember Back to the Future Part II, and our heroes Marty McFly and Doc, who returned home to a vastly different reality because one bully pwned the sports book with a literal cheat sheet.

Is it really that difficult to consider that our dystopia today stems partly from the adoption of a false economic model in which price bears no material relation to actual cost and which, by generating wealth at the expense of value, encouraged all sorts of undesirable downstream behavior?

Anybody got a DeLorean?

Written by Greg Thomas, Managing Director of The Viking Group LLC

More under: Domain Management, DNS, Domain Names, Intellectual Property, Internet Governance, Policy & Regulation, Registry Services

Microsoft Sees Serious Appetite for Revised Privacy Laws in US, Says It's Time to Match EU's GDPR

Tue, 2019-05-21 18:52

With the first anniversary of the European Union's General Data Protection Regulation (GDPR) approaching in just a few days, Microsoft's Corporate Vice President and Deputy General Counsel, Julie Brill, says GDPR has been an important catalyst for progress in privacy protection around the world. Since GDPR took effect, she tweets: "Over 18 million people have used the Microsoft privacy dashboard to control their data… including 6.7 million users from the US — the most of any country. Does this show an appetite among Americans for updated privacy laws? Yes!" In an accompanying post on Monday she notes:

"A lot has happened on the global privacy front since GDPR went into force. Overall, companies that collect and process personal information for people living in the EU have adapted, putting new systems and processes in place to ensure that individuals understand what data is collected about them and can correct it if it is inaccurate and delete it or move it somewhere else if they choose.

This has improved how companies handle their customers’ personal data. And it has inspired a global movement that has seen countries around the world adopt new privacy laws that are modeled on GDPR. Brazil, China, India, Japan, South Korea and Thailand are among the nations that have passed new laws, proposed new legislation, or are considering changes to existing laws that will bring their privacy regulations into closer alignment with GDPR.

... Now it is time for Congress to take inspiration from the rest of the world and enact federal legislation that extends the privacy protections in GDPR to citizens in the United States."

More under: Internet Governance, Law, Policy & Regulation, Privacy

Trump Orders Cyberattacks by US Companies

Tue, 2019-05-21 17:42

It is supremely ironic. A rogue national leader, with the stroke of a pen, dictates that his country's companies will expose a foreign company's end users to cyberattacks. This is the net effect of denying security patches or operating system updates pursuant to Trump's order. In the US Great Rogue Leader's bizarro world, this is the very behavior that he claims makes his actions necessary. In fact, this Trump malware attack is worse because of the mass exposure to exploits.

The actions here deserve strong rebuke by other nations, industry associations, and companies pursuing the 5G market. In the 5G ecosystem, the most significant developments are software-based, and Trump has dealt a devastating blow to the ability of US companies to pursue those global markets. Indeed, the behavior is so patently harmful that it may well represent a kind of retribution against Silicon Valley.

What Trump's actions are likely to induce is increased Balkanization and insulation measures worldwide against his unlawful behavior, in which international accords mean nothing. He is today unchecked, restrained by neither the US Constitutional System nor public international law. Global collaboration has been replaced by aberrant bilateral bullying, and US commitments mean nothing.

Trump's Loser Touch for 5G

Given the profound ignorance of the Trump Administration on technical matters and its failure to engage in global collaborative activity, the US was unlikely to get a boost in the 5G marketplace anyway. What was perhaps unexpected by US vendors was getting their marketplaces trashed.

The unfortunate result of this behavior is that few if any nations will trust the US in the future. Most countries will demand that US companies create fully separated subsidiaries and facilities for their national markets. Countries will seek to protect themselves against Trump-directed malware or denial-of-service attacks, where, on a whim, he launches adverse cyber actions. The counter-actions include developing alternative sources of code, data centres, network resolver and discovery services and other support capabilities, which will have the long-term effect of freezing US vendors out of large markets worldwide. Layered on top of the loss of the largest US VLSI component supplier markets and likely retributions in product markets, it is all part of Trump's disastrous 5G loser touch.

The author is reflecting on concerns raised at last week's international 5G security meetings in France, enhanced with additional perspective from a remote village in Provence.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC

More under: Internet Governance, Mobile Internet, Policy & Regulation, Telecom, Wireless

Threat Intelligence Platform in Action: Investigating Important Use Cases

Tue, 2019-05-21 15:02

As technology gets more and more sophisticated, tech-savvy cybercriminals are having a field day devising increasingly ingenious ways to steal confidential data from ill-prepared targets.

What this means is that an equally sophisticated cybersecurity response is needed to keep attackers at bay. That involves re-examining reactive cybersecurity practices and adopting a proactive approach: actively searching for risks and vulnerabilities with the help of threat intelligence (TI).

However, it's crucial to remember that the efficient deployment of threat intelligence tools requires a proper understanding of their capabilities. And the best way to learn about them is to examine the variety of TI's use cases. We've already covered this topic in our post 5 More Examples of Threat Intelligence Platform Use Cases. In this article, we're going to take a closer look at some of them.

Use Case 1: Catching Phishers

Cyber threats sometimes emerge from familiar sources. Just recently, in February 2019, a Payoneer user (and probably hundreds of others) was surprised to receive emails from this digital payment service notifying him of unexpected payments in his favor and prompting him to log in on pages strikingly similar to payoneer.com.

Investigators can subject such messages to thorough threat intelligence analysis that scrutinizes different parameters. In our case, the evidence pointed to a phishing attack. Important red flags discovered in the WHOIS record and SSL configuration feeds of the TI report included the use of newly registered domains with hidden owner contact details and recently acquired SSL certificates, suggesting that the websites were created specifically for this attack.
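
As a rough illustration of the first red flag, a script along these lines could check how recently a domain was registered. This is a minimal sketch assuming the third-party python-whois package; a real TI platform would draw on far richer WHOIS feeds.

    # Sketch: flag newly registered domains, one of the red flags above.
    # Assumes the third-party python-whois package (pip install python-whois).
    from datetime import datetime, timedelta

    import whois

    def looks_newly_registered(domain: str, max_age_days: int = 90) -> bool:
        """Return True if the domain was created within max_age_days."""
        record = whois.whois(domain)
        created = record.creation_date
        if isinstance(created, list):   # some registries return several dates
            created = min(created)
        if created is None:             # hidden/missing data is itself suspect
            return True
        return datetime.now() - created < timedelta(days=max_age_days)

    print(looks_newly_registered("example.com"))  # long-established, so False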

Use Case 2: Defusing Malware

In the beginning, computer software was created to help businesses do their work faster and better. But then came the hackers who created malware and used it notably to steal sensitive data, delete confidential files, and even shut down company operations. How do you stop it?

A threat intelligence platform can disarm malware attacks by conducting a domain malware check, which runs a suspicious domain through multiple security databases to verify whether any of them consider it dangerous. Target websites can also be scanned for potentially dangerous .exe or .apk files capable of running malicious code.
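
A minimal sketch of such a multi-database check might query public DNS blocklists, as below. The zones named are real public blocklists, but their coverage and terms of use vary, so treat this as illustrative only; it assumes the third-party dnspython package.

    # Sketch: run a suspicious domain through several DNS-based blocklists.
    import dns.resolver  # pip install dnspython

    BLOCKLISTS = ["dbl.spamhaus.org", "multi.surbl.org"]

    def blocklist_hits(domain: str) -> list[str]:
        hits = []
        for zone in BLOCKLISTS:
            try:
                # A successful answer means the domain is listed in this zone.
                dns.resolver.resolve(f"{domain}.{zone}", "A")
                hits.append(zone)
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                pass  # not listed in this database
        return hits

    print(blocklist_hits("example.com") or "no hits")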

Use Case 3: Exposing Social Hacking

We've all heard of corporate websites being hacked, but there's a new phenomenon called social hacking where perpetrators aim to cause damage to the reputation of their targets. To achieve that, they troll social media accounts, post negative messages, or pretend to be the company's representatives to scam people.

A threat intelligence platform can help prevent social hacking by analyzing data feeds from WHOIS and malware databases to spot fake social media profiles, and by allowing deep examination of the links that hackers tempt netizens to click or download from, since these may lead to malware and viruses.

Use Case 4: Unmasking Impostors

How many times have employees been tricked into releasing huge company funds by somebody assuming a fake identity? Many times, apparently, since damage from business-email compromise (BEC) scams reached $12.5 billion last year, according to the FBI. How can you put a stop to this threat that could bring your company to its knees?

A threat intelligence platform can unmask impostors by examining their domain history. Warnings can be raised, for instance, if the target under investigation has changed domain ownership multiple times within a short period. Another technique is to verify the validity of its SSL certificates, paying particular attention to recently acquired certificates, which often indicate a malicious entity preparing for an attack.
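
The certificate-age check can be sketched with nothing but the Python standard library; the 30-day threshold below is an arbitrary assumption, not an industry rule.

    # Sketch: how old is a site's TLS certificate? Standard library only.
    import socket
    import ssl
    from datetime import datetime, timezone

    def cert_age_days(hostname: str, port: int = 443) -> int:
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        # notBefore looks like "May 21 00:00:00 2019 GMT"
        issued = datetime.strptime(cert["notBefore"], "%b %d %H:%M:%S %Y %Z")
        return (datetime.now(timezone.utc)
                - issued.replace(tzinfo=timezone.utc)).days

    age = cert_age_days("example.com")
    print(f"certificate issued {age} days ago"
          + (" -- red flag" if age < 30 else ""))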

* * *

The threat landscape is getting increasingly dangerous, and it demands a proactive defensive response. Deploying threat intelligence makes it possible by putting the most essential cybersecurity measures at your disposal.

Written by Jonathan Zhang, Founder and CEO of Threat Intelligence Platform

More under: Cybersecurity

NGOs, Academics Warn Against EU’s Deep Packet Inspection Problem, at Least 186 ISPs Breaking Rules

Tue, 2019-05-21 05:01

European Digital Rights (EDRi), along with 45 NGOs, academics and companies from 15 countries, sent an open letter to European policymakers and regulators on Wednesday warning against the widespread use of Deep Packet Inspection (DPI) technology by Internet service providers in the EU. Despite net neutrality regulation being in effect, EU ISPs are using DPI technology to examine the content of users' communications for traffic management and for differentiated pricing of specific applications or services.

DPI deployment in large scale: "[W]ith the proliferation of zero-rating in all but two European countries, the industry has started to deploy DPI equipment on a large scale in order to charge certain data packages differently or to throttle services and cram more internet subscribers in a network already running over capacity." (EDRi)

Watering down the rules: "Europe's current net neutrality rules indeed ban DPI technology that examines specific user information for the purpose of treating traffic differently," says Jan Penfrat, EDRi's Senior Policy Advisor. A mapping of zero-rating offers in Europe conducted by EDRi member Epicenter.works has identified 186 telecom services potentially using DPI technology. "Most regulators have so far turned a blind eye on these net neutrality violations. Instead of fulfilling their enforcement duties, they seem to now aim at watering down the rules that prohibit DPI."

More under: Access Providers, Net Neutrality, Policy & Regulation, Privacy, Telecom

ICANN Says Amazon Inc's Application for .AMAZON TLD Can Proceed Following 30 Days of Public Comment

Tue, 2019-05-21 04:25

The giant online retailer Amazon Inc is one step away from winning the .AMAZON top-level domain name after a 7-year battle with eight Latin American countries. The Internet Corporation for Assigned Names and Numbers (ICANN) on Monday concluded that there is no public policy reason for the .AMAZON applications not to proceed in the New gTLD Program. It has given the application a final 30-day period of public comment before moving ahead. The conclusion was reached after the Amazon basin countries (Brazil, Bolivia, Peru, Ecuador, Colombia, Venezuela, Guyana and Suriname) failed to reach an agreement with the company.

From ICANN May 15th resolution: "the Board finds the Amazon corporation proposal of 17 April 2019 acceptable, and therefore directs the ICANN org President and CEO, or his designee(s), to continue processing of the .AMAZON applications according to the policies and procedures of the New gTLD Program. This includes the publication of the Public Interest Commitments (PICs), as proposed by the Amazon corporation, for a 30-day public comment period, as per the established procedures of the New gTLD program."

Rights undermined: The Brazilian Foreign Ministry said it feared the ICANN decision did not sufficiently take into account the interests of the South American governments involved and undermined the rights of sovereign states.

More under: ICANN, Internet Governance, New TLDs

Broadband and Food Safety

Mon, 2019-05-20 18:00

I recently saw a presentation that showed how food safety is starting to rely on good rural broadband. I've already witnessed many other ways that farmers use broadband like precision farming, herd monitoring, and drone surveillance, but food safety was a new concept for me.

The presentation centered on the romaine lettuce scare of a few months ago. The food industry was unable to quickly identify the source of the contaminated produce, and the result was a nationwide recall of all romaine. It turns out the problem came from one farm in California with E. coli contamination, but farmers everywhere paid a steep price as all romaine was yanked from store shelves and restaurants, with upcoming orders canceled as well.

Parts of the food industry have already implemented the needed solution. You might have noticed that the meat industry is usually able to identify the source of problems relatively quickly and can usually track them back to an individual rancher or packing house. Cattle farmers are probably the most advanced at tracking the history of herd animals, but all meat producers track products to some extent.

The ideal solution to the romaine lettuce problem is to document every step of the farming process and to make that information available to retailers and eventually to consumers. In the case of romaine, that might mean tracking and recording the basic facts of each crop at each farm: the strain of seeds used, the kinds of fertilizer and insecticide applied to a given field, and the date the romaine was picked. The packing and shipping process would then be tracked so that everything from the tracking number on the box or crate to the dates and identity of every intermediate shipper between farm and grocery store would be recorded.
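
A record of that kind might look something like the following sketch. The field names and structure are entirely hypothetical; as noted below, an actual format would have to come out of federal standardization.

    # Hypothetical per-crop provenance record; field names are illustrative,
    # not an existing federal standard.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class CropLot:
        farm_id: str
        field_id: str
        crop: str
        seed_strain: str
        fertilizers: list[str]
        insecticides: list[str]
        harvest_date: date
        crate_ids: list[str]
        # each shipping event: (date, handler), from farm to grocery store
        shipping_events: list[tuple[date, str]] = field(default_factory=list)

    lot = CropLot(
        farm_id="CA-0042", field_id="F7", crop="romaine",
        seed_strain="Green Towers", fertilizers=["compost"], insecticides=[],
        harvest_date=date(2019, 4, 12), crate_ids=["CR-9912"],
    )
    lot.shipping_events.append((date(2019, 4, 13), "cold-chain truck, Salinas"))
    print(lot.crop, lot.harvest_date, len(lot.shipping_events), "shipping event(s)")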

Initially, this would be used to avoid large blanket recalls like the one that hit romaine. Ultimately, this kind of information could be made available to consumers. We could wave a smartphone at produce and find out where it was grown, when it was picked and how long it's been sitting in the store. There are a whole lot of steps that have to happen before the industry can reach that ultimate goal.

The process needs to start with rural broadband. The farmer needs to be able to log the needed information in the field. The day may come when robots can automatically log everything about the growing process, and that will require even more intensive and powerful broadband. The farmer today needs an easy data entry system that allows data to be scanned into the cloud as they work during the growing, harvesting, and packing process.

There also needs to be some sort of federal standard so that every farmer collects the same data, in a format that can be used by every grocery store and restaurant. There is certainly a big opportunity for any company that can develop the scanners and software involved in such a system.

In many places, this can probably be handled with robust cellular data service that extends into the fields. However, a lot of rural America doesn't have decent, or even any, cell service out in the fields. Any farm tracking system is also going to need adequate broadband to upload data into the cloud. Farms with good broadband are going to have a big advantage over those without. We already know this is true today for cattle and dairy farming, where detailed records are kept on each animal. I've talked to farmers who have to drive somewhere every day just to find a place to upload their data into the cloud.

In the many counties where I work today, farmers are among those leading the charge for better broadband. If selling produce or animals requires broadband, we are going to see farmers move from impatience to insistence when lack of connectivity means loss of profits.

I know as a consumer that I would feel better knowing more about the produce I buy. I'd love to buy more produce that was grown locally or regionally, but it's often nearly impossible to identify in the store. I'd feel a lot safer knowing that the batch of food I'm buying has been tracked and certified as safe. Just in the last year there have been recalls on things like romaine, avocados, spring onions, and packaged greens mixes. I don't understand why every politician who serves a farming district isn't screaming loudly for a national solution for rural broadband.

Written by Doug Dawson, President at CCG Consulting

More under: Broadband

A Closer Look at the "Sovereign Runet" Law

Fri, 2019-05-17 21:08

In December 2018, a bill on the "stable operation" of the Russian segment of the Internet was introduced; it acquired the title "Sovereign Runet" in the mass media and among the public. It was adopted 5 months later, despite doubts about the technical feasibility of its implementation. The law is very ambitious in its intent to simultaneously control Internet traffic and protect Runet from certain external threats, but legislators still have no idea how it will actually work.

This is not the first attempt by Russian legislators to take control of the Internet within the state's borders. A previous bill was initiated by the Ministry of Communications (MoC) in 2014. It proposed to describe the elements of the critical information infrastructure of the Runet and to establish control over traffic exchange points and cross-border communication lines. The main element of that first bill was the creation of a state information system containing a copy of the databases of traffic exchange points, autonomous system numbers (ASN), IP address allocations and routing policies. This state information system was to be used by Russian telecom operators when routing national traffic. But such a "national Internet" just means making a copy of the existing RIPE NCC databases, which makes no technical sense because the data requires constant updates to keep the routing information current. (See my recent paper "Sovereign RUnet: What does it mean?")

The discussion of the 2014 bill continued for 2 years, and many amendments were made to it. The latest activity was observed in January 2018, when the press referred to new edits that took into account the opinion of the telecom industry. Ultimately, a kind of compromise was reached, but the bill was never submitted to the State Duma for debate and approval. Instead, a new bill was introduced in December 2018 by two senators and one deputy, none of whom is directly connected to Internet infrastructure issues. Obviously, this route was chosen to launch consideration of the bill in the State Duma as soon as possible and to avoid additional coordination with other relevant ministries and the security service, as had happened with the MoC bill.

According to anonymous sources (former MoC employees), the main interested party in the adoption of both bills is the Security Council. In 2014, after the start of anti-Russian sanctions and problems with the operation of Internet services in Crimea, the main task was to ensure the stability and security of the Russian segment of the Internet. Other interlocutors recalled 2006-2007, when people in the Security Council and the Administration of the President were preoccupied with the likelihood of an external Internet shutdown. They took seriously the prospect that the U.S. could unilaterally disable Russia's DNS. That is why Russia had been consistently taking initiatives to transfer ICANN's functions to the International Telecommunication Union (ITU), and it still continues to criticize ICANN for being a US-based corporation.

Another concern was the circulation of Russian Internet traffic. Some high-ranking officials believed that a lot of Russian traffic loops through foreign networks. This did actually happen in the early 2000s, because of the low cost of such routes and competition between ISPs. But people from the Administration, inspired by several ideologues from Roskomnadzor (RKN, the communications supervisory agency), exploited this story: loop traffic is unacceptable because foreign intelligence can spy on our traffic, or snatch it and replace it with something else. Exactly the same reasons were heard from the deputies and senators advocating for the new bill in 2019, as will be shown below.

Another interested party is RKN itself, since this supervisory agency received very broad powers to block prohibited Internet resources in 2012. In particular, the blocking system built by RKN created DNS vulnerabilities that are regularly exploited. Finally, RKN's failure to block the Telegram messenger was a reputational blow for the agency. As part of RKN's attempts to execute the law, entire subnets of IP addresses were blocked on peak days in April 2018, reaching 18 million records in the blacklist. This negatively affected the work of many third-party services and Internet businesses. So RKN's interest in a new law empowering it to control and filter all traffic is obvious.

What's in the adopted law?

On May 1, 2019, the new law was signed by President Putin. Only 5 months passed between the bill's introduction and its signing, and only 6 more months remain until its entry into force on November 1, 2019. Amazing speed! The content and focus of the law, after all the debates, is not very different from the first December draft, except for several additions. Basically, the document contains amendments to two existing laws, "on Communications" and "on Information"; these are summarized and commented upon in this document.

In brief, the law sets the following:

  • The main subjects responsible for the stable operation of the Internet in Russia are telecom operators and the owners and/or proprietors of: (1) technical communication networks (used for the operations of transport, energy and other infrastructures, and not connected to the public communication network), (2) traffic exchange points, (3) communication lines crossing the state border, and (4) autonomous system numbers (ASN). RKN will keep registries for the last three categories. All subjects must participate in regular exercises for a stable Runet.
  • RKN will execute the centralized management of communication networks in the event of threats to the stability and security of the Runet, by defining routing policies for telecom operators and other subjects and coordinating their connections.
  • Telecom operators are required to ensure the installation in their networks of technical means for countering threats to the stability, security and integrity of Internet operation on the territory of Russia. These technical means will also serve the purpose of traffic filtering and blocking access to prohibited Internet resources.
  • The law creates a Center for monitoring and control of public communication networks under the RKN supervision.
  • The law creates a national domain name system.

The debate over the law

Based on the statements of deputies and senators during the readings of the bill (3 in the State Duma and 1 in the Federation Council), the motivation for its adoption can be summarized in several points. The main motive is that the law is a response to the latest US cybersecurity strategy, in which Russian lawmakers saw a direct threat to Russian networks in the stated intent to use offensive capabilities to protect US networks and interests in cyberspace. The speed of the law's adoption was justified by its critical importance for the implementation of the national program "Digital Economy," which depends heavily on the Internet.

"Obviously, it is necessary to protect the digital lifestyle of Russians; in this regard, it is necessary to ensure the stability of the main services of Runet and the reliability of Russian Internet resources, and this requires a national infrastructure that can protect Runet in the event of a threat of blocking the connection to the root servers placed abroad." — Ms. Arshinova, Deputy from the United Russia party.

The co-author of the law, Mr. Lugovoy, Deputy from the Liberal-Democratic Party of Russia, frightened his colleagues with the controversial case of the Internet shutdown in Syria in November 2012, which he attributed to special operations of the US National Security Agency. Another argument for adopting the law was the analogy with the sanctions imposed by international payment systems in Crimea in 2014, when Russia had to develop its own national payment system, "МИР," to avoid financial collapse. And finally, some deputies still believe that foreign loop traffic must be "reduced significantly" according to the Digital Economy program.

"The bill has already been called the law on autonomous, sovereign Runet, but if you look closely at the proposed changes, there is no separation of Runet or turning it into a closed system that does not communicate with the global Internet. The bill is not aimed at isolation at all — it is about ensuring the smooth functioning of our economy and other spheres of society, and most importantly, protecting the rights of Russian citizens who adhere to the digital lifestyle” — Ms. Arshinova, Deputy from the United Russia party.

The other co-author of the law, Senator Klishas, claimed that Russia could technically be disconnected from the Internet root servers. But he didn't take into account that the governance of critical Internet infrastructure requires trust and cooperation among all involved stakeholders. To say that American companies (namely ICANN and Verisign) could immediately "cut out" records of Russian domains on the order of the US government is a major misconception. If ICANN set such a precedent, the credibility of the organization would be lost forever, and that threatens the resilience of the Internet as a whole, since there would no longer be an authoritative center for the coordination of the domain name space. There could be a rollback to the 80s and 90s, when various large regional networks coexisted. In terms of American interests, this is the last thing the US government wants to do, because it directly contradicts its policy of globalization and the spread of the Internet around the globe.

Nevertheless, representatives from the opposition parties asked tricky questions and conveyed society's concerns about the law's real censorship purpose. Firstly, they demanded that the bill's advocates name the threats from which the law is supposed to protect the Runet. The law should reflect all these threats, they argued, because these directly relate to the constitutional right of citizens to access reliable information.

"The list of threats, as the authors tell us, they will determine during the exercises — wow! Imagine, colleagues, if we were to report our bills in the following way: we do not know what will happen, we will say after the experiment, so you first pass the law, and then we will conduct exercises. Will you conduct exercises on people? You can't do that, colleagues” — Mr. Nilov, deputy form the Just Russia party.

Another point of critique was the absence of responsibility for network crashes that may happen under centralized management by RKN. The law removes responsibility from operators but does not transfer it to anyone else. Operators can only ask RKN about anomalies in their networks; that is all.

"Whatever this bill may be called, its main purpose is to control the cross-border information flows. What for? In order to restrict this very information, the flow of this very information — there can be no doubts or illusions. They say, all this is done exclusively for the public good — for the good it would be enough to duplicate domain infrastructure, it could be carried out even without making appropriate changes to the law, it could be done at the level of Roskomnadzor or the Ministry of Communications. So, the bill is extremely restrictive, and it is also an attempt to force the execution of those laws which we adopted earlier” — Mr. Kurinnyi, deputy from the Communist party of Russia.

By the last sentence, the deputy was alluding to the complete failure of RKN to block the Telegram messenger, as well as to compel foreign companies like Twitter and Facebook to localize the personal data of Russian citizens.

"Now we are asked to adopt in the first reading the draft law on the protection of "something from something". And where are the guarantees that the next step, which will determine the Government, will not be the transformation of the currently public Internet into such a corporate intranet, limited by the borders of the Russian Federation?" — Mr. Yushchenko, deputy from the Communist party of Russia.

Other deputies drew attention to the creation of a single point of failure for the Runet: the Center for monitoring and control of public communication networks. If there is a single control center, it is easy to break it and disrupt the whole Runet at once. Finally, deputies were angry about the budget issue. Initially, the financial justification of the bill claimed that "adoption and implementation of the Federal Law will not require expenditures from the federal budget." But then it became known that the money had already been allocated in the budget of the national program Digital Economy: 20.8 billion rubles to purchase the equipment to counter threats, 4.5 billion rubles for the national DNS and 5.5 billion rubles to develop the necessary hardware and software.

"You know, colleagues, I have not seen such a brazen and cynical bill, which you push forward, saying that it won't require even a ruble from the budget. We have a government like Nostradamus: the government, adopting the draft budget last year, already assumed that three cranks (two from the Federation Council and one from the State Duma) will introduce this year this bill, and has already saved some money for it!" — Mr. Ivanov, deputy from the Liberal-Democratic Party of Russia.

Even before the first reading in the State Duma in February, the measures in the bill were greeted negatively by the technical community, while the broader IT industry took an ambiguous position, supporting but slightly criticizing the bill. Only one expert meeting is known to have taken place, organized in January by the State Duma Committee on information policy, information technologies and communications. It gathered representatives from IT business and telecom, public organizations and authorities, and some transcripts of the conversations were leaked to social media. Of the 33 speakers, 13 were clearly against the bill or had serious objections to it: the "Big 3" telecom operators MTS, VimpelCom, and MegaFon (with Rostelecom predictably supporting the bill), the Association of Computer and IT Enterprises (which represents participants in the digital economy in Russia), the Association of Documentary Telecommunication (which in 2017 conducted a study of loopback traffic in Russia and proved its share to be insignificant), the Technical Center of Internet, the Coordination Center for TLD .RU, the Russian Association of Electronic Communications, and the regional public organization "Center of Internet-technologies."

Industry was concerned with these issues:

  • The "black boxes" — the technical means to counter threats provided to telecom operators by RKN — will dramatically affect the quality of communication. It is obvious from the law because operators are even immunized from responsibility for future network crashes. Also, the law does not cover the cost of their installation and maintenance, nor take into consideration the development and growth of networks — operators will have to spend billions of rubles on that, which will slow down their development and growth.
  • Legislators mixed up technical and content-based threats. It is impossible to solve both problems with one "black box."
  • The issue of duplicating critical elements of the Internet infrastructure and the domain name system had already been agreed with the industry last year. Several representatives of the telecom industry recalled the bill mentioned at the beginning of this post. They were curious why legislators decided not to push the adoption of the previous bill, on which there was a consensus with industry, but instead invented a new document and added the ambitious aim of filtering all Runet traffic.

Anyway, despite the substantial criticism, the law was adopted. Legislators couldn't provide adequate answers on the resilience of the technical means, and even lied that these won't degrade the quality of communication. A recent case involving Yandex illustrates the point. In March 2019, attackers conducted a DNS attack on several large Russian Internet resources, and Yandex was one of the main victims. It was exactly the type of attack that exploits the vulnerability in the RKN blocking system explained above. As a result of the attack, a few small operators blocked access to some of Yandex's IP addresses, and large operators who use DPI systems to block content were forced to pass all Yandex-bound traffic through DPI. This significantly reduced the speed of access to Yandex services for users. Yandex fought off the attack over several days. "The blocking of sites was avoided, but the attack did not go unnoticed: active users of the company's services noticed a decrease in the speed of access to them," a company representative said. The case clearly illustrates the prospects for large-scale traffic inspection in the future: the equipment won't cope with the bandwidth.

What's now?

What will happen during the six months before the law comes into force? The MoC, the Government and RKN are required to prepare 30 by-laws (you can track their readiness here) to fill in the blind spots in the text of the law. Specifically, they will need to:

  • Make a list of the threats to the Runet and the principles of centralized traffic management
  • Define the technical parameters and rules governing the "black boxes"
  • Define how the registry of traffic exchange points will be formed
  • Define rules for providing information from operators and owners of ASN for filling in various information systems
  • Figure out how the national DNS will work
  • Establish a Center for monitoring and control of the public communications network. (It is noteworthy that the resolution on its creation was signed by the Government in February 2019, before the adoption of the law. The Center should start working by January 2020.)

Concluding thoughts

Analysis of the law leaves the impression that it was written by people who do not understand how the Internet works and are relying on a mental model of telephone communications. Moreover, they appear to believe blindly in the omnipotence of "black boxes" that will filter traffic and protect Runet from unknown threats on a national scale.

With this first impression, it seems the law is primarily aimed at censorship under the cover of national security. Companies that don't comply with laws requiring decryption or localization of users' messages, yet continue to operate in Russia, such as Twitter, Facebook and Telegram, have damaged the reputation of RKN. The government cannot allow these companies to keep ignoring its decisions.

Of course, one can agree that the resiliency of the Internet in the country is a serious concern and should be addressed in some way, but the measures offered by this law don't solve those problems; on the contrary, they can degrade the quality of access and make Runet more vulnerable than it is now by centralizing the management of public networks.

More likely, this law will share the fate of the anti-terrorist amendments known as the "Yarovaya package," which required service providers to store the content of voice calls, data, images and text messages for 6 months, and the metadata of communications for 3 years. That law came into force in October 2018, but since then none of the service providers has implemented data retention, simply because they do not possess the equipment needed to store such enormous amounts of data. Moreover, there is still no suitable ready-made solution on the market for this purpose. And the government is still fighting to establish the requirement to use only national technological solutions.

One can imagine how much work will be needed to develop the traffic management equipment to support the RKN Center for monitoring and control of public networks, and the systems supporting a national DNS. It is therefore highly unlikely that the 30 by-laws needed to clarify the technical requirements will all be issued by November 1, 2019. On the contrary, the work will probably take several years to complete.

However, the upcoming field testing of DPI solutions by RKN will gradually reveal the insanity of the idea of fully controlling all traffic in the country. End users, and especially businesses, will need to be prepared for service interruptions; "without a declaration of war," access to some legitimate Internet services will be denied. It would be good if such problems were immediately acknowledged by RKN and rolled back, but who will compensate businesses for the losses? That's why optimists have simply crossed their fingers, held their breath, and waited for telecom operators to sabotage the execution of the law or find a way to comply formally, on paper, without actually doing so. Moreover, there is nothing to execute yet; the practical steps are still waiting to be defined.

Originally published in the Internet Governance Project.

Written by Ilona Stadnik, Ph.D. candidate at the Saint-Petersburg State University

More under: Censorship, Internet Governance, Law, Policy & Regulation

SpaceX Reports Significant Broadband Satellite Progress

Fri, 2019-05-17 19:51

SpaceX may be approaching debris detection as a machine-learning problem in which the entire constellation, not individual satellites, is learning to avoid collisions.

SpaceX delayed last Wednesday's Starlink launch due to high winds, and on Thursday they decided to do a software update and postpone the launch until next week. But they revealed significant progress in their Starlink mission press release, in tweets by Elon Musk, and in a media call with him.

Starlink size comparison – novel packaging accommodates 60 satellites in a single launch. (Source)

The mission press release said SpaceX has significantly reduced the size and weight of its satellites. The initial November 2016 FCC filing specified 386 kg satellites that measured 4 x 1.8 x 1.2 meters. In February 2018, SpaceX launched two Internet-service test satellites, TinTin A and B, that measured only 1.1 x .7 x .7 meters with a total mass of approximately 400 kg. The mass of each Starlink satellite will be only 227 kg, about 43% less than the test satellites' 400 kg figure. (They are still heavier than OneWeb's 147.4 kg test satellites.)

As far as I know, SpaceX had not previously commented on the number of satellites that might be launched at once, but the number was generally estimated at 25-30 after considering constraints on mass, volume, and the number of satellites per orbital plane. As shown here, they will be launching a surprising 60 flat-packed satellites. Launching 60 satellites at once also demonstrates continued progress in rocket capability: this will be the heaviest SpaceX payload ever.

The speed and density of satellites in low-earth orbit increase the likelihood of a cascading debris collision. (Source)

The current and planned proliferation of low-earth orbit satellites increases the likelihood of a Kessler Syndrome event, a cascade of collisions between satellites and the ensuing debris. The press release alluded to what may be a significant advance in debris mitigation, stating that:

Each spacecraft is equipped with a Startracker navigation system that allows SpaceX to point the satellites with precision. Importantly, Starlink satellites are capable of tracking on-orbit debris and autonomously avoiding a collision.

That would be a breakthrough if feasible, but on first consideration it seems impossible. Low-earth orbit satellites move very fast, and even if a satellite had the resolution and pattern-recognition capability to "see" debris in its path, it would not be able to maneuver quickly enough to avoid a collision. That point was raised in this online discussion, and a possible solution was suggested: the entire constellation could dynamically pool and share data from each satellite, as well as use NORAD tracking data, which Musk mentioned during the media call.

SpaceX may be approaching this as a machine-learning problem in which the entire constellation, not individual satellites, is learning to avoid collisions using its shared data as well as data from other sources like NORAD. One can imagine sharing such data with competitors like OneWeb and Telesat or even with Russia, China or India. (Elon Musk is known to read science fiction — this speculation is reminiscent of Asimov's Gaia or Teilhard de Chardin's noosphere).

The prospect of launching 60 satellites at once and a shared-data approach to collision avoidance have grabbed my attention, but Musk's tweets and media call were also highly informative.

All that and they have yet to launch the satellites — stay tuned.

Written by Larry Press, Professor of Information Systems at California State University

Follow CircleID on Twitter

More under: Access Providers, Broadband, Wireless

Categories: News and Updates

A Report on the ICANN DNS Symposium

Fri, 2019-05-17 02:39

By any metric, the queries and responses that take place in the DNS are highly informative about the Internet and its use. But perhaps the level of interdependencies in this space is richer than we might think. When the IETF considered a proposal to explicitly withhold certain top-level domains from delegation in the DNS, the ensuing discussion highlighted the distinction between the domain name system as a structured space of names and the domain name system as a resolution space where certain names are instantiated to be resolvable using the DNS protocol. It is always useful to remember that other name resolution protocols exist, and they may use other parts of the domain name space. Having said that, the recent ICANN DNS Symposium was almost exclusively devoted to the name space associated with DNS resolution, and to the DNS protocol itself.

The DNS protocol represents an inconsistent mix of information leakage and obscurity. When a name resolution query is passed through a forwarder or a recursive resolver, the identity of the original source of the query is not preserved. Name resolution is a hop-by-hop process that hides the end user's identity in address terms. At the same time, the full query name is used throughout the resolution process, which exposes the end user's DNS traffic in all kinds of unanticipated places. Oddly enough we've seen recent changes to the protocol specification that attempt to reverse the effect of both of these measures!

The anonymity of the end user in DNS queries was compromised with the adoption of the Client Subnet extension. The ostensible rationale was to improve the accuracy of DNS-based client steering, allowing an authoritative name server to respond with the content address that would optimize the user experience. However, when one looks at the number of Client Subnet-enabled authoritative servers on a country-by-country basis, the countries at the top of the list include the United States, Turkey, Iran, China, Taiwan and the United Kingdom. Some 10% of users use recursive resolvers that will add effectively gratuitous client information to the query. It seems that use of the Client Subnet extension has gone far beyond the original objectives of using the DNS to perform content steerage, as David Dagon pointed out in his keynote presentation to the symposium.
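To make the mechanics concrete, here is a minimal sketch of a query carrying a Client Subnet option. It assumes the third-party dnspython package (my choice for illustration, not anything referenced at the symposium); the resolver address and the documentation prefix 192.0.2.0/24 are stand-ins.

    import dns.edns
    import dns.message
    import dns.query

    # Build an A-record query and attach an EDNS Client Subnet (ECS)
    # option that discloses the client's /24 network to the servers
    # along the resolution path.
    ecs = dns.edns.ECSOption("192.0.2.0", 24)
    query = dns.message.make_query("example.com", "A", use_edns=0, options=[ecs])

    # Ask a public recursive resolver; the ECS option echoed in the
    # response shows how much of the client prefix was actually used.
    response = dns.query.udp(query, "8.8.8.8", timeout=2)
    for option in response.options:
        if isinstance(option, dns.edns.ECSOption):
            print("ECS in response:", option)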

At the same time, we've seen moves to seal up the gratuitous information leaks in the DNS. The use of full query names when performing name server discovery is a major problem in the DNS, and the operators of the root zone servers tend to see a wealth of information relating to terminal names as a result, as do the operators of the top-level domains. The adoption of query name minimization by recursive resolvers effectively plugs that leak point, and the resolver only exposes the precise extent of information that it needs to expose in order to complete the various steps in the iterative name server discovery process.
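The effect of query name minimization is easy to illustrate. This small sketch (plain Python, no DNS library, purely illustrative) lists the query names a minimizing resolver would expose at each step of the walk from the root, in contrast to sending the full name to every server on the path.

    def minimized_qnames(full_name):
        # Return the sequence of query names a QNAME-minimizing resolver
        # exposes while walking down from the root; a non-minimizing
        # resolver would send the full name to every server on the path.
        labels = full_name.rstrip(".").split(".")
        return [".".join(labels[-i:]) + "." for i in range(1, len(labels) + 1)]

    # The root servers see only "com.", the .com servers "example.com.",
    # and only the final authoritative server sees the full name.
    for qname in minimized_qnames("www.mail.example.com"):
        print(qname)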

The EU NIS Directive

The introduction of the GDPR regulations in the EU and the adoption of similar measures in other national environments have gone a long way to illustrate that the Internet's actors are not beyond conventional regulatory purview. There is a relatively recent EU directive concerning the operation of "essential services" and the imposition of various requirements on the operators of such services, with hefty fines both for non-compliance with the measures and for serious outages of the essential service, as Jim Reid pointed out. The usual suspects of transport, banking, health care, financial markets and similar services are all covered by this measure, but the directive also includes digital infrastructure, which appears to sweep in top-level domain registries and DNS service providers. What makes a DNS service "essential" is an interesting question. How to measure such criticality when much of the information is provided from local caches is also an interesting question.

Working out a set of objective metrics to define an "essential" part of the DNS infrastructure seems like a rather odd requirement, but we may well see work in this area in order to implement the NIS directive. In any case, the bottom line is very clear. The name space is part of a set of essential public services, and it demands far more than a "best available effort" response from DNS service providers.

Measuring "DNS Magnitude"

If parts of the DNS are considered to be an essential service, then we may want to have some kind of metric that measures the use or impact of a domain name, as compared to other domain names. This leads to efforts to measure what has been termed "DNS Magnitude".

The DNS name resolution infrastructure is basically a collection of caches. The whole approach is designed to ensure that, as often as possible, your DNS queries are answered directly from a nearby cache. The queries that are seen at the authoritative servers are essentially cache misses. This confounds various attempts to measure the use of any domain name. If the name uses an extended cache time (TTL), then the number of cache misses will drop. If the use pattern of a name is highly bursty, the cache will again be very effective, and the authoritative server will see a small cache miss rate. So how can one use the query data seen at an authoritative name server to measure some aspect of the popularity of a domain name if the effective query rate is so dependent on the name's TTL settings?

The work presented by Alex Mayrhofer of nic.at starts with the assumption that the number of queries is of less value than the number of discrete hosts. He cites the extreme example that 100,000 queries from the same host address are lesser indicators of domain impact than a single query from each of 100,000 hosts. The basic idea is that if the shared name server sees a certain number of hosts making queries, then the relative magnitude of any particular domain name is the ratio of the number of hosts performing a query for this name as compared to the size of the entire host set.

The work uses a log scale to capture details of the "long tail" that exists in such metrics, so the refined metric is the log of the number of hosts seen querying for a domain compared to the log of the size of the overall host set. The metric appears to be reasonably independent of TTL settings, but it does assume a wide distribution of DNS recursive resolvers, which appears to be an increasingly dubious assumption as the large open DNS resolvers gather more momentum. One can only guess what effect QNAME minimization will have on this work, as the query rate would be unaltered, but the full domain name would be occluded from the upper-level DNS servers.
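As a back-of-the-envelope illustration, the metric as described reduces to a ratio of logarithms. The 0-to-10 scaling factor in this sketch is my assumption for readability, not necessarily the exact formulation presented.

    import math

    def dns_magnitude(domain_hosts, total_hosts):
        # Ratio of logs: unique hosts seen querying this domain versus
        # all unique hosts seen at the server, scaled to a 0..10 range.
        # The scaling constant of 10 is an assumption for illustration.
        if domain_hosts < 1 or total_hosts < 2:
            return 0.0
        return 10 * math.log(domain_hosts) / math.log(total_hosts)

    # 100,000 distinct querying hosts out of 10 million seen overall
    print(round(dns_magnitude(100_000, 10_000_000), 2))  # -> 7.14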

Dark Deeds in the DNS

It is no secret to either the people who undertake dark deeds on the Internet or to those trying to catch them that the DNS is one of the few systems that is universally visible. So, it's no surprise that domain names are used to control botnets. Much time and effort has been spent studying DNS and how the DNS has been co-opted to control malware. Stewart Garrick of Shadowserver presented on the Avalanche investigation, a multi-year law enforcement effort that spanned several countries. Some 2.5M domain names were blocked or seized during the investigative process.

There are various forms of blacklists that are intended to help service providers deny oxygen to digital miscreants. One of these, SURBL, was described at the symposium. It uses a DNS-based reputation database where a client can append the common surbl.org suffix to a name and query the DNS for an A record. If the query returns an address within the loopback address prefix, then the DNS name has been listed as blocked by the operators of this service.
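The lookup mechanics are simple enough to sketch with the standard library alone. The specific list zone used below (multi.surbl.org) and the return-code interpretation are assumptions to be checked against the operator's documentation.

    import socket

    def surbl_listed(domain, list_zone="multi.surbl.org"):
        # Append the list zone to the domain and look for an A record.
        # An answer inside 127.0.0.0/8 means "listed"; an NXDOMAIN
        # (surfacing as a socket error here) means the name is not listed.
        try:
            answer = socket.gethostbyname(f"{domain}.{list_zone}")
        except socket.gaierror:
            return False
        return answer.startswith("127.")

    print(surbl_listed("example.com"))  # an unlisted name prints False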

As Paul Vixie explained, SURBL is a specific instance of a more general approach, Response Policy Zones in the DNS, which has existed for many years as a standard for DNS firewall policies. The firewall operates via a DNS zone, and firewall rules are published, subscribed to, and shared by normal DNS zone transfer protocol operations. A recursive resolver can be configured to subscribe to a response policy, and resolution operations for firewalled names result in an NXDOMAIN response being generated by the recursive resolver. Implementations of this approach exist for Bind, Knot, Unbound and PowerDNS. More information on this approach can be found at https://dnsrpz.info.
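While real response policy zones are distributed and enforced by the resolver implementations named above, the policy decision itself can be sketched in a few lines; the blocked names below are, of course, hypothetical.

    # Hypothetical policy entries a resolver has pulled via zone transfer.
    blocked = {"bad.example", "malware.test"}

    def apply_policy(qname):
        # Check the query name and each parent domain against the policy,
        # loosely mirroring RPZ wildcard semantics, and synthesize an
        # NXDOMAIN response for any match.
        labels = qname.rstrip(".").split(".")
        for i in range(len(labels) - 1):
            if ".".join(labels[i:]) in blocked:
                return "NXDOMAIN"
        return "RESOLVE"  # fall through to normal resolution

    print(apply_policy("www.bad.example."))  # NXDOMAIN
    print(apply_policy("www.example.com."))  # RESOLVE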

Domain Attacks

Much has been said in recent times about the weakest link in the secured name environment, namely the link between the registrar and the name holder. If this relationship can be breached and unauthorized instructions can be passed to the registrar, which in turn are passed to the registry and make their way into the zone file, then the resources that lie behind the name can be readily compromised by trusting applications. One service operator, PCH, was compromised in this manner, and Bill Woodcock shared some details of the attack process. The subversion of the name matched a local holiday shutdown window. An earlier attack had exposed a collection of EPP (Extensible Provisioning Protocol) credentials. The rogue instructions to change the target's name servers were passed into the system via a compromised EPP credential. With control of the domain, it was then possible to obtain a domain validated name certificate immediately, using a CA that did not perform DNSSEC validation, even though the domain was DNSSEC-signed. This then allowed a remote mail access server (IMAP) to be compromised and IMAP account credentials to be exposed, together with mailboxes, and all other material sitting in various mail stores. Because the DS records were not altered in this particular attack, other information that required a validation check on the domain name was not exfiltrated. If the attack had also changed the DS records, it might have exposed more assets.

The attack was a well-rehearsed and rapidly executed set of steps, so other defense mechanisms, such as certificate logs ("certificate transparency") offer little in the way of substantive defense here. In this particular case, the use of DANE to perform certificate pinning would've been of material assistance, particularly if the TLSA record in DANE referenced the zone's KSK public key, but this particular case was an NS delegation change without a DS record change. Had the attacker also changed the DS record then DANE would not have been helpful. A similar comment can be made about CAA records and other forms of in-band pinning.
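For illustration, retrieving the TLSA records that DANE would use for pinning is a single DNS query. This sketch assumes the dnspython package (version 2.x for dns.resolver.resolve) and elides the DNSSEC validation that a real client must insist on.

    import dns.resolver

    def fetch_tlsa(host, port=443, proto="tcp"):
        # TLSA records live at _<port>._<proto>.<host>; they carry the
        # certificate or key association a DANE-aware client would pin.
        # A real client must also require a validated DNSSEC chain,
        # which this sketch leaves to the upstream resolver.
        qname = f"_{port}._{proto}.{host}"
        try:
            return list(dns.resolver.resolve(qname, "TLSA"))
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    for record in fetch_tlsa("example.com"):
        print(record)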

More generally, if the registrar/customer relationship is vulnerable, then many other aspects of name security are also vulnerable. If the attacker can alter both the delegation records and the zone signing key data in the parent zone, then there is very little for applications to fall back on to detect the attack and correctly identify the new information as bogus. It seems that in today's name environment the registrar/customer relationship is not well protected in many cases, and two-factor authentication would be a necessary and practical minimum. The other aspect of such attacks is their speed of execution. Deliberately slowing down the process of changing records in the parent zone through registry lock practices does offer some tangible benefit.

As usual, there is no magic cure-all defense here, and careful selection of name registrars, coupled with constant monitoring, is an essential minimum these days.

DNS over HTTPS

Any DNS meeting would not be complete without extended mention of DNS over HTTPS, and the Symposium was no exception. However, I have covered this topic in some detail in recent posts, so I'll skip making any further comment here!

Meeting Materials – The full agenda and presentation materials for the 2019 symposium can currently be found at https://www.icann.org/ids

Written by Geoff Huston, Author & Chief Scientist at APNIC

Follow CircleID on Twitter

More under: Cybersecurity, DNS, ICANN, Internet Protocol

Categories: News and Updates

Two Years Later WannaCry Continues to Spread to Vulnerable Devices, Nearly 5M Devices Affected

Thu, 2019-05-16 22:49

A slide from a 2017 presentation by Sophos CTO Joe Levy depicting the timeline of events and how the WannaCry outbreak was able to spread so quickly. (Source: Sophos)

Two years after the initial wave of WannaCry attacks in May 2017, security researchers say the ransomware continues to spread to vulnerable devices. WannaCry infections have affected close to 5 million devices to date. InfoSecurity's Michael Hill writes: "Although WannaCry variants detections have been subdued since the global kill switch was activated, they have far from disappeared. Malwarebytes' research showed that Eastern countries are most at risk from WannaCry; the majority of detections since its initial spread landed in India (727,883), Indonesia (561,381), the US (430,643), Russia (356,146) and Malaysia (335,814). In the UK, there have been 17,185 detections since the initial attack took place, with just 41 incidents recorded since April 1, 2019. In contrast, other countries have continued to register large numbers of detections in the same period; India (19,777), Indonesia (19,192) and the US (3325), for instance."

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity, Malware

Categories: News and Updates

WordPress Parent Automattic's .Blog Operator Switches Registry From Nominet to CentralNic

Thu, 2019-05-16 20:06

The operator of the .blog top-level domain, Knock Knock WHOIS There, LLC, a subsidiary of Automattic, the parent company of WordPress.com and Jetpack, announced on Wednesday that it is "moving into the next phase of .blog" and has chosen CentralNic as its new TLD registry provider. The UK-based outfit Nominet had been the registry service provider since the TLD's launch in 2015.

"From a registrar perspective backend changes are, for lack of a better word, a pain," says Blacknight CEO, Michele Neylon. "We generally have to deal with a quite messy switchover process which requires a lot of extra work for our developers and other teams without any benefit." However, Automattic says the difficulty involved in the transition is justified as it will help in further advancing the .blog experience for its partners and domain end users via tools and services available through CentralNic's registry services.

The change is currently pending ICANN approval, and the migration is expected to be completed in late August or early September 2019.

Follow CircleID on Twitter

More under: Registry Services, New TLDs

Categories: News and Updates

Close to 735K Fraudulently Obtained IP Addresses Have Been Uncovered and Revoked, ARIN Reveals

Tue, 2019-05-14 20:27

The American Registry for Internet Numbers, Ltd. (ARIN) has won a legal case against an elaborate multi-year scheme to defraud the Internet community of approximately 735,000 IPv4 addresses, the organization has revealed. While the specifics of the findings have not been released, John Curran, ARIN President and CEO, said the fraud was detected as a result of an internal due diligence process.

ARIN is a nonprofit member-based organization responsible for distributing Internet number resources in the US, Canada, and parts of the Caribbean. The emerging IPv4 address transfer market and increasing demand have resulted in more attempts to obtain IPv4 addresses fraudulently.

This was the first arbitration ever brought under an ARIN Registration Services Agreement, and it involved related proceedings in the U.S. District Court for the Eastern District of Virginia. ARIN was able to prove an intricate scheme to fraudulently obtain resources, one that included many falsely notarized officer attestations sent to ARIN. "A company in South Carolina obtained and utilized 11 shelf companies across the United States, and intentionally created false aliases purporting to be officers of those companies, to induce ARIN into issuing the fraudulently sought IPv4 resources and approving related transfers and reassignments of these addresses. The defrauding party was monetizing the assets obtained in the transfer market, and obtained resources under ARIN's waiting list process." (ARIN Press Release)

The defrauding entity adopted an aggressive posture after ARIN requested that it produce certain documents and explain its conduct. The suspected party filed a motion for a Temporary Restraining Order and Preliminary Injunction against ARIN in U.S. District Court, and demanded a hearing the following morning (the Friday just before Christmas). "The aggressive posture was taken after ARIN indicated its intent to revoke addresses, while permitting defrauding entity to renumber to allow existing bona fide customers not to have service interrupted," ARIN’s General Counsel told CircleID. "The litigation was filed against ARIN to seek an injunction to stop ARIN from revoking and enter arbitration. Some addresses were transferred for money prior to that demand, others were pending transfer and were never transferred due to ARIN investigation."

Some fraudulently obtained addresses were transferred to third parties; however, ARIN made no effort to pursue the parties that received the completed transfers, ARIN's General Counsel told CircleID. The reason being: "(a) addressed were in another RIR service region (e.g. RIPE NCC and APNIC) and (b) ARIN did not see any evidence they knew of or participated in the fraud. In other words, they appeared to be bona fide 3rd parties."

ARIN obtained the arbitration award on May 1, 2019, which included revocation of all resources issued pursuant to fraud and $350,000 to ARIN for its legal fees.

UPDATE May 15, 2019: "Charleston Man and Business Indicted in Federal Court in Over $9M Fraud" – The United States Department of Justice issued a statement announcing that Amir Golestan, 36, of Charleston, and Micfo, LLC, were charged in federal court in a twenty-count indictment. The indictment charges twenty counts of wire fraud, with each count punishable by up to 20 years imprisonment.

"The indictment alleges that since February 2014, Golestan and Micfo created and utilized 'Channel Partners,' which purported to consist of several individual businesses, all of whom acquired the right to IP addresses from the American Registry of Internet Numbers (ARIN). The indictment alleges that Golestan and Micfo fabricated the true nature of the Channel Partners, including creating false officers and deceptive websites for the businesses, which were in turn used to deceive ARIN and to fraudulently obtain IP address rights from ARIN. The indictment charges that, through this scheme, Golestan and Micfo obtained the rights to approximately 757,760 IP addresses, with a market value between $9,850,880.00 and $14,397,440.00." (DOJ / May 15, 2019)

Follow CircleID on Twitter

More under: IP Addressing

Categories: News and Updates

Huawei Says They Are Willing to Sign No-Spy Agreements With Governments

Tue, 2019-05-14 17:50

During a London conference, Huawei's chairman Liang Hua told reporters the company would sign no-spy agreements with governments as a response to United States' pressure on Europe to bar the Chinese telecommunications company over spying concerns. "We are willing to sign no-spy agreements with governments, including the UK government, to commit ourselves to making our equipment meet the no-spy, no-backdoors standard," said Liang. This is the first time Huawei has made such a statement in public.

Critics, however, are concerned that Huawei could be forced to comply with surveillance demands under China's 2017 intelligence law, which requires companies to cooperate with the country's government when asked.

Never been asked says Huawei: Tim Watkins, Huawei's vice-president for western Europe told reporters in London that the company founder, Mr Ren Zhengfei, "has made it clear that he has never been asked to hand over any customer data or information, and he has made it clear that if asked he would refuse and if it was attempted to be enforced he would shut the company down."

Forging ahead: According to Huawei's 2018 report, the company has close to 188,000 employees, operates in more than 170 countries and regions, and serves more than three billion people around the world. At the end of 2018, the company's board approved an initial budget of US$2 billion for a companywide transformation aimed at enhancing its software engineering capabilities.

Follow CircleID on Twitter

More under: Cybersecurity, Mobile Internet, Telecom

Categories: News and Updates

Know Someone Who Has Made the Internet Better? Postel Service Award Nominations Deadline May 15

Tue, 2019-05-14 01:48

Do you know of someone who has made the Internet better in some way who deserves more recognition? Maybe someone who has helped extend Internet access to a large region? Or wrote widely-used programs that make the Internet more secure? Or maybe someone who has been actively working for open standards and open processes for the Internet?

Each year the Internet Society awards the Jonathan B. Postel Service Award to an individual or an organization that has made outstanding contributions in service to the Internet community.

Some of the recent winners include (see the full list):

  • 2018 - Steven G Huter
  • 2017 - Kimberly C. Claffy
  • 2016 - Kanchana Kanchanasut
  • 2015 - Rob Blokzijl
  • 2014 - Mahabir Pun
  • 2013 - Elizabeth "Jake" Feinler
  • 2012 - Pierre Ouedraogo
  • 2011 - Professor Kilnam Chon
  • 2010 - Dr. Jianping Wu
  • 2009 - CSNET
  • 2008 - La Fundación Escuela Latinoamericana de Redes (EsLaRed)
  • 2007 - Dr Nii Quaynor

The deadline for nominations for the 2019 Postel Service Award is this coming Wednesday, May 15. The award is both a presentation crystal and a $20,000 USD prize. The award will be presented at the 105th meeting of the Internet Engineering Task Force (IETF) in Montreal, Canada, in July 2019.

To complete the nomination form (self-nominations are welcome), you need the following:

  • ​The nominee's contact information.
  • ​The nominee's resume/curriculum vitae.
  • A statement of recommendation​ — A brief statement that includes specific acts, works, contributions, and other criteria that show how the candidate exemplifies the standard set by Jon Postel. It should be clear from this statement that the candidate has performed in this manner consistently and over a long period of time, not simply that the candidate has done several significant things in the area of data communications and the Internet.
  • Two references — Names, email addresses and phone numbers for at least two people who will support your recommendation.

The Postel Service Award provides a great opportunity to recognize people who have made the Internet better in some way. Please consider nominating someone you know before Wednesday's deadline!

Written by Dan York, Author and Speaker on Internet technologies - and on staff of Internet Society

Follow CircleID on Twitter

More under: Internet Protocol, Networks, Telecom, Web

Categories: News and Updates

Gall's Law and the Network

Mon, 2019-05-13 19:02

In Systemantics: How Systems Really Work and How They Fail, John Gall says:

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

In the software development world, this is called Gall's Law (even though Gall himself never calls it a law) and is applied to organizations and software systems. How does this apply to network design and engineering? The best place to begin in answering this question is to understand what, precisely, Gall is arguing for; there is more here than what is visible on the surface.

What does a simple system mean? It is, first of all, an argument for underspecification. This runs counter to the way we instinctively want to design systems. We want to begin by discovering all the requirements (problems to be solved and constraints), and then move into an orderly discussion of all the possible solutions and sets of solutions, and then into an orderly discussion of an overall architecture, then into a nice UML chart showing all the interaction surfaces and how they work, and ... finally ... into building or buying the individual components.

This is beautiful on paper, but it does not often work in real life. What Gall is arguing for is building a small, simple system first that only solves some subset of the problems. Once that is done, add onto the core system until you get to a solution that solves the problem set. The initial system, then, needs to be underspecified. The specification for the initial system does not need to be "complete;" it just needs to be "good enough to get started."

If this sounds something like agile development, that's because it is something like agile development.

This is also the kind of thinking that has been discussed on the history of networking series (listen to this episode on the origins of DNS with Paul Mockapetris as an example). There are a number of positive aspects to this way of building systems. First, you solve problems in small enough chunks to see real progress. Second, as you solve each problem (or part of the problem), you are creating a useable system that can be deployed and tested and solves a specific problem. Third, you are more likely to "naturally" modularize a system if you build it in pieces. Once some smaller piece is in production, it is almost always going to be easier to build another small piece than to try to add new functionality and deploy the result.

How can this be applied to network design and operations?

The most obvious answer is to build the network in chunks, starting with the simple things first. For instance, if you are testing a new network design, focus on building just a campus or data center fabric, rather than trying to replace the entire existing network with a new one. This use of modularization can be extended to use cases beyond topologies within the network, however. You could allow multiple overlays to co-exist, each one solving a specific problem, in the data center.

This latter example, however — multiple overlays — shows how and where this kind of strategy can go wrong. In building multiple overlays, you might be tempted to build multiple kinds of overlays by using different kinds of control planes, or different kinds of transport protocols. This kind of ad-hoc building can fit well within the agile mindset but can result in a system that is flat-out unmaintainable. I have been in two-day meetings where the agenda was just to go over every network-management-related application currently deployed in the network. A printed copy of the spreadsheet, one tool per line, came out to tens of pages. This is agile gone wildly wrong, driving unnecessary complexity.

Another problem with this kind of development model, particularly in network engineering, is it is easy to ignore lateral interaction surfaces, particularly among modules that do not seem to interact. For instance, IS-IS and BGP are both control planes, and hence seem to fit at the same "layer" in the design. Since they are lateral modules, each one providing different kinds of reachability information, it is easy to forget they also interact with one another.

Gall's law, like all laws in the engineering world, can be a good rule of thumb — so long as you keep up a system-level view of the network, and maintain discipline around a basic set of rules (such as "don't use different kinds of overlays, even if you use multiple overlays").

Written by Russ White, Infrastructure Architect at Juniper Networks

Follow CircleID on Twitter

More under: Networks

Categories: News and Updates
