Domain industry news

Latest posts on CircleID

Berners-Lee Talks Net Neutrality in Washington, "ISPs Should be Treated More Like Utilities"

Fri, 2017-11-17 20:34

Tim Berners-Lee is in Washington urging lawmakers to reconsider the rollback of net neutrality rules — and while he remains optimistic, he senses a "nasty wind" blowing. Olivia Solon, reporting in The Guardian, writes: "These powerful gatekeepers ... control access to the internet and pose a threat to innovation if they are allowed to pick winners and losers by throttling or blocking services. It makes sense, therefore, that ISPs should be treated more like utilities. ... 'Gas is a utility, so is clean water, and connectivity should be too,' said Berners-Lee. 'It's part of life and shouldn't have an attitude about what you use it for — just like water.'"

More under: Access Providers, Net Neutrality, Policy & Regulation

U.S. Government Takes Steps Towards Increased Transparency for Vulnerabilities Equities Process

Fri, 2017-11-17 02:47

The White House has released a charter offering more transparency into the Vulnerabilities Equities Process. Tom Spring from ThreatPost reports: "On Wednesday it released the 'Vulnerabilities Equities Policy and Process' [PDF] charter that outlines how the government will disclose cyber security flaws and when it will keep them secret. The release of the charter is viewed as a positive by critics and a step toward addressing private-sector concerns that the VEP's framework is too secretive."

More under: Cybersecurity, Policy & Regulation

IBM Launches Quad9, a DNS-based Privacy and Security Service to Protect Users from Malicious Sites

Fri, 2017-11-17 01:58

In a joint project, IBM Security, along with Packet Clearing House (PCH) and the Global Cyber Alliance (GCA), today launched a free service designed to give consumers and businesses added online privacy and security protection. The new DNS service is called Quad9 in reference to its IP address, 9.9.9.9. The group says the service is aimed at protecting users from accessing malicious websites known to steal personal information, infect users with ransomware and malware, or conduct fraudulent activity.

Quad9 is said to provide these protections without compromising the speed of users' online experience. From the announcement: "Leveraging PCH's expertise and global assets around the world, Quad9 has points of presence in over 70 locations across 40 countries at launch. Over the next 18 months, Quad9 points of presence are expected to double, further improving the speed, performance, privacy and security for users globally. Telemetry data on blocked domains from Quad9 will be shared with threat intelligence partners for the improvement of their threat intelligence responses for their customers and Quad9."

The Genesis of Quad9: "Quad9 began as the brainchild of GCA. The intent was to provide security to end users on a global scale by leveraging the DNS service to deliver a comprehensive threat intelligence feed. This idea led to the collaboration of the three entities: GCA: Provides system development capabilities and brought the threat intelligence community together; PCH: Provides Quad9's network infrastructure; and IBM: Provides IBM X-Force threat intelligence and the easily memorable IP address (9.9.9.9)."

Philip Reitinger, President and CEO of the Global Cyber Alliance: "Protecting against attacks by blocking them through DNS has been available for a long time, but has not been used widely. Sophisticated corporations can subscribe to dozens of threat feeds and block them through DNS, or pay a commercial provider for the service. However, small to medium-sized businesses and consumers have been left behind — they lack the resources, are not aware of what can be done with DNS, or are concerned about exposing their privacy and confidential information. Quad9 solves these problems. It is memorable, easy to use, relies on excellent and broad threat information, protects privacy and security, and is free."
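
For readers who want to experiment, here is a minimal sketch of how one might query Quad9 directly. It assumes the third-party dnspython package, and the domain name is a placeholder; Quad9 answers NXDOMAIN for names on its threat-intelligence blocklist, so comparing its answer with another resolver's is a quick way to see the filtering in action.

    # Minimal sketch: query Quad9's resolver at 9.9.9.9 (assumes dnspython).
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["9.9.9.9"]  # Quad9's anycast address

    try:
        for record in resolver.resolve("example.com", "A"):  # placeholder name
            print(record.address)
    except dns.resolver.NXDOMAIN:
        # A name that resolves via other resolvers but returns NXDOMAIN here
        # may be on Quad9's blocklist.
        print("No such domain via Quad9 (possibly blocked)")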

More under: Cyberattack, Cybercrime, DNS, DNS Security, Malware, Privacy, Web

UDRPs Filed - Brand Owners Take Note

Thu, 2017-11-16 21:27

After being in the domain industry for over 15 years, there aren't too many things that catch me by surprise, but recently a few UDRP filings have me scratching my head.

Both ivi.com and ktg.com have had UDRPs filed against them, and I have to say, for anyone holding a valuable domain name, these are cautionary tales whose outcomes deserve close attention.

Just as a refresher, to be successful in a UDRP filing, the complainant must prove the following:

  • the domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights; and
  • the registrant has no rights or legitimate interests in respect of the domain name; and
  • the domain name has been registered and is being used in bad faith.

With that in mind, let's look a little closer at the details of these two troubling UDRP filings.

Ivi.com is registered to WebMD, LLC, a long-time provider of health and wellness information on the Internet, and the domain has been registered since 1992. The domain name currently doesn't resolve to any content, so it's not actively being used. The complainant is Equipo IVI SL, an assisted reproduction group based in Spain that operates from the domain ivi-fertility.com. According to its website, IVI was founded in 1990 in Valencia.

The domain ktg.com is registered to HUKU LLC, an entity that appears to be based in Belize, and has been registered since at least 2001. According to a reverse WHOIS lookup, this entity owns a few hundred generic domain names in a variety of extensions. The domain ktg.com resolves to a Domain Holdings page with a message stating that the domain may be for sale. The complainant is a company called Kitchens To Go, which operates from the kitchenstogo.com domain, registered in 1998. It also appears to operate the k-t-g.com domain name.

Based on the prima facie evidence, I'm doubtful that either of these UDRP filings should succeed — but then again, the domain imi.com was recently handed over to the complainant in a case with circumstances that appear very similar to these latest two. It should be noted, though, that in that case the registrant did not even respond to the UDRP.

What can brand owners do to ensure that they don't find themselves losing a domain in a questionable UDRP filing? A few things:

  • Ensure your WHOIS information is up-to-date and accurate so that any correspondence sent to the published contacts is received. People assume nothing of value arrives at those contacts, but a UDRP filing is certainly something you'd want to be sure you received.
  • If you do find a long-held domain subject to a UDRP (or any domain, for that matter), make sure you file a response so that you don't leave the complainant as the only voice in front of the UDRP panelists.
  • Make sure that your registrar has a procedure in place to notify you of any UDRP filing it receives for your domains. In addition to the communication sent to the domain owner, the registrar of record also receives notification, and it should pass those notifications on to its clients.

It will be very interesting to see how these two UDRP filings play out, and we'll be sure to report back once the decisions have been made public.

Written by Matt Serlin, SVP, Client Services and Operations at Brandsight

More under: Domain Names, Intellectual Property, UDRP

When UDRP Consolidation Requests Go Too Far

Thu, 2017-11-16 15:25

Although including multiple domain names in a single UDRP complaint can be a very efficient way for a trademark owner to combat cybersquatting, doing so is not always appropriate.

One particularly egregious example involves a case that originally included 77 domain names — none of which the UDRP panel ordered transferred to the trademark owner, simply because consolidation against the multiple registrants of the domain names was improper.

The UDRP case, filed by O2 Worldwide Limited, is an important reminder to trademark owners that they should not overreach when filing large complaints — at least when the disputed domain names are held by different registrants.

The Same Domain-Name Holder

Under the UDRP rules, a "complaint may relate to more than one domain name, provided that the domain names are registered by the same domain-name holder." As a result, many UDRP complaints include multiple domain names — ranging from two to more than 1,500.

While this UDRP rule may seem straightforward, it can become more complicated in practice, especially as some clever cybersquatters try to hide behind aliases to frustrate trademark owners.

Where the registrants appear to be different, the WIPO Overview of WIPO Panel Views on Selected UDRP Questions, Third Edition, says that UDRP panels often weigh the following in deciding whether it is proper to include multiple domain names in a single complaint: "whether (i) the domain names or corresponding websites are subject to common control, and (ii) the consolidation would be fair and equitable to all parties."

The Overview adds: "Procedural efficiency would also underpin panel consideration of such a consolidation scenario."

Not Procedurally Efficient

In the O2 case, the panel found that consolidation was not appropriate, based on a most unusual set of facts. O2 had argued that "unifying features… link all of the domains" and that a single individual "maintain[ed] common control" over all of the domain names.

But the panel strongly disagreed, noting that 25 different entities were named as respondents for the 77 domain names in the original complaint. Incredibly, the panel said:

The administrative procedure that the [WIPO] Center was required to undertake as a result of this filing involved: (i) numerous communications with four different Registrars; (ii) the withdrawal of the Complaint against 11 of the domain names due to the fact that they were no longer registered; (iii) the receipt of 20 separate communications, from 12 different Respondents or Other Submissions, respectively, each of whom appeared to be operating independently of the others and whose positions were not identical; (iv) the receipt of two separate formal Responses; and (v) the filing of one unsolicited Supplemental Filing by the Complainant.

This, the panel wrote, created an "administrative burden" that was "undue — and certainly not procedurally efficient." Further, the panel said that because "the Respondents appear to be separate persons whose positions are not necessarily identical," treating them alike in a single proceeding "is unlikely to be fair and equitable."

Not only did the panel reject O2's consolidation arguments, but it also rejected O2's request to proceed against any of the disputed domain names:

In the Panel's view, what the Complainant has sought to do is throw a large number of disputed domain names registered by a large number of separate Respondents into one Complaint, request consolidation on the basis of a general assertion of connectedness, rely on the Center to verify the situation of every disputed domain name and Respondent to identify those against whom the Complaint can proceed, and rely on the Panel to work through the case of every Respondent to determine in respect of whom consolidation would be fair and equitable. The Panel does not wish to encourage Complainants to adopt this approach. Accordingly, the Panel will not accede to the Complainant's request to allow consolidation to proceed in respect of some sub-set of the disputed domain names.

Despite the panel's ultimate denial of O2's entire complaint (allowing all of the registrants to retain their domain names), the panel made clear that O2 could file new (but separate) complaints against some (or all) of the registrants. But, interestingly, as of this writing (more than four months after the decision date), it appears as if O2 has not done so.

More Routine Requests

Few, if any, UDRP cases raise the same consolidation complexities (or unprecedented arguments) as in the O2 case. Rather, many consolidation requests are routinely granted.

Indeed, even some large cases that involve registrants with different identities have been allowed to proceed on a consolidated basis, that is, in a single proceeding.

For example, in a UDRP case filed by United Parcel Service (UPS) for 122 domain names "owned by many different registrants," the panel allowed consolidation because there was "sufficient evidence demonstrating that the listed domain name holders are aliases and that the domain names are controlled by a single person" — namely, that "(i) all 122 domain names were registered in October 2016, (ii) all 122 names have the same registrar, Wild West Domains, LLC, (iii) all 122 names were initially registered using the same privacy service, Domains By Proxy, (iv) the email addresses listed for each of the 122 infringing domain names follow the same general format — FirstNameLastName@FirstNameLastName.onmicrosoft.com, and (v) all resolve to the same website — an inactive website displaying the same error message, 'The page cannot be displayed.'"

Despite the different outcomes, the decisions in the O2 and UPS cases are consistent with each other as well as with UDRP precedent and practice. Together, they provide important lessons about the appropriateness of — but limitations on — including multiple domain names in a single complaint.

Written by Doug Isenberg, Attorney & Founder of The GigaLaw Firm

More under: Domain Names, UDRP

Russia Targeted British Telecom, Media, Energy Sectors, Reveals UK National Cyber Security Centre

Wed, 2017-11-15 20:14

Speaking at The Times Tech Summit in London, Ciaran Martin, chief of the National Cyber Security Centre (NCSC), warned that Russia is seeking to undermine the international system. "I can't get into too much of the details of intelligence matters, but I can confirm that Russian interference, seen by the National Cyber Security Centre, has included attacks on the UK media, telecommunications and energy sectors. ... The government is prioritising cyber security because we care so much about the digital future of the country. We're doing it broadly on the themes that will come up today — defend networks, deter attackers and develop the skills base."

More under: Cyberattack, Cybersecurity, Policy & Regulation

Airplanes Vulnerable to Hacking, Says U.S. Department of Homeland Security

Wed, 2017-11-15 18:03

Researchers have successfully demonstrated that a commercial aircraft can be remotely hacked. Calvin Biesecker reports in Avionics: "A team of government, industry and academic officials successfully demonstrated that a commercial aircraft could be remotely hacked in a non-laboratory setting last year, a U.S. Department of Homeland Security (DHS) official said Wednesday at the 2017 CyberSat Summit in Tysons Corner, Virginia. [U.S. Department of Homeland Security aviation program manager says] 'We got the airplane on Sept. 19, 2016. Two days later, I was successful in accomplishing a remote, non-cooperative, penetration ... [which] means I didn't have anybody touching the airplane, I didn't have an insider threat. I stood off using typical stuff that could get through security and we were able to establish a presence on the systems of the aircraft.'"

More under: Cyberattack, Cybersecurity

Your Online Freedoms are Under Threat - 2017 Freedom on the Net Report

Tue, 2017-11-14 16:08

As more people get online every day, Internet freedom is in global decline for the 7th year in a row.

Today, Freedom House released their 2017 Freedom on the Net report, one of the most comprehensive assessments of countries' performance regarding online freedoms. The Internet Society is one of the supporters of this report. We think it brings solid and needed evidence-based data in an area that fundamentally impacts user trust.

Looking across 65 countries, the report highlights several worrying trends, including:

  • manipulation of social media in democratic processes
  • restrictions of virtual private networks (VPNs)
  • censoring of mobile connectivity
  • attacks against netizens and online journalists

Elections prove to be particular tension points for online freedoms (see also Freedom House's new Internet Freedom Election Monitor). Beyond the reported trend towards more sophisticated government attempts to control online discussions, the other side of the coin is an increase in restrictions to Internet access, whether through shutting down networks entirely, or blocking specific communication platforms and services.

These Internet shutdowns are at risk of becoming the new normal. In addition to their impact on freedom of expression and peaceful assembly, shutdowns generate severe economic costs, affecting entire economies [1] and the livelihoods of tech entrepreneurs, often in regions that would benefit the most from digital growth.

We need to build on these numbers, as they open a new door for holding governments accountable. By adopting the U.N. Sustainable Development Goals (SDGs) in 2015, the governments of the world committed to leveraging the power of the Internet in areas such as education, health and economic growth. Cutting off entire populations from the Internet sets the path in the wrong direction.

Mindful of the urgency of this issue, the Internet Society is today releasing a new policy brief on Internet shutdowns, which provides an entry point into the issue, teases out the various impacts of such measures and offers some preliminary recommendations to governments and other stakeholders.

Of course, this can only be the beginning of any action, and we need everyone to get informed and make their voices heard on shutdowns and other issues related to online freedoms.

[1] Among other similar studies, Brookings assessed a cost of about USD 2.4 billion resulting from shutdowns across countries evaluated between July 1, 2015 and June 30, 2016.

Written by Nicolas Seidler, Senior Policy Advisor

More under: Censorship, Internet Governance, Policy & Regulation

Telesat - a Fourth Satellite Internet Competitor

Mon, 2017-11-13 20:58

I've been following the SpaceX, OneWeb and Boeing satellite Internet projects, but have not mentioned Telesat's. Telesat is a Canadian company that has provided satellite communication service since 1972. (They claim their "predecessors" worked on Telstar, which relayed the first intercontinental satellite transmission in 1962.) Earlier this month, the FCC approved Telesat's petition to provide Internet service in the US using a proposed constellation of 117 low-Earth orbit (LEO) satellites.

Note that Telesat will begin with only 117 satellites while SpaceX and the others plan to launch thousands — how can they hope to compete? The answer lies in their patent-pending approach to deployment. They plan a polar-orbit constellation of six equally-spaced (30 degrees apart) planes inclined at 99.5 degrees at an altitude of approximately 1,000 kilometers and an inclined-orbit constellation of five equally-spaced (36 degrees apart) planes inclined at 37.4 degrees at an approximate altitude of 1,248 kilometers. (The spacings imply the planes are spread across 180 degrees: 180/6 = 30 and 180/5 = 36.)

Telesat's LEO constellation will combine polar (green) and inclined (red) orbits.

This hybrid polar-inclined constellation will provide global coverage with a minimum elevation angle of approximately 20 degrees using ground stations in Svalbard, Norway and Inuvik, Canada. Their analysis shows that 168 polar-orbit satellites would be required to match the global coverage of their 117-satellite hybrid constellation, and, according to Erwin Hudson, Vice President of Telesat LEO, their investment per Gbps of sellable capacity will be as low as, or lower than, that of any existing or announced satellite system. They also say their hybrid architecture will simplify spectrum-sharing.

An inter-constellation route (source)

The figure (right) from their patent application illustrates hybrid routing. The first hop in a route to the Internet for a user in a densely populated area like Mexico City (410) would be to a visible inclined-orbit satellite (420). The next hop would be to a satellite in the polar-orbit constellation (430), then to a ground station on the Internet (440).

The up and downlinks will use radio frequencies, and the inter-satellite links will use optical transmission. Since the ground stations are in sparsely populated areas and the distances between satellites are short near the poles, capacity will be balanced. This scheme may leave Telesat customers with slightly higher latencies than their competitors', but the difference will be negligible for nearly all applications.
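
As a back-of-the-envelope illustration of why the extra hop costs so little, the sketch below computes minimum propagation delays from the altitudes given above. The 2,000 km inter-satellite distance is my assumption for illustration, and real slant ranges are longer, so these figures are lower bounds only.

    # Rough lower-bound latency comparison; distances are illustrative.
    C_KM_PER_MS = 299_792.458 / 1000  # speed of light, km per millisecond

    def one_way_ms(km: float) -> float:
        return km / C_KM_PER_MS

    # Single-satellite route: user -> polar satellite (1,000 km) -> ground station
    direct = one_way_ms(1000) + one_way_ms(1000)

    # Hybrid route: user -> inclined satellite (1,248 km) -> polar satellite
    # (assumed 2,000 km optical link) -> ground station (1,000 km)
    hybrid = one_way_ms(1248) + one_way_ms(2000) + one_way_ms(1000)

    print(f"direct: {direct:.1f} ms, hybrid: {hybrid:.1f} ms")
    # direct: 6.7 ms, hybrid: 14.2 ms -- a few milliseconds of added latency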

They will launch two satellites this year — one on a Russian Soyuz rocket and the other on an Indian Polar Satellite Launch Vehicle. These will be used in tests, and Telesat says a number of their existing geostationary satellite customers are enthusiastic about participating. They will begin launching their phase 2 satellites in 2020 and commence commercial service in 2021. They consider 25 satellites per launch vehicle a practical number, so they expect to have global availability before their competitors. Their initial capacity will be relatively low, but they will add satellites as demand grows.

Like OneWeb, Telesat will work with strategic partners for launches and design and production of satellites and antennae. They have not yet selected those partners, but are evaluating candidates and are confident they will be ready in time for their launch dates. Their existing ground stations give them a head start. (OneWeb just contracted with Hughes for ground stations).

Their satellites will work with mechanically and electronically steered antennae, and each satellite will have a wide-area coverage mode for broadcast and for distributing software updates. Their patent application mentions community broadband and hotspots, large enterprises, ships and planes, software updates and the Internet of things as initial markets, but not homes.

Telesat's Canadian patent application goes into detail on all of the above, and I'd be curious to know what exactly it would protect. They also consider their global spectrum priority rights from the International Telecommunication Union an asset, but they will have to agree to spectrum-sharing conventions and debris-mitigation agreements.

Let me conclude with a suggestion for Telesat and the Cuban government.

OneWeb has committed to providing coverage to the entire state of Alaska by the end of 2020, and Telesat says they will have global coverage by 2021. I follow the state of the Internet in Cuba and think Cuba would be a good starting place for Telesat service. Cuba has the best-educated, Internet-starved population in Latin America and the Caribbean; they have very little domestic Internet infrastructure, and much of the infrastructure they do have is obsolete. Cuba is close to being an Internet "green field" and, since it is an island nation, their polar satellite "footprint" would not be densely populated.

Cuba could work with Telesat to leapfrog over several infrastructure generations. If Telesat can deliver on their claims, the barriers would be political and bureaucratic, not technical. Cuba is about to change leadership, and there is some indication that Miguel Díaz-Canel, whom many expect to replace Raúl Castro, will favor Internet development.

SpaceX could also provide early Cuban connectivity, but dealing with a US company would be politically problematic, and Cuba and Canada have a well-established political and economic relationship. Even if Cuba were willing to work with SpaceX, the current US administration would not allow it. Connecting Cuba would be good for Cubans and good publicity for Telesat.

For more on Telesat and their plans for LEO satellite Internet service, see their patent application; animations of their proposed hybrid-constellation connectivity are available here and here.

Written by Larry Press, Professor of Information Systems at California State University

More under: Access Providers, Broadband, Wireless

Google Now a Target for Regulation

Mon, 2017-11-13 19:35

Headline in the Washington Post: "Tech companies pushed for net neutrality. Now Sen. Al Franken wants to turn it on them." 9 Nov 2017

The time was — way back around the turn of the century — when all Internet companies believed that the Internet should be free from government regulation. I lobbied along with Google and Amazon to that end (there were no Twitter and Facebook then); we were successful over the objection of traditional telcos who wanted the protection of regulation. The Federal Communications Commission (FCC) under both Democrats and Republicans agreed to forbear from regulating the Internet the way they regulate the telephone network; the Internet flourished, to put it mildly.

Fast forward to 2015. Google and other Internet giants and their trade group, the Internet Association, were successful in convincing the Obama FCC to reverse that policy and regulate Internet Service Providers (ISPs) under the same regulation that helped stifle innovation in telephony for decades. The intent, according to the Internet Association, was to protect Net Neutrality (a very good name) and assure that ISPs didn't censor or prefer their own content over the content of others — Google's, for example. The regulation was acknowledged to be preemptive: ISPs weren't discriminating, but they might.

This spring Trump's FCC Chair, Ajit Pai, announced the beginning of an effort to repeal the 2015 regulations and return the Internet to its former lightly regulated state. The Internet Association and its allies mounted a massive online campaign against deregulation in order, they said, to protect Net Neutrality. One of their allies was the Open Market Initiative, which was then part of The New America Foundation. More about them below.

I blogged to Google:

"You run a fantastically successful business. You deliver search results so valuable that we willingly trade the history of our search requests for free access. Your private network of data centers, content caches and Internet connections assure that Google data pops quickly off our screen. Your free Chrome browser, Android operating system, and gmail see our communication before it gets to the Internet and gets a last look at what comes back from the Internet before passing it on to us. You make billions by monetizing this information with at least our implied consent. I mean all this as genuine praise.

"But I think you've made a mistake by inviting the regulatory genie on to the Internet. Have you considered that Google is likely to be the next regulatory target?"

It didn't take long.

In August the European Union imposed a penalty on Google. Barry Lynn of the Open Market Initiative posted praise for the EU decision on the New America website. According to the NY Times:

"The New America Foundation has received more than $21 million from Google; its parent company's executive chairman, Eric Schmidt; and his family's foundation since the think tank's founding in 1999. That money helped to establish New America as an elite voice in policy debates on the American left and helped Google shape those debates…

"Hours after this article was published online Wednesday morning, Ms. Slaughter announced that the think tank had fired Mr. Lynn on Wednesday for 'his repeated refusal to adhere to New America's standards of openness and institutional collegiality.'"

Mr. Lynn and his colleagues immediately founded The Open Market Institute. The front page of their website says:

"Amazon, Google and other online super-monopolists, armed with massive dossiers of data on every American, are tightening their grip on the most vital arteries of commerce, and their control over the media we use to share news and information with one another."

Sen. Al Franken and the Open Market Institute held an event which led to the WaPo headline and the article which begins:

"For years, tech companies have insisted that they're different from everything else. Take Facebook, which has long claimed that it's a simple tech platform, not a media entity. 'Don't be evil,' Google once said to its employees, as though it were setting itself apart from the world's other massive corporations.

"But now, some policymakers are increasingly insisting that firms such as Google, Facebook and Twitter really aren't that special after all — and that perhaps it's time they were held to the same standard that many Americans expect of electricity companies or Internet providers.

"Sen. Al Franken (D-Minn.) became the latest and most vocal of these critics Wednesday when, at a Washington conference, he called for tech companies to follow the same net neutrality principles that the federal government has applied to broadband companies such as Verizon, AT&T and Comcast."

I'm not happy to have been right; on the contrary, I'm appalled. The last thing we should want is the government regulating Internet content, especially at a time when both the political right and the political left are anti-free speech. But there is no principled argument that Google's potential competitors, the ISPs, should be constrained by regulatory oversight while Google, much bigger than any of these competitors and much more dominant worldwide, can exert its dominance freely. Google truly opened a Pandora's box and let out a regulatory genie.

As much as I am against regulatory oversight of content, I do believe that the government has a very proper role both in antitrust and in truth in advertising. These are some of the tools which do need to be used to keep new or old oligarchs from ruling the world.

Written by Tom Evslin

More under: Access Providers, Net Neutrality, Policy & Regulation, Telecom

Court Finds Anti-Malware Provider Immune Under CDA for Calling Competitor's Product Security Threat

Mon, 2017-11-13 18:26

Plaintiff anti-malware software provider sued defendant — who also provides software that protects internet users from malware, adware, etc. — bringing claims for false advertising under Section 43(a) of the Lanham Act, as well as other business torts [Enigma Software Group v. Malwarebytes Inc., 2017 WL 5153698 (N.D. Cal., November 7, 2017)]. Plaintiff claimed that defendant wrongfully revised its software's criteria to identify plaintiff's software as a security threat when, according to plaintiff, its software is "legitimate" and posed no threat to users' computers.

Defendant moved to dismiss the complaint for failure to state a claim upon which relief may be granted. It argued that the provisions of the Communications Decency Act at Section 230(c)(2) immunized it from plaintiff's claims.

Section 230(c)(2) reads as follows:

No provider or user of an interactive computer service shall be held liable on account of —

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in [paragraph (A)].

Specifically, defendant argued that the provision of its software using the criteria it selected was an action taken to make available to others the technical means to restrict access to malware, which is objectionable material.

The court agreed with defendant's argument that the facts of this case were "indistinguishable" from those in the Ninth Circuit's opinion in Zango, Inc. v. Kaspersky, 568 F.3d 1169 (9th Cir. 2009), in which the court found that Section 230 immunity applied in the anti-malware context.

Here, plaintiff had argued that immunity should not apply because malware is not within the scope of "objectionable" material that one may permissibly seek to filter in accordance with 230(c)(2)(B). Under plaintiff's theory, malware is "not remotely related to the content categories enumerated" in Section 230(c)(2)(A), to which subsection (B) refers. In other words, the objectionableness of malware is of a different nature than the objectionableness of material that is obscene, lewd, lascivious, filthy, excessively violent or harassing. The court rejected this argument on the basis that the determination of whether something is objectionable is left to the provider's discretion. Since defendant found plaintiff's software "objectionable" in accordance with its own judgment, the software qualifies as "objectionable" under the statute.

Plaintiff also argued that immunity should not apply because defendant's actions taken to warn of plaintiff's software were not taken in good faith. But the court applied the plain meaning of the statute to reject this argument — the good faith requirement only applies to conduct under Section 230(c)(2)(A), not (c)(2)(B).

Finally, plaintiff had argued that immunity should not apply to its Lanham Act claim because of Section 230(e)(2), which provides that "nothing in [Section 230] shall be construed to limit or expand any law pertaining to intellectual property." The court rejected this argument: although the claim was brought under the Lanham Act, which includes provisions concerning trademark infringement (clearly an intellectual property matter), the claim here was one for unfair competition, which is not considered an intellectual property claim.

Written by Evan D. Brown, Attorney

More under: Law, Malware

Weaponizing the Internet Using the "End-to-end Principle" Myth

Sun, 2017-11-12 22:39

At the outset of the Internet Engineering Task Force (IETF) 100th meeting, a decidedly non-technical initial "Guide for human rights protocol considerations" was published. Although the IETF has always remained true to its DARPA origins as a tool for developing disruptive new technical ideas, it launches into bizarre territory when dealing with non-technical matters. The rather self-referential draft Guide proposes 19 different "guidelines" based on the work, over the past two years, of a small group of people known as the Human Rights Protocol Considerations Research Group (HRPC). The preponderance of the work and postings were those of the chair, and two-thirds of all the posts came from only five people. Whatever one might think about the initiative, it is a well-intentioned attempt by activists in several human rights arenas to articulate their interests and needs based on their conceptualisation of "the internet."

At the outset of the guidelines is a clause dubbed "connectivity" that consists of an invocation of the internet "end-to-end principle." Connectivity is explained as

the end-to-end principle [which] [Saltzer] holds that 'the intelligence is end to end rather than hidden in the network' [RFC1958]. The end-to-end principle is important for the robustness of the network and innovation. Such robustness of the network is crucial to enabling human rights like freedom of expression. [Amusingly, the first citation is not freely available and requires $15 to view]

There are several ironies here. The Saltzer article was written in 1984, shortly after DARPA had adopted TCP and IP for use on its own highly controlled packet networks. RFC1958 was written in 1996, shortly after the DARPA Internet became widely used for NREN (National Research and Educational Network) purposes and was still largely controlled by U.S. government agencies for deployments in the U.S. and its international scientific research partners. Already, the DARPA Director who had originally authorized DARPA internet development in the 1970s had become significantly concerned about its becoming part of a public infrastructure and being weaponized. The concern was turned into action as CRISP (Consortium for Research on Information Security and Policy) at Stanford University. The CRISP team described in considerable detail how the DARPA internet in a global public environment was certain to be used to orchestrate all manner of network-based attacks by State and non-State actors on public infrastructures, end-users, and trust systems.

Twenty years later, it is incredible that decades-old technical papers prepared for closed or tightly managed U.S. government networks are being cited as global public connectivity mantras for human rights purposes — after the profoundly adverse exploits CRISP predicted have been massively manifested. Never mind that the notion is also founded on a kind of chaotic utopian dream in which running code somehow provides unfettered communication and information capabilities for every human and object on the planet, rather than business, legal, and economic systems.

To the extent that global internetworking capabilities have actually come into existence, they have been brought about first and foremost by commercial mobile providers and vendors using their own internet protocols, combined with the telecommunication, commercial internet, and cable providers and vendors worldwide.

The "end-to-end principle" which has never really existed except as some kind of alt-truth political slogan, is plainly a recipe for disaster on multiple levels. It is disastrous because the complexities and vulnerabilities of our networking infrastructure today results in a highly asymmetric threat environment. Those possessing the massive resources and incentives to pursue those threats and "innovate," will always far exceed the ability of individual end-users to protect themselves — whether it is the Federal Security Service of the Russian Federation or a neo-Nazi organization bringing about regime change in the West, or criminal organizations engaging in widespread cybercrime, or an ISIS trolling for recruits, or a malicious hacker dispersing malware.

To the credit of the Guide authors, they do recognize that "Middleboxes ... serve many legitimate purpose[s]." However, what the human rights activists get wrong is that there is no end-to-end free ride. There are shared ownerships, service and regulatory obligations, and other fundamentally important requirements along all the transport facilities and cloud data centres that comprise the entire end-to-end path. It is also the "node intelligence" in those paths that is going to protect end-users from attacks and exploitations — and that is a human right as well.

So, if the activists really want to help end-users, they need to support the widespread industry efforts now underway across multiple bodies to manage these challenges. Promulgating myths about end-to-end connectivity simply furthers the internet weaponization that defeats their own altruistic human rights objectives.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC

More under: Internet Governance

Data on Cuba's SNET and a Few Suggestions for ETECSA

Fri, 2017-11-10 22:42

I've written several posts on Cuba's user-deployed street networks, the largest of which is SNET in Havana. [1] (SNET was originally built by the gaming community, but the range of services has grown substantially.) My posts and journalists' accounts like this one describe SNET, but a new paper presents SNET measurement data as well as descriptive material.

The abstract of the paper sums it up:

Working in collaboration with SNET operators, we describe the network's infrastructure and map its topology, and we measure bandwidth, available services, usage patterns, and user demographics. Qualitatively, we attempt to answer why the SNET exists and what benefits it has afforded its users. We go on to discuss technical challenges the network faces, including scalability, security, and organizational issues.

You should read the paper — it's interesting and well-written — but I can summarize a few points that caught my attention.

* * *

The Street Network in Havana – Community-created map showing the service areas of several SNET pillars spanning metro Havana. Source

SNET is a decentralized network composed of local nodes, each serving up to 200 users in a neighborhood. Users connect to local nodes using Ethernet cables strung over rooftops or via WiFi. The local nodes connect to regional "pillars," and the pillars peer with each other over fixed wireless links. The node and pillar administrators form a decentralized organization, setting policy, supporting users and keeping their servers running and online as best they can. (This reminds me of my school's first Web server — a Windows 3 PC on my desk that crashed frequently.)

SNET organization Source

The average utilized bandwidth between two pillars during a 24-hour period was 120 Mb/s out of a maximum throughput of 250 Mb/s, and the authors concluded that throughput is generally constrained by the available bandwidth of the WiFi links between pillars. As such, faster inter-pillar links and/or new pillars would improve performance. Faster links from local nodes to pillars, new node servers, etc. would also add capacity and availability, but that hardware costs money. The Cuban government would probably see the provision of outside funds as subversive, but what would be the impact of, say, a $100,000 equipment grant from ETECSA to SNET?

The paper drills down on the network topology, discusses applications and presents usage and performance statistics. Forums are one of the applications and one of the forums is Netlab, a technical community of over 6,000 registered members who have made over 81,000 posts. They focus on open-source development and have written a SNET search engine and technical guides on topics like Android device repair. The export of Cuban content and technology has been a long-standing focus of this blog, and it would be cool to see Netlab available to others on the open Internet.

Netlab growth – Registration dates of Netlab users since its creation showing accelerated growth over the past year Source

The authors of the paper say that, as far as they know, "SNET is the largest isolated community-driven network in existence" (my italics). While it may be the largest isolated community network, there are larger Internet-connected community networks, and that is a shame. I hope Cuba plans to "leapfrog" to next-generation technology and policy while implementing stopgap measures like WiFi hotspots, 3G mobile and DSL. If SNET and other community networks were legitimized, supported and linked to the Internet (or even the Cuban intranet), they would be useful stopgap technology. ETECSA could also use the skills of the street net builders.

I don't expect ETECSA to take my advice, but if working with SNET is too big a step, they might test community collaboration by working with the developers of a smaller street net, like the one in Gaspar, or by involving communities in networking some schools, experimenting with community-installed backhaul or deploying interim satellite connectivity.

(You can find links to the paper, Initial Measurements of the Cuban Street Network, presentation slides and abstract here).

[1] See the collection of several posts on SNET here.

Written by Larry Press, Professor of Information Systems at California State University

More under: Access Providers, Internet Governance, Networks

How a DNS Proxy Service Can Reduce the Risk of VoIP Service Downtime

Fri, 2017-11-10 21:13

Consumers are embracing VoIP services now more than ever as they get used to calling over Internet application services such as Skype, FaceTime, and Google Hangouts. Market Research Store predicts that the global VoIP services market will exceed USD 140 billion in 2021, representing a compound annual growth rate above 9.1% between 2016 and 2021.

For cable MSOs deploying voice services, the ability to implement and manage Dynamic DNS (DDNS) is essential. However, DDNS updates pose significant challenges for large Tier 1 and 2 operators due to the difficulty of synchronizing DNS and DHCP servers across large "zones" or domains. When DNS servers become too difficult to manage, the result is often unreliable or even unavailable VoIP service. When resynchronization between DNS and DHCP servers is needed, service downtime can take up to an hour to resolve. Customers typically have high quality-of-experience expectations for voice services, so unwanted downtime increases the risk of a negative experience and, potentially, customer churn.

The VoIP market shows no signs of slowing its growth, so how are today's operators going to manage the increasing complexity of synchronizing DNS servers and DHCP servers?

One emerging solution is to eliminate the need for DDNS altogether and instead deploy a DNS proxy service. Such a proxy forwards DNS requests directly to the DHCP server, significantly simplifying the management of large DNS zones and reducing the risk of VoIP service downtime. Because the DHCP server already knows the relationship between the IP address and the Fully Qualified Domain Name (FQDN), being the authority on IP-FQDN mapping, a DNS proxy service can request the mapping directly from the DHCP server without dynamic DNS updates and the headache of managing large DNS zones.
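
To make the idea concrete, here is a toy sketch of such a proxy. This is not Incognito's implementation: it assumes the third-party dnslib package, stubs the DHCP lease lookup with a dictionary, and the host names and addresses are invented for illustration.

    # Toy DNS proxy answering A queries from a (stubbed) DHCP lease table,
    # avoiding dynamic DNS updates entirely. Assumes the dnslib package.
    import socket
    from dnslib import A, DNSRecord, QTYPE, RCODE, RR

    def lookup_lease(fqdn: str):
        """Stand-in for a query to the DHCP server, the authority on IP-FQDN mapping."""
        leases = {"mta1.subscribers.example.net.": "10.20.30.40"}  # hypothetical
        return leases.get(fqdn)

    def serve(host="0.0.0.0", port=5353):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((host, port))
        while True:
            data, addr = sock.recvfrom(512)
            request = DNSRecord.parse(data)
            reply = request.reply()
            qname = str(request.q.qname)
            ip = lookup_lease(qname)
            if ip and request.q.qtype == QTYPE.A:
                reply.add_answer(RR(qname, QTYPE.A, rdata=A(ip), ttl=60))
            else:
                reply.header.rcode = RCODE.NXDOMAIN
            sock.sendto(reply.pack(), addr)

    if __name__ == "__main__":
        serve()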

In many cases, this simple solution can be integrated seamlessly with existing or updated network topologies, making the most of an operator's existing device provisioning investments. As a result, DNS synchronization is no longer a concern since the DHCP server is where the IP-to-FQDN assignment originates. This means increased reliability of the DNS solution, less chance of subscriber service downtime, and by association, reduced risk of customer churn.

Learn more about providing higher availability of mission-critical services such as VoIP by reading the Incognito DNS Proxy Service fact sheet.

Written by Pat Kinnerk, Senior Product Manager at Incognito Software Systems

More under: Access Providers, DNS, VoIP

Internet Hall of Fame Inductees Gather at GNTC to Discuss New Generation of Internet Infrastructure

Fri, 2017-11-10 18:37

Confronted with the rapid development of the Internet, traditional networks face severe challenges, and it is imperative to accelerate the construction of global network infrastructure and build a new generation of Internet infrastructure suited to the Internet of Everything and the intelligent society. From November 28 to 30, 2017, "GNTC 2017 – Global Network Technology Conference," organized by BII Group and CFIEC, will open in Beijing. The "Global Internet Infrastructure Forum," the conference's highest-profile session, will gather several Internet Hall of Fame inductees and a number of authoritative experts in the field to discuss technology changes in Internet infrastructure and its development and challenges.

Since data was first transmitted between two computers in 1969, the network has moved in a few decades from the military domain to scientific research and civilian use, ushering in the Internet era. As "Internet+" and the industrial Internet deepen, the trend of the Internet subverting all walks of life and becoming a common infrastructure has become increasingly clear. In this context, the existing architecture has exposed more and more problems in scalability, security, controllability and manageability due to its complex design, inadequate openness and low efficiency. The industry has been patching it for decades, but a thorough upgrade of Internet infrastructure is the fundamental way forward for long-term development.

In the "Global Internet Infrastructure Forum" of GNTC Conference, father of the Internet and Internet Hall of Fame inductee Vint Cerf, Father of Korean Internet and Internet Hall of Fame inductee Kilnam Chon, the inventor of DNS Internet Hall of Fame inductee Paul Mockapetris, Internet Hall of Fame inductee Paul Vixie, APNIC's Director General Paul Wilson and other global Internet authoritative experts will gather together in Beijing. Meanwhile, presidents of organizations and institutions, senior management of Internet companies and global operator representatives will also be invited to attend the conference, focusing on technological change, infrastructure development, root server, new opportunities and challenges and other directions, and exploring that how will Internet infrastructure fully upgrade to adapt to the new world of Internet of Everything in the rapid application of IPv6, SDN and other network technology.

As the largest network technology event in China, the conference expects more than 2,000 attendees. It will feature two main sessions, one roundtable forum, eight technical summits (SDN, NFV, IPv6, 5G, NB-IoT, Network Security, Cloud and Data Center, Edge Computing) and a number of workshops (P4, the Third Network, CORD, ONAP, etc.). By providing a platform for the parties to communicate and exchange ideas, it is dedicated to promoting win-win cooperation and the process of network reconstruction.

Written by Xudong Zhang, Vice President of BII Group

More under: Cloud Computing, Cybersecurity, Data Center, Internet Governance, Internet of Things, Internet Protocol, IP Addressing, IPv6, Networks

Apple (Not Surprisingly) is Not a Cybersquatter

Thu, 2017-11-09 18:50

It's highly unusual for a well-known trademark owner to be accused of cybersquatting, but that's what happened when a Mexican milk producer filed a complaint against Apple Inc. under the Uniform Domain Name Dispute Resolution Policy (UDRP) in an attempt to get the domain name <lala.com>.

Not only did Apple win the case, but the panel issued a finding of "reverse domain name hijacking" (RDNH) against the company that filed the complaint.

The 'LA LA' Story

According to the UDRP decision, Apple obtained the domain name <lala.com> in 2009 when it purchased the online music-streaming company La La Media, Inc. The domain name had been registered in 1996 and was acquired in 2005 by La La Media, which used it in connection with its online music service between 2006 and 2009.

Although Apple stopped operating the La La Music service in 2010, and the corresponding LA LA trademarks were canceled in 2015 and 2017, Apple said that it continues to use the domain name <lala.com> in connection with "residual email services."

Apparently seizing on the cancelled LA LA trademarks, Comercializadora de Lacteos y Derivados filed a UDRP complaint against Apple for the domain name, arguing that it "claims to have used LALA as a trademark before the registration of the Disputed Domain Name, since as early as 1987" — long before Apple acquired <lala.com>.

The complainant further argued that Apple "registered and used the Disputed Domain Name with the bad faith intent to defraud the Complainant's customers" and that "Respondent's passive holding of the Disputed Domain Name constitutes sufficient evidence of its bad faith use and registration."

Apple's 'LA LA' Rights

The UDRP panel rejected these arguments, as well as those related to the UDRP's "rights or legitimate interests" requirement, finding that the complainant had "put these assertions forward without any supporting argumentation or evidence."

Importantly, the panel wrote:

The Panel is of the opinion that, between June 2006 and May 2010, Respondent and its predecessor-in-interest made legitimate use of the Disputed Domain Name to offer bona fide services under its own LA LA mark. These services are unrelated to the Complainant and its LALA mark.

The Panel also wrote:

[T]he fact that the Respondent chose to cease active use of the Disputed Domain Name does not demonstrate in itself that the Respondent has no rights or legitimate interests in the Disputed Domain Name. It is common practice for trademark holders to maintain the registration of a domain name, even if the corresponding trademark was abandoned, e.g., following a rebranding exercise. Apart from the goodwill that might be associated to the trademark, the domain name in question may have intrinsic value. In the case at hand, the Panel notes that the term "la-la" is often used as a nonsense refrain in songs or as a reference to babbling speech, and that there are many concurrent uses of the "LALA" sign as a brand. In such circumstances, a domain name holder has a legitimate interest to maintain the registration of a potentially valuable domain name.

(Interestingly, the panel said nothing about "La La Land," the 2016 movie that won six Academy Awards — and which uses the domain name <lalaland.movie>.)

After its conclusion in favor of Apple, allowing the computer company to keep the domain name, the panel found that the "Complainant was, or should have been aware, of [Apple]'s bona fide acquisition and use of the Disputed Domain Name" and that it "must have been aware, before filing the Complaint, that the Disputed Domain Name has never be[en] used to target the Complainant or trade on its goodwill."

As a result of this finding, the panel said that the Complainant had engaged in RDNH, which is reserved for situations in which a complaint was brought in bad faith and constitutes an abuse of the UDRP process.

Lessons from 'LA LA'

The <lala.com> case is interesting for many reasons, including the panel's findings about the impact of expired trademarks and the multiple uses for some trademarks.

But the case is probably most interesting simply because it was filed against Apple — a 40-year-old company that is ranked No. 1 on Interbrand's list of "best global brands" and has quarterly revenue of $52.6 billion. Companies of this sophistication and stature typically aren't sloppy enough to own problematic domain names, and anyone who files a UDRP complaint against a company of this size should expect a rigorous legal fight.

Plus, not surprisingly, companies like Apple are typically filing (not defending) domain name disputes. Apple has filed at least 37 UDRP complaints through the years, but the <lala.com> case appears to be the first time it has had to defend itself against a claim of cybersquatting.

This case holds a lesson not only for companies considering filing a domain name complaint against a large and well-known trademark owner (be prepared for an uphill battle), but also for the trademark owners themselves: No one is immune from having a domain name dispute filed against it, so be ready to file a quick and effective response.

Written by Doug Isenberg, Attorney & Founder of The GigaLaw Firm

More under: Domain Management, Domain Names, Intellectual Property, Law

Qatar Crisis Started With a Hack, Now Political Tsunami in Saudi Arabia - How Will You Be Impacted?

Thu, 2017-11-09 18:35

The world has officially entered what the MLi Group labels the "New Era of the Unprecedented." In this new era, traditional cyber security strategies are failing on a daily basis; political and terrorist destruction-motivated cyber attacks are on the rise, threatening "Survivability"; and local political events unfold overnight to impact the world forever. Decision makers know they cannot continue doing the same old stuff, but don't know what to do next or differently that would be effective.

Deloitte and Equifax are giants that discovered the hard way they were not immune. The Qatar crisis, with damage in the billions of dollars, was triggered by a cyber-attack that a Washington Post report claims was perpetrated by Qatar's neighbor, the UAE. Now comes the Saudi Tsunami, with ramifications that will impact stakeholders worldwide. If you thought these events in far-away lands don't impact you and your businesses, then I suggest you take your head out of the sand, and fast.

Local geopolitical events are sending shockwaves globally. To learn what to be on the lookout for, and how you can mitigate these risks, watch the MLi Group's "Era of the Unprecedented" video on the Saudi Tsunami by clicking here.

Written by Khaled Fattal, Group Chairman, The Multilingual Internet Group

Follow CircleID on Twitter

More under: Censorship, Cyberattack, Cybersecurity, Data Center, Internet Governance

Categories: News and Updates

Why Aren't We Fixing Route Leaks?

Wed, 2017-11-08 22:55

In case you missed it (you probably didn't), the Internet was hit with the Monday blues this week. As operator-focused lists and blogs identified,

"At 17:47:05 UTC yesterday (6 November 2017), Level 3 (AS3356) began globally announcing thousands of BGP routes that had been learned from customers and peers and that were intended to stay internal to Level 3. By doing so, internet traffic to large eyeball networks like Comcast and Bell Canada, as well as major content providers like Netflix, was mistakenly sent through Level 3's misconfigured routers."

In networking lingo, a "route leak" had occurred, and a substantial one at that. Specifically, the Internet was the victim of a Type 6 route leak, where:

"An offending AS simply leaks its internal prefixes to one or more of its transit-provider ASes and/or ISP peers. The leaked internal prefixes are often more-specific prefixes subsumed by an already announced, less-specific prefix. The more-specific prefixes were not intended to be routed in External BGP (eBGP). Further, the AS receiving those leaks fails to filter them. Typically, these leaked announcements are due to some transient failures within the AS; they are short-lived and typically withdrawn quickly following the announcements. However, these more-specific prefixes may momentarily cause the routes to be preferred over other aggregate (i.e., less specific) route announcements, thus redirecting traffic from its normal best path."

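The mechanics are easy to sketch. Below is a minimal Python illustration, with made-up prefixes, of the check a receiving AS fails to perform in a Type 6 leak: spotting announced prefixes that are more-specifics subsumed by an aggregate the peer already announces.

```python
import ipaddress

def find_more_specific_leaks(announcements, expected_aggregates):
    """Flag announced prefixes that are more-specifics subsumed by an
    expected, less-specific aggregate (the Type 6 pattern)."""
    aggregates = [ipaddress.ip_network(p) for p in expected_aggregates]
    leaks = []
    for prefix in announcements:
        net = ipaddress.ip_network(prefix)
        for agg in aggregates:
            # A more-specific is covered by the aggregate but not equal to it
            if net.version == agg.version and net != agg and net.subnet_of(agg):
                leaks.append(prefix)
                break
    return leaks

# The /24 subsumed by the expected /9 is flagged; the /9 itself is not
print(find_more_specific_leaks(["8.0.128.0/24", "8.0.0.0/9"], ["8.0.0.0/9"]))
# -> ['8.0.128.0/24']
```

Because BGP prefers the most specific matching prefix, even short-lived leaks like these pull traffic away from its normal best path until they are withdrawn.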
In this case, the painful result was significant Internet congestion for millions of users in different parts of the world for about 90 minutes. One of the main culprits apparently fessed up, with CenturyLink/Level 3 quickly issuing an explanation for the outage (I pity "that guy"; being a network engineer at the world's largest ISP ain't easy).

Can't we fix this?

Route leaks are a fact of life on the Internet. According to one ISP's observations, on any given day of the week, between 10% and 20% of announcements are actually leaks. Type 6 route leaks can be alleviated in part by technical and/or operational measures. For internal prefixes never meant to be routed on the Internet, one suggestion is to use origin validation to filter leaks, but this requires adoption of RPKI and only deals with two specific types of leak.
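For a sense of what origin validation actually decides, here is a rough Python sketch of the standard valid/invalid/not-found outcomes. The ROA data and ASNs are invented for the example, and real deployments perform this check in the router or a dedicated RPKI validator, not in application code.

```python
from dataclasses import dataclass
import ipaddress

@dataclass
class Roa:
    prefix: str      # e.g., "192.0.2.0/24"
    max_length: int  # longest prefix length the ROA authorizes
    origin_asn: int  # AS authorized to originate the prefix

def validate_origin(announced_prefix, origin_asn, roas):
    """Return 'valid', 'invalid', or 'not-found' for an announcement."""
    net = ipaddress.ip_network(announced_prefix)
    covered = False
    for roa in roas:
        roa_net = ipaddress.ip_network(roa.prefix)
        if net.version != roa_net.version:
            continue
        if net == roa_net or net.subnet_of(roa_net):
            covered = True  # at least one ROA covers this prefix
            if origin_asn == roa.origin_asn and net.prefixlen <= roa.max_length:
                return "valid"
    return "invalid" if covered else "not-found"

roas = [Roa("192.0.2.0/24", 24, 64500)]
print(validate_origin("192.0.2.0/25", 64500, roas))    # invalid: exceeds maxLength
print(validate_origin("192.0.2.0/24", 64501, roas))    # invalid: wrong origin AS
print(validate_origin("198.51.100.0/24", 64500, roas)) # not-found: no covering ROA
```

A leaked more-specific of a signed prefix shows up as "invalid" because it exceeds the ROA's maxLength, which is exactly the filtering lever the suggestion above relies on.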

Source: Job Snijders, "Everyday practical BGP filtering," presented at NANOG 67

From a contractual and operational perspective, Level 3's customers and others affected are presumably closely scrutinizing their SLAs. Maybe this episode will incentivize Level 3 to take some corrective action, like setting a fail-safe maximum announcement limit on its routers to catch potential errors. Perhaps Level 3's peering partners are similarly considering reconfiguring their routers not to blindly accept thousands of additional routes, although the frequency or other characteristics of changes in routing announcements might make this infeasible.
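As a rough illustration of that fail-safe, the sketch below models the maximum-prefix limit most router platforms offer, where a session that suddenly offers far more routes than expected is torn down rather than trusted; the thresholds and prefixes are invented for the example.

```python
class MaxPrefixGuard:
    """Toy model of a BGP maximum-prefix limit: warn near the threshold,
    drop the session once it is exceeded (real routers reset the BGP
    session and, optionally, restart it after a hold-down timer)."""

    def __init__(self, limit, warn_ratio=0.9):
        self.limit = limit
        self.warn_ratio = warn_ratio
        self.count = 0
        self.session_up = True

    def on_announcement(self, prefix):
        if not self.session_up:
            return "session-down"
        self.count += 1
        if self.count > self.limit:
            self.session_up = False  # fail safe: stop believing this peer
            return "limit-exceeded: session reset"
        if self.count >= self.limit * self.warn_ratio:
            return "warning: approaching limit"
        return "accepted"

# A peer expected to send roughly 100 routes suddenly sends thousands
guard = MaxPrefixGuard(limit=100)
statuses = [guard.on_announcement(f"10.0.{i % 256}.0/24") for i in range(3000)]
print(statuses[99], "|", statuses[100], "|", statuses[101])
# -> warning: approaching limit | limit-exceeded: session reset | session-down
```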

Another potential solution requiring broader collective action is NTT's peer locking, where NTT prevents leaked announcements from propagating further by filtering on behalf of other ISPs with which it has an agreement. It's a mutually beneficial approach. Much of the routing chaos could have been prevented if peer locking arrangements had been in place between NTT (or other large backbone ISPs peering with Level 3) and any of the impacted ASes (e.g., Comcast had ~20 impacted ASes). NTT has apparently had some success with the approach, having arrangements with many of the world's largest carriers of Internet traffic; in one case where it deployed peer locking, the number of route leaks decreased by an order of magnitude. Moreover, the approach is apparently being replicated by other large carriers.
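Although the deployment of peer locking is contractual, the filtering logic itself is straightforward: a route whose AS path contains a protected network's ASN is accepted only when it arrives from that network or from one of its authorized upstreams. A hypothetical sketch follows; the ASNs below are placeholders, not NTT's actual configuration.

```python
# Map each protected ASN to the neighbor ASNs allowed to send its routes
PEER_LOCKS = {
    64500: {64500, 64510},  # protected network and its authorized upstream
}

def peer_lock_accept(neighbor_asn, as_path):
    """Reject a route whose AS path contains a locked ASN unless it was
    learned from that ASN or one of its authorized upstreams."""
    for protected, allowed in PEER_LOCKS.items():
        if protected in as_path and neighbor_asn not in allowed:
            return False  # a leak: the route took an unauthorized detour
    return True

# A route for protected AS 64500 arriving via an unrelated peer is dropped...
print(peer_lock_accept(64999, [64999, 3356, 64500]))  # False
# ...but the same route learned from the authorized upstream is accepted
print(peer_lock_accept(64510, [64510, 64500]))        # True
```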

Regardless of the solution(s) implemented, the complexity of the problem space highlights the ongoing importance of understanding routing data governance and operators' incentives to engage in filtering. We also need to be able to assess empirically, over time, whether specific approaches correspond to observed changes in different types of route leaks.

Originally published in the Internet Governance Project blog.

Written by Brenden Kuerbis, Internet Governance Researcher & Policy Analyst at Georgia Tech

Follow CircleID on Twitter

More under: Access Providers, Networks

Categories: News and Updates

Poland to Test a Cybersecurity Program for Aviation Sector

Wed, 2017-11-08 21:08

During the two-day Cybersecurity in Civil Aviation conference, Poland announced an agreement to test a cybersecurity pilot program for the aviation sector, as the European Aviation Safety Agency (EASA), Europe's civil aviation authority, faces increasing threats posed by hackers to air traffic. "We want to have a single point in the air transport sector that will coordinate all cybersecurity activities… for airlines, airports, and air traffic," said Piotr Samson, head of Poland's ULC civil aviation authority. "Despite the assurances of experts in the field, computer systems failures triggered by hackers or accident have caused flight chaos in recent years. Poland's flagship carrier LOT was briefly forced to suspend operations in June 2015 after a hack attack." See full report.

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity

Categories: News and Updates

Former Yahoo CEO Marissa Mayer Apologizes for Data Breach, Blames Russian Agents

Wed, 2017-11-08 18:52

Former Yahoo CEO Marissa Mayer apologized today at the Senate Commerce, Science and Transportation hearing regarding massive data breaches at the internet company, blaming Russian agents. David Shepardson reporting in Reuters (http://www.reuters.com/article/us-usa-databreaches/former-yahoo-ceo-apologizes-for-data-breach-blames-russians-idUSKBN1D825V): "Verizon [which] acquired most of Yahoo Inc's assets in June ... disclosed last month that a 2013 Yahoo data breach affected all 3 billion of its accounts, compared with an estimate of more than 1 billion disclosed in December. In March, federal prosecutors charged two Russian intelligence agents and two hackers with masterminding a 2014 theft of 500 million Yahoo accounts, the first time the U.S. government has criminally charged Russian spies for cyber crimes."

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity

Categories: News and Updates
