Introduction to Auditing the Use of AWS
Security at AWS is job zero. All AWS customers benefit from a data center and network architecture built to satisfy the needs of the most security-sensitive organizations. To satisfy those needs, AWS compliance programs enable customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud.

As systems are built on top of AWS cloud infrastructure, compliance responsibilities are shared. By tying together governance-focused, audit-friendly service features with applicable compliance or audit standards, AWS compliance enablers build on traditional programs, helping customers establish and operate in an AWS security control environment.

AWS manages the underlying infrastructure; you manage the security of anything you deploy on it. As a modern platform, AWS lets you formalize the design of security and audit controls through reliable, automated, and verifiable technical and operational processes built into every AWS customer account. The cloud simplifies system use for administrators and those running IT, and it makes your AWS environment much simpler to audit, because AWS can shift audits toward 100% verification rather than traditional sample testing.

Additionally, AWS’ purpose-built tools can be tailored to customer requirements, scale, and audit objectives, and they support real-time verification and reporting through services such as AWS CloudTrail, AWS Config, and Amazon CloudWatch. These tools are built to help you maximize the protection of your services, data, and applications. AWS customers can therefore spend less time on routine security and audit tasks and more time on proactive measures that continue to enhance the security and audit capabilities of their environment.
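As a concrete illustration of the real-time verification these services enable, here is a minimal sketch that pulls a week of console sign-in events from CloudTrail as audit evidence. It assumes boto3 is installed and AWS credentials are already configured; the ConsoleLogin filter is just one example of an audit query.

# Minimal audit sketch: enumerate the past week's console sign-ins
# recorded by CloudTrail. Assumes boto3 + configured AWS credentials.
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

# lookup_events pages through recorded management events; filtering on
# the ConsoleLogin event name yields every sign-in in the audit window.
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "ConsoleLogin"}],
    StartTime=start,
    EndTime=end,
):
    for event in page["Events"]:
        print(event["EventTime"], event.get("Username", "<unknown>"))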
aws  security  compliance  audit 
july 2017 by wck
HolisticInfoSec: The DFIR Hierarchy of Needs & Critical Security Controls
An Incident Response Hierarchy of Needs, through which your DFIR methods should move
dfir  Security  cybersecurity  controls 
january 2017 by wck
This startup offers a vendor-risk management solution
Businesses are depending more and more on third parties, but managing a complex web of vendors can create massive risk. In the privacy world, even vendors that don’t typically have access to an organization’s network can wreak havoc. Who would have thought that an HVAC company would have been part of one of the most well-known data breaches in recent years?

On top of increased risk, companies also face civil and regulatory liability with their vendors. Plus, with the obligations in the upcoming General Data Protection Regulation, companies could face fines of as much as four percent of their annual global turnover. We’re not just talking bad press here: an insecure vendor can do real damage to a company’s bottom line.

One such startup, SecurityScorecard, recently received a whopping $20 million in series B funding from Google Ventures. In a sense, it is what it sounds like: The company analyzes and rates vendors’ security posture based on a range of security criteria. SecurityScorecard then provides that easy-to-digest information to its clients so they can better assess the risk for every one of their vendors.
vendor  third_party  Security  liability  iapp 
august 2016 by wck
Analog Malicious Hardware - IEEE
While the move to smaller transistors has been a boon for performance, it has dramatically increased the cost to fabricate chips using those smaller transistors. This forces the vast majority of chip design companies to trust a third party, often overseas, to fabricate their design. To guard against shipping chips with errors (intentional or otherwise), chip design companies rely on post-fabrication testing. Unfortunately, this type of testing leaves the door open to malicious modifications, since attackers can craft attack triggers requiring a sequence of unlikely events which will never be encountered by even the most diligent tester.

In this paper, we show how a fabrication-time attacker can leverage analog circuits to create a hardware attack that is small (i.e., requires as little as one gate) and stealthy (i.e., requires an unlikely trigger sequence before effecting a chip’s functionality). In the open spaces of an already placed and routed design, we construct a circuit that uses capacitors to siphon charge from nearby wires as they transition between digital values. When the capacitors fully charge, they deploy an attack that forces a victim flip-flop to a desired value. We weaponize this attack into a remotely controllable privilege escalation by attaching the capacitor to a wire controllable from software and by selecting a victim flip-flop that holds the privilege bit for our processor. We implement this attack in an OR1200 processor and fabricate a chip. Experimental results show that our attacks work, show that our attacks elude activation by a diverse set of benchmarks, and suggest that our attacks evade known defenses.
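The trigger is easy to model in software: the capacitor behaves like a leaky counter that gains charge on each toggle of the victim wire and drains when the wire is idle. The toy simulation below (invented constants, not the paper's circuit parameters) shows why ordinary workloads never fire the attack while a deliberate toggle burst does.

# Toy model of the analog trigger: charge accumulates on wire toggles
# and leaks every cycle, so only an improbably dense toggle sequence
# ever reaches the threshold. Purely illustrative; the constants are
# made up and do not come from the paper.
import random

CHARGE_PER_TOGGLE = 1.0   # charge added per 0->1 or 1->0 transition
LEAK_PER_CYCLE = 0.25     # charge lost every clock cycle
THRESHOLD = 40.0          # "fully charged": victim flip-flop is forced

def simulate(wire_activity):
    """wire_activity: iterable of 0/1 wire values, one per clock cycle."""
    charge, prev = 0.0, 0
    for cycle, bit in enumerate(wire_activity):
        if bit != prev:
            charge += CHARGE_PER_TOGGLE
        charge = max(0.0, charge - LEAK_PER_CYCLE)
        prev = bit
        if charge >= THRESHOLD:
            return cycle  # attack fires: privilege bit gets set
    return None

random.seed(0)
normal = [random.random() < 0.05 for _ in range(10_000)]  # sporadic toggles
print(simulate(normal))                       # None: trigger never fires
print(simulate([i % 2 for i in range(200)]))  # 54: constant toggling fires it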
ieee  hardware  chips  backdoor  security  trojan 
may 2016 by wck
Anonymization and Risk by Ira Rubinstein, Woodrow Hartzog :: SSRN
Perfect anonymization of data sets has failed. But the process of protecting data subjects in shared information remains integral to privacy practice and policy. While the deidentification debate has been vigorous and productive, there is no clear direction for policy. As a result, the law has been slow to adapt a holistic approach to protecting data subjects when data sets are released to others. Currently, the law is focused on whether an individual can be identified within a given set. We argue that the better locus of data release policy is on the process of minimizing the risk of reidentification and sensitive attribute disclosure. Process-based data release policy, which resembles the law of data security, will help us move past the limitations of focusing on whether data sets have been “anonymized.” It draws upon different tactics to protect the privacy of data subjects, including accurate deidentification rhetoric, contracts prohibiting reidentification and sensitive attribute disclosure, data enclaves, and query-based strategies to match required protections with the level of risk. By focusing on process, data release policy can better balance privacy and utility where nearly all data exchanges carry some risk.
Privacy  anonymization  deidentification  data  release  risk  security  open 
december 2015 by wck
Why is Android security so bad: Google-funded research explains | BGR
You can now see why BlackBerry has been devoting so many resources lately toward making Android more secure — it’s clearly an area that needs a lot of work. Via ZDNet, researchers at the U.K.’s University of Cambridge recently conducted a study (PDF) that was funded partially by Google and revealed that the state of security on Android devices is a complete horror show.


How bad is this? Because of Android’s highly fragmented distribution and because third parties are responsible for delivering critical patches to their devices, the researchers estimate that 90% of Android devices right now are exposed to at least one critical vulnerability.

“The difficulty is that the market for Android security today is like the market for lemons,” the researchers explain. “There is information asymmetry between the manufacturer, who knows whether the device is currently secure and will receive security updates, and the customer, who does not.”

Unsurprisingly, the study found that Nexus devices are the most secure Android devices around because they run stock Android and don’t have to rely on manufacturers or wireless carriers to issue patches in a timely fashion. When it comes to third-party OEMs, LG-manufactured devices received the best scores for security, although that’s likely in part because LG has traditionally been a major manufacturer of Nexus phones. Following LG, manufacturers Motorola, Samsung, Sony and HTC all trail by considerable margins while smaller Android manufacturers that mostly serve emerging markets fare even worse.

“The security of Android depends on the timely delivery of updates to fix critical vulnerabilities,” the researchers conclude. “Unfortunately few devices receive prompt updates, with an overall average of 1.26 updates per year, leaving devices unpatched for long periods. We showed that the bottleneck for the delivery of updates in the Android ecosystem rests with the manufacturers, who fail to provide updates to fix critical vulnerabilities.”
Android  android_updates  Security  stagefright 
october 2015 by wck
InfoSec Handlers Diary Blog - The Wordpress Plugins Playground
From a security perspective, plugins are today the weakest point of a CMS. While most of the CMS source code is regularly audited and well maintained, the same is not true of plugins. By deploying and using a plugin, you install third-party code into your website and grant it some rights. Not all plugins are developed by skilled developers or with security in mind. Today, most vulnerabilities reported in CMS environments are due to … plugins!
wordpress  Security  security_research  third_party 
september 2015 by wck
FireEye 44Con injunction
FireEye “chose to put out an injunction on German firm ERNW, whose employee Felix Wilhelm was planning on delving into FireEye security technology and a trio of now-fixed vulnerabilities during a talk at the 44Con event in London tonight. Whilst he was able to offer some information on the flaws, Wilhelm wasn’t able to go as deep as he would have liked.”
Enno Rey at ERNW states the companies met during the Black Hat conference in Las Vegas on 5 August and appeared to have agreed on a final report: “Less than 24 hours later we received an extensive cease-and-desist letter stating a number of demands, mainly in the realm of intellectual property protection. It was requested to sign the associated confession by Monday 10th, which was roughly one working day after receipt of the letter.”

Also touches on different bugs found by ERNW & Kristian Erik Hermansen.
(And on issue of whether FireEye has a way to report vulnerabilities, they at least now have a page up. No idea how long it’s been live. https://www.fireeye.com/company/security.html)
vulnerability_disclosure  Security 
september 2015 by wck
Heartbleed disclosure timeline: who knew what and when
All times are in US Pacific Daylight Time

Friday, March 21 or before - Neel Mehta of Google Security discovers Heartbleed vulnerability.

Friday, March 21 10:23 - Bodo Moeller and Adam Langley of Google commit a patch for the flaw (This is according to the timestamp on the patch file Google created and later sent to OpenSSL, which OpenSSL forwarded to Red Hat and others). The patch is then progressively applied to Google services/servers across the globe.

Monday, March 31 or before - Someone tells content distribution network CloudFlare about Heartbleed and they patch against it. CloudFlare later boasts on its blog about how they were able to protect their clients before many others. CloudFlare chief executive officer Matthew Prince would not tell Fairfax how his company found out about the flaw early. "I think the most accurate reporting of events with regard to the disclosure process, to the extent I know them, was written by Danny over at the [Wall Street Journal]," he says. The article says CloudFlare was notified of the bug the week before last and made the recommended fix "after signing a non-disclosure agreement". In a separate article, The Verge reports that a CloudFlare staff member "got an alarming message from a friend" which requested that they send the friend their PGP email encryption key as soon as possible. "Only once a secure channel was established and a non-disclosure agreement was in place could he share the alarming news" about the bug, The Verge reported. On April 17, CloudFlare says in a blog that when it was informed it did not know then that it was among the few to whom the bug was disclosed before the public announcement. "In fact, we did not even know the bug's name. At that time we had simply removed TLS heartbeat functionality completely from OpenSSL..."

Tuesday, April 1 - Google Security notifies "OpenSSL team members" about the flaw it has found in OpenSSL, which later becomes known as "Heartbleed", Mark Cox at OpenSSL says on social network Google Plus.

Tuesday, April 1 04:09 - "OpenSSL team members" forward Google's email to OpenSSL's "core team members". Cox at OpenSSL says the following on Google Plus: "Original plan was to push [a fix] that week, but it was postponed until April 9 to give time for proper processes." Google tells OpenSSL, according to Cox, that they had "notified some infrastructure providers under embargo". Cox says OpenSSL does not have the names of providers Google told or the dates they were told. Google declined to tell Fairfax which partners it had told. "We aren't commenting on when or who was given a heads up," a Google spokesman said.

Wednesday, April 2 ~23:30 - Finnish IT security testing firm Codenomicon separately discovers the same bug that Neel Mehta of Google found in OpenSSL. A source inside the company gives Fairfax the time it was found as 09:30 EEST April 3, which converts to 23:30 PDT, April 2.

Thursday, April 3 04:30 - Codenomicon notifies the National Cyber Security Centre Finland (NCSC-FI) about its discovery of the OpenSSL bug. Codenomicon tells Fairfax in a statement that they're not willing to say whether they disclosed the bug to others. "We have strict [non-disclosure agreements] which do not allow us to discuss any customer engagements. Therefore, we do not want to weigh in on the disclosure debate," a company spokeswoman says. A source inside the company later tells Fairfax: "Our customers were not notified. They first learned about it after OpenSSL went public with the information."

Friday, April 4 - Content distribution network Akamai patches its servers. They initially say OpenSSL told them about the bug but the OpenSSL core team denies this in an email interview with Fairfax. Akamai updates its blog after the denial - prompted by Fairfax - and Akamai's blog now says an individual in the OpenSSL community told them. Akamai's chief security officer, Andy Ellis, tells Fairfax: "We've amended the blog to specific [sic] a member of the community; but we aren't going to disclose our source." It's well known a number of OpenSSL community members work for companies in the tech sector that could be connected to Akamai.

Friday, April 4 - Rumours begin to swirl in open source community about a bug existing in OpenSSL, according to one security person at a Linux distribution Fairfax spoke to. No details were apparent so it was ignored by most.

Saturday, April 5 15:13 - Codenomicon purchases the Heartbleed.com domain name, where it later publishes information about the security flaw.

Saturday, April 5 16:51 - OpenSSL (not public at this point) publishes this (since taken offline) to its Git repository.

Sunday, April 6 02:30 - The National Cyber Security Centre Finland asks the CERT Coordination Centre (CERT/CC) in America to be allocated a common vulnerabilities and exposures (CVE) number "on a critical OpenSSL issue" without disclosing what exactly the bug is. CERT/CC is located at the Software Engineering Institute, a US government funded research centre operated by Carnegie Mellon University. The centre was created in 1988 at DARPA's direction in response to the Morris worm incident.

Sunday, April 6 ~22:56 - Mark Cox of OpenSSL (who also works for Red Hat and was on holiday) notifies Linux distribution Red Hat about the Heartbleed bug and authorises them to share details of the vulnerability on behalf of OpenSSL to other Linux operating system distributions.

Sunday, April 6 22:56 - Huzaifa Sidhpurwala (who works for Red Hat) adds a (then private) bug to Red Hat's bugzilla.

Sunday, April 6 23:10 - Huzaifa Sidhpurwala sends an email about the bug to a private Linux distribution mailing list with no details about Heartbleed but an offer to request them privately under embargo. Sidhpurwala says in the email that the issue would be made public on April 9. Cox of OpenSSL says on Google Plus: "No details of the issue are given: just affected versions [of OpenSSL]. Vendors are told to contact Red Hat for the full advisory under embargo."

Sunday, April 6 ~23:10 - A number of people on the private mailing list ask Sidhpurwala, who lives in India, for details about the bug. Sidhpurwala gives details of the issue, advisory, and patch to the operating system vendors that replied under embargo. Those who got a response included SuSE (Monday, April 7 at 01:15), Debian (01:16), FreeBSD (01:49) and AltLinux (03:00). “Some other [operating system] vendors replied but [Red Hat] did not give details in time before the issue was public," Cox said. Sidhpurwala was asleep during the time the other operating system vendors requested details. "Some of them mailed during my night time. I saw these emails the next day, and it was pointless to answer them at that time, since the issue was already public," Sidhpurwala says. Those who attempted to ask and were left without a response included Ubuntu (asked at 04:30), Gentoo (07:14) and Chromium (09:15), says Cox.

Prior to Monday, April 7 or early April 7 - Facebook gets a heads up, people familiar with matter tell the Wall Street Journal. Facebook say after the disclosure: "We added protections for Facebook’s implementation of OpenSSL before this issue was publicly disclosed, and we're continuing to monitor the situation closely." An article on The Verge suggests Facebook got an encrypted email message from a friend in the same way CloudFlare did.

Monday, April 7 08:19 - The National Cyber Security Centre Finland reports Codenomicon's OpenSSL "Heartbleed" bug to OpenSSL core team members Ben Laurie (who works for Google) and Mark Cox (Red Hat) via encrypted email.

Monday, April 7 09:11 - The encrypted email is forwarded to the OpenSSL core team members, who then decide, according to Cox, that "the coincidence of the two finds of the same issue at the same time increases the risk while this issue remained unpatched. OpenSSL therefore released updated packages [later] that day."

Monday, April 7 09:53 - A fix for the OpenSSL Heartbleed bug is committed to OpenSSL's Git repository (at this point private). Confirmed by Red Hat employee: "At this point it was private."

Monday, April 7 10:21:29 - A new OpenSSL version is uploaded to OpenSSL's web server with the filename "openssl-1.0.1g.tgz".

Monday, April 7 10:27 - OpenSSL publishes a Heartbleed security advisory on its website (website metadata shows time as 10:27 PDT).

Monday, April 7 10:49 - OpenSSL issues a Heartbleed advisory via its mailing list. It takes time to get around.

Monday, April 7 11:00 - CloudFlare posts a blog entry about the bug.

Monday, April 7 12:23 - CloudFlare tweets about its blog post.

Monday, April 7 12:37 - Google's Neel Mehta comes out of Twitter hiding to tweet about the OpenSSL flaw.

Monday, April 7 13:13 - Codenomicon tweets they found bug too and link to their Heartbleed.com website.

Monday, April 7 ~13:13 - Most of the world finds out about the issue through heartbleed.com.

Monday, April 7 15:01 - Ubuntu comes out with a patch.

Monday, April 7 23:45 - The National Cyber Security Centre Finland issues a security advisory on its website in Finnish.

Tuesday, April 8 ~00:45 - The National Cyber Security Centre Finland issues a security advisory on its website in English.

Wednesday, April 9 - A Red Hat technical administrator for cloud security, Kurt Seifried, says in a public mailing list that Red Hat and OpenSSL tried to coordinate disclosure. But Seifried says things "blew up" when Codenomicon reported the bug too. "My understanding is that OpenSSL made this public due to additional reports. I suspect it boiled down to 'Group A found this flaw, reported it, and has a reproducer, and now Group B found the same thing independently and also has a reproducer. Chances are the bad guys do as well so better to let everyone know the barn door is open now rather than wait 2 more days'. But there may be other factors I'm not aware [of]," Seifried says.

Wednesday, April 9 - A Debian developer, Yves-Alexis Perez, says on the same mailing list: "I think we… [more]
security  disclosure  coordination  Open_Source  heartbleed 
september 2015 by wck
US Contractors Scale Up Search for Heartbleed-Like Flaws - Bloomberg Business
Michael Daniel, the White House cybersecurity coordinator, said in a blog post this week that “building up a huge stockpile of undisclosed vulnerabilities while leaving the Internet vulnerable and the American people unprotected would not be in our national security interest.”
He said the U.S. would continue to develop and use those vulnerabilities to protect the country, however, and that the administration has established “a disciplined, rigorous and high-level decision-making process” when it comes to deciding whether to keep the flaws secret or disclose them so they can be fixed.
zero_day  disclosure  security_research  Security 
august 2015 by wck
The rise of the new Crypto War
“Essentially, each part of [the limitations section] provides a reason why the current CALEA couldn’t be applied to make Apple change its software,” Schoen said. “Because it doesn’t apply to Apple [as an information service], because it can’t be used to dictate features, and because companies aren’t responsible for decrypting unless they have the keys.” (Apple’s proudly self-proclaimed inability to access its users’ encryption keys would prove to be the inspiration for Comey’s “going dark” speech.)
encryption  calea  Privacy  backdoor  Security 
july 2015 by wck
Michael Chertoff Makes the Case against Back Doors | emptywheel
I think that it’s a mistake to require companies that are making hardware and software to build a duplicate key or a back door even if you hedge it with the notion that there’s going to be a court order. And I say that for a number of reasons and I’ve given it quite a bit of thought and I’m working with some companies in this area too.
First of all, there is, when you do require a duplicate key or some other form of back door, there is an increased risk and increased vulnerability. You can manage that to some extent. But it does prevent you from certain kinds of encryption. So you’re basically making things less secure for ordinary people.
The second thing is that the really bad people are going to find apps and tools that are going to allow them to encrypt everything without a back door. These apps are multiplying all the time. The idea that you’re going to be able to stop this, particularly given the global environment, I think is a pipe dream. So what would wind up happening is people who are legitimate actors will be taking somewhat less secure communications and the bad guys will still not be able to be decrypted.
The third thing is that what are we going to tell other countries? When other countries say great, we want to have a duplicate key too, with Beijing or in Moscow or someplace else? The companies are not going to have a principled basis to refuse to do that. So that’s going to be a strategic problem for us.
Finally, I guess I have a couple of overarching comments. One is we do not historically organize our society to make it maximally easy for law enforcement, even with court orders, to get information. We often make trade-offs and we make it more difficult. If that were not the case then why wouldn’t the government simply say all of these [takes out phone] have to be configured so they’re constantly recording everything that we say and do and then when you get a court order it gets turned over and we wind up convicting ourselves. So I don’t think socially we do that.
nsa  backdoor  crypto  Security  Privacy 
july 2015 by wck
FTC Expands Education Efforts on Business Security Practices - Law Across the Wire and Into the Cloud
The FTC released “Start with Security,” a whitepaper promoting best security practices. Based on the FTC’s more than fifty Section 5 (unfair and deceptive trade practices) settlements, the whitepaper provides examples of what is and is not “reasonable security.” Below are the ten themes of the FTC’s growing security precedent, including a few case cites.
ftc  data_protection  data_security  Privacy  Security 
july 2015 by wck
Judiciary FBI malware letter
Letter from Senate Judiciary to FBI about use of malware
malware  hacking  fbi  hack_back  security  senate  senate_judiciary 
july 2015 by wck
BlindBox: Deep Packet Inspection over Encrypted Traffic
Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks which examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is faced with the choice of only one of two desirable properties: the functionality of middleboxes and the privacy of encryption.

We propose BlindBox, the first system that simultaneously provides both of these properties. The approach of BlindBox is to perform the deep-packet inspection directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes. We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.
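The flavor of the approach can be sketched with deterministic token encryption: both endpoints send encrypted fixed-width substrings of the payload alongside the HTTPS stream, and the middlebox matches them against encrypted rule keywords without ever seeing plaintext. The toy below uses a shared HMAC key purely for illustration; the real BlindBox protocol relies on purpose-built encryption schemes and an oblivious rule-exchange step rather than handing anyone the session key.

# Toy sketch of DPI over encrypted tokens (not the BlindBox protocol).
import hashlib
import hmac

SESSION_KEY = b"per-session key known to the two endpoints"

def enc_token(token: bytes) -> bytes:
    # Deterministic encryption stand-in: equal tokens yield equal tags.
    return hmac.new(SESSION_KEY, token, hashlib.sha256).digest()

def tokenize(payload: bytes, width: int = 8) -> set:
    # Sliding window of fixed-width substrings, as in keyword DPI.
    return {payload[i:i + width] for i in range(len(payload) - width + 1)}

# Sender side: emit encrypted tokens for the outgoing payload.
payload = b"GET /download?file=passwd.txt HTTP/1.1"
encrypted_tokens = {enc_token(t) for t in tokenize(payload)}

# Middlebox side: holds only encrypted rule keywords, never plaintext.
encrypted_rules = {enc_token(r) for r in [b"passwd.t", b"/etc/sha"]}

if encrypted_rules & encrypted_tokens:
    print("ALERT: IDS rule matched inside encrypted traffic")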
deep_packet_inspection  encryption  networking  intrusion_detection  Security  security_research 
june 2015 by wck
DOOMED TO REPEAT HISTORY? LESSONS FROM THE CRYPTO WARS OF THE 1990s
In the past year, a conflict has erupted between technology companies, privacy advocates, and members of the U.S. law enforcement and intelligence communities over the right to use and distribute products that contain strong encryption technology. This debate between government actors seeking ways to preserve access to encrypted communications and a coalition of pro-encryption groups is reminiscent of an old battle that played out in the 1990s: a period that has come to be known as the “Crypto Wars.” This paper tells the story of that debate and the lessons that are relevant to today. It is a story not only about policy responses to new technology, but also about a sustained, coordinated effort among industry groups, privacy advocates, and technology experts from across the political spectrum to push back against government policies that threatened online innovation and fundamental human rights.
Privacy  crypto  Security  security_research 
june 2015 by wck
Perfect Security | 99% Invisible
But in the entire history of the world, there was only one brief moment, lasting about 70 years, where you could put something under lock and key—a chest, a safe, your home—and have complete, unwavering certainty that no intruder could get to it.

This is a feeling that security experts call “perfect security.”
Security  Privacy 
april 2015 by wck
Identifier based XSSI attacks
Cross Site Script Inclusion (XSSI) is an attack technique (or a vulnerability) that enables attackers to steal data of certain types across origin boundaries, by including target data using a SCRIPT tag in an attacker's Web page as below:

<!-- attacker's page loads external data with SCRIPT tag -->
<SCRIPT src="http://target.example.jp/secret"></SCRIPT>

For years, it has been known among Web security researchers that JavaScript files, JSONP and, in certain old browsers, JSON data are subject to this type of information theft attack, namely XSSI. In addition, some browser vulnerabilities that allow attackers to gain information via JavaScript error messages have been discovered and fixed in the past.

In 2014, we conducted research on this faded topic and discovered some new attack techniques and browser vulnerabilities that allow attackers to steal simple text strings such as CSV, and more complex data under certain circumstances. In the research, we mainly focused on a method of stealing data as a client-side script's identifier (variable or function name).

In this paper, we first describe the attack techniques and browser vulnerabilities in the next section, and in the last section describe the relevant countermeasures.
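To make the vulnerable pattern concrete, here is a minimal, hypothetical Flask endpoint that serves per-user data as a script with a predictable global identifier; every name in it (route, variable, token) is invented for illustration.

# Hypothetical vulnerable endpoint: user data exposed as a JavaScript
# identifier, guarded only by a cookie. Because browsers attach cookies
# to cross-site <script> loads, any page the victim visits can read it.
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "demo-only"

@app.route("/secret.js")
def secret_js():
    # An attacker page steals the value simply by including:
    #   <script src="http://target.example.jp/secret.js"></script>
    #   <script>send_to_attacker(secretToken);</script>
    token = session.get("token", "s3cr3t")
    body = f'var secretToken = "{token}";'
    return body, 200, {"Content-Type": "application/javascript"}

# Mitigations: return plain JSON instead of executable script, require a
# custom request header or CSRF token, and mark cookies SameSite.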
webappsec  Security  security_research  xss 
april 2015 by wck
Before We Knew It An Empirical Study of Zero-Day Attacks In The Real World
Little is known about the duration and prevalence of zero-day attacks, which exploit vulnerabilities that have not been disclosed publicly. Knowledge of new vulnerabilities gives cyber criminals a free pass to attack any target of their choosing, while remaining undetected. Unfortunately, these serious threats are difficult to analyze, because, in general, data is not available until after an attack is discovered. Moreover, zero-day attacks are rare events that are unlikely to be observed in honeypots or in lab experiments.

In this paper, we describe a method for automatically identifying zero-day attacks from field-gathered data that records when benign and malicious binaries are downloaded on 11 million real hosts around the world. Searching this data set for malicious files that exploit known vulnerabilities indicates which files appeared on the Internet before the corresponding vulnerabilities were disclosed. We identify 18 vulnerabilities exploited before disclosure, of which 11 were not previously known to have been employed in zero-day attacks. We also find that a typical zero-day attack lasts 312 days on average and that, after vulnerabilities are disclosed publicly, the volume of attacks exploiting them increases by up to 5 orders of magnitude.
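The paper's core test reduces to a temporal join, sketched below with invented data: a malicious binary that exploits a known CVE but was first seen in the field before that CVE's public disclosure is evidence of a zero-day attack.

# Sketch of the identification method with made-up field data: flag any
# file whose first field sighting predates its CVE's public disclosure.
from datetime import date

# (file_hash, exploited_cve, first_seen_in_field)
sightings = [
    ("a1b2", "CVE-2011-0001", date(2010, 11, 2)),
    ("c3d4", "CVE-2011-0002", date(2011, 6, 30)),
]

# CVE -> public disclosure date
disclosed = {
    "CVE-2011-0001": date(2011, 3, 15),
    "CVE-2011-0002": date(2011, 5, 1),
}

for file_hash, cve, first_seen in sightings:
    lead_days = (disclosed[cve] - first_seen).days
    if lead_days > 0:
        print(f"{file_hash}: zero-day use of {cve}, "
              f"{lead_days} days before disclosure")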
zero_day  Hacking  webappsec  Security  security_research 
april 2015 by wck
Google Online Security Blog: Ready, aim, fire: an open-source tool to test web security scanners
Securing modern web applications can be a daunting task—doubly so if they are built (quickly) with diverse languages and technology stacks. That’s why we run a multi-faceted product security program, which helps our engineers build and deploy secure software at every stage of the development lifecycle. As part of this effort, we have developed an internal web application security scanning tool, codenamed Inquisition (as no bug expects it!).

The scanner is built entirely on Google technologies like Chrome and Google Cloud Platform, with support for the latest HTML5 features, a low false positive rate and ease of use in mind. We have discussed some of the technology behind this tool in a talk at the Google Testing Automation Conference 2013.

While working on this tool, we found we needed a synthetic testbed to both test our current capabilities and set goals for what we need to catch next. Today we’re announcing the open-source release of Firing Range, the result of our work (with some help from researchers at the Politecnico di Milano) in producing a test ground for automated scanners.

Firing Range is a Java application built on Google App Engine and contains a wide range of XSS and, to a lesser degree, other web vulnerabilities. Code is available on github.com/google/firing-range, while a deployed version is at public-firing-range.appspot.com.

How is it different from the many vulnerable test applications already available? Most of them have focused on creating realistic-looking testbeds for human testers; we think that with automation in mind it is more productive, instead, to try to exhaustively enumerate the contexts and the attack vectors that an application might exhibit. Our testbed doesn’t try to emulate a real application, nor exercise the crawling capabilities of a scanner: it’s a collection of unique bug patterns drawn from vulnerabilities that we have seen in the wild, aimed at verifying the detection capabilities of security tools.
google  webappsec  security 
november 2014 by wck
Schneier on Security: iPhone Encryption and the Return of the Crypto Wars
Last week Apple announced that it is closing a serious security vulnerability in the iPhone. It used to be that the phone's encryption only protected a small amount of the data, and Apple had the ability to bypass security on the rest of it.

From now on, all the phone's data is protected. It can no longer be accessed by criminals, governments, or rogue employees. Access to it can no longer be demanded by totalitarian governments. A user's iPhone data is now more secure.

To hear U.S. law enforcement respond, you'd think Apple's move heralded an unstoppable crime wave. See, the FBI had been using that vulnerability to get into peoples' iPhones. In the words of cyberlaw professor Orin Kerr, "How is the public interest served by a policy that only thwarts lawful search warrants?"

Ah, but that's the thing: You can't build a "back door" that only the good guys can walk through. Encryption protects against cybercriminals, industrial competitors, the Chinese secret police and the FBI. You're either vulnerable to eavesdropping by any of them, or you're secure from eavesdropping from all of them.

Back-door access built for the good guys is routinely used by the bad guys. In 2005, some unknown group surreptitiously used the lawful-intercept capabilities built into the Greek cell phone system. The same thing happened in Italy in 2006.

In 2010, Chinese hackers subverted an intercept system Google had put into Gmail to comply with U.S. government surveillance requests. Back doors in our cell phone system are currently being exploited by the FBI and unknown others.

This doesn't stop the FBI and Justice Department from pumping up the fear. Attorney General Eric Holder threatened us with kidnappers and sexual predators.

The former head of the FBI's criminal investigative division went even further, conjuring up kidnappers who are also sexual predators. And, of course, terrorists.

FBI Director James Comey claimed that Apple's move allows people to "place themselves beyond the law" and also invoked that now overworked "child kidnapper." John J. Escalante, chief of detectives for the Chicago police department now holds the title of most hysterical: "Apple will become the phone of choice for the pedophile."

It's all bluster. Of the 3,576 major offenses for which warrants were granted for communications interception in 2013, exactly one involved kidnapping. And, more importantly, there's no evidence that encryption hampers criminal investigations in any serious way. In 2013, encryption foiled the police nine times, up from four in 2012, and the investigations proceeded in some other way.

This is why the FBI's scare stories tend to wither after public scrutiny. A former FBI assistant director wrote about a kidnapped man who would never have been found without the ability of the FBI to decrypt an iPhone, only to retract the point hours later because it wasn't true.

We've seen this game before. During the crypto wars of the 1990s, FBI Director Louis Freeh and others would repeatedly use the example of mobster John Gotti to illustrate why the ability to tap telephones was so vital. But the Gotti evidence was collected using a room bug, not a telephone tap. And those same scary criminal tropes were trotted out then, too. Back then we called them the Four Horsemen of the Infocalypse: pedophiles, kidnappers, drug dealers, and terrorists. Nothing has changed.

Strong encryption has been around for years. Both Apple's FileVault and Microsoft's BitLocker encrypt the data on computer hard drives. PGP encrypts email. Off-the-Record encrypts chat sessions. HTTPS Everywhere encrypts your browsing. Android phones already come with encryption built-in. There are literally thousands of encryption products without back doors for sale, and some have been around for decades. Even if the U.S. bans the stuff, foreign companies will corner the market because many of us have legitimate needs for security.

Law enforcement has been complaining about "going dark" for decades now. In the 1990s, they convinced Congress to pass a law requiring phone companies to ensure that phone calls would remain tappable even as they became digital. They tried and failed to ban strong encryption and mandate back doors for their use. The FBI tried and failed again to ban strong encryption in 2010. Now, in the post-Snowden era, they're about to try again.

We need to fight this. Strong encryption protects us from a panoply of threats. It protects us from hackers and criminals. It protects our businesses from competitors and foreign spies. It protects people in totalitarian governments from arrest and detention. This isn't just me talking: The FBI also recommends you encrypt your data for security.

As for law enforcement? The recent decades have given them an unprecedented ability to put us under surveillance and access our data. Our cell phones provide them with a detailed history of our movements. Our call records, email history, buddy lists, and Facebook pages tell them who we associate with. The hundreds of companies that track us on the Internet tell them what we're thinking about. Ubiquitous cameras capture our faces everywhere. And most of us back up our iPhone data on iCloud, which the FBI can still get a warrant for. It truly is the golden age of surveillance.

After considering the issue, Orin Kerr rethought his position, looking at this in terms of a technological-legal trade-off. I think he's right.

Given everything that has made it easier for governments and others to intrude on our private lives, we need both technological security and legal restrictions to restore the traditional balance between government access and our security/privacy. More companies should follow Apple's lead and make encryption the easy-to-use default. And let's wait for some actual evidence of harm before we acquiesce to police demands for reduced security.

This essay previously appeared on CNN.com

EDITED TO ADD (10/6): Three more essays worth reading. As is this on all the other ways Apple and the government have to get at your iPhone data.

And a Washington Post editorial manages to say this:

How to resolve this? A police "back door" for all smartphones is undesirable--a back door can and will be exploited by bad guys, too. However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.

Because a "secure golden key" is completely different from a "back door."
ios  Security  backdoor 
october 2014 by wck
[SSL Observatory] The Trust Tree: An interactive graph of the CA ecosystem
An interactive graph that shows the relationships between the root CAs of the Mozilla root store and their intermediates, at http://notary.icsi.berkeley.edu/trust-tree/
ssl  trust_tree  root_CAs  security  internet 
july 2014 by wck