Sunday, July 29, 2012

Google: Didn't delete Street View data after all

After being caught spying on people across Europe and Australia with its Wi-Fi-slurping Street View cars, Google had told angry regulators that it would delete the ill-gotten data.

Google broke its promise.

Britain's Information Commissioner's Office (ICO) received a letter from Google in which the company admits it kept a "small portion" of the electronic information it had been meant to get rid of.

"Google apologizes for this error," Peter Fleischer, Google's global privacy counsel, said in the letter, which the ICO published on its website.

The ICO said in a statement that Google Inc. had agreed to delete all that data nearly two years ago, adding that its failure to do so "is cause for concern."

Other regulators were less diplomatic, with Ireland's deputy commissioner for data protection, Gary Davis, calling Google's failure "clearly unacceptable." Davis said his organization had conveyed its "deep unhappiness" to Google and wants answers by Wednesday.

Google said that other countries affected included France, Belgium, the Netherlands, Norway, Sweden, Finland, Switzerland, Austria and Australia. Attempts to reach regulators in several of those countries weren't immediately successful Friday.

Google angered officials on both sides of the Atlantic in 2010 when it acknowledged that its mapping cars, which carried cameras across the globe to create three-dimensional maps of the world's streets, had also scooped up passwords and other data being transmitted over unsecured wireless networks. Investigators have since revealed that the intercepted data included private information including legal, medical and pornographic material.

The Mountain View, California-based company had been meant to purge the data, and Google chalked up its mistake to human error.

Google said it recently discovered the data during a comprehensive manual review of Street View disks, and that it has since contacted regulators in all of the countries where it had promised to delete data but failed to do so.

Fleischer's letter asks Britain's ICO for instructions on how to proceed; the ICO told Google that it must turn over the data immediately so it can undergo forensic analysis.

Friday's disclosure comes just over a month after the ICO reopened its investigation into Google's Street View, saying that an inquiry by authorities in the United States raised new doubts about the disputed program.

In April, the U.S. Federal Communications Commission fined Google, saying the company "deliberately impeded and delayed" its investigation into Street View.

It's unclear what, if any, penalties would be imposed on Google by Britain's ICO or regulators in any of the 10 other jurisdictions in which the company had wrongly retained Street View data.

"We need to take a look at the data... There's all sorts of questions we need to ask," an ICO spokesman said, speaking on condition of anonymity because office rules prohibit him from being named in print.

The ICO has the power to impose fines of up to 500,000 pounds (roughly $780,000) for the most serious data breaches, although penalties are generally far less severe and can involve injunctions or reprimands.

More information:
Google's letter to Britain's watchdog: bit.ly/OrXSd3
British watchdog's response to Google: bit.ly/OrXYRV

Red Flag On Biometrics: Iris Scanners Can Be Tricked

At the Black Hat security conference in Las Vegas this week, Javier Galbally revealed that it’s possible to spoof a biometric iris scanning system using synthetic images derived from real irises. The Madrid-based security researcher’s talk is timely, coming on the heels of a July 23 Israeli Supreme Court hearing where the potential vulnerabilities of a proposed governmental biometric database drove the debate. Consider the week’s events a reminder that if the adoption of biometric identification systems continues apace without serious contemplation of the pitfalls, we’re headed for trouble.

When it comes to the collection and storage of individuals’ digital fingerprints, iris scans, or facial photographs, system vulnerability is a chief concern. A social security number can always be cancelled and reissued if it’s compromised, but it’s impossible for someone to get a new eyeball if an attacker succeeds in seizing control of his or her digital biometric information.

Among the various biometric traits that can be measured for machine identification--such as fingerprints, face, voice, or keystroke dynamics--the iris is generally regarded as the most reliable. Yet Galbally’s team of researchers has shown that even the method traditionally presumed to be foolproof is actually quite susceptible to being hacked.

The project, unveiled for the first time at the security researchers’ conference, made use of synthetic images that match digital iris codes linked to real irises. The codes, which are derived from the unique measurements of an individual’s iris and contain about 5,000 pieces of information, are stored in biometric databases and used to positively identify people when they position their eyes in front of the scanners. By printing out the replica images on commercial printers, the researchers found they could trick the iris-scanning systems into confirming a match.
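To give a rough sense of how such a template comparison works, here is a minimal sketch in Python of a toy matcher. It assumes a binary iris code compared by fractional Hamming distance; the real template format and acceptance threshold of commercial recognizers such as VeriEye are not public, so the bit length and threshold below are illustrative only.

import random

CODE_BITS = 5000          # the article cites roughly 5,000 pieces of information
MATCH_THRESHOLD = 0.32    # hypothetical: accept if fewer than 32% of bits differ

def hamming_fraction(code_a, code_b):
    """Fraction of positions where two equal-length bit sequences disagree."""
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

def is_match(enrolled_code, probe_code):
    """Accept the probe if it is close enough to the enrolled template."""
    return hamming_fraction(enrolled_code, probe_code) < MATCH_THRESHOLD

# Example: an enrolled template and a noisy re-capture of the same eye.
enrolled = [random.randint(0, 1) for _ in range(CODE_BITS)]
same_eye = [bit if random.random() > 0.10 else 1 - bit for bit in enrolled]
other_eye = [random.randint(0, 1) for _ in range(CODE_BITS)]

print(is_match(enrolled, same_eye))    # True: ~10% noise stays under the threshold
print(is_match(enrolled, other_eye))   # False: unrelated codes differ on ~50% of bits

The key point for the attack described below is that any input whose code lands under that threshold is accepted, whether or not it came from a living eye.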

The tests were carried out against a commercial system called VeriEye, made by Neurotechnology. The synthetic images were produced using a genetic algorithm. With the replicas, Galbally found that an imposter could spoof the system at a rate of 50 percent or higher. A Wired article hit on the significance of this discovery:

“This is the first time anyone has essentially reverse-engineered iris codes to create iris images that closely match the eye images of real subjects, creating the possibility of stealing someone’s identity through their iris.”
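As a loose illustration of that reverse-engineering idea, the sketch below runs a small genetic algorithm in Python. In Galbally’s attack the candidates were synthetic iris images scored by the recognizer’s own feature extractor; here each candidate is simply a bit vector, encode() is a stand-in identity function, and the code length is shortened so the toy converges quickly. It shows only the search strategy, not a working exploit, and the threshold is a hypothetical value, not VeriEye’s.

import random

CODE_BITS = 512           # shortened from ~5,000 so the toy converges quickly
POP_SIZE = 60
MUTATION_RATE = 0.002     # expected ~1 bit flip per candidate per generation
TARGET_DISTANCE = 0.32    # hypothetical acceptance threshold of the matcher

def encode(candidate):
    # Stand-in for rendering a synthetic image and running it through the
    # recognizer's feature extractor; here it just returns the bits unchanged.
    return candidate

def distance(code_a, code_b):
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

def crossover(parent_a, parent_b):
    cut = random.randrange(1, CODE_BITS)
    return parent_a[:cut] + parent_b[cut:]

def mutate(candidate):
    return [1 - bit if random.random() < MUTATION_RATE else bit
            for bit in candidate]

def attack(stolen_template, generations=400):
    """Evolve candidates until one's code falls under the matcher's threshold."""
    population = [[random.randint(0, 1) for _ in range(CODE_BITS)]
                  for _ in range(POP_SIZE)]
    best, best_dist = population[0], 1.0
    for _ in range(generations):
        population.sort(key=lambda c: distance(encode(c), stolen_template))
        best = population[0]
        best_dist = distance(encode(best), stolen_template)
        if best_dist < TARGET_DISTANCE:
            break
        # Keep the top quarter as parents, refill by crossover plus mutation.
        parents = population[:POP_SIZE // 4]
        population = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE - len(parents))
        ]
    return best, best_dist

stolen_template = [random.randint(0, 1) for _ in range(CODE_BITS)]
_, achieved = attack(stolen_template)
print(f"best distance to stolen template: {achieved:.3f} "
      f"(accepted if below {TARGET_DISTANCE})")

The design point is that the attacker never needs the original eye: a leaked template plus access to the matcher’s accept/reject decision is enough feedback to drive the search.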

This revelation not only exposes a security hole in a commercial iris-recognition system, but also proves that prominent tech firm and FBI contractor BI2 Technologies--which is building a database of iris scans for the Next Generation Identification System--was wrong when it noted on its website that biometric templates “cannot be reconstructed, decrypted, reverse-engineered or otherwise manipulated to reveal a person’s identity.”

Any new detection of biometric system flaws is relevant in the context of the massive governmental identification programs moving forward at the global level. There’s India’s bid to create the world’s largest database of irises, fingerprints and facial photos, for example, and Argentina’s creation of a nationwide biometric database containing millions of digital fingerprints. Just this week in Israel, High Court justices criticized a planned biometric database as a “harmful” and “extreme” measure. Lawmakers who approve such identification schemes should give serious consideration to any new information surfacing about biometric system vulnerabilities.

EFF

$180K Computer Woman Put To Work At Newark Airport