Monday, September 25, 2017

GPS Spoofing Attacks

Wired has a story about a possible GPS spoofing attack by Russia:

After trawling through AIS data from recent years, evidence of spoofing becomes clear. Goward says GPS data has placed ships at three different airports and there have been other interesting anomalies. "We would find very large oil tankers who could travel at a maximum speed of 15 knots," says Goward, who was formerly director for Marine Transportation Systems at the US Coast Guard. "Their AIS, which is powered by GPS, would be saying they had sped up to 60 to 65 knots for an hour and then suddenly stopped. They had done that several times."

All of the evidence from the Black Sea points towards a co-ordinated attempt to disrupt GPS. A recently published report from NRK found that 24 vessels appeared at Gelendzhik airport around the same time as the Atria. When contacted, a US Coast Guard representative refused to comment on the incident, saying any GPS disruption that warranted further investigation would be passed on to the Department of Defence.

"It looks like a sophisticated attack, by somebody who knew what they were doing and were just testing the system," Bonenberg says. Humphreys told NRK it "strongly" looks like a spoofing incident. Fire Eye's Brubaker, agreed, saying the activity looked intentional. Goward is also confident that GPS were purposely disrupted. "What this case shows us is there are entities out there that are willing and eager to disrupt satellite navigation systems for whatever reason and they can do it over a fairly large area and in a sophisticated way," he says. "They're not just broadcasting a stronger signal and denying service this is worse they're providing hazardously misleading information."



Saturday, September 23, 2017

Friday Squid Blogging: Using Squid Ink to Detect Gum Disease

A new dental imagery method, using squid ink, light, and ultrasound.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.



Friday, September 22, 2017

Boston Red Sox Caught Using Technology to Steal Signs

The Boston Red Sox admitted to eavesdropping on the communications channel between catcher and pitcher.

Stealing signs is believed to be particularly effective when there is a runner on second base who can both watch what hand signals the catcher is using to communicate with the pitcher and can easily relay to the batter any clues about what type of pitch may be coming. Such tactics are allowed as long as teams do not use any methods beyond their eyes. Binoculars and electronic devices are both prohibited.

In recent years, as cameras have proliferated in major league ballparks, teams have begun using the abundance of video to help them discern opponents' signs, including the catcher's signals to the pitcher. Some clubs have had clubhouse attendants quickly relay information to the dugout from the personnel monitoring video feeds.

But such information has to be rushed to the dugout on foot so it can be relayed to players on the field -- a runner on second, the batter at the plate -- while the information is still relevant. The Red Sox admitted to league investigators that they were able to significantly shorten this communications chain by using electronics. In what mimicked the rhythm of a double play, the information would rapidly go from video personnel to a trainer to the players.

This is ridiculous. The rules about what sorts of sign stealing are allowed and what sorts are not are arbitrary and unenforceable. My guess is that the only reason there aren't more complaints is that everyone does it.

The Red Sox responded in kind on Tuesday, filing a complaint against the Yankees claiming that the team uses a camera from its YES television network exclusively to steal signs during games, an assertion the Yankees denied.

Boston's mistake here was using a very conspicuous Apple Watch as a communications device. They need to learn to be more subtle, like everyone else.



Thursday, September 21, 2017

ISO Rejects NSA Encryption Algorithms

The ISO has decided not to approve two NSA-designed block encryption algorithms: Speck and Simon. It's because the NSA is not trusted to put security ahead of surveillance:

A number of them voiced their distrust in emails to one another, seen by Reuters, and in written comments that are part of the process. The suspicions stem largely from internal NSA documents disclosed by Snowden that showed the agency had previously plotted to manipulate standards and promote technology it could penetrate. Budget documents, for example, sought funding to "insert vulnerabilities into commercial encryption systems."

More than a dozen of the experts involved in the approval process for Simon and Speck feared that if the NSA was able to crack the encryption techniques, it would gain a "back door" into coded transmissions, according to the interviews and emails and other documents seen by Reuters.

"I don't trust the designers," Israeli delegate Orr Dunkelman, a computer science professor at the University of Haifa, told Reuters, citing Snowden's papers. "There are quite a lot of people in NSA who think their job is to subvert standards. My job is to secure standards."

I don't trust the NSA, either.



Wednesday, September 20, 2017

What the NSA Collects via 702

New York Times reporter Charlie Savage writes about some bad statistics we're all using:

Among surveillance legal policy specialists, it is common to cite a set of statistics from an October 2011 opinion by Judge John Bates, then of the FISA Court, about the volume of internet communications the National Security Agency was collecting under the FISA Amendments Act ("Section 702") warrantless surveillance program. In his opinion, declassified in August 2013, Judge Bates wrote that the NSA was collecting more than 250 million internet communications a year, of which 91 percent came from its Prism system (which collects stored e-mails from providers like Gmail) and 9 percent came from its upstream system (which collects transmitted messages from network operators like AT&T).

These numbers are wrong. This blog post will address, first, the widespread nature of this misunderstanding; second, how I came to FOIA certain documents trying to figure out whether the numbers really added up; third, what those documents show; and fourth, what I further learned in talking to an intelligence official. This is far too dense and weedy for a New York Times article, but should hopefully be of some interest to specialists.

Worth reading for the details.



Tuesday, September 19, 2017

Apple's FaceID

This is a good interview with Apple's SVP of Software Engineering about FaceID.

Honestly, I don't know what to think. I am confident that Apple is not collecting a photo database, but not optimistic that it can't be hacked with fake faces. I dislike the fact that the police can point the phone at someone and have it automatically unlock. So this is important:

I also quizzed Federighi about the exact way you "quick disabled" Face ID in tricky scenarios -- like being stopped by police, or being asked by a thief to hand over your device.

"On older phones the sequence was to click 5 times [on the power button], but on newer phones like iPhone 8 and iPhone X, if you grip the side buttons on either side and hold them a little while -- we'll take you to the power down [screen]. But that also has the effect of disabling Face ID," says Federighi. "So, if you were in a case where the thief was asking to hand over your phone -- you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID."

That squeeze is the power button plus either volume button. This, in my opinion, is an even better solution than the "5 clicks" because it's less obtrusive. When you do this, it defaults back to your passcode.

More:

It's worth noting a few additional details here:

  • If you haven't used Face ID in 48 hours, or if you've just rebooted, it will ask for a passcode.
  • If there are 5 failed attempts to Face ID, it will default back to passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode -- it tried to read the people setting the phones up on the podium.)
  • Developers do not have access to raw sensor data from the Face ID array. Instead, they're given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.
  • You'll also get a passcode request if you haven't unlocked the phone using a passcode or at all in 6.5 days and if Face ID hasn't unlocked it in 4 hours.

Also be prepared for your phone to immediately lock every time your sleep/wake button is pressed or it goes to sleep on its own. This is just like Touch ID.

Federighi also noted on our call that Apple would be releasing a security white paper on Face ID closer to the release of the iPhone X. So if you're a researcher or security wonk looking for more, he says it will have "extreme levels of detail" about the security of the system.
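Those fallback conditions amount to a small policy table. The sketch below encodes them as described above; the function, field names, and structure are my own invention for illustration, not Apple's implementation.

```python
# Illustrative sketch of the passcode-fallback rules listed above.
# All names, types, and the way thresholds are expressed are
# assumptions for illustration; this is not Apple's implementation.
from dataclasses import dataclass

HOURS = 3600.0

@dataclass
class DeviceState:
    now: float                   # current time, in seconds
    last_face_id_unlock: float   # last successful Face ID unlock
    last_passcode_unlock: float  # last unlock using the passcode
    failed_face_id_attempts: int
    just_rebooted: bool

def passcode_required(s: DeviceState) -> bool:
    """Return True if the device should fall back to the passcode."""
    if s.just_rebooted:
        return True
    # No Face ID use in the last 48 hours.
    if s.now - s.last_face_id_unlock > 48 * HOURS:
        return True
    # Five failed Face ID attempts.
    if s.failed_face_id_attempts >= 5:
        return True
    # No passcode unlock in 6.5 days AND no Face ID unlock in 4 hours.
    if (s.now - s.last_passcode_unlock > 6.5 * 24 * HOURS
            and s.now - s.last_face_id_unlock > 4 * HOURS):
        return True
    return False
```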

Here's more about fooling it with fake faces:

Facial recognition has long been notoriously easy to defeat. In 2009, for instance, security researchers showed that they could fool face-based login systems for a variety of laptops with nothing more than a printed photo of the laptop's owner held in front of its camera. In 2015, Popular Science writer Dan Moren beat an Alibaba facial recognition system just by using a video that included himself blinking.

Hacking FaceID, though, won't be nearly that simple. The new iPhone uses an infrared system Apple calls TrueDepth to project a grid of 30,000 invisible light dots onto the user's face. An infrared camera then captures the distortion of that grid as the user rotates his or her head to map the face's 3-D shape -- a trick similar to the kind now used to capture actors' faces to morph them into animated and digitally enhanced characters.

It'll be harder, but I have no doubt that it will be done.
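One intuition for why the printed-photo attacks above don't carry over: a flat photo produces an essentially planar depth map, while a real face has centimetres of relief. The toy check below is purely illustrative (the threshold and data format are assumptions, and Apple's matching is far more sophisticated than this), but it shows why 3-D capture raises the bar.

```python
# Toy illustration only: a flat printed photo yields a nearly planar
# depth map, while a real face has centimetres of depth relief.
# The 10 mm threshold and the data format are assumptions.
import statistics

def looks_three_dimensional(depth_map_mm, min_relief_mm=10.0):
    """depth_map_mm: flat list of per-pixel distances in millimetres.
    Returns True if there is enough depth variation to be a real face."""
    relief = max(depth_map_mm) - min(depth_map_mm)
    spread = statistics.pstdev(depth_map_mm)
    return relief >= min_relief_mm and spread >= min_relief_mm / 4

# A photo held in front of the camera: almost constant depth.
photo = [400.0 + 0.5 * (i % 3) for i in range(100)]
# A crude "face": nose region ~30 mm closer than the cheeks.
face = [400.0 - (30.0 if 40 <= i < 60 else 0.0) for i in range(100)]

print(looks_three_dimensional(photo))  # False
print(looks_three_dimensional(face))   # True
```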

More speculation.

I am not planning on enabling it just yet.



Monday, September 18, 2017

Bluetooth Vulnerabilities

A bunch of Bluetooth vulnerabilities are being reported, some pretty nasty.

BlueBorne concerns us because of the medium by which it operates. Unlike the majority of attacks today, which rely on the internet, a BlueBorne attack spreads through the air. This works similarly to the two less extensive vulnerabilities discovered recently in a Broadcom Wi-Fi chip by Project Zero and Exodus. The vulnerabilities found in Wi-Fi chips affect only the peripherals of the device, and require another step to take control of the device. With BlueBorne, attackers can gain full control right from the start. Moreover, Bluetooth offers a wider attack surface than WiFi, one that is almost entirely unexplored by the research community and hence contains far more vulnerabilities.

Airborne attacks, unfortunately, provide a number of opportunities for the attacker. First, spreading through the air renders the attack much more contagious, and allows it to spread with minimum effort. Second, it allows the attack to bypass current security measures and remain undetected, as traditional methods do not protect from airborne threats. Airborne attacks can also allow hackers to penetrate secure internal networks which are "air gapped," meaning they are disconnected from any other network for protection. This can endanger industrial systems, government agencies, and critical infrastructure.

Finally, unlike traditional malware or attacks, the user does not have to click on a link or download a questionable file. No action by the user is necessary to enable the attack.

Fully patched Windows and iOS systems are protected; Linux patches are coming soon.


