Turn Bluetooth off on your Android device until you get a system update

GasBandit

Staff member
#1
Two weeks ago, a vulnerability dubbed "BlueBorne" was found in Bluetooth that allows phones to be taken over remotely, without pairing and regardless of whether the phone is in "discoverable" mode.

https://www.theverge.com/2017/9/12/16294904/bluetooth-hack-exploit-android-linux-blueborne

Apple and Microsoft have already patched the vulnerability, and Google has for its stock devices on every version from KitKat forward, but because so many Android devices run versions altered by their manufacturer or carrier, the fix takes longer to reach users. So if you haven't had a system update in the last week or two, turn your Bluetooth off and leave it off until you do.
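If you're not sure whether your last update included the fix, one rough way to check (on Android 6.0+, where the security patch level is exposed as a system property) is to read it over adb and compare it against the September 2017 bulletin that carried the BlueBorne fix. This is just a sketch, not an official tool; the helper names are mine, and it assumes adb is installed with USB debugging enabled on the phone:

```python
import subprocess
from datetime import date

def patch_level() -> date:
    # Reads the Android security patch level (e.g. "2017-09-05") over adb.
    # The ro.build.version.security_patch property only exists on 6.0+.
    out = subprocess.run(
        ["adb", "shell", "getprop", "ro.build.version.security_patch"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    year, month, day = (int(p) for p in out.split("-"))
    return date(year, month, day)

# Google shipped the BlueBorne fix in the September 2017 security bulletin,
# so a patch level of 2017-09-01 or later should include it.
BLUEBORNE_FIX = date(2017, 9, 1)

def looks_patched(level: date) -> bool:
    return level >= BLUEBORNE_FIX
```

Older devices (like anything still on KitKat) don't report that property at all, which is itself a pretty good sign you should leave Bluetooth off.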

This is particularly aggravating to me because I use a Verizon-branded LG G3 that I've refused system updates on ever since I had to restore a factory image to roll back the Lollipop update, which strangled my phone's performance to excruciating levels. So I've been back on KitKat 4.4.2 ever since, and I hadn't planned to update until I got a new phone. I use Bluetooth all the time in my car, and putting up with all this is going to be an irritating inconvenience.

Guess getting a new phone just got bumped up my priority list - and you can bet your ass I'm getting it straight from Google this time instead of Verizon.
 

GasBandit

Staff member
#3
Yep, we were already talking about this over in the Apple thread (yes I know you don't hang out there).

But you might also want to know about this recently-discovered Dirty COW thing.
...I'm sorry, did I say "recent"? It's actually been around for about 10 years, but people have only now started using it to make your life miserable.

--Patrick
That one seems less of a threat, given that you basically have to download 3rd-party apps from seedy websites to get it. The Bluetooth exploit, though, only requires BT to be turned on, and that's it.
 
#4
That one seems less of a threat, given that you basically have to download 3rd-party apps from seedy websites to get it.
I thought that was one of the whole reasons to root your phone...so that you could download 3rd-party apps of questionable provenance.

--Patrick
 
#5
Oh, and lest anyone accuse me of playing favorites:


Vulnerabilities...everyone's got 'em; what matters is how fast they get patched. The new motto for IS should be, "Don't be an Equifax."

--Patrick
 
#8
You don't need to root your phone to do that.
I went straight from flip to iPhone, so I'm used to hearing about jailbreaking. Nobody in my household has Android* so I still haven't delved into that ecosystem.

--Patrick
*who wants to do anything other than use it to make calls, that is.
 
#9
I thought that was one of the whole reasons to root your phone...so that you could download 3rd-party apps of questionable provenance.

--Patrick
For Apple, yes, but for Android you just need to enable certain permissions; it doesn't require a hack of any sort, just changing some settings in the system.
 
#10
What exactly do I have to wait on for an update? There was a Google system update in the app store a few days ago - can't remember what it was called.
 

GasBandit

Staff member
#11
What exactly do I have to wait on for an update? There was a Google system update in the app store a few days ago - can't remember what it was called.
That was probably it. If you've had a system update in the last couple weeks, you're probably good to go.
 
#13
I'm running an old iPod Touch stuck on iOS 6 as my car Bluetooth audio player; it can't be updated any further.
 
#14
You're probably not a high-profile target, then.
What are they going to do? Steal all your music?
Additionally, iOS doesn't really like doing file sharing over Bluetooth, only audio. So you're probably good.

--Patrick
 

figmentPez

Staff member
#15
The most frustrating thing about vulnerabilities like this is trying to figure out if your device is vulnerable, and if there's a patch. I have no idea how to figure out if my Moto G (1st-ish gen, but with LTE) has a patch available.

There should be fines for leaving shit like this unpatched. Not like "well, they're technically liable if someone sues them" (and I'm not even sure if they are), but "in the interest of the public good, and the health and safety of the nation, companies that don't take reasonable cybersecurity measures will be fined for their negligence."
 
#16
There should be fines for leaving shit like this unpatched. Not like "well, they're technically liable if someone sues them" (and I'm not even sure if they are), but "in the interest of the public good, and the health and safety of the nation, companies that don't take reasonable cybersecurity measures will be fined for their negligence."
While I understand the feelings and intent behind this, it would really open the door for a lot of problems. If something like this were implemented there'd have to be a whole slew of rules and limitations to make sure it doesn't have terrible side effects.

I carry errors and omissions insurance in case one of my designs causes a customer a problem that they could conceivably sue me in court for, but making a law which makes me liable for the efforts of hackers (and truly, no code is unhackable, so please don't suggest that developers should simply release perfect software in the first place) would have an enormous chilling effect on the industry.
 

figmentPez

Staff member
#17
While I understand the feelings and intent behind this, it would really open the door for a lot of problems. If something like this were implemented there'd have to be a whole slew of rules and limitations to make sure it doesn't have terrible side effects.

I carry errors and omissions insurance in case one of my designs causes a customer a problem that they could conceivably sue me in court for, but making a law which makes me liable for the efforts of hackers (and truly, no code is unhackable, so please don't suggest that developers should simply release perfect software in the first place) would have an enormous chilling effect on the industry.
There's a difference between being held accountable for the possible effect of every unknown flaw, and expecting companies to take reasonable measures once the flaws are known. "Ha! You had no idea this was an issue and now you'll pay" is very different from "You've been sitting on this known vulnerability for how long? And you never made any sort of fix available to your customers, despite the fact that you knew exploits were in the wild, and all the while it's caused untold amounts of harm to banking and other systems because of your negligence."
 
#18
There's a difference between being held accountable for the possible effect of every unknown flaw, and expecting companies to take reasonable measures once the flaws are known. "Ha! You had no idea this was an issue and now you'll pay" is very different from "You've been sitting on this known vulnerability for how long? And you never made any sort of fix available to your customers, despite the fact that you knew exploits were in the wild, and all the while it's caused untold amounts of harm to banking and other systems because of your negligence."
It's just a slippery slope. Let's say the device is a decade old: are we going to force the manufacturer, who isn't making money on it and hasn't for 5-8 years, to spend tens of thousands of dollars, minimum, making, testing, and releasing a security fix? For a consumer device? Do we draw the line at certain devices, or are we going to enforce this for every internet-connected fridge, toaster, and gadget?

Do we limit it to companies of a certain size?

Do we limit it only if there are at least 100,000 such devices in active use? How do we measure that? If we can't measure it, then are we forcing companies to submit patches when there may only be a thousand devices in use, but we don't know or can't tell?

Even if we agree that we should have a law (and at this point I'm not convinced that the damage is sufficient to require government regulation/oversight), the complexity is staggering to me.

And that's before we get into vendor complications. Let's say a car manufacturer puts a module in a vehicle, the vehicle is now 5 years old, the vendor of the module evaporated just after delivery (don't get me started on the shell games people play in the auto industry), and there are still thousands of them on the road.

The manufacturer isn't going to fix it. They're going to do the same thing that happened with the Prius* and simply disable the module and all associated features. (*I can't find it, but a few years ago an exploit was found in a recent-model but out-of-warranty Prius with its connected-vehicle module, and they simply disabled it. I don't recall the follow-up: whether they offered to replace it, made a fix, or if it remains disabled to this day.)

I'm not convinced that there's a good legal reason for the government to regulate/legislate this in the first place, though, particularly for consumer devices. Worst case, your ID was stolen, but that can (and does) happen outside of device hacks, and the banks and other parts of industry have established procedures to deal with it. Beyond that, it's a consumer device, not a safety-and-security issue. These aren't life-support or safety devices, and the industries where they are (transportation, medicine, military) are separately regulated such that code issues do have to be taken care of somehow, usually through recalls. The Toyota unintended-acceleration issue was dealt with through normal existing channels and regulation.

Adding additional regulation for consumer devices sounds good in a feel-good way, but it presents a terrible burden for industry, forcing them to pretend their consumer devices are somehow safety-critical, and thus to spend untold amounts of money developing to standards that are unrealistic and unnecessary.
 

figmentPez

Staff member
#19
I wish I had more time, but the worst-case scenario is NOT a single user's identity getting stolen. It's a group of devices taking down an entire 911 system for a major metropolitan area (smaller 911 systems have been taken down by accident). It's people's pacemakers getting messed up because their wireless communications contain the same bug as the Bluetooth in a phone. And, with increasingly "smart" homes, it's apartment complexes burning down because of a flaw in an oven's controller. The more connected our phones and devices become, the more important it is going to be to have some sort of consumer protection against companies that make devices that can devastate lives when they're compromised with malicious intent.

EDIT: The security of even consumer devices, if they can connect to the internet and/or phone network, is a public safety/health issue for the same reason that requiring food workers to wash their hands is a public safety/health issue.
 
#20
...taking down an entire 911 system...people's pacemakers getting messed up...apartment complexes burning down...
If it's a life support system (pacemaker), safety critical (oven/gas appliances, certain vehicle controls), or public safety system (911) then the existing mechanisms for recall already force manufacturers to solve the problem.

If it's your phone getting rooted because of a Bluetooth exploit, I feel bad for those affected, but I can't suggest the government should get involved. If attackers turn a thousand phones into a botnet, it will be resolved by the communications provider restricting packets and internet access until a fix is released.

In other words, for safety critical situations, the laws already exist that do what you want.

For non-safety/life-critical systems, either consumer pressure will exist, or the damage will start occurring and the affected parties will apply the necessary pressure, or the nature of the damage isn't sufficient to warrant a manufacturer fix and the consumers or those affected will find a workaround.

In years past (as with hard drives that didn't have a long lifespan), you'd typically find that the brand would lose customers.

The free market can't and won't fix everything, but I don't think the government needs to or should become involved in regulating consumer-level, non-safety- and non-life-critical consumer/business relationships.
 

figmentPez

Staff member
#21
If it's a life support system (pacemaker), safety critical (oven/gas appliances, certain vehicle controls), or public safety system (911) then the existing mechanisms for recall already force manufacturers to solve the problem.

If it's your phone getting rooted because of a Bluetooth exploit, I feel bad for those affected, but I can't suggest the government should get involved. If attackers turn a thousand phones into a botnet, it will be resolved by the communications provider restricting packets and internet access until a fix is released.

In other words, for safety critical situations, the laws already exist that do what you want.
But any phone exploit that allows a phone to be taken over completely allows for them to be used to mass dial 911.

I mean, right now we're pretty much relying on the fact that hackers are financially motivated and want to spam and steal identities more than they want to cause havoc.
 
#22
But any phone exploit that allows a phone to be taken over completely allows for them to be used to mass dial 911.

I mean, right now we're pretty much relying on the fact that hackers are financially motivated and want to spam and steal identities more than they want to cause havoc.
There are policies, procedures, and automated systems to detect such an attack and alert the appropriate system administrators, and today there are automated systems that figure out ways to limit the attack and distinguish good calls from bad ones.

Most critical public safety systems have a variety of methods and policies they follow that limit the possible extent of such attacks, shunt the attack when it happens, and contact those who can fix things so it can't continue to be a problem.

911 and other such systems have been attacked, and these defenses do work fairly well, though sometimes 911 is down for hours at a time if an attacker is particularly agile and presents multiple sequential attacks as each one is blocked.

The reality is that as consumers and attackers have gone digital, the systems under attack have also gone digital, and a lot can be done from behind a keyboard during an attack to reduce and ultimately eliminate it by discerning the attack vector and shunting it into a dead end so real calls can get through.
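For the curious, the kind of "shunting" I'm describing boils down to per-caller rate limiting. Here's a toy sketch (the `FloodFilter` class and its numbers are entirely made up, and real emergency systems are vastly more sophisticated) of a sliding-window filter that lets human-paced callers through while flagging an auto-dialer:

```python
import time
from collections import defaultdict, deque
from typing import Optional

class FloodFilter:
    """Sliding-window rate limiter: flags callers dialing far more often
    than a human plausibly would, so their traffic can be shunted aside
    while legitimate calls get through."""

    def __init__(self, max_calls: int = 3, window_s: float = 60.0):
        self.max_calls = max_calls
        self.window_s = window_s
        self.history = defaultdict(deque)  # caller id -> recent call timestamps

    def allow(self, caller: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        calls = self.history[caller]
        # Drop timestamps that have aged out of the window.
        while calls and now - calls[0] > self.window_s:
            calls.popleft()
        calls.append(now)
        return len(calls) <= self.max_calls
```

A bot hammering the line ten times a second blows past the limit immediately, while someone calling back a couple of times a minute never trips it; the hard part in practice is everything this sketch ignores, like spoofed caller IDs and attackers that pace themselves.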

I still assert that safety-critical systems are already handled well enough that additional government regulation isn't necessary, but my guess is we could continue trading examples forever, so we may just have to conclude that we disagree on whether additional government intervention is required.
 