Hacker News

It is both terrifying and amusing that rm'ing files in /sys can brick my Linux machine.

I find it even more worrisome that some compare this mistake to accidental clobbering of /dev/sd?.



Lesson learned: if you want to semi-permanently take a system offline, hope it has a bad EFI implementation and rm -rf /sys. The fact that a malicious actor can compromise your hardware via software like that is incredible.

That's an insane design decision and if I replicated that in my professional capacity designing heavy machinery I'd be rightly fired and sued to oblivion because the equivalent result is dead people. This is a basic case of the principle of safety-in-design.


Which is why the campaign for liberated firmware is so important. If motherboard manufacturers were committing work to a common project like libreboot then hundreds of eyes would be upon it and awful code that does this insanity would never enter official repos.

Linksys had this problem a few years back with their new line of "open source" routers - it took them months to clean up their awful internal coding styles to get patches accepted into DDWRT, and even then the patches were accepted on a compromise where DDWRT developers had to fix a lot of it themselves to make it less of a security, portability, and readability nightmare.

These hardware vendors at all levels - storage controllers, chipsets, radios, and more all have absolutely no QA on their code, and by being so extremely proprietary nobody can do anything about it; not enough people care to speak with their wallets to change these terrible habits.


> These hardware vendors at all levels - storage controllers, chipsets, radios, and more all have absolutely no QA on their code

Hardware vendors do have QA, but it's mainly about ensuring that things work, not about trying to break them in every possible way. Safety and security seem to be notoriously hard for people who have been taught how to make things work, but not how to make them fail.


Which is exactly why it would be so valuable to have that code in the open.

I know I'm repeating myself, but I still think the interactions between OpenWRT and the Chinese firmware vendor that was pushing Linksys firmware upstream are a valuable example of why open source is valuable in this context, even if you are not intimately involved in the development, testing, or inspection of such code. Public code by its nature invites more scrutiny, and it's harder to get people to accept something broken or poorly written when they can see just how bad it is.

If you want to develop awful coding habits, only work with people who never develop free software. If you want to have really good habits, work in a very popular free software community, because when your work is in the open like that and everyone is a volunteer nobody is going to put up with crap.


  > then hundreds of eyes would be upon it and awful code that
  > does this insanity would never enter official repos.
As the example of OpenSSH clearly shows.


Well, the insanity would rarely enter official releases.

There is no comparison between the bugginess of BIOSes and OpenSSH.


OpenSSH would not fit my definition of a popular project, which is exactly why it has become a security disaster. Another contributing problem is that C as a language is awful for writing secure or trustworthy code in, and that is the primary cause of most of OpenSSH's problems.

There are degrees of return on code visibility, though. Even a dozen competent developers could miss arcane buffer overflows or bad page execution issues in a large patch because the language is awful and lets you do crazy shit. That is one aspect of development quality that doesn't go away when you move from closed to open source.

But the best practices - consistent code style, documentation, reasonable variable names, reasonable line lengths, and the need to defend your contributions - are all products of open collaborative development processes.

I'd argue in many ways that the open nature of OpenSSH is why we have only had three (four?) major security vulnerabilities out of it in the last five years. It's a sixteen-year-old ANSI C codebase, so of course it's a security nightmare, but it is a lot less dangerous than it could have been - imagine having Heartbleed on a proprietary TLS implementation where developers could not immediately fix it or easily deploy the fix.


You can do exactly the same thing on Windows by calling SetFirmwareEnvironmentVariable. The problem is on the firmware side!


Yeh I wasn't laying blame on the *nix side of things either. No OS should be able to do this, because the OS shouldn't have that level of control over the system it's sitting upon. Firmware absolutely shouldn't fail into a broken state. If you hose some configuration and it crashes, it should revert to a known-good configuration, e.g. a factory-reset / fail-safe configuration.

You shouldn't be able to hose it completely except through special equipment, for example by connecting to system programming terminals on the motherboard with external hardware. The fact that a higher-level system can damage a lower-level system is just bad design.
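A minimal sketch of the fail-safe pattern described above, assuming a hypothetical firmware that keeps two checksummed config slots plus baked-in factory defaults (all names and the config contents here are illustrative, not any real firmware's format):

```python
import json
import zlib

# Hypothetical factory defaults: the fallback of last resort.
FACTORY_DEFAULTS = {"boot_order": ["disk", "usb"], "secure_boot": True}

def pack_config(config: dict) -> bytes:
    """Serialize a config with a trailing CRC32 so corruption is detectable."""
    payload = json.dumps(config, sort_keys=True).encode()
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def unpack_config(blob: bytes):
    """Return the config if its CRC checks out, else None (slot is corrupt)."""
    if len(blob) < 4:
        return None
    payload, crc = blob[:-4], blob[-4:]
    if zlib.crc32(payload).to_bytes(4, "little") != crc:
        return None
    return json.loads(payload)

def load_config(slot_a: bytes, slot_b: bytes) -> dict:
    """Prefer slot A, fall back to slot B; if both slots are hosed,
    revert to factory defaults rather than refusing to boot."""
    for blob in (slot_a, slot_b):
        config = unpack_config(blob)
        if config is not None:
            return config
    return dict(FACTORY_DEFAULTS)
```

The point is simply that no sequence of bad writes leaves the device unbootable: the worst case is a factory reset, which is exactly the behavior the comment above asks for.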


It's interesting to contrast this with Apple's solution to the same problem: El Capitan's rootless.

As of OS X 10.11, the live, everyday-use OS doesn't have write access to EFI variables. Instead, to fiddle with EFI vars (which happen to include the OS's kernel-module code-signing grant store, which is how people most often run into rootless) you have to reboot into the recovery partition.

In other words, instead of creating a custom BIOS setup as a special UEFI app with privileges that the OS never has, Apple has instead given OSX the equivalent of SysV runlevels—and then made EFI only writable in the single-user maintenance runlevel. Instead of transitioning between these runlevel-equivalents "online", you reboot between them; and instead of being modes of the same OS image, they're two distinct copies of the same OS. But the usage semantics are the same.

(The key to security here, if you're wondering, is that the recovery OS is a single solid image that's been code-signed as a whole, with the signer's pubkey kept in another EFI var. The live OS can't just be made to overwrite the recovery OS into something malicious, even though the live OS has full control of the disk it sits on and is responsible for replacing the recovery OS when it receives updates.)

Personally, I think something similar might be the best solution for Linux as well. People are suggesting something like a wrapper program, but a wrapper can still be used maliciously. It's far easier to secure a "maintenance mode" of the OS that must be rebooted into, and doesn't bring up the network; such a mode necessitates (remote virtual) console access to actually do what you want, rather than allowing you to simply trigger off a destructive EFI op over SSH.

This can still be automated; your automation just needs to be able to speak to the remote console. And tools like grub-install can still work; they just need one program on the live-image side and one on the recovery-mode side, where the live OS's grub-install just records your desired changes, sets the recovery-mode flag, and reboots; and where the recovery-mode grub-install agent reads the file, actually performs the op, unsets the flag, and reboots back.
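The stage/apply handoff described above could be sketched roughly like this; the state directory, file names, and the `apply_op` hook are all hypothetical placeholders for real reboot and firmware-writing machinery:

```python
import json
from pathlib import Path

def stage_efi_change(name: str, value: str, state_dir: Path) -> None:
    """Live-OS side: record the desired EFI change and set the
    recovery-mode flag (a real implementation would then reboot)."""
    state_dir.mkdir(parents=True, exist_ok=True)
    (state_dir / "pending.json").write_text(
        json.dumps({"var": name, "value": value}))
    (state_dir / "recovery-requested").touch()

def recovery_agent(state_dir: Path, apply_op) -> bool:
    """Recovery-OS side: perform the staged op, clear the flag, and
    (in a real system) reboot back. `apply_op` stands in for the code
    that actually writes the firmware variable."""
    pending = state_dir / "pending.json"
    flag = state_dir / "recovery-requested"
    if not flag.exists() or not pending.exists():
        return False  # nothing staged; boot straight back to the live OS
    op = json.loads(pending.read_text())
    apply_op(op["var"], op["value"])
    pending.unlink()
    flag.unlink()
    return True
```

The live OS never holds the capability to write firmware; it can only leave a note for the mode that does, which is the whole security argument.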


Well, it's still a solution to a problem which shouldn't exist in the first place. Before UEFI, x86 boxes were hard to brick unless you really knew what you were doing.


Tell that to the bloke who did it with the echo command in 2001:

* http://narkive.com/yG8yWfLt.1


I found the one that described removing the "backup battery" under the wristrest. Once I removed that, and replaced it... everything came back together, and the laptop booted (with generic factory settings).

Difficulty in getting to the battery aside, he just did a regular CMOS reset, the standard technique for getting otherwise unusable systems back to a good state.


Sorry, I forgot about ACPI :)

Now that you posted this, I think I recall one friend telling me that hibernation killed his laptop. But this was over 10 years ago and I only know about one such incident.

OTOH, what UEFI gave us is basically a portable and convenient API to brick any machine from any OS.


Dangerous tasks should have the most safety interlocks, but they shouldn't demand so much manual attention that deployments become hard to automate. This edge-case functionality may still be useful for self-destructing / remote-bricking sensitive embedded devices.

    efidestructivecmd opts... --really-brick-myself-and-catch-fire  # fire optional
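A sketch of what that interlock might look like, using the hypothetical `efidestructivecmd` from the joke above: the destructive path is simply refused unless the operator spells out the consequences on the command line, which is still perfectly scriptable.

```python
import argparse

def parse_destructive_cli(argv: list) -> argparse.Namespace:
    """Safety-interlock sketch: refuse the destructive operation unless
    the acknowledgement flag is explicitly present in argv."""
    parser = argparse.ArgumentParser(prog="efidestructivecmd")
    parser.add_argument("--var", required=True,
                        help="firmware variable to clobber")
    parser.add_argument("--really-brick-myself-and-catch-fire",
                        action="store_true", dest="confirmed",
                        help="required acknowledgement for destructive ops")
    args = parser.parse_args(argv)
    if not args.confirmed:
        parser.error("refusing destructive op without explicit acknowledgement")
    return args
```

Automation can pass the flag deliberately; what it can't do is stumble into the destructive path by accident, the way `rm -rf` stumbled into efivarfs.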


If you call SetFirmwareEnvironmentVariable, then you reasonably expect firmware memory to be updated. You do not expect any firmware to be modified by "rm -rf /". So the efivarfs filesystem interface is the problem. It violates the principle of least astonishment. In Unix we say that "everything is a file", but that is false; EFI variables definitely aren't files.
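They aren't even plain data when you read them: each efivarfs entry starts with a 4-byte little-endian attribute bitmask (bits defined by the UEFI spec), with the variable payload after it. A rough decoder, as a sketch of how un-file-like these "files" are:

```python
import struct

# UEFI variable attribute bits, per the UEFI specification.
EFI_VARIABLE_NON_VOLATILE = 0x00000001
EFI_VARIABLE_BOOTSERVICE_ACCESS = 0x00000002
EFI_VARIABLE_RUNTIME_ACCESS = 0x00000004

def parse_efivar(raw: bytes):
    """Split an efivarfs file's content into (attributes, data).
    The first 4 bytes are an attribute word, not variable data."""
    if len(raw) < 4:
        raise ValueError("efivar blob too short to hold the attribute word")
    (attrs,) = struct.unpack_from("<I", raw)
    return attrs, raw[4:]
```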


> The fact that a malicious actor can compromise your hardware via software like that is incredible.

Why is it incredible? It's no news that you can flash various things if you've got root.


Fun anecdote: a while back I was installing Windows onto a box with Debian. When I needed to get GRUB set up again, efibootmgr was throwing an inscrutable error when installing the boot loader, but I had no issues manually booting it from GRUB on a USB.

Ended up being the case that the EFI pstore was filled (half full?) with Linux crash dumps from before I ironed out some OC stability issues. Had to manually mount it and then delete files named "dump-type0-" from the BIOS NVRAM to resolve the issue, which was pretty fun.

Something along these lines: https://bugzilla.redhat.com/show_bug.cgi?id=947142
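A small sketch of locating those crash-dump entries under a mounted pstore; the mount point varies by setup, and `dump-type0-*` matches the file prefix from the anecdote above:

```python
from pathlib import Path

def find_pstore_dumps(mount_point: str) -> list:
    """List EFI pstore crash-dump entries (named dump-type0-*) under a
    mounted pstore filesystem; returns [] if the mount point is absent.
    Deleting these frees NVRAM space; leave everything else alone."""
    root = Path(mount_point)
    if not root.is_dir():
        return []
    return sorted(root.glob("dump-type0-*"))
```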



More precisely, rm'ing files as root in /sys can brick certain machines with broken UEFI implementations.



