Hacker News | eemil's comments

Maybe I'm in the minority... but this seems like an extremely compelling offering for certain use-cases. Not for enterprises, but for individuals and small businesses.

My off-site backup is a ThinkPad X230 with a 1 TB HDD. It's currently at my friend's house, and I access it with Tailscale. 7 EUR/month to colocate this in a datacenter with stable (and fast) internet + power seems like a pretty good deal.

I can understand some of the concerns with user-provided hardware. Maybe a better model would be for CoLaptop to offer hardware themselves. This would allow them to standardize on a few models, which opens up many possible improvements such as central DC power, power-efficient BIOS settings, enclosures with cooling ducts, etc. They can still follow the "old laptop as a server" model by buying off-lease laptops from the corporate world.


I mean, we literally did this in one of my previous places. We took all the old laptops that were to be junked by IT and used them as a Selenium test farm. We saved like $100k per month on the AWS bill, at basically just the cost of electricity.

If all the machines were running Windows, the difference would've been even more drastic.

What I don't get is that we have these autoscaling technologies that allow software to be fault-tolerant to hardware failure, yet companies still insist on buying expensive server-grade HW for everything.


Been through this recently in a fairly large enterprise.

We have some in-house software which runs in k8s. Total throughput peaks at about 1 Mbit/s of control traffic - it's controlling some other devices which are on dedicated hardware. Total of 24 GB of RAM.

The software team say it needs to run across 3 different servers for resilience purposes.

The VM team want to use neutronix as their VM platform, so they can live-migrate VMs from one host to another.

They insist on 25 Gbit networking, and for resilience purposes that needs to be MLAG'd.

The network team also have to have multiple switches and routers, again for resilience.

So rather than having 3 $1000 laptops running bare metal kubes hanging off a pair of $500 1G switches eating maybe 200W, we have a $140k BOM sucking up 2kW.

When something goes wrong, all those layers of resilience will no doubt fight each other. The hardware drops, so the VM freezes as it's restored onto another host, so k8s moves the workloads, then the VM comes back and k8s gets confused (maybe? I don't know how k8s works).

It's all needlessly overspecced, costing 30 times as much as it should.

But from each individual team's perspective it makes sense. They don't want to be blamed if it doesn't work, and they're not the ones who have to find the money. It's different departments.


One of my favorite bits of hardware is a UPS. I’ve played with several over the years, from fancy server-grade rack-mount APC stuff to inexpensive edge stuff. Without exception, downtime is increased by use of a UPS. I used to plug a server with redundant PSUs into the UPS and the wall so it could ride out UPS glitches.

Even today, a UPS that turns itself back on after the power goes out long enough to drain the battery and then comes back is somewhat exotic. Amusingly, even the new UniFi UPSes, which are clearly meant to be shoved in a closet somewhere, apparently turn off and stay off when the battery drains, according to forum posts. There are no official docs, of course.


Sounds like crappy UPSes. Even the cheap old used eBay Eaton UPSes I have in my homelab have a setting for "Auto restart" and the factory default setting is "enabled".

But even rackmount UPSes are more of an "edge" sort of solution. A data center UPS takes up at least a room.


I assume that datacenter UPSes are better, but I've never used one except as a consumer of its output.

But I’ve had problems with UPSes that advertise auto-restart but don’t actually ship with it enabled. And that fancy APC unit was sold by fancy Dell sales people and supported directly by real humans at APC, and it would still regularly get mad, start beeping, and turn off its output despite its battery being fully charged and the upstream power being just fine (and APC’s techs were never able to figure it out either).


> I assume that datacenter UPSes are better [...]

I don't know about specific datacenter models, but in our colocation facility there are humans available 24/7. So the UPS might not start after a failure, but there's a human around to figure it out.


Most (all?) decent datacenters also have generators on site, and the intent is that the UPS will never run out of charge. So the fully-discharged case is an error and it might be intentional to require intervention to recover.


Yeah, some people treat UPSes as "backup power" but that's not really what they're intended for. Their intended purpose is to bridge the gap during interruptions... either to an alternative power source, or to a powered-off state.


Sure, but when you stick a UPS in the closet to power your network or security cameras or whatever for a little while if there is a power interruption, you expect:

a) If the power is out too long for your UPS (or you have solar and batteries and they discharge overnight or whatever) that the system will turn back on when the power recovers, and

b) You will not have extra bonus outages just because the UPS is in a bad mood.


I completely agree with B. But alas, people love buying shitty cheap UPSes.

But A is along the lines of the misconception that I'm referring to... There should be no such thing as "the power being out too long for your UPS". A UPS isn't there to give you a little while to ignore the problem, it's there to give you time to address it. Either by switching to another source of power, or to power off the equipment.

Now, the reason that every UPS that supports auto-restart has it as a configurable option is that you often don't want to do this, for many reasons, e.g.:

* a low SOC battery could not guarantee a minimum runtime for safe shutdown during a repeated outage

* a catastrophic failure (because the battery shouldn't be dead) could be an indication of other issues that need to be addressed before power on

* powering on the equipment may require staggering to prevent inrush current overload

The whole use case of "I'm using the UPS to run my equipment during an outage" is kind of an abuse of their purpose. It's commonly done, and I've done it myself. But it's not what they're for.

But also, if you want a UPS that auto-restarts, they exist -- but you get what you pay for.


Some of these are IMO a bit silly:

> a low SOC battery could not guarantee a minimum runtime for safe shutdown during a repeated outage

A lot of devices are unconditionally safe to shut down. Think network equipment, signs, exit lights, and well-designed computers.

> a catastrophic failure (because the battery shouldn't be dead) could be an indication of other issues that need to be addressed before power on

This is such a weird presumption. Power outages happen. Long power outages happen. Fancy management software that triggers a controlled shutdown when the SOC is low might still leave nonzero remaining load. In fact, if you have a load that uses a UPS to trigger a controlled shutdown, it’s almost definitional that a controlled shutdown is not a catastrophe and that the system should turn back on eventually.

All of your points are valid for serious datacenter gear and even for large server closets, but for small systems I think they don’t apply to most users, and I’m talking about smaller UPSes.


> > a low SOC battery could not guarantee a minimum runtime for safe shutdown during a repeated outage

> A lot of devices are unconditionally safe to shut down.

Yeah, but that doesn't mean you want to expose them to brownout conditions when your UPS is depleted. If the power is continuing to flip on and off, it's better to just leave it off if you don't have the battery to prevent even short interruptions. A good UPS can do this automatically for you. A cheap one will just stay off and let you respond to the outage.

> This is such a weird presumption.

It wasn't a presumption I was making for all users -- but an example of why some users might not want auto-restart as a feature. Of course, if you want auto-restart as a feature, you can buy a UPS that has it as a feature and turn it on.

> they don’t apply to most users, and I’m talking about smaller UPSes.

Yeah, I know the situation: Someone has a network closet on a budget with a UPS they've sized to get them a few minutes of runtime. They put a UPS on the BOM because it checks a box. So they buy a low-end UPS that either doesn't have the feature, or it doesn't work right.

The solution is just to buy the right UPS for the thing they were trying to do... and test it.


The funniest thing about huge enterprises is that they often have processes so convoluted and restrictive for everything that getting stuff done by the book is basically impossible, so people get creative within the limitations, and we often end up with the sketchiest solutions in existence.

I hope the words 'web server hosted in Excel VBA' illustrate the magnitude of horrors that can emerge in these situations.


A Raspberry Pi on a network-controlled power supply to rebroadcast UDP broadcast traffic across subnets.
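
Presumably something like this at its core -- a minimal Python sketch of that kind of relay (port and addresses are made-up examples; the real thing needs an interface on each subnet and some care not to re-ingest its own packets):

  import socket

  LISTEN_PORT = 5005                          # made-up port
  TARGET = ("192.168.2.255", 5005)            # broadcast address of the other subnet

  rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  rx.bind(("", LISTEN_PORT))                  # receive broadcasts on the local subnet

  tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  tx.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

  while True:
      data, _ = rx.recvfrom(65535)
      tx.sendto(data, TARGET)                 # re-emit on the other subnet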


I saw an entire physical switch configured for bridging VLANs. It was even labeled as such. 802.1Q is hard and confusing if you don't know what you're doing.


Which is exactly why having this split across different departments makes no sense.

One infra team provides the entire platform.

Any other approach and you're dicking around.


Enterprise hardware comes with companies that your company can call for support when things go sideways. If they're using a rack full of 5-year-old ThinkPads, they're on their own if something breaks.


I believe they are referring to the dumpster support model. The hardware is so cheap that, if it fails, you toss it in a dumpster and buy more by the gross. Using Kubernetes to spread loads across your less reliable nodes ensures high availability. Sometimes this can be even more reliable because you are regularly testing your recovery and backup features and your hardware is more varied.

The downside is that if some piece of firmware or hardware has a vulnerability you have a larger attack surface.


There's a ton of out-of-support enterprise gear racked up in data centers. It can be done if you have a plan to handle failures.

But that's still a lot easier than managing laptops, which are unwieldy in a DC for a lot of other reasons.


We didn't have support, and we didn't need it, as the hardware was essentially EOL and probably would've sold for like 20% of the new price. We just chucked Selenium Grid on them, locked them in the storage room, and if they died, they died (they didn't die a lot though, which is surprising, as we had quite a few cheap, sketchy ones in there as well).


I can deconstruct my workflow to the point where the benefits of plugging outdated hardware into the project are calculable. Info, transformation, etc. that I don't need in near real time feels like its cost is trending towards the price of electricity.

Since I've been looking at this situation from a resource point of view for a bit, I see obvious savings in slowing down certain accepted processes. For example, an entity that continuously updates needs to be continuously scraped, while an entity that publishes once a day only needs to be hit once a day.


Seems like they'd have to find another 5-year-old ThinkPad.


> What I don't get is that we have these autoscaling technologies that allow software to be fault-tolerant to hardware failure, yet companies still insist on buying expensive server-grade HW for everything.

Simple: the cost of managing the hardware scales with its heterogeneity and unreliability. Even just dealing with the dozens of different form factors (air vent placement!) and power units of laptops would be a big headache.


> We saved like $100k per month on the AWS bill

Did you also compare the bill to places that are not AWS, not Azure, and not GCP?


I would agree with you about autoscaling if ECC was enabled in every consumer computer :'/


I'd like to be able to do something similar, but the old batteries in these things seem like a point of catastrophic failure. How is that dealt with?


Remove the batteries. Datacenters have redundant power and UPS.


If it's an offsite backup, you would deal with it like you would any DR site -- either plan for yet another backup, or presume that it's unlikely that both the primary and the replica would go down simultaneously and accept the risk.


Where is the difference from paying 7 USD per month to get some gigabytes at whatever online hosting provider?


Same here, I've found a single (not too big) monitor to be best for ergonomics.

Still keep a second monitor around, but it's exclusively for screen sharing. Speaking of, having a dedicated monitor for sharing is really nice:

- It can have a standard resolution and aspect ratio (1080p), which is perfect for sharing

- It is a clean slate. I only share stuff I consciously move to that monitor. No need to clear my screen or burden my colleagues with unrelated windows in our call.

- Yes app sharing exists, but screen sharing is just more reliable and works better for sharing multiple things sequentially/simultaneously.


Would be nice if you could buy a Macbook with a proper on-site warranty.

Dell, Lenovo, HP will gladly send a technician to your house, and their NBD warranties cost about the same as AppleCare. And they don't care if you're an enterprise or an individual buying one measly laptop.


They're cheap enough and Apple stores ubiquitous enough that you just go and replace it as needed, and send the now-defunct one in for repair.


I want to switch to Roon, but the lack of a web client (let alone a native linux client!) makes it a total dead end.


Bit of a lesser-known feature, but tree can output HTML. I've used tree -H to generate directory listings more than once.
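
For anyone who wants to script it, a minimal sketch (Python used only as a wrapper here; it assumes tree is installed, and index.html is just an example output name):

  import subprocess

  # -H sets the base href used for the generated links; -o writes the HTML listing to a file.
  subprocess.run(["tree", "-H", ".", "-o", "index.html"], check=True)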


I use tree almost every single day and I never realized this. Thank you so much for this wonderful factoid, which has simplified my life immensely, seriously. Going to also adopt a mental note “rtfm||gtfo, ffs.”


WAT??? TIL!!! Thank you. Also thank you baker and moore and rocher and sesser and tokoro, you devils you.

  tree v2.1.1 © 1996 - 2023 by Steve Baker and Thomas Moore
  HTML output hacked and copyleft © 1998 by Francesc Rocher
  JSON output hacked and copyleft © 2014 by Florian Sesser
  Charsets / OS/2 support © 2001 by Kyosuke Tokoro


What a brilliant, simple solution. This way each segment in the LED strip has an equally long current path, and should have identical voltage/brightness.

---

That being said, 20-50 m is a really long run even with 24 V LEDs. Even using this trick, you'll run into significant voltage drop and heat in the LED strip's copper traces, since they're only so thick. There's a reason why manufacturers specify a maximum length. I would check the datasheet and split the strip into multiple segments depending on this value. Maybe there are some LED strips designed for this use case, with an even higher voltage and/or thicker traces.
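
To put rough numbers on it, here's a back-of-the-envelope estimate in Python. Every figure is an assumption for illustration, not from any real datasheet:

  # Hypothetical 10 m, 24 V strip fed from one end; all figures are assumptions.
  length_m = 10.0
  power_w_per_m = 10.0                    # strip power draw per metre
  trace_ohm_per_m = 0.3                   # combined +/- copper trace resistance

  current_a = length_m * power_w_per_m / 24.0         # ~4.2 A entering the strip
  # With a uniformly distributed load, the far-end drop is roughly half of
  # "total current through total trace resistance":
  v_drop = 0.5 * current_a * trace_ohm_per_m * length_m
  print(f"~{v_drop:.1f} V of drop on a 24 V supply")  # ~6.3 V

Even with made-up numbers in that ballpark, the far end of a single long run sits well below spec, which is why splitting into separately fed segments matters.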


If you're going to do a phone-width camera bump, at least make it flat so I can put my phone down without it wobbling. Apple's bump on a bump is the worst of both worlds.


Even if the patents are only valid in China, this is going to hurt Western companies a lot. If you're manufacturing a product in China, you'll need to either:

1. Pay the patent trolls, giving them power and hurting your margins

2. Move manufacturing to a more expensive, less competitive country

In the long run, you could argue that point 2 will lead to domestic manufacturing, which everyone wants. But unless you can find a way to make these companies actually competitive (e.g. tariffs on Chinese printers), I think the more likely scenario is that these hamstrung companies will wither and go out of business.


This simply isn't true. Where I live, every major operator offers multi-SIM, i.e. two (e)SIMs with the same number. It's primarily used for smartwatches, but they support phones as well.


Do dumb phones accept eSIMs though? Usually they require the physical card.


Every time I learn a new Bash trick or quirk, it just pushes me further towards PowerShell and Python for system administration.

Bash scripts are so hacky. With any other language, my thought process is "what's the syntax again? Oh right..." but with Bash it's "how can I do this and avoid shooting myself in the foot?" when doing anything moderately complex like a for loop.
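
For example (a hypothetical task, just to illustrate the point): looping over files with spaces in their names is one unquoted variable away from disaster in Bash, whereas the Python version has no word-splitting or glob surprises:

  from pathlib import Path

  # Rename every .log file in ./logs to .log.bak -- no word splitting, no
  # surprises if a filename contains spaces or glob characters.
  for path in list(Path("logs").glob("*.log")):
      path.rename(path.with_name(path.name + ".bak"))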

