ethan_smith's comments | Hacker News

Your skepticism is well placed. Every time a new quantization or compression technique drops, the immediate response is to just scale up context length or run a bigger model to fill whatever headroom was freed up. It's Jevons paradox applied to VRAM - efficiency gains get eaten by increased usage almost immediately.

A small blocking `<script>` in the `<head>` that reads the saved preference from localStorage and sets a class on `<html>` before any rendering happens is the standard approach. You can also set `<meta name="color-scheme" content="dark light">` which tells the browser to use the OS preference for the initial paint, covering the default case without any JS at all.
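A minimal sketch of that pattern (the `theme` localStorage key and the `dark` class name are placeholders, not a standard):

```html
<head>
  <meta name="color-scheme" content="dark light">
  <script>
    // Runs synchronously before first paint, so there's no flash of the
    // wrong theme. Falls back to the OS preference when nothing is saved.
    try {
      var saved = localStorage.getItem('theme');
      if (saved === 'dark' ||
          (saved === null &&
           matchMedia('(prefers-color-scheme: dark)').matches)) {
        document.documentElement.classList.add('dark');
      }
    } catch (e) { /* localStorage may be unavailable (private mode, etc.) */ }
  </script>
</head>
```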

That's still after the server's response arrives; they're talking about the blank browser page shown before anything comes back.

This is kind of the exact thing the article is about though. They're not "failing to understand" costs - they just have different context. Your job is to help them make informed tradeoffs, not to expect them to already know what things cost before asking.

It's not possible to make everyone understand nuclear physics; there's a certain threshold of cognitive skill/motivation required for that.

The people involved in commissioning and funding nuclear power plants don't understand nuclear physics either.

The customer doesn't need to understand how the solution works, as long as they can understand that it would solve their problem (in the case of the power plant: producing "clean" energy) and any potential drawbacks or limitations (in the case of the power plant: the waste byproduct).

The point here is that as a "tech person", it's your job to help the customer understand the cost of what they're asking, and come up with a satisfactory solution based on your understanding of their needs.


The ads are honestly one of the best parts to read now. You can trace the entire trajectory of the PC industry through them - watching prices drop, new categories emerge, companies appear and vanish. It's like an economic fossil record of personal computing.

This is essentially ULP (units in the last place) comparison, and it's a solid approach. One gotcha: IEEE 754 floats have separate representations for +0 and -0, so values straddling zero (like 1e-45 and -1e-45) will look maximally far apart as integers even though they're nearly equal. You need to handle the sign bit specially.

There's another gotcha. Consider positive, normal x and y where ulp(y) != ulp(x). Bitwise comparison, regardless of tolerance, will consider x to be far from y, even though they might be adjacent numbers, e.g. if y = x+ulp(x) but y is a power of 2.

This case actually works because for finite numbers of a given sign, the integer bit representations are monotonic with the value due to the placement of the exponent and mantissa fields and the implicit mantissa bit. For instance, 1.0 in IEEE float is 0x3F800000, and the next representable value below it is 0x3F7FFFFF.

Signed zero and the sign-magnitude representation are more of an issue, but can be resolved by XORing the sign bit into the mantissa and exponent fields, flipping the negative range. This places -0 adjacent to 0, which is typically enough, and can be fixed up for minimal additional cost (another subtract).


I interpreted OP's "bit-cast to integer, strip few least significant bits and then compare for equality" message as suggesting this kind of comparison (Go):

  func equiv(x, y float32, ignoreBits int) bool {
      mask := uint32(0xFFFFFFFF) << ignoreBits
      xi, yi := math.Float32bits(x), math.Float32bits(y)
      return xi&mask == yi&mask
  }
with the sensitivity controlled by ignoreBits, higher values being less sensitive.

Supposing y is 1.0 and x is the predecessor of 1.0, the smallest value of ignoreBits for which equiv would return true is 24.

But a worst case example is found at the very next power of 2, 2.0 (bitwise 0x40000000), whose predecessor is quite different (bitwise 0x3FFFFFFF). In this case, you'd have to set ignoreBits to 31, and thus equivalence here is no better than checking that the two numbers have the same sign.


Yeah, that's effectively quantization, which will not work for general tolerance checks where you'd convert float similarity to int similarity.

There are cases where the quantization method is useful, hashing/binning floats being an example. Standard similarity checks don't work there because of lack of transitivity. But that's fundamentally a different operation than is-similar.
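The lack of transitivity is easy to see with a plain absolute-tolerance check (toy values of mine):

```go
package main

import (
	"fmt"
	"math"
)

// similar is a plain absolute-tolerance check. It is not transitive,
// which is why it can't define the equality needed for hashing/binning.
func similar(a, b, tol float64) bool {
	return math.Abs(a-b) <= tol
}

func main() {
	a, b, c := 0.0, 0.9, 1.8
	fmt.Println(similar(a, b, 1.0)) // true
	fmt.Println(similar(b, c, 1.0)) // true
	fmt.Println(similar(a, c, 1.0)) // false: a~b and b~c, but not a~c
}
```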


I don't think this is true. Modulo the sign bit, the "next float" operator is equivalent to incrementing the bit pattern as an integer.

Sure, but that operator can propagate a carry all the way to the most significant bit, so a check for bitwise equality after "strip[ping] few least significant bits" will yield false in some cases. The pathologically worst case for single precision, for example, is illustrated by the value 2.0 (bitwise 0x40000000) and its predecessor, which differ in all bits except the sign.

Yes, exactly - same Germanic root. "Fast" in Scandinavian languages means firm/fixed/stuck, which is also the original meaning in English (as in "hold fast", "steadfast", "fasten"). The "quick" meaning in English is actually the newer one, derived from the idea of being "stuck" on a course.

The app was already built against the S3 API when it used cloud storage. Keeping that interface means the code doesn't change - you just point it at a local S3-compatible gateway instead of AWS/DO. Makes it trivial to switch back or move providers if needed.

The OCP CAD Viewer extension for VS Code (works with both CadQuery and build123d) gets partway there - you can click on faces/edges in the 3D view and it shows you the selection info you'd need for your code. It's not full "click to generate code" but it helps a lot with the "keeping geometry in my head" problem. Still a long way from the OnShape FeatureScript model where GUI and code are truly bidirectional though.

The mainframe/PC analogy is spot on. And the hardware floor keeps dropping - you can grab a mini PC with 32-64GB RAM for a few hundred bucks and run surprisingly capable quantized models locally. Something like https://terminalbytes.com/best-mini-pcs-for-home-lab-2025/ shows the kind of hardware that's now available at consumer prices. The "scarcity" framing only makes sense if you assume everyone needs frontier-tier models for everything.

This matches what I've seen too. Mobile carriers have been way ahead on IPv6 - T-Mobile in the US has been IPv6-only with NAT64 for years. The weekend pattern is pretty much a smoking gun for mobile being the driver. It also explains why the Google metric (which skews consumer) looks so much better than the Wikipedia numbers someone else posted (35% IPv6) or the server-side adoption stats from Common Crawl.
