Hacker News | purerandomness's comments

In PHP, an established tool is adding GrumPHP [0] to your dependencies.

It will then handle git hooks on each commit via composer script by default (but can be omitted per commit).

[0] https://github.com/phpro/grumphp
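
As a sketch of what that looks like in practice (the task names below are illustrative examples, not a recommended set — see the GrumPHP docs for the full task list), a minimal `grumphp.yml` might be:

```yaml
# grumphp.yml — every configured task runs as a git pre-commit hook
grumphp:
    tasks:
        phpcs: ~
        phpunit: ~
```

And the "can be omitted per commit" part is just git's standard hook bypass: `git commit --no-verify`.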


IPv6 will never make it. Maybe IPv8 [0], which is what IPv6 actually should have looked like:

> 1.1.1.1.1.1.1.1

[0] https://www.ietf.org/archive/id/draft-thain-ipv8-00.html


Why do people keep proposing alternatives to IPv6 that are no easier than IPv6 but still require the whole world to start the deployment over from 0%?

I'd say it's either because they're just having fun, or because they're dumb.

For observers, this draft was posted to HN earlier but quickly flagged and removed because the linked "IPv8" draft is absolute bunk.

See the removed thread for details: https://news.ycombinator.com/item?id=47788857


Having read that thread, I guess one of the small upsides of the world I live in is that "FIFA Peace Prize" is now available as a joke award reference. FIFA really hit it out of the park there in a way that even their normal legendary levels of corruption couldn't imagine.

Edited: In hindsight I notice that "hit it out of the park" is the wrong sport metaphor for FIFA, but I stand by it anyway.


> Edited: In hindsight I notice that "hit it out of the park" is the wrong sport metaphor for FIFA, but I stand by it anyway.

For future reference, you can use: "knocked it into the top corner", "put it in the back of the net" or "smashed it past the keeper". Not a native football-talker, but hang out too much with a few.


"Back of the net" doesn't feel the same to me even though (I learn after reading far too much about a sport I do not play) "Out of the park" is basically the same thing.

In my mind "out of the park" had meant the ball leaves the actual stadium but in fact (I read) "the park" in this context is actually the field of play and so "out of the park" represents in fact the vast majority of home runs and not the over-achievement I had imagined.

So TIL but thanks for the suggestions.


True, "back of the net" is more "someone kicked the ball really hard and it hit the back of the net really hard" instead of "the ball came across the goal line" which can be very different, so in my mind that's as close to "out of the park" as you can get in soccer :)

Nice idea. I always wondered why IPv6 went so ambitious with the addressing.

One of the craziest aspects of IPv6 implementation is the reverse DNS lookups.

IPv6 uses ip6.arpa and segments each little nybble into a subdomain!

https://en.wikipedia.org/wiki/Reverse_DNS_lookup#IPv6_revers...

This means there are always 32 nibble labels in a reverse-IPv6 name, and there are no shortcuts or macros to overcome this! That means if you wish to assign a single name to an address inside a legitimate /64 network prefix, you must populate a zone with 64 bits' worth of nibble labels. It is an absurd non-solution. This never should've been allowed to happen, and it will basically mean that ISPs abandon reverse DNS entirely when they migrate to IPv6 implementations.


  $ dig -x 2606:7100:1:67::26 | grep PTR
  ;6.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.7.6.0.0.1.0.0.0.0.0.1.7.6.0.6.2.ip6.arpa. IN PTR
Run this, then copy/paste the output into your zone file. Remove the ; and add "example.com." or whatever to the end.

I agree it's a pain to read, mostly because DNS addresses are written backwards, but an "absurd non-solution"? For a set of instructions that don't even depend on the format of the record (they work for v4 too), and which I could describe in one line in a HN comment?

If this is the craziest part of v6 then it must be incredibly well designed overall.


It is a pretty nice design, partly as a result of the fact that we've got a working system to look at (IPv4) and we have a lot more eyeballs "these days" (when IPv6 was designed, so, decades ago now) than when the Internet Protocol was a new idea.

I think perhaps the person you're responding to imagines that somehow DNS mandates a very naive implementation and so this behaviour would be incredibly expensive. The sort of person who sees a flip clock and imagines it needs 1440 different faces not 84 (or in some cases 72) because they haven't realised 12:34 and 12:35 simply use the same hour face.


“copy paste the output” is your solution? You think this somehow scales to manage entire networks like this with dynamic addressing? Do you perceive a network admin as a monkey who copy-pastes things all day?

This is exactly the absurd non-solution I am referring to, and it seems like if someone dismisses this with “one line instruction is all u need lol” they cannot even comprehend the scale at which real life operates.


Copying and pasting was just my attempt to demonstrate how simple a v6 rDNS record is to add. If you were interested in hiring me to write a solution for your ISP, that's fine, but you can't seriously expect random people to do it for you for free in a HN comment.

It should be pretty obvious that a script can generate these records from the forward records or from any other source of IPs/hosts, with no per-address effort needed on the part of the network admins.
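
As a minimal sketch of such a script (Python's stdlib `ipaddress` module already knows the nibble format; the function name and hostname are my own examples):

```python
import ipaddress

def ptr_record(addr: str, name: str) -> str:
    """Build a zone-file PTR line for an IPv4 or IPv6 address."""
    # reverse_pointer handles both families: nibble labels under ip6.arpa
    # for IPv6, octet labels under in-addr.arpa for IPv4.
    rev = ipaddress.ip_address(addr).reverse_pointer
    return f"{rev}. IN PTR {name}"

print(ptr_record("2606:7100:1:67::26", "example.com."))
print(ptr_record("192.0.2.1", "host.example.com."))
```

The first line it prints is exactly the record from the `dig` output above, minus the leading `;`.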


Again, absolutely blind to the management of these things at scale. Yeah, I don't rightly care about "how easy it is" to generate them. You can't even comprehend or convey the massive number of records and zones that are involved in managing a network of devices that all require dynamic updates to reverse-DNS and add/update/remove device addresses on a regular basis.

DNS is a distributed database system, and so the challenge is not cramming in data with a brainless script, but managing how that data is distributed and accessed by thousands or millions of peer servers, caches, and clients worldwide.

IPv4 reverse-DNS was quite simple when it was broken on octet boundaries and there were only four of those boundaries in total. But even then, ISPs could often not be arsed to put the right data in there. Some left it blank and some waited until they were forced, by strict requirements that said reverse must match forward DNS in many cases.

I have never found any user-accessible software, not on any Linux distribution or on any cloud service, that would permit an ordinary consumer to manage even a /24 IPv4 network's reverse-DNS at scale, or programmatically, as opposed to by-hand "copy paste" as has been so condescendingly suggested here. There are plenty of hosted DNS providers, and there are plenty of monkey-brain Dashboard interfaces where you can pound out one A record at a time. But there was nothing to deal with dynamic addressing or DNS databases at scale. That's why IPv6's reverse DNS remains an absurd non-solution.


So... how many records and zones? I'm pretty sure I could convey it if I could work out what you were talking about.

You went from "you can't even comprehend or convey the massive number of records and zones that are involved" to one v4 /24, managed "at scale" but by an ordinary consumer, who you expect to be capable of programming. This is a bit all over the place.

It's not any harder to deal with v6 reverse DNS than it is v4. In fact, making every reverse label 4 bits instead of 8, combined with v6 being much bigger than v4, makes rDNS much easier to deal with in v6 because you can generally delegate reverse zones on exactly the same boundaries that you delegate the corresponding IP blocks. In v4, you often need to delegate on boundaries that aren't /8, /16 and /24 and it suddenly gets more annoying.

Scaling up for rDNS is no different to scaling up for forward DNS. It's a well-understood problem.


Anyone who's ever had to delegate DNS authority on anything other than an 8-bit boundary can understand the value of that feature.

At face value, yeah, that's replacing "8" with "4," but from a practical perspective, delegating authority for a customer IPv4 /25 requires, at minimum, 128 records. (Granted, there's also no practical need to be stingy about IPv6 allocations -- that IPv4 /25 customer could simply receive an IPv6 /48.)
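
To make the 128-record figure concrete: the usual workaround for sub-octet IPv4 delegation is RFC 2317-style CNAMEs in the parent zone, one per address, pointing into a child zone the customer controls. A rough sketch (the prefix and zone names are illustrative):

```python
import ipaddress

# Classless delegation of 192.0.2.0/25: the parent 2.0.192.in-addr.arpa
# zone needs a CNAME for every single address in the /25.
net = ipaddress.ip_network("192.0.2.0/25")
records = [
    f"{ip.packed[-1]} CNAME {ip.packed[-1]}.0-25.2.0.192.in-addr.arpa."
    for ip in net
]
print(len(records))  # one CNAME per address in the /25
```

In IPv6 none of this is needed, since any 4-bit boundary is a clean zone cut.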

I would firmly expect to see a lot more formulaic reverse (and presumably forward) DNS responses, where needed, since filling files with records you need to store on disk (and in memory) doesn't scale well. The tech has existed for years; I wrote my own implementation years ago, but these days I'd use something like PowerDNS with https://github.com/wttw/regexdns .


Might as well go big. 24 extra bytes per packet is not that big a deal, and having that much extra space means you can screw up the design multiple times and still reuse a lot of infra. Getting rid of the idea that you're even trying to manually manage the address space also eases many things.

But it's not human-readable anymore, nor backwards compatible. The expectation was that the industry would be reasonable, but adoption proved about as hard as pushing a breaking email v2 would have been.

If you think v6 isn't backwards compatible then literally anything bigger than 32 bits will never count as backwards compatible for you. The whole point of making the address space bigger is to make it bigger, so what do you expect to achieve by complaining that the result is incompatible?

As a human, I've found that e.g. "fd00::53" is perfectly readable to me, and most of the time you're interacting with strings like "news.ycombinator.com" anyway which is identical to how it works in v4, so I'm not sure how far I'd agree with that part either.
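
That readability comes largely from the zero-compression rules (RFC 5952), which the Python stdlib implements; a small illustration (the address is an arbitrary ULA example):

```python
import ipaddress

addr = ipaddress.ip_address("fd00:0000:0000:0000:0000:0000:0000:0053")
print(addr.compressed)  # fd00::53 — the longest run of zero groups collapses to "::"
print(addr.exploded)    # fd00:0000:0000:0000:0000:0000:0000:0053
```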


FastAPI is quite old (2018)

Svelte even older (2016, SvelteKit was just a new version in 2022)

SQLAlchemy is ancient (2006)

Use newer tech, like HTMX (2020)

(/s obviously)


MySQL does not let you have transactional DDL statements (ALTER, CREATE INDEX, etc.).

If you're building anything serious and your data integrity is important, use Postgres.

Postgres is much stricter, and always was. MySQL tried to introduce several strict modes to mitigate the problems it had, but I would always recommend using Postgres.
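
You can see what transactional DDL means without standing up a server: SQLite (via Python's stdlib, Python ≥ 3.6 so the module doesn't auto-commit around DDL) also supports it, so a rolled-back CREATE TABLE leaves no trace — the Postgres behavior, and exactly what MySQL can't do because DDL there implicitly commits:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; manage tx by hand
cur = conn.cursor()
cur.execute("BEGIN")
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY)")
cur.execute("ROLLBACK")  # Postgres-style: the DDL is undone

tables = cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print(tables)  # [] — the table never existed
```

In MySQL, the CREATE TABLE would have committed the transaction on the spot, and the ROLLBACK would be a no-op for it.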


If you want to draw circles, you're probably looking for a vector drawing program, like Inkscape.


Hah, thought I'd read that before.


If you want to draw squares, you're probably looking for a vector drawing program, like Inkscape.


Comments like yours miss the point. In fact worse, they just serve to stagnate FOSS because it pushes the assumption that the software is always right and users are idiots, without taking any time to understand what those users are actually trying to do.

There are hundreds of good reasons why someone might want to overlay a vector shape on a bitmap image. The desire to draw shapes on a bitmap isn’t something weird that I’ve just invented for HN. It’s been a staple feature of such graphics packages since the inception of bitmap graphics editing. And it’s been a staple feature of Gimp since I first switched to Linux in the 90s.

But that’s all moot because I was just making an arbitrary example.

And as an aside, I do use vector drawing software too. So I’m fully aware of their existence.


Why, then, is opening a JPEG not an Import?

I get it, and when Photoshop changed this default, GIMP followed with changing this workflow. It used to be different in older versions of Photoshop and Gimp.

Advanced users usually know exactly what they're doing, and opening a PNG or JPEG file, changing a few pixels, and saving it should require as few key presses as possible.

I don't want the UI to get in my way when I open->edit->save.


When opening a jpg, it literally says "importing pic.jpg".


Exactly. That's my point.

When saving, it simply should say "Exporting pic.jpg".


'Opening a JPEG' is creating a new image and importing the JPEG to it. ctrl-e on first use will establish the export setting. It's two clicks if you really want to overwrite the original. I think it would be very easy to accidentally and destructively overwrite the original image file if it was different, when ctrl-e is in muscle memory.


k9s, ncdu, htop, and powertop are good showcases of how a TUI reduces mental load and is superior to browsers and/or other GUI tools


More importantly, it also reduces CPU and memory load.


You haven't been around here in the Blockchain/NFT/Smart Contract dark ages, have you?


Naw man I just signed up.


I chuckled. Everything on earth is recent if you look at it from a cosmic timeframe, I guess.


To be fair, it really was annoying when everything was blockchain.


On the other hand man was it easy to make money at the time. I guess that’s probably true now for those in the AI space too


Aren't there blockchain agents? Surely there must be agents running on the blockchain as smart contracts?


I wonder in what timeframe the cosmic timeframe is recent.

It's turtles all the way down ....

;)


TBH I’ve been here a while and never really got the point of the above, but I do feel LLMs are a valuable new affordance in computer use.

I mean I don’t have to remember the horrible git command line anymore, which already improves my experience as a dev by 50%.

It’s not all hype bs this time.


> I mean I don’t have to remember the horrible git command line anymore

Every time I see a comment like this, I have to wonder what the heck other devs were doing. Don’t you know there were shell aliases, and snippet managers, and a ton of other tools already? I never had to commit special commands to memory, and I could always reference them faster than it takes to query any LLM.


You do realize it does not help _me_ at all if _you_ have found your perfect custom setup.

Because it’s custom there is no standard curriculum you could point me to etc.

So it’s great you’ve found a setup that works for you, but I hope you realize it’s silly to become indignant that I don’t share it.


The point I’m making is there are tons of solutions. Deterministic, fast, low-energy, customisable. Which is why I said “I have to wonder what the heck other devs were doing”. As in, have you never looked for a solution to your frustration? Hard to believe there was nothing out there before which wouldn’t have improved your Git command-line experience. Like, say, one of the myriad GUI tools which exist.

> Because it’s custom there is no standard curriculum you could point me to etc.

Not true. There are tons of resources out there not only explaining the solutions but even how different people use them and why.

If I sat with you for ten minutes and you explained to me the exact difficulties you have, I’d be surprised if I couldn’t have suggested something.


I use a git gui :)

So the only time I need terminal, it’s for something non-obvious.

”There are tons of resources”

This is not a standard curriculum as such though.

I’ve tried to come to terms with posix for 25 years and am so happy I don’t need to anymore. That’s just me!


> keep trying because really deep down they just don't know how nice you're being by giving them a chance to talk to you.

I don't fathom what kind of trauma would lead you to take this positive, light-hearted advice to connect to fellow human beings, and to spin this into such a vile, evil, anti-social narrative.

How does that help?


And that's precisely the point: you can't fathom what someone has been through.

Don't assume people want to talk. Respect boundaries, leave people alone.

