Hacker News | cfiggers's comments

That's a very good point. I hadn't thought about that aspect before.

> If you only care about the UX of TUIs, that I can stand behind

This is a confusing concession. Of course we love TUIs because of the UX, what other reason is there?

Constraint breeds consistency and consistency breeds coherence.

Take 1,000 random TUI designers and 1,000 random GUI designers and plot the variations between them (use any method you like)—the TUI designers will be more tightly clustered together because the TUI interface constrains what's reasonable.

Yes of course you CAN recreate TUI-like UX in a GUI, that's not the issue. People don't. In a TUI they must. I like that UX and like that if I seek out a TUI for whatever thing I want to do, I'm highly likely to find a UX that I enjoy. Whereas with GUIs it's a crapshoot. That's it.


> the TUI designers will be more tightly clustered together because the TUI interface constrains what's reasonable.

It constrains what’s possible, not what’s reasonable. For example, one could typically fit more text on a screen by compressing it, but most of the time, that’s not the reasonable thing to do.

I say "most of the time" because English Braille (https://en.wikipedia.org/wiki/English_Braille#System), which uses a compression scheme for frequently used words and character sequences such as ‘and’ and ‘ing’, shows that if there is enough pressure to keep texts short, humans are willing to learn fairly idiosyncratic text compression schemes.

colorforth (https://en.wikipedia.org/wiki/ColorForth) is another, way less popular example. It uses color to shorten program source code.

One could also argue that Unix, with its wildly inconsistent ad-hoc compression scheme—writing “move” as “mv”, “copy” as “cp” or “cpy” (as in “strcpy”), etc.—shows the same thing, but I think that would be a weaker argument.


Try a 300 baud modem for a few months and good money says something terribly modern like Get-MrParameterCount would get compressed, a lot. Here's Bill Joy on the topic:

> No. It took a long time. It was really hard to do because you've got to remember that I was trying to make it usable over a 300 baud modem. That's also the reason you have all these funny commands. It just barely worked to use a screen editor over a modem. It was just barely fast enough. A 1200 baud modem was an upgrade. 1200 baud now is pretty slow. — "Bill Joy's greatest gift to man – the vi editor". The Register. 2003.


Come on, my previous phone had more bandwidth via a goddamn satellite while sending emergency info.

> It constrains what’s possible, not what’s reasonable.

Why do you say "constrains what’s possible, not what’s reasonable", as though it's one and not the other? Does possibility conflict with reasonability? I would think it's not an either/or, it's a both/and.

The set of reasonable things is bounded by the set of possible things. So if the constraints of TUI design make certain things impossible, surely they make those same things unreasonable at the same time.


> Constraint breeds consistency and consistency breeds coherence.

In principle I would agree, but there are plenty of bad citizens among TUIs; it's absolutely not true that you can just pick one up and start using it.

The same way there are excellent GUI applications like blender or intellij.


I'm sorry, excellent GUI with Blender? With the 2.5 interface things were ass backwards, but you had a bunch of stuff you could do with only the mouse. With the 2.8 interface suddenly a bunch of stuff was hidden behind arcane key combinations, options disabled by default, and you lost important visual data like the bounding box view and having both the UV and cursor coordinates in the same tab in the UV/image editor. No matter what, the controls are different with every sub-window type, and interface panels flip from top to bottom and left to right for best readability without a thought spared for consistency. There's a reason someone can learn FL Studio in a few weeks, but take months or even over a year to become competent in Blender. I love its jank and have been using it for eleven years, but I would never call the UI more than serviceable.

The gap between vi and emacs is larger than that of any GUI program I use as regularly as I use either of those.

I only know what "sokoban" means because of the sokoban levels in NetHack.

The thing that happens to me is that I'll get something working in the REPL, then try to deploy it and it breaks—because unbeknownst to me, I had gotten my REPL into some state where everything was working, but a cold start doesn't look the same.

Is this a skill issue? Absolutely. Do I still restart the REPL frequently (not after every def, but often) just to make sure I'm working with the same environment my program will be experiencing at run time? Yes I absolutely do.


Ah yeah, been there. Probably the first time was when I renamed a function but forgot to update the callers, so the callers kept calling the old function, and I had no idea why the changes I'd made in the new function weren't working.

I have this little function for clearing the current namespace that I call every time I rename a var inside a namespace:

  (defn clear-ns []
    ;; run! forces the side effects; map would build a lazy seq
    ;; that might never be realized
    (run! #(ns-unmap *ns* %) (keys (ns-interns *ns*))))
Not a perfect solution, but personally I also don't know a better one that doesn't involve losing other (non-negotiable for me) benefits.

Couldn't you have one test that requires your -main and runs it?

Just fire that off every now and then?

Basically the same thing, but you don't lose repl state
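A minimal sketch of that idea, assuming a hypothetical myapp.core namespace with a -main (the namespace and function names are made up for illustration):

```clojure
(ns myapp.cold-start-test
  (:require [clojure.test :refer [deftest is]]))

(deftest cold-start-smoke-test
  ;; :reload-all re-reads the namespace (and its deps) from source,
  ;; bypassing whatever state the REPL has accumulated
  (require 'myapp.core :reload-all)
  (is (some? (resolve 'myapp.core/-main)))
  ;; will throw, failing the test, if a fresh start is broken
  ((resolve 'myapp.core/-main)))
```

The :reload-all is what makes it approximate a cold start; without it, require is a no-op for an already-loaded namespace and the test would just exercise your current REPL state.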


I have an AutoHotkey script that just takes whatever is in my clipboard and sends it through as individual virtual keystrokes, specifically for defeating paste-disabled form fields.

It gets way more use than I wish it did.


> we do need ways to stay mentally sharp in the age of AI.

Here's my advice: if there's someone around you who can teach you, learn from them. But if there isn't anyone around you who can teach you, find someone around you who can learn from you and mentor them. You'll actually grow more from the latter than from the former, if you can believe that.

I think there's a broad blindness in industry to the benefits of mentorship for the mentors. Mentoring has sharpened my thinking and pushed me to articulate why things are true in a way I never would have gone to the effort of otherwise.

If there are no juniors around to teach, seniors will forever be less senior than they might have been had they been getting reps at mentorship along the way.


A long-standing truth in martial arts circles has been that you can't advance beyond a certain belt before you teach classes.

It's purely because of the fact that if you can't teach something, you really don't understand it.

And the act of having to simplify and break down a skill to explain it to others improves your knowledge of it.


I haven't heard this benefit for mentors clearly articulated before (probably just missed it), but I've definitely felt it. I guess it's a deeper version of how writing and other communication force you to clarify and organize your thoughts, because mentorship conversations are so focused on extracting the why as well as the what.


See one, do one, teach one.


Yes—and the key is, you do not leave the category of "how to learn a thing" between steps 2 and 3.


Interesting and semi-related idea: use LLMs to flag when comments/docs have come out of sync with the code.

The big problem with documentation is that if it was accurate when it was written, it's just a matter of time before it goes stale compared to the code it's documenting. And while compilers can tell you if your types and your implementation have come out of sync, before now there's been nothing automated that can check whether your comments are still telling the truth.

Somebody could make a startup out of this.


If you have CI hooked up to AI, you could just use an SLM to do that in a periodic job with https://github.github.com/gh-aw/ or https://www.continue.dev/. You could also have it detect architectural drift.


I'm a technical writer. Off the top of my head I reckon at least 10 startups have … started up … in this space since 2023.


I once had a mad idea of creating an automated documentation-driven paradigm where every directory/module/class/function has to have a DocString/JSDoc, with the higher level ones (directory/module) essentially being the documentation of features and architecture. A ticket starts by someone opening a PR with suggested changes to the docs, the idea being that a non-technical person like a PM or tester could do it. The PR then passes to a dev who changes the code to match the doc changes. Before merging, the tool shows the doc next to every modified piece of code and the reviewer must explicitly check a box to say it's still valid. And docstrings would be able to link to other docstrings, so you could find out what other bits of code are connected to what you're working on (as that link doesn't always exist in code, e.g. across APIs) and read their docs to find the larger context and gotchas.


There is at least one startup doing it already (I'm not affiliated with it in any way): https://promptless.ai/


Thanks for the pointer. That looks more to me like it's totally synthesizing the docs for me. I can see someone somewhere wanting that. I would want a UX more like a compiler warning. "Comment on line 447 may no longer be accurate." And then I go fix it my own dang self.


Ha, this is funny (also sad for me, because I failed to explain it clearly on the website): you've described exactly what it does as an example of what it can't do.

The core loop is more like a truffle-hunting pig than a ghostwriter. Promptless watches for signals that your product is behaving differently from the live documentation: PRs being opened and merged, Slack threads, support tickets. Then, like a pig alerting on a truffle, it shows up and says "hey, this section over here doesn't match what the code/product does anymore."

Now of course we'll also generate a first draft of a suggested fix, but I'd say 40% of tech writers just like knowing when things changed.

It's a proper union-find algorithm, where every suggestion links back to the source that triggered it, but multiple sources get linked up to a single canonical suggestion. So you don't get duplicate alerts if people keep talking for weeks about a fix going out in the next release.
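A toy sketch of that dedup idea in Clojure (the source IDs are made up; this is a bare union-find without path compression or ranks, just to show how multiple signals collapse to one canonical suggestion):

```clojure
;; parent map: each source points at its parent; roots point at themselves
(defn find-root [parent x]
  (let [p (parent x)]
    (if (= p x) x (recur parent p))))

(defn union [parent a b]
  (assoc parent (find-root parent a) (find-root parent b)))

;; three signals about the same fix end up under one canonical suggestion
(def linked
  (-> {:pr-123 :pr-123, :slack-9 :slack-9, :ticket-7 :ticket-7}
      (union :slack-9 :pr-123)
      (union :ticket-7 :pr-123)))

(map #(find-root linked %) [:pr-123 :slack-9 :ticket-7])
;; => (:pr-123 :pr-123 :pr-123)
```

Every incoming signal is union'd with whatever suggestion it matches, so alerting once per root instead of once per signal is what kills the duplicates.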

Obviously I've got some more work to do on the website again but c'est la vie.


Why would you need comments from an AI if you can just ask it what the code is doing?


Because the human needs to tell the AI whether it’s the code or the comment that’s wrong.


Because only a human writer can explain why they resolved it the way they did. But nobody wants to update comments every time.


> 2.

To an individual consumer perhaps, but schools need to buy hundreds at a time and the second-hand market isn't really great for that.

This is basically Apple taking a bite at the Chromebook market. Interested to see what reviewers have to say.


Definitely. My kids at different schools had iPads and Chromebooks. The kid with the iPad was using an external keyboard most of the time, IIRC.


"That's not true" is too strong a statement on your part.

The statistic you cite does not necessarily contradict what the parent comment is saying. "Up 29% since February 2020" is an absolute change since a specific point. The parent comment is saying prices have "come down" i.e. since their peak. It can still be up overall, so long as it's not up as high as it was at one point.

EDIT: To be clear, the parent comment might still be wrong, or might be right only within a biased sample (i.e. their own experience). I'm only making the point that the statistic you're referencing does not outright disprove what they're saying. Prices can be up since six years ago AND down since two years ago (random time periods chosen for illustration only).


I'm gonna value national stats over unsourced anecdotal assertions.

At no point has the US entered deflation so far this millennium.


The US did have deflation in the fourth quarter of 2008:

https://www.ebsco.com/research-starters/economics/deflation

Of course this is talking about the overall price level. The prices in specific sectors can fluctuate independently of that. Food and energy in particular are excluded from core inflation because they're especially volatile.


It's entertaining to me to imagine future historians arguing with one another, writing dissertations, publishing virtual reality eyeBooks, explaining to one another all about the ancient etymological connection between "claws" and "webhooks".

