In one of his speeches, Obama said "Better is good." I think about this a lot. It feels like better compounds over time, too: small improvements add up. From experience, nothing new is perfect the first go-round, so sitting around trying to come up with a perfect design is counterproductive, because there's no such thing.
"impediment to action advances action. what stands in the way, becomes the way".
A saying I've come across is: "Don't let perfect be the enemy of good"
I had a coworker who was always diplomatic about code changes he felt could be improved. When he felt he was nitpicking, he would say: "It's better than it was." It allowed him to provide criticism while also giving permission to go ahead even if there were minor things that weren't perfect. I strongly endorse this kind of attitude.
It's worth noting that this is a different kind of "nit" than something that might be attached to a line of code. Like, someone might "nit" using a bunch of if statements where a switch statement might work, or using a `for each` where a `thing.map` would do.
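To make that concrete, a line-level nit might look something like this (a made-up snippet, not from any real review):

```typescript
// A for-each that builds the result array by hand...
const prices = [100, 250, 75];
const labels: string[] = [];
for (const p of prices) {
  labels.push(`$${(p / 100).toFixed(2)}`);
}

// ...versus the same transformation as a map. Both work fine;
// preferring the second is exactly the kind of line-level nit meant here.
const labels2 = prices.map((p) => `$${(p / 100).toFixed(2)}`);
```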
What I am describing is something higher level, more like a comment on the approach, or an observation that there is some high-level redundancy or an opportunity to refactor. Something like "in an ideal world we would offload some of this to an external cache server instead of an in-memory store, but this is better than hitting the DB on every request".
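A sketch of the shape of that comment (all names here, like `fetchUserFromDb`, are hypothetical):

```typescript
// Hypothetical: an in-memory cache in front of the DB. Better than hitting
// the DB on every request, even if an external cache server would be the
// "ideal world" version.
type User = { id: string; name: string };

declare function fetchUserFromDb(id: string): Promise<User>; // assumed helper

const userCache = new Map<string, User>();

async function getUser(id: string): Promise<User> {
  const cached = userCache.get(id);
  if (cached) return cached; // avoids a DB round trip

  const user = await fetchUserFromDb(id);
  userCache.set(id, user);
  return user;
}

// "In an ideal world" this Map would be an external cache (e.g. Redis),
// shared across instances and with bounded memory. But this is still
// better than it was.
```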
That kind of observation may come up in a top-level comment on a code review, but it might also come up in a tech review long before a line of code has been written. It is about extending that attitude to all aspects of development.
I had someone reject my code that improved and regularized half a dozen instances of a domain object we had, in the code paths I cared about. He said there are dozens of these; don't submit this unless you fix them all.
I had something similar, but I convinced the other person that the rest of the work could be done later. Then that person went ahead and did it anyway, despite the other instances having no use or value. Go figure. I guess consistency has some value, to argue the other side. I tend to be extremely flexible about allowing different ways of doing things, but some people seem to confuse form with function, insisting on "perfection" in the details. I think this is partly why we get these very mixed reactions to AI, where LLMs aren't quite "right" despite often producing code that functions as well as human-written code.
Consistency reduces the mental cost of acquiring and maintaining an understanding of a system. In a real sense, moving from one approach to two different approaches, even if one of them is slightly better than the original one, can be a downgrade.
Like many other things, it's a judgement call. The breakdown occurs when people replace judgement with rules or "religion". This tends to happen when they don't have the experience of seeing the long-term impact of decisions in various contexts.
But then you end up with nit inflation: people feel like they need to fix the nits, and do, and "nit" no longer means anything. I try not to comment unless I feel there is some learning in the nit.
I have a crippling guilt about not keeping my apartment as spotlessly clean as my parents did theirs, to the point that I end up procrastinating, which just makes it worse.
The trick to overcoming this is not to aim for "clean" but for "cleaner than before".
Just keep chipping away at it, whether it is a messy codebase or a messy kitchen.
I use it for cleaning all the time. Whenever I have dishes, I always give myself permission to do as little as I want knowing that one clean dish is better than nothing. Most often I end up doing them all.
The other saying I use is "completion, not perfection". That helps me in yard work especially. I'm not going for the cover shot of "Better Homes and Gardens"; I just need the lawn to be cut.
I always thought perfectionism meant extremely high achievement (at too great a cost). But it can also mean quitting without any progress because you can't accept anything less than perfect (which may or may not be achievable). Perfectionism can be someone procrastinating on a large task.
The thing is, "better" is an ambiguous word. I can change a program in some way and make it smaller. I can change it in some other way and make it faster. Both are "better", but in different ways. More often than not, however, you can't have both smaller and faster, or else you are just fixing a performance bug. Often improving one property makes some other property worse, as you can see in the numerous "pick two" rules.
So "better" means "more specialized" more often that it means "more optimized". I don't say it is a bad thing per se, but it is best to keep in mind that they are two types of improvement, fixes and specializations, because the latter is a commitment.
> "impediment to action advances action. what stands in the way, becomes the way".
Don’t know where this is from, but it’s like saying “when you stop work in progress, progress is stopped”. I mean, sure. That’s how things go, but why even say such an obvious fact of life?
> Agents are opening pull requests, reviewing each other's work, and closing them without a human ever touching the keyboard, with a continuously live log monitoring loop to rapidly fix issues.
I know gas town made a splash here a while back, and some colleagues promote software factories, but I haven’t seen much real output... have any of you?
I prefer the guided development approach where it’s a pretty detailed dialog with the LLM. The results are good but it’s hardly hands off.
If I squint I can almost see this fully automated development life cycle, so why aren’t there real life examples out there?
I think the reason we're not seeing many examples yet is that the full loop doesn't work completely autonomously yet. There's still a human in the loop at some critical points, specifically testing against a spec (runtime testing if, say, you're working on a web or mobile app) before shipping to users. LLMs can do compile-time testing and validation and unit tests, and can write your end-to-end tests, but if you're shipping software to users, there's still a human somewhere involved. This isn't even mentioning marketing and actually getting your software into the hands of users; that can be automated, but a lot of marketing done with AI is still sloppy.
How do you know that there aren't? If you had a "robot software factory" that worked, and you were certain it was a source of not just life-changing or generational but potentially century-scale wealth - well.
There was a time in my life when I too would give such a thing away free, on the idea that those who might do some good with it may make up for the ones who will certainly turn it to great evil. After 30 years' exposure, some consensual, to Bay Area/Silicon Valley "culture," I am no longer so sweetly naïve.
I think you're maybe oversimplifying a bit. I don't think the argument here is that AI isn't 100% reliable so we shouldn't use it. There are issues we need to be aware of.
Specifically, AI companies want to inflate the utility of AI because that's how they make money. There should be guardrails where appropriate. Unfortunately, as usual, we need to make mistakes before we can learn from them.
I read through the link. The other pilot and the captain are complicit by virtue of being there. Autopilot disengages at 2:10 and they crash at 2:14. Terrible.
My other immediate thought -- Tesla's autopilot. I've never used it so I'm not sure I'm fully correct here, but apparently it requires you to be vigilant and take over in certain situations? Wonder how well that works out in practice.
In practice, there's a camera in the Tesla that looks at the driver to make sure they're paying attention. If they're not, perhaps fiddling with their phone or looking at something in the passenger's seat, then the system gives a warning and then a strike. Get five strikes and you can't use FSD for the next week or two. So drivers are directly incentivized to keep their eyes on the road because if they don't, they can't actually use the system which would suck for a long road trip.
I've been using Claude and it's a game changer in my day-to-day. The caveat being, of course, that my tasks are at a small "feature" level and all interactions are supervised. I see no evidence that this is going to change soon...
My other thought, which I can't articulate that well, is: what about testing? Sure, LLMs can generate tons of code, but so what? If your two-sentence prompt is for a tiny feature, that's one thing. If you ask Claude to "build me a todo system", the results will likely diverge rapidly from what you're expecting. The specification for the system is the code, right? I just don't see how this can scale.
I really applaud your discipline. It took me a long time to realize that the todo system you use is almost meaningless if you don't execute on the items that you prioritized.
Haha thank you, I must admit I can't always make it work out, but writing (typing) them down and forcing myself to look at the view to remind my brain is a lot better than juggling all the priorities around in my head. I really, really hate planning tasks; I just want to do things and write code, so Linear is a good way for me to offload those computations and just focus on clearing the list.
I would gladly pay $99/month if this were honest FSD. I've been tracking my time consistently since the end of December, and in the past 2 weeks I spent 23 hours driving. Even counting just those two weeks against the full monthly price, that's only about $4/hour.
I haven't been keeping up with the progress in this space. Last I heard, Benz introduced some sort of self-driving feature AND accepted full liability for it (whereas Tesla does not). How does Benz's self-driving compare to Tesla's?
Mercedes' feature has been sunset. It only ever worked in good weather, on a limited set of motorways, below a certain speed, and with a guide vehicle in front of it.
We had the FSD trial for 4 months in the middle of last year. I work from home, so I can't really justify $100 a month. However, we did take a few trips to see family (about 60 miles in each direction) through downtown LA.
I was honestly stunned by how far the tech has come. It basically drove us door to door without a single intervention.
I talked to about 3 people who have personal experience with Tesla autopilot, and that's been the feedback. So where's the gap? What's the problem?
I don't think there is a problem per se. There are probably still edge cases out there that I didn't experience. But overall, I think the tech is ready to roll.
There are unsupervised HW4 Tesla robotaxis in Austin, open to public use as of yesterday. Lemonade Insurance announced a plan where the premium is half the market rate while FSD is in use. So unless there are specific regulatory barriers for personal vehicles, unsupervised should be available for their latest-gen personal cars sometime this year.
That’s basically what SAE L3 and above levels of autonomy mean: the manufacturer takes full responsibility for the driving while the function is active.
I drove Mercedes' and BMW's L3 offerings. Both had a really restricted ODD (Operational Design Domain), too restricted for them to be of much use outside high-traffic situations on the Autobahn. They were limited to good weather and speeds of around 60 km/h: basically, all conditions under which their sensors and CPUs would work optimally.
But that was 2021 technology. L4 autonomy will be on the market within the next 4 to 5 years, no doubt. And that will be a game changer for anyone who drives any significant amount of time. Sleeping, reading, watching a movie, or just working on a laptop will be possible. And the manufacturer will take full responsibility for the driving while the functions are active.
"impediment to action advances action. what stands in the way, becomes the way".
reply