
Legacy code will also tend to have had many of its bugs ironed out, since it's been in operation for so long. New code will have a whole new set of coding errors hidden in it, and you'll have to go through that ironing-out all over again.


I think this is a crucial point we all overlook in our rush to throw out the legacy code and 'do it right this time.' We're thinking, 'but we'll have unit tests so we'll catch the bugs.' But in reality, some get through and they tend to be a real pain in the ass.

I know this probably means that I don't unit test right. It also obviously means I don't code right, because I have bugs. The point is, these are facts of (real) life, and to be successful you have to manage the reality. Often this means that the best course of action is to stick with the legacy code.


You'll have bugs either way. That's a fact of life, and the vast majority of a developer's time will always be spent debugging.

What legacy code does is inhibit your ability to make the fundamental changes you need to implement in order to stay competitive. I'm not saying throw your code out every other week, and I'm not saying every attempt to rewrite your product should replace what you already have. But for all the costs associated with re-writing your product, they pale into insignificance compared to being put out of business.


In terms of debugging, there are certainly the goofs (off-by-one, = instead of ==, and so on) which are just lapses by the programmer. The danger, I think, is in losing the wisdom gained in having already solved the hard problem (of course it helps in this regard if the original programmers are still around somewhere). It's something akin to cultural knowledge. Don't underestimate what it's going to take to rebuild it.

But perhaps we can agree, then: rewriting code is a business decision, driven by the business needs of the company. Premature rewriting is just another form of premature optimization, and can get you in trouble by putting your resources in the wrong places. But when you have a tangible threat, when you are unable to adapt to accommodate needs you are convinced are on your horizon, then you should do it.


It should go without saying that fewer bugs is better than more bugs, though.

I'm not saying you should never rewrite a system from scratch. But I do think it's extremely costly to do so, and that it's extremely difficult to predict when you'll be driven out of business if you don't. So I don't think it's usually justifiable.

I'd be very interested if someone could supply an example of a company that was put out of business because they failed to re-write their product.


If you already saw a threat, then by the time you finish re-writing it, wouldn't you be dead?


Depends how well you can see the threat horizon.


>> What legacy code does is inhibit your ability to make the fundamental changes you need to implement in order to stay competitive.

There's always the possibility that your existing code doesn't suck, and was designed to be flexible for changes.

Ideally, you've done a good job and can rewrite small pieces of your system iteratively as needed, rather than having to chuck the whole thing.


Exactly.

Also, code that looks like crap and seems to do weird stuff often handles intricate edge cases; due to the lack of comments, it's just not obvious what's going on.

I've been involved in several component refactorings where as a team we've gone "look how complicated the old code is, we can make it much simpler".

When we've re-written it and then tested it with actual production use cases, it becomes obvious why the old code was so convoluted. So time was wasted, but at least there are comments now (and in those cases, the code's slightly better).



