Hacker News | new | past | comments | ask | show | jobs | submit | staplung's comments

The failure of the upper stage is a bummer. If it triggers a months-long review, that will almost certainly bump back the schedule for the prototype Blue Moon lander launch.

Honest question: how do you track stars during daylight? Is it the case that at the operational altitude of the B-52, bright stars are always visible?

What if I like magenta? Or brown?

Pedantry for pedantry: you're in luck, as the title says they created 'any wavelength lasers', not 'any wavelength laser', so you can make any such combos you like rather than the fixed set we have now (if true) :p.

Can I interest you in indigo or violet? Or a nice orange?

Genuine q: how close can you get to magenta with the rainbow?

What we call "magenta" is the sensation of both red and blue color-sensitive cells in the eye being excited at the same time. There's no single wavelength that produces this effect (unlike, e.g., yellow). The closest you can get is violet, which looks faint to the eye.

A rainbow gives you both red and blue; mute everything else, and you'll get magenta. That's what magenta pigments do when illuminated by white light (which is a rainbow scrambled).


It never clicked before that yellow and magenta are snowflakes to each other in this regard. I thought they were equals, but magenta is more majestic!

Saying a wavelength doesn’t do it doesn’t make any sense. If you can perceive it visually, a wavelength is doing it.

Two wavelengths do it; one does not suffice. It's like how a perfect fifth cannot be a single note.

The interference is a wavelength too. Maybe not pure but it is one. Afaik they cannot be interpreted as two separate wavelengths and then “brain combined” when the aperture (the retina) is so small.

I haven't heard of a wavelength of 2 frequencies merged. It is like saying what is the wavelength if you tune to 2 radio stations with 2 radios (assume silent transmission for simplicity). There are 2 wavelengths.

> I haven't heard of a wavelength of 2 frequencies merged. It is like saying what is the wavelength if you tune to 2 radio stations with 2 radios

No, any wave has a wavelength. You can add sin(3x) to sin(2x) and the resulting wave is a perfect fifth. Its wavelength is determined by its components; since sin(2x) has a wavelength of π and sin(3x) has one of 2π/3, the combined wave will have one of 2π.

The difference is that sin(2x) and sin(3x) are both sine waves, while their sum is not. There is no such thing as a pure tone of two merged frequencies, but there are many possible waves at any given frequency that aren't pure tones.


Thanks, interesting!

Not very! This is on the "line of purples".

Here's a nice visualization of color perception (there are more modern ones, but we used the 1931 color space when I was working in the field). The horseshoe shape on the outside is the single wavelength colors.

https://en.wikipedia.org/wiki/CIE_1931_color_space


You can also play it in the browser in Infinite Mac: https://infinitemac.org/1991/System%207.0

(Worked somewhat better for me than the classicreload version)


In total, a little over one dozen astronauts died on shuttle flights (14). No astronauts died during Gemini or Mercury; three died in a ground test on Apollo 1. The shuttle failure rate was nowhere close to 1/10. In fact, it was roughly 1/67 (2 failures out of 135 flights).


The Tower of London arguably qualifies as a fort built to protect its inhabitants from the city. In its original form, its most impressive and formidable defenses faced London.


Cool article, but I think the write-up no longer matches the actual code. Snippets in the article use `*p->p` a lot, where `p` is a pointer to a parser struct defined above as

  struct parser {
    const char* s;
    int pos;
    struct grid* g;
  };

Notice there is no `p` member within. Presumably, with this definition, the snippets should have read something like `p->s[p->pos]`? And indeed, if you look at the code on GitHub, the parser struct is defined as

  struct parser {
    const char *s, *p;
    struct grid* g;
  };
So there's the missing `p`, even though it's now a pointer rather than an int. I presume the member variable was once called `pos` but got renamed at some point, and some of the snippets did not get updated to match.


The numbers in the headline seem odd. They imply that each (fake|fraudulent) worker only nets $5000 per year for Kim. I know the system has some inefficiencies where people behind the scenes are helping the "employee" with the work and there are cost of living expenses, taxes etc. but that seems like a pretty low take.


This might include people working in lumber camps in places like Siberia, "mercenaries" in Ukraine, people in NK-managed restaurants in China, Laos etc, or similar efforts that have been reported on, where the average revenue per worker is likely a lot lower.


I had the same thought - I guess there's additional overhead in paying the in-country proxy and probably also a lot of churn (being found out and fired, and then taking a long time to find another position).


5k a year could be 2 weeks of onboarding or waiting out a bureaucratic PIP process.

It's also possible that it's a numbers game and only 2/3 succeed at getting hired.


Maybe some of them don't remain employed for very long.


It would be ironic if the DPRK just passes on more of the money than most contract software companies.


And the reason they were modeled after the dollar bill size is because there were already many types of systems for storing and organizing them. That came in handy for the census.

The old BBC Connections series has a segment with James Burke using the old census tabulators.

https://www.youtube.com/watch?v=z6yL0_sDnX0&t=2640s


Of course since the old syntax is merely deprecated and not removed, going forward you now have to know the old, bad form and the new, good form in order to read code. Backwards compatibility is a strength but also a one-way complexity ratchet.

At least they managed to kill `auto_ptr`.


I doubt it will be a problem in practice.

Regular variadic arguments in general aren't used very often in C++, with the exception of printf-like functions. Not rare enough for the majority of C++ programmers not to know about them, but definitely much rarer than their use in Python. The main reason people know about them at all is printf. The "new" C-compatible form has been supported since the first ISO-standardized version of C++, if not longer. There hasn't been a good reason to use the "old" form for a very long time, which means the amount of C++ code using the deprecated form is very low.

Being deprecated means that most compilers and linters will likely add a warning/code-fix suggestion. So any maintained project which was accidentally using the C-incompatible form will quickly fix it. No good reason not to.

As for the projects which for some reason are targeting ancient pre-ISO-standard C++ versions, they wouldn't have upgraded to a newer standard anyway. So even if the new standard removed the old form completely, it wouldn't have helped with those projects.

So no, you don't need to know the old form to read C++ code. And in the very unlikely case you encounter it, the way of accessing variadic arguments is the same for both forms, through the special va_list/va_arg calls. So if you only know the "new" form you should have a pretty good idea of what's going on there. You might look up in a reference what the deal is with the missing comma, but other than that it shouldn't be a major problem for reading code. This is hardly going to be the biggest obstacle when dealing with code bases that old.


The “new” form has been valid since the original 1998 C++ standard, where it was added for compatibility with C. “You now have to know” has therefore already been the case for the past 27 years. Back then the old pre-standard form was kept for backwards compatibility, and is only now being deprecated.


Old-style variadics are rarely seen in C++ these days, never mind this particular edge case. If you're working in a vaguely modern version of C++ this largely won't impact you. You can almost certainly ignore this and you'll be fine.

Unless you have a massive legacy code base that is never updated, C++ has become much simpler over time. At a lot of companies we made a point of slowly refactoring old code to a more recent C++ standard (often a couple of versions behind the bleeding edge) and it always made the code base smaller, safer, and more maintainable. It wasn't much work to do this either.

To some extent with C++, complexity is a choice.


PyCuda 2024, used fairly often in certain industries, still contains `auto_ptr` ;-;


I think Rust has shown a way to remove deprecated interfaces while retaining backwards compatibility: automated tooling to migrate to the next version, plus keeping deprecated interfaces around at the source level for a few versions.


If you're talking about editions, this isn't how they work at all; every edition continues to be supported forever. (The part about automated migration tooling is true, and nice.)

There've been a few cases where code was unsound and should never have compiled, but did due to compiler bugs, and then they fixed the bugs and the code stopped compiling. These were handled through deprecation warnings with timelines at least several months long (Rust releases a new version every six weeks), but usually didn't have automated migration tooling, and didn't fracture the language mostly because they were rare edge cases that most programmers didn't encounter.


Editions are still allowed to remove old syntax or even remove APIs; they just can't break ABIs. So once something is removed from an edition, the code is still there in previous editions, but such symbols don't even get linked if they're unused, supporting progressive removal. Similarly, I could see editions themselves getting completely removed in the future at some point. E.g., rather than indefinitely maintaining editions, in 20 years have a checkpoint version of the compiler that supports the previous 20 years of editions, and going forward drop editions older than 10 from the build (for example; assuming there's a meaningful maintenance burden, it's hard to predict when that happens and what a specific policy would look like).


Editions never remove APIs.


Have not yet. There's nothing stopping them, though, and from talking with the std folks it seems they will likely experiment with crossing that bridge at some point.


C++ almost never removes features because of the ABI compatibility guarantees. Programs compiled with older versions of the standard can be linked against newer versions.

This is allegedly because in the 80s companies would write software, fire the programmers, and throw the source code away once it compiled.


Fixing syntax by definition does not affect the ABI. And Rust has shown that both ABI and API compatibility can be achieved in the presence of several "versions" (editions) of the language in the same build.


Rust has shown that it’s yet another language that kind of sort of addresses 3% of the issues c/c++ has, tops.


Probably because like 95% of C++'s issues are self-inflicted and don't need to be addressed if you use a different language in the first place, and 1% of them are fundamentally unsolvable by any language.


I really don't like C++ but it's hard to come up with thirty-odd times as many other terrible problems as the ones Rust addresses.


Do you actually know Rust, or were you just talking out of your ass? I'd like you to enumerate even thirty problems of C or C++ that Rust doesn't fix, never mind hundreds (because Rust fixes a metric shit ton of C/C++ problems!)


lol. A functioning module system that's easy to use and adopted? A package manager? A well-implemented hash table? Fast compile times? Effectively no segfaults? Effectively no memory leaks? Comparatively no race condition bugs? A benchmark and unit test framework baked into the language? Auto optimization of the layout of structs? No UB?

I don’t know what you’re counting as “3% of the issues” but if those are the 3%, they sound like massive productivity and safety wins that’s not existed in a language with a similar performance profile to C/C++.


Is Rust faster to compile than C++?


Different (though related) things make compiling Rust slow. In both cases the compiler can spend a lot of time working on types which you, as programmer, weren't really thinking about. Rust cares about types which could exist based on what you wrote but which you never made, whereas C++ doesn't care about that, but it does need to do a lot of "from scratch" work for parametrised types that Rust doesn't have to because C++ basically does a lot of textual substitution in template expansion rather than "really" having parametrised typing.

If you're comparing against Clang, the backend optimiser work is identical in both cases: it's LLVM.

People who've never measured often believe Rust's borrowck needs a lot of compiler effort but actual measurements don't agree - it's not free but it's very cheap (in terms of proportion of compiler runtime).


For most day to day cases, rust will actually compile faster because the build system will do good incremental builds - not perfect, but better than c++. Also clean builds are still “perfectly” parallelized by default.

And yes, while rust has a reputation for being slow, in my experience it’s faster for most projects you encounter in practice because the c++ ecosystem is generally not parallelized and even if it is many projects have poor header hygiene that makes things slow.


Rust is a single vendor. It's not really the same situation.


Having multiple compiler vendors is a problem IMO not a feature. It fragments the ecosystem - the code compiles fine with this compiler but not this other one. The maintenance of portable Rust code is significantly easier.

I think the way forward is multiple backends (LLVM + GCC) to improve platform support, but a single unified frontend that works correctly on all platforms is a good thing.


There is a single standards committee, though. There is really nothing stopping them from shipping tooling that can do the conversions for people. The number of vendors isn't really the problem here. The problem is that the committee shifts that responsibility onto the compiler vendors rather than owning it themselves.


[flagged]


Several times now C++ enthusiasts and indeed the committee have been told the way forward is the "Subset of a superset" that is, Step 1. Add a few new things to C++ and then Step 2. Remove old things to make the smaller, better language they want.

Once they've obtained permission to do Step 1 they can add whatever they want, and in a few years for them it's time to repeat "Subset of a superset" again and get permission for Step 1 again. There is no Step 2, it's embarrassing that this keeps working.

