
> I don't get the discussion around "it's a side project" and "they're ML engineers, not security experts." Why are you excusing a company for a serious security leak?

No one here is, as far as I can tell. But if you've ever been a software engineer required to work with someone purely from an ML lab and/or academia, you'll quickly discover that "principled software engineering" just isn't something they consider an important facet of software. This is partly due to the culture in academia, partly general inexperience (in the software industry), and partly because deeply complicated/mathematical code really only needs to be read by other researchers who already "get it", to a degree.

Not an excuse but rather an explanation for _why_ such an otherwise impressive team might make a mistake like that.



Yeah, you're right, I was conflating the excusing bit.

I haven't worked with serious ML engineers, but having worked in large webdev, there's usually a team involved in these projects, including senior non-devs who would ensure the correct checks and balances are in place before go-live. Does this not happen in ML projects? (Of course there are always exceptions and unknowns that will slip through; I don't know if that was the case here, or something else.)


> Yeah, you're right, I was conflating the excusing bit.

No worries. :)

> Does this not happen in ML projects?

Consistently? No. At the level of, e.g., OpenAI/Anthropic? It's mandatory. These are not just research labs; they're product (ChatGPT, Claude) companies. These American companies have done a reasonable job of hiring for all sorts of skillsets to keep things well rounded.

Perhaps DeepSeek hasn't learned this lesson yet... Or, well - it could be far more complicated than that. Speculating is only so useful with so little information.



