
I just want to emphasise two things. Both are mentioned in the article, but I still want to highlight them, as they are core to what I take from it as someone who has been a fanboy of Nicholas's work for years now:

1. Nicholas really does know how badly machine learning models can be made to screw up. Like, he really does. [0]

2. This is how Nicholas -- an academic researcher in the field of security of machine learning -- uses LLMs to be more efficient.

I don't know whether Nicholas works on globally scaled production systems that have specific security/data/whatever controls that need to be adhered to, or whether he even touches any proprietary code. But seeing as he heavily emphasised the "I'm a researcher doing research things" angle in the article -- I'd take a heavy bet that he does not. And academic / research / proof-of-concept coding has different limitations/context/needs than other areas.

I think this is a really great write-up, even as someone on the anti-LLM side of the argument. I really appreciate the attempt to do a "middle of the road" post, which is absolutely what the conversation needs right now (pay close attention to how this was written, LLM hypers).

I don't share his experience, I still value and take enjoyment from the "digging for information" process -- it is how I learn new things. Having something give me the answer doesn't help me learn, and writing new software is a learning process for me.

I did pause and digest the food for thought here. I still won't be using an LLM tomorrow. I am looking forward to his next post, which sounds very interesting.

[0]: https://nicholas.carlini.com/papers



He's also a past winner of the International Obfuscated C Code Contest: https://www.ioccc.org/2020/carlini/index.html


Nicholas worked at Matasano, and is responsible for most of the coolest levels in Microcorruption.


He also worked at Google. I don't think that negates my point as he was still doing research there :shrugs:

> academic / research / proof-of-concept coding has different limitations/context/needs than other areas.


No idea. Just saying, on security stuff, he's legit.



