Hacker News | noelwelsh's comments

* Composition and reasoning. Standard things in FP. Build big things from little pieces. Understand them the same way.

* Explicitly define the order of evaluation (important in Haskell, where lazy evaluation makes the default order of evaluation difficult to trace)

* Useful mental model that helps with 1) design and 2) understanding new concepts

* Abstraction. Ignore irrelevant details. Write the standard library once, use it in many different situations.
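The composition point can be sketched with a toy Option "monad" in JavaScript (all names here are illustrative, not from any particular library): small, possibly-failing steps compose into bigger ones, and you understand the whole the same way you understand the parts.

```javascript
// A minimal Option "monad": compose small computations that may fail,
// without scattering null checks everywhere.
const Some = (value) => ({ value, isSome: true });
const None = { isSome: false };

// flatMap sequences two Option-producing steps.
const flatMap = (opt, f) => (opt.isSome ? f(opt.value) : None);

// Two small pieces...
const parseIntOpt = (s) => {
  const n = Number.parseInt(s, 10);
  return Number.isNaN(n) ? None : Some(n);
};
const reciprocal = (n) => (n === 0 ? None : Some(1 / n));

// ...composed into a bigger one, understood the same way.
const parseReciprocal = (s) => flatMap(parseIntOpt(s), reciprocal);
```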


Agreed, Zig's IO is closer to the effect handler / capability passing model. And by closer, I mean exactly the same [1]. However, it's related to monads by duality. A comonadic program is a program that depends on context, which captures the notion of passing capabilities around.
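The capability-passing style can be sketched in JavaScript (a toy illustration, not Zig's actual API): effectful operations are reachable only through an explicitly passed capability value, so a function's signature reveals which effects it can perform.

```javascript
// Capability passing: the only way to do IO is through a capability
// handed to you by the caller. The hypothetical makeConsoleIO below
// records output so we can inspect it.
const makeConsoleIO = () => {
  const lines = [];
  return {
    print: (s) => lines.push(s),
    output: () => lines.slice(),
  };
};

// greet can only print because it was explicitly given the capability.
const greet = (io, name) => {
  io.print(`hello, ${name}`);
};

const io = makeConsoleIO();
greet(io, "world");
```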

[1] Languages designed around capability passing often have other features, like capture checking to ensure capabilities aren't used outside the scope where they are active. There are only two such languages I know of: Effekt (see https://effekt-lang.org/tour/captures) and Scala 3 (see https://docs.scala-lang.org/scala3/reference/experimental/cc...). However, this is not core to the idea of capability passing.


On a quick read of the docs I'm a bit worried Quarkdown doesn't have the right evaluation model for the job. Text layout typically iterates to a fixed point, because adjusting the layout of one part of the document can invalidate the layout of another part, requiring another layout pass, and so on. Typst has the concept of context[1] for this. I didn't see anything in Quarkdown that seemed similar, though perhaps I missed it.
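The fixed-point iteration can be illustrated with a toy in JavaScript (the scenario and numbers are invented for illustration): the width of a page-number column depends on the total page count, which in turn depends on how much room the column leaves for text, so layout repeats until nothing changes.

```javascript
// Toy fixed-point layout: iterate until the page count stabilises.
const layout = (totalChars, charsPerLine) => {
  let pages = 1;
  for (;;) {
    // Wider page counters (e.g. "100" vs "9") shrink the text area.
    const counterWidth = String(pages).length;
    const usable = charsPerLine - counterWidth;
    const linesPerPage = 10;
    const newPages = Math.ceil(totalChars / (usable * linesPerPage));
    if (newPages === pages) return pages; // fixed point reached
    pages = newPages;
  }
};
```

(Real systems bound the number of passes, since pathological documents can oscillate rather than converge.)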

I switched from pandoc / md / LaTeX to Typst for my book[2], and have been very happy with it. Programming in a modern language is nice, and Typst is much faster than pandoc + LaTeX.

[1]: https://typst.app/docs/reference/context/

[2]: https://functionalprogrammingstrategies.com/


Your muscles need energy to work. You have a variety of energy stores in your body, ranging from small amounts of quickly available energy (ATP) to large amounts of slowly available energy (fat). Most relevant to this discussion is glycogen, which is a carbohydrate. You have about 500g in your body, which is about 2000kcal. It is more readily accessible than fat, and 2000kcal is enough for an hour, or maybe two, of high-intensity exercise.
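As a quick sanity check on those numbers (the ~4 kcal/g figure for carbohydrate is the standard textbook value, and the 1000-2000 kcal/hour burn rate is a ballpark assumption for high-intensity exercise, not something stated above):

```javascript
// Back-of-envelope glycogen arithmetic.
const glycogenGrams = 500;            // approximate whole-body store
const kcalPerGramCarb = 4;            // standard value for carbohydrate
const storeKcal = glycogenGrams * kcalPerGramCarb; // ~2000 kcal

// At a high-intensity burn rate of roughly 1000-2000 kcal/hour,
// the store lasts one to two hours.
const hoursAtBurnRate = (kcalPerHour) => storeKcal / kcalPerHour;
```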

These gels and drinks are trying to replenish glycogen stores. The idea is to keep the runner using glycogen for the entire race, as it provides more energy per unit time than fat metabolism.

In your hikes your energy demands probably aren't exceeding the rate that your fat metabolism can provide.


Really depends on the field. Computer science research usually has pretty short cycle times. If you're working on, say, biology or anthropology, collecting data can take substantially longer.

Overall, very nice article. A few notes:

* I think the first implementation in JS land was Flapjax, which was around 2008: https://www.flapjax-lang.org/publications/

* The article didn't discuss glitch-freedom, which I think is fairly important.
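For readers unfamiliar with the term, the classic "glitch" is the diamond dependency: two derived values depend on the same source, and naive eager propagation updates them one at a time, briefly exposing an inconsistent state. A minimal JavaScript illustration (hand-rolled, not any particular signals library):

```javascript
// Diamond glitch: b and c both derive from a, so b === c should
// always hold. Naive push-based propagation breaks this briefly.
const observed = [];

let a = 1;
let b = a; // derived: b = a
let c = a; // derived: c = a

// Eagerly recompute dependents one by one, running observers
// after each recomputation.
const setA = (value) => {
  a = value;
  b = a;
  observed.push(b === c); // glitch: c is still stale here
  c = a;
  observed.push(b === c); // consistent again
};

setA(2);
// observed is [false, true] -- the `false` is the glitch.
```

Glitch-free systems guarantee observers never see that intermediate `false`.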


I didn't know about Flapjax; thanks, I'll check it out. Glitch-freedom is indeed a gap in this article. I focused exclusively on the signal algorithm, without implementation optimisations like batching updates; there is so much more to cover! Maybe in the next one. Thanks!


And before there was "reactive", there were (one-way) dataflow constraints.

And a lot of literature on the algorithms.

I wrote a bit about the connection here:

https://blog.metaobject.com/2014/03/the-siren-call-of-kvo-an...

(It starts in a slightly different place, but gets there)

Also about constraints as an architectural connector.

https://dl.acm.org/doi/10.1145/2889443.2889456?cid=813164912...


Batching isn't just an optimisation: not having transactional semantics can cause bugs.
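A sketch of the kind of bug meant here (hand-rolled, illustrative): an observer that fires between two related writes can see a state violating an invariant that holds both before and after.

```javascript
// Invariant: end - start === 10. Moving the window requires two
// writes; without batching, an eager observer can fire in between.
let start = 0, end = 10;
const violations = [];

const check = () => {
  if (end - start !== 10) violations.push([start, end]);
};

// Move the window by 5 with two separate, eagerly-observed writes:
start = 5; check(); // observer fires here: end - start === 5, bug!
end = 15;  check(); // consistent again

// With transactional batching, both writes would commit before any
// observer ran, and `violations` would stay empty.
```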


I wrote a whole screed here about how glitches are evil and Rx is evil for teaching people they’re normal, but then I thought about it a bit more—

The system as described isn’t actually glitchy, is it? It doesn’t eagerly run any user computations, just dirtying, and that is idempotent so the order is irrelevant. It’s also a bit useless because it only allows you to pull out values of your own initiative, not subscribe to them, but that’s fixable by notifying all subscribers after the dirtying is done, which can’t cause glitches (unless the subscribers violate the rules of the game by triggering more signals).
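That dirty-then-pull scheme can be sketched in a few lines of JavaScript (a single-level toy, not any particular library's API; computeds here depend only on plain signals):

```javascript
// Writes only mark dependents dirty (idempotent, so order is
// irrelevant); values are recomputed lazily when read. No user
// computation runs eagerly, so there is nothing to glitch.
const makeSignal = (value) => {
  const dependents = [];
  return {
    dependents,
    get: () => value,
    set: (v) => {
      value = v;
      dependents.forEach((d) => d.markDirty());
    },
  };
};

const makeComputed = (sources, compute) => {
  let dirty = true;
  let cached;
  const node = {
    markDirty: () => { dirty = true; },
    get: () => {
      if (dirty) {
        cached = compute(...sources.map((s) => s.get()));
        dirty = false;
      }
      return cached;
    },
  };
  sources.forEach((s) => s.dependents.push(node));
  return node;
};
```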

So now I’m confused whether all the fiddly priority-queue needlepoint is actually needed for anything but the ability to avoid recomputation when an intermediate node decides it doesn’t want to change its output despite a change in one of its inputs. I remember the priority queue being one of the biggest performance killers in Sodium, so that can’t be it, right?..

I’m also confused about whether push-pull as TFA understands it has much to do with Conal Elliott’s definition. I don’t think it does? I feel like I need to reread the paper again.

Also also, some mention of weak references would probably be warranted.


> whether push-pull as TFA understands it has much to do with Conal Elliott’s definition.

Virtually nothing that is getting sold/branded as "FRP" has anything to do with Conal Elliott's definition.

I once gave a long talk about this here in Berlin, but I don't remember if there was a video.

I've also explained it on twitter a bunch of times, including this memorable sequence:

https://x.com/mpweiher/status/1353716926325915648

Kinda like the Marshall McLuhan scene in Annie Hall ("if only real life were like this")

https://www.youtube.com/watch?t=136&v=sXJ8tKRlW3E


>> whether push-pull as TFA understands it has much to do with Conal Elliott’s definition.

> Virtually nothing that is getting sold/branded as "FRP" has anything to do with Conal Elliott's definition.

True but not what I meant. The article implicitly (and, in the links at the end, explicitly) refers to his 2009 paper “Push-pull functional reactive programming”, which describes a semantic model together with a specific implementation strategy.

So I was wondering if TFA’s “push-pull” has anything to do with Elliott 2009’s “push-pull”. I don’t think so, because I remember the latter doing wholly push-based recomputation of discrete reactive entities (Events and Reactives) and pull-based only for continuous entities that require eventual sampling (Behaviors).

With that said, I find it difficult to squeeze an actual algorithm out of Elliott’s high-level, semantics-oriented discussion, and usually realize that I misunderstood or misremembered something whenever I reread that paper (every few years). So if the author went all the way to reference this specific work out of all the FRP literature, I’m willing to believe that they are implying some sort of link that I’m not seeing. I would just like to know where it is.


After wondering what the heck glitch-freedom is and learning about it, I agree with you. It seems like it deserves at least a brief explanation in an article about how signals work.

I've gone with the universal `alien-signals` package for my project (which doesn't use a frontend framework that includes signals). Its benchmarks show it to be by far the fastest, and it has strict limits on code complexity. Those limits are also supposed to avoid glitches by design, and now at least some of that is tested[1].

[1]: https://github.com/stackblitz/alien-signals/pull/39



So yeah, topological sorting is one element, but that global stack is a data race! You need to test set inclusion AND insert into it in an ordered way. A global mutex is gross. Doing it lock-free could maybe be done with a lock-free concurrent priority queue, a pair of monotonic generation counters for the priorities processed and next, and then some memo of updates so that a conflicting re-update is invalidated by violating the generation constraint. I see no fewer than 3 CASes, so updates across a highly contended system get fairly hairy. But still, a naive approach is good enough for the 99%, so let there be glitches!


Can the reactive graph even be updated concurrently if the UI depends on it though? Because the UI is likely to run in its own single thread...


yea, this is in JavaScript. It's inherently single-threaded in almost all contexts (the exception being e.g. Node.js shared memory, where you're intentionally bypassing core semantics for performance and correctness is entirely on you).


wouldn't this be solved by synchronously invalidating everything before computing anything? it seems like that's what the described system is doing tbh, since `setValue` does a depth-first traversal before returning. or is there a gap where that strategy fails you?
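That two-phase strategy can be sketched concretely (hand-rolled toy, reusing the diamond shape where `b` and `c` both derive from `a`): all invalidation happens synchronously before any recomputation or notification, so subscribers only ever see consistent states.

```javascript
// Phase 1 marks the whole diamond dirty (no user code runs);
// phase 2 recomputes everything, then notifies subscribers.
let a = 1, b = a, c = a;
let bDirty = false, cDirty = false;
const observed = [];

const setA = (value) => {
  // Phase 1: invalidate synchronously.
  a = value;
  bDirty = true;
  cDirty = true;
  // Phase 2: recompute, then notify.
  if (bDirty) { b = a; bDirty = false; }
  if (cDirty) { c = a; cDirty = false; }
  observed.push(b === c); // subscribers always see a consistent pair
};

setA(2);
setA(3);
// observed is [true, true]: no glitch.
```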


I think most sourdough recipes are written by people who are really into sourdough, because they involve so much bullshit. I was going to give up on sourdough until I discovered:

* 200g starter, 400g water, 600g flour, 10g salt. Mix together.

* Fold it over itself a few times every hour or so.

* When it looks risen, put it into whatever you want to bake it in, let it rise for another 30 minutes or so, bake at around 200C for about 30 minutes.

It's easier than fast yeast bread, as there is less kneading.


'Tis true. At the same time, Project Valhalla will be the most significant change to the JVM in a very long time, and probably its best chance to stay relevant in the future.


I'm writing a book, which covers the mental models for writing code in a functional style. The examples are in Scala, but it will be useful if you use other modern languages like Rust, Kotlin, Swift, OCaml, or TypeScript.

https://functionalprogrammingstrategies.com/


This article would benefit from an introduction that lays out the structure of what is to come. I'm expecting an article on effect systems, but it jumps straight into a chunky section on the implementation of function calls. I'm immediately wondering why this is here, and what it has to do with effect systems.

Also, this is a very operational description: how it works. It's also possible to give a denotational description: what it means. Having both is very useful. I find that people tend to start with the operational and then move to the denotational.
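The operational/denotational distinction can be made concrete with a tiny JavaScript example (illustrative, not from the article): denotationally, a derived value *means* a pure function of its inputs; operationally, it *works* via a mutable cache and a dirty flag.

```javascript
// Denotational view: what the derived value means -- a pure function
// of its inputs, with no mention of caching or update order.
const meaning = (a, b) => a + b;

// Operational view: how it works -- mutable state, a dirty flag,
// explicit recomputation on read.
const makeAdder = () => {
  let a = 0, b = 0, cached = 0, dirty = true;
  return {
    setA(v) { a = v; dirty = true; },
    setB(v) { b = v; dirty = true; },
    get() {
      if (dirty) { cached = meaning(a, b); dirty = false; }
      return cached;
    },
  };
};

// The operational implementation is correct exactly when its
// observable behaviour matches the denotation.
```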


That's fair feedback, thanks.

I focussed on how it works since that's what I set out to understand myself after seeing effects mentioned a few times.

