Ask HN: How does the HN "More" link work?
7 points by zupreme on Dec 10, 2012 | hide | past | favorite | 7 comments
I noticed today, for the first time, that the "More" link at the bottom of the HN front page was not just a javascript trigger, nor was it a canonical link to something like "news.ycombinator.com/news/2", but was a link which populates a variable called "fnid" with what appears to be a randomly generated value.

In my case the link, the last time I looked, was "http://news.ycombinator.com/x?fnid=D6dWUUC7t3".

This came to my attention because, after getting pulled away from my screen for about 30 minutes to take care of some other items, I came back to review the listed news, and then hit "More" to see the 2nd page of results. When doing this I got an "Expired Link" message, which prompted me to look closer at the link itself.

Can anyone give any insight into how HN handles pagination and what that FNID variable really indicates?



The fnid is the ID of a function.

  (def morelink (f items label title . args)
    (tag (a href
            (url-for
              (afnid (fn (req)
                       (prn)
                       (with (url  (url-for it)     ; it bound by afnid
                              user (get-user req))
                         (newslog req!ip user 'more label)
                         (longpage user (msec) nil label title url
                           (apply f user items label title url args))))))
            rel 'nofollow)
      (pr "More")))
The Hacker News code keeps a table of anonymous functions (see the fn (req) definition there) that can be run later to generate a page. The fnid is the unique ID of that function, which is generated randomly:

  (def new-fnid ()
    (check (sym (rand-string 10)) ~fns* (new-fnid)))

  (def fnid (f)
    (atlet key (new-fnid)
      (= (fns* key) f)
      (push key fnids*)
      key))
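In Python terms, those two Arc functions amount to roughly the following. This is a minimal sketch with hypothetical names (fns, new_fnid, register_fn), not the actual HN code:

```python
import random
import string

fns = {}  # maps fnid -> stored closure, like HN's fns* table

def new_fnid():
    """Generate a random 10-character ID, retrying on collision."""
    fnid = ''.join(random.choices(string.ascii_letters + string.digits, k=10))
    return fnid if fnid not in fns else new_fnid()

def register_fn(f):
    """Store a closure under a fresh ID and return the ID for use in a URL."""
    key = new_fnid()
    fns[key] = f
    return key
```

register_fn would return something like "D6dWUUC7t3", which is what ends up in the href of the "More" link.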
So, when you hit a URL with an fnid in it, the fnid is looked up in the fns* table and the function is executed to produce the page. There's a reaper function that deletes fnids older than a timeout value (which is how you got the Expired Link error).

  (def harvest-fnids ((o n 50000))  ; was 20000
    (when (len> fns* n)
      (pull (fn ((id created lasts))
              (when (> (since created) lasts)
                (wipe (fns* id))
                t))
            timed-fnids*)
      (atlet nharvest (trunc (/ n 10))
        (let (kill keep) (split (rev fnids*) nharvest)
          (= fnids* (rev keep))
          (each id kill
            (wipe (fns* id)))))))
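Putting the pieces together, the request handling and expiry can be sketched in Python. The names (handle_x, harvest_fnids) and the timeout value are assumptions for illustration, not HN's actual values:

```python
import time

fns = {}           # fnid -> (created_timestamp, closure)
TIMEOUT = 30 * 60  # assumed lifetime in seconds; HN's real value differs

def handle_x(fnid, req):
    """Handle /x?fnid=... by running the stored closure, if it still exists."""
    entry = fns.get(fnid)
    if entry is None:
        return "Expired link."
    created, f = entry
    return f(req)

def harvest_fnids():
    """Reaper: drop any stored closure older than TIMEOUT."""
    now = time.time()
    expired = [k for k, (created, _) in fns.items() if now - created > TIMEOUT]
    for fnid in expired:
        del fns[fnid]
```

Once harvest_fnids removes an entry, any browser still holding that URL gets the "Expired link." page, which is exactly what happened after the 30-minute absence described above.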


So the fnid is essentially a caching mechanism for pre-rendered web pages linked to a point in time? That's pretty neat and simple, fitting perfectly with the simplistic design of HN (not confusing simplistic with basic; it's a reference to the design paradigm).


My understanding is that pre-rendered web pages are not involved. What is "cached" is a function whose output is a particular webpage rendered on the fly so that "flag/unflag," current score and number of comments can be kept up to date.

In other words, the function fetches a particular group of articles along with the current information about them and generates the page. Some of those articles were known at the time the function was generated. Some of the articles are new or have undergone a significant change in status since the function was generated - I suspect these incur greater overhead.

At a certain point, the number of changes is large enough that it is more efficient to generate a new function.
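The distinction the parent draws, caching a function rather than a rendered page, can be illustrated with a small Python sketch (items_db, make_more_fn, and the data are hypothetical):

```python
# A stand-in for the live item store; scores change over time.
items_db = {1: {"title": "Ask HN: ...", "points": 7},
            2: {"title": "Show HN: ...", "points": 42}}

def make_more_fn(item_ids):
    """Capture WHICH items belong on the page, not their rendered HTML."""
    def render(req):
        # Scores (and, on real HN, comment counts and flag links) are read
        # at request time, so the page is current even if the closure is old.
        return [(i, items_db[i]["points"]) for i in item_ids]
    return render

page2 = make_more_fn([1, 2])   # stored under an fnid at generation time
items_db[2]["points"] = 50     # a score changes later...
# ...yet calling page2 now reflects the new score.
```

If the page itself had been cached instead of the closure, the stale score of 42 would be served until the cache expired.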


Thanks for this very detailed response. Is the same FNID shown to all visitors during its applicable period, or is it individualized or region-specific?

Essentially I'm asking if all of us see the same stuff at the same time.


This is kind of annoying, in my opinion, mainly when I'm reading news on my mobile: when I go back, the "More" link has sometimes expired. I think at least one of the following would be a good idea:

1 - Make the expiration longer, at least 90 minutes.

2 - Go to the next page regardless of whether the items have shifted.

3 - At the very least, fall back to the main page of Hacker News.


I have gotten expired link errors too, and it's kind of jarring. I don't know of any other site that does its paging this way, and I'm not sure what the advantage is. All I experience is a reading session that can't be left alone for some arbitrary amount of time lest I lose my place.


PG has stated that it is done this way because it was the easiest/fastest/most elegant way to do it from a programming point of view. He acknowledges that it is not very efficient or enjoyable from a user's point of view.

It's also been like this for years, so presumably PG doesn't care enough about the usability to change it.

This type of thread comes up every month or two, someone explains how/why it happens and then we go back to having the same, broken interface. I doubt it will ever be fixed.



