Hacker News | logicrook's comments

Really? It seems to me that downvotes are used very negatively, as they create a strong incentive (for most people) not to express unpopular opinions, even when those are good comments. If they were used only to discourage low-quality posts, not much would change, since the most upvoted comments would still rise to the top.

He suggests removing lives for spam/harassment, not for shitposting. So basically: a positive loop plus draconian enforcement of essential rules, and no negative loop.


I don't like downvotes because people mostly downvote what they disagree with, not spam and whatnot. Having some ideas and habits contrary to what's popular, I experience this first-hand here on HN. It's probably a reflex: most often, when someone sees a comment disagreeing with him, he feels a spontaneous whim and decides the argument is unreasonable. Only a few seconds later does he realize that the other person actually poses a sound argument, criticism, etc. HN's upvote-downvote system lacks an undo, so unfortunately, when you regret a downvote, there's nothing you can do.


"Whenever you find yourself on the side of the majority, it's time to pause and reflect." -Mark Twain

I wonder how much karma Mark Twain would have if he had a Hacker News account...

Anyway, I, too, find downvoting very distasteful. IMO it should be reserved purely for low quality comments. When the crowd downvotes unpopular opinions, the place inevitably becomes an echo chamber, which is unquestionably harmful to honest, open debate and discussion.


> I wonder how much karma Mark Twain would have if he had a Hacker News account...

My first thought: probably about as much as coldtea has. Lots of upvotes and lots of downvotes, with the upvotes winning.


Not only unpopular opinions. Since here on HN we can receive downvotes for "low quality" comments, people who are not native English speakers might shy away from expressing their views (I speak from my own perspective here). I often find that anglophones can write lengthy posts ranting about the same stuff over and over, whereas non-native English speakers tend to be more concise and thoughtful when they comment. I actually kind of miss grammatically incorrect sentences that make a valid point; those are much more common on other forums, where the person didn't have to feel pressured about being downvoted just for bad grammar.


As a non-native speaker, I've observed this, too. Lengthy, well-written posts get far more upvotes and are treated more leniently with respect to downvotes, regardless of their content.

Actually interesting posts are often grumpy four liners written during a short break from actual work (e.g. while waiting for a compile to finish).


The question seems flawed: having an AI make decisions based only on visual information confuses how the AI gets the information (visually) with what information the AI gets (only limited information, similar to what the player has). These are two different problems that can be solved completely independently. The first makes no sense for a game (think of how computationally intensive it would be), while the latter could be very interesting, since it would amount to designing the AI more like a natural player. The catch, however, is that it's a "could": in itself, there is no reason to imagine such AIs would make the game better in any way (over "cheating" AIs).


Your argument is very unclear, and your last sentence makes no sense. What exactly is the problem with jointly learning perception and control? They are widely considered to be intertwined problems in robotics. Not independent at all. But for what it's worth, you will find researchers working on all sorts of different approaches.

Clearly this competition is an attempt to build off successes in reinforcement learning with agents that play games using only images and scores.


I don't know if your parent comment is correct, but their argument is really easy to follow. I'll put what I think their argument is in different words.

"this contest is like making Google make Alphago have to also include a robot and image recognition and making the robot have to place the stones." obviously that has "nothing to do with" the game and is "how Alphago gets the information."

But Go (and Tetris etc.) are games of perfect information, where perception of the game state is not a challenge. Having access to the internal data structure representing the Go or Tetris board is effectively the same as scraping it off a screen and recognizing it, or doing real-world image recognition.

If your parent comment is wrong it's because that's not the kind of game Doom is.

So what you consider "intertwined" really isn't, unless you'd say Google has not even built a Go engine, since a human was doing the perception.

(again, I am just saying your parent's argument is easy to follow, not that they're correct in this case.)


I got that part. Neither you nor the original poster gave a good reason why that's a bad idea. After all, "end to end" learning is the holy grail of AI.

The only reason people have historically separated the two (and many still do) is that it's been too hard! But that's the point of research: solving hard problems.


Thank you for the excellent rephrasing.

>But Go (and tetris etc) are games of perfect information where perception of the game state is not a challenge.

In general, "perception of the game state" is not a challenge, at least according to good game design principles (e.g. in danmaku shmups, perception can be a challenge because of visual effects that are not really part of the game, but this is considered poor game design, similar to how being unable to differentiate backgrounds from platforms in a run&jump is bad design). There are games where perception of the game state is a game mechanic, but Doom isn't really one of them.

But even in Doom, you can separate the two tasks quite neatly. The vision task essentially aims to reconstruct a model of the world, but in a video game, this model comes for free. You can trivially limit the information an agent gets to what it would get as a player (in games like MGS this is already the case, albeit in a very simplistic way). It's fairly easy to write a function that computes what is visible, what sounds a player would hear, etc. You can then rephrase the problem as making an AI that can only access this function, and this wouldn't change anything.

So for the AI community, I think a more interesting question would have been to design an AI over such a function.
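A minimal sketch of such a state-filtering function, in Python with invented names (the real thing would of course live inside the engine and use its actual line-of-sight test), might look like:

```python
import math
from dataclasses import dataclass

@dataclass
class Entity:
    x: float
    y: float
    noise: float = 0.0  # loudness of any sound the entity is making

def dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def percept(world, player, is_visible, hearing_range=20.0):
    """Reduce the full game state to what a human player could perceive:
    entities passing a visibility predicate, plus noisy entities within
    earshot. `is_visible` stands in for whatever occlusion test the
    engine provides. An agent that only ever calls percept() gets no
    more information than the player does."""
    visible = [e for e in world if is_visible(player, e)]
    audible = [e for e in world
               if e.noise > 0 and dist(player, e) <= hearing_range]
    return visible, audible
```

The point is that the agent's interface is this function, not the raw engine state, so "fair" information limits and the choice of input modality (pixels vs. structured data) really are independent decisions.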


If the contestants are using deep learning, I don't see why it should be any more difficult to generate a meaningful, low-dimensional representation of the game-state from raw pixels than from an abstract view input.


The data the agent perceives is going to be very different from what would be obtained through access to Doom's internals, as a 'normal' AI would have. The challenge is to bridge image recognition and intelligence together efficiently and effectively.

Of course it does not make sense to use this in a real game implementation. The idea is that such technology could (with further development) be used in real-world scenarios/applications. The problem is simply posed in a game environment to make it interesting and easier to approach.


Think about players hiding behind foliage. The AI cheats that are used now make this a lot less practical / fun than a per-pixel visibility test.
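To illustrate the difference, here's a toy per-cell (rather than per-pixel, to keep the sketch short) visibility test in Python. Unlike an all-or-nothing "cheating" check, partial occluders like foliage accumulate opacity along the sight line; all names and the grid representation are invented for the example:

```python
def line_of_sight(grid, start, end):
    """Walk the straight line from start to end, accumulating opacity;
    the target stays visible while total opacity is below 1.0.
    grid[y][x] holds per-cell opacity: 0.0 = air, 1.0 = solid wall,
    something like 0.3 = a patch of foliage."""
    (x0, y0), (x1, y1) = start, end
    steps = max(abs(x1 - x0), abs(y1 - y0))
    acc = 0.0
    for i in range(1, steps):  # sample interior cells, skip the endpoints
        t = i / steps
        x = round(x0 + (x1 - x0) * t)
        y = round(y0 + (y1 - y0) * t)
        acc += grid[y][x]
        if acc >= 1.0:
            return False
    return True
```

With this, a player behind one bush may still be spotted, but a player deep in a thicket is effectively hidden, which is closer to how a human opponent perceives the scene.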


This is brilliant. I so wish I enrolled for an ethology PhD now...


And the thread concerning that much better model: https://news.ycombinator.com/item?id=11403653


I agree with most of your comment, but here are two nitpicks.

>This is what makes the cult of science dangerous, not the word "bitches".

Nobody said that, so there's no need to burn your strawman there.

Science is all about method and the proper use of critical thinking, so you could argue it is a direct contradiction to pair that with a vapid attitude of shitty reposts of forced-meme-tier macros that are often inaccurate, shared without a moment's thought because it's nice virtue signaling (IFL in a nutshell). But you're right, he could have written it explicitly.

There's a nice writeup of this problem on Language Log [0], arguing that science is basically filling the role of biblical parables.

>Woo-pushers have appropriated the vocabulary of science indistinguishable to a layperson.

They are not responsible for that, and honestly, nobody is. Recently, I read a description of some machine learning algorithm that was so filled with buzzwords and dubious physics analogies that I thought it was a clever Sokal, but after some reading, it turned out all of it was genuine. That's just how jargon works: you assume that the one using it understands what he is saying, as long as he's using it seemingly properly, but you can't know unless you have a sufficiently good grasp of the semantics.

[0]http://itre.cis.upenn.edu/~myl/languagelog/archives/003847.h...


>They are not responsible for that, and honestly, nobody is. Recently, I read a description of some machine learning algorithm that was so filled with buzzwords and dubious physics analogies that I thought it was a clever Sokal, but after some reading, it turned out all of it was genuine. That's just how jargon works: you assume that the one using it understands what he is saying, as long as he's using it seemingly properly, but you can't know unless you have a sufficiently good grasp of the semantics.

I don't think machine learning was a good place to pick an example from. A lot of so-called explanations of ML algorithms basically are Sokal hoaxes, and the fact is that the writer doesn't understand what the algorithm does and how.


Another human activity where the avant-garde is fully indistinguishable from straight parody. What a world we live in.

The takeaway should be to keep some sane garde-fou (guardrail) principle, such as "form follows function", and use it as a unit test before rolling out a design/product/...


Nothing to do with the movie, but I found his autistic traits violently hilarious when reading more about him. He seemed at times completely oblivious to social implications and rules, which produced many great anecdotes punctuating his life.


To rephrase your question: do you prefer something stupid, or something stupid that pretends to be clever (and sullies something you like)?

Also, false dichotomy.


Oh thanks, that looks pretty interesting, and will distract me from shitposting on random pop-science/political quasi-journalism articles for a while. It seems that was the purpose of HN, but I'm not sure anymore.

However, it is unfortunately hard to make relevant comments on such articles. As the introduction says, it's pretty quick to set everything up (just one apt-get away), and the clean Racket syntax lets you define a calculus very neatly, but that's a far cry from being able to say much about it. I think I'll try to follow the tutorial with a classical calculus (λμ) and see how that turns out, but that's going to take some time. So here goes: "This was posted once before by HN user 'ingve' but didn't get much attention".
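For readers without Racket installed, the flavor of the exercise can be sketched in plain Python: a toy one-step β-reducer for the untyped λ-calculus using de Bruijn indices (all representation choices here are mine, not the tutorial's; Redex lets you express the same thing declaratively as a reduction relation):

```python
# Terms: ("var", n) | ("lam", body) | ("app", f, a), with de Bruijn indices.

def shift(t, d, cutoff=0):
    """Shift free variables (index >= cutoff) by d."""
    kind = t[0]
    if kind == "var":
        n = t[1]
        return ("var", n + d if n >= cutoff else n)
    if kind == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute s for variable j in t."""
    kind = t[0]
    if kind == "var":
        return s if t[1] == j else t
    if kind == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def step(t):
    """One leftmost β-reduction step, or None if t is in normal form."""
    if t[0] == "app" and t[1][0] == "lam":
        return shift(subst(t[1][1], 0, shift(t[2], 1)), -1)
    if t[0] == "app":
        r = step(t[1])
        if r is not None:
            return ("app", r, t[2])
        r = step(t[2])
        if r is not None:
            return ("app", t[1], r)
        return None
    if t[0] == "lam":
        r = step(t[1])
        return ("lam", r) if r is not None else None
    return None
```

Even this tiny sketch makes it clear how much boilerplate (shifting, substitution, congruence rules) a tool like Redex takes off your hands.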


The types and quality of stories on HN are like the tides; they vary regularly. One of the goals for HN is to have a good, balanced mix of interesting stories. Due to voting, populist stuff will surface, but HN still has an appreciation for heavy-weight, time-intensive articles.

Some truly great stories get few, if any, comments. If a post requires effort or specialized knowledge to even ask good questions, then there isn't much discussion. This happens a lot when academic papers are posted since reading a paper might require a multi-hour investment, but even when there is little discussion, it's good to have heavy articles submitted. They balance out the other stuff.

If you find something great-but-overlooked in the /newest queue, then send an email to hn@ycombinator.com asking for a repost request to be sent to the original submitter. That's what I did with this article, but Dan (dang) asked me to repost it myself. Neither 'ingve' nor I care who gets the credit/karma, but a lot of people want great articles to get attention on HN.

HN is what we make it.


>HN is what we make it.

Exactly, and certainly there are other people who would like to see more of that on the front page, but you just can't bash useful comments, so even with good intentions you can be part of the problem (speaking for myself).

Thank you for this comment, and again for resubmitting this.


The best thing to do about this is to find such articles and submit them. As jcr pointed out, HN has both moderation and software to try to give these more than one chance at the front page. (See https://news.ycombinator.com/item?id=10705926 and the other links there.) That's what happened in the present case, for example. But for this to work, users need to find the stories and post them.


Renders can also mean drawings. Scott Robertson's book "How to Render" does not explain how to press the render button in Blender; it explains how to actually draw (render) concept art.


We already had grammar nazis. I guess it's time for the semantics nazis!


"Semantics nazis"?

Quite a roundabout way of describing people who care about the accurate use of terms...

Is assigning random personal meanings to words a thing now? I mean, talk about putting the p...y on the chainwax!


Words are tricky. They don't have one true definition. What's accurate for one, may not be for another. Words have multiple, sometimes incompatible meanings, depending on context of discussion and participants. It's not really about completely random meanings that differ from person to person, but more about slightly different meanings that differ from culture group to culture group.

I see more and more that people attempt to devalue & derail statements made by others by making an offensive remark about semantics, instead of attempting to understand what was said.

It would be convenient if everybody used a single definition book for every word, however we don't live in such a reality. A "semantic nazi" is a person who goes around and tries to convert everyone to use their culture group's definition book. [1]

--

[1] I want to be clear that in this specific case the "render" definition extension was pointed out in a respectable manner and I wouldn't pin the nazi name on them for it.


How dare he correct someone who already tried to correct someone else (and was wrong). Why does calm clarification threaten you?


I truly believe that the renders -> Photoshop clarification was genuinely more useful than the "renders can mean any kind of drawing (even though it's clear from context that the writer meant computer 3D rendering)" clarification. Not that the second one was useless, but calling the Photoshop point 'wrong' is overdoing it.


> renders -> photoshop clarification was genuinely more useful

Except 'the clarification' is wrong. If you have a render, it doesn't mean it's 3D. It can be 3D, 2D, or a mixture of both (the last is probably the most frequent). It's a render.

This question can be more important than you think, since people may have the wrong expectations when they get a 'render' from the industry. Good post-processing (color balance, a bit of motion blur, and other small effects) can work wonders to show off a product (e.g. a video game), while not being so representative of the end result. This explains the disclaimers on trailers and screenshots, which are not there to be pedantic.

Anyway, I'm off sending Wittgenstein to the camps.


I disagree that it was wrong. "Render" is clearly a term with multiple meanings depending on context, and given the context and audience here, most readers will take it to mean 3D rendering. That there is a context where the statements are true does not mean they weren't making the wrong impression on readers here. Check out the author's reply, which clearly indicates they were surprised at the clarification.


Why should I feel threatened? Check tgb's comment for further clarification.


The fact that tgb "truly believes" the first, and erroneous, clarification was more useful means nothing to me. The first comment was simply incorrect. The impetus was to say someone was wrong. This was corrected by someone else, in a very civil manner, and you called that person a nazi. Grow up and leave meaningful comments or else don't bother.


It's not just that the first clarification was more useful. The first clarification was correct. The comment said the renders are beautiful, obviously meaning rendering in the 3D rendering [1] sense. The clarification stated that they weren't rendered in that sense, because the author used Photoshop, which doesn't render in that sense.

Then the second clarification stated that rendering also means drawing... and while it does, it also has a specific meaning within 3D representations, which doesn't really fit here.

Nothing to call anyone a nazi over but the second clarification wasn't needed or useful.

[1] https://en.wikipedia.org/wiki/3D_rendering

