Hacker News | Traster's comments

As with all silly internet block BS, simply reload the site and hit escape before the cookie banner loads.

> reload the site and hit escape

What exactly does that do? Which web browser?

I’m on mobile right now, so can’t test.


I'm on Firefox and it did nothing for me. The popup came up faster than I could hit escape after refreshing.

Alternatively, you can disable JavaScript on the website. That lets me view it.


On Chrome on desktop it stops the (slow) JavaScript that actually loads the blocker. You need a fast finger.

I would suppose it interrupts the page load after streaming the HTML and before loading and/or executing the cookie banner's JavaScript, meaning the content is there but the cookie banner will never open.

Or just inspect element and press delete. In some cases you also need to then delete an extra gray overlay and re-enable scroll on the base html tag, but it takes 30 seconds.
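A sketch of that devtools routine as a console snippet. The selectors here are guesses (every site names its banner and overlay differently), so inspect the page first and adjust the query:

```javascript
// Hypothetical selectors: sites name their consent overlays differently,
// so check the element you deleted in the inspector and adapt the query.
function unlockPage(doc) {
  // Remove anything that looks like a leftover banner overlay.
  for (const el of doc.querySelectorAll('[class*="overlay"], [class*="consent"]')) {
    el.remove();
  }
  // Banners often disable scrolling on <html>/<body>; restore it.
  doc.documentElement.style.overflow = "auto";
  doc.body.style.overflow = "auto";
}

// Run in the devtools console (guarded so the snippet also loads outside a browser).
if (typeof document !== "undefined") unlockPage(document);
```

Paste it into the console once the page has loaded; it does in one step what deleting the banner node, the gray overlay, and the scroll lock does by hand.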

Seems like a lot of work to browse a site that makes a good joke once every month or so.

This is slacktivism. I can kind of understand someone coming to the conclusion that we're replacing working class jobs with compute (caveat, I use working class more broadly than you), and that compute is pure capital. So essentially the capital class are wringing the neck of the working class. I think that, at the very least, is what the capital class is hoping for. If that's what you believe though, slightly poisoning a model is not even close to grappling with what is going on.

Even if you buy the idea that Kalshi is a prediction market whose mechanism is gambling but whose product is accurate predictions, you don't have to buy the idea that insider trading is a good thing. Yes, on the rare occasion there exists someone with (a) insider information, (b) confidence their actions won't impact their insider position, and (c) access to capital, you get extremely accurate predictions.

In every other case you get worse predictions, since those who are predicting now have to construct their bets knowing they can always get run over by an insider. So in the general case it reduces the ability of the predictors to push the market in the right direction, because they always have to risk-manage the fact that someone out there might run them over with insider information.


Not just insider information, but insider access. If the outcome of some prop bet is under the control of a handful of people, those people can trivially conspire to produce whatever outcome is most profitable to them.

If the outcome of a prop bet really is fully controlled by insiders, so that those insiders are making decisions based on betting outcomes, then allowing that betting to occur seems antisocial and counterproductive to begin with. This is another problem with the Polymarket/Kalshi species of "prediction market".

The problem is it's pretty hard to tell ahead of time whether that's what happens.

Suppose some large private company has to decide whether they're going to build a new facility in city A or city B. This is useful information for all kinds of reasons. If you're a vendor then you need to start making preparations to set up shop in the city where your big customer is moving etc.

The company's analysis shows it would derive a $10M advantage from building in city A. The prediction market is correctly leaning that way. If there are only enough counterparties that someone who now bets on city B and wins would make $5M, everything works the way it should and the company goes with city A. But if there are enough counterparties that a winning bet on city B would net you $25M then the company can place the bet, eat the $10M loss by choosing city B and come out $15M ahead.

But the $10M number isn't public. It's essentially the thing you wanted the market to predict and it could be arbitrarily larger or smaller than that. So how are you supposed to know if the prediction market will be predicting the result or determining it?
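The arithmetic in that example can be sketched as a toy decision rule (the dollar figures are just the ones from the example, not real data):

```javascript
// Toy model of the example above: the company compares keeping its internal
// advantage (build in city A) against cashing a winning bet (build in city B).
function bestChoice(advantageA, betPayoutB) {
  // Choosing A keeps the internal edge and wins nothing on the market.
  // Choosing B forfeits that edge but collects the bet payout,
  // netting (betPayoutB - advantageA) relative to the honest choice.
  const gainFromBettingOnB = betPayoutB - advantageA;
  return gainFromBettingOnB > 0 ? "B" : "A";
}

console.log(bestChoice(10, 5));  // thin market: the honest choice A survives
console.log(bestChoice(10, 25)); // deep market: the bet flips the decision to B
```

With a $5M payout the market merely predicts; with a $25M payout it determines, and since advantageA is private, outsiders can't tell which regime they're in.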


A private company of any real size isn't plausibly going to choose Atlanta over Chattanooga to win a prediction market bet. This is a good example of the kind of prediction that can theoretically be prosocial, and one strong indicator that it might be is that an insider bet is helpful rather than harmful.

On the other hand, at the point where the prediction market winnings are material enough that they might alter the underlying decision itself, you've clearly got an antisocial structure. Prediction markets that don't want to be seen as mere prop betting venues should refuse to run markets on those questions.


> On the other hand, at the point where the prediction market winnings are material enough that they might alter the underlying decision itself, you've clearly got an antisocial structure.

How is that supposed to be determined?

There are many decisions that have only minor implications to the party making them (they're choosing between two nearly-equivalent alternatives) but massive implications for third parties (the company or city chosen gets a huge gain and knowing which one is valuable information). When the decision itself is essentially a coin flip, any prediction market winnings could alter the underlying decision. And whether it's that close of a decision is the thing the market would be trying to predict rather than something you already know.


A different example would be people betting on whether a politician or celebrity will wear a certain color at an event. Since these apps allow exactly these sorts of trivial bets, this is not a stretch. That politician or celebrity, or their team, could easily wear a color that aligns with their bets. This seems indistinguishable from a scam.

> This is another problem

It is insider trading, the thing everyone here is talking about


If people with more information profit at the expense of people with less information, isn't that exactly how things are supposed to work?

If you're approaching a market with hard facts, detailed comparisons and solid evidence; while I'm trading in the same market based on vibes and intuition, surely it's expected that your returns would be better, and mine worse?


Short answer, no. If you're betting on an outcome that can be controlled by an individual or small group, the incentive is for them to game the system by doing the OPPOSITE of what the prediction is so as to make the most money.

"When a measure becomes a target, it ceases to be a good measure"

https://en.wikipedia.org/wiki/Goodhart%27s_law


Goodhart's law does not cleanly apply here, because the group cares about more than making money, and would bear all the costs of not doing what observers regard as being in its interest -- both in that case, and in whether potential counterparties regard it as predictable enough to make reliable long-term agreements with.

To illustrate with an example, your point is like saying that if we had a prediction market for "Will the United States cede Texas to Mexico in 2026?", then the US government would give up Texas just to get that sweet sweet prediction market payoff.

I would agree with a smaller point, that an org would accept minor tweaks it doesn't care about in order to game a market, but this just means it can tolerate being unpredictable about lower-order bits of its decisions. You see that in cases like Trevor Noah making a minor change to a speech to influence a particular bet.


You're confusing collusion with being informed. The concept of market rationality is based on the premise that all participants in the market more or less have access to the same information. Fools can choose not to be informed before making a trade, but passing along sensitive information that contradicts rational market behavior causes people to lose trust in the market.

Perfect example from today. Allbirds just announced that they're going all in on AI infra, skyrocketing the stock. Had I bought a million dollars worth of Allbirds yesterday, everyone would think I'm an idiot. But now, they would think I have insider information and would no longer want to participate because it would make no sense to buy Allbirds yesterday unless I knew the announcement was coming.


If you’re betting with a friend that they won’t have chicken for dinner, what’s to stop them from having chicken for dinner? What if you bet with a complete stranger who also took the reverse of that bet from your friend?

Nothing, that is why you quickly learn to not make stupid bets like that. If you don’t learn, then I guess survival of the fittest and all that.

A fact is a statement about past. A bet is contingent on the future.

Insiders can change the facts.


> Even if you buy the idea that Kalshi is a prediction market whose mechanism is gambling but whose product is accurate predictions, you don't have to buy the idea that insider trading is a good thing.

Yes, and furthermore even if you’re one of those people who think insider trading in prediction markets is a good thing [1] that doesn’t somehow make it not illegal. The DoJ seems to be pursuing the theory that it constitutes wire fraud, which since “everything is wire fraud”, seems possible.[2] The CFTC has also claimed jurisdiction, which isn’t surprising since it claims jurisdiction over pretty much everything. If true this would mean some of the commodities trading regulations could be used as well, although insider trading rules in the US around commodities are generally less stringent than say for equities. In Europe I’m pretty confident that the EU market abuse regulations would cover insider trading in prediction markets, and make insider trading market abuse as it would constitute trading on material non-public price sensitive information. (European insider trading rules are stricter than the US in general).

[1] the standard argument in favour of this is not one I agree with, but people say that the benefit is that the inside information is revealed by people acting on it in the market and that this therefore benefits the non-insiders. How much you buy into this idea depends on how much you feel that non-insiders benefit from paying insiders for this more accurate price.

[2] https://www.freshfields.com/en/our-thinking/blogs/a-fresh-ta...


If an insider with large amounts of capital makes a big trade, they also end up discouraging other trades. Once you see a huge position taken, LPs are going to scale back their liquidity in other positions to manage risk that the insider is going to stomp them. Any trader monitoring position sizes is going to probably scale back their trading. All of this contributes to less trading and less commission on these markets.

Sports betting is so profitable for prediction markets because they're mostly unsophisticated retail flow making lots and lots of trades, giving the platforms commission. If an insider just pushes market prices in their direction the platforms are going to lose on volume.


> Since those who are predicting have to now construct their bets such that they know they can always get run over by an insider.

The average person does not do this. People trade individual stocks all the time, despite every other market participant (banks, hedge funds, etc.) having better information and technology.

It's why institutions like Citadel pay for retail order flow. They know that retail traders don't have an edge and, if anything, often end up being negative signal.


No but sophisticated traders will also get stomped by this. Just because you're a sharp oil trading shop doesn't mean you can combat an insider who knows when Brent is about to spike in price due to insider knowledge.

You can see all across the responses here the encoded premise that the point of a prediction market is to enable people to profit from making accurate predictions. No. The point is for the price to be accurate; for the market to make an accurate prediction. That someone with a P1 prediction can roll over people with less confidence is a feature.

People can correct me if I'm wrong, but I think the core logic behind OpenAI's valuation was essentially that AI would work like search. Google had the best search engine, it became a centre of gravity that sucked everything in and suddenly network effects meant it was the centre of the universe. There seem to be 2 big problems with that though. The first is that for search, queries are both demand for the product and a way of making the product better. The second, is that Google was genuinely the best product for a very long time.

Maybe point (1) was unclear at some point, but I think it's mostly clear today that's not happening: training the model is largely distinct from inference, so user queries don't automatically make the product better the way search queries did.

Point (2) is really funny - because sure, at some point OpenAI was the best, and then Sam Altman blew the place up and spawned a whole host of competitors who could replicate and eventually surpass OpenAI's state of the art.

It now looks like AI is a death march. You must spend billions of dollars to have the best model or you won't be able to sell inference. But even if you do, a whole host of better funded competitors are going to beat you within months so your inference charges better pay off extremely quickly. When the gap between models starts to drop, distribution becomes king and OpenAI can't compete in that field either.

Google can do that. Meta can do that. MSFT probably can do that. Amazon can do that. OpenAI cannot. They do not have the cash to do it.


I think a large part of its valuation was its ability to compete with search, but that's understating it a bit. Unlike search, it could/can be the platform users primarily interact with (à la a social media replacement) while having huge impacts on enterprise work and automation. I think it's the combination of effectively one company being able to compete on every front of the modern web ecosystem that's contributed to the valuation.

It's also important to note the valuation is not just based on its possible concrete economic implications in these areas but also on future "unknown" possibility (i.e., whatever "AGI" means to investors). That's not to say I believe it's possible to achieve this, but rather that a huge part of Sam Altman's job is increasing valuation through unfounded claims of AGI's possibility and possible impact.


Yeah, to zoom out, I think it was less specifically search and more generally: there was the PC, and the winner became a behemoth. Then there was search, and the winner became a behemoth. Then smartphones, and the winner became a behemoth. Then there was social media, and the winner became a behemoth.

The logic was basically "AI is going to be the next thing. The winner is going to be massive, so let's back the person who looks best placed to do that." To be fair, it's probably correct. The people betting on OpenAI probably have plenty of money in Google shares and almost certainly have a share of Anthropic, Grok, you name it. Most of them will go to zero, but the one winner could pay off. I'm not sure even one will pay off.


I've almost forgotten about AGI; that was supposed to be the reason for the valuations and all the hope/fear. Then it just sort of went away and AI turned into the software-developer doomsday machine. We're on month 4 since the models got really good at code and we were all going to be out of a job in 6 months. I guess we only have 2 more months of employment left /s

I'm just sad Google was intent on ruining their own product, whether removing the + operator (seriously, Google+ is not an excuse; I don't care if it conflicts with search, don't do that) or some of their political censorship.

"Google had the best search engine, it became a centre of gravity..."

Almost no one made serious attempts at competing with Google. And not because of network effects or any other hard blocker. In the early 2000s, the industry just wasn't mature enough to heavily fund serious competition.

By the 2020s the industry had funding and founders ready to jump on any huge opportunity that presented itself.

There are of course downsides, but this competitive landscape in AI seems like a huge net win for users in terms of lower costs and faster progress.


Yahoo? MSN Search?

There were lots of search engines but very few that made serious well-funded attempts at competing head-to-head with Google.

I'd consider both of those well-funded.

Microsoft had a good go with Bing.

They did as well competing with Google on search as they have competing with Apple on smartphones.

That's been my feeling for a while now. Google just has to keep up while OpenAI and Anthropic go bankrupt. I can see MSFT and Amazon eventually consuming OpenAI and Anthropic respectively when the money runs out, but I still think Google is the eventual winner. I also have been pointing out that Apple making a deal with Google vs trying to do it on their own is another vote in that direction.

For actual searching, it seems like RAG would be the way. Instead of rebuilding models, focus on curating datasets and sources.
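A minimal sketch of that retrieval step, using naive keyword overlap as a stand-in for a real embedding index (the corpus and scoring here are purely illustrative assumptions, not any actual API):

```javascript
// Toy keyword-overlap retriever: a stand-in for an embedding index over a
// curated corpus. In a real RAG pipeline the top hits would be pasted into
// the model's prompt, so freshness comes from the data, not from retraining.
const corpus = [
  { id: "doc1", text: "google built its search engine on link analysis" },
  { id: "doc2", text: "rag systems retrieve curated documents before generating" },
];

function retrieve(query, docs, k = 1) {
  const queryWords = new Set(query.toLowerCase().split(/\s+/));
  return docs
    .map((d) => ({
      ...d,
      // Score = number of document words that also appear in the query.
      score: d.text.split(/\s+/).filter((w) => queryWords.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

const hits = retrieve("retrieve curated documents", corpus);
```

The point of the design is that improving search quality becomes a matter of curating `corpus`, while the generation model stays fixed.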

And as we all know, if you're smart enough to get root access, your neighbours' children playing football in the street should be subject to the risk of you driving a car that claims to have full self-driving with custom code on it.


This is just Apple saying "We own all user compute now". Yeah, you guys can fight over data centres, but every device that a user physically has will be an Apple device. They've now got the full range of price points from low cost to prosumer, and they've got the software stack to back it up, so you can have your sales staff running neos logging in to their CRM, and engineers running their MacBook Pros.

It's kind of insane the advantage Apple Silicon has brought, along with the brutal price competition in PC sales. The only question I have is whether this touches the sides. That is to say, they sell a billion iPhones; are consumer laptop and low-end business sales enough to bump the numbers? They're thinner margins, and that market has to some extent been on a downward trend (which is why the stock market is running to data centres, where the compute actually happens).


At the end of the day the core business is throwing off tonnes of money and is run fine. Would it be better not to throw billions at the next cool thing? Who knows. Probably. But Google does the same thing and they've actually built some cool stuff.


Return it to the shareholders!


I was at Intel for a while and there was one glaring problem: they have one product that spins off a huge amount of cash. This means a few things. First, that one product is really where the things that matter happen. But second, they have all this money and they don't know what to do with it. They can't spend it all on their core product because that looks terrible - they're already throwing off money, and investing more probably just makes the company look bad (you're spending more to get the same revenue). So instead you have to take that money and make bets. But not just any bets. You need a bet that (a) matters if it pays off, and (b) looks favourable compared to the core business. So you buy McAfee and Altera and Mobileye; 5G was the future once...

So to take the Meta example, they need something that is going to have revenue upside similar to Meta's advertising revenue (one of the most profitable things in the universe), and that has better margins than the advertising business (basically impossible). So the only logical thing to do is to make grotesquely large bets on things that are extremely speculative. You can't bet on things that are well known, because nothing known has the properties from earlier that you're looking for, and you can't bet small, because you've got to convince people the payoff is of a similar size to your existing business.

In Intel's case they lost focus on the core business, so that died, and their other bets didn't matter because the core business was dead. With Meta the core business isn't dead, but it's only a matter of time before it's seriously threatened, so they're going to attack that threat with everything they've got - and they have a tonne of resources.


Man, activist shareholders should really start demanding that companies return money to them rather than setting it on fire with vanity projects.


Sounds a lot like Google as well


But Google actually knows how to do research and how to apply it to products. Meta's AI research hasn't produced anywhere near as many state-of-the-art products or revolutionary achievements.


> But Google actually knows how to do research and how to apply it to products.

I have seen basically no evidence of this. Google knows how to do research to create technology. Google is pretty terrible at creating product though.


Since we're comparing to Meta, you just have to look at the state of their publicly facing products that feature AI. Google has better AI models (Gemini, Nanobanana) and they've integrated them successfully into way more products than Meta has.

Meta spends a lot of money on AI research with little to show for it. As imperfect as Google may be, they're still doing much better.


Google knows how to do research - and at the very least lets other people figure out the products, and then becomes the #3 or #4 player.

Both GCP and Gemini are products of this. Modern cloud was arguably built by Google (think Chubby, GFS, Bigtable as building blocks) - they just spent 10 years ceding it to Amazon before competing.


Which is almost word for word the state Microsoft have been in for over 20 years.

To the point it was a running joke at MSR.


I'm not sure Microsoft is good at creating technology OR products. Microsoft is good at enterprise sales.


Google knows how to do research, at any rate.


I was thinking more of their primary revenue source / money printer being their ads business, like Meta's; they also spend billions from it on all kinds of other bets.


In 2026 we need to update our mental model of Google. Google has been wildly successful at adding diversification. Around 40% of Google’s profit (depending on the quarter) comes from non-search income.

They built a wildly successful cloud platform, they're expanding their subscription services, they've got enterprise offerings, etc.

The trick is that Google accepted that none of their other businesses would likely have the margins and volume that search has, but they did it anyway.


Interesting I didn’t realize they had become so diversified.


Or you could just return the profits to shareholders.


Spending tens of billions on something that was never going to work is certainly meta.


They already attacked it with everything they've got lmao

As in, in 2012. They outright replaced people's email addresses in their profiles (making it harder to reach people outside the walled garden, and harder to transfer your credentials to a competing service), and I've heard Google+ links got blocked.

Zuckerberg is many things; not everything he's accused of (Trump/Cambridge Analytica) might be entirely accurate, but he is at least partly a bit of a scumbag.


I was going to say I disagree - I think that at least some level of discussion on HN about important things going on is important. Israel is actually a tech powerhouse, and a lot of this is seriously shaping defence technology policy and says a lot about how power dynamics can actually play out.

Having said that, my settings show me all comments that are flagged, and HN is apparently not capable of having a respectful conversation about this. Almost anything expressed on the actual topic has been flagged. The only thing left is comments rules-lawyering to say we shouldn't discuss the topic at all.

It's kind of an indictment of the users of HN. It might be the right move to remove the article, but it becomes the right move because the users of this site can't be trusted to actually conduct a conversation about it.


I feel something very similar. I have strong views that what Israel is doing is wrong. But I look around at our politics (in the UK), and there is such a well-oiled Israeli PR operation, very happy to make career-ending accusations, that talking publicly about this is actually quite dangerous (not helped by the loonies who are, and have always been, disgusting anti-semites). And you look at our politicians' stance on it, and the career of people like Lord Walney, and it's clear we're in a very dangerous place. I think there is a very wide gap between what the average British person actually believes about Israel and what is happening to the Palestinians, and the acceptable positions you can express in Westminster. I also fear that once the dam breaks and that's no longer the case, the swing back against Israel is going to be quick and harsh. That's difficult because I have friends and family in Israel - I would like to see Israel be a free and open liberal democracy that shares what used to be Western values, but maybe we're too late for that.


Also of note, the UK has seen a massive rise in anti-Muslim sentiment in recent years. That's also a major contributor.

