pityJuke's comments | Hacker News

I’ve grown to appreciate this aspect of standard examination as I’ve gotten older. Everyone wants to say “oh, you can just look it up now”, but how can you come up with higher level thinking, when you don’t have the fundamentals in your mind?

To use math as an example, you can always look up formulas. But after more than 1 "layer" of looking up, that quickly becomes impossible. Like, when I had to learn to calculate derivatives and primitives, I could look those things up. But when I got to linear algebra, I couldn't progress until I deeply internalized derivatives and primitives, because looking up formula A only for it to contain unknown formula B just becomes a mess.

Agreed. We've been able to "look it up" for a while. To use math as an example, we've had calculators for a very long time. But when I was in school they didn't let us use calculators until precalc. Now I use calculators even for simple math because I already understand the fundamentals and just need expedience.

Just because one can "look it up" doesn't mean it's necessarily the best thing to do at the moment. But it also doesn't mean that folks who look it up are necessarily losing any higher level thinking, though I concede that many people certainly delude themselves into thinking they understand the fundamentals and thus can use AI as a tool for expedience when they're really using it as a tool for thought.


90% of my usage of it is because it actually displays the map on my Watch, whereas Google Maps & Citymapper only show directions.

If it weren't for that, I'd use Citymapper for practically everything.


God, I miss Android so much. iOS still annoys me. The app situation is sadly better on iOS, though.

I’m surprised they’ve said it so confidently given how it completely collapsed last time…

I believe it's because it's a different structure.

Previously, they were trying to buy the assets outright. That ran into the "one group of families is owed $1.4 billion and another is owed $50 million" problem, and the question of "how do you maximize the returns from Alex Jones's assets to satisfy those claims?"

This is using a different structure.

> On Monday, Mr. Milligan asked Maya Guerra Gamble, a judge in Texas’s Travis County District Court overseeing the disposition of Infowars, to approve that licensing agreement in a court filing. Under the terms, The Onion’s parent company, Global Tetrahedron, would pay $81,000 a month to license Infowars.com and its associated intellectual property — such as its name — for an initial six months, with an option to renew for another six months.

They're not buying it - they're licensing it from the victims' families instead.


Well, that's an example of exactly the type of media outlet they're trying to create!

Consider the fact that this is a satirical news website, with a fictional CEO and an imaginary corporation, and that it literally proposes a vision of "Not just ads, but scams! Not just scams, but lies with no object [...] A digital platform where, every day, visitors sacrifice themselves at altars of delusion and misery"

I'm surprised you're surprised.


I think it's a good PR move. "Hey, look at how reasonable we've been in spite of the legal craziness. We've put money on the table and are moving forward with a plan that benefits everyone." Now anyone who blocks the plan will be seen as the problem.

Man, I cannot imagine how nice it must be to work with leadership like this, who just gets it.


AI is also based on how we write. Some people are bound to write in a similar vein to LLMs naturally. See this person’s blog about it [0].

[0]: https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li...


He also had blogs about those maps: https://www.johnsto.co.uk/design/


I mean, Cash App is simply a workaround for the US banking system's lack of a unified transfer system.


Ditto. I've found it pretty tolerable once I've used "ShutUp10!" to disable the annoying stuff. I've used more aggressive tools, but found they break useful stuff (like the Xbox Gaming services, which some MSFT games use).


Haiku 4.5 is a reasoning model. [0]

[0]: https://www-cdn.anthropic.com/7aad69bf12627d42234e01ee7c3630...

> Claude Haiku 4.5, a new hybrid reasoning large language model from Anthropic in our small, fast model class.

> As with each model released by Anthropic beginning with Claude Sonnet 3.7, Claude Haiku 4.5 is a hybrid reasoning model. This means that by default the model will answer a query rapidly, but users have the option to toggle on “extended thinking mode”, where the model will spend more time considering its response before it answers. Note that our previous model in the Haiku small-model class, Claude Haiku 3.5, did not have an extended thinking mode.


Sure, marketing people gonna market. But Haiku's 'extended thinking' mode is very different from the reasoning capabilities of Sonnet or Opus.

I would absolutely believe mar-ticles that Qwen has achieved Haiku 4.5 'extended thinking' levels of coding prowess.


>Sure, marketing people gonna market.

Oh HN never change.


Not sure what this means, but as a marketing person myself, here's what happened: One day, an Anthropican involved in the Haiku 4.5 launch shrugged, weighed the odds of getting spanked for equating "extended thinking" with "reasoning", and then used Claude to generate copy declaring that. It's not rocket surgery!


It's mainly that people on here, regardless of profession, speak incorrectly but confidently about things that could be easily verified with a Google search or basic familiarity with the thing in question.

Haiku 4.5 is a reasoning model, regardless of whatever hallucination you read. Being a hybrid reasoning model means that, depending on the complexity of the question and whether you explicitly enable reasoning (this is "extended thinking" in the API and other interfaces) when making a request to the LLM, it will emit reasoning tokens separately prior to the tokens used in the main response.

I love your theory that there was some mix-up on their side because they were lazy and it was just some marketing dude being quirky with the technical language.


> It's mainly that people on here, regardless of profession, speak incorrectly but confidently about things that could be easily verified with a Google search or basic familiarity with the thing in question.

Yep. And if your heart wants to call Haiku a "reasoning model", obviously you must listen. It doesn't meet that bar for me for a couple of reasons: (1) it lacks both "adaptive thinking" and "interleaved thinking" (per Anthropic, both critical for reasoning models), and (2) it also performed unacceptably on a real-world collection of very basic reasoning tasks that I tried using it for.¹ I'm glad you're having better luck with it.

That said, it's a great and affordable little model for what it was designed for!

¹ I once made the mistake of converting a bunch of skills (which require basic reasoning) to use Haiku for Axiom (https://charleswiltgen.github.io/Axiom/). It failed miserably, and wow, did users let me have it. On the bright side, as a result I'm now far better at testing models' ability to reason.


We are all reasonable people here, and while you are (mostly) correct, I think we can all agree that Anthropic's documentation sucks. Here's what I have to infer from the docs:

* Haiku 4.5 by default doesn't think, i.e. it has a default thinking budget of 0.

* By setting a non-zero thinking budget, Haiku 4.5 can think. My guess is that Claude Code may set this differently for different tasks, e.g. thinking for Explore, no thinking for Compact.

* This hybrid thinking is different from the adaptive thinking introduced in Opus 4.6, which when enabled, can automatically adjust the thinking level based on task difficulty.
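The inferred default-off behavior above can be sketched as a small request-builder. This is a hypothetical helper, not real Anthropic or Claude Code source; only the `thinking={"type": "enabled", "budget_tokens": N}` parameter shape comes from the Messages API, and the budget values are illustrative:

```python
def thinking_params(budget_tokens: int) -> dict:
    """Build the extra request parameters that toggle extended thinking.

    Hypothetical helper. A budget of 0 leaves a hybrid model in its
    default fast mode (no reasoning tokens); a positive budget enables
    extended thinking, so the response carries separate "thinking"
    content blocks before the final "text" block.
    """
    if budget_tokens <= 0:
        return {}  # default: answer rapidly, no reasoning tokens
    return {"thinking": {"type": "enabled", "budget_tokens": budget_tokens}}

# A harness could then pick different budgets per task, as guessed above:
explore_kwargs = thinking_params(4096)  # thinking on, e.g. for Explore
compact_kwargs = thinking_params(0)     # thinking off, e.g. for Compact
print(explore_kwargs, compact_kwargs)
```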

