
Wait, what!? I have been programming CUDA since 2009 and specifically remember it being pushed toward C++ as the main development language after the first few years, following a brief "CUDA C extension" period.

CUDA variants extend several programming languages, including C, C++ and Fortran.

None of the extended languages is the same as the base language, in the same way that OpenMP C++ is not the same as C++, OpenMP Fortran is not the same as Fortran, and SYCL is not the same as C++.

The extended languages include both extensions and restrictions of the base language. In the part of a program that will run on a GPU you can do things that cannot be done in the base language, but there are also parts of the base language, e.g. of C++, that are forbidden.

All these extended languages have the advantage that you can write, in a single source file, a complete multithreaded program with parts running concurrently on a CPU and parts running concurrently on a GPU, but for the best results you must know the rules that apply to the language accepted by each of them. It is possible to write programs that run without modification on either a CPU or a GPU, but this comes at the cost of lower performance on both, because such a program uses only generic language features that work everywhere instead of taking advantage of device-specific features.
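For concreteness, here is a minimal single-source sketch of the kind of program described above (the kernel name and sizes are arbitrary choices for illustration). The `__global__` qualifier, the `<<<grid, block>>>` launch syntax, `cudaMallocManaged`, and the built-in `blockIdx`/`threadIdx` variables are CUDA extensions that no standard C++ compiler accepts, while inside the kernel several standard C++ features (e.g. exceptions) are forbidden:

```cuda
#include <cstdio>

// Device code: __global__ marks a kernel, a CUDA extension to C++.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // built-in variables
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory, visible to both host and device: another extension.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Host code launches the kernel with the <<<grid, block>>> syntax,
    // which is not valid in base C++.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // 2*1 + 2 = 4
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The same file holds both host and device code and is compiled with `nvcc`, which splits it and applies the different rule sets to each part.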


CUDA is not C++. CUDA for GPU kernels is its own language. That's the actual problem requiring new languages or abstractions.

Anthropic is a great demonstration for startup founders of how, if you have a great product, people will buy it, even if they dislike your pricing, your marketing, and the CEO's opinions.

Real PMF sells itself. The risk, of course, is the competition catching up; I bet switching costs are very low in this setup.


To get "End of Chat Control" EU should actually pass laws prohibiting it, this whack a mole will eventually lose.




The harness is the model's "body"; its weights are the cognition. As in nature, they develop together, and the iteration of natural selection works on both.

If the smaller labs (Zai, Moonshot, DeepSeek, Mistral, ...) got together as a consortium and embraced a single harness, opencode for example, then just by the power of "evolution across different environments" they might hit the jackpot earlier than the bigger labs.


But they rely on distilling the output of the leading American models, which will probably be trained against their own harnesses.

Someone has to do the baseline training, development, and innovation. It can't be clones all the way down.


It goes the other way around as well. DeepSeek has made quite a few innovations that the US labs were lacking (DSA being the most notable one). It's also not clear to me whether distilled outputs are just one additional ingredient in the recipe or the whole "frozen dinner", so to speak. I have no evidence either way, but my guess is the former.


Why not? Humans are (very nearly) clones all the way down.


Citation needed. SOTA labs surely have technical protections and legalese against using their outputs for training. It's been done in the past, but what indicates this is still the case?


>Citation needed. SOTA labs surely have technical protections

Their APIs are unlimited as long as you pay; how would they control how you use them?

> and legalese against using their outputs for training.

It's a whole different jurisdiction, and in general Chinese companies care far less about copyright infringement.

https://en.wikipedia.org/wiki/Counterfeit_consumer_good https://en.wikipedia.org/wiki/Allegations_of_intellectual_pr... https://en.wikipedia.org/wiki/China%E2%80%93United_States_tr...


This didn't stop millions of copyrighted works from being used to train the models.


Mistral recently came out with their own harness (vibe), and I feel it was a massive missed opportunity vs throwing in with aider or opencode.


My experience trying LanceDB has been abysmal. It worked great in dev and small testing environments, but as soon as we tried production workloads it became extremely slow. We shifted to PostgreSQL + pgvector and had absolutely no issues, even if it is not "engineered for multimodal data". Maybe we were doing something wrong, but we did put effort into trying to make it work. Is it really this hard to get it performant?


I’m also curious to hear what challenges you encountered. I’ve used LanceDB for a few projects in production now and it’s worked out reasonably well.

The docs quality is spotty, and the lack of parity between the async and sync python API is frustrating, but otherwise it’s been great.

The only performance issues I’ve had have been A) not rebuilding indexes on an appropriate cadence, B) not filtering the search space enough for queries which bypass the index, or C) running search against millions of vectors on an object storage and expecting millisecond latency.
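Point (B) can be made concrete with a toy brute-force sketch (pure NumPy, not the LanceDB API; the metadata column and sizes are invented for illustration). A query that bypasses the index must scan every row, so pre-filtering on metadata shrinks the work proportionally:

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.normal(size=(100_000, 128)).astype(np.float32)
labels = rng.integers(0, 10, size=100_000)  # hypothetical metadata column
query = rng.normal(size=128).astype(np.float32)

def brute_force_top_k(vecs, q, k=5):
    # Cosine similarity against every candidate vector, then take top-k.
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q))
    return np.argsort(-sims)[:k]

# Unfiltered query: scans all 100k rows.
top_all = brute_force_top_k(vectors, query)

# Pre-filtered query: restrict to one metadata value (~10k rows) first,
# then search only that subset — roughly 10x less work per query.
mask = labels == 3
top_filtered = brute_force_top_k(vectors[mask], query)
```

A real engine replaces the full scan with an ANN index, but the same principle holds there: the tighter the filter, the smaller the candidate set each query has to touch.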


Great, thanks for the feedback! I work at LanceDB and will take these points into account (esp. the docs).


Curious what performance issues you faced. Was that in OSS LanceDB? And what were the challenges?


Inserts became increasingly slow: >10 s for a single chat-completion insert after 10_000 entries, on k8s Longhorn atop NVMe.


Great stuff. Now if you could please do gemini-2.5-pro-code, that would be great.


Nice, congrats. But that O looks like an ass.


That's the corporate design of the industry: Why do AI company logos look like buttholes? - https://news.ycombinator.com/item?id=43649640


This. Vercel is at a ~$10B valuation with a business built atop React; they should, and probably will, take over more of Meta's space as its stewards.


Please no. They don't have the best interests of React in mind.

They threw the resources behind RSC to make React, a framework for frontend reactivity, force opt-in for frontend reactivity. Meta is needed more than ever at this point, before React fully becomes a framework for burning compute on Vercel's infra.


I agree with this. I’d prefer to have Meta be the steward for React instead of Vercel because Meta does not have a conflict of interest.


They might not have the conflict of interest, but they also don't have the business interest. Meta is a spyware company that makes all of its money from collecting personal data to sell to advertisers. They have zero incentive to dedicate any significant resources to supporting the millions of websites using their internal UI library.


Because Vercel makes money when components are rendered server-side, not client-side.

I know almost nobody who even uses server-side components. They're right out if your backend isn't Node.


That is exactly why I stopped using React two years ago.


Taken together with the study-visa changes: thanks, Trump, for helping solve Brazil's brain drain.

