timtimmy's comments | Hacker News

Perhaps Anthropic should put a freeze on new signups until they can increase capacity. This is the best kind of problem for a business; I'm cheering for them.


If there is one thing that is crystal clear, it's that LLM providers will always take your money, no matter how bad the service is.


This requires ethics.


Even with IDEs that have a terminal view, I still much prefer using a separate terminal app.


Same. Standalone terminals will always beat the ones built into other things at actually being a terminal. No need to pile more bloat onto an already bloated IDE/editor, and besides, it feels kind of like those old combo TV+VCR units, where neither the CRT tube nor the VHS player was great.

If I could do anything to Xcode or Android Studio, it'd be to split more things out of them and make them excellent at their core tasks.


Why? It seems pretty pointless to keep hot in memory the context of every app and tab you have open, just so you can recall which process, tab, and window ties to which thing you were doing at what time, when it's effectively all one related workflow inside your Integrated* Development Environment. Do you just keep a separate dedicated tab in your terminal for actions you would only ever run against a single directory?


My machine has more memory than I generally know what to do with. The mapped-into-memory footprint of Terminal.app right now is ~112MB, for 12 terminal tabs across 4 windows.

In other words, I don't care about the memory use.

I think I commented earlier that there's not that much I use the terminal for during development - mainly git. Keeping a terminal open, mainly hidden, in the bottom-left corner with the tab set to the top-level directory isn't really a burden.


I do the same - largely because I open the IDE with `idea .`/`zed .` (or whatever) from a directory with the correct nix dev shell already loaded in order to ensure the correct toolchains get used.
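
Concretely, it's roughly this pattern (a sketch; it assumes direnv + nix-direnv with a flake, and `~/src/myproject` is just a hypothetical stand-in):

    cd ~/src/myproject   # .envrc here contains "use flake"
    # direnv auto-loads the flake's devShell, putting the pinned toolchains on PATH
    zed .                # the IDE inherits that environment, so it picks up the right tools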

Typically I have 3-4 different projects open at a time and probably 30-40 terminal windows across them and other places (in Ghostty).

Honestly it had never really crossed my mind that people used the built-in terminal for anything!


Careful: their ToS makes it clear they train on your Antigravity prompts (even on AI Ultra), and there's no opt-out that I can find.


Google keeps changing their privacy and “don’t train on my data/code” options. When gemini-cli launched, there was a clear toggle for “don’t train on my code.” That’s now gone; for me it just links to a generic privacy page. Maybe something about my account changed; I can’t figure it out. Deep in the Cloud Gemini console there’s another setting that might control training, but it’s not clear which products it actually covers.

Trying to pay for Gemini 3 is confusing. Maybe an AI Ultra personal subscription? I already pay for OpenAI’s and Anthropic’s pro/max plans and would happily pay Google too. But the only obvious option is a $250/month tier, and its documentation indicates Google can train on your code unless you find and enable the correct opt-out. If that opt-out even exists across all the products, it’s not obvious where it lives or which products it covers.

Workspace complicates it further. Google advertises that with business Workspace accounts your data isn’t used for training, so I was going to try Antigravity on our codebase. By this point I knew I couldn't trust Google, so I read the ToS carefully: they train on your prompts and source code, and there doesn't appear to be a way to pay them and opt out right now. Be careful: paying for Google Workspace does not protect you. Always read the ToS.

Be careful with AI Studio and your Google Workspace accounts too: they train on your prompts unless you switch to API mode.

The result is a lot of uncertainty. I genuinely have no idea how to pay Google for Gemini without risking my code being used for training. And if I do pay, I can’t tell whether they’ll train on my prompts anyway.

The marketing for their coding products does not clearly state when they do or do not train on your prompts and code.

I had to run deep research to understand the risks with using Gemini 3 for agentic work, and I still don't feel confident that I understand the risks. I might have said some incorrect things above, but I am just so confused. I feel like I have a <75% grasp on the situation.

I don't have a lot of trust. And honestly, this feels confusing and deceptive. One could easily mistake it for a deliberate strategy to gather training data through ambiguity and dark patterns; it certainly looks like that could be Google's way of trying to win the AI race. I assume it only looks that way, and that they aren't being evil on purpose.

OpenAI in particular has my trust. They get it: they're carefully building the customer experience, and they're product- and customer-driven from the top.


Personal Antigravity hack: add a GPL license header to every file, so Google filters them out before training to avoid legal complications. IANAL.
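
If anyone wants to try it, a minimal sketch (it assumes SPDX header lines are what a training filter would key on, which is pure speculation on my part, and it only covers Python files; adjust the comment syntax per language):

    # prepend an SPDX GPL identifier to every tracked Python file
    git ls-files '*.py' | while IFS= read -r f; do
      printf '# SPDX-License-Identifier: GPL-3.0-or-later\n%s\n' "$(cat "$f")" > "$f"
    done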


>OpenAI in particular has my trust.

I wouldn't trust Sam Altman. Or any of the big players really.


> trust

Hahaha...HAHAhaha. HAHAHHAHAHAHAHAHAHA!!!


A v2.0 update for my biology education app. I'm adding the ability to walk around cell models with billions of atoms on the Vision Pro.

I'm designing the content browser right now. I'm trying to achieve something really immersive like Apple's new Spatial Gallery app.


It's becoming quite common in South Korea for each table to have a fixed tablet on a stand. You order from the digital menu and then pay immediately with the integrated PoS. If there's no tablet, you pay on your way out. There's a button at each table to summon the waiter. No tips.

Japan has ticket vending machines in many restaurants. You prepay and order at the front of the restaurant, it prints a little ticket, and you give that to the waiter or kitchen.


It's not beyond imagination that this will happen to the Canadian Prime Minister next.


I wonder what would happen if the Canadian Prime Minister then decided to file charges of treason against Musk, and issue an arrest warrant?


Nothing good, I am afraid.


Optimizing our rendering algorithms for Apple Vision Pro. Trying to render a 300-million-atom cell model at 90fps stereo. It's trivial on a 4090; it's pretty hard on a mobile GPU with, at most, a few tens of watts to work with. I'm thinking about a bunch of immersive mesoscale biology stuff next.
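
For a sense of the scale (back-of-envelope; assumes a minimal 16-byte float4 per atom and an M2-class memory bus on the order of 100 GB/s):

    300,000,000 atoms x 16 B     ~= 4.8 GB of position data alone
    4.8 GB x 2 eyes x 90 fps     ~= 864 GB/s if every atom were read each frame

That's roughly 8x the bandwidth actually available, which is why the brute-force approach that works on a 4090 falls over here and culling/LOD/impostor tricks become mandatory.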


Curious: do you have a favorite Metal project to learn from? I imagine foveated rendering and temporal reprojection are important here.


In my university days I climbed Mt Fuji at night for the sunrise, wearing jeans and running shoes and carrying a tiny pen light. At the top there were people with small oxygen tanks (understandable; there are legitimate medical concerns for some). I'd do it differently today. :)

I'm ignoring the point of the article, but I'm currently in a country with a strong hiking culture. Everyone is decked out with every piece of hiking gear imaginable for a short trek up a hill (a 2-hour round trip?). It's a bit of a status thing. Well... maybe there is a connection to the article: do we sometimes avoid simple tools in startups because of ego/status concerns?


I think you are right.

But a gentler answer is that if you don't know what you need to do the hike, you ask around for best practices and probably end up following some that are overengineered.

The people like you on that hike, in my experience, fall into two groups: either they have so much experience that they know exactly what does and doesn't work for the conditions, or they kind of got lucky.


Thank you for the gentler perspective, and I do think it plays a role.


"Current visualization software, such as UCSF ChimeraX6, can only render one or a few protein structures at the atomic level."

Lots of current visualization software is focused on visualizing a single protein structure (for example, ChimeraX). New visualization and modeling systems are being developed to scale up to cellular scenes and even whole cells. For example, systems like Le Muzic et al.'s cellVIEW (2015) [1] are capable of rendering atomic-resolution whole-cell datasets like this one in realtime: https://ccsb.scripps.edu/gallery/mycoplasma_model/

[1]: https://www.cg.tuwien.ac.at/research/publications/2015/cellV...


I cited cellVIEW in a parallel comment. ;)

I still think "few" is the wrong word. I usually think of "few" as meaning up to around 6, while Chimera and VMD can easily handle hundreds of proteins at the atomic level.

