
I can’t speak for the States, but in AU I clearly see a massive displacement of undergrad and junior roles (only in AI-exposed domains).

I say this both as someone who works with many execs and hears their musings, and as someone who can no longer justify hiring for junior roles myself.

Irrespective of that: if we only take action once the problem is visible to the layman, the scope of actions available to us will be invariably and significantly diminished.

Even if you are not convinced it is guaranteed, and do not believe what I and others see, I would ask: is your probability of it happening really that close to 0? If not, would it not be prudent to take the risk seriously?




> If not then would it not be prudent to take the risk seriously?

What does taking the risk seriously look like?


> What does taking the risk seriously look like?

Politics: proper guardrails, and adapting the legal framework to accommodate AI and ensure it doesn't benefit only a preselected few.

Something that can and should be done yesterday is to stop the capital drain out of the economy and into accelerated, war-motivated AI development. There's no need for war AI per se, but it's clearly the most likely reason for the capital drain and the rush.

Once the rush and the wars stop, and some capital is made available to the rest of the economy, the economy can adapt to the introduction of AI at a normal pace. That adaptation should include legislative safeguards to support competition and prevent monopolization of AI and of information sources.



