
> Saying there is no opt-out is just false

I can't see where one can opt out of this new behavior and into the existing behavior, only a description of the new behavior's bypass (which is not the same thing at all).

> easy to bypass the cooling-off period with ADB

I don't think this is a reasonable use of the term "easy". I should be able to give my non-technical friend an apk and they can use it right then, with the one "are you very sure" screen.


> I should be able to give my non-technical friend an apk and they can use it right then

Unfortunately that is the same vector that scammers use to drain people's bank accounts


Such is the cost of computing freedom. This line of thinking is analogous to surveillance justifications in meatspace.

The concepts don't need to be at odds with each other.

But also, I don't think that "computing freedom" means you get to use other people's computers without consent. Let's be clear here: Google's requirement for ID only applies to apps distributed from their computer. Presuming that you do actually respect computing freedom, I'd guess you'd support them in this.

I think a good compromise is that they could permit you to sideload. Which they are doing.

But also, if you are very concerned about computing freedom you can also vote with your wallet when you purchase a device.


> But also, I don't think that "computing freedom" means you get to use other people's computers without consent.

Who said anything like that? This is about being able to install software on your own device.


> I don't think that "computing freedom" means you get to use other people's computers without consent

Consent from whom? Consent is already required, so why are you discussing this as though consent is not required? Why are you stating it as if people are using others' computers without consent? Right now when I sideload an APK on _my device_, I have to explicitly consent to allowing it to install. And I do not require the author of that APK to have made any deals/interactions with Google. What you mean is Google's consent, or a debugger's consent, or my consent tomorrow.

So I, as the user, will no longer be able to provide consent alone. I wish that you were right and it was just "no running without consent", but that is today's behavior, and that is being altered.

> I think a good compromise is that they could permit you to sideload. Which they are doing.

They always have, and that was a good compromise. They've now decided you can't sideload until tomorrow unless you break out debugging tools or require the author make special deals with a specific vendor. What exists today is a good compromise, the change is not.

I expect the same from my desktop and mobile devices here.


You mentioned surveillance -- I presumed you were talking about the ID requirement. This only applies if you're using Google's computers to push out your app.

If you sideload... what "surveillance" are you talking about?

> They've now decided you can't sideload until tomorrow

A single 24-hour waiting period, only the first time. Or just use ADB. The point is to prevent false-urgency scams. Honestly even this seems to me to be pretty weak.

Can you think of a single better option that has any efficacy at all?


If they're not surveilling what apps are being sideloaded, why is the bypass managed by Google Play services? There are at least two better options:

- An option to not install the update that would fuck with my device
- An option to use the OS layer instead of Google Play services for this fuckery

I disable Google Play services the moment I get my hands on a new phone.

Because they're syncing the override flag to your next device so you don't have to do the waiting window ever again after the first time

>Unfortunately that is the same vector that scammers use to drain people's bank accounts

Is the solution really that no one can use a computer without special permission and inspection of government issued identification? If we wouldn't tolerate this with our desktop/laptop OS, why is it suddenly okay for our mobile computing platforms?

If Microsoft required this to run software in Windows, there would be riots.


> Is the solution really that no one can use a computer without special permission and inspection of government issued identification?

No, that is neither the only solution nor is it the one proposed here by Google.


Only physical practicalities will prevent this thinking from being applied thoroughly: we can't have guardians preventing people from being scammed face to face. But having to identify yourself on a desktop computer and only being allowed to install software vetted by Microsoft and a bunch of governments is readily on the books for the kind of thinking that produces these suggestions.

That's where it inevitably leads. If people can't be allowed to be responsible for X, next they can't be allowed to be responsible for Y, then Z -- all for their own sake. Google taking some mythical "responsibility" on behalf of their users means the users are left powerless, and that is something Google wants more than just being a "good guy" who protects people from conmen.

It's not like people simply couldn't just limit themselves to installing apps from Google Play already, without these "guardrails". Android currently does make it clear that installing unknown apks from an external source is risky and shouldn't be done unless you really, really know what you're doing. No further technical solutions are required for the problem. You can't fix stupidity with technical means.


If someone is dumb enough to ignore a very explicit warning message, that's their problem. We also don't restrict the sales of kitchen knives just because some people inevitably are going to be dumb enough to hurt themselves with a knife. If they hurt themselves that's their problem, not the problem of more intelligent people.

I will say, an underrated use case for even small, local LLMs is making command line tools drastically more accessible to laypeople

I now know zero people who I think shouldn't use Linux, and the people I know run quite a gamut of technical know-how compared to most other technical folks' acquaintances.


Having an LLM directly and autonomously drive command line tools outside of a strict sandbox sounds like a ticking time bomb.

Thinking tokens: "The files I'm trying to read are missing, I need to figure out why. I see the problem, I accidentally ran rm -rf /home/user. Let me run git restore. No that didn't work. Let me try git reset --hard origin/HEAD. That still didn't work. I should inform the user."

Output: "I was unable to complete the task you requested. Restore /home/user and I will try again"


I tend to set people up with a chat interface, which is pretty good for asking for commands or scripts that the user then copies into their terminal. Most people I've gotten to try Linux do pretty well with just a wiki, but once they run into something they want to do that's kind of idiosyncratic, they tend to ask me for help.

While I think running models that have access to a shell is dangerous and should be handled carefully, the fact that they've been trained for this use case generally means they're pretty good at shell commands and can give you one a decent chunk of the time. I'm never willing to inject an external dependency controlled by a company into people's computing needs unless they specifically ask for it, so this is usually a lightweight local model specialized in tool use, but not given shell access.

This isn't much different from how they'd use a search engine for this purpose these days, but running locally it can be more fault-tolerant to issues that affect their internet access, as well as offering better privacy guarantees, albeit obviously being a little less capable.

ADB is not the only option. Do the 24h wait and then the experience will not be much different from what already happens today: https://imgur.com/a/Z9hoYIh

Doing a 24h wait _is_ much different from what happens today. That's the whole point. If my two options to run an application of my choosing are to use ADB to flip a switch or to wait a day, that is ridiculous.

I am only slightly comforted by the fact that desktop computing had set (some) self-ownership precedent before the current restrictive computing hegemony took control, though even that is eroding.


Wait until you find out about games consoles

The way you give your non-technical friends an APK and they just install it is by you signing it.

I should not have to enter into a business relationship with google just to hand my non-technical friend an APK any more than I have to enter into a business relationship with the Linux Foundation to hand my friend an AppImage.

And then having Google approve it, so hopefully your app does not do anything that Google does not like, such as block ads.

But I want to let someone MITM my non-technical friend and replace my APK with malware.

> I can't see where one can opt-out of this new behavior and into the existing behavior, only a description of the new behavior's bypass (which is not the same thing at all)

I don't understand this; the ability to bypass new behavior in settings menus is basically the definition of a new feature having an opt-out. Can you elaborate?


> If you dont have analytics you are flying blind

More like flying based on your knowledge as a pilot and not by the whims of your passengers.

For many CLIs and developer tooling, principled decisions need to reign. Accepting the unquantifiability of usage in a principled product is often difficult for those who are not the target demographic, but for developer tools specifically (be they programming languages, CLIs, APIs, SDKs, etc.), cohesion and common sense are usually enough. It also seems really hard for product teams to accept the value of the status quo with these existing, heavily used tools.


Actually it's more like flying in the clouds with no instruments which can lead to spatial disorientation when you exit the cloud cover and realize you're nosediving towards the earth. https://en.wikipedia.org/wiki/Spatial_disorientation

Flying based on the whims of your passengers would be user testing/interviewing, which is a complementary, and IMO necessary, strategy alongside analytics.


> You can try the technical preview today by running npx cf. Or you can install it globally by running npm install -g cf.

A couple of obvious questions: is it open source (the npmjs site doesn't point to a repo)? And in general, will it be available as a single binary instead of requiring Node.js tooling to install/use? If so, using the recently acquired Bun or another product/approach?


I can't find any repository, either, but the package is listed as MIT-licensed and includes source maps, so I assume it will be published soon.


I suppose you could probably legally justify claude-code-ing the package from the source maps by the license if they don't...


It is also in that book, pages 36/37, with a transcription and a minor note on issues with ISS toilets in 2008.


Duralade - a programming language for durable execution (but with many neat aspects)

Most of the work as of today is in a branch; you can see the language spec at https://github.com/cretz/duralade/blob/initial-runtime/docs/..., and some samples at https://github.com/cretz/duralade/tree/initial-runtime/sampl....

May not amount to anything, but the ideas/concepts of this durable language are quite nice.


I'm curious what advantages this has over adding durability to an existing language, like DBOS does:

https://github.com/dbos-inc/dbos-demo-apps/blob/main/python/...


Modern languages are not safe enough, nor are they very amenable to versioning, serialization, resumption, etc. It makes sense for modern durable execution engines to meet developers where they are (I wrote multiple of the SDKs at Temporal, including the Python one; this is just a fun toy side project), but a purpose-built language that has serialization, patching, wait conditions, kwargs everywhere, externalized side effects, etc, etc, etc is a big win vs something like Python.

Admittedly the lang spec doesn't do a great job on the justification side, but the engine spec adjacent to it at https://github.com/cretz/duralade/blob/initial-runtime/docs/..., which has sections showing CLI/API commands, can help make it clearer where this runtime is unique.
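To make the "patching" item above concrete, here's a minimal Python sketch of the idea (inspired by, but not identical to, Temporal's patch API; all names here are invented for illustration): executions that started on old code replay the old path, while fresh executions take and record the new one, so code can change without breaking in-flight histories.

```python
class PatchContext:
    """Hypothetical patch gate for versioning durable workflow code."""

    def __init__(self, recorded_patches=None):
        # Replayed runs pass in the patches that were recorded when
        # the workflow execution originally ran; fresh runs pass None.
        self.recorded = set(recorded_patches) if recorded_patches is not None else None
        self.applied = set()

    def patched(self, patch_id: str) -> bool:
        if self.recorded is not None:
            # Replaying: only take the new code path if this execution
            # originally recorded the patch.
            return patch_id in self.recorded
        # First execution: new code paths are always taken and recorded.
        self.applied.add(patch_id)
        return True
```

A workflow would then branch with `if ctx.patched("use-new-fee-calc"): ...` so old histories keep replaying deterministically against the old branch.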


Fascinating, thanks for the info!


> People who are saying they're not seeing productivity boost, can you please share where is it failing?

At review time.

There are simply too many software industries that can't delegate both authorship _and_ review to non-humans because the maintenance/use of such software, especially in libraries and backwards-compat-concerning environments, cannot justify an "ends justifies the means" approach (yet).


I'm of the mind that it will be better to construct more strict/structured languages for AI use than to reuse existing ones.

My reasoning is: 1) AIs can comprehend specs easily, especially if simple; 2) it is only valuable to "meet developers where they are" if you really need the developers' history/experience, which I'd argue LLMs don't need as much (or only need because the language is so flexible/loose); and 3) languages for humans were developed to provide extreme human subjectivity, which is way too much wiggle room/flexibility (and is why people have to keep writing projects like these to reduce it).

We should be writing languages that are super-strict by default (e.g. down to the literal ordering/alphabetizing of constructs and exact spacing expectations) and only have opt-in loose modes for humans and tooling to format. I admit I am toying with such a lang myself, but in general we can ask more of AI code generations than we can of ourselves.
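As a rough illustration of "strict by default, down to ordering and spacing," here is a hypothetical Python sketch of a checker for a toy surface syntax (the rules and the `check_strict_form` name are invented, not from any real language): top-level definitions must be alphabetized and indentation must come in exact two-space steps.

```python
def check_strict_form(source: str) -> list[str]:
    """Return violations of a toy 'canonical form' for a strict language."""
    errors = []
    names = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        stripped = line.lstrip(" ")
        indent = len(line) - len(stripped)
        # Rule 1: indentation must be an exact multiple of two spaces.
        if indent % 2 != 0:
            errors.append(f"line {lineno}: indentation must be a multiple of 2 spaces")
        # Rule 2: collect top-level definition names for ordering check.
        if indent == 0 and stripped.startswith("def "):
            names.append(stripped[4:].split("(")[0])
    if names != sorted(names):
        errors.append("top-level definitions must be alphabetized")
    return errors
```

A model emitting code for such a language has exactly one acceptable rendering of any program, which makes generation easy to validate mechanically.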


I think the hard part about that is you first have to train the model on a BUTT TON of that new language, because that's the only way they "learn" anything. They already know a lot of Python, so telling them to write restricted and sandboxed Python ("you can only call _these_ functions") is a lot easier.

But I'd be interested to see what you come up with.


> that's the only way they "learn" anything

I think skills and other things have shown that a good bit of learning can be done on-demand, assuming good programming fundamentals and no surprise behavior. But agreed, having a large corpus at training time is important.

I have seen, given a solid lang spec to a never-before-seen lang, modern models can do a great job of writing code in it. I've done no research on ability to leverage large stdlib/ecosystem this way though.

> But I'd be interested to see what you come up with.

Under active dev at https://github.com/cretz/duralade, super POC level atm (work continues in a branch)


> you first have to train the model on a BUTT TON of that new language

Tokenization joke?


> The thing is, if you want people to understand durability but you also hide it from them, it will actually be much more complicated to understand and work with a framework.

> The real golden ticket I think is to make readable intuitive abstractions around durability, not hide it behind normal-looking code.

It's a tradeoff. People tend to want to use languages they are familiar with, even at the cost of being constrained within them. A naive DSL would not be expressive enough for the Turing completeness one needs, so effectively you'd need a new language/runtime. It's far easier to constrain an existing language than to write a new one, of course.

Some languages/runtimes are easier to apply durable/deterministic constraints to (e.g. WASM, which is deterministic by design, and JS, which has a tiny stdlib that just needs a few things like time and rand replaced), but they still don't take the ideal step you mention: putting the durable primitives and their benefits/constraints in front of the dev clearly.


This still assumes an all-encompassing transparent durability layer; what I'm arguing for is the opposite: something that can just be a library in any language and any runtime, because it does not try to be clever about injecting durability into otherwise idiomatic code.


> that your entire workflow still needs to be idempotent

If you just mean the workflow logic: as the article mentions, it has to be deterministic, which implies idempotency, but that is fine because workflow logic doesn't have side effects. The side-effecting functions invoked from a workflow (what Temporal dubs "activities") of course _should_ be idempotent so they can be retried upon failure, as is the case for all retryable code, but this is not a requirement. These side-effecting functions can be configured at the call site to have at-most-once semantics.

In addition to many other things like observability, the value of durable execution is persisting advanced logic like loops, try/catch, concurrent async ops, and sleeping, and making all of it crash-proof (i.e. it resumes from where it left off on another machine).
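To make "resumes from where it left off" concrete, here is a minimal, hypothetical Python sketch of the replay idea behind durable execution (not Temporal's actual implementation; `run_workflow` and the crash simulation are invented for illustration): activity results are persisted to a history, and after a crash the workflow function re-executes from the top with completed activities replayed from that history rather than re-run.

```python
class WorkflowCrashed(Exception):
    pass

def run_workflow(history, activity_impl, crash_after=None):
    """Re-executes the workflow; `history` survives across runs."""
    step = 0

    def activity(*args):
        nonlocal step
        if step < len(history):
            result = history[step]       # already completed: replay result
        else:
            if crash_after is not None and step >= crash_after:
                raise WorkflowCrashed()  # simulate a crash mid-workflow
            result = activity_impl(*args)
            history.append(result)       # persist before moving on
        step += 1
        return result

    total = 0
    for i in range(5):                   # an ordinary loop, made crash-proof
        total += activity(i)
    return total
```

A crashed run can be retried with the same history and the loop picks up at the first un-persisted step, so each side effect executes exactly once.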


> The author's point about the friction from explicit step wrappers is fair, as we don't use bytecode generation today, but we're actively exploring it to improve DX.

There is value in such a wrapper/call at invocation time instead of using the proxy pattern. Specifically, it makes it very clear to both the code author and the code reader that this is not a normal method invocation. This is important because it is very common to perform normal method invocations, and the caller needs to author code knowing the difference. Java developers, perhaps more than most, likely prefer such invocation explicitness over a JVM agent doing bytecode manipulation.

There is also another reason for preferring a wrapped-like approach: providing options. If you need to provide options (say, timeout info) from the call site, that is hard to do if your call is limited to the signature of the implementation; the options would have to be provided in a different place.
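To make the distinction concrete, here's a rough Python sketch of the explicit-wrapper style being described (names like `execute_step` and `StepOptions` are invented, not any real SDK's API): the caller invokes the step through a wrapper that visibly differs from a plain call and carries per-call options such as retries and timeouts.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StepOptions:
    timeout_seconds: float = 30.0
    max_retries: int = 3

def execute_step(fn: Callable, *args, options: StepOptions = StepOptions()):
    """Explicit invocation: readers can see this is not a normal call,
    and the options travel with the call site, not the signature."""
    last_error = None
    for _ in range(options.max_retries):
        try:
            return fn(*args)
        except Exception as e:   # retry on failure, as a real engine would
            last_error = e       # (timeout enforcement omitted in this sketch)
    raise last_error
```

Compare `execute_step(charge, 42, options=StepOptions(timeout_seconds=5))` against a transparent proxy where `charge(42)` looks like an ordinary method call and the options must live elsewhere (an annotation, a registry, etc.).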


I'm still swinging back and forth which approach I ultimately prefer.

As stated in the post, I like how the proxy approach largely avoids any API dependency. I'd also argue that Java developers actually are very familiar with this kind of implicit enrichment of behaviors and execution semantics (e.g. transaction management is weaved into applications that way in Spring or Quarkus applications).

But there's also limits to this in regards to flexibility. For example, if you wanted to delay a method for a dynamically determined period of time, rather than for a fixed time, the annotation-based approach would fall short.


At Temporal, for Java we did a hybrid approach of what you have. Specifically, we do the java.lang.reflect.Proxy approach, but the user has to make a call instantiating it from the implementation. This allows users to provide those options at proxy creation time and not require they configure a build step. I can't speak for all JVM people, but I get nervous if I have to use a library that requires an agent or annotation processor.

Also, since Temporal activity invocations are (often) remote, many times a user may only have the definition/contract of the "step" (aka activity in Temporal parlance) without a body. Finally, many times users _start_ the "step", not just _execute_ it, which means it needs to return a promise/future/task. Sure this can be wrapped in a suspended virtual thread, but it makes reasoning about things like cancellation harder, and from a client-not-workflow POV, it makes it harder to reattach to an invocation in a type-safe way to, say, wait for the result of something started elsewhere.

We did the same proxying approach for TypeScript, but we saw as we got to Python, .NET, and Ruby that being able to _reference_ a "step" while also providing options and having many overloads/approaches of invoking that step has benefits.

