Not trying to defend App Store policies, but writing this for those struggling with Guideline 4.2 while trying to publish an app intended only for a small group of users. There is a less well-known option called "unlisted app distribution", similar to unlisted YouTube videos: the app is public and can be downloaded via a direct link, but it cannot be found in App Store search. The "small, or niche, set of users" guideline normally does not apply to such apps.
To request unlisted distribution for your app, send it for review as usual, then file a special form [1], and mention that in the review notes.
Source: I struggled with Guideline 4.2 when I tried to publish an app showing the bell schedule and other local information for the neighborhood school. Its audience is, indeed, not of Apple scale: the school parents living nearby. Apple rejected it under 4.2 and only agreed to publish it as unlisted, which I was okay with, since sharing the link among the parents was not a big deal. Google had no problem publishing the Android app normally, though.
> I struggled with Guideline 4.2 when I tried to publish an app showing the bell schedule and other local information for the neighborhood school.
Why not just make this a webpage, which users could add to their home screen as if it were an app? No Apple review necessary then. What does being an app give you besides bureaucratic headaches?
> 2. Those people and many more besides have no idea what "add it to home page" even means.
If Apple supported the beforeinstallprompt event (available in Chrome since 2015), people would have the same experience as installing an app [0]. Instead, you must create a wrapper around the webpage and submit it through the App Store.
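For context, the Chrome flow looks roughly like this. A sketch: the event interface is hand-written here (BeforeInstallPromptEvent is not in the default TS lib), and the button wiring is hypothetical.

```typescript
// Sketch of Chrome's install flow; the interface below mirrors the
// shape of BeforeInstallPromptEvent, which the default TS lib lacks.
interface InstallPromptEvent {
  preventDefault(): void;
  prompt(): Promise<void>;
  userChoice: Promise<{ outcome: "accepted" | "dismissed" }>;
}

let deferredPrompt: InstallPromptEvent | null = null;

// Stash the event so we can show our own "Install" button later,
// instead of the browser's mini-infobar.
function onBeforeInstallPrompt(e: InstallPromptEvent): void {
  e.preventDefault();
  deferredPrompt = e;
}

// Called from a hypothetical install button's click handler.
async function showInstallPrompt(): Promise<string | null> {
  if (!deferredPrompt) return null;
  await deferredPrompt.prompt();
  const { outcome } = await deferredPrompt.userChoice;
  deferredPrompt = null;
  return outcome; // "accepted" or "dismissed"
}

// Register only in a real browser context.
const w = (globalThis as any).window;
if (w) w.addEventListener("beforeinstallprompt", onBeforeInstallPrompt);
```

Safari fires no such event, so on iOS there is no equivalent of this one-tap install path.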
The main driver for making it into an app and not just a web page was the need to send push notifications. Of course, I just needed it for myself: hey, it's time to stop working and start driving to school to pick up the kid – "notify me 30 minutes before the last period ends" given that the schedule is different every day; then I just shared it with other parents.
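The notification math itself is trivial; a hypothetical sketch (the schedule data and names here are made up for illustration, not from the actual app):

```typescript
// Times are minutes since midnight; the schedule differs per weekday.
const lastPeriodEnd: Record<string, number> = {
  Mon: 15 * 60 + 10, // 15:10
  Tue: 14 * 60 + 30, // 14:30
  Wed: 13 * 60 + 55, // 13:55
  Thu: 15 * 60 + 10,
  Fri: 14 * 60,      // 14:00
};

// "Notify me 30 minutes before the last period ends."
function alertTime(day: string, leadMinutes = 30): string {
  const end = lastPeriodEnd[day];
  if (end === undefined) throw new Error(`no schedule for ${day}`);
  const t = end - leadMinutes;
  const hh = String(Math.floor(t / 60)).padStart(2, "0");
  const mm = String(t % 60).padStart(2, "0");
  return `${hh}:${mm}`;
}
```

The real app would feed the result into the platform's local-notification scheduler, which is exactly the part a plain webpage on iOS cannot do.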
There is a web version (it's Flutter so it was easy to make one), but parents use the app much more often.
Look how they hid the option to add a PWA to the home screen.
Instead of placing the option next to "Add to bookmarks" after you press "..." (especially when it recognizes the page as a PWA!), you have to choose "Share" (which makes no sense!), then scroll up (and it's not obvious you can even scroll) past "Add to bookmarks" (again), and finally pick the fifth option, "Add to Home Screen".
There's still a lot of jank. On iOS you can only do this with Safari, and even then you lose the actual Safari niceties like the traditional browser UI. And for some reason it opens links in actual Safari even if they point to the same app. Unless you have a single-page app that does nothing, this is not a viable route.
I live in the EU and just wrote my first PWA, and that's not true: there is (almost) NO browser UI/UX.
No URL bar, no back/forward, no tabs, no translation, no menu bar, no loading indicator. Just… pressing down on a link shows the target URL and offers open, copy link, add to reading list, and share, which honestly looks like a weird oversight.
I really hate it when the browser UI gets hidden. It costs me several great features, like the ability to open multiple tabs or to bookmark a subpage directly.
Unfortunately, some other features are only available to PWAs, so it's a tradeoff either way.
I also have a small private app that technically could have been a PWA.
It’s not a PWA because the UX is just always inferior. Even though we’ve come really far in browser UIs, the browser is still very clunky compared to the smoothness of a native app.
I assume you would use Oracle Cloud if, for whatever reason possibly related to legal or competition, you cannot use AWS, or GCP, or Azure. It's hard for me to imagine a startup that needs cloud and would onboard to Oracle Cloud and not to any of the top 3 providers instead.
I actually like Claude's Co-Authored-By: line very much. Even in my personal repositories, where I'm the sole author and the sole reader, I'd like to know whether an older commit I'm looking at was vibe coded, implying possibly lower quality or weird logical issues in the code.
So, my personal rule is: if I implemented a feature with Claude, I'll ask it to commit the code and it will add Co-Authored-By. If I made the change manually, I'll commit it myself.
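For reference, Co-Authored-By is a standard git commit trailer, so such a commit message looks roughly like this (the exact attribution line Claude Code appends may vary by version):

```
Implement schedule parser

Co-Authored-By: Claude <noreply@anthropic.com>
```

Because it's a trailer, tools like `git log --format='%(trailers)'` or `git interpret-trailers` can filter for it later.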
One thing I learned over the years is that the closer my setup is to the default one, the better. I tried switching to the latest and greatest replacements, such as ack or ripgrep for grep, or httpie for curl, only to always return to the default options. Often, the return was caused by the frustration of not having the new tools installed on the random server I sshed into. It's probably just me being unable to persevere in keeping my environment customized, and I'm happy to see these alternative tools evolve and work for other people.
This sort of thing is a constant tension and it's highly likely to be a different optimum for every individual, but it's also important not to ignore genuine improvements for the sake of comfort/familiarity.
I suspect, in general, age has a fair amount to do with it (I certainly notice it in myself) but either way I think it's worth evaluating new things every so often.
Something like rg in specific can be really tricky to evaluate because it does basically the same thing as the builtin grep, but sometimes just being faster crosses a threshold where you can use it in ways you couldn't previously.
E.g. some kind of find-as-you-type system: if it took 1 s per letter it would be genuinely unusable, but 50 ms might take it over the edge, so now it's an option. Stuff like that.
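To make the threshold concrete, here's a toy sketch (the function, the corpus, and the 50 ms budget are all illustrative): a per-keystroke search that bails once it blows the interactivity budget.

```typescript
// Toy find-as-you-type: run one keystroke's worth of searching against a
// wall-clock budget. If the search engine is slow (grep over a huge tree),
// we bail early and the feature degrades; if it's fast (ripgrep), the full
// result set fits comfortably inside the budget.
function searchAsYouType(lines: string[], query: string, budgetMs = 50): string[] {
  const start = Date.now();
  const hits: string[] = [];
  for (const line of lines) {
    if (line.includes(query)) hits.push(line);
    if (Date.now() - start > budgetMs) break; // over budget: stop, stay responsive
  }
  return hits;
}
```

The point is that raw speed is not just "the same result sooner": crossing the latency threshold makes a whole interaction style possible.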
I might be missing the point of the article, but from what I understand, the TL;DR is, "cover your code with tests", be it unit tests, functional tests, or mutants.
Each of these approaches is just fine and widely used, and none of them can be called "automated verification", which, if my understanding of the term is correct, is more about mathematical proof that the program works as expected.
The article mostly talks about automatic test generation.
That's actually one thing that always prevented me from following the standard pathway of "write a design document first, get it approved, then execute" during my years in Google.
I cannot write a realistic, non-hand-wavy design document without having a proof of concept working, because even if I try, I need to convince myself that this part and this part and that part will work, and the only way to do that is to write actual code. But then you pretty much have the code ready, so why bother writing a design doc?
Some of my best (in terms of perf consequences) design documents were either completely trivial from the code complexity point of view, so that I did not actually need to write the code to see the system working, or were written after I already had a quick and dirty implementation working.
That’s why I either started with the ports and adapters pattern or quickly refactored into it on spikes.
You don’t have to choose what flavor of DDD/Clean/… you want to drink, just use some method that keeps domains and use cases separate from implementation.
Just with shapes and domain level tests, the first pass on a spec is easier (at least for me) and I also found feedback was better.
I am sure there are other patterns that do the same, but the trick is to let the problem domain drive, not to choose any particular set of rules.
Keeping the core domain as a fixed point does that for me.
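A minimal sketch of that separation (all names here are illustrative, not tied to any particular DDD/Clean flavor):

```typescript
// Port: the interface the use case needs, defined on the domain side.
interface OrderRepo {
  total(orderId: string): number | undefined;
}

// Use case: pure domain logic; depends only on the port, not on any adapter,
// so it can be tested against an in-memory fake with no I/O at all.
function applyDiscount(repo: OrderRepo, orderId: string, pct: number): number | null {
  const t = repo.total(orderId);
  return t === undefined ? null : Math.round(t * (1 - pct / 100));
}

// Adapter: one concrete implementation at the edge; a SQL or HTTP-backed
// version could be swapped in later without touching the use case.
class InMemoryOrderRepo implements OrderRepo {
  constructor(private totals: Record<string, number>) {}
  total(id: string) { return this.totals[id]; }
}
```

Spiking against the in-memory adapter is what makes the first pass on a spec cheap: the domain code is already the real thing, and only the edges get rewritten.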
I am very similar in this respect, however once I get to a place where I am implementing something very similar to something in my past, it becomes easier to draft a doc first because I have been down that path before
The article has so many "it's this, not that" contradictions – I counted seven! – that I seriously suspect it was written with a lot of assistance from LLMs.
One thing not mentioned in the article: now that many software engineers are back in their offices, we get the regular fall/spring viral infections spreading between employees who feel obliged to come in even with mild cold symptoms. If RTO is about productivity, I wonder if anyone has accounted for the productivity drop caused by viruses in the workplace.
I'm an ex-FAANG engineer working for a smaller (but still big enough) company.
At work we use one of the less popular solutions, available both as a plugin for vscode and as a claude code-like terminal tool. The code I work on is mostly Golang, plus some older C++ using a lot of custom libraries. For Golang, the AI is doing pretty well, especially on simple tasks like implementing some REST API, so I would estimate the upper bound of the productivity gain at maybe 3x for trivial code.
Since I'm still responsible for the result, I cannot just YOLO and commit the code, so whenever I work on simple things, I'm effectively a code reviewer most of the time. That is probably what prevents me from going above 3x productivity: after each code review session I still need a break, so I go get coffee or something. It's still much faster than writing all the code manually, but the mental load is also higher, which requires more breaks.
One nontrivial consequence is that the expectations are adapting to the new performance, so it's not like we are getting more free time because we are producing the code faster. Not at all.
For the C++ codebase, though, in the rare cases when I need to change something there, it's pretty much business as usual: I don't trust the code it generates and would rather write what I need manually.
Now, for personal projects, it's a completely different story. For the past few months or so, I haven't written any code for my personal projects manually, except for maybe a few trivial changes. I don't review the generated code either, just making sure that it works as I expect. Since I'm probably too lazy to configure the proper multi-agent workflow, what I found works great for me is: first ask Claude for the plan, then copy-paste the plan to Codex, get its feedback back to Claude, repeat until they agree; this process also helps me stay in the loop. Then, when Claude implements the plan and makes a commit, I copy-paste the commit sha to Codex and ask it to review, and it very often finds real issues that I probably would've missed.
It's hard to estimate the productivity gain of this new process mostly because the majority of the projects I worked on these past few months I would've never started without Claude. But for those I would've started, I think I'm somewhere near 4-5x compared to manually writing the code.
One important point here is that, both at work and at home, it's never a "single prompt" result. I think about the high level design and have an understanding of how things will work before I start talking to the agent. I don't think the current state of technology allows developing things in one shot, and I'm not sure this will change soon.
My overall attitude towards AI code generation is quite positive so far: I think, for me, the joy of having something working so soon, and the fact that it follows my design, outweighs the fact that I did not actually write the code.
One very real consequence is that I miss writing code by hand. I started going through the older Advent of Code years where I still have some unsolved days, and even solving some Leetcode problems (only interesting ones!) just for the feeling of writing code the way we all did before.
[1]: https://developer.apple.com/support/unlisted-app-distributio...