I think the beauty of the human experience is that all you need to do to learn is practice. You automatically improve at what you're doing. The kinds of skills that atrophy when you use AI are skills that AI can already automate, and nobody is going to pay you to do slowly what a machine can do quickly and cheaply.
When you deploy AI to build something, you wind up doing the work that the AI itself can't do: holding large amounts of context, maintaining a vision, writing APIs and defining interfaces. Alongside, like, project management: how much time is spent on features vs refactoring vs testing.
Software engineering is really information-processing engineering. Code is just the shovel with which we dig the trench. Everything is about data, and we build the things that control and process that data based on various events, according to some wants and needs.
LLMs are more like a trench digger with a cat's personality. It can help in some cases, but it's just as likely to destroy the field. And good luck if you have some difficult terrain to pass through.
I really recommend you try LLMs again if you haven’t, the last part is really becoming less and less true every day. But I 100% agree that this does not pose a risk to the software developer for all the other points you mentioned. It will just make that trench digger more useful to the developer that needs it. And they’ll still need someone to drive the digger.
An LLM is useful just like StackOverflow, Wikipedia, web search engines, manuals, … are. But automated (which is a pro) and with a hallucination problem (which is a core problem).
The others may also contain wrong information, but the risks are lower, and because they're not automated the risks don't compound.
I personally believe we need more trustworthy sources of information rather than automated ways to transform it. Especially for the low-hanging fruit of coding, which still requires pre-solving the problem and puts us back at the real reason to have a developer.
And one thing people seem to forget is the wealth of pre-LLM tools to speed up coding. No one uses Notepad (from Windows 7) to write code, which is what they keep brandishing as the alternative to their agents and whatnot.
The hallucination problem has dropped dramatically in code generation recently. I can't even recall a time any of the modern models I've used have done it in my recent usage. It'll still happen with cheap/fast models and in places outside of code generation, but the good models write frankly incredible code, especially if you set them up with feedback loops.
The bikeshedding is coming from in the room. The point is that the feature didn't cause any regression in capability. And who tf wants a plugin system with only support for first party plugins?
That's a fair point. If you want to calculate the real total water usage of any person, you must first invent the universe. You have to cut it off somewhere.
See, I have no problem with searches that involve warrants and probable cause. They could already violate the shit out of your privacy with a warrant. That's kind of the point of a warrant.
How is the Kubernetes Secret API lock-in? Genuinely wondering - were you trying to use that deployment YAML for something other than a Kubernetes deployment? For most applications, you should be mounting the secret in your application; then you can inject it as either an environment variable or a JSON file that your application reads in an environment-agnostic way.
Then, on the backend, you can configure etcd to use whatever KMS provider you like for encryption.
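To make the "environment-agnostic" part concrete, here's a minimal sketch of how an application can read a secret the same way in-cluster and on a dev machine. The function name, env var, and file path are all hypothetical, not from the thread: in Kubernetes the kubelet injects the env var or mounts the file, and locally you just export the var yourself.

```python
import json
import os

def load_db_password(env_var="DB_PASSWORD", secret_file="/etc/secrets/db.json"):
    """Read a secret from an env var first, falling back to a mounted JSON file.

    The application never talks to the Kubernetes API; it only sees a
    plain env var or a plain file, wherever it happens to be running.
    """
    value = os.environ.get(env_var)
    if value is not None:
        return value
    with open(secret_file) as f:
        return json.load(f)["password"]
```

On a dev machine you'd run `DB_PASSWORD=dev-password python app.py`; in the cluster the same variable is populated from a Secret in the pod spec.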
Because you can't run the container outside Kubernetes, even for development.
Yes, you can mount Secrets as volumes or env vars in Kubernetes, which is fine, but I'm not talking about "how you get the env var/secret" but "methods of dealing with config."
Yes you can? The container should be completely agnostic to the fact that it's running in Kubernetes. You can do config the same way. ConfigMaps are mounted as regular files and environment variables; the application doesn't care if the ConfigMap came from the cluster resource or a file you created on your dev machine with dev credentials. You can mount local files into the container yourself. It's docker run -v "source:destination", I think.
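For local dev, the same idea can be sketched as a Compose file instead of a raw docker run - a hedged example with illustrative file names, paths, and env vars (nothing here is from the thread):

```yaml
# docker-compose.yml for local dev: bind-mount a local config file at the
# same path the ConfigMap would be mounted at in the cluster, and set the
# same env var the Secret would populate.
services:
  app:
    image: my-app:dev            # hypothetical image name
    volumes:
      - ./dev-config/app.yaml:/etc/config/app.yaml:ro
    environment:
      DB_PASSWORD: dev-password  # dev credential, not a real secret
```

The application sees identical paths and variables in both environments, so it stays Kubernetes-agnostic.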
One of you is talking about mapping a secret to an environment variable and the other one of you is talking about having the work load make an API call to retrieve the secret. You’re not even talking about the same thing.
The k8s api server is the thing that's configured to talk to your Thales or whatever. On managed kubernetes, these are usually preconfigured to talk to the vendor -- that's the difference between a secret and a config map. The secret is encrypted when it's stored in etcd.
You'd be forgiven for being mistaken, however, because this encryption is handled in a way that's transparent to the application.
If you're talking about your application making a call to the k8s API server, then you shouldn't do that unless you're developing a plugin. The kubelet knows how to retrieve and mount secrets from the k8s API server and expose them as environment variables to the application. You just declare it as part of your deployment in the pod spec.
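A minimal sketch of that pod-spec declaration - the app, image, and Secret names are hypothetical:

```yaml
# Illustrative Deployment fragment: the kubelet resolves the Secret and
# exposes it as a plain env var; the app never calls the API server.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials   # hypothetical Secret name
                  key: password
```

Outside the cluster, you set `DB_PASSWORD` yourself and the container behaves identically.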
sigh I'm an extremely competent Ops type and I know. If you mount Secrets as a volume or env var, that's a config file or env var from the application's PoV. We are looking at this from the application's PoV.
I've seen applications that make direct calls to the Kubernetes API and retrieve the secret from it. So they have a custom role with bindings, a service account, and Kubernetes client libraries.
If you're not developing k8s operators but you're calling the API server directly and then complaining about lock-in, that's a skill issue. If you are developing k8s operators, then you should use a tool like kind for integration tests and dependency injection for other stuff, and the concept of lock-in doesn't make sense. You can also deploy your Helm chart directly to kind.
This is where I like things like Tilt. If you're deploying to a k8s cluster, it's probably a good idea to do local dev in as close to a similar environment as possible.
Bit more of an initial hurdle than "just run the docker image", though.
I've looked at Tilt, and it's another abstraction over Kubernetes, which rarely ends well at scale.
However, most of the time devs don't need to develop on Kubernetes, since it's just a container runtime and networking layer they don't care about. They run a container, they connect to an HTTP endpoint to talk to other containers, they're happy. The details are left to us Ops people.
It seems contradictory to say that Tilt is an abstraction over kubernetes and say that won't work at scale, but then volunteer ops to be a layer of abstraction over kubernetes as a solution.
FWIW, Skaffold.dev is similar to Tilt, and it has been working out great. "skaffold dev" on the CLI, or the corresponding button in the user's IDE, starts up a local kube cluster (minikube), the dev code in a container, and any other configured containers, optionally opening a port to attach a debugger. It then watches the code for changes and restarts the container with the dev code when there are changes. Developers aren't beholden to the capacity of whoever's on call on the ops team to manage the containers they need to be productive. The details of pods and CRDs and Helm and ArgoCD and kubectl are abstracted away for them. They just run "skaffold dev" and edit code. Ops gets to run "skaffold render" to build and tag images and generate the corresponding Kubernetes manifests.
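For flavor, a minimal sketch of what drives that workflow - a hedged skaffold.yaml with hypothetical image and manifest names, and a schema version that may differ from what current Skaffold releases expect:

```yaml
# skaffold.yaml sketch: "skaffold dev" builds this image, deploys the
# manifests to the local cluster, and redeploys on code changes.
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: my-app            # hypothetical image name
      docker:
        dockerfile: Dockerfile
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml             # hypothetical manifest location
```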
Ops is not a layer of abstraction over Kubernetes any more than Dev is a layer of abstraction over Python. We both have different responsibilities, and thinking that Ops is just one more missing library is why it goes so wrong.
Kubernetes is a massive beast, and I get it. It feels extremely overcomplicated for "please, for the love of all that's holy, just run this container." However, trying to abstract away such complexity is like trying to use Golang through some Python-to-Golang cross compiler. It works until you need some feature, and then, oh god, all hell breaks loose.
I have not played with Skaffold either, but I will say: skaffold render should not be Ops' job. I find it goes best when devs present an artifact they believe is ready for production and I can slot it into the system. Otherwise, the friction of devs handing Ops what they think is a possibly buildable artifact quickly becomes untenable.
OTOH, are all of the browsers supposed to move in lockstep? Is Chrome supposed to wait for everyone else's approval before launching any kind of feature?
That is literally how a standard is supposed to work: arrive at consensus and have two independent implementations before it can be claimed to be a standard. Or at the very least, arrive at an API shape and hammer out the obvious problems before shipping.
Chrome literally doesn't even bother pretending that many of their proposals are more than some scribbles in a spec-adjacent format. E.g., a spec for WebHID that other browsers could implement was just dumped into the repo after Chrome shipped it.
Constructable Stylesheets had both a badly named API and a trivially triggered race condition. It shipped in Chrome in the middle of the discussion because the Google-developed lit "needed" it.
But is every feature in a browser supposed to be standardized? Like, is it against the rules somehow to develop features without asking permission from Apple and Mozilla?
It's not against the rules, but it is hostile to the web. Forking the web because a company is big enough to do so may sound just dandy to you, but to the rest of us who have spent decades working on interoperability it's a big middle finger.
Apple doesn’t have a veto. If two independent implementations are required for something to become a web standard, all Google have to do is convince anybody outside of Google to implement their specs, such as Mozilla – who Google pay billions of dollars to.
The problem with all of these new specifications is that Google can’t convince anybody to do this, no matter how much money they throw at them. That’s not an Apple veto stopping these things from becoming standards, that’s Google pushing shitty specs.
I wouldn't say stuff like Manifest V2 is "new features". A lot of what Chrome is pushing is just to support its commercial interests.
We've kinda come full circle. Web standards were made to prevent what happened when Internet Explorer ruled the world, but now a corporation has near-monopoly browser share and is driving the web standards itself.
Manifest v3 has the same privacy guarantees as Safari, and it broke a lot of people's brains. Even if we assume it's a secret way to neuter ad blockers, even though they are fine, it does not imply we have IE or anything close. I'm kinda happy people take positions like these, because they keep companies honest, but it's completely irrational.
And one of the browsers is maintained by an OS vendor that benefits from the lock-in that comes from native apps and rent seeking from their app store. I'm sure they would love to control the pace of browser innovation by just deciding not to implement certain features.