We have been trying to solve the same problem (and a bunch of other ones) with https://specific.dev as well. We’ve tried to stay away from Docker as much as we can though because of the still pretty bad experience on Mac.
Our approach is to have our CLI handle port assignments (and pass any connection details/ports along as env vars), which lets us spin up “isolated” copies of the local dev environment. It has the added benefit that we can deploy the same config straight to production and swap in production database connection strings and anything else needed.
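For what it's worth, the port-assignment half of that idea can be sketched in a few lines of Python. This is just an illustration of the general technique (bind to port 0 to get a free port, then hand it to the child process via env vars), assuming hypothetical variable names and CLI command, not Specific's actual internals:

```python
import os
import socket

def free_port() -> int:
    """Ask the OS for an unused TCP port by binding to port 0."""
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

# Each "isolated" copy of the dev environment gets its own ports,
# passed to the services purely through environment variables.
env = dict(os.environ)
env["POSTGRES_PORT"] = str(free_port())  # hypothetical var names
env["REDIS_PORT"] = str(free_port())

# The CLI would then launch the stack with that environment, e.g.:
# subprocess.run(["my-dev-cli", "up"], env=env)
```

Because the services only ever read their ports from the environment, the same config can be pointed at production by swapping in the production connection strings instead.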
We started with an approach like that, but I think our guiding principle has been that you shouldn't have to modify your docker-compose to get parallelized local development. We want to layer onto your existing setup, not make you rewrite your stack around us.
I haven't really had a bad experience with Docker on Mac. But is the idea that you basically build your service on top of specific.dev's provided services (Postgres and Redis), those run bare-metal locally, and then you can deploy to specific.dev's hosted solution?
Yes, exactly. We probably have two different focuses: we're more focused on providing the full environment to build productively with coding agents, from local dev all the way to prod. The key thing for us is that the agent can write code, build infrastructure, and test the entire system autonomously locally, and then deploying to production should be dead simple.
A bit of a different approach from the classic use case of docker-compose, which is often orthogonal to the production infrastructure in some sense.
One thing I've used to great success though is taking an existing project or example docker-compose and simply asking the coding agent to translate it to Specific's IaC. Works a treat, especially as the coding agent can read all the code at the same time and connect it all together.
I could definitely see that being useful for folks who are Docker-fearful or just less infra literate in general.
I think we're focused on the other end of the spectrum. Folks who like docker and have a good docker setup but want to have parallel runtimes. Anyway, best of luck!
> "We’ve tried to stay away from Docker as much as we can though because of the still pretty bad experience on Mac."
This seems to be a pretty common perspective, but isn't it mostly about Docker Desktop? Orbstack solved my complaints, and I'm genuinely curious if I'm missing something significant (which is def possible).
To put things into perspective: we have an integration test suite that takes:
- 30 minutes with Colima on Mac;
- 20 minutes with OrbStack on Mac;
- 13 minutes on a weaker CPU (Ryzen 5500U) on a native Linux laptop;
- 14 minutes on a Ryzen 5600X and a virtualized Debian inside Windows 10 WSL2.
Pretty stark differences. Granted, our test suite is mostly I/O bound, but that really tells you something about the VM overhead on a Mac and the lack of kernel-native containerization support on macOS.
OrbStack is definitely much better, but far from native speeds in my experience. Since we want all users to have a good experience, we also can't really point folks towards OrbStack as a "solution" to make the local dev experience great.
I think this was a common perspective from early Docker days with regard to local bind mounts (before Docker switched from VirtualBox to HyperKit on macOS). I do use OrbStack and have noticed faster build times with it, but I haven't really noticed any difference in runtime performance between OrbStack and Docker Desktop.
There are differences, but I think most people's code just doesn't expose macOS's suboptimal containerization performance. Check my sibling comment to yours. We have noticed very observable differences.
Until Apple adds kernel-level containerization support (likely never), this difference in performance will continue to exist.
That being said, OrbStack really is the best on macOS. Docker Desktop is only slightly slower but has a much worse UX. Colima I appreciate for its fully headless nature, but it's severely behind in performance, sadly.