The first transistors were slow too, and this "GPT-3/4 calling itself" stuff is quite slow. GPT-3/4 as a direct chat is about as slow as I can tolerate. Once this gets sped up, though, things will change.
I am sure it will be sped up: you can scale out, scale up, write more efficient code, build more efficient architectures, and pick the right tool for the job for different parts of the process.
The problem now (with AutoGPT, for example) is that accuracy is bad, so you need human feedback and intervention, AND it is slow. Take away the slowness, or the need for human intervention, and this becomes very powerful.
I dream of the breakthrough "shitty old laptop is all you need" paper, where someone figures out how to do amazing things with 1 GB of space on a spinning disk, 1 GB of RAM, and a single CPU.