I used to use DO, but switched away after they decided to disconnect my droplet for three hours when it got DDoS'd. It didn't matter that my node could handle the traffic. I was only using it for a Mumble VoIP server and an IRC bouncer, so it's not like some business went offline and cost me money, but it was still frustrating, and enough to convince me that should I ever need to run an actual business, I definitely won't use DO for it.
This is great to see. I love DigitalOcean and they've really stepped up their game wrt. product offerings.
But I was surprised that DO beat AWS EC2 in most, though not all, of the tests. Their performance is impressive considering that they're not at the same scale as AWS, Azure, or GCP.
EC2 (EBS in particular) has always had lackluster performance in my experience compared to the alternatives. To be honest, though, relative performance has never been a factor, or even a consideration, at most of the places I've worked.
I'm not saying that to minimize the issue, either; it's just that enterprise users/management simply don't care.
I did a similar set of benchmarks several years ago, with a bit more focus on storage performance; I even included the results in a presentation at LISA. The most striking thing at the time was not so much the averages as the variability. IIRC, Amazon was particularly bad in that regard and Vultr particularly good (so kudos to them), but DigitalOcean's advantage in raw performance was big enough that it still won out. It looks like not much has changed.
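As a rough illustration (the numbers below are invented, not from my LISA data), comparing the spread relative to the mean is a simple way to surface that kind of variability:

```python
import statistics

# Invented example numbers: IOPS from repeated runs of the same 4k
# random-read test on two hypothetical providers.
runs = {
    "provider_a": [9800, 10100, 9900, 10050, 9950],
    "provider_b": [12000, 4500, 11800, 6200, 11900],
}

for name, iops in runs.items():
    mean = statistics.mean(iops)
    cv = statistics.stdev(iops) / mean  # coefficient of variation: stddev / mean
    print(f"{name}: mean={mean:.0f} IOPS, CV={cv:.1%}")
```

Two providers can have similar averages while one is far less predictable, which is exactly what a single headline number hides.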
I think the AWS failures on the IOPS tests should have been examined more before publication, or at least explained more to the reader.
AWS General Purpose (gp2) EBS volumes scale with volume size, so a naive test on a default AMI's 8GB root volume can drop all the way to baseline (3 IOPS per GB, with a documented floor of 100 IOPS) once it exhausts its burst credits. I think it's unfair to compare apples to oranges here, as you can make these volumes scale to absurd numbers if you have the cash.
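For reference, here's a quick sketch of the gp2 baseline math as I understand it from the AWS docs (the IOPS ceiling has changed over the years, so treat the constants as approximate):

```python
def gp2_baseline_iops(size_gib: int) -> int:
    # gp2 baseline: 3 IOPS per GiB, with a documented floor of 100 IOPS
    # and a ceiling of 16,000 (the ceiling was lower in earlier years).
    return min(max(100, 3 * size_gib), 16_000)

def gp2_can_burst(size_gib: int) -> bool:
    # Volumes with a baseline below 3,000 IOPS can burst to 3,000 by
    # drawing down an I/O credit bucket; once it's empty, they fall
    # back to baseline.
    return gp2_baseline_iops(size_gib) < 3_000

print(gp2_baseline_iops(8))      # 100  -- a default 8 GiB root volume
print(gp2_can_burst(8))          # True -- fast until the credits run out
print(gp2_baseline_iops(1_024))  # 3072 -- at ~1 TiB bursting stops mattering
```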
Agreed. To be accurate you need a 1TB EBS volume (the smallest size at which baseline IOPS exceeds the burst limit) plus an EBS-optimized instance with enhanced networking. I'll be the first to admit that AWS has a serious problem with overcomplicating things, though. You really shouldn't have to navigate all these different options and gotchas.
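Something like this boto3 sketch is roughly what I mean (the AMI ID and instance type are placeholders, and it assumes AWS credentials are already configured):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an EBS-optimized, current-generation instance (which gets
# enhanced networking by default) with a 1 TiB gp2 root volume, so
# baseline IOPS (3 * 1024 = 3072) sits above the 3,000 IOPS burst cap.
ec2.run_instances(
    ImageId="ami-xxxxxxxx",    # placeholder: pick a real AMI for your region
    InstanceType="c5.xlarge",  # example current-generation type
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {
            "VolumeSize": 1024,  # GiB
            "VolumeType": "gp2",
            "DeleteOnTermination": True,
        },
    }],
)
```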
The point of this test wasn't to determine the absolute limits. It was to measure the real-world performance of the instances/servers we would actually be using.
I have tried/used the providers mentioned and others, and am now with UpCloud, which really does have great performance, better than DO and the rest from what I've seen. The only thing is that they don't offer much more than just servers yet.
I love DO, but man, in practice the CPU performance of their machines has been horrible in my experience. Like 2-3x worse than the same $ spend on EC2.
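For what it's worth, even a crude single-core check run on each provider's box makes this easy to sanity-check yourself. A sketch (SHA-256 hashing as a stand-in for a real workload; it won't capture steal time or noisy neighbors over days, but repeated runs give a rough $/perf comparison):

```python
import hashlib
import time

# Hash a 1 MB buffer repeatedly and report throughput; run the same
# script on each box and compare the numbers against what you pay.
buf = b"x" * 1_000_000
iterations = 2_000

start = time.perf_counter()
for _ in range(iterations):
    hashlib.sha256(buf).digest()
elapsed = time.perf_counter() - start

print(f"{iterations / elapsed:.0f} MB/s single-core SHA-256 throughput")
```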
Rubbish. Why is there even a section describing the methodology when it compares $40 and $50 instances against $20 ones? I can see why they might compare the $62 EC2 instances against other vendors' cheaper ones, as that is the point of their investigation, but the challengers should be on a level playing field. It seems to me that they wanted DO to 'win'.
If you read the paragraph directly after the list of instances tested, you'd see this was addressed directly. The test wasn't meant to mislead; it was simply exploring the best options for us. Those won't be the same for everyone, which is why we open-sourced the tool we built so that you can run your own tests as well.
> Even though neither Linode nor Vultr offer a CPU optimized tier, we wanted to test the options we would actually be using if we went with each provider.
This is the part that doesn't make sense. They basically chose one type from each vendor before benchmarking. If there are clearly instance types at twice the cost that would still be cheaper than the types chosen from other vendors, the results were stacked. How can you see this any other way?