We have just completed the move to a shiny new server! It is much bigger and more powerful than our last one, which was getting to be over two years old.
Thanks to the many much-appreciated donations, we were able to afford the move in record time. We should also be able to cover the next 6 months of hosting costs with the cash left over!
The new server has been named "Avarice" (a Fullmetal Alchemist reference). We gave naming honors to FierceWendigo for being the first user to reach our new gem-level sponsorship ranks!
A list of all the users who donate by the end of this month will be recorded permanently on our wiki as the people who brought us the new server. :D You can remain anonymous if you wish.
The new specs for Avarice are:
- HP DL380eG8 server
- 12 x Intel Xeon E5-2420 64-bit 1.9GHz CPU cores (2 chips with 6 cores each)
- 32GB DDR3 RAM
- 2TB SATA2 (4 x 1TB in RAID10) for the web server and asset storage
- 64GB SSD (2 x 64GB in RAID1) for the database
The disk arrays push data much faster than the old server. The SATA2 web storage pack reaches around 450MB/s. SATA2 may seem outdated, but it's super-cheap and very fast in the RAID10 configuration we're using, which is why it's still a standard offering in servers. We also get twice the storage for the same price as the old SAS/RAID5 drives we were using (and they were a third of the speed, too!).
The SSD RAID1 disk pack for the database hits a 1GB/s transfer rate. Yep, that's a gigabyte per second. :O Compared with the old server's SSD at 250MB/s, that's a huge increase. The disk controllers in the HP DL380eG8 have special new optimisations for SSDs which, along with the read-speed benefits of RAID1, help it achieve that high transfer rate.
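As rough arithmetic (using the approximate figures quoted above, not fresh benchmarks):

```python
# Rough speedup arithmetic from the transfer rates quoted in the post.
old_sas_raid5 = 450 / 3   # old SAS/RAID5 array was "a third of the speed": 150 MB/s
new_sata2_raid10 = 450    # new SATA2 RAID10 array, MB/s
old_ssd = 250             # old server's single SSD, MB/s
new_ssd_raid1 = 1000      # new SSD RAID1 pack, ~1 GB/s

print(f"Web storage speedup: {new_sata2_raid10 / old_sas_raid5:.1f}x")  # 3.0x
print(f"Database speedup:    {new_ssd_raid1 / old_ssd:.1f}x")           # 4.0x
```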
The Xeon CPUs use hyperthreading, so our Debian OS sees 24 CPU cores (2 threads on each of the 12 real cores). Current site traffic barely touches even two of these virtual cores at any given time, on average. So there's a lot of room for the site to grow in terms of CPU needs! It also helps us deal with unexpected traffic spikes.
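The core count works out like this (a sketch of the arithmetic; on the server itself, tools like `nproc` report the same logical total):

```python
# Physical vs. logical core count for the dual Xeon E5-2420 setup.
sockets = 2           # two CPU chips
cores_per_socket = 6  # the E5-2420 is a 6-core part
threads_per_core = 2  # Hyper-Threading

physical = sockets * cores_per_socket  # 12 real cores
logical = physical * threads_per_core  # 24 cores as seen by Debian
print(physical, logical)  # 12 24
```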
We have moved to a 100Mbps unlimited network connection; essentially the same as before, but we are no longer billed on a 95th percentile scale when we go over a 30Mbps average speed. As the site gets more traffic, our data costs will stay exactly the same. We only use about 30Mbps on an average day, so we have the capacity for roughly three times more users accessing the site daily before it becomes an issue.
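That "three times more users" figure is just the headroom ratio on the link:

```python
# Headroom on the 100Mbps unmetered link, given ~30Mbps average use.
link_mbps = 100
average_mbps = 30

headroom = link_mbps / average_mbps
print(f"Traffic can grow ~{headroom:.1f}x before the link saturates")  # ~3.3x
```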
In brief: the new server is much faster and gives us plenty of growth potential for the next couple of years!
On read requests, having mirrored drives means that the request can be serviced by either drive, or even both simultaneously, potentially doubling throughput while halving average access time. This is especially beneficial for sequential reads, e.g. a large, non-fragmented file. (Most of Inkbunny's images are large, non-fragmented files.) It depends on the controller, but they're a lot smarter now.
RAID0 is striping and RAID1 is mirroring. Our images are mirrored and striped; the DB is just mirrored.
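As a toy illustration of those two layouts (a hypothetical chunk-level model for clarity, not how the HP controller actually schedules I/O):

```python
# Toy model of RAID0 striping and RAID1 mirroring at chunk granularity.

def raid0_disk_for_chunk(chunk_index: int, num_disks: int) -> int:
    """RAID0: consecutive chunks rotate across the disks (striping)."""
    return chunk_index % num_disks

def raid1_read_disk(chunk_index: int) -> int:
    """RAID1: every chunk exists on both mirrors; alternating reads
    between the two copies lets a smart controller serve two at once."""
    return chunk_index % 2

# RAID10 (images): striping across mirrored pairs, so 4 disks give
# 2 stripe columns. Chunks 0,1,2,3 land on columns 0,1,0,1...
print([raid0_disk_for_chunk(c, 2) for c in range(4)])  # [0, 1, 0, 1]

# RAID1 (database): both disks hold everything; reads can alternate.
print([raid1_read_disk(c) for c in range(4)])          # [0, 1, 0, 1]
```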
We haven't been able to reproduce this issue. Could you give some examples, perhaps in a support ticket, and let us know who else is having such an issue?
How many servers do you run now, and what is their purpose? I'm curious.
When new servers are set up, have you ever replaced lesser servers with the older (but slightly better) ones? Do you have a page that has the specs of each?
We'll be saying a lot more about our hardware setup and why it is as it is very soon, but for now, check out WikiFur's article about Inkbunny and its references. One thing to understand is that Inkbunny leases servers, and so does not really have to deal with old hardware - it just doesn't get renewed.
Good read! Interesting (I guess?) to hear that you "only" have one server. I love your explanation behind it and it makes a lot of sense. You guys figuring out how to optimize things is also great :) ...other sites could learn a thing or two about cleaning up the cruft and not over-complicating the hardware, etc (as well as just better management & choices).... but whatever, screw those guys!
And hey, if you ever don't know what to do with any extra money, I say splurge and get 6x1TB Samsung 850 Pros for storage. Overkill... but really not that much pricier than your other choice. (Well... there are 2TB consumer drives, but I'd be scared how well they'd work in a server environment. And then there's this ST1200MM0017... which is only 200GB more and $35/ea cheaper than an SSD that has 2x the warranty, is probably more reliable, and is WAY faster.)
Also... you said you're only rocking 2x16GB of RAM? That means each CPU is running in single-channel mode. I know you know, but I'm just saying. Each of those CPUs supports triple-channel... so four more 16GB sticks would be pretty sweet too. Even just dual-channel mode would be a nice CPU speed upgrade (1-15%, possibly more, IIRC). DDR3 isn't something that gets cheaper over time (new, not used) either. Once DDR4 is the major player and DDR3 production slows, the price is going to increase. Looking online... the sticks I think you'd need would be less than $200 a pop.
The issue there is we don't own the hardware. We have to pay for upgrades, monthly, and quite a lot. They like people who are clearly making money off their servers and need higher capacity because they can charge them lots more. :-)
What we will probably do next is lease a new lower-capacity server to act as a front-end cache, testing server, streaming DB backup, etc. This might happen sooner than expected given that: they have a sale on right now; we've been growing faster this month due to FA issues; we expect to reach our goal; and we would've needed a new system for backups in a few months in any case.