
Putting your donations to work

Last month's donation drive was an outstanding success, with over a hundred individual donors.
After a few generous last-minute pledges, we received $10,400, exceeding our $8,000 target by 30%.

As promised, we put your money to work immediately. Not only have we paid for Inkbunny's main server, but we found a way to use the excess funds in line with our goal of obtaining three years' hosting.

From low-powered backup to secondary server

As noted in our previous FAQ, we planned to upgrade our on-site backup server this year. After all, Inkbunny will soon be larger than 1TB - we just passed 600,000 submissions and 260,000 members.

Coincidentally, our host ran an end-of-year sale – so naturally, we jumped on it!

The deals were so good that we managed to lease a highly capable secondary server with the excess:
HP DL120 G7 (photos) - Quad-core Xeon E3-1270 - 16GB DDR3-1333 ECC RAM - 4x1TB SATA2 RAID5

This configuration also comes with a 100TB/month network allowance, at a speed of 1Gbps.
To put that in perspective, Inkbunny currently uses around 12TB/month, on a 100Mbps line.
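For a rough sanity check on those figures, here is a back-of-the-envelope sketch (assuming decimal terabytes and a 30-day month; these are illustrative estimates, not official measurements):

```python
def tb_per_month(link_mbps: float) -> float:
    """Max transfer (TB) a link could move if saturated for 30 days."""
    seconds = 30 * 24 * 3600               # 2,592,000 s in a 30-day month
    bytes_total = link_mbps * 1e6 / 8 * seconds
    return bytes_total / 1e12              # decimal TB

main_cap = tb_per_month(100)    # the unmetered 100 Mbps line
new_cap = tb_per_month(1000)    # the 1 Gbps port (metered at 100 TB/mo)

print(f"100 Mbps ceiling: {main_cap:.1f} TB/month")   # ~32.4 TB
print(f"Current 12 TB/mo is {12 / main_cap:.0%} of that ceiling")
print(f"1 Gbps ceiling: {new_cap:.0f} TB/month, capped by the 100 TB allowance")
```

The same arithmetic explains the "worth ~30TB by itself" figure for the unmetered 100Mbps line mentioned further down.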

Compared to our current backup server, it has 4x the raw disk space, 8x CPU performance, 16x RAM, 10x bandwidth, and 20x network allowance - all for just €2003.40 (~US$2400) for three years.

What's it for?

Aside from submission and database backup, we intend to use this server in several new ways:
* to offload public content serving, decreasing network and disk utilization on Inkbunny's main server
* to test new versions of Inkbunny's software, database, and OS in a near-production environment
* to run a warm/hot standby database, decreasing the data-loss window in the event of server failure

Of course, its primary backup role remains important. We don't want to be shipping hard drives in a crisis!

Didn't you say you didn't need another server?

For years, we've maintained an on-site backup server. We took this opportunity to expand its capabilities beyond mere file backups, after evaluating the likely cost of main-server upgrades if we didn't do so.

We've seen increasing use of bulk downloaders, with hours-long bursts above 60Mbps. We consider this legitimate use, but it has the potential to cause network congestion and excessive disk usage. At the same time, we want to offer new high-bandwidth features without saturating our network link.

The new server will help to solve these issues. We can serve public files and thumbnails from it, while using the main server for the actual pages and private content. It should also ease future development.
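The split described above amounts to routing requests by content type. A minimal sketch of the idea, with hostnames and path prefixes that are purely illustrative (the real layout is not documented here):

```python
# Hypothetical sketch: rewrite URLs for public files/thumbnails to a
# secondary host, leaving pages and private content on the main server.
# "files.inkbunny.net" and the path prefixes are assumptions for
# illustration, not Inkbunny's actual configuration.
from urllib.parse import urlsplit, urlunsplit

SECONDARY = "files.inkbunny.net"                          # hypothetical
PUBLIC_PREFIXES = ("/files/full/", "/files/thumbnails/")  # assumed layout

def choose_host(url: str) -> str:
    """Point public-content URLs at the secondary server."""
    parts = urlsplit(url)
    if parts.path.startswith(PUBLIC_PREFIXES):
        parts = parts._replace(netloc=SECONDARY)
    return urlunsplit(parts)

print(choose_host("https://inkbunny.net/files/thumbnails/123.jpg"))
print(choose_host("https://inkbunny.net/submissionview.php?id=123"))
```

Because the rewrite happens when pages are generated, clients need no changes; they simply receive URLs that already point at whichever server should carry the load.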

Weren't you considering a U.S.-based cache?

Yes; we still are. This is a good first step. Once we split the load, it'll be easier to distribute further.
We considered leasing one in a separate sale, but the deals weren't as good.

Update: A sponsor stepped in to help us out here. You can also read about our Australian cache there.

Why this server/configuration? Why not just upgrade bandwidth?

It's a cheap way of getting 100TB/month transit and a 1Gbps port. Upgrading our main server would be nearly as expensive, and we'd "lose" its unmetered 100Mbps connection (worth ~30TB by itself).
Why broaden a road when you can build a new one and keep the one you have for the same price?

We considered a less-capable system with 10TB/mo. for €1000 - but this way, we get copious bandwidth and RAM for caching, a quad-core CPU of the same architecture as our main system, and disk space for backups. As a bonus, our host ran out of the E3-1230, so we got a free 200MHz upgrade.

We got 50% off the list price (on top of a 30% contract-length discount) because it's a three-year-old model, limited to a maximum of 32GB RAM and a single CPU. That's fine for our purposes.

In conclusion

It wasn't at all clear whether we'd raise what we needed, but you came through, and more.
We now have a solid foundation for future growth, thanks to the generosity of our members.

Many, many thanks to all those who contributed. We'll continue to accept donations to fund our other expenses and future payments. However, we encourage you to consider a commission or two as well!
Added: 2 years, 11 months ago
Vladimir
2 years, 11 months ago
Thank you for keeping this awesome website alive! =^w^=
maxinered
2 years, 11 months ago
That sounds like a very nice deal you snatched there. I think it should make Inkbunny run even smoother and make development a lot easier. Downtimes should be reduced too, since you can test updates/upgrades on your new server. Which I approve!

Also, sorry that I'm one of those bulk downloaders too, but I'm thinking of a few ways to reduce the load, so that you guys can concentrate on other stuff! :)

I'm happy to see all those donations put to good use, and I'd love to see what the future brings!
GreenReaper
2 years, 11 months ago
It's a good deal! I couldn't find 100TB that cheap anywhere else, and it gives us far more flexibility.

Bulk downloads are fine, although if you're doing it regularly it helps to schedule it for our "quiet time" (between 08:00-14:00 UTC). If you can avoid "breaking" caching, that will also help (e.g. there's no need to send a SID/query parameters, cookies, or a referrer for public files/thumbnails). We'll work on this on our end as well, once we get our cache set up.
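The advice above (no SID or query parameters, no cookies, no referrer for public files) can be followed with a plain GET. A minimal sketch, with an illustrative URL and user-agent string; real clients should also honour the quiet hours and rate-limit themselves:

```python
# Cache-friendly fetch of a public file: a bare GET with no cookies,
# no Referer, and no query string, so shared caches can serve it.
# The URL and user-agent below are placeholders for illustration.
import urllib.request

def fetch_public(url: str) -> bytes:
    # Query strings (e.g. a SID) make otherwise-identical requests look
    # different to a cache, so refuse them for public files.
    assert "?" not in url, "query strings defeat shared caches"
    req = urllib.request.Request(url, headers={"User-Agent": "bulk-fetcher/0.1"})
    # urllib sends no cookies or Referer unless you add them explicitly,
    # which is exactly what a shared cache wants to see.
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Downloader authors who batch requests this way get faster responses from the cache and keep load off the main server at the same time.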

We try to schedule maintenance for when it's quiet; unfortunately this impacts early-rising Europeans. Sorry! But better to have a few minutes every few weeks than hours or days of unexpected downtime.
maxinered
2 years, 11 months ago
My ideas lean more toward proxies, or a distributed proxy network, where files get cached and bandwidth evened out.

Like 30Mbit/s for 4 hours, rather than 60Mbit/s for 2 hours.

And I'm with you there. Better to have a little delay than not be able to use the site for hours.
Rakes
2 years, 11 months ago
*cough cough* Varnish-Cache
GreenReaper
2 years, 11 months ago
Yes, indeed! It's not been necessary up to this point, but Varnish is what we plan to use on our cache nodes. I have experience with its use for WikiFur and Flayrah. Edit: We're probably going to go with nginx instead, for various reasons. But it's cool, too!
fluffdance
2 years, 11 months ago
Congratulations on reaching and exceeding the goal, and a big THANK YOU to all those who donated!  I'm looking forward to an awesome and wonderful 2015, full of much InkBunny goodness!  :-D
KNIFE
2 years, 11 months ago
You guys rock harder than anyone. Grats! :D
Corinth
2 years, 11 months ago
It makes me ponder and appreciate what a big difference the new upgrades are going to bring to the table in the near future. You guys rock, and here's to a safe and stable network.
ElMatto
2 years, 11 months ago
Lamia
2 years, 11 months ago
*insert victory fanfare*
ZetaHaru
2 years, 11 months ago
Transparency, I approves :3
mobkiller
2 years, 11 months ago
Yay~ 😸
Roketsune
2 years, 11 months ago
Glorious victory, hahah! *fires celebratory starburst artillery rounds*
InannaEloah
2 years, 11 months ago
Congrats.  :)
dahan
2 years, 11 months ago
Congrats! Inkbunny's great and continues to improve :)
MightBeFurry
2 years, 11 months ago
Bless you for communication like this, and I applaud these forward-thinking decisions about how to use the donations. Thank you, IB staff!
Ketsa
2 years, 11 months ago
Woo and yay~ I'm really glad we made the $8000 goal, let alone exceeded it!
Norithics
2 years, 11 months ago
Amazing! I'm so glad this place is continuing to thrive; this really is great news.
radix
2 years, 11 months ago
Hooray!
HydroFTT
2 years, 11 months ago
This is why Inkbunny is so awesome, you guys raise money and immediately tell us exactly how it'll be used. I hope this site keeps running for decades to come!
GreenReaper
2 years, 11 months ago
It's pretty easy if you figure out what you want to buy ahead of time!

We had to scramble a bit to decide on the secondary server when the sale arrived, but we already had an idea of what we wanted - it was simply a matter of cost/benefit for the "wants" (e.g. 100TB was cheap, but 16->32GB RAM would have cost another $750; not cost-effective).

And yes, we would like to stick around, too. :-)
That's how most sites get big - they stick around and don't fall over. Growth can be stressful, though!
HydroFTT
2 years, 11 months ago
Yeah, just sticking around isn't always enough - not to name any other similar sites that never seem to understand that "growth without toppling" part. Just keep doing what you guys are doing; it's been a fantastic site for the whole time I've been here. Also, it's funny: when you replied to my journal on FA I had no idea who you were, but knowing about turning off friend requests on here was some seriously useful info (that I had never even thought to look up; I guess they weren't that annoying).
RevampSkunk
2 years, 11 months ago
Now if only FA could have the transparency and efficiency of Inkbunny. Oh well.
DSHooves
2 years, 11 months ago
IB, you're doin' it right.
iedasb
2 years, 11 months ago
I was going to help IB by buying some stickers on Redbubble... but I couldn't TnT
GreenReaper
2 years, 11 months ago
Being positive about the site and contributing to its community is helpful, too. :-)
iedasb
2 years, 11 months ago
I wish I could help more, I <3 inkbunny!
Catwheezle
2 years, 11 months ago
If bulk downloads are legit use, would it be possible to publish guidelines on making their use more "friendly"? Letting people play nice will mean that at least some API authors will do so.

Simplest might be to publish your off-peak hours in the wiki where you link to the downloaders, and also in the API docs (asking downloader authors to let their users schedule downloads might have a good effect!).

Another option is to have a method to specifically request images from the overflow server, so that bulk activities can avoid affecting the main server.
GreenReaper
2 years, 11 months ago
That's a good idea, and I've added these times to the wiki.

The intended result of our future caching hierarchy is that content will only be accessed from the main server when it is not available on the secondary. This would be accomplished by changing the URL to point to the secondary server, which will transparently load the image from the main server if necessary.
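The pull-through behaviour described here can be sketched in a few lines. This is a toy in-memory model of the idea (the function names and paths are illustrative; in practice this would be a caching proxy such as nginx or Varnish):

```python
# Toy model of a pull-through cache: the secondary server answers from
# its local store and fetches from the main server only on a miss.
cache: dict[str, bytes] = {}

def fetch_from_origin(path: str) -> bytes:
    """Stand-in for an HTTP request to the main server."""
    return b"content-for-" + path.encode()

def serve(path: str) -> bytes:
    if path not in cache:                  # miss: pull from origin once
        cache[path] = fetch_from_origin(path)
    return cache[path]                     # every later hit stays local

serve("/files/thumbnails/123.jpg")         # first request: origin fetch
serve("/files/thumbnails/123.jpg")         # second request: local hit
```

The key property is that only the first request for each file touches the main server; everything after that is served from the secondary's own disk and network allowance.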
Zippo
2 years, 11 months ago
Only 4 out of 7 fan slots used?
Occupy them all and watch it fly!
Nice pics.

Wish list:
PCI Express SSD, no spinning drives with their higher failure rate, or at least SATA3 or M.2
GreenReaper
2 years, 11 months ago
We have two SSDs in RAID1 on our main server. Throughput isn't high enough to need SATA3, though, even for them - and for hard disks, it's more about how fast the heads can actually move.

One good part about leasing: if something breaks, our host has to fix it! Oh, and those are sample photos from a review; I imagine with more fans it would just be noisier. :-)
Issarlk
2 years, 11 months ago
Inkbunny uses the servers it leases? Wow, that's pretty innovative.
GreenReaper
2 years, 11 months ago
We felt bad about not using our backup server for more. Now it has the bandwidth, disk, RAM and CPU it needs to play a more significant role. Edit: And now it is!
Kapoku
2 years, 9 months ago
When i grow up i wanna give money to inkbunny n.n
darkdoomer
1 year ago
when i get rich and got less bills to pay, i wanna, too.