GreenReaper

Netherlands cache no more, Roubaix here we come!

It's been years since the launch of Inkbunny's first Netherlands cache and backup server, phagos - aided by a surplus from our donation drive. As is often the case, requirements change over time, but our use of leases means we don't get stuck for years with hardware that no longer meets our needs.

I replace our content servers fairly regularly behind the scenes after consulting with other staff, based on available deals. It's been a while since I talked about it here, though, so I thought I'd write about what we've done with this particular service since 2015, and the thinking behind its current iteration:

* In 2018, we moved to a somewhat-cheaper yet more capable system: an HP DL380e G8 with two hexa-core Xeon E5-2420s (1.9-2.4GHz), 32GB DDR3-1333 ECC RAM and a 6TB RAID 5 [4 x 2TB]

* From December 2019 'til the end of November we used an HP DL180 Gen6 with two quad-core Xeon E5620s (2.4-2.66GHz), 64GB DDR3-1333 ECC RAM and 500GB RAID 10 plus 14.5TB RAID 5 [6 x 3TB].

The third system started as a very good deal. Honestly, it was still a good deal for the raw storage and HDD spindles, but at the start of this year we got an energy surcharge based on its two units of rack space; combined with annual price inflation at that host, it was no longer the best fit for our needs.

Indeed, after renewing our main server with a better price in 2021, and migration of our main cache in the Americas from Virginia to Quebec at the end of 2022, phagos was set to become our most expensive machine - which didn't fit with its limited role of serving content (albeit to most of Europe, the Middle East and Africa), acting as a backup, and processing +fav recommendations.

We considered other machines from the same provider, but instead decided to go with a smaller dedicated SYS-LE-2 server from French provider OVH's cut-price SoYouStart brand, with:
* an S1200SPL motherboard
* quad-core HT Xeon E3-1230 v6 3.5-3.9GHz [Kaby Lake-DT] [1-4 core boost @ 3.9/3.8/3.8/3.7]
* 32GB DDR4-2666 @ 2400MHz [a CPU limit] of ECC RAM (2x16GB, dual-channel), and
* 2x8TB HGST Ultrastar He10 SATA HDDs, with 256MB cache each
** These drives were partitioned in software RAID 0 as 1TB + 14TB ext4
* Transfer is provided in the form of a guaranteed 250Mbps in/out @1Gbps line speed.
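As a back-of-envelope check of the layout above (a sketch only: it assumes marketing-style decimal terabytes, and the variable names are mine):

```python
# Rough arithmetic for the two-drive layout described above.
# Assumes decimal (marketed) terabytes; purely illustrative.
drives = 2
tb_per_drive = 8                 # HGST Ultrastar He10, 8TB each
raw_tb = drives * tb_per_drive   # RAID 0 exposes the full raw capacity
db_volume_tb = 1                 # fast outer-edge database volume
content_volume_tb = 14           # bulk content volume
slack_tb = raw_tb - db_volume_tb - content_volume_tb
print(raw_tb, slack_tb)          # 16 raw, with ~1TB left for overhead/slack
```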

250Mbps of bandwidth is, in theory, a limitation; but this server only averages ~25-65Mbps out (with ~125-175Mbps peaks), and there's a limit to how fast we can realistically pull data off two HDDs, so it seemed a reasonable trade-off. In fact, when writing, it was able to exceed this speed, so it's clearly not a hard cap. Most importantly, our provider offers DDoS filtering, and there's no possibility of being charged because people are flooding your server with traffic - an issue with a previous host.
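For a sense of scale, here's what a sustained 250Mbps guarantee translates to as monthly transfer (illustrative arithmetic only, using decimal units):

```python
# Convert a guaranteed line rate into sustained monthly transfer capacity.
mbps = 250
bytes_per_second = mbps * 1_000_000 / 8       # 31.25 MB/s
seconds_per_month = 30 * 24 * 3600
tb_per_month = bytes_per_second * seconds_per_month / 1e12
print(f"~{tb_per_month:.0f} TB/month")        # ~81 TB/month
```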

There was a more-expensive model with a slightly faster CPU and twice the RAM; as is often the case, the extra capability didn't justify the additional cost when our main bottleneck is storage. More RAM would have been nice, but turned out to be inessential. Likewise, while the drives' write caches aren't battery-backed, they support FUA (Force Unit Access) - a feature which lets the server guarantee a particular write has reached the disk without forcing all writes in flight to disk. This helps when the database replication process wants to be sure it's saved something while the image cache is also writing a temporary copy of a file from the main server.
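In practice this comes down to requesting durability per write rather than flushing the whole drive cache. A minimal sketch of that idea (durable_write is my own illustrative helper, not Inkbunny's code; on Linux, O_DSYNC writes are what the block layer can satisfy with an FUA write on supporting drives):

```python
import os
import tempfile

def durable_write(path, data):
    """Write data and ensure it reaches stable storage before returning.
    Hypothetical helper: with O_DSYNC, each write completes durably, which
    the kernel can satisfy via FUA on drives that support it rather than
    flushing the entire cache. Where O_DSYNC is unavailable, fall back to
    an explicit fsync."""
    flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC | getattr(os, "O_DSYNC", 0)
    fd = os.open(path, flags, 0o600)
    try:
        os.write(fd, data)
        if not hasattr(os, "O_DSYNC"):
            os.fsync(fd)  # portable fallback: flush this file's data
    finally:
        os.close(fd)

path = os.path.join(tempfile.gettempdir(), "durable_demo.bin")
durable_write(path, b"replicated row")
print(os.path.getsize(path))  # 14 bytes, written durably
```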

As this was one of our host's cheaper offers, we didn't get to choose where the machine was deployed beyond "France". It ended up in Roubaix, which was perfect as that's central to our host's network (being their headquarters), and has great links to the rest of France and Spain (including our main server in Gravelines, to the north - but not in the same datacenter for redundancy) as well as Central and Eastern Europe. Sorry, Italy, it's a bit more roundabout for you. We're guessing many won't know where Roubaix is, and it's closer to Belgium than most of the rest of France, so we named it the Belgium cache.

Like our main server, phagos uses 64-bit Debian Linux bullseye and encrypted storage for the 1TB database volume, located at the faster 'start' of the drive (which is actually the outer edge, where the speed under the head is greatest). This also holds thumbnails and other small files. Keeping these here significantly increases the attainable operations per second on a traditional HDD, as there's far less distance for the heads to move on average. Larger files go in the second volume; we can fit 14TB there, which closely matches our main server (and is over twice what we currently need).
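The outer-edge effect is just geometry: at constant rotational speed, linear velocity under the head (and thus sequential throughput) scales with track radius. A quick illustration using assumed platter dimensions, not vendor specifications:

```python
# At fixed RPM, data passes under the head faster on larger-radius tracks.
# Radii below are rough assumptions for a 3.5" platter, not measurements.
outer_radius_mm = 46.0
inner_radius_mm = 21.0
ratio = outer_radius_mm / inner_radius_mm
print(f"outermost tracks ~{ratio:.1f}x the sequential speed of innermost")
```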

Both volumes use RAID 0 (striping) - this isn't great for redundancy, and isn't something we usually do, but it helps maintain the necessary performance with just two drives; both had less than six months of wear when we got them, so chances are good they'll last while we use them. Ultimately, this is a cache - all files are copies and/or have copies elsewhere. The server also comes with 100GB of NFS storage to back up configs.
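For illustration, RAID 0 simply interleaves fixed-size chunks across the drives, so sequential reads and writes draw on both at once. A small sketch of that mapping (assuming a simple striped layout with the common mdadm default of a 512KiB chunk - not our actual configuration details):

```python
def stripe_location(offset: int, chunk: int = 512 * 1024, disks: int = 2):
    """Map a byte offset in a RAID 0 array to (disk index, byte offset on
    that disk). Illustrative only; assumes a plain striped layout."""
    chunk_index, within = divmod(offset, chunk)
    disk = chunk_index % disks        # chunks alternate across the disks
    stripe = chunk_index // disks     # which stripe row on that disk
    return disk, stripe * chunk + within

print(stripe_location(0))            # (0, 0)
print(stripe_location(512 * 1024))   # (1, 0) - second chunk, second disk
print(stripe_location(1024 * 1024))  # (0, 524288) - back on the first disk
```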

CPU-wise, there are both frequency and architectural gains from seven years of development - it's ~112% faster than the old phagos at a common single-threaded benchmark (OpenSSL's speed test). This is borne out by average CPU usage: down to ~80% of a CPU from ~160%. In practice, we previously spread computation of one recommendation request over four workers, with the expectation that we'd often have another request in flight and thus be using all eight cores. This still works, since we now have four hyper-threaded cores, and they barely clock down (to 3.6GHz) if they're all used, so an individual recommendation completes in roughly half the time it used to.
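To make those percentages concrete: "~112% faster" means about 2.12x the single-threaded throughput, which is why a recommendation finishes in roughly half the old wall time (the per-request figure below is hypothetical, purely for the arithmetic):

```python
# "112% faster" = 2.12x throughput, so per-request latency roughly halves.
speedup = 1 + 1.12
old_seconds = 1.0                  # hypothetical time per recommendation
new_seconds = old_seconds / speedup
print(f"{new_seconds:.2f}s")       # 0.47s - roughly half
```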

With only one, far more efficient CPU (it struggles to use 32W, and usually draws half of that), two HDDs rather than six, no hardware RAID card and far fewer fans, the new system uses roughly a third of the power of the one it replaces, which doubtless plays into the lower cost. It's also ~130% faster than the current main server, CPU-wise, and has twice the RAM - both things we plan to address soon, as we've grown enough that it's become an issue. SSD helps cover the main server's lack of RAM, but there's a limit; it's running out of space, and it doesn't help if lots of people want to upload at once. So watch this space!
Added: 1 year, 6 months ago
Site News Item: yes
 
whitepawrolls
1 year, 6 months ago
Glad to see you still kicking, and thank you for the server information updates :)
GreenReaper
1 year, 6 months ago
Oh yeah! I'm always doing stuff in the background, even if it's not technical. It's just often not all that interesting, unless you have a deep and abiding interest in the intricacies of hosting or moderation.

I also got into the habit of putting stuff out on the IB Twitter account - it's more immediate than writing a long journal. Hence why you got all these cache updates at once; they were announced there, but without most of the reasoning. I also keep Inkbunny's page on WikiFur up to date with changes.
whitepawrolls
1 year, 6 months ago
Been doing stuff with hardware since about 91, and have built a few servers over the years including my own home ones :)

Don't use twitter myself. Too much Elon and Trump :P
ThisOtterDoesKnotExist
1 year, 5 months ago
I can also confirm, server administration, management and maintenance is indeed interesting :D
SamanthaIndigo
1 year, 6 months ago
*Snuggles and earrubs. <3
Calbeck
1 year, 6 months ago
Thanks for the transparency! Sounds like things are clicking along!
Issarlk
1 year, 6 months ago
It's good to get updates.
Apart from hardware news, does Inkbunny evolve behind the scenes too?
GreenReaper
1 year, 6 months ago
I'll be honest, we have a very small developer pool and all have been busy IRL for one reason or another. It is possible that new file format support will arrive soonish (WebP and/or WebM) due to improved support in a forthcoming version of our operating system (so we feel confident enabling it without performance/security impact). A main server upgrade will occur first, to support this and because serving increased traffic from new arrivals is the most pressing issue.
TPoE
1 year, 6 months ago
Coming from a network engineering point of view too, this is really interesting to hear!

Thank you for keeping this space operational for so many users, all without ads~!
Mad respect to y'all from the admin team.
GreenReaper
1 year, 6 months ago
And all for ~$10/day across all hardware! 😸

Our network story is pretty boring other than that we have at times had to limit the size of files from certain locations - but that was more commonly due to storage limitations than bandwidth or transfer limits. We also have limited backup bandwidth, but that's improved over the last year so it's far less of a concern (which is good, because some SSL transfer compression options we use are going away).

Figuring out where the limits of each cache node's serving area lie vs. the others' is always interesting, though - lots of pings!
bullubullu
1 year, 6 months ago
It never ceases to amaze me how data storage and transfer has become so efficient and relatively affordable!
If you can have Inkbunny running for like 300 bucks per month, while servicing hundreds of thousands of active accounts and hosting millions of files... Well, I just think it's amazing^^
MrDoberman
1 year, 6 months ago
Thanks for the update! ❤️ Not many sites nowadays do transparency updates like this.
Inafox
1 year, 6 months ago
Is this why the thumbnails kept disappearing? Hopefully that's fixed now.
GreenReaper
1 year, 6 months ago
There was a power-related outage in your particular region today in the early afternoon that impacted the content server and some restoration was required after that.

If you're seeing issues over a longer period than that, drop me a PM or file a support ticket and maybe we can debug it, as it is served differently and it is possible that there are some ongoing IPv4 vs. IPv6 issues (for example).
Inafox
1 year, 6 months ago
Not seen it happen since this post, probably cert issues and geography.
Also, when does IPv4 get abandoned? There's like 2x as many humans as IPv4 addresses. And it's been two decades :P
Novilon
1 year, 6 months ago
Ask the ISPs that question.  A lot of them still don't provide IPv6 support for residential customers.

(And frankly, IPv6 is a pain in the ass to set up anyway.  I sure as hell don't want to switch to it anytime soon...)
GreenReaper
1 year, 6 months ago
Yeah, the one I had before never got around to it in over a decade. The reality is if you have IPv4 space already, the benefit is small, so ISPs instead see it as a cost and a source of misconfiguration.
Inafox
1 year, 6 months ago
Meanwhile my cousin's friend in India can't get IPv4 and join IPv4 servers because they don't support IPv6 so has to use a slow proxy. There's like more humans than IPs, it sucks esp in growing countries, USA has monopolised most IPs. Most westernised furries would never cope without access to their favourite games and sites, etc. So if you wonder why there's so few furries from growing and third world countries even where net is avail, you now know why :3 Someone needs to upgrade the internet ASAP!
https://worldpopulationreview.com/country-rankings/ip-a...
grungusschungus
1 year, 6 months ago
I take it this is why images aren't loading with an "ERR_CERT_COMMON_NAME_INVALID" error?
GreenReaper
1 year, 6 months ago
No, probably not. It is a sign of some problem relating to the expected name of the server not matching the security certificate being presented for it. This can occur from misconfiguration but also from interference with the initial communication with the server, either accidental or deliberate.

Some proxies or VPNs may be a bit flaky in this regard, in which case you might wish to try changing your content server to the main server, since at least you know you can access that.
pierogero
1 year, 4 months ago
appreciate the updates :3
DSHooves
1 year, 4 months ago
“Both volumes use RAID 0 (striping) - this isn't great for redundancy, and isn't something we usually do”

Nothing to add here. All I can think is what my hubby usually says haha:

“Redundancy is key, redundancy is key, redundancy is key!” 😹
GreenReaper
1 year, 2 months ago
Yeah, by comparison the main server uses RAID 5 for media files and will be moving to RAID 1 soon (thanks to HDDs getting larger just slightly faster than our data has grown).
NeonDemon
1 year, 2 months ago
I'm late but is hosting cub content in a country (which happens to be mine so I know about that law) that considers *ANY* kind of underage content illegal (including drawings of anthros, cartoon characters and "1000 year lolis") a good idea? I don't really know the legal implications of it but I can't imagine OVH would be happy to have (technically) illegal content on their servers.
GreenReaper
1 year, 2 months ago
That would be this one, right? As with many such laws, it revolves around the definition of 'mineur' - bearing in mind that this section is titled 'offences against the human person' - and it'd be hard to claim that this applied to non-human animals without that decision setting a precedent affecting other non-human animals in, say, zoos. False imprisonment, anyone?

The only French case I'm aware of reported here (involving an Inkbunny user) involved lolicon. The UK goes further in an attempt to address nekomimi, yet in a decade and a half I have not heard of them going after full 'cub' characters, let alone successfully. This is probably because dedicating police time to cartoon animal characters risks bringing the justice system into disrepute.

OVH likewise has their hands full dealing with people using their servers for spam, DDoS, hacking their neighbours, etc. They are not too concerned about cartoons. We have actually had cub content reported a few times (it's relatively rare, once every few years), and all that happened is I got what appears to be an automated notification requesting that I investigate the situation.
NeonDemon
1 year, 2 months ago
after looking at your link (and others) in more detail, it doesn't explicitly state "any kind of content" as I thought it did, which is good.
It does, however, say that fictional forms are included in that law, but it's probably up to them to decide if an "underage-looking animal character" counts - though if the 1000-year-old loli vampire goddess counts, I wouldn't be surprised.
So, I guess cub is a grey area.

either way, as you said, it's unlikely the police actively look for or care about that kind of thing, and even if it was brought to them, they'd probably not care that much. we have bigger problems than "they're drawing imaginary things i don't like!" that some people might report, lol.
GreenReaper
1 year, 2 months ago
It gets even more interesting when you consider the safe-harbour defence provided by the e-Commerce Directive. Essentially, at least in the UK, users can be prosecuted for their actions, but IB can't in its role as a hosting provider unless it takes an active step in editing or promoting the work (probably more than, say, "Popular" does), or fails to take it down having been made aware of it being illegal. This applies to some of the more serious laws, including the one about cartoon depictions of sex involving fictional characters that appear to be children, despite one or more non-child features.

This appears to be implemented in French law by Law No. 2004-575 of 21 June 2004 on confidence in the digital economy, Article 6. Here's one example of its effect, as I think is this one featuring OVH, although it seems it doesn't always work out (but I think that case predates that particular law, and perhaps helps justify it). Since then, OVH has been busy fighting against being sued by French campaigners over Spanish surrogacy sites. But more recently, the uncertain legal fight against child sexual abuse material continues.
MegaMoth
4 months, 1 week ago
I've always wondered, how do websites like these store SO MUCH data safely, securely and with some measure of redundancy in case of problems. When I upload, is every file saved to the website server 1 to 1 in sense of file size? Or is the file compressed? I can't imagine with multiple people uploading every couple seconds a normal hard drive would last long in terms of storage space. How does that work?
GreenReaper
4 months, 1 week ago
You can see the level of redundant storage in the Hardware section of our WikiFur page. Inkbunny uses about 6.5TB in total - perhaps smaller than you might think - and on our new main server we have two 12TB drives which can store a mirrored copy, plus we have backups on other servers locally and internationally. The content servers don't have to store all the content, just cache some of it.

PNG files are recompressed, and we occasionally de-duplicate the archive, which reduces storage usage by about 5%. But that doesn't save much; the point is more that if the average submission file is ~1MB and its reduced-size copies come to another ~1MB, we can store ~six million of those (we currently have three million).

We actually get about one submission every two minutes (although that comes closer to one per minute at peak times), so there's adequate time to handle each one. And storage technology has been growing fast enough that we don't have to use multiple or custom servers, though FA may be another matter. In the future we'll probably even be able to store it all on SSD! FA does that already, which is part of why they are spending far more money than our ~$10/day.