
Saving space, time and money by refactoring Inkbunny's database

Wondering what all that late-night database maintenance is about? Here's a summary . . .

Inkbunny's development started almost five years ago. Most of that time was spent adding new features - and everyone loves those! But just as an artist might need to clean up around the edges, there are times we need to step back and refactor the code and the database. I'll be talking about the latter here.

As powerful as our current server is, it only has 32GB of RAM, and we'd like to keep as much of the database in memory as possible. The enterprise-grade SSDs used to store the data are expensive, too - and we want to use your donations as efficiently as possible.

As such, we're applying transformations to reduce the size and rate of growth of the database. This could mean the difference between having to upgrade in a year or two vs. three or four. It should also improve its performance - less data to handle means faster operations.

If that's enough for you, you can stop here. If you want to know exactly what we're doing, and don't mind learning a bit about databases, read on!

Reducing the size of keys

Relational databases are organized into tables, and within each are rows and columns. Each row has information related to one instance of a particular type of thing - for example, there's one row per user in the users table, containing their username, email address, etc.

In most cases, each row is uniquely identified by one or more pieces of information. This is known as a key, and it's used in other tables to record a relation; for example, both the unread submissions and the favourites tables store one copy of the user and submission keys for each unread submission/+fav.

The primary key is often set to be a steadily-increasing number which otherwise has no meaning. This helps avoid issues with relations to other tables. (Usernames are unique, and could be used as a key, but they can also change; we don't want to have to update every table relating to a user when they do.)
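
In PostgreSQL terms (Inkbunny runs on Postgres, as discussed in the comments), such a table might be declared roughly like this - a sketch only, with illustrative names, not our actual schema:

```sql
-- Illustrative only; not Inkbunny's real schema.
CREATE TABLE users (
    user_id  bigserial PRIMARY KEY,  -- surrogate key: a steadily-increasing number
    username text NOT NULL UNIQUE,   -- unique, but can change, so not used as the key
    email    text NOT NULL
);
```

Other tables then store only the `user_id`, so a username change touches exactly one row.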

Numbers are also small. Think about how fast unread submissions can rack up if you're watching a lot of users. We have members with tens of thousands of them. (This brought down FA in December.)

What we've used to date is an eight-byte integer (whole number) key. Eight bytes doesn't sound like much, but it can store 9,223,372,036,854,775,807 different positive values. It's good to be optimistic, but the reality is that Inkbunny will never have 9,223,372,036,854,775,807 users, past or present.

It's a fair bet that we'll also never have 2,147,483,647 users, the maximum for a four-byte signed integer. We can therefore use a four-byte key, saving space in the user table and every table which relates to it. This same reduction can be applied to submissions (even deviantART has only used 420,000,000 submission IDs), private messages, comments, keywords, etc.
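
The conversion itself is a one-line change per column (again with hypothetical names); the same change has to be mirrored in every table that stores a copy of the key:

```sql
-- Shrink an eight-byte key to four bytes. PostgreSQL rewrites the table
-- to do this, which is why it happens during maintenance windows.
ALTER TABLE users ALTER COLUMN user_id TYPE integer;
```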

The one thing that might not work so well is user favorites. We already have nine million of these, and it's not beyond the realm of possibility that we could have 250 times that in the future. But that's OK, because it turns out we can just get rid of that key!

Removing redundant keys on junction tables

We often want to represent a relationship between two types of object - for example, if and when a user +fav'd a work, and how many stars they used. These details are recorded in a separate table, using copies of the user and submission IDs (known as foreign keys).

Because a user can only have one +fav for a single submission, the combination of the user and submission IDs (the copies I mentioned earlier) uniquely identifies the row. It's a compound key. A separate key is unnecessary - reflected by the fact that we never queried it. So we're getting rid of it.
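
A sketch of such a junction table, keyed directly on the two foreign keys (names illustrative; the real schema differs):

```sql
-- One row per +fav; no separate surrogate key column.
CREATE TABLE favorites (
    user_id       integer     NOT NULL,  -- foreign key into users
    submission_id integer     NOT NULL,  -- foreign key into submissions
    stars         smallint,
    faved_at      timestamptz NOT NULL DEFAULT now(),
    -- Compound key: one +fav per user per submission, enforced by the database.
    PRIMARY KEY (user_id, submission_id)
);
```

Dropping the surrogate key also drops its index, so the saving is doubled.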

Such tables are among the largest in the database (some people have over 20,000 +favs), so slimming them down has a big impact. We expect to save almost 1GB on +favs alone, when combined with a few other things, such as . . .

Using more efficient data representations

Many items of data are stored as strings of text. Text is great - it can be used for just about anything. However, our database has more efficient data types which take up less space and are faster to search and compare to each other.

In particular, the 'inet' type is designed to store a network address, in just seven bytes for an IPv4 address. We're also planning to convert the number of +fav stars to a smaller format, and store MD5 hashes as UUIDs rather than strings, halving their size.
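
Conversions of this kind can be done in place with a cast; these are hypothetical examples of the pattern, not our exact statements:

```sql
-- Text IP addresses to the native inet type (7 bytes for IPv4).
ALTER TABLE sessions
    ALTER COLUMN ip_address TYPE inet USING ip_address::inet;

-- 32-character hex MD5 strings to uuid (16 bytes instead of 33+).
ALTER TABLE files
    ALTER COLUMN md5_hash TYPE uuid USING md5_hash::uuid;

-- Star counts never exceed 3, so smallint (2 bytes) is plenty.
ALTER TABLE favorites
    ALTER COLUMN stars TYPE smallint;
```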

Removing unnecessary data

Of course, the cheapest data is that which you don't store at all. For example, we don't need to store your user ID, IP address or a date for every row in a character information sheet - most of this information is already stored for the submission. We just need the submission ID, the field, and the text.

The tables aren't the only thing we have to look at, though.

Removing unused indexes

You might think it natural for rows in a table to be ordered. In fact, data is generally stored in no particular order; for example, user 100 might come before user 1. This avoids the need to re-order rows on addition or deletion - besides, there are several different orders which might be used.

Instead, the database uses indexes. You can think of them as lists ordered by one or more pieces of data from the table, with each line pointing to a particular row.

For example, there might be indexes on username, user ID and email address. Without these, doing searches on such data - like finding every user whose name started with E, or checking to see that a given user ID didn't already exist - would be very slow; you'd have to read the whole table.

Indexes are crucial for performance. However, they occupy lots of space - often more than the table itself - and take time to update when entries are added or deleted, so if they're not actually used, they're a waste. We're identifying and removing unused ones.
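
PostgreSQL tracks index usage in its statistics views, which makes the candidates easy to find. A query along these lines lists indexes that have never been scanned since statistics were last reset:

```sql
-- Unused-index candidates, largest first. Note that indexes backing
-- PRIMARY KEY and UNIQUE constraints must be kept even if never scanned.
SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```

Each survivor still needs human judgement - an index might exist for a rare but important query - so this is a shortlist, not a kill list.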

Reordering the columns of tables to decrease padding

To work efficiently, computers often require that pieces of data be aligned in memory. For example, a four-byte integer might be put at location 0, 4 or 8, but not 6 or 3.

Unfortunately the database software we're using isn't smart enough to say "this field is big, it's got to go first". Instead, it arranges columns in the order given, and uses padding to enforce alignment.

By ordering the fields such that the larger, regularly-sized ones go first, we can reduce or eliminate padding. A few bytes can translate into tens of megabytes when you have millions of rows.
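
A contrived example of the effect (illustrative tables, not ours):

```sql
-- bigint must start on an 8-byte boundary, so this layout wastes space:
CREATE TABLE padded (
    a smallint,  -- 2 bytes, then 6 bytes of padding before b
    b bigint,    -- bytes 8-15
    c smallint   -- 2 bytes, then padding out to the 8-byte row boundary
);               -- 24 bytes of column data per row

-- Largest, regularly-sized columns first; the small ones pack together:
CREATE TABLE packed (
    b bigint,    -- bytes 0-7
    a smallint,  -- bytes 8-9
    c smallint   -- bytes 10-11, padded to 16
);               -- 16 bytes per row: 8 bytes saved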

One neat trick that saves 45MB

Reducing padding is great, but what if you end up with a tiny field at the end - say, the number of stars in a +fav - which has to be padded to a full eight bytes? Why, you get rid of it entirely!

Databases have a concept called NULL, which represents the lack of information, as opposed to any particular value. With our database software, nulls are stored in a series of bits after each row's header - if a bit is set, there is no space reserved for the corresponding field. There's normally a byte (eight bits) of padding after the header - alignment, again - so if a row has eight or fewer fields, nulls are free.

Here's the trick: it turns out that 70% of +favs use three stars, 20% use two, and 10% use one. When it's three stars, we can set the star value to NULL, and turn it back into three stars when it's queried. This can save eight bytes on 70% of the rows in the table.
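
In SQL, the round trip is two standard functions (a sketch, using the hypothetical favorites table from earlier):

```sql
-- On write: NULLIF(x, 3) yields NULL when x = 3,
-- so the most common value costs no storage at all.
INSERT INTO favorites (user_id, submission_id, stars)
VALUES (42, 1000, NULLIF(3, 3));

-- On read: COALESCE turns NULL back into three stars.
SELECT user_id, submission_id, COALESCE(stars, 3) AS stars
FROM favorites;
```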

(Database purists might regard this as an abuse of NULL, but the savings are worth it.)

Database software upgrade

In addition to all that, we've upgraded the database to a version which can use indexes to satisfy certain queries directly, without looking at the table. This is a lot faster, reducing query time and CPU load.
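
This feature (an "index-only scan", introduced in PostgreSQL 9.2) applies when every column a query needs is in the index. As a hypothetical example:

```sql
-- With the compound primary key on (user_id, submission_id), this count
-- can be answered from the index alone, without reading the table heap,
-- as long as the table's visibility map is reasonably up to date.
EXPLAIN SELECT count(*) FROM favorites WHERE user_id = 42;
-- The plan shows "Index Only Scan" instead of a heap fetch per row.
```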


Whew. It's a lot, but it's worth it. In the end, we think we can get the size of a +fav down from 84 bytes to 47 - a 44% improvement - and its index sizes down by 53%. The total savings are 900MB; a full 50%. And that's just one table. Last night's changes cut almost 1.5GB in total.

These kinds of data transformations take time, so we're doing them in stages. We're trying to keep them to off-peak times, but we apologize for any inconvenience caused! The result should be a faster site with less downtime for site migrations, so it's worth it in the end. :-)
Added: 9 years ago
9 years ago
Do what is needed to be done, I know exactly what you mean lol obviously but yeah keep up the good work. I enjoy using this place.
9 years ago
Very cool! IB already has lots of features so speed & maintenance doesn't hurt.

Also, is there any sort of a summary of what's next for IB? Just curious.

Thanks for your efforts, GR.
9 years ago
We generally won't pre-announce features because it's very easy to promise a feature and later find out you can't deliver it, especially given the all-volunteer nature of our team. We have an internal list of feature requests, and when we're confident that you'll get something on that list, you'll hear about it!

To be clear, it's a team effort: I devised these changes, and starling implemented them in production.
9 years ago
Ah! Thanks to Starling too.
9 years ago
That's some fabulous stuff, mang. Looking good :D
9 years ago
That was a very interesting read. Thanks for sharing! ^^
9 years ago
Not the most interesting read, but informative, and this style of problem solving appeals greatly to me, thanks for this work.
9 years ago
Something FA can't quite seem to grasp. lmao
9 years ago
Thank you for being so thorough in your explanation!
9 years ago
I love seeing these technical posts, really informative! X3 That NULL trick was pretty nice
9 years ago
An interesting read for sure. Hopefully I'm done with my days of DB work and won't have to deal with this myself, though. xD
9 years ago
Don't go down the TRUE-FALSE-NULL road.  It may save complexity and space now, but it can seriously bite your bum later on when you DO need that NULL again or when you need more values.

The rest is awesome.  I'm just surprised inet wasn't used until now :D

Also, I <3 Postgres, more people should use that thing.
9 years ago
Yeah Postgres tables can eat millions of rows like candy and not skip a beat. It's a beast!
9 years ago
We're still considering it, as it's more complex (read: requires code change). Booleans wouldn't save any space; we'd do 1, 2, NULL [=3], which leaves the potential for other values if we renumbered; but does have the other issues with respect to use of NULL (e.g. need to use COALESCE everywhere).

One way of looking at it is to say that with no information given about a +fav level, we treat it as the maximum. Three is the top of the current value system; if we expanded the ratings, we'd have to revalue rows, but the NULL ones would still rightly be at the maximum. This might also be a reasonable assumption for cases in which NULL would normally be used (e.g. importing from other databases which don't have +fav levels), since the maximum is by far the most common choice.

The inet type is, I think, not commonly known. I was not aware of its existence before I consulted the manual, and it has no equivalent in MySQL. Of course, we could store it as a four-byte integer and convert on display like they do.
9 years ago
I mean, I'm surprised by the lack of INET simply because I'd known for very long that starling used Pg :)  So I kinda expected him to take advantage of fun little things like that.

INET/CIDR's overloaded operators are beasts, I love them :)  And nothing, absolutely nothing comes even close to Pg's date/time management functionality.

The only great nuisance with Postgres is the upgrade even between minor versions — always guaranteed to not work, have to dump and reload.
8 years, 11 months ago
We've never had to dump and reload. There are always very clear instructions on major changes between versions in their changelog. To upgrade a database in place you usually just use the pg_upgrade command http://www.postgresql.org/docs/9.3/static/pgupgrade.html
8 years, 11 months ago
Which requires two running Pg instances, which is a bit awkward, especially with really large DBs...
8 years, 11 months ago
Wait, you said it does it in-place? o_O  But it needs two datadirs...
8 years, 11 months ago
By in place I mean no dump. I've never had a problem running two instances. PG is designed to do that just fine. Installing 9.1 and 9.3 don't get in each other's way and I think even set up the alternate ports automatically.
6 years, 4 months ago
It does it by making a bunch of hardlinks. Of course, if it doesn't work out, you might end up with a non-functional system, but that's what backups are for - and they do a lot of work to try to avoid the need for that.
9 years ago
Given that I work with databases but don't know much of their design, this was awesome to learn about. :3 Thank you!
9 years ago
This is true for many professional programmers; it's a speciality. Inside companies, it's often cordoned off into its own group which is shared around - but the most effective development teams I've worked on had embedded database expertise. For a custom, data-driven site like Inkbunny, it's vital.
8 years, 12 months ago
I never liked using postgres when I was in college. I much prefer SQL Server (MS). Nothing is perfect I suppose, but it's just what I'm most comfortable with. Overall, that's a lot of DB work, and I can only imagine the nerve wracking time spent while the transactions run on such large tables hoping you don't pooch it and have to reload the back up.

Given that our programming team consists of myself and my boss, we end up doing all the DB stuff ourselves. I'm always learning new tricks from my boss (and posts like this!). Keep up the great work. I hate doing that sort of stuff, I prefer to just write code.
8 years, 12 months ago
I always enjoyed databases, though I didn't get on well with our university's professor - it seemed like he'd been dumped in that course from a non-relational past and didn't know the coursework. While there I worked in the summers for Motorola; one year I planned out a database in Access, but implemented it in MySQL. Postgres has some differences, but the principles are much the same.

Of course, that was just a test database. Inkbunny has 150+ tables, some with millions of rows . . . thankfully a RAID1 array of SSDs runs transactions extremely fast. But the less data they have to work with, the better, which is why these optimizations are important.
8 years, 12 months ago
The databases I work on aren't huge; on average we probably have about 25-50 tables. Usually won't go above that. I'm actually not even entirely sure why there are 150+ tables in IB. Unless you guys are using a ton of temporary tables for stuff. I know it adds up quick sometimes, but I just can't see any reason you should need that many for a site like this. At any rate, sounds like you have your work cut out for you. x3
8 years, 12 months ago
The table count is proportionate to Inkbunny's feature count. There's maybe forty that are over 10Mb in size, not counting indexes.

The rest are for miscellaneous uses. A fair number relate to the (currently unused) sales system, including payment methods, digital downloads, prints and access to high-resolution images. There's tables relating to comments, keywords (and suggestions), pools, views, journals, friends, streams, watches, character sheets, trouble tickets, user and artist blocking, content deletion, unread material for each content type, users, user roles, contact details, countries, donations . . . you can see how they add up. Some may go away, but most are here for the long term.

I created a relation graph for all the tables, including foreign key relations, and it took up 4500x13300px. Yeah. It's slightly smaller without salescruft, but still!

We do have a bunch of temporary tables for searches (and related features), but of course they're temporary.
8 years, 12 months ago
Once I tried to design a database for an art community just for kicks and I ended up with 95 tables, no temps. There's a heckton of n-n relations, info specific to this or that type of submission, and so forth. The comment system alone can get you 10+ tables, since you have comments in different places such as submissions, user pages and journals, as well as a notification system for them in an optimized manner.
8 years, 12 months ago
It's all a matter of how you organize the data, and how you use it. For instance, the comments can all be done in one table regardless of where they attach to. All you need is a flag in the table to say what they belong to (ie: a boolean Is_Submission to say if it applies to a submission, or you can use a small int and have 0 represent submissions, 1 for shouts, etc.) However it's not always as efficient as having separate tables.
8 years, 11 months ago
I like your detailed explanation quite frankly.

Now in all seriousness, here's me continuing to wish the best of luck to you with handling this very large and growing community!

SVG support...
SVG support!

Oh, please, I can't be the only one that wants this. Also it sucks that my Firefox can't play MP3s natively with JW Player (from the music submissions), only open formats like OGG Vorbis. It only lets me download it. Ugh. Still better than forcing all music to go through another round of lossy encoding, am I right?
8 years, 10 months ago
Holy cow.. a 50% reduction... You cut the fat right off of that table!