Major changes:
- Migrated from JavaScript to TypeScript.
- Migrated to Node.js v18+.
- Removed Axios, replacing it with the built-in `fetch`.
- Emptied the trashcan 😉
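For illustration, the Axios-to-`fetch` switch looks roughly like this in Node.js v18+, where `fetch` is a global and no HTTP client dependency is needed (the `Submission` shape, URL, and helper name below are made up for the example, not the project's real API):

```typescript
// Before (Axios):
//   const { data } = await axios.get<Submission[]>(url);
// After: use the global fetch that ships with Node.js v18+.

// Illustrative response shape, not the project's actual types.
interface Submission {
  id: number;
  title: string;
}

async function fetchSubmissions(url: string): Promise<Submission[]> {
  const res = await fetch(url);
  // Unlike Axios, fetch does not throw on HTTP error statuses,
  // so we must check res.ok ourselves.
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return (await res.json()) as Submission[];
}
```

One behavioural difference worth noting: Axios rejects on 4xx/5xx responses automatically, whereas `fetch` only rejects on network failure, hence the explicit `res.ok` check.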
This requires a local MongoDB server to store metadata. Files are stored under `res/ib/{artist name}`, grouped by artist.
No; there is, however, a limit to how many submissions we will display as part of a search, and a gallery or favourite listing is treated as a search of your favourites. Temporary results are stored in the database, so there is a cost to calculating and storing such views, which impacts performance. Of course, you can make a more specific search of your +favs, and that shorter list will be displayed in full.
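The behaviour described, a hard cap applied before the temporary result set is materialised, could be sketched like this (the cap value and option shape are purely illustrative; the real limit isn't stated here):

```typescript
interface SearchOptions {
  query: string; // e.g. a +favs search with extra keywords
  limit?: number;
}

// Hypothetical cap on any one search, including gallery/favourite views.
const SEARCH_RESULT_CAP = 5000;

// Clamp the requested size so the stored temporary result set is bounded;
// this is a sketch of the idea, not the site's actual code.
function effectiveLimit(opts: SearchOptions): number {
  return Math.min(opts.limit ?? SEARCH_RESULT_CAP, SEARCH_RESULT_CAP);
}
```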
Frankly, I'm not sure that anyone truly has 18,000 favourite works, but it's hard to disprove.
Interesting; I'm just looking at MongoDB myself (mostly as a route to using Azure Cosmos DB for MongoDB to handle autosuggestions for keywords and members, fed from our PostgreSQL database replica via mongo_fdw, since they can be rather slow for users located far from Europe).
It is possible; keywords might be ~2.5 MB if limited to those we allow for autosuggest (i.e. 20+ uses), but username data, including watch counts and userpic paths, is closer to 30 MB, which is not a cheap download, and for relatively few users of the feature. Autosuggest requests fetch what is needed ~12 KB at a time; they just have high latency and a relatively small CPU cost. We could offload them to our replica like +fav suggestions, but that would not solve the latency issue.
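The "20+ uses" cut-off for autosuggest candidates can be sketched as a simple filter (the data shape, ranking, and threshold handling are guesses for illustration, not the site's actual code):

```typescript
interface Keyword {
  name: string;
  uses: number;
}

// Only keywords with 20+ uses qualify for autosuggest, per the post above.
const MIN_USES = 20;

// Filter and rank candidates for a prefix, most-used first; the ranking
// and shape are illustrative assumptions, not the real implementation.
function autosuggestCandidates(
  keywords: Keyword[],
  prefix: string,
  limit = 10
): string[] {
  const p = prefix.toLowerCase();
  return keywords
    .filter((k) => k.uses >= MIN_USES && k.name.toLowerCase().startsWith(p))
    .sort((a, b) => b.uses - a.uses)
    .slice(0, limit)
    .map((k) => k.name);
}
```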
I don't really want to install stuff on all our caches, though distributing expiring text files that way would be an option, I guess (but a file might not be in cache, while Cosmos DB or similar always has it all). I was also looking at ElephantSQL, but it has its own issues.
Possibly. We do use R2 for backgrounds in most regions (as well as Azure web apps and IBM, Backblaze and Oracle Cloud object storage). But at a glance, linking D1 to PostgreSQL seems like it'd require Hyperdrive and therefore Workers Paid. I'd like to avoid additional costs as far as possible. Perhaps there is a push mechanism like mongo_fdw, but D1 seems quite new and so such integrations are less likely.
Meanwhile, I now have three free 32 GB MongoDB vCore instances on Cosmos DB to play with, in Virginia (USA), the Netherlands and Singapore, although these appear to be standalone. I still hope to get linked RU-based MongoDB in Texas, Brazil, Japan and Australia as well. (I have a support case open with Azure, as my attempt to create RU with availability zones [zone redundancy] in the Netherlands failed on resources.)
I should also note that I don't particularly trust any of these cloud companies, and so while it's good to get experience with their products, especially if we can gain an advantage from it, I don't plan to give them anything mission-critical.