alistair

Treatise on Systemic Vulnerabilities in Inkbunny Moderation Methodology

Note: This document is a very slightly edited version of one I have already provided to Inkbunny's admins and moderators; I have removed names and direct links to public comments that are referenced here, as it is not my intention to single out anyone.

I am providing this work publicly because I believe the case I make is general enough to apply to any communication forum where users are expected to report other users to the moderation team as a first action.  The problems I explain are not unique to Inkbunny.

The IB mods have been very gracious and open to hearing my case, and out of considerable respect to them I ask that any comments you may have regarding what is laid out here be made as a response to this post.  Please do not address the IB admins and mods directly; they have seen and are already aware of this treatise, the intent of which is to be helpful, not dictatorial.

If you're only interested in the TL;DR, I suggest looking at (01) OVERVIEW, (05) PART THREE - Some Conclusions, and (07) CONCLUDING REMARKS.

Thank you.  -Ali


----
Treatise on Systemic Vulnerabilities in Inkbunny Moderation Methodology

Table of Contents
(00) PREFACE
(00.1) Site Description
(00.2) Site Ethos - The Inkbunny Philosophy
(00.3) Philosophy Analysis
(00.4) Summary of the Inkbunny Site Ethos

(01) OVERVIEW
(02) LOGICAL PRIORS

(03) PART ONE - Logical Progression of a Hypothetical
(03.1) Submission Description
(03.2) Delineating Member Responses to the Post
(03.3) Post Reporting and Moderator Action
(03.4) THE FIRST PROBLEM
(03.5) THE SECOND PROBLEM
(03.6) THE THIRD PROBLEM

(04) PART TWO - Exacerbating Issues
(04.1) Issue One - Language Barrier
(04.2) Issue Two - Hidden Rules
(04.3) Issue Three - Member Dis-empowerment as Moderator Policy

(05) PART THREE - Some Conclusions
(05.1) The Unintended Consequences
(05.2) Analysis of Member Tools vis-a-vis the Inkbunny Philosophy

(06) PART FOUR - Exploring Possible Solutions
(06.1) Stronger Emphasis on Existing Tools
(06.2) Enforced Blocking and Member Banning
(06.3) Journal Tagging
(06.4) Dedicated Member Reporting Function
(06.5) Clarification of Moderator Ethos
(06.6) Adjustment of Inkbunny TOS

(07) CONCLUDING REMARKS

--==--

(00) PREFACE

(00.1) Site Description

Inkbunny is a semi-social site dedicated to the sharing of anthropomorphic ("furry") art.  It is utilized by a population of Members who interact and communicate with one another in a digital community of overlapping Artists and Watchers.

Interactions between Members consist of:
-Posting of Submissions, discrete instances of sets of one or more works of art; Submissions must have an assigned Rating and Tags, short text strings categorizing the content of the Submission
-Posting of Journals, discrete instances of some textual message in a "blog" format
-Comments, medium-length textual messages added to Submissions and Journals, and as responses to other Comments in a threaded format
-Private Messages ("PM"), textual messages sent directly, and privately, between two Members, and as responses to other PMs in a threaded format
-Shouts, shorter Comment-like messages added to an Artist's page; unlike Comments, Shouts cannot have Comments added to them
-Posting of Stream Notices, temporary notifications to Watchers about live-streams hosted on an external site


For the purposes of this document, I will be referring only to the first three: Submissions, Journals, and Comments.

Members are also provided several functions they are free to use to filter site interactions.  These functions are:

-Selective Watch Options, which allow a Member to limit how much of another Member's content will appear to them; a Member may choose to hide any or all of another Member's Submissions, Journals, or Stream Notices
-Ratings and Keyword Blocking, which filters out Submissions meeting certain Ratings and/or those with matching Tags from being visible to a Member
-Artist Blocking, which filters out Submissions from specified Members, but does not hide Comments or PMs from those Members; it is unspecified whether this also includes Journals
-Member Banning, which completely hides all interactions from specified Members


These filtering functions ensure that no Member is forced to see any content they do not choose to see.
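
To make the combined effect concrete, here is a minimal sketch of how such a visibility check might behave.  It is purely illustrative: the names and structures are my own invention and do not reflect Inkbunny's actual implementation.

from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    banned_members: set = field(default_factory=set)    # Member Banning
    blocked_artists: set = field(default_factory=set)   # Artist Blocking
    blocked_ratings: set = field(default_factory=set)   # Ratings Blocking
    blocked_keywords: set = field(default_factory=set)  # Keyword Blocking

@dataclass
class Submission:
    artist: str
    rating: str
    tags: set

def visible_to(viewer: Member, sub: Submission) -> bool:
    # A viewer who sets none of these filters is implicitly choosing
    # to see the content (see Axiom II.A in (02) below).
    if sub.artist in viewer.banned_members:
        return False  # Member Banning hides all interactions
    if sub.artist in viewer.blocked_artists:
        return False  # Artist Blocking hides that Member's Submissions
    if sub.rating in viewer.blocked_ratings:
        return False  # Ratings Blocking
    if sub.tags & viewer.blocked_keywords:
        return False  # Keyword Blocking
    return True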

Members are required to abide by Inkbunny's Terms of Service ("TOS") when utilizing Inkbunny's features.  A group of Administrators and Moderators ("Moderators"), Members with additional site functionality that allows them to curate all site interactions, serves to enforce the TOS in the event some Member violates it.

In the event Members wish to contact Moderators to report an issue regarding Inkbunny, they submit a Support Ticket, which consists of a Title and Message describing the issue.  Members may report other Members for TOS violations to Moderators using this function.

(00.2) Site Ethos - The Inkbunny Philosophy

As available on the Inkbunny wiki here, the Inkbunny Philosophy outlines the ethos under which the site ostensibly operates.  It explains, in broad terms, the site's intended role and operation.

While much of the Philosophy details the rationale regarding the sale of artwork, a feature that was never fully realized and is now no longer relevant to the site, the remaining elements of the Philosophy do provide some further illumination.

(00.3) Philosophy Analysis

The Philosophy begins with the section What We're All About, and that section opens with the emphasized statement:
" Inkbunny is a furry art community


It then follows this in the next paragraph:
" We strive to create a community that is open, vibrant and growing. We consider freedom of artistic expression as a top priority.


Per this, it is reasonable to conclude that the site's intended purpose is to support an open, vibrant and growing community where freedom of artistic expression is a first-class ethical position.


In a further section, Support What You Believe In, the Philosophy states:
" If you actively support your favorite artists then they will make more of the stuff you love. Whether your support is simply words of thanks, a donation, or buying their work, it encourages them to do better and focus more time on their art. It makes their hard work more rewarding.


The section ends by saying:
" The attitude of the whole Inkbunny community towards actively supporting artists, and a willingness to give, will be key to its success.


This tells us that Inkbunny is not purely for the Artists; it is meant to facilitate a dialog between all Members, both Artists and Watchers, who are equally important for the success of the site.

The final four sections of the Philosophy relate to the site's ethos regarding Member interactions.

Acceptance
" No one has the right to harass anyone for their tastes or the content of artwork they post on Inkbunny. Inkbunny encourages a community where people of all different interests can co-exist. The community attitude is one of acceptance of the widest possible range of views and ideas, as long as they do not encourage hate and intolerance.


This section is an admonition against active efforts to curtail Members' expression, with the specific goal of enabling the widest possible range of views and ideas.  This interpretation is in line with the section What We're All About, explored above, regarding an open, vibrant, and growing community.

Deal With It
This section begins with the following emphasized statement:
" It is not everyone else's responsibility to prevent you from seeing what you don't want to see.


After explaining avenues for handling mis-rated or mis-tagged Submissions, the section says:
" Where there's ambiguity, the artist has the final say as to what keywords apply. You do not have the right to harass them. Moderators may occasionally insist on certain keywords being applied, but only under exceptional circumstances.


The exceptional circumstances phrase is a hyperlink pointing to Inkbunny's mandatory tag requirements.

This section further cements the site's ethos as one where Members are considered first-class elements of the site, empowered to make their own decisions regarding which interactions they take part in; the narrowest possible interpretation of exceptional circumstances by Moderators is thus implied.

Respect

" We enable members to deal with trouble-makers directly. Members can delete comments on their own account and submissions, or ban offenders from their account, all without any intervention from moderators.


This section also strengthens the ethos that Members be given first-class status, making an explicit statement about Moderator non-intervention.

Have Fun
" Most of all; have fun and don't take anything too seriously! We're all here to enjoy art. It's that simple. If it stops being fun then it's time to turn off the computer and do something else worthwhile.


This implies that Members have the ultimate say in how they choose to interact with other Members, by choosing not to interact at all.


(00.4) Summary of the Inkbunny Site Ethos

Inkbunny is a community, intended to be open, vibrant, and growing, where Members are first-class users empowered and able to choose what they do and do not wish to engage with on the site, and to do so with a minimum of Moderator intervention.


(01) OVERVIEW

Inkbunny's Moderation methodology, being based heavily on Member-initiated reporting of other Members perceived to be in violation of the TOS, has several vulnerabilities that result in both a power imbalance between Members and the punishing of the most tolerant and least "noisy" Members.

These imbalances are the logical and inevitable result of Inkbunny's Member reporting method and specific rules in the TOS which rely on subjective assessment of Submissions, Journals, and Comments.

As I explain the mechanism of these vulnerabilities, I will not be assuming any one position is held by Moderators or Members.  These problems exist regardless of the actual subjective opinions of Moderators and Members.

Because these flaws exist despite the values of Moderators and Members, it is possible that, should the Moderation team change significantly, Members would have no protection against a changing interpretation of the subjective rules.


(02) LOGICAL PRIORS

Each of the Problems I will explore below is based on two axioms and their corollaries.

I.) Axiom: There are more Member interactions to moderate than Moderators are capable of managing directly.

I.A.) Corollary: Moderators are therefore limited to managing only the small subset of interactions they can manually curate plus a second small subset of interactions that Members bring to their attention via Support Tickets or, informally, PMs.

I.B.) Corollary: This necessarily also means that there is a majority set of interactions that go completely unnoticed, due to Moderators not seeing them and/or them not being reported.  As has been communicated to me, "We are not everywhere simultaneously."

To support Axiom I and its corollaries, we can look at a Site Administrator's words:

" We *don't* read each and every posted journal, or every posted submission. If people doesn't let us know, we may miss it.



II.) Axiom: All interactions on Inkbunny represent consenting, symmetrical communication between posting Members and all other Members, who choose whether or not to see what is posted, either via Watching or by utilizing Inkbunny's filtering features to hide undesired content.

II.A.) Corollary: A Member who does not use Inkbunny's filtering features is implicitly choosing to see other Members' content.

II.B.) Corollary: Because Inkbunny is a site for sharing content, if a Member is prevented from seeing a Posting Member's content by any means other than either Member's deliberate choice, both Members are being denied service.  The Watching Member is implicitly being disallowed from seeing content they otherwise made the choice to see.


The above are logical truths, and in listing them I am merely pointing out that they are true, not condemning them.


(03) PART ONE - Logical Progression of a Hypothetical

Let me propose a little thought-experiment extrapolating from these base truths.

Just to keep it simple, let's assume a hypothetical Member population of 100 Members, all engaging with each other and using all of Inkbunny's provided features, and one (1) Moderator.  The reasoning presented still applies to Inkbunny's real Member and Moderator population.

(03.1) Submission Description

To begin, one Member (hereafter "Posting Member") posts something, a Journal, Submission or Comment on same (hereafter "the post"), to their own account.

Let's assume the post does not break any objective rules, like the rules regarding copyright, legality, malicious code, or attempts to defraud.  To be clear, we're focusing on the rule "your content does not contain material that defames or vilifies any person or group of people and is not harassing, threatening, harmful, invasive of privacy or publicity rights, abusive or inflammatory."

Additionally, assume the post does not constitute a threat, invasion of privacy or publicity rights, or defamation (i.e. libel).

Because the post is made on the Posting Member's own account, and no Member is forced to Watch another, and any Member can mute any other Member with the Block and Ban features, harassment cannot reasonably apply, either. (However, see Note A)

The remaining rules the post may or may not be in violation of are vilification, or being harmful, abusive or inflammatory.

Finally, given Axiom I and its corollaries above, let us assume the post is not one of the subset noticed by Moderators during manual curation.  For the post to be subject to Moderator intervention in the way described below, it must be reported by a Member.

(03.2) Delineating Member Responses to the Post

Of the 100 Members, we can divide reactions to the post into four types:

Type 1. Members who believe the post is objectionable, and report it.
Type 2. Members who believe the post is objectionable, but not enough to justify reporting it. (Perhaps they simply use the various filtering features, or merely ignore it.)
Type 3. Members who don't believe the post is objectionable.
Type 4. Members who do not see the post.

The allocation of Members to each of these sets can be anything, but let's assume there is at least one Member in each set.

In other words, the assessment of the post is subjective and there is at least one Member on each side of the subjective judgment.

(03.3) Post Reporting and Moderator Action

Next, at least one Type 1 Member reports the post via the Support Ticket function, thereby bringing Moderator attention to it.  Two things can then potentially happen.

Mod Action 1: If the Moderator does not agree with the Type 1 Member, then no corrective action is taken.
Mod Action 2: If the Moderator agrees with the Type 1 Member, then corrective action is taken.  The corrective action involves modification or removal of the post and/or a warning or full banning given to the Posting Member.


(03.4) THE FIRST PROBLEM

At first glance, it appears as though the sequence of events leading to Mod Action 2 affects only the Type 1 Members and the Posting Member.

However, logically per Axiom II above, Members of any other Type are also affected.  If the action results in the post being modified or removed, the set of Type 2, 3 and 4 Members have the subjective values of the Type 1 Members coerced upon them.  Type 4 Members, those who do not see the post, are forever prevented from seeing it in the first place, and are thus coerced through omission.

To put this another way, when a post is modified or removed, it is not only that the Posting Member is being censured; any Member who might want to see it is being punished by not being allowed to see it.

The Problem: When a Type 1 Member is the initiating agent of moderation, the Moderator becomes a tool by which the Type 1 Member coerces all other Members into accepting the Type 1 Member's subjective values.  The fact that the Moderator agrees with the Type 1 Member is irrelevant, as the Moderator might never have seen the post in the first place had a Type 1 Member not reported it.

Stated more plainly, the Moderator has become a coercive tool, utilized exclusively by Type 1 Members, necessarily granting Type 1 Members a power greater than any other Member has.


(03.5) THE SECOND PROBLEM

Let's divide the 100 Members a different way, again assuming at least one Member in each set:

Type 1a. Members who report posts as a first resort.
Type 2a. Members who report posts as a last resort.
Type 3a. Members who do not report posts.

We might think of these types as degrees of increasing tolerance.

Given the First Problem, it follows here that Type 1a Members, the least tolerant set, also gain a substantial power imbalance over all others.

When we overlap the two sets of Type 1 and 1a Members, we can logically see that the least tolerant, most "noisy" Members gain the largest ability to wield the Moderator as a tool against other Members.

The Problem: Any set of Members consisting of Type 1 and Type 1a Members, those that are most likely to submit a report and do so as a first resort, are most able to use the Moderator as a tool by which they can coerce other Members into accepting that set's subjective values.  Again, due to the Moderator's attention being limited such that the Moderator might never have seen the post anyway, whether the Moderator agrees with the reporting Members is irrelevant.  The power imbalance exists naturally.


(03.6) THE THIRD PROBLEM

Reporting of Members on Inkbunny is performed formally through the Support Ticket function, which is a generic reporting tool, or informally via Private Message ("PM").
Every Support Ticket or PM requires some non-negligible amount of time and energy for the Moderator to manage.  The Moderator does not have infinite resources, and so there is some limit to the number of reports the Moderator can manage before becoming overloaded, even in a case where the Moderator does not agree with a single one of the reports.

Thus, even in cases where the Moderator would prefer to take Mod Action 1 because the Moderator does not agree with the Type 1 Members, if the volume of Reports exceeds the Moderator's patience or capacity to manage, and the Moderator has reason to anticipate those costs continuing, the Moderator will instead preferentially punish the Posting Member, as doing so costs less than continuing to manage the Report volume.

This provides a mechanism by which Type 1 and Type 1a Members are able to coerce Moderator action.

As explained by a Site Administrator in a public comment:

" We respect your opinion, I know I do, and even support it most of the time, but [...] it's not a popular opinion. When you voice out an unpopular opinion you can expect backslash. [...] When a shitton of Members complain about your behaviour, then it's not about your ideas anymore


and also:

" We're going after the war, the mess. If the messages were the opposite and they created the same ruckus we'd do the same about it.


and additionally:

" If one complains, then we tell them to move on. If 20 people complain [...] that's when we intervene.


This suggests that Inkbunny Moderator policy is, in fact, to treat report volume as at least one factor in deciding what action to take, even when the Moderator does not agree with the individual reports.

The Problem: Due to the costs of managing reports, Type 1 and Type 1a Members have the collective ability to coerce the Moderator into taking a certain action and, thus, gain a power imbalance over all others.  The simple fact that there is some critical mass of reports is used against the Posting Member and all other Member Types, who are at a disadvantage as these Member Types are less likely to contribute counter-complaints against the Type 1 and Type 1a Members.
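
The mechanism can be made concrete with a deliberately naive sketch.  Everything here is invented for illustration (the threshold value especially) and is not drawn from Inkbunny's actual process; the vulnerability lives in the second branch, where action is taken on volume alone.

OVERLOAD_THRESHOLD = 20  # invented; cf. "If 20 people complain ... we intervene"

def triage(reports: list, moderator_agrees: bool) -> str:
    if moderator_agrees:
        # Judgment-driven censure: Mod Action 2 as described in (03.3).
        return "Mod Action 2: censure the Posting Member"
    if len(reports) >= OVERLOAD_THRESHOLD:
        # Volume-driven censure: dismissing a steady stream of tickets
        # costs more than acting, even though no single report had merit.
        return "Mod Action 2: censure the Posting Member"
    return "Mod Action 1: no corrective action"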


Note A) A sufficiently harassing Member will, assuming a Member population utilizing all of the available site tools, naturally become isolated and unable to directly interact with other Members.  This can happen entirely without Moderator intervention, and will implicitly reflect the will of the collective Member population regarding the disruptive Member.  Given these tools, a case should have to meet a very high standard before being considered harassment and becoming subject to Moderator engagement.  For example, the same person repeatedly creating multiple accounts to bypass the Block feature and disrupt other Members' experience would certainly meet such a standard.


(04) PART TWO - Exacerbating Issues

The above issues are value-independent problems that are true regardless of the specific subjective judgments of the Moderator or reporting Members.

Each of these problems is made more damaging when other factors are considered.

(04.1) Issue One - Language Barrier

All language use inherently carries cultural nuance specific to the native users of a language.  Such nuances may exist even within the same language, if there is enough cultural divide between speakers.  Thus, fair judgment of any given expression rests upon a broad understanding of the nuances in communication.

When the Moderator attempts to apply subjective judgment regarding communication in a language that is not the Moderator's native language, the probability of rendering an incorrect judgment and taking unnecessary Moderator action increases.

Again quoting a Site Administrator from a public comment:

" English is not my first language, so sometimes I don't express myself correctly


If a Moderator has difficulty correctly expressing himself when interacting in a given language, it follows that the same Moderator may have difficulty correctly evaluating expressions in that language.

In other words, such a Moderator may incorrectly perceive the post as breaking the TOS, and thus incorrectly side with the Type 1 and Type 1a Members.

This means that any Moderator who is a non-native speaker of the Posting Member's language may be additionally vulnerable to co-opting by Type 1 and Type 1a Members.  The potential language barrier can be exploited to forcefully coerce all other Member Types into accepting the Type 1 and Type 1a Members' values in cases where the Moderator Action is Member-initiated.

To reiterate the point, this means that any set of Members who do not find the post objectionable are punished by potentially having their implicit choice to see the post denied.

(04.2) Issue Two - Hidden Rules

When enforcement of any rule requires subjective assessment by the Moderator, there are necessarily opaque rules to which all Members are subject.  That is, the criteria by which a Moderator makes a judgment cannot be known by Members until, at best, Moderator action is taken.

When the Moderator's interpretations of purely-subjective rules are not clearly stated, there is an inherent chilling effect on all interactions between Members, who are therefore entirely unable to know what will or will not bring Moderator attention and censure.

The end result is a net loss in Member interactions, as Members may choose not to interact at all rather than risk offending a louder and less tolerant Member and thus becoming the recipient of seemingly-arbitrary Moderator action.  Per the Problems enumerated above, Type 1 and Type 1a Members therefore gain the ability to silence or suppress any other Members who hold values different from their own through the deliberate directing of Moderator attention.

(04.3) Issue Three - Member Dis-empowerment as Moderator Policy

At present, Inkbunny Moderator policy is to request that all Members seek Moderator intervention rather than engage in interactions that might result in some form of negative conflict.  When one Member posts something objectionable (an insulting Comment, let's say) on another Member's page, Moderators prefer the host Member not interact at all.

Per a comment by a Site Moderator:

" If you're receiving any kind of abuse the Terms of Service dictate your course of action. Report the abuse. If you choose not to do that, and engage in behavior in violation of the Terms of Service, then what the heck are we supposed to do?


This reveals a blind spot in Moderator policy.  The first course of action should always be to utilize the Member Blocking and Banning tools.  Since Moderator policy is that Members not be empowered to make their own choices about what content they will allow themselves to see, this is an implicit demand that all Members carry the Moderator's moral values rather than their own, or else refrain from interaction.

Site Moderator policy is to explicitly dis-empower Members, in direct contradiction of the Inkbunny Philosophy.


(05) PART THREE - Some Conclusions

-Inkbunny is operated by the Administrators and Moderators; it is their site, and they operate the site at their pleasure.  Moderator action in itself is not a problem.

-While Inkbunny is operated by the Administrators and Moderators, it exists for its Members, both Artists and Watchers, without whom the site serves no purpose.

(05.1) The Unintended Consequences

When one Member is censured over purely subjective rules violations, implicitly all other Members who choose to see the censured Member's content are denied service by the site.  This denial of service is not, in itself, a problem; censure of objective rules violations results in the same implicit denial of service, and is therefore an inherent property of Moderator action.

When Moderator attention is inherently limited by sheer Member interaction volume, however, Member reporting via Support Ticket or PM presents an inherent vulnerability to co-option by those who are the least tolerant of other Members.  The simple act of directing Moderator attention towards another Member becomes a de facto directing of Moderator enforcement by the reporting Member(s).

When the rules violations being reported require subjective interpretation, Members who coerce Moderator action in these ways are de facto utilizing the Moderators to coerce all other Members into accepting the reporting Members' personal values.

Additionally, even when Moderators do not agree with a reporting Member's subjective assessments, a large enough set of such Members can still coerce Moderator action through increasing costs of Support Ticket and PM management, making the censure of the target Member less time-and-energy expensive than continuing to dismiss regular reports.

Further, the co-opting of Moderator action by the least tolerant set of Members results in the suppression of other Members who may choose not to interact at all rather than risk offending any less-tolerant Member over subjective assessment and interpretation of rules and content.

In short, the site features and Terms of Service wording provide a mechanism by which the least tolerant Members may enforce their values upon all other Members and suppress their interactions.  The values held by the Moderators have no bearing on the reality of this vulnerability.

(05.2) Analysis of Member Tools vis-a-vis the Inkbunny Philosophy

Inkbunny presents an interesting imbalance in the tools it provides to its Members.  On the one hand, there are numerous, powerful tools that empower Members to customize their Inkbunny experience and interactions to an exacting degree; on the other, Members' ability to report other Members to the site Moderators is severely lacking.

When we take this observation and measure it against the Inkbunny Philosophy, we can clearly see that the existence of strong user tools and the lack of a dedicated member reporting feature strongly suggest the site was designed around an assumption of a bare minimum of direct Moderator action.

The degree to which Inkbunny's Member tools provide for a customized experience suggests that any undesired interaction that can be solved with the usage of those tools should be solved by those tools, not Moderator intervention.

For example, for an interaction that may subjectively be categorized as "inflammatory," if the "inflammatory" nature of the interaction can be nullified solely through Member Blocking and Banning, then that interaction should not be subject to Moderator intervention, since each Member is making the implicit choice to see that content by not utilizing the available tools.

The Inkbunny Philosophy indicates that Moderator intervention in inter-Member disputes should be preferentially avoided outside of site-exploitative meta-abuses such as ban avoidance, brigade-incitement and other such behaviors.


(06) PART FOUR - Exploring Possible Solutions

In composing this treatise, I have considered multiple solutions to the problems presented.  The solutions I will explore below are not intended as demands or even suggestions per se, but rather brainstorming concepts to illustrate that the problems can be solved.  This list is not meant to be exhaustive.

(06.1) Stronger Emphasis on Existing Tools

Inkbunny already provides powerful tools for Members to ensure their Inkbunny experience is the most positive for themselves.  Members who resort to Moderator engagement as a first resort are not attempting to adjust their own experience; they are attempting to adjust everybody else's.

Before Moderator intervention is enacted, Members could be even more strongly pressed to utilize the existing tools instead of attempting to wield the Moderators as a weapon.


(06.2) Enforced Blocking and Member Banning

In the event of Members' interactions becoming heated, should Moderators make the determination that action is needed, a preferred action could be to enforce mutual blocks or bans between the involved Members rather than demanding the edit or removal of a given post or issuing a full site ban.

This would nullify the point of contention while still respecting all other Members' choice to see the content.  This would result in the least harm to the community as a whole, and conforms with the Inkbunny Philosophy.

To my knowledge of the site, this would require no additional work on the Moderators' part beyond an adjustment of personal Moderator policy.  It is possible this is already performed by Moderators, but I am personally not aware of it.


(06.3) Journal Tagging

Journals on Inkbunny are treated as distinct from Submissions, and do not utilize the tagging system.

Including Journals in IB's powerful tagging and filtering systems could significantly help mitigate negative Member interactions by enabling Members to avoid seeing potentially upsetting Journals.

For example, a politically-engaged furry may wish to make commentary on current events within that Member's country.  A simple "politics" tag would allow all other Members who do not wish to see such content on Inkbunny to trivially avoid it.
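
Mechanically, this would amount to running Journals through the same keyword filter that already applies to Submissions.  A minimal sketch, with invented names and data:

def visible_journals(blocked_keywords: set, journals: list) -> list:
    # Keep only journals whose tags do not intersect the viewer's block list.
    return [j for j in journals if not (j["tags"] & blocked_keywords)]

journals = [
    {"title": "Commission queue update", "tags": {"art", "commissions"}},
    {"title": "My thoughts on the election", "tags": {"politics"}},
]

# A Member who blocks the "politics" tag trivially never sees the second journal.
print(visible_journals({"politics"}, journals))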


(06.4) Dedicated Member Reporting Function

The current methods for Members to report other Members are primitive and create unnecessary friction and overhead for Moderators, leading to the Third Problem explained above.

A dedicated reporting tool, which could include check-boxes for the rules alleged to have been violated, a field identifying the Member being reported, and the like, could make management of such reports much easier on Moderators.

When a large number of reports are submitted regarding a specific Member, the stored report data could be used to collate all such reports for easy analysis instead of having to manually deal with a flood of Support Tickets and PMs.
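
As a sketch of the kind of structured data such a form could capture, and of the collation it would enable (every field name below is hypothetical, an illustration rather than a proposed schema):

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class MemberReport:
    reporter: str
    reported_member: str        # a dedicated field, not free text in a ticket
    rules_alleged: frozenset    # from check-boxes, e.g. frozenset({"inflammatory"})
    post_url: str
    comment: str = ""

def collate_by_member(reports: list) -> dict:
    # Turn a flood of individual reports into one summary view per Member.
    grouped = defaultdict(list)
    for report in reports:
        grouped[report.reported_member].append(report)
    return grouped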

While this would require additional development hours to implement, the potential gains in Moderator efficiency may justify it.


(06.5) Clarification of Moderator Ethos

As explained in the above treatise, when Members become unable to anticipate the internal, subjective rules by which Moderators interpret the Terms of Service, there is a chilling effect that results in fewer Members interacting.  This runs counter to the Inkbunny Philosophy.

A strong clarification of the Moderators' values and ethos, such that any given Member can know what will or will not be the potential subject of censure, could mitigate this chilling effect.

Such clarification could additionally aid in the improvement of the TOS itself, as clearer interpretations of the broad rules could be pushed upstream.


(06.6) Adjustment of Inkbunny TOS

This is a solution of last resort, and one I am personally hesitant to suggest.  I include it for completeness.

Inkbunny Terms of Service rules that are expressed in purely subjective terms could be modified to detail the exact interpretation of those terms.

Alternately, the TOS could be amended to establish a high bar that must be reached before resulting in Moderator action.

As explored in Note A above, the term "harassment," for example, is a subjective one, and given Inkbunny's available Member tools (which allow a Member to completely avoid undesired interactions), it would only seem to apply to harassment that somehow evades those tools themselves.  The tone of an interaction by itself should not be justification for Moderator action.


(07) CONCLUDING REMARKS

The current design of Inkbunny is one which empowers all Members to make their own choices regarding the content they interact with.  When one Member's content is modified or removed, or their account fully banned, any other Member who made the choice to see that content is also punished by having their choice voided without opportunity for redress.

Site Moderator policy, coupled with the site's Member-focused design, has an unintended consequence: the noisiest, least tolerant Members become able to wield Moderator attention as a potential cudgel against all other users, thereby suppressing and silencing some cohort of users in explicit violation of Inkbunny's Philosophy.

There is currently no policy to mitigate this vulnerability; we are told to simply accept Moderators' claims of just evaluation.  However, Moderators themselves admit they can be coerced into action by sheer complaint volume, so these claims fall short of observable reality.

Additionally, given the powerful Member tools available to customize user experience, Inkbunny Moderator concerns about the site becoming "furry 4chan," should Moderators choose not to enforce their subjective morals on the Member population, fall flat.  The tools explicitly provide for Members' own values regarding what they choose to interact with.

If Inkbunny truly exists to embody the expressed Philosophy, which is that it be an open, vibrant, and growing community, any vulnerability that enables Members to act in direct opposition to that Philosophy by shutting down voices, minimizing vibrancy and shrinking the Member base, should be closely examined.

Thank you for your time.
-Alistair
Viewed: 1,788 times
Added: 3 years, 7 months ago
 
Relica
3 years, 7 months ago
Wow, someone work on this thesis...college much, not bad.
Soulfire
3 years, 7 months ago
wow very detailed and Bravo as well. A few thoughts. What the moderators fail to realize is that a vocal minority has the world in its grip. The other groups you enumerated far outweigh the single group.

If given a chance id complain about them as i am sure many others would. I would also assume that it would not happen cause the moderators would see it as some sort of friction. The friction is already there and they have firmly decided who they want to support. The world is marching blindly towards an authoritarian regime. The sad thing is these people do not understand that it NEVER works read history... At least read the pre 90s history instead of all this revisionist BS they teach now a days.

Standing up for freedom of expression and the rest takes balls. Balls which have apparently dropped off the staff of ink bunny. How about you just ban anyone that complains? That would make the same sense as what you're doing now and it would result in a quieter and more peaceful site.

Those of us committed to allowing others to be as they want are seriously under fire  from those that only choose to attack things they don't like. I expect anyone who is serious about allowing freedoms to come down firmly on the side of not allowing a minority of people to decide for all what is what.

The fact that the site allows such heinous behavior and openly supports such childish tantrums of certain groups is an accurate reflection of the lack of morals of the staff .

I am 50 years old a Gen Xer. i speak my mind, i swear , i fight, i argue. Competition and conflict resolution skills have degraded for the younger generations ill grant you. But ill be damned if ill allow anyone to speak for me or moderate a problem as if i'm some sort of baby or kid. I am a college grad able to talk for hours on philosophy and psychology and physics and all manner of enlightened pursuits. The very idea that i would turn to some witless person to help me deal with an issue is laughable to the extreme. If you suggested it in my presence we'd be having a fight. As i have pointed out before, the right to freedom of expression trumps any wrong ideas about ones so called right not to be offended. Offense is nothing more than intellectual horseshit.  It is truly subjective. There is no way to codify objectively what will offend a group. Each individual decides for themselves. And in supporting the wrong headed notion of someone being offended you stray from the only real objective line, free expression trumps all.

I have been on this site for going on 10 years now, and a furry going on 20.  I am annoyed by the snowflakey behavior and people who support the whole horseshit culture of victimhood. Grow a pair.
 
Anyone remember sticks and stones?? What happened to words will never hurt me you pansies??

To conclude two famous notions:
I may disagree with what you say but ill defend to the death your right to say it
and  
The tree of Liberty, must at times, be refreshed with the blood of patriots and tyrants.
kiwakiwa
3 years, 7 months ago
sticks and stones may break my bones, but words will give crippling anxiety for decades to come...
alistair
3 years, 7 months ago
I suffered quite heavy verbal taunting and abuse by other kids as a young lad, which was particularly exacerbated by having lost my father at a very young age; my emotional control was not stellar, and the abuse really did a number on me.  When, in the late 90s, the general consensus began to shift towards treating such bullying as a serious thing, I was sympathetic.

I think the wrong lesson was learned, though.  What I understand now is that it was not the other kids, who were simply being kids, who were the problem; it was every single adult who failed to explain how to handle it and let it continue to happen.  It was weakness ensured by never teaching the skills to stand up to such things.

That weakness-indoctrination is now not merely accident, but actual policy.  I think it's disgusting.
Soulfire
3 years, 7 months ago
Yes. I am not saying true abuse should happen or be allowed. However, the same could be said for allowing people to be the victim all the time. There are some good neurological studies about the brain and its ability to recover from trauma if left to its own devices. Instead the concept of being the victim is constantly reinforced and people are not allowed to recover, gain strength, and move on.

The same goes for conflict resolution skills: a creeping cancer has been growing in society to never let kids see parents fight. The kids know when something is wrong, and by denying them the opportunity to view it and learn they lose the ability to develop those resolution skills. There are some very interesting studies on this.

That treaty was very well written btw :P *Presades in honor of your skill*
kiwakiwa
3 years, 7 months ago
wow... totally expect this to have 0 effect on... anything x3
Soulfire
3 years, 7 months ago
yup gotta fight the good fight tho
kiwakiwa
3 years, 7 months ago
oh totally... and it is marvelously written... not complaining... just.. knowing the admins...  ehhhhhhhhhhhh...
Soulfire
3 years, 7 months ago
agreed! however some may be inspired to stand up and have the debate. Some may read and become impassioned.  One can only hope. The silent majority is just that.
kiwakiwa
3 years, 7 months ago
oh.. just to .. clarify.. i do agree with what is written in the above textwall x3
WolfHog96
3 years, 7 months ago
I wholeheartedly agree with what you have said here, just hope that the IB staff take some notes from it and can thus improve how they operate.
GreenPika
3 years, 7 months ago
so you're saying the bullies with big mouths usually get their way. What else is new.

journal tagging=YUSH. I wish we had this.

using the blocking system instead of the mods=yes. I am all for this.

the best moderation system is one of self moderation through incentive and good site design. With the right adjustments to the system, MOST of the moderating work would be self accomplished. IB has a great tagging and blocking system. IMO, if the system required better, more detailed tagging before your work or journal was able to be visible, it would save A LOT of problems.
Wolfblade
3 years, 7 months ago
" SpaceCat wrote:

the best moderation system is one of self moderation through incentive and good site design. With the right adjustments to the system, MOST of the moderating work would be self accomplished.


That's precisely why it exists as it does. That was the intent, that was the point.

When I insisted on all that to Jery as he was putting the site together, my failure was in thinking this was all self-evident. I should have also insisted that he make the apparently-not-obvious-enough point clearly written staff-side, too.

Users have the tools to be told to handle it for themselves, and left to do so - no mods need to do jack shit unless the problem can't be solved by users simply being adults and using the tools available to them. There's nothing the system or tools can do to stop a moderator who just WANTS to find some excuse to sling their mod-stick around, and takes it upon themselves to start dishing out judgment instead of just telling users "if you're bothered by this person, push the thoughtfully supplied 'remove them from my sight' button," and then the mod be done with it.

Petty tyrants just want to wield power, and will find excuses to do so. They'll also see any criticism of them for such as just being other like-minded tyrants who just want control in their stead. Even if those others rarely even say anything.
GreenPika
3 years, 7 months ago
though it should be fixed so the mods can't go about abusing their power, I seriously doubt such a fix would be made at this point.
Wolfblade
3 years, 7 months ago
There's not really anything to be done. The people who'd need to recognize the logic here, simply can't.
GreenPika
3 years, 7 months ago
that's why, sadly, if you don't make a site correctly the first time, it's bound to fall to mod problems after a while. all rules need to be very well thought out, set in stone and very clearly stated. if there's any room for loopholes or interpretation, you will have problems eventually. :/
CodyFox
3 years, 7 months ago
Your treatise all hinges upon an assumption that moderators are mere mindless tools of the loudest complainer, not intelligent individuals who are able to independently decide if violations have occurred. Without your assumption, the whole thing falls apart. Likewise, under your reasoning, any figure granted the power to enforce social rules and regulations would similarly be considered as a mindless tool.

A society based on your theory wouldn't even have a developed system of justice or a representational structure. Judges, juries and elected representatives would all be considered as mere mindless tools swayed by the whims of the loudest complainer. Instead, citizens would settle all disputes by their own hands and weapons. Anarchy would ensue.

TLDR - as long as moderators are empowered to enforce social norms on the site, you won't be satisfied. You don't believe they are acting upon their independent judgment to protect the site's values. You want a more anarchic site where warring conflict among members becomes more normalized, rather than tamped down, because you see that as a form of freedom. However, that's not the type of freedom that I think Inkbunny was made to promote. We already have the freedom we want.
alistair
3 years, 7 months ago
Thank you for your comments.

" CodyFox wrote:
Your treatise all hinges upon an assumption that moderators are mere mindless tools of the loudest complainer, not intelligent individuals who are able to independently decide if violations have occurred.


Not so.  The inherent limits of moderator attention are one of the central points of the whole text.  The document assumes moderators acting in good faith and performing their duties well.  Being able to direct moderator attention is a tool that can be used to enforce a person's subjective values on others.  This is true even in the event that the moderator agrees with the reporting user, because without the report, the interaction being reported may never have been seen by the moderator.  I make this point very clearly in the text.

" TLDR - as long as moderators are empowered to enforce social norms on the site, you won't be satisfied. You don't believe they are acting upon their independent judgment to protect the site's values. You want a more anarchic site where warring conflict among members becomes more normalized, rather than tamped down, because you see that as a form of freedom. However, that's not the type of freedom that I think Inkbunny was made to promote. We already have the freedom we want.


Please do not claim to know what I do or do not think, believe, or want.  You are not a mind reader.
CodyFox
3 years, 7 months ago
If someone is improperly directing moderator attention, then they are an annoyance and I think the moderators have tools to handle that. If someone is properly directing moderator attention, then they are a valuable asset to the site. There's no use in taking issue with the fact that the second group of people exist or that they have impact on overall moderation. The same could be said for the police. If no one called 911, the police wouldn't become aware of a significant portion of crimes that occur. People would be more free to commit crimes. However, that is not a type of freedom that we value as a society.
alistair
3 years, 7 months ago
To reiterate from the text, my argument hinges on cases where the rules in question must be given subjective interpretation.

When the case rests on a subjective assessment such that one member may find something objectionable, but another doesn't, then the first member is able to enforce their values on the other.  It is not only that the reported member is punished, but all other members who do not share the values of the first member are also punished by having their choice to see the subjectively-offending content forbidden from them.  This implicitly empowers one set of members over all other sets in a way contrary to the site's philosophy.

" The same could be said for the police.


Except that when police enforce laws in an arbitrary way, we consider it an injustice.  Laws that are written requiring subjective interpretation are generally considered bad laws because they then become tools tyrants can use to oppress anyone they dislike, by merely adjusting their momentary interpretation to allow them to wield power as they desire.  My logic in the treatise would apply equally to such laws and the calling of police.

I was very clear about what rules the treatise is addressing, and that the effects of moderator action are not inherently bad, since when they are the result of objective rules there is little to no slack in interpretation.
CodyFox
3 years, 7 months ago
Every action taken by the police involves a subjective analysis, by an individual officer. They are not automatons. In the U.S., the very act of temporarily detaining someone without arresting them requires a "reasonable suspicion of criminal activity." Formally placing someone under arrest requires "probable cause," which means "a reasonable amount of suspicion, supported by circumstances sufficiently strong to justify a prudent and cautious person's belief that certain facts are probably true." Every time an officer fires a gun, they are acting on their subjective belief of imminent danger. I honestly don't see how you could realistically think they are doing anything objective.

As I said before, beyond that you have judges and juries. All rely on subjectivity. Which witness is telling the truth? Which one is lying? Do a set of facts lead them to believe conclusion A or conclusion B? Is the defense credible or not? How severely should someone be punished? Is someone innocent or guilty? It's all based on subjective analysis.

The concept of objective truth is nearly non-existent when it comes to man-made laws, justice and social norms. Expecting cyborg-like objectivity from moderators is not in any way realistic or workable. There will always be subjectivity in a moderation system. Demanding that all rules enforcement with a subjective element be halted is the same as demanding that there be no enforcement, (i.e. anarchy) as I outlined above.
alistair
3 years, 7 months ago
" CodyFox wrote:
It's all based on subjective analysis.


All human perception is subjective, yes.  You can run down that thread all you like, but it will remain a dodge.

When it comes to the Rule of Law, objectivity is the ideal to strive for.  The entire reason we have the system we do in, for example, the United States, is that we acknowledge that perception is subjective, and this subjectivity can result in arbitrary punishment without strong checks.  The very idea of the Rule of Law is that the law is the highest authority.  It aims to be as objective as we can manage.  When laws involve inherent subjectivity, we rely on case law to more strongly define the interpretations.  The direction is always towards a more objectively defined standard.

" The concept of objective truth is nearly non-existent when it comes to man-made laws, justice and social norms.


Patently false.  If a law says "your automobile must show visible proof of inspection when operating on public roads," and a car does not have such proof displayed, this is an objective truth that the law is being violated.  I would argue that many, if not most, laws follow this pattern.

" Demanding that all rules enforcement with a subjective element be halted is the same as demanding that there be no enforcement, (i.e. anarchy) as I outlined above.


I rather explicitly have made zero demands in my treatise.  I haven't even made requests.  Heck, I don't even make suggestions; the treatise explores possible ideas only.  I began working on this when a moderator asked me to explain the failure mode I see.

I'm not sure where you're getting this idea that I want "no enforcement."  I mean, I'm fairly clear that I don't consider moderator action to be an inherent negative.
CodyFox
3 years, 7 months ago
"your automobile must show visible proof of inspection when operating on public roads"

Visible where? How far away? How big does it need to be? How does one decide whether to investigate someone's proof of inspection? Is that done by a human being or a mindless drone flying over all the highways and scanning cars?

You tried to cherrypick the most binary non-criminal traffic violation, but even that involves subjective interpretation. You seem to have backtracked by now saying that we should strive toward objectivity, an acknowledgement that pure objectivity is not possible under the circumstances. So now all that's left is to acknowledge that within a system that relies upon subjective interpretation of rules, it's not feasible to regulate the amount of subjectivity used by the enforcers of those rules. In reality, you're just criticizing them by asserting that your subjective analysis is superior to theirs.
alistair
3 years, 7 months ago
The only thing you're saying is that humans must use discernment to ascertain the very existence of a rules violation, regardless of the objectivity of the rule in question.  This is self-evident to a sophomoric degree, and misses the point by such a wide margin that I cannot help but think it is deliberate.

A rule can be objective in its description.

"No shirts having more than a 10% of visible area comprised of a color emitted in the wavelengths between 625 and 740 nanometers are allowed" (aka, "no red shirts allowed") is an objective rule.  You can use tools to measure a shirt to determine if it violates this rule, entirely outside of human subjectivity.  The fact that a human must first notice that there even is a shirt to measure is irrelevant.

"Driving on this stretch of road faster than 55 miles-per-hour is not allowed" is an objective rule.  You can use tools to measure the speed of vehicle, entirely outside of human subjectivity.  The fact that a human must first observe a vehicle and decide to use the tool has no bearing on the objectivity of the rule itself.

A subjective rule would require that the measurement tool be human judgement itself.

"Wearing ugly clothes is not allowed" is a subjective rule.  There is no tool you can use to measure ugliness.  A human must use their own internal, subjective judgement, not an external tool, to decide if the rule has been violated.

"Driving too fast is not allowed" is a subjective rule.  Without defining what "too fast" means, there is no tool that can be used to decide a vehicle is violating the rule.  A human must use their own internal, subjective judgement, not an external tool to decide if a vehicle is moving "too fast."


The crux of my treatise is that, specifically when a rule can only be tested with subjective judgement, there are meta-vulnerabilities that exist in any moderation system that relies on user-initiated reporting.  These vulnerabilities exist no matter what the subjective criteria are by which the moderators enforce the rule, how fair they are, or how well they do their job.  The problem is systemic.

When I recently got into a debate with Kadm, I asked where the "naughty words" ban is in the TOS, and I was pointed to the rule about "inflammatory" language, which says nothing about what actually constitutes "inflammatory."  There is no objective way to measure the "inflammatory-ness" of a potential violation.  Because the TOS does not even attempt to provide more objective definitions of its terms in that rule, potential violations can only be measured with subjective assessment.  Because this is true, IB has the meta-vulnerability I describe in the treatise.
CodyFox
3 years, 7 months ago
Problem here is that you are making a moral argument (subjectivity = bad in rule enforcement), not simply an argument about whether one thing can be categorized as objective or subjective. By making the subjectivity = bad argument, the logical conclusion of your argument is to extend it to all areas of enforcement where subjectivity is used (which, as I've shown, is all of them). We could have a philosophical debate about whether any of the things you listed is truly objective (who decided what nanometer means? who decided the meaning of a mph? these are concepts that have become "fact" merely by the majority of individuals subjectively deciding that they are), but I think the point remains whether we go to that level of philosophy or not.

Subjectivity will always exist in enforcement of rules. A cop could (and routinely does) watch 100 cars pass by traveling over the speed limit, then suddenly pull over the 101st car because they simply felt like it. All of those people technically broke the law, so there is no unfairness in one being pulled over vs the other under such circumstances. This is what you get when you take an absolutist position.

You could really simplify your position to get at the point that I think you really want to make, which is obviously that you are against regulation of hateful language full stop. All of the pseudo intellectualism of the treatise is meant to support that position.
alistair
3 years, 7 months ago
" CodyFox wrote:
Problem here is that you are making a moral argument (subjectivity = bad in rule enforcement), not simply an argument about whether one thing can be categorized as objective or subjective.


False.  I am not talking about enforcement.  I'm talking one abstraction step above/before enforcement.

" By making the subjectivity = bad argument, the logical conclusion of your argument is to extend it to all areas of enforcement where subjectivity is used (which, as I've shown, is all of them).


Which I've also shown is not the point.

" We could have a philosophical debate about whether any of the things you listed is truly objective (who decided what nanometer means? who decided the meaning of a mph? these are concepts that have become "fact" merely by the majority of individuals subjectively deciding that they are), but I think the point remains whether we go to that level of philosophy or not.


This is pure sophistry.  You're just making a post-modernist claim that because human perception is inherently subjective, there is no objective truth.  You can continue going down this road, but I'm not going to follow; it proves nothing, solves nothing, provides no actual value in the discussion, and you are simply using it to attempt to gain rhetorical power over me.  No thanks, not playing that game.

" Subjectivity will always exist in enforcement of rules. A cop could (and routinely does) watch 100 cars pass by traveling over the speed limit, then suddenly pull over the 101st car because they simply felt like it. All of those people technically broke the law, so there is no unfairness in one being pulled over vs the other under such circumstances. This is what you get when you take an absolutist position.


Again, I am not talking about enforcement.  If this cop decides to pull someone over, even if they just feel like it, the fact remains that the person either is or is not breaking the law.  The cop is just the agent of enforcement, not the law itself.

" You could really simplify your position to get at the point that I think you really want to make, which is obviously that you are against regulation of hateful language full stop. All of the pseudo intellectualism of the treatise is meant to support that position.


What you're failing to understand, either because of naive foolishness or deliberate antagonism, is that the problem I describe can affect you too.  The treatise explicitly does not assume any given position held by any set of members.  It could just as easily be a set of people who you disagree with who exploit this vulnerability to silence you.  The problem is universal.

I'm not going to continue engaging in this thread.  I've made my point, and I suspect you're going to keep going down the "but everything is subjective, nothing is objective" route.  I'm going to take your word that you truly believe this, and the logical conclusion is that there is zero point in further engagement.  Further, taking your word that you actually believe what you're saying, I can only conclude that you desire this state of affairs, because it gives you power over others.  Standard tyrant stuff, feh.

Peace.
CodyFox
3 years, 7 months ago
" alistair wrote:


False.  I am not talking about enforcement.  I'm talking one abstraction step above/before enforcement.


Liar

" alistair wrote:
The Problem: When a Type 1 Member is the initiating agent of moderation, the Moderator becomes a tool by which the Type 1 Member coerces all other Members into accepting the Type 1 Member's subjective values.  The fact that the Moderator agrees with the Type 1 Member is irrelevant...

" alistair wrote:
The Problem: Any set of Members consisting of Type 1 and Type 1a Members, those that are most likely to submit a report and do so as a first resort, are most able to use the Moderator as a tool by which they can coerce other Members into accepting that set's subjective values.  Again, due to the Moderator's attention being limited such that the Moderator might never have seen the post anyway, whether the Moderator agrees with the reporting Members is irrelevant...


Notably, you don't care whether the report of a violation is virtuous or not, or whether the moderator has properly agreed with such a good faith report. i.e., you don't care whether the subjective decision is good or bad. All you care about is that the enforcement involves subjectivity at all, which you find problematic. Thus, your moral aversion to subjectivity in enforcement overrides the issue of righteousness of outcome. Every enforcement decision could be 100% righteous, and your stance against it would still be the same. This is an extreme position by any measure.

" alistair wrote:

Again, I am not talking about enforcement.  If this cop decides to pull someone over, even if they just feel like it, the fact remains that the person either is or is not breaking the law.  The cop is just the agent of enforcement, not the law itself.


As I've demonstrated, you are offended by the enforcement whether the cop is acting 100% properly on a good faith tip or not. The question of guilt or innocence is simply another subjective question in a system with human enforcement. If it wasn't, there would be no need for cops or traffic court. A robot would measure your vehicle exceeding the limit and a ticket would soon arrive at your mailbox.

" alistair wrote:

What you're failing to understand, either because of naive foolishness or deliberate antagonism, is that the problem I describe can affect you too.  The treatise explicitly does not assume any given position held by any set of members.  It could just as easily be a set of people who you disagree with who exploit this vulnerability to silence you.  The problem is universal.


The alleged problem won't affect me, because I'm not making posts - images, journals and the like - that are intentionally done to inflame, troll or otherwise encourage the site's membership to engage in hateful verbal combat. I fully understand that there are some who get their kicks from that type of thing. They love to find a gathering of people who are generally being pleasant and happy, throw a bunch of hand grenades into that group, and watch them devolve into vile and hateful behavior. Being the catalyst for that kind of malice and tearing apart a community gives them a feeling of power and pleasure. I will never be one of those people, so I am not concerned.
alistair
3 years, 7 months ago
You have come to my journal, repeatedly maligned my character (defame/vilify), continued to do so once I explicitly expressed the intent to disengage (harassing), dictated to me what my own thoughts are (harmful, defame/vilify, abusive), and rather than point out where my journal is wrong you have instead targeted me (inflammatory).  Further, you have done so while stating specifically that you know you will not receive sanction for this behavior (abusive).

Meanwhile, in this thread I have been nothing but respectful, even cordial, with you, meaning you are engaging in this behavior actively and without provocation (inflammatory).

So, you leave me with a quandary: do I report you?  Per the subjective rules, and per the moderator's own preference, I should.  Are you so certain that you will not be censured for your behavior, are you so certain that the moderators will protect you, only because you believe you're a Good Person?

You are fortunate that I am not a Type 1 or Type 1a Member, and am willing to tolerate your abuse rather than attempt to silence you.

Additionally, since I have seen this sort of behavior from you on others' journals, I can't help but wonder if it isn't your intent to inflame people so that you can then report them and get them punished.  But, I am not a mind reader, so perhaps this is not your intent.

I will say nothing more.
CodyFox
3 years, 7 months ago
I'm sure if you filed that type of fraudulent report the mods would simply shake their heads and close out your report. They might even send you a message asking you not to file fraudulent reports. I trust their judgment. That is the difference between us.
FrancisJCat
3 years, 7 months ago
Wow. What a waste of time XD
I mean, I'm really sorry, but you think Admins and Mods give a fuck about IB? They don't.

The original vision of IB was set up by the creator of the site, Starling, who has long since left. The admins and mods are nothing but janitors doing the minimum effort to keep the thing from crumbling down.

I've brought this up several times before. Other than small patches here and there, nothing has been done to improve the site in years.

Sure, you may have some good points, but you're yelling in an empty room. Anyone who cares is gone.

Believe me. I was on the ground floor when this site came up. I saw it grow and change... and now it's stale.
alistair
3 years, 7 months ago
You care.  : )

Thank you for reading.
GreenReaper
3 years, 7 months ago
Our technical focus has been on scaling to accommodate Inkbunny's ever-growing traffic, content and userbase. Inkbunny now has 21 servers and VMs, where back in September 2016 we had 12; of those, only two use the original hardware. All have had regular software upgrades every few months. Our main and secondary servers were both migrated - the main server changed twice - and a dedicated server in Virginia was established. In total we now serve ~65TB/month (almost 200Mbps) - though part of that is inter-cache transfers to ensure files load fast for people far from Europe.
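
(The arithmetic behind that conversion: 65 TB/month is roughly 5.2×10^14 bits spread over a ~2.6-million-second month, which works out to about 200 Mbps.)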

We also made changes to accommodate more recent versions of PHP (we're now on PHP 7.4), upgrade our database from PostgreSQL 9.6 to 12, and address DDoS attacks. None of this brought obvious benefits beyond increased performance (or the ability to use cheaper hardware); but in my view they were all necessary changes, appropriate to prioritise, and I'm happier now they're done.

All this was done while maintaining expenses of ~$10/day - as opposed to splashing six figures on new hardware. If we'd done that and spent the time on features instead of scaling and performance "tweaks", you'd have seen big changes for sure, but I doubt you'd have liked them - doing so would've required significant funding, and most likely, significant compromises on content.

That's not to say there's been no development - just that "features" haven't been a focus. Our site footer states we're on Release 79, not 77. There have been 450+ commits to Inkbunny's code and documentation repository since 77 (~15% increase). To greatly abbreviate, Release 78 (in 2017/8) brought in short URLs for galleries and journals, and Release 79 (earlier this year) introduced content security policy on key pages - an important security improvement, and a painful one in terms of impact (due to the site's original rapid development, a lot of JavaScript was present in pages and had to be modularized), but one which would ideally have resulted in zero user-facing changes.

Another big new feature we've worked on, +fav recommendations, caused performance problems and the original implementation had to be backed out, but we still intend to release it; we now plan to do so using our secondary server for the expensive queries that it performs.

In short, site journals don't reflect the work that's been done on the codebase, let alone all the effort to maintain and upgrade the site's internal infrastructure - stuff like this comes up every other month, you just didn't hear about it. That's on me; journals take a lot of my own time to compile, and I didn't have time to do that and ensure we'd migrated to new servers and code before old contracts expired. Both main server migrations had priority issues to resolve that took me away from catching up; once we had, it seemed a little late to announce the changes (in retrospect, we probably should have).

As such, most news has come via our Twitter account, since it's easier to write 280 characters at a time than 18,000. Reviewing that should show we've been doing more than "keeping the lights on" - and it's in line with staff comments to your journal last year.
FrancisJCat
3 years, 7 months ago
And what has changed since last year?

You just repeated what I said, just more verbose. At the end of the day, the site has not changed in any meaningful way since Starling left.

Simple stuff like HTML5 support for animations. A different CSS sheet for mobiles so you CAN ACTUALLY READ THE FUCKING FONT ON A SMALL SCREEN. Uploading PDFs for stories. Things I've been bringing up for years.

Or you know. Keep bots from spamming posts.

You're nothing but a software patcher. Thank you for that, but the site's original spirit is dead and you helped kill it.
GreenReaper
3 years, 7 months ago
Let's see; at the risk of repeating myself:
* We selected, configured, and moved to both a new main server (with NVMe storage), and secondary server:
** The main server is less capable CPU/RAM-wise, but has twice the storage, and unlimited, DDoS-protected bandwidth that we won't get overcharged for - so it fits our needs better. It's also half the price of the one it replaces.
** The secondary now has twice the RAM and storage and three times the bandwidth - again, a better fit for what it does.
* We raised enough to pay for the Virginia cache for five years despite having to add 50% more transfer to it.
* With money saved on the main server, we added a bunch of image caches, increasing our network by 50%, including many far-flung places.
* We deployed a software release containing big changes to how the site's JavaScript worked, touching a bunch of pages.

These aren't exciting changes, unless you get excited by content delivery. But they mean we can provide Inkbunny to 30,000 users visiting every day, faster and hopefully more reliably than before. Speed is a feature; faster-loading pages encourage people to spend more time on a site and this ultimately means more traffic for your art.

Let's say you're in South America, Singapore, or San Francisco; your experience should be better, thanks to upgrades to the amount of data we serve locally - Singapore in particular got a huge new cache. Japan and Australia, where we already had servers, also benefited from this; in Tokyo, SSD capacity doubled and the local cache (for now, at least) no longer has to serve people in other East Asian countries, thanks to e.g. the Hong Kong cache. Fetching work from Singapore should also be faster than going back to Europe for it.

It's hard to value security improvements; their goal is to avoid loss. But you'd be upset if someone got access to your PMs or was able to wipe galleries out because they found a flaw - and incidentally, implementing content security policy was Starling's own suggested priority. (Much of his time and mine has been spent adding defence in depth against such issues; my security advice was one reason I was brought on staff, and it's probably one of the reasons he entrusted IB to me.)

Similarly, we want to avoid hardware failures causing data loss due to insufficient backups - or servers going down because we'd failed to comply with a host's requirement of hash checking for real-life child abuse. These aren't "features", but necessities - and most require ongoing effort to maintain over upgrades, the introduction of new servers, etc.

That's the bit you don't seem to understand: Inkbunny, as a service, takes significant effort to deliver in its current form. That spam attack? It doesn't magically disappear. Someone had to identify IPs, block, and clean up. The DDoS? I was up late setting up an IPv6 tunnel to route access to the old server via the new one while preparing to migrate everything.

That's why Starling left: he couldn't be that guy and [learn to] make games. At times, it's a full-time job; one I don't get paid for, or the $10/day would be ~$100/day.

Sure, we all like exciting new site features. Some have happened under my watch. And I didn't even mention adding a new site background. 😼

But my priority is keeping the service we have up, and capable of growth, so that when primo furry artists such as yourself want to bank $500 on a whim, people are able to see your past work, and bid for a commission without the site being taken down by a flood of junk traffic from China. That's the first, overriding goal. It's what donors pay for. And for the most part, it's been achieved, despite recent efforts to the contrary.
GreenReaper
3 years, 7 months ago
You added some specifics to your comment after I opened it to reply; here's a response:

* HTML5 animation support: Sure. I'm aware Flash is going away soon. I opened a bug about it in our tracker five years ago, and added to it in 2016 and 2018. And yes, Newgrounds has done something. But games are all they're about, and I don't agree the solution is "simple". As it stands, Inkbunny's designed to deliver one file at a time, without dependencies, on the same page as everything. Various assumptions tie into that. Changing this, and ensuring that the result can't be used to compromise the site, is non-trivial. (We could just embed sites, but then we're embedding, not hosting.)

* CSS for mobile: Due to the site's rapid development, a bunch of what'd normally be in style sheets is embedded style. So what's actually required is similar to what was required with JS for CSP - rip it out and replace with files and class/ID references, one page at a time - or, write a standalone mobile client. We actually had someone work on the latter, but they drifted off, as people often do when busy with life. Thankfully, we hadn't promised to release it as a feature. I can't speak for everyone, but I find Chrome's font boosting works well, and a few years back I tweaked things to make it work in some cases where it wasn't. Still sucks on Firefox Mobile - and hence, Tor Browser for Android - but so do a lot of sites.

* PDF for stories: PDF is dangerous. Write in text. (OK, I'm kidding. I hear you sent some stuff. Another staff member looked at it. But they also have RL work - and volunteers ultimately spend their time on what they think is important. They decided the priority should be tickets, and "more staff" - and related development, like reworking permission to make us feel safer about having more staff. So we're closer to that than to PDFs.)

* Keep bots from spamming posts: Some of the protection we put in vs. DDoS should help with that, as should the increased level on our signup CAPTCHA. We haven't had a repeat episode since those changes, that I'm aware of. But if people really want to do it again, I'm sure they can find a way. There's a cost/benefit call: does it matter vs. other stuff if a spam attack happens once every few years? (It only lasted so long because I was headed to sleep just as it started.)

* "The site's original spirit is dead and you helped killing it": I don't quite know what you mean by that, or how to respond, but I guess I can try. I've been part of Inkbunny's team since before it launched, though some may claim to be closer. I know how it felt launching an exciting new site. But IB's not the upstart it was a decade ago. Some rely on it to make their living. Others have nowhere else to go, as the mood elsewhere hardened against what they like. That sucks. But I'm not responsible for the changes in what's acceptable online, or within the fandom. Nor did I push out the dev you liked. He came to me offering IB's poisoned chalice because he wanted to go off and do something else.

Knowing I had commitments to WikiFur and Flayrah, he asked only that I take care of IB and hand it on to someone else if I felt unable to run it - like he was doing. IB's since taken the lion's share of my time, and roughly tripled in size. Maybe I'm not an ideal replacement, but I feel I've done what I promised. And it wouldn't be fair to blame him either for doing something with his life vs. code full-time for a site which could barely cover its own costs, let alone offer him financial compensation.

Much of this comes down to "not enough dev staff" or "it's not the priority" - two sides of the same coin. With enough people, some may have done more of what you want. We're looking to fix that. But it'd be dishonest to commit to specific features that may never happen within a set timeframe.

Apologies for being verbose again, it's late here. 😼
FrancisJCat
3 years, 7 months ago
If I had an employee who wrote a 10-page text of excuses for why he didn't do the thing I asked him to do, he'd be fired on the spot.

See, you're missing the point altogether. It's not what you think works for the site. It's about serving the community.

Of course, technical upgrades and all that shiz is important and I'm not discounting that. But that's not what I was talking about. It's about providing a good service to the community, letting artists reach their audiences better.

Do you think I asked for PDF or HTML5 animation because I'm a fussy toddler or because I want to make interactive art and illustrated stories?

Stop making excuses, get your ass to work, or find someone that can do the job.

Pathetic.
FrancisJCat
3 years, 7 months ago
PS: Sorry if I'm bashing you over the head, but you chose to pick this fight, not me.
GreenReaper
3 years, 7 months ago
Yeah, well, I'm not your employee, am I? 😼

Inkbunny is a free service. We're providing what we can for the furry community. We have limited time to do that, and our opinions differ on what's most important. Nobody's saying the things you like wouldn't be great. They're just not the priority.

If you don't like our priorities, you're welcome to commission someone else to make your dream site - or to contribute to this one. Just as some developers do directly, from time to time, when their lives aren't in the way. But your money might be better spent on that 3D game project of yours, which, when it comes down to it, could always be hosted on a simple website and just promoted here.
FrancisJCat
3 years, 7 months ago
You don't know when to shut the fuck up, do you?

" Yeah, well, I'm not your employee, am I?

That's what I said, you fucking moron.

" Inkbunny is a free service. We're providing what we can for the furry community.

The site runs on donations. It's not free.
And if you're providing for the community, this journal proves you're doing a piss poor job.

" If you don't like our priorities, you're welcome to commission someone else to make your dream site

The good old "if you don't like it leave" false choice. Go get fucked.

" or to contribute to this one

Did you just tell me I don't contribute? What do you think artists do? I'll give you a hint. There would be no Inkbunny without them, you fucking pea-brain sized piece of shit.

" But your money might be better-spent on that 3D game project of yours

If I knew giving you money would get the features people request implemented, I'd pay it gladly. But given your track record, it would be a waste.

And I'll give you one last piece of advice.

Shut. The. Fuck. Up.

You managed to piss me off to no end and no answer you can give will satisfy me. Either do a better job or find someone that can.

shaemint
3 years, 7 months ago
At least it isn't FA lmao
FrancisJCat
3 years, 7 months ago
I'll give you that much
It could always be worse X3
IBp
3 years, 7 months ago
Holy fucking shit Arkaid you have some brass balls.
alistair
3 years, 7 months ago
Thank you for the insight into the development process.  If I may offer a light criticism: requiring users to utilize an off-site service, one which is quite polarized and which many such as myself refuse to use on principle, just to keep up with what's happening with this site is not the friendliest move.

Do you use some manner of VCS?  It seems like auto-publishing commit logs to the site when pushing to production would be trivial.  Doing everything through Twitter seems... sub-optimal.
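
For illustration only, a rough sketch of what I mean - everything in it (the branch name, the endpoint, the payload) is an assumption rather than anything real - a server-side post-receive hook that forwards new commit subjects whenever production is pushed:

    #!/usr/bin/env python3
    # Hypothetical post-receive hook: publish new commit subjects to a
    # site endpoint on pushes to a production branch.  The URL is a
    # placeholder; new-branch pushes (all-zero old sha) are skipped.
    import json, subprocess, sys, urllib.request

    for line in sys.stdin:                  # "<old> <new> <ref>" per updated ref
        old, new, ref = line.split()
        if ref != "refs/heads/production" or set(old) == {"0"}:
            continue
        log = subprocess.check_output(
            ["git", "log", "--format=%h %s", f"{old}..{new}"], text=True)
        req = urllib.request.Request(
            "https://example.invalid/api/post_changelog",  # assumed endpoint
            data=json.dumps({"changelog": log}).encode(),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)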
Birdpup
3 years, 7 months ago
I don't really know what to think of this. I don't even know if it should be my place to weigh in my opinion but uh, sure, why not?

What I can infer from your conclusion is that moderators are too quick to punish those who are regularly reported again and are quick to look into posts that are reported...whilst I understand this weaponises them in the hands of those reporters, isn't that the point of a self-moderated community? Wasn't that why report systems for members were created in the first place across multiple platforms, not just Inkbunny? Even if you removed the reporting feature and introduced more moderators, you still come up with problems-- whether it's potentially harmful content slipping under the radar or moderators locking heads over whether content should be removed.

I personally don't believe in the philosophy of 'If you don't like what they're saying, ignore them'. Certain users using their profiles as a soapbox to say potentially harmful or offensive content shouldn't be allowed (and extreme outliers are punished) but there's a fine line, and being offended from the outset is a subjective opinion. If mass reporting by a number of users is wrong and weaponising moderators, then how are those users meant to speak out against what they feel is wrong? Arguing with said user in their own journals is in itself a problem: either that user can choose to ignore or even remove their content, thereby policing their own page and forcing their values onto others, or the page itself becomes a cesspit of mindless bickering between several parties. By and large, being able to report SHOULD be a feature because what they're saying could potentially be harmful or offensive, even if it skirts the line of Inkbunny's TOS. But you're always going to have problems with trigger-happy reporters throwing moderators left and right. So what's the solution?

I appreciate you writing all your thoughts out but I don't think even moderators know what to do to potentially solve the issues at this point. I mean, could anyone? Any one solution makes more problems.
alistair
3 years, 7 months ago
Thank you for your comments.

" Birdpup wrote:
What I can infer from your conclusion is that moderators are too quick to punish those who are regularly reported again and are quick to look into posts that are reported


Well, it's not specifically about moderator action, but rather the meta-vulnerability of the system as a whole.  Moderator action is just one of the mechanisms by which it happens.

" ...whilst I understand this weaponises them in the hands of those reporters, isn't that the point of a self-moderated community?


"Self-moderated" should not mean "users utilize moderators."  It should mean, and in Inkbunny's case actually does mean, "users have the tools to moderate for themselves."

" Wasn't that why report systems for members were created in the first place across multiple platforms, not just Inkbunny?


Inkbunny has a distinct lack of a report system for members. Reporting is achieved by piggy-backing on support tickets.  Inkbunny's original design was built to minimize direct moderator intervention in user-user interactions, which is strongly reflected in the power of the available user tools and the noted lack of reporting tools.

" Even if you removed the reporting feature and introduced more moderators, you still come up with problems-- whether it's potentially harmful content slipping under the radar or moderators locking heads over whether content should be removed.


None of the ideas I explore involve removing reporting (one idea specifically strengthens it), nor introducing more moderators.  The failure mode will exist no matter how many moderators you add.  Facebook, Google, Twitter, et al. try to overcome this with AI, but it's a losing game because the failure mode is inherent.

" If mass reporting by a number of users is wrong and weaponising moderators, then how are those users meant to speak out against what they feel is wrong?


By speaking out.  But I think when you say "speak out against" what you mean is "get someone to silence the offender."
(Edit: I don't say mass reporting is wrong in the treatise.  I say it is a mechanism of potential exploit.  There is a difference.)

" By and large, being able to report SHOULD be a feature because what they're saying could potentially be harmful or offensive, even if it skirts the line of Inkbunny's TOS.


They're words, man.  What you're advocating for is to give the most sensitive people the most power to silence those they dislike or disagree with.  I can't say I approve.

" I appreciate you writing all your thoughts out but I don't think even moderators know what to do to potentially solve the issues at this point. I mean, could anyone? Any one solution makes more problems.


Only if you a priori assume that policing speech against subjective criteria is necessary.  When you begin by axiomatically defining a certain subjective interpretation as the only good interpretation, any action which would allow a different subjective interpretation becomes bad by definition.  I reject this premise just as axiomatically, as it prevents even searching for solutions, let alone out-of-the-box solutions.  Expand your world-view a little.

Thanks for engaging with me.
Birdpup
3 years, 7 months ago
I guess like most things I'm just not educated enough. Best of luck in your endeavours.
alistair
3 years, 7 months ago
You oughtn't belittle yourself.  Take care.  : )
Soulfire
3 years, 7 months ago
The problem, very simply, is not with illegal content. The problem is very much that certain groups shout about anything they don't like. Since when are ideas harmful or hurtful? If I expressed any opinion about transgender issues, women's issues, anything to do with race, or certain political ideologies, etc., etc. that was not a parrot of what you see in the mainstream media, I'd be attacked. Not on the basis or merits of my words or views, because this lot do not read much and do not have much in the way of comprehension. No, I'd be attacked merely because they decided it was evil or hate speech or wrong. I have seen many get upset over written things that were not hateful. For some, bringing up any counterpoint to an issue they hold dearly marks you as wrong.

At the end of the day, hate and hate speech are subjective. There is no such thing as concrete good and evil; sorry, religious people. And once again, the freedom to express oneself must always be fought for and maintained. Even and especially if someone gets "Offended".
" Birdpup wrote:

What I can infer from your conclusion is that moderators are too quick to punish those who are regularly reported again and are quick to look into posts that are reported...whilst I understand this weaponises them in the hands of those reporters, isn't that the point of a self-moderated community?


Are we a self-moderated community? No, I don't think so. What we have is power in the hands of a few, and the majority do not even know what the issues are in most cases. I've been here going on 10 years and never once voted on a mod or had a chance to peruse their personal beliefs to see if they represented my own. Time and time again the rest of society is held hostage to a few who can't cope, simply because they make noise. Instead of taking a poll on the issue and seeing where the majority's feelings lie, changes are made by the unelected few. And how dare anyone object!! As if we were not capable of understanding the issue or voicing a pertinent opinion.

At the end of the day, I'd be willing to bet $100 that the "numerous complaints" directed at any one person or post do not even come from 1% of the total members.
MystBunny
3 years, 7 months ago
This is incredible. It really puts things into perspective. Excellent work on this, it's simply brilliant.

Though you said that your ideas for solutions are merely brainstorming, I do want to add my thoughts to a couple of them.

" In the event of Members' interactions becoming heated, should Moderators make the determination that action is needed, a preferred action could be to enforce mutual blocks or bans between the involved members rather than demanding edit or removal of any given post or giving a full site ban.


I have to disagree with this one, and as such, it seems to be the only one I do disagree with here. I have had some heated exchanges myself here on Inkbunny, but in some of those cases, we were eventually able to calm down and come to a mutual understanding, and then re-assess the others' positions in a more productive way. An enforced mutual block would have undermined that resolution and only further exacerbated our anger towards the other completely and permanently.

" When a large number of reports are submitted regarding a specific Member, the stored report data could be used to collate all such reports for easy analysis instead of having to manually deal with a flood of Support Tickets and PMs.


I think this one is a must. By the very nature of member reporting, as you have so perfectly explained here, the mods are only hearing one side of the story, and most of the time, nobody else has a chance to react until a decision has already been made, but as it exists now, by that point the moderators are likely already burned out and might not want to deal with the issue any further. This certainly creates an unfair imbalance. Although, even this solution would only have one side of the story told before anyone else can react. I'm not sure what a good solution would be to counter that problem specifically...
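
Just to picture how the collation could work (a hypothetical sketch; the data model here is an assumption, not anything from IB): every report is stored as a small record, then folded into one case per reported member instead of a flood of separate tickets.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Report:
        reporter: str
        reported: str
        reason: str

    def collate(reports):
        # One aggregated case per reported member, listing every
        # reporter and reason, instead of one support ticket each.
        cases = defaultdict(list)
        for r in reports:
            cases[r.reported].append(r)
        return cases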
alistair
3 years, 7 months ago
" MystBunny wrote:
" In the event of Members' interactions becoming heated, should Moderators make the determination that action is needed, a preferred action could be to enforce mutual blocks or bans between the involved members rather than demanding edit or removal of any given post or giving a full site ban.


I have to disagree with this one, and as such, it seems to be the only one I do disagree with here. [...] An enforced mutual block would have undermined that resolution and only further exacerbated our anger towards the other completely and permanently.


IB's admins and mods have the right to run the site as they like.  This idea would provide moderators the ability to safeguard what they feel is the appropriate tone of interaction while minimizing unintended harm by not forbidding others from seeing content they choose to see.

That said, I do agree with your point.

" This certainly creates an unfair imbalance. Although, even this solution would only have one side of the story told before anyone else can react. I'm not sure what a good solution would be to counter that problem specifically...


I have in mind a good solution to the entire problem I describe, but it would be such a fundamental adjustment (or rather, clarification), and require such a modification of moderator mindset, that I prefer not to even explain it.  I'm trying very hard to avoid telling the site admins and mods how to run their site, here.
MystBunny
3 years, 7 months ago
Well, I mean... yeah, the mods have a right to run their site as they see fit, but this site is also truly one-of-a-kind. This makes every decision a hell of a lot more impactful, so I'm definitely going to say what's on my mind. That said, I struggle to think of any site run more fairly than this one, but it could always be better.
CuriousFerret
3 years, 7 months ago
I like your suggestions.

In particular, ensuring all tools available to members to resolve conflict are used before moderation steps in, and especially the journal tags.

Heavy read, appreciate the effort.
TheDeinonychus
3 years, 7 months ago
The core problem that your treatise outlines is one that affects all forms of moderation on any site or forum where user-generated content is hosted. Namely that, aside from strictly defined violations of the rules, all moderation is inherently subjective in some way. Even in cases of obvious rules violations, such moderation is subject to interpretation by a moderator. One has only to look at sites such as Facebook and Twitter, where many users are allowed to post obvious violations of rules (sometimes even legally prohibited content) while others are banned for even remotely similar content.

This is especially an issue when moderation staff changes often: as new people are brought in and old staff are replaced over time, the general consensus of what is and isn't allowed will often shift. Even with a set-out ethos for the staff to follow, their interpretation of said ethos always changes. Even the event that led to the creation of Inkbunny was a result of this change. FurAffinity, once dedicated to hosting all forms of furry art, over time allowed their stance on artistic freedoms to be influenced by outside forces, leading to the banning of several forms of artistic expression (cub art being only one of these forms, by the way).

While a strictly defined set of rules and prescriptions on what content is or isn't allowed would go a long way toward preventing biased moderation, that is still open to interpretation by moderators as to what should and shouldn't be acted upon. It's always the case that moderators will turn a blind eye on some actions, while scrutinizing others. Furthermore, the old proverb 'The squeaky wheel gets the grease' holds true in these cases. As you outlined, while the majority of users may have no issue with a poster's content (or simply not feel strongly enough about it to complain), all it takes is for one outspoken individual to report it in order to enforce their opinion on the entire user base. We've been seeing just this happening more and more often lately in nearly all walks of life. The phenomenon of 'Cancel Culture' is the ultimate expression of this proverb. Where a vocal minority is allowed to force their opinions upon a silent majority for whatever reason they wish, dictating by proxy what people are or are not allowed to view. This has a two-fold effect. Not only do they prevent others from seeing or hearing things they find objectionable, but this also fosters an attitude of self-censorship. A state in which individuals make the choice to avoid expressing opinions or thoughts most would not find objectionable simply to avoid the ire of the vocal minority and the repercussions said minority are able to inflict upon them.

The only way to prevent these issues is a stance of complete self-moderation. That is to say, a stance of no enforced moderation, rather allowing users full control over what they personally see and interact with. Obviously concessions to this stance will have to be made, due to legal restrictions enforced beyond the forum's control. But users should be allowed, indeed expected, to take the personal responsibility of deciding for themselves, and only for themselves, what content they wish to partake in and view. This would, of course, require a robust set of options and utilities to allow for such things as tag and creator filtering, as well as the ability to block communication from other individuals. Thankfully most of these features are already in place.
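
A rough sketch of the mechanics of that self-moderation (hypothetical; the names here are assumptions): each user keeps their own filters, and the site consults only the viewer's filters when deciding what that viewer sees.

    from dataclasses import dataclass, field

    @dataclass
    class Filters:
        blocked_tags: set = field(default_factory=set)
        blocked_creators: set = field(default_factory=set)

    def wants_to_see(f: Filters, creator: str, tags: set) -> bool:
        # Filtering applies per viewer; nothing is hidden from anyone else.
        return creator not in f.blocked_creators and not (tags & f.blocked_tags)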

(Continued below)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
TheDeinonychus
3 years, 7 months ago

This, of course, does not shield a content creator completely from public opinion. While it is well within a content creator's ability and right to block users from the ability to comment on their posts, or even from commenting directly to the creator, it is also within the user's ability to abstain from consuming said content. A creator who posts content the majority of users find objectionable is not prevented or discouraged from posting such content, but the engagement the public has with said creator and their content is an organic and natural extension of the collective opinion of the public at large. This is akin to the concept of 'market forces', where public demand rather than enforced opinion drives the need for and production of goods. In this case, said 'goods' are the content created by other users. Of course, it is entirely the content creator's choice whether to follow the public opinion, or ignore it and continue following their own. The key point is that this choice is not enforced upon the creator, not by a vocal minority of the public, and not by the opinions of a moderation staff.

Another issue that will need to be addressed is the problem of false reporting. Even with strict guidelines as to what is and isn't prohibited content, there will be those who attempt to abuse the system. Another way to look at this would be as 'malicious reporting'. That is, where a small group of users report content that is neither objectionable to the majority of users nor strictly violates established rules. Oftentimes such reporting is done multiple times, either for the same content or on multiple pieces of content by the same creator. Often the purpose of this is to continually send reports to the moderation staff in the hopes of said reports being noticed by a member of the staff with similar opinions as the ones sending the reports. Again, this comes back to the issue inherent in enforced moderation. Not all moderators will share the same opinions, and even if the majority of moderators agree that something is not objectionable, it only takes one moderator to decide the entire user base should not be allowed to see something for it to become prohibited. The reason actions such as this are allowed and do happen is because there are no repercussions for falsely reporting content. Naturally individuals will have different opinions, and it's not unreasonable to assume that someone who reports content might be met with a moderator with a far more lax interpretation of the rules, thus allowing content that does violate said rules. But there should be repercussions in place for users who attempt to abuse the system. The counter-argument to this would naturally be that such repercussions would make people less likely to report actual rules violations. This, however, is a strawman argument. If rules on prohibited content were strictly defined and less open to interpretation by moderators, and also freely available to users, there should be no confusion as to what does or doesn't violate the rules. Thus anyone who does report content for a rules violation should be well aware of whether their report is false or not. Even if the majority of the user base are not aware of the rules regarding prohibited content, the majority is also likely to simply personally block such content rather than report it. Again, this goes back to fostering an environment of personal responsibility rather than one of enforced censorship.
alistair
3 years, 7 months ago
Your logic follows my own so tightly it is almost but not quite a rephrasing.  I find the cancel-culture analogy to be particularly apropos; I don't think I'd considered that when composing, though the two phenomena are clearly related in hindsight.

I agree that the subjective rules vulnerability also makes malicious reporting much more difficult to ethically punish, because intent becomes so difficult to prove.  This adds more weight to my point about power imbalances.  Those who report have a lower potential cost, nearing epsilon, compared to the potential cost imposed on those reported.  This is an inherent incentive to use reporting as an offensive tool.
TheDeinonychus
3 years, 7 months ago
Exactly. Not to get too political with this, but it's the same issue as with false rape allegations. There's no real danger in accusing someone of the crime, yet even if vindicated, the accused is often greatly damaged simply by the accusation. In this case, even if reported content is deemed to be not in violation of the rules, the fact that people are reporting it puts the creator under the scrutiny of the moderators, potentially labeling them as a 'problem user', one that they will be more critical towards when other reports against them are made.

Again, the greatest danger about this is how much of the rules are up to a moderator's judgement. Websites will often make rules ill-defined and broad, leaving room for them to make judgement calls on what is and isn't allowed. On one hand, this lets them 'sanitize' their site if an outside force (such as Paypal or Adsense for instance) starts imposing demands on them to remove content without strictly violating their own rules. But on the other hand, it puts all of a site's content creators at risk of suddenly having their content removed without warning. In InkBunny's case, since they have put forth a statement of their philosophy behind the site, it /should/ prevent such events taking place. The problem is that it doesn't in practice. Not only because the rules are not narrowly defined enough, but because the system of moderation is as you have pointed out too open for abuse by malicious users.
alistair
3 years, 7 months ago
" In this case, even if reported content is deemed to be not in violation of the rules, the fact that people are reporting it puts the creator under the scrutiny of the moderators, potentially labeling them as a 'problem user', one that they will be more critical towards when other reports against them are made.


Good addition.  This dovetails with Problem Three very well, though I don't think I would have included it, as it impugns the intentions of the moderators; I wanted to stick with points that apply even when moderators are behaving properly.

" Not only because the rules are not narrowly defined enough, but because the system of moderation is as you have pointed out too open for abuse by malicious users.


The users need not even be malicious, merely intolerant and noisy.  Every reporting user could truly believe they are just in their report, and the problems explained in the treatise would still be true.
TheDeinonychus
3 years, 7 months ago
A moderator's intentions don't have to be malign for this to happen. Even if the moderator is well-meaning, the fact that his attention is being brought to an individual inclines them to be more aware and judgemental about that individual's actions. A real-world example of this would be how police treat known felons versus first-time offenders. Cops assume that a repeat offender will be more likely to react aggressively towards them than even an individual who's unknown to them. Naturally, in our case, this could be mitigated by a content creator who's been very cooperative and good-natured with the moderation staff, but that's not always the case. If the staff is getting reports about someone almost on the daily, it's going to sour their impression of that individual.

And it's true, you shouldn't ascribe malice to what ignorance more easily explains. But the fact that there are those users out there who would willingly damage others just for their opinion (regardless of whether they themselves believe their opinion is in everyone's interest or not) means you have to account for those individuals. The least amount of malice allowed would be to let everyone moderate their own viewing, but unfortunately these days there are far too many people who believe they should be the ones to decide what everyone else is or isn't allowed to view. I've no doubt that the majority of these people believe they have good intentions. But that does not mean that they are correct in their assumptions. Outside of what is proscribed and prohibited by the law, the only one truly able to decide what is in an individual's best interest is ultimately that individual themselves.
Weaselgrease
3 years, 7 months ago
Something to consider adding would be a mechanism many competitive games use when there's a complaint about another player.  The staff receive the complaint, but the user being reported is also ignored, sometimes discreetly and sometimes optionally.  In the case of IB I think it would be fair to simply automatically hide any "problem posts" from the reporting party by default, then place the report into a soft batch that only becomes visible for Moderation once it passes a threshold of identical reports, so that instead of a number of reports to the moderators, they get a single report with a list of all those whom it offended. That way they are alleviating the problem for themselves as is intended by InkBunny's philosophy and also reducing the burden of tickets.

Not to be confused with reports of gross ToS violations.  Though in such a case I do believe anyone who misreports ToS violations through their own subjective interpretation should themselves be charged with a violation.  Either through direct enforcement by moderators or discreet 'Boy who cried wolf' silencing for repeat offenses.
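
In rough code (a hypothetical sketch; the threshold and the storage are invented for illustration), the mechanism could look like this:

    THRESHOLD = 5                # assumed number of identical reports

    hidden_from = {}             # post_id -> set of members who flagged it
    mod_queue = []               # aggregated reports surfaced to moderators

    def flag(post_id: str, reporter: str):
        flaggers = hidden_from.setdefault(post_id, set())
        flaggers.add(reporter)            # hidden from this member at once
        if len(flaggers) == THRESHOLD:    # surfaced once, past the threshold
            mod_queue.append({"post": post_id, "flagged_by": sorted(flaggers)})

    def visible_to(post_id: str, viewer: str) -> bool:
        return viewer not in hidden_from.get(post_id, set())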
alistair
3 years, 7 months ago
" Something to consider adding would be a mechanism [...]


Interesting.  This would be something like a "Report and Block" feature, rather than just a Report.  It might be even better if it is orthogonal to the Support Ticket system for reporting altogether, so that it's more like a "Flag and Block."

It still has the vulnerability to brigading (stochastic or coordinated), of course, but combines multiple possible solutions to some of the problems.

" Though in such a case I do believe anyone who misreports ToS violations through their own subjective interpretation should themselves be charged with a violation.


I agree with this whole-heartedly, though for it to have any teeth, and not be subject to a whole host of problems similar to the ones I outline in the treatise, the rules being so reported would need to be crystal clear.  Otherwise, proving intent to abuse the reporting system becomes an exercise in arbitrary, and thus selective, enforcement.  Not great.

Thank you for engaging.  : )
ferretsage
3 years, 1 month ago
Here's how I'd fix the problem of people abusing blocks and moderation to vilify people they have personal vendettas against:

First, all blocks are immediately and automatically mutual. You block someone, they autoblock you. This means that unblocking of these two-way blocks can only happen by mutual consent. Allow me to explain why this is desirable.

Blocking is only intended as a shield of defense against harassment. Blocking is not supposed to be a power-tripping, vindictive sword to vanquish others in self-righteous indignation. If every block comes with the price of being automatically blocked by the person blocked, this means contact can only be restored with the consent of the blocked person. Knowing that they, too, are being blocked by their targets would take all the power and fun out of harassment blocking by bad actors who abuse mass blocking to scarlet-letter others and dehumanize them, rather than using the blocking tool as defense from harassing attacks, as blocking was intended to be. Others would be discouraged from casually joining in on the harassment blocking if they have to pay the price of being autoblocked themselves, only able to have contact restored by mutual consent.

Additionally, my proposed feature of unblock by mutual consent would also prevent blocking from being abused as an aggressive tool of harassment. Namely, where people unblock a target of harassment periodically only long enough to send nastygrams to the person they are "blocking", before reblocking and getting off on the power of insulting and accusing without having their attacks defended against.
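
Mechanically, the whole proposal fits in a few rules (a hypothetical sketch; the storage is invented for illustration): every block inserts the symmetric pair, and removal happens only once both sides have consented.

    blocks = set()           # frozenset({a, b}) pairs currently in force
    unblock_consent = {}     # pair -> members who have agreed to unblock

    def block(a: str, b: str):
        pair = frozenset((a, b))
        blocks.add(pair)                 # blocking someone blocks you too
        unblock_consent.pop(pair, None)  # any earlier consent is void

    def request_unblock(requester: str, other: str):
        pair = frozenset((requester, other))
        if pair in blocks:
            consent = unblock_consent.setdefault(pair, set())
            consent.add(requester)
            if consent == pair:          # both sides have consented
                blocks.remove(pair)
                del unblock_consent[pair]

    def is_blocked(a: str, b: str) -> bool:
        return frozenset((a, b)) in blocks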

BastionShadowpaw
2 years, 10 months ago
I know I'm quite late in reading this, but I wanted to say thank you for taking the time to share this, not only with the staff but with the rest of the site as well.

Regardless of whether or not it leads to any changes in how IB is moderated, it is useful to think about for others that are dealing with similar problems.  It has certainly been a valuable read for me.
alistair
2 years, 10 months ago
Thanks!  As I was writing this (it started as a DM trying to explain my reasoning) I realized that the concepts were general enough that it was worth sharing more broadly.  It just kind of accidentally ended up as a (the?) logical explanation for why social places on the internet are such censorious hellholes.

Alas, I'm not holding my breath that it will ever result in any meaningful changes to any existing sites.  Any community whose governance is stridently convinced of their own righteousness and infallibility will forever be subject to the arbitrary whims of that governance.

So it goes.

Thanks for engaging.  : )