1.0 Bayesian adjustment
Overview
We take it for granted that we each have our biases. Though inherent in us all, they have the potential to be harmful. To account for these biases and mitigate their influence, we look for correlations between the direct inputs submitted via review and the context surrounding them.
Many factors implicitly influence perception. Things such as:
Interpretation of another’s gender, race, ethnicity, age, sexual orientation, attractiveness, miscellaneous affiliations, and the like.
Biofeedback relating to time of day, day of week, time-since-last-meal, the damn weather, and other biological & environmental cues.
Inherent nature of the interaction: area of engagement, whether it is transactional or not, asymmetries in status, instigating motivations, and other factors that set an expectation frame. Some of these factors we can measure, others we cannot.
No amount of adjustment can guarantee uniform, unbiased assessments, because no model of such exists. We do not presume to know the precise reasons behind any one particular review, or the exact nature and extent of the biases that shape it; nor do we care. These correlations are not drawn to be punitive; they are drawn so we can make adjustments that help reduce the punitive effects of bias.
This process of Bayesian adjustment is how we apply context to the raw data gained through the review process.
(This next part is a bit dense, so examples will be provided along the way)
The values issued to Individuals and Guilds as Reputation (Nature, Aptitude, & Communication, as well as Meaningful, Fun & Approachable) are not the same as those submitted directly in the review process.
(Jimbo and Bubba went out fishing, and Jimbo thought Bubba was a real jerk!)
For all reviews submitted, the context in which the interaction occurred is parsed out per whichever metrics are made available (we cannot account for undisclosed information).
(As it happens, Bubba is a young man from Texas, and the two left for their journey at 4:30 in the morning)
These most recent values are then compared against prior values and contexts. The strength of these correlations is drawn per Bayesian inference.
(But anyone who knows Jimbo would tell you: He doesn’t care much for Texans, cannot abide young-folk, and is a real bear before his morning coffee.)
These most recent values are then interpreted through the context in which they occur, and “bias” is adjusted for in a manner inversely proportionate to how far each value falls from the expected distribution curve for that context.
(It comes as no surprise then that Jimbo didn’t have a good time)
These adjusted values are what is issued to Individuals and Guilds as Reputation.
(So you’ve got to take what he says about Bubba with a grain of salt)
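To make the walkthrough concrete, here is a minimal Python sketch of one way such an adjustment could work. Every name here (adjust_review, context_key, the shrinkage constant k, and Jimbo’s numbers) is an illustrative assumption, not the production algorithm; the shrinkage factor is a simple stand-in for full Bayesian inference.

```python
from statistics import mean, pstdev


def context_key(context: dict) -> tuple:
    """Collapse a context dict into a hashable key for grouping."""
    return tuple(sorted(context.items()))


def adjust_review(score, context, history, overall_mean, k=5.0):
    """Shrink a raw score toward the reviewer's overall mean, in proportion
    to the bias their history exhibits in this context.

    `history` maps context keys to the reviewer's past scores in that
    context. `k` controls how much evidence it takes to trust a
    correlation (a simple shrinkage stand-in for Bayesian inference).
    """
    past = history.get(context_key(context), [])
    if len(past) < 2:
        return score  # no correlation to draw on; leave the raw score alone

    mu_c = mean(past)               # expected score given this context
    sigma_c = pstdev(past) or 1.0   # spread of the expected distribution
    bias = mu_c - overall_mean      # how far this context pulls the reviewer

    # Correlation strength grows with evidence but stays below 1.
    strength = len(past) / (len(past) + k)

    # "Surprise" measures how far the new score falls from the expected
    # distribution curve. An unsurprising score (one that matches the
    # bias pattern) is adjusted heavily; a surprising one is adjusted
    # lightly. That is the inverse proportionality described above.
    surprise = abs(score - mu_c) / sigma_c
    weight = strength / (1.0 + surprise)

    return score - weight * bias


# Jimbo's history: he consistently rates young Texans about 2 stars at
# dawn, against an overall average of 4. His new 2-star review of Bubba
# fits that pattern, so most of the gap reads as bias, not signal.
dawn_texan = {"age": "young", "origin": "TX", "hour": 4}
jimbo_history = {context_key(dawn_texan): [2.0, 1.5, 2.5, 2.0]}
adjusted = adjust_review(2.0, dawn_texan, jimbo_history, overall_mean=4.0)
print(round(adjusted, 2))  # ~2.89: nudged toward Jimbo's usual 4, not all the way
```

Note the design choice: the raw review is never discarded, only reinterpreted, and a review that breaks the reviewer’s own pattern passes through largely untouched.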
Where it occurs / how we initiate it
Raw data is received through the Review process
The review's context is compared against prior instances sharing similar contexts
The review is then algorithmically adjusted to mitigate bias, per the correlative patterns exhibited, in proportion to the strength of those correlations.
These adjusted values (not the raw data from the review) are what is issued as Reputation.
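As a rough sketch of that flow, assuming a simple in-memory store: ReputationLedger, receive_review, and the stand-in adjust function below are all hypothetical names, and the fuller adjustment logic is the sketch in the Overview.

```python
from collections import defaultdict


class ReputationLedger:
    """Hypothetical store: raw reviews grouped by reviewer and context,
    with only adjusted values issued as Reputation."""

    def __init__(self):
        self.raw = defaultdict(list)         # (reviewer, context key) -> raw scores
        self.reputation = defaultdict(list)  # subject -> issued (adjusted) values

    def receive_review(self, reviewer, subject, score, context, overall_mean=4.0):
        key = (reviewer, tuple(sorted(context.items())))
        prior = self.raw[key]                          # instances sharing this context
        adjusted = adjust(score, prior, overall_mean)  # mitigate bias per correlation
        self.raw[key].append(score)                    # raw data kept for future inference
        self.reputation[subject].append(adjusted)      # the adjusted value is what is issued
        return adjusted


def adjust(score, prior, overall_mean):
    """Stand-in adjustment: shrink toward the overall mean as the context
    correlation strengthens (see the fuller sketch in the Overview)."""
    if not prior:
        return score
    strength = len(prior) / (len(prior) + 5)
    bias = sum(prior) / len(prior) - overall_mean
    return score - strength * bias
```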