Here's what I've been working on...
Imagine students taking a standardized test. Their goal is to select answers that will be marked correct. In selecting what they believe is the right answer, they must also trust that this is the answer the test-giver will consider right, that the test has no misprints, that it will be graded correctly, and so on. In short, that if they do their part in selecting the right answer, the test-givers will do theirs in marking it correct. On the test-givers' side, they have to believe they have made the test properly and that the answers they will mark as correct are the ones well-prepared students will select.
Now suppose you want to cheat. You don't know the others, so you don't know who's worth copying off of. If you could compare their answers to the key, you'd know who to copy off of, but if you could do that you wouldn't need to. No joy there.
Now suppose that in addition to selecting an answer, you rate your confidence in that answer, say on a scale from 1 to 5. You could imagine the test-givers using this as a sort of wager, giving students more points for confidently selected right answers than for guesses, but otherwise it wouldn't change much for them.
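To make the wager concrete, here's a minimal sketch in Python. The scoring rule is just one assumption about how it could work, not something I've settled on: a right answer earns its stated confidence in points, and a wrong answer earns nothing.

```python
def score_answer(selected, correct, confidence):
    """One hypothetical wager rule: a right answer earns its stated
    confidence (1-5) in points; a wrong answer earns nothing. A harsher
    variant could subtract the confidence on a miss."""
    return confidence if selected == correct else 0

# A confident correct answer beats a correct guess:
print(score_answer("B", "B", 5))  # 5
print(score_answer("B", "B", 1))  # 1
print(score_answer("C", "B", 5))  # 0
```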
But it would change a lot for the students. Now you have an obvious way of deciding who to copy off of.
Now suppose the test is not actually being graded against a key; instead, the answers the students select are tallied as votes, and the biggest vote-getter is treated as the right answer. Without the confidence mechanic, and assuming the students are relatively well-prepared, this makes surprisingly little difference. (I've been running some little "simulations" in Excel. If students mostly choose the right answer and wrong answers are randomly distributed, the right answer still usually wins.)
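Here's roughly that experiment as a Python sketch rather than an Excel sheet. The parameters are illustrative assumptions, not the exact ones I used: ten students, four choices, and each student independently picking the right answer with probability p_right, otherwise a uniformly random wrong answer.

```python
import random
from collections import Counter

CHOICES = ["A", "B", "C", "D"]

def student_answer(correct, p_right):
    """Pick the right answer with probability p_right, otherwise a
    uniformly random wrong answer."""
    if random.random() < p_right:
        return correct
    return random.choice([c for c in CHOICES if c != correct])

def plurality_accuracy(n_students=10, p_right=0.6, trials=10_000):
    """Fraction of trials in which the plurality vote lands on the
    right answer (ties counted as failures, to be conservative)."""
    correct = "A"
    wins = 0
    for _ in range(trials):
        votes = Counter(student_answer(correct, p_right)
                        for _ in range(n_students))
        top, top_count = votes.most_common(1)[0]
        if top == correct and list(votes.values()).count(top_count) == 1:
            wins += 1
    return wins / trials

print(plurality_accuracy())  # close to 1 when students mostly answer correctly
```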
But with the confidence mechanic, things can get weird, because students can collude to move the answer. When I tested this, it looked like it took only two colluding students out of ten to make a noticeable difference, and three was overkill. (The idea is for the conspirators all to confidently select the same answer; they pick up some help from whoever actually believed that answer to be right, and often enough they swamp the other answers, including the right one, which carry only random confidence. Thus their choice tends to win more than it should.)
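And here's the colluding version, under the added assumption that the tally is confidence-weighted: each vote counts for its stated confidence, honest students attach a uniformly random 1-5 confidence, and the colluders all pick the same wrong answer at maximum confidence.

```python
import random
from collections import Counter

CHOICES = ["A", "B", "C", "D"]

def collusion_rate(n_students=10, n_colluders=2, p_right=0.6, trials=10_000):
    """Fraction of trials in which the colluders' wrong answer wins a
    confidence-weighted tally (ties counted as failures). Honest students
    answer as before with a random 1-5 confidence; colluders all pick "B"
    at confidence 5."""
    correct, target = "A", "B"
    wins = 0
    for _ in range(trials):
        weights = Counter()
        for _ in range(n_students - n_colluders):
            if random.random() < p_right:
                pick = correct
            else:
                pick = random.choice([c for c in CHOICES if c != correct])
            weights[pick] += random.randint(1, 5)  # random honest confidence
        weights[target] += 5 * n_colluders  # the conspirators' bloc
        top, top_weight = weights.most_common(1)[0]
        if top == target and list(weights.values()).count(top_weight) == 1:
            wins += 1
    return wins / trials

print(collusion_rate(n_colluders=0))  # baseline: the wrong answer rarely wins
print(collusion_rate(n_colluders=2))
print(collusion_rate(n_colluders=3))
```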
What's the point of all this?
I wanted to see if we could build up a community's idea of truth from scratch. Test-taking makes a good stand-in for truth because there is a mechanical sense of correctness here, which we can swap out, via voting, for something like consensus, and we have a way of adding in confidence or certainty as a factor -- socially this would be something like reputation. The goal is to model a speech community without presupposing the concept of truth, so that their concept of truth is something the model explains rather than assumes.
But the test-taking example leads naturally to the idea of cheating. In broader social terms, you can imagine cheaters as people who value prestige and standing above truth, and it turns out even a smallish group can collude to manipulate the community's consensus. And by manipulating the consensus they can reinforce their reputation as the people who know and speak the truth, despite having other goals entirely.
So I'm a little stuck. I hadn't foreseen the cheating issue, and I'm not sure where to go with this next.