
Trust (was "Re: Trust and Discovery")



I've separated the problem of validating and assimilating factoids into
its own thread, on the assumption that more threads with smaller
messages are better.


On Fri, 21 Jan 2000 13:28:50 +0900, scozens@pwj.co.jp wrote:
>

[Attribution for >> was lost.]
>
>> I see three possibilities:
>>  i) really chatty bots
>>  ii) really slow bots
>>  iii) really smart bots
>
>Let's see if I can remember the saying:
>     quick, reliable, cheap : choose two, you can't have all three.
>
>> Plus if any joe can run an infobot and be "discovered" and have
>> their factoids shared, bad things are inevitable:
>> "teehee, I made purl say 'poop'".
>
>This happens already, and is, I suspect, a deliberate decision - purl
>is not only a *member* of the community, but a *reflection* of it.
>
>> There should not only be weight given to factoids, but weight given to
>> bots.  Trust should be gained.
>
>This isn't as hard as it sounds, so long as you're prepared to solve
>some fairly fundamental artificial intelligence problems along the way.
>How it would work is this:
>
>Factoid not found. Ask peers.
>One response -> Add response to database, do not alter trust.
>Two responses -> Choose response from most trusted bot then proceed as
>below.
>More than two responses -> Compare all factoids, rank by similarity*.
>Choose response from most trusted bot from factoids that are similar.
>Add trust to all bots producing factoids similar to chosen one, remove
>trust from bots producing differing factoids.
>
>* Yes, this part is impossible. Not to worry. We'll work something out.
>
>So, your bot asks `what is a square?'
>Responses:
>BotA (Trust 3) : A square is a four-sided shape
>BotB (Trust 3) : Poo!
>BotC (Trust 8) : A square is a shape with four corners
>BotD (Trust 9) : A square is foo bar baz
>BotE (Trust 5) : A square is four lines connected by right-angles.
>
>Sort responses by similarity and trustworthiness:
>
>BotB BotD          BotE BotA BotC
>
>Select BotC's answer. Alter trust:
>BotB -2
>BotD -2
>BotE +3
>BotA +3
>BotC +3
>
>Do we do this just for bots, or for humans too?


Humans, too!
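
For what it's worth, the quoted loop can be sketched in a few lines.
The similarity test is the part marked impossible above, so similar()
below is a crude word-overlap stand-in of my own invention; the +3/-2
deltas follow the worked example, and trust is floored at zero:

```python
def similar(a, b, threshold=0.3):
    """Placeholder similarity: Jaccard overlap of lowercased word sets.
    A real bot would plug in something much smarter here."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return bool(wa | wb) and len(wa & wb) / len(wa | wb) >= threshold

def resolve(responses, reward=3, penalty=2):
    """responses: {bot: (trust, text)}.  Returns (chosen_text, new_trust)."""
    trust = {bot: t for bot, (t, _) in responses.items()}
    texts = {bot: text for bot, (_, text) in responses.items()}
    if len(responses) == 1:
        return next(iter(texts.values())), trust  # lone answer: no trust change
    # Group answers by similarity to each candidate; keep the biggest group.
    groups = [[b for b in texts if similar(texts[a], texts[b])] for a in texts]
    cluster = max(groups, key=len)
    # The winner is the most trusted bot *within* that group; the group
    # gains trust, everyone outside it loses some (floored at zero).
    chosen = max(cluster, key=lambda b: trust[b])
    for b in texts:
        trust[b] = trust[b] + reward if b in cluster else max(0, trust[b] - penalty)
    return texts[chosen], trust
```

Note that BotD's trust of 9 doesn't save "foo bar baz": the similarity
grouping happens first, and the trust comparison only runs inside the
winning group.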

I still like the idea of combining factoids.  In your example, Bot0
(your bot) would build a new factoid from the others:

  A square is a four-sided shape or Poo! or a shape with four corners
  or foo bar baz or four lines connected by right-angles

Additionally, it might be fun to store with the factoid which parts
came from which places at what times:

  A square is
  (BotA@948461095:a four-sided shape) or
  (BotB@948462000:Poo!) or
  (BotC@948461111:a shape with four corners) or
  (BotD@948459833:foo bar baz) or
  (BotE@948461000:four lines connected by right-angles.)

Times and sources would be useful when acquiring updates.

A fun forms interface or some extra syntax could let you forget
subsets of factoids:

  bot, forget a square/BotB
  bot, replace square/BotD with foo bar baz quux
  bot, a square is also the opposite of a hep cat

That eliminates "Poo!", replaces "foo bar baz" with an updated
version, and adds a new subfactoid from a local user.  Now it's
hypothetically stored as:

  A square is
  (BotA@948461095:a four-sided shape) or
  (BotC@948461111:a shape with four corners) or
  (BotD@948459833:foo bar baz quux) or
  (BotE@948461000:four lines connected by right-angles.) or
  (User@948476281:the opposite of a hep cat)
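
A minimal sketch of that store, with made-up method names mirroring the
forget/replace/also commands above (the real syntax would be whatever
the bot's parser grows):

```python
import time

class Factoid:
    """Attributed subfactoids: each part remembers its source and time."""

    def __init__(self, subject):
        self.subject = subject
        self.parts = []                      # list of (source, timestamp, text)

    def add(self, source, text, when=None):
        """'bot, X is also ...' -- append a new subfactoid."""
        self.parts.append((source, when or int(time.time()), text))

    def forget(self, source):
        """'bot, forget X/source' -- drop that source's subfactoids."""
        self.parts = [p for p in self.parts if p[0] != source]

    def replace(self, source, text, when=None):
        """'bot, replace X/source with ...' -- swap in an updated version,
        keeping its position and refreshing its timestamp."""
        self.parts = [(s, (when or int(time.time())) if s == source else t,
                       text if s == source else body)
                      for s, t, body in self.parts]

    def render(self):
        return self.subject + " is " + " or ".join(t for _, _, t in self.parts)

# Replaying the example edits:
sq = Factoid("A square")
sq.add("BotA", "a four-sided shape", 948461095)
sq.add("BotB", "Poo!", 948462000)
sq.add("BotD", "foo bar baz", 948459833)
sq.forget("BotB")
sq.replace("BotD", "foo bar baz quux", 948476000)
```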

Editing factoids might also subtract some trust from bots B and D in
the peers table.  The amount of trust subtracted might be proportional
to the amount of trust the local bot places in the people making
changes.

Local user trust?  That's a hard one; it may be mask or nick based,
like regular karma.  Editors may have to log in to participate in
trust, or it could be as relaxed as karma and just work out.

Since all the factoid's authors are known at least internally, point
awards (and penalties) to factoids would be divided among the
factoid's authors and added (subtracted, in the case of penalties) to
their accumulators.  Negative trust doesn't exist; 0 is the minimum.
Factoids themselves don't hold karma; authors would have "authorship
karma" or something.

  Or perhaps award an amount of "factoid karma" for each
  local fetch; assuming that the factoid must be good if
  nobody bothers to change it?
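
Dividing an award (or penalty) among a factoid's authors, with the zero
floor, is simple arithmetic; the function and names here are
illustrative:

```python
def award(karma, authors, points):
    """Split a point award (or penalty) evenly among a factoid's
    authors, accumulating into each author's 'authorship karma'.
    Negative trust doesn't exist, so each account floors at zero."""
    share = points / len(authors)
    for a in authors:
        karma[a] = max(0, karma.get(a, 0) + share)
    return karma
```

So a -4 penalty on a two-author factoid costs each author 2 points, but
an author sitting at 1 only drops to 0, not -1.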

This brings into play a second tier of inter-bot trust: the local
bot's trust in its own factoid authors.  To rehash the original
factoid transaction, with a local-trust twist:

Bot0 (your bot) asks: What is a square?
BotA (Trust 3) says: A square is (UserAx,12345,97:a four-sided
                     shape)

  Oh, the fields are (Author,Time,AuthorTrust).  In the BotA
  response, UserAx added the subfactoid at 12345.  At the
  response time, UserAx has a local trust of 97.

BotB (Trust 3) says: A square is (UserBx,12346,0:Poo!)
BotC (Trust 8) says: A square is (UserCx,12347,133:a shape with
                     four corners)
BotD (Trust 9) says: A square is (UserDx,12348,2:foo bar baz)
BotE (Trust 5) says: A square is (UserEx,12349,212:four lines
                     connected by right-angles)

Overall factoid trust would be the remote bots' trust in the factoid
author, weighted by the local bot's trust in the remote bot.  For fun,
let's try (bot trust * user trust).  Sorted in decreasing order of
trust:

BotC/UserCx = 1064 = a shape with four corners
BotE/UserEx = 1060 = four lines connected by right-angles
BotA/UserAx = 291  = a four-sided shape
BotD/UserDx = 18   = foo bar baz
BotB/UserBx = 0    = Poo!

Weighing and evaluating trust would be a lot easier if trust had a
small set of values.  I've specified a similar trust scheme with four
inter-system and four intra-system classes; the combinations fall into
a small set of overall security classes which are easy enough to
manage.  This won't work for a system where trust is a fuzzy value.


Assuming factoids at and above the median are kept and combined, you
get:

  Bot0: A square is a shape with four corners or four lines connected
        by right-angles or a four-sided shape
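
Using the example's numbers, the (bot trust * author trust) weighting
plus the at-or-above-median cut reproduces that combination exactly; a
quick sketch:

```python
from statistics import median

# (local trust in bot, bot's trust in author, subfactoid), per the example.
responses = [
    (3, 97,  "a four-sided shape"),                    # BotA / UserAx
    (3, 0,   "Poo!"),                                  # BotB / UserBx
    (8, 133, "a shape with four corners"),             # BotC / UserCx
    (9, 2,   "foo bar baz"),                           # BotD / UserDx
    (5, 212, "four lines connected by right-angles"),  # BotE / UserEx
]

# Score each subfactoid, sort by decreasing trust, keep those at or
# above the median score, and join the survivors with "or".
scored = sorted(((bt * ut, text) for bt, ut, text in responses), reverse=True)
cut = median(score for score, _ in scored)             # 291 for these numbers
combined = "A square is " + " or ".join(
    text for score, text in scored if score >= cut)
print(combined)
```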


That works out pretty well, but it's all contrived examples.  These
sorts of things seem to break terribly in the field.


-- Rocco Caputo / troc@netrus.net / Thinks he's human, too.