
Re: Trust



Hi, I just started reading this mailing list, but I thought I had 
something to contribute on this topic anyway.

I think it would be interesting if each factoid had a trust value 
assigned to it, as well as every bot and person.  This way a factoid's 
trust would actually change over time.  Let me give a simple example; we 
will start with a one-on-one conversation:

Human1 (Trust 3): "infobot: A square is a foo bar baz"

- infobot does not know this factoid, so it adds it to its database with a 
trust of 3.

Human2: infobot, square?
infobot: A square is a foo bar baz
Human3 (trust 2): no infobot, a square is a shape with four sides

infobot adds to database: trust 2: a shape with four sides
infobot edits trust: trust 1: a foo bar baz

Human2: infobot, square?
infobot: A square is (1) a foo bar baz (2) a shape with four sides
Human3 (trust 1): no infobot, a square is a shape with four sides

infobot removes "a foo bar baz" from the database; it has reached 0 trust
infobot increases the trust of "a shape with four sides" to 3
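
To pin down the arithmetic, here is a minimal Python sketch of that
bookkeeping.  The dictionary layout and the function names are my own
invention for illustration, not anything infobot actually has:

    # Minimal sketch of the add/verify/negate bookkeeping described above.
    # The data layout and names are hypothetical, not infobot's internals.

    factoids = {}  # key -> {answer text: trust}

    def learn(key, answer, speaker_trust):
        """Add a new answer with the speaker's trust, or reinforce one."""
        answers = factoids.setdefault(key, {})
        answers[answer] = answers.get(answer, 0) + speaker_trust

    def negate(key, answer, speaker_trust):
        """Lower an answer's trust; drop it once it reaches 0 or below."""
        answers = factoids.get(key, {})
        if answer in answers:
            answers[answer] -= speaker_trust
            if answers[answer] <= 0:
                del answers[answer]

    # Replaying the conversation above:
    learn("square", "a foo bar baz", 3)            # Human1, trust 3
    learn("square", "a shape with four sides", 2)  # Human3's correction, trust 2
    negate("square", "a foo bar baz", 2)
    learn("square", "a shape with four sides", 1)  # Human3 again, trust 1
    negate("square", "a foo bar baz", 1)           # hits 0, removed
    print(factoids["square"])  # {'a shape with four sides': 3}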

------------------
This is the simplest example, just to get started.  Each time someone 
verifies a factoid its trust increases; each time someone negates a factoid 
its trust decreases.  It gets more fun in a more complicated situation.

Human: infobot, what is a square?
infobot: I don't know, let me ask around...
--infobot asks a few bots on its network

BotA (trust 2): a foo bar baz
BotB (trust 6): the shape a box is made of
BotC (trust 8): a shape with four sides
BotD (trust 3): the bad dressin pog thing

Since infobot got more answers than it needed, it will try to weed them out 
by reducing every answer other than the highest one by the distance it is 
from the highest one:

BotC has the highest trust and becomes the base:

BotA (trust 2): has a distance of 6 from BotC, giving it a trust of -4, 
which is <= 0, so that answer is removed
BotB (trust 6): has a distance of 2 from BotC, so its factoid survives with 
a trust of 4
BotD (trust 3): has a distance of 5 from BotC, so its factoid goes down in 
smoke with a rating of -2

The end result would be two factoids:
1.(trust 8) a shape with four sides
2.(trust 4) the shape a box is made of

This way, the higher a bot's trust, the better chance it has of keeping 
untrustworthy answers from getting added.
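
A rough sketch of that weeding step, assuming the same sort of
answer-to-trust table as before (the function name and layout are made up):

    # Hypothetical sketch of the weeding step: every answer other than the
    # most-trusted one loses the distance between it and the top trust, and
    # anything at or below 0 is thrown away.

    def weed(answers):
        """answers: dict of factoid text -> trust of the answering bot."""
        top = max(answers.values())
        kept = {}
        for text, trust in answers.items():
            adjusted = trust if trust == top else trust - (top - trust)
            if adjusted > 0:
                kept[text] = adjusted
        return kept

    replies = {
        "a foo bar baz": 2,               # BotA
        "the shape a box is made of": 6,  # BotB
        "a shape with four sides": 8,     # BotC
        "the bad dressin pog thing": 3,   # BotD
    }
    print(weed(replies))
    # {'the shape a box is made of': 4, 'a shape with four sides': 8}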

Now let's say infobot picks up a couple of extra factoids from watching 
channels, so it now has the following:
1.(trust 8) a shape with four sides
2.(trust 4) the shape a box is made of
3.(trust 6) has all equal sides
4.(trust 3) a four sided shape

infobot now looks at some of the people on its list of trusted users and 
finds one with a high trust and msgs them:

infobot: Could I ask you a quick factoid question?
trustedone (trust 10): yes infobot
infobot: I have heard that a square is the following:

         1.a shape with four sides
         2.the shape a box is made of
         3.has all equal sides
         4.a four sided shape

trustedone: infobot, 1 and 4 are the same and 2 is wrong
infobot: do you know if 3 (has all equal sides) is right?
trustedone: no infobot
<end conversation>

The end result of this short conversation is the following:

- (trust 4) the shape a box is made of: gets removed (10 trust is 
subtracted from 4, leaving -6)

- since factoids 1 and 4 are the same, infobot takes the factoid with the 
highest trust, a shape with four sides (8), adds the current trust to it 
(8+10) for 18, and subtracts that trust from the other one (a four sided 
shape) (3-10) for -7, removing it.
- because it could not get an answer on 3 (has all equal sides), it will 
have to verify with someone else, or leave the factoid with a trust of 6.

Current trusts for a square:
1. (trust 18) a shape with four sides
2. (trust 6) has all equal sides
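
Here is a hypothetical sketch of applying the trusted user's verdict, using
the same assumed dict-of-answers layout; the "same"/"wrong"/"unknown" labels
just stand in for the three kinds of reply in the conversation:

    # Sketch of applying a trusted reviewer's answers.  Duplicates collapse
    # into the highest-trust phrasing, wrong answers lose the reviewer's
    # trust, and anything the reviewer couldn't judge is left alone.

    def apply_review(answers, reviewer_trust, same=(), wrong=(), unknown=()):
        if same:
            keep = max(same, key=lambda text: answers[text])
            answers[keep] += reviewer_trust
            for text in same:
                if text != keep:
                    answers[text] -= reviewer_trust
        for text in wrong:
            answers[text] -= reviewer_trust
        # "unknown" answers are untouched; they wait for another reviewer.
        for text in list(answers):
            if answers[text] <= 0:
                del answers[text]
        return answers

    square = {
        "a shape with four sides": 8,
        "the shape a box is made of": 4,
        "has all equal sides": 6,
        "a four sided shape": 3,
    }
    apply_review(square, 10,
                 same=("a shape with four sides", "a four sided shape"),
                 wrong=("the shape a box is made of",),
                 unknown=("has all equal sides",))
    print(square)  # {'a shape with four sides': 18, 'has all equal sides': 6}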

I like this method because it continually reinforces factoids that many 
people agree with.  It does make it hard to get rid of factoids that have 
been verified by many people, though, because one person can't remove a 
really entrenched factoid (which is both good and bad).  I think with this 
system there should be some way to scale a factoid, or put a limit on how 
high a trust a factoid can have, so we don't create zealotbot, which refuses 
to forget long-used factoids even if they are wrong.
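
One way to keep zealotbot at bay would be to clamp trust whenever it
changes; a toy version, with a ceiling picked purely for illustration:

    # Toy example of capping trust so no factoid becomes unremovable.
    # MAX_TRUST is an arbitrary ceiling chosen for illustration.
    MAX_TRUST = 25

    def adjust_trust(current, delta):
        """Apply a change but never let trust climb above the cap."""
        return min(current + delta, MAX_TRUST)

    print(adjust_trust(18, 10))  # 25, not 28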

Two disadvantages to this method are:
1. Bots ask questions a lot.
2. A person could keep telling a bot a fact to artificially inflate it.

You could shut the bot up in a couple of ways.  You could have infobot only 
watch channels and only reinforce if someone actually tells infobot to, or 
if infobot sees a factoid in a channel.  You could also have infobot 
increase the trust of a factoid if a trusted person tells infobot to tell 
someone else a fact.  The trust of the factoid could even be increased by a 
percentage of the person's total trust, because telling infobot to tell 
someone else is not as powerful as directly telling infobot a fact is correct.

To combat autoinflation, infobot could keep track of people that have told 
it a factoid for a limited time, say a week, so that in that week they 
cannot reinforce the same factoid.
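
A rough sketch of that cooldown; the one-week window and the data layout
are assumptions, not existing infobot behavior:

    # Sketch of the one-week cooldown on reinforcing the same factoid.
    import time

    WEEK = 7 * 24 * 60 * 60
    recent = {}  # (nick, key, answer) -> timestamp of last reinforcement

    def may_reinforce(nick, key, answer, now=None):
        """True only if this nick hasn't reinforced this exact answer
        within the last week."""
        now = time.time() if now is None else now
        last = recent.get((nick, key, answer))
        if last is not None and now - last < WEEK:
            return False
        recent[(nick, key, answer)] = now
        return True

    print(may_reinforce("nathan", "square", "a shape with four sides"))  # True
    print(may_reinforce("nathan", "square", "a shape with four sides"))  # False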

One last issue is people's trust and how to raise or lower it.
What infobot could do is, each time someone verifies a factoid, raise the 
trust of the person who originally gave infobot that factoid a little.  If 
someone discounts a factoid, infobot lowers the original contributor's 
trust a little.  How much it raises or lowers depends on the scale 
used.  infobot could also watch a channel, and if a person says many things 
in conversation that match factoids infobot has a high trust in, infobot 
could raise that person's trust a little, so that it's not necessary to 
give tons of new facts to gain trust; verifying many old ones will work 
too.
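
Something like the following could do that nudging.  The step sizes are
made up, and remembering who originally taught each factoid is the
assumption that makes it work:

    # Sketch of nudging a person's trust when factoids they contributed
    # are later verified or discounted.  Step sizes are illustrative.
    people = {}   # nick -> trust
    authors = {}  # (key, answer) -> nick who originally taught it

    VERIFY_STEP = 1
    DISCOUNT_STEP = 1

    def on_verified(key, answer):
        nick = authors.get((key, answer))
        if nick is not None:
            people[nick] = people.get(nick, 0) + VERIFY_STEP

    def on_discounted(key, answer):
        nick = authors.get((key, answer))
        if nick is not None:
            people[nick] = max(0, people.get(nick, 0) - DISCOUNT_STEP)

    authors[("square", "a shape with four sides")] = "Human3"
    on_verified("square", "a shape with four sides")
    print(people)  # {'Human3': 1}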

Oh, one more thing in this stream of thought.  There probably should be a 
way to let bots tell each other how trusted a factoid they have is.  That 
way, if a bot has some new erroneous factoid it just got, it won't ruin its 
trust with other bots by giving it away as a trusted fact.  When a friendly 
bot asks another for a factoid, it should return the trust it has in the 
factoid.  This way, if the bot that receives the factoid later discounts 
it, and the factoid had a low trust with the other bot anyway, that bot 
won't be penalized for giving a bad factoid.  It should only be penalized 
for factoids that are both erroneous and trusted.
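
One way to act on the trust a peer claims would be to scale the penalty by
it, so a bot that passed along a fact it barely trusted loses almost
nothing; the numbers and names here are invented:

    # Sketch of penalizing a peer bot in proportion to the trust it
    # claimed for a factoid that later gets discounted.
    bot_trust = {"BotA": 2, "BotB": 6}

    def on_peer_factoid_discounted(bot, claimed_trust, scale=0.5):
        """A low-trust claim costs the peer little; a confidently
        asserted factoid that turns out wrong costs a lot."""
        penalty = int(claimed_trust * scale)
        bot_trust[bot] = max(0, bot_trust.get(bot, 0) - penalty)

    on_peer_factoid_discounted("BotA", claimed_trust=1)  # barely dented
    on_peer_factoid_discounted("BotB", claimed_trust=8)  # takes a real hit
    print(bot_trust)  # {'BotA': 2, 'BotB': 2}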

The only thing I see that's bad about all this is that the factoid database 
could get pretty big, because infobot would have to remember the people 
that told it factoids, and, for at least a little while, the people that 
reinforce them.  I do think it would make a much smarter infobot though.

Oh, one last thing: it would be interesting to have factoids 
degrade in trust over time, so that a factoid that hasn't been reinforced 
in any way for a long time and has a low trust would get removed 
automatically.  It might make the bot keep only the more trusted answers.
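
A sketch of that decay; the interval and the amount knocked off are
arbitrary choices for illustration:

    # Sketch of letting unreinforced factoids fade: periodically knock a
    # point off anything untouched for a while and drop it at 0 trust.
    import time

    STALE_AFTER = 30 * 24 * 60 * 60  # a month without reinforcement
    last_touched = {}                # (key, answer) -> timestamp

    def decay(factoids, now=None):
        now = time.time() if now is None else now
        for key, answers in factoids.items():
            for answer in list(answers):
                touched = last_touched.get((key, answer), now)
                if now - touched > STALE_AFTER:
                    answers[answer] -= 1
                    if answers[answer] <= 0:
                        del answers[answer]

    db = {"square": {"a shape with four sides": 18,
                     "the bad dressin pog thing": 1}}
    last_touched[("square", "the bad dressin pog thing")] = 0  # ancient
    decay(db, now=STALE_AFTER * 2)
    print(db)  # {'square': {'a shape with four sides': 18}}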

Anyway, I hope this theory is useful, and that I haven't retread ground 
that was already covered.  Thanks for taking the time to read it.

         Thank you,
         Nathan Ewing / nre7468@rit.edu

At 09:24 AM 1/21/00 -0500, you wrote:
>I've separated the problem of validating and assimilating factoids on
>the assumption that more threads with smaller messages is good.
>
>
>On Fri, 21 Jan 2000 13:28:50 +0900, scozens@pwj.co.jp wrote:
> >
>
>[Attribution for >> was lost.]
> >
> >> I see three possibilities:
> >>  i) really chatty bots
> >>  ii) really slow bots
> >>  iii) really smart bots
> >
> >Let's see if I can remember the saying:
> >     quick, reliable, cheap : choose two, you can't have all three.
> >
> >> Plus if any joe can run an infobot and be "discovered" and have
> >> their factoids shared, bad things are inevitable:
> >> "teehee, I made purl say 'poop'".
> >
> >This happens already, and is, I suspect, a deliberate decision - purl
> >is not only a *member* of the community, but a *reflection* of it.
> >
> >> There should not only be weight given to factoids, but weight given to
> >> bots.  Trust should be gained.
> >
> >This isn't as hard as it sounds, so long as you're prepared to solve
> >some fairly fundamental artificial intelligence problems along the way.
> >How it would work is this:
> >
> >Factoid not found. Ask peers.
> >One response -> Add response to database, do not alter trust.
> >Two responses -> Choose response from most trusted bot then proceed as
> >below.
> >More than one response -> Compare all factoids, rank by similarity*.
> >Choose response from most trusted bot from factoids that are similar.
> >Add trust to all bots producing factoids similar to chosen one, remove
>trust from bots producing differing factoids.
> >
> >* Yes, this part is impossible. Not to worry. We'll work something out.
> >
> >So, your bot asks `what is a square?'
> >Responses:
> >BotA (Trust 3) : A square is a four-sided shape
> >BotB (Trust 3) : Poo!
> >BotC (Trust 8) : A square is a shape with four corners
> >BotD (Trust 9) : A square is foo bar baz
> >BotE (Trust 5) : A square is four lines connected by right-angles.
> >
> >Sort response by similarity and trustworthiness:
> >
> >BotB BotD          BotE BotA BotC
> >
> >Select BotC's answer. Alter trust:
> >BotB -2
> >BotD -2
> >BotE +3
> >BotA +3
> >BotC +3
> >
> >Do we do this just for bots, or for humans too?
>
>
>Humans, too!
>
>I still like the idea of combining factoids.  In your example, Bot0
>(your bot) would build a new factoid from the others:
>
>   A square is a four-sided shape or Poo! or a shape with four corners
>   or foo bar baz or four lines connected by right-angles
>
>Additionally, it might be fun to store with the factoid which parts
>came from which places at what times:
>
>   A square is
>   (BotA@948461095:a four-sided shape) or
>   (BotB@948462000:Poo!) or
>   (BotC@948461111:a shape with four corners) or
>   (BotD@948459833:foo bar baz) or
>   (BotE@948461000:four lines connected by right-angles.)
>
>Times and sources would be useful when acquiring updates.
>
>A fun forms interface or some extra syntax could let you forget
>subsets of factoids:
>
>   bot, forget a square/BotB
>   bot, replace square/BotD with foo bar baz quux
>   bot, a square is also the opposite of a hep cat
>
>That eliminates "Poo!", replaces "foo bar baz" with an updated
>version, and adds a new subfactoid from a local user.  Now it's
>hypothetically stored as:
>
>   A square is
>   (BotA@948461095:a four-sided shape) or
>   (BotC@948461111:a shape with four corners) or
>   (BotD@948459833:foo bar baz quux) or
>   (BotE@948461000:four lines connected by right-angles.) or
>   (User@948476281:the opposite of a hep cat)
>
>Editing factoids might also subtract some trust from bots B and D in
>the peers table.  The amount of trust subtracted might be proportional
>to the amount of trust the local bot places in the people making
>changes.
>
>Local user trust?  That's a hard one; it may be mask or nick based,
>like regular karma.  Editors may have to log in to participate in
>trust, or it could be as relaxed as karma and just work out.
>
>Since all the factoid's authors are known at least internally, point
>awards (and penalties) to factoids would be divided among the
>factoid's authors and added (subtracted, in the case of penalties) to
>their accumulators.  Negative trust doesn't exist; 0 is the minimum.
>Factoids themselves don't hold karma; authors would have "authorship
>karma" or something.
>
>   Or perhaps award an amount of "factoid karma" for each
>   local fetch; assuming that the factoid must be good if
>   nobody bothers to change it?
>
>This brings into play a second tier of inter-bot trust: the local
>bot's trust in its own factoid authors.  To rehash the original
>factoid transaction, with a local-trust twist:
>
>Bot0 (your bot) asks: What is a square?
>BotA (Trust 3) says: A square is (UserAx,12345,97:a four-sided
>                      shape)
>
>   Oh, the fields are (Author,Time,AuthorTrust).  In the BotA
>   response, UserAx added the subfactoid at 12345.  At the
>   response time, UserAx has a local trust of 97.
>
>BotB (Trust 3) says: A square is (UserBx,12346,0:Poo!)
>BotC (Trust 8) says: A square is (UserCx,12347,133:a shape with
>                      four corners)
>BotD (Trust 9) says: A square is (UserDx,12348,2:foo bar baz)
>BotE (Trust 5) says: A square is (UserEx,12349,212:four lines
>                      connected by right-angles)
>
>Overall factoid trust would be the remote bots' trust in the factoid
>author, weighted by the local bot's trust in the remote bot.  For fun,
>let's try (bot trust * user trust).  Sorted in decreasing order of
>trust:
>
>BotC/UserCx = 1064 = a shape with four corners
>BotE/UserEx = 1060 = four lines connected by right-angles
>BotA/UserAx = 291  = a four-sided shape
>BotD/UserDx = 18   = foo bar baz
>BotB/UserBx = 0    = Poo!
>
>Weighing and evaluating trust would be a lot easier if trust had a
>small set of values.  I've specified a similar trust scheme with four
>inter-system and four intra-system classes; the combinations fall into
>a small set of overall security classes which are easy enough to
>manage.  This won't work for a system where trust is a fuzzy value.
>
>
>Assuming factoids at and above the median are kept and combined, you
>get:
>
>   Bot0: A square is a shape with four corners or four lines connected
>         by right-angles or a four-sided shape
>
>
>That works out pretty well, but it's all contrived examples.  These
>sorts of things seem to break terribly in the field.
>
>
>-- Rocco Caputo / troc@netrus.net / Thinks he's human, too.