WO2001041014A1 - Expertise-weighted group evaluation of user content quality over computer network - Google Patents


Info

Publication number
WO2001041014A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
users
regard
item
items
Prior art date
Application number
PCT/US2000/032159
Other languages
French (fr)
Other versions
WO2001041014A9 (en)
Inventor
Larry S. Marso
Brian E. Litzinger
Original Assignee
High Regard, Inc.
Priority date
Filing date
Publication date
Application filed by High Regard, Inc. filed Critical High Regard, Inc.
Priority to AU19274/01A
Publication of WO2001041014A1
Publication of WO2001041014A9

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/535 Tracking the activity of the user
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/30 Definitions, standards or architectural aspects of layered protocol stacks
    • H04L 69/32 Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
    • H04L 69/322 Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
    • H04L 69/329 Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]

Definitions

  • This application relates to networks, such as computer networks, and more specifically to a method and system for rating users, user-contributed items, groupings of related user-contributed items and other items on a network.
  • a significant and distinctive feature of wide-area networks is sometimes considered to be the ability of users anywhere on the network to access a centralized source for content in a particular category.
  • the provision of information from centralized sources has a long tradition in print and electronic media.
  • this is an "old media" concept, based on the economics of content production and information distribution before the advent of global wide area networks.
  • this model still holds considerable sway, despite overwhelming changes in the technology and economics of information. More than ever before, any user anywhere has the ability not only to access centrally produced content, but also to interact with other users anywhere — at almost zero marginal cost.
  • the described embodiments of the present invention offer alternative structures for decentralized interactions among users on wide-area networks and a method of constructing, applying and distributing ratings of users, user-contributed items, groupings of related user-contributed items and other items.
  • the higher the Quality of the items contributed by or associated with a particular user the more weight assigned to ratings supplied by such user.
  • a calculation using weighted ratings determines item Quality.
  • the weights assigned to ratings affect Quality and Quality affects the weights assigned to ratings, simultaneously.
  • the preferred embodiments include a solution to the "circular" character of this approach.
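  • The "circular" dependence described above (rating weights depend on Quality, and Quality depends on the weights) can be resolved by iterating from default values until both measures stabilize. The following Python sketch is illustrative only; the data layout, function name, default values and fixed iteration count are assumptions, not taken from the described embodiments:

```python
def solve_regard_quality(items, ratings, users, iterations=50):
    """Fixed-point sketch of the circular Regard/Quality computation.

    items:   dict mapping item_id -> contributing user
    ratings: list of (rater, item_id, score) tuples, scores on a 0..1 scale
    """
    regard = {u: 0.5 for u in users}    # assumed mid-scale default Regard
    quality = {i: 0.5 for i in items}   # assumed mid-scale default Quality
    for _ in range(iterations):
        # Item Quality: Regard-weighted average of the ratings it received.
        for i in items:
            entries = [(regard[r], s) for (r, it, s) in ratings if it == i]
            total = sum(w for w, _ in entries)
            if total > 0:
                quality[i] = sum(w * s for w, s in entries) / total
        # User Regard: mean Quality of the items the user contributed.
        for u in users:
            qs = [quality[i] for i, c in items.items() if c == u]
            if qs:
                regard[u] = sum(qs) / len(qs)
    return regard, quality
```

Iterating in this way lets the two mutually dependent measures converge together, so a rater whose own items are well rated ends up carrying more weight.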
  • the described embodiments include a series of mathematical methods embedded in network processes. These network processes provide for a series of interactions between user nodes, mediated by other network elements that create a structured environment.
  • user interactions can include participation in a web site's online discussion group or chat facilities, an e-mail based mailing list, unidirectional, bidirectional or widely broadcast digital video or audio communication, a distributed communication facility such as Usenet newsgroups or IRC chat, or an online auction or other facility through which users provide, or agree to provide, goods and services in commerce, in each case enhanced with features of the described embodiments.
  • a chat session would generally be considered as a separate item for each user who participated, for example by injecting at least one statement into the session during a particular hour.
  • particular charities might be considered to be items. Users can post their ratings of the charities, and the charities are assigned a Quality value as described below.
  • ratings of items contributed by or associated with a particular user are used to construct measurements of the user's competence and credibility as a participant in a networked environment, called herein “Expertise” and “Regard.”
  • Expertise is used in connection with the “Expert” methods described herein and the term “Regard” is used in connection with the "High Regard” methods described herein.
  • Either Expertise or Regard can be used as a factor in determining the relative weight assigned to each user's evaluations of items contributed by or associated with other users, producing a measure of the relevance, accuracy and importance of a particular item, referred to herein as item "Quality.”
  • the Expertise or Regard of users rating a particular discussion group posting might be used to weight their ratings in a calculation of the posting's Quality.
  • Either Expertise or Regard may also be used to predict the likely relevance, accuracy and importance of a user's contributed items (both historical and future), for example, when a particular item is too new to have itself received ratings or when ratings of a particular item are sparse.
  • Either Expertise or Regard may also serve as an independent benchmark relevant to an evaluation of the user for other purposes, including purposes not associated with interactions across wide-area networks.
  • Expertise, Regard or Quality, or related measures may be bundled with relevant content and transmitted to nodes on a network, offering multiple additional opportunities to enhance user interactions, in some embodiments.
  • One or more of these measures may also be used to filter, highlight, sort or otherwise evaluate items or to limit a user's interactions to items and other users who meet minimum standards.
  • High ratings can raise the profile of any item, irrespective of the Expertise or Regard of the user who contributed it. Also, an item that misses the mark and receives poor ratings can be identified, downgraded or removed from view, even if the user who contributed it has high Expertise or Regard.
  • the described embodiments of applicants' invention also include methods enabling users to "Vouch" for items contributed by another user by associating their own Expertise or Regard with such items.
  • a user with high Regard is referred to herein as a "well Regarded" (or "highly Regarded") user.
  • users are enabled to "Discredit" items contributed by other users by asserting their own Expertise or Regard in opposition to such items.
  • the Expertise or Regard not only of the contributor, but also of the user who Vouched for or Discredited such items, can be enhanced or diminished depending on other users' ratings of such items.
  • certain embodiments will construct an environment in which users provide ratings as a natural part of navigating through items. For example, if the user views an item for a period of time suggesting the user has given it attention and consideration, the user may be encouraged or required either to provide a rating or to exit the system, or the portion of the system the user is currently interacting with. Also, for example: the act of moving from one item to another at a particular moment, when certain visual cues (among a rotating series of alternatives) appear on the user interface, may indicate that the user has selected a particular rating.
  • the intention is to create an efficient manner for users to provide ratings, to limit the requirement to provide ratings to situations in which the user has actually reviewed the item and to generate a large body of useful rating data. By providing an efficient user interface and striking an appropriate balance, the breadth, Quality and utility of systematic ratings more than compensate users for the additional effort required to supply ratings.
  • Fig. 1 shows a network, various users of the network and a network server.
  • Fig. 2(a) shows an example of a user interface allowing users to view and rate items.
  • Fig. 2(b) describes alternative rating schemes that can be used in accordance with the present invention.
  • Fig. 2(c) shows a web page incorporating a rating scheme in accordance with the present invention.
  • Fig. 3 shows a simple example in a system having only four users demonstrating that ratings supplied by users are not necessarily valued equally.
  • Fig. 4(a) and 4(b) show a simple example demonstrating that a user's Regard is affected by how other users rate the items that the user contributes and by the Regard of those users, and that one user's Regard can affect the Regard of other users and the Quality of items contributed by other users.
  • Figs. 5(a) and 5(b) show another simple example demonstrating that a user's Regard is affected by how other users rate the items that the user contributes and by the Regard of those users, and that a user's Regard can affect the Regard of other users and the Quality of items contributed by other users.
  • Figs. 6(a) and 6(b) show a simple example demonstrating that a user's Expertise is affected by how others rate the items that the user contributes and that a user's Expertise can affect the Quality of items contributed by other users.
  • Fig. 7(a) is an overview of data flows in an example High Regard ratings system, showing the circular nature of Regard affecting the Regard of other users and the Quality of items contributed by other users.
  • Fig. 7(b) is a more detailed overview of data flows in an example High Regard ratings system, showing the circular nature of Regard affecting the Regard of other users and the Quality of items contributed by other users, emphasizing procedures for the flow, storage and interpretation of data.
  • Figs. 7(c)-7(e) are flow charts showing the inputs and outputs to "basic" determinations of item Quality, user Regard and user Expertise.
  • Fig. 8 is a plot of a Regard Inertial Model.
  • Fig. 9(a) is a plot of a Quality Inertia Model.
  • Fig. 9(b) is a plot of a Segmented Decay Model.
  • Fig. 10 shows an example of a small Regard data set.
  • Fig. 11 shows an example of a data structure used to store and retrieve data required to perform calculations of Regard and Quality in a preferred embodiment of the present invention.
  • Fig. 12(a) is a block diagram of a first example forum server application including a rating server.
  • Fig. 12(b) is a block diagram of another example forum server application including a rating server, where the rating information is sent directly to a user's web page.
  • Fig. 12(c) is an example of a web page generated in accordance with the block diagram of Fig. 12(b).
  • Fig. 13 is a block diagram of another example forum server application communicating with a separate communication forum.
  • Fig. 14 is a block diagram of another example forum server application communicating with a separate communication forum and an ad server.
  • Fig. 15(a) is a flow diagram showing communication between elements of Fig. 14 during forum/thread index intercommunication.
  • Fig. 15(b) is a flow diagram showing communication between elements of Fig. 14 during article view intercommunication.
  • Fig. 15(c) is a flow diagram showing communication between elements of Fig. 14.
  • Figs. 16(a)- 16(h) are flow charts showing methods used during the intercommunication processes of Figs. 15(a) - 15(c).
  • Fig. 17 is a block diagram of an example e-commerce server.
  • Fig. 18 is a block diagram of an example auction server and a rating server.
  • Fig. 19 is a block diagram of an example server for an individual and commercial rating service.
  • Figs. 20(a) and 20(b) are, respectively, a graph and a table that aid in explaining the concepts of "Vouching” and “Discrediting.”
  • Fig. 1 shows a network 100, such as the Internet, an intranet, a wide-area network (WAN), a wireless or telephonic network, or any other appropriate network.
  • Network 100 can also be a combination of various types of networks or of various networks and sub-networks. Users communicate via the network 100 by sending information by way of methods and protocols appropriate to the network 100. Although it will be understood that many users can access network 100 simultaneously, three users 110, 120 and 130 are shown.
  • a user can be a human being or another entity, such as a computer program capable of accessing network 100.
  • Fig. 1 also shows a rating server 140 as discussed below.
  • users contribute items, such as, for example, e-mail messages or discussion group postings.
  • items encompass a wide variety of things.
  • the users view each other's items and some or all of the users rate the items contributed by other users. For example, in Fig. 1, user 110 reads other items and rates them, but does not contribute items himself. In Fig. 1, user 120 reads others' items but does not rate them. User 120 does, however, contribute items. In Fig. 1, user 130 reads others' items and rates them. User 130 also contributes items.
  • the three users discussed herein are intended to show that various users interact with network 100 in different ways. In Fig. 1, user 110 receives items 112 contributed by other users, including the Quality of the items and the Regard of the users who contributed the items.
  • Although Fig. 1 shows a system using the Regard of users, Fig. 1 could also apply to a system that uses the Expertise of users (as both terms are defined herein). Only Regard is shown in the example of Fig. 1 to aid in maintaining the simplicity of the example.
  • User 110 contributes a rating 114 of one or more of the items.
  • User 120 receives items 122 contributed by other users, including the Quality of the items and the Regard of the users who contributed the items.
  • User 120 contributes one or more items 126 of his own.
  • User 130 receives items 132 contributed by other users, including the Quality of the items and the Regard of the users who contributed the items.
  • User 130 contributes a rating 134 of one or more of the items.
  • User 130 contributes one or more items 136.
  • Fig. 1 is provided by way of example and not limitation.
  • the invention can be implemented in a network as shown, or in other environments, such as, for example, a database in which users enter, read and access items in the database.
  • the invention could also be implemented in, for example, an e-mail system operating in a networked environment.
  • the functionality described herein is preferably performed by a data processing system or systems performing instructions stored on a medium or memory accessible by the data processing system(s).
  • the invention is not limited to the architecture, programming models, protocols or procedures shown and described herein.
  • Fig. 2(a) shows an example of a user interface allowing users to view and rate items. Although not shown, users can also contribute their own items.
  • Fig. 2(a) shows a web page that displays messages in an online discussion forum. The text of a current message is displayed in an area 202. As the user views a current message, he can rate the message using rating area 204 (or a similar rating scheme, such as one of the alternative ratings schemes described below).
  • the user interface of Fig. 2(a) uses alternating visual cues. Specifically, in the ratings scheme shown in the Figure, five diamond boxes 204 are displayed. One diamond at a time is highlighted, with the highlight advancing once per second, preferably moving sequentially from left to right and wrapping from diamond #5 back to diamond #1. The pattern repeats until a predefined user action is detected.
  • a predefined user action can be, but is not limited to, a keystroke, mouse movement or click, voice command and/or another appropriate command.
  • a predefined user action selects a rating corresponding to a currently highlighted diamond in area 204.
  • such a predefined user action also causes the user to proceed to the next item, thread, or elsewhere. For example, the user clicks on a rating and automatically advances to a next item, thread, etc.
  • the rating diamonds preferably correspond to varying ratings, from, for example, lowest to highest ratings.
  • the user merely needs to wait until a rating that he agrees with is highlighted and perform one of the predefined user actions, such as slightly moving his mouse or touching any key on the keyboard.
  • the user does not have to move a mouse controlled pointer to a particular location on the web page, or click or highlight specific objects or locations on the web page to rate messages, since the act of rating automatically causes the display to advance to the next message.
  • the interface of Fig. 2(a), including diamonds 204, is implemented via an applet on the web page.
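  • The timing behavior described above (one highlight advancing per second, wrapping from diamond #5 back to diamond #1) can be captured in a few lines. This sketch is an illustration of the timing logic only, not the applet of the described embodiment; the function name is an assumption:

```python
def highlighted_diamond(elapsed_seconds, n_diamonds=5):
    """Return the 1-based index of the diamond highlighted after a given
    elapsed time, advancing once per second and wrapping from the last
    diamond back to the first."""
    return int(elapsed_seconds) % n_diamonds + 1
```

A predefined user action at elapsed time t would then select the rating corresponding to highlighted_diamond(t).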
  • Fig. 2(b) describes alternative rating schemes that can be used in accordance with the present invention. While Fig. 2(b) describes the "single click rating" scheme of Fig. 2(a), it also describes several other rating schemes, including a "rate while you navigate" scheme and a "single keystroke/click rate/navigate" scheme. In the rate-while-you-navigate scheme, use of keyboard or mouse commands to navigate to other items while the visual cues are alternating is interpreted as selecting the rating represented by the currently highlighted visual cue.
  • the rating diamonds 242 are not automatically highlighted. Instead, the user clicks on an appropriate diamond or presses one of keys 1, 2, 3 or 4 on the keypad (or some other predetermined keys) to both select a rating for the current message and to go to the next article, next thread, out or elsewhere.
  • other rating schemes do not automatically advance the user to a next view, but require the user to perform the rating and navigation functions separately (even though one or the other can still be accomplished by the alternating indicators discussed above).
  • an alternative rating scheme places the ratings diamonds next to each message or each thread visible on the page.
  • Still another scheme places the rating diamonds next to the messages.
  • a message "thread" is a group of related messages having a linear order and a hierarchy of levels of indentation (representing interrelationship) so that a user can view the next or previous item in the message thread.
  • It is important to note that, in the described embodiment, it is the messages, message threads, or other items that are being rated, not the authors/contributors of the items.
  • Still another scheme allows the user to type a rating into a box on the web page or enter a rating in a drop-down menu or special window.
  • Still another rating scheme allows the user to move a slider bar or similar non-discrete input device on a web page and translates the user's action into a numeric rating. Thus, the above user interfaces could also be implemented as non-discrete interfaces wherein user ratings fall along a spectrum (for example, between 0 and 1) and are not limited to predetermined discrete values.
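  • A slider-based, non-discrete rating can be translated onto the 0-to-1 spectrum by normalizing the control's position. This is a minimal sketch under assumed names; the patent does not specify the translation:

```python
def slider_to_rating(position, slider_start, slider_length):
    """Map a slider position (e.g. in pixels) onto a 0..1 rating,
    clamped to the ends of the scale."""
    fraction = (position - slider_start) / slider_length
    return min(1.0, max(0.0, fraction))
```

The same normalization would apply to any non-discrete input device whose state can be read as a position along an axis.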
  • Fig. 2(c) shows a web page incorporating a rating scheme in accordance with the present invention.
  • the ratings diamonds invite the user to rate the entire page, not just the contents of certain areas on the page.
  • the rating diamonds can alternate as shown above or require the user to click on or enter an appropriate rating.
  • Still another rating scheme rates products and services offered on and transactions completed through a networked commerce facility, such as an online auction service.
  • the item can be, for example, a completed transaction in which the user was the buyer or seller or another user monitoring the quality of the participation by the buyer or seller, or of the goods or services that are the subject of the transaction.
  • Other types of items can also be rated in such a system.
  • Quality will attach to various transactions from a buyer's perspective and a seller's perspective. For example the buyer may rate the seller highly, but the seller may rate the buyer much lower.
  • the same transaction receives very different ratings from buyer and seller, both of whom are the author of the transaction in some sense.
  • the buyer behavior and the seller behavior can be considered two separate items that are rated separately.
  • any user may be able to provide a rating or only a limited number of users may provide ratings.
  • Fig. 2(a) also shows a Regard diamond 206 (which represents the Regard of the author of the current item) and a Quality diamond 208 (which represents the Quality value assigned to the current item).
  • Regard can be used alone as an independent benchmark, as a predictor of the value of the user's historical and future contributions or, in a system-wide calculation, as a factor determining the relative weight assigned to the user's evaluations of other users' contributions.
  • the Quality of an item is based on the ratings assigned to the item by other users and on the Regard of those other users. The meaning of, and preferred methods of obtaining, Regard and Quality are discussed below in detail. It should be understood that an interface similar to that of Fig. 2(a) could also be used to display item Quality and user Expertise in a system that uses Expertise values rather than Regard values.
  • Fig. 2(c) also shows a ranking diamond 244, which represents the ranking of the current item.
  • Figs. 7(c)-7(e) are flow charts showing the inputs and outputs to "basic" determinations of item Quality, user Regard and user Expertise.
  • each user is assigned a Regard value, in accordance with ratings that other users assign to items contributed by that user and in further accordance with the Regard values of the users who contribute the ratings.
  • each user is assigned an Expertise value, in accordance with ratings that other users assign to items contributed by that user.
  • a user is initially assigned a default Expertise or Regard value. Quality values are determined for each item contributed by a user, whether the system implements Regard or Expertise.
  • the Quality value of an item is determined in accordance with the ratings received for the item and the Regard (or Expertise) of the users who contributed the ratings. Details of the determination of "basic" Quality, Regard and Expertise are discussed below in Section 5 et seq. Variations on the basic determinations are also discussed below in Section 5 et seq.
  • Figs. 3-6 show a simple example in a system having only four users demonstrating that ratings supplied by users are not necessarily valued equally in a system using Regard.
  • Figs. 3-5 show a system implementing Regard values.
  • Fig. 6 shows a system implementing Expertise values. To enhance the clarity of the examples in Figs. 3-6, it is assumed that the system has only four users and that only one item is contributed and rated at a time. It is contemplated that most systems employing an embodiment of the present invention will have many more users and that multiple item contributions and multiple ratings by the users of other users' items will overlap in time.
  • User 4 contributes Item A, which is viewed and rated by Users 1, 2 and 3.
  • User 1 is well Regarded (having, e.g., a Regard of 0.9).
  • User 2 is poorly Regarded, having a Regard of 0.1.
  • User 3 is also poorly Regarded, having a Regard of 0.1. (Each User's Regard is based at least in part on his previous contributions and the ratings of other users for those contributions, as discussed below in detail).
  • the ratings contributed by all users are not equally valued. Ratings from well Regarded users are valued more when determining the Quality of an item.
  • the terms "love” and “hate” are presented here as an aid to understanding.
  • the embodiment described here determines numeric ratings. For example, a user interface may allow a user to choose “love” or “hate” or some other word suggesting a reaction somewhere in between, but these non-numeric choices are eventually translated into numeric values for the purpose of calculations in the preferred embodiments.
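  • Putting the two preceding points together: non-numeric reactions are translated to numbers, and the resulting ratings are weighted by each rater's Regard. The mapping below is an illustrative scale (the patent does not fix these values); the Regard figures are taken from the four-user example of Fig. 3:

```python
# Illustrative translation of non-numeric choices to the 0..1 rating scale.
REACTION_TO_RATING = {"hate": 0.0, "neutral": 0.5, "love": 1.0}

def weighted_quality(rated_by):
    """rated_by: list of (regard, reaction) pairs for a single item.
    Returns the Regard-weighted average of the translated ratings."""
    total = sum(regard for regard, _ in rated_by)
    return sum(regard * REACTION_TO_RATING[reaction]
               for regard, reaction in rated_by) / total

# User 1 (Regard 0.9) loves Item A; Users 2 and 3 (Regard 0.1) hate it.
quality_a = weighted_quality([(0.9, "love"), (0.1, "hate"), (0.1, "hate")])
```

Even though two of the three raters "hate" Item A, its computed Quality (about 0.82) is dominated by the single well Regarded rater, which is the point of the example in Fig. 3.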
  • Figs. 4(a) and 4(b) show a simple example in a system having only four users demonstrating that a user's Regard is affected by how others rate the items that the user contributes and by the Regard of those users and that a user's Regard can affect the Regard of other users and the Quality of items contributed by other users. Again, for the purposes of this simple example, it is assumed that only one item is contributed and rated at a time, although this would not necessarily be the case in a real situation.
  • Fig. 4(a) shows that, once users have rated Item A in Fig. 3, the ratings for Item A affect the Regard of User 4, who contributed Item A.
  • the new item contributed by User 4 receives high ratings from well Regarded users, and User 4's Regard value goes up.
  • User 4's Regard value rises from 0.5 (middle of the scale used here) to some higher value.
  • Fig. 4(b) continues the example of Fig. 4(a), showing that User 4's changed Regard value affects the Regard of other users and the Quality of items contributed by other users whose items User 4 has rated.
  • the rise in User 4's Regard also positively affects the Regard of User 1 (not shown), since User 1's Regard is based on the ratings his items received from other users and the Regard values of those users.
  • the example of Fig. 4(b) could be expanded to show more steps in which the newly affected Regard of Users 1 and 3 affect the Regard of other users whose items Users 1 and 3 have rated, and the Quality of those items, and so on.
  • New ratings for an item of one user can cause a change in the Quality of that item.
  • new ratings for an item of one user can cause a change in the Regard of that user, which can potentially cause a change in the Regard of all users and in the Quality of items for all users.
  • Figs. 5(a) and 5(b) show another simple example in a system having only four users showing that a user's Regard is affected by how others rate the items that the user contributes and by the Regard of those users and that a user's Regard can affect the Regard of other users and the Quality of items contributed by those other users.
  • Fig. 5(a) shows that a user's Regard can be adversely affected as well. Again, it is assumed that the system has only four users and that only one item is contributed and rated at a time.
  • User 1 contributes Item B, which is viewed and rated by Users 2, 3 and 4. In the example, User 1 was well Regarded when he contributed Item B.
  • User 2 is poorly Regarded, having a Regard of 0.1.
  • User 3 is also poorly Regarded, having a Regard of 0.1.
  • User 4 is well Regarded, having a Regard of 0.9.
  • poorly Regarded User 3 "loves" Item B, giving it a rating of 1.0 (which in our example is the highest possible rating).
  • Fig. 5(a) shows that, once the users have rated Item B, the rating for Item B adversely affects the Regard of User 1, who contributed Item B.
  • the new item contributed by User 1 receives low ratings from well Regarded users (and below the average ratings for User 1's previous items). Therefore, in the example, User 1's Regard value falls to 0.5 (in the middle of the range) from some higher value.
  • Fig. 5(b) continues the example of Fig. 5(a), showing that User 1's changed Regard value affects the Regard of other users and the Quality of items contributed by other users whose items User 1 has rated.
  • Fig. 5(b) could be expanded to show more steps in which the newly affected Regard of Users 2 and 4 affect the Regard of other users whose items Users 2 and 4 have rated, and the Quality of those items, and so on.
  • Figs. 4 and 5 have involved systems that implement a Regard value for users.
  • Figs. 6(a) and 6(b) show a simple example in a system that implements an Expertise value for users.
  • the example shows that a user's Expertise is affected by how others rate the items that the user contributes and that a user's Expertise can affect the Quality of others' items.
  • a change in the Expertise value of a user does not affect the Expertise values of other users.
  • a user's Expertise value is changed when other users submit new ratings for that user's items.
  • Fig. 6(a) shows an example in which User 4 contributes Item C, which is rated by Users 1, 2 and 3.
  • User 2 has a high Expertise, but "hates" the item and gives it a low rating.
  • Users 1 and 3 both have low Expertise values, but "love” the item and give it a high rating.
  • User 4's Expertise is based on the arithmetic mean of all ratings for all items contributed by User 4.
  • the new high ratings from Users 1 and 3 have raised the mean of the ratings of User 4's items.
  • the ratings from other users who rate User 4's items affect User 4's Expertise, but the Expertise of those other users does not affect User 4's Expertise.
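  • The Expertise computation described above is simpler than the Regard computation: it is the unweighted arithmetic mean of all ratings received by all of a user's items. A minimal sketch; the default value for a user with no rated items is an assumption:

```python
def expertise(ratings_received, default=0.5):
    """Arithmetic mean of all ratings (0..1 scale) for all of a user's
    items. Unlike Regard, the Expertise of the raters does not enter."""
    if not ratings_received:
        return default  # assumed default for a user with no rated items
    return sum(ratings_received) / len(ratings_received)
```

Because each rating counts equally, a new high rating raises the mean regardless of who supplied it, as in the example of Fig. 6(a).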
  • Fig. 6(b) continues the example of Fig. 6(a), showing that User 4's changed Expertise value affects the item Quality of other users whose items User 4 has rated in the past.
  • User 4 has historically given low ratings to items contributed by User 1.
  • the rise in User 4's Expertise does not affect the Expertise of User 1.
  • User 4 has historically given high ratings to items contributed by User 2.
  • User 4's Expertise rises, it positively affects the Quality value for items of User 2 that were previously rated by User 4.
  • the rise in User 4's Expertise does not affect the Expertise of User 2.
  • New ratings for an item of one user can cause a change in the Quality of that item.
  • new ratings for an item of one user can cause a change in the Expertise of that user, which can potentially cause a change in the Quality of items for all users.
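  • The propagation shown in Figs. 6(a)-6(b) can be sketched as a re-weighting step: when new ratings move a user's Expertise, the Quality of every item that user previously rated is recomputed with the new weight. The data layout and default below are assumptions for illustration:

```python
def item_quality(item_ratings, expertise_of):
    """item_ratings: list of (rater, score) pairs for one item.
    expertise_of: mapping from user to current Expertise, used as the
    weight on that user's rating."""
    total = sum(expertise_of[rater] for rater, _ in item_ratings)
    if total == 0:
        return 0.5  # assumed default when no rating weight is available
    return sum(expertise_of[rater] * score
               for rater, score in item_ratings) / total

# An item previously rated high by User 4 and low by User 5.
ratings_for_item = [("user4", 1.0), ("user5", 0.0)]
before = item_quality(ratings_for_item, {"user4": 0.5, "user5": 0.5})
# User 4's Expertise rises; User 5's is unchanged.
after = item_quality(ratings_for_item, {"user4": 0.8, "user5": 0.5})
```

The item's Quality rises from 0.5 to about 0.62 solely because User 4's Expertise rose, mirroring Fig. 6(b): the change propagates to Quality values but not to other users' Expertise.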
  • An item may be composed of words, whether ASCII text or text formatted by a word processing technology or hypertext mark-up language, text contained in a collection of data packets constituting an e-mail message, discussion group posting or Usenet newsgroup article, or the output of a process that translates from one language to another, or written words into spoken words, or spoken words into written words.
  • An item may also be an interactive, sequential exchange of words making up a group or one-to-one chat session.
  • An item may be a fixed visual image, whether a drawing or image captured by a digital camera, or transferred from a photographic original to a scanned representation of the original.
  • Streaming audio or video whether live or previously recorded material and whether unidirectional, bidirectional or broadcast, would constitute an item, as would words, images, or streaming audio or video integrated into a web site.
  • any single behavior or collection of behaviors viewable by others, whether online or offline, may constitute an item.
  • An example of this includes the course of a user's conduct in an online auction, whether as buyer or as seller.
  • Another example is the performance offline of a subcontractor or of a general contractor, in their respective professional roles — whether assessed by each other or a third party — who entered into their agreement for the performance of services using an online medium.
  • An item may also be information, goods and services, or assets which one user recommends to other users, or otherwise associates with oneself or one's reputation.
  • An example of this is a recommended link to a third party website, or a link deep into the structured hierarchy of a website.
  • Another example is an asset one puts forth for sale in an online auction.
  • Another example is software, whether distributed as source code or in executable form and whether constituting a stand-alone program or operating system, or a replacement of or addendum to a portion thereof.
  • An item may exist only as a pointer in records to locations accessible over, or data streams transmitted across, a wide area network.
  • different users or types of users may have items associated with them and different users or types of users may participate in the calculation of Expertise, Regard, Quality and other measures included in the described embodiments of the present invention.
  • users will overtly choose to participate.
  • a user might access a mailing list, discussion group, chat session or other items or grouping of items via a facility, such as a website, which is specifically enabled with features of the described embodiments of the invention.
  • this facility will be the only avenue to access the items or grouping of items and users would anticipate receiving in the ordinary course information regarding the Regard and Expertise (and related measures) of users and regarding the Quality (and related measures) of items.
  • some users need not ever access such a facility, or give any indication of interest in, or the intention of, assigning ratings to items contributed by other users, or give direct consent to the application of the measurements and methods of the described embodiments to themselves or to items they contribute.
  • a user might participate in a mailing list, discussion group, chat session or other grouping of items that is accessed by multiple facilities, only a subset of which are enabled with features of the described embodiments of the invention. More specifically, a user might choose to read and post Usenet newsgroup messages via a desktop application or website that has no support for features of the described embodiments, while other users access or contribute such postings via a facility that supports features of the described embodiments.
  • users who contribute items via facilities that are not enabled with any or all features of the described embodiments may also be incorporated into databases as item contributors and assigned Expertise or Regard values.
  • the items contributed by such users may therefore be assigned Quality or related measurements and be subject to Vouching and Discrediting by other users and the other methods of the described embodiments, as discussed in various sections below.
  • the rating one user assigns to an item contributed by another user can be any numerical scale for ratings, or any scheme that orders items to reflect user assessments or preferences.
  • rating behavior may include overt selection of a rating with a mouse or keyboard interface, a voice command, a movement of the hand or eye, or other observable behavior by the human body.
  • ratings can also include the time spent viewing an item, the number of comments one interjects into a chat session, the decision to click on an ad banner, or a decision to bid on an auction opportunity.
  • the user interface can present sequentially a repeating series of highlighted or colored diamonds 242, or other alternating visual cues, each associated with a particular value of the rating variable.
  • the rate at which the visual cues 242 alternate is one second, although other intervals can be used.
  • the currently colored or highlighted visual cue is interpreted as the user's rating ("single-click rating").
  • the visual cues and rating opportunity are not presented to the user until after the user has navigated to the end of the item and a fixed number of seconds has elapsed since delivery of, or the beginning of the user's interaction with, the item. In a preferred embodiment, the interval is 10 seconds.
  • use of keyboard or mouse commands to navigate to other items is interpreted as selecting the rating represented by the currently highlighted visual cue ("rate while you navigate").
  • rating is an automatic or mandatory step in accessing items sequentially, or otherwise continuing to interact with the system, without exiting the system, the interaction or some level of the interaction.
  • the Basic Expert method determines the "Expertise" of a user according to the ratings other users assign to items contributed by such user.
  • Expertise is the arithmetic mean of the ratings that other users assign to items contributed by user i:
  • the Expertise of each user is calculated periodically, once every 12 hours.
  • a user who has not contributed any items, or none of whose contributed items has been rated by any other user, is preferably granted the arithmetic average Expertise of users whose contributed items have received ratings:
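The two rules above can be written compactly. The symbols here (ε for Expertise, r for ratings, E for the set of users whose items have been rated) are labels chosen for this sketch, not notation fixed by the text:

```latex
% Basic Expert method: Expertise as the plain mean of the N_i ratings
% other users j have assigned to items n contributed by user i
\varepsilon_i = \frac{1}{N_i}\sum_{j \neq i}\sum_{n} r^{\,n}_{ij}
% Default for a user with no rated items: the mean Expertise of rated users
\varepsilon_i^{\mathrm{default}} = \frac{1}{|E|}\sum_{k \in E} \varepsilon_k
```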
  • the Basic High Regard method determines the "Regard" of a user according to the ratings that other users assign to items contributed by user i and the Regard of the users providing the ratings.
  • Fig. 7(a) is an overview of data flows in an example High Regard ratings system, showing the circular nature of Regard affecting the Regard of other users and the Quality of items contributed by other users.
  • one or more users contribute one or more items (also called content).
  • a user does not have to contribute any item, but if he or she does contribute at least one item, the other users are given the opportunities to view and rate each item contributed.
  • Each item is rated (for example, arrows 704, 709, 707) by one or more users.
  • the user may rate a current message in an online forum.
  • the user may rate a web page.
  • the ratings of the other users, along with the Regard of the users providing the ratings are used to create a Quality value of an item. Meanwhile, the Regard rating of the users giving the ratings are potentially being changed if those users are contributing items of their own or other users are rating their previously contributed items. As discussed below, not all embodiments determine Quality immediately when an item is entered. In a preferred embodiment, the Regard of users is updated periodically, for example, every twelve hours, although a shorter period of time can be used. In the future, as computing power increases, it is contemplated that both Quality and Regard will be updated more frequently.
  • Fig. 7(b) is a more detailed example of Fig. 7(a). It will be understood that the matrices shown here are a conceptual presentation of a described embodiment of the present invention. An actual implementation of the matrices shown in Fig. 7(b) is discussed below in connection with the data structure of Fig. 11.
  • the rating of each item 712 contributed by each user and rated by each of the other users can be stored in a database as the record of ratings 718.
  • the ratings 714 given to an item by the users, together with the Regard values 716 of the users, can be used to derive a measure of the Quality of the item 712.
  • the Regard values of the users are created by using the entire record of ratings 718.
  • an iterative method 714 can be used to simultaneously create the Regard values of the users and to use the Regard values to weight the ratings given by the users in the record of ratings.
  • Certain embodiments calculate user Regard and item Quality values per user demand. Other embodiments calculate these values periodically and store them in a database for later retrieval.
  • the High Regard method involves the simultaneous solution of a number of related equations.
  • the Regard of each user is calculated periodically, once every 12 hours.
  • the Regard of the user is set equal to that rating.
  • the Basic Quality method determines the "Quality" of an item according to the ratings assigned to the item and the Expertise or Regard of the users providing the ratings.
  • Quality is the arithmetic mean of the ratings that other users assign to item n contributed by user i, weighting each rating by the Expertise or Regard (the method being selected by the system operator) of the user providing the rating.
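The weighted mean described in the bullet above can be sketched as follows, with w_j standing for the rating user's Expertise or Regard (symbols chosen here for illustration):

```latex
% Basic Quality: ratings of item n contributed by user i, each weighted by
% the rating user's Expertise or Regard w_j (whichever the operator selects)
q^{\,n}_i = \frac{\sum_{j} w_j\, r^{\,n}_{ij}}{\sum_{j} w_j}
```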
  • Quality is calculated on demand — at the time it is needed for any purpose — preferably subject to a minimum time period following the most recent calculation, for example 10 minutes, during which a cached copy of the most recently calculated Quality value for the item is provided.
  • An item which has not received any ratings is granted the arithmetic average Quality of items that have received ratings.
  • the Quality of the item is set equal to that rating.
  • An assumption of the method is that an item rating is an effective measurement of the capacity of the user who contributed the item to identify valuable items contributed by other users. From the point of view of the user assigning item ratings, the user is helping to identify experts whose assessment of the value of items will receive higher weighting. From the contributor's point of view, high item ratings mean higher Expertise, implying more influence over Quality levels. In this sense, the Basic Expert method is an expert system.
  • User i's Expertise increases whenever another user rates an item contributed by user i above user i's previous level of Expertise. The increase is equal to the difference between the rating and the previous level of Expertise, divided by the total number of ratings previously assigned to user i items.
  • the Quality of item n contributed by user i increases whenever another user rates the item above its previous level of Quality.
  • the increase is equal to the difference between the rating and the item's previous level of Quality, weighted by the Expertise of the user providing the rating, divided by the total Expertise of users who previously rated the item.
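The two incremental rules above amount to standard running-mean updates. A minimal sketch, assuming the "total number of ratings" (and the "total Expertise") includes the new contribution, which is the reading that reproduces the arithmetic mean exactly:

```python
def update_expertise(expertise, n_prev, rating):
    """Basic Expert increment: move the mean toward the new rating by the
    gap between rating and prior Expertise, divided by the updated count."""
    n = n_prev + 1
    return expertise + (rating - expertise) / n, n

def update_quality(quality, weight_prev, rating, rater_weight):
    """Basic Quality increment: move the weighted mean toward the new
    rating in proportion to the rating user's Expertise (or Regard)."""
    w = weight_prev + rater_weight
    return quality + rater_weight * (rating - quality) / w, w
```

Applying `update_expertise(0.6, 1, 0.9)` yields the mean of 0.6 and 0.9, confirming that the incremental form and the direct mean agree.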
  • An assumption of the method is that an item rating is a useful measurement of the capacity of the user who contributed the item to identify talented item contributors.
  • Δρ_i = (ρ_j (r − ρ_i) + δ) / Σ_x ρ_x n_x, where ρ_j is the Regard of the rating user, r the new rating, δ the adjustment for cross-rating dependencies and n_x the number of ratings user x previously assigned to user i's items.
  • the increase is equal to the difference between the rating and user i's previous level of Regard, weighted and adjusted, divided by the total weighted number of ratings previously assigned to user i items.
  • the weighting in the numerator is the rating user's Regard.
  • the adjustment for cross- rating dependencies accounts for the effect of the additional rating on the Regard of users other than i — and hence, how the rating affects the weight assigned to their previously assigned ratings of user i items.
  • Cross-rating dependencies can magnify or diminish the change in Regard, but cannot change its sign.
  • the impact on item Quality of a rating by user j is similar in form to the impact of a rating on the contributor's Regard.
  • the Quality of item n contributed by user i increases whenever another user rates item n above the item's previous level of Quality.
  • the increase is equal to the difference between the rating and the item's previous level of Quality, weighted and adjusted, divided by the total Regard of the users who previously rated the item.
  • the weighting in the numerator is the rating user's Regard.
  • the adjustment for cross- rating dependencies accounts for the effect of the additional rating on the Regard of users other than i — and hence, how the rating affects the weight given their previously assigned ratings of item n.
  • Cross-rating dependencies can magnify or diminish the change in Quality, but cannot change its sign.
  • An item rating can affect the Expertise of the user who contributed the item, but not the Expertise of any other user. However, a rating of an item contributed by one user can affect the Quality of an item contributed by another user. A simple example of this is discussed above in connection with Fig. 6.
  • a rating of item n contributed by user i can affect the Quality of item m contributed by user x (already assigned ratings by all users):
  • an item rating potentially affects the Quality of the item, the Expertise of the contributor and the Quality of every item ever rated by the contributor.
  • a rating has secondary effects that extend as far as one other user and that user's rating activities.
  • the method recognizes an item rating as, in part, an indication of the ability of the contributor of the item to identify valuable items contributed by other users.
  • the method recognizes an item rating as, in part, an indication of the ability of the contributor of the item to identify other talented items contributors.
  • the rating of an item by a contributing user affects the Regard of other users in the system.
  • An item rating has ripple effects that can propagate through the entire network, potentially affecting the Regard of the user who contributed the item, the Regard of the user who assigns the rating, the Regard of other users, the Quality of the item and the Quality of other items.
  • a useful example is a situation in which cross-rating dependencies are limited to two parties. For example, if users y and z have previously assigned item ratings that are equal to the Regard values of the contributors, cross-rating dependencies are limited to users i and x.
  • This aspect of the High Regard method facilitates construction of user "communities”. Users who associate and build a web of positive cross-ratings can help each other increase their Regard quickly — the effect of their positive ratings on each other's Regard is magnified.
  • This aspect of the described method also heightens the risk of mutual negative ratings, in the course of a form of destructive behavior often observed on wide area networks, known popularly as "flaming" — intentional provocation or repetitive attacks, typically in the form of an exchange of written messages. Two or more users engaging in mutual attacks lose Regard quickly — the effect of their negative ratings on each other's Regard is magnified.
  • the product of signs (+) × (−) × (−) = (+) illustrates how two negative cross-rating dependencies combine into a positive effect, so that user j's rating of user i is magnified.
  • Φ(·) is a transformation function and Φ⁻¹ is a function that reverses the effect of Φ.
  • the Extended methods calculate arithmetic means — weighted, in the case of Regard and Quality.
  • the Extended Expert method is equivalent to the Basic Expert method
  • the Extended High Regard method is equivalent to the Basic High Regard method
  • the Extended Quality method is equivalent to the Basic Quality method
  • h is a parameter set by the system operator
  • the Extended methods calculate Holder means.
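The Hölder (power) mean referred to above can be sketched as follows; with h = 1 it reduces to the arithmetic means of the Basic methods, consistent with the equivalences noted above:

```latex
% Unweighted Hoelder mean of ratings with exponent h (Extended Expert)
M_h = \Bigl(\tfrac{1}{N}\sum_{j} r_j^{\,h}\Bigr)^{1/h}
% Weighted form (Extended High Regard / Extended Quality), weights w_j
M_h^{w} = \Bigl(\frac{\sum_j w_j\, r_j^{\,h}}{\sum_j w_j}\Bigr)^{1/h}
```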
  • the Expertise or Regard granted a user who has not contributed any items may be determined in accordance with:
  • the Quality granted an item to which no user has assigned a rating may be determined in accordance with:
  • the Expertise or Regard of each user is calculated periodically, after the passage of a specific period of time, after the collection of a specific number of new item ratings, or as often as possible given the available computational resources.
  • the Quality of each item is calculated every time the item's Quality is requested or required, subject to a minimum time period following the most recent calculation, during which time a cached copy of the Quality value most recently calculated is provided; or after the passage of a specific period of time, after the collection of a specific number of new item ratings, or as often as possible given the available computational resources.
  • the interdependency of each user's Regard in the Extended High Regard method can be expressed in matrix notation.
  • in the first step, the Expertise of each user calculated above is used, and the default value for users whose items have not been rated is determined according to a selected procedure.
  • in each subsequent step, the default value is based on the values determined in the previous step.
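The staged procedure described above can be sketched as follows. The data layout (per-pair mean ratings and counts) and the use of rating counts inside the weights are assumptions of this sketch, since the matrix equation itself is not reproduced in this extract:

```python
def extended_regard(R, C, steps=50, tol=1e-10):
    """Iterative Extended High Regard sketch.

    R[i][j]: mean rating user j has assigned to user i's items.
    C[i][j]: number of such ratings (0 if none).
    Step one computes plain (unweighted) Expertise; every later step
    recomputes each user's value with the previous step's vector as
    the rating weights, until the values stop changing.
    """
    P = len(R)
    # step one: unweighted mean over all individual ratings of user i's items
    rho = []
    for i in range(P):
        n = sum(C[i])
        rho.append(sum(R[i][j] * C[i][j] for j in range(P)) / n if n else 0.5)
    # subsequent steps: weight each rating by the previous step's values
    for _ in range(steps):
        new = []
        for i in range(P):
            den = sum(rho[j] * C[i][j] for j in range(P))
            num = sum(rho[j] * C[i][j] * R[i][j] for j in range(P))
            new.append(num / den if den else rho[i])
        if max(abs(a - b) for a, b in zip(new, rho)) < tol:
            return new
        rho = new
    return rho
```

In the symmetric case where every user rates every other user 0.8, the iteration converges immediately to 0.8 for all users, since re-weighting a constant set of ratings cannot move the mean.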
  • the user's Expertise or Regard is set equal to that rating. This could have undesirable consequences.
  • the items contributed by a user may be rated initially by a poorly Regarded user and for a period of time thereafter, by a small or poorly Regarded subset of users. In order to control the amount by which early item ratings influence the Expertise or Regard of a user, a notional user is introduced whose notional ratings create an inertia effect.
  • h_e denotes the vector of Extended Expertise or Regard values, as the case may be, for all users.
  • the notional user assigns a notional rating and a notional number of ratings for each user, taking into account the number and level of ratings assigned to user i's items by other users, the number and level of ratings user i has assigned to items contributed by other users and the Expertise or Regard of all users.
  • the notional rating is set equal to a default value and the notional number of ratings is determined according to an equation in the form of: a − (bx + cy − √(dxy))/e, where a, b, c, d and e are constants selected by the system operator and x represents the aggregate number of the user's items that have been rated by other users and y the number of other users' items that have been rated by the user.
  • this formulation has the characteristic of reducing the fixed constant a more slowly when x and y grow in tandem and more quickly when x and y grow divergently.
  • This property causes the system to withdraw inertia as a user interacts with the system, but at different rates: more slowly from a user who rates items and whose items are rated by other users close to a targeted ratio and more quickly from a user who primarily does one but not the other.
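The inertia behavior described above can be illustrated under the assumption that the formula takes the form a − (bx + cy − √(dxy))/e, which is one reading of the garbled original. The constants b, c, d and e below are purely illustrative; a = 500 matches the initial notional-rating count of Fig. 8:

```python
import math

def notional_ratings(x, y, a=500.0, b=4.0, c=4.0, d=50.0, e=1.0):
    """Number of notional default-level ratings still backing a user,
    given x ratings received on the user's items and y ratings the user
    has assigned to others' items. Inertia never goes negative."""
    return max(0.0, a - (b * x + c * y - math.sqrt(d * x * y)) / e)
```

With these constants, 100 interactions split evenly (x = y = 50) leave more inertia than 100 interactions on one side only (x = 100, y = 0), matching the tandem-versus-divergent behavior described above.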
  • Fig. 8 is a three-dimensional plot of a High Regard Inertial Model having two axes labeled from 0 to 100 representing, along the first axis, the number of ratings other users have assigned to items contributed by the user in question and, along the second axis, the number of ratings the user in question has assigned to items contributed by other users.
  • a third axis labeled from 0 to 500 represents the number of notional ratings (set at a predetermined default level of Regard) assigned to an item contributed by the user in question in order to create the inertia effect.
  • the effect of inertia diminishes as other users assign ratings to items contributed by the user in question and as the user in question assigns ratings to items contributed by other users.
  • the shape of the three-dimensional plot represents the characteristic of the specified embodiment that the inertia effect diminishes more slowly when the user in question rates items contributed by other users and the user in question's contributed items are rated by other users, in tandem.
  • the described embodiment reflects a target ratio.
  • the Regard inertia effect in the described embodiment diminishes rapidly, in contrast, when the user in question rates items contributed by other users often, but few users rate items contributed by the user in question, or vice versa.
  • the number of notional ratings is initially 500.
  • the item's Quality is set equal to that rating. This could have undesirable consequences.
  • the item may be rated initially by a poorly Regarded user and for a period of time thereafter, by a small or poorly Regarded subset of users.
  • d represents the number of units of time, using a standard unit selected for a preferred embodiment (e.g. minutes, hours or days), measuring the time elapsed since user i contributed item n.
  • d is expressed in minutes.
  • the notional user assigns a 2 and k a for each item, taking into account the Quality of other items with a track record of ratings, time elapsed since the item was contributed and the number and level of ratings assigned to the item by other users.
  • the notional rating is set equal to a default Quality value, with the formula determining the notional number of ratings in the form of:
  • x represents the aggregate number of ratings assigned to the item and y the time interval d.
  • This formulation causes the system to withdraw inertia as users rate the item and with the passage of time, but at different rates: more slowly for an item that receives ratings at a targeted rate per period of time and more quickly for an item that receives ratings at a higher rate (resulting in adequate data) or at a lower rate (suggesting disinterest).
  • Fig. 9(a) is a three-dimensional plot of a Quality Inertia Model having a first axis labeled from 0 to 300 units of time as specified above, a second axis labeled from 0 to 80 representing the number of ratings users have assigned to the item in question and a third axis labeled from 0 to 600 representing the number of notional ratings (initially set to a predefined default level of Quality) assigned to the item in question in order to create the inertia effect.
  • the effect of a rating diminishes with the passage of time and as other users assign ratings to the item in question.
  • the shape of the three-dimensional plot represents the characteristic of the described embodiment that the inertia effect diminishes more slowly when users rate the item in question steadily in small numbers.
  • the described embodiment reflects a target number of ratings per unit of time. The Quality inertia effect diminishes rapidly, by contrast, when considerable time passes with very few ratings contributed, or if a large number of users rate the item during a short time interval.
  • Notional user values created for calculations of Quality will, in some embodiments, be excluded from the calculation of High Regard values.
  • a standard time period, e.g. days, weeks or months, determines the rate at which ratings are removed from the calculation.
  • g be the number of such standard time periods from which ratings will be included in the calculation of Expertise or Regard, as the case may be, after which they are removed.
  • let T_m and R_m be the (P × P) matrices of t_ij and r_ij values reflecting only the ratings provided during a single period m (regardless of the date the item was contributed).
  • T̄_m and R̄_m, defined below, substitute for T and R to calculate Expertise or Regard, whichever is used in the embodiment.
  • T̄_m = T_m + T_{m−1} + . . . + T_{m−g+1} and R̄_m = R_m + R_{m−1} + . . . + R_{m−g+1}
  • Segmented Decay treats differently subgroups of users segmented according to one or more user characteristics.
  • Users can be divided into subsets according to their Regard (or Expertise) values, or some other appropriate attribute. For example, one can make the ratings given to items contributed by the most poorly Regarded users "sticky," that is largely unaffected by decay procedures, or even strengthen the worst ratings under some circumstances. In this way, users with the lowest level of Regard who are prone to disruptive behavior are largely and permanently excluded from user interactions.
  • let σ(i, U) be a function that maps users into subsets U_1, . . . , U_Z. In some embodiments, this function will segment users according to their previously computed Expertise or Regard.
  • a standard time period, e.g. days, weeks or months, determines the rate at which ratings are affected by a transformation function using the procedure set forth below.
  • let T_m^x and R_m^x be (P × P) matrices including only ratings provided during period m (by any user in any subset) for items contributed (at any time) by users in subset U_x.
  • the procedure separates users according to four levels of Expertise or Regard
  • the procedure adjusts the record of ratings as follows. When a user provides a rating of an item contributed by another user, the rating remains in the record unaffected during the following 10 standard time periods. All such ratings are removed from the record after 25 standard time periods.
  • T and R are adjusted according to different formulas for each subgroup, which lessen the impact of each rating as the periods progress and adjust older ratings from lower values to higher values in the case of poorly Regarded users and from higher values to lower values in the case of well Regarded users.
  • the procedure continues to base Expertise or Regard, as the case may be, on item ratings assigned over an extended period of time, but lightens the weight of bad historical ratings on poorly Regarded users and reduces the benefit of good historical ratings for highly Regarded users.
  • T̄_m = T_m + . . . + T_{m−10} + φ_{11}T_{m−11} + . . . + φ_mT_1, for m > 12
  • R̄_m = R_m + . . . + R_{m−10} + ρ_{11}R_{m−11} + . . . + ρ_mR_1, for m > 12, where the φ and ρ transformations implement the subgroup-specific adjustments.
  • Fig. 9(b) is a graph of a Segmented Decay.
  • Fig. 9(b) displays the different adjustments made to ratings assigned to items contributed by users in four different categories of Regard.
  • the first group U_1 represents users with Regard of greater than 0.75 to 1.0 (inclusive).
  • the second group U_2 represents users with Regard of greater than 0.50 to 0.75.
  • the third group U_3 represents users with Regard of greater than 0.25 to 0.50.
  • the fourth group U_4 represents users with Regard from 0.0 to 0.25.
  • the horizontal axis represents time periods 11-25 following the contribution of any rating. The time periods can represent days, weeks, or months, or some other time intervals.
  • Time periods 0 through 10 are not represented on the graph because in the described embodiment, segmented decay does not begin to affect any rating during the first ten periods following the contribution of the rating.
  • the described embodiment causes ratings to regress toward the rating of 0.5, represented by the movement of the diamonds, triangles, circles and spades toward the mean given the passage of time.
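A minimal sketch of the segmented regression toward 0.5 that Fig. 9(b) depicts. The grace period (10 periods) and removal horizon (25 periods) come from the description above; the per-group rates are illustrative assumptions, since the patent leaves the per-subgroup formulas open:

```python
def decayed_rating(rating, age, group):
    """Segmented Decay sketch: a stored rating is untouched for its first
    10 periods, regresses toward the neutral value 0.5 during periods
    11-25 at a per-Regard-group rate, and is removed after period 25.
    group: 0 = highest-Regard quartile ... 3 = lowest (illustrative rates;
    the lowest group decays slowest, making its ratings "stickiest")."""
    GRACE, HORIZON = 10, 25
    RATES = {0: 1.0, 1: 0.75, 2: 0.5, 3: 0.25}
    if age <= GRACE:
        return rating                    # untouched during the grace period
    if age > HORIZON:
        return None                      # dropped from the record entirely
    frac = (age - GRACE) / (HORIZON - GRACE) * RATES[group]
    return rating + (0.5 - rating) * frac
```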
  • the Vouching user's higher Expertise or Regard value is thereafter associated with the item, instead of the Expertise or Regard of the user who contributed the item, in procedures that filter, highlight, sort or otherwise evaluate items based on the Expertise or Regard of the contributor.
  • the user who Vouched for the item receives credit — or potentially loses Expertise or Regard — to some extent, as does the user who contributed the item.
  • “Discrediting” does the opposite: it permits users to dedicate their Expertise or Regard, as the case may be, to impugn an item submitted by another user. It is designed to give users an incentive to identify bad items contributed by users with relatively high Expertise or Regard, quickly alerting the community of users. A value equal to one minus the Expertise or Regard of the user who Discredits the item is thereafter associated with the item, instead of the Expertise or Regard of the user who contributed the item. As the item subsequently receives ratings, the user who Discredited the item receives credit — or potentially loses Expertise or Regard—but in an inverse relationship to the ratings received by the item.
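The substitution rules described in the two bullets above can be sketched directly; the event-list representation is an assumption of this sketch:

```python
def associated_value(contributor_regard, events):
    """Regard value associated with an item across a series of Vouch /
    Discredit events. A Vouch substitutes the acting user's Regard; a
    Discredit substitutes one minus the acting user's Regard."""
    value = contributor_regard
    for action, user_regard in events:
        value = user_regard if action == "vouch" else 1.0 - user_regard
    return value
```

For the numbers in the worked example of Figs. 20(a) and 20(b), a Discredit by a user with Regard 0.400 leaves the item with an associated value of 0.600, since 1 − 0.400 = 0.600.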
  • Vouching indicates the establishment of a link with column n of the K and G matrices.
  • the newly created (and the original) columns reflect future ratings assigned to the item.
  • Vouching may also link the newly created column to historical ratings.
  • Discrediting indicates the establishment of a link with column n of the K and G matrices.
  • the newly created columns reflect the mirror image of future ratings assigned to the item.
  • Discrediting may also link the newly created column to the mirror image of historical ratings.
  • a user is only permitted to Vouch or Discredit if the user's
  • a user who Vouches or Discredits might receive only a partial link to ratings for the item, expressed generally in the case of Vouching as:
  • a user who Vouches or Discredits will be affected by ratings on the basis of how much the act raises or lowers the profile of the item, as the case may be.
  • let u_m be the m-th user in U.
  • users may be permitted or encouraged to contribute items that are anticipated to yield a negative response.
  • an item might be a link to a web site containing objectionable content, which the user wishes to bring to the attention of the community of users.
  • a specific forum designed for this purpose may be established, clearly separated from other forums. If such items were to be directly associated with the contributing user, there would be substantial disincentives for participation. Therefore, in some embodiments, the user contributing the item will have the option to Discredit the item upfront, such that the adjusted rating would be included in calculations of Regard and Expertise for the contributing user. In some embodiments, the adjusted value will also be used in the calculation of the Quality of the item.
  • Figs. 20(a) and 20(b) are, respectively, a graph and a table to aid in explaining the concepts of "Vouching” and “Discrediting” and how these concepts provide a solution to the "sparse ratings problem" that arises during the early life of an item.
  • the vertical axis represents the Regard of an individual posting an item (e.g., a Regard of 0.400). This is the Regard that is associated with the item, for example, for use in the calculation of thread caliber (discussed below in Section 15).
  • inertia refers to Quality inertia, which may obscure the value of the theretofore received ratings, because too few ratings have been received during the period shortly following the contribution of the item. During this period, not enough ratings have been received to be certain that Basic Quality method or other methods of calculating Quality, or a simple average, or any other measure, is representative of user or expert opinion.
  • a preferred embodiment permits a user with a higher level of Regard to step forward and "Vouch" for the item. If, for example, the Regard of the Vouching user is 0.600, the Regard associated with the item being Vouched also becomes 0.600.
  • When a user Vouches for an item, the user is making the strongest possible statement in support of the item in question, backing up that statement with the user's reputation. Vouching for an item implies that the user would be willing to take the ratings offered by other people for the item as if the user had actually authored it. A user cannot, for example, Vouch a poorly Regarded item half-way up to the user's Regard level. Vouching, as used in the described embodiment, is all or nothing, based on the Vouching user's Regard value.
  • Discrediting is a similarly strong statement.
  • a user Discredits an item the user is making the strongest possible statement in opposition to the item in question, aligning the full value of the user's reputation against the item. In fact, the user would be willing to take the opposite of whatever ratings are given by other users.
  • poorly Regarded users are able to Discredit items contributed by highly Regarded users.
  • Fig. 20(b) shows calculation of Quality values starting at time period 9. (Prior to time period 9, an inertia value preferably is used for Quality).
  • Fig. 20(a) shows 15 time periods (1 through 15) during which users Vouch for or Discredit the same item.
  • a user having a Regard of 0.400 posts the item.
  • in time periods 2-4, three users Vouch for the item, improving the Regard associated with the item to 0.850.
  • in time period 5, a user having a Regard of 0.400 Discredits the item, giving it an associated Regard of 0.600. Users continue to Vouch and Discredit the item until time period 15, when a user Vouches for the item and improves its associated Regard to 0.900.
  • the shaded areas represent the maximum of Regard and Quality (i.e., MAX(Regard, Quality)) for the item.
  • a ratings system does not necessarily have robust information until a certain amount of time has passed (e.g., 9 time periods).
  • the results of Vouching and Discrediting provide a robust prediction of item Quality much earlier in the process, even when ratings are "sparse."
  • sequential Vouching/Discrediting preferably allows the Vouching or Discrediting user to share in the ratings received for the item, but only to the extent that the user moves the current Regard value associated with the item.
  • the user Vouching in time period 7 will have the least share in the ratings since the user has moved the Regard the least (from 0.800 to 0.875).
  • the user who Discredits in time period 12 will have the most significant share (in the opposite of the ratings) since he moves the Regard the most (from 0.950 to 0.750).
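The sharing rule of the preceding two examples can be sketched as follows. The Vouch update is taken from the 0.600 example above (the item's associated Regard simply becomes the Vouching user's Regard), and the share function divides the item's subsequent ratings among Vouching/Discrediting users in proportion to how far each moved the Regard. Function names and the data layout are illustrative assumptions, not the patent's implementation.

```python
def vouch(item_regard, voucher_regard):
    """All-or-nothing Vouch: the item's associated Regard becomes the
    Vouching user's Regard (e.g. 0.600 in the example above)."""
    return voucher_regard

def share_of_ratings(moves):
    """Each Vouching or Discrediting user shares in the ratings later
    received for the item in proportion to how far that user moved the
    item's associated Regard (Discrediting users take the opposite
    side of the ratings)."""
    total = sum(abs(m["delta"]) for m in moves)
    return {m["user"]: abs(m["delta"]) / total for m in moves}

# The two moves discussed above: the period-7 Vouch (0.800 -> 0.875)
# and the period-12 Discredit (0.950 -> 0.750).
moves = [
    {"user": "vouch_p7", "delta": 0.875 - 0.800},
    {"user": "discredit_p12", "delta": 0.750 - 0.950},
]
shares = share_of_ratings(moves)  # the period-12 user gets the larger share
```

As in the example, the period-12 Discrediting user, having moved the Regard by 0.200 versus 0.075, takes the larger share (of the opposite of the ratings).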
  • items will fall into separate categories, weighted differently in the calculation of Expertise or Regard. Factors considered in establishing separate categories will include the difficulty of creating the items, the effort required by other users to review the items and the urgency of the items.
  • the relative weight attached to an item in calculations of Expertise or Regard will depend on its category and a number of item attributes that are particular to the category.
  • the weight of an item is thus a function of its category and its category-specific attributes.
  • Some embodiments will involve the transmission, display or evaluation of groupings of items of different Quality, contributed by users with varying Expertise or Regard.
  • a good example is a threaded discussion list — two or more discussion group postings with a common subject and explicit relationships between messages, i.e. an indication of which messages respond directly to which other messages.
  • One objective of the method is to present groupings containing better items first, without breaking apart the thread structure. Multiple threads are sorted among themselves, based upon a measurement taking into account characteristics of some or all of the contents of each thread.
  • the Caliber method determines C_z, the "Caliber" of an item grouping.
  • Caliber is the grouping average of, for each item, the higher of Regard (of the user associated with the item) and Quality:
  • C_z = (1/m) · Σ_{n=1..m} MAX(h_n, q_n), where z identifies the grouping (e.g. a thread identification number), m is the number of items contained within, h_n is the Expertise or Regard of the user who contributed item n and q_n is the Quality of the item.
  • some embodiments may permit a user to select thresholds h_min and q_min, such that items below a certain level of Quality (q_n < q_min), or associated with a user of Expertise or Regard below a certain level (h_n < h_min), are excluded from the calculation of Caliber.
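Under those definitions, the Caliber of a grouping can be sketched as a short function. The threshold parameters and the empty-grouping fallback of 0.0 are illustrative assumptions.

```python
def caliber(items, h_min=0.0, q_min=0.0):
    """Caliber of an item grouping (e.g. a thread): the average, over
    the items kept, of MAX(h_n, q_n), where h_n is the contributor's
    Expertise or Regard and q_n is the item's Quality.  Items with
    h_n < h_min or q_n < q_min are excluded from the calculation."""
    kept = [(h, q) for h, q in items if h >= h_min and q >= q_min]
    if not kept:
        return 0.0  # assumed fallback when every item is excluded
    return sum(max(h, q) for h, q in kept) / len(kept)

thread = [(0.4, 0.9), (0.8, 0.2)]        # (h_n, q_n) pairs for one thread
c_all = caliber(thread)                  # averages max(0.4, 0.9) and max(0.8, 0.2)
c_filtered = caliber(thread, q_min=0.5)  # second item excluded by its Quality
```

Whole threads can then be sorted among themselves by their Caliber values without breaking apart the thread structure.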
  • Some embodiments will involve the transmission, display and integration of Expertise, Regard or Quality into the operation of online auctions.
  • the Auction method integrates Expertise or Regard, specifically, into an auction pricing mechanism.
  • the objective of the method is to increase the integrity, fairness and efficiency of online auctions by establishing standards and a procedure for a seller to limit the field of users permitted to bid in an auction.
  • s_i is an Expertise or Regard threshold defined by user i, who is putting item n up for sale in an online auction.
  • user i selects s_i at or before the commencement of the auction. If no value is selected, the default is zero.
  • s_i will establish the minimum Expertise or Regard required for a user to submit a binding bid qualified to participate in whatever auction pricing formula determines the winner of the auction and the closing price.
  • a user placing a record bid will have the option of withdrawing the bid at any time before the event described below.
  • the selling user will have the option at any time until the conclusion of the auction, or until a defined period prior to the conclusion of the auction, to reduce the value of s_i.
  • Record bids submitted by users who fall within the Expertise or Regard threshold after the reduction will immediately be made effective, pari passu with already pending bids, and the users will no longer be entitled to withdraw.
  • while the selling user may reduce s_i, the selling user is not permitted to increase its value at any point in the course of the auction. Reductions can be made one time, or multiple times during the pendency of an auction. s_i can even be reduced to zero.
  • the door is opened to bids from any users that meet the lower standard.
  • the seller will not be permitted to discriminate among users according to any standard other than the buyer's Expertise or Regard, applied consistently.
  • the user's Expertise or Regard may be monitored during the pendency of the auction and, if the value falls below the current level of s_i, the selling user is given the option (not an obligation) of releasing the user's binding bid.
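The bidding rules above can be sketched as a small class. The gating of binding bids by s_i, the withdrawable record bids, the promotion of record bids when the threshold is reduced, and the reduce-only rule follow the description above; the class and method names and the error handling are illustrative assumptions.

```python
class ThresholdedAuction:
    """Online auction in which a seller-defined threshold s_i gates
    which users may submit binding bids."""

    def __init__(self, s_i):
        self.s_i = s_i     # minimum Expertise/Regard for a binding bid
        self.binding = {}  # user -> bid amount (no longer withdrawable)
        self.record = {}   # user -> (bid amount, user's Regard)

    def bid(self, user, amount, regard):
        if regard >= self.s_i:
            self.binding[user] = amount
        else:
            self.record[user] = (amount, regard)  # held as a record bid

    def withdraw(self, user):
        # Only record bids may be withdrawn before they become effective.
        self.record.pop(user, None)

    def reduce_threshold(self, new_s):
        if new_s > self.s_i:
            raise ValueError("s_i may only be reduced, never increased")
        self.s_i = new_s
        # Record bids from users who now meet the threshold become
        # binding, pari passu with already pending bids.
        for user, (amount, regard) in list(self.record.items()):
            if regard >= self.s_i:
                self.binding[user] = amount
                del self.record[user]
```

For example, with s_i = 0.7, a bid from a user of Regard 0.5 is held as a record bid; once the seller reduces s_i to 0.4, that bid becomes binding and can no longer be withdrawn.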
  • Fig. 10 shows an example of a small data set in accordance with an embodiment of the Basic High Regard method.
  • This data set is intentionally small for the sake of example. It will be understood that actual data sets usually used with the methods and systems shown here can be very large.
  • Various users have rated items having item IDs 1-13.
  • every user has authored items. Ratings received from the users for various items vary between a lowest rating of 0.03 (item 7 rated by user 1 and item 4 rated by user 5) and a highest rating of 0.95 (items 2 and 3 rated by user 1).
  • Applying the Basic High Regard method discussed above yields the Regard (HR) values shown in table 1006 for each of users 1-5. It will be understood that the Regard values shown in this example are for purposes of example only and are not to be taken in a limiting sense.
  • Fig. 11 shows an example of a data structure used to store and retrieve data required to perform calculations of Regard and Quality in a preferred embodiment of the present invention.
  • This implementation uses circular linked lists to represent the sparse matrices used to store the ratings contributed by the users.
  • the Figure shows a Users linked list 1100 and an Items linked list 1140.
  • Users list 1100 contains an entry for p+1 users. The users have user IDs (uids) 0 through p.
  • Items list 1140 contains an entry for n+1 items that were contributed by the users. The items have item IDs (iids) 0 through n.
  • Each element in Users list 1100 has a corresponding Authors linked list 1120.
  • the entry in Users list 1100 for user 2 has an associated Authors list 1120 containing three entries. These entries in the Authors list 1120 shown in the Figure represent the authors whose items have been rated by user 2. Thus, the first entry 1122 represents user 1, whose item(s) were rated by user 2; the second entry 1124 represents user 4, whose item(s) were rated by user 2; and the third entry 1126 represents user 7, whose item(s) were rated by user 2.
  • Each element in an Authors list 1120 has a corresponding Ratings linked list 1130.
  • the entry 1122 in Authors list 1120 for Author 1 has an associated Ratings list 1130 containing three entries. These entries in the Ratings list 1130 represent items contributed by User 1 (and rated by user 2).
  • the first entry 1132 represents item 3, which was contributed by user 1 and rated by user 2;
  • the second entry 1134 represents item 8, which was contributed by user 1 and rated by user 2;
  • the third entry 1136 represents item 17, which was contributed by user 1 and rated by user 2.
  • Each "g" value in a Ratings list 1130 represents a rating for the item.
  • the "r" value in each entry of Authors list 1120 is the sum of the ratings g from its corresponding Ratings list 1130.
  • the sum of the ratings g in entries 1132, 1134, 1136 is stored as value r in entry 1122.
  • Each entry in a Ratings list 1130 has a "k" value.
  • the "t" value in each entry of Authors list 1120 is the sum of the "k" values from its corresponding Ratings list 1130.
  • the sum of the "k" values in entries 1132, 1134 and 1136 is stored as value "t" in entry 1122.
  • the Figure includes a plurality of circular linked lists 1150 (used to find high Regard values) and a plurality of circular linked lists 1160 (used to find "Quality"). Only one of each of lists 1150 and 1160 is shown for the sake of clarity.
  • Each list 1150 is formed by a series of b next links and represents the "r" and "t" values of a single author.
  • list 1150 contains entries whenever items authored by user 1 were rated by another user.
  • entry 1122 represents items of user 1 rated by user 2.
  • Entry 1154 represents items of user 1 rated by user 0.
  • entry 1154 points to the entry for user 1 in Users list 1100, which points in turn to entry 1122, forming circular linked list 1150.
  • Each list 1160 is formed by a series of s next links and represents the ratings received by a single item.
  • list 1160 contains entries whenever item 2 of user 4 was rated by another user.
  • entry 1162 represents item 2 of user 4 rated by user 2.
  • Entry 1164 represents item 2 of user 4 rated by user 9.
  • entry 1164 points to entry 1142 in Items list 1140 (for item 2).
  • Entry 1142 points to entry 1162, forming a circular linked list.
  • the method follows the s_next chain for a list 1160 to pick up the non-zero ratings "g" values.
  • each user in Users list 1100 has a corresponding "Regard" in high Regard list 1170.
  • Each entry in the lists is preferably time-stamped (to aid, for example, in the decay method discussed above).
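A preferred embodiment uses the circular linked lists of Fig. 11; for readability, the sketch below keeps the same information (per-(rater, author) r and t sums from the Authors-list entries, and per-item rating chains as in the lists 1160) in Python dictionaries rather than b_next/s_next pointers. All names are illustrative assumptions.

```python
from collections import defaultdict

class RatingStore:
    """Dict-based stand-in for the sparse structure of Fig. 11."""

    def __init__(self):
        # ratings[rater][author][item] = (g, k): the per-item rating g
        # and its auxiliary k value, as in a Ratings list 1130.
        self.ratings = defaultdict(lambda: defaultdict(dict))

    def add_rating(self, rater, author, item, g, k=1.0):
        self.ratings[rater][author][item] = (g, k)

    def r_t(self, rater, author):
        """The r and t sums held in an Authors-list entry:
        r = sum of the g ratings, t = sum of the k values."""
        entries = self.ratings[rater][author].values()
        return (sum(g for g, _ in entries), sum(k for _, k in entries))

    def ratings_for_item(self, item):
        """Collect every g value recorded for one item -- the role
        played by following the s_next chain of a list 1160."""
        out = {}
        for rater, by_author in self.ratings.items():
            for by_item in by_author.values():
                if item in by_item:
                    out[rater] = by_item[item][0]
        return out
```

For example, after user 2 rates items 3, 8 and 17 of author 1, `r_t(2, 1)` returns the sums that Fig. 11 stores as r and t in entry 1122.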
  • rating server 1202 can be part of a larger data processing system performing one or more of the functions discussed herein. Rating server 1202 is shown separately, but is not required to be separate from the entities to which it provides information. In addition, the functions of rating server 1202 and the other entities discussed below can be distributed over more than one data processing system or network without departing from the spirit and scope of the present invention.
  • Fig. 12(a) is a block diagram of a first example forum server application including a rating server 1202.
  • a forum server 1204 provides, for example, the content of a forum such as that shown in Fig. 2(a).
  • Forum server 1204 accesses a database 1206 storing the items displayed by the forum and preferably caching the Quality and Regard values returned from rating server 1202, although not all servers 1204 cache. Expertise and/or Caliber values might also be returned.
  • Rating server 1202 accesses its own database 1208 containing information about the ratings, Quality and the Regard of the various users (including associated Regards caused by Vouching and Discrediting, if these features are part of the system).
  • the data structure of Fig. 11 is preferably stored in database 1208.
  • a user requests forums, threads and articles within the threads and posts his own items through interaction with forum server 1204.
  • Forum server 1204 interacts with rating server 1202 to obtain the Regard of the contributing users and the item Quality requested.
  • Forum server 1204 also sends ratings contributed by users to rating server 1202 as they are received.
  • forum server 1204 identifies new items contributed by the users to rating server 1202. The items themselves are not necessarily sent to server 1202, but server 1202 needs to know that new items have been contributed.
  • An item ID can be determined either by forum server 1204 (in which case, the item ID is preferably stored in conjunction with a forum ID to uniquely identify the item) or by rating server 1202.
  • Forum server 1204 also identifies the existence of new authors to rating server 1202. Again, an author ID can be established either by forum server 1204 or by rating server 1202.
  • Forum server 1204 also identifies the existence of new users to rating server 1202.
  • a user id can be established either by forum server 1204 or by rating server 1202.
  • Forum server 1204 also identifies the existence of new ratings to rating server 1202.
  • Fig. 12(b) shows another embodiment of servers 1202 and 1204 in which server 1202 communicates directly with the user's browser instead of with forum server 1204.
  • server 1204 still sends information about new users, new items, new ratings, etc to rating server 1202, but requests for Quality, Expertise, Caliber and/or Regard values are sent by a browser of user 1201 and returned by rating server 1202 directly to the user's browser.
  • the browser may cache these values in certain embodiments.
  • Fig. 12(c) shows an example of a web page where the html of the page causes a web page including items (or descriptions of items) to be fetched from forum server 1204 and Regard, Expertise/Caliber and/or Quality values to be fetched separately from server 1202.
  • a web page could be fetched from a third server.
  • This third-party web page might include links to both forum server 1204 and rating server 1202. When the browser encounters these links on the web page, it requests information from the specified server and incorporates it into the displayed web page.
  • Fig. 13 is a block diagram of another example forum server application communicating with a separate integrated content server 1302 and a rating server 1202.
  • Integrated content server 1302 communicates with forum server 1204 to obtain items (e.g., threads of forum messages) and communicates with forum server 1204 (and indirectly with rating server 1202) to obtain Regard/Expertise for users and/or Quality for items.
  • Server 1302 preferably also communicates with a global network.
  • integrated content server 1302 might obtain content from an outside source (for example, an online newspaper) and add ratings information to the content so obtained.
  • Fig. 14 is a block diagram of another example forum server application 1204 communicating with a separate integrated content server 1302 and a rating server 1202.
  • An ad server 1402 can be, for example, a known commercial ad server, such as the Doubleclick, 24/7 or Adforce ad servers.
  • Ad server 1402 communicates with a user's browser, for example, to deliver ad content.
  • the user 1401 views the contents of a web page including ads from ad server 1402 and integrated content from server 1302.
  • the integrated content includes data from forum server 1204 and Regard and/or Quality values from rating server 1202.
  • the ads can include clickthrough banners from ad server 1402 and/or advertiser's web site 1404.
  • Servers 1402/1404 access a commercial data server 1406.
  • Commercial data server 1406 can track the user's identity through information provided from servers 1402 and 1404 and obtains the Regard and/or item Quality from rating server 1202. Commercial data server 1406 then provides the obtained ratings information to the requesting advertiser web site 1404, ad server 1402 and forum server 1204 to help them target their advertising. Thus, rating server 1202 provides data both to the forum server 1204 and to commercial data server 1406, which forwards the data to its own requesting clients.
  • rating server 1202 can provide ratings directly to the browser of user 1401 in a manner similar to that shown in Fig. 12(b).
  • Fig. 15(a) is a flow diagram showing communication between elements of Fig.
  • Fig. 15(b) is a flow diagram showing communication between elements of Fig.
  • the forum server requests ratings for the article from rating server 1202 and provides the items and ratings to the user.
  • the rating is sent to rating server 1202, which calculates a new Quality for the item and sends the new Quality to the forum server and web server with instructions that the new Quality invalidates any previously cached Quality for the item. Details of this process are shown in Figs. 16(d)-16(f).
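The invalidation step in that flow can be sketched as follows. The subscriber-style interface and the method names are assumptions for illustration; the Quality computation is passed in as a callable standing in for the Basic Quality method, whose details are given elsewhere.

```python
class CachingServer:
    """Forum or web server that caches Quality values per item."""

    def __init__(self):
        self.cache = {}

    def invalidate(self, item, new_quality):
        # The new Quality invalidates any previously cached (stale) value.
        self.cache[item] = new_quality


class RatingServer:
    """On each new rating, recompute the item's Quality and tell every
    caching server that its previously cached value is invalid."""

    def __init__(self, compute_quality):
        self.compute_quality = compute_quality  # stand-in for the Quality method
        self.subscribers = []

    def subscribe(self, server):
        self.subscribers.append(server)

    def receive_rating(self, item, rating):
        q = self.compute_quality(item, rating)
        for server in self.subscribers:
            server.invalidate(item, q)
        return q
```

In the flow of Fig. 15(b), both the forum server and the web server would be subscribers, so a single new rating refreshes the cached Quality everywhere.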
  • Fig. 15(c) is a flow diagram showing communication between elements of Fig.
  • the article/item is sent to rating server 1202.
  • Rating server 1202 sends a message to web server 1302 to invalidate any cached Caliber values for the thread (as opposed to items in the thread).
  • the forum server may handle the caliber calculation.
  • caliber may be calculated by a Java applet on the user's own machine. Details of this process are shown in Figs. 16(g)- 16(h).
  • Fig. 17 is a block diagram of an example integrated content server 1302, where the user's browser also communicates with an e-commerce web site 1702 and an e-commerce sourcing server 1704.
  • Rating server 1202 provides Quality and/or Regard values to forum server 1204 and to commercial data server 1406.
  • the user visits the e-commerce web site, which communicates the user's identity to commercial data server 1406.
  • server 1406 communicates with rating server 1202 to obtain information about the user and passes it to its own requesting clients, such as site 1702, server 1704, and server 1204.
  • Fig. 18 is a block diagram of an example auction server 1802, where the user's browser also communicates with an e-commerce web site 1702 and an e-commerce sourcing server 1704.
  • Rating server 1202 provides Quality and/or Regard values to forum server 1204 and to commercial data server 1406. The user visits the e- commerce web site, which communicates the user's identity to commercial data server 1406.
  • server 1406 communicates with rating server 1202 to obtain information about the user and passes it to its own requesting clients, such as site 1702, server 1704, and server 1204.
  • the functionality of rating server 1202 is integrated into auction web server 1802, so that auction server 1802 calculates Regard and Quality and receives ratings.
  • auction web server 1802 can provide Quality and Regard information about items on the auction site and about auction transactions as discussed above.
  • rating server 1202 communicates Regard and/or Quality information to respective browsers of users 1810.
  • Fig. 19 is a block diagram of an example rating server 1202 for an individual and commercial rating service.
  • rating server 1202 communicates with browser or communication software of user 1902 to provide that user with ratings (Quality and/or Regard) of other users and their items.
  • Rating server 1202 also communicates with commercial data server 1406 as described above in connection with Fig. 14.
  • the commercial data server passes information concerning ratings obtained from rating server 1202 to both e-commerce web sites and advertisers' web sites and to still other individual users.
  • rating server 1202 provides data both to individuals and to a commercial service in this example.

Abstract

A method and system for constructing, applying and distributing ratings of users, user-contributed items, groupings of related user-contributed items and other items in a network environment. Users rate each instance of one user interacting with other users, referred to as an 'Item' (136). Ratings of Items (134) construct measurements of a user's competence and credibility (Expertise and Regard) as a participant in a networked environment. Expertise and Regard factors determine the relative weight assigned to users' evaluations of Items, producing measures of Quality: relevance, accuracy, and importance. Embodiments include methods enabling one user to associate Expertise or Regard with Items to Vouch for and Discredit Items contributed by another; calculation of measures of groupings of user-contributed items (Caliber); and an environment in which users provide ratings as part of navigating through Items. The system comprises users at client computers (110, 120, 130) communicating over a network and server(s) (140).

Description

EXPERTISE- WEIGHTED GROUP EVALUATION OF USER CONTENT QUALITY OVER COMPUTER NETWORK
Related Applications
This application is related to and claims priority under 35 U.S.C. § 119(e) from U.S. Provisional Application No. 60/167,594 filed November 26, 1999, which is herein incorporated by reference.
A. Technical Field
This application relates to networks, such as computer networks and more specifically to a method and system for rating users, user-contributed items, groupings of related user-contributed items and other items on a network.
B. Background
A significant and distinctive feature of wide-area networks, including global wide area networks such as the Internet, is sometimes considered to be the ability of users anywhere on the network to access a centralized source for content in a particular category. The provision of information from centralized sources has a long tradition in print and electronic media. However, in many respects, despite the wider possibilities for dissemination, this is an "old media" concept, based on the economics of content production and information distribution before the advent of global wide area networks. Remarkably, this model still holds considerable sway, despite overwhelming changes in the technology and economics of information. More than ever before, any user anywhere has the ability not only to access centrally produced content, but also to interact with other users anywhere — at almost zero marginal cost.
Nevertheless, several considerations have reduced the extent and growth of decentralized interactions over wide-area networks. It is difficult for a user to collect accurate information and form a meaningful opinion about the competence and credibility of another party without extensive contact — costly in time and effort. When interactions do occur, they are largely one-time or limited in number, with few expectations of repeat dealings. There is little incentive for users to share their assessments of other users and such assessments cannot necessarily be taken at face value.
Accessing raw, decentralized content that is time sensitive, without the ability to identify which users regularly produce items of higher Quality, means wading through an avalanche of biased, narrow or uninformed material obscuring valuable contributions. Even with the passage of time, after other users have reviewed the content and reacted to it, there may be more information to assess rather than less. User comments are rarely communicated in a form useable by others to filter content for Quality. By sheer volume, commentary is typically focused on controversial or negative content that has (often intentionally) prompted a strong reaction.
Any system that rates Quality or content must achieve a critical mass of input in order to produce useful output. However, in the case of the Internet and other wide-area networks, ratings are systemically underprovided. The problem is particularly acute for content that is produced decentrally in a networked environment. When rating schemes seek input widely and make results freely available on public networks, users do not have appropriate incentives to contribute.
Although users may engage in some rating, the possibility to "free ride" on the contributions of others discourages involvement. Users may respond to financial incentives to provide ratings, but it is difficult to monitor the Quality of these contributions. Also, if such incentives are financed through user fees restricting access, fewer users benefit from the ratings.
Much of the prior art in the field of ratings has to do with predicting what is the best item of content to deliver next to a particular user — often for strictly commercial purposes — based on the user's revealed preferences and current position on the network, so-called "collaborative filtering." The mechanism of this approach is generally to associate each user with other users of like mind who have been given appropriate incentives to provide ratings, or who have otherwise provided ratings incidentally or voluntarily. Such a method does little, however, to assist users exploring new subject matter. The emphasis is not absolute item Quality, but the relative likes and dislikes of different people, and the objective is to provide a user who has revealed certain preferences with more of the same. At best, such a method determines how the user, with the user's current breadth of knowledge and experience, might rate items. It does little to identify experts acknowledged by other people who have themselves demonstrated talent.
The inadequacies of conventional rating systems have exacerbated the centralized nature of content production and filtering on the Internet. Groups of online "experts," such as the staff or affiliates of Internet portals or community web sites with tight, centralized editorial control, play an excessive role. As a result, fewer and less meaningful interactions occur that take advantage of the rich, decentralized resources of wide-area networks.
Summary of Preferred Embodiments
The described embodiments of the present invention offer alternative structures for decentralized interactions among users on wide-area networks and a method of constructing, applying and distributing ratings of users, user-contributed items, groupings of related user-contributed items and other items.
In certain of the described embodiments, the higher the Quality of the items contributed by or associated with a particular user, the more weight assigned to ratings supplied by such user. At the same time, a calculation using weighted ratings determines item Quality. In effect, the weights assigned to ratings affect Quality and Quality affects the weights assigned to ratings, simultaneously. The preferred embodiments include a solution to the "circular" character of this approach.
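One natural way to resolve that circularity is to iterate toward a fixed point: start every user from a uniform weight, compute each item's Quality as the Regard-weighted mean of its ratings, recompute each user's Regard as the mean Quality of the user's own items, and repeat until the values settle. The sketch below illustrates this idea only; it is not the patent's Basic High Regard formula, and all names and the starting weight are assumptions.

```python
def iterate_regard_quality(ratings, author_of, rounds=50, start=0.5):
    """ratings: item -> {rater: rating in [0, 1]};
    author_of: item -> contributing user.
    Returns (regard, quality) after iterating the circular update."""
    users = {u for by_user in ratings.values() for u in by_user}
    users |= set(author_of.values())
    regard = {u: start for u in users}
    quality = {}
    for _ in range(rounds):
        # Item Quality: Regard-weighted mean of the item's ratings.
        for item, by_user in ratings.items():
            w = sum(regard[u] for u in by_user)
            quality[item] = sum(regard[u] * g for u, g in by_user.items()) / w
        # User Regard: mean Quality of the items the user contributed.
        for u in users:
            own = [quality[i] for i, a in author_of.items()
                   if a == u and i in quality]
            if own:
                regard[u] = sum(own) / len(own)
    return regard, quality
```

With two users who each rate the other's single item, the iteration settles immediately: each item's Quality equals its one rating, and each user's Regard equals the Quality of the user's item.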
The described embodiments include a series of mathematical methods embedded in network processes. These network processes provide for a series of interactions between user nodes, mediated by other network elements that create a structured environment. Among other things, user interactions can include participation in a web site's online discussion group or chat facilities, an e-mail based mailing list, unidirectional, bidirectional or widely broadcast digital video or audio communication, a distributed communication facility such as Usenet newsgroups or IRC chat, or an online auction or other facility through which users provide, or agree to provide, goods and services in commerce, in each case enhanced with features of the described embodiments.
Among such enhancements are facilities that provide participants with an opportunity to rate each instance of one user interacting with other users, referred to herein as an "item." For example, a chat session would generally be considered as a separate item for each user who participated, for example by injecting at least one statement into the session during a particular hour. As another example, particular charities might be considered to be items. Users can post their ratings of the charities, and the charities are assigned a Quality value as described below.
In certain preferred embodiments, ratings of items contributed by or associated with a particular user are used to construct measurements of the user's competence and credibility as a participant in a networked environment, called herein "Expertise" and "Regard." In general, the term "Expertise" is used in connection with the "Expert" methods described herein and the term "Regard" is used in connection with the "High Regard" methods described herein.
Either Expertise or Regard can be used as a factor in determining the relative weight assigned to each user's evaluations of items contributed by or associated with other users, producing a measure of the relevance, accuracy and importance of a particular item, referred to herein as item "Quality." For example, the Expertise or Regard of users rating a particular discussion group posting might be used to weight their ratings in a calculation of the posting's Quality.
Either Expertise or Regard may also be used to predict the likely relevance, accuracy and importance of a user's contributed items (both historical and future), for example, when a particular item is too new to have itself received ratings or when ratings of a particular item are sparse. Either Expertise or Regard may also serve as an independent benchmark relevant to an evaluation of the user for other purposes, including purposes not associated with interactions across wide-area networks.
Expertise, Regard or Quality, or related measures, may be bundled with relevant content and transmitted to nodes on a network, offering multiple additional opportunities to enhance user interactions, in some embodiments. One or more of these measures may also be used to filter, highlight, sort or otherwise evaluate items or to limit a user's interactions to items and other users who meet minimum standards.
Items contributed by users who regularly contribute items of poor Quality or interject negative, disruptive or misleading items into the system through incompetence or ill will, can be identified, downgraded or even removed from view.
Equally, items by users who regularly provide items of high Quality can be given immediate prominence, even before receiving a single user rating.
High ratings can raise the profile of any item, irrespective of the Expertise or Regard of the user who contributed it. Also, an item that misses the mark and receives poor ratings can be identified, downgraded or removed from view, even if the user who contributed it has high Expertise or Regard.
Arguably, these methods benefit not only users in general, but also the contributors, who would want more visibility for their best items.
The described embodiments of applicants' invention also include methods enabling users to "Vouch" for items contributed by another user by associating their own Expertise or Regard with such items. In this manner, a well Regarded user (also called a highly Regarded user) can quickly raise the prominence of a valuable item contributed by an unknown or otherwise poorly Regarded user. Similarly, users are enabled to "Discredit" items contributed by other users by asserting their own Expertise or Regard in opposition to such items. In these cases, the Expertise or Regard not only of the contributor, but also of the user who Vouched for or Discredited such items can be enhanced or diminished depending on other users' ratings of such items. To keep ratings current and relevant, there are also methods to remove ratings from the calculation of Expertise or Regard over time, based on the age of items, the Expertise or Regard of the user and other factors.
To facilitate the accumulation of a critical mass of data, certain embodiments will construct an environment in which users provide ratings as a natural part of navigating through items. For example, if the user views an item for a period of time suggesting the user has given it attention and consideration, the user may be encouraged or required either to provide a rating or to exit the system, or the portion of the system the user is currently interacting with. Also, for example: the act of moving from one item to another at a particular moment, when certain visual cues (among a rotating series of alternatives) appear on the user interface, may indicate that the user has selected a particular rating. The intention is to create an efficient manner for users to provide ratings, to limit the requirement to provide ratings to situations in which the user has actually reviewed the item and to generate a large body of useful rating data. By providing an efficient user interface and striking an appropriate balance, the breadth, Quality and utility of systematic ratings more than compensates users for the additional effort required to supply ratings.
Advantages of the described embodiments will be set forth in part in the description which follows and in part will be obvious from the description, or may be learned by practice of the described embodiments. The objects and advantages of the described embodiments will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims and equivalents.
Brief Description of the Drawings
Fig. 1 shows a network, various users of the network and a network server.
Fig. 2(a) shows an example of a user interface allowing users to view and rate items.
Fig. 2(b) describes alternative rating schemes that can be used in accordance with the present invention.
Fig. 2(c) shows a web page incorporating a rating scheme in accordance with the present invention.
Fig. 3 shows a simple example in a system having only four users demonstrating that ratings supplied by users are not necessarily valued equally.
Fig. 4(a) and 4(b) show a simple example demonstrating that a user's Regard is affected by how other users rate the items that the user contributes and by the Regard of those users, and that one user's Regard can affect the Regard of other users and the Quality of items contributed by other users.
Figs. 5(a) and 5(b) show another simple example demonstrating that a user's Regard is affected by how other users rate the items that the user contributes and by the Regard of those users, and that a user's Regard can affect the Regard of other users and the Quality of items contributed by other users.
Figs. 6(a) and 6(b) show a simple example demonstrating that a user's Expertise is affected by how others rate the items that the user contributes and that a user's Expertise can affect the Quality of items contributed by other users.
Fig. 7(a) is an overview of data flows in an example High Regard ratings system, showing the circular nature of Regard affecting the Regard of other users and the Quality of items contributed by other users.
Fig. 7(b) is a more detailed overview of data flows in an example High Regard ratings system, showing the circular nature of Regard affecting the Regard of other users and the Quality of items contributed by other users, emphasizing procedures for the flow, storage and interpretation of data.
Figs. 7(c)-7(e) are flow charts showing the inputs and outputs to "basic" determinations of item Quality, user Regard and user Expertise.
Fig. 8 is a plot of a Regard Inertial Model.
Fig. 9(a) is a plot of a Quality Inertia Model.
Fig. 9(b) is a plot of a Segmented Decay Model.
Fig. 10 shows an example of a small Regard data set.
Fig. 11 shows an example of a data structure used to store and retrieve data required to perform calculations of Regard and Quality in a preferred embodiment of the present invention.
Fig. 12(a) is a block diagram of a first example forum server application including a rating server.
Fig. 12(b) is a block diagram of another example forum server application including a rating server, where the rating information is sent directly to a user's web page.
Fig. 12(c) is an example of a web page generated in accordance with the block diagram of Fig. 12(b).
Fig. 13 is a block diagram of another example forum server application communicating with a separate communication forum.
Fig. 14 is a block diagram of another example forum server application communicating with a separate communication forum and an ad server.
Fig. 15(a) is a flow diagram showing communication between elements of Fig. 14 during forum/thread index intercommunication.
Fig. 15(b) is a flow diagram showing communication between elements of Fig. 14 during article view intercommunication.
Fig. 15(c) is a flow diagram showing communication between elements of Fig. 14 during post/reply intercommunication.
Figs. 16(a)-16(h) are flow charts showing methods used during the intercommunication processes of Figs. 15(a)-15(c).
Fig. 17 is a block diagram of an example e-commerce server.
Fig. 18 is a block diagram of an example auction server and a rating server.
Fig. 19 is a block diagram of an example server for an individual and commercial rating service.
Figs. 20(a) and 20(b) are, respectively, a graph and a table that aid in explaining the concepts of "Vouching" and "Discrediting."
Detailed Description of Embodiments
1. Overview
Fig. 1 shows a network 100, such as the Internet, an intranet, a wide-area network (WAN), a wireless or telephonic network, or any other appropriate network. Network 100 can also be a combination of various types of networks or of various networks and sub-networks. Users communicate via the network 100 by sending information by way of methods and protocols appropriate to the network 100. Although it will be understood that many users can access network 100 simultaneously, three users 110, 120 and 130 are shown. A user can be a human being or another entity, such as a computer program capable of accessing network 100. Fig. 1 also shows a rating server 140 as discussed below.
In the network of Fig. 1, users contribute items, such as, for example, e-mail messages or discussion group postings. (As discussed below in a separate section, the term "items" encompasses a wide variety of things.) The users view each other's items and some or all of the users rate the items contributed by other users. For example, in Fig. 1, user 110 reads others' items and rates them, but does not contribute items himself. In Fig. 1, user 120 reads others' items but does not rate them. User 120 does, however, contribute items. In Fig. 1, user 130 reads others' items and rates them. User 130 also contributes items. The three users discussed herein are intended to show that various users interact with network 100 in different ways. In Fig. 1, user 110 receives items 112 contributed by other users, including the
Quality of the items and the Regard of the users who contributed the items. It should be understood that, although Fig. 1 discusses a system using the Regard of users, Fig. 1 could also apply to a system that uses the Expertise of users (as both terms are defined herein). Only Regard is shown in the example of Fig. 1 to aid in maintaining the simplicity of the example. User 110 contributes a rating 114 of one or more of the items. User 120 receives items 122 contributed by other users, including the Quality of the items and the Regard of the users who contributed the items. User 120 contributes one or more items 126 of his own. User 130 receives items 132 contributed by other users, including the Quality of the items and the Regard of the users who contributed the items. User 130 contributes a rating 134 of one or more of the items. User 130 contributes one or more items 136.
It will be understood that Fig. 1 is provided by way of example and not limitation. The invention can be implemented in a network as shown, or in other environments, such as, for example, a database in which users enter, read and access items in the database. The invention could also be implemented in, for example, an e-mail system operating in a networked environment. It will be understood that, in the described embodiments, the functionality described herein is preferably performed by a data processing system or systems performing instructions stored on a medium or memory accessible by the data processing system(s). The invention is not limited to the architecture, programming models, protocols or procedures shown and described herein.
Fig. 2(a) shows an example of a user interface allowing users to view and rate items. Although not shown, users can also contribute their own items. Fig. 2(a) shows a web page that displays messages in an online discussion forum. The text of a current message is displayed in an area 202. As the user views a current message, he can rate the message using rating area 204 (or a similar rating scheme, such as one of the alternative ratings schemes described below).
The user interface of Fig. 2(a) uses alternating visual cues. Specifically, in the ratings scheme shown in the Figure, five diamond boxes 204 are displayed. One of these diamonds is highlighted per second, with the highlights preferably moving sequentially from left to right and wrapping from diamond #5 back to diamond #1. The pattern repeats until a predefined user action is detected. A predefined user action can be, but is not limited to a keystroke, mouse movement or click, voice command and/or another appropriate command. When a predefined user action is detected, it selects a rating corresponding to a currently highlighted diamond in area 204. In certain embodiments, such a predefined user action also causes the user to proceed to the next item, thread, or elsewhere. For example, the user clicks on a rating and automatically advances to a next item, thread, etc.
The rating diamonds preferably correspond to varying ratings, from, for example, lowest to highest ratings. Thus, the user merely needs to wait until a rating that he agrees with is highlighted and perform one of the predefined user actions, such as slightly moving his mouse or touching any key on the keyboard. The user does not have to move a mouse-controlled pointer to a particular location on the web page, or click or highlight specific objects or locations on the web page to rate messages, since the act of rating automatically causes the display to advance to the next message. In a preferred embodiment, the interface of Fig. 2(a), including diamonds 204, is implemented via an applet on the web page. The interface shown could be implemented using any appropriate method, such as via a browser plug-in, Java code transmitted during a particular network connection and persisting for a limited period of time, or a standard HTML web page.

Fig. 2(b) describes alternative rating schemes that can be used in accordance with the present invention. While Fig. 2(b) describes the "single click rating" scheme of Fig. 2(a), it also describes several other rating schemes. These include a "rate while you navigate" scheme and a "single keystroke/click rate/navigate" scheme. In the rate while you navigate scheme, use of keyboard or mouse commands to navigate to other items while the visual cues are alternating is interpreted as selecting the rating represented by the currently highlighted visual cue. In the single keystroke/click rate/navigate scheme, the rating diamonds 242 are not automatically highlighted. Instead, the user clicks on an appropriate diamond or presses one of keys 1, 2, 3 or 4 on the keypad (or some other predetermined keys) to both select a rating for the current message and to go to the next article, next thread, out or elsewhere.
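The timing logic behind the "single-click rating" scheme can be sketched as follows. This is a minimal illustration, not the patented implementation: the five rating values, the one-second cycle, and the function names are assumptions made for the example.

```python
# Assumed five-step rating scale, lowest to highest, one per diamond.
RATING_VALUES = [0.0, 0.25, 0.5, 0.75, 1.0]
CYCLE_SECONDS = 1.0  # each diamond is highlighted for one second

def highlighted_index(elapsed_seconds: float, n_cues: int = 5) -> int:
    """Return which visual cue is highlighted after `elapsed_seconds`,
    advancing left to right and wrapping from the last cue to the first."""
    return int(elapsed_seconds // CYCLE_SECONDS) % n_cues

def rating_for_action(elapsed_seconds: float) -> float:
    """Interpret a predefined user action (keystroke, mouse movement,
    etc.) at `elapsed_seconds` as selecting the currently highlighted
    rating."""
    return RATING_VALUES[highlighted_index(elapsed_seconds)]
```

In an actual applet or plug-in, `rating_for_action` would be invoked from the keyboard/mouse event handler, and the same action could also trigger navigation to the next item or thread.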
Although not shown, other ratings schemes do not automatically advance the user to a next view, but require the user to perform the rating and navigation functions separately (even though one or the other can still be accomplished by alternating indicators as discussed above). For example, an alternative rating scheme places the ratings diamonds next to each message or each thread visible on the page. Still another scheme places the rating diamonds next to the messages.
A message "thread" is a group of related messages having a linear order and a hierarchy of levels of indentation (representing interrelationship) so that a user can view the next or previous item in the message thread. It is important to note that, in the described embodiment, it is the messages, message threads, or other items that are being rated, not the authors/contributors of the items. Still another scheme allows the user to type a rating into a box on the web page or enter a rating in a drop-down menu or special window. Still another rating scheme allows the user to move a slider bar or similar non-discrete input device on a web page and translates the user's action into a numeric rating. Thus, the above user interfaces could also be implemented as non-discrete interfaces wherein user ratings fall along a spectrum (for example, between 0 and 1) and are not limited to predetermined discrete values.
Fig. 2(c) shows a web page incorporating a rating scheme in accordance with the present invention. In this example, the ratings diamonds invite the user to rate the entire page, not just the contents of certain areas on the page. The rating diamonds can alternate as shown above or require the user to click on or enter an appropriate rating.
Still another rating scheme rates products and services offered on and transactions completed through a networked commerce facility, such as an online auction service. In a transaction-rating scheme, the item can be, for example, a completed transaction in which the user was the buyer or seller, or another user monitoring the quality of the participation by the buyer or seller, or of the goods or services that are the subject of the transaction. Other types of items can also be rated in such a system. In such a rating scheme, Quality will attach to various transactions from a buyer's perspective and a seller's perspective. For example, the buyer may rate the seller highly, but the seller may rate the buyer much lower. Thus, the same transaction receives very different ratings from buyer and seller, both of whom are
"the author" of the transaction in some sense. Alternatively, the buyer behavior and the seller behavior can be considered two separate items that are rated separately. In general, in a product and services ratings scheme, any user may be able to provide a rating or only a limited number of users may provide ratings.
Fig. 2(a) also shows a Regard diamond 206 (which represents the Regard of the author of the current item) and a Quality diamond 208 (which represents the Quality value assigned to the current item). In general, Regard can be used alone as an independent benchmark, as a predictor of the value of the user's historical and future contributions or, in a system-wide calculation, as a factor determining the relative weight assigned to the user's evaluations of other user's contributions. The Quality of an item is based on the ratings assigned to the item by other users and also based on the Regards of those other users. A discussion of the meaning of and preferred methods of obtaining Regard and Quality are discussed below in detail. It should be understood that an interface similar to that of Fig. 2(a) could also be used to display item Quality and user Expertise in a system that uses Expertise values rather than Regard values. Fig. 2(c) also shows a ranking diamond 244. This diamond represents the
Quality of the web page. In this example, no web page author has been identified. This example demonstrates that Quality can be displayed for an item even though the item is not associated with a user. Alternately, both Regard (of the author) and Quality (of the page) could be displayed for the web page, in a manner similar to that shown in Fig. 2(a). Expertise could be used in place of Regard.
Figs. 7(c)-7(e) are flow charts showing the inputs and outputs to "basic" determinations of item Quality, user Regard and user Expertise. In certain embodiments, each user is assigned a Regard value, in accordance with ratings that other users assign to items contributed by that user and in further accordance with the Regard values of the users who contribute the ratings. In certain other embodiments, each user is assigned an Expertise value, in accordance with ratings that other users assign to items contributed by that user. In certain preferred embodiments, a user is assigned a default Expertise or Regard value. Quality values are determined for each item contributed by a user, whether the system implements Regard or Expertise. The Quality value of an item is determined in accordance with the ratings received for the item and the Regard (or Expertise) of the users who contributed the ratings. Details of the determination of "basic" Quality, Regard and Expertise are discussed below in Section 5 et seq. Variations on the basic determinations are also discussed below in Section 5 et seq.
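A minimal sketch of these "basic" determinations, assuming Quality is a Regard-weighted mean of an item's ratings and a user's Regard is the mean Quality of that user's items. The detailed methods of Section 5 refine this; the function names and the 0.5 default are assumptions for the example.

```python
DEFAULT_VALUE = 0.5  # assumed default for users/items with no rating history

def item_quality(ratings, regard):
    """Basic Quality of an item: mean of the ratings it has received,
    weighted by each rater's Regard (or Expertise, in an Expertise-based
    system). `ratings` maps rater id -> rating in [0, 1]; `regard` maps
    user id -> Regard in [0, 1]."""
    total_weight = sum(regard[u] for u in ratings)
    if total_weight == 0:
        return DEFAULT_VALUE
    return sum(regard[u] * r for u, r in ratings.items()) / total_weight

def user_regard(items_ratings, regard):
    """Basic Regard of a contributor, sketched here as the mean Quality of
    the items the user has contributed. `items_ratings` is a list of
    per-item rating dicts (rater id -> rating)."""
    if not items_ratings:
        return DEFAULT_VALUE
    qualities = [item_quality(r, regard) for r in items_ratings]
    return sum(qualities) / len(qualities)
```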
Figs. 3-6 show a simple example in a system having only four users demonstrating that ratings supplied by users are not necessarily valued equally in a system using Regard. In the following examples, Figs. 3-5 show a system implementing Regard values. Fig. 6 shows a system implementing Expertise values. To enhance the clarity of the examples in Figs. 3-6, it is assumed that the system has only four users and that only one item is contributed and rated at a time. It is contemplated that most systems employing an embodiment of the present invention will have many more users and that multiple item contributions and multiple ratings by the users of other users' items will overlap in time.
In Fig. 3, User 4 contributes Item A, which is viewed and rated by Users 1, 2 and 3. In the example, User 1 is well Regarded (having, e.g., a Regard of 0.9). User 2 is poorly Regarded, having a Regard of 0.1. User 3 is also poorly Regarded, having a Regard of 0.1. (Each User's Regard is based at least in part on his previous contributions and the ratings of other users for those contributions, as discussed below in detail).
This example assumes that possible ratings fall between 0 and 1 inclusive. Other systems may, of course, use other ratings scales. In Fig. 3, well Regarded User 1 "loves" Item A, giving it a rating of 1.0 (the highest rating possible). Poorly Regarded User 2 "hates" Item A, giving it a rating of 0.0 (the lowest rating possible). Poorly Regarded User 3 also "hates" Item A, giving it a rating of 0.0. Even though two users hate Item A, both of those users are poorly Regarded. Thus, Item A receives a Quality value that is closer to the rating by well Regarded User 1 than to the ratings received from Users 2 and 3. In other words, in a system using Regard, the ratings contributed by all users are not equally valued. Ratings from well Regarded users are valued more when determining the Quality of an item. It should be noted that the terms "love" and "hate" are presented here as an aid to understanding. The embodiment described here determines numeric ratings. For example, a user interface may allow a user to choose "love" or "hate" or some other word suggesting a reaction somewhere in between, but these non-numeric choices are eventually translated into numeric values for the purpose of calculations in the preferred embodiments.
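Assuming the basic Quality determination is a Regard-weighted mean of the ratings (consistent with the weighting described above), the Quality of Item A in this example works out to a value much closer to User 1's rating:

$$Q_A = \frac{(0.9)(1.0) + (0.1)(0.0) + (0.1)(0.0)}{0.9 + 0.1 + 0.1} = \frac{0.9}{1.1} \approx 0.82$$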
Figs. 4(a) and 4(b) show a simple example in a system having only four users demonstrating that a user's Regard is affected by how others rate the items that the user contributes and by the Regard of those users and that a user's Regard can affect the Regard of other users and the Quality of items contributed by other users. Again, for the purposes of this simple example, it is assumed that only one item is contributed and rated at a time, although this would not necessarily be the case in a real situation. Fig. 4(a) shows that, once users have rated Item A in Fig. 3, the ratings for
Item A affect the Regard of User 4, who contributed Item A. In the example, the new item contributed by User 4 receives high ratings from well Regarded users and User 4's Regard value goes up. In the example, User 4's Regard value rises from 0.5 (middle of the scale used here) to some higher value.
Fig. 4(b) continues the example of Fig. 4(a), showing that User 4's changed
Regard value affects the Regard and the item Quality of other users whose items User 4 has rated in the past. In the example, User 4 has historically given low ratings to items contributed by User 3. When User 4's Regard rises, it negatively affects the Quality value for items contributed by User 3 that were previously rated by User 4. The rise in User 4's Regard also negatively affects the Regard of User 3 (not shown), since User 3's Regard is based on the ratings his items received from other users and on the Regard values of those users. In the example, User 4 has historically given high ratings to items contributed by User 1. When User 4's Regard rises, it positively affects the Quality value for items contributed by User 1 that were previously rated by User 4. The rise in User 4's Regard also positively affects the Regard of User 1 (not shown), since User 1's Regard is based on the ratings his items received from other users and the Regard values of those users. The example of Fig. 4(b) could be expanded to show more steps in which the newly affected Regard of Users 1 and 3 affects the Regard of other users whose items Users 1 and 3 have rated, and the Quality of those items, and so on. New ratings for an item of one user can cause a change in the Quality of that item. Similarly, new ratings for an item of one user can cause a change in the Regard of that user, which can potentially cause a change in the Regard of all users and in the Quality of items for all users.
Figs. 5(a) and 5(b) show another simple example in a system having only four users showing that a user's Regard is affected by how others rate the items that the user contributes and by the Regard of those users and that a user's Regard can affect the Regard of other users and the Quality of items contributed by those other users.
Fig. 5(a) shows that a user's Regard can be adversely affected as well. Again, it is assumed that the system has only four users and that only one item is contributed and rated at a time. In Fig. 5(a), User 1 contributes Item B, which is viewed and rated by Users 2, 3 and 4. In the example, User 1 was well Regarded when he contributed Item B. User 2 is poorly Regarded, having a Regard of 0.1. User 3 is also poorly Regarded, having a Regard of 0.1. User 4 is well Regarded, having a Regard of 0.9. In Fig. 5(a), poorly Regarded User 3 "loves" Item B, giving it a rating of 1.0 (which in our example is the highest possible rating). Similarly, poorly Regarded User 2 "loves" Item B, giving it a rating of 1.0. In contrast, well Regarded User 4 "hates" Item B, giving it a rating of 0.0. Even though two users love Item B, both of those users are poorly Regarded.
Fig. 5(a) shows that, once the users have rated Item B, the ratings for Item B adversely affect the Regard of User 1, who contributed Item B. In the example, the new item contributed by User 1 receives low ratings from well Regarded users (and below the average ratings for User 1's previous items). Therefore, in the example, User 1's Regard value falls to 0.5 (in the middle of the range) from some higher value.
Fig. 5(b) continues the example of Fig. 5(a), showing that User l's changed
Regard value affects the Regard of other users and the Quality of items contributed by other users whose items User 1 has rated in the past. In the example, User 1 has historically given low ratings to items contributed by User 2. When User 1's Regard falls, it positively affects the Quality value for items contributed by User 2 that were previously rated by User 1, because his low ratings of these items are given less weight. The fall in User 1's Regard also positively affects the Regard of User 2, since User 2's Regard is based on the ratings his items received from other users and the Regard values of those users. In the example, User 1 has historically given high ratings to items contributed by User 4. When User 1's Regard falls, it negatively affects the Quality value for items of User 4 that were previously rated by User 1, because his high ratings of these items are given less weight. The fall in User 1's Regard also negatively affects the Regard of User 4, since User 4's Regard is based on the ratings his items received from other users and the Regard values of those users. The example of Fig. 5(b) could be expanded to show more steps in which the newly affected Regard of Users 2 and 4 affects the Regard of other users whose items Users 2 and 4 have rated, and the Quality of those items, and so on.
The examples of Figs. 4 and 5 have involved systems that implement a Regard value for users. In contrast, Figs. 6(a) and 6(b) show a simple example in a system that implements an Expertise value for users. The example shows that a user's Expertise is affected by how others rate the items that the user contributes and that a user's Expertise can affect the Quality of others' items. Note, however, that a change in the Expertise value of a user does not affect the Expertise values of other users. A user's Expertise value is changed when other users submit new ratings for that user's items.
Fig. 6(a) shows an example in which User 4 contributes Item C, which is rated by Users 1, 2 and 3. User 2 has a high Expertise, but "hates" the item and gives it a low rating. Users 1 and 3 both have low Expertise values, but "love" the item and give it a high rating. User 4's Expertise is based on the arithmetic mean of all ratings for all items contributed by User 4. Here, the new high ratings from Users 1 and 3 have raised the mean of the ratings of User 4's items. Thus, the ratings from other users who rate User 4's items affect User 4's Expertise, but the Expertise of those other users does not affect User 4's Expertise. It is irrelevant that the users who love Item C both have low Expertise values. It should be noted that, in a system implementing Expertise instead of Regard, Quality is determined in much the same way as shown in Fig. 3 and as described below. That is, the basic Quality determination for an item is made in accordance with ratings received for the item, weighted by Expertise values of the users providing the ratings.
Fig. 6(b) continues the example of Fig. 6(a), showing that User 4's changed Expertise value affects the item Quality of other users whose items User 4 has rated in the past. In the example, User 4 has historically given low ratings to items contributed by User 1. When User 4's Expertise rises, it negatively affects the Quality value for items of User 1 that were previously rated by User 4. The rise in User 4's Expertise does not affect the Expertise of User 1. In the example, User 4 has historically given high ratings to items contributed by User 2. When User 4's Expertise rises, it positively affects the Quality value for items of User 2 that were previously rated by User 4. The rise in User 4's Expertise does not affect the Expertise of User 2.
New ratings for an item of one user can cause a change in the Quality of that item. Similarly, new ratings for an item of one user can cause a change in the Expertise of that user, which can potentially cause a change in the Quality of items for all users.
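The Expertise calculation in the example above can be sketched directly, since it is an unweighted arithmetic mean over all ratings a user's items have received. The function name and the 0.5 default for users with no rated items are assumptions:

```python
def user_expertise(items_ratings, default=0.5):
    """Basic Expertise of a contributor: the arithmetic mean of every
    rating received across all of the user's items. The raters' own
    Expertise values do not enter this calculation (unlike Regard).
    `items_ratings` is a list of per-item lists of ratings in [0, 1]."""
    all_ratings = [r for item in items_ratings for r in item]
    if not all_ratings:
        return default
    return sum(all_ratings) / len(all_ratings)
```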
Again, it is emphasized that the above examples are simple examples, provided to aid in a basic understanding of the described embodiments of the present invention. A more precise discussion is provided below. The following paragraphs provide a discussion of terms and terminology used herein.
2. Items
The growth of the Internet and other wide area networks has multiplied the forms of, and opportunities for, decentralized interactions. An item may be composed of words, whether ASCII text or text formatted by a word processing technology or hypertext mark-up language, text contained in a collection of data packets constituting an e-mail message, discussion group posting or Usenet newsgroup article, or the output of a process that translates from one language to another, or written words into spoken words, or spoken words into written words. An item may also be an interactive, sequential exchange of words making up a group or one-to-one chat session.
An item may be a fixed visual image, whether a drawing or image captured by a digital camera, or transferred from a photographic original to a scanned representation of the original. Streaming audio or video, whether live or previously recorded material and whether unidirectional, bidirectional or broadcast, would constitute an item, as would words, images, or streaming audio or video integrated into a web site.
More broadly, any single behavior or collection of behaviors viewable by others, whether online or offline, may constitute an item. An example of this includes the course of a user's conduct in an online auction, whether as buyer or as seller. Another example is the performance offline of a subcontractor or of a general contractor, in their respective professional roles — whether assessed by each other or a third party — who entered into their agreement for the performance of services using an online medium.
An item may also be information, goods and services, or assets which one user recommends to other users, or otherwise associates with oneself or one's reputation. An example of this is a recommended link to a third party website, or a link deep into the structured hierarchy of a website. Another example is an asset one puts forth for sale in an online auction. Another example is software, whether distributed as source code or in executable form and whether constituting a stand-alone program or operating system, or a replacement of or addendum to a portion thereof. An item may exist only as a pointer in records to locations accessible over, or data streams transmitted across, a wide area network.
3. Users
In various embodiments of the present invention, different users or types of users may have items associated with them and different users or types of users may participate in the calculation of Expertise, Regard, Quality and other measures included in the described embodiments of the present invention. In certain embodiments, users will overtly choose to participate. For example, a user might access a mailing list, discussion group, chat session or other items or grouping of items via a facility, such as a website, which is specifically enabled with features of the described embodiments of the invention. In some embodiments, this facility will be the only avenue to access the items or grouping of items and users would anticipate receiving in the ordinary course information regarding the Regard and Expertise (and related measures) of users and regarding the Quality (and related measures) of items.
In other embodiments, some users need not ever access such a facility, or give any indication of interest in, or the intention of, assigning ratings to items contributed by other users, or give direct consent to the application of the measurements and methods of the described embodiments to themselves or to items they contribute. For example, a user might participate in a mailing list, discussion group, chat session or other grouping of items that is accessed by multiple facilities, only a subset of which are enabled with features of the described embodiments of the invention. More specifically, a user might choose to read and post Usenet newsgroup messages via a desktop application or website that has no support for features of the described embodiments, while other users access or contribute such postings via a facility that supports features of the described embodiments. In such embodiments, users who contribute items via facilities that are not enabled with any or all features of the described embodiments may also be incorporated into databases as item contributors and assigned Expertise or Regard values. The items contributed by such users may therefore be assigned Quality or related measurements and be subject to Vouching and Discrediting by other users and the other methods of the described embodiments, as discussed in various sections below.
4. Notation
The following section discusses a preferred method of determining Quality and Regard from ratings supplied by users. User Expertise is also discussed. Let $p$ be the number of participating users; $p$ grows to include each user added over time.
Let $i$ and $j$ represent two users, where $i, j \in \{1, \ldots, p\}$. Let $c_i$ be the number of items contributed by user $i$.
4.1 Ratings
Let
$$g_{ij}^{n} = x\left(y_{ij}^{n}\right),$$
where
$$n \in \{1, \ldots, c_i\}$$
and $y_{ij}^{n}$ is behavior by user $j$ in response to item $n$ contributed by user $i$, which behavior is mapped by the function $x(\cdot)$ to a floating point variable $g_{ij}^{n}$, with
$$0 \le g_{ij}^{n} \le 1.$$
In alternative embodiments, $g_{ij}^{n}$ can be any numerical scale for ratings, or any scheme that orders items to reflect user assessments or preferences.
In alternative embodiments, behavior $y_{ij}^{n}$ may include overt selection of a rating with a mouse or keyboard interface, a voice command, a movement of the hand or eye or other observable behavior by the human body. However, it is not necessary that ratings be overt, or that the user even be aware of participating in a ratings system. For example, behavior can also include the time spent viewing an item, the number of comments one interjects into a chat session, the decision to click on an ad banner, or a decision to bid on an auction opportunity. In the case of an item organized into a hierarchy, such as a web page with multiple levels of depth or specificity under subject categories, the behavior that triggers a rating may be the number of levels one traverses to lower levels. In the event that user $j$ does not connect with, engage in, or become associated with any behavior related to, the item, then $g_{ij}^{n}$ is null (no rating is recorded).
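A hypothetical sketch of such a mapping function $x(\cdot)$, translating overt and implicit behaviors into ratings on the $[0, 1]$ scale. The behavior categories and thresholds here (e.g. one minute of viewing time mapping to the maximum rating) are illustrative assumptions, not values from the specification:

```python
def x(behavior):
    """Map an observed behavior to a rating in [0, 1], or None when no
    relevant behavior occurred (no rating). `behavior` is a (kind, value)
    pair; the kinds below are illustrative."""
    kind, value = behavior
    if kind == "explicit":           # overt selection of a rating
        return max(0.0, min(1.0, value))
    if kind == "view_seconds":       # time spent viewing the item
        return min(value / 60.0, 1.0)
    if kind in ("clicked_ad", "placed_bid"):
        return 1.0 if value else None
    return None                      # user never engaged with the item
```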
In certain preferred embodiments, such as those shown in Figs. 2(a)-2(c), the user interface can present sequentially a repeating series of highlighted or colored diamonds 242, or other alternating visual cues, each associated with a particular value of the rating variable. In a preferred embodiment, the rate at which the visual cues 242 alternate is one second, although other intervals can be used. Upon any use of the keyboard or mouse, the currently colored or highlighted visual cue is interpreted as the user's rating ("single-click rating"). In order to limit ratings to items that a user has had a meaningful opportunity to review, in certain embodiments the visual cues and rating opportunity are not presented to the user until after the user has navigated to the end of the item and a fixed number of seconds has elapsed since delivery of, or the beginning of the user's interaction with, the item. In a preferred embodiment, the interval is 10 seconds. In certain other embodiments, use of keyboard or mouse commands to navigate to other items is interpreted as selecting the rating represented by the currently highlighted visual cue ("rate while you navigate"). Expressed differently, rating is an automatic or mandatory step in accessing items sequentially, or otherwise continuing to interact with the system, without exiting the system, the interaction or some level of the interaction.
Let

$$G_i = \begin{pmatrix} g_{i1}^1 & \cdots & g_{i1}^{c_i} \\ \vdots & & \vdots \\ g_{ip}^1 & \cdots & g_{ip}^{c_i} \end{pmatrix}_{(p \times c_i)}$$

We will also refer to the columns of $G_i$ as $g_i^1, g_i^2, \ldots, g_i^{c_i}$.
Let

$$r_{ij} = \sum_{n=1}^{c_i} g_{ij}^n$$

and

$$R = \begin{pmatrix} r_{11} & \cdots & r_{1p} \\ \vdots & & \vdots \\ r_{p1} & \cdots & r_{pp} \end{pmatrix}_{(p \times p)}$$

We will also refer to the rows of $R$ as $r_1, r_2, \ldots, r_p$ and the columns of $R$ as $r^1, r^2, \ldots, r^p$.
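A sketch of how a log of individual ratings might be accumulated into the summed-ratings matrix $R$, together with the companion count matrix $T$ defined in the following subsection. The tuple-keyed dictionary layout of the ratings log is an assumption for illustration:

```python
# Build R (summed ratings) and T (rating counts) from a log of
# (contributor i, rater j, item n) -> rating g entries. Users are 0..p-1.
# The dictionary layout is an illustrative assumption, not a specified format.

def build_r_and_t(ratings, p):
    """R[i][j] = sum of j's ratings of i's items; T[i][j] = their count."""
    R = [[0.0] * p for _ in range(p)]
    T = [[0] * p for _ in range(p)]
    for (i, j, n), g in ratings.items():
        if i == j:
            continue  # self-ratings are excluded (k = 0 when i = j)
        R[i][j] += g
        T[i][j] += 1
    return R, T

ratings = {(0, 1, 0): 0.8, (0, 1, 1): 0.6, (0, 2, 0): 0.4}
R, T = build_r_and_t(ratings, p=3)
# R[0][1] is approximately 1.4; T[0][1] == 2
```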
4.2 Number of Ratings
Let

$$k_{ij}^n = \begin{cases} 0 & \text{if user } i = \text{user } j \\ 1 & \text{if user } j \text{ has assigned a rating to the } n\text{th item contributed by user } i \\ 0 & \text{if user } j \text{ has not assigned a rating to the item} \end{cases}$$
Let

$$K_i = \begin{pmatrix} k_{i1}^1 & \cdots & k_{i1}^{c_i} \\ \vdots & & \vdots \\ k_{ip}^1 & \cdots & k_{ip}^{c_i} \end{pmatrix}_{(p \times c_i)}$$

We will also refer to the columns of $K_i$ as $k_i^1, k_i^2, \ldots, k_i^{c_i}$.
Let

$$t_{ij} = \sum_{n=1}^{c_i} k_{ij}^n$$

so that

$$0 \le t_{ij} \le c_i$$
Let

$$T = \begin{pmatrix} t_{11} & \cdots & t_{1p} \\ \vdots & & \vdots \\ t_{p1} & \cdots & t_{pp} \end{pmatrix}_{(p \times p)}$$

We will also refer to the rows of $T$ as $t_1, t_2, \ldots, t_p$ and the columns of $T$ as $t^1, t^2, \ldots, t^p$.
5. Basic Expert Method
The Basic Expert method determines $h_i$, the "Expertise" of a user, according to the ratings other users assign to items contributed by such user.
5.1 Initial Embodiment
In the initial embodiment of the method, Expertise is the arithmetic mean of the ratings that other users assign to items contributed by user $i$:

$$h_i = \frac{\sum_{j=1}^{p} r_{ij}}{\sum_{j=1}^{p} t_{ij}}$$

In the initial embodiment, the Expertise of each user is calculated periodically, once every 12 hours.
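The Basic Expert calculation, together with the default-value rule of the next subsection, can be sketched as follows. Plain Python lists stand in for the $R$ and $T$ matrices; the data layout is an assumption:

```python
# Sketch of the Basic Expert method: Expertise is the unweighted mean of
# ratings received, h_i = (sum_j r_ij) / (sum_j t_ij). Users with no rated
# items fall back to the mean Expertise of users whose items were rated.

def basic_expertise(R, T):
    """R[i][j]: sum of ratings j gave to i's items; T[i][j]: their count."""
    p = len(R)
    h = [None] * p
    rated = [i for i in range(p) if sum(T[i]) > 0]
    for i in rated:
        h[i] = sum(R[i]) / sum(T[i])
    default = sum(h[i] for i in rated) / len(rated) if rated else 0.0
    for i in range(p):
        if h[i] is None:
            h[i] = default  # default Expertise for users with no rated items
    return h

R = [[0.0, 1.0, 0.5], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
T = [[0, 1, 1], [0, 0, 0], [0, 0, 0]]
print(basic_expertise(R, T))  # [0.75, 0.75, 0.75]
```

User 0 has two ratings averaging 0.75; users 1 and 2, having no rated items, receive the default.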
5.2 Default Expertise
A user who has not contributed any items, or none of whose contributed items has been rated by any other user, is preferably granted the arithmetic average Expertise of users whose contributed items have received ratings:

$$h_d = \frac{\sum_{i \in P^*} h_i}{|P^*|}$$

where $P^*$ is the set of users whose contributed items have received ratings.
Note that, in the initial embodiment, the first time any item contributed by a user receives a rating, the Expertise of the user is set equal to that rating.
6. Basic High Regard Method
The Basic High Regard method determines $h_i$, the "Regard" of a user, according to the ratings that other users assign to items contributed by user $i$ and the Regard of the users providing the ratings.
6.1 Initial Embodiment
Fig. 7(a) is an overview of data flows in an example High Regard ratings system, showing the circular nature of Regard affecting the Regard of other users and the Quality of items contributed by other users.
As shown, one or more users contribute one or more items (also called content). A user does not have to contribute any item, but if he or she does contribute at least one item, the other users are given the opportunity to view and rate each item contributed. Each item is rated (for example, arrows 704, 709, 707) by one or more users. For example, as shown in Fig. 2(a), the user may rate a current message in an online forum. As another example, as shown in Fig. 2(c), the user may rate a web page. Various other types of items and groups of items are discussed herein.
The ratings of the other users, along with the Regard of the users providing the ratings, are used to create a Quality value of an item. Meanwhile, the Regard rating of a user giving ratings is potentially changing if that user is contributing items of his or her own or other users are rating his or her previously contributed items. As discussed below, not all embodiments determine Quality immediately when an item is entered. In a preferred embodiment, the Regard of users is updated periodically, for example, every twelve hours, although a shorter period of time can be used. In the future, as computing power increases, it is contemplated that both Quality and Regard will be updated more frequently.
Fig. 7(b) is a more detailed example of Fig. 7(a). It will be understood that the "matrices" shown here are a conceptual presentation of a described embodiment of the present invention. An actual implementation of the matrices shown in Fig. 7(b) is discussed below in connection with the data structure of Fig. 11. The rating of each item 712 contributed by each user and rated by each of the other users can be stored in a database as the record of ratings 718. The ratings 714 given to an item by the users, together with the Regard values 716 of the users, can be used to derive a measure of the Quality of the item 712. The Regard values of the users are created by using the entire record of ratings 718. In some embodiments, an iterative method can be used to simultaneously create the Regard values of the users and to use the Regard values to weight the ratings given by the users in the record of ratings.
Once the Regard values 716 of the users are determined, one can then calculate the Quality 720 of an item as discussed below in Section 5 et seq.
Certain embodiments calculate user Regard and item Quality values per user demand. Other embodiments calculate these values periodically and store them in a database for later retrieval.
In the initial embodiment of the Basic High Regard method, Regard is the arithmetic mean of the ratings that other users assign to items contributed by user $i$, weighting each rating by the Regard of the user providing the rating:

$$h_i = \frac{\sum_{j=1}^{p} h_j\, r_{ij}}{\sum_{j=1}^{p} h_j\, t_{ij}}$$
The Regard of each user is potentially an input into the calculation of the Regard of every other user, in addition to item ratings. Therefore, the High Regard method involves the simultaneous solution of a number of related equations.
In the initial embodiment, the Regard of each user is calculated periodically, once every 12 hours.
6.2 Default Regard
A user who has not contributed any items, or none of whose contributed items has been rated by any other user, is granted the arithmetic average Regard of users whose contributed items have received ratings:

$$h_d = \frac{\sum_{i \in P^*} h_i}{|P^*|}$$

where $P^*$ is the set of users whose contributed items have received ratings.
Note that, in the initial embodiment, the first time any item contributed by a user receives a rating, the Regard of the user is set equal to that rating.
6.3 Matrix Notation
The interdependency of each user's Regard can be expressed in matrix notation. The componentwise relation $h_i \sum_j h_j t_{ij} = \sum_j h_j r_{ij}$ holds for each user. Setting:

$$D = \operatorname{diag}(T\,h)$$

then

$$D\,h = R\,h$$

$$h = D^{-1} R\,h$$
7. Basic Quality Method
The Basic Quality method determines $q_i^n$, the "Quality" of an item, according to the ratings assigned to the item and the Expertise or Regard of the users providing the ratings.
7.1 Initial Embodiment
In the initial embodiment of the method, Quality is the arithmetic mean of the ratings that other users assign to item $n$ contributed by user $i$, weighting each rating by the Expertise or Regard (the method being selected by the system operator) of the user providing the rating:

$$q_i^n = \frac{\sum_{j=1}^{p} h_j\, g_{ij}^n}{\sum_{j=1}^{p} h_j\, k_{ij}^n}$$
In the initial embodiment, Quality is calculated on demand (at the time it is needed for any purpose), preferably subject to a minimum time period following the most recent calculation, for example, 10 minutes, during which time a cached copy of the most recently calculated Quality value for the item is provided.
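The on-demand calculation with a cached copy can be sketched as a simple time-to-live cache. The `compute_fn` callback, the TTL constant and the use of `time.monotonic` are illustrative assumptions:

```python
# Sketch of on-demand Quality with a time-to-live cache: recompute at most
# once per interval (10 minutes here) and serve the cached value otherwise.
# The compute function is a placeholder standing in for the Quality formula.

import time

class QualityCache:
    def __init__(self, compute_fn, ttl_seconds=600):
        self.compute_fn = compute_fn   # recomputes Quality for an item id
        self.ttl = ttl_seconds
        self._cache = {}               # item id -> (timestamp, quality)

    def quality(self, item_id):
        now = time.monotonic()
        hit = self._cache.get(item_id)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]              # serve cached value within the TTL
        value = self.compute_fn(item_id)
        self._cache[item_id] = (now, value)
        return value

cache = QualityCache(lambda item_id: 0.42, ttl_seconds=600)
print(cache.quality("item-7"))  # 0.42
```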
7.2 Default Quality
An item which has not received any ratings is granted the arithmetic average Quality of items that have received ratings:

$$q_d = \frac{\sum_{(i,n) \in Q^*} q_i^n}{|Q^*|}$$

where $Q^*$ is the set of items that have received ratings.
Note that, in the initial embodiment, the first time an item receives a rating by any user, the Quality of the item is set equal to that rating.
7.3 Matrix Notation
The calculation of Quality can be expressed in matrix notation:

$$q_i^n = \frac{(h^\top G_i)_n}{(h^\top K_i)_n}$$

where the division is taken componentwise across the columns $n = 1, \ldots, c_i$.
8. Analysis and Illustration
8.1 Basic Expert Method
An assumption of the method is that an item rating is an effective measurement of the capacity of the user who contributed the item to identify valuable items contributed by other users. From the point of view of the user assigning item ratings, the user is helping to identify experts whose assessment of the value of items will receive higher weighting. From the contributor's point of view, high item ratings mean higher Expertise, implying more influence over Quality levels. In this sense, the Basic Expert method is an expert system.
Expertise under the Initial Embodiment

In a universe of four users $i$, $j$, $x$ and $z$, with user $i$ having contributed at least one item and users $j$, $x$ and $z$ having each rated at least one user $i$ item:

$$h_i = \frac{r_{ij} + r_{ix} + r_{iz}}{t_{ij} + t_{ix} + t_{iz}}$$

In the calculation of Expertise, item ratings are weighted equally, regardless of which users assigned the ratings. Partial differentiation demonstrates the impact on user $i$'s Expertise of an additional item rating by user $j$:

$$\frac{\partial h_i}{\partial t_{ij}} = \frac{g_{ij}^n - h_i}{t_{ij} + t_{ix} + t_{iz}}$$

(the numerator is the rating/Expertise differential; the denominator is the total number of previous item ratings).
User i's Expertise increases whenever another user rates an item contributed by user i above user i's previous level of Expertise. The increase is equal to the difference between the rating and the previous level of Expertise, divided by the total number of ratings previously assigned to user i items.
Quality
If users $x$ and $z$ have already assigned ratings to item $n$ contributed by user $i$:

$$q_i^n = \frac{h_x g_{ix}^n + h_z g_{iz}^n}{h_x + h_z}$$

The impact on item Quality of a rating by user $j$ is:

$$\frac{\partial q_i^n}{\partial t_{ij}} = \frac{h_j\,(g_{ij}^n - q_i^n)}{h_x + h_z}, \qquad \frac{\partial q_i^n}{\partial t_{ij}} > 0 \;\text{ if }\; g_{ij}^n > q_i^n$$

(the numerator is the rating/Quality differential weighted by the rater's Expertise; the denominator is the total Expertise of previous raters). The Quality of item $n$ contributed by user $i$ increases whenever another user rates the item above its previous level of Quality. The increase is equal to the difference between the rating and the item's previous level of Quality, weighted by the Expertise of the user providing the rating, divided by the total Expertise of users who previously rated the item.
8.2 Basic High Regard Method
An assumption of the method is that an item rating is a useful measurement of the capacity of the user who contributed the item to identify talented item contributors.
From the point of view of the user assigning item ratings, the user is helping to identify experts whose assessment of the value of other users' items will further refine the selection of experts. From the contributor's point of view, high item ratings mean higher Regard, implying more influence over item Quality and more influence over the Regard of other users. In this sense, the Basic High Regard method is a robust expert system.
Regard under the Initial Embodiment
Again in a universe of four users, with user $i$ having contributed at least one item and users $j$, $x$ and $z$ having rated at least one item contributed by user $i$:

$$h_i = \frac{h_j r_{ij} + h_x r_{ix} + h_z r_{iz}}{h_j t_{ij} + h_x t_{ix} + h_z t_{iz}}$$

In the calculation of Regard, the sum of all the ratings that each user has assigned to items contributed by user $i$ is weighted by the rating user's Regard.

The impact on user $i$'s Regard of an additional item rating by user $j$ is:

$$\frac{\partial h_i}{\partial t_{ij}} = \frac{h_j\,(g_{ij}^n - h_i) + \text{cross-rating dependencies}}{h_j t_{ij} + h_x t_{ix} + h_z t_{iz}}$$

(the numerator is the rating/Regard differential weighted by the rater's Regard, adjusted for cross-rating dependencies; the denominator is the total weighted number of previous item ratings). User $i$'s Regard increases whenever another user rates an item contributed by user $i$ above user $i$'s previous level of Regard.
The increase is equal to the difference between the rating and user i's previous level of Regard, weighted and adjusted, divided by the total weighted number of ratings previously assigned to user i items.
The weighting in the numerator is the rating user's Regard. The adjustment for cross-rating dependencies accounts for the effect of the additional rating on the Regard of users other than $i$, and hence, how the rating affects the weight assigned to their previously assigned ratings of user $i$ items. Cross-rating dependencies can magnify or diminish $\partial h_i / \partial t_{ij}$, but cannot change its sign.
Quality
If users $x$ and $z$ have already assigned ratings to item $n$ contributed by user $i$:

$$q_i^n = \frac{h_x g_{ix}^n + h_z g_{iz}^n}{h_x + h_z}$$

The impact on item Quality of a rating by user $j$ is:

$$\frac{\partial q_i^n}{\partial t_{ij}} = \frac{h_j\,(g_{ij}^n - q_i^n) + \text{cross-rating dependencies}}{h_x + h_z}, \qquad \frac{\partial q_i^n}{\partial t_{ij}} > 0 \;\text{ if }\; g_{ij}^n > q_i^n$$

(the numerator is the rating/Quality differential weighted by the rater's Regard, adjusted for cross-rating dependencies; the denominator is the total Regard of previous raters).
The impact of a rating on item Quality is similar in form to the impact of a rating on the contributor's Regard. The Quality of item $n$ contributed by user $i$ increases whenever another user rates item $n$ above the item's previous level of Quality.
The increase is equal to the difference between the rating and the item's previous level of Quality, weighted and adjusted, divided by the total Regard of the users who previously rated the item.
The weighting in the numerator is the rating user's Regard. The adjustment for cross-rating dependencies accounts for the effect of the additional rating on the Regard of users other than $i$, and hence, how the rating affects the weight given their previously assigned ratings of item $n$. Cross-rating dependencies can magnify or diminish $\partial q_i^n / \partial t_{ij}$, but cannot change its sign.
8.3 Rating Dependencies
Basic Expert Method
An item rating can affect the Expertise of the user who contributed the item, but not the Expertise of any other user. However, a rating of an item contributed by one user can affect the Quality of an item contributed by another user. A simple example of this is discussed above in connection with Fig. 6.
(Table content not reproduced in this text.)
Table 1: Basic Expertise: Rating Dependencies
In the example case, a rating of item $n$ contributed by user $i$ can affect the Quality of item $m$ contributed by user $x$ (already assigned ratings by all users):

$$q_x^m = \frac{h_i g_{xi}^m + h_j g_{xj}^m + h_z g_{xz}^m}{h_i + h_j + h_z}$$

where

$$h_i = \frac{r_{ij} + r_{ix} + r_{iz}}{t_{ij} + t_{ix} + t_{iz}}$$

so that

$$\frac{\partial q_x^m}{\partial t_{ij}} > 0 \quad \text{if } g_{ij}^n > h_i \text{ and } g_{xi}^m > q_x^m$$

When user $j$ rates item $n$ above user $i$'s previous level of Expertise, user $i$'s Expertise rises. If user $i$ has already rated item $m$ above its previous level of Quality, then item $m$'s Quality rises.
Thus, an item rating potentially affects the Quality of the item, the Expertise of the contributor and the Quality of every item ever rated by the contributor. A rating has secondary effects that extend as far as one other user and that user's rating activities. The method recognizes an item rating as, in part, an indication of the ability of the contributor of the item to identify valuable items contributed by other users.
Basic High Regard Method
An important distinction of the Basic High Regard method is that one user's rating of an item contributed by a second user can affect the Regard of a third user.

j → i (n); i → x (m)

user j rates item n:
- above item n's Quality → Quality rises
- above user i's Regard → Regard rises

user i's previous rating of item m:
- above item m's Quality → Quality rises
- above user x's Regard → Regard rises

Table 2: Basic High Regard: Rating Dependencies, Sparsely Rated Items
Thus, the method recognizes an item rating as, in part, an indication of the ability of the contributor of the item to identify other talented item contributors. The rating of an item by a contributing user affects the Regard of other users in the system.
j → i (n); i → z (o); z → x (m)

user j rates item n:
- above item n's Quality → Quality rises
- above user i's Regard → Regard rises

If all the users have rated each other's listed items, the Regard of each user and the Quality of each item may change.

As the Regard of users changes, their previous ratings of user j and item n may receive greater or lesser weight, circling back to magnify or diminish, but not entirely offset, the impact of user j's rating.

Table 3: Basic High Regard: Rating Dependencies, Fully Populated
An item rating has ripple effects that can propagate through the entire network, potentially affecting the Regard of the user who contributed the item, the Regard of the user who assigns the rating, the Regard of other users, the Quality of the item and the Quality of other items.
A useful example is a situation in which cross-rating dependencies are limited to two parties. For example, if users $j$ and $z$ have previously assigned item ratings that are equal to the Regard values of the contributors, cross-rating dependencies are limited to users $i$ and $x$.
In this case, the formulas for the impact of an additional rating by user $j$ on user $i$'s Regard reduce to:

$$\frac{\partial h_i}{\partial t_{ij}} = \frac{h_j\,(g_{ij}^n - h_i) + \overbrace{\dfrac{\partial h_x}{\partial t_{ij}}\,(r_{ix} - h_i t_{ix})}^{\text{cross-rating dependency}}}{h_j t_{ij} + h_x t_{ix} + h_z t_{iz}}$$
Thus, if the rating is above user $i$'s previous level of Regard, then

$$g_{ij}^n - h_i > 0$$

and, because cross-rating dependencies can diminish, but not reverse, the impact of a rating,

$$\frac{\partial h_i}{\partial t_{ij}} > 0$$
If user $i$ has rated items contributed by user $x$ on average higher than user $x$'s Regard, then

$$r_{xi} - h_x t_{xi} > 0$$

and

$$\frac{\partial h_x}{\partial t_{ij}} > 0$$
If user $x$ has rated items contributed by user $i$ on average higher than user $i$'s previous level of Regard, then

$$r_{ix} - h_i t_{ix} > 0$$

and

$$\frac{\partial h_i}{\partial t_{ij}} > \frac{h_j\,(g_{ij}^n - h_i)}{h_j t_{ij} + h_x t_{ix} + h_z t_{iz}}$$

Hence, the impact of the rating is magnified; i.e., it exceeds the rating/Regard differential weighted by user $j$'s Regard.
The alternatives can be presented in tabular form, showing whether $\partial h_i/\partial t_{ij}$ is magnified or diminished by the cross-rating dependency:

                                          Average user i ratings of user x items
                                          above Regard              below Regard
                                          (r_xi - h_x t_xi > 0)     (r_xi - h_x t_xi < 0)
  User x ratings of user i items
  above Regard (r_ix - h_i t_ix > 0)      Magnified                 Diminished
  below Regard (r_ix - h_i t_ix < 0)      Diminished                Magnified

Table 4: Impact of Cross-Rating Dependencies on Regard (Outsider Rating)
This result also applies to larger numbers of users with a populated data set. When users owe their Regard, in part, to cross-rating each other, or have mutually reduced each other's Regard, their Regard is subject to greater volatility under some circumstances. A rating by a user without a cross-rating dependency, such as user $j$ in the example, will have greater impact than suggested by the user's Regard alone. A similar result holds when one user rates an item contributed by another user with whom the user shares a cross-ratings dependency.
(Table content not fully reproduced: the table tabulates whether the Regard impact is magnified or diminished by the cross-rating dependency when a cross-rating party rates, comparing the current rating of item m with the average of user i's ratings of user x items.)

Table 5: Impact of Cross-Rating Dependencies on Regard (Cross-Rating Party Rating)
This aspect of the High Regard method facilitates construction of user "communities". Users who associate and build a web of positive cross-ratings can help each other increase their Regard quickly — the effect of their positive ratings on each other's Regard is magnified. This aspect of the described method also heightens the risk of mutual negative ratings, in the course of a form of destructive behavior often observed on wide area networks, known popularly as "flaming" — intentional provocation or repetitive attacks, typically in the form of an exchange of written messages. Two or more users engaging in mutual attacks lose Regard quickly — the effect of their negative ratings on each other's Regard is magnified.
Ratings assigned by "outsiders", users outside the web of cross-ratings, can have similar effects. An outsider who boosts the items of members of a community can prompt a magnified increase in their Regard. An outsider who rates negatively the items of users engaging in flaming can cause a magnified drop-off in their Regard. However, these magnifying effects do not circle back to affect the outsider's Regard, because there are no cross-rating dependencies as to the outsider. Users who associate in communities or engage in flaming, who subsequently reverse course and rate each other in the opposite direction, have a diminished impact on Regard. In fact, any time that two users or groups of users rate each other's items in opposing directions, it creates conflicting cross-rating dependencies. In effect, a user is either weakening one of the sources of the user's own Regard, or strengthening a force that is acting against the user's own Regard.
(Table content not reproduced.)
Table 6: Chain of Dependencies

Similar overall effects can arise when, instead of direct cross-ratings, there is a chain of dependencies. Whether the original rating is magnified or diminished is a question of the product of the signs of these links in the chain. For example, if user $i$ has rated items contributed by user $x$ above user $x$'s Regard (+), user $x$ has rated items contributed by user $z$ below user $z$'s Regard (-) and user $z$ has rated items contributed by user $i$ below user $i$'s Regard (-), then the product of the signs is

$$(+) \times (-) \times (-) = (+)$$

In this case, user $j$'s rating of user $i$ is magnified.
Cross-Rating Dependencies — Quality
Deriving Quality in the same two-user cross-ratings scenario triggers the same cross-ratings effects.
The formulas for the impact of a rating by user $j$ of item $n$ contributed by user $i$ reduce to:

$$\frac{\partial q_i^n}{\partial t_{ij}} = \frac{h_j\,(g_{ij}^n - q_i^n) + \overbrace{\dfrac{\partial h_x}{\partial t_{ij}}\,(g_{ix}^n - q_i^n)}^{\text{cross-rating dependency}}}{h_x + h_z}$$
Thus, if the rating is above item $n$'s previous level of Quality, then

$$g_{ij}^n - q_i^n > 0$$

and, because cross-rating dependencies can diminish, but cannot reverse, the impact of a rating,

$$\frac{\partial q_i^n}{\partial t_{ij}} > 0$$

If user $i$ has rated items contributed by user $x$ on average higher than user $x$'s Regard, then

$$r_{xi} - h_x t_{xi} > 0$$

and

$$\frac{\partial h_x}{\partial t_{ij}} > 0$$

If user $x$ has rated item $n$ contributed by user $i$ above its previous level of Quality, then

$$g_{ix}^n - q_i^n > 0$$

and

$$\frac{\partial q_i^n}{\partial t_{ij}} > \frac{h_j\,(g_{ij}^n - q_i^n)}{h_x + h_z}$$
Hence, the impact of the rating is magnified; i.e., it exceeds the rating/Quality differential weighted by user $j$'s Regard.
The alternatives can be presented in tabular form:
-r^- Magnified or Diminished
3 FH ι n)
IT /Cross-Rating Dependency
X
Average user i rating of user x items above Regard below Regard
User x rating of user i's item n <r» - hbt > 0) (r» - hbtx, < 0) above Quality (g?x - gt" > 0) Magnified Diminished below Quality (g?x - g," < 0) Diminished Magnified
Table 7: Impact of Cross-Rating Dependencies on Quality
9. Extended Methods
9.1 Expert Methods
A general formulation for alternative embodiments of the Expert method (the "Extended Expert methods") is:

$$h_i^e = \begin{cases} \psi\!\left(\dfrac{\sum_{j=1}^{p}\sum_{n=1}^{c_i} \phi(g_{ij}^n)}{\sum_{j=1}^{p} t_{ij}}\right) & \text{if } \sum_{j} t_{ij} > 0 \\[2mm] h_d & \text{otherwise} \end{cases}$$
9.2 High Regard Methods
A general formulation for alternative embodiments of the High Regard method (the "Extended High Regard methods") is:

$$h_i = \begin{cases} \psi\!\left(\dfrac{\sum_{j=1}^{p} h_j \sum_{n=1}^{c_i} \phi(g_{ij}^n)}{\sum_{j=1}^{p} h_j\, t_{ij}}\right) & \text{if } \sum_{j} t_{ij} > 0 \\[2mm] h_d & \text{otherwise} \end{cases}$$
9.3 Quality Methods
A general formulation for alternative embodiments of the Quality method (the "Extended Quality methods") is:

$$q_i^n = \begin{cases} \psi\!\left(\dfrac{\sum_{j=1}^{p} h_j\, \phi(g_{ij}^n)}{\sum_{j=1}^{p} h_j\, k_{ij}^n}\right) & \text{if } \sum_{j} k_{ij}^n > 0 \\[2mm] q_d & \text{otherwise} \end{cases}$$
9.4 Transformation Function
$\phi(x)$ is a transformation function and $\psi(x)$ is a function that reverses the effect of $\phi(x)$:

$$\psi(\phi(x)) = x$$

Note that if

$$\phi(x) = x \quad \text{and} \quad \psi(x) = x$$

then the Extended methods calculate arithmetic means (weighted, in the case of Regard and Quality). In this case, the Extended Expert method is equivalent to the Basic Expert method, the Extended High Regard method is equivalent to the Basic High Regard method and the Extended Quality method is equivalent to the Basic Quality method.
If

$$\phi(x) = \log(x) \quad \text{and} \quad \psi(x) = \exp(x)$$

then the Extended methods calculate geometric means.
If

$$\phi(x) = \frac{1}{x} \quad \text{and} \quad \psi(x) = \frac{1}{x}$$

then the Extended methods calculate harmonic means.
If

$$\phi(x) = x^n \quad \text{and} \quad \psi(x) = x^{1/n}$$

where $n$ is a parameter set by the system operator, then the Extended methods calculate Hölder means. In alternative embodiments, $n$ may be set to any real number. In one embodiment, $n = 3$. In another embodiment, $n = 1/3$.
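The transformation-function mechanism can be sketched as a weighted quasi-arithmetic mean. The function and variable names are assumptions for illustration, but the four $\phi/\psi$ pairs are the ones named above:

```python
# Sketch of the transformation-function mechanism: phi maps each rating,
# the weighted arithmetic mean is taken in transformed space, and psi
# reverses phi. The pairs below reproduce the means named in the text.

import math

def extended_mean(values, weights, phi, psi):
    """Return psi( sum(w * phi(v)) / sum(w) ), a weighted quasi-arithmetic mean."""
    num = sum(w * phi(v) for v, w in zip(values, weights))
    den = sum(weights)
    return psi(num / den)

arithmetic = dict(phi=lambda x: x,        psi=lambda x: x)
geometric  = dict(phi=math.log,           psi=math.exp)
harmonic   = dict(phi=lambda x: 1.0 / x,  psi=lambda x: 1.0 / x)
holder3    = dict(phi=lambda x: x ** 3,   psi=lambda x: x ** (1.0 / 3))  # n = 3

print(extended_mean([0.5, 0.8], [1.0, 1.0], **arithmetic))  # 0.65
```

With identity $\phi$ and $\psi$ this reduces to the weighted arithmetic mean used by the Basic methods, as the text states.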
9.5 Default Value
In alternative embodiments, the Expertise or Regard granted a user who has not contributed any items may be determined in accordance with:

$$h_d = \frac{\sum_{i \in P^*} h_i}{|P^*|}$$

in which $h_i$ represents the Expertise or Regard, as the case may be, of user $i$; or, more generally, an amount otherwise related to the ratings assigned to items contributed by other users:

$$h_d = u(T, R)$$

or simply set to zero:

$$h_d = 0$$
The Quality granted an item to which no user has assigned a rating may be determined in accordance with:

$$q_d = \frac{\sum_{(i,n) \in Q^*} q_i^n}{|Q^*|}$$

where $Q^*$ is the set of items that have received ratings; or, more generally, an amount otherwise related to the ratings assigned to other items:

$$q_d = v(K, G)$$

or simply set to zero:

$$q_d = 0$$
9.6 Periodicity
In alternative embodiments, the Expertise or Regard of each user, as the case may be, is calculated periodically, after the passage of a specific period of time, after the collection of a specific number of new item ratings, or as often as possible given the available computational resources. In alternative embodiments, the Quality of each item is calculated every time the item's Quality is requested or required, subject to a minimum time period following the most recent calculation, during which time a cached copy of the Quality value most recently calculated is provided; or after the passage of a specific period of time, after the collection of a specific number of new item ratings, or as often as possible given the available computational resources.
9.7 Matrix Notation
The interdependency of each user's Regard in the Extended High Regard method can be expressed in matrix notation.

Let

$$r^{\phi}_{ij} = \sum_{n=1}^{c_i} \phi(g_{ij}^n)$$

and

$$R^{\phi} = \begin{pmatrix} r^{\phi}_{11} & \cdots & r^{\phi}_{1p} \\ \vdots & & \vdots \\ r^{\phi}_{p1} & \cdots & r^{\phi}_{pp} \end{pmatrix}_{(p \times p)}$$

Setting:

$$D = \operatorname{diag}(T\,h)$$

then

$$h = \psi\!\left(D^{-1} R^{\phi}\, h\right)$$

with $\psi$ applied componentwise. The calculation of Quality can also be expressed in matrix notation:

$$q_i^n = \psi\!\left(\frac{(h^\top \phi(G_i))_n}{(h^\top K_i)_n}\right)$$

where $\phi$ is applied elementwise to $G_i$ and the division is taken componentwise.
10. Computational Method

10.1 Description

The following procedure solves for Expertise $h^e$ in one step and for Regard $h$ by an iterative method of successive approximation.
10.2 First Step

Let

$$h^e_i = \begin{cases} \psi\!\left(\dfrac{\sum_{j=1}^{p} r^{\phi}_{ij}}{\sum_{j=1}^{p} t_{ij}}\right) & \text{if } \sum_j t_{ij} > 0 \\[2mm] h_d & \text{otherwise} \end{cases}$$

where $\phi(x)$ is a selected transformation function:

$$r^{\phi}_{ij} = \sum_{n=1}^{c_i} \phi(g_{ij}^n)$$

and $h_d$ is determined according to the selected procedure for calculating a default value. In one embodiment, in this first step:

$$h_d = \frac{\sum_{i \in P^*} h^e_i}{|P^*|}$$
In each case, using a function $\psi(x)$ for which:

$$\psi(\phi(x)) = x$$

Stopping here results in $h^e$, Expertise under the Extended Expert method; and if:

$$\phi(x) = x$$

then $h^e_i$ is equivalent to $h_i$, Expertise under the Basic Expert method.

10.3 Second Step
Calculation of Regard requires additional steps.
A first approximation of each $h$ value is calculated as follows:

$$h^{(1)}_i = \begin{cases} \psi\!\left(\dfrac{\sum_{j=1}^{p} h^e_j\, r^{\phi}_{ij}}{\sum_{j=1}^{p} h^e_j\, t_{ij}}\right) & \text{if } \sum_j t_{ij} > 0 \\[2mm] h^{(1)}_d & \text{otherwise} \end{cases}$$

where $h^e_j$ is the Expertise of each user calculated above and $h^{(1)}_d$ is determined according to a selected procedure for calculating a default value. In one embodiment, in this second step, the value is based on the $h^e$ values determined in the first step, using a function $\psi(x)$ for which $\psi(\phi(x)) = x$.

10.4 Subsequent Steps
The following procedure is repeated successively with $w = 2, 3, 4, \ldots$, converging toward a solution:

$$h^{(w)}_i = \begin{cases} \psi\!\left(\dfrac{\sum_{j=1}^{p} h^{(w-1)}_j\, r^{\phi}_{ij}}{\sum_{j=1}^{p} h^{(w-1)}_j\, t_{ij}}\right) & \text{if } \sum_j t_{ij} > 0 \\[2mm] h^{(w)}_d & \text{otherwise} \end{cases}$$

where $h^{(w)}_d$ is determined according to a selected procedure for calculating a default value. In one embodiment, in each subsequent step, the value is based on the $h^{(w-1)}$ values determined in the previous step, using a function $\psi(x)$ for which:

$$\psi(\phi(x)) = x$$

and

$$\lim_{w \to \infty} h^{(w)}_i = h_i$$

The procedure converges toward $h$, Regard under the Extended High Regard method, and if

$$\phi(x) = x$$

then $h$ is equivalent to $h_i$, Regard under the Basic High Regard method.
Convergence

This procedure can be applied successively a specified number of times $m$. In one embodiment, $m = 5$. In other embodiments, the procedure can be applied successively until the approximations of Regard at the beginning and at the end of a step are nearly equal. In one embodiment, the procedure is considered complete when:

$$\max_i \left| h^{(w)}_i - h^{(w-1)}_i \right| < .005$$
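The successive-approximation procedure, specialized to $\phi(x) = x$ (the Basic High Regard method), can be sketched as follows. The starting point, default handling and plain-list data layout are simplified assumptions:

```python
# Sketch of the iterative Regard computation with phi(x) = x: start from the
# unweighted means (Expertise), then repeatedly recompute each user's Regard
# as the mean of received ratings weighted by the raters' current Regard,
# stopping when successive approximations differ by less than a tolerance.

def solve_regard(R, T, max_iters=50, tol=0.005):
    """R[i][j]: sum of ratings j gave to i's items; T[i][j]: their count."""
    p = len(R)
    rated = [i for i in range(p) if sum(T[i]) > 0]
    # First step: unweighted means (Expertise) as the starting approximation.
    h = [sum(R[i]) / sum(T[i]) if sum(T[i]) > 0 else 0.0 for i in range(p)]
    default = sum(h[i] for i in rated) / len(rated) if rated else 0.0
    h = [h[i] if sum(T[i]) > 0 else default for i in range(p)]
    # Subsequent steps: weight by the previous approximation of Regard.
    for _ in range(max_iters):
        new_h = []
        for i in range(p):
            den = sum(h[j] * T[i][j] for j in range(p))
            if den > 0:
                new_h.append(sum(h[j] * R[i][j] for j in range(p)) / den)
            else:
                new_h.append(h[i])  # unrated users keep their default value
        if max(abs(a - b) for a, b in zip(new_h, h)) < tol:
            return new_h
        h = new_h
    return h

R = [[0.0, 0.9, 0.6], [0.8, 0.0, 0.7], [0.3, 0.2, 0.0]]
T = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
h = solve_regard(R, T)
# each h[i] lies between the smallest and largest rating user i received
```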
11. Inertia
11.1 Expertise/Regard Inertia
Under the Expert and High Regard methods, the first time that any of the items contributed by a user receives a rating, the user's Expertise or Regard, as the case may be, is set equal to that rating. This could have undesirable consequences. The items contributed by a user may be rated initially by a poorly Regarded user and, for a period of time thereafter, by a small or poorly Regarded subset of users. In order to control the amount by which item ratings influence the Expertise or Regard of relatively new or infrequent users, we create a notional user $i = 1$, with

$$r_{i1} = a(t_i, t^i, r_i, r^i, h^e)$$

and

$$t_{i1} = b(t_i, t^i, r_i, r^i, h^e)$$
where $h^e$ denotes the vector of Extended Expertise or Regard values, as the case may be, for all users.
Thus, the notional user assigns $r_{i1}$ and $t_{i1}$ for each user, taking into account the number and level of ratings assigned to user $i$'s items by other users, the number and level of ratings user $i$ has assigned to items contributed by other users and the Expertise or Regard of all users.
In certain embodiments, $r_{i1}$ is set equal to $h_d\, t_{i1}$ and $t_{i1}$ is determined according to an equation in the form of:

$$a - (bx + cy - \sqrt{dxy})\,e$$

where $a$, $b$, $c$, $d$ and $e$ are constants selected by the system operator, $x$ represents the aggregate number of the user's items that have been rated by other users and $y$ the number of other users' items that have been rated by the user. Using a relatively high value for $a$ and small (positive) values for $b$, $c$, $d$ and $e$, this formulation has the characteristic of reducing the fixed constant $a$ more slowly when $x$ and $y$ grow in tandem and more quickly when $x$ and $y$ grow divergently.

This property causes the system to withdraw inertia as a user interacts with the system, but at different rates: more slowly from a user who rates items and whose items are rated by other users close to a targeted ratio and more quickly from a user who primarily does one but not the other.
Example

$$r_{i1} = h_d\, t_{i1}$$

$$t_{i1} = \max\!\left\{0,\; 500 - \left(\sum_{j=2}^{p} t_{ij} + \sum_{j=2}^{p} t_{ji} - \sqrt{\sum_{j=2}^{p} t_{ij} \sum_{j=2}^{p} t_{ji}}\right)\right\}$$
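The inertia-withdrawal formula can be sketched numerically. Reading $a = 500$ and $b = c = d = e = 1$ off the example above is an assumption where the original is not fully legible:

```python
# Sketch of the inertia-withdrawal formula a - (b*x + c*y - sqrt(d*x*y))*e
# clamped at zero. x is the number of ratings received on the user's items,
# y the number the user has assigned. Inertia (the number of notional
# ratings) drains more slowly when x and y grow in tandem.

import math

def notional_rating_count(x, y, a=500, b=1, c=1, d=1, e=1):
    return max(0.0, a - (b * x + c * y - math.sqrt(d * x * y)) * e)

print(notional_rating_count(0, 0))      # 500.0  (full initial inertia)
print(notional_rating_count(100, 100))  # 400.0  (balanced activity)
print(notional_rating_count(200, 0))    # 300.0  (one-sided activity)
```

With equal total activity, the balanced user (100, 100) retains more inertia than the one-sided user (200, 0), matching the targeted-ratio behavior described above.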
Fig. 8 is a three-dimensional plot of a High Regard Inertial Model having two axes labeled from 0 to 100 representing, along the first axis, the number of ratings other users have assigned to items contributed by the user in question and, along the second axis, the number of ratings the user in question has assigned to items contributed by other users. A third axis labeled from 0 to 500 represents the number of notional ratings (set at a predetermined default level of Regard) assigned to an item contributed by the user in question in order to create the inertia effect. As can be seen, the effect of inertia diminishes as other users assign ratings to items contributed by the user in question and as the user in question assigns ratings to items contributed by other users. The shape of the three-dimensional plot represents the characteristic of the specified embodiment that the inertia effect diminishes more slowly when the user in question rates items contributed by other users and the user in question's contributed items are rated by other users, in tandem. The described embodiment reflects a target ratio. The Regard inertia effect in the described embodiment diminishes rapidly, in contrast, when the user in question rates items contributed by other users often, but few users rate items contributed by the user in question, or vice versa. In the described embodiment, the number of notional ratings is initially 500.
11.2 Item Quality (Quality inertia)
Under the Expert and High Regard methods, the first time that any item receives a rating, the item's Quality is set equal to that rating. This could have undesirable consequences. The item may be rated initially by a poorly Regarded user and for a period of time thereafter, by a small or poorly Regarded subset of users.
In order to control the amount by which early item ratings influence the Quality of items, we create a notional user ι=2, with ga = l(q, d, k?, g?, h) and
Figure imgf000051_0001
where d represents the number of units of time, using a standard unit selected for a preferred embodiment (e.g. minutes, hours or days), measuring the time elapsed since user i contributed item n. In the following example, d is expressed in minutes.
In other words, the notional user assigns $g_{i2}^n$ and $k_{i2}^n$ for each item, taking into account the Quality of other items with a track record of ratings, the time elapsed since the item was contributed and the number and level of ratings assigned to the item by other users. In certain embodiments, $g_{i2}^n$ is set equal to $q_d\, k_{i2}^n$, with the formula determining $k_{i2}^n$ in the form of:

$$a - (bx + cy - \sqrt{dxy})\,e$$
In this case, x represents the aggregate number of ratings assigned to the item and y the time interval d. This formulation causes the system to withdraw inertia as users rate the item and with the passage of time, but at different rates: more slowly for an item that receives ratings at a targeted rate per period of time and more quickly for an item that receives ratings at a higher rate (resulting in adequate data) or at a lower rate (suggesting disinterest).
Example
[Equation images omitted from the text extraction. In this example, the notional weight takes the form k = max{1, 600 − (. . .)}, so that the 600 initial notional ratings are withdrawn as the sums of rating weights and the elapsed time grow.]
Fig. 9(a) is a three-dimensional plot of a Quality Inertia Model having a first axis labeled from 0 to 300 units of time as specified above, a second axis labeled from 0 to 80 representing the number of ratings users have assigned to the item in question and a third axis labeled from 0 to 600 representing the number of notional ratings (initially set to a predefined default level of Quality) assigned to the item in question in order to create the inertia effect. As can be seen, the effect of a rating diminishes with the passage of time and as other users assign ratings to the item in question. The shape of the three-dimensional plot illustrates a characteristic of the described embodiment: the inertia effect diminishes more slowly when users rate the item in question steadily in small numbers. The described embodiment reflects a target number of ratings per unit of time. The Quality inertia effect diminishes rapidly, by contrast, when considerable time passes with very few ratings contributed, or if a large number of users rate the item during a short time interval.
11.3 Clarification
Notional user values created for calculations of Quality will, in some embodiments, be excluded from the calculation of High Regard values.
12. Decay
12.1 Description
In order to keep Regard and Expertise values current, or to discount older or unusual historical results, some embodiments will include "Decay", procedures for adjusting the record of ratings with the passage of time.
12.2 Simple Time Decay
In one embodiment, a standard time period, e.g. days, weeks or months, determines the rate at which ratings are removed from the calculation.
Let g be the number of such standard time periods from which ratings will be included in the calculation of Expertise or Regard, as the case may be, after which they are removed.
Let Tm and Rm be the (P x P) matrices of t and r values reflecting only the ratings provided during a single period m (regardless of the date the item was contributed). T'm and R'm, defined below, substitute for T and R to calculate Expertise or Regard, whichever is used in the embodiment.
For m = 1, let
T'm = Tm and R'm = Rm
For 2 ≤ m ≤ g,
T'm = T'm-1 + Tm and R'm = R'm-1 + Rm
For m > g,
T'm = T'm-1 + Tm − Tm-g and R'm = R'm-1 + Rm − Rm-g
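The recurrence above amounts to a sliding-window sum over the most recent g periods. A minimal sketch, with scalars standing in for the (P x P) matrices (the names here are illustrative):

```python
def windowed_totals(period_totals, g):
    """Sliding-window accumulation for Simple Time Decay: at each period m,
    the cumulative total T'_m includes only the most recent g periods.
    period_totals holds per-period sums (scalars for clarity; the patent
    accumulates (P x P) matrices element-wise in the same way)."""
    out = []
    running = 0.0
    for m, current in enumerate(period_totals):
        running += current
        if m >= g:
            # A period has aged beyond the window: remove its contribution.
            running -= period_totals[m - g]
        out.append(running)
    return out
```

With g = 2, for example, the running total at each period reflects only that period and the one before it, so older ratings drop out of the Expertise or Regard calculation automatically.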
12.3 Segmented Decay
Other embodiments of Decay treat subgroups of users differently, segmented according to one or more user characteristics. In some embodiments, it is helpful to divide the users into a number of subsets and to use the method of "segmented decay" to alter the number and value of ratings among the record of ratings for items contributed by particular groups of users, applying systematic but different procedures to each subset. Users can be divided into subsets according to their Regard (or Expertise) values, or some other appropriate attribute. For example, one can make the ratings given to items contributed by the most poorly Regarded users "sticky," that is, largely unaffected by decay procedures, or even strengthen the worst ratings under some circumstances. In this way, users with the lowest level of Regard who are prone to disruptive behavior are largely and permanently excluded from user interactions. Alternatively, one can make the worst ratings given to items contributed by both well Regarded and poorly Regarded users decay faster over time than for users who have more average Regard values. In this way, well Regarded users have an incentive to make continuous contributions to stay highly ranked and poorly Regarded users will receive encouragement and assistance to improve their Regard. Additionally, one can use Decay to encourage users to contribute ratings of items contributed by other users, for example, by causing the better ratings of items contributed by well Regarded users to persist longer, if such users actively rate items contributed by other users. The following paragraphs describe an embodiment of segmented decay, as expressed mathematically.
Let θ(i, U) be a function that maps users into subsets U1, . . . , UZ. In some embodiments, this function will segment users according to their previously computed Expertise or Regard.
A standard time period, e.g. days, weeks or months, determines the rate at which ratings are affected by a transformation function using the procedure set forth below. Let Tm(θx) and Rm(θx) be (P x P) matrices including only ratings provided during period m (by any user in any subset) for items contributed (at any time) by users in subset θx.
To compute either Expertise or Regard with segmented Decay, substitute:
T'm = Σ(x=1..Z) T'm(θx) and R'm = Σ(x=1..Z) R'm(θx)
using T'm(θx) and R'm(θx) defined below. A transformation function δ is selected, such that:
T'm(θx) = Σ(l=1..m) δT(l, m, x, θx, Tl) and R'm(θx) = Σ(l=1..m) δR(l, m, x, θx, Rl, Tl)
Example
In one embodiment, θ separates users according to four levels of Expertise or Regard:
User i ∈ U1 if 0.00 ≤ hi ≤ 0.25
User i ∈ U2 if 0.25 < hi ≤ 0.50
User i ∈ U3 if 0.50 < hi ≤ 0.75
User i ∈ U4 if 0.75 < hi ≤ 1.00
The procedure adjusts the record of ratings as follows. When a user provides a rating of an item contributed by another user, the rating remains in the record unaffected during the following 10 standard time periods. All such ratings are removed from the record after 25 standard time periods. Between the 10th and the 25th period, T and R are adjusted (according to different formulas for each subgroup) to lessen the impact of each rating as the periods progress and to adjust older ratings from lower values to higher values in the case of poorly Regarded users and from higher values to lower values in the case of well Regarded users. The procedure continues to base Expertise or Regard, as the case may be, on item ratings assigned over an extended period of time, but lightens the weight of bad historical ratings on poorly Regarded users and reduces the benefit of good historical ratings for highly Regarded users. These effects help users with poor ratings to recover Expertise or Regard and encourage users with high ratings to make additional contributions.
For all subsets θx ∈ {θ1, θ2, θ3, θ4}, when m = 1, let
T'm(θx) = Tm(θx) and R'm(θx) = Rm(θx)
For 2 ≤ m ≤ 10,
T'm(θx) = T1(θx) + . . . + Tm(θx) and R'm(θx) = R1(θx) + . . . + Rm(θx)
For 11 ≤ m ≤ 25,
T'm(θx) = Tm(θx) + . . . + Tm-10(θx) + ρ11 Tm-11(θx) + . . . + ρm T1(θx) if m ≥ 12 and
R'm(θx) = Rm(θx) + . . . + Rm-10(θx) + ρ11 Rm-11(θx) + . . . + ρm R1(θx) if m ≥ 12
For m ≥ 26,
T'm(θx) = Tm(θx) + . . . + Tm-10(θx) + ρ11 Tm-11(θx) + . . . + ρ25 Tm-25(θx) and
R'm(θx) = Rm(θx) + . . . + Rm-10(θx) + ρ11 Rm-11(θx) + . . . + ρ25 Rm-25(θx)
The following formula reduces proportionately the impact of ratings over the Decay period: for all θx,
ρl = (26 − l) / 15
The following formulas cause ratings of each subgroup to move closer to the mean over the Decay period:
[Equation images omitted from the text extraction. For each subgroup θx, an element of age l (11 ≤ l ≤ 25) is moved toward the mean rating by an adjustment involving a (0.8)^(l−11) factor, with the direction and magnitude of the adjustment differing by subgroup.]
Fig. 9(b) is a graph of Segmented Decay. In accordance with the mathematical model, Fig. 9(b) displays the different adjustments made to ratings assigned to items contributed by users in four different categories of Regard. On a scale from zero to one, the first group θ1 represents users with Regard of greater than 0.75 to 1.0 (inclusive). The second group θ2 represents users with Regard of greater than 0.50 to 0.75. The third group θ3 represents users with Regard of greater than 0.25 to 0.50. The fourth group θ4 represents users with Regard from 0.0 to 0.25. The horizontal axis represents time periods 11-25 following the contribution of any rating. The time periods can represent days, weeks, or months, or some other time intervals. Time periods 0 through 10 are not represented on the graph because, in the described embodiment, segmented decay does not begin to affect any rating during the first ten periods following the contribution of the rating. The described embodiment causes ratings to regress toward the rating of 0.5, represented by the movement of the diamonds, triangles, circles and spades toward the mean with the passage of time.
Expressed mathematically, this is a transformation of elements of the R matrix. The solid straight line represents the incremental reduction in the weight assigned to older ratings. Expressed mathematically, this is a transformation of elements of the T matrix.
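The segmented-decay schedule described above can be sketched as follows. The linear fade implements the ρ formula; the regression toward the mean is a hypothetical rendering of the per-subgroup adjustment (the published formulas, omitted from this extraction, involve a (0.8)^(age−11) factor):

```python
def decay_weight(age):
    """Weight applied to a rating `age` periods old: full weight through
    period 10, a linear fade from period 11 to 25 (the rho formula,
    (26 - age) / 15), and removal afterward."""
    if age <= 10:
        return 1.0
    if age <= 25:
        return (26 - age) / 15.0
    return 0.0

def regress_toward_mean(rating, age, mean=0.5, base=0.8):
    """Hypothetical per-subgroup adjustment: an aged rating keeps only
    base ** (age - 11) of its deviation from the mean rating."""
    if age <= 10:
        return rating
    kept = base ** (age - 11)
    return mean + (rating - mean) * kept
```

In an actual embodiment the parameters (and the direction of the adjustment) would differ per Regard subgroup, as described above; this sketch shows a single subgroup's schedule.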
13. Vouch/Discredit
13.1 Description
In the case of items that have recently entered the system and therefore received few ratings, a robust calculation of Quality will not be available. In this early period, the Expertise or Regard (whichever is used in the embodiment) associated with the user who contributed the item may be the primary indicator employed by other users to predict whether the item has value and is worth review. "Vouching" permits users to associate their Expertise or Regard, as the case may be, with an item contributed by another user, as if they had contributed the item themselves. The procedure is designed to help solve the sparse ratings problem by giving users with relatively high Expertise or Regard an incentive to identify good items contributed by users with relatively low Expertise or Regard and to use their Expertise or Regard to bring the item quickly to the attention of the community of users. The Vouching user's higher Expertise or Regard value is thereafter associated with the item, instead of the Expertise or Regard of the user who contributed the item, in procedures that filter, highlight, sort or otherwise evaluate items based on the Expertise or Regard of the contributor. As the item subsequently receives ratings, the user who Vouched for the item receives credit — or potentially loses Expertise or Regard — to some extent, as does the user who contributed the item.
"Discrediting" does the opposite: it permits users to dedicate their Expertise or Regard, as the case may be, to impugn an item submitted by another user. It is designed to give users an incentive to identify bad items contributed by users with relatively high Expertise or Regard, quickly alerting the community of users. A value equal to one minus the Expertise or Regard of the user who Discredits the item is thereafter associated with the item, instead of the Expertise or Regard of the user who contributed the item. As the item subsequently receives ratings, the user who Discredited the item receives credit — or potentially loses Expertise or Regard—but in an inverse relationship to the ratings received by the item.
13.2 Initial Embodiment
In the initial embodiment, when user j Vouches for the nth item contributed by user i, new columns are added to the Kj and Gj matrices, such that:
kj^(n+1) ≡ ki^n
and
gj^(n+1) ≡ gi^n
where "≡" indicates the establishment of a link with column n of the Kι and Gι matrices. The newly created (and the original) columns reflect future ratings assigned to the item. In some embodiments, Vouching may also link the newly created column to historical ratings.
When user j Discredits the nth item contributed by user i, new columns are added to the Kj and Gj matrices, such that:
kj^(n+1) ≡ ki^n
and
gj^(n+1) ≡ (1 − gi^n)
where "≡" indicates the establishment of a link with column n of the Λ'ι and matrices. The newly created columns reflect the mirror image of future ratings assigned to the item. In some embodiments, Discrediting may also link the newly created column to the mirror image of historical ratings.
In this embodiment, a user is only permitted to Vouch or Discredit if the user's Expertise or Regard (whichever is used in the embodiment) is higher than the Expertise or Regard currently associated with the item; initially, this is the Expertise or Regard of the user who contributed the item. Each user is permitted to Vouch for or Discredit a particular item only once.
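The eligibility rule and the associated-Regard update can be sketched as follows (function names are illustrative): Vouching can only raise the Regard associated with an item, and Discrediting can only lower it.

```python
def vouch(item_regard, user_regard):
    """A user may Vouch only if the user's Regard exceeds the Regard
    currently associated with the item; the item then takes on the
    Vouching user's Regard. Otherwise the act has no effect."""
    return user_regard if user_regard > item_regard else item_regard

def discredit(item_regard, user_regard):
    """Discrediting associates (1 - Regard) of the Discrediting user with
    the item, and is effective only when that lowers the associated value."""
    opposed = 1.0 - user_regard
    return opposed if opposed < item_regard else item_regard
```

For instance, a user with Regard 0.900 who Discredits an item with associated Regard 0.500 drives the association down to 0.100, while a poorly Regarded user cannot move it far in either direction.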
The newly created columns would not be factored into the calculation of Quality.
13.3 General Expression
In some embodiments, a user who Vouches or Discredits might receive only a partial link to ratings for the item, expressed generally in the case of Vouching as:
kj^(n+1) ≡ τ(ki^n) and gj^(n+1) ≡ τ(gi^n)
And expressed generally in the case of Discrediting as:
kj^(n+1) ≡ τ(ki^n) and gj^(n+1) ≡ τ(1 − gi^n)
where
0 < τ < 1
For example, τ = 0.5 would simply divide values in two, so that the user Vouching for or Discrediting an item contributed by another user is less affected by ratings.
13.4 Sequential Vouching
In some embodiments, a user who Vouches or Discredits will be affected by ratings on the basis of how much the act raises or lowers the profile of the item, as the case may be. Let U(i,n) = {a, b, c, . . .} be the set of users who have, sequentially beginning with user a, Vouched for or Discredited item n contributed by user i, where
Ω(hi) < ha, Ω(ha) < hb, Ω(hb) < hc . . .
Ω(hj) = hj
if user j Vouches for the item and
Ω(hj) = 1 − hj
if user j Discredits the item.
Let Um be the mth user in U(i,n).
In the case of Vouching, let τ1 = hU1 − hi and, where m > 1,
τm = hUm − Ω(hUm-1)
In the case of Discrediting, let τ1 = hi − (1 − hU1) and, where m > 1,
τm = Ω(hUm-1) − (1 − hUm)
To include the effect of Vouching in the calculation of Expertise or Regard, let
km+1 ≡ τm(ki^n) and gm+1 ≡ τm(gi^n)
To include the effect of Discrediting in the calculation of Expertise or Regard, let
km+1 ≡ τm(ki^n) and gm+1 ≡ τm(1 − gi^n)
13.5 Initial Discredit
In some embodiments, under certain circumstances, users may be permitted or encouraged to contribute items that are anticipated to yield a negative response. For example, an item might be a link to a web site containing objectionable content, which the user wishes to bring to the attention of the community of users. In some embodiments, a specific forum designed for this purpose may be established, clearly separated from other forums. If such items were to be directly associated with the contributing user, there would be substantial disincentives for participation. Therefore, in some embodiments, the user contributing the item will have the option to Discredit the item upfront, such that
Adjusted rating: [equation image omitted from the text extraction].
The adjusted rating would be included in calculations of Regard and Expertise for the contributing user. In some embodiments, the adjusted value will also be used in the calculation of the Quality of the item.
13.6 Additional Discussion of Vouching and Discrediting
Figs. 20(a) and 20(b) are, respectively, a graph and a table to aid in explaining the concepts of "Vouching" and "Discrediting" and how these concepts provide a solution to the "sparse ratings problem" that arises during the early life of an item. In the graph of Fig. 20(a), the vertical axis represents the Regard of an individual posting an item (e.g., a Regard of 0.400). This is the Regard that is associated with the item, for example, for use in the calculation of thread caliber (discussed below in Section 15).
In certain embodiments of the present invention, there is a period shortly after an article is posted during which any ratings received from users may not be deemed sufficient to determine a robust measure of item Quality. The term "inertia" in Fig. 20(b) refers to Quality inertia, which may obscure the value of the theretofore received ratings, because too few ratings have been received during the period shortly following the contribution of the item. During this period, not enough ratings have been received to be certain that the Basic Quality method or other methods of calculating Quality, or a simple average, or any other measure, is representative of user or expert opinion.
One can, during this period, simply rely on the reputation/Regard of the user who contributed the item as a proxy measurement of the value of the item. A preferred embodiment, however, permits a user with a higher level of Regard to step forward and "Vouch" for the item. If, for example, the Regard of the Vouching user is 0.600, the Regard associated with the item being Vouched also becomes 0.600.
When a user Vouches for an item, the user is making the strongest possible statement in support of the item in question, backing up that statement with the user's reputation. Vouching for an item implies that the user would be willing to take the ratings offered by other people for the item as if the user had actually authored it. A user cannot, for example, Vouch a poorly Regarded item half-way up to the user's Regard level. Vouching, as used in the described embodiment, is all or nothing, based on the Vouching user's Regard value.
Discrediting is a similarly strong statement. When a user Discredits an item, the user is making the strongest possible statement in opposition to the item in question, aligning the full value of the user's reputation against the item. In fact, the user would be willing to take the opposite of whatever ratings are given by other users. This means that a highly Regarded user with, for example, a Regard of 0.900 will, by Discrediting an item, reduce the Regard associated with that item to 0.100 (i.e., reduce the associated Regard by 1.000-0.900). In the described embodiment, poorly Regarded users are able to Discredit items contributed by highly Regarded users. But because items can be Discredited only to the extent of (1 - Regard), a poorly Regarded user who Discredits an item will not reduce the Regard associated with the item by much. Note that, although Quality values are associated with items and Regard values are associated with users, the Regard of the author of a particular item is often associated with the item. As noted above, for example, in the case of articles that have received few ratings, the author's Regard is a helpful benchmark for the item's value. The concepts of Vouching and Discrediting allow a "special" Regard to be used in calculations involving certain items instead of the Regard of the user who contributed that item. As shown in the example in the figure, a user having a Regard of 0.250 associates this Regard with items by Vouching for the items. Similarly, the user gives a Regard of 0.750 to any item which he Discredits. This particular user cannot affect the Regard of an item that already has a Regard higher than 0.250 and similarly cannot affect the Regard of an item that already has associated with it a Regard of less than 0.750.
As items are Vouched for and/or Discredited, the Regard of the contributing user, as affected by these procedures, becomes a robust prediction of the Quality that will ultimately be calculated for the item. Fig. 20(b) shows calculation of Quality values starting at time period 9. (Prior to time period 9, an inertia value preferably is used for Quality.)
The horizontal axis of Fig. 20(a) shows 15 time periods (1 through 15). During each time period, in this example, a user Vouches or Discredits the same item. As shown in Figs. 20(a) and 20(b), initially, a user having a Regard of 0.400 posts the item. During time periods 2-4, three users Vouch for the item, improving the Regard associated with the item to 0.850. During time period 5, a user having a Regard of 0.400 Discredits the item, giving it an associated Regard of 0.600. Users continue to Vouch and Discredit the item until time period 15, when a user Vouches for the item and improves its associated Regard to 0.900.
In Fig. 20(a), the shaded areas represent the maximum of Regard and Quality (i.e., MAX(Regard, Quality)) for the item. As discussed above, a ratings system does not necessarily have robust information until a certain amount of time has passed (e.g., 9 time periods). The results of Vouching and Discrediting, however, provide a robust prediction of item Quality much earlier in the process, even when ratings are "sparse."
In addition, sequential Vouching/Discrediting preferably allows the Vouching or Discrediting user to share in the ratings received for the item, but only to the extent that the user moves the current Regard value associated with the item. For example, the user Vouching in time period 7 will have the least share in the ratings, since the user has moved the Regard the least (from 0.800 to 0.875). The user who Discredits in time period 12 will have the most significant share (in the opposite of the ratings), since he moves the Regard the most (from 0.950 to 0.750).
14. Multi-Item Categories and Attributes
14.1 Description
In some embodiments, items will fall into separate categories, weighted differently in the calculation of Expertise or Regard. Factors considered in establishing separate categories will include the difficulty of creating the items, the effort required by other users to review the items and the urgency of the items.
The relative weight attached to an item in calculations of Expertise or Regard will depend on its category and a number of item attributes that are particular to the category.
14.2 Formulation
Let b0 designate the category of item n contributed by user i. Let b1, . . . , bm identify additional attributes of item n, specific to the item's category. Let the function
Γ(b0, b1, . . . , bm)
map the category and attributes of the item to an additional weighting factor for use in calculations of Regard and Expertise.
14.3 Applied to the Expert Method
[Equation image omitted from the text extraction.]
14.4 Applied to the High Regard Method
[Equation image omitted from the text extraction.]
14.5 Example
Let b5 = 25, indicating that item 5 contributed by user i is participation in a chat session.
[Table image omitted from the text extraction.]
Table 8: Attribute Function
Γ is a function of:
• the number of participants, b5,1
• the total session time in minutes, b5,2 = 45 and
• whether or not user i served as a moderator, b5,3 = 0 (indicating not a moderator, whereas b5,3 = 1 would indicate the user was a moderator).
In this example, the attribute function evaluates to Γ = 0.15 + 0.3 + (b5,3 × .2) = 0.45. [The full expression for Γ is garbled in the text extraction.]
15. Caliber Method
15.1 Description
Some embodiments will involve the transmission, display or evaluation of groupings of items of different Quality, contributed by users with varying Expertise or
Regard. A good example is a threaded discussion list — two or more discussion group postings with a common subject and explicit relationships between messages, i.e. an indication of which messages respond directly to which other messages.
One objective of the method is to present groupings containing better items first, without breaking apart the thread structure. Multiple threads are sorted among themselves, based upon a measurement taking into account characteristics of some or all of the contents of each thread. The Caliber method determines Cz, the "Caliber" of an item grouping.
15.2 Initial Embodiment
In the initial embodiment of the method, Caliber is the grouping average of, for each item, the higher of Regard (of the user associated with the item) and Quality:
Cz = (1/m) Σ(n=1..m) max{hzn, qn}
where z identifies the grouping (e.g. a thread identification number), m the number of items contained within, hzn the Expertise or Regard of the user who contributed item n and qn the Quality of the item.
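A minimal sketch of the initial Caliber calculation (the function name is assumed; each item in a grouping is represented as a (Regard, Quality) pair):

```python
def caliber(items):
    """Initial Caliber method: the grouping average, over all items in a
    thread, of the higher of the contributor's Regard and the item's
    Quality."""
    if not items:
        return 0.0
    return sum(max(h, q) for h, q in items) / len(items)
```

Threads are then sorted among themselves by their Caliber values, so groupings containing better items are presented first without breaking apart the thread structure.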
15.3 Caliber Threshold
In sorting threads, some embodiments may permit a user to select hmin and qmin, such that items below a certain level of Quality, or associated with a user of Expertise or Regard below a certain level, are excluded from the calculation of Caliber:
Cz = (1/m') Σ(n=1..m) max{ hzn if hzn > hmin (0 otherwise), qn if qn > qmin (0 otherwise) }
where
m' = Σ(n=1..m) 1 if hzn > hmin or qn > qmin (0 otherwise)
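The threshold variant can be sketched in the same style; the divisor counts only items passing at least one of the two thresholds:

```python
def caliber_threshold(items, h_min=0.0, q_min=0.0):
    """Caliber with thresholds: Regard at or below h_min and Quality at or
    below q_min are treated as zero, and items failing both tests are
    excluded from the item count."""
    total, count = 0.0, 0
    for h, q in items:
        hv = h if h > h_min else 0.0
        qv = q if q > q_min else 0.0
        if h > h_min or q > q_min:
            count += 1  # item passes at least one threshold
        total += max(hv, qv)
    return total / count if count else 0.0
```

With both thresholds at zero this reduces to the initial Caliber calculation for any grouping whose items all have positive Regard or Quality.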
15.4 Extended Caliber
A general formulation for alternative embodiments of the Caliber method is:
Cz = Σ(n=1..m) Φ(max{hzn, qn})
based on the same transformation functions discussed above. Applied to the Caliber threshold method:
Cz = (1/m') Σ(n=1..m) Φ(max{ hzn if hzn > hmin (0 otherwise), qn if qn > qmin (0 otherwise) })
where
m' = Σ(n=1..m) 1 if hzn > hmin or qn > qmin (0 otherwise)
16. Auction Method
Some embodiments will involve the transmission, display and integration of Expertise, Regard or Quality into the operation of online auctions.
The Auction method integrates Expertise or Regard, specifically, into an auction pricing mechanism. The objective of the method is to increase the integrity, fairness and efficiency of online auctions by establishing standards and a procedure for a seller to limit the field of users permitted to bid in an auction.
Let s" be an Expertise or Regard threshold defined by user i, who is putting item n up for sale in an online auction. In the initial embodiment, user i selects s" at or before the commencement of the auction. If no value is selected, the default is zero. si will establish the minimum Expertise or Regard required for a user to submit a binding bid qualified to participate in whatever auction pricing formula determines the winner of the auction and the closing price. Users who do not meet the s standard, in the initial embodiment, will still be permitted to place "record bids" which are bid levels that are recorded and reported, either to the seller alone or to the seller and other users, together with data on the Expertise or Regard of the user making the record bid, subject to whatever limitations are in effect on the release of price information pursuant to the auction model. For example, in a "second price" style auction, only the second highest bid is disclosed to other potential buyers and this standard might be applied to record bids.
In the initial embodiment, a user placing a record bid will have the option of withdrawing the bid at any time before the event described below. The selling user will have the option at any time until the conclusion of the auction, or until a defined period prior to the conclusion of the auction, to reduce the value of si^n. Record bids submitted by users who fall within the Expertise or Regard threshold after the reduction will immediately be made effective, pari passu with already pending bids, and the users will no longer be entitled to withdraw. Importantly, although the selling user may reduce si^n, the selling user is not permitted to increase its value at any point in the course of the auction. Reductions can be made one time, or multiple times during the pendency of an auction. si^n can even be reduced to zero.
In other words, if a seller who begins an auction with strict standards decides to open up the bidding to users with lesser Expertise or Regard, as the case may be, the door is opened to bids from any users that meet the lower standard. The seller will not be permitted to discriminate among users according to any standard other than the buyer's Expertise or Regard, applied consistently. In alternative embodiments, the user's Expertise or Regard may be monitored during the pendency of the auction and, if the value falls below the current level of si^n, the selling user is given the option (not an obligation) of releasing the user's binding bid.
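The gating behavior can be sketched as follows (names and tuple layout are illustrative; an actual auction system would also track withdrawal rights and the pricing formula):

```python
def qualify_bids(bids, threshold):
    """Split bids into binding bids (users meeting the seller's Expertise or
    Regard threshold) and record bids (users below it, who may withdraw).
    Each bid is a (user_regard, amount) pair."""
    binding = [b for b in bids if b[0] >= threshold]
    record = [b for b in bids if b[0] < threshold]
    return binding, record

def lower_threshold(record_bids, new_threshold):
    """When the seller reduces the threshold (it may never be raised),
    record bids meeting the new standard immediately become binding."""
    return [b for b in record_bids if b[0] >= new_threshold]
```

For example, with a threshold of 0.5 a bidder of Regard 0.3 places only a record bid; if the seller later lowers the threshold to 0.25, that bid becomes binding and can no longer be withdrawn.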
17. Data structures
Fig. 10 shows an example of a small data set in accordance with an embodiment of the Basic High Regard method. This data set is intentionally small for the sake of example. It will be understood that actual data sets usually used with the methods and systems shown here can be very large. In the example, there are five users 1002, each having a user id, such as an e-mail address. Various users have rated items having item ids 1-13. In the example every user has authored items. Ratings received from the users for various items vary between a lowest rating of 0.03 (item 7 rated by user 1 and item 4 rated by user 5) and a highest rating of 0.95 (items 2 and 3 rated by user 1). Applying the Basic High Regard method discussed above yields the Regard (HR) values shown in table 1006 for each of users 1-5. It will be understood that the Regard values shown in this example are for purposes of example only and are not to be taken in a limiting sense.
Fig. 11 shows an example of a data structure used to store and retrieve data required to perform calculations of Regard and Quality in a preferred embodiment of the present invention. This implementation uses circular linked lists to represent the sparse matrices used to store the ratings contributed by the users. The Figure shows a Users linked list 1100 and an Items linked list 1140. Users list 1100 contains an entry for p+1 users. The users have user IDs (uids) 0 through p. Items list 1140 contains an entry for n+1 items that were contributed by the users. The items have item IDs (iids) 0 through n.
Each element in Users list 1100 has a corresponding Authors linked list 1120. In the Figure, only one Authors list is shown to improve the clarity of the Figure. The entry in Users list 1100 for user 2 has an associated Authors list 1120 containing three entries. These entries in the Authors list 1120 shown in the Figure represent the authors whose items have been rated by user 2. Thus, the first entry 1122 represents user 1, whose item(s) were rated by user 2; the second entry 1124 represents user 4, whose item(s) were rated by user 2; and the third entry 1126 represents user 7, whose item(s) were rated by user 2.
Each element in an Authors list 1120 has a corresponding Ratings linked list 1130. In the Figure, only two Ratings lists are shown for the sake of clarity. For example, the entry 1122 in Authors list 1120 for Author 1 has an associated Ratings list 1130 containing three entries. These entries in the Ratings lists 1130 represent items contributed by User 1 (and rated by user 2). Thus, the first entry 1132 represents item 3, which was contributed by user 1 and rated by user 2; the second entry 1134 represents item 8, which was contributed by user 1 and rated by user 2; and the third entry 1136 represents item 17, which was contributed by user 1 and rated by user 2. Each entry in a Ratings list 1130 includes a "g" value, an author ID of the author (aid = author's uid), a "k" value and an "m" value. Each "g" value in a Ratings list 1130 represents a rating for the item. The "r" value in each entry of Authors list 1120 is the sum of the ratings g from its corresponding Ratings list 1130. Thus, for example, the sum of the ratings g in entries 1132, 1134, 1136 is stored as value r in entry 1122.
Each entry in a Ratings list 1130 has a "k" value. In the example, k=l, although k can also have other values. The "t" value in each entry of Authors list 1120 is the sum of the "k" values from its corresponding ratings list 1130. Thus, for example, the sum of the "k" values in entries 1132, 1134 and 1136 is stored as value "t" in entry 1122.
In the described embodiment, when author 2 rates an item of author 1, the rating is added to the Ratings list 1130 for Author 1 and the "t" value for Author 1 is incremented.
The Figure includes a plurality of circular linked lists 1150 (used to find high Regard values) and a plurality of circular linked lists 1160 (used to find "Quality"). Only one of each of lists 1150 and 1160 is shown for the sake of clarity. Each list 1150 is formed by a series of b next links and represents the "r" and "t" values of a single author. Thus, in the Figure, list 1150 contains entries whenever items authored by user 1 were rated by another user. Thus, entry 1122 represents items of user 1 rated by user 2. Entry 1154 represents items of user 1 rated by user 0. In the example, entry 1154 points to the entry for user 1 in Users list 1100, which points in turn to entry 1122, forming circular linked list 1150. To calculate "high Regard," the method follows the b_next chain for a list 1150 to pick up the non-zero "r" and "t" values for a particular author. Each list 1160 is formed by a series of s next links and represents the ratings
"g" of a single item that has been rated by various users. Thus, in the Figure, list 1160 contains entries whenever item 2 of user 4 was rated by another user. Thus, entry 1162 represents item 2 of user 4 rated by user 2. Entry 1164 represents item 2 of user 4 rated by user 9. In the example, entry 1164 points to entry 1142 in Items list 1140 (for item 2). Entry 1142 points to entry 1162 , forming a circular linked list. To calculate "Quality" for an item, the method follows the s_next chain for a list 1160 to pick up the non-zero ratings "g" values.
As shown, each user in Users list 1100 has a corresponding "Regard" in high Regard list 1170. Each entry in the lists is preferably time-stamped (to aid, for example, in the decay method discussed above).
There may be situations where different types of items co-exist. For example, discussion group postings, participation in an online chat session, the products, services and assets one offers to sell in online auctions, and web link recommendations can all be considered items of different types. The concept of "multi-type High Regard" is created so that an item of one type may have more (or less) influence on the High Regard values than an item of a different type.
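One way the type-dependent influence could be realized is a per-type weight applied when an author's contributions are aggregated. The type names, weight values, and weighted-average formula below are illustrative assumptions only; the specification does not prescribe a particular formula.

```python
# Hypothetical per-type weights: names and values invented for illustration.
TYPE_WEIGHTS = {"forum_post": 1.0, "chat": 0.3,
                "auction_listing": 0.8, "link_recommendation": 0.5}

def multi_type_regard(contributions):
    """Weighted average over (item_type, rating) pairs for one author, so
    that item types with larger weights influence High Regard more."""
    total = sum(TYPE_WEIGHTS.get(t, 1.0) * r for t, r in contributions)
    weight = sum(TYPE_WEIGHTS.get(t, 1.0) for t, _ in contributions)
    return total / weight if weight else 0.0
```

Under these assumed weights, a chat contribution moves an author's High Regard far less than a forum posting carrying the same rating.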
18. Example Architectures
It will be understood that the example architectures discussed herein are provided by way of example and are not to be taken in a limiting sense. There are a large number of possible architectures and network structures that benefit from having a server such as rating server 140 of Fig. 1 therein (or a similar rating server returning Expertise and Quality). In the figures that follow, the function of receiving ratings and user information and returning Quality and/or Regard (Expertise) information is preferably performed by rating server 1202. It will also be understood that rating server 1202 can be part of a larger data processing system performing one or more of the functions discussed herein. Rating server 1202 is shown separately, but is not required to be separate from the entities to which it provides information. In addition, the functions of rating server 1202 and the other entities discussed below can be distributed over more than one data processing system or network without departing from the spirit and scope of the present invention.
Fig. 12(a) is a block diagram of a first example forum server application including a rating server 1202. A forum server 1204 provides, for example, the content of a forum such as that shown in Fig. 2(a). Forum server 1204 accesses a database 1206 storing the items displayed by the forum and preferably caching the Quality and Regard values returned from rating server 1202, although not all servers 1204 cache. Expertise and/or Caliber values might also be returned. Rating server 1202 accesses its own database 1208 containing information about the ratings, Quality and the Regard of the various users (including associated Regards caused by Vouching and Discrediting, if these features are part of the system). For example, the data structure of Fig. 11 is preferably stored in database 1208. It will be understood that the databases 1206, 1208 shown can be located at other appropriate places in the system without departing from the spirit and scope of the invention. In Fig. 12(a), a user requests forums, threads and articles within the threads and posts his own items through interaction with forum server 1204. Forum server 1204 interacts with rating server 1202 to obtain the Regard of the contributing users and the item Quality requested. Forum server 1204 also sends ratings contributed by users to rating server 1202 as they are received.
Thus, forum server 1204 identifies new items contributed by the users to rating server 1202. The items themselves are not necessarily sent to server 1202, but server 1202 needs to know that new items have been contributed. An item ID can be determined either by forum server 1204 (in which case, the item ID is preferably stored in conjunction with a forum ID to uniquely identify the item) or by rating server 1202. Forum server 1204 also identifies the existence of new authors to rating server 1202. Again, an author ID can be established either by forum server 1204 or by rating server 1202.
Forum server 1204 also identifies the existence of new users to rating server 1202. A user id can be established either by forum server 1204 or by rating server 1202.
Forum server 1204 also identifies the existence of new ratings to rating server 1202. These ratings are received by forum server 1204 from users via an interface such as that shown in Fig. 2. In alternative embodiments, ratings are received directly from the users without having to go through forum server 1204. Communication between servers 1202 and 1204 and between the servers and user 1201 can be accomplished using any appropriate protocol and message format. For example, rating and forum items can be sent to a user's browser using the well-known http Web protocol. Similarly, information can be exchanged among the elements of Fig. 12(a) using any appropriate non-web protocol. Fig. 12(b) shows another embodiment of servers 1202 and 1204 in which server 1202 communicates directly with the user's browser instead of with forum server 1204. In this embodiment, server 1204 still sends information about new users, new items, new ratings, etc., to rating server 1202, but requests for Quality, Expertise, Caliber and/or Regard values are sent by a browser of user 1201 and returned by rating server 1202 directly to the user's browser. The browser may cache these values in certain embodiments.
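As one concrete possibility, the messages exchanged between forum server 1204 and rating server 1202 could be simple JSON payloads carried over HTTP. The message shapes and field names below are invented for illustration; the patent only requires that new users, items and ratings be identified to the rating server in some agreed format.

```python
# Hypothetical JSON message builders for the forum-server / rating-server
# exchange. Field names ("type", "forum", "item", "rater", "value") are
# illustrative assumptions, not part of the specification.
import json

def new_rating_message(forum_id, item_id, rater_id, rating):
    """Notify the rating server that a user has rated an item."""
    return json.dumps({"type": "new_rating", "forum": forum_id,
                       "item": item_id, "rater": rater_id, "value": rating})

def quality_request(forum_id, item_ids):
    """Ask the rating server for the current Quality values of some items."""
    return json.dumps({"type": "get_quality", "forum": forum_id,
                       "items": item_ids})
```

Pairing the item ID with a forum ID, as the text suggests, keeps item identifiers unique when one rating server serves multiple forums.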
Fig. 12(c) shows an example of a web page where the html of the page causes a web page including items (or descriptions of items) to be fetched from forum server 1204 and Regard/Expertise/Caliber and/or Quality values to be fetched separately from server 1202. Alternatively (not shown), a web page could be fetched from a third server. This third-party web page might include links to both forum server 1204 and rating server 1202. When the browser encounters these links on the web page, it requests information from the specified server and incorporates it into the displayed web page.
Fig. 13 is a block diagram of another example forum server application communicating with a separate integrated content server 1302 and a rating server 1202. Integrated content server 1302 communicates with forum server 1204 to obtain items (e.g., threads of forum messages) and communicates with forum server 1204 (and indirectly with rating server 1202) to obtain Regard/Expertise for users and/or Quality for items. Server 1302 preferably also communicates with a global network. For example, integrated content server 1302 might obtain content from an outside source (for example, an online newspaper) and add ratings information to the content so obtained.
Fig. 14 is a block diagram of another example forum server application 1204 communicating with a separate integrated content server 1302 and a rating server 1202. An ad server 1402 can be, for example, a known commercial ad server, such as the Doubleclick, 24/7 or Adforce ad servers. Ad server 1402 communicates with a user's browser, for example, to deliver ad content. The user 1401 views the contents of a web page including ads from ad server 1402 and integrated content from server 1302. The integrated content includes data from forum server 1204 and Regard and/or Quality values from rating server 1202. The ads can include clickthrough banners from ad server 1402 and/or advertiser's web site 1404. Servers 1402/1404 access a commercial data server 1406. Commercial data server 1406 can track the user's identity through information provided from servers 1402 and 1404 and obtains the Regard and/or item Quality from rating server 1202. Commercial data server 1406 then provides the obtained ratings information to the requesting advertiser web site 1404, ad server 1402 and forum server 1204 to help them target their advertising. Thus, rating server 1202 provides data both to forum server 1204 and to commercial data server 1406, which forwards the data to its own requesting clients.
Alternatively, rating server 1202 can provide ratings directly to the browser of user 1401 in a manner similar to that shown in Fig. 12(b). Fig. 15(a) is a flow diagram showing communication between elements of Fig.
14 during forum/thread index intercommunication. This flow is similar to that shown in Fig. 13, except that Fig. 13 does not include an ad server that provides the user with ads on the web pages the user is viewing. Details of this process are shown in Figs. 16(a)-16(c). Fig. 15(b) is a flow diagram showing communication between elements of Fig.
14 during article view intercommunication. Here, the user requests a particular article/item in a current thread. The forum server requests ratings for the article from rating server 1202 and provides the items and ratings to the user. When the user rates an item, the rating is sent to rating server 1202, which calculates a new Quality for the item and sends the new Quality to the forum server and web server with instructions that the new Quality invalidates any previously cached Quality for the item. Details of this process are shown in Figs. 16(d)-16(f).
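The caching and invalidation behavior described above can be sketched as follows. The dict-backed cache and method names are illustrative assumptions, standing in for whatever cache the forum or web server actually keeps.

```python
# Sketch of cached Quality values: a new Quality pushed by the rating
# server supersedes any previously cached value for that item.

class QualityCache:
    """Per-item Quality cache kept by a forum or web server."""
    def __init__(self):
        self._cache = {}

    def get(self, item_id, fetch):
        # On a miss, fetch Quality from the rating server (here a callback).
        if item_id not in self._cache:
            self._cache[item_id] = fetch(item_id)
        return self._cache[item_id]

    def push_update(self, item_id, new_quality):
        # The rating server recalculated Quality after a new rating;
        # the pushed value invalidates and replaces the cached one.
        self._cache[item_id] = new_quality
```

Pushing updates this way lets the rating server remain the single authority on Quality while the forum server avoids a round trip on every page view.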
Fig. 15(c) is a flow diagram showing communication between elements of Fig.
14 during post/reply intercommunication. When the user posts a new article in a thread, the article/item is sent to rating server 1202. Rating server 1202 sends a message to web server 1302 to invalidate any cached Caliber values for the thread (as opposed to items in the thread). The forum server may handle the Caliber calculation. Alternatively, Caliber may be calculated by a Java applet on the user's own machine. Details of this process are shown in Figs. 16(g)-16(h).
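The text leaves the Caliber computation to the forum server or a client-side applet. A simple, purely illustrative definition would aggregate the Quality values of a thread's items, for example as their mean; the specification does not commit to this formula.

```python
# Illustrative thread-level Caliber: the mean Quality of the thread's items.
# This aggregate is an assumption, not the patent's definition of Caliber.

def thread_caliber(item_qualities):
    """Return the mean Quality of a thread's items; 0.0 for an empty thread."""
    return sum(item_qualities) / len(item_qualities) if item_qualities else 0.0
```

Whichever aggregate is used, a new post changes the thread's item set, which is why cached Caliber values for the whole thread must be invalidated.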
Fig. 17 is a block diagram of an example integrated content server 1302, where the user's browser also communicates with an e-commerce web site 1702 and an e-commerce sourcing server 1704. Rating server 1202 provides Quality and/or Regard values to forum server 1204 and to commercial data server 1406. The user visits the e-commerce web site, which communicates the user's identity to commercial data server 1406. As in Fig. 14, server 1406 communicates with rating server 1202 to obtain information about the user and passes it to its own requesting clients, such as site 1702, server 1704, and server 1204.
Fig. 18 is a block diagram of an example auction server 1802, where the user's browser also communicates with an e-commerce web site 1702 and an e-commerce sourcing server 1704. Rating server 1202 provides Quality and/or Regard values to forum server 1204 and to commercial data server 1406. The user visits the e-commerce web site, which communicates the user's identity to commercial data server 1406. As in Fig. 14, server 1406 communicates with rating server 1202 to obtain information about the user and passes it to its own requesting clients, such as site 1702, server 1704, and server 1204. Alternatively, the functionality of rating server 1202 is integrated into auction web server 1802, so that auction web server 1802 calculates Regard and Quality and receives ratings. Thus, auction web server 1802 can provide Quality and Regard information about items on the auction site and about auction transactions as discussed above. Alternatively, rating server 1202 communicates Regard and/or Quality information to respective browsers of users 1810.
Fig. 19 is a block diagram of an example rating server 1202 for an individual and commercial rating service. Here, rating server 1202 communicates with browser or communication software of user 1902 to provide that user with ratings (Quality and/or Regard) of other users and their items. Rating server 1202 also communicates with commercial data server 1406 as described above in connection with Fig. 14. The commercial data server passes information concerning ratings obtained from rating server 1202 to e-commerce web sites, advertisers' web sites and still other individual users. Thus, rating server 1202 provides data both to individuals and to a commercial service in this example.
Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope of the invention being indicated by the following claims and equivalents.

Claims

1. A method for providing interactive evaluation of a content item disseminated over a computer network comprising the steps of:
(a) disseminating a content item to a plurality of individual users of computers, wherein the content item is provided by one of said users;
(b) receiving evaluations of the content item from the individual users; and
(c) assigning a quality rating to the content item based on weightings of the evaluations provided by the individual users.
2. The method of claim 1, wherein the evaluation provided by a first individual user is weighted to reflect an individual expertise rating of the first individual user.
3. The method of claim 2, wherein the individual expertise of the first individual user is based on weighted evaluations by other individual users of at least one of the content items or evaluations provided by the first individual user.
4. The method of claim 1, further comprising the step of sorting content items by quality rating.
5. The method of claim 2, further comprising the step of sorting content items by the individual expertise of the provider of the content item.
6. The method of claim 4, wherein the evaluation provided by a first individual user is weighted to reflect an individual expertise rating of the first individual user.
7. The method of claim 6, wherein the individual expertise of the first individual user is based on evaluations by other individual users of at least one of the content items or evaluations provided by the first individual user.
8. The method of claim 5, further comprising the step of sorting content items by the individual expertise of the provider of the content item.
9. The method of claim 2, wherein a first individual user may associate his expertise for or against a content item provided by another individual user, thereby affecting the expertise associated with the content item.
10. The method of claim 3, wherein a first individual user may associate his expertise for or against a content item provided by another individual user, thereby affecting the expertise associated with the content item.
11. The method of claim 4, wherein a first individual user may associate his expertise for or against a content item provided by another individual user, thereby affecting the expertise associated with the content item.
12. The method of claim 1, further comprising the step of revising the weightings of evaluations provided by users in accordance with pre-established criteria.
13. The method of claim 12, wherein the evaluation provided by a first individual user is weighted to reflect an individual expertise rating of the first individual user.
14. The method of claim 12, wherein the individual expertise of the first individual user is based on evaluations by other individual users of at least one of the content items or evaluations provided by the first individual user.
15. The method of claim 13, further comprising the step of revising the individual expertise rating of the first individual user in accordance with pre-established criteria.
16. The method of claim 1, wherein an individual user navigates through information available over the network at least in part by providing evaluations of content items.
17. The method of claim 3, wherein an individual user navigates through information available over the network at least in part by providing evaluations of content items.
18. The method of claim 4, wherein an individual user navigates through information available over the network at least in part by providing evaluations of content items.
19. The method of claim 7, wherein an individual user navigates through information available over the network at least in part by providing evaluations of content items.
20. The method of claim 12, wherein an individual user navigates through information available over the network at least in part by providing evaluations of content items.
21. The method of claim 15, wherein an individual user navigates through information available over the network at least in part by providing evaluations of content items.
PCT/US2000/032159 1999-11-26 2000-11-27 Expertise-weighted group evaluation of user content quality over computer network WO2001041014A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU19274/01A AU1927401A (en) 1999-11-26 2000-11-27 Expertise-weighted group evaluation of user content quality over computer network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16759499P 1999-11-26 1999-11-26
US60/167,594 1999-11-26

Publications (2)

Publication Number Publication Date
WO2001041014A1 true WO2001041014A1 (en) 2001-06-07
WO2001041014A9 WO2001041014A9 (en) 2001-11-08

Family

ID=22607999

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/032159 WO2001041014A1 (en) 1999-11-26 2000-11-27 Expertise-weighted group evaluation of user content quality over computer network

Country Status (2)

Country Link
AU (1) AU1927401A (en)
WO (1) WO2001041014A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5926794A (en) * 1996-03-06 1999-07-20 Alza Corporation Visual rating system and method
US6073117A (en) * 1997-03-18 2000-06-06 Kabushiki Kaisha Toshiba Mutual credit server apparatus and a distributed mutual credit system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DATABASE BUSINESS WIRE [online] 10 October 1999 (1999-10-10), "FL Mindsolve Tech : Visual 360 Continues its Leadership", XP002939641, retrieved from 0977438 accession no. DIALOG Database accession no. BW1330 *
DATABASE BUSINESS WIRE [online] 14 December 1998 (1998-12-14), "Mediappraise Receives National Award for Web-based Technology That Enables Companies to Solve Thorny HR Problem", XP002939642, retrieved from 0951520 accession no. DIALOG Database accession no. BW1257 *
DATABASE BUSINESS WIRE [online] 2 August 1999 (1999-08-02), "cPulse Launches Industry's First Online Customer Satisfaction Monitoring Service", XP002939639, retrieved from 00083710 accession no. DIALOG Database accession no. 19990802214B1274 *
DATABASE BUSINESS WIRE [online] 27 March 1995 (1995-03-27), "Paradigm makes WorkWise-Evaluations a True Workgroup Solution with Free Companion Program", XP002939640, retrieved from 0473181 accession no. DIALOG Database accession no. BW1095 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004044705A3 (en) * 2002-11-11 2004-09-02 Transparensee Systems Inc Method and system of searching by correlating the query structure and the data structure
GB2397912A (en) * 2003-01-31 2004-08-04 Hewlett Packard Development Co Priority filtering content of user feedback
WO2004072876A1 (en) * 2003-02-11 2004-08-26 Ipc Gmbh Method for providing services via a communication network
US9626356B2 (en) 2012-12-18 2017-04-18 International Business Machines Corporation System support for evaluation consistency
US9633003B2 (en) 2012-12-18 2017-04-25 International Business Machines Corporation System support for evaluation consistency
WO2017007488A1 (en) * 2015-07-09 2017-01-12 Hewlett Packard Enterprise Development Lp Staged application rollout
US10496392B2 (en) 2015-07-09 2019-12-03 Micro Focus Llc Staged application rollout

Also Published As

Publication number Publication date
WO2001041014A9 (en) 2001-11-08
AU1927401A (en) 2001-06-12

Similar Documents

Publication Publication Date Title
Fradkin Search, matching, and the role of digital marketplace design in enabling trade: Evidence from airbnb
AU2006290220B2 (en) Framework for selecting and delivering advertisements over a network based on user behaviorial interests
US9009082B1 (en) Assessing user-supplied evaluations
US8195522B1 (en) Assessing users who provide content
US8751307B2 (en) Method for implementing online advertising
US7664669B1 (en) Methods and systems for distributing information within a dynamically defined community
US20040225577A1 (en) System and method for measuring rating reliability through rater prescience
US8103540B2 (en) System and method for influencing recommender system
US7966342B2 Method for monitoring link & content changes in web pages
US8554601B1 (en) Managing content based on reputation
US20110153508A1 (en) Estimating values of assets
US20060106670A1 (en) System and method for interactively and progressively determining customer satisfaction within a networked community
US20040260600A1 System & method for predicting demand for items
JP2008537817A (en) Systems, methods, and computer program products for scoring items and determining predictor proficiency based on user sentiment
WO2005054994A2 (en) Method and system for word of mouth advertising via a communications network
CN101727643A (en) Method for providing advertising listing variance in distribution feeds
WO2005084370A2 (en) Integrated ratings for legal entities
CN103678518A (en) Method and device for adjusting recommendation lists
Kostyk et al. Less is more: Online consumer ratings' format affects purchase intentions and processing
CN102163304A (en) Method and system for collaborative networking with optimized inter-domain information quality assessment
WO2014065920A1 (en) System and method for interactive forecasting, news, and data on risk portfolio website
Cook et al. eTrust: Forming relationships in the online world
WO2001041014A1 (en) Expertise-weighted group evaluation of user content quality over computer network
Suki et al. Examination of Mobile Social Networking Service (SNS) Users' Loyalty: A Structural Approach
WO2000050967A2 (en) Computer system and methods for trading information in a networked environment

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WPC Withdrawal of priority claims after completion of the technical preparations for international publication
AK Designated states

Kind code of ref document: C2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

COP Corrected version of pamphlet

Free format text: PAGES 1/36-36/36, DRAWINGS, REPLACED BY NEW PAGES 1/36-36/36; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP