US20010013035A1 - System and method for accessing heterogeneous databases - Google Patents

System and method for accessing heterogeneous databases

Info

Publication number
US20010013035A1
Authority
US
United States
Prior art keywords
query
list
partial list
field
collections
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/028,471
Other versions
US6295533B2
Inventor
William W. Cohen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rutgers State University of New Jersey
Original Assignee
Rutgers State University of New Jersey
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rutgers State University of New Jersey filed Critical Rutgers State University of New Jersey
Priority to US09/028,471 (US6295533B2)
Priority to PCT/US1998/003627 (WO1998039697A2)
Assigned to AT&T CORP. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: COHEN, WILLIAM W.
Publication of US20010013035A1
Application granted
Publication of US6295533B2
Assigned to RUTGERS, THE STATE UNIVERSITY OF NEW JERSEY. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AT&T CORPORATION
Anticipated expiration
Status: Expired - Fee Related

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25 - Integrating or interfacing systems involving database management systems
    • G06F16/256 - Integrating or interfacing systems involving database management systems in federated or virtual databases
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 - TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S - TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 - Data processing: database and file management or data structures
    • Y10S707/99931 - Database or file accessing
    • Y10S707/99933 - Query processing, i.e. searching
    • Y10S707/99935 - Query augmenting and refining, e.g. inexact access

Definitions

  • This invention relates to accessing databases, and particularly to accessing heterogeneous relational databases.
  • Databases are the principal way in which information is stored.
  • the most commonly used type of database is a relational database, in which information is stored in tables called relations. Relational databases are described in A First Course on Database Systems by Ullman and Widom, Prentice Hall, 1997, and in An Introduction to Database Systems, by C. J. Date, Addison Wesley, 1995.
  • Each entry in a relation is typically a character string or a number.
  • relations are thought of as sets of tuples, a tuple corresponding to a single row in the table.
  • the columns of a relation are called fields.
  • Joining relations is the principal means of aggregating information that is spread across several relations.
  • FIG. 1 shows two sample relations Q 101 and R 102 , and the result of joining Q and R (the “Join” of Q and R) 103 on the fields named MovieID (the columns indicated by 104 .)
  • relations are usually joined on special fields that have been designated as keys, and database management systems are implemented so as to efficiently perform joins on fields that are keys.
  • each tuple corresponds to an assertion about the world. For instance, the tuple <12:30, 11, “Queen of Outer Space (ZsaZsa Gabor)”, 137> (the row indicated by 105) in the relation Q 101 of FIG. 1 corresponds to the assertion “the movie named ‘Queen of Outer Space’, starring Zsa Zsa Gabor, will be shown at 12:30 on channel 11.”
  • Known systems can represent information that is uncertain in a database.
  • One known method associates every tuple in the database with a real number indicating the probability that the corresponding assertion about the world is true. For instance, the tuple described above might be associated with the probability 0.9 if the preceding program was a major sporting event, such as the World Series. The uncertainty represented in this probability includes the possibility, for example, that the World Series program may extend beyond its designated time slot. Extensions to the database operations of join and selection useful for relations with uncertain information are also known.
  • a document vector representation of a document is a vector with one component for each term appearing in the corpus.
  • a term is typically a single word, a prefix of a word, or a phrase containing a small number of words or prefixes. The value of the component corresponding to a term is zero if that term does not appear in the document, and non-zero otherwise.
  • non-zero values are chosen so that words that are likely to be important have larger weights. For instance, words that occur many times in a document, or words that are rare in the corpus, have large weights.
  • a similarity function can then be defined for document vectors, such that documents with similar term weights have high similarities, and documents with different term weights have low similarity. Such a similarity function is called a term-based similarity metric.
  • An operation commonly supported by such text databases is called ranked retrieval.
  • the user enters a query, which is a textual description of the documents he or she desires to be retrieved. This query is then converted into a document vector.
  • the database system then presents to the user a list of documents in the database, ordered (for example) by decreasing similarity to the document vector that corresponds to the query.
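  • As an illustration of this ranked-retrieval operation, the following Python sketch (illustrative only; the function and variable names are assumptions, not part of the patent) scores every document vector against a query vector and returns the K most similar documents in decreasing order of similarity:

      # Illustrative sketch (assumed names): ranked retrieval over sparse document
      # vectors, where each vector is a dict mapping terms to weights and all
      # vectors have already been normalized to unit length.
      from typing import Dict, List, Tuple

      Vector = Dict[str, float]

      def similarity(v: Vector, w: Vector) -> float:
          # dot product of two sparse unit-length vectors (their cosine similarity)
          if len(w) < len(v):
              v, w = w, v
          return sum(weight * w.get(term, 0.0) for term, weight in v.items())

      def ranked_retrieval(query: Vector, corpus: Dict[str, Vector], k: int) -> List[Tuple[str, float]]:
          # score every document against the query and return the k most similar,
          # ordered by decreasing similarity
          scored = [(doc_id, similarity(query, vec)) for doc_id, vec in corpus.items()]
          scored.sort(key=lambda pair: pair[1], reverse=True)
          return scored[:k]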
  • the Review column (the column indicated by 107 ) of relation R 102 in FIG. 1 might be instead stored in a text database.
  • the answer to the user query “embarrassingly bad science fiction” might be a list containing the review of “Queen of Outer Space” as its first element, and the review of “Space Balls” as its second element.
  • RDBMS relational database management systems
  • Consider two relations M and E, where each tuple in M encodes a single person's medical history, and each tuple in E encodes data pertaining to a single employee of some large company. Joining these relations is feasible if M and E both use social security numbers as keys.
  • Another known technique for handling key mismatches is to use an equality predicate, a function which, when called with arguments Key 1 and Key 2 , indicates if Key 1 and Key 2 should be considered equivalent for the purpose of a join.
  • Such a function is of limited applicability because it is appropriate only for a small number of pairs of columns in a specific database.
  • the use of equality tests is described in the Identification and Resolution of Semantic Heterogeneity in Multidatabase Systems , by Douglas Fang, Joachim Hammer, and Dennis McLeod, in Multidatabase Systems: An Advanced Solution for Global Information Sharing , pages 52-60. IEEE Computer Society Press, Los Alamitos, Calif., 1994.
  • Both normalization and equality predicates are potentially expensive in terms of human effort: for every new type of key field, a new equality predicate or normalization procedure must be written by a human programmer.
  • the keys to be matched are strings that name certain real-world entities. (In our example, for instance, they are the names of movies.) Techniques are known for examining pairs of names and assessing the probability that they refer to the same entity. Once this has been done, then a human can make a decision about what pairs of names should be considered equal for all subsequent queries that require key matching. Such techniques are described in Record Linkage Techniques - 1985, edited by B. Kilss and W. Alvey, Statistics of Income Division, Internal Revenue Service Publication 1299-2-96, available from {http://www.bts.gov/fcsm/methodology/}, 1985, as well as in the Merge/purge Problem for Large Databases, by M. Hernandez and S. Stolfo, in Proceedings of the 1995 ACM SIGMOD, May 1995.
  • known methods require that data from heterogeneous sources be preprocessed in some manner.
  • the data fields that will be used as keys must be normalized, using a domain-specific procedure, or a domain-specific equality test must be written, or a determination as to which keys are in fact matches must be made by a user, perhaps guided by some previously computed assessment of the probability that each pair of keys matches.
  • An embodiment of the present invention accesses information stored in heterogeneous databases by using probabilistic database analysis techniques to answer database queries.
  • the embodiment uses uncertain information about possible key matches obtained by using general-purpose similarity metrics to assess the probability that pairs of keys from different databases match. This advantageously allows a user to access heterogeneous sources of information without requiring any preprocessing steps that must be guided by a human. Furthermore, when pairs of keys from different sources are assumed to match, the user is apprised of these assumptions, and provided with some estimate of the likelihood that the assumptions are correct. This likelihood information can help the user to assess the quality of the answer to the user's query.
  • Data from heterogeneous databases is collected and stored in relations.
  • the data items in these relations that will be used as keys are represented as text.
  • a query is received by a database system. This query can pertain to any subset of the relations collected from the heterogeneous databases mentioned above. The query may also specify data items from these relations that must or should refer to the same entity.
  • a set of answer tuples is computed by the database system. These tuples are those that are determined in accordance with the present invention to be most likely to satisfy the user's query. A tuple is viewed as likely to satisfy the query if those data items that should refer to the same entity (according to the query) are judged to have a high probability of referring to the same entity. The probability that two data items refer to the same entity is determined using problem-independent similarity metrics that advantageously do not require active human intervention to formulate for any particular problem.
  • the answer to a query could consist of a small number of tuples with a high probability of being correct answers, and a huge number of tuples with a small but non-zero probability of being correct answers.
  • Known probabilistic database methods would disadvantageously generate all answer tuples with non-zero probability, which often would be an impractically large set.
  • the present invention advantageously solves this problem by computing and returning to the user only a relatively small set of tuples that are most likely to be correct answers, rather than all tuples that could possibly be correct answers.
  • the answer tuples are returned to the user in the order of their computed likelihood of being correct answers, i.e., the tuples judged to be most likely to be correct are presented first, and the tuples judged less likely to be correct are presented later.
  • Each collection includes a structured entity.
  • Each structured entity in turn includes a field.
  • a query is received that specifies a subset of the set of collections and a logical constraint between fields that includes a requirement that a first field match a second field.
  • the probability that the first field matches the second field based upon the contents of the fields is automatically determined.
  • a collection of lists is generated in response to the query, where each list includes members of the subset of collections specified in the query.
  • Each list also has an estimate of the probability that the members of the list satisfy the logical constraint specified in the query.
  • the present invention advantageously combines probabilistic database techniques with probabilistic assessments of similarity to provide a means for automatically and efficiently accessing heterogeneous data sources without the need for human intervention in identifying similar keys.
  • FIG. 1 shows a prior art example of two relations Q and R and a join of relations Q and R.
  • FIG. 2 shows an embodiment of a system and apparatus in accordance with the present invention.
  • FIG. 3 shows a table of relations upon which experiments were performed to determine properties of the present invention.
  • FIG. 2 An embodiment of an apparatus and system in accordance with the present invention is shown in FIG. 2.
  • a search server 201, user 202, and database server A 203, database server B 204 and database server C 205 are coupled to network 206.
  • Heterogeneous databases U 207 , V 208 and W 209 are coupled to database server A 203 .
  • Heterogeneous databases X 210 and Y 211 are coupled to database server B 204 .
  • Heterogeneous database Z 212 is coupled to database server C 205.
  • User 202 submits a query to search server 201.
  • Search server 201 conducts a search of heterogeneous databases U 207, V 208, W 209, X 210, Y 211 and Z 212 in an automatic fashion in accordance with the method of the present invention.
  • search server 201 includes processor 213 and memory 214 that stores search instructions 215 adapted to be executed on processor 213 .
  • processor 213 is a general purpose microprocessor, such as the Pentium II processor manufactured by the Intel Corporation of Santa Clara, Calif.
  • processor 213 is an Application Specific Integrated Circuit (ASIC) that embodies at least part of the search instructions 215 , while the rest are stored at memory 214 .
  • memory 214 is a hard disk, read-only memory (ROM), random access memory (RAM), flash memory, or any combination thereof.
  • Memory 214 is meant to encompass any medium capable of storing digital data. As shown in FIG. 2, memory 214 is coupled to processor 213 .
  • One embodiment of the present invention is a medium that stores search instructions.
  • the phrase “adapted to be executed” is meant to encompass instructions stored in a compressed and/or encrypted format, as well as instructions that have to be compiled or installed by an installer before being executed by processor 213 .
  • the search server further comprises a port 216 adapted to be coupled to a network 206 .
  • the port is coupled to memory 214 and processor 213 .
  • network 206 is the Internet. In another embodiment, it is a Local Area Network (LAN). In yet another embodiment, it is a Wide Area Network (WAN). In accordance with the present invention, network 206 is meant to encompass any switched means by which one computer communicates with another.
  • LAN Local Area Network
  • WAN Wide Area Network
  • the user is a personal computer.
  • database servers A 203 , B 204 and C 205 are computers, adapted to act as interfaces between a network 206 and databases.
  • the database servers 203 , 204 and 205 are server computers. In another embodiment, they act as peer computers.
  • phrase means any fragment of text down to a single character, e.g., a word, a collection of words, a letter, several letters, a number, a punctuation mark or set of punctuation marks, etc.
  • the present invention operates on data stored in relations, where the primitive elements of each relation are document vectors, rather than atoms.
  • This data model is called STIR, which stands for Simple Texts In Relations.
  • STIR Simple Texts In Relations.
  • the term “simple” indicates that no additional structure is assumed for the texts.
  • an extensional database consists of a term vocabulary T and a set of relations {p1, . . . , pn}. Associated with each relation p is a set of tuples called tuples(p). Every tuple (v1, . . . , vk) ∈ tuples(p) has exactly k components, and each of these components vi is a document vector. It is also assumed that a score is associated with every tuple in p. This score will always be between zero and one, and will be denoted score((v1, . . . , vk) ∈ tuples(p)). In most applications, the score of every tuple in a base relation will be one; however, in certain embodiments, non-unit scores can occur. This allows materialized views to be stored.
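  • A minimal Python sketch of how such an extensional database might be represented follows (the class and field names are hypothetical, not the patent's implementation): each relation holds tuples whose components are sparse document vectors, and each tuple carries a score between zero and one.

      # Minimal sketch (hypothetical names): relations whose tuple components are
      # document vectors, each tuple carrying a score between zero and one.
      from dataclasses import dataclass, field
      from typing import Dict, List, Tuple

      Vector = Dict[str, float]            # sparse document vector: term -> weight

      @dataclass
      class ScoredTuple:
          components: Tuple[Vector, ...]   # exactly k document vectors
          score: float = 1.0               # non-unit scores allow materialized views

      @dataclass
      class Relation:
          name: str
          arity: int
          rows: List[ScoredTuple] = field(default_factory=list)

      @dataclass
      class ExtensionalDatabase:
          vocabulary: set                  # the term vocabulary T
          relations: Dict[str, Relation] = field(default_factory=dict)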
  • WHIRL Word-based Heterogeneous Information Retrieval Logic
  • A conjunctive WHIRL query is written B1 ∧ . . . ∧ Bk, where each Bi is a literal.
  • An EDB literal is written p(X 1 , . . . , X k ) where p is the name of an EDB relation, and the X i 's are variables.
  • a similarity literal is written X ⁇ Y, where X and Y are variables. Intuitively, this can be interpreted as a requirement that documents X and Y be similar. If X appears in a similarity literal in a query Q, then X also appears in some EDB literal in Q.
  • a substitution ⁇ is a mapping from variables to document vectors.
  • the variables X i in the substitution are said to be “bound” by ⁇ .
  • Q ⁇ denotes the result of applying that mapping to Q, i.e., the result of taking Q and replacing every variable X i appearing in Q with the corresponding document vector v i .
  • a substitution ⁇ is “ground for Q” if Q ⁇ contains no variables.
  • B is a literal
  • the r-answer RQ contains the r highest-scoring ground substitutions for Q, ordered by non-increasing score.
  • a “basic WHIRL clause” is written p(X1, . . . , Xk) ← Q, where Q is a conjunctive WHIRL query that contains all of the Xi's.
  • a “basic WHIRL view” V is a set of basic WHIRL clauses with heads that have the same predicate symbol p and arity k. Notice that by this definition, all the literals in a clause body are either EDB literals or similarity literals. In other words, the view is flat, involving only extensionally defined predicates.
  • the r-materialization of a view can be constructed using only an r-answer for each clause body involved in the view. As r is increased, the r-answers will include more and more high-scoring substitutions, and the r-materialization will become a better and better approximation to the full materialized view. Thus, given an efficient mechanism for computing r-answers for conjunctive views, one can efficiently approximate the answers to more complex queries.
  • WHIRL implements the operations of finding the r-answer to a query and the r-materialization of a view.
  • r-materialization of a view can be implemented easily given a routine for constructing r-answers.
  • finding an r-answer is viewed as an optimization problem.
  • the query processing algorithm uses a general method called A* search to find the highest-scoring r substitutions for a query.
  • the A* search method is described in Principles of Artificial Intelligence, by Nils Nilsson, Morgan Kaufmann, 1987. Viewing query processing as search is natural, given that the goal is to find a small number of good substitutions, rather than all satisfying substitutions.
  • the search method of one embodiment also generalizes certain techniques used in IR ranked retrieval. However, using search in query processing is unusual for database systems, which more typically use search only in optimizing a query.
  • One way of avoiding this expense is to start by retrieving a small number of documents Y that are likely to be highly similar to x 1 .
  • any Y's not retrieved in this step must be somewhat dissimilar to x1, since such a Y cannot share with x1 the high-weight term “Armadillos.”
  • a subtask like “find the r documents Y that are most similar to x 1 ” might be accomplished efficiently by the subplan of “find all Y's containing the term ‘Armadillos’.” Of course, this subplan depends on the vector x 1 .
  • Each substitution is a list of values that could be assigned to some, but not necessarily all, of the variables appearing in the query.
  • one state in the search space for the query given above would correspond to the substitution that maps X to x 1 and leaves Y unbound.
  • Each state in the search space is a “partial list” of possible variable bindings.
  • a “partial list” can include bindings to all variables in the query, or bindings to some subset of those variables, including the empty set. The steps taken through this search space are small ones, as suggested by the discussion above.
  • one operation is to select a single term t and use an inverted index to find plausible bindings for a single unbound variable.
  • the search algorithm orders these operations dynamically, focusing on those partial substitutions that seem to be most promising, and effectively pruning partial substitutions that cannot lead to a high scoring ground substitution.
  • A* search is a graph search method which attempts to find the highest scoring path between a given start state s0 and a goal state.
  • a pseudo-code embodiment of A* search as used in an embodiment of the present invention is as follows:
  • procedure A*(r, s0, goalState(·), children(·))
  •     OPEN := {s0}
  •     while OPEN is not empty and fewer than r goal states have been output do
  •         s := the state in OPEN with the largest h(s); OPEN := OPEN − {s}
  •         if goalState(s) then output s
  •         else OPEN := OPEN ∪ children(s)
  •     endwhile
  • A substitution θ is E-valid if it violates no exclusion in E, i.e., for every pair (t, Y) ∈ E, the document bound to Y does not contain the term t.
  • goal states are defined by a goalState predicate.
  • the graph being searched is defined by a function children(s), which returns the set of states directly reachable from state s.
  • the A* algorithm maintains a set OPEN of states that might lie on a path to some goal state. Initially OPEN contains only the start state s 0 .
  • a single state is removed from the OPEN set; in particular, the state s that is “best” according to a heuristic function, h(s), is removed from OPEN. If s is a goal state, then this state is output; otherwise, all children of s are added to the OPEN set. The search continues until r goal states have been output, or the search space is exhausted.
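  • A short Python sketch of this best-first loop is given below (an illustration under assumed interfaces for h, goalState and children, not the patent's code); it repeatedly removes the state with the best h value, outputs it if it is a goal state, and otherwise expands its children, stopping after r goal states or when OPEN is empty:

      # Sketch of the search loop described above, under assumed interfaces for
      # h (heuristic score), goal_state and children; not the patent's exact code.
      import heapq
      from typing import Callable, Iterable, Iterator, TypeVar

      S = TypeVar("S")

      def a_star(r: int, s0: S,
                 goal_state: Callable[[S], bool],
                 children: Callable[[S], Iterable[S]],
                 h: Callable[[S], float]) -> Iterator[S]:
          counter = 0                            # tie-breaker so states are never compared
          open_heap = [(-h(s0), counter, s0)]    # max-heap on h, via negated values
          produced = 0
          while open_heap and produced < r:
              _, _, s = heapq.heappop(open_heap) # remove the "best" state from OPEN
              if goal_state(s):
                  produced += 1
                  yield s                        # output a goal state
              else:
                  for child in children(s):      # otherwise add all children to OPEN
                      counter += 1
                      heapq.heappush(open_heap, (-h(child), counter, child))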
  • An inverted index will map terms t ∈ T to the tuples that contain them: specifically, I assume a function index(t, p, i) which returns the set of tuples (v1, . . . , vi, . . . , vk) in tuples(p) such that the weight of t in vi is greater than zero.
  • This index can be evaluated in linear time (using an appropriate data structure) and precomputed in linear time from the EDB.
  • I also precompute the function maxweight(t, p, i), which returns the maximum weight of the term t over all documents vi in the i-th column of p.
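  • The following Python sketch (hypothetical data layout: tuple components are dicts mapping terms to weights) shows one way the index and maxweight functions described above could be precomputed:

      # Sketch (assumed layout: each tuple component is a dict of term weights) of
      # precomputing the inverted index and the maxweight table described above.
      from collections import defaultdict

      def build_index(relations):
          # relations: dict mapping a relation name p to a list of tuples,
          # where each tuple is a sequence of sparse document vectors
          index = defaultdict(list)        # (term, p, i) -> tuples whose i-th column contains term
          maxweight = defaultdict(float)   # (term, p, i) -> largest weight of term in column i of p
          for p, rows in relations.items():
              for row in rows:
                  for i, vec in enumerate(row):
                      for term, weight in vec.items():
                          if weight > 0:
                              index[(term, p, i)].append(row)
                              maxweight[(term, p, i)] = max(maxweight[(term, p, i)], weight)
          return index, maxweight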
  • Inverted indices are commonly used in the field of information retrieval, and means of storing and accessing them efficiently are well known to those skilled in the art of information retrieval.
  • the maxweight function is also used in many known techniques for speeding up processing of ranked retrieval queries, such as those described in Turtle and Flood.
  • the states of the graph searched will be pairs (θ, E), where θ is a substitution, and E is a set of exclusions. Goal states will be those for which θ is ground for Q, and the initial state s0 is (∅, ∅).
  • An exclusion is a pair (t,Y) where t is a term and Y is a variable. Intuitively, it means that the variable Y must not be bound to a document containing the term t.
  • the second operation of constraining a state implements a sort of sideways information passing.
  • Let p(Y1, . . . , Yk) be the generator for the (unbound) variable Y, and let l be Y's generation index.
  • the states in S t thus correspond to binding Y to some vector containing the term t.
  • the set children(s) is St ∪ {s′}.
  • h′(*) is defined as follows: h′(⟨B, θ, E⟩) = Σ { xt · maxweight(t, p, l) : t ∈ T, (t, Y) ∉ E }, i.e., the sum, over all terms t that are not excluded for Y, of the weight of t in x times maxweight(t, p, l), where x is the document bound to X.
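  • A small Python sketch of this bound (assuming the maxweight table from the earlier sketch, with x the dict of term weights for the document bound to X) might look like:

      # Sketch of the h'(*) upper bound defined above; x is the dict of term
      # weights for the document bound to X, E is the exclusion set, and
      # maxweight is the table from the previous sketch (assumed structures).
      def h_prime(x, Y, E, maxweight, p, l):
          return sum(weight * maxweight.get((term, p, l), 0.0)
                     for term, weight in x.items()
                     if (term, Y) not in E)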
  • the terms of a document are stems produced by the Porter stemming algorithm.
  • the Porter stemming algorithm is described in “An Algorithm for Suffix Stripping”, by M. F. Porter, Program, 14(3):130-137, 1980.
  • weights for a document v i are computed relative to the collection C of all documents appearing in the i-th column of p.
  • the TF-IDF weighting scheme does not provide sensible weights for relations that contain only a single tuple. (These relations are used as a means of introducing “constant” documents into a query.) Therefore weights for these relations must be calculated as if they belonged to some other collection C′.
  • every query is checked before invoking the query algorithm to see if it contains any EDB literals p(X1, . . . , Xk) for a singleton relation p. If one is found, the weights for the document xi to which a variable will be bound are computed using the collection of documents found in the column corresponding to Yi, where Yi is some variable that appears in a similarity literal with Xi. If several such Yi's are found, one is chosen arbitrarily. If Xi does not appear in any similarity literals, then its weights are irrelevant to the computation.
  • a state will again be removed from the OPEN list. It may be that h(s′1) is less than the h(*) value of the best goal state; in this case, a ground substitution will be removed from OPEN, and an answer will be output. Or it may be that h(s′1) is higher than the best goal state, in which case it will be removed and a new term, perhaps “equipment”, will be used to generate some additional ground substitutions. These will be added to the OPEN list, along with a state which has a larger exclusion set and thus a lower h(*) value.
  • the first step will be to explode the smaller of these relations. Assume that this is p, and that p contains 1000 tuples. This will add 1000 states s1, . . . , s1000 to the OPEN list. In each of these states, Company 1 and Industry are bound, and Company 1 ~ Company 2 is a constraining literal. Thus each of these 1000 states is analogous to the state s1 in the preceding example.
  • the h(*) values for the states s 1 , . . . ,s 1000 will not be equal.
  • the value of the state si associated with the substitution θi will depend on the maximum possible score for the literal Company 1 ~ Company 2, and this will be large only if the high-weight terms in the document bound to Company 1 by θi appear in the company field of q.
  • a one-word document like “3Com” will have a high h(*) value if that term appears (infrequently) in the company field of q, and a zero h(*) value if it does not appear; similarly, a document like “Agents, Inc” will have a low h(*) value if the term “agents” does not appear in the first column of q.
  • the next step of the algorithm will be to choose a promising state from the OPEN list, a state that could result in a good final score.
  • a term from the Company 1 document in s1, e.g., “3Com”, will then be picked and used to generate bindings for Company 2 and WebSite. If any of these bindings results in a perfect match, then an answer can be generated on the next iteration of the algorithm.
  • In short, the operation of WHIRL is somewhat similar to time-sharing 1000 simpler queries on a machine for which the basic unit of computation is to access a single inverted index. However, WHIRL's use of the h(*) function will schedule the computation of these queries in an intelligent way: queries unlikely to produce good answers can be discarded, and low-weight terms are unlikely to be used.
  • bindings can be propagated through similarity literals.
  • the binding for IO is first used to generate bindings for Company 1 and Industry, and then the binding for Company 1 is used to bind Company 2 and Website. Note that bindings are generated using high-weight, low-frequency terms first, and low-weight, high-frequency terms only when necessary.
  • Embodiments of the invention have been evaluated on data collected from a number of sites on the World Wide Web. I have evaluated the run-time performance with CPU time measurements on a specific class of queries, which I will henceforth call similarity joins.
  • a similarity join is a query of the form p(X1, . . . , Xi, . . . , Xk) ∧ q(Y1, . . . , Yj, . . . , Yb) ∧ Xi ~ Yj
  • the naive method for similarity joins takes each document in the i-th column of relation p in turn, and submits it as an IR ranked retrieval query to a corpus corresponding to the j-th column of relation q.
  • the top r results from each of these IR queries are then merged to find the best r pairs overall. This might more appropriately be called a “semi-naive” method; on each IR query, I use inverted indices, but I employ no special query optimizations.
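  • The following Python sketch illustrates this semi-naive method under assumed data structures (tuples are sequences of sparse term-weight dicts; the similarity function is the cosine similarity described elsewhere in this document):

      # Sketch of the "semi-naive" method under assumed data structures: each
      # document in column i of p is submitted as a ranked-retrieval query
      # against the corpus formed by column j of q, and the per-query top-r
      # results are merged into the best r pairs overall.
      import heapq

      def naive_similarity_join(p_rows, i, q_rows, j, r, similarity):
          candidates = []
          for p_row in p_rows:
              query_vec = p_row[i]
              # one IR ranked-retrieval query against the j-th column of q
              scored = [(similarity(query_vec, q_row[j]), p_row, q_row) for q_row in q_rows]
              scored.sort(key=lambda item: item[0], reverse=True)
              candidates.extend(scored[:r])
          # merge the per-query results and keep the best r pairs overall
          return heapq.nlargest(r, candidates, key=lambda item: item[0])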
  • WHIRL is closely related to the maxscore optimization, which is described in Query Evaluation: Strategies and Optimizations by Howard Turtle and James Flood, in Information Processing and Management, 31(6):831-850, November 1995. WHIRL was compared to a maxscore method for similarity joins; this method is analogous to the naive method described above, except that the maxscore optimization is used in finding the best r results from each “primitive” query.
  • WHIRL speeds up the maxscore method by a factor of between 4 and 9, and speeds up the naive method by a factor of 20 or more.
  • Average precision is the average precision for all “plausible” target answers, where an answer is considered a plausible target only if it is correct.
  • I used three pairs of relations from three different domains.
  • I joined Iontech 301 and Hoovers Web 302 using company name as the primary key, and the string representing the “site” portion of the home page as a secondary key.
  • I joined Review 305 and MovieLink 306 (FIG. 3), using film names as a primary key.
  • In the process of finding answers with high score, the invention employs A* search. Many variants of this search algorithm are known and many of these could be used.
  • the current invention also outputs answer tuples in an order that is strictly dictated by score; some variants of A* search are known that require less compute time, but output answers in an order that is largely, but not completely, consistent with this ordering.

Abstract

A system and method are provided for answering queries concerning information stored in a set of collections. Each collection includes a structured entity, and each structured entity includes a field. A query is received that specifies a subset of the set of collections and a logical constraint between fields that includes a requirement that a first field match a second field. The probability that the first field matches the second field is determined automatically based upon the contents of the fields. A collection of lists is generated in response to the query, where each list includes members of the subset of collections specified in the query, and where each list has an estimate of the probability that the members of the list satisfy the logical constraint specified in the query.

Description

    CROSS-REFERENCES TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 60/039,576 filed Feb. 25, 1997. [0001]
  • FIELD OF THE INVENTION
  • This invention relates to accessing databases, and particularly to accessing heterogeneous relational databases. [0002]
  • BACKGROUND OF THE INVENTION
  • Databases are the principal way in which information is stored. The most commonly used type of database is a relational database, in which information is stored in tables called relations. Relational databases are described in [0003] A First Course on Database Systems by Ullman and Widom, Prentice Hall, 1997, and in An Introduction to Database Systems, by C. J. Date, Addison Wesley, 1995.
  • Each entry in a relation is typically a character string or a number. Generally relations are thought of as sets of tuples, a tuple corresponding to a single row in the table. The columns of a relation are called fields. [0004]
  • Commonly supported operations on relations include selection and join. Selection is the extraction of tuples that meet certain conditions. Two relations are joined on fields F1 and F2 by first taking their Cartesian product (the Cartesian product of two relations A and B is the set of all tuples a1, . . . , am, b1, . . . , bn, where a1, . . . , am is a tuple from A, and b1, . . . , bn is a tuple from B) and then selecting all tuples such that F1=F2. This leads to a relation with two equivalent fields, so usually one of these is discarded. [0005]
  • Joining relations is the principal means of aggregating information that is spread across several relations. For example, FIG. 1 shows two sample relations Q 101 and R 102, and the result of joining Q and R (the “Join” of Q and R) 103 on the fields named MovieID (the columns indicated by 104.) For reasons of efficiency, relations are usually joined on special fields that have been designated as keys, and database management systems are implemented so as to efficiently perform joins on fields that are keys. [0006]
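  • As a toy illustration of the join operation just described (a sketch, not part of the patent), the following Python fragment forms the Cartesian product of two relations represented as lists of dicts and keeps the rows whose join fields are equal, discarding one of the two equivalent fields by merging the dicts:

      # Toy sketch of the textbook join described above: take the Cartesian
      # product of relations A and B (lists of dicts) and keep the rows whose
      # fields f1 and f2 are equal; merging the dicts discards one of the two
      # equivalent fields.
      from itertools import product

      def join(A, B, f1, f2):
          return [{**a, **b} for a, b in product(A, B) if a[f1] == b[f2]]

      # Example (hypothetical data mirroring FIG. 1):
      # Q = [{"MovieID": 137, "MovieName": "Queen of Outer Space (ZsaZsa Gabor)"}]
      # R = [{"MovieID": 137, "Review": "... embarrassingly bad science fiction ..."}]
      # join(Q, R, "MovieID", "MovieID")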
  • In most databases, each tuple corresponds to an assertion about the world. For instance, the tuple <12:30, 11, “Queen of Outer Space (ZsaZsa Gabor)”, 137> (the row indicated by 105) in the relation Q 101 of FIG. 1 corresponds to the assertion “the movie named ‘Queen of Outer Space’, starring Zsa Zsa Gabor, will be shown at 12:30 on channel 11.” [0007]
  • Known systems can represent information that is uncertain in a database. One known method associates every tuple in the database with a real number indicating the probability that the corresponding assertion about the world is true. For instance, the tuple described above might be associated with the probability 0.9 if the preceding program was a major sporting event, such as the World Series. The uncertainty represented in this probability includes the possibility, for example, that the World Series program may extend beyond its designated time slot. Extensions to the database operations of join and selection useful for relations with uncertain information are also known. One method for representing uncertain information in a database is described in Probabilistic Datalog - a Logic for Powerful Retrieval Methods by Norbert Fuhr, in Proceedings of the 1995 ACM SIGIR Conference on Research in Information Retrieval, pages 282-290, New York, 1995. Other methods are surveyed in Uncertainty Management in Information Systems, edited by Motro and Smets, Kluwer Academic Publishers, 1997. Database systems that have been extended in this way are called probabilistic databases. [0008]
  • Another way of storing information is with a text database. Here information is stored as a collection of documents, also known as a corpus. Each document is simply a textual document, typically in English or some other human language. One standard method for representing text in such a database so that it can be easily accessed by a computer is to represent each document as a so-called document vector. A document vector representation of a document is a vector with one component for each term appearing in the corpus. A term is typically a single word, a prefix of a word, or a phrase containing a small number of words or prefixes. The value of the component corresponding to a term is zero if that term does not appear in the document, and non-zero otherwise. [0009]
  • Generally the non-zero values are chosen so that words that are likely to be important have larger weights. For instance, words that occur many times in a document, or words that are rare in the corpus, have large weights. A similarity function can then be defined for document vectors, such that documents with similar term weights have high similarities, and documents with different term weights have low similarity. Such a similarity function is called a term-based similarity metric. [0010]
  • An operation commonly supported by such text databases is called ranked retrieval. The user enters a query, which is a textual description of the documents he or she desires to be retrieved. This query is then converted into a document vector. The database system then presents to the user a list of documents in the database, ordered (for example) by decreasing similarity to the document vector that corresponds to the query. [0011]
  • As an example, the Review column (the column indicated by [0012] 107) of relation R 102 in FIG. 1 might be instead stored in a text database. The answer to the user query “embarrassingly bad science fiction” might be a list containing the review of “Queen of Outer Space” as its first element, and the review of “Space Balls” as its second element.
  • In general, the user will only be interested in seeing a small number of the documents that are highly similar. Techniques are known for efficiently generating a reduced list of documents, say of size K, that contains all or most of the K documents that are most similar to the query vector, without generating as an intermediate result a list of all documents that have non-zero similarity to the query. Such techniques are described in Chapters 8 and 9 of Automatic Text Processing, edited by Gerard Salton, Addison Wesley, Reading, Massachusetts, 1989, and in Query Evaluation: Strategies and Optimizations by Howard Turtle and James Flood in Information Processing and Management, 31(6):831-850, November 1995. [0013]
  • In some relational database management systems (RDBMS) relations are stored in a distributed fashion, i.e., different relations are stored on different computers. One issue which arises in distributed databases pertains to joining relations stored at different sites. In order for this join to be performed, it is necessary for the two relations to use comparable keys. For instance, consider two relations M and E, where each tuple in M encodes a single person's medical history, and each tuple in E encodes data pertaining to a single employee of some large company. Joining these relations is feasible if M and E both use social security numbers as keys. However, if E uses some entirely different identifier (say an employee number), then the join cannot be carried out, and there is no known way of aligning the tuples in E with those in M. To take another example, the [0014] relations Q 101 and R 102 of FIG. 1 could not be joined unless they both contained a similar field, such as the MovieID field (column 104.)
  • In practice, the presence of incomparable key fields is often a problem in merging relations that are maintained by different organizations. A collection of relations that are maintained separately is called heterogeneous. The problem of providing access to a collection of heterogeneous relations is called data integration. The process of finding pairs of keys that are likely to be equivalent is called key matching. [0015]
  • Techniques are known for coping with some sorts of key mismatches that arise in accessing heterogeneous databases. One technique is to normalize the keys. For instance, in the [0016] relations Q 101 and R 102 in FIG. 1, suppose that numeric MovieID's are not available, and it is desirable to join Q 101 and R 102 on strings that contain the name of the movie, specifically, the MovieName field (the column indicated by 106) of Q 101, and the underlined section of the Review field (the column indicated by 107) of R 102. One might normalize these strings by removing all parenthesized text (which contains actor's names in Q 101, and a rating in R 102).
  • A data integration system based on normalization of keys is described in [0017] Querying Heterogeneous Information Sources Using Source Descriptions, by Alon Y. Levy, Anand Rajaraman, and Joann J. Ordille, in {Proceedings of the 22nd International Conference on Very Large Databases (VLDB-96)}, Bombay, India, September 1996.
  • Another known technique for handling key mismatches is to use an equality predicate, a function which, when called with arguments Key1 and Key2, indicates if Key1 and Key2 should be considered equivalent for the purpose of a join. Generally such a function is of limited applicability because it is appropriate only for a small number of pairs of columns in a specific database. The use of equality tests is described in the Identification and Resolution of Semantic Heterogeneity in Multidatabase Systems, by Douglas Fang, Joachim Hammer, and Dennis McLeod, in Multidatabase Systems: An Advanced Solution for Global Information Sharing, pages 52-60. IEEE Computer Society Press, Los Alamitos, Calif., 1994. Both normalization and equality predicates are potentially expensive in terms of human effort: for every new type of key field, a new equality predicate or normalization procedure must be written by a human programmer. [0018]
  • It is often the case that the keys to be matched are strings that name certain real-world entities. (In our example, for instance, they are the names of movies.) Techniques are known for examining pairs of names and assessing the probability that they refer to the same entity. Once this has been done, then a human can make a decision about what pairs of names should be considered equal for all subsequent queries that require key matching. Such techniques are described in [0019] Record Linkage Techniques—1985, edited by B. Kilss and W. Alvey, Statistics of Income Division, Internal Revenue Service Publication 1299-2-96, available from {http://www.bts.gov/fcsm/methodology/}, 1985, as well as in the Merge/purge Problem for Large Databases, by M. Hernandez and S. Stolfo, in Proceedings of the 1995 ACM SIGMOD, May 1995, and Heuristic Joins to Integrate Structured Heterogeneous Data, by Scott Huffman and David Steier, in Working Notes of the AAAI Spring Symposium on Information Gathering In Heterogeneous Distributed Environments, Palo Alto, California, March 1995, AAAI Press.
  • Many of these techniques require information about the types of objects that are being named. For instance, Soundex is often used to match surnames. An exception to this is the use of the Smith-Waterman edit distance, which provides a general similarity metric for any pairs of strings. The use of the Smith-Waterman edit distance metric for key matching is described in An Efficient Domain-independent Algorithm for Detecting Approximately Duplicate Database Records by A. Monge and C. Elkan, in the Proceedings of the SIGMOD 1997 Workshop on Data Mining and Knowledge Discovery, May 1997. [0020]
  • It is also known how to use term-based similarity functions, closely related to IR similarity metrics, for key matching. Use of term-based similarity metrics for key matching, as an alternative to Smith-Waterman, is described in [0021] the Field-matching Problem: Algorithm and Applications by A. Monge and C. Elkan in Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, August 1996.
  • In summary, known methods require that data from heterogeneous sources be preprocessed in some manner. In particular, the data fields that will be used as keys must be normalized, using a domain-specific procedure, or a domain-specific equality test must be written, or a determination as to which keys are in fact matches must be made by a user, perhaps guided by some previously computed assessment of the probability that each pair of keys matches. [0022]
  • All of these known procedures require human intervention, potentially for each pair of data sources. Furthermore, all of these procedures are prone to error. Errors in the process of determining which keys match will lead to incorrect answers to queries to the resulting database. [0023]
  • What is needed is a way of accessing data from many heterogeneous sources without any preprocessing steps that must be guided by a human. Furthermore, when pairs of keys from different sources are assumed to match, the end user should be alerted to these assumptions, and provided with some estimate of the likelihood that the assumptions are correct, or other information with which the end user can assess the quality of the result. [0024]
  • SUMMARY OF THE INVENTION
  • An embodiment of the present invention accesses information stored in heterogeneous databases by using probabilistic database analysis techniques to answer database queries. The embodiment uses uncertain information about possible key matches obtained by using general-purpose similarity metrics to assess the probability that pairs of keys from different databases match. This advantageously allows a user to access heterogeneous sources of information without requiring any preprocessing steps that must be guided by a human. Furthermore, when pairs of keys from different sources are assumed to match, the user is apprised of these assumptions, and provided with some estimate of the likelihood that the assumptions are correct. This likelihood information can help the user to assess the quality of the answer to the user's query. [0025]
  • Data from heterogeneous databases is collected and stored in relations. In one embodiment, the data items in these relations that will be used as keys are represented as text. A query is received by a database system. This query can pertain to any subset of the relations collected from the heterogeneous databases mentioned above. The query may also specify data items from these relations that must or should refer to the same entity. [0026]
  • A set of answer tuples is computed by the database system. These tuples are those that are determined in accordance with the present invention to be most likely to satisfy the user's query. A tuple is viewed as likely to satisfy the query if those data items that should refer to the same entity (according to the query) are judged to have a high probability of referring to the same entity. The probability that two data items refer to the same entity is determined using problem-independent similarity metrics that advantageously do not require active human intervention to formulate for any particular problem. [0027]
  • In computing the join of two relations, each of size N, N² pairs of keys must be considered. Hence, for moderately large N, it is impractical to compute a similarity metric (and store the result) for each pair. An embodiment of the present invention advantageously solves this problem by computing similarities between pairs of keys at the time a query is considered, and computing similarities between only those pairs of keys that are likely to be highly similar. [0028]
  • In some cases, many pairs of keys will be weakly similar, and hence will have some small probability of referring to the same entity. Thus, the answer to a query could consist of a small number of tuples with a high probability of being correct answers, and a huge number of tuples with a small but non-zero probability of being correct answers. Known probabilistic database methods would disadvantageously generate all answer tuples with non-zero probability, which often would be an impractically large set. The present invention advantageously solves this problem by computing and returning to the user only a relatively small set of tuples that are most likely to be correct answers, rather than all tuples that could possibly be correct answers. [0029]
  • In one embodiment of the present invention, the answer tuples are returned to the user in the order of their computed likelihood of being correct answers, i.e., the tuples judged to be most likely to be correct are presented first, and the tuples judged less likely to be correct are presented later. [0030]
  • In accordance with one embodiment of the present invention, queries concerning information stored in a set of collections are answered. Each collection includes a structured entity. Each structured entity in turn includes a field. [0031]
  • In accordance with an embodiment of the present invention, a query is received that specifies a subset of the set of collections and a logical constraint between fields that includes a requirement that a first field match a second field. The probability that the first field matches the second field based upon the contents of the fields is automatically determined. A collection of lists is generated in response to the query, where each list includes members of the subset of collections specified in the query. Each list also has an estimate of the probability that the members of the list satisfy the logical constraint specified in the query. [0032]
  • The present invention advantageously combines probabilistic database techniques with probabilistic assessments of similarity to provide a means for automatically and efficiently accessing heterogeneous data sources without the need for human intervention in identifying similar keys. [0033]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a prior art example of two relations Q and R and a join of relations Q and R. [0034]
  • FIG. 2 shows an embodiment of a system and apparatus in accordance with the present invention. [0035]
  • FIG. 3 shows a table of relations upon which experiments were performed to determine properties of the present invention. [0036]
  • DETAILED DESCRIPTION
  • An embodiment of an apparatus and system in accordance with the present invention is shown in FIG. 2. A search server 201, user 202, and database server A 203, database server B 204 and database server C 205 are coupled to network 206. Heterogeneous databases U 207, V 208 and W 209 are coupled to database server A 203. Heterogeneous databases X 210 and Y 211 are coupled to database server B 204. Heterogeneous database Z 212 is coupled to database server C 205. User 202 submits a query to search server 201. Search server 201 conducts a search of heterogeneous databases U 207, V 208, W 209, X 210, Y 211 and Z 212 in an automatic fashion in accordance with the method of the present invention. [0037]
  • As shown in FIG. 2, [0038] search server 201 includes processor 213 and memory 214 that stores search instructions 215 adapted to be executed on processor 213. In one embodiment of the present invention, processor 213 is a general purpose microprocessor, such as the Pentium II processor manufactured by the Intel Corporation of Santa Clara, Calif. In another embodiment, processor 213 is an Application Specific Integrated Circuit (ASIC) that embodies at least part of the search instructions 215, while the rest are stored at memory 214. In various embodiments of the present invention, memory 214 is a hard disk, read-only memory (ROM), random access memory (RAM), flash memory, or any combination thereof. Memory 214 is meant to encompass any medium capable of storing digital data. As shown in FIG. 2, memory 214 is coupled to processor 213.
  • One embodiment of the present invention is a medium that stores search instructions. As used herein, the phrase “adapted to be executed” is meant to encompass instructions stored in a compressed and/or encrypted format, as well as instructions that have to be compiled or installed by an installer before being executed by [0039] processor 213.
  • In one embodiment, the search server further comprises a [0040] port 216 adapted to be coupled to a network 206. The port is coupled to memory 214 and processor 213.
  • In one embodiment, [0041] network 206 is the Internet. In another embodiment, it is a Local Area Network (LAN). In yet another embodiment, it is a Wide Area Network (WAN). In accordance with the present invention, network 206 is meant to encompass any switched means by which one computer communicates with another.
  • In one embodiment, the user is a personal computer. In one embodiment, database servers A [0042] 203, B 204 and C 205 are computers, adapted to act as interfaces between a network 206 and databases. In one embodiment the database servers 203, 204 and 205 are server computers. In another embodiment, they act as peer computers.
  • As discussed above, many databases contain many fields in which the individual constants correspond to entities in the real world. Examples of such name domains include course numbers, personal names, company names, movie names, and place names. In general, the mapping from name constants to real entities can differ in subtle ways from database to database, making it difficult to determine if two constants are co-referent (i.e., refer to the same entity). [0043]
  • For instance, in two Web databases listing educational software companies, one finds the name constants “Microsoft” and “Microsoft Kids.” Do these denote the same company, or not? In another pair of Web sources, the names “Kestrel” and “American Kestrel” appear. Likewise, it is unclear as to whether these denote the same type of bird. Other examples of this problem include “MIT” and “MIT Media Labs”; and “AT&T Bell Labs,” “AT&T Labs”, “AT&T Labs—Research,” “AT&T Research,” “Bell Labs,” and “Bell Telephone Labs.” [0044]
  • As can be seen from the above examples, determining if two name constants are co-referent is far from trivial in many real-world data sources. Frequently it requires detailed knowledge of the world, the purpose of the user's query, or both. These generally necessitate human intervention in preprocessing or otherwise handling a user query. [0045]
  • Unfortunately, answering most database queries requires understanding which names in a database are coreferent. Two phrases are coreferent if each refers to the same or approximately the same external entity. An external entity is an entity in the real world to which a phrase refers. For example, Microsoft and Microsoft, Inc. are two phrases that are coreferent in the sense that they refer to the same company. As used herein, the term “phrase” means any fragment of text down to a single character, e.g., a word, a collection of words, a letter, several letters, a number, a punctuation mark or set of punctuation marks, etc. [0046]
  • This requirement of understanding which names in a database are coreferent poses certain problems. For example, to join two databases on Company_name fields, where the values of the company names are Microsoft and Microsoft Kids, one must know in advance if these two names are meant to refer to the same company. This suggests extending database systems to represent the names explicitly so as to compute the probability that two names are coreferent. This in turn requires that the database includes an appropriate way of representing text (phrases). [0047]
  • One widely used method for representing text, briefly described above, is the vector space model. Assume a vocabulary T of terms, each of which will be treated as atomic, i.e., unbreakable. Terms can include words, phrases, or word stems, which are morphologically derived word prefixes. A fragment of text is represented as a DocumentVector, which is a vector of real numbers v ∈ R|T|, each component of which corresponds to a term t ∈ T. The component of v which corresponds to t ∈ T is denoted vt. [0048]
  • A number of schemes have been proposed for assigning weights to terms, as discussed above. An embodiment of the present invention uses the TF-IDF weighting scheme with unit length normalization. Assuming that the document represented by v is a member of a document collection C, define v̂t to have the value zero if t is not present in the document represented by v, and otherwise the value v̂t = (log(TFv,t) + 1) · log(IDFt), where the “term frequency” TFv,t is the number of times that term t occurs in the document represented by v, and the inverse document frequency IDFt is |C| / |Ct|, where Ct is the subset of documents in C that contain the term t. [0049]
  • This vector is then normalized to unit length, leading to the following weight for vt: vt = v̂t / sqrt( Σt′∈T (v̂t′)² ) [0050]
  • The "similarity" of two document vectors v and w is given by the formula $\mathrm{sim}(v, w) = \sum_{t \in T} v_t \cdot w_t$, [0051]
  • which is usually interpreted as the cosine of the angle between v and w. Since every document vector v has unit length, sim (v, w) is always between zero and one. [0052]
  • Although these vectors are conceptually very long, they are also very sparse: if a document contains only k terms, then all but k components of its vector representation will have zero weight. Methods for efficiently manipulating these sparse vectors are known. The vector space representation for documents is described in Automatic Text Processing, by Gerard Salton, Addison-Wesley, Reading, Mass., 1989. [0053]
  • The general idea behind this scheme is that the magnitude of the component v_t is related to the "importance" of the term t in the document represented by v. In accordance with the present invention, two documents are similar when they share many "important" terms. The TF-IDF weighting scheme assigns higher weights to terms that occur infrequently in the collection C. The weighting scheme also gives higher weights to terms that occur frequently in a document. However, in this context, this heuristic is probably not that important, since names are usually short enough so that each term occurs only once. In a collection of company names, for instance, common terms like "Inc." and "Ltd." would have low weights, uniquely appearing terms like "Lucent" and "Microsoft" would have high weights, and terms of intermediate frequency like "Acme" and "American" would have intermediate weights. [0054]
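  • For illustration only, the following Python sketch shows one way the TF-IDF weighting with unit-length normalization and the cosine similarity described above might be computed for a small collection of names; the function names tfidf_vectors and cosine, and the sample data, are invented for this example and are not taken from the WHIRL implementation.

      import math
      from collections import Counter

      def tfidf_vectors(docs):
          """Build unit-length TF-IDF vectors for a collection of token lists."""
          n = len(docs)
          df = Counter(t for doc in docs for t in set(doc))   # document frequency |C_t|
          vectors = []
          for doc in docs:
              tf = Counter(doc)
              # (log(TF) + 1) * log(|C| / |C_t|), as in the weighting scheme above
              v = {t: (math.log(tf[t]) + 1.0) * math.log(n / df[t]) for t in tf}
              norm = math.sqrt(sum(w * w for w in v.values()))
              vectors.append({t: w / norm for t, w in v.items()} if norm > 0 else v)
          return vectors

      def cosine(v, w):
          """sim(v, w) = sum over terms of v_t * w_t for unit-length sparse vectors."""
          return sum(weight * w.get(t, 0.0) for t, weight in v.items())

      names = [["lucent", "technologies", "inc"],
               ["microsoft", "inc"],
               ["microsoft", "kids", "inc"]]
      vecs = tfidf_vectors(names)
      print(cosine(vecs[1], vecs[2]))   # similarity of "Microsoft Inc" and "Microsoft Kids Inc"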
  • The present invention operates on data that is stored in relations, where the primitive elements of each relation are document vectors, rather than atoms. This data model is called STIR, which stands for Simple Texts In Relations. The term "simple" indicates that no additional structure is assumed for the texts. [0055]
  • More precisely, an extensional database (EDB) consists of a term vocabulary T and a set of relations {p_1, . . . , p_n}. Associated with each relation p is a set of tuples called tuples(p). Every tuple (v_1, . . . , v_k) ∈ tuples(p) has exactly k components, and each of these components v_i is a document vector. It is also assumed that a score is associated with every tuple in p. This score will always be between zero and one, and will be denoted score((v_1, . . . , v_k) ∈ tuples(p)). In most applications, the score of every tuple in a base relation will be one; however, in certain embodiments, non-unit scores can occur. This allows materialized views to be stored. [0056]
  • An embodiment of a language for accessing these relations in accordance with the present invention is called WHIRL, which stands for Word-based Heterogeneous Information Retrieval Logic. A conjunctive WHIRL query is written B_1 ∧ . . . ∧ B_k, where each B_i is a literal. There are two types of literals. An EDB literal is written p(X_1, . . . , X_k), where p is the name of an EDB relation and the X_i's are variables. A similarity literal is written X~Y, where X and Y are variables; intuitively, this can be interpreted as a requirement that documents X and Y be similar. If X appears in a similarity literal in a query Q, then X must also appear in some EDB literal in Q. [0057]
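  • One possible in-memory representation of such an EDB and of the two kinds of literals is sketched below in Python; the class names Relation, EDBLiteral and SimilarityLiteral are illustrative choices made for this example rather than structures mandated by the invention.

      from dataclasses import dataclass, field
      from typing import Dict, List, Tuple

      # A document vector is a sparse mapping from terms to TF-IDF weights.
      DocVector = Dict[str, float]

      @dataclass
      class Relation:
          """An EDB relation: scored tuples whose components are document vectors."""
          name: str
          arity: int
          tuples: List[Tuple[Tuple[DocVector, ...], float]] = field(default_factory=list)

          def add(self, components: Tuple[DocVector, ...], score: float = 1.0):
              assert len(components) == self.arity and 0.0 <= score <= 1.0
              self.tuples.append((components, score))

      @dataclass
      class EDBLiteral:          # p(X1, ..., Xk)
          relation: str
          variables: Tuple[str, ...]

      @dataclass
      class SimilarityLiteral:   # X ~ Y
          x: str
          y: str

      r = Relation("r", 2)
      r.add(({"armadillos": 0.9, "inc": 0.1}, {"software": 1.0}))  # score defaults to 1.0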
  • To take another example, consider two relations R and S, where tuples of R contain a company name and a brief description of the industry associated with that company, and tuples of S contain a company name and the location of the World Wide Web homepage for that company. The join of the relations R and S might be approximated by the query: [0058]
  • Q1: r(Company1,Industry) ∧ s(Company2,WebSite) ∧ Company1~Company2
  • This is different from an equijoin of R and S, which could be written: [0059]
  • r(Company,Industry) ∧ s(Company,WebSite).
  • To find Web sites for companies in the telecommunications industry one might use the query: [0060]
  • Q2: r(Company1,Industry) ∧ s(Company2,WebSite) ∧ Company1~Company2 ∧ const1(IO) ∧ Industry~IO
  • where the relation const1 contains a single document describing the industry of interest, such as "telecommunications equipment and/or services". [0061]
  • The semantics of WHIRL are defined in part by extending the notion of score to single literals, and then to conjunctions. The semantics of WHIRL are best described in terms of substitutions. A substitution θ is a mapping from variables to document vectors. A substitution is denoted as θ = {X_1 = v_1, . . . , X_n = v_n}, where each X_i is mapped to the vector v_i. The variables X_i in the substitution are said to be "bound" by θ. If Q is a WHIRL query (or a literal or variable) then Qθ denotes the result of applying that mapping to Q, i.e., the result of taking Q and replacing every variable X_i appearing in Q with the corresponding document vector v_i. A substitution θ is "ground for Q" if Qθ contains no variables. [0062]
  • Suppose B is a literal, and θ is a substitution such that Bθ is ground. If B is an EDB literal p(X_1, . . . , X_k), then score(Bθ) = score((X_1θ, . . . , X_kθ) ∈ tuples(p)) if (X_1θ, . . . , X_kθ) ∈ tuples(p), and score(Bθ) = 0 otherwise. If B is a similarity literal X~Y, then score(Bθ) = sim(Xθ, Yθ). [0063]
  • If Q = B_1 ∧ . . . ∧ B_k is a query and Qθ is ground, then define score(Qθ) = $\prod_{i=1}^{k}\mathrm{score}(B_i\theta)$. In other words, conjunctive queries are scored by combining the scores of literals as if they were independent probabilities. [0064]
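  • As a small illustrative sketch (not taken from the WHIRL implementation), the product combination of literal scores can be written as follows; the literal scores passed in are assumed to have been obtained from the EDB lookups and similarity computations described above.

      def score_query(literal_scores):
          """Score of a ground conjunctive query: the product of its literal scores,
          combining them as if they were independent probabilities."""
          result = 1.0
          for s in literal_scores:
              result *= s
          return result

      # For query Q1 above: r(...) and s(...) contribute their stored tuple scores
      # (usually 1.0) and Company1~Company2 contributes the cosine similarity.
      print(score_query([1.0, 1.0, 0.9]))   # 0.9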
  • Recall that the answer to a conventional conjunctive query is the set of ground substitutions that make the query "true," i.e., provable against the EDB. In WHIRL, the notion of provability has been replaced with the "soft" notion of score: substitutions with a high score are intended to be better answers than those with a low score. It seems reasonable to assume that users will be most interested in seeing the high-scoring substitutions, and will be less interested in the low-scoring substitutions. This is formalized as follows: Given an EDB, the "full answer set" S_Q for a conjunctive query Q is defined to be the set of all θ such that Qθ is ground and has a non-zero score. An r-answer R_Q for a conjunctive query Q is defined to be an ordered list of substitutions θ_1, . . . , θ_r from the full answer set such that: [0065]
  • for all θ_i ∈ R_Q and σ ∈ S_Q − R_Q, score(Qθ_i) ≥ score(Qσ); and [0066]
  • for all θ_i, θ_j ∈ R_Q where i < j, score(Qθ_i) ≥ score(Qθ_j). [0067]
  • In other words, R_Q contains the r highest-scoring substitutions, ordered by non-increasing score. [0068]
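  • Conceptually, an r-answer is simply the r best-scoring substitutions; the short sketch below only illustrates the definition (in practice WHIRL never materializes the full answer set), and the helper name r_answer is invented here.

      import heapq

      def r_answer(full_answer_set, r):
          """Return the r highest-scoring (substitution, score) pairs,
          ordered by non-increasing score."""
          return heapq.nlargest(r, full_answer_set, key=lambda pair: pair[1])

      answers = [({"X": "Microsoft"}, 0.9), ({"X": "Microsoft Kids"}, 0.4), ({"X": "3Com"}, 0.7)]
      print(r_answer(answers, 2))   # the two best-scoring substitutions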
  • It is assumed that the output of a query-answering algorithm given the query Q will not be a full answer set, but rather an r-answer for Q, where r is a parameter fixed by the user. To understand the notion of an r-answer, observe that in typical situations the full answer set for WHIRL queries will be very large. For example, the full answer set for the query Q1 given as an example above would include all pairs of company names Company1, Company2 that both contain the term "Inc." This set might be very large. [0069]
  • Indeed, if it is assumed that a fixed fraction 1/k of company names contain the term "Inc.", and that R and S each contain a random selection of n company names, then one would expect the full answer set to contain (n/k)^2 substitutions simply due to the matches on the term "Inc." [0070]
  • Further, the full answer set for the join of m relations of this sort would be of size at least (n/k)^m. [0071]
  • To further illustrate this point, I computed the pairwise similarities of two lists R and S of company names, with R containing 1163 names and S containing 976 names. These lists are the relations Hoovers Web 301 and Iontech 302 shown in FIG. 3. Although the intersection of R and S appears to contain only about 112 companies, over 314,000 name pairs had non-zero similarity. In this case, the number of non-zero similarities can be greatly reduced by discarding a few very frequent terms like "Inc." However, even after this preprocessing, there are more than 19,000 non-zero pairwise similarities, which is more than 170 times the number of correct pairings. This is due to a large number of moderately frequent terms (like "American" and "Airlines") that cannot be safely discarded. Thus, it is in general impractical to compute full answer sets for complex queries and present them to a user. This leads to the assumption of an r-answer, which advantageously simplifies the results provided in accordance with the present invention. [0072]
  • The scoring scheme given above for conjunctive queries can be fairly easily extended to certain more expressive languages in accordance with the present invention. Below, I consider such an extension, which corresponds to projections of unions of conjunctive queries. [0073]
  • A "basic WHIRL clause" is written p(X_1, . . . , X_k)←Q, where Q is a conjunctive WHIRL query that contains all of the X_i's. A "basic WHIRL view υ" is a set of basic WHIRL clauses with heads that have the same predicate symbol p and arity k. Notice that by this definition, all the literals in a clause body are either EDB literals or similarity literals. In other words, the view is flat, involving only extensionally defined predicates. [0074]
  • Now, consider a ground instance a = p(x_1, . . . , x_k) of the head of some view clause. The "support of a" (relative to the view υ and a given EDB) is defined to be the following set of triples: [0075]
  • support(a) = {(A←Q, θ, s) : (A←Q) ∈ υ and Aθ = a and score(Qθ) = s and s > 0}. The score of (x_1, . . . , x_k) in p is defined as follows: [0076]
  • $\mathrm{score}((x_1,\ldots,x_k)\in p)=1-\prod_{(C,\theta,s)\in\mathrm{support}(p(x_1,\ldots,x_k))}(1-s)$   (Equation 1)
  • To understand this formula, note that it is in some sense a dual of multiplication: if e_1 and e_2 are independent probabilistic events with probabilities p_1 and p_2 respectively, then the probability of (e_1 ∧ e_2) is p_1·p_2, and the probability of (e_1 ∨ e_2) is 1−(1−p_1)(1−p_2). The "materialization of the view υ" is defined to be a relation with name p which contains all tuples (x_1, . . . , x_k) such that score((x_1, . . . , x_k) ∈ p) > 0. [0077]
  • Unfortunately, while this definition is natural, there is a difficulty with using it in practice. In a conventional setting, it is easy to materialize a view of this sort, given a mechanism for solving a conjunctive query. In WHIRL, one would prefer to assume only a mechanism for computing r-answers to conjunctive queries. However, since Equation (1) involves a support set of unbounded size, it appears that r-answers are not enough to even score a single ground instance a. [0078]
  • Fortunately, however, low-scoring substitutions have only a minimal impact on the score of a. Specifically, if (C,θ,s) is such that s is close to zero, then the corresponding factor of (1−s) in the score for a is close to one. One can thus approximate the score of Equation (1) using a smaller set of high-scoring substitutions, such as those found in an r-answer for moderately large r. [0079]
  • In particular, let υ contain the clauses A_1←Q_1, . . . , A_n←Q_n, let R_Q1, . . . , R_Qn be r-answers for the Q_i's, and let R = ∪_i R_Qi. Now define the "r-support for a from R" to be the set: [0080]
  • {(A←Q, θ, s) : (A←Q, θ, s) ∈ support(a) and θ ∈ R}
  • Also define the r-score for a from R by replacing support(a) in Equation (1) with the r-support set for a. Finally, define the "r-materialization of υ from R" to contain all tuples with non-zero r-score, with the score of (x_1, . . . , x_k) in p being its r-score from R. [0081]
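  • The combination rule of Equation (1), and the reason low-scoring supports can safely be dropped when computing an r-score, can be illustrated with the following sketch; the function name r_score is invented for this example.

      def r_score(support_scores):
          """Combine the scores s of the supporting substitutions for a ground
          instance as in Equation (1): 1 - product of (1 - s)."""
          remainder = 1.0
          for s in support_scores:
              remainder *= (1.0 - s)
          return 1.0 - remainder

      print(r_score([0.9, 0.5]))    # 0.95
      print(r_score([0.9, 0.01]))   # ~0.901: a low-scoring support barely changes the result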
  • Clearly, the r-materialization of a view can be constructed using only an r-answer for each clause body involved in the view. As r is increased, the r-answers will include more and more high-scoring substitutions, and the r-materialization will become a better and better approximation to the full materialized view. Thus, given an efficient mechanism for computing r-answers for conjunctive views, one can efficiently approximate the answers to more complex queries. [0082]
  • One embodiment of WHIRL implements the operations of finding the r-answer to a query and the r-materialization of a view. As noted above, r-materialization of a view can be implemented easily given a routine for constructing r-answers. First, however, I will give a short overview of the main ideas used in the process. [0083]
  • In an embodiment of WHIRL, finding an r-answer is viewed as an optimization problem. In particular, the query processing algorithm uses a general method called A* search to find the highest-scoring r substitutions for a query. The A* search method is described in Principles of Artificial Intelligence, by Nils Nilsson, Morgan Kaufmann, 1987. Viewing query processing as search is natural, given that the goal is to find a small number of good substitutions, rather than all satisfying substitutions. The search method of one embodiment also generalizes certain techniques used in IR ranked retrieval. However, using search in query processing is unusual for database systems, which more typically use search only in optimizing a query. [0084]
  • To understand the use of search, consider finding an r-answer to the WHIRL query insiderTip(X) ∧ publiclyTraded(Y) ∧ X~Y, where the relation publiclyTraded is very large, but the relation insiderTip is very small. In processing the corresponding equijoin insiderTip(X) ∧ publiclyTraded(Y) ∧ X=Y with a known database system, one would first construct a query plan. [0085]
  • For example, one might first find all bindings for X, and then use an index to find all values Y in the first column of publiclyTraded that are equivalent to some X. It is tempting to extend such a query plan to WHIRL, by simply changing the second step to find all values Y that are similar to some X. However, this natural extension can be quite inefficient. Imagine that insiderTip contains the vector x_1, corresponding to the document "Armadillos, Inc." Due to the frequent occurrence of the term "Inc.", there will be many documents Y that have non-zero similarity to x_1, and it will be expensive to retrieve all of these documents Y and compute their similarity to x_1. One way of avoiding this expense is to start by retrieving a small number of documents Y that are likely to be highly similar to x_1. In this case, one might use an index to find all Y's that contain the rare term "Armadillos." Since "Armadillos" is rare, this step will be inexpensive, and the Y's retrieved in this step must be somewhat similar to x_1. Recall that the weight of a term depends inversely on its frequency, so rare terms have high weight, and hence these Y's will share at least one high-weight term with X. Conversely, any Y's not retrieved in this step must be somewhat dissimilar to x_1, since such a Y cannot share the high-weight term "Armadillos." This suggests that if r is small, and an appropriate pruning method is used, a subtask like "find the r documents Y that are most similar to x_1" might be accomplished efficiently by the subplan of "find all Y's containing the term 'Armadillos'." Of course, this subplan depends on the vector x_1. [0086]
  • To find the Y's most similar to the document "The American Software Company" (in which every term is somewhat frequent), a very different type of subplan might be required. These observations suggest that query processing should proceed in small steps, and that these steps should be scheduled dynamically, in a manner that depends on the specific document vectors being processed. [0087]
  • The query processing method described below searches through a space of partial substitutions. Each substitution is a list of values that could be assigned to some, but not necessarily all, of the variables appearing in the query. For example, one state in the search space for the query given above would correspond to the substitution that maps X to x_1 and leaves Y unbound. Each state in the search space is a "partial list" of possible variable bindings. As used herein, a "partial list" (of possible variable bindings) can include bindings to all variables in the query, or bindings to some subset of those variables, including the empty set. The steps taken through this search space are small ones, as suggested by the discussion above. For instance, one operation is to select a single term t and use an inverted index to find plausible bindings for a single unbound variable. Finally, the search algorithm orders these operations dynamically, focusing on those partial substitutions that seem to be most promising, and effectively pruning partial substitutions that cannot lead to a high-scoring ground substitution. [0088]
  • A* search is a graph search method which attempts to find the highest scoring path between a given start state so and a goal state. A pseudo-code embodiment of A* search as used in an embodiment of the present invention is as, follows: [0089]
  • procedure A* (r s[0090] 0, goalState (.), children(.))
  • Begin [0091]
  • OPEN={s[0092] 0}
  • while (OPEN≠Ø) do [0093]
  • s:=argmax, [0094] εOPEN h(s′)
  • OPEN:=OPEN−{s} [0095]
  • If goalState(s) then [0096]
  • output <s, h (s)> [0097]
  • Exit if r answers printed [0098]
  • else [0099]
  • OPEN:=OPEN U children(s) [0100]
  • endif [0101]
  • endwhile [0102]
  • end [0103]
  • Initial state s[0104] 0: <Ø, Ø>
  • goalState (<Ø, E>): true iff Q Ø is ground [0105]
  • children (<Ø, E>): [0106]
  • if constrain (<Ø, E>)≠Ø then return constrain (<Ø, E>) [0107]
  • else return explode (<Ø, E>) [0108]
  • constrain (<Ø, E>): [0109]
  • 1. pick X, Y, t where [0110]
  • Xθ=x, [0111]
  • Y is unbound in θ with generator p and generation index l (see text) [0112]
  • x[0113] t- maxweight (t, p, l) is maximal over all such X, Y, t combinations
  • 2. If no such X, Y, t exists then return Ø [0114]
  • 3. return {<Ø, E′>): U {Ø[0115] 1, E>, . . . , <Øn, E>}
  • where E′=E U {t, Y>}, and [0116]
  • each θ; is θU {Y[0117] 1=v1, . . . , Yk=vk} for some <v1, . . . vk>ε index (t, p, l) and
  • θ[0118] 1 is E-valid.
  • explode (<θ, E>): [0119]
  • pick p (Y[0120] 1, . . . ,Yk) such all Yi's are unbound by θ
  • return the set of all (θ U {Y[0121] 1=v1, . . . , Yk=vk}, E>
  • such that (v[0122] i, . . . , vk>ε tuples (p) and θU {Y1=v1, . . . , Yk=vk} is E-valid.
  • h<<θ, E>): Π([0123] i=1 h′(Bi,θ) where
  • h′(B[0124] i θ)=score (Bi θ) for ground Bi θ
  • h′((X˜Y) θ)= [0125]
  • Σ[0126] T εT: (t,Y)gExt.maxweight(t, p, l)
  • where Xθ=x, Y is unbound index l (see text) [0127]
  • generator p and generation index l (see text) [0128]
  • As can be seen in the above pseudo-code, goal states are defined by a goalState predicate. The graph being searched is defined by a function children(s), which returns the set of states directly reachable from state s. To conduct the search, the A* algorithm maintains a set OPEN of states that might lie on a path to some goal state. Initially OPEN contains only the start state s_0. [0129]
  • At each subsequent step of the algorithm, a single state is removed from the OPEN set; in particular, the state s that is “best” according to a heuristic function, h(s), is removed from OPEN. If s is a goal state, then this state is output; otherwise, all children of s are added to the OPEN set. The search continues until r goal states have been output, or the search space is exhausted. [0130]
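  • The generic best-first loop in the pseudo-code above can be realized with a priority queue, as in the following Python sketch; this shows only the generic A* skeleton with an invented toy problem, not the WHIRL-specific children(·) and h(·) functions, which are described next.

      import heapq
      import itertools

      def a_star(r, s0, goal_state, children, h):
          """Repeatedly remove the OPEN state with the highest h value, outputting
          goal states until r answers have been found or OPEN is exhausted."""
          tie = itertools.count()                 # tie-breaker so states are never compared
          open_heap = [(-h(s0), next(tie), s0)]   # max-heap via negated h
          answers = []
          while open_heap and len(answers) < r:
              neg_h, _, s = heapq.heappop(open_heap)
              if goal_state(s):
                  answers.append((s, -neg_h))
              else:
                  for child in children(s):
                      heapq.heappush(open_heap, (-h(child), next(tie), child))
          return answers

      # Toy usage only: states are integers, children add 1 or 2, h prefers larger states.
      print(a_star(2, 0, lambda s: s >= 3, lambda s: [s + 1, s + 2], lambda s: s))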
  • I will now explain how this general search method has been instantiated in WHIRL in accordance with an embodiment of the present invention. I will assume that in the query Q, each variable in Q appears exactly once in an EDB literal. In other words, the variables in EDB literals are distinct from each other, and also distinct from variables appearing in other EDB literals, and both variables appearing in a similarity literal also appear in some EDB literal. (This restriction is made innocuous by an additional predicate eq(X,Y) which is true when X and Y are bound to the same document vector. The implementation of the eq predicate is straightforward and known in the art, and will be ignored in the discussion below.) In processing queries, the following data structures will be used. An inverted index will map terms t ∈ T to the tuples that contain them: specifically, I assume a function index(t,p,i) which returns the set of tuples (v_1, . . . , v_i, . . . , v_k) in tuples(p) such that the weight of t in v_i is non-zero. This index can be evaluated in linear time (using an appropriate data structure) and precomputed in linear time from the EDB. I also precompute the function maxweight(t,p,i), which returns the maximum weight of the term t over all documents v_i in the i-th column of p. Inverted indices are commonly used in the field of information retrieval, and means of storing and accessing them efficiently are well known to those skilled in the art of information retrieval. The maxweight function is also used in many known techniques for speeding up processing of ranked retrieval queries, such as those described in Turtle and Flood. [0131]
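  • A minimal sketch of how index(t,p,i) and maxweight(t,p,i) might be precomputed is shown below; the dictionary-based layout and the function name build_index are assumptions of this example, not details taken from the patent.

      from collections import defaultdict

      def build_index(relations):
          """Precompute index(t, p, i) and maxweight(t, p, i) from an EDB given as a
          mapping from relation name to a list of tuples of sparse document vectors."""
          index = defaultdict(list)        # (t, p, i) -> tuples whose i-th column contains t
          maxweight = defaultdict(float)   # (t, p, i) -> largest weight of t in that column
          for p, tuples in relations.items():
              for tup in tuples:
                  for i, doc in enumerate(tup):
                      for t, w in doc.items():
                          if w > 0:
                              index[(t, p, i)].append(tup)
                              maxweight[(t, p, i)] = max(maxweight[(t, p, i)], w)
          return index, maxweight

      edb = {"p": [({"armadillos": 0.9, "inc": 0.1},), ({"acme": 0.8, "inc": 0.2},)]}
      index, maxweight = build_index(edb)
      print(len(index[("inc", "p", 0)]), maxweight[("inc", "p", 0)])   # 2 0.2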
  • The states of the graph searched will be pairs ⟨θ, E⟩, where θ is a substitution, and E is a set of exclusions. Goal states will be those for which θ is ground for Q, and the initial state s_0 is ⟨Ø, Ø⟩. An exclusion is a pair ⟨t, Y⟩ where t is a term and Y is a variable. Intuitively, it means that the variable Y must not be bound to a document containing the term t. Formally, I say that a substitution θ is E-valid if ∀⟨t, Y⟩ ∈ E, (Yθ)_t = 0. Below I define the children function so that all descendants of a node ⟨θ, E⟩ must be E-valid; making appropriate use of these exclusions will force the graph defined by the children function to be a tree. [0132]
  • I will adopt the following terminology. Given a substitution θ and query Q, a similarity literal X~Y is constraining if and only if exactly one of Xθ and Yθ is ground. Without loss of generality, I assume that Xθ is ground and Yθ is not. For any variable Y, the EDB literal of Q that contains Y is the generator for Y, and the position l of Y within this literal is Y's generation index. For well-formed queries, there will be only one generator for a variable Y. [0133]
  • Children are generated in two ways: by exploding a state, or by constraining a state. Exploding a state corresponds to picking all possible bindings of some unbound EDB literal. To explode a state s = ⟨θ, E⟩, pick some EDB literal p(Y_1, . . . , Y_k) such that all the Y_i's are unbound by θ, and then construct all states of the form ⟨θ ∪ {Y_1 = v_1, . . . , Y_k = v_k}, E⟩ such that (v_1, . . . , v_k) ∈ tuples(p) and θ ∪ {Y_1 = v_1, . . . , Y_k = v_k} is E-valid. These are the children of s. [0134]
  • The second operation of constraining a state implements a sort of sideways information passing. To constrain a state s = ⟨θ, E⟩, pick some constraining literal X~Y and some term t with non-zero weight in the document Xθ such that ⟨t, Y⟩ ∉ E. Let p(Y_1, . . . , Y_k) be the generator for the (unbound) variable Y, and let l be Y's generation index. Two sets of child states will now be constructed. The first is a singleton set containing the state s′ = ⟨θ, E′⟩, where E′ = E ∪ {⟨t, Y⟩}. Notice that by further constraining s′, other constraining literals and other terms t in Xθ can be used to generate plausible variable bindings. The second set S_t contains all states ⟨θ_i, E⟩ such that θ_i = θ ∪ {Y_1 = v_1, . . . , Y_k = v_k} for some ⟨v_1, . . . , v_k⟩ ∈ index(t,p,l) and θ_i is E-valid. The states in S_t thus correspond to binding Y to some vector containing the term t. The set children(s) is S_t ∪ {s′}. [0135]
  • It is easy to see that if s_i and s_j are two different states in S_t, then their descendants must be disjoint. Furthermore, the descendants of s′ must be disjoint from the descendants of any s_i ∈ S_t, since all descendants of s′ are valid for E′, and none of the descendants of s_i can be valid for E′. Thus the graph generated by this children function is a tree. [0136]
  • Given the operations above, there will typically be many ways to "constrain" or "explode" a state. In the current implementation of WHIRL, a state is always constrained using the pair ⟨t, Y⟩ such that x_t · maxweight(t,p,l) is maximal, where p and l are the generator and generation index for Y. States are exploded only if there are no constraining literals, and then always exploded using the EDB relation containing the fewest tuples. [0137]
  • It remains to define the heuristic function, which, when evaluated, produces a heuristic value. Recall that the heuristic function h(θ,E) must be admissible, and must coincide with the scoring function score(Qθ) on ground substitutions. This implies that h(θ,E) must be an upper bound on score(q) for any ground instance q of Qθ. I thus define h(θ,E) to be $\prod_{i=1}^{k} h'(B_i,\theta,E)$, where h′ is an appropriate upper bound on score(B_iθ). I will let this bound equal score(B_iθ) for ground B_iθ, and let it equal 1 for non-ground B_i, with the exception of constraining literals. For constraining literals, h′(·) is defined as follows: $h'(B_i,\theta,E) = \sum_{t \in T:\, \langle t, Y\rangle \notin E} x_t \cdot \mathrm{maxweight}(t, p, l)$ [0138]
  • where p and l are the generator and generation index for Y. Note that this is an upper bound on the score of B_iσ relative to any ground superset σ of θ that is E-valid. [0139]
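  • The upper bound used for constraining literals, and the product that forms h(θ,E), can be sketched as follows; the function names h_constraining and h_state and the toy data are invented for this illustration.

      def h_constraining(x_doc, exclusions, y_var, maxweight, p, l):
          """Upper bound for a constraining literal X~Y: the sum, over the terms t of
          X's document with <t, Y> not excluded, of x_t * maxweight(t, p, l)."""
          return sum(w * maxweight.get((t, p, l), 0.0)
                     for t, w in x_doc.items()
                     if (t, y_var) not in exclusions)

      def h_state(ground_scores, constraining_bounds):
          """h(theta, E): product of actual scores for ground literals and of the
          bound above for constraining literals (non-ground literals contribute 1)."""
          result = 1.0
          for s in list(ground_scores) + list(constraining_bounds):
              result *= s
          return result

      maxweight = {("armadillos", "q", 0): 0.9, ("inc", "q", 0): 0.2}
      x = {"armadillos": 0.95, "inc": 0.3}
      print(h_state([1.0], [h_constraining(x, set(), "Y", maxweight, "q", 0)]))   # 0.915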
  • In the current implementation of WHIRL, the terms of a document are stems produced by the Porter stemming algorithm. The Porter stemming algorithm is described in "An Algorithm for Suffix Stripping", by M. F. Porter, Program, 14(3):130-137, 1980. In general, the term weights for a document v_i are computed relative to the collection C of all documents appearing in the i-th column of p. However, the TF-IDF weighting scheme does not provide sensible weights for relations that contain only a single tuple. (These relations are used as a means of introducing "constant" documents into a query.) Therefore weights for these relations must be calculated as if they belonged to some other collection C′. [0140]
  • To set these weights, every query is checked before invoking the query algorithm to see if it contains any EDB literals p(X_1, . . . , X_k) for a singleton relation p. If one is found, the weights for the document x_i to which a variable X_i will be bound are computed using the collection of documents found in the column corresponding to Y_i, where Y_i is some variable that appears in a similarity literal with X_i. If several such Y_i's are found, one is chosen arbitrarily. If X_i does not appear in any similarity literals, then its weights are irrelevant to the computation. [0141]
  • The current implementation of WHIRL keeps all indices and document vectors in main memory. [0142]
  • In the following examples of the procedure in accordance with the present invention, it is assumed that terms are words. [0143]
  • Consider the query "const1(IO) ∧ p(Company,Industry) ∧ Industry~IO", where const1 contains the single document "telecommunications services and/or equipment". With θ = Ø, there are no constraining literals, so the first step in answering this query will be to explode the smallest relation, in this case const1. This will produce one child, s1, containing the appropriate binding for IO, which will be placed on the OPEN list. [0144]
  • Next s1 will be removed from the OPEN list. Since Industry~IO is now a constraining literal, a term from the bound variable IO will be picked, probably the relatively rare stem "telecommunications". The inverted index will be used to find all tuples ⟨co_1, ind_1⟩, . . . , ⟨co_n, ind_n⟩ such that ind_i contains the term "telecommunications", and n child substitutions that map Company=co_i and Industry=ind_i will be constructed. Since these substitutions are ground, they will be given h(*) values equal to their actual scores when placed on the OPEN list. A new state s′1 containing the exclusion ⟨telecommunications, Industry⟩ will also be placed on the OPEN list. Note that h(s′1) < h(s1), since the best possible score for the constraining literal Industry~IO can match at most only four terms: "services", "and", "or", "equipment", all of which are relatively frequent, and hence have low weight. [0145]
  • Next, a state will again be removed from the OPEN list. It may be that h(s′1) is less than the h(*) value of the best goal state; in this case, a ground substitution will be removed from OPEN, and an answer will be output. Or it may be that h(s′1) is higher than that of the best goal state, in which case s′1 will be removed and a new term, perhaps "equipment", will be used to generate some additional ground substitutions. These will be added to the OPEN list, along with a state which has a larger exclusion set and thus a lower h(*) value. [0146]
  • This process will continue until r answers have been generated. Note that it is quite likely that low-weight terms such as "or" will not be used at all. [0147]
  • In another example of the present invention, consider the query [0148]
  • p(Company1,Industry) ∧ q(Company2,WebSite) ∧ Company1~Company2
  • In solving this query, the first step will be to explode the smaller of these relations. Assume that this is p, and that p contains 1000 tuples. This will add 1000 states s1, . . . , s1000 to the OPEN list. In each of these states, Company1 and Industry are bound, and Company1~Company2 is a constraining literal. Thus each of these 1000 states is analogous to the state s1 in the preceding example. [0149]
  • However, the h(*) values for the states s1, . . . , s1000 will not be equal. The value of the state s_i associated with the substitution θ_i will depend on the maximum possible score for the literal Company1~Company2, and this will be large only if the high-weight terms in the document Company1θ_i appear in the company field of q. As an example, a one-word document like "3Com" will have a high h(*) value if that term appears (infrequently) in the company field of q, and a zero h(*) value if it does not appear; similarly, a document like "Agents, Inc." will have a low h(*) value if the term "agents" does not appear in the first column of q. [0150]
  • The result is that the next step of the algorithm will be to choose a promising state from the OPEN list, a state that could result in a good final score. A term from the Company1 document in that state, e.g., "3Com", will then be picked and used to generate bindings for Company2 and WebSite. If any of these bindings results in a perfect match, then an answer can be generated on the next iteration of the algorithm. [0151]
  • In short, the operation of WHIRL is somewhat similar to time-sharing 1000 simpler queries on a machine for which the basic unit of computation is to access a single inverted index. However, WHIRL's use of the h(*) function will schedule the computation of these queries in an intelligent way: queries unlikely to produce good answers can be discarded, and low-weight terms are unlikely to be used. [0152]
  • In yet another example, consider the query [0153]
  • p(Company1,Industry) ∧ q(Company2,WebSite) ∧ Company1~Company2 ∧ const1(IO) ∧ Industry~IO,
  • where the relation const1 contains the single document "telecommunications and/or equipment". In solving this query, WHIRL will first explode const1 and generate a binding for IO. The literal Industry~IO then becomes constraining, so it will be used to pick bindings for Company1 and Industry using some high-weight term, perhaps "telecommunications". [0154]
  • At this point there will be two types of states on the OPEN list. There will be one state s′ in which only IO is bound, and ⟨telecommunications, Industry⟩ is excluded. There will also be several states s1, . . . , sn in which IO, Company1 and Industry are bound; in these states, the literal Company1~Company2 is constraining. If s′ has a higher score than any s_i, then s′ will be removed from the OPEN list, and another term from the literal Industry~IO will be used to generate additional variable bindings. [0155]
  • However, if some state s_i has a high h(*) value, then it will be taken ahead of s′. Note that this is possible when the bindings in s_i lead to a good actual similarity score for Industry~IO as well as a good potential similarity score for Company1~Company2 (as measured by the h′(*) function). If an s_i is picked, then bindings for Company2 and WebSite will be produced, resulting in a ground state. This ground state will be removed from the OPEN list on the next iteration only if its h(*) value is higher than that of s′ and all of the remaining s_i. [0156]
  • This example illustrates how bindings can be propagated through similarity literals. The binding for IO is first used to generate bindings for Company1 and Industry, and then the binding for Company1 is used to bind Company2 and WebSite. Note that bindings are generated using high-weight, low-frequency terms first, and low-weight, high-frequency terms only when necessary. [0157]
  • Embodiments of the invention have been evaluated on data collected from a number of sites on the World Wide Web. I have evaluated the run-time performance with CPU time measurements on a specific class of queries, which I will henceforth call similarity joins. A similarity join is a query of the form p(X_1, . . . , X_i, . . . , X_k) ∧ q(Y_1, . . . , Y_j, . . . , Y_b) ∧ X_i~Y_j [0158]
  • An answer to this query will consist of the r tuples from p and q such that X_i and Y_j are most similar. WHIRL was compared on queries of this sort to the following known algorithms: [0159]
  • 1) The naive method for similarity joins takes each document in the i-th column of relation p in turn, and submits it as an IR ranked retrieval query to a corpus corresponding to the j-th column of relation q. The top r results from each of these IR queries are then merged to find the best r pairs overall. This might more appropriately be called a "semi-naive" method; on each IR query, I use inverted indices, but I employ no special query optimizations. [0160]
  • 2) WHIRL is closely related to the maxscore optimization, which is described in Query Evaluation: Strategies and Optimizations by Howard Turtle and James Flood, in Information Processing and Management, 31(6):831-850, November 1995. WHIRL was compared to a maxscore method for similarity joins; this method is analogous to the naive method described above, except that the maxscore optimization is used in finding the best r results from each "primitive" query. [0161]
  • I computed the top 10 answers for the similarity join of subsets of the IMDB 303 and VideoFlicks 304 relations shown in FIG. 3. In particular, I joined size-n subsets of both relations, for various values of n between 2000 and 30,000. WHIRL speeds up the maxscore method by a factor of between 4 and 9, and speeds up the naive method by a factor of 20 or more. The absolute time required to compute the join is fairly modest: with n = 30,000, WHIRL takes well under a minute to pick the best 10 answers from the 900 million possible candidates. [0162]
  • To evaluate the accuracy of the answers produced by WHIRL, I adopted the following methodology. Again focusing on similarity joins, I selected pairs of relations which contained two or more plausible “key” fields. One of these fields, the “primary key”, was used in the similarity literal in the join. The second key field was then used to check the correctness of proposed pairings; specifically, a pairing was marked as “correct” if the secondary keys matched (using an appropriate matching procedure) and “incorrect” otherwise. [0163]
  • I then treated "correct" pairings in the same way that "relevant" documents are typically treated in the evaluation of a ranking proposed by a standard IR system. In particular, I measured the quality of a ranking using non-interpolated average precision. To motivate this measurement, assume the end user will scan down the list of answers and stop at some particular target answer that he or she finds to be of interest. The answers listed below this "target" are not relevant, since they are not examined by the user. Above the target, one would like to have a high density of correct pairings; specifically, one would like the set S of answers above the target to have high precision, where the precision of S is the ratio of the number of correct answers in S to the total number of answers in S. Average precision is the average precision for all "plausible" target answers, where an answer is considered a plausible target only if it is correct. To summarize, letting a_k be the number of correct answers in the first k, letting c(k)=1 if the k-th answer is correct, and letting c(k)=0 otherwise, average precision is the quantity $\sum_{k=1}^{r} c(k)\cdot\frac{a_k}{k}$. [0164]
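  • Following the description above (precision is averaged over the correct answers, which are the plausible targets), non-interpolated average precision can be computed as in the sketch below; the function name average_precision is invented for this example.

      def average_precision(correct_flags):
          """Non-interpolated average precision of a ranked list of answers, where
          correct_flags[k-1] is True iff the k-th answer is a correct pairing."""
          total, hits = 0.0, 0
          for k, correct in enumerate(correct_flags, start=1):
              if correct:
                  hits += 1
                  total += hits / k     # precision of the answers up to this target
          return total / hits if hits else 0.0

      print(average_precision([True, True, False, True]))   # (1/1 + 2/2 + 3/4) / 3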
  • I used three pairs of relations from three different domains. In the business domain, I joined Iontech 301 and Hoovers Web 302, using company name as the primary key, and the string representing the "site" portion of the home page as a secondary key. In the movie domain, I joined Review 305 and MovieLink 306 (FIG. 3), using film names as a primary key. As a secondary key, I used a special key constructed by the hand-coded normalization procedure for film names that is used in IM, an implemented heterogeneous data integration system described in Querying Heterogeneous Information Sources Using Source Descriptions by Alon Y. Levy, Anand Rajaraman, and Joann J. Ordille, Proceedings of the 22nd International Conference on Very Large Databases (VLDB-96), Bombay, India, September 1996. In the animal domain, I joined Animal1 307 and Animal2 308 (FIG. 3), using common names as the primary key, and scientific names as a secondary key (and a hand-coded domain-specific matching procedure). [0165]
  • On these domains, similarity joins are extremely accurate. In the movie domain, the performance is actually identical to the hand-coded normalization procedure, and thus has an average precision of 100%. In the animal domain, the average precision is 92.1%, and in the business domain, average precision is 84.6%. These results contrast with the typical performance of statistical IR systems on retrieval problems, where the average precision of a state-of-the-art IR system is usually closer to 50% than 90%. In other words, the tested embodiment of the present invention was able to achieve results in an efficient, automatic fashion that were just as good as the results obtained using a substantially more expensive technique involving hand-coding, i.e., human intervention. [0166]
  • The foregoing has disclosed to those skilled in the arts of information retrieval and databases how to integrate information from many heterogeneous sources using the method of the invention. While the techniques disclosed herein are the best presently known to the inventor, other techniques could be employed without departing from the spirit and scope of the invention. For example, representations other than relational representations are used to store data; some of these representations are described in Proceedings of the Workshop on Management of Semistructured Data, edited by Dan Suciu, available from http://www.research.att.com/~suciu/workshop-papers.html. Many of these representations also employ constant values as keys, and could be naturally extended to use instead textual values that are associated with each other based on similarity metrics. [0167]
  • In the process of finding answers with high score, the invention employs A* search. Many variants of this search algorithm are known and many of these could be used. The current invention also outputs answer tuples in an order that is strictly dictated by score; some variants of A* search are known that require less compute time, but output answers in an order that is largely, but not completely, consistent with this ordering. [0168]
  • Methods are also known for finding pairs of similar keys by using Monte Carlo sampling methods; these methods are described in Approximating Matrix Multiplication for Pattern Recognition Tasks, in Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 682-691, 1997. For certain types of queries, these sampling methods could be used instead of, or as a supplement to, some variant of A* search. [0169]
  • Many different term-based similarity functions have been proposed by researchers in information retrieval. Many of these variants could be employed instead of the function employed in the invention. [0170]
  • Finally, while the problem that motivated the development of this invention is integration of data from heterogeneous databases, there are potentially other problems to which the present invention can be advantageously applied. That being the case, the description of the present invention set forth herein is to be understood as being in all respects illustrative and exemplary, but not restrictive. [0171]

Claims (25)

What is claimed is:
1. A method for answering queries concerning information stored in a set of collections, where each collection includes a structured entity, and where each structured entity includes a field, comprising the steps of:
a. receiving a query that specifies
i. a subset of the set of collections;
ii. a logical constraint between fields that includes a requirement that a first field match a second field;
b. automatically determining the probability that the first field matches the second field based upon the contents of the fields; and
c. generating a collection of lists in response to the query, where each list includes members of the subset of collections specified in the query, and where each list has an estimate of the probability that the members of the list satisfies the logical constraint specified in the query.
2. The method of
claim 1
, wherein members of the set of collections are derived from a plurality of distinct sources.
3. The method of
claim 1
, wherein a collection of structured entities is a relation, and wherein a structured entity is a tuple.
4. The method of
claim 1
, wherein the first field and the second field include a group of terms.
5. The method of
claim 4
, wherein a term corresponds to at least one of the following: a word, a word prefix, a word suffix, and a phrase.
6. The method of
claim 4
, wherein the group of terms refers to an external entity.
7. The method of
claim 4
, wherein the group of terms is represented by a vector, where each component of the vector corresponds to one of the terms of a set of terms that can possibly occur in the group, and where each component is assigned a value corresponding to a weight of the term of the component.
8. The method of
claim 7
, further comprising the step of obtaining a value representing the similarity of a first vector to a second vector.
9. The method of
claim 8
, wherein obtaining a value representing the similarity of the first vector to the second vector comprises the steps of computing the sum of the product of the weight of each first vector component with the weight of each second vector component that represents the same term as the first vector component.
10. The method of
claim 9
, further comprising the step of using the similarity value to determine the probability that the first vector matches the second vector.
11. The method of
claim 7
, wherein the weight assigned to a component corresponding to a term is higher if the term is rare in the set of collections of structured entities.
12. The method of
claim 1
, wherein the set of lists includes substantially all of a response set of K possible lists that are estimated to have the highest probability that the members of each list satisfies the logical constraint specified in the query, where K is a parameter supplied by the user.
13. The method of
claim 1
, further comprising the step of searching through a space of partial lists to find the lists that belong to the response set.
14. The method of
claim 13
, wherein searching through a space of partial lists comprises the steps of:
i. choosing a partial list with an extreme heuristic value;
ii. determining if the partial list is complete;
iii. if the partial list is complete, then presenting the partial list to the user as the answer to the query;
iv. if the partial list is not complete, then extending the partial list by adding a member of the set of collections specified in the query to the partial list;
v. assessing the heuristic value of the extended partial list; and
vi. repeating steps i. through iii. until at least K lists have been presented to the user, where K is a parameter supplied by the user.
15. The method of
claim 14
, wherein a partial list is determined to be complete if it includes a member of every collection of structured entities specified in the query.
16. The method of
claim 14
, wherein the heuristic value for a partial list is at least approximately equal to the upper bound of the estimated probability that any possible extension of the partial list satisfies the logical constraint specified in the query.
17. The method of
claim 14
, wherein adding a new potential member to an existing partial list comprises the steps of selecting a member of the set of collections specified in the query, and adding the selected member to the existing partial list.
18. The method of
claim 14
, wherein adding a new member list to an existing partial list comprises the steps of:
i. selecting a logical constraint from the query that a first field match a second field, where a member of the set of collections specified in the query corresponding to the first field is included in the partial list;
ii. selecting a term that is included in the member of the partial list that corresponds to the first field;
iii. finding a potential member that includes the selected term; and
iv. adding the potential member that includes the selected term to the existing partial list.
19. An apparatus for answering queries concerning information stored in a set of collections, where each collection includes a structured entity, and where each structured entity includes a field, comprising:
a. a processor;
b. a memory that stores search instructions adapted to be executed by said processor to receive a query that specifies a subset of the set of collections and a logical constraint between fields that includes a requirement that a first field match a second field, automatically determine the probability that the first field matches the second field based upon the contents of the fields, and generate a collection of lists in response to the query, where each list includes members of the subset of collections specified in the query, and where each list has an estimate of the probability that the members of the list satisfies the logical constraint specified in the query,
said memory coupled to said processor.
20. The apparatus of
claim 19
, further comprising a port adapted to be coupled to a network, said port coupled to said processor and said memory.
21. The apparatus of
claim 19
, wherein said search instructions are further adapted to be executed by said processor to choose a partial list with an extreme heuristic value, determine if the partial list is complete, if the partial list is complete, then to present the partial list to the user as the answer to the query, and if the partial list is not complete, then to extend the partial list by adding a member of the set of collections specified in the query to the partial list, to assess the heuristic value of the extended partial list, and to continue to search through a space of partial lists until at least K lists have been presented to the user, where K is a parameter supplied by the user.
22. A medium that stores instructions adapted to be executed by a processor to:
a. receive a query that specifies
i. a subset of the set of collections;
ii. a logical constraint between fields that includes a requirement that a first field match a second field;
b. automatically determine the probability that the first field matches the second field based upon the contents of the fields; and
c. generate a collection of lists in response to the query, where each list includes members of the subset of collections specified in the query, and where each list has an estimate of the probability that the members of the list satisfies the logical constraint specified in the query.
23. A medium that stores instructions adapted to be executed by a processor to:
i. choose a partial list with an extreme heuristic value;
ii. determine if the partial list is complete;
iii. if the partial list is complete, then present the partial list to the user as the answer to the query;
iv. if the partial list is not complete, then extend the partial list by adding a member of the set of collections specified in the query to the partial list;
v. assess the heuristic value of the extended partial list; and
vi. repeat steps i. through iii. until at least K lists have been presented to the user, where K is a parameter supplied by the user.
24. A system for answering queries concerning information stored in a set of collections, where each collection includes a structured entity, and where each structured entity includes a field, comprising:
a. means for receiving a query that specifies
i. a subset of the set of collections;
ii. a logical constraint between fields that includes a requirement that a first field match a second field;
b. means for automatically determining the probability that the first field matches the second field based upon the contents of the fields; and
c. means for generating a collection of lists in response to the query, where each list includes members of the subset of collections specified in the query, and where each list has an estimate of the probability that the members of the list satisfies the logical constraint specified in the query.
25. A system for searching through a space of partial lists, comprising:
i. means for choosing a partial list with an extreme heuristic value;
ii. means for determining if the partial list is complete;
iii. means for if the partial list is complete, then presenting the partial list to the user as the answer to the query;
iv. means for determining if the partial list is complete;
v. means for extending the partial list by adding a member of the set of collections specified in the query to the partial list;
v. means for assessing the heuristic value of the extended partial list; and
vi. means for determining if at least K lists have been presented to the user, where K is a parameter supplied by the user.

US7698338B2 (en) * 2002-09-18 2010-04-13 Netezza Corporation Field oriented pipeline architecture for a programmable data streaming processor
US7251653B2 (en) * 2004-06-30 2007-07-31 Microsoft Corporation Method and system for mapping between logical data and physical data
US7536398B2 (en) * 2005-03-29 2009-05-19 Is Technologies, Llc On-line organization of data sets
US20090168163A1 (en) * 2005-11-01 2009-07-02 Global Bionic Optics Pty Ltd. Optical lens systems
US7844783B2 (en) * 2006-10-23 2010-11-30 International Business Machines Corporation Method for automatically detecting an attempted invalid access to a memory address by a software application in a mainframe computer
US8195655B2 (en) * 2007-06-05 2012-06-05 Microsoft Corporation Finding related entity results for search queries
US8046339B2 (en) 2007-06-05 2011-10-25 Microsoft Corporation Example-driven design of efficient record matching queries
US8805861B2 (en) * 2008-12-09 2014-08-12 Google Inc. Methods and systems to train models to extract and integrate information from data sources
US8301622B2 (en) * 2008-12-30 2012-10-30 Novell, Inc. Identity analysis and correlation
US8296297B2 (en) * 2008-12-30 2012-10-23 Novell, Inc. Content analysis and correlation
US8386475B2 (en) * 2008-12-30 2013-02-26 Novell, Inc. Attribution analysis and correlation
US20100250479A1 (en) * 2009-03-31 2010-09-30 Novell, Inc. Intellectual property discovery and mapping systems and methods
US20110313762A1 (en) * 2010-06-20 2011-12-22 International Business Machines Corporation Speech output with confidence indication
US8700396B1 (en) * 2012-09-11 2014-04-15 Google Inc. Generating speech data collection prompts
US9830325B1 (en) * 2013-09-11 2017-11-28 Intuit Inc. Determining a likelihood that two entities are the same
US10296655B2 (en) 2016-06-24 2019-05-21 International Business Machines Corporation Unbounded list processing

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5237678A (en) * 1987-05-08 1993-08-17 Kuechler William L System for storing and manipulating information in an information base
US5454106A (en) * 1993-05-17 1995-09-26 International Business Machines Corporation Database retrieval system using natural language for presenting understood components of an ambiguous query on a user interface
US5701453A (en) * 1993-07-01 1997-12-23 Informix Software, Inc. Logical schema to allow access to a relational database without using knowledge of the database structure
US5577241A (en) * 1994-12-07 1996-11-19 Excite, Inc. Information retrieval system and method with implementation extensible query architecture
US5694559A (en) 1995-03-07 1997-12-02 Microsoft Corporation On-line help method and system utilizing free text query
US5701400A (en) 1995-03-08 1997-12-23 Amado; Carlos Armando Method and apparatus for applying if-then-else rules to data sets in a relational data base and generating from the results of application of said rules a database of diagnostics linked to said data sets to aid executive analysis of financial data
US5920859A (en) * 1997-02-05 1999-07-06 Idd Enterprises, L.P. Hypertext document retrieval system and method
US5819291A (en) * 1996-08-23 1998-10-06 General Electric Company Matching new customer records to existing customer records in a large business database using hash key
US5884304A (en) * 1996-09-20 1999-03-16 Novell, Inc. Alternate key index query apparatus and method
US5924090A (en) * 1997-05-01 1999-07-13 Northern Light Technology Llc Method and apparatus for searching a database of records
US5999928A (en) * 1997-06-30 1999-12-07 Informix Software, Inc. Estimating the number of distinct values for an attribute in a relational database table
US5895465A (en) * 1997-09-09 1999-04-20 Netscape Communications Corp. Heuristic co-identification of objects across heterogeneous information sources

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6539378B2 (en) * 1997-11-21 2003-03-25 Amazon.Com, Inc. Method for creating an information closure model
US6571243B2 (en) 1997-11-21 2003-05-27 Amazon.Com, Inc. Method and apparatus for creating extractors, field information objects and inheritance hierarchies in a framework for retrieving semistructured information
US6408093B1 (en) * 1999-09-08 2002-06-18 Lucent Technologies Inc. Method for comparing object ranking schemes
US7725307B2 (en) * 1999-11-12 2010-05-25 Phoenix Solutions, Inc. Query engine for processing voice based queries including semantic decoding
US6532470B1 (en) * 1999-12-17 2003-03-11 International Business Machines Corporation Support for summary tables in a database system that does not otherwise support summary tables
US20060036588A1 (en) * 2000-02-22 2006-02-16 Metacarta, Inc. Searching by using spatial document and spatial keyword document indexes
US20080228754A1 (en) * 2000-02-22 2008-09-18 Metacarta, Inc. Query method involving more than one corpus of documents
US7953732B2 (en) 2000-02-22 2011-05-31 Nokia Corporation Searching by using spatial document and spatial keyword document indexes
US9201972B2 (en) 2000-02-22 2015-12-01 Nokia Technologies Oy Spatial indexing of documents
US7908280B2 (en) * 2000-02-22 2011-03-15 Nokia Corporation Query method involving more than one corpus of documents
US20040172393A1 (en) * 2003-02-27 2004-09-02 Kazi Zunaid H. System and method for matching and assembling records
US8166033B2 (en) * 2003-02-27 2012-04-24 Parity Computing, Inc. System and method for matching and assembling records
US7043470B2 (en) * 2003-03-05 2006-05-09 Hewlett-Packard Development Company, L.P. Method and apparatus for improving querying
US20040177061A1 (en) * 2003-03-05 2004-09-09 Zhichen Xu Method and apparatus for improving querying
US20050027717A1 (en) * 2003-04-21 2005-02-03 Nikolaos Koudas Text joins for data cleansing and integration in a relational database management system
US20050004943A1 (en) * 2003-04-24 2005-01-06 Chang William I. Search engine and method with improved relevancy, scope, and timeliness
US7917483B2 (en) * 2003-04-24 2011-03-29 Affini, Inc. Search engine and method with improved relevancy, scope, and timeliness
US8886621B2 (en) 2003-04-24 2014-11-11 Affini, Inc. Search engine and method with improved relevancy, scope, and timeliness
US20040250100A1 (en) * 2003-06-09 2004-12-09 Rakesh Agrawal Information integration across autonomous enterprises
US8041706B2 (en) 2003-06-09 2011-10-18 International Business Machines Corporation Information integration across autonomous enterprises
US20080065910A1 (en) * 2003-06-09 2008-03-13 International Business Machines Corporation Information integration across autonomous enterprises
US7290150B2 (en) 2003-06-09 2007-10-30 International Business Machines Corporation Information integration across autonomous enterprises
US20050192942A1 (en) * 2004-02-27 2005-09-01 Stefan Biedenstein Accelerated query refinement by instant estimation of results
US7363299B2 (en) * 2004-11-18 2008-04-22 University Of Washington Computing probabilistic answers to queries
US20060206477A1 (en) * 2004-11-18 2006-09-14 University Of Washington Computing probabilistic answers to queries
US8209347B1 (en) * 2005-08-01 2012-06-26 Google Inc. Generating query suggestions using contextual information
US20100318524A1 (en) * 2005-12-29 2010-12-16 Microsoft Corporation Displaying Key Differentiators Based On Standard Deviations Within A Distance Metric
US20070156652A1 (en) * 2005-12-29 2007-07-05 Microsoft Corporation Displaying key differentiators based on standard deviations within a distance metric
US7774344B2 (en) * 2005-12-29 2010-08-10 Microsoft Corporation Displaying key differentiators based on standard deviations within a distance metric
US8244772B2 (en) 2007-03-29 2012-08-14 Franz, Inc. Method for creating a scalable graph database using coordinate data elements
WO2008121886A1 (en) * 2007-03-29 2008-10-09 Franz, Inc. Method for creating a scalable graph database
US20080243908A1 (en) * 2007-03-29 2008-10-02 Jannes Aasman Method for Creating a Scalable Graph Database Using Coordinate Data Elements
US7890518B2 (en) 2007-03-29 2011-02-15 Franz Inc. Method for creating a scalable graph database
US20080243770A1 (en) * 2007-03-29 2008-10-02 Franz Inc. Method for creating a scalable graph database
US20090138430A1 (en) * 2007-11-28 2009-05-28 International Business Machines Corporation Method for assembly of personalized enterprise information integrators over conjunctive queries
US20090138431A1 (en) * 2007-11-28 2009-05-28 International Business Machines Corporation System and computer program product for assembly of personalized enterprise information integrators over conjunctive queries
US8145684B2 (en) * 2007-11-28 2012-03-27 International Business Machines Corporation System and computer program product for assembly of personalized enterprise information integrators over conjunctive queries
US8190596B2 (en) * 2007-11-28 2012-05-29 International Business Machines Corporation Method for assembly of personalized enterprise information integrators over conjunctive queries
US20120278340A1 (en) * 2008-04-24 2012-11-01 Lexisnexis Risk & Information Analytics Group Inc. Database systems and methods for linking records and entity representations with sufficiently high confidence
US8495077B2 (en) * 2008-04-24 2013-07-23 Lexisnexis Risk Solutions Fl Inc. Database systems and methods for linking records and entity representations with sufficiently high confidence
US8984301B2 (en) * 2008-06-19 2015-03-17 International Business Machines Corporation Efficient identification of entire row uniqueness in relational databases
US20090319541A1 (en) * 2008-06-19 2009-12-24 Peeyush Jaiswal Efficient Identification of Entire Row Uniqueness in Relational Databases
US9646246B2 (en) 2011-02-24 2017-05-09 Salesforce.Com, Inc. System and method for using a statistical classifier to score contact entities
US8972336B2 (en) * 2012-05-03 2015-03-03 Salesforce.Com, Inc. System and method for mapping source columns to target columns
US20130297661A1 (en) * 2012-05-03 2013-11-07 Salesforce.Com, Inc. System and method for mapping source columns to target columns
US20170068734A1 (en) * 2015-09-09 2017-03-09 International Business Machines Corporation Search Engine Domain Transfer
US10754897B2 (en) 2015-09-09 2020-08-25 International Business Machines Corporation Search engine domain transfer
US10762144B2 (en) * 2015-09-09 2020-09-01 International Business Machines Corporation Search engine domain transfer
US11023475B2 (en) 2016-07-22 2021-06-01 International Business Machines Corporation Testing pairings to determine whether they are publically known
US11023476B2 (en) * 2016-07-22 2021-06-01 International Business Machines Corporation Testing pairings to determine whether they are publically known
US11360990B2 (en) 2019-06-21 2022-06-14 Salesforce.Com, Inc. Method and a system for fuzzy matching of entities in a database system based on machine learning
US20220036209A1 (en) * 2020-07-28 2022-02-03 Intuit Inc. Unsupervised competition-based encoding
US11763180B2 (en) * 2020-07-28 2023-09-19 Intuit Inc. Unsupervised competition-based encoding
CN112597315A (en) * 2020-12-28 2021-04-02 中国航天系统科学与工程研究院 System model map construction method based on SysML meta-model ontology

Also Published As

Publication number Publication date
WO1998039697A2 (en) 1998-09-11
WO1998039697A3 (en) 1998-12-17
US6295533B2 (en) 2001-09-25

Similar Documents

Publication Publication Date Title
US6295533B2 (en) System and method for accessing heterogeneous databases
US8468156B2 (en) Determining a geographic location relevant to a web page
Cohen Data integration using similarity joins and a word-based information representation language
Cohen Integration of heterogeneous databases without common domains using queries based on textual similarity
Goldman et al. Proximity search in databases
US7627564B2 (en) High scale adaptive search systems and methods
US6965900B2 (en) Method and apparatus for electronically extracting application specific multidimensional information from documents selected from a set of documents electronically extracted from a library of electronically searchable documents
US6289342B1 (en) Autonomous citation indexing and literature browsing using citation context
US7519582B2 (en) System and method for performing a high-level multi-dimensional query on a multi-structural database
US20030115188A1 (en) Method and apparatus for electronically extracting application specific multidimensional information from a library of searchable documents and for providing the application specific information to a user application
US20030217335A1 (en) System and method for automatically discovering a hierarchy of concepts from a corpus of documents
US20070185868A1 (en) Method and apparatus for semantic search of schema repositories
US20050210006A1 (en) Field weighting in text searching
US20070055691A1 (en) Method and system for managing exemplar terms database for business-oriented metadata content
US20090119261A1 (en) Techniques for ranking search results
US9275144B2 (en) System and method for metadata search
Kleb et al. Entity reference resolution via spreading activation on RDF-graphs
Straccia et al. Towards distributed information retrieval in the semantic web: Query reformulation using the oMAP framework
Dadure et al. Embedding and generalization of formula with context in the retrieval of mathematical information
Barioni et al. Querying complex objects by similarity in SQL.
Agichtein et al. Question answering over implicitly structured web content
Nado et al. Extracting entity profiles from semistructured information spaces
Nogueras-Iso et al. Exploiting disambiguated thesauri for information retrieval in metadata catalogs
Manolopoulos et al. Indexing techniques for web access logs
Dadure et al. An Analysis of Variable-Size Vector Based

Legal Events

Date Code Title Description
AS Assignment

Owner name: AT&T CORP., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:COHEN, WILLIAM W.;REEL/FRAME:009217/0306

Effective date: 19980515

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: RUTGERS, THE STATE UNIVERSITY OF NEW JERSEY, NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AT&T CORPORATION;REEL/FRAME:016059/0591

Effective date: 20010629

AS Assignment

Owner name: RUTGERS, THE STATE UNIVERSITY OF NEW JERSEY, NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AT&T CORPORATION;REEL/FRAME:016446/0952

Effective date: 20010629

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20090925