US20070276818A1 - Adapting a search classifier based on user queries

Adapting a search classifier based on user queries

Info

Publication number
US20070276818A1
Authority
US
United States
Prior art keywords
task
mappings
classifier
query
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/830,375
Inventor
Daniel Cook
Chad Oftedal
Scott Seiber
Matthew Goldberg
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp
Priority to US11/830,375
Publication of US20070276818A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION
Status: Abandoned

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • G06F16/355Class or cluster creation or modification
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99931Database or file accessing
    • Y10S707/99933Query processing, i.e. searching
    • Y10S707/99934Query formulation, input preparation, or translation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99941Database schema or data structure
    • Y10S707/99942Manipulating data structure, e.g. compression, compaction, compilation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99941Database schema or data structure
    • Y10S707/99943Generating database or data structure, e.g. via user interface

Abstract

Multiple different user queries are applied to an automated classifier to identify multiple tasks. For each query, a task is provided to a user. A task selected by the user is logged and a mapping between each query and each selected task is generated. Fewer than all of the mappings are used to train a new classifier, wherein selecting fewer than all of the mappings to train the new classifier comprises selecting mappings based on when the mappings were generated. The new classifier is stored on a computer-readable storage medium.

Description

    REFERENCE TO RELATED APPLICATIONS
  • This application is a divisional of and claims priority from U.S. patent application Ser. No. 10/310,408, filed on Dec. 5, 2002 and entitled METHOD AND APPARATUS FOR ADAPTING A SEARCH CLASSIFIER BASED ON USER QUERIES.
  • BACKGROUND OF THE INVENTION
  • The present invention relates to text classifiers. In particular, the present invention relates to the classification of user queries.
  • In the past, search tools have been developed that classify user queries to identify one or more tasks or topics that the user is interested in. In some systems, this was done with simply key-word matching in which each key word was assigned to a particular topic. In other systems, more sophisticated classifiers have been used that use the entire query to make a determination of the most likely topic or task that the user may be interested in. Examples of such classifiers include support vector machines that provide a binary classification relative to each of a set of tasks. Thus, for each task, the support vector machine is able to decide whether the query belongs to the task or not.
  • Such sophisticated classifiers are trained using a set of queries that have been classified by a librarian. Based on the queries and the classification given by the librarian, the support vector machine generates a hyper-boundary between those queries that match to the task and those queries that do not match to the task. Later, when a query is applied to the support vector machine for a particular task, the distance between the query and the hyper-boundary determines the confidence level with which the support vector machine is able to identify the query as either belonging to the task or not belonging to the task.
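  • A minimal sketch of this per-task training arrangement follows, using scikit-learn's LinearSVC as a stand-in for the patent's support vector machines; the TF-IDF featurization and the sigmoid that squashes the margin distance into a confidence are illustrative assumptions, not details from the patent.

```python
import math
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def train_task_classifier(queries, labels):
    """Train a binary classifier for one task from librarian-labeled
    queries; labels[i] is 1 if queries[i] belongs to the task, else 0."""
    vectorizer = TfidfVectorizer()            # assumed featurization
    svm = LinearSVC().fit(vectorizer.fit_transform(queries), labels)
    return vectorizer, svm

def task_confidence(vectorizer, svm, query):
    """Signed distance to the hyper-boundary, squashed into (0, 1).
    The sigmoid mapping is an assumption; the patent says only that
    the distance determines the confidence level."""
    margin = svm.decision_function(vectorizer.transform([query]))[0]
    return 1.0 / (1.0 + math.exp(-margin))
```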
  • Although the training data provided by the librarian is essential to initially training the support vector machine, such training data limits the performance of the support vector machine over time. In particular, training data that includes current-events queries becomes dated over time and results in unwanted topics or tasks being returned to the user. Although additional librarian-created training data can be added over time to keep the support vector machines current, such maintenance of the support vector machines is time consuming and expensive. As such, a system is needed for updating search classifiers that requires less human intervention, while still maintaining a high standard of precision and recall.
  • SUMMARY OF THE INVENTION
  • Multiple different user queries are applied to an automated classifier to identify multiple tasks. For each query, a task is provided to a user. A task selected by the user is logged and a mapping between each query and each selected task is generated. Fewer than all of the mappings are used to train a new classifier, wherein selecting fewer than all of the mappings to train the new classifier comprises selecting mappings based on when the mappings were generated. The new classifier is stored on a computer-readable storage medium.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a computing device on which a user may enter a query under the present invention.
  • FIG. 2 is a block diagram of a client-server architecture under one embodiment of the present invention.
  • FIG. 3 is a flow diagram of a method of logging search queries and selected tasks under embodiments of the present invention.
  • FIG. 4 is a display showing a list of tasks provided to the user in response to their query.
  • FIG. 5 is a flow diagram of a system for training a classifier using logged search queries under embodiments of the present invention.
  • FIG. 6 is a display showing an interface for designating the training data to be used in building a classifier under one embodiment of the present invention.
  • DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
  • The present invention may be practiced within a single computing device or in a client-server architecture in which the client and server communicate through a network. FIG. 1 provides a block diagram of a single computing device on which the present invention may be practiced or which may be operated as the client in a client-server architecture.
  • The computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100.
  • The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, telephony systems, distributed computing environments that include any of the above systems or devices, and the like.
  • The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
  • With reference to FIG. 1, an exemplary system for implementing the invention includes a general-purpose computing device in the form of a computer 110. Components of computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120. The system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
  • Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
  • The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation, FIG. 1 illustrates operating system 134, application programs 135, other program modules 136, and program data 137.
  • The computer 110 may also include other removable/non-removable volatile/nonvolatile computer storage media. By way of example only, FIG. 1 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152, and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140, and magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150.
  • The drives and their associated computer storage media discussed above and illustrated in FIG. 1 provide storage of computer readable instructions, data structures, program modules and other data for the computer 110. In FIG. 1, for example, hard disk drive 141 is illustrated as storing operating system 144, application programs 145, other program modules 146, and program data 147. Note that these components can either be the same as or different from operating system 134, application programs 135, other program modules 136, and program data 137. Operating system 144, application programs 145, other program modules 146, and program data 147 are given different numbers here to illustrate that, at a minimum, they are different copies.
  • A user may enter commands and information into the computer 110 through input devices such as a keyboard 162, a microphone 163, and a pointing device 161, such as a mouse, trackball or touch pad. Other input devices (not shown) may include a joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 120 through a user input interface 160 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190. In addition to the monitor, computers may also include other peripheral output devices such as speakers 197 and printer 196, which may be connected through an output peripheral interface 195.
  • The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110. The logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
  • When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 1 illustrates remote application programs 185 as residing on remote computer 180. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • FIG. 2 provides a block diagram of a client-server architecture under one embodiment of the present invention. In FIG. 2, a user 200 enters a query using a client computing device 202. Client 202 communicates the query through a network 206 to a search classifier 204, which uses a set of classifier models stored in model storage 208 to classify the user query. Under one embodiment, the classifier models are support vector machines.
  • As shown in the flow diagram of FIG. 3, when search classifier 204 receives a search query at step 300, it identifies a set of tasks that may be represented by the query and returns those identified tasks to the user at step 302. In embodiments in which support vector machines are used, the query is applied to a separate support vector machine for each task, and each separate support vector machine determines whether the query is likely related to a particular task and the confidence level of that determination. This confidence level is typically derived from the distance between a vector representing the query and a hyper-boundary defined within the support vector machine.
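  • At query time this fans out across the per-task classifiers, which might look like the hedged sketch below, reusing task_confidence from the earlier sketch; the 0.5 threshold and the cap of four returned tasks are assumed values, the latter suggested by the FeelGood metric described later.

```python
def classify_query(query, task_models, threshold=0.5, top_n=4):
    """Apply one query to every per-task classifier (steps 300-302).
    task_models maps task name -> (vectorizer, svm)."""
    scored = [(task, task_confidence(vec, svm, query))
              for task, (vec, svm) in task_models.items()]
    confident = [(t, c) for t, c in scored if c >= threshold]
    # Return the most confident tasks for display to the user.
    return sorted(confident, key=lambda tc: tc[1], reverse=True)[:top_n]
```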
  • At step 304 of FIG. 3, search classifier 204 logs the query and the lists of tasks returned to the client 202 in a log 210. Typically, this log entry includes a session ID that uniquely but abstractly identifies a client 202 such that further communications from the same client will share the same session ID. In most embodiments, the session ID is not able to identify a particular user.
  • In step 306 of FIG. 3, client 202 displays the returned tasks to the user so that the user may select one or more of the tasks. An example of such a display is shown in FIG. 4, where tasks 400, 402, and 404 are displayed near a text edit box 408 containing the user's original query. Note that in some embodiments, the query is simultaneously applied to a search engine, which provides a set of results 410 that is displayed next to the identified tasks.
  • At step 308 of FIG. 3, if a user does not select a task, the process returns to step 300, where the search classifier waits for a new query to be submitted by one or more users. If a user does select a task at step 308, search classifier 204 logs the selected task at step 310. After the selected task has been logged at step 310, the process returns to a loop between steps 308 and 300, wherein the search classifier waits for one or more users to select a task previously returned to the user and/or waits for a new query from a user.
  • Over time, log 210 grows in size to include log entries from many users over many different search sessions. After a period of time, typically a week, log 210 is used to build a new classifier as shown in the steps of FIG. 5.
  • At step 500 of FIG. 5, a log parser 212 accesses log 210 and parses the log to find entries in which a task was returned to a user and a subsequent entry in which a task was selected by the user. Note that the user is able to select more than one task and as such there may be multiple entries for different selected tasks based on a single query. A selected task is identified by matching the task to a task returned in an earlier log entry for the same session ID.
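  • Assuming each log record is a dict with the fields named below (the record layout is an assumption; the patent specifies only the session ID and the matching rule), the pairing step might look like:

```python
def extract_mappings(log_records):
    """Pair 'returned' entries with later 'selected' entries sharing a
    session ID (step 500), yielding unsupervised query-to-task mappings."""
    returned = {}                 # session_id -> (query, returned tasks)
    mappings = []
    for rec in log_records:       # records assumed to be in time order
        if rec["type"] == "returned":
            returned[rec["session_id"]] = (rec["query"], set(rec["tasks"]))
        elif rec["type"] == "selected":
            query, tasks = returned.get(rec["session_id"], (None, set()))
            if rec["task"] in tasks:      # match to an earlier log entry
                mappings.append((query, rec["task"]))
    return mappings
```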
  • At step 502, log parser 212 applies each query that resulted in a selected task to the classifier model stored in storage 208 to determine the confidence level of the task selected by the user. The query, task and confidence level are then stored in a database 214.
  • The query and selected task represent an unsupervised query-to-task mapping. This mapping is unsupervised because it is generated automatically without any supervision as to whether the selected task is appropriate for the query.
  • Under one embodiment, query-to-task mappings stored in database 214 are stored with a confidence bucket indicator that indicates the general confidence level of the query-to-task mapping. In particular, a separate bucket is provided for each of the following ranges of confidence levels: 50-60%, 60-70%, 70-80%, 80-90% and 90-100%. These confidence buckets are shown as buckets 216, 218, 220, 222 and 224 in FIG. 2. The step of assigning query-to-task mappings to buckets is shown as step 504 in FIG. 5.
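  • The bucket assignment of step 504 reduces to a small lookup; the boundary handling in this sketch is an assumption, since the patent does not say which bucket receives a confidence that falls exactly on a range boundary.

```python
def bucket_for(confidence):
    """Map a confidence in [0.5, 1.0] to one of the five buckets;
    mappings below 50% fall outside every bucket and return None."""
    if confidence < 0.5:
        return None
    low = min(int(confidence * 10) * 10, 90)  # 50, 60, 70, 80, or 90
    return f"{low}-{low + 10}%"
```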
  • Using a build interface 230, a build manager 232 selects a combination of training data at step 506. FIG. 6 provides an example of a build interface used by a build manager to designate the training data to be used in building a candidate classifier.
  • Under the embodiment of FIG. 6, the training data is designated on a per task basis. As such, a task selection box 650 is provided in which the build manager can designate a task. Note that in other embodiments, this task designation is not used and a single designation of the training data is applied to all of the tasks.
  • In FIG. 6, check boxes 600, 602, 604, 606, 608 and 610 correspond to portions of the original training data that were formed by a librarian and used to construct the original classifier. These original sets of training data are shown as original librarian data 233 in FIG. 2. Check box 612 allows the build manager 232 to designate a set of query-to-task mappings that have appeared multiple times in the log. Such multiple mappings are designated by log parser 212 as being duplicates 234.
  • Check box 614 allows build manager 232 to select training data that has been newly created by a librarian. In other words, a librarian has associated a task with a query and that mapping has been stored as new manual training data 236 in FIG. 2. Check boxes 616, 618, 620, 622 and 624 allow build manager 232 to select the training data that has been assigned to the buckets associated with 50-60%, 60-70%, 70-80%, 80-90% and 90-100% confidence levels, respectively.
  • Under one embodiment, build interface 230 uses the selections made in the check boxes of FIG. 6 to construct a vector representing the information contained in the check boxes. Under this embodiment, each bit position in the vector represents a single check box in FIG. 6, and the bit position has a one when the check box has been selected and a zero when the check box has not been selected. This vector is passed to a build script 238 so that the build script knows which training data has been selected by the build manager.
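  • A hedged sketch of that encoding follows; the ordering of the thirteen check boxes is an assumption, since the patent says only that each bit position represents a single check box.

```python
# Assumed ordering: six librarian sets, duplicates, new manual data,
# then the five confidence buckets (check boxes 600-624 in FIG. 6).
CHECKBOX_ORDER = [
    "librarian_1", "librarian_2", "librarian_3", "librarian_4",
    "librarian_5", "librarian_6", "duplicates", "new_manual",
    "bucket_50_60", "bucket_60_70", "bucket_70_80",
    "bucket_80_90", "bucket_90_100",
]

def selection_vector(checked):
    """One bit per check box: 1 if selected, 0 if not."""
    return [1 if name in checked else 0 for name in CHECKBOX_ORDER]
```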
  • Build interface 230 also includes a freshness box 652, which allows the build manager to designate the percent of the training data that is to be used in constructing the classifier. This percentage represents the latest x percent of the training data that was stored in the log. For example, if the percentage is set at twenty percent, the latest 20 percent of task mappings that are found in the database are used to construct the classifier. Thus, the freshness box allows the build manager to select the training data based on when the mappings were produced.
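  • The freshness selection amounts to sorting the mappings by creation time and keeping the newest slice, as in this sketch; the (timestamp, query, task) tuple layout is an assumption.

```python
def freshest_mappings(mappings, percent):
    """Keep only the latest `percent` of query-to-task mappings,
    mirroring freshness box 652 (e.g. percent=20 keeps the newest 20%)."""
    ordered = sorted(mappings, key=lambda m: m[0])    # oldest first
    keep = max(1, round(len(ordered) * percent / 100))
    return ordered[-keep:]                            # newest slice
```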
  • Freshness box 652 allows the build manager to tailor how much old training data will be used to construct the classifier. In addition, in embodiments where the training data is specified on a per task basis using task selection box 650, it is possible to set different freshness levels for different tasks. This is helpful because some tasks are highly time-specific and their queries change significantly over time making it desirable to use only the latest training data. Other tasks are not time-specific and their queries change little over time. For these tasks it is desirable to use as much training data as possible to improve the performance of the classifier.
  • Based on the check boxes selected in build interface 230, build script 238 retrieves the query-to-task mappings with the appropriate designations 216, 218, 220, 222, 224, 233, 234 and/or 236 and uses those query-to-task mappings to build a candidate classifier 240 at step 508.
  • Candidate classifier 240 is provided to a tester 242, which at step 510 of FIG. 5 measures the precision, recall and FeelGood performance of candidate classifier 240. Precision provides a measure of the classifier's ability to return only those tasks that are truly related to a query and not other unrelated tasks. Recall performance provides a measure of the candidate classifier's ability to return all of the tasks that are associated with a particular query. “FeelGood” is a metric that indicates, for a given known test query, whether the associated mapped task would appear as one of the top 4 tasks returned to an end user. If yes, the mapping is scored a value of 1.0; if no, the mapping is scored a value of 0.0. Averaging this value over the entire testing set produces a value between zero and one. For well-selected training sets this average is around 85%, meaning that 85 queries out of 100 caused the proper task to appear in the top 4.
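  • Computed over a labeled test set, FeelGood is an averaged top-4 inclusion check, as in this sketch (reusing classify_query from the earlier sketch; disabling the confidence threshold during scoring is an assumption).

```python
def feelgood(test_mappings, task_models, top_n=4):
    """Average FeelGood score: 1.0 when the mapped task appears among
    the top_n tasks returned for its query, else 0.0 (~0.85 is typical
    for well-selected training sets, per the patent)."""
    hits = 0.0
    for query, expected_task in test_mappings:
        top = classify_query(query, task_models, threshold=0.0, top_n=top_n)
        if any(task == expected_task for task, _ in top):
            hits += 1.0
    return hits / len(test_mappings)
```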
  • Under one embodiment, the step of testing the candidate classifier at step 510 is performed using a “holdout” methodology. Under this method, the selected training data is divided into N sets. One of the sets is selected and the remaining sets are used to construct a candidate classifier. The set of training data that was not used to build the classifier is then applied to the classifier to determine its precision, recall and FeelGood performance. This is repeated for each set of data such that a separate classifier is built for each set of data that is held out. The performance of the candidate classifier is then determined as the average precision, recall, and FeelGood performance of each of the candidate classifiers generated for the training data.
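  • In effect this is N-fold cross-validation; a minimal sketch, with `build` and `evaluate` as stand-ins for the patent's build script and tester:

```python
def holdout_scores(mappings, n_sets, build, evaluate):
    """Hold out each of N sets in turn (step 510): train a candidate
    on the remaining sets, score it on the held-out set, and average."""
    folds = [mappings[i::n_sets] for i in range(n_sets)]
    scores = []
    for i, held_out in enumerate(folds):
        train = [m for j, fold in enumerate(folds) if j != i for m in fold]
        scores.append(evaluate(build(train), held_out))
    return sum(scores) / n_sets
```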
  • At step 512, the build interface 230 is provided to build manager 232 once again so that the build manager may change the combination of training data used to construct the candidate classifier. If the build manager selects a new combination of training data, the process returns to step 506 and a new candidate classifier is constructed and tested.
  • When the build manager has tested all of the desired combinations of training data, the best candidate classifier is selected at step 514. The performance of this best candidate is then compared to the performance of the current classifier at step 516. If the performance of the current classifier is better than the performance of the candidate classifier, the current classifier is kept in place at step 518. If, however, the candidate classifier performs better than the current classifier, the candidate classifier is designated as a release candidate 243 and is provided to a rebuild tool 244. At step 520, rebuild tool 244 replaces the current classifier with release candidate 243 in model storage 208. In many embodiments, the changing of the classifier stored in model storage 208 is performed during non-peak times. When the search classifier is operated over multiple servers, the change in classifiers is performed in a step-wise fashion across each of the servers.
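  • The promotion decision of steps 514-520 reduces to a comparison, as in this sketch; `evaluate` (a scalar quality score) and `deploy` are stand-ins for the patent's tester and rebuild tool, not names from the patent.

```python
def promote_if_better(current, candidate, test_set, evaluate, deploy):
    """Keep the current classifier unless the best candidate
    outperforms it, in which case deploy the candidate (step 520)."""
    if evaluate(candidate, test_set) > evaluate(current, test_set):
        deploy(candidate)      # e.g. replace the model in model storage
        return candidate
    return current
```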
  • Thus, the present invention provides a method by which a search classifier may be updated using query-to-task mappings that have been designated by the user as being useful. As a result, the classifier improves in performance and is able to change over time with new queries such that it is no longer limited by the original training data used during the initial construction of the search classifier. In turn, less manually entered training data is needed under the present invention in order to update and expand the performance of the classifier.
  • While the present invention has been described with reference to queries and tasks, those skilled in the art will recognize that a query is simply one type of example that can be used by an example-based categorizer such as the one described above and a task is just one example of a category. Any type of example and any type of category may be used with the present invention.
  • Although the present invention has been described with reference to particular embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.

Claims (12)

1. A computer-readable storage medium having computer-executable instructions for performing steps comprising:
applying multiple different user queries to an automated classifier to identify multiple tasks, each user query comprising at least one word;
for each user query:
providing a task identified for the user query to a user;
logging a task selected by the user;
generating a mapping between each query and each selected task;
selecting fewer than all of the mappings to train a new classifier, wherein selecting fewer than all of the mappings to train the new classifier comprises selecting mappings based on when the mappings were generated; and
storing the new classifier on a computer-readable storage medium, the new classifier for identifying at least one task from a user query.
2. The computer-readable storage medium of claim 1 further comprising using a first set of mappings to train a first new classifier and a second set of mappings, different from the first set of mappings, to train a second new classifier.
3. The computer-readable storage medium of claim 2 further comprising testing the first new classifier and the second new classifier to determine which performs better.
4. The computer-readable storage medium of claim 1 wherein training a classifier comprises setting different training parameters for different tasks.
5. The computer-readable storage medium of claim 4 wherein setting a training parameter for a first task comprises selecting a first percentage of mappings produced for the first task, and setting a training parameter for a second task comprises selecting a second percentage of mappings produced for the second task, the first percentage being different from the second percentage.
6. A method comprising:
applying multiple different user queries to an automated classifier to identify multiple tasks;
for each query, providing a task identified for the query to a user;
for at least two queries, logging a task selected by the user;
generating a mapping between each query for which a task was selected and each selected task;
selecting fewer than all of the mappings to train a new classifier by selecting mappings based on when the mappings were generated; and
storing the new classifier on a computer-readable storage medium, the new classifier for identifying at least one task from a user query.
7. The method of claim 6 further comprising using a first set of mappings to train a first new classifier and a second set of mappings, different from the first set of mappings, to train a second new classifier.
8. The method of claim 7 further comprising testing the first new classifier and the second new classifier to determine which performs better.
9. The method of claim 6 wherein training a classifier comprises setting different training parameters for different tasks.
10. The method of claim 9 wherein setting a training parameter for a first task comprises selecting a first percentage of mappings produced for the first task, and setting a training parameter for a second task comprises selecting a second percentage of mappings produced for the second task, the first percentage being different from the second percentage.
11. A method comprising:
receiving input designating a first percentage of mappings between a first task and a first set of queries that is to be used to train a classifier, the first percentage less than one-hundred percent;
receiving input designating a second percentage of mappings between a second task and a second set of queries that is to be used to train the classifier, the second percentage less than one-hundred percent;
retrieving the first percentage of mappings between the first task and the first set of queries by selecting the latest formed mappings between the first task and the first set of queries up to the first percentage;
retrieving the second percentage of mappings between the second task and the second set of queries by selecting the latest formed mappings between the second task and the second set of queries up to the second percentage;
using the retrieved mappings to train a classifier for classifying a query into at least one task; and
storing the classifier on a computer-readable storage medium.
12. The method of claim 11 further comprising forming mappings between the first task and the first set of queries through steps comprising:
receiving a query from a user;
identifying a task for the query and displaying the task to the user;
logging a task selected by the user and the query; and
using the logged task and the query to form the mappings.
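For readers who want to see the claimed training loop end to end, the following is a minimal, illustrative sketch rather than an implementation disclosed in this patent. All names in it (Mapping, QueryTaskLog, train_classifier, and the example tasks) are hypothetical. It condenses the flow of claims 1, 6, and 11: log the task a user selects for each query as a time-stamped mapping, then retrain on only the most recently formed mappings, with a separately chosen retention percentage per task.

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class Mapping:
    query: str          # the user's query text
    task: str           # the task the user actually selected
    timestamp: float    # when the mapping was generated


@dataclass
class QueryTaskLog:
    mappings: list = field(default_factory=list)

    def log_selection(self, query: str, task: str, timestamp: float) -> None:
        """Record a query -> selected-task mapping (claims 1, 6, 12)."""
        self.mappings.append(Mapping(query, task, timestamp))

    def select_recent(self, percent_per_task: dict) -> list:
        """Select fewer than all mappings: for each task, keep only the
        latest-formed fraction of that task's mappings (claims 5, 10, 11)."""
        by_task = defaultdict(list)
        for m in self.mappings:
            by_task[m.task].append(m)
        selected = []
        for task, ms in by_task.items():
            ms.sort(key=lambda m: m.timestamp, reverse=True)  # newest first
            pct = percent_per_task.get(task, 1.0)
            keep = max(1, int(len(ms) * pct))
            selected.extend(ms[:keep])
        return selected


def train_classifier(mappings: list):
    """Stand-in for training an example-based categorizer on the selected
    (query, task) pairs; a real system would fit a text classifier here."""
    training_pairs = [(m.query, m.task) for m in mappings]
    return training_pairs  # placeholder for a trained model


# Example: different retention percentages for different tasks (claim 11).
log = QueryTaskLog()
log.log_selection("resize a photo", "Image Editing", timestamp=1.0)
log.log_selection("crop picture", "Image Editing", timestamp=2.0)
log.log_selection("set up printer", "Printer Setup", timestamp=1.5)
recent = log.select_recent({"Image Editing": 0.5, "Printer Setup": 0.8})
new_classifier = train_classifier(recent)
```

Claims 2-3 and 7-8 extend this flow by training a second classifier on a different subset of mappings and testing which performs better; with the sketch above, that is two calls to train_classifier on two different select_recent results, followed by evaluation against a held-out set of logged mappings.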
US11/830,375 2002-12-05 2007-07-30 Adapting a search classifier based on user queries Abandoned US20070276818A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/830,375 US20070276818A1 (en) 2002-12-05 2007-07-30 Adapting a search classifier based on user queries

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/310,408 US7266559B2 (en) 2002-12-05 2002-12-05 Method and apparatus for adapting a search classifier based on user queries
US11/830,375 US20070276818A1 (en) 2002-12-05 2007-07-30 Adapting a search classifier based on user queries

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/310,408 Division US7266559B2 (en) 2002-12-05 2002-12-05 Method and apparatus for adapting a search classifier based on user queries

Publications (1)

Publication Number Publication Date
US20070276818A1 true US20070276818A1 (en) 2007-11-29

Family

ID=32468031

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/310,408 Expired - Fee Related US7266559B2 (en) 2002-12-05 2002-12-05 Method and apparatus for adapting a search classifier based on user queries
US11/830,375 Abandoned US20070276818A1 (en) 2002-12-05 2007-07-30 Adapting a search classifier based on user queries

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/310,408 Expired - Fee Related US7266559B2 (en) 2002-12-05 2002-12-05 Method and apparatus for adapting a search classifier based on user queries

Country Status (1)

Country Link
US (2) US7266559B2 (en)


Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7904929B1 (en) * 2003-10-30 2011-03-08 Microsoft Corporation Log entries
US8301584B2 (en) * 2003-12-16 2012-10-30 International Business Machines Corporation System and method for adaptive pruning
US7577641B2 (en) * 2004-09-07 2009-08-18 Sas Institute Inc. Computer-implemented system and method for analyzing search queries
DK1666074T3 (en) 2004-11-26 2008-09-08 Bae Ro Gmbh & Co Kg sterilization lamp
US7328199B2 (en) * 2005-10-07 2008-02-05 Microsoft Corporation Componentized slot-filling architecture
US7606700B2 (en) * 2005-11-09 2009-10-20 Microsoft Corporation Adaptive task framework
US20070106496A1 (en) * 2005-11-09 2007-05-10 Microsoft Corporation Adaptive task framework
US7822699B2 (en) * 2005-11-30 2010-10-26 Microsoft Corporation Adaptive semantic reasoning engine
US20070130134A1 (en) * 2005-12-05 2007-06-07 Microsoft Corporation Natural-language enabling arbitrary web forms
US7933914B2 (en) * 2005-12-05 2011-04-26 Microsoft Corporation Automatic task creation and execution using browser helper objects
US7831585B2 (en) * 2005-12-05 2010-11-09 Microsoft Corporation Employment of task framework for advertising
US20070203869A1 (en) * 2006-02-28 2007-08-30 Microsoft Corporation Adaptive semantic platform architecture
US7996783B2 (en) * 2006-03-02 2011-08-09 Microsoft Corporation Widget searching utilizing task framework
US7620634B2 (en) * 2006-07-31 2009-11-17 Microsoft Corporation Ranking functions using an incrementally-updatable, modified naïve bayesian query classifier
US8014591B2 (en) * 2006-09-13 2011-09-06 Aurilab, Llc Robust pattern recognition system and method using socratic agents
US7603348B2 (en) * 2007-01-26 2009-10-13 Yahoo! Inc. System for classifying a search query
US20090164394A1 (en) * 2007-12-20 2009-06-25 Microsoft Corporation Automated creative assistance
US8407214B2 (en) * 2008-06-25 2013-03-26 Microsoft Corp. Constructing a classifier for classifying queries
US8185432B2 (en) 2009-05-08 2012-05-22 Sas Institute Inc. Computer-implemented systems and methods for determining future profitability
US8290884B2 (en) * 2009-11-23 2012-10-16 Palo Alto Research Center Incorporated Method for approximating user task representations by document-usage clustering
US9223888B2 (en) 2011-09-08 2015-12-29 Bryce Hutchings Combining client and server classifiers to achieve better accuracy and performance results in web page classification
US9727652B2 (en) * 2013-07-22 2017-08-08 International Business Machines Corporation Utilizing dependency among internet search results
US9411905B1 (en) * 2013-09-26 2016-08-09 Groupon, Inc. Multi-term query subsumption for document classification
KR101770527B1 (en) * 2013-11-27 2017-08-22 가부시키가이샤 엔티티 도코모 Automatic task classification based upon machine learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002091211A1 (en) * 2001-05-07 2002-11-14 Biowulf Technologies, Llc Kernels and methods for selecting kernels for use in learning machines

Patent Citations (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5671333A (en) * 1994-04-07 1997-09-23 Lucent Technologies Inc. Training apparatus and method
US6092059A (en) * 1996-12-27 2000-07-18 Cognex Corporation Automatic classifier for real time inspection and classification
US20010037324A1 (en) * 1997-06-24 2001-11-01 International Business Machines Corporation Multilevel taxonomy based on features derived from training documents classification using fisher values as discrimination values
US6789069B1 (en) * 1998-05-01 2004-09-07 Biowulf Technologies Llc Method for enhancing knowledge discovered from biological data using a learning machine
US20050131847A1 (en) * 1998-05-01 2005-06-16 Jason Weston Pre-processed feature ranking for a support vector machine
US6253169B1 (en) * 1998-05-28 2001-06-26 International Business Machines Corporation Method for improvement accuracy of decision tree based text categorization
US6192360B1 (en) * 1998-06-23 2001-02-20 Microsoft Corporation Methods and apparatus for classifying text and for building a text classifier
US20060074835A1 (en) * 1999-04-09 2006-04-06 Maggioni Mauro M System and method for hyper-spectral analysis
US20060265364A1 (en) * 2000-03-09 2006-11-23 Keith Robert O Jr Method and apparatus for organizing data by overlaying a searchable database with a directory tree structure
US7062488B1 (en) * 2000-08-30 2006-06-13 Richard Reisman Task/domain segmentation in applying feedback to command control
US20020078044A1 (en) * 2000-12-19 2002-06-20 Jong-Cheol Song System for automatically classifying documents by category learning using a genetic algorithm and a term cluster and method thereof
US20020103775A1 (en) * 2001-01-26 2002-08-01 Quass Dallan W. Method for learning and combining global and local regularities for information extraction and classification
US20020152190A1 (en) * 2001-02-07 2002-10-17 International Business Machines Corporation Customer self service subsystem for adaptive indexing of resource solutions and resource lookup
US20020107843A1 (en) * 2001-02-07 2002-08-08 International Business Machines Corporation Customer self service subsystem for classifying user contexts
US6901398B1 (en) * 2001-02-12 2005-05-31 Microsoft Corporation System and method for constructing and personalizing a universal information classifier
US6886008B2 (en) * 2001-03-08 2005-04-26 Technion Research & Development Foundation Ltd. Machine learning by construction of a decision function
US20020194161A1 (en) * 2001-04-12 2002-12-19 Mcnamee J. Paul Directed web crawler with machine learning
US20030093395A1 (en) * 2001-05-10 2003-05-15 Honeywell International Inc. Indexing of knowledge base in multilayer self-organizing maps with hessian and perturbation induced fast learning
US20050216426A1 (en) * 2001-05-18 2005-09-29 Weston Jason Aaron E Methods for feature selection in a learning machine
US20050066236A1 (en) * 2001-05-24 2005-03-24 Microsoft Corporation Automatic classification of event data
US20040162852A1 (en) * 2001-06-14 2004-08-19 Kunbin Qu Multidimensional biodata integration and relationship inference
US20030004966A1 (en) * 2001-06-18 2003-01-02 International Business Machines Corporation Business method and apparatus for employing induced multimedia classifiers based on unified representation of features reflecting disparate modalities
US20030046311A1 (en) * 2001-06-19 2003-03-06 Ryan Baidya Dynamic search engine and database
US7089226B1 (en) * 2001-06-28 2006-08-08 Microsoft Corporation System, representation, and method providing multilevel information retrieval with clarification dialog
US20050108200A1 (en) * 2001-07-04 2005-05-19 Frank Meik Category based, extensible and interactive system for document retrieval
US6701333B2 (en) * 2001-07-17 2004-03-02 Hewlett-Packard Development Company, L.P. Method of efficient migration from one categorization hierarchy to another hierarchy
US20030018658A1 (en) * 2001-07-17 2003-01-23 Suermondt Henri Jacques Method of efficient migration from one categorization hierarchy to another hierarchy
US20030033274A1 (en) * 2001-08-13 2003-02-13 International Business Machines Corporation Hub for strategic intelligence
US20030046297A1 (en) * 2001-08-30 2003-03-06 Kana Software, Inc. System and method for a partially self-training learning system
US7092888B1 (en) * 2001-10-26 2006-08-15 Verizon Corporate Services Group Inc. Unsupervised training in natural language call routing
US20030110147A1 (en) * 2001-12-08 2003-06-12 Li Ziqing Method for boosting the performance of machine-learning classifiers
US20030154181A1 (en) * 2002-01-25 2003-08-14 Nec Usa, Inc. Document clustering with cluster refinement and model selection capabilities
US20030167252A1 (en) * 2002-02-26 2003-09-04 Pliant Technologies, Inc. Topic identification and use thereof in information retrieval systems
US20030200188A1 (en) * 2002-04-19 2003-10-23 Baback Moghaddam Classification with boosted dyadic kernel discriminants
US20030233350A1 (en) * 2002-06-12 2003-12-18 Zycus Infotech Pvt. Ltd. System and method for electronic catalog classification using a hybrid of rule based and statistical method
US20040120572A1 (en) * 2002-10-31 2004-06-24 Eastman Kodak Company Method for using effective spatio-temporal image recomposition to improve scene classification

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110167053A1 (en) * 2006-06-28 2011-07-07 Microsoft Corporation Visual and multi-dimensional search
US20100114882A1 (en) * 2006-07-21 2010-05-06 Aol Llc Culturally relevant search results
US8700619B2 (en) * 2006-07-21 2014-04-15 Aol Inc. Systems and methods for providing culturally-relevant search results to users
US20140172847A1 (en) * 2006-07-21 2014-06-19 Aol Inc. Systems and methods for providing culturally-relevant search results to users
US9442985B2 (en) * 2006-07-21 2016-09-13 Aol Inc. Systems and methods for providing culturally-relevant search results to users
US20090259635A1 (en) * 2008-04-10 2009-10-15 Ntt Docomo, Inc. Information delivery apparatus and information delivery method
US20100257202A1 (en) * 2009-04-02 2010-10-07 Microsoft Corporation Content-Based Information Retrieval
US8346800B2 (en) * 2009-04-02 2013-01-01 Microsoft Corporation Content-based information retrieval
US9330176B2 (en) * 2012-11-14 2016-05-03 Sap Se Task-oriented search engine output

Also Published As

Publication number Publication date
US20040111419A1 (en) 2004-06-10
US7266559B2 (en) 2007-09-04

Similar Documents

Publication Publication Date Title
US7266559B2 (en) Method and apparatus for adapting a search classifier based on user queries
US7231375B2 (en) Computer aided query to task mapping
US6012053A (en) Computer system with user-controlled relevance ranking of search results
US6356900B1 (en) Online modifications of relations in multidimensional processing
US7730072B2 (en) Automated adaptive classification system for knowledge networks
US6434557B1 (en) Online syntheses programming technique
US5764973A (en) System for generating structured query language statements and integrating legacy systems
US11288242B2 (en) Similarity-based search engine
US6801910B1 (en) Method and system for guiding drilling in a report generated by a reporting system
US6418427B1 (en) Online modifications of dimension structures in multidimensional processing
KR101150063B1 (en) Analyzing operational and other data from search system or the like
CN100565509C (en) Use the system and method for click distance to the Search Results classification
US7117206B1 (en) Method for ranking hyperlinked pages using content and connectivity analysis
RU2378693C2 (en) Matching request and record
US7373351B2 (en) Generic search engine framework
CN109284363A (en) A kind of answering method, device, electronic equipment and storage medium
US20040139070A1 (en) Method and apparatus for storing data as objects, constructing customized data retrieval and data processing requests, and performing householding queries
US5842219A (en) Method and system for providing a multiple property searching capability within an object-oriented distributed computing network
US11687544B2 (en) Adaptive analytics user interfaces
US6782391B1 (en) Intelligent knowledge base content categorizer (IKBCC)
US6697087B1 (en) Updating diagrams of dynamic representational Models of dynamic systems
US20040181518A1 (en) System and method for an OLAP engine having dynamic disaggregation
US7783657B2 (en) Search authoring metrics and debugging
CA2394713A1 (en) Static drill-through modelling with graphical interface
US6473764B1 (en) Virtual dimensions in databases and method therefor

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034766/0001

Effective date: 20141014