WO2015122466A1 - Computer, method executed by computer, computer program, and face bow - Google Patents


Info

Publication number
WO2015122466A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
dentition
image
patient
mandibular
Prior art date
Application number
PCT/JP2015/053851
Other languages
French (fr)
Japanese (ja)
Inventor
らら 高橋
高橋 淳
Original Assignee
有限会社 メディコム
Priority date
Filing date
Publication date
Application filed by 有限会社 メディコム
Publication of WO2015122466A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00 Dental auxiliary appliances
    • A61C19/04 Measuring instruments specially adapted for dentistry
    • A61C19/045 Measuring instruments specially adapted for dentistry for recording mandibular movement, e.g. face bows
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C11/00 Dental articulators, i.e. for simulating movement of the temporo-mandibular joints; Articulation forms or mouldings
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C9/00 Impression cups, i.e. impression trays; Impression methods
    • A61C9/004 Means or methods for taking digitized impressions

Definitions

  • The present invention relates to a computer used in the dental field, and more particularly to a computer used in combination with a face bow.
  • Face bows are used in the dental field.
  • A face bow is used when performing dental prosthetic treatment, temporomandibular joint occlusion treatment, and occlusal diagnosis.
  • The most common face bow is used to measure and record the positional relationship, including the relative angle, between a reference plane of the skull (for example, the Frankfurt plane, which connects the lower margin of the right orbit of the skull with the upper edges of the left and right ear canals) and the maxillary dentition or maxilla.
  • The face bow is roughly divided into an upper bow, which can be fixed to the patient's face with its relationship to the anatomical reference plane assumed for the patient's skull uniquely determined; a bite fork, which can be fixed to the patient's maxillary dentition while recording the fixed state; and connecting means, which can fix the upper bow and the bite fork to each other at an arbitrary relative position and angle while recording that position and angle.
  • The upper bow of one type of face bow has, for example, the shape of a pair of glasses.
  • This upper bow includes a main body, corresponding to the frame of the glasses, which is positioned in front of the face when in use, and temples, corresponding to the temples of the glasses, which are attached to both sides of the main body.
  • The distal end of each temple is bent inward so that it can be inserted into the patient's ear canal.
  • A support device that can be fixed against the patient's nasal root recess is attached to the side of the main body that faces the face when the upper bow is in use.
  • The position of the tip of each temple can be moved, for example, forward, backward, left, and right with respect to the reference plane.
  • The support device fixed to the nasal root recess can likewise be moved, for example, back and forth and up and down with respect to the reference plane.
  • When using the face bow, both temple tips are fixed in the patient's ear canals and the support device is fixed to the nasal root recess. In this state, the main body of the upper bow is adjusted so as to assume a predetermined posture (for example, parallel to the reference plane of the skull).
  • The upper bow is thus uniquely positioned with respect to the facial cranium, more specifically with respect to the anatomical reference plane, by three portions: the tips of the two temples and the tip of the rod of the support device.
  • The bite fork is a U-shaped plate corresponding to the occlusal surface of the dentition. A marking paste, which is a curable substance such as modelling compound or bite wax, is applied to its upper surface, and the bite fork is then fixed to the lower surface of the maxillary dentition. The relationship between the bite fork and the maxillary dentition can be recorded by the shape of the lower surface of the maxillary dentition impressed in the marking paste.
  • The connecting means is constituted by, for example, rod-like bodies connected by a plurality of spherical joints, and can fix the upper bow and the bite fork at an arbitrary relative position and angle.
  • The spherical joints and the rod-like bodies are provided with predetermined scales, so that the posture taken by the connecting means (for example, the position and angle of a given spherical joint and of the rod-like body connected to it) can be recorded.
  • The upper bow of the face bow can be uniquely positioned with respect to the reference plane as described above. The bite fork connected to one end of the connecting means, whose posture can be recorded and whose other end is connected to the upper bow, can therefore record its relative positional relationship to the upper bow and, indirectly, to the reference plane. Since the relative position and angle of the lower surface of the patient's maxillary dentition with respect to the bite fork are recorded in the marking paste, and the relative positional relationship between the bite fork and the reference plane is recorded as described above, the position and angle of the patient's maxillary dentition relative to the reference plane can be recorded.
  • In short, the face bow records, for each individual patient, the relative positional relationship, including the angle, between the reference plane and the maxillary dentition.
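  • This chain of records can be pictured as a composition of rigid transforms. The sketch below is illustrative only and is not part of the patent: all names and numerical offsets are assumptions, and each record is treated as a 4x4 homogeneous transform.

        # Illustrative only: the face-bow record as a chain of rigid transforms.
        # T_a_b maps coordinates expressed in frame b into frame a.
        import numpy as np

        def rigid(rotation, translation):
            """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
            T = np.eye(4)
            T[:3, :3] = rotation
            T[:3, 3] = translation
            return T

        # Hypothetical values; in practice these come from the face-bow record.
        T_ref_upperbow = rigid(np.eye(3), np.zeros(3))                    # upper bow fixed relative to the reference plane
        T_upperbow_fork = rigid(np.eye(3), np.array([0.0, 65.0, -30.0]))  # recorded posture of the connecting means (mm)
        T_fork_dentition = rigid(np.eye(3), np.array([0.0, 5.0, 3.0]))    # imprint of the dentition in the marking paste

        # Composing the chain gives the maxillary dentition pose relative to the reference plane.
        T_ref_dentition = T_ref_upperbow @ T_upperbow_fork @ T_fork_dentition
        print(T_ref_dentition)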
  • The record of the relative positional relationship between the reference plane and the maxillary dentition measured by the face bow is generally used with an articulator.
  • The articulator combines a model of the patient's maxillary dentition with a model of the mandibular dentition in order to reproduce the meshing of the maxillary and mandibular dentitions in the living patient, and the above-mentioned record is used for this purpose.
  • The work of transferring the bite of the living patient to an articulator is generally called face bow transfer.
  • The articulator has a member corresponding to the upper jaw of the skull and a member corresponding to the lower jaw.
  • These members are configured to open and close in the same manner as the upper and lower jaws in the living body. A model of the patient's maxillary dentition, reproduced for example by taking an impression of the patient's teeth, can be attached to the member corresponding to the upper jaw, and a model of the mandibular dentition can be attached to the member corresponding to the lower jaw.
  • The relative positional relationship between the lower surface of the patient's maxillary dentition and the upper surface of the mandibular dentition can be reproduced by recording the patient's bite with a known method using, for example, a mouthpiece or a compound.
  • By positioning the mandibular dentition model attached to the member corresponding to the lower jaw of the articulator using that record, the relative positional relationship between the dentition models attached to the articulator and the reference plane comes to reflect the relative positional relationship, in the living patient, between the lower surface of the maxillary dentition, the upper surface of the mandibular dentition, and the reference plane.
  • As a result, the relative positional relationship of the patient's maxillary and mandibular dentitions with respect to the reference plane can be accurately reproduced on the articulator.
  • Using the maxillary and mandibular dentition models attached to the articulator, the dentist can perform occlusal diagnosis, or the simulations required prior to dental prosthetic treatment and temporomandibular joint occlusion treatment (such as checking the meshing of a denture).
  • The relative relationship between the lower surface of the maxillary dentition and the reference plane measured with the face bow has thus generally been used so that the relationship between the maxillary dentition attached to the articulator and the reference plane accurately reproduces the relationship between the reference plane and the maxillary dentition in the patient.
  • In other words, the face bow has been a device subordinate to the articulator, used to employ the articulator accurately, and it has long been common knowledge in the dental field that the articulator is indispensable for treatment and diagnosis.
  • Recently, attempts have been made to reproduce on a computer a virtual articulator, in other words a three-dimensional image of the patient's upper and lower dentitions including their engagement.
  • With a virtual articulator it is not necessary to create physical models of the patient's dentitions, so diagnosis and treatment can be speeded up, and the virtual articulator data (electronic data) used for the diagnosis and treatment of a patient can conveniently be shared, for example between dentists.
  • Two methods of generating virtual articulator data are typical. In the first, an articulator is created once, and numerical values measured from it are input to a computer to create the virtual articulator data.
  • These numerical values indicate, for example, how far the member of the articulator corresponding to the upper jaw, to which the maxillary dentition model is attached, has moved back and forth or up and down from a predetermined reference point, or how much the occlusal surface of the maxillary dentition is inclined.
  • In the second method, an articulator is not used. Instead, for example, a three-dimensional image of the patient's head taken by a CT (Computed Tomography) imaging apparatus or the like and three-dimensional images of the patient's upper and lower dentitions taken separately are synthesized to create the virtual articulator data.
  • In the first method, a model of each of the maxillary and mandibular dentitions must be made, and an actual articulator must be assembled using those models, so the speed of diagnosis and treatment cannot be increased. Moreover, once the articulator has actually been made, the dentist easily gets the impression that creating the virtual articulator data merely adds extra work, so it is hard to motivate dentists to use virtual articulator data.
  • In the second method, an articulator is not actually made, but a device capable of three-dimensional imaging of the patient's head, such as a CT imaging apparatus, is required instead. Such a device is generally very expensive and is not widely available in dental clinics.
  • The present invention provides a virtual articulator data generation device capable of generating virtual articulator data, which is the data of a virtual articulator, using images of a face bow attached to a patient, the face bow comprising: an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the skull uniquely determined; a bite fork that, with a curable substance applied, can be fixed to the lower surface of the patient's maxillary dentition; and connecting means that can fix the upper bow and the bite fork at an arbitrarily adjusted positional relationship, including the relative angle.
  • The virtual articulator data generation device of the present invention comprises: receiving means for receiving maxillary dentition image data, which is the data of a maxillary dentition image, i.e., an image of the maxillary dentition; mandibular dentition image data, which is the data of a mandibular dentition image, i.e., an image of the mandibular dentition; upper image data, which is the data of an upper image showing the relative positional relationship, including the angle, between the bite fork and the lower surface of the maxillary dentition; posture image data, which is the data of a posture image showing the posture of the connecting means; and occlusal image data, which is the data of an occlusal image showing the meshing state of the maxillary and mandibular dentitions; model generation means for generating, from the maxillary dentition image data received by the receiving means, maxillary dentition model data, which is the data of a three-dimensional model of the maxillary dentition, and for generating, from the mandibular dentition image data received by the receiving means, mandibular dentition model data, which is the data of a three-dimensional model of the mandibular dentition; position data generation means for obtaining, from the upper image data and the posture image data received by the receiving means, the relative position, including the angle, of the maxillary dentition with respect to the reference plane in the living patient and generating first position data, which is data on the position of the maxillary dentition model with respect to a virtual reference plane, and for obtaining, from the occlusal image data received by the receiving means, the relative position, including the angle, of the mandibular dentition with respect to the maxillary dentition and generating second position data, which is data on the position of the mandibular dentition model with respect to the maxillary dentition model; and connection means for receiving the maxillary dentition model data and the mandibular dentition model data from the model generation means and the first position data and the second position data from the position data generation means, and generating the virtual articulator data so that the positional relationship between the maxillary dentition model and the mandibular dentition model with respect to the virtual reference plane reproduces the assumed positional relationship between the maxillary and mandibular dentitions with respect to the reference plane in the living body.
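  • As one way to picture the division of labor among the receiving means, model generation means, position data generation means, and connection means described above, the following sketch outlines the data flow. It is not the patent's implementation; all class, field, and function names are hypothetical, and the model-building and pose-estimation steps are left as stubs.

        # Illustrative sketch of the data flow among the means described above.
        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class ReceivedImages:                     # gathered by the receiving means
            maxillary_dentition_images: list      # images of the upper dentition
            mandibular_dentition_images: list     # images of the lower dentition
            upper_image: np.ndarray               # bite fork vs. lower surface of the upper dentition
            posture_images: list                  # posture of the connecting means
            occlusal_image: np.ndarray            # meshing of the upper and lower dentitions

        @dataclass
        class VirtualArticulatorData:             # produced by the connection means
            maxillary_model: np.ndarray           # 3-D model of the upper dentition (e.g. vertices)
            mandibular_model: np.ndarray          # 3-D model of the lower dentition
            first_position: np.ndarray            # 4x4: upper model w.r.t. the virtual reference plane
            second_position: np.ndarray           # 4x4: lower model w.r.t. the upper model

        def build_model(images):
            """Stub for the model generation means (3-D reconstruction would go here)."""
            return np.zeros((0, 3))

        def first_position_from(upper_image, posture_images):
            """Stub for the position data generation means (upper image and posture images)."""
            return np.eye(4)

        def second_position_from(occlusal_image):
            """Stub for the position data generation means (occlusal image)."""
            return np.eye(4)

        def generate_virtual_articulator(data):
            upper_model = build_model(data.maxillary_dentition_images)
            lower_model = build_model(data.mandibular_dentition_images)
            first_pos = first_position_from(data.upper_image, data.posture_images)
            second_pos = second_position_from(data.occlusal_image)
            return VirtualArticulatorData(upper_model, lower_model, first_pos, second_pos)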
  • This virtual articulator data generation device generates virtual articulator data, which is the data of the virtual articulator, using the maxillary dentition image data and the mandibular dentition image data received via the receiving means.
  • To this end, the virtual articulator data generation device of the present invention is provided with model generation means that generates, from the maxillary dentition image data received by the receiving means, maxillary dentition model data, which is the data of a three-dimensional model of the maxillary dentition, and generates, from the mandibular dentition image data received by the receiving means, mandibular dentition model data, which is the data of a three-dimensional model of the mandibular dentition.
  • In the virtual articulator, the maxillary dentition model based on the maxillary dentition model data and the mandibular dentition model based on the mandibular dentition model data generated by the model generation means are used in place of the maxillary and mandibular dentition models used in a conventional articulator. Therefore, if this virtual articulator data generation device is used, neither a physical maxillary dentition model nor a physical mandibular dentition model is required.
  • However, the maxillary dentition model and the mandibular dentition model must be aligned so as to correspond to the patient's maxillary and mandibular dentitions.
  • This alignment of the maxillary and mandibular dentition models is performed using the upper image data, which is the data of the upper image showing the relative positional relationship, including the angle, between the bite fork and the lower surface of the maxillary dentition; the posture image data, which is the data of the posture image showing the posture of the connecting means; and the occlusal image data, which is the data of the occlusal image showing the meshing state of the maxillary and mandibular dentitions.
  • For that reason, the receiving means of this virtual articulator data generation device accepts not only the maxillary dentition image data and the mandibular dentition image data but also the three types of data used for their alignment.
  • The processing executed by the virtual articulator data generation device of the present invention broadly follows the procedure for performing a face bow transfer from the upper bow to an actual articulator, although such computer-based processing is not contemplated in the conventional concept of face bow transfer.
  • Therefore, the present invention makes it possible to generate virtual articulator data easily, without requiring excessive labor from the dentist and without an expensive apparatus such as a CT imaging apparatus.
  • The position data generation means generates the data used for the above-described alignment.
  • Specifically, the position data generation means obtains, from the upper image data and the posture image data, the relative position, including the angle, of the maxillary dentition with respect to the reference plane in the living patient, and generates first position data, which is data on the position of the maxillary dentition model with respect to the virtual reference plane, i.e., a virtual counterpart of the reference plane.
  • The first position data corresponds to the information used in an actual articulator to determine the position of the maxillary dentition model relative to the reference plane.
  • The position data generation means also obtains, from the occlusal image data, the relative position, including the angle, of the mandibular dentition with respect to the maxillary dentition, and generates second position data, which is data on the position of the mandibular dentition model with respect to the maxillary dentition model.
  • The second position data corresponds to the information used in an actual articulator to determine the position of the upper surface of the mandibular dentition relative to the lower surface of the maxillary dentition.
  • The connection means then aligns the maxillary dentition model and the mandibular dentition model using the first position data and the second position data, thereby generating the virtual articulator data.
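  • For illustration only, the alignment performed by the connection means can be sketched as applying the two transforms to the model vertices; the 4x4 matrix convention and the function name are assumptions, not the patent's method.

        import numpy as np

        def place_models(upper_vertices, lower_vertices, first_position, second_position):
            """Place both dentition models in the virtual reference-plane frame.

            first_position : 4x4 pose of the upper model w.r.t. the virtual reference plane.
            second_position: 4x4 pose of the lower model w.r.t. the upper model.
            """
            def apply(T, verts):
                homo = np.hstack([verts, np.ones((len(verts), 1))])  # homogeneous coordinates
                return (homo @ T.T)[:, :3]

            upper_in_ref = apply(first_position, upper_vertices)
            lower_in_ref = apply(first_position @ second_position, lower_vertices)
            return upper_in_ref, lower_in_ref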
  • Because the virtual articulator data generation device of the present invention is configured as described above, virtual articulator data can be generated easily and inexpensively.
  • The dentist is not forced to introduce an expensive apparatus for three-dimensional imaging, and the patient is not exposed to the radiation involved in three-dimensional imaging of the head.
  • The maxillary dentition image, the mandibular dentition image, the upper image, the occlusal image, and the posture image in the present application may all be three-dimensional images.
  • Likewise, the lower image described below may be a three-dimensional image.
  • The occlusal image used in the virtual articulator data generation device of the present application is an image showing the meshing state of the patient's maxillary and mandibular dentitions. It may be an image that directly captures the meshing portion of the maxillary and mandibular dentitions, or it may be an image of a marking paste, bitten between the patient's maxillary and mandibular dentitions, on which the shapes of the occluding maxillary and mandibular dentitions are marked. Whichever image is used, the meshing state of the patient's maxillary and mandibular dentitions can be recorded reproducibly.
  • If an image of a marking paste on which the shapes of the maxillary and mandibular dentitions are marked is captured outside the oral cavity, the occlusal image can be taken as a three-dimensional image using a three-dimensional imaging device of the kind widely used by dental technicians.
  • The connecting means that connects the bite fork to the upper bow is often composed of a plurality of members.
  • The posture of the connecting means, which determines the positional relationship between the bite fork and the upper bow, can be determined from the state of the connecting means shown in the posture image.
  • The posture image may consist of a plurality of images taken from a plurality of directions, in which case there is one item of posture image data for each posture image, i.e., the same number as the posture images.
  • When the connecting means is composed of a plurality of members, the members may be provided, as appropriate, with a plurality of marks whose mutual positional relationships change when the relative positional relationship, including the angle, of the members changes.
  • In that case, the position data generation means may detect the posture of the connecting means from the marks shown in the posture image obtained from the posture image data. This makes it easy for the position data generation means to detect the posture of the connecting means from the posture image.
  • Alternatively, when the connecting means is composed of a plurality of members, the members may be colored so that their appearance changes when the relative positional relationship, including the angle, of the members changes.
  • In that case, the position data generation means may detect the posture of the connecting means from the colors shown in the posture image obtained from the posture image data. This also makes it easy for the position data generation means to detect the posture of the connecting means from the posture image. One conceivable form of mark-based detection is sketched below.
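  • One conceivable way to recover the posture of a marked member from a posture image is point-set registration between the detected mark coordinates and their known positions on the member; the sketch below uses the Kabsch least-squares method and is an assumption, not the detection algorithm specified by the patent (a color-based variant could be built analogously).

        import numpy as np

        def estimate_member_pose(marks_on_member, marks_observed):
            """Rigid pose (R, t) aligning known mark positions on a member to observed positions.

            marks_on_member: (N, 3) mark coordinates in the member's own frame (from its design).
            marks_observed : (N, 3) the same marks located in the image/scanner frame.
            """
            src_c = marks_on_member - marks_on_member.mean(axis=0)
            dst_c = marks_observed - marks_observed.mean(axis=0)
            H = src_c.T @ dst_c
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = marks_observed.mean(axis=0) - R @ marks_on_member.mean(axis=0)
            return R, t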
  • The inventor of the present application also provides the following method, which can obtain the same effect as the above virtual articulator data generation device.
  • The method is a virtual articulator data generation method executed by a computer of a virtual articulator data generation device that can generate virtual articulator data, i.e., the data of a virtual articulator, using images of a face bow attached to a patient, the face bow comprising: an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the skull uniquely determined; a bite fork that, with a curable substance applied, can be fixed to the lower surface of the patient's maxillary dentition; and connecting means that can fix the upper bow and the bite fork at an arbitrarily adjusted positional relationship, including the relative angle.
  • The method includes, executed by the computer: a receiving process for receiving maxillary dentition image data, which is the data of a maxillary dentition image, i.e., an image of the maxillary dentition; mandibular dentition image data, which is the data of a mandibular dentition image, i.e., an image of the mandibular dentition; upper image data, which is the data of an upper image showing the relative positional relationship, including the angle, between the bite fork and the lower surface of the maxillary dentition; posture image data, which is the data of a posture image showing the posture of the connecting means; and occlusal image data, which is the data of an occlusal image showing the meshing state of the maxillary and mandibular dentitions; a model generation process for generating, from the maxillary dentition image data received in the receiving process, maxillary dentition model data, which is the data of a three-dimensional model of the maxillary dentition, and for generating, from the mandibular dentition image data received in the receiving process, mandibular dentition model data, which is the data of a three-dimensional model of the mandibular dentition; a position data generation process for obtaining, from the upper image data and the posture image data received in the receiving process, the relative position, including the angle, of the maxillary dentition with respect to the reference plane in the patient and generating first position data, which is data on the position of the maxillary dentition model with respect to the virtual reference plane, and for obtaining, from the occlusal image data received in the receiving process, the relative position, including the angle, of the mandibular dentition with respect to the maxillary dentition and generating second position data, which is data on the position of the mandibular dentition model with respect to the maxillary dentition model; and a connection process for generating the virtual articulator data, based on the maxillary dentition model data and the mandibular dentition model data generated in the model generation process and the first position data and the second position data generated in the position data generation process, so that the positional relationship between the maxillary dentition model and the mandibular dentition model with respect to the virtual reference plane reproduces the assumed positional relationship between the maxillary and mandibular dentitions with respect to the reference plane in the living body.
  • The inventor of the present application also provides the following computer program, which can obtain the same effect as the above virtual articulator data generation device.
  • The computer program is for causing a computer to function as a virtual articulator data generation device that can generate virtual articulator data, i.e., the data of a virtual articulator, using images of a face bow attached to a patient, the face bow comprising: an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the skull uniquely determined; a bite fork that, with a curable substance applied, can be fixed to the lower surface of the patient's maxillary dentition; and connecting means that can fix the upper bow and the bite fork at an arbitrarily adjusted positional relationship, including the relative angle.
  • The computer program causes the computer to function as: receiving means for receiving maxillary dentition image data, which is the data of a maxillary dentition image, i.e., an image of the maxillary dentition; mandibular dentition image data, which is the data of a mandibular dentition image, i.e., an image of the mandibular dentition; upper image data, which is the data of an upper image showing the relative positional relationship, including the angle, between the bite fork and the lower surface of the maxillary dentition; posture image data, which is the data of a posture image showing the posture of the connecting means; and occlusal image data, which is the data of an occlusal image showing the meshing state of the maxillary and mandibular dentitions; model generation means for generating, from the maxillary dentition image data received by the receiving means, maxillary dentition model data, which is the data of a three-dimensional model of the maxillary dentition, and for generating, from the mandibular dentition image data received by the receiving means, mandibular dentition model data, which is the data of a three-dimensional model of the mandibular dentition; position data generation means for obtaining, from the upper image data and the posture image data received by the receiving means, the relative position, including the angle, of the maxillary dentition with respect to the reference plane in the living patient and generating first position data, which is data on the position of the maxillary dentition model with respect to the virtual reference plane, and for obtaining, from the occlusal image data, the relative position, including the angle, of the mandibular dentition with respect to the maxillary dentition and generating second position data, which is data on the position of the mandibular dentition model with respect to the maxillary dentition model; and connection means for generating the virtual articulator data so that the positional relationship between the maxillary dentition model and the mandibular dentition model with respect to the virtual reference plane reproduces the assumed positional relationship between the maxillary and mandibular dentitions with respect to the reference plane in the living body.
  • The inventor of the present application also proposes, as aspects of the present invention, the following face bows that can be used in combination with the virtual articulator data generation device described above.
  • One example is a face bow comprising: an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the skull uniquely determined; a bite fork that, with a curable substance applied, can be fixed to the lower surface of the patient's maxillary dentition; and connecting means that can fix the upper bow and the bite fork at an arbitrarily adjusted positional relationship, including the relative angle.
  • The connecting means of this face bow is composed of a plurality of members, and the members are provided, as appropriate, with a plurality of marks whose mutual positional relationships change when the relative positional relationship, including the angle, of the members changes.
  • Another example is a face bow comprising the same upper bow, bite fork, and connecting means, in which the connecting means is composed of a plurality of members and the members are colored so that their appearance changes when the relative positional relationship, including the angle, of the members changes.
  • The present inventor also proposes the following mandibular movement recording device.
  • The mandibular movement recording device is used in combination with a mandibular movement detection device comprising: an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the skull uniquely determined, and that has a flag with a planar detection surface capable of detecting a contacted portion and output means for outputting the data of the contacted portion to the outside; a lower bite fork that, with a curable substance applied, can be fixed to the patient's mandibular dentition; and a lower bow that is connected to the lower bite fork and has a stylus which moves on the flag, touching it, in accordance with the mandibular movement performed by the patient.
  • This mandibular movement recording device comprises: recording means for recording virtual articulator data, which is data about a virtual articulator of the patient; second receiving means for receiving, from the output means of the flag, movement data, which is data about the mandibular movement, i.e., data about the portions contacted by the stylus moving with the patient's mandibular movement; third receiving means for receiving lower image data, which is the data of a lower image, i.e., an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; second position data generation means for generating, from the lower image data received by the third receiving means, third position data, which is data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and drawing means for writing, on a three-dimensional image of the virtual articulator specified by the virtual articulator data read from the recording means, marks indicating the mandibular movement of the patient whose positions are corrected by the third position data, based on the movement data received by the second receiving means.
  • The lower jaw does not only rotate with respect to the cranium about the hinge-like temporomandibular joints; it also moves back and forth and left and right with respect to the cranium.
  • It is therefore preferable to record the mandibular movement, and attempts have been made to record the patient's mandibular movement from this point of view.
  • For example, it has been proposed to record the movement of the mandible with respect to the cranium using a flag that is uniquely fixed with respect to the reference plane assumed for the patient's skull and a stylus that is fixed with respect to the patient's mandibular dentition.
  • To reproduce the mandibular movement, that is, the movement of the lower jaw with respect to the cranium, semi-adjustable articulators are commonly used in which the trajectory of the mandibular movement produced by the left and right temporomandibular joints, the angle of the path from the central position of the left and right mandibular heads over about 12 mm in the forward direction, and the curvature of the trajectory are set by selecting from several predetermined variations; however, the mandibular movement reproduced in this way can hardly be called an exact copy of that of the patient.
  • The mandibular movement recording device of the present application includes recording means for recording virtual articulator data, which is data about the patient's virtual articulator, and, based on the movement data input from the flag via the second receiving means and the virtual articulator data read from the recording means, writes marks indicating the patient's mandibular movement onto the three-dimensional image of the virtual articulator specified by that virtual articulator data. By viewing the image thus generated on a predetermined display, the dentist can grasp the patient's mandibular movement easily and accurately.
  • Moreover, the marks indicating the mandibular movement are position-corrected using the third position data generated from the lower image data, i.e., the data of the lower image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition.
  • Therefore, the marks indicating the mandibular movement written into the virtual articulator image are accurate, and a dentist who views the image can grasp the complicated mandibular movement of the living patient easily and accurately. A sketch of how such marks might be computed is given below.
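  • As a sketch of what the drawing means might compute (illustrative assumptions throughout: the third position data is treated as a 4x4 correction transform and the flag-to-reference transform is taken as known), the stylus contact points can be lifted into the virtual reference frame as follows.

        import numpy as np

        def movement_marks(contact_points_2d, flag_to_ref, third_position):
            """Convert stylus contact points on the flag into 3-D marks in the virtual reference frame.

            contact_points_2d: (N, 2) points output over time by the flag's detection surface.
            flag_to_ref      : 4x4 transform from the flag plane to the virtual reference plane.
            third_position   : 4x4 correction derived from the lower image data
                               (lower bite fork vs. mandibular dentition).
            """
            n = len(contact_points_2d)
            pts = np.hstack([contact_points_2d, np.zeros((n, 1)), np.ones((n, 1))])  # lift onto the flag plane (z = 0)
            T = third_position @ flag_to_ref
            return (pts @ T.T)[:, :3]  # one mark per recorded contact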
  • The inventor also proposes a corresponding method. The method is executed by a computer included in a mandibular movement recording device used in combination with a mandibular movement detection device comprising: an upper bow that can be fixed to the patient's cranium with its positional relationship, including the relative angle, to a predetermined reference plane of the cranium uniquely determined, and that has a flag with a planar detection surface capable of detecting a contacted portion and output means for outputting the data of the contacted portion to the outside; a lower bite fork that, with a curable substance applied, can be fixed to the upper surface of the patient's mandibular dentition; and a lower bow that is connected to the lower bite fork and has a stylus which moves on the flag, touching it, in accordance with the mandibular movement performed by the patient.
  • The method includes, executed by the computer: a recording process for recording virtual articulator data, which is data about a virtual articulator of the patient; a second receiving process for receiving, from the output means of the flag, movement data, which is data about the mandibular movement, i.e., data about the portions contacted by the stylus moving with the patient's mandibular movement; a third receiving process for receiving lower image data, which is the data of a lower image, i.e., an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; a second position data generation process for generating, from the lower image data received in the third receiving process, third position data, which is data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and a drawing process for writing, on the three-dimensional image of the virtual articulator specified by the virtual articulator data recorded in and read from the recording process, marks indicating the mandibular movement of the patient whose positions are corrected by the third position data, based on the movement data received in the second receiving process and the third position data generated in the second position data generation process.
  • A corresponding computer program is also proposed. The computer program is for causing a computer to function as a mandibular movement recording device used in combination with a mandibular movement detection device comprising: an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the skull uniquely determined, and that has a flag with a planar detection surface capable of detecting a contacted portion and output means for outputting the data of the contacted portion to the outside; a lower bite fork that, with a curable substance applied, can be fixed to the upper surface of the patient's mandibular dentition; and a lower bow that is connected to the lower bite fork and has a stylus which moves on the flag, touching it, in accordance with the mandibular movement performed by the patient.
  • The computer program causes the computer to function as: recording means for recording virtual articulator data, which is data about a virtual articulator of the patient; second receiving means for receiving, from the output means of the flag, movement data, which is data about the mandibular movement, i.e., data about the portions contacted by the stylus moving with the patient's mandibular movement; third receiving means for receiving lower image data, which is the data of a lower image, i.e., an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; second position data generation means for generating, from the lower image data received by the third receiving means, third position data, which is data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and drawing means for writing, on the image of the virtual articulator specified by the virtual articulator data read from the recording means, marks indicating the mandibular movement of the patient, based on the movement data received by the second receiving means and the third position data received from the second position data generation means.
  • The present inventor further proposes the following invention.
  • This mandibular movement recording device is likewise used in combination with a mandibular movement detection device comprising: an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the skull uniquely determined, and that has a flag with a planar detection surface capable of detecting a contacted portion and output means for outputting the data of the contacted portion to the outside; a lower bite fork that, with a curable substance applied, can be fixed to the upper surface of the patient's mandibular dentition; and a lower bow that is connected to the lower bite fork and has a stylus which moves on the flag, touching it, in accordance with the mandibular movement performed by the patient.
  • This mandibular movement recording device comprises: recording means for recording virtual articulator data, which is data about a virtual articulator of the patient; second receiving means for receiving, from the output means of the flag, movement data, which is data about the mandibular movement, i.e., data about the portions contacted by the stylus moving with the patient's mandibular movement; third receiving means for receiving lower image data, which is the data of a lower image, i.e., an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; second position data generation means for generating, from the lower image data received by the third receiving means, third position data, which is data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and moving image processing means for moving the mandible, whose position is corrected by the third position data, in the three-dimensional image of the virtual articulator specified by the virtual articulator data read from the recording means, so as to reproduce the patient's mandibular movement as an animation, based on the movement data received by the second receiving means.
  • In other words, this mandibular movement recording device, based on the movement data input from the flag via the second receiving means and the virtual articulator data read from the recording means, moves the lower jaw in the image of the virtual articulator specified by that virtual articulator data so as to reproduce the patient's mandibular movement as an animation. In this case, too, the mandibular movement is corrected by the third position data, as in the mandibular movement recording device described above. By viewing the image thus generated on a predetermined display, the dentist can grasp the patient's mandibular movement easily and accurately, as if using a fully adjustable articulator. A sketch of such frame-by-frame repositioning is given below.
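  • A frame-by-frame repositioning of the mandibular model could look like the following sketch; the pose representation, the correction order, and all names are assumptions rather than the patent's moving image processing means.

        import numpy as np

        def animate_mandible(lower_vertices, base_pose, movement_poses, third_position):
            """Yield the mandibular model's vertices for each animation frame.

            base_pose      : 4x4 resting pose of the lower model w.r.t. the virtual reference plane.
            movement_poses : iterable of 4x4 incremental poses derived from the movement data.
            third_position : 4x4 correction from the lower image data (bite fork vs. lower dentition).
            """
            homo = np.hstack([lower_vertices, np.ones((len(lower_vertices), 1))])
            for pose in movement_poses:
                frame_T = base_pose @ third_position @ pose   # corrected pose for this frame
                yield (homo @ frame_T.T)[:, :3]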
  • The inventor also proposes a corresponding method. The method is executed by a computer included in a mandibular movement recording device used in combination with a mandibular movement detection device comprising: an upper bow that can be fixed to the patient's cranium with its positional relationship, including the relative angle, to a predetermined reference plane of the cranium uniquely determined, and that has a flag with a planar detection surface capable of detecting a contacted portion and output means for outputting the data of the contacted portion to the outside; a lower bite fork that, with a curable substance applied, can be fixed to the upper surface of the patient's mandibular dentition; and a lower bow that is connected to the lower bite fork and has a stylus which moves on the flag, touching it, in accordance with the mandibular movement performed by the patient.
  • The method includes, executed by the computer: a recording process for recording virtual articulator data, which is data about a virtual articulator of the patient; a second receiving process for receiving, from the output means of the flag, movement data, which is data about the mandibular movement, i.e., data about the portions contacted by the stylus moving with the patient's mandibular movement; a third receiving process for receiving lower image data, which is the data of a lower image, i.e., an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; a second position data generation process for generating, from the lower image data received in the third receiving process, third position data, which is data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and a moving image processing process for moving the mandible, whose position is corrected by the third position data, in the three-dimensional image of the virtual articulator specified by the virtual articulator data recorded in and read from the recording process, so as to reproduce the patient's mandibular movement as an animation, based on the movement data received in the second receiving process.
  • A corresponding computer program is also proposed. The computer program is for causing a computer to function as a mandibular movement recording device used in combination with a mandibular movement detection device comprising: an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the skull uniquely determined, and that has a flag with a planar detection surface capable of detecting a contacted portion and output means for outputting the data of the contacted portion to the outside; a lower bite fork that, with a curable substance applied, can be fixed to the upper surface of the patient's mandibular dentition; and a lower bow that is connected to the lower bite fork and has a stylus which moves on the flag, touching it, in accordance with the mandibular movement performed by the patient.
  • The computer program causes the computer to function as: recording means for recording virtual articulator data, which is data about a virtual articulator of the patient; second receiving means for receiving, from the output means of the flag, movement data, which is data about the mandibular movement, i.e., data about the portions contacted by the stylus moving with the patient's mandibular movement; third receiving means for receiving lower image data, which is the data of a lower image, i.e., an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; second position data generation means for generating, from the lower image data received by the third receiving means, third position data, which is data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and moving image processing means for moving the lower jaw, whose position is corrected by the third position data, in the three-dimensional image of the virtual articulator specified by the virtual articulator data read from the recording means, so as to reproduce the patient's mandibular movement, based on the movement data received by the second receiving means.
  • Brief description of the drawings: a perspective view showing the overall structure of a face bow according to one embodiment (FIG. 1); a perspective view showing the structure of the upper bow included in the face bow shown in FIG. 1; an enlarged perspective view of a part of the connecting member of the face bow shown in FIG. 1; a perspective view showing the structure of the lower bow used in combination with the upper bow; a perspective view showing the external appearance of the diagnostic apparatus in one embodiment; a hardware block diagram of the main body of that diagnostic apparatus; and a block diagram showing the functional blocks generated in the main body of the diagnostic apparatus.
  • First, the face bow 100 used in this embodiment will be described.
  • The face bow 100 is used when creating virtual articulator data, which is the data of a virtual articulator.
  • The face bow 100 is also used when recording the mandibular movement of a patient using the virtual articulator data.
  • The face bow 100 is thus used in these two cases, but the form in which it is used differs in each case.
  • First, the face bow 100 as used to create virtual articulator data will be described.
  • The face bow 100 in that case is configured as shown in the accompanying perspective views.
  • The face bow 100 in this case includes an upper bow 110, a bite fork 120, and a connecting member 130 that connects the upper bow 110 and the bite fork 120, like a known face bow used for face bow transfer.
  • The face bow 100 of this embodiment may basically be a known, general-purpose one, but it differs from known face bows in that the connecting member 130 is marked or colored as described later.
  • The upper bow 110 can be adjusted in position, including its angle, with respect to the patient's head, and can be fixed in a state in which its positional relationship to a predetermined reference plane assumed for the patient's head is uniquely determined.
  • The upper bow 110 described in this embodiment has, as a whole, a glasses-like configuration.
  • The upper bow 110 includes a main body 111 having the shape of a glasses frame.
  • The main body 111 has, in this embodiment though not by way of limitation, the shape of an eyeglass frame that contains no lenses.
  • The bridge portion 111B has a hole (not shown) threaded on its inner circumferential surface, into which a screw 111B1 can be screwed.
  • Each end piece 111C is provided with a long hole 111C1, which is a hole extending over substantially its entire length in the length direction.
  • An attachment portion 111D is attached to the lower portion of one of the frame portions 111A, in this embodiment, though not by way of limitation, the left frame portion 111A.
  • A screw 111E penetrating vertically is attached to the attachment portion 111D, and a support block 112 is attached to the lower end of the screw 111E.
  • The support block 112 can be rotated about the screw 111E by turning the screw 111E, although this rotation does not occur unless a certain amount of force is applied.
  • The support block 112 is for fixing the positioning rod 113 (FIG. 2; note that the support block 112 and the positioning rod 113 are not shown in FIG. 1).
  • The support block 112 has a hole (not shown) extending through it in the front-rear direction.
  • The support block 112 is also provided with a hole (not shown), threaded on its inner peripheral surface and perpendicular to the above-mentioned hole penetrating the support block 112 in the front-rear direction, into which a screw 112A is screwed.
  • The positioning rod 113 includes a horizontal portion 113A that is generally horizontal when the face bow 100 is used, and a vertical portion 113B that is generally vertical when the face bow 100 is used.
  • A positioning sphere 113B1, which is pressed against the patient's nose when the face bow 100 is used, is attached to the upper end of the vertical portion 113B.
  • The horizontal portion 113A of the positioning rod 113 is inserted into the above-mentioned hole penetrating the support block 112.
  • The horizontal portion 113A can move back and forth in the hole and can rotate about its own axis.
  • When the screw 112A is tightened by turning it, its tip comes into contact with the horizontal portion 113A.
  • The horizontal portion 113A is then sandwiched between the inner surface of the hole penetrating the support block 112 in the front-rear direction and the screw 112A, and can thus be fixed to the support block 112 at any front-rear position and at any angle about its axis. Loosening the screw 112A naturally releases this fixation.
  • As a result, the vertical portion 113B of the positioning rod 113 can rotate in the horizontal direction about the support block 112, can move back and forth relative to the support block 112, and can rotate about the axis of the horizontal portion 113A.
  • Temples 114, which are hooked over both ears of the patient when the face bow 100 is used, are attached to the end pieces 111C.
  • Each temple 114 includes a front temple portion 114A near the end piece 111C and a rear temple portion 114B attached to the front temple portion 114A.
  • A hole threaded on its inner peripheral surface is formed in the front face of the front temple portion 114A in the length direction of the temple 114.
  • A screw 114A1 passes through the long hole 111C1 provided in the end piece 111C, and the tip of the screw 114A1 is screwed into the above-described hole formed in the front face of the front temple portion 114A.
  • When the screw 114A1 is loosened, the screw 114A1 and the front temple portion 114A (in short, the temple 114) can move in a substantially horizontal direction along the long hole 111C1.
  • When the screw 114A1 is tightened, the end piece 111C is sandwiched between the rear surface of the head provided at the front of the screw 114A1 and the front face of the front temple portion 114A. That is, the temple 114 can move along the length direction of the end piece 111C and can be fixed to the end piece 111C at an arbitrary position.
  • The rear temple portion 114B can move back and forth with respect to the front temple portion 114A, whereby the overall length of the temple 114 can be changed.
  • Any mechanism may be employed for allowing the rear temple portion 114B to move back and forth with respect to the front temple portion 114A. In this embodiment, a so-called rack-and-pinion structure is employed: although not illustrated, the front temple portion 114A has a built-in rack running along its length, and the screw 114B1 attached to the rear temple portion 114B carries a pinion, also not illustrated. By turning the screw 114B1, the pinion meshed with the rack moves back and forth along the rack, so that the rear temple portion 114B moves back and forth relative to the front temple portion 114A.
  • A flag 115 is detachably attached to the rear temple portion 114B.
  • The flag 115 includes a planar sensor portion 115A and a frame 115B surrounding the sensor portion 115A.
  • The flag 115 can be detachably fixed to the rear temple portion 114B by being screwed to it with a screw 115C.
  • The sensor portion 115A can detect the part of its surface that is in contact with the needle-like portion of a stylus, described later, and can output part data about that part in substantially real time. As will be described later, electric power is supplied to the needle-like portion, and the sensor portion 115A detects the part that is electrically connected to the needle-like portion as the part in contact with it.
  • However, the sensor portion 115A may instead be configured to detect the part in contact with the needle-like portion based on other parameters, such as the pressure applied by the needle-like portion.
  • The flag 115 is connected to a cable 115D, the far end of which is connected to a computer described later.
  • The part data is sent to the computer via the cable 115D.
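  • The patent does not specify a data format for the part data; purely as an illustration, a timestamped stream of contact coordinates arriving over the cable might be consumed like this (the CSV-like format and field names are assumptions).

        from dataclasses import dataclass

        @dataclass
        class ContactSample:
            t_ms: int      # time of the sample
            x_mm: float    # contact position on the detection surface
            y_mm: float

        def parse_contact_stream(lines):
            """Parse hypothetical 't,x,y' records sent by the flag's output means."""
            for line in lines:
                t, x, y = line.strip().split(",")
                yield ContactSample(int(t), float(x), float(y))

        # Example: three hypothetical samples recorded while the stylus traces a movement.
        for sample in parse_contact_stream(["0,12.5,3.1", "10,12.7,3.0", "20,13.0,2.8"]):
            print(sample)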
  • The bite fork 120 is fixed to the lower surface of the patient's maxillary dentition when the face bow 100 is used.
  • The bite fork 120 includes a bite fork main body 121, which is fixed to the lower surface of the patient's maxillary dentition, and a connecting portion 122, which connects the lower end of the bite fork main body 121 to the connecting member 130.
  • The connecting portion 122 is a plate-like body and has a hole (not shown).
  • The bite fork main body 121 and the lower surface of the maxillary dentition are fixed together by a known method, for example, by applying a modelling compound to the upper surface of the bite fork main body 121 and pressing the bite fork main body 121 against the lower surface of the maxillary dentition.
  • The connecting member 130 connects the upper bow 110 and the bite fork 120.
  • The connecting member 130 includes an upper member 131, a lower member 132, and an intermediate member 133.
  • The upper member 131 includes an upper mounting portion 131A, an upper connecting rod 131B, and a ball 131C.
  • The upper mounting portion 131A is for fixing to the upper bow 110.
  • The upper mounting portion 131A has a hole (not shown).
  • The upper mounting portion 131A is fastened by aligning this hole with the hole of the bridge portion 111B described above, passing the screw 111B1 through the hole of the upper mounting portion 131A, and screwing its tip into the hole of the bridge portion 111B.
  • The upper connecting rod 131B connects the upper mounting portion 131A and the ball 131C.
  • The ball 131C is a metal sphere.
  • The lower member 132 includes a lower mounting portion 132A, a lower connecting rod 132B, and a ball 132C.
  • The lower mounting portion 132A is for fixing to the bite fork 120.
  • The lower mounting portion 132A has a hole (not shown) threaded on its inner peripheral surface.
  • The lower mounting portion 132A is fastened by aligning this hole with the hole of the connecting portion 122 described above, passing a screw 132B1 through the hole of the connecting portion 122, and screwing its tip into the hole of the lower mounting portion 132A.
  • The lower connecting rod 132B connects the lower mounting portion 132A and the ball 132C.
  • The ball 132C is a metal sphere.
  • the intermediate member 133 enters between the upper member 131 and the lower member 132 and couples them together.
  • the intermediate member 133 includes a first member 133A and a second member 133B that are coaxially disposed and are both cylindrical.
  • the first member 133A is provided with an upper receiving portion 133A2 having a receiving hole 133A1 for receiving the ball 131C of the upper member 131.
  • the second member 133B is provided with a lower receiving portion 133B2 having a receiving hole 133B1 for receiving the ball 132C of the lower member 132.
  • the receiving hole 133A1 of the upper receiving portion 133A2 and the ball 131C together form a ball joint, and the receiving hole 133B1 of the lower receiving portion 133B2 and the ball 132C together form a ball joint.
  • the first member 133A and the second member 133B can be rotated around each other about their common axis.
  • a lever 134 is connected to the second member 133B. By turning the lever 134, the rotation of the ball 131C with respect to the receiving hole 133A1 of the upper receiving portion 133A2, the rotation of the ball 132C with respect to the receiving hole 133B1 of the lower receiving portion 133B2, and the mutual rotation of the first member 133A and the second member 133B can be fixed or released together.
  • the upper bow 110 and the bite fork 120 are connected to each other by the connecting member 130 as described above. Since the connecting member 130 has two ball joints and mutual rotation between the first member 133A and the second member 133B is allowed, the positional relationship between the upper bow 110 and the bite fork 120, including their angle, can be freely adjusted, and that positional relationship can then be fixed.
  • the connecting member 130 is marked as described above, or otherwise colored, or both.
  • the mark or color makes it possible to grasp the posture of the connecting member 130, which is composed of a plurality of members, in other words the positional relationship of the ball 131C with respect to the receiving hole 133A1, the positional relationship of the ball 132C with respect to the receiving hole 133B1, and the mutual positional relationship of the first member 133A and the second member 133B, from a plurality of images in which they individually appear or from a single image in which they all appear.
  • the positional relationship between the upper mounting portion 131A of the connecting member 130 and the bridge portion 111B of the upper bow 110 is always determined (although these can rotate with respect to the screw 111B1 as an axis).
  • the relative positions of the connecting member 130 and the connecting portion 122 of the bite fork 120 are also always determined (they are fixed to each other by the screw 132B1).
  • the relative positions of the upper bow 110 and the bite fork 120, including their angle, are therefore also uniquely determined.
  • the marks or colors are such that the relative positional relationship between the upper bow 110 and the bite fork 120 can be easily grasped from the posture image that is an image of the connection member 130.
  • if the positional relationship between the upper mounting portion 131A of the connecting member 130 and the bridge portion 111B of the upper bow 110 and the positional relationship between the lower mounting portion 132A of the connecting member 130 and the connecting portion 122 of the bite fork 120 were not always fixed, the posture image would need to be captured so that those positional relationships could also be grasped.
  • for example, a plurality of concentric circles M1 (like lines of latitude on the earth) centered on the upper connecting rod 131B and a plurality of radial lines M2 (like lines of longitude on the earth) centered on the upper connecting rod 131B may be written on the ball 131C of the upper member 131, with their spacing or thickness varied, or with symbols such as numbers or letters written where needed to tell the lines apart; and around the receiving hole 133A1 of the upper receiving portion 133A2, an appropriate number of marks M3 may be entered to clarify the relative positional relationship between the upper receiving portion 133A2 and the lines written on the ball 131C.
  • a similar mark can be written on the ball 132C of the lower member 132 and the lower receiving portion 133B2.
  • for the intermediate member 133, only the angle between the coaxially arranged first member 133A and second member 133B matters. Therefore, a scale may be provided on the outer surface of one of them and a reference for the scale, for example an arrow, on the outer surface of the other, so that the angle formed by, for example, the upper connecting rod 131B and the lower connecting rod 132B can be captured. If the connecting member 130 is marked as described above, the posture of the connecting member 130 is easily determined from the posture image, and therefore the relative positional relationship, including the angle, between the upper bow 110 and the bite fork 120 can also be easily grasped from the posture image.
  • alternatively, the ball 131C of the upper member 131 may be colored so that the hue changes continuously in the longitude direction and the brightness changes continuously in the latitude direction, and an achromatic gradation whose brightness changes continuously may be applied around the receiving hole 133A1 of the upper receiving portion 133A2.
  • the ball 132C of the lower member 132 and the lower receiving portion 133B2 can be similarly colored.
  • similarly, an achromatic gradation whose brightness changes continuously in the circumferential direction may be applied to the outer surface of one of the coaxially arranged first member 133A and second member 133B, and an arrow similar to the one described above may be marked on the outer surface of the other, or a color able to play the same role as the arrow may be applied to one of the first member 133A and the second member 133B. Then, since the posture of the connecting member 130 is easily determined from the posture image, the relative positional relationship, including the angle, between the upper bow 110 and the bite fork 120 can be easily grasped from the posture image.
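As an illustration of how such a continuous coloring could be decoded, the following Python sketch maps a sampled RGB color back to a longitude/latitude on the ball. It assumes a hypothetical scheme in which hue runs once around the ball in the longitude direction and brightness rises linearly from one pole to the other; the real scheme and its calibration are not specified in the text.

```python
import colorsys


def ball_orientation_from_color(r: float, g: float, b: float) -> tuple[float, float]:
    """Recover (longitude_deg, latitude_deg) of a sampled point on the ball
    from its RGB color (components in 0..1).

    Assumes the hypothetical coloring scheme: hue runs once around the ball in
    the longitude direction, and brightness (HSV value) rises linearly from
    the south pole (latitude -90 deg) to the north pole (+90 deg).
    """
    h, _s, v = colorsys.rgb_to_hsv(r, g, b)
    longitude = h * 360.0          # hue fraction -> degrees
    latitude = v * 180.0 - 90.0    # value 0..1 -> -90..+90 degrees
    return longitude, latitude


# Example: a pure red, full-brightness pixel maps to longitude 0, latitude +90.
print(ball_orientation_from_color(1.0, 0.0, 0.0))  # (0.0, 90.0)
```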
  • next, the face bow used when recording the mandibular movement will be described.
  • this face bow uses the upper bow 110 of the face bow 100 described above, which is used when generating virtual articulator data, together with an underbow 140 described below; the two are not connected to each other.
  • the connecting member 130 and the bite fork 120 in the face bow 100 described above are not used in this face bow.
  • the front bar 141 is a rod-shaped body.
  • the connecting member 143 includes a hole (not shown) that allows the front rod 141 to pass therethrough.
  • the connecting member 143 has a threaded hole in its inner peripheral surface, and a screw 143A is screwed into the hole.
  • the connecting member 143 can move along the length direction of the front bar 141 and can rotate around the front bar 141 as an axis.
  • when the screw 143A is tightened, its tip is pressed against the front bar 141, so that the position of the connecting member 143 in the length direction of the front bar 141 and its angle with respect to the front bar 141 are maintained.
  • the connecting member 143 also includes a tube 143B for allowing the horizontal bar 142 to pass therethrough.
  • the horizontal bar 142 is attached to the connecting member 143 while penetrating the pipe 143B.
  • the connecting member 143 also has a hole (not shown) that is threaded on its inner peripheral surface, and a screw 143C is screwed into the hole. When the screw 143C is loosened, the horizontal bar 142 can move in the length direction of the tube 143B and can rotate about its axis; when the screw 143C is tightened, the horizontal bar 142 is fixed to the tube 143B.
  • since the position of the connecting member 143 in the length direction of the front bar 141 and its angle with respect to the front bar 141 are variable, and the length from the connecting member 143 to the rear end of the horizontal bar 142 is also variable, the rear end of the horizontal bar 142 can be positioned freely, both vertically and horizontally, as viewed from the lower bite fork described later.
  • a stylus 144 is attached to the rear end of the horizontal bar 142.
  • the stylus 144 is in contact with the above-described flag 115 attached to the upper bow 110 to generate the above-described part data.
  • since the rear end of the horizontal bar 142 can thus be positioned freely, both vertically and horizontally, as viewed from the lower bite fork 145, the stylus 144 is positioned so that, when the patient occludes normally, it sits at an appropriate position on the sensor portion 115A of the flag 115 (for example, at the center of the flag 115).
  • the stylus 144 includes a stylus main body 144A and a needle-like portion 144B.
  • a power line is connected to the stylus main body 144A.
  • the needle-like portion 144B is supplied with electric power from the stylus main body 144A.
  • the needle-like portion 144B is given an appropriate elastic force toward its tip from an elastic body (not shown) built in the stylus main body 144A. As a result, even if the distance from the stylus main body 144A to the sensor portion 115A of the flag 115 slightly changes, the tip of the needle-like portion 144B is pressed against the sensor portion 115A with an appropriate pressure within a certain range.
  • the underbow 140 includes a lower bite fork 145.
  • the lower bite fork 145 is fixed to the distal end of a fixed bar 145B whose base end is fixed to a fixed plate 145A fixed to the front bar 141.
  • the lower bite fork 145 has its lower surface fixed to the upper surface of the patient's lower dentition.
  • the lower bite fork 145 is fixed by a general method: for example, a quick-polymerizing resin or the like is placed so that it is retained on the lower bite fork 145, the lower bite fork 145 is pressed against the mandibular dentition so that the resin polymerizes in a shape conforming to the mandibular dentition, and the thus adapted lower bite fork 145 is then fixed, with an instant adhesive or the like, to the buccal surface of the mandibular dentition so as not to interfere with mandibular movement.
  • the diagnostic apparatus 200 is as shown in FIG. 5 and is substantially constituted by a computer, for example, a general personal computer.
  • the diagnostic device 200 includes a main body 210 that is a computer, an input device 220, and a display 230.
  • the input device 220 is a device for a user, such as a dentist, to input to the main body 210.
  • the input device 220 in this embodiment includes, for example, a general-purpose keyboard and mouse.
  • the display 230 may be a general-purpose display, for example, a liquid crystal display or a CRT display.
  • the main body 210 includes hardware as shown in FIG. In this embodiment, the main body 210 includes a CPU (Central Processing Unit) 211, a ROM (Read Only Memory) 212, a RAM (Random Access Memory) 213, an HDD (Hard Disk Drive) 214, an interface 215, and a bus 216 connecting them.
  • the CPU 211 controls the entire main body 210.
  • the CPU 211 executes various processes as described below by executing the program.
  • the ROM 212 stores a program for operating the CPU 211, data necessary for controlling the main body 210, and the like.
  • the RAM 213 provides a work area for the CPU 211 to perform data processing.
  • the HDD 214 also records a program and data for operating the CPU 211.
  • an OS for operating the CPU 211 is recorded in the HDD 214.
  • the program of the present invention is recorded in the ROM 212 or the HDD 214.
  • the program of the present invention may be installed in the main body 210 from the time of shipment of the main body 210, or may be installed in the main body 210 by the user after the main body 210 is shipped, for example.
  • the program of the present invention may be recorded on the main body 210 from a recording medium such as a CD-ROM, or may be recorded on the main body 210 after downloading from a predetermined network such as the Internet. It may be what was done.
  • the program of the present invention may cause the CPU 211 to execute the processing described later by itself, or it may cause the CPU 211 to execute the processing described later in cooperation with the OS or another program.
  • the interface 215 serves as a window connecting the CPU 211, ROM 212, RAM 213, and HDD 214 to the outside, and via the interface 215 they can exchange data with the outside as necessary. As described later, the main body 210 needs to be able to accept posture image data and other image data from an external device (for example, a three-dimensional image capturing camera) or from a recording medium on which the image data is recorded; the interface 215 may receive such image data from an external device or the like via a wired or wireless network.
  • the interface 215 is connected to, for example, a USB terminal (not shown) provided on the main body 210, and can receive image data from a three-dimensional image capturing camera connected to one end of a USB cable, via that cable and the USB terminal connected to its other end.
  • the interface 215 may also include a reader that can read data from a predetermined recording medium such as a DVD or a memory card, and accept image data from a recording medium, on which the image data is recorded, that is placed in the reader.
  • this diagnostic device serves as both the virtual articulator data generation device and the mandibular movement recording device referred to in the present invention. Therefore, the program in this embodiment makes a general-purpose computer function as both a virtual articulator data generation device and a mandibular movement recording device, for example.
  • it will be understood by those skilled in the art that the program installed on each computer need only provide that computer with the functions required of it.
  • by executing the program, the following functional blocks are generated inside the main body 210: a receiving unit 221, a control unit 222, a model generation unit 223, a position data generation unit 224, a face bow data recording unit 225, a coupling unit 226, a virtual articulator data recording unit 227, a display control unit 228, a second position data generation unit 229, and a mandibular movement image data generation unit 230.
  • when the computer is used only as a virtual articulator data generation device, the reception unit 221, the control unit 222, the model generation unit 223, the position data generation unit 224, the face bow data recording unit 225, the coupling unit 226, the virtual articulator data recording unit 227, and the display control unit 228 are sufficient for the computer main body 210.
  • when the computer is used only as a mandibular movement recording device, the receiving unit 221, the control unit 222, the virtual articulator data recording unit 227, the display control unit 228, the second position data generation unit 229, and the mandibular movement image data generation unit 230 are sufficient for the computer main body 210.
  • the accepting unit 221 accepts data input from the outside via the interface 215.
  • the accepting unit 221 also serves as accepting means, second accepting means, and third accepting means in the present application.
  • the data received by the receiving unit 221 includes, for example, data described later received from the input device 220, or image data received from an external device or a recording medium.
  • the image data received by the receiving unit 221 includes: upper dentition image data, which is data of an upper dentition image showing the patient's upper dentition; lower dentition image data, which is data of a lower dentition image showing the patient's lower dentition; upper image data, which is data of an upper image showing the relative positional relationship, including the angle, between the bite fork 120 and the lower surface of the upper dentition, taken with the bite fork 120 fixed to the patient's upper dentition; posture image data, which is data of a posture image showing the posture of the connecting member 130; occlusion image data, which is data of an occlusion image showing the meshing state of the maxillary and mandibular dentitions; and lower image data, which is data of a lower image showing the relative positional relationship, including the angle, between the lower bite fork 145 and the mandibular dentition.
  • the accepting unit 221 also accepts part data from the flag 115 via the interface 215.
  • the part data is used when the diagnostic apparatus 200 functions as a mandibular movement recording apparatus.
  • the receiving unit 221 determines which data is the received data and sends it to an appropriate destination.
  • the receiving unit 221 mainly sends the data received from the input device 220 to the control unit 222, sends the maxillary dentition image data and the mandibular dentition image data to the model generation unit 223, and sends the upper image data, the posture image data, and the occlusion image data to the position data generation unit 224.
  • the reception unit 221 also sends the lower image data to the second position data generation unit 229 and sends the part data to the lower jaw movement image data generation unit 230.
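A minimal sketch of the routing just described, assuming a simple dispatcher in which each destination unit registers a handler for the kind of data it consumes; the kind tags and handler interfaces are illustrative assumptions, not part of the described apparatus.

```python
from typing import Any, Callable, Dict


class ReceivingUnit:
    """Toy dispatcher mirroring the routing described for the receiving unit 221."""

    def __init__(self) -> None:
        # destination callables, registered by the units that want the data
        self._routes: Dict[str, Callable[[Any], None]] = {}

    def register(self, kind: str, handler: Callable[[Any], None]) -> None:
        self._routes[kind] = handler

    def accept(self, kind: str, payload: Any) -> None:
        """Forward `payload` (image data, part data, input-device data, ...)
        to the unit registered for its `kind`."""
        handler = self._routes.get(kind)
        if handler is None:
            raise ValueError(f"no destination registered for data kind {kind!r}")
        handler(payload)


# Hypothetical wiring following the text: dentition images go to the model
# generation unit, upper/posture/occlusion images to the position data
# generation unit, lower images to the second position data generation unit,
# and part data to the mandibular movement image data generation unit.
rx = ReceivingUnit()
rx.register("maxillary_dentition_image", lambda d: print("-> model generation unit 223"))
rx.register("posture_image", lambda d: print("-> position data generation unit 224"))
rx.register("lower_image", lambda d: print("-> second position data generation unit 229"))
rx.register("part_data", lambda d: print("-> mandibular movement image data generation unit 230"))
rx.accept("part_data", object())
```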
  • the control unit 222 controls the entire main body 210.
  • the model generation unit 223 receives the upper dentition image data and the lower dentition image data from the reception unit 221 as described above.
  • the model generation unit 223 generates maxillary dentition model data, which is data of the maxillary dentition model, a three-dimensional model of the maxillary dentition, from the received maxillary dentition image data. This corresponds to the model of the maxillary dentition used with an actual articulator.
  • the model generation unit 223 generates mandibular dentition model data that is data of a mandibular dentition model that is a three-dimensional model of the mandibular dentition from the received mandibular dentition image data.
  • the model generation unit 223 generates the maxillary dentition model data and the mandibular dentition model data by applying a known image processing technique using, for example, a polygon.
  • both the maxillary dentition image and the mandibular dentition image are three-dimensional images.
  • a three-dimensional imaging camera for intraoral imaging that can perform such imaging is also in practical use.
  • the model generation unit 223 sends the generated maxillary dentition model data and mandibular dentition model data to the coupling unit 226.
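The text only refers to a "known image processing technique using, for example, a polygon". As a toy stand-in for such a technique, the following sketch triangulates a regular grid of scanned surface heights into a vertex list and a triangle (polygon) list; a real dentition model would of course come from a proper 3D reconstruction pipeline.

```python
from typing import List, Tuple

Vertex = Tuple[float, float, float]
Triangle = Tuple[int, int, int]  # indices into the vertex list


def grid_to_polygons(heights: List[List[float]], spacing: float = 1.0
                     ) -> Tuple[List[Vertex], List[Triangle]]:
    """Turn a regular grid of scanned surface heights into a triangle mesh.

    Only a stand-in for the polygon-based processing the text refers to.
    """
    rows, cols = len(heights), len(heights[0])
    vertices: List[Vertex] = [
        (c * spacing, r * spacing, heights[r][c])
        for r in range(rows) for c in range(cols)
    ]
    triangles: List[Triangle] = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            triangles.append((i, i + 1, i + cols))             # upper-left triangle
            triangles.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle
    return vertices, triangles


verts, tris = grid_to_polygons([[0.0, 0.1], [0.2, 0.3]])
print(len(verts), len(tris))  # 4 vertices, 2 triangles
```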
  • the position data generation unit 224 receives the upper image data, the posture image data, and the occlusion image data from the reception unit 221.
  • the position data generation unit 224 obtains, from the received upper image data and posture image data, the relative position of the maxillary dentition, including its angle, with respect to the reference plane in the living patient, and generates first position data, which is data about the position of the maxillary dentition relative to the virtual reference plane, a virtual counterpart of that reference plane.
  • the position data generation unit 224 also obtains, from the received occlusion image data, the relative position of the mandibular dentition, including its angle, with respect to the maxillary dentition, and generates second position data, which is data about the position of the mandibular dentition relative to the maxillary dentition.
  • first position data and second position data are used when the upper dentition model and the lower dentition model are later aligned in a virtual articulator created on a computer.
  • by using the first position data, the virtual reference plane and the maxillary dentition model can be aligned with each other so as to match the relationship between the reference plane and the maxillary dentition in the living patient.
  • by using the second position data, the maxillary dentition model and the mandibular dentition model can be aligned so as to match the relationship between the maxillary dentition and the mandibular dentition in the living patient.
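One way to picture the role of the first and second position data is as rigid transforms applied to the model vertices. The sketch below assumes, purely for illustration, that each position datum is encoded as a 4x4 homogeneous matrix: the first places the maxillary dentition model relative to the virtual reference plane, and the second places the mandibular dentition model relative to the maxillary dentition model. The numeric values are placeholders.

```python
import numpy as np


def apply_transform(transform: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) array of model vertices."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ transform.T)[:, :3]


# Hypothetical encoding of the two position data as homogeneous transforms.
maxilla_vertices = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
mandible_vertices = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])

first_position = np.eye(4)
first_position[:3, 3] = [0.0, 0.0, -40.0]   # e.g. 40 mm below the virtual reference plane

second_position = np.eye(4)
second_position[:3, 3] = [0.0, 0.0, -5.0]   # mandibular model 5 mm below the maxillary model

maxilla_in_reference = apply_transform(first_position, maxilla_vertices)
mandible_in_reference = apply_transform(first_position @ second_position, mandible_vertices)
print(maxilla_in_reference[0], mandible_in_reference[0])
```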
  • the position data generation unit 224 in this embodiment generates the first position data as follows.
  • the first position data is generated from the upper image data and the posture image data.
  • the upper image data is an image showing a relative positional relationship including the angle between the bite fork 120 and the lower surface of the upper dentition.
  • the upper image is preferably a three-dimensional image.
  • the posture image data is data of an image showing the posture of the connecting member 130, as described above. There may be a plurality of posture images, but since the connecting member 130 appears in the posture image, it is easy to detect the posture of the connecting member 130 by applying a known image processing technique to the posture image.
  • the position data generation unit 224 uses data recorded in the facebow data recording unit 225 in order to detect the posture of the connection member 130 from the posture image data.
  • in the face bow data recording unit 225, data regarding the face bows 100 used in combination with the diagnostic device 200 is recorded.
  • facebow data that is data for each of the plurality of facebows 100 is recorded in the facebow data recording unit 225.
  • the face bow data includes, for each face bow, information necessary for obtaining the posture of its connecting member 130, for example by calculation from the dimensions of the parts constituting the connecting member 130 and the marks or colors attached to the connecting member 130, or by comparison with a data table.
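A minimal sketch of what a per-face-bow record in the face bow data recording unit 225 might contain, and how one record could be selected. The field names and numeric values are hypothetical; the text only says the unit holds information needed to obtain the posture of each connecting member 130 by calculation or by comparison with a data table.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass
class FaceBowData:
    """Hypothetical per-face-bow record kept in the face bow data recording unit 225."""
    model_name: str
    upper_rod_length_mm: float          # length of the upper connecting rod 131B
    lower_rod_length_mm: float          # length of the lower connecting rod 132B
    ball_diameter_mm: float             # diameter of the balls 131C / 132C
    longitude_line_spacing_deg: float   # spacing of the radial lines M2 on the balls
    latitude_line_spacing_deg: float    # spacing of the concentric circles M1


# Placeholder table; real entries would describe the face bows actually supported.
FACE_BOW_TABLE: Dict[str, FaceBowData] = {
    "FB-100": FaceBowData("FB-100", 35.0, 30.0, 12.0, 15.0, 15.0),
    "FB-200": FaceBowData("FB-200", 40.0, 32.0, 14.0, 10.0, 10.0),
}


def select_face_bow(model_name: str) -> FaceBowData:
    """Pick the record for the face bow actually used on the patient."""
    return FACE_BOW_TABLE[model_name]


print(select_face_bow("FB-100").ball_diameter_mm)
```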
  • the position data generation unit 224 analyzes the posture image based on the posture image data, using the face bow data for the face bow whose connecting member 130 appears in the posture image.
  • since the connecting member 130 shown in the posture image is marked or colored, there is no difficulty in obtaining the posture of the connecting member 130 from information about how the marks or colors appear in the posture image.
  • the posture image is preferably a three-dimensional image. If the posture image is a three-dimensional image, it is relatively easy to accurately determine the posture of the connecting member even if the connecting member is not marked or colored.
  • when the connecting member 130 is colored, a color sample bearing a plurality of predetermined colors may be included in the posture image so that the position data generation unit 224 can correctly interpret the colors in the posture image based on the posture image data, and the position data generation unit 224 may execute a process of correcting the colors in the posture image based on the plurality of colors on the color sample.
  • once the posture of the connecting member 130 is obtained, the relative positional relationship between the upper bow 110 and the bite fork 120, which are connected to its two ends, can be grasped.
  • since the upper bow 110 is uniquely fixed with respect to the reference plane assumed in the patient's head, being able to grasp the relative positional relationship between the upper bow 110 and the bite fork 120 in practice means being able to grasp the relative positional relationship between the reference plane and the bite fork 120.
  • since the relative positional relationship between the bite fork 120 and the maxillary dentition is also grasped from the upper image, combining it with the relative positional relationship of the bite fork 120 with respect to the reference plane makes it possible to grasp the positional relationship of the maxillary dentition with respect to the reference plane.
  • the positional relationship between the patient's reference plane and the maxillary dentition thus obtained is reproduced as the positional relation between the virtual reference plane, which is a virtual reference plane in the virtual articulator formed on the computer, and the maxillary dentition model. Therefore, it is used as the first position data that is data about the mutual positions of the virtual reference plane and the maxillary dentition model.
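The derivation above is essentially a chain of relative poses: reference plane to upper bow, upper bow to bite fork (from the posture image), and bite fork to maxillary dentition (from the upper image). A sketch of that composition with 4x4 homogeneous transforms follows; the numeric poses are placeholders, not values from the patent.

```python
import numpy as np


def translation(x: float, y: float, z: float) -> np.ndarray:
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t


def rotation_x(angle_deg: float) -> np.ndarray:
    a = np.radians(angle_deg)
    r = np.eye(4)
    r[1:3, 1:3] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    return r


# Hypothetical poses, each a transform from the previous frame in the chain.
reference_to_upper_bow = np.eye(4)                                  # upper bow fixed on the reference plane
upper_bow_to_bite_fork = translation(0, 60, -35) @ rotation_x(8)    # recovered from the posture image
bite_fork_to_maxilla = translation(0, -2, 3)                        # recovered from the upper image

# First position data: the maxillary dentition expressed in the reference frame.
first_position_data = (reference_to_upper_bow
                       @ upper_bow_to_bite_fork
                       @ bite_fork_to_maxilla)
print(first_position_data[:3, 3])  # position of the maxillary dentition origin
```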
  • the position data generation unit 224 in this embodiment generates the second position data as follows.
  • the second position data is generated from the occlusal image data.
  • the occlusal image data is an image showing a state of meshing between the upper dentition and the lower dentition.
  • the occlusion image may show the maxillary dentition and the mandibular dentition themselves (or parts of them), or it may show a marking paste on which the shapes of the maxillary dentition and the mandibular dentition, occluded between the patient's upper and lower jaws, have been impressed; in either case, by applying known image processing to the occlusion image data, it is easy to generate the second position data indicating the mutual positional relationship of the maxillary and mandibular dentitions.
  • the occlusion image, too, is preferably a three-dimensional image.
  • the position data generation unit 224 sends the first position data and the second position data generated as described above to the combining unit 226.
  • the coupling unit 226 receives the maxillary dentition model data and the mandibular dentition model data from the model generation unit 223.
  • the combining unit 226 also receives the first position data and the second position data from the position data generation unit 224.
  • the coupling unit 226 uses these to generate virtual articulator data for the virtual articulator.
  • the virtual articulator is, so to speak, an actual articulator realized as a three-dimensional image; in the virtual articulator, the models of the maxillary and mandibular dentitions of an actual articulator are replaced by the maxillary dentition model and the mandibular dentition model.
  • to position them, the first position data, which is data about the mutual positions of the virtual reference plane and the maxillary dentition model, and the second position data, which indicates the mutual positional relationship between the maxillary dentition and the mandibular dentition, are used, and the virtual articulator data for the virtual articulator is thereby generated.
  • the virtual articulator can open and close the maxillary dentition model and the mandibular dentition model with a virtual temporomandibular joint as an axis.
  • such image processing can also be realized using known techniques.
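As one illustration of such known processing, opening and closing about the virtual temporomandibular joint can be modeled as rotating the mandibular dentition model about a hinge axis through the virtual condyles. The sketch below uses Rodrigues' rotation formula; the condyle position and hinge axis are assumed values.

```python
import numpy as np


def rotate_about_axis(points: np.ndarray, origin: np.ndarray,
                      axis: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate (N, 3) points about an arbitrary axis through `origin` (Rodrigues' formula)."""
    k = axis / np.linalg.norm(axis)
    a = np.radians(angle_deg)
    p = points - origin
    rotated = (p * np.cos(a)
               + np.cross(k, p) * np.sin(a)
               + k * (p @ k)[:, None] * (1 - np.cos(a)))
    return rotated + origin


# Hypothetical virtual temporomandibular joint: a left-right hinge axis
# passing through the midpoint of the two condyles of the virtual articulator.
condyle_midpoint = np.array([0.0, -80.0, 0.0])
hinge_axis = np.array([1.0, 0.0, 0.0])           # left-right direction

mandible_vertices = np.array([[0.0, 0.0, -5.0], [10.0, 5.0, -5.0]])
opened = rotate_about_axis(mandible_vertices, condyle_midpoint, hinge_axis, 20.0)
print(opened.round(2))
```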
  • the coupling unit 226 sends the virtual articulator data to the virtual articulator data recording unit 227 and the display control unit 228.
  • the virtual articulator data recording unit 227 records virtual articulator data.
  • Virtual articulator data is generally recorded in the virtual articulator data recording unit 227 together with data specifying which patient the virtual articulator belongs to.
  • the display control unit 228 controls the display 230. Upon receiving the virtual articulator data, the display control unit 228 creates, for example, moving image data for displaying the virtual articulator on the display 230 based on the virtual articulator data, and sends the image data to the display 230 via the interface 215. .
  • the second position data generation unit 229 receives the lower image data from the reception unit 221 as described above.
  • just as the position data generation unit 224 obtains, from the upper image, the positional relationship including the angle between the maxillary dentition and the bite fork 120, the second position data generation unit 229 obtains, from the lower image, the relative positional relationship including the angle between the mandibular dentition and the lower bite fork 145, and generates third position data, which is data about that positional relationship.
  • the second position data generation unit 229 sends the third position data to the mandibular movement image data generation unit 230.
  • the mandibular movement image data generation unit 230 receives the part data from the reception unit 221 as described above.
  • the part data is a signal sent from the flag 115 of the face bow that includes the underbow 140 used when recording the mandibular movement, and is data indicating the position of the needle-like portion 144B of the stylus 144 that is in contact with the sensor portion 115A of the flag 115.
  • the flag 115 is fixed to the upper bow 110 fixed to the patient's head.
  • the stylus 144 is fixed to an underbow 140 fixed by a lower bite fork 145 on the upper surface of a lower jaw dentition that is a part of the lower jaw of the patient.
  • the upper bow 110 and the underbow 140 are not connected to each other.
  • the entire underbow 140 moves according to the mandibular movement, and the needle-like part 144B of the stylus 144 traces the sensor part 115A of the flag 115 according to the mandibular movement. That is, the above-mentioned part data is data representing mandibular movement.
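Because the flag 115 is rigidly attached to the upper bow 110, and hence fixed relative to the patient's head, each part data sample (a point on the sensor plane) can be expressed in a head-fixed frame once the flag's pose in that frame is known. A sketch follows; the flag pose values and the frame conventions are assumptions for illustration.

```python
import numpy as np


def sensor_trace_to_reference_frame(trace_xy_mm: np.ndarray,
                                    flag_origin: np.ndarray,
                                    flag_x_axis: np.ndarray,
                                    flag_y_axis: np.ndarray) -> np.ndarray:
    """Express a 2D trace on the sensor portion 115A in the head-fixed frame.

    `trace_xy_mm` is (N, 2); the flag's origin and in-plane unit axes, given in
    the head-fixed frame, are assumed known because the flag 115 is rigidly
    attached to the upper bow 110.
    """
    return (flag_origin
            + trace_xy_mm[:, 0:1] * flag_x_axis
            + trace_xy_mm[:, 1:2] * flag_y_axis)


# Hypothetical flag pose and a short recorded trace (part data samples, in mm).
trace = np.array([[0.0, 0.0], [1.5, -0.5], [3.0, -1.2]])
points_3d = sensor_trace_to_reference_frame(
    trace,
    flag_origin=np.array([60.0, -70.0, -20.0]),
    flag_x_axis=np.array([0.0, 1.0, 0.0]),
    flag_y_axis=np.array([0.0, 0.0, 1.0]),
)
print(points_3d)
```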
  • the mandibular movement image data generation unit 230 receives the part data from the reception unit 221 as described above, receives the third position data from the second position data generation unit 229, and reads out the virtual articulator data from the virtual articulator data recording unit 227. Using the relative positional relationship between the lower bite fork 145 and the mandibular dentition based on the third position data (that is, the relative positional relationship between the stylus 144 and the mandibular dentition), it then generates mandibular movement image data, such as marks indicating the patient's mandibular movement or a moving image reproducing that movement, on the image of the virtual articulator specified by the virtual articulator data.
  • the mandibular movement image data generation unit 230 sends the generated mandibular movement image data to the virtual articulator data recording unit 227 and the display control unit 228.
  • the virtual articulator data recording unit 227 records mandibular movement image data in addition to the virtual articulator data.
  • based on the mandibular movement image data, the display control unit 228 displays on the display 230 not only the above-described virtual articulator image but also a virtual articulator image on which marks indicating the patient's mandibular movement are written, or a moving image in which the mandible on the virtual articulator image moves so as to reproduce the patient's mandibular movement.
  • the diagnostic device can generate virtual articulator data and record mandibular movement. These will be described in order.
  • to generate virtual articulator data, first, the patient's maxillary dentition image, mandibular dentition image, and occlusion image are taken, and maxillary dentition image data, mandibular dentition image data, and occlusion image data are generated. This process may be performed at any time while the face bow 100 is not fixed to the patient.
  • the maxillary dentition image is, for example, a three-dimensional image in which the maxillary dentition appears, and it suffices for it to allow the maxillary dentition model to be generated later.
  • the lower dentition image is, for example, a three-dimensional image in which the lower dentition is reflected, and is sufficient to generate a lower dentition model later, and may be a plurality of images in some cases.
  • the occlusion image is an image from which the relative position, including the angle, of the mandibular dentition with respect to the maxillary dentition can be grasped; it suffices for it to allow the second position data on the relative position of the mandibular dentition with respect to the maxillary dentition to be generated later, and in some cases it may be a plurality of images.
  • the occlusion image may show the maxillary dentition and the mandibular dentition themselves (possibly parts of them), or it may show a marking paste on which the shapes of the maxillary dentition and the mandibular dentition, occluded between the patient's upper and lower jaws, have been impressed.
  • the face bow 100 is attached to the patient's head.
  • the method of attaching the face bow 100 to the patient's head is not different from the case of the general face bow 100.
  • the upper bow 110 is positioned so that the two temples 114 are placed on the patient's ears, the bridge part 111B is placed on the patient's nose muscles, and the frame part 111A of the main body part 111 is correctly positioned in front of the patient's eyes. It is assumed that the vertical portion 113B of the rod 113 is correctly positioned on the patient's nose.
  • the upper bow 110 is uniquely positioned in a predetermined positional relationship with respect to the reference plane of the patient's head. At this time, the flag 115 may not be attached to the temple 114.
  • a modeling compound is applied to the upper surface of the bite fork 120, and the upper surface of the bite fork 120 is fixed to the lower surface of the patient's maxillary dentition. Then, the upper end and the lower end of the connection member 130 are connected to the upper bow 110 and the bite fork 120, respectively. At this time, the posture of the connection member 130 is appropriately corrected so that such a connection can be made naturally.
  • thereby, the upper attachment portion 131A of the connecting member 130 and the bridge portion 111B of the upper bow 110 are set in a predetermined positional relationship, and the lower attachment portion 132A of the connecting member 130 and the connecting portion 122 of the bite fork 120 are set in a predetermined positional relationship.
  • the upper image is, for example, a three-dimensional image in which they (or at least parts of them) appear so that the relative positional relationship, including the angle, between the maxillary dentition and the bite fork 120 can be grasped; it suffices for it, together with the posture image, to allow the first position data to be generated later, and in some cases it may be a plurality of images.
  • the posture image is, for example, a three-dimensional image in which the connecting member 130 appears so that its posture can be grasped; it suffices for it, together with the upper image, to allow the first position data to be generated later, and in some cases it may be a plurality of images.
  • a process for generating virtual articulator data is executed in the diagnostic apparatus 200.
  • information such as the patient's name for identifying the patient is input from the input device 220.
  • the information input from the input device 220 is sent from the interface 215 to the control unit 222.
  • the control unit 222 records the information in the virtual articulator data recording unit 227 as information for specifying a patient for whom virtual articulator data is to be created.
  • then, various image data are input to the diagnostic apparatus 200: the maxillary dentition image data, which is data of the maxillary dentition image; the mandibular dentition image data, which is data of the mandibular dentition image; the occlusion image data, which is data of the occlusion image; the upper image data, which is data of the upper image; and the posture image data, which is data of the posture image.
  • the accepting unit 221 sends the maxillary dentition image data and the mandibular dentition image data to the model generation unit 223, and sends the upper image data, the posture image data, and the occlusion image data to the position data generation unit 224.
  • the model generation unit 223, having received the maxillary dentition image data and the mandibular dentition image data from the receiving unit 221, generates the maxillary dentition model data, which is data of the maxillary dentition model, a three-dimensional model of the maxillary dentition, from the maxillary dentition image data, and generates the mandibular dentition model data, which is data of the mandibular dentition model, a three-dimensional model of the mandibular dentition, from the mandibular dentition image data.
  • the model generation unit 223 sends the generated upper dentition model data and lower dentition model data to the combining unit 226.
  • the position data generation unit 224 that has received the upper image data, the posture image data, and the occlusion image data from the reception unit 221 generates first position data from the upper image data and the posture image data, and from the occlusion image data. Second position data is generated.
  • when the first position data is generated, the data recorded in the face bow data recording unit 225 is used; the face bow data for the face bow used on the patient is selected from the plurality of recorded face bow data and read into the position data generation unit 224.
  • information necessary for identifying the face bow used on the patient is sent, for example, from the input device 220 operated by the dentist to the control unit 222 via the interface 215, and is transmitted from the control unit 222 to the position data generation unit 224.
  • the position data generation unit 224 can select data about the face bow to be read out from a plurality of data based on the information.
  • the position data generation unit 224 sends the generated first position data and second position data to the combining unit 226.
  • the coupling unit 226, which receives the maxillary dentition model data and the mandibular dentition model data from the model generation unit 223 and receives the first position data and the second position data from the position data generation unit 224, generates the virtual articulator data for the virtual articulator based on them.
  • the coupling unit 226 sends the virtual articulator data to the virtual articulator data recording unit 227 and the display control unit 228.
  • the virtual articulator data recording unit 227 records the virtual articulator data sent from the coupling unit 226. Basically, the virtual articulator data is recorded in the virtual articulator data recording unit 227 in association with the information, previously recorded there, that identifies the patient for whom the virtual articulator data was created.
  • upon receiving the virtual articulator data from the coupling unit 226, the display control unit 228 creates, for example, moving image data for displaying the virtual articulator on the display 230 based on the virtual articulator data, and sends the image data to the display 230 via the interface 215. Thereby, the image of the virtual articulator is displayed on the display 230, for example as a moving image. Next, the recording of mandibular movement will be described.
  • when recording mandibular movement, the underbow 140, more particularly the lower bite fork 145 of the underbow 140, is secured to the patient's mandibular dentition. Then, the connecting member 143, the horizontal bar 142, and so on are adjusted so that the relative positional relationship of the stylus 144 with respect to the flag 115 becomes a predetermined appropriate relationship; for example, the adjustment is made so that, when the patient naturally meshes the maxillary and mandibular dentitions, the needle-like portion 144B of the stylus 144 contacts the center of the sensor portion 115A of the flag 115 with an appropriate pressure. Next, the lower image is captured, and lower image data of the lower image is generated.
  • the lower bite fork 145 and at least a part of the lower jaw dentition are reflected, and the relative positional relationship including the angle of the lower bite fork 145 and the lower jaw dentition can be grasped from the lower image.
  • the lower image data is sent to the diagnostic apparatus 200.
  • the lower image data is received by the receiving unit 221 and sent to the second position data generating unit 229.
  • the second position data generation unit 229 obtains a relative positional relationship including the angle between the lower jaw dentition and the lower bite fork 145, and generates third position data that is data about the positional relationship.
  • the second position data generation unit 229 sends the third position data to the mandibular movement image data generation unit 230.
  • the mandibular movement image data generation unit 230 receives the part data from the reception unit 221 as described above, and receives the third position data from the second position data generation unit 229.
  • the positions of the sensor portion 115A of the flag 115 and the needle-like portion 144B of the stylus 144 are in a predetermined positional relationship.
  • the positional relationship between the needle-like portion 144B and the lower bite fork 145 is also fixed at least while the part data is generated, in other words, while the mandibular movement is recorded. Therefore, as long as the positional relationship between the lower jaw dentition and the lower bite fork 145 can be specified, the positional relationship of the needle-like portion 144B with respect to the lower dentition can be specified. It is the third position data that specifies the positional relationship.
  • the mandibular movement image data generation unit 230 corrects the part data with the third position data, and then, on the virtual articulator image specified by the virtual articulator data read from the virtual articulator data recording unit 227, Execute the process of writing a mark indicating the mandibular movement, or execute the process of moving the mandible on the virtual articulator image to reproduce the patient's mandibular movement by animation, and generate the results of those processes Mandibular movement image data that is the generated data is generated.
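A minimal sketch of the correction step, assuming the third position data can be expressed as a 4x4 transform from the stylus/lower bite fork frame into the mandibular dentition frame of the virtual articulator; the resulting points are what would be drawn as marks or used to drive the animation. The encoding and the numeric values are illustrative assumptions.

```python
import numpy as np


def correct_part_data(stylus_points: np.ndarray,
                      third_position_data: np.ndarray) -> np.ndarray:
    """Re-express the recorded stylus trajectory in the mandibular dentition frame.

    `stylus_points` is an (N, 3) trajectory of the needle-like portion 144B, and
    `third_position_data` is assumed here to be a 4x4 transform from the
    stylus/lower bite fork frame to the mandibular dentition frame.
    """
    homogeneous = np.hstack([stylus_points, np.ones((len(stylus_points), 1))])
    return (homogeneous @ third_position_data.T)[:, :3]


# Hypothetical data: a short trajectory and a pure-translation third position data.
trajectory = np.array([[0.0, 0.0, 0.0], [1.0, -0.4, 0.2], [2.1, -0.9, 0.5]])
third = np.eye(4)
third[:3, 3] = [-15.0, 25.0, -10.0]

marks = correct_part_data(trajectory, third)  # points to overlay on the virtual articulator
print(marks)
```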
  • the mandibular movement image data generation unit 230 sends the generated mandibular movement image data to the virtual articulator data recording unit 227 and the display control unit 228.
  • the virtual articulator data recording unit 227 records mandibular movement image data in addition to the virtual articulator data.
  • the virtual articulator data and mandibular movement image data of the same patient are recorded in the virtual articulator data recording unit 227 in association with each other.
  • based on the mandibular movement image data, the display control unit 228 displays on the display 230 a virtual articulator image on which marks indicating the patient's mandibular movement are written, or a moving image in which the mandible on the virtual articulator image moves so as to reproduce the patient's mandibular movement. Thereby, the dentist can grasp the patient's mandibular movement easily and accurately.
  • as described above, a virtual articulator used for diagnosis or prosthetics in the dental field provides an accurate three-dimensional relative positional relationship between the reference plane and the maxillary and mandibular dentitions in a living patient without creating physical models of the patient's dentition. This increases the speed of diagnosis and treatment, and allows the virtual articulator data (electronic data) used for the diagnosis and treatment of a patient to be shared.

Abstract

  [Problem] To provide an inexpensive and easily popularized technique for creating data for a virtual articulator on a computer, the technique not requiring an articulator to be created. [Solution] A diagnostic device is provided with a position data generating unit (224) and a combining unit (226). The position data generating unit (224) generates first position data and second position data using an orientation image of the orientation of a connecting member for connecting an upper bow and a bite fork, an upper image indicating the positional relationship of the bite fork and the maxillary teeth of a patient, and an occlusion image indicating the positional relationship of the maxillary teeth and the mandibular teeth of the patient. The combining unit (226) aligns a three-dimensional model of the maxillary teeth and mandibular teeth of the patient generated by a model generating unit (223) using the first position data and the second position data, and generates a virtual articulator.

Description

コンピュータ、コンピュータで実行される方法、及びコンピュータプログラム、並びにフェイスボウComputer, computer-implemented method, computer program, and facebow
 本発明は、歯科の医療分野で用いるコンピュータに関し、特にはフェイスボウと組合せて用いられるコンピュータに関する。 The present invention relates to a computer used in the dental medical field, and more particularly to a computer used in combination with a face bow.
 歯科の分野でフェイスボウが用いられている。例えば、歯科補綴治療、顎関節症咬合治療、咬合診断を行う場合に、フェイスボウが用いられる。最も一般的なフェイスボウは、例えば、頭蓋の基準面(例えば、頭蓋右側の眼窩下面と左右外耳道上縁とを結んだフランクフルト平面)と、上顎歯列或は上顎骨との相対的な角度を含めた位置関係を測定し記録するためのものである。 フ ェ イ ス Face bows are used in the dental field. For example, a face bow is used when performing dental prosthetic treatment, temporomandibular joint occlusion treatment, and occlusion diagnosis. The most common facebow is, for example, the relative angle between the reference plane of the skull (eg, the Frankfurt plane connecting the lower surface of the right orbit of the skull and the upper edge of the left and right ear canal) and the maxillary dentition or maxilla. It is for measuring and recording the positional relationship including.
 フェイスボウは、おおまかに、患者の頭蓋の中に想定される解剖学的基準面との関係を一意に決定して患者の顔に固定できるようになっているアッパーボウと、患者の上顎歯列に対してその固定の状態を記録しつつ固定できるようになっているバイトフォークと、アッパーボウと、バイトフォークとを、相対的な位置と角度を任意に位置決めして、その位置と角度を記録しつつ固定することができる接続手段と、を備えて構成される。 The facebow is roughly divided into an upper bow that can be fixed to the patient's face by uniquely determining the relationship to the assumed anatomical reference plane in the patient's skull, and the upper dentition of the patient. The bite fork, upper bow, and bite fork that can be fixed while recording the fixed state are arbitrarily positioned relative to each other, and the position and angle are recorded. And connecting means that can be fixed.
 フェイスボウには幾つかの種類がある。ある種のフェイスボウのアッパーボウは、例えば、メガネ形状をしている。アッパーボウは、使用時に顔の前に位置するメガネであればフレームに相当する本体部と、本体部の両側に取付けられたメガネであればテンプルに当たるテンプル部とを備えている。テンプル部の先端は、内側に曲折されるなどしており、患者の外耳道に挿入できるようになっている。本体部のアッパーボウの使用時における顔に近い側には患者の任意の鼻根凹部に固定することのできる維持装置が取付けられている。テンプル部の先端の位置は、基準面に対して、例えば前後左右に移動できるようになっている。また、上述の鼻根凹部に固定する維持装置も、上述の基準面に対して例えば前後、上下に移動できるようになっている。 There are several types of face bows. The upper bow of a certain type of face bow has, for example, a glasses shape. The upper bow includes a main body corresponding to a frame if the glasses are positioned in front of the face when in use, and a temple corresponding to the temple if the glasses are attached to both sides of the main body. The distal end of the temple portion is bent inward so that it can be inserted into the patient's ear canal. A maintenance device that can be fixed to an arbitrary nasal root recess of a patient is attached to a side of the main body portion close to the face when the upper bow is used. The position of the tip of the temple portion can be moved, for example, forward, backward, left and right with respect to the reference plane. Moreover, the maintenance device fixed to the above-mentioned nasal root recess can also be moved, for example, back and forth and up and down with respect to the above-mentioned reference plane.
 When the face bow is used, the two tips of the temple portions are placed in the patient's ear canals and the maintenance device is fixed to the nasal root recess. In this state, the main body of the upper bow is adjusted so as to take a predetermined posture (for example, parallel to the reference plane of the skull). The upper bow is thereby uniquely positioned with respect to the facial cranium, more precisely with respect to the anatomical reference plane, by three parts: the two tips of the temple portions and the tip of the rod.
 The bite fork is a U-shaped plate-like body corresponding to the occlusal surface of the dentition; a marking paste, which is a curable substance such as a modeling compound or bite wax, is applied to its upper surface, and the bite fork is fixed to the lower surface of the maxillary dentition. The relationship between the bite fork and the maxillary dentition can then be recorded by the shape of the lower surface of the maxillary dentition impressed in the marking paste.
 The connecting means is constituted, for example, by rod-like bodies connected by a plurality of ball joints, and can fix the upper bow and the bite fork at an arbitrary relative position and angle. Predetermined scales are attached to the ball joints and rod-like bodies, so that the posture taken by the connecting means (for example, the position and angle of a certain ball joint and the rod-like body connected to it) can be recorded.
 As described above, the upper bow of the face bow can be uniquely positioned with respect to the reference plane. The bite fork connected to one end of the connecting means can record, indirectly, its positional relationship to the upper bow, because the connecting means, whose other end is connected to the upper bow, can record its own posture; as a result, the positional relationship of the bite fork to the reference plane can be recorded. Since the position and angle of the lower surface of the patient's maxillary dentition relative to the bite fork can be recorded via the marking paste, and the relative positional relationship between the bite fork and the reference plane can be recorded as described above, the position and angle of the maxillary dentition relative to the reference plane can, as a result, be recorded.
 In this way, the face bow records, for each individual patient, the relative positional relationship, including the angle, between the reference plane and the maxillary dentition. The record of this relationship measured with the face bow is generally used with an articulator. As described later, a model of the patient's maxillary dentition and a model of the mandibular dentition are combined on the articulator, and the above record is used to reproduce on it the engagement of the maxillary and mandibular dentitions in the patient's living body. The operation of transferring the patient's occlusion to the articulator in this way is generally called a face bow transfer.
 The articulator has a member corresponding to the maxilla and a member corresponding to the mandible of the skull. These members can be opened and closed in the same way as the maxilla and mandible of a living body; a model of the patient's maxillary dentition, accurately reproduced for example by taking an impression of the patient's teeth, can be attached to the member corresponding to the maxilla, and a model of the patient's mandibular dentition to the member corresponding to the mandible.
 In this state, by reproducing on the articulator the positional relationship, measured with the face bow, between the above-mentioned reference plane and the lower surface of the maxillary dentition, the relative positional relationship between the reference plane and the maxillary dentition in the patient's living body can be reproduced on the articulator.
 The relative positional relationship between the lower surface of the patient's maxillary dentition and the upper surface of the mandibular dentition can be reproduced by recording the patient's occlusion with a known method using, for example, a mouthpiece or a compound. If the upper surface of the model of the patient's mandibular dentition attached to the mandibular member of the articulator is positioned, using this record, against the lower surface of the model of the patient's maxillary dentition attached to the maxillary member, the relative positional relationship among the maxillary dentition model, the mandibular dentition model, and the reference plane on the articulator reflects the relative positional relationship among the lower surface of the maxillary dentition, the upper surface of the mandibular dentition, and the reference plane in the living patient.
 As described above, using the articulator, the relative positional relationship of the patient's maxillary and mandibular dentitions with respect to the reference plane can be accurately reproduced on the articulator. Using the maxillary and mandibular dentition models attached to the articulator, the dentist can perform occlusion diagnosis, or carry out the simulations (such as testing the fit of dentures) required before dental prosthetic treatment or temporomandibular joint occlusion treatment.
 As explained above, the relative relationship between the lower surface of the maxillary dentition measured with the face bow and the reference plane has generally been used so that the relationship between the maxillary dentition attached to the articulator and the reference plane accurately reproduces the relationship between the reference plane and the maxillary dentition in the living patient. In other words, the face bow has been positioned as a device subordinate to the articulator, existing solely to allow the articulator to be used accurately, and it has long been common knowledge in the dental industry that the articulator is indispensable for treatment and diagnosis.
 However, with recent advances in computer technology, in particular image processing technology, attempts have been made to reproduce a virtual articulator on a computer (in other words, a three-dimensional image of the patient's maxillary and mandibular dentitions, including their engagement). Using a virtual articulator makes it unnecessary to create models of the patient's dentition, so the speed of diagnosis and treatment can be increased; moreover, the virtual articulator data (electronic data) used for the diagnosis and treatment of a patient can be shared, for example among dentists, which is convenient.
Japanese Patent Application No. 2014-27052
 However, the technique of using a virtual articulator on a computer has not become widespread. The reason is that no suitable technique exists for creating virtual articulator data on a computer.
 For example, the following two methods are known for creating virtual articulator data.
 In the first method, as in the conventional approach, an articulator is created once, and numerical values measured from the articulator are input into a computer to create the virtual articulator data. The numerical values indicate, for example, how far the member of the articulator corresponding to the maxilla, to which the maxillary dentition model is attached, has been moved back and forth or up and down from a predetermined reference point, or how much the occlusal plane of the maxillary dentition is inclined. In the second method, no articulator is used. Instead, the virtual articulator data is created by combining a three-dimensional image of the patient's head, taken for example with a CT (Computed Tomography) imaging apparatus, with separately taken three-dimensional images of the patient's maxillary and mandibular dentitions.
Even with these two methods, however, there are reasons why diagnosis and treatment using virtual articulator data have not become widespread.
First, the first method requires making models of both the maxillary and mandibular dentitions and then building an actual articulator from them, so it cannot increase the speed of diagnosis and treatment. On the contrary, if an actual articulator is built anyway, creating virtual articulator data easily strikes the dentist as nothing more than extra work, so it is hard to motivate dentists to use virtual articulator data.
The second method does not build an actual articulator, but it instead requires an apparatus capable of three-dimensional imaging of the patient's head, such as a CT imaging apparatus. Such apparatuses are generally very expensive and not widely installed in dental clinics, so a method that presupposes them is difficult to spread. Moreover, three-dimensional imaging of the head almost always exposes the patient to radiation, which is also undesirable for the patient.
The object of the present invention is therefore to provide an inexpensive and easily adopted technique for creating, on a computer, virtual articulator data that reflects the living body, without requiring an actual articulator to be built.
Means and Effects for Solving the Problems
To solve the problems described above, the inventor of the present application proposes the following inventions.
The present invention is a virtual articulator data generation device that can generate virtual articulator data, i.e. data of a virtual articulator, using images of a face bow accurately attached to a patient, the face bow comprising: an upper bow that can be fixed to the patient's skull in a state in which its positional relationship, including the relative angle, to a predetermined reference plane assumed in the patient's skull is uniquely determined; a bite fork that can be fixed, by applying a curable material, to the lower surface of the patient's maxillary dentition; and connection means whose positional relationship between the upper bow and the bite fork, including the relative angle, can be adjusted arbitrarily.
The virtual articulator data generation device of the present invention comprises: receiving means for receiving maxillary dentition image data, which is data of a maxillary dentition image, i.e. an image of the maxillary dentition; mandibular dentition image data, which is data of a mandibular dentition image, i.e. an image of the mandibular dentition; upper image data, which is data of an upper image, i.e. an image showing the relative positional relationship, including the angle, between the bite fork and the lower surface of the maxillary dentition; posture image data, which is data of a posture image, i.e. an image showing the posture of the connection means; and occlusion image data, which is data of a bite image, i.e. an image showing the state of engagement of the maxillary and mandibular dentitions; model generation means for generating, from the maxillary dentition image data received by the receiving means, maxillary dentition model data, which is data of a maxillary dentition model, i.e. a three-dimensional model of the maxillary dentition, and for generating, from the mandibular dentition image data received by the receiving means, mandibular dentition model data, which is data of a mandibular dentition model, i.e. a three-dimensional model of the mandibular dentition; position data generation means for obtaining, from the upper image data and the posture image data received by the receiving means, the relative position, including the angle, of the maxillary dentition with respect to the reference plane in the living patient and generating first position data, which is data on the position of the maxillary dentition model with respect to a virtual reference plane, i.e. a virtual counterpart of the reference plane, and for obtaining, from the occlusion image data received by the receiving means, the relative position, including the angle, of the mandibular dentition with respect to the maxillary dentition and generating second position data, which is data on the position of the mandibular dentition model with respect to the maxillary dentition model; and connecting means for receiving the maxillary dentition model data and the mandibular dentition model data from the model generation means, receiving the first position data and the second position data from the position data generation means, and generating the virtual articulator data by using the first position data and the second position data to reproduce, in the positional relationship of the maxillary dentition model and the mandibular dentition model with respect to the virtual reference plane, the positional relationship of the maxillary and mandibular dentitions with respect to the reference plane in the living body.
This virtual articulator data generation device generates the virtual articulator data, i.e. the data of the virtual articulator, using the maxillary dentition image data and the mandibular dentition image data input through the receiving means. Specifically, the device comprises model generation means that generates maxillary dentition model data, i.e. the data of a three-dimensional maxillary dentition model, from the maxillary dentition image data received by the receiving means, and mandibular dentition model data, i.e. the data of a three-dimensional mandibular dentition model, from the mandibular dentition image data received by the receiving means. In this device, the maxillary dentition model based on the maxillary dentition model data and the mandibular dentition model based on the mandibular dentition model data play, within the virtual articulator, the role that plaster models of the maxillary and mandibular dentitions play in an ordinary articulator. Consequently, when this device is used, neither a maxillary dentition model nor a mandibular dentition model needs to be made physically.
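Purely for illustration, the data flow just described can be pictured as the following minimal Python sketch. All class, attribute, and method names are hypothetical and not part of the disclosed apparatus; the reconstruction and pose-recovery steps are stubbed out so that only the way the five kinds of image data move between the receiving, model-generating, position-data-generating, and connecting stages is shown.

```python
import numpy as np

class VirtualArticulatorBuilder:
    """Illustrative skeleton of the virtual articulator data generation device."""

    def accept(self, upper_img, lower_img, top_img, posture_imgs, occlusion_img):
        # Receiving means: keep the five kinds of raw image data.
        self.upper_img, self.lower_img = upper_img, lower_img
        self.top_img, self.posture_imgs = top_img, posture_imgs
        self.occlusion_img = occlusion_img

    def generate_models(self):
        # Model generation means: build 3-D dentition models (dummy point
        # clouds stand in for a real reconstruction here).
        self.upper_model = self._reconstruct(self.upper_img)
        self.lower_model = self._reconstruct(self.lower_img)

    def generate_position_data(self):
        # Position data generation means: 4x4 rigid transforms stand in for
        # the first and second position data.
        self.T_ref_upper = self._pose_from(self.top_img, *self.posture_imgs)
        self.T_upper_lower = self._pose_from(self.occlusion_img)

    def connect(self):
        # Connecting means: place both models against the virtual reference
        # plane and bundle everything as the virtual articulator data.
        upper = self._apply(self.T_ref_upper, self.upper_model)
        lower = self._apply(self.T_ref_upper @ self.T_upper_lower, self.lower_model)
        return {"upper": upper, "lower": lower,
                "T_ref_upper": self.T_ref_upper,
                "T_upper_lower": self.T_upper_lower}

    # --- stubs ---------------------------------------------------------
    def _reconstruct(self, img):
        return np.zeros((0, 3))     # placeholder (N, 3) point cloud

    def _pose_from(self, *imgs):
        return np.eye(4)            # placeholder 4x4 rigid transform

    def _apply(self, T, pts):
        h = np.hstack([pts, np.ones((len(pts), 1))])
        return (h @ T.T)[:, :3]
```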
Here, to generate virtual articulator data using the maxillary and mandibular dentition models, the positional relationship between the two models, including their angles, must be determined so as to accurately reproduce the positional relationship, including the angles, between the maxillary and mandibular dentitions in the patient's living body; in other words, the maxillary and mandibular dentition models must be registered so that they correspond to the patient's maxillary and mandibular dentitions.
In this virtual articulator data generation device, this registration of the maxillary and mandibular dentition models is performed using the upper image data, which is data of the upper image showing the relative positional relationship, including the angle, between the bite fork and the lower surface of the maxillary dentition; the posture image data, which is data of the posture image showing the posture of the connection means; and the occlusion image data, which is data of the bite image showing the state of engagement of the maxillary and mandibular dentitions. To make this possible, the receiving means of the device accepts, in addition to the maxillary and mandibular dentition image data, these three kinds of data used for the registration. The processing executed by the virtual articulator data generation device of the present invention broadly follows the procedure of a face bow transfer from the upper bow to an actual articulator, but the idea of capturing the posture of the connection means in an image (more precisely, in the posture image) is entirely absent from conventional face bow transfer. By this approach, the present invention can generate virtual articulator data easily, without an expensive apparatus such as a CT imaging apparatus and without demanding excessive labor from the dentist.
In the virtual articulator data generation device of the present invention, the data used for this registration is produced by the position data generation means. From the upper image data and the posture image data, the position data generation means obtains the relative position, including the angle, of the maxillary dentition with respect to the reference plane in the living patient and generates first position data, which is data on the position of the maxillary dentition model with respect to the virtual reference plane. In an actual articulator, the first position data corresponds to the information used to determine the position of the maxillary dentition model with respect to the reference plane. From the occlusion image data, the position data generation means also obtains the relative position, including the angle, of the mandibular dentition with respect to the maxillary dentition and generates second position data, which is data on the position of the mandibular dentition model with respect to the maxillary dentition model. In an actual articulator, the second position data corresponds to the information used to determine the position of the upper surface of the mandibular dentition with respect to the lower surface of the maxillary dentition.
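One way to think about the two kinds of position data, under the assumption (not stated in the specification) that every recorded relation is expressed as a 4x4 homogeneous transform, is as the composition of the relations the images capture: reference plane to upper bow, upper bow to connection means (from the posture image), connection means to bite fork, and bite fork to maxillary dentition (from the upper image) for the first position data, and maxillary to mandibular dentition (from the occlusion image) for the second. The helper names in the sketch below are hypothetical; for simplicity the upper-bow-to-reference-plane relation is taken as the identity.

```python
import numpy as np

def first_position_data(T_bow_connector, T_connector_fork, T_fork_maxilla):
    """First position data: pose of the maxillary dentition model relative to
    the virtual reference plane (assumed composition of image-derived relations).

    T_bow_connector  -- posture of the connection means w.r.t. the upper bow,
                        recovered from the posture image(s).
    T_connector_fork -- mounting of the bite fork on the connection means.
    T_fork_maxilla   -- bite fork vs. maxillary lower surface, from the upper image.
    """
    return T_bow_connector @ T_connector_fork @ T_fork_maxilla

def second_position_data(T_maxilla_mandible):
    """Second position data: pose of the mandibular dentition model relative to
    the maxillary dentition model, recovered from the occlusion image."""
    return T_maxilla_mandible

def rigid(rz_deg=0.0, t=(0.0, 0.0, 0.0)):
    """Helper building a homogeneous transform (rotation about z plus translation)."""
    a = np.deg2rad(rz_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = t
    return T
```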
In the virtual articulator data generation device of the present invention, the connecting means then generates the virtual articulator data by registering the maxillary dentition model and the mandibular dentition model using the first position data and the second position data.
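Given the two transforms and the two dentition models, the connecting step can then be pictured as placing the maxillary model with the first position data and the mandibular model with the composition of both. The following short sketch, with purely hypothetical inputs, only illustrates that composition.

```python
import numpy as np

def place(points, T):
    """Apply a 4x4 homogeneous transform to an (N, 3) vertex array."""
    h = np.hstack([points, np.ones((len(points), 1))])
    return (h @ T.T)[:, :3]

# Hypothetical inputs: vertex arrays of the two dentition models and the
# first/second position data expressed as transforms (identity here).
upper_vertices = np.random.rand(100, 3)
lower_vertices = np.random.rand(100, 3)
T1 = np.eye(4)   # first position data  (virtual reference plane -> maxilla)
T2 = np.eye(4)   # second position data (maxilla -> mandible)

virtual_articulator = {
    "upper": place(upper_vertices, T1),
    "lower": place(lower_vertices, T1 @ T2),   # mandible placed via the maxilla
}
```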
Because the virtual articulator data generation device of the present invention is configured as described above, it can generate virtual articulator data easily and inexpensively. That is, with this device there is, as already noted, no need to make maxillary and mandibular dentition models when generating the virtual articulator data, dentists are not forced to install an expensive apparatus for three-dimensional imaging of the head, and patients are not exposed to radiation from three-dimensional imaging of the head.
The maxillary dentition image, the mandibular dentition image, the upper image, the bite image, and the posture image in the present application may all be three-dimensional images. The same applies to the lower image described later.
As stated above, the bite image used in the virtual articulator data generation device of the present application is an image showing the state of engagement of the patient's maxillary and mandibular dentitions. It may, for example, be an image of the area where the patient's maxillary and mandibular dentitions mesh, or an image of a registration paste that has been bitten between the maxillary and mandibular dentitions and can record their shapes.
Whichever image is used, the state of engagement of the patient's maxillary and mandibular dentitions can be recorded in a reproducible way.
For example, when an image of a registration paste that records the shapes of the maxillary and mandibular dentitions is taken outside the oral cavity, a widely used three-dimensional imaging apparatus for dental laboratory work can be used to capture the bite image as a three-dimensional image. The same applies to other extraoral imaging, such as taking the posture image.
The connection means that connects the bite fork to the upper bow is in many cases composed of a plurality of members. The posture of the connection means, which determines the positional relationship between the bite fork and the upper bow, can be determined from the state of the connection means captured in the posture image. The posture image need not be a single image; there may be several. For example, the posture image may be a plurality of images taken from a plurality of directions, and the posture image data may then be provided for each of the posture images, i.e. in the same number as the posture images.
When the connection means is composed of a plurality of members, the members may be provided, as appropriate, with a plurality of marks whose mutual arrangement changes as the relative positional relationship of the members, including their angles, changes. When a face bow with such connection means is used, the position data generation means may detect the posture of the connection means from the marks captured in the posture image obtained from the posture image data. This makes it easier for the position data generation means to detect the posture of the connection means from the posture image.
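One common way to recover a member's pose from such marks, offered here only as an illustrative assumption about how the posture image could be processed, is to match the detected mark positions in the image against their known positions on the member and solve a perspective-n-point problem; OpenCV's solvePnP is a standard routine for that. The mark coordinates and camera calibration below are assumed, not taken from the specification.

```python
import numpy as np
import cv2

def connector_pose_from_marks(image_points, model_points, K):
    """Estimate the posture of one member of the connection means from the
    marks visible in a posture image.

    image_points -- (N, 2) pixel coordinates of the detected marks.
    model_points -- (N, 3) known positions of the same marks on the member,
                    in the member's own coordinate system (assumed known
                    from the face bow design).
    K            -- 3x3 intrinsic matrix of the camera that took the image.
    Returns a 4x4 transform from the member frame to the camera frame.
    """
    ok, rvec, tvec = cv2.solvePnP(
        model_points.astype(np.float64),
        image_points.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,
    )
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T
```

When several posture images taken from different directions are available, the same estimate can be repeated per view and the per-member poses combined, which also covers the multi-image variant mentioned above.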
When the connection means is composed of a plurality of members, the members may instead be colored in such a way that their appearance changes as the relative positional relationship of the members, including their angles, changes. When a face bow with such connection means is used, the position data generation means may detect the posture of the connection means from the colors captured in the posture image obtained from the posture image data. This also makes it easier for the position data generation means to detect the posture of the connection means from the posture image.
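For the colored variant, one plausible processing step, again only a sketch under assumptions rather than the disclosed method, is to segment each painted color in the posture image and use the centroids of the colored regions as cues to how the members are angled. The HSV bounds are placeholders chosen per face bow.

```python
import numpy as np
import cv2

def colored_region_centroid(posture_img_bgr, hsv_low, hsv_high):
    """Locate the centroid of one colored region of the connection means.

    posture_img_bgr  -- posture image as a BGR array.
    hsv_low/hsv_high -- assumed HSV bounds of the color painted on the member.
    The relative placement of several such centroids in the image gives a cue
    to the mutual angles of the members.
    """
    hsv = cv2.cvtColor(posture_img_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                      # color not visible in this view
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```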
The inventor of the present application also provides the following method, which yields the same effects as the virtual articulator data generation device described above.
The method is a virtual articulator data generation method executed by a virtual articulator data generation device having a computer, the device being able to generate virtual articulator data, i.e. data of a virtual articulator, using images of a face bow accurately attached to a patient, the face bow comprising: an upper bow that can be fixed to the patient's skull in a state in which its positional relationship, including the relative angle, to a predetermined reference plane in the patient's skull is uniquely determined; a bite fork that can be fixed, by applying a curable material, to the lower surface of the patient's maxillary dentition; and connection means whose positional relationship between the upper bow and the bite fork, including the relative angle, can be adjusted arbitrarily.
The method comprises, executed by the computer: a reception step of receiving maxillary dentition image data, mandibular dentition image data, upper image data showing the relative positional relationship, including the angle, between the bite fork and the lower surface of the maxillary dentition, posture image data showing the posture of the connection means, and occlusion image data showing the state of engagement of the maxillary and mandibular dentitions; a model generation step of generating maxillary dentition model data, i.e. data of a three-dimensional maxillary dentition model, from the maxillary dentition image data received in the reception step, and mandibular dentition model data, i.e. data of a three-dimensional mandibular dentition model, from the mandibular dentition image data received in the reception step; a position data generation step of obtaining, from the upper image data and the posture image data received in the reception step, the relative position, including the angle, of the maxillary dentition with respect to the reference plane in the living patient and generating first position data on the position of the maxillary dentition model with respect to a virtual reference plane, and of obtaining, from the occlusion image data received in the reception step, the relative position, including the angle, of the mandibular dentition with respect to the maxillary dentition and generating second position data on the position of the mandibular dentition model with respect to the maxillary dentition model; and a connection step of generating the virtual articulator data by using the maxillary dentition model data and the mandibular dentition model data generated in the model generation step together with the first position data and the second position data generated in the position data generation step, so that the positional relationship of the maxillary and mandibular dentitions with respect to the reference plane in the living body is reproduced in the positional relationship of the maxillary and mandibular dentition models with respect to the virtual reference plane.
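Expressed procedurally, the four steps of the method can be lined up as a single hypothetical function; the reconstruction and pose-recovery routines are stubs, and none of the names come from the specification.

```python
import numpy as np

# Stubs standing in for the real reconstruction / pose-recovery algorithms.
def reconstruct_dentition(image):              # used in the model generation step
    return np.zeros((0, 3))                     # placeholder (N, 3) point cloud

def pose_from_images(*images):                  # used in the position data step
    return np.eye(4)                            # placeholder 4x4 rigid transform

def generate_virtual_articulator(upper_img, lower_img, top_img,
                                 posture_imgs, occlusion_img):
    """Illustrative procedural restatement of the four steps of the method."""
    # Reception step: the five kinds of image data arrive as arguments.
    # Model generation step.
    upper_model = reconstruct_dentition(upper_img)
    lower_model = reconstruct_dentition(lower_img)
    # Position data generation step.
    T1 = pose_from_images(top_img, *posture_imgs)   # first position data
    T2 = pose_from_images(occlusion_img)            # second position data
    # Connection step: bundle models and transforms as the virtual articulator data.
    return {"upper": upper_model, "lower": lower_model, "T1": T1, "T2": T2}
```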
The inventor of the present application also provides the following computer program, which yields the same effects as the virtual articulator data generation device described above.
The computer program is a computer program for causing a computer to function as a virtual articulator data generation device that can generate virtual articulator data, i.e. data of a virtual articulator, using images of a face bow accurately attached to a patient, the face bow comprising: an upper bow that can be fixed to the patient's skull in a state in which its positional relationship, including the relative angle, to a predetermined reference plane in the patient's skull is uniquely determined; a bite fork that can be fixed, by applying a curable material, to the lower surface of the patient's maxillary dentition; and connection means whose positional relationship between the upper bow and the bite fork, including the relative angle, can be adjusted arbitrarily.
The computer program causes the computer to function as: receiving means for receiving the maxillary dentition image data, the mandibular dentition image data, the upper image data showing the relative positional relationship, including the angle, between the bite fork and the lower surface of the maxillary dentition, the posture image data showing the posture of the connection means, and the occlusion image data showing the state of engagement of the maxillary and mandibular dentitions; model generation means for generating maxillary dentition model data, i.e. data of a three-dimensional maxillary dentition model, from the maxillary dentition image data received by the receiving means, and mandibular dentition model data, i.e. data of a three-dimensional mandibular dentition model, from the mandibular dentition image data received by the receiving means; position data generation means for obtaining, from the upper image data and the posture image data received by the receiving means, the relative position, including the angle, of the maxillary dentition with respect to the reference plane in the living patient and generating first position data on the position of the maxillary dentition model with respect to a virtual reference plane, and for obtaining, from the occlusion image data received by the receiving means, the relative position, including the angle, of the mandibular dentition with respect to the maxillary dentition and generating second position data on the position of the mandibular dentition model with respect to the maxillary dentition model; and connecting means for receiving the maxillary dentition model data and the mandibular dentition model data from the model generation means, receiving the first position data and the second position data from the position data generation means, and generating the virtual articulator data by using the first position data and the second position data to reproduce the positional relationship of the maxillary and mandibular dentitions with respect to the reference plane in the living body in the positional relationship of the maxillary and mandibular dentition models with respect to the virtual reference plane.
The inventor of the present application also proposes, as one aspect of the present invention, the following face bows, which can be used in combination with the virtual articulator data generation device described above.
One example is a face bow comprising: an upper bow that can be fixed to the patient's skull in a state in which its positional relationship, including the relative angle, to a predetermined reference plane in the patient's skull is uniquely determined; a bite fork that can be fixed, by applying a curable material, to the lower surface of the patient's maxillary dentition; and connection means whose positional relationship between the upper bow and the bite fork, including the relative angle, can be adjusted arbitrarily. The connection means of this face bow is composed of a plurality of members, and the members are provided, as appropriate, with a plurality of marks whose mutual arrangement changes as the relative positional relationship of the members, including their angles, changes.
Another example is a face bow comprising: an upper bow that can be fixed to the patient's skull in a state in which its positional relationship, including the relative angle, to a predetermined reference plane in the patient's skull is uniquely determined; a bite fork that can be fixed, by applying a curable material, to the lower surface of the patient's maxillary dentition; and connection means whose positional relationship between the upper bow and the bite fork, including the relative angle, can be adjusted arbitrarily. The connection means is composed of a plurality of members, and the members are colored in such a way that their appearance changes as the relative positional relationship of the members, including their angles, changes.
The inventor of the present application also proposes the following mandibular movement recording device.
The mandibular movement recording device is used in combination with a mandibular movement detection device comprising: an upper bow that can be fixed to the patient's skull in a state in which its positional relationship, including the relative angle, to a predetermined reference plane in the patient's skull is uniquely determined, and that has a flag with a planar detection surface capable of detecting which part of it has been touched and with output means for outputting data on the touched part to the outside; and a lower bow comprising a lower bite fork that can be fixed, by applying a curable material, to the patient's mandibular dentition, and a stylus connected to the lower bite fork that moves over the flag while touching it as the patient performs mandibular movement, i.e. movement of the lower jaw.
The mandibular movement recording device comprises: recording means in which virtual articulator data, i.e. data on the virtual articulator of a certain patient, is recorded; second receiving means for receiving, from the output means of the flag, movement data on the patient's mandibular movement, i.e. data on the parts of the flag touched by the stylus as it moves with the mandibular movement; third receiving means for receiving lower image data, which is data of a lower image, i.e. an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; second position data generation means for generating, from the lower image data received by the third receiving means, third position data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and drawing means for writing, on the three-dimensional image of the virtual articulator specified by the virtual articulator data, based on the movement data received by the second receiving means, the third position data received from the second position data generation means, and the virtual articulator data read out from the recording means, a mark that indicates the patient's mandibular movement and whose position has been corrected by the third position data.
The mandible not only rotates relative to the skull about the hinge-like temporomandibular joints but also moves forward, backward, and sideways relative to the skull, so recording mandibular movement is preferable for reproducing the patient's bite more precisely. Attempts have accordingly been made to record the patient's mandibular movement. As one example, it has been proposed to record the movement of the mandible relative to the skull using a flag fixed uniquely with respect to the reference plane assumed in the patient's skull and a stylus fixed with respect to the patient's mandibular dentition.
Even if the mandibular movement, i.e. the movement of the mandible relative to the skull, is recorded with a flag and a stylus, however, it is very difficult to reproduce it on an actual articulator. The tool generally used to reproduce mandibular movement is a semi-adjustable articulator, in which the trajectory of the mandibular movement produced by the left and right temporomandibular joints is set by the angle from the centric position of each condyle to a point about 12 mm forward, with the curve of the trajectory chosen from a limited number of preset variations; the mandibular movement reproduced in this way can hardly be called an exact copy of the patient's. Likewise, even for a virtual articulator on a computer, no suitable method exists for reproducing the mandibular movement of the living body.
The mandibular movement recording device of the present application includes recording means in which the virtual articulator data on the patient is recorded, and uses the movement data input from the flag through the second receiving means together with the virtual articulator data read out from the recording means to write marks indicating the patient's mandibular movement onto the three-dimensional image of the virtual articulator specified by the virtual articulator data. By viewing the image generated in this way on a suitable display, the dentist can grasp the patient's mandibular movement easily and accurately. Moreover, because the positions of the marks indicating the mandibular movement are corrected with the third position data, generated from the lower image data showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition, the marks written into the image of the virtual articulator are accurate, and a dentist viewing the image can grasp the complex mandibular movement of the living patient easily and accurately.
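Reduced to geometry, the drawing step can be pictured as mapping each stylus contact recorded on the flag into the coordinate system of the virtual articulator and adjusting it with the third position data before writing it as a mark. The exact composition of transforms is not specified in the application, so the sketch below is only an assumed formulation with hypothetical names.

```python
import numpy as np

def motion_trace(flag_points_2d, T_ref_flag, T_third):
    """Map stylus contacts recorded on the flag into the virtual articulator.

    flag_points_2d -- (N, 2) contact positions output by the flag's output means.
    T_ref_flag     -- assumed 4x4 transform from the flag plane to the virtual
                      reference plane (the flag sits on the upper bow, which is
                      registered to the reference plane).
    T_third        -- assumed 4x4 correction built from the third position data
                      (lower bite fork vs. mandibular dentition).
    Returns (N, 3) points for the drawing means to write, as marks, onto the
    three-dimensional image of the virtual articulator.
    """
    n = len(flag_points_2d)
    on_flag = np.hstack([np.asarray(flag_points_2d, float),
                         np.zeros((n, 1)), np.ones((n, 1))])   # z = 0 on the flag
    placed = on_flag @ (T_third @ T_ref_flag).T
    return placed[:, :3]

# Hypothetical usage: three samples of a short protrusive path.
trace = motion_trace([[0.0, 0.0], [1.5, 0.2], [3.0, 0.8]], np.eye(4), np.eye(4))
```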
The same effects as those of this mandibular movement recording device can also be obtained, for example, by the following method.
The method is a mandibular movement recording method executed by a mandibular movement recording device including a computer, used in combination with a mandibular movement detection device comprising: an upper bow that can be fixed to the patient's skull in a state in which its positional relationship, including the relative angle, to a predetermined reference plane in the patient's skull is uniquely determined, and that has a flag with a planar detection surface capable of detecting which part of it has been touched and with output means for outputting data on the touched part to the outside; and a lower bow comprising a lower bite fork that can be fixed, by applying a curable material, to the upper surface of the patient's mandibular dentition, and a stylus connected to the lower bite fork that moves over the flag while touching it as the patient performs mandibular movement.
The method comprises, executed by the computer: a recording process of recording virtual articulator data, i.e. data on the virtual articulator of a certain patient; a second reception process of receiving, from the output means of the flag, movement data on the patient's mandibular movement, i.e. data on the parts of the flag touched by the stylus as it moves with the mandibular movement; a third reception process of receiving lower image data, i.e. data of a lower image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; a second position data generation process of generating, from the lower image data received in the third reception process, third position data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and a drawing process of writing, on the three-dimensional image of the virtual articulator specified by the virtual articulator data, based on the movement data received in the second reception process, the third position data generated in the second position data generation process, and the virtual articulator data read out after being recorded in the recording process, a mark that indicates the patient's mandibular movement and whose position has been corrected by the third position data.
The same effects as those of the mandibular movement recording device described above can also be obtained, for example, by the following computer program.
The computer program is a computer program for causing a computer to function as a mandibular movement recording device used in combination with a mandibular movement detection device comprising: an upper bow that can be fixed to the patient's skull in a state in which its positional relationship, including the relative angle, to a predetermined reference plane in the patient's skull is uniquely determined, and that has a flag with a planar detection surface capable of detecting which part of it has been touched and with output means for outputting data on the touched part to the outside; and a lower bow comprising a lower bite fork that can be fixed, by applying a curable material, to the upper surface of the patient's mandibular dentition, and a stylus connected to the lower bite fork that moves over the flag while touching it as the patient performs mandibular movement.
The computer program causes the computer to function as: recording means in which virtual articulator data, i.e. data on the virtual articulator of a certain patient, is recorded; second receiving means for receiving, from the output means of the flag, movement data on the patient's mandibular movement, i.e. data on the parts of the flag touched by the stylus as it moves with the mandibular movement; third receiving means for receiving lower image data, i.e. data of a lower image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; second position data generation means for generating, from the lower image data received by the third receiving means, third position data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and drawing means for writing, on the image of the virtual articulator specified by the virtual articulator data, based on the movement data received by the second receiving means, the third position data received from the second position data generation means, and the virtual articulator data read out from the recording means, a mark that indicates the patient's mandibular movement and whose position has been corrected by the third position data.
As another example of a mandibular movement recording device, the inventor of the present application proposes the following invention.
This mandibular movement recording device is used in combination with a mandibular movement detection device comprising: an upper bow that can be fixed to the patient's skull in a state in which its positional relationship, including the relative angle, to a predetermined reference plane in the patient's skull is uniquely determined, and that has a flag with a planar detection surface capable of detecting which part of it has been touched and with output means for outputting data on the touched part to the outside; and a lower bow comprising a lower bite fork that can be fixed, by applying a curable material, to the upper surface of the patient's mandibular dentition, and a stylus connected to the lower bite fork that moves over the flag while touching it as the patient performs mandibular movement, i.e. movement of the lower jaw.
This mandibular movement recording device comprises: recording means in which virtual articulator data, i.e. data on the virtual articulator of a certain patient, is recorded; second receiving means for receiving, from the output means of the flag, movement data on the patient's mandibular movement, i.e. data on the parts of the flag touched by the stylus as it moves with the mandibular movement; third receiving means for receiving lower image data, i.e. data of a lower image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; second position data generation means for generating, from the lower image data received by the third receiving means, third position data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and moving image processing means for animating, based on the movement data received by the second receiving means and the virtual articulator data read out from the recording means, the mandible in the three-dimensional image of the virtual articulator specified by the virtual articulator data, with its position corrected by the third position data, so that the animation reproduces the patient's mandibular movement.
This mandibular movement recording device includes recording means in which the virtual articulator data on the patient is recorded, and uses the movement data input from the flag through the second receiving means together with the virtual articulator data read out from the recording means to animate the mandible in the image of the virtual articulator specified by the virtual articulator data so that the animation reproduces the patient's mandibular movement. As with the mandibular movement recording device described above, the animation of the mandible is corrected with the third position data. By viewing the image generated in this way on a suitable display, the dentist can grasp the patient's mandibular movement easily and accurately, with the movement expressed on the virtual articulator as if a fully adjustable articulator were being used.
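The animation variant can be pictured as turning each recorded motion sample into a per-frame rigid transform that is applied to the mandibular dentition model; the static marks of the earlier device are replaced by a sequence of poses. The sketch below is only such a formulation under assumptions; how the frame transforms are derived from the flag data is left as a stub, and all names are hypothetical.

```python
import numpy as np

def animate_mandible(lower_vertices, rest_T, frame_transforms):
    """Yield the mandibular dentition model, frame by frame.

    lower_vertices   -- (N, 3) vertices of the mandibular model in the virtual
                        articulator, at the occluded (rest) position.
    rest_T           -- assumed 4x4 rest pose of the mandible, corrected with
                        the third position data.
    frame_transforms -- iterable of 4x4 transforms, one per recorded motion
                        sample, describing the mandible relative to its rest pose.
    """
    h = np.hstack([lower_vertices, np.ones((len(lower_vertices), 1))])
    for F in frame_transforms:
        yield (h @ (F @ rest_T).T)[:, :3]   # one animation frame of the lower jaw

def open_by(deg):
    """Helper: small opening rotation about the x axis (illustrative only)."""
    a = np.deg2rad(deg)
    T = np.eye(4)
    T[1:3, 1:3] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    return T

# Hypothetical usage: two frames of a small opening movement.
frames = list(animate_mandible(np.random.rand(50, 3), np.eye(4),
                               [open_by(5), open_by(10)]))
```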
The same effects as those of this mandibular movement recording device can also be obtained, for example, by the following method.
The method is a mandibular movement recording method executed by a mandibular movement recording device including a computer, used in combination with a mandibular movement detection device comprising: an upper bow that can be fixed to the patient's skull in a state in which its positional relationship, including the relative angle, to a predetermined reference plane in the patient's skull is uniquely determined, and that has a flag with a planar detection surface capable of detecting which part of it has been touched and with output means for outputting data on the touched part to the outside; and a lower bow comprising a lower bite fork that can be fixed, by applying a curable material, to the upper surface of the patient's mandibular dentition, and a stylus connected to the lower bite fork that moves over the flag while touching it as the patient performs mandibular movement.
The method comprises, executed by the computer: a recording process of recording virtual articulator data, i.e. data on the virtual articulator of a certain patient; a second reception process of receiving, from the output means of the flag, movement data on the patient's mandibular movement, i.e. data on the parts of the flag touched by the stylus as it moves with the mandibular movement; a third reception process of receiving lower image data, i.e. data of a lower image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; a second position data generation process of generating, from the lower image data received in the third reception process, third position data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and a moving image process of animating, based on the movement data received in the second reception process and the virtual articulator data read out after being recorded in the recording process, the mandible in the three-dimensional image of the virtual articulator specified by the virtual articulator data, with its position corrected by the third position data, so that the animation reproduces the patient's mandibular movement.
The same effects as those of the mandibular movement recording device described above can also be obtained, for example, by the following computer program.
The computer program is a computer program for causing a computer to function as a mandibular movement recording device used in combination with a mandibular movement detection device comprising: an upper bow that can be fixed to the patient's skull in a state in which its positional relationship, including the relative angle, to a predetermined reference plane in the patient's skull is uniquely determined, and that has a flag with a planar detection surface capable of detecting which part of it has been touched and with output means for outputting data on the touched part to the outside; and a lower bow comprising a lower bite fork that can be fixed, by applying a curable material, to the upper surface of the patient's mandibular dentition, and a stylus connected to the lower bite fork that moves over the flag while touching it as the patient performs mandibular movement.
The computer program causes the computer to function as: recording means in which virtual articulator data, i.e. data on the virtual articulator of a certain patient, is recorded; second receiving means for receiving, from the output means of the flag, movement data on the patient's mandibular movement, i.e. data on the parts of the flag touched by the stylus as it moves with the mandibular movement; third receiving means for receiving lower image data, i.e. data of a lower image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; second position data generation means for generating, from the lower image data received by the third receiving means, third position data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and moving image processing means for animating, based on the movement data received by the second receiving means and the virtual articulator data read out from the recording means, the mandible in the three-dimensional image of the virtual articulator specified by the virtual articulator data, with its position corrected by the third position data, so that the animation reproduces the patient's mandibular movement.
FIG. 1 is a perspective view showing the overall configuration of a face bow according to one embodiment.
FIG. 2 is a perspective view showing the configuration of the upper bow included in the face bow shown in FIG. 1.
FIG. 3 is an enlarged perspective view of a part of the connection member of the face bow shown in FIG. 1.
FIG. 4 is a perspective view showing the configuration of the under bow used in combination with the upper bow shown in FIG. 2.
FIG. 5 is a perspective view showing the external appearance of the diagnostic apparatus in one embodiment.
FIG. 6 is a hardware configuration diagram of the main body of the diagnostic apparatus shown in FIG. 5.
FIG. 7 is a block diagram showing the functional blocks generated inside the main body of the diagnostic apparatus shown in FIG. 5.
Hereinafter, a preferred embodiment of the present invention will be described.
First, the face bow 100 used in this embodiment will be described.
The face bow 100 is used when creating virtual articulator data, that is, data of a virtual articulator. The face bow 100 is also used when recording a patient's mandibular movement with the aid of the virtual articulator data. Although the face bow 100 is used in both of these two cases, its configuration differs between them.
First, the face bow 100 as used for creating virtual articulator data will be described. The face bow 100 in that case is configured as shown in the perspective views of FIGS. 1 and 2.
In this case, the face bow 100 includes an upper bow 110, a bite fork 120, and a connecting member 130 that connects the upper bow 110 and the bite fork 120, similarly to a known face bow used for face bow transfer. The face bow 100 of this embodiment may basically be a known, general-purpose one, but differs from known face bows in that, as described later, its connecting member 130 is marked or colored.
The upper bow 110 can be adjusted in position, including its angle, with respect to the patient's head, and can be fixed to the patient's head with its positional relationship to a predetermined reference plane assumed for the patient's head uniquely determined.
Although not limited thereto, the upper bow 110 described in this embodiment has, as a whole, an eyeglasses-like configuration. The upper bow 110 includes a main body 111 shaped like an eyeglasses frame. The main body 111 comprises two left and right frame portions 111A, which hold no lenses and which, although not limited thereto, are annular like the rims of eyeglasses, a bridge portion 111B connecting the two frame portions 111A, and endpieces 111C extending horizontally from the outer sides of the frame portions 111A.
The bridge portion 111B has a hole (not shown) threaded on its inner circumferential surface, into which a screw 111B1 can be screwed.
Each endpiece 111C is provided with a long hole 111C1, a slot extending over substantially its entire length.
In the upper bow 110 of this embodiment, an attachment portion 111D is attached to the lower part of one of the frame portions 111A, here, although not limited thereto, the left frame portion 111A. A screw 111E penetrating the attachment portion 111D vertically is attached to it, and a support block 112 is attached to the lower end of the screw 111E. By turning the screw 111E, the support block 112 can be rotated about the screw 111E as an axis. However, this rotation does not occur unless a reasonably large force is applied.
The support block 112 is for fixing a positioning rod 113 (FIG. 2; note that the support block 112 and the positioning rod 113 are omitted from FIG. 1). The support block 112 has a hole (not shown) extending in the front-rear direction. The support block 112 also has another hole (not shown), threaded on its inner circumferential surface and perpendicular to the aforementioned front-rear hole, into which a screw 112A is screwed. The positioning rod 113 consists of a horizontal portion 113A, which is kept generally horizontal when the face bow 100 is in use, and a vertical portion 113B, which is kept generally vertical when the face bow 100 is in use. A positioning sphere 113B1, which is pressed against the patient's nose when the face bow 100 is in use, is attached to the upper end of the vertical portion 113B. The horizontal portion 113A of the positioning rod 113 is inserted into the hole penetrating the support block 112 from front to rear, and can move back and forth in that hole and rotate about its own axis. When the screw 112A is tightened by turning it, its tip abuts the horizontal portion 113A, which is then clamped between the screw 112A and the inner surface of the front-rear hole of the support block 112, so that the positioning rod 113 can be fixed to the support block 112 at an arbitrary front-rear position and at an arbitrary angle about its axis. Loosening the screw 112A naturally releases this fixation.
Considering also that the support block 112 can rotate in a substantially horizontal plane with respect to the attachment portion 111D, the vertical portion 113B of the positioning rod 113 can rotate horizontally about the support block 112, can move back and forth with respect to the support block 112, and can rotate about its axis.
Temples 114, which rest on the patient's ears when the face bow 100 is in use, are attached to the endpieces 111C, just like the temples of ordinary eyeglasses. Each temple 114 consists of a front temple portion 114A, on the endpiece 111C side, and a rear temple portion 114B attached to the front temple portion 114A.
A hole (not shown), threaded on its inner circumferential surface, is formed in the front face of the front temple portion 114A and oriented along the length direction of the temple 114. The long hole 111C1 provided in the endpiece 111C is penetrated by a screw 114A1, the tip of which is screwed into the aforementioned hole in the front face of the front temple portion 114A. When the screw 114A1 is loosened, the screw 114A1 and the front temple portion 114A (in short, the temple 114) can move in a substantially horizontal direction along the long hole 111C1. When the screw 114A1 is turned and tightened, the endpiece 111C is clamped between the rear face of the head on the front side of the screw 114A1 and the front face of the front temple portion 114A. That is, the temple 114 can move along the length direction of the endpiece 111C and can be fixed to the endpiece 111C at an arbitrary position.
The rear temple portion 114B can move back and forth with respect to the front temple portion 114A, thereby allowing the overall length of the temple 114 to be changed. Any mechanism may be employed for moving the rear temple portion 114B back and forth relative to the front temple portion 114A; in this embodiment, a so-called rack-and-pinion structure is employed. Although not shown, the front temple portion 114A has a built-in rack running along its length direction, and a screw 114B1 attached to the rear temple portion 114B carries a pinion, which is likewise not shown. By turning the screw 114B1, the screw 114B1, meshed with the rack, moves back and forth relative to the rack, so that the rear temple portion 114B moves back and forth relative to the front temple portion 114A.
A flag 115 is detachably attached to the rear temple portion 114B. The flag 115 consists of a planar sensor portion 115A and a frame 115B surrounding it. The flag 115 can be detachably fixed to the rear temple portion 114B by screwing it to the rear temple portion 114B with a screw 115C. The sensor portion 115A can detect the site contacted by the needle-like portion of a stylus, described later, and can output site data on that contacted site substantially in real time. As described later, electric power is supplied to the needle-like portion, and the sensor portion 115A detects the site at which current flows between it and the needle-like portion as the site contacted by the needle-like portion. However, the sensor portion 115A may instead detect the contacted site from other parameters, such as the pressure applied by the needle-like portion. The flag 115 is connected to a cable 115D, the far end of which is connected to a computer described later. The site data are sent to the computer via the cable 115D.
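As a rough illustration of the kind of data the sensor portion 115A could output, the following is a minimal sketch assuming a hypothetical representation in which each site-data sample is a timestamped 2D coordinate on the flag's detection surface; the class, field names, and units are illustrative assumptions, not taken from the embodiment.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SiteSample:
    """One site-data sample: where the stylus tip touched the flag, and when.

    Coordinates are expressed in the flag's own 2D plane (millimetres from one
    corner of the sensor portion); both the frame and the units are assumptions
    made for illustration only.
    """
    t_ms: float   # time of the sample, milliseconds since recording started
    x_mm: float   # horizontal position of the contact site on the sensor
    y_mm: float   # vertical position of the contact site on the sensor

def to_trajectory(samples: List[SiteSample]) -> List[Tuple[float, float]]:
    """Order samples by time and return the traced path as (x, y) pairs."""
    return [(s.x_mm, s.y_mm) for s in sorted(samples, key=lambda s: s.t_ms)]

# Example: three samples streamed from the flag over its cable
trace = to_trajectory([
    SiteSample(t_ms=0.0, x_mm=12.0, y_mm=8.0),
    SiteSample(t_ms=16.7, x_mm=12.4, y_mm=7.6),
    SiteSample(t_ms=33.3, x_mm=12.9, y_mm=7.1),
])
print(trace)
```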
The bite fork 120 is fixed to the lower surface of the patient's maxillary dentition when the face bow 100 is in use. The bite fork 120 includes a bite fork main body 121, which is fixed to the lower surface of the patient's maxillary dentition, and a connection portion 122, which fixes the bite fork main body 121 to the lower end of the connecting member 130. The connection portion 122 is a plate-like body with a hole (not shown).
The bite fork main body 121 is fixed to the lower surface of the maxillary dentition by a known method, for example by applying a modeling compound to the upper surface of the bite fork main body 121 and pressing the bite fork main body 121 against the lower surface of the maxillary dentition.
The connecting member 130 connects the upper bow 110 and the bite fork 120. The connecting member 130 consists of an upper member 131, a lower member 132, and an intermediate member 133.
The upper member 131 consists of an upper attachment portion 131A, an upper connecting rod 131B, and a ball 131C. The upper attachment portion 131A is for fixing to the upper bow 110 and has a hole (not shown). With this hole aligned with the hole of the bridge portion 111B described above, the upper attachment portion 131A is fixed to the upper bow 110 by passing the screw 111B1 through the hole of the upper attachment portion 131A, screwing its tip into the hole of the bridge portion 111B, and tightening it. The upper connecting rod 131B connects the upper attachment portion 131A and the ball 131C. The ball 131C is a metal sphere.
The lower member 132 consists of a lower attachment portion 132A, a lower connecting rod 132B, and a ball 132C. The lower attachment portion 132A is for fixing to the bite fork 120 and has a hole (not shown) threaded on its inner circumferential surface. With this hole aligned with the hole of the connection portion 122 described above, the lower attachment portion 132A is fixed to the bite fork 120 by passing a screw 132B1 through the hole of the connection portion 122, screwing its tip into the hole of the lower attachment portion 132A, and tightening it. The lower connecting rod 132B connects the lower attachment portion 132A and the ball 132C. The ball 132C is a metal sphere.
The intermediate member 133 sits between the upper member 131 and the lower member 132 and couples them together. The intermediate member 133 includes a first member 133A and a second member 133B, both cylindrical and arranged coaxially. The first member 133A is provided with an upper receiving portion 133A2 having a receiving hole 133A1 for receiving the ball 131C of the upper member 131. The second member 133B is provided with a lower receiving portion 133B2 having a receiving hole 133B1 for receiving the ball 132C of the lower member 132. The receiving hole 133A1 of the upper receiving portion 133A2 and the ball 131C together form a ball joint, and the receiving hole 133B1 of the lower receiving portion 133B2 and the ball 132C together form another ball joint. The first member 133A and the second member 133B can rotate relative to each other about their common axis. A lever 134 is connected to the second member 133B. Depending on how it is turned, the lever 134 determines whether to permit or prohibit the rotation of the ball 131C within the receiving hole 133A1 of the upper receiving portion 133A2, the rotation of the ball 132C within the receiving hole 133B1 of the lower receiving portion 133B2, and the relative rotation of the first member 133A and the second member 133B. When the lever 134 is loosened, the two ball joints become free and the first member 133A and the second member 133B are allowed to rotate relative to each other; when the lever 134 is tightened, the two ball joints are locked and the first member 133A and the second member 133B are likewise locked to each other so that they can no longer rotate relative to each other.
The upper bow 110 and the bite fork 120 are connected to each other by the connecting member 130 as described above. Since the connecting member 130 has two ball joints and permits relative rotation between the first member 133A and the second member 133B, the positional relationship, including the angle, between the upper bow 110 and the bite fork 120 can be adjusted freely, and that positional relationship can also be fixed.
The connecting member 130 is, as mentioned above, marked, colored, or both.
The marks or colors are intended to make it possible to grasp the posture of the connecting member 130, which is composed of a plurality of parts, in other words the positional relationship of the ball 131C with respect to the receiving hole 133A1, the positional relationship of the ball 132C with respect to the receiving hole 133B1, and the mutual positional relationship of the first member 133A and the second member 133B, from images in which these parts appear, whether a plurality of images each showing them individually or a single image showing them all.
If these positional relationships are uniquely determined, the relative positional relationship, including the angle, between the upper bow 110 and the bite fork 120 is also uniquely determined. This is on the premise that the positional relationship between the upper attachment portion 131A of the connecting member 130 and the bridge portion 111B of the upper bow 110 is always fixed (they can rotate relative to each other about the screw 111B1, but their mutual position shall always be set to one fixed position) and that the positional relationship between the lower attachment portion 132A of the connecting member 130 and the connection portion 122 of the bite fork 120 is likewise always fixed (they can rotate relative to each other about the screw 132B1, but their mutual position shall always be set to one fixed position). The marks or colors are chosen so that the relative positional relationship between the upper bow 110 and the bite fork 120 can easily be grasped from a posture image, that is, an image of the connecting member 130.
Note that if the positional relationship between the upper attachment portion 131A of the connecting member 130 and the bridge portion 111B of the upper bow 110 and the positional relationship between the lower attachment portion 132A of the connecting member 130 and the connection portion 122 of the bite fork 120 are not always set to fixed positions, the posture image must be captured so that these positional relationships can also be grasped.
For example, if marks are used, then, as shown in FIG. 4, a plurality of concentric circles M1 (lines like latitude lines on the earth) centered on the upper connecting rod 131B and a plurality of radial lines M2 (lines like longitude lines on the earth) centered on the upper connecting rod 131B may be drawn on the ball 131C of the upper member 131, with their spacing or line thickness varied, or with symbols (for example, numbers or letters) identifying the lines written on whatever portions are needed to distinguish them, and a suitable number of marks M3 may be placed around the receiving hole 133A1 of the upper receiving portion 133A2 to clarify the relative positional relationship between the lines drawn on the ball 131C and the upper receiving portion 133A2. Similar marks can be written on the ball 132C of the lower member 132 and on the lower receiving portion 133B2. As for the intermediate member 133, only the angle between the coaxially arranged first member 133A and second member 133B matters, so by drawing a graduated scale on the outer surface of one of them and, for example, an arrow serving as the reference for that scale on the outer surface of the other, the angle formed by, for example, the upper connecting rod 131B and the lower connecting rod 132B can be captured. If the connecting member 130 is marked in this way, the posture of the connecting member 130 becomes readily apparent from the posture image, and therefore the relative positional relationship, including the angle, between the upper bow 110 and the bite fork 120 can easily be grasped from the posture image.
If colors are used, the ball 131C of the upper member 131 may, for example, be colored so that its hue varies continuously in the longitude direction and its lightness varies continuously in the latitude direction, and the area around the receiving hole 133A1 of the upper receiving portion 133A2 may be given an achromatic color whose lightness varies continuously (for example, changing continuously from white to black). The ball 132C of the lower member 132 and the lower receiving portion 133B2 can be colored similarly. As for the intermediate member 133, the outer surface of one of the coaxially arranged first member 133A and second member 133B may be given an achromatic color whose lightness varies continuously in the circumferential direction, and the outer surface of the other may either be marked with an arrow as in the case described above or be given the same kind of color as the first, so that it can play the same role as the arrow. If this is done, the posture of the connecting member 130 becomes readily apparent from the posture image, and therefore the relative positional relationship, including the angle, between the upper bow 110 and the bite fork 120 can easily be grasped from the posture image.
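To make concrete how a coloring scheme of this kind could be read back out of an image, here is a minimal sketch assuming a hypothetical encoding in which hue runs linearly over 0 to 360 degrees with longitude and lightness runs linearly from latitude -90 to +90 degrees; the specific mapping and the function name are illustrative assumptions, not the embodiment's actual scheme.

```python
def decode_ball_orientation(hue_deg: float, lightness_pct: float) -> tuple:
    """Recover (longitude, latitude) of the patch of the ball visible at the
    rim of the receiving hole, from the colour observed there in the image.

    Assumed encoding (illustration only):
      hue 0..360 deg      -> longitude 0..360 deg around the connecting rod
      lightness 0..100 %  -> latitude  -90..+90 deg from the rod's equator
    """
    longitude = hue_deg % 360.0
    latitude = (lightness_pct / 100.0) * 180.0 - 90.0
    return longitude, latitude

# Example: the colour sampled next to the receiving hole in the posture image
lon, lat = decode_ball_orientation(hue_deg=210.0, lightness_pct=65.0)
print(f"ball patch at longitude {lon:.1f} deg, latitude {lat:.1f} deg")
```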
Next, the face bow as used for recording mandibular movement will be described. In that case, the face bow uses the upper bow 110 of the face bow 100 described above, which is used when generating the virtual articulator data, together with an underbow 140 described below. The two are not connected to each other. The connecting member 130 and the bite fork 120 of the face bow 100 described above are not used in this face bow.
A front bar 141 is a rod-shaped body.
A coupling member 143 has a hole (not shown) through which the front bar 141 passes. The coupling member 143 also has a hole threaded on its inner circumferential surface, into which a screw 143A is screwed. When the screw 143A is loosened, the coupling member 143 can move along the length direction of the front bar 141 and can rotate about the front bar 141 as an axis. When the screw 143A is tightened, on the other hand, its tip presses against the front bar 141, and the coupling member 143 is held with its position along the length of the front bar 141 and its angle with respect to the front bar 141 fixed.
The coupling member 143 also has a tube 143B through which a horizontal bar 142 passes. The horizontal bar 142 is attached to the coupling member 143 in a state penetrating the tube 143B. The coupling member 143 further has a hole (not shown) threaded on its inner circumferential surface, into which a screw 143C is screwed. When the screw 143C is loosened, the horizontal bar 142 can move in the length direction of the tube 143B and can rotate about its axis; when the screw 143C is tightened, the horizontal bar 142 is fixed to the tube 143B.
As described above, the position of the coupling member 143 along the length of the front bar 141 and its angle with respect to the front bar 141 are variable, and the length from the coupling member 143 to the rear end of the horizontal bar 142 is also variable. Taken together, this means that the position of the rear end of the horizontal bar 142, as seen from the lower bite fork described later, can be set freely in height, width, and depth.
A stylus 144 is attached to the rear end of the horizontal bar 142.
The stylus 144 comes into contact with the aforementioned flag 115 attached to the upper bow 110 and thereby generates the aforementioned site data. As described above, the position of the rear end of the horizontal bar 142, as seen from the lower bite fork 145, can be set freely in height, width, and depth. By positioning the rear end of the horizontal bar 142 appropriately, the stylus 144 is placed so that, when the patient occludes normally, it lies at a suitable position on the sensor portion 115A of the flag 115 (for example, at the center of the flag 115).
The stylus 144 includes a stylus main body 144A and a needle-like portion 144B. A power line is connected to the stylus main body 144A, and the needle-like portion 144B is supplied with electric power from the stylus main body 144A. The needle-like portion 144B is also urged toward its tip with a moderate elastic force by an elastic body (not shown) built into the stylus main body 144A. As a result, even if the distance from the stylus main body 144A to the sensor portion 115A of the flag 115 varies somewhat, the tip of the needle-like portion 144B remains pressed against the sensor portion 115A with an appropriate pressure within a certain range.
The underbow 140 includes a lower bite fork 145. The lower bite fork 145 is fixed to the distal end of a fixing bar 145B whose base end is fixed to a fixing plate 145A fixed to the front bar 141. The lower surface of the lower bite fork 145 is fixed to the upper surface of the patient's mandibular dentition. This fixation is performed by a common method: for example, an immediate (self-curing) polymerization resin or the like is placed so as to be retained on the lower bite fork 145 and pressed against the buccal surfaces of the mandibular teeth so that it polymerizes in a state conforming to the mandibular dentition, and the lower bite fork 145, thus adapted to the buccal axial surfaces of the teeth, is then fixed with an instant adhesive or the like to the buccal side of the mandibular dentition at a location where it does not interfere with mandibular movement.
Next, the diagnostic apparatus in this embodiment will be described. Although this apparatus is also used for treatment, it will hereinafter simply be referred to as the diagnostic apparatus for simplicity.
This diagnostic apparatus combines the functions of the virtual articulator data generation device and the mandibular movement recording device of the present invention. Naturally, these functions could also be realized as separate devices.
The diagnostic apparatus 200 is as shown in FIG. 5 and is, in effect, constituted by a computer, for example an ordinary personal computer.
The diagnostic apparatus 200 includes a main body 210, which is a computer, an input device 220, and a display 230.
The input device 220 is a device with which a user, such as a dentist, provides input to the main body 210. Although not limited thereto, the input device 220 in this embodiment includes, for example, a general-purpose keyboard and mouse.
The display 230 may be a general-purpose one, for example a liquid crystal display or a CRT display.
Next, the configuration of the main body 210 will be described.
The main body 210 contains the hardware shown in FIG. 6. In this embodiment, the main body 210 includes a CPU (Central Processing Unit) 211, a ROM (Read Only Memory) 212, a RAM (Random Access Memory) 213, an HDD (Hard Disk Drive) 214, an interface 215, and a bus 216 connecting them.
The CPU 211 controls the main body 210 as a whole. By executing a program, the CPU 211 performs the various processes described below.
The ROM 212 stores a program for operating the CPU 211, data needed to control the main body 210, and the like.
The RAM 213 provides a work area for the CPU 211 to process data.
The HDD 214 also stores programs and data for operating the CPU 211. For example, an OS for operating the CPU 211 is recorded on the HDD 214. The program of the present invention is recorded in the ROM 212 or on the HDD 214. The program of the present invention may have been installed on the main body 210 before the main body 210 was shipped, or it may have been installed on the main body 210, for example by the user, after shipment. When installed after shipment, the program of the present invention may be recorded onto the main body 210 from a recording medium such as a CD-ROM, or may be recorded onto the main body 210 after being downloaded over a predetermined network such as the Internet. The program of the present invention may cause the CPU 211 to execute the processes described below on its own, or may cause the CPU 211 to execute those processes in cooperation with the OS or other programs.
The interface 215 serves as the gateway between the outside and the CPU 211, ROM 212, RAM 213, and HDD 214, which can exchange data with the outside via the interface 215 as necessary. As described later, the main body 210 needs to be able to accept posture image data and other image data from an external device (for example, a three-dimensional imaging camera) or from a recording medium on which the image data are recorded. The interface 215 may also accept image data from an external device or the like over a wired or wireless network. To make this possible, the interface 215 is, for example, connected to a USB terminal (not shown) provided on the main body 210, and can accept image data from a three-dimensional imaging camera connected to one end of a USB cable, via the cable and the USB terminal connected to its other end. Alternatively, the interface 215 includes a reader capable of reading data from a predetermined recording medium such as a DVD or a memory card, and accepts image data from a recording medium that carries the image data and is inserted into the reader.
When the CPU 211 executes the program, functional blocks such as those shown in FIG. 7 are generated inside the main body 210. As mentioned above, this diagnostic apparatus serves as both the virtual articulator data generation device and the mandibular movement recording device of the present invention. Accordingly, the program in this embodiment causes, for example, a general-purpose computer to function as both the virtual articulator data generation device and the mandibular movement recording device. Of course, when one computer is to function only as the virtual articulator data generation device and a separate computer only as the mandibular movement recording device, it suffices for the program installed on each computer to give that computer the functions it needs, as those skilled in the art will readily understand.
In this embodiment, a receiving unit 221, a control unit 222, a model generation unit 223, a position data generation unit 224, a face bow data recording unit 225, a coupling unit 226, a virtual articulator data recording unit 227, a display control unit 228, a second position data generation unit 229, and a mandibular movement image data generation unit 230 are generated inside the main body 210.
When a computer is to function only as the virtual articulator data generation device, the receiving unit 221, the control unit 222, the model generation unit 223, the position data generation unit 224, the face bow data recording unit 225, the coupling unit 226, the virtual articulator data recording unit 227, and the display control unit 228 suffice inside its main body 210.
When a computer is to function only as the mandibular movement recording device, the receiving unit 221, the control unit 222, the virtual articulator data recording unit 227, the display control unit 228, the second position data generation unit 229, and the mandibular movement image data generation unit 230 suffice inside its main body 210.
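As a very rough sketch of how such a set of functional blocks might be organized in software, the following wires up placeholder classes for the two device roles; all class and attribute names are invented for illustration and do not correspond to the actual program.

```python
class MainBody:
    """Illustrative wiring of the functional blocks of FIG. 7.

    Each attribute stands in for one block generated inside the main body 210
    when the program runs; the class names are assumptions for illustration.
    """
    def __init__(self, as_generator: bool = True, as_recorder: bool = True):
        self.receiving_unit = ReceivingUnit()
        self.control_unit = ControlUnit()
        self.virtual_articulator_store = VirtualArticulatorStore()
        self.display_control = DisplayControl()
        if as_generator:          # virtual articulator data generation device
            self.model_generation = ModelGeneration()
            self.position_data_generation = PositionDataGeneration()
            self.face_bow_data_store = FaceBowDataStore()
            self.coupling_unit = CouplingUnit()
        if as_recorder:           # mandibular movement recording device
            self.second_position_data_generation = SecondPositionDataGeneration()
            self.mandibular_movement_imaging = MandibularMovementImaging()

# Minimal stand-in block classes so the sketch runs as-is.
class ReceivingUnit: pass
class ControlUnit: pass
class VirtualArticulatorStore: pass
class DisplayControl: pass
class ModelGeneration: pass
class PositionDataGeneration: pass
class FaceBowDataStore: pass
class CouplingUnit: pass
class SecondPositionDataGeneration: pass
class MandibularMovementImaging: pass

both = MainBody()                            # diagnostic apparatus 200
generator_only = MainBody(as_recorder=False)
recorder_only = MainBody(as_generator=False)
```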
The receiving unit 221 accepts data input from outside via the interface 215. The receiving unit 221 serves as the receiving means, the second receiving means, and the third receiving means of the present application.
The data accepted by the receiving unit 221 include, for example, data described later received from the input device 220 and image data received from an external device or a recording medium. The image data accepted by the receiving unit 221 include: maxillary dentition image data, the data of a maxillary dentition image, an image of the patient's maxillary dentition; mandibular dentition image data, the data of a mandibular dentition image, an image of the patient's mandibular dentition; upper image data, the data of an upper image, an image captured with the bite fork 120 fixed to the patient's maxillary dentition and showing the relative positional relationship, including the angle, between the bite fork 120 and the lower surface of the maxillary dentition; posture image data, the data of a posture image, an image showing the posture of the connecting member 130; occlusal image data, the data of an occlusal image, an image showing the state of engagement between the maxillary dentition and the mandibular dentition; and lower image data, the data of a lower image, an image showing the relative positional relationship, including the angle, between the lower bite fork 145 and the mandibular dentition. These image data are used when the diagnostic apparatus 200 functions as the virtual articulator data generation device.
The receiving unit 221 also accepts site data from the flag 115 via the interface 215. The site data are used when the diagnostic apparatus 200 functions as the mandibular movement recording device.
The receiving unit 221 determines which kind of data it has accepted and sends the data to the appropriate destination.
The receiving unit 221 sends the data accepted from the input device 220 mainly to the control unit 222, sends the maxillary dentition image data and the mandibular dentition image data to the model generation unit 223, and sends the upper image data, the posture image data, and the occlusal image data to the position data generation unit 224.
The receiving unit 221 also sends the lower image data to the second position data generation unit 229 and sends the site data to the mandibular movement image data generation unit 230.
The control unit 222 controls the main body 210 as a whole.
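A minimal sketch of the kind of routing the receiving unit 221 performs, assuming the incoming data are tagged with a kind label; the tags, the table, and the function below are illustrative assumptions rather than the program's actual mechanism.

```python
# Destination block keyed by the kind of data received (illustrative tags).
ROUTES = {
    "input_device":         "control_unit_222",
    "maxillary_dentition":  "model_generation_unit_223",
    "mandibular_dentition": "model_generation_unit_223",
    "upper_image":          "position_data_generation_unit_224",
    "posture_image":        "position_data_generation_unit_224",
    "occlusal_image":       "position_data_generation_unit_224",
    "lower_image":          "second_position_data_generation_unit_229",
    "site_data":            "mandibular_movement_image_data_generation_unit_230",
}

def route(kind: str) -> str:
    """Return the name of the block that should receive data of this kind."""
    try:
        return ROUTES[kind]
    except KeyError:
        raise ValueError(f"unknown data kind: {kind!r}")

print(route("posture_image"))   # -> position_data_generation_unit_224
print(route("site_data"))       # -> mandibular_movement_image_data_generation_unit_230
```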
The model generation unit 223 receives the maxillary dentition image data and the mandibular dentition image data from the receiving unit 221 as described above. From the received maxillary dentition image data, the model generation unit 223 generates maxillary dentition model data, the data of a maxillary dentition model, a three-dimensional model of the maxillary dentition; this corresponds to the model of the maxillary dentition used in a physical articulator. From the received mandibular dentition image data, the model generation unit 223 generates mandibular dentition model data, the data of a mandibular dentition model, a three-dimensional model of the mandibular dentition; this corresponds to the model of the mandibular dentition used in a physical articulator. The model generation unit 223 generates the maxillary dentition model data and the mandibular dentition model data by applying known image processing techniques, for example techniques using polygons.
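As a minimal sketch of the kind of polygon representation such a dentition model might use, the following shows a plain vertex/triangle mesh together with a rigid placement operation; everything here is an illustrative assumption rather than the embodiment's actual data format.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DentitionModel:
    """A bare-bones triangle mesh, one per dental arch.

    vertices: (N, 3) array of 3D points reconstructed from the intraoral scan
    faces:    (M, 3) array of vertex indices, one row per triangle
    """
    vertices: np.ndarray
    faces: np.ndarray

    def transformed(self, pose: np.ndarray) -> "DentitionModel":
        """Return a copy of the model placed by a 4x4 rigid transform."""
        homogeneous = np.hstack([self.vertices, np.ones((len(self.vertices), 1))])
        moved = (pose @ homogeneous.T).T[:, :3]
        return DentitionModel(moved, self.faces.copy())

# A toy "model" with a single triangle, just to show the shapes involved.
toy = DentitionModel(
    vertices=np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 8.0, 0.0]]),
    faces=np.array([[0, 1, 2]]),
)
print(toy.transformed(np.eye(4)).vertices)
```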
For the model generation unit 223 to generate the maxillary dentition model and the mandibular dentition model accurately, it is preferable that both the maxillary dentition image and the mandibular dentition image be three-dimensional images. Three-dimensional intraoral cameras capable of such imaging are already in practical use.
The model generation unit 223 sends the generated maxillary dentition model data and mandibular dentition model data to the coupling unit 226.
As described above, the position data generation unit 224 receives the upper image data, the posture image data, and the occlusal image data from the receiving unit 221.
From the received upper image data and posture image data, the position data generation unit 224 obtains the relative position, including the angle, of the maxillary dentition with respect to the reference plane in the living patient, and generates first position data, which are data on the position of the maxillary dentition with respect to a virtual reference plane, the virtual counterpart of that reference plane. From the received occlusal image data, the position data generation unit 224 also obtains the relative position, including the angle, of the mandibular dentition with respect to the maxillary dentition, and generates second position data, which are data on the position of the mandibular dentition with respect to the maxillary dentition.
The first position data and the second position data are used later when aligning the maxillary dentition model and the mandibular dentition model in the virtual articulator created on the computer. Using the first position data, the reference plane and the maxillary dentition model can be aligned so as to match the relationship between the reference plane and the maxillary dentition in the living patient. Using the second position data, the maxillary dentition model and the mandibular dentition model can be aligned so as to match the relationship between the maxillary dentition and the mandibular dentition in the living patient.
The position data generation unit 224 in this embodiment generates the first position data as follows.
As described above, the first position data are generated from the upper image data and the posture image data. The upper image is, as described above, an image showing the relative positional relationship, including the angle, between the bite fork 120 and the lower surface of the maxillary dentition; there may be more than one such image. Since the bite fork 120 and the maxillary dentition both appear in the upper image, it is easy to generate data indicating their mutual positional relationship by applying known image processing to the upper image data. To determine this positional relationship more accurately, the upper image is preferably a three-dimensional image.
To obtain the first position data, the posture of the connecting member 130 must also be obtained from the posture image data. The posture image is, as described above, an image showing the posture of the connecting member 130; there may be more than one such image. Since the connecting member 130 appears in the posture image, it is easy to detect the posture of the connecting member 130 by applying known image processing techniques to the posture image.
Although not necessarily limited to this, in this embodiment the position data generation unit 224 uses data recorded in the face bow data recording unit 225 in order to detect the posture of the connecting member 130 from the posture image data. The face bow data recording unit 225 stores data on the face bow 100 used in combination with this diagnostic apparatus 200. When a plurality of face bows 100 are used in combination with the diagnostic apparatus 200, face bow data, that is, data on each of those face bows 100, are recorded in the face bow data recording unit 225. The face bow data include the information needed to determine the posture of each face bow's connecting member 130, for example by calculation or by comparison against a data table, from the dimensions of the parts constituting that connecting member 130 and from the marks or colors applied to it.
The position data generation unit 224 analyzes the posture image based on the posture image data, using the face bow data for the face bow whose connecting member 130 appears in that posture image. As described above, the connecting member 130 appearing in the posture image is marked or colored, so there is no difficulty in determining the posture of the connecting member 130 from information on how those marks or colors appear in the posture image. To determine this positional relationship more accurately, the posture image is preferably a three-dimensional image. If the posture image is a three-dimensional image, it is comparatively easy to determine the posture of the connecting member accurately even if the connecting member is not marked or colored. When the connecting member 130 is colored, a color sample bearing a plurality of predetermined colors may be included in the posture image so that the position data generation unit 224 can correctly grasp the colors in the posture image based on the posture image data, and the position data generation unit 224 may then execute a process of correcting the colors in the posture image based on the plurality of colors on that color sample.
In the manner described above, the relative positional relationship between the bite fork 120 and the maxillary dentition and the posture of the connecting member 130 are obtained.
Once the posture of the connecting member 130 is obtained, the relative positional relationship between the upper bow 110 and the bite fork 120, which are connected to its two ends, can be grasped, as already described. Since the upper bow 110 is uniquely fixed with respect to the reference plane assumed for the patient's head, being able to grasp the relative positional relationship between the upper bow 110 and the bite fork 120 effectively means being able to grasp the relative positional relationship between the reference plane and the bite fork 120. Furthermore, since the relative positional relationship between the bite fork 120 and the maxillary dentition has also been grasped, as described above, combining it with the relative positional relationship of the bite fork 120 with respect to the reference plane makes it possible to grasp the positional relationship of the maxillary dentition with respect to the reference plane.
The positional relationship between the patient's reference plane and the maxillary dentition obtained in this way is used as the first position data, that is, the data on the mutual positions of the virtual reference plane and the maxillary dentition model, for reproducing that relationship in the virtual articulator formed on the computer as the positional relationship between the virtual reference plane and the maxillary dentition model.
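One way to picture how the first position data could be derived is as a chain of rigid transforms: reference plane to upper bow (fixed), upper bow to bite fork (from the connecting member's posture), and bite fork to maxillary dentition (from the upper image). The sketch below composes 4x4 homogeneous matrices under that assumption; the helper function, the numbers, and the idea of representing each relationship as a single matrix are illustrative, not taken from the embodiment.

```python
import numpy as np

def rigid(rotation_deg_z: float, translation_xyz) -> np.ndarray:
    """Build a 4x4 rigid transform (rotation about Z plus a translation),
    used here only to fabricate example relationships."""
    a = np.radians(rotation_deg_z)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = translation_xyz
    return T

# Each matrix maps coordinates of the later frame into the earlier frame.
ref_T_upperbow = rigid(0.0, (0.0, 0.0, 30.0))       # upper bow fixed to the reference plane
upperbow_T_fork = rigid(12.0, (5.0, -20.0, -40.0))  # from the connecting member's posture
fork_T_maxilla = rigid(-3.0, (0.0, 4.0, 2.0))       # from the upper image

# First position data: the maxillary dentition expressed in the reference-plane frame.
ref_T_maxilla = ref_T_upperbow @ upperbow_T_fork @ fork_T_maxilla
print(np.round(ref_T_maxilla, 3))
```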
The position data generation unit 224 in this embodiment generates the second position data as follows.
As described above, the second position data are generated from the occlusal image data. The occlusal image is, as described above, an image showing the state of engagement between the maxillary dentition and the mandibular dentition; there may be more than one such image. The occlusal image shows either the maxillary and mandibular dentitions in occlusion (possibly only parts of them) or a registration paste, bitten between the patient's maxillary and mandibular dentitions, on which the shapes of the maxillary and mandibular dentitions are imprinted. It is therefore easy to generate the second position data, which indicate the mutual positional relationship between the maxillary dentition and the mandibular dentition, by applying known image processing to the occlusal image data.
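As one concrete example of the kind of known technique that can recover such a relationship, the sketch below computes the rigid transform that best maps a set of points on the mandibular model onto the corresponding points observed in an occlusal 3D image, using the classic SVD-based least-squares fit; treating the problem as point-pair registration is an assumption made purely for illustration.

```python
import numpy as np

def best_rigid_fit(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares rigid transform (4x4) mapping src points onto dst points.

    src, dst: (N, 3) arrays of corresponding 3D points (N >= 3, not collinear).
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = dst_c - R @ src_c
    return T

# Toy correspondences: model landmarks vs. the same landmarks in the occlusal scan.
model_pts = np.array([[0, 0, 0], [10, 0, 0], [0, 8, 0], [0, 0, 5]], dtype=float)
scan_pts = model_pts @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + [2.0, 1.0, -0.5]
print(np.round(best_rigid_fit(model_pts, scan_pts), 3))
```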
To determine this positional relationship more accurately, it is preferable that the occlusal image be a three-dimensional image.
The position data generation unit 224 sends the first position data and the second position data generated as described above to the coupling unit 226.
The coupling unit 226 receives the maxillary dentition model data and the mandibular dentition model data from the model generation unit 223. The coupling unit 226 also receives the first position data and the second position data from the position data generation unit 224.
The coupling unit 226 uses these to generate virtual articulator data for a virtual articulator. The virtual articulator is a three-dimensional image counterpart of a physical articulator. In the virtual articulator, the models of the maxillary and mandibular dentitions used in a physical articulator are replaced by the maxillary dentition model and the mandibular dentition model.
The alignment of the maxillary dentition with respect to the reference plane in a physical articulator is carried out, in the virtual articulator, as the alignment of the maxillary dentition model with respect to the virtual reference plane. The first position data, the data on the mutual positions of the virtual reference plane and the maxillary dentition model, are used for this alignment.
Likewise, the alignment of the lower surface of the maxillary dentition with the upper surface of the mandibular dentition in a physical articulator is carried out, in the virtual articulator, as the alignment of the lower surface of the maxillary dentition model with the upper surface of the mandibular dentition model. The second position data, which indicate the mutual positional relationship between the maxillary dentition and the mandibular dentition, are used for this alignment.
In this way, the virtual articulator data for the virtual articulator are generated. The virtual articulator can, for example, be made capable of opening and closing the maxillary dentition model and the mandibular dentition model about a virtual temporomandibular joint as an axis. Such image processing is also possible using known techniques.
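Pulling the pieces together, a virtual articulator record could amount to little more than the two dentition models plus the two alignment relationships. The sketch below shows that assembly step under the same illustrative assumptions as the earlier sketches (4x4 matrices for the position data, simple mesh objects for the models); the container and its field names are invented for illustration.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class VirtualArticulator:
    """Illustrative container for one patient's virtual articulator data."""
    patient_id: str
    maxillary_model: dict           # stand-in for the maxillary dentition mesh
    mandibular_model: dict          # stand-in for the mandibular dentition mesh
    ref_T_maxilla: np.ndarray       # first position data: virtual reference plane -> maxilla
    maxilla_T_mandible: np.ndarray  # second position data: maxilla -> mandible

    def mandible_in_reference_frame(self) -> np.ndarray:
        """Pose of the mandibular model relative to the virtual reference plane."""
        return self.ref_T_maxilla @ self.maxilla_T_mandible

articulator = VirtualArticulator(
    patient_id="patient-001",
    maxillary_model={"vertices": [], "faces": []},
    mandibular_model={"vertices": [], "faces": []},
    ref_T_maxilla=np.eye(4),
    maxilla_T_mandible=np.eye(4),
)
print(articulator.mandible_in_reference_frame())
```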
The coupling unit 226 sends the virtual articulator data to the virtual articulator data recording unit 227 and to the display control unit 228.
The virtual articulator data recording unit 227 records the virtual articulator data. The virtual articulator data is generally recorded in the virtual articulator data recording unit 227 together with data specifying the patient to whom that virtual articulator belongs.
The display control unit 228 controls the display 230. Upon receiving the virtual articulator data, the display control unit 228 creates, based on it, image data, for example moving-image data, for displaying the virtual articulator on the display 230, and sends that image data to the display 230 via the interface 215.
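How the moving image is produced is left open in this description. As one hedged illustration, an opening/closing animation of the kind mentioned above could be generated by rotating the mandibular model vertices about a hinge axis through the two virtual condyles; the axis placement, angle range, and frame count in this sketch are assumptions.

```python
import numpy as np

def rotate_about_axis(points, axis_point, axis_dir, angle_rad):
    """Rotate (N, 3) points about a line through axis_point along axis_dir (Rodrigues)."""
    k = axis_dir / np.linalg.norm(axis_dir)
    p = points - axis_point
    cos_a, sin_a = np.cos(angle_rad), np.sin(angle_rad)
    rotated = (p * cos_a
               + np.cross(k, p) * sin_a
               + np.outer(p @ k, k) * (1.0 - cos_a))
    return rotated + axis_point

def opening_animation_frames(mandible_vertices, tmj_left, tmj_right,
                             max_angle_deg=20.0, n_frames=30):
    """Yield mandible vertex arrays for a simple hinge-opening animation."""
    axis_point = (tmj_left + tmj_right) / 2.0
    axis_dir = tmj_right - tmj_left          # hinge axis through both virtual condyles
    for i in range(n_frames):
        angle = np.radians(max_angle_deg) * i / (n_frames - 1)
        yield rotate_about_axis(mandible_vertices, axis_point, axis_dir, angle)
```

Each yielded vertex array would then be rendered as one frame of the moving image sent to the display 230.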
As a result, an image of the virtual articulator is displayed on the display 230, for example as a moving image.
The second position data generation unit 229 receives the lower image data from the reception unit 221 as described above.
In the same way that the position data generation unit 224 obtains, from the upper image, the positional relationship, including the angle, between the maxillary dentition and the bite fork 120, the second position data generation unit 229 obtains the relative positional relationship, including the angle, between the mandibular dentition and the lower bite fork 145, and generates third position data, which is data on that positional relationship.
The second position data generation unit 229 sends the third position data to the mandibular movement image data generation unit 230.
The mandibular movement image data generation unit 230 receives the part data from the reception unit 221 as described above. The part data is a signal sent from the flag 115 of the face bow 100 equipped with the underbow 140 used when recording mandibular movement, and is data indicating the position on the sensor portion 115A of the flag 115 contacted by the needle-like portion 144B of the stylus 144.
The flag 115 is fixed to the upper bow 110, which is fixed to the patient's head. The stylus 144 is fixed to the underbow 140, which is fixed by the lower bite fork 145 to the upper surface of the mandibular dentition, a part of the patient's lower jaw. The upper bow 110 and the underbow 140 are not connected to each other. When the patient performs a mandibular movement, the entire underbow 140 moves in accordance with that movement, and the needle-like portion 144B of the stylus 144 traces over the sensor portion 115A of the flag 115 accordingly. The part data described above is therefore data representing the mandibular movement.
The mandibular movement image data generation unit 230 receives the part data, receives the third position data from the second position data generation unit 229, and reads out the virtual articulator data from the virtual articulator data recording unit 227. It then either executes a process of writing, onto the image of the virtual articulator specified by the virtual articulator data, marks indicating the patient's mandibular movement, after correcting the data on the basis of the relative positional relationship between the lower bite fork 145 and the mandibular dentition given by the third position data (that is, the relative positional relationship between the stylus 144 and the upper surface of the mandibular dentition), or executes a process of moving the lower jaw in the virtual articulator image by animation so as to reproduce the patient's mandibular movement, and generates mandibular movement image data, which is the data produced as a result of these processes.
The mandibular movement image data generation unit 230 sends the generated mandibular movement image data to the virtual articulator data recording unit 227 and to the display control unit 228.
In this embodiment, the virtual articulator data recording unit 227 therefore records the mandibular movement image data in addition to the virtual articulator data. Also in this embodiment, the display control unit 228 displays on the display 230 not only the image of the virtual articulator described above but also, based on the mandibular movement image data, either an image of the virtual articulator onto which marks indicating the patient's mandibular movement have been written, or a moving image in which the lower jaw in the virtual articulator image is moved so as to reproduce the patient's mandibular movement.
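As a hedged sketch of the mark-writing branch just described (the data structures and the flag pose are assumptions, and the correction by the third position data is illustrated separately further below), the recorded contact points could be lifted from the planar sensor into the articulator's coordinate frame and stored as mark points:

```python
import numpy as np

def write_movement_marks(scene: dict, contact_uv: np.ndarray,
                         T_ref_from_flag: np.ndarray) -> dict:
    """Append mandibular-movement marks to a virtual articulator scene.

    scene           -- dict holding the articulator geometry (assumed structure).
    contact_uv      -- (N, 2) sensor coordinates reported by the flag (part data).
    T_ref_from_flag -- assumed 4x4 pose of the flag's sensor plane in the
                       virtual reference frame of the articulator.
    """
    n = contact_uv.shape[0]
    # The contacts lie on the sensor plane, taken here as z = 0 in the flag frame.
    pts_flag = np.hstack([contact_uv, np.zeros((n, 1)), np.ones((n, 1))])
    pts_ref = (T_ref_from_flag @ pts_flag.T).T[:, :3]
    scene.setdefault("movement_marks", []).extend(pts_ref.tolist())
    return scene

# Hypothetical usage: two contact samples, flag pose taken as the identity.
scene = write_movement_marks({}, np.array([[1.0, 2.0], [1.5, 2.2]]), np.eye(4))
```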
Next, a method of performing diagnosis with the diagnostic apparatus described above will be explained.
What the diagnostic apparatus can do is generate virtual articulator data and record mandibular movement. These are described in order.
To generate virtual articulator data, first, a maxillary dentition image, a mandibular dentition image, and an occlusal image of the patient are captured, and maxillary dentition image data, mandibular dentition image data, and occlusal image data are generated. This step may be performed at any time, provided the face bow 100 is not fixed to the patient.
As described above, the maxillary dentition image is an image, for example a three-dimensional image, in which the maxillary dentition appears; it is made sufficient for the maxillary dentition model to be generated from it later, and in some cases consists of a plurality of images.
The mandibular dentition image is an image, for example a three-dimensional image, in which the mandibular dentition appears; it is made sufficient for the mandibular dentition model to be generated from it later, and in some cases consists of a plurality of images.
The occlusal image is an image from which the relative position of the mandibular dentition with respect to the maxillary dentition, including the angle, can be grasped; it is made sufficient for the second position data on that relative position to be generated from it later, and in some cases consists of a plurality of images. The occlusal image shows either the occluded maxillary and mandibular dentitions (or parts of them), or a marking paste that has been bitten between the patient's maxillary and mandibular dentitions and can record the shapes of both.
Next, the face bow 100 is attached to the patient's head.
The way the face bow 100 is attached to the patient's head is no different from that of a common face bow.
In this embodiment, the upper bow 110 is positioned so that the two temples 114 rest on the patient's ears, the bridge portion 111B rests on the bridge of the patient's nose, the frame portion 111A of the main body 111 is correctly positioned in front of the patient's eyes, and the vertical portion 113B of the positioning rod 113 is correctly positioned on the patient's nose. To that end, the positions of the temples 114 and of the positioning rod 113 relative to the main body 111, and the position of the rear temple portion 114B relative to the front temple portion 114A of each temple 114, are adjusted according to the dimensions of the patient's face, just as with a common face bow. In this way, the upper bow 110 is uniquely positioned in the intended positional relationship with respect to the reference plane of the patient's head. At this point the flag 115 need not be attached to the temple 114.
Meanwhile, a modeling compound, for example, is applied to the upper surface of the bite fork 120, and the upper surface of the bite fork 120 is fixed to the lower surface of the patient's maxillary dentition.
The upper end and the lower end of the connection member 130 are then connected to the upper bow 110 and to the bite fork 120, respectively. At this time, the posture of the connection member 130 is adjusted as appropriate so that these connections can be made naturally. When making these connections, the upper attachment portion 131A of the connection member 130 and the bridge portion 111B of the upper bow 110 are brought into a predetermined positional relationship, and the lower attachment portion 132A of the connection member 130 and the connection portion 122 of the bite fork 120 are likewise brought into a predetermined positional relationship.
Once the face bow 100 has been attached to the patient as described above, the upper image and the posture image are captured, and the upper image data and the posture image data are generated.
The upper image is an image, for example a three-dimensional image, in which at least parts of the maxillary dentition and the bite fork 120 appear so that their relative positional relationship, including the angle, can be grasped; it is made sufficient for the first position data to be generated from it later together with the posture image, and in some cases consists of a plurality of images.
The posture image is an image, for example a three-dimensional image, in which the connection means 130 appears so that its posture can be grasped; it is made sufficient for the first position data to be generated from it later together with the upper image, and in some cases consists of a plurality of images.
Next, the process of generating virtual articulator data is executed in the diagnostic apparatus 200.
First, information for identifying the patient, such as the patient's name, is entered from the input device 220. The information entered from the input device 220 is sent from the interface 215 to the control unit 222.
The control unit 222 records that information in the virtual articulator data recording unit 227 as information identifying the patient for whom virtual articulator data is about to be created.
Next, various image data are input to the diagnostic apparatus 200.
 具体的には、上顎歯列画像のデータである上顎歯列画像データと、下顎歯列画像のデータである下顎歯列画像データと、噛合わせ画像のデータである咬合画像データと、上部画像のデータである上部画像データと、姿勢画像のデータである姿勢画像データが入力される。これらは、外部機器から、或いは所定の記録媒体を介して入力されるが、いずれにせよ、インタフェイス215を介して受付部221に受付けられる。
なお、これら画像のデータの入力は、必ずしも一度に、或いは連続して行うことは要しない。例えば、それらのデータが生成された都度、受付部221に入力されても良い。
受付部221は、上顎歯列画像データと、下顎歯列画像データを、モデル生成部223に送り、上部画像データと、姿勢画像データと、咬合画像データとを位置データ生成部224に送る。
Specifically, maxillary dentition image data that is data of the maxillary dentition image, mandibular dentition image data that is data of the mandibular dentition image, occlusion image data that is data of the mesh image, and upper image Upper image data, which is data, and posture image data, which is posture image data, are input. These are input from an external device or via a predetermined recording medium, but in any case, they are received by the reception unit 221 via the interface 215.
It is not always necessary to input these image data at once or continuously. For example, each time such data is generated, the data may be input to the reception unit 221.
The accepting unit 221 sends the maxillary dentition image data and the mandibular dentition image data to the model generation unit 223, and sends the upper image data, the posture image data, and the occlusion image data to the position data generation unit 224.
Having received the maxillary dentition image data and the mandibular dentition image data from the reception unit 221, the model generation unit 223 generates, from the maxillary dentition image data, maxillary dentition model data, which is data of the maxillary dentition model, a three-dimensional model of the maxillary dentition, and generates, from the mandibular dentition image data, mandibular dentition model data, which is data of the mandibular dentition model, a three-dimensional model of the mandibular dentition.
The model generation unit 223 sends the generated maxillary dentition model data and mandibular dentition model data to the coupling unit 226.
Having received the upper image data, the posture image data, and the occlusal image data from the reception unit 221, the position data generation unit 224 generates the first position data from the upper image data and the posture image data, and generates the second position data from the occlusal image data.
As described above, in this embodiment the data recorded in the face bow data recording unit 225 is used when creating the first position data. When data on a plurality of face bows is recorded in the face bow data recording unit 225, the data on the face bow actually used on the patient is selected from among them and read out by the position data generation unit 224. The information needed for that selection, identifying the face bow used on the patient, is sent, for example, from the input device 220 operated by the dentist to the control unit 222 via the interface 215, and is passed from the control unit 222 to the position data generation unit 224. Based on that information, the position data generation unit 224 can select the data on the face bow to be read out from among the plurality of data sets.
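A minimal sketch of that selection step, assuming the face bow data recording unit 225 holds one geometry record per face bow keyed by an identifier (the identifiers and record fields below are hypothetical):

```python
# Hypothetical per-face-bow geometry records kept in the recording unit 225.
FACEBOW_RECORDS = {
    "FB-A": {"fork_offset_mm": [0.0, 32.0, -18.0], "bridge_to_fork_deg": 12.0},
    "FB-B": {"fork_offset_mm": [0.0, 30.5, -17.2], "bridge_to_fork_deg": 11.5},
}

def select_facebow_record(facebow_id: str) -> dict:
    """Return the geometry record for the face bow identified by the dentist."""
    try:
        return FACEBOW_RECORDS[facebow_id]
    except KeyError:
        raise ValueError(f"No recorded data for face bow '{facebow_id}'")

record = select_facebow_record("FB-A")  # used together with the posture image
```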
The position data generation unit 224 sends the generated first position data and second position data to the coupling unit 226.
Having received the maxillary dentition model data and the mandibular dentition model data from the model generation unit 223, and the first position data and the second position data from the position data generation unit 224, the coupling unit 226 generates the virtual articulator data for the virtual articulator on the basis of these.
The coupling unit 226 sends the virtual articulator data to the virtual articulator data recording unit 227 and to the display control unit 228.
The virtual articulator data recording unit 227 records the virtual articulator data sent from the coupling unit 226. Basically, the virtual articulator data is recorded in the virtual articulator data recording unit 227 together with the previously recorded information identifying the patient for whom that virtual articulator data was created.
Upon receiving the virtual articulator data from the coupling unit 226, the display control unit 228 creates, based on it, image data, for example moving-image data, for displaying the virtual articulator on the display 230, and sends that image data to the display 230 via the interface 215.
As a result, an image of the virtual articulator is displayed on the display 230, for example as a moving image.
Next, the recording of mandibular movement will be described.
When recording mandibular movement, the underbow 140, more specifically the lower bite fork 145 of the underbow 140, is fixed to the patient's mandibular dentition.
Then, the connecting member 143, the horizontal bar 142, and so on are adjusted so that the relative positional relationship of the stylus 144 with respect to the flag 115 becomes a predetermined, appropriate relationship. For example, the adjustment is made so that, when the patient naturally brings the maxillary dentition and the mandibular dentition into occlusion, the needle-like portion 144B of the stylus 144 touches the center of the sensor portion 115A of the flag 115 with an appropriate pressure.
Next, the lower image is captured, and lower image data for the lower image is generated. The lower image is made to show the lower bite fork 145 and at least a part of the mandibular dentition, so that their relative positional relationship, including the angle, can be grasped from it.
The lower image data is then sent to the diagnostic apparatus 200. It is received by the reception unit 221 and sent to the second position data generation unit 229.
The second position data generation unit 229 obtains the relative positional relationship, including the angle, between the mandibular dentition and the lower bite fork 145, and generates the third position data, which is data on that positional relationship. The second position data generation unit 229 sends the third position data to the mandibular movement image data generation unit 230.
The mandibular movement image data generation unit 230 receives the part data from the reception unit 221 as described above, and receives the third position data from the second position data generation unit 229.
As described above, the sensor portion 115A of the flag 115 and the needle-like portion 144B of the stylus 144 are in a predetermined positional relationship. The positional relationship between the needle-like portion 144B and the lower bite fork 145 is also fixed, at least while the part data is being generated, in other words while the mandibular movement is being recorded. Therefore, once the positional relationship between the mandibular dentition and the lower bite fork 145 has been identified, the positional relationship of the needle-like portion 144B with respect to the mandibular dentition can be identified. It is the third position data that specifies this positional relationship.
That is, by using the third position data, the movement of the needle-like portion 144B arising from the movement of the lower bite fork 145 can be converted into the movement of the needle-like portion 144B arising from the movement of the mandibular dentition.
After correcting the part data with the third position data, the mandibular movement image data generation unit 230 either executes a process of writing, onto the image of the virtual articulator specified by the virtual articulator data read out from the virtual articulator data recording unit 227, marks indicating the patient's mandibular movement, or executes a process of moving the lower jaw in the virtual articulator image by animation so as to reproduce the patient's mandibular movement, and generates mandibular movement image data, which is the data produced as a result of these processes.
The mandibular movement image data generation unit 230 sends the generated mandibular movement image data to the virtual articulator data recording unit 227 and to the display control unit 228.
In addition to the virtual articulator data, the mandibular movement image data is recorded in the virtual articulator data recording unit 227. Preferably, the virtual articulator data and the mandibular movement image data of the same patient are recorded in the virtual articulator data recording unit 227 in association with each other.
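The conversion just described is, in effect, a change of reference frame. Assuming, for this sketch only, that the third position data is held as a rotation R and a translation t from the lower-bite-fork frame to the mandibular-dentition frame, a recorded needle-tip position could be re-expressed relative to the mandibular dentition as follows (all numerical values are hypothetical):

```python
import numpy as np

def tip_in_dentition_frame(tip_in_fork_frame: np.ndarray,
                           R_dent_from_fork: np.ndarray,
                           t_dent_from_fork: np.ndarray) -> np.ndarray:
    """Re-express a needle-tip position from the lower-bite-fork frame in the
    mandibular-dentition frame, using the third position data as (R, t)."""
    return R_dent_from_fork @ tip_in_fork_frame + t_dent_from_fork

R = np.eye(3)                       # fork and dentition axes aligned, for illustration
t = np.array([0.0, -25.0, 8.0])     # assumed fork-to-dentition offset (mm)
tip_fork = np.array([3.2, 41.0, -12.5])
tip_dent = tip_in_dentition_frame(tip_fork, R, t)
```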
Based on the mandibular movement image data, the display control unit 228 then displays on the display 230 either an image of the virtual articulator onto which marks indicating the patient's mandibular movement have been written, or a moving image in which the lower jaw in the virtual articulator image is moved so as to reproduce the patient's mandibular movement. This allows the dentist to grasp the patient's mandibular movement simply and accurately.
A virtual articulator used for diagnosis or for fabricating prostheses in the dental field is provided with an accurate three-dimensional relative positional relationship between the reference plane and the maxillary and mandibular dentitions of a living patient. This eliminates the need to make models of the patient's dentition, increases the speed of diagnosis and treatment, and makes it possible to share the virtual articulator data (electronic data) used for a given patient's diagnosis and treatment.
DESCRIPTION OF SYMBOLS
100 Face bow
110 Upper bow
111 Main body
111A Frame portion
111B Bridge portion
111B1 Screw
111C End piece
111C1 Elongated hole
111D Mounting portion
111E Screw
112 Support block
112A Screw
113 Positioning rod
113A Horizontal portion
113B Vertical portion
114 Temple
114A Front temple portion
114B Rear temple portion
114A1 Screw
115 Flag
115A Sensor portion
115B Frame
115C Screw
115D Cable
120 Bite fork
121 Bite fork main body
122 Connection portion
130 Connection member
131 Upper member
131A Upper attachment portion
131B Upper connection rod
131C Ball
132 Lower member
132A Lower attachment portion
132B Lower connection rod
132B1 Screw
132C Ball
133 Intermediate member
133A First member
133A1 Receiving hole
133A2 Upper receiving portion
133B Second member
133B1 Receiving hole
133B2 Lower receiving portion
134 Lever
140 Underbow
141 Front rod
142 Horizontal bar
143 Connecting member
143A Screw
143B Pipe
143C Screw
144 Stylus
144A Stylus main body
144B Needle-like portion
145 Lower bite fork
145A Fixing plate
200 Diagnostic apparatus
210 Input device
220 Display
211 CPU
212 ROM
213 RAM
214 HDD
215 Interface
216 Bus
221 Reception unit
222 Control unit
223 Model generation unit
224 Position data generation unit
225 Face bow data recording unit
226 Coupling unit
227 Virtual articulator data recording unit
228 Display control unit
229 Second position data generation unit
230 Mandibular movement image data generation unit
M1 Mark
M2 Mark

Claims (16)

1.  A virtual articulator data generation apparatus capable of generating virtual articulator data, which is data of a virtual articulator, using images of a face bow accurately attached to a patient, the face bow comprising an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the patient's skull uniquely determined, a bite fork that can be fixed to the lower surface of the patient's maxillary dentition by applying a curable substance, and connection means capable of arbitrarily adjusting the positional relationship, including the relative angle, between the upper bow and the bite fork, the apparatus comprising:
     reception means for receiving maxillary dentition image data, which is data of a maxillary dentition image, i.e. an image of the maxillary dentition; mandibular dentition image data, which is data of a mandibular dentition image, i.e. an image of the mandibular dentition; upper image data, which is data of an upper image, i.e. an image showing the relative positional relationship, including the angle, between the bite fork and the lower surface of the maxillary dentition; posture image data, which is data of a posture image, i.e. an image showing the posture of the connection means; and occlusal image data, which is data of a meshing image, i.e. an image showing the state of meshing of the maxillary dentition and the mandibular dentition;
     model generation means for generating, from the maxillary dentition image data received by the reception means, maxillary dentition model data, which is data of a maxillary dentition model, i.e. a three-dimensional model of the maxillary dentition, and for generating, from the mandibular dentition image data received by the reception means, mandibular dentition model data, which is data of a mandibular dentition model, i.e. a three-dimensional model of the mandibular dentition;
     position data generation means for obtaining, from the upper image data and the posture image data received by the reception means, the relative position of the maxillary dentition, including its angle, with respect to the reference plane in the living patient, and generating first position data, which is data on the position of the maxillary dentition model with respect to a virtual reference plane, i.e. an imaginary reference plane, and for obtaining, from the occlusal image data received by the reception means, the relative position of the mandibular dentition, including its angle, with respect to the maxillary dentition, and generating second position data, which is data on the position of the mandibular dentition model with respect to the maxillary dentition model; and
     coupling means for receiving the maxillary dentition model data and the mandibular dentition model data from the model generation means and the first position data and the second position data from the position data generation means, and generating the virtual articulator data so as to reproduce the assumed positional relationship of the maxillary dentition and the mandibular dentition with respect to the reference plane in the living body as the positional relationship of the maxillary dentition model and the mandibular dentition model with respect to the virtual reference plane, using the first position data and the second position data.
2.  The virtual articulator data generation apparatus according to claim 1, wherein the meshing image is an image obtained by imaging the meshing portion of the patient's maxillary dentition and mandibular dentition.
3.  The virtual articulator data generation apparatus according to claim 1, wherein the meshing image is an image of a marking paste that has been occluded between the patient's maxillary dentition and mandibular dentition and can record the shapes of the maxillary dentition and the mandibular dentition.
4.  The virtual articulator data generation apparatus according to claim 1, wherein the connection means is composed of a plurality of members, the members being appropriately provided with a plurality of marks whose mutual positional relationship changes as the relative positional relationship of the members, including their angles, changes, and
     wherein the position data generation means detects the posture of the connection means from the marks appearing in the posture image obtained from the posture image data.
5.  The virtual articulator data generation apparatus according to claim 1, wherein the connection means is composed of a plurality of members, the members being colored so that their appearance changes as the relative positional relationship of the members, including their angles, changes, and
     wherein the position data generation means detects the posture of the connection means from the colors appearing in the posture image obtained from the posture image data.
6.  The virtual articulator data generation apparatus according to claim 1, 4 or 5, wherein the posture image is a plurality of images captured from a plurality of directions, and the posture image data is provided for each of the posture images and is equal in number to the posture images.
7.  A virtual articulator data generation method executed by a virtual articulator data generation apparatus having a computer, the apparatus being capable of generating virtual articulator data, which is data of a virtual articulator, using images of a face bow accurately attached to a patient, the face bow comprising an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the patient's skull uniquely determined, a bite fork that can be fixed to the lower surface of the patient's maxillary dentition by applying a curable substance, and connection means capable of arbitrarily adjusting the positional relationship, including the relative angle, between the upper bow and the bite fork, the method comprising, as steps executed by the computer:
     a reception step of receiving maxillary dentition image data, which is data of a maxillary dentition image, i.e. an image of the maxillary dentition; mandibular dentition image data, which is data of a mandibular dentition image, i.e. an image of the mandibular dentition; upper image data, which is data of an upper image, i.e. an image showing the relative positional relationship, including the angle, between the bite fork and the lower surface of the maxillary dentition; posture image data, which is data of a posture image, i.e. an image showing the posture of the connection means; and occlusal image data, which is data of a meshing image, i.e. an image showing the state of meshing of the maxillary dentition and the mandibular dentition;
     a model generation step of generating, from the maxillary dentition image data received in the reception step, maxillary dentition model data, which is data of a maxillary dentition model, i.e. a three-dimensional model of the maxillary dentition, and generating, from the mandibular dentition image data received in the reception step, mandibular dentition model data, which is data of a mandibular dentition model, i.e. a three-dimensional model of the mandibular dentition;
     a position data generation step of obtaining, from the upper image data and the posture image data received in the reception step, the relative position of the maxillary dentition, including its angle, with respect to the reference plane in the living patient, and generating first position data, which is data on the position of the maxillary dentition model with respect to a virtual reference plane, i.e. an imaginary reference plane, and of obtaining, from the occlusal image data received in the reception step, the relative position of the mandibular dentition, including its angle, with respect to the maxillary dentition, and generating second position data, which is data on the position of the mandibular dentition model with respect to the maxillary dentition model; and
     a coupling step of generating the virtual articulator data, using the maxillary dentition model data and the mandibular dentition model data generated in the model generation step and the first position data and the second position data generated in the position data generation step, so as to reproduce the assumed positional relationship of the maxillary dentition and the mandibular dentition with respect to the reference plane in the living body as the positional relationship of the maxillary dentition model and the mandibular dentition model with respect to the virtual reference plane.
8.  A computer program for causing a computer to function as a virtual articulator data generation apparatus capable of generating virtual articulator data, which is data of a virtual articulator, using images of a face bow accurately attached to a patient, the face bow comprising an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the patient's skull uniquely determined, a bite fork that can be fixed to the lower surface of the patient's maxillary dentition by applying a curable substance, and connection means capable of arbitrarily adjusting the positional relationship, including the relative angle, between the upper bow and the bite fork, the computer program causing the computer to function as:
     reception means for receiving maxillary dentition image data, which is data of a maxillary dentition image, i.e. an image of the maxillary dentition; mandibular dentition image data, which is data of a mandibular dentition image, i.e. an image of the mandibular dentition; upper image data, which is data of an upper image, i.e. an image showing the relative positional relationship, including the angle, between the bite fork and the lower surface of the maxillary dentition; posture image data, which is data of a posture image, i.e. an image showing the posture of the connection means; and occlusal image data, which is data of a meshing image, i.e. an image showing the state of meshing of the maxillary dentition and the mandibular dentition;
     model generation means for generating, from the maxillary dentition image data received by the reception means, maxillary dentition model data, which is data of a maxillary dentition model, i.e. a three-dimensional model of the maxillary dentition, and for generating, from the mandibular dentition image data received by the reception means, mandibular dentition model data, which is data of a mandibular dentition model, i.e. a three-dimensional model of the mandibular dentition;
     position data generation means for obtaining, from the upper image data and the posture image data received by the reception means, the relative position of the maxillary dentition, including its angle, with respect to the reference plane in the living patient, and generating first position data, which is data on the position of the maxillary dentition model with respect to a virtual reference plane, i.e. an imaginary reference plane, and for obtaining, from the occlusal image data received by the reception means, the relative position of the mandibular dentition, including its angle, with respect to the maxillary dentition, and generating second position data, which is data on the position of the mandibular dentition model with respect to the maxillary dentition model; and
     coupling means for receiving the maxillary dentition model data and the mandibular dentition model data from the model generation means and the first position data and the second position data from the position data generation means, and generating the virtual articulator data so as to reproduce the assumed positional relationship of the maxillary dentition and the mandibular dentition with respect to the reference plane in the living body as the positional relationship of the maxillary dentition model and the mandibular dentition model with respect to the virtual reference plane, using the first position data and the second position data.
9.  A face bow comprising an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the patient's skull uniquely determined, a bite fork that can be fixed to the lower surface of the patient's maxillary dentition by applying a curable substance, and connection means capable of arbitrarily adjusting the positional relationship, including the relative angle, between the upper bow and the bite fork,
     wherein the connection means is composed of a plurality of members, and the members are appropriately provided with a plurality of marks whose mutual positional relationship changes as the relative positional relationship of the members, including their angles, changes.
10.  A face bow comprising an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the patient's skull uniquely determined, a bite fork that can be fixed to the lower surface of the patient's maxillary dentition by applying a curable substance, and connection means capable of arbitrarily adjusting the positional relationship, including the relative angle, between the upper bow and the bite fork,
     wherein the connection means is composed of a plurality of members, and the members are colored so that their appearance changes as the relative positional relationship of the members, including their angles, changes.
11.  A mandibular movement recording apparatus used in combination with a mandibular movement detection device, the detection device having: an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the patient's skull uniquely determined, and that has a flag provided with a planar detection surface capable of detecting a contacted location and with output means for outputting data on the contacted location to the outside; and a lower bow comprising a lower bite fork that can be fixed to the patient's mandibular dentition by applying a curable substance, and a stylus that is connected to the lower bite fork and moves over the flag while in contact with the flag in accordance with the mandibular movement, which is movement of the lower jaw performed by the patient, the recording apparatus comprising:
     recording means in which virtual articulator data, which is data on a virtual articulator of a certain patient, is recorded;
     second reception means for receiving, from the output means of the flag, movement data, which is data on the mandibular movement, namely data on the locations contacted by the stylus moving with the mandibular movement of that patient;
     third reception means for receiving lower image data, which is data of a lower image, i.e. an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition;
     second position data generation means for generating, from the lower image data received by the third reception means, third position data, which is data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and
     drawing means for writing, based on the movement data received by the second reception means, the third position data received from the second position data generation means, and the virtual articulator data read out from the recording means, marks indicating the mandibular movement of the patient, the positions of which have been corrected by the third position data, onto a three-dimensional image of the virtual articulator specified by the virtual articulator data.
12.  A mandibular movement recording method executed by a mandibular movement recording apparatus including a computer, used in combination with a mandibular movement detection device, the detection device having: an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the patient's skull uniquely determined, and that has a flag provided with a planar detection surface capable of detecting a contacted location and with output means for outputting data on the contacted location to the outside; and a lower bow comprising a lower bite fork that can be fixed to the patient's mandibular dentition by applying a curable substance, and a stylus that is connected to the lower bite fork and moves over the flag while in contact with the flag in accordance with the mandibular movement, which is movement of the lower jaw performed by the patient, the method comprising, as processes executed by the computer:
     a recording process of recording virtual articulator data, which is data on a virtual articulator of a certain patient;
     a second reception process of receiving, from the output means of the flag, movement data, which is data on the mandibular movement, namely data on the locations contacted by the stylus moving with the mandibular movement of that patient;
     a third reception process of receiving lower image data, which is data of a lower image, i.e. an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition;
     a second position data generation process of generating, from the lower image data received in the third reception process, third position data, which is data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and
     a drawing process of writing, based on the movement data received in the second reception process, the third position data received in the second position data generation process, and the virtual articulator data read out after being recorded in the recording process, marks indicating the mandibular movement of the patient, the positions of which have been corrected by the third position data, onto a three-dimensional image of the virtual articulator specified by the virtual articulator data.
13.  A computer program for causing a computer to function as a mandibular movement recording apparatus used in combination with a mandibular movement detection device, the detection device having: an upper bow that can be fixed to the patient's skull with its positional relationship, including the relative angle, to a predetermined reference plane of the patient's skull uniquely determined, and that has a flag provided with a planar detection surface capable of detecting a contacted location and with output means for outputting data on the contacted location to the outside; and a lower bow comprising a lower bite fork that can be fixed to the patient's mandibular dentition by applying a curable substance, and a stylus that is connected to the lower bite fork and moves over the flag while in contact with the flag in accordance with the mandibular movement, which is movement of the lower jaw performed by the patient, the computer program causing the computer to function as:
     recording means in which virtual articulator data, which is data on a virtual articulator of a certain patient, is recorded;
     second reception means for receiving, from the output means of the flag, movement data, which is data on the mandibular movement, namely data on the locations contacted by the stylus moving with the mandibular movement of that patient;
     third reception means for receiving lower image data, which is data of a lower image, i.e. an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition;
     second position data generation means for generating, from the lower image data received by the third reception means, third position data, which is data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and
     drawing means for writing, based on the movement data received by the second reception means, the third position data received from the second position data generation means, and the virtual articulator data read out from the recording means, marks indicating the mandibular movement of the patient, the positions of which have been corrected by the third position data, onto a three-dimensional image of the virtual articulator specified by the virtual articulator data.
  14.  A mandibular movement recording device used in combination with a mandibular movement detection device that has: an upper bow which can be fixed to a patient's skull in a state in which its positional relationship, including the relative angle, with respect to a predetermined reference plane of the patient's skull is uniquely determined, and which has a flag provided with a planar detection surface capable of detecting a contacted site and with output means for outputting data on the contacted site to the outside; and a lower bow comprising a lower bite fork which can be fixed to the patient's mandibular dentition by applying a curable substance, and a stylus which is connected to the lower bite fork and which moves over the flag while in contact with it in accordance with mandibular movement, that is, movement of the mandible performed by the patient, the mandibular movement recording device comprising:
     recording means in which virtual articulator data, which is data on a virtual articulator of a given patient, is recorded;
     second receiving means for receiving, from the output means of the flag, movement data, which is data on the mandibular movement of that patient, namely data on the portions of the flag contacted by the stylus as it moves with the mandibular movement;
     third receiving means for receiving lower image data, which is data of a lower image, an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition;
     second position data generating means for generating, from the lower image data received by the third receiving means, third position data, which is data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and
     moving image processing means for causing the mandible, whose position in the three-dimensional image of the virtual articulator specified by the virtual articulator data has been corrected by the third position data, to perform mandibular movement so as to reproduce the patient's mandibular movement by animation, based on the movement data received by the second receiving means and the virtual articulator data read out from the recording means.
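One plausible realization of the second position data generating means of claim 14 is to detect corresponding landmarks of the lower bite fork and of the mandibular dentition in the lower image and to fit the rigid transform, rotation plus translation, that maps one landmark set onto the other; that transform is the relative position including the angle. The sketch below uses the standard Kabsch least-squares fit and assumes the landmark coordinates have already been extracted from the image; all names are illustrative rather than taken from the application.

    import numpy as np

    def fit_rigid_transform(fork_pts, dentition_pts):
        # Least-squares rigid transform mapping bite-fork landmarks onto the
        # matching mandibular-dentition landmarks (both given as (N, 3) arrays).
        P = np.asarray(fork_pts, dtype=float)
        Q = np.asarray(dentition_pts, dtype=float)
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # rotation
        t = cq - R @ cp                          # translation
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

The returned 4x4 matrix plays the role of the third position data in these sketches: it is the correction applied to the mandible of the virtual articulator before the recorded movement data are replayed.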
  15.  A mandibular movement recording method executed by a mandibular movement recording device including a computer, the device being used in combination with a mandibular movement detection device that has: an upper bow which can be fixed to a patient's skull in a state in which its positional relationship, including the relative angle, with respect to a predetermined reference plane of the patient's skull is uniquely determined, and which has a flag provided with a planar detection surface capable of detecting a contacted site and with output means for outputting data on the contacted site to the outside; and a lower bow comprising a lower bite fork which can be fixed to the patient's mandibular dentition by applying a curable substance, and a stylus which is connected to the lower bite fork and which moves over the flag while in contact with it in accordance with mandibular movement, that is, movement of the mandible performed by the patient,
     the method comprising, executed by the computer:
     a recording process of recording virtual articulator data, which is data on a virtual articulator of a given patient;
     a second receiving process of receiving, from the output means of the flag, movement data, which is data on the mandibular movement of that patient, namely data on the portions of the flag contacted by the stylus as it moves with the mandibular movement;
     a third receiving process of receiving lower image data, which is data of a lower image, an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition;
     a second position data generating process of generating, from the lower image data received in the third receiving process, third position data, which is data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and
     a moving image process of causing the mandible, whose position in the three-dimensional image of the virtual articulator specified by the virtual articulator data has been corrected by the third position data, to perform mandibular movement so as to reproduce the patient's mandibular movement by animation, based on the movement data received in the second receiving process and the virtual articulator data read out after being recorded in the recording process.
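Read as a processing pipeline, the method of claim 15 loads the virtual articulator, corrects the mandible pose with the third position data, and then replays the recorded movement data frame by frame as an animation. A minimal generator along those lines is sketched below; representing the mandible as a vertex array and each movement sample as a 4x4 transform are assumptions made for illustration only.

    import numpy as np

    def animate_mandible(mandible_vertices, correction, motion_transforms):
        # mandible_vertices : (V, 3) vertices of the mandible of the virtual articulator
        # correction        : 4x4 transform obtained from the third position data
        # motion_transforms : iterable of 4x4 transforms, one per movement sample
        verts = np.hstack([mandible_vertices,
                           np.ones((mandible_vertices.shape[0], 1))])
        base = verts @ correction.T                    # reposition the mandible once
        for frame, motion in enumerate(motion_transforms):
            yield frame, (base @ motion.T)[:, :3]      # one animation frame per sample

Each yielded vertex array can be handed to any 3-D viewer to render the animation of the patient's mandibular movement.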
  16.  A computer program for causing a computer to function as a mandibular movement recording device used in combination with a mandibular movement detection device that has: an upper bow which can be fixed to a patient's skull in a state in which its positional relationship, including the relative angle, with respect to a predetermined reference plane of the patient's skull is uniquely determined, and which has a flag provided with a planar detection surface capable of detecting a contacted site and with output means for outputting data on the contacted site to the outside; and a lower bow comprising a lower bite fork which can be fixed to the patient's mandibular dentition by applying a curable substance, and a stylus which is connected to the lower bite fork and which moves over the flag while in contact with it in accordance with mandibular movement, that is, movement of the mandible performed by the patient,
     the computer program causing the computer to function as:
     recording means in which virtual articulator data, which is data on a virtual articulator of a given patient, is recorded;
     second receiving means for receiving, from the output means of the flag, movement data, which is data on the mandibular movement of that patient, namely data on the portions of the flag contacted by the stylus as it moves with the mandibular movement;
     third receiving means for receiving lower image data, which is data of a lower image, an image showing the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition;
     second position data generating means for generating, from the lower image data received by the third receiving means, third position data, which is data indicating the relative positional relationship, including the angle, between the lower bite fork and the mandibular dentition; and
     moving image processing means for causing the mandible, whose position in the three-dimensional image of the virtual articulator specified by the virtual articulator data has been corrected by the third position data, to perform mandibular movement so as to reproduce the patient's mandibular movement by animation, based on the movement data received by the second receiving means and the virtual articulator data read out from the recording means.
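Because claim 16 recasts the device of claim 14 as a computer program, its elements map onto a small composition of functions: storage for the virtual articulator data, the two receivers, the position data generator, and the animation step. The wiring below only illustrates that correspondence; it assumes it lives in the same module as the helpers sketched earlier, and storage, flag_stream, detect_landmarks, load_motion_transforms and render are stand-ins for whatever input and output the surrounding program provides.

    def run_recording_program(storage, flag_stream, lower_image,
                              detect_landmarks, load_motion_transforms, render):
        articulator = storage.load_virtual_articulator()            # recording means
        motion_samples = list(flag_stream)                          # second receiving means
        fork_pts, dentition_pts = detect_landmarks(lower_image)     # third receiving means
        correction = fit_rigid_transform(fork_pts, dentition_pts)   # second position data generating means
        motion = load_motion_transforms(motion_samples)
        frames = animate_mandible(articulator.mandible_vertices,    # moving image processing means
                                  correction, motion)
        for frame, verts in frames:
            render(frame, verts)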
PCT/JP2015/053851 2014-02-14 2015-02-12 Computer, method executed by computer, computer program, and face bow WO2015122466A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014027052A JP6332733B2 (en) 2014-02-14 2014-02-14 Computer, computer-implemented method, computer program, and facebow
JP2014-027052 2014-02-14

Publications (1)

Publication Number Publication Date
WO2015122466A1 (en) 2015-08-20

Family

ID=53800206

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/053851 WO2015122466A1 (en) 2014-02-14 2015-02-12 Computer, method executed by computer, computer program, and face bow

Country Status (2)

Country Link
JP (1) JP6332733B2 (en)
WO (1) WO2015122466A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102005925B1 * 2017-07-10 2019-07-31 이우형 Dental system with baseline to allow positional merging of digital three-dimensional tooth model

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS59101133A (en) * 1982-11-29 1984-06-11 日本アビオニクス株式会社 Apparatus for measuring three-dimensional position and posture
JPS62142549A (en) * 1985-12-05 1987-06-25 デンタトウス・ア−ベ− Quick mount face bow apparatus for articulator
JP2001517480A (en) * 1997-09-22 2001-10-09 ミネソタ マイニング アンド マニュファクチャリング カンパニー How to use in occlusion
US20030204150A1 (en) * 2002-04-25 2003-10-30 Wolfgang Brunner Method and apparatus for the 3-dimensional analysis of movement of the tooth surfaces of the maxilla in relation to the mandible
JP2012196448A (en) * 2011-03-18 2012-10-18 Kaltenbach & Voigt Gmbh Electronic registration device to record motion of jaw

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3173049A4 (en) * 2014-07-22 2018-07-18 Medicom LLC Computer, computer-implemented method, computer program, and face-bow
CN113679405A (en) * 2021-08-27 2021-11-23 吉林大学 Positioning device and positioning method for natural head position of virtual skull model
CN113679405B (en) * 2021-08-27 2023-10-20 吉林大学 Positioning device and positioning method for natural head position of virtual skull model

Also Published As

Publication number Publication date
JP6332733B2 (en) 2018-05-30
JP2015150258A (en) 2015-08-24

Similar Documents

Publication Publication Date Title
US11432919B2 (en) Physical and virtual systems for recording and simulating dental motion having 3D curvilinear guided pathways and timing controls
US8620045B2 (en) System , method and article for measuring and reporting craniomandibular biomechanical functions
US20160128624A1 (en) Three dimensional imaging of the motion of teeth and jaws
WO2016013359A1 (en) Computer, computer-implemented method, computer program, and face-bow
KR20160143654A (en) Augmented reality dental design method and system
US20080176182A1 (en) System and method for electronically modeling jaw articulation
JP2008136865A (en) Automatic tooth movement measuring method employing three-dimensional reverse engineering technique and program for it
JPH03502770A (en) Devices for measuring and analyzing the movements of the human body or parts thereof
JP2011177451A (en) Dental diagnosis system and dental care system
US8747110B2 (en) Orthognathic planning system and method
US10751152B2 (en) Jaw motion tracking system and operating method using the same
WO2012090211A1 (en) Augmented reality computer model facebow system for use in dentistry
JP6332733B2 (en) Computer, computer-implemented method, computer program, and facebow
JP5891080B2 (en) Jaw movement simulation method, jaw movement simulation apparatus, and jaw movement simulation system
Zambrana et al. Jaw tracking integration to the virtual patient: A 4D dynamic approach
US8753119B2 (en) Mounting method of dental cast
Goob et al. Reproducibility of a magnet-based jaw motion analysis system.
JP4342888B2 (en) Pupil line display method
US20110191083A1 (en) System and Method for Measuring and Reporting the Relative Functions of Dental Anatomical Structures
CN111291507A (en) Modeling and stress analysis method and device for tooth model containing periodontal ligament
KR101645880B1 (en) 3D Digital recording system of Maxillo-Mandibular jaw relation
Farook et al. A 3D printed electronic wearable device to generate vertical, horizontal and phono-articulatory jaw movement parameters: A concept implementation
Soaita Computer analysis of functional parameters and dental occlusion
Ranganathan A comparative evaluation of patient's satisfaction and quality of life and masticatory efficiency with conventional complete denture, single and double implant retained mandibular overdenture using a surface electromyography-An in vivo study
Singh et al. VIRTUAL ARTICULATORS IN PROSTHODONTICS.

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 15748571

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 15748571

Country of ref document: EP

Kind code of ref document: A1