WO2002062535A2 - Method for controlling an articulated and/or deformable mechanical system and its applications - Google Patents


Info

Publication number
WO2002062535A2
WO2002062535A2 (PCT/IT2002/000072)
Authority
WO
WIPO (PCT)
Prior art keywords
accordance
control
actuators
creation
artificial
Prior art date
Application number
PCT/IT2002/000072
Other languages
French (fr)
Other versions
WO2002062535A3 (en)
Inventor
Giovanni Pioggia
Fabio Di Francesco
Luca Marano
Original Assignee
Giovanni Pioggia
Fabio Di Francesco
Luca Marano
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Giovanni Pioggia, Fabio Di Francesco, Luca Marano
Priority to AU2002236201A1
Publication of WO2002062535A2
Publication of WO2002062535A3

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/1605Simulation of manipulator lay-out, design, modelling of manipulator
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40324Simulation, modeling of muscle, musculoskeletal dynamical system
    • G05B2219/40527Modeling, identification of link parameters

Definitions

  • the object of the instant invention is a method for controlling an articulated and/or deformable mechanical system such as a device that permits objects to execute complex movements, an automaton, a robot or an artificial face.
  • the invention relates to the operating applications in accordance with the hereinabove mentioned method.
  • Bibliographical references for the hereinabove noted androids can be found in Peter Menzel, Faith D'Aluisio, Robo sapiens: Evolution of a New Species, MIT Press, 2000.
  • the aim of the instant invention is to make it possible to control even complex articulated and/or deformable mechanical systems whose description with a system of equations in accordance with the known technique would be difficult or impossible.
  • Another aim of the instant invention is to provide a method for controlling articulated and/or deformable mechanical systems utilizing a virtual graphics development environment whereby it is possible to interpret data supplied by a set of sensors and simultaneously manage a large number of actuators, making it possible to construct a model that corresponds extremely well to reality, both in terms of actuator dynamics and of deformations induced, thereby being able to plan and control, within the realm of the virtual environment, the desired succession of animation.
  • a further aim of the instant invention is to provide a control method of the type hereinabove noted whereby it is possible to construct an unlimited database of configurations and/or animation.
  • Yet another aim of the instant invention is to provide a hereinabove noted type of control method whereby new animation can be obtained from the database already created without necessarily having to recalculate it with the virtual model, i.e. to study the desired range of animation off-line, save it in the database and with it control the induced movements and/or deformations of the real actuators.
  • the essential feature of the control method of the instant invention consists of the substitution of conventional mathematical models with solid 3D models constructed in a virtual environment, utilizing advanced three-dimensional graphics software in a conceptually new fashion in place of systems of equations; i.e., the basic idea is to use information and data gathered from objects created in a virtual environment to drive analogous real objects.
  • figures 1, 2 and 3 illustrate schematic representations of the operation cycle whereby it is possible to obtain animation of a real system in accordance with the control method of the instant invention
  • figure 4 illustrates a schematic representation of the components of an apparatus operating in accordance with the control method object of the instant invention
  • figure 5 illustrates a schematic representation of the motorization system of the example of actuation of the method in accordance with the instant invention
  • figure 6 illustrates a schematic representation of an actuator employed in said example
  • figures 7 and 8 illustrate a lateral and a frontal view of a motor assembly
  • figure 9 illustrates a schematic representation of motor driver electronics
  • figure 10 illustrates a schematic representation of the modeling phase of the application example in accordance with the instant invention
  • figure 11 illustrates an example of a matrix used to drive the application example in accordance with the instant invention
  • figure 12 illustrates a schematic representation of bone and locator hierarchy.
  • the control method in accordance with the instant invention calls for an initial phase wherein, as a function of the application objectives, the components, and their reciprocal relationships, of an articulated and/or deformable mechanical system having sensors and/or actuators as interface (hereinafter indicated as "real system" for the sake of brevity) are analyzed; this system will be managed by the control system.
  • a virtual model is created with advanced 3D graphics software constructing, in a virtual environment, the three dimensional models of all the structures and of all the actuators that comprise the real system.
  • the virtual model created in this fashion is then "animated" and/or deformed in the desired manner, and the information necessary for control of the real system is drawn from the previously parametrized variables.
  • animation of the model can be simplified by the use of motion capture.
  • the movement to be reproduced is acquired by the real system through appropriate data acquisition systems, processed with specific software and the information useful for the construction of the animated sequences is fed to the virtual model which in turn reconstructs the positions of the actuators and the progressions of the parametrized variables from this information.
  • the information drawn from the model then becomes control driver input for driving the real system.
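The cycle described in the preceding points (animate the virtual model, extract the parametrized variables, feed them to the control drivers) can be sketched as follows; this is an illustrative Python outline under assumed names, not part of the patent:

```python
# Minimal sketch of the control cycle described above. The frame format
# and all names (actuator labels, send_to_driver) are illustrative
# assumptions, not from the patent.

from typing import Callable, Dict, List

def run_control_cycle(
    frames: List[Dict[str, float]],                   # per-frame variables from the virtual model
    send_to_driver: Callable[[Dict[str, float]], None],
) -> None:
    """Feed each frame of virtual-model variables to the real-system driver."""
    for frame in frames:
        send_to_driver(frame)

# Example: two frames driving two actuators (percent deformation).
captured = []
run_control_cycle(
    [{"zygomatic": 0.0, "risorius": 0.0},
     {"zygomatic": 42.5, "risorius": 10.0}],
    captured.append,
)
```

In a real application, `send_to_driver` would write to the serial link of the electronic control rather than append to a list.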
  • Advanced 3D graphics software such as that known as MAYA, marketed by Alias/Wavefront, or an equivalent, can be used to create the virtual model.
  • Motion capture software such as the Meta Motion Capture System, marketed by Meta Motion, or an equivalent, can be used.
  • the data acquisition systems for capturing the movement to be reproduced can be comprised of magnetic sensors or the equivalent.
  • MAYA software or software such as Filmbox Animation marketed by Kaydara or an equivalent can be used for processing the data acquired by such systems.
  • the virtual model is created in order to obtain the variables necessary to drive the articulated and/or deformable mechanical system which, in its most general form, comprises n actuators, k constraints, j junctions and p passive subsystems animated and/or deformed by the actuators. These variables represent the control parameters whereby a large number of actuators are driven in parallel in space and time, thereby causing the real system and/or the highly complex physical systems dependent upon it to assume the desired animation and/or deformations.
  • the passive subsystems are projected in the virtual environment by defining their properties of animation and/or deformation and maintaining unchanged their proportions and reciprocal distances (modeling phase).
  • the actuators, constraints and junctions are created graphically and placed in their proper spatial positions. Then, the system control variables are defined, i.e. the values in time and space to be extracted from the virtual model to permit control of the real system (parametrization phase).
  • the spatial-temporal motor patterns are created for each individual actuator (animation phase) in the following possible manners:
  • the model returns the animation and/or deformations induced on the real system (direct kinematics);
  • a skeleton base is created; its volumetric properties are defined; the skeleton base is positioned in space; the properties of animation and/or deformation are defined; the zone of influence that each individual actuator and/or junction has on the passive subsystems dependent upon it is defined;
  • As a skeleton base we use what is called a "bone" (skeleton) in the most common 3D graphics software. This is a vector oriented with respect to a coordinate system centered on its point of application. In order to assign volumetric properties to it, a cylinder is drawn which, when placed on the bone, is grouped together with it as a single structure. The general command is "create group".
  • This group can be called actuator whose point of application is a junction.
  • This object is positioned and scaled according to the size and position of the real object.
  • the junction of this object can be constrained or not to other structures thereby defining its animation.
  • a flexor, a feature present in graphics software, is associated with each actuator. It is one of the characters that can be assigned to the elements of a virtual environment. So that the flexors can have an influence on the passive subsystems that are dependent upon them, they are interconnected with both the passive subsystems and the actuator, selecting all of them simultaneously and correlating them by means of the generic command rigid bind. The area of effect and the influence of each flexor on the deformation and/or animation of the dependent subsystems are modified specifically with the paint tool panel. Since the virtual structure can be adapted to any subsystem and to any real configuration of actuators, the potential is unlimited.
  • the spatial-temporal motor patterns of each individual actuator are implemented in the fashion previously indicated.
  • the progression in space of the animation and/or deformations of the actuators and of their interconnected structures can be defined through the spatial keys of the set driven key tool present in all 3D development environments. With this tool it is possible to constrain to set reference values the geometric parameters corresponding to movements and/or deformations of the virtual structures created.
  • the software will animate the model in accordance with the trajectories that satisfy its intrinsic characteristics, and with the passage through these known values in accordance with points 1, 2, 3 hereinabove described.
  • the associated deformations are consequently set, reproducing the real physical movements and/or deformations that occur in the actuator itself.
  • control variables of the virtualized actuator are associated with another structure, the locator.
  • This is a constraint structure that can be associated with one or more elements whose parameters it influences. Attributes of the locator (control variables), whose variation causes a numerical variation in the corresponding geometric parameters of the flexor by passing through the reference points, are thereby created.
  • the time progression of the actuator animation and/or deformations and of the structures interconnected to them can be decided by defining the temporal keys with the time slider, a tool present in all 3D development environments. For every animation, the state of the model at each desired instant is set in the time slider. The software will construct all the intermediate states of the progression in time of the actuator animation.
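The reconstruction of intermediate states between temporal keys can be illustrated with a simple linear interpolation; this is a sketch of the principle only, not the actual interpolation used by any particular 3D package:

```python
# Sketch of reconstructing intermediate states between temporal keys,
# in the spirit of the time slider described above. Keys are sorted
# (frame, value) pairs; values stand for any control variable.

def interpolate_keys(keys, t):
    """Linearly interpolate a value at frame t from sorted (frame, value) keys."""
    if t <= keys[0][0]:
        return keys[0][1]          # before the first key: hold first value
    if t >= keys[-1][0]:
        return keys[-1][1]         # after the last key: hold last value
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return v0 + alpha * (v1 - v0)

keys = [(0, 0.0), (10, 100.0)]     # actuator at rest at frame 0, fully deformed at frame 10
print(interpolate_keys(keys, 5))   # → 50.0
```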
  • the decisional algorithm will operate by drawing the animation from a computer mass memory support.
  • the information necessary for driving the actuators will be extracted in real time from the virtual model animation and sent, still in real time, to the control drivers. This procedure is illustrated in figure 3.
  • Figure 4 illustrates how the most general configuration of the method in accordance with the instant invention can be put into practice.
  • 1 indicates a computer, specifically a personal computer wherein advanced 3D graphics software has been implemented, linked by an RS232 serial port to an electronic control, indicated in its entirety by 2, and in turn connected to a closed-loop motor assembly driver.
  • the articulated and deformable mechanical system controlled by the method in accordance with the instant invention comprises, by way of non-limiting example, an artificial face which has been called, for the sake of simplicity, "android head", whose appearance and facial expressions are similar to those of the human face.
  • the android head, generically indicated with 10, consists of a skeleton support structure 11 whereto artificial muscles (actuators) 12 are attached, connected to an artificial skin 13 that covers the entire head.
  • the actuators 12 are operationally connected by mechanical power transmission means 14 to a motor assembly 15 and an encoder 16 which are controlled by means of an electronic motor driver 17 from signals received from a virtual model of the human face reconstructed in an advanced 3D graphics environment on a personal computer equipped with 128 MB of RAM, a Windows NT operating system and a graphics card with OpenGL libraries or equivalent systems.
  • the support structure 11 is a resin reproduction of the human cranium and part of the cervical spinal column which serves as anchorage for the actuators 12 and is covered with artificial skin 13.
  • the structure has an internal space (normally occupied by the brain mass) wherein the sheaths for the transmission cables 14 that transmit mechanical power from the motor assembly to the actuators are housed by means of constraints.
  • the sheaths innervate the support structure through perforations that allow the cables to be positioned tangentially to the skin.
  • the artificial skin 13 is prepared by a totally manual technique by first making an alginate impression from a live model. Once the impression has solidified, plaster is poured manually over it to produce the negative of the face. Finally, a mould is obtained from this negative wherein silicone rubber is poured to form the artificial skin.
  • the actuators 12 that function as muscles are a silicone structure 20 of elongated elliptical form, illustrated schematically in figure 6, whose core is the transmission cable 14 of the mechanical power drawn from the motor assembly.
  • the end of the sheath 21 where the cable enters is attached to the support structure at the muscle's point of origin, while the transmission cable 14 is constrained to the skin in any suitable manner, generically indicated with 22, e.g. bolting its end in the hole of a metal cylinder countersunk in a silicone bulge on the internal surface of the skin where the real muscle terminates.
  • the actuators are molded by pouring liquid silicone into a hollow mould that has the form of that specific actuator and allowing it to dry. It is thereby possible to create actuators of any shape and size.
  • Linear movement of the cable 14 within the silicone mass 20 causes its longitudinal shortening and consequently its transversal widening, allowing the skin 13 to be both stretched and deformed as desired.
  • the artificial muscle and the skin thereby exchange actions that are tangential and perpendicular to its internal surface, due to the skin's contact with the elliptical muscle surface, as well as actions resulting from the direct action of the end of the cable where it is anchored directly to the skin.
  • the rigidity of the cable-sheath transmission system permits the feedback for control of the actuators to be performed by measuring the displacement directly on the motors and not on the actuators.
  • the motor assembly comprises twenty-six rotating linear motors divided into two groups.
  • Each motor is composed of a coil 27 with a floating arm 28 attached, placed between two fixed magnets 29 (north and south) in the form of a 45° circular sector.
  • the floating arms of each motor are equidistant one from the other and are mounted to rotate independently on an axis 30.
  • the magnetic circuit closes in a frame 31 that keeps the magnets 29 and the rotation axis 30 in a fixed position. Rotation causes the linear movement of the end 28a of each floating arm 28 wherefrom the necessary mechanical power is drawn.
  • with a 45° angle of rotation, a 20 mm maximum linear movement is obtained.
  • the motors are able to deliver a force of 12.5 N with a maximum current absorption of 0.5 A.
  • the maximum attainable acceleration is 15 m/s² and the maximum velocity is 0.78 m/s.
  • Repeatability and precision depend on the characteristics of the transducer 32, which in this case is a Hall-effect transducer, and on the type of drive, which normally produce a repeatability of ± 5 µm and a precision of ± 0.08 mm.
  • the cable 14 necessary for transmitting power from the motor to the respective actuator is attached to the end 28a of the floating arm by a bolt 33, and the sheath 21 is connected to the groups with a ferrule 34 which is inserted in the holes made in a plate 35 at the front of the groups.
  • a ventilation system, not shown, ensures the necessary convection to eliminate the heat produced by the motors.
  • each module, indicated generically with 36, houses a programmable microprocessor 37 and four motor drivers 38.
  • the motherboard 41 which holds the modules 36 comprises an RS232 serial port indicated with 39 and a power supply 40.
  • the protocol was organized on 3 bytes: the first for the addresses, the second for the position and the third for the speed of movement.
  • the baud rate is 57.6 kbaud with a command execution time of 520 µs; all of the actuators execute the command in 12.5 ms.
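The 3-byte protocol described above can be sketched as a simple packing routine; the byte layout beyond "one byte each for address, position and speed" is an assumption, since the patent does not specify value ranges:

```python
# Sketch of the 3-byte command protocol described above (address,
# position, speed), sent over an RS232 serial link at 57.6 kbaud.
# The 0-255 range per field is an assumption.

def encode_command(address: int, position: int, speed: int) -> bytes:
    """Pack one actuator command into a 3-byte frame."""
    for name, v in (("address", address), ("position", position), ("speed", speed)):
        if not 0 <= v <= 255:
            raise ValueError(f"{name} must fit in one byte, got {v}")
    return bytes([address, position, speed])

# At 57.6 kbaud (~5760 bytes/s with start/stop bits), a 3-byte command
# takes roughly 3 / 5760 s ≈ 520 µs, matching the execution time quoted.
cmd = encode_command(address=5, position=128, speed=200)
print(cmd.hex())  # → "0580c8"
```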
  • the microprocessor can be programmed via software. There is a closed-loop control executed by appropriately calibrated Hall-effect sensors 32 that read the angular velocity of the coils 27. With this system, the motors can be activated with very accurate dynamics, to obtain deformations of the actuators and of the artificial skin that correspond to those in the virtual model.
  • the creation of a virtual model of a human face makes it possible to obtain the spatial-temporal trajectories necessary to drive an anthropomorphic head or an android head (real system) that can simulate human facial expressions.
  • the virtual model of the human face of the instant invention makes manifest that with this technique it is possible to have parallel control of a large number of actuators in such a fashion as to allow even the complex physical systems that are dependent upon them to undergo the desired deformations and thereby the required facial expressions.
  • Modeling phase. The best way to reproduce the human face is to project the physical model of the skin in a virtual environment, keeping the proportions unchanged and defining this structure as a deformation surface.
  • the virtual model of each individual muscle is then created by parameterizing it and placing it in the correct position (parameterization phase).
  • the dynamics of the facial expressions are created by defining which spatial-temporal motor patterns are to be associated with each actuator in order to achieve the desired surface deformations (animation phase).
  • In order to project a surface and its reference system from physical space into a virtual development environment, it is necessary to identify a certain number of points by tracing meridians and parallels directly on the surface using a digitizer.
  • This is a commercially available mobile device comprised of a base, three mechanical arms interconnected by joints and a stylus at the end.
  • the digitizer allows the stylus to identify any position in space with respect to its base. The position is calculated by means of digital encoders at the junctions and is transmitted to the PC via RS232.
  • the points that were previously traced on the surface to be projected are thereby felt, mapped in a three-dimensional reference system and transferred into the virtual development environment. Because of their operating programs, these devices are compatible with most commercial software.
  • the development environment recreates the parallel lines that interpolate them, and from these lines, the model of the surface that had been felt is created.
  • a surface model was created comprised of thirteen counterclockwise superficial section lines originating from the middle of the face and of fifteen transverse section lines originating from the mouth.
  • Points are added manually to these curves for the inside of the mouth. By selecting all of the curves obtained thereby, a surface will be created with the loft command, common to 3D software.
  • the application point of the actuator must be placed where the artificial muscle is attached to the support structure and its end placed where the artificial muscle attaches to the artificial skin. Opening and closing the mouth is actuated by creating a control bone which is placed on the joint of the jaw. It is placed where the jaw actually rotates when the mouth is opened and is linked to the four actuators of the part underneath the jaw. This causes rotation with respect to a fixed point on the new bone so as to define the actual opening and closing of the mouth by rotation rather than by translation.
  • a bone support base is created.
  • This skeleton base when connected to all of the actuators of half of the face, can be duplicated easily, giving a mirror image for the other half of the face, thereby producing perfect symmetry.
  • the two face halves each have their own bone base.
  • We next proceed to the creation of a general control parent bone which is connected to the control bones of the two face halves and, therefore, to all of the actuators, thereby creating a hierarchical structure starting from the parent bone.
  • the actuators that are situated on the median axis of the face are created along with their parent bone.
  • the parent bone of the actuators of the median axis is also enslaved to the general control parent bone.
  • a flexor is associated with each actuator. So that the flexors can influence the face, they are linked both with the cylindrical surface around the bone and with the face surface, by simultaneously selecting them with their respective bones, the face surface and the cylinder and correlating them through the rigid bind command. Once all of the flexors have been defined and properly sized, the influences they have on the surface are indicated. Each actuator's area of effect and its deformations are modified by observing the influence that they have on the surface of the face. By using the appropriate paint tool panel it is possible to show and easily modify the influence of the flexor on the interconnected structures. In this way the expression potential is unlimited, since the virtual structure can be adapted to any face and any real configuration of the actuators.
  • the dynamics of the facial expressions are implemented by defining the spatial-temporal motor patterns to be associated with each individual actuator in order to obtain the desired surface deformations.
  • Spatial deformations The spatial progression of the actuator deformations and of the face surface can be decided by defining the spatial keys with the set driven key tool. With this tool it is possible to constraint to known values (keys) the geometric parameters corresponding to deformations of the virtual structure created. The software animates the model in accordance with trajectories that satisfy its intrinsic features and with the passage through these values. The associated deformations are therefore set, reproducing the real physical deformations that occur in the actuator itself. In the specific case, we have constrained only the minimum and maximum parameters of extension/shortening and swelling of each flexor by acting on the scale factors along the longitudinal and transverse axis.
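The driven-key constraint just described, where only the minimum and maximum extension/shortening and swelling of each flexor are keyed, can be sketched as a linear mapping from a single driver value onto the two scale factors; the ranges below are illustrative assumptions:

```python
# Sketch of the set driven key idea described above: only the min and max
# keys of the longitudinal (shortening) and transverse (swelling) scale
# factors are set, and intermediate values follow from one driver value.
# The key ranges (50% shortening, 150% swelling) are assumptions.

def driven_scale(driver: float,
                 longitudinal=(1.0, 0.5),    # (rest key, max-shortening key)
                 transverse=(1.0, 1.5)):     # (rest key, max-swelling key)
    """Map a 0..1 driver value onto the two flexor scale factors."""
    d = min(max(driver, 0.0), 1.0)           # clamp to the keyed range
    lo_l, hi_l = longitudinal
    lo_t, hi_t = transverse
    return (lo_l + d * (hi_l - lo_l),
            lo_t + d * (hi_t - lo_t))

print(driven_scale(0.0))  # → (1.0, 1.0): actuator at rest
print(driven_scale(1.0))  # → (0.5, 1.5): fully shortened and swollen
```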
  • Locators (commonly found in 3D software) which have been hierarchically organized can be used to make it possible to control the scale factors of several actuators simultaneously.
  • a locator is a structure whereto one or more elements can be associated and their variables influenced simultaneously. For each locator, the associated elements are the actuators, while the variables are the scale factors.
  • locator attributes are created whose variation determines a numerical variation in the corresponding geometric parameters of the flexor passing through the spatial keys.
  • the preferred procedure is to create a hierarchy of locators by grouping adjacent actuators in a single locator to control the influence of the various actuators.
  • locators are created:
    - mouth: zygomatic muscle, risorius muscle, depressor muscle of the angle of the mouth, levator muscle of the upper lip, and depressor muscle of the lower lip;
    - forehead: corrugator supercilii and frontal muscles;
    - nose: levator muscles of the upper lip and ala of the nose;
    - jaw: muscles that move the jaw.
  • the locators which were created separately are then interconnected in groups to the parent locator. Because of this hierarchical structure, it is possible to act upon the attributes of the parent locator to influence the movements of all of the actuators simultaneously in the desired manner. In fact, the expressions are defined as attributes of the parent locator. In the specific case, smile, disgust and indecision expressions were defined. Since human beings are able to effect an extremely high number of expressions, it is possible to model approximately forty basic expressions which can be combined to obtain most human facial expressions. The basic expressions can be subdivided into separate groups: forehead, eyes, nose, mouth and jaw. For a more detailed treatment, see Peter Ratner, Mastering 3D animation, Allworth Press, New York, 2000.
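The hierarchical locator structure described above can be sketched as a small tree in which setting an attribute on the parent locator propagates to every dependent actuator; names and values are illustrative:

```python
# Sketch of a locator hierarchy: adjacent actuators are grouped under
# locators, locators under a parent locator, and setting an attribute on
# the parent propagates down to every actuator. Names are illustrative.

class Locator:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)   # sub-locators or actuator names
        self.value = 0.0

    def set_attribute(self, value):
        """Propagate an attribute value down the hierarchy; return actuator values."""
        self.value = value
        out = {}
        for child in self.children:
            if isinstance(child, Locator):
                out.update(child.set_attribute(value))
            else:                        # leaf: an actuator name
                out[child] = value
        return out

mouth = Locator("mouth", ["zygomatic", "risorius"])
forehead = Locator("forehead", ["corrugator", "frontal"])
parent = Locator("face", [mouth, forehead])
print(parent.set_attribute(0.6))         # all four actuators receive 0.6
```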
  • the progression in time of the actuator deformations and the deformations of the face surface can be decided by defining the temporal keys with the time slider, a tool present in all 3D development environments.
  • the model states are set at the instants desired for each expression.
  • the software reconstructs all of the intermediate states of the deformation progression in time of the actuators and, thereby, the expression.
  • By acting on the spatial-temporal deformation of each actuator, the deformation that the skin undergoes can be varied until the desired expression is obtained. If we want to act directly on the skin surface and observe the resulting actuator deformations, it is necessary to enclose the external surface in a deformation lattice, another structure present in 3D software. This structure encloses the surface of the face in a three-dimensional lattice with the desired resolution. It is possible to modify the expression as desired by acting upon the points of this lattice.
  • the spatial-temporal deformations of the actuators are saved in an ASCII file and are renamed in accordance with a table of equivalencies between the previously defined metric scale and the verbal attributes.
  • the file is a matrix comprised of real values delimited by tabs. The columns correspond to the actuators, the rows to time expressed in frames (25 frames/second) and the elements are the percentages of linear actuator deformation obtained from the previously defined attributes.
  • This matrix is formatted in order to be accepted by driver software.
  • a typical matrix obtained for the hereinabove noted expression of disgust is illustrated for a face half.
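A matrix file of this kind (tab-delimited, columns for actuators, rows for frames at 25 frames/second) can be parsed in a few lines; the sample values below are hypothetical:

```python
# Sketch of parsing the tab-delimited expression matrix described above:
# columns are actuators, rows are frames at 25 frames/second, elements
# are percent linear deformation. The sample values are hypothetical.

def parse_expression_matrix(text: str):
    """Parse a tab-delimited matrix of percent deformations into rows of floats."""
    return [[float(x) for x in line.split("\t")]
            for line in text.strip().splitlines()]

sample = "0.0\t0.0\t0.0\n12.5\t3.0\t0.5\n25.0\t6.0\t1.0\n"  # 3 frames, 3 actuators
matrix = parse_expression_matrix(sample)
print(len(matrix), len(matrix[0]))    # → 3 3
print(len(matrix) / 25.0, "seconds")  # duration of the sequence at 25 frames/second
```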
  • the set of the expressions obtained is stored in a database from which to draw.
  • a function was written whose input is represented by the expression or combination to be adopted.
  • the real number corresponds to a decimal (10 values) lying between one expression and another.
  • An ASCII file contains the table of equivalencies between the verbal definition of the expressions and this metric scale.
  • the hereinabove noted function remains active. Any artificial intelligence, data analysis, decisional algorithm, operating system, etc. software can update the expression in real time simply by updating the function input values.
  • the hereinabove noted function can be exemplified with:
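A function consistent with the description above, taking a verbal expression or a value on the metric scale as input and returning the stored drive data, might be sketched as follows; the table contents and all names here are assumptions, not the patent's own listing:

```python
# Hedged sketch of the expression-selection function described above:
# input is the expression to be adopted (a verbal name or a decimal value
# on the metric scale between expressions); output is the corresponding
# drive data drawn from the database. Table format and names are assumed.

EXPRESSION_SCALE = {"smile": 1.0, "disgust": 2.0, "indecision": 3.0}  # equivalency table
DATABASE = {1.0: "smile_matrix", 2.0: "disgust_matrix", 3.0: "indecision_matrix"}

def select_expression(expression):
    """Resolve a verbal or numeric expression to its stored drive matrix."""
    value = EXPRESSION_SCALE.get(expression, expression)  # verbal name → metric scale
    key = min(DATABASE, key=lambda k: abs(k - value))     # nearest stored expression
    return DATABASE[key]

print(select_expression("disgust"))  # prints "disgust_matrix"
print(select_expression(2.4))        # nearest on the scale is also "disgust_matrix"
```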
  • Drive can occur in one of two ways: off-line operation or on-line operation.
  • the drive software operates by drawing the previously calculated expression matrices (trajectories) from a mass memory support within the computer.
  • the virtual environment and the drive software can interact with the decisional algorithm and calculate in real time the expression to be created and drive the linear actuators.
  • Common correction algorithms can be used to advantage for any possible shifting between the set trajectory and the one effectively executed due to errors and/or otherwise irreparable defects in the articulated mechanical system.
  • a corrector algorithm, known in the common literature, that can be used is Adaptive Learning or an equivalent.
  • the model can be developed with the features of any human face desired and can include the most varied configurations of the subcutaneous actuators corresponding to the real ones.
  • With this model it is possible to study the range of desired expressions off-line, save it in the database and with the database control the real movements of the actuators.
  • the hereinabove noted example makes manifest the fact that by the method in accordance with the instant invention it is possible to control not only a great number of actuators but even the highly complex physical systems dependent upon them, such as the artificial skin that is stretched over the support structure that reproduces the human face.
  • Once the model has been created, it is possible to use it to obtain information in two ways: by setting a certain actuator movement, the model produces the modifications in the expression that such movement induces in the skin; by starting from a sequence of expressions, it is possible to identify the actuator movement that permits achievement of that sequence.
  • the method in accordance with the instant invention can be used to create, animate and control articulated and/or deformable mechanical systems of any complexity whatsoever, such as: anthropomorphic articulated mechanical arms; anthropomorphic mechanical hands; articulated hexapods; androids and/or animatrons of any shape; androids, puppets and/or animated dolls; androids, animatrons, mechanical animals, any imaginary beings and/or puppets that can mimic facial expressions and/or speech.
  • the fields of application of the hereinabove noted systems are: robotics, mechanics, cinematography, entertainment, the toy industry. Furthermore, the method can be used: to control an electronic actor, i.e.
  • an animatron or its model that can express movement and facial expressions and mimic human dialog
  • to control an electronic instructor, i.e. an animatron or its model that can teach, presenting lessons and conversing with students
  • prostheses replacing parts of the human body such as: arms, legs, fingers, the face and organs
  • orthotic prostheses to study and/or control demonstration prototypes of organic lesions and loss of function resulting from diseases such as muscle and/or organ malfunction in general
  • models of organs to be retrained, to indicate the best exercises and evaluate their effectiveness.
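By way of purely illustrative sketch (the function name and the gain value are hypothetical, not part of the invention), the trajectory correction noted above can be rendered as a minimal adaptive-learning update, in which each actuator command is nudged by a fraction of the error observed between the set trajectory and the one effectively executed on the previous run:

```python
def correct_trajectory(set_points, executed, commands, gain=0.5):
    """One adaptive-learning pass: adjust each command so as to
    cancel part of the tracking error seen on the previous run."""
    return [c + gain * (s - e)
            for c, s, e in zip(commands, set_points, executed)]

# Hypothetical example: the actuator systematically undershoots.
set_points = [0.0, 1.0, 2.0]
executed   = [0.0, 0.75, 1.5]
commands   = list(set_points)          # first run: command = set point
commands   = correct_trajectory(set_points, executed, commands)
```

Repeating the pass over several runs drives the residual error toward zero for repeatable defects.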

Abstract

Control method for an articulated and/or deformable mechanical system comprising passive subsystems to be animated and/or deformed by means of interconnected actuators linked to said subsystems through junctions and subject to constraints, characterized by the fact of comprising the phases hereinbelow: creation of a virtual model of said system with advanced 3D graphics software; definition in said virtual model of the control variables necessary for driving said system; animation of said virtual model in the desired fashion in order to draw from it the temporal values of said control variables; saving of the values drawn; sending the saved values to the actuator control driver of said system to drive the actuators.

Description

Description of the Industrial Invention entitled:
METHOD FOR CONTROLLING AN ARTICULATED AND/OR DEFORMABLE
MECHANICAL SYSTEM AND ITS APPLICATIONS
The object of the instant invention is a method for controlling an articulated and/or deformable mechanical system such as a device that permits objects to execute complex movements, an automaton, a robot or an artificial face.
In addition, the invention relates to the operating applications in accordance with the hereinabove mentioned method.
There are known mathematical models which, based on data which have been predetermined or gathered by sensors, allow simultaneous management of a limited number of correlated or non-correlated actuators in such a way that the sum of their synergic and/or concurrent actions can bring about, in space and time, the desired evolution of the structures to which they are connected and/or the variables they control. The complex mathematical models that deal with these problems attempt to describe the dynamics of the actuators and of the structures connected to them with integro-differential equation systems, bearing in mind the constraints and mutual influence of the various elements that comprise the controlled system. Once the desired evolution has been established, a sequence of movements, for example, the model is used to obtain the values to be assumed in time by the junction variables in the various actuators. The results of the calculations are then translated into real motion of the controlled system, supplying their driver system with the values that the junction variables of the various actuators must assume in space and time.
By way of example, consider the actuation of the independent locomotion of the legs of a hexapod, which is representative of the best possible applications of conventional control systems. For the locomotion of such a structure, it is necessary to create a movement wherein the points of contact between the extremities of the legs and the surface vary discontinuously with time. Typically, to make it possible for the robot to traverse a plane, it is necessary to write equations that correlate the junction variables of all the actuators and solve these equations in time to obtain the temporal succession of states of the various motors. For a detailed treatment of this topic see Haruhiko Asada, Jean-Jacques E. Slotine, Robot Analysis and Control, John Wiley & Sons, April 1986. References to current androids, controlled by traditional control systems and, therefore, intrinsically limited in their possible functions, are: the humanoid "Cog", invented by Rodney Brooks of MIT (Boston), which can emulate the movements of the human upper arm, or "M2", also of MIT, which can walk. Another example is NASA's, or more specifically, the Johnson Space Center (Houston)'s "Robonaut", which will be used to substitute for astronauts in space walks for repairing space probes and stations. Yet another is "Jack" of the Electrotechnical Lab of Tsukuba (Japan), an android that helps the elderly to live alone. "Face Robot" is being developed at the Science University of Tokyo under the direction of Hideotoshi Akasawa. This android face will be able to recognize and react to different facial expressions.
Bibliographical references for the hereinabove noted androids can be found in Peter Menzel, Faith D'Aluisio, Robo sapiens: Evolution of a New Species, MIT Press, 2000.
The aim of the instant invention is to make it possible to control even complex articulated and/or deformable mechanical systems whose description with a system of equations in accordance with the known technique would be difficult or impossible.
Another aim of the instant invention is to provide a method for controlling articulated and/or deformable mechanical systems utilizing a virtual graphics development environment whereby it is possible to interpret data supplied by a set of sensors and simultaneously manage a large number of actuators, making it possible to construct a model that corresponds extremely well to reality, both in terms of actuator dynamics and of deformations induced, thereby being able to plan and control, within the realm of the virtual environment, the desired succession of animation.
A further aim of the instant invention is to provide a control method of the type hereinabove noted whereby it is possible to construct an unlimited database of configurations and/or animation.
Yet another aim of the instant invention is to provide a hereinabove noted type of control method whereby new animation can be obtained from the database already created without necessarily having to recalculate it with the virtual model, i.e. to study the desired range of animation off-line, save it in the database and with it control the induced movements and/or deformations of the real actuators.
The essential feature of the control method of the instant invention consists of the substitution of conventional mathematical models with solid 3D models constructed in a virtual environment utilizing advanced three-dimensional graphics software in a conceptually new fashion in place of systems of equations, i.e. the basic idea is to use information and data gathered from objects created in a virtual environment to drive analogous real objects.
More specifically, by means of an advanced graphics system, every minute detail of the model of a real structure, no matter how complex, is reproduced and a virtual model is created for each of its actuators. The constraints and limits of the real object are attributed to the virtual model, since a virtual environment has the greatest degree of freedom. As in the traditional case, the key information that the model supplies for control of the actuators is the values to be assigned in time to the junction variables, such as the coordinates of the mobile extremities of a linear actuator or the angle of rotation of a motor axis. After being sent to a driver system, this information allows creation in the real structure of what was simulated in the virtual environment.
The advantages and potential applications of this mode of operation are noteworthy. It is possible to model very complex real systems whose description with a system of equations would be difficult or impossible, since the only limitation would be the calculation capacity of the computer used to create the virtual environment. Furthermore, all of the problems related to the writing and solution of very complex systems of equations can be eliminated, thereby considerably increasing the number of actuators that can be controlled at one time. In this fashion it is possible to obtain the results of traditional robotics more easily, resolving the problem of the kinematics of the various actuators, for which it is necessary to find the values of position, velocity and acceleration assumed in time.
Further features and advantages of the method for controlling articulated and/or deformable mechanical systems in accordance with the instant invention will become clearly apparent from the following description of a practical application supplied by way of example but which is in no way to be construed as limiting.
In the attached drawings:
figures 1, 2 and 3 illustrate schematic representations of the operation cycle whereby it is possible to obtain animation of a real system in accordance with the control method of the instant invention;
figure 4 illustrates a schematic representation of the components of an apparatus operating in accordance with the control method object of the instant invention;
figure 5 illustrates a schematic representation of the motorization system of the example of actuation of the method in accordance with the instant invention;
figure 6 illustrates a schematic representation of an actuator employed in said example;
figures 7 and 8 illustrate a lateral and a frontal view of a motor assembly;
figure 9 illustrates a schematic representation of the motor driver electronics;
figure 10 illustrates a schematic representation of the modeling phase of the application example in accordance with the instant invention;
figure 11 illustrates an example of a matrix used to drive the application example in accordance with the instant invention;
figure 12 illustrates a schematic representation of the bone and locator hierarchy.
Referring to figures 1 and 2, in its most general fashion of execution, the control method in accordance with the instant invention calls for an initial phase wherein, as a function of the application objectives, the components of an articulated and/or deformable mechanical system having sensors and/or actuators as interface (hereinafter indicated as "real system" for the sake of brevity), and their reciprocal relationships, are analyzed; this real system will be managed by the control system. Thereafter, a virtual model is created with advanced 3D graphics software by constructing, in a virtual environment, the three-dimensional models of all the structures and of all the actuators that comprise the real system.
These models are detailed by parameterizing the variables that serve to animate the corresponding real components and by modeling constraints and interdependencies to which said components are subject. The virtual model created in this fashion is then "animated" and/or deformed in the desired manner, and the information necessary for control of the real system is drawn from the previously parametrized variables. In particular, animation of the model can be simplified by the use of motion capture. In accordance with these techniques, the movement to be reproduced is acquired by the real system through appropriate data acquisition systems, processed with specific software and the information useful for the construction of the animated sequences is fed to the virtual model which in turn reconstructs the positions of the actuators and the progressions of the parametrized variables from this information.
The information drawn from the model then becomes control driver input for driving the real system.
By way of example, software known as MAYA, marketed by Alias/Wavefront, or an equivalent can be used as the advanced 3D graphics system. For the motion capture techniques, Gypsy Motion Capture System software marketed by Meta Motion, or an equivalent can be utilized and the data acquisition systems for capturing the movement to be reproduced can be comprised of magnetic sensors or the equivalent. The same MAYA software or software such as Filmbox Animation marketed by Kaydara or an equivalent can be used for processing the data acquired by such systems.
The virtual model is created in order to obtain the variables necessary to drive the articulated and/or deformable mechanical system which, in its most general form, comprises n actuators, k constraints, j junctions and p passive subsystems animated and/or deformed by the actuators. These variables represent the control parameters whereby a large number of actuators are driven in parallel in space and time, thereby causing the real system and/or the highly complex physical systems dependent upon it to assume the desired animation and/or deformations. For the best construction of the virtual model of that system in accordance with the method that is the object of the instant invention, the passive subsystems are projected in the virtual environment by defining their properties of animation and/or deformation and maintaining unchanged their proportions and reciprocal distances (modeling phase). The actuators, constraints and junctions are created graphically and placed in their proper spatial positions. Then, the system control variables are defined, i.e. the values in time and space to be extracted from the virtual model to permit control of the real system (parametrization phase). The spatial-temporal motor patterns are created for each individual actuator (animation phase) in the following possible manners:
1. by setting a certain succession of values of the actuator control variables, the model returns the animation and/or deformations induced on the real system (direct kinematics);
2. from animation and/or deformations induced graphically on the virtual model, it is possible to obtain the progression of the actuator control variables that permits them to be achieved in the real system (inverse kinematics);
3. from animation and/or deformations acquired through motion capture techniques, both off-line and in real time, and projected on the virtual model, it is possible to obtain the succession of values of the actuator control variables that permits them to be achieved, since the model can be interfaced with the most common commercial motion capture systems.
The passive subsystems animated and/or deformed by the actuators can be reproduced graphically, making sure to create them geometrically similar (in scale) and maintaining the distances therebetween proportional.
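The direct and inverse kinematics manners in points 1 and 2 can be sketched, under the simplifying assumption of a single actuator with a monotone control-to-deformation map, as follows. The map here is a toy stand-in, not the invention's model; the inverse is recovered numerically by bisection rather than by the graphics software:

```python
def direct(u):
    """Toy direct kinematics: deformation induced by control value u
    (a monotone map, purely illustrative)."""
    return u ** 3 + u

def inverse(target, lo=-10.0, hi=10.0, tol=1e-9):
    """Inverse kinematics for the toy map: recover the control value
    that produces a desired deformation, by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if direct(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For a desired deformation of 10.0 the recovered control value is 2.0, since direct(2.0) = 8 + 2 = 10.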
For the parametrization phase, wherein the actuator, constraint and junction models are created, the following steps are executed: a skeleton base is created; its volumetric properties are defined; the skeleton base is positioned in space; the properties of animation and/or deformation are defined; the zone of influence that each individual actuator and/or junction has on the passive subsystems dependent upon it is defined. For creation of the skeleton base, what is called a "bone" (skeleton) in the most common 3D graphics software is created. This is a vector oriented with respect to a coordinate system centered on its point of application. In order to assign volumetric properties to it, a cylinder is drawn which, when placed on the bone, is grouped together with it as a single structure. The generic command is "create group". This group can be called an actuator, whose point of application is a junction. This object is positioned and scaled according to the size and position of the real object. The junction of this object may or may not be constrained to other structures, thereby defining its animation. To define the individual deformation properties, a flexor, a feature present in graphics software, is associated with each actuator. It is one of the characters that can be assigned to the elements of a virtual environment. So that the flexors can have an influence on the passive subsystems that are dependent upon them, they are interconnected with both the passive subsystems and the actuator, selecting all of them simultaneously and correlating them by means of the generic rigid bind command. The area of effect and the influence of each flexor on the deformation and/or animation of the dependent subsystems are modified specifically with the paint tool panel. Since the virtual structure can be adapted to any subsystem and to any real configuration of actuators, the potential is unlimited.
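As a minimal sketch of the parametrization phase (all class and field names here are illustrative assumptions, not terms of the invention), the actuators, junctions and constraints can be represented as plain data structures in which any value assigned to a control variable is clamped to the constraints attributed to the virtual model:

```python
from dataclasses import dataclass, field

@dataclass
class Actuator:
    name: str
    junction: str               # junction through which it acts
    control_value: float = 0.0  # parametrized control variable

@dataclass
class VirtualModel:
    actuators: dict = field(default_factory=dict)
    constraints: list = field(default_factory=list)  # (actuator_name, lo, hi)

    def add(self, actuator, lo, hi):
        self.actuators[actuator.name] = actuator
        self.constraints.append((actuator.name, lo, hi))

    def set_control(self, name, value):
        # clamp to the constraints attributed to the virtual model
        for act_name, lo, hi in self.constraints:
            if act_name == name:
                value = max(lo, min(hi, value))
        self.actuators[name].control_value = value
        return value
```

A model built this way refuses, for example, a jaw rotation outside the range defined for the corresponding bone.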
In the animation phase, by means of the structures hereinabove described, the spatial-temporal motor patterns of each individual actuator are implemented in the fashion previously indicated. The progression in space of the animation and/or deformations of the actuators and of their interconnected structures can be defined through the spatial keys of the set driven key tool present in all 3D development environments. With this tool it is possible to constrain to set reference values the geometric parameters corresponding to movements and/or deformations of the virtual structures created. The software will animate the model in accordance with the trajectories that satisfy its intrinsic characteristics and with the passage through these known values, in accordance with points 1, 2 and 3 hereinabove described. The associated deformations are consequently set, reproducing the real physical movements and/or deformations that occur in the actuator itself.
The control variables of the virtualized actuator are associated with another structure, the locator. This is a constraint structure that can be associated with one or more elements whose parameters it influences. Attributes of the locator (control variables), whose variation causes a numerical variation in the corresponding geometric parameters of the flexor by passing through the reference points, are thereby created.
The time progression of the actuator animation and/or deformations and of the structures interconnected to them can be decided by defining the temporal keys with the time slider, a tool present in all 3D development environments. For every animation, the state of the model at each desired instant is set in the time slider. The software will construct all the intermediate states of the progression in time of the actuator animation.
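The construction of intermediate states between temporal keys amounts to interpolation; a minimal linear version is sketched below (3D software such as that mentioned typically uses smoother spline curves, so this is only an illustration of the principle):

```python
def interpolate(keys, t):
    """Linear interpolation between temporal keys.
    keys: sorted list of (time, value) pairs, as set in a time slider."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    # find the bracketing pair of keys and blend linearly
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
```

With keys at (0, 0.0) and (10, 20.0), the intermediate state at t = 5 is the value 10.0.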
Thus, by acting upon each individual actuator, spatial-temporal animation and/or deformations of the structures interconnected to them can be obtained. When direct action on the structures interconnected to the actuators is desired to achieve the animation and/or deformations of the actuators, it will be necessary to enclose those structures in the deformation lattice, another structure present in 3D software. During execution of the animation and/or deformations in the virtual model, the time values of the previously defined control variables are extracted and saved. This procedure can be accomplished by simply using the specific 3D graphics software commands, i.e. getAttr and write. These values are saved in an ASCII file and represent the values that the control variables of the real system must assume in time (set points). The columns are the control variables of each actuator and the rows express time. The file is thereby in a standard format, commonly accepted by any system, and is the actuator control driver input. The drivers receive as input the temporal set points of the corresponding variables that they control and drive their movement.
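The ASCII set-point file described above, with one column per actuator control variable and one row per time instant, might be written and read back as follows (the layout details beyond rows and columns, such as the separator and numeric precision, are assumptions for illustration):

```python
def save_set_points(path, rows):
    """Write set points as ASCII: one row per time step, one
    whitespace-separated column per actuator control variable."""
    with open(path, "w") as f:
        for row in rows:
            f.write(" ".join(f"{v:.4f}" for v in row) + "\n")

def load_set_points(path):
    """Read the set points back into a list of rows of floats."""
    with open(path) as f:
        return [[float(v) for v in line.split()] for line in f if line.strip()]
```

A database of animations is then simply a collection of such files from which a decisional algorithm or an operator can draw.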
Together these files comprise a database wherefrom to draw. Any decisional algorithm, artificial intelligence or data analysis software, or simply an operator, can update in real time the animation to be executed by drawing it from the database. Driving can occur in two manners: off-line operation and on-line operation.
In the first case, the decisional algorithm will operate by drawing the animation from a computer mass memory support. In the second case, the information necessary for driving the actuators will be extracted in real time from the virtual model animation and sent, still in real time, to the control drivers. This procedure is illustrated in figure 3.
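Both driving manners reduce to the same dispatch loop, differing only in where the rows of set points come from (mass memory in the off-line case, the live virtual model in the on-line case); a sketch with a hypothetical send function and control period follows:

```python
import time

def drive(set_point_rows, send, period=0.02):
    """Dispatch one row of set points per control period.
    `send` stands for whatever writes a row to the actuator drivers;
    off-line, rows come from a saved file or database, while
    on-line they arrive in real time from the virtual model."""
    for row in set_point_rows:
        start = time.monotonic()
        send(row)
        # pace the loop in real time: sleep out the rest of the period
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```

The same loop therefore serves both operation modes; only the source iterable changes.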
Figure 4 illustrates how the most general configuration of the method in accordance with the instant invention can be put into practice. In this figure, 1 indicates a computer, specifically a personal computer wherein advanced 3D graphics software has been implemented, linked by an RS232 serial port to an electronic control, indicated in its entirety by 2, which is in turn connected to a closed-cycle motor assembly driver. Means for command transmission 3, generally comprising transmission cables, exit the motor assembly. These means act on a system to be controlled 5 including actuators 6 and subsystems 7 to be animated and/or deformed.
In the following description, the articulated and deformable mechanical system controlled by the method in accordance with the instant invention comprises, by way of non-limiting example, an artificial face which has been called, for the sake of simplicity, "android head", whose appearance and facial expressions are similar to those of the human face. In accordance with the general operating principle described hereinabove, compatible with available technology and with reference to figure 5, the android head, generically indicated with 10, consists of a skeleton support structure 11 whereto artificial muscles (actuators) 12 are attached, connected to an artificial skin 13 that covers the entire head. The actuators 12 are operationally connected by mechanical power transmission means 14 to a motor assembly 15 and an encoder 16, which are controlled by means of an electronic motor driver 17 from signals received from a virtual model of the human face reconstructed in an advanced 3D graphics environment on a personal computer equipped with 128 MB of RAM, a Windows NT operating system and a graphics card with OpenGL libraries or equivalent systems. Hereinbelow the components of the system are described in greater detail.
Skeleton support structure
The support structure 11 is a resin reproduction of the human cranium and part of the cervical spinal column which serves as anchorage for the actuators 12 and is covered with artificial skin 13. The structure has an internal space (normally occupied by the brain mass) wherein the sheaths for the transmission cables 14 that transmit mechanical power from the motor assembly to the actuators are housed by means of constraints. The sheaths innervate the support structure through perforations that allow the cables to be positioned tangentially to the skin.
Artificial skin
In its current form, the artificial skin 13 is prepared by a totally manual technique by first making an alginate impression from a live model. Once the impression has solidified, plaster is poured manually over it to produce the negative of the face. Finally, a mould is obtained from this negative wherein silicone rubber is poured to form the artificial skin.
Artificial muscles
The actuators 12 that function as muscles are a silicone structure 20 of elongated elliptical form, illustrated schematically in figure 6, whose core is the transmission cable 14 of the mechanical power drawn from the motor assembly. The end of the sheath 21 where the cable enters is attached to the support structure at the muscle's point of origin, while the transmission cable 14 is constrained to the skin in any suitable manner, generically indicated with 22, e.g. by bolting its end in the hole of a metal cylinder countersunk in a silicone bulge on the internal surface of the skin where the real muscle terminates. The actuators are molded by pouring liquid silicone into a hollow mould that has the form of that specific actuator and allowing it to dry. It is thereby possible to create actuators of any shape and size. Linear movement of the cable 14 within the silicone mass 20 causes its longitudinal shortening and consequently its transversal widening, allowing the skin 13 to be both stretched and deformed as desired. The artificial muscle and the skin thereby exchange both actions that are tangential and perpendicular to its internal surface, due to the skin's contact with the elliptical muscle surface, as well as actions resulting from the direct action of the end of the cable where it is anchored directly to the skin. The rigidity of the cable-sheath transmission system permits the retroaction for control of the actuators to be performed by measuring the displacement directly on the motors and not on the actuators.
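To first order, the behaviour described, longitudinal shortening producing transversal widening, follows from treating the silicone mass as approximately incompressible; a sketch under that assumption (the function and its arguments are illustrative, not taken from the patent) is:

```python
import math

def widened_radius(length0, radius0, pull):
    """Transverse radius of the muscle after the cable shortens it by
    `pull`, assuming roughly constant silicone volume, i.e.
    L0 * r0**2 == L1 * r1**2 for an ellipsoid-like body."""
    length1 = length0 - pull
    if length1 <= 0:
        raise ValueError("pull exceeds muscle length")
    return radius0 * math.sqrt(length0 / length1)
```

For instance, shortening a 10 mm long, 2 mm radius body to 2.5 mm would double its transverse radius twice over, to 4 mm, under this constant-volume assumption.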
Motor assembly
Referring to figures 7 and 8, the motor assembly comprises twenty-six rotating linear motors divided into two groups. Each motor is composed of a coil 27 with a floating arm 28 attached, placed between two fixed magnets 29 (north and south) in the form of a 45° circular sector. In each group, the floating arms of each motor are equidistant one from the other and are mounted to rotate independently on an axis 30. Finally, the magnetic circuit closes in a frame 31 that keeps the magnets 29 and the rotation axis 30 in a fixed position. Rotation causes the linear movement of the end 28a of each floating arm 28, wherefrom the necessary mechanical power is drawn. Thus, with a 45° angle of rotation, a 20 mm maximum linear movement is obtained. These motors are able to deliver a force of 12.5 N with a maximum current absorption of 0.5 A. The maximum attainable acceleration is 15 m/s2 and the maximum velocity is 0.78 m/s. Repeatability and precision depend on the characteristics of the transducer 32, which in this case is a Hall-effect transducer, and on the type of drive; these normally produce a repeatability of ±5 μm and a precision of ±0.08 mm. The cable 14 necessary for transmitting power from the motor to the respective actuator is attached to the end 28a of the floating arm by a bolt 33, and the sheath 21 is connected to the groups with a ferrule 34 which is inserted in the holes made in a plate 35 at the front of the groups. A ventilation system, not shown, ensures the necessary convection to eliminate the heat produced by the motors.
Motor control electronics
The electronic system was created using a modular approach and allows housing of the closed-cycle controlled motor drivers, of the programmable microprocessors, of an electronic card for interfacing with a personal computer and of the power supplies. Figure 9 illustrates the diagram of the electronic control. In the specific case, each module, generically indicated with 36, houses a programmable microprocessor 37 and four motor drivers 38. The motherboard 41 which holds the modules 36 comprises an RS232 serial port indicated with 39 and a power supply 40. The protocol was organized on 3 bytes: the first for the address, the second for the position and the third for the speed of movement. The baud rate is 57.6 kbaud with a command execution time of 520 μs; all of the actuators execute the command in 12.5 ms. The microprocessor can be programmed via software. There is a closed-cycle control executed by appropriately calibrated Hall-effect sensors 32 that read the angular velocity of the coils 27. With this system, the motors can be activated with very accurate dynamics, to obtain deformations of the actuators and of the artificial skin that correspond to those in the virtual model.
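The 3-byte protocol can be sketched as simple packing and unpacking of a frame (the assumption here is that each field fits one unsigned byte, 0-255; the patent does not state the field ranges):

```python
def pack_command(address, position, speed):
    """Build the 3-byte frame: address, position, movement speed.
    Each field is assumed to fit one unsigned byte (0-255)."""
    for v in (address, position, speed):
        if not 0 <= v <= 255:
            raise ValueError("field out of byte range")
    return bytes([address, position, speed])

def unpack_command(frame):
    """Split a received 3-byte frame back into its fields."""
    address, position, speed = frame
    return address, position, speed
```

Note that 3 bytes at 57.6 kbaud, with a start and a stop bit per byte, occupy 30 bits / 57600 baud ≈ 520 μs on the wire, which matches the command execution time stated above.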
Virtual model
The creation of a virtual model of a human face makes it possible to obtain the spatial-temporal trajectories necessary to drive an anthropomorphic head or an android head (real system) that can simulate human facial expressions. The virtual model of the human face of the instant invention makes manifest that with this technique it is possible to have parallel control of a large number of actuators in such a fashion as to allow even the complex physical systems that are dependent upon them to undergo the desired deformations and thereby produce the required facial expressions.
The best way to reproduce the human face is to project the physical model of the skin in a virtual environment keeping the proportions unchanged and defining this structure as a deformation surface (modeling phase). The virtual model of each individual muscle is then created by parameterizing it and placing it in the correct position (parameterization phase). Finally, the dynamics of the facial expressions are created by defining which spatial-temporal motor patterns are to be associated with each actuator in order to achieve the desired surface deformations (animation phase).
a) Modeling
In order to project a surface and its reference system from physical space into a virtual development environment, it is necessary to identify a certain number of points by tracing meridians and parallels directly on the surface using a digitizer. This is a commercially available mobile device comprised of a base, three mechanical arms interconnected by joints and a stylus at the end. By using a manual placement system, the digitizer allows the stylus to identify any position in space with respect to its base. The position is calculated by means of digital encoders at the junctions and is transmitted to the PC via RS232. The points that were previously traced on the surface to be projected are thereby probed, mapped in a three-dimensional reference system and transferred into the virtual development environment. Because of their operating programs, these devices are compatible with most commercial software. From these points, the development environment recreates the parallel lines that interpolate them, and from these lines, the model of the probed surface is created. As can be seen in the diagram in figure 10, in the specific case, a surface model was created comprised of thirteen counterclockwise superficial section lines originating from the middle of the face and of fifteen transverse section lines originating from the mouth. To create the space of the oral cavity, it is possible to add points manually to these curves for the inside of the mouth. By selecting all of the curves obtained thereby, a surface will be created with the loft command, common to 3D software.
An entire real or imaginary face could also be created manually.
b) Parameterization
The phases hereinbelow were followed to create the models of the artificial muscles: creation of a skeleton base; definition of its volumetric properties; duplication of this structure (actuator and base) and placement of its clones at the application sites of the physical actuators (artificial muscles); definition of the deformation properties of each individual actuator; indication of the zone of influence that each individual actuator has on the surface. As hereinabove described, for creation of the skeleton base, a bone is constructed whereto volumetric properties are assigned. This structure constitutes the generic actuator, which is positioned and scaled in accordance with the location and size of the artificial muscle. The application point of the actuator must be placed where the artificial muscle is attached to the support structure and its end placed where the artificial muscle attaches to the artificial skin. Opening and closing of the mouth is actuated by creating a control bone which is placed on the joint of the jaw. It is placed where the jaw actually rotates when the mouth is opened and is linked to the four actuators of the part underneath the jaw. This causes rotation with respect to a fixed point on the new bone so as to define the actual opening and closing of the mouth by rotation rather than by translation.
Once all of the actuators have been positioned on half of the face, a bone support base is created. This skeleton base, when connected to all of the actuators of half of the face, can be duplicated easily, giving a mirror image for the other half of the face, thereby producing perfect symmetry. The two face halves each have their own bone base. We next proceed to the creation of a general control parent bone which is connected to the control bones of the two face halves and, therefore, to all of the actuators, thereby creating a hierarchical structure starting from the parent bone. When this structure has been obtained, the actuators that are situated on the median axis of the face are created along with their parent bone. The parent bone of the actuators of the median axis is also enslaved to the general control parent bone.
As hereinabove described, to define the individual deformation properties, a flexor is associated with each actuator. So that the flexors can influence the face, they are linked both with the cylindrical surface around the bone and with the face surface, by simultaneously selecting them with their respective bones, the face surface and the cylinder and correlating them through the rigid bind command. Once all of the flexors have been defined and properly sized, the influences they have on the surface are indicated. Each actuator's area of effect and its deformations are modified by observing the influence that they have on the surface of the face. By using the appropriate paint tool panel it is possible to show and easily modify the influence of the flexor on the interconnected structures. In this way the expression potential is unlimited, since the virtual structure can be adapted to any face and any real configuration of the actuators.
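The zone-of-influence idea can be illustrated with a minimal sketch: each skin point is displaced by a weighted sum of actuator deformations, with per-point weights of the kind a paint tool lets the user edit. The function, names and numerical values below are assumptions for illustration only.

```python
# Hypothetical illustration of a flexor's zone of influence on the skin:
# a point's displacement is the sum of actuator deformations, each scaled
# by a painted weight between 0 (no influence) and 1 (full influence).

def deform_point(rest, actuators, weights):
    """rest: (x, y, z); actuators: {name: (dx, dy, dz)}; weights: {name: 0..1}."""
    x, y, z = rest
    for name, (dx, dy, dz) in actuators.items():
        w = weights.get(name, 0.0)
        x, y, z = x + w * dx, y + w * dy, z + w * dz
    return (x, y, z)

point = (1.0, 2.0, 0.0)
actuator_deltas = {"zygomatic": (0.0, 0.5, 0.0), "risorius": (0.2, 0.0, 0.0)}
influence = {"zygomatic": 1.0, "risorius": 0.5}  # painted per-point weights
moved = deform_point(point, actuator_deltas, influence)
# moved is approximately (1.1, 2.5, 0.0)
```

Repainting the weights changes how far each actuator's effect reaches, without touching the actuators themselves.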
In addition to form, it is possible to confer color, nuance and shadows to the virtual face to make it totally similar to the human face. This procedure can be performed directly by virtual graphics software or is simplified by application of a texture. A texture is an image which, when mapped on the surface, will follow the deformations. It is possible to use a digital photo of the desired face in bitmap format or equivalent as texture. Though the texture is projected from a plane, it adapts in 3D and is automatically modified dynamically with the surface.

c) Animation
In this phase, by means of the previously developed structures, the dynamics of the facial expressions are implemented by defining the spatial-temporal motor patterns to be associated with each individual actuator in order to obtain the desired surface deformations.

c1) Spatial deformations

The spatial progression of the actuator deformations and of the face surface can be decided by defining the spatial keys with the set driven key tool. With this tool it is possible to constrain to known values (keys) the geometric parameters corresponding to deformations of the virtual structure created. The software animates the model in accordance with trajectories that satisfy its intrinsic features and with the passage through these values. The associated deformations are therefore set, reproducing the real physical deformations that occur in the actuator itself. In the specific case, we have constrained only the minimum and maximum parameters of extension/shortening and swelling of each flexor by acting on the scale factors along the longitudinal and transverse axes.
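The set-driven-key constraint described above can be sketched as a simple interpolation between the constrained minimum and maximum scale factors; the key values and function name below are invented, and real 3D packages use richer trajectory curves.

```python
# Hedged sketch of "set driven key": a driver value is clamped between two
# keys, and the flexor's (longitudinal, transverse) scale factors are
# interpolated between their constrained minimum and maximum. A fully
# contracted muscle here is shortened (0.7) and swollen (1.3).

def driven_scale(driver, key_min=0.0, key_max=1.0,
                 scale_min=(1.0, 1.0), scale_max=(0.7, 1.3)):
    """Return (longitudinal, transverse) scale for a driver value."""
    t = max(key_min, min(key_max, driver))          # clamp to the keys
    t = (t - key_min) / (key_max - key_min)         # normalize to 0..1
    lon = scale_min[0] + t * (scale_max[0] - scale_min[0])
    tra = scale_min[1] + t * (scale_max[1] - scale_min[1])
    return (lon, tra)

assert driven_scale(0.0) == (1.0, 1.0)   # at rest
assert driven_scale(1.0) == (0.7, 1.3)   # fully contracted
```

Only the two endpoint keys are constrained, exactly as in the specific case described; everything in between is reconstructed by the interpolation.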
Locators (commonly found in 3D software) which have been hierarchically organized can be used to control the scale factors of several actuators simultaneously. A locator is a structure whereto one or more elements can be associated and their variables influenced simultaneously. For each locator, the associated elements are the actuators, while the variables are the scale factors. Thus, locator attributes are created whose variation determines a numerical variation in the corresponding geometric parameters of the flexor passing through the spatial keys. The preferred procedure is to create a hierarchy of locators by grouping adjacent actuators in a single locator to control the influence of the various actuators. In the specific case, the following locators are created:
- mouth: zygomatic muscle, risorius muscle, depressor muscle of the angle of the mouth, levator muscle of the upper lip, and depressor muscle of the lower lip;
- forehead: corrugator supercilii and frontal muscles;
- eye: ocular muscles;
- nose: levator muscles of the upper lip and ala of the nose;
- jaw: muscles that move the jaw.
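Such a hierarchy of locators, each grouping the scale factors of adjacent actuators under a single attribute, might be sketched as follows; the `Locator` class and muscle names are hypothetical illustrations, not the software's own structures.

```python
# Hypothetical locator hierarchy: setting an attribute on a locator drives
# the scale factor of every actuator grouped under it, and a parent locator
# reaches all actuators of all child locators at once.

class Locator:
    def __init__(self, name, actuators=(), children=()):
        self.name = name
        self.actuators = {a: 1.0 for a in actuators}  # actuator -> scale factor
        self.children = list(children)

    def set_scale(self, factor):
        """Propagate one scale factor to every grouped actuator, recursively."""
        for a in self.actuators:
            self.actuators[a] = factor
        for child in self.children:
            child.set_scale(factor)

    def collect(self):
        """Gather the current scale factors of the whole subtree."""
        out = dict(self.actuators)
        for child in self.children:
            out.update(child.collect())
        return out

mouth = Locator("mouth", ["zygomatic", "risorius", "depressor_anguli"])
forehead = Locator("forehead", ["corrugator", "frontalis"])
parent = Locator("parent", children=[mouth, forehead])

parent.set_scale(0.8)      # one attribute change drives every actuator
scales = parent.collect()
```

Grouping by facial region keeps each attribute meaningful while still allowing a single parent attribute to move everything together.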
The locators which were created separately are then interconnected in groups to the parent locator. Because of this hierarchical structure, it is possible to act upon the attributes of the parent locator to influence the movements of all of the actuators simultaneously in the desired manner. In fact, the expressions are defined as attributes of the parent locator. In the specific case, smile, disgust and indecision expressions were defined. Since human beings are able to effect an extremely high number of expressions, it is possible to model approximately forty basic expressions which can be combined to obtain most human facial expressions. The basic expressions can be subdivided into separate groups: forehead, eyes, nose, mouth and jaw. For a more detailed treatment, see Peter Ratner, Mastering 3D Animation, Allworth Press, New York, 2000. By appropriately varying the percentages of the basic expressions in time, the desired expression is obtained. In the specific case, an expression of disgust can be obtained by a 100% deformation of the variables of the previously defined locators. There is an unlimited range of possible expressions and combinations. By means of a variable, it is possible to monitor which expression will be assumed by the model.

c2) Temporal deformations
The progression in time of the actuator deformations and the deformations of the face surface can be decided by defining the temporal keys with the time slider, a tool present in all 3D environment development software. The model states are set at the instants desired for each expression. The software reconstructs all of the intermediate states of the deformation progression in time of the actuators and, thereby, the expression.
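The reconstruction of intermediate states between temporal keys can be sketched, under the simplifying assumption of linear interpolation (real packages use richer trajectory curves); the key times and values below are invented.

```python
# Minimal sketch of temporal keys: the model state is fixed at chosen
# instants, and every intermediate state is reconstructed by interpolating
# between the surrounding keys.

def interpolate(keys, t):
    """keys: sorted list of (time, value) pairs; returns the value at time t."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 + u * (v1 - v0)

# Actuator deformation (%): rest at frame 0, 50% at frame 10, rest at frame 25
keys = [(0, 0.0), (10, 50.0), (25, 0.0)]
assert interpolate(keys, 5) == 25.0    # halfway to the first key
assert interpolate(keys, 10) == 50.0   # exactly on a key
```

Only the key states need to be authored; the software fills in every frame between them.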
Therefore, by acting on the spatial-temporal deformation of each actuator, the deformation that the skin undergoes can be varied until the desired expression is obtained. If we want to act directly on the skin surface and observe the resulting actuator deformations, it is necessary to enclose the external surface in a deformation lattice, another structure present in 3D software. This structure encloses the surface of the face in a three dimensional lattice with the desired resolution. It is possible to modify the expression as desired by acting upon the points of this lattice.
Driver software for interface between the virtual model and the real system.
During formation of the expression, the spatial-temporal deformations of the actuators are saved in an ASCII file and are renamed in accordance with a table of equivalencies between the previously defined metric scale and the verbal attributes. The file is a matrix comprised of real values delimited by tabs. The columns correspond to the actuators, the rows to time expressed in frames (25 frames/second), and the elements are the percentage of linear actuator deformation obtained from the previously defined attributes. This matrix is formatted in order to be accepted by the driver software. In figure 11, a typical matrix obtained for the hereinabove noted expression of disgust is illustrated for a face half.
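A trajectory file of the kind described above can be reconstructed in a few lines; the function names and the numerical values are invented for illustration, but the format follows the description: tab-delimited real values, one row per frame at 25 frames/second, one column per actuator.

```python
import os
import tempfile

def save_trajectory(path, matrix):
    """Write a frames-by-actuators matrix of percentages, tab-delimited."""
    with open(path, "w") as f:
        for row in matrix:
            f.write("\t".join("%.2f" % v for v in row) + "\n")

def load_trajectory(path):
    """Read the matrix back as lists of floats, one row per frame."""
    with open(path) as f:
        return [[float(v) for v in line.split("\t")] for line in f if line.strip()]

# Two actuators over three frames (0.12 s at 25 frames/second); values invented
disgust = [[0.0, 0.0], [30.0, 45.0], [60.0, 90.0]]
path = os.path.join(tempfile.gettempdir(), "disgust_trajectory.txt")
save_trajectory(path, disgust)
restored = load_trajectory(path)
```

Because the format is plain tab-delimited ASCII, the same file can be produced by the virtual environment and consumed unchanged by the driver software.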
Expression selection to drive the real system
The set of the expressions obtained is stored in a database from which to draw. In the specific case, a function was written whose input is represented by the expression or combination to be adopted. For every expression in the database there is a corresponding natural number (0 is the value at rest), while every combination corresponds to a real number whose decimal part (ten values) lies between one expression and the next. By increasing the number of decimal figures, the number of possible combinations is increased. An ASCII file contains the table of equivalencies between the verbal definition of the expressions and this metric scale. The hereinabove noted function remains active. Any artificial intelligence, data analysis, decisional algorithm, operating system, etc. software can update the expression in real time simply by updating the function input values. In the specific case, we created simulated mastication followed by a hedonistic response to the taste of the food. The hereinabove noted function can be exemplified with:
global proc funzione (string $espressione)
{
    select locator_padre;
    setAttr locator_padre.espressione $espressione;
    playButtonStart;
    playButtonForward;
}
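The numbering scheme above can be sketched independently of the 3D software; the database contents and function name below are hypothetical stand-ins for the table of equivalencies described.

```python
# Sketch of expression selection: each database expression gets a natural
# number (0 = rest); a real input's integer part picks an expression and
# its decimal part gives the fraction of blending toward the next one.

EXPRESSIONS = ["rest", "smile", "disgust", "indecision"]  # index = code

def select_expression(code):
    """Return (expression, blend_target, blend_fraction) for a real code."""
    i = int(code)
    frac = round(code - i, 3)
    if frac == 0.0 or i + 1 >= len(EXPRESSIONS):
        return (EXPRESSIONS[i], None, 0.0)
    return (EXPRESSIONS[i], EXPRESSIONS[i + 1], frac)

assert select_expression(0) == ("rest", None, 0.0)
assert select_expression(1.5) == ("smile", "disgust", 0.5)
```

Any decisional software can drive the face in real time simply by feeding new codes to such a function, which is all the "function input values" update amounts to.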
Drive
Drive can occur in one of two ways:
- off-line operation;
- on-line operation.
In the first case, according to the selection made by the decisional algorithm, the drive software operates by drawing the previously calculated expression matrices (trajectories) from a mass memory support within the computer. In the second case, with a work station of the most recent generation, the virtual environment and the drive software can interact with the decisional algorithm, calculate in real time the expression to be created and drive the linear actuators. Common correction algorithms can be used to advantage for any possible shifting between the set trajectory and the one effectively executed due to errors and/or otherwise irreparable defects in the articulated mechanical system. A corrector algorithm, known in common literature, that can be used is Adaptive Learning or equivalent. For further details see: International Workshop on Nonlinear and Adaptive Control: Issues in Robotics, Grenoble, France, Nov 21-2, Carlos Canudas de Wit (Editor), 1991; John H. Andreae, Associative Learning: for a Robot Intelligence, Imperial College Press, October 1998.
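An off-line drive loop of the kind described can be sketched as follows. The patent only requires "common correction algorithms"; here a simple proportional correction stands in for them, and the timing of the 25 frames/second playback is omitted. All names, gains and values are assumptions.

```python
# Hypothetical off-line drive: the driver reads a precomputed trajectory
# matrix (frames x actuators, percent deformation) and, for each frame,
# nudges every actuator toward its target with a proportional correction.

def drive_offline(matrix, gain=0.5, steps=3):
    """Return the actuator positions reached after playing the trajectory,
    applying `steps` proportional-correction iterations per frame."""
    n = len(matrix[0])
    position = [0.0] * n
    for frame in matrix:
        for _ in range(steps):
            position = [p + gain * (target - p)
                        for p, target in zip(position, frame)]
    return position

trajectory = [[0.0, 0.0], [40.0, 80.0], [40.0, 80.0], [40.0, 80.0]]
final = drive_offline(trajectory)
# final approaches the last frame's targets (40, 80) without overshooting
```

In the on-line case, the only change is that each `frame` would be computed in real time by the decisional algorithm instead of being read from the stored matrix.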
As described hereinabove, it is manifest that with the method in accordance with the instant invention, because of the use of a virtual graphics development environment, it was possible to construct a model which corresponds very closely to a real human face, both in terms of the actuators and of the cutaneous deformations. This has made it possible, within the context of the virtual environment, to plan and control the succession of expressions to be achieved by the real system, even before any of its actuators has been moved.
With this technique, it is also possible to construct an unlimited database of expressions, to obtain new expressions from that database without necessarily having to recalculate them with the virtual model, and to obtain expressions corresponding to the pronunciation of human phonemes.
Furthermore, the model can be developed with the features of any human face desired and can include the most varied configurations of the subcutaneous actuators corresponding to the real ones. With this model, it is possible to study the range of desired expressions off-line, save it in the database and with the database control the real movements of the actuators. The hereinabove noted example makes manifest the fact that by the method in accordance with the instant invention it is possible to control not only a great number of actuators but even the highly complex physical systems dependent upon them, such as the artificial skin that is stretched over the support structure that reproduces the human face. Furthermore, once the model has been created, it is possible to use it to obtain information in two ways: by setting a certain actuator movement, the model produces the modifications in the expression that such movement induces in the skin; by starting from a sequence of expressions, it is possible to identify the actuator movement that permits achievement of that sequence.
By applying motion capture technique, it is also possible to devise a control system that acquires a subject's facial expressions in real time and reproduces them with the real model.
In addition to the application illustrated in the example, the method in accordance with the instant invention can be used to create, animate and control articulated and/or deformable mechanical systems of any complexity whatsoever, such as: anthropomorphic articulated mechanical arms; anthropomorphic mechanical hands; articulated hexapods; androids and/or animatrons of any shape; androids, puppets and/or animated dolls; androids, animatrons, mechanical animals, any imaginary beings and/or puppets that can mimic facial expressions and/or speech. The fields of application of the hereinabove noted systems are: robotics, mechanics, cinematography, entertainment, and the toy industry. Furthermore, it can be used: to control an electronic actor, i.e. an animatron or its model that can express movement and facial expressions and mimic human dialog; to control an electronic instructor, i.e. an animatron or its model that can teach, presenting lessons and conversing with students; to control prostheses replacing parts of the human body, such as arms, legs, fingers, the face and organs, and/or orthotic prostheses; to study and/or control demonstration prototypes of organic lesions and loss of function resulting from diseases, such as muscle and/or organ malfunction in general; to control an electronic physiotherapist wherein models of the organs to be retrained indicate the best exercises and evaluate their effectiveness.
In accordance with the instant invention, variations and/or modifications can be made to the control method of an articulated and/or deformable mechanical system without departing from the protective scope of the invention.

Claims

1. Control method for an articulated and/or deformable mechanical system comprising passive subsystems to be animated and/or deformed by means of interconnected actuators linked to said subsystems through junctions and subject to constraints, characterized by the fact of comprising the phases hereinbelow: creation of a virtual model of said system with advanced 3D graphics software; definition in said virtual model of the control variables necessary for driving said system; animation of said virtual model in the desired fashion in order to draw from it the temporal values of said control variables; saving of the values drawn; sending the saved values to the actuator control driver of said system to drive the actuators.
2. Method in accordance with claim 1, whereby the animation of said virtual model is obtained by setting the desired succession of values of the actuator control variables.
3. Method in accordance with claim 1, whereby the animation of said virtual model is graphically induced to obtain the succession of values of the actuator control variables that permit creation of corresponding movements and/or deformations in the real system.
4. Method in accordance with claim 1 whereby the animation of said virtual model is obtained through motion capture.
5. Method in accordance with any one of the claims hereinabove whereby creation of said virtual model comprises the phases hereinbelow: projection of said passive subsystems in the virtual environment leaving reciprocal proportions and distances unchanged; creation and correct spatial positioning of actuators, constraints and junctions.
6. Method in accordance with claim 5 whereby the phase of creation and positioning of actuators, constraints and junctions comprises the phases hereinbelow: creation of a skeleton base for each actuator; creation of the volumetric properties of said skeleton base; spatial positioning of said skeleton base; definition of its animation and/or deformation properties; definition of the zones of influence each actuator, constraint or junction has on the subsystems dependent upon it.
7. Method in accordance with any one of the claims hereinabove whereby the extraction of the values of the control variables is obtained with the getAttr or equivalent command.
8. Method in accordance with any one of the claims hereinabove whereby the values extracted from the control variables are saved in an ASCII file with the fwrite or equivalent command.
9. Method in accordance with claim 8, whereby the set of files saved constitutes a database wherefrom to draw an instruction file for specific animations and/or deformations to drive the motor driver.
10. Method in accordance with any one of the claims hereinabove whereby said articulated and/or deformable mechanical system to be controlled is an artificial face.
11. Method in accordance with claim 10 whereby said passive subsystem is comprised of artificial skin.
12. Method in accordance with claim 11 whereby the movements and/or deformations of said artificial skin are imparted by means of artificial muscles that connect it to a support structure in the form of a cranium and wherein the operating phases hereinbelow are established: creation of a virtual model of said artificial face with advanced 3D graphic software; definition in said virtual model of the control variables necessary for animation of said artificial face; animation of said virtual model in the desired fashion in order to extract from it the temporal values of said control variables; saving of values extracted; sending the saved values to a control driver of said artificial muscles to drive them.
13. Method in accordance with claim 12 whereby creation of the virtual model of said artificial face comprises the phases hereinbelow: projection of the physical model of the artificial skin of said face into a virtual environment keeping the proportions unchanged; creation of the virtual model of each individual muscle, placing it in its correct spatial position together with respective constraints and junctions.
14. Method in accordance with claim 13 whereby the projection of the physical model of the artificial skin into the virtual environment is achieved by identifying a certain number of points on the skin surface which are obtained by tracing meridians and parallels on it and identifying the spatial position of said points with a digitizer and transferring the measured values to the virtual environment, re-creating parallel lines that interpolate said points and then interpolating said lines to create a surface.
15. Method in accordance with either claim 13 or 14, whereby creation of a model of the artificial muscles comprises creation of a bone whereto volumetric properties are assigned, duplication of said bone in a plurality of clones that, once scaled as desired, are placed in positions corresponding to each artificial muscle, creation of a control bone for each face half and of a general control parent bone for the two control bones, creation of the bones of the median axis of the face and their parent control bone which is also enslaved to said general control parent bone.
16. Method in accordance with claim 15 whereby, in order to define the individual deformation properties, a flexor is assigned to each actuator re-created in the virtual environment. The influence of said flexor on the interconnected structures is demonstrated and modified with a paint tool panel.
17. An apparatus to control an articulated and/or deformable mechanical system comprising at least one passive subsystem to be animated and/or deformed by means of actuators which are linked to each other by junctions and constraints. Said apparatus has one or more processors with advanced 3D graphics software to create a virtual model of said mechanical system that can generate instruction files to drive said actuators, being equipped with motorized mechanisms to command said actuators and a control interface capable of receiving said instruction files and processing them to drive said motorized mechanisms.
18. An apparatus in accordance with claim 17 whereby said control interface comprises a programmable microprocessor connected to said processors and controlling said motorized mechanisms.
19. Apparatus in accordance with either claim 17 or 18 whereby said articulated and/or deformable mechanical system is an artificial face comprising a support structure, and said passive subsystem to be animated and/or deformed is an artificial skin of said face, said actuators being artificial muscles formed by an elongated, elliptical body made of a flexible material with a rigid, longitudinal core attached to the end of said body. This end is connected to said skin and can slide, since it is connected to said motorized mechanism, whereas the other end of said body remains fixed to said support structure.
20. Apparatus in accordance with any of claims 17-19 whereby said motorized mechanisms are constituted by linear motors, each capable of transmitting a sliding movement of pre-established magnitude to its respective rigid core of said artificial muscles.
21. Method for control of an articulated and/or deformable mechanical system and associated equipment as essentially described hereinabove and illustrated in the attached diagrams.