CN1125362C - Three-dimensional imaging system - Google Patents

Three-dimensional imaging system (三维成像系统)

Info

Publication number
CN1125362C
CN1125362C (application CN95197619A)
Authority
CN
China
Prior art keywords
display device
pixel
light
optical element
depth
Prior art date
Legal status
Expired - Fee Related
Application number
CN95197619A
Other languages
English (en)
Other versions
CN1175309A (zh)
Inventor
绍尔顿·S·兹里特
Current Assignee
Visualabs Inc
Original Assignee
Visualabs Inc
Priority date
Filing date
Publication date
Application filed by Visualabs Inc
Publication of CN1175309A
Application granted
Publication of CN1125362C
Anticipated expiration
Current status: Expired - Fee Related


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/50Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels
    • G02B30/54Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels the 3D volume being generated by moving a 2D surface, e.g. by vibrating or rotating the 2D surface
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/15Processing image signals for colour aspects of image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/167Synchronising or controlling image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189Recording image signals; Reproducing recorded image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/243Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/261Image signal generators with monoscopic-to-stereoscopic image conversion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/305Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/324Colour aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/388Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
    • H04N13/395Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/08Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division
    • H04N7/0803Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division using frequency interleaving, e.g. with precision offset
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/25Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/286Image signal generators having separate monoscopic and stereoscopic modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/307Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using fly-eye lenses, e.g. arrangements of circular lenses
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/322Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using varifocal lenses or mirrors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/327Calibration thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/334Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using spectral multiplexing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/337Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using polarisation multiplexing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/361Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/388Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
    • H04N13/39Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume the picture elements emitting light at places where a pair of light beams intersect in a transparent material
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/388Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
    • H04N13/393Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume the volume being generated by a moving, e.g. vibrating or rotating, surface
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2213/00Details of stereoscopic systems
    • H04N2213/003Aspects relating to the "2D+depth" image format


Abstract

A three-dimensional image is obtained from a flat display by varying, pixel by pixel, the apparent distance of the image from the viewer. This is achieved by providing an array of pixel-level optical elements aligned with the image pixels. In the preferred form, each optical element is generally elongate and has a focal length which varies along its length, with the result that the point along the length of the optical element at which a light beam is incident determines the apparent visual distance of the associated pixel from the viewer. In the case of application to a cathode ray tube, the position of the incident light is controlled by causing the electron beam to be displaced vertically by a minute distance as it scans horizontally. In a television application, this vertical distance is determined by a depth signal provided in the broadcast signal received by the television. Applications and embodiments relating to computer monitors, motion pictures and still printed images are also described.

Description

Three-dimensional imaging system
The present invention relates to the display of three-dimensional images, and in particular to a technique requiring no special headgear or glasses on the part of the viewer.
The display of full three-dimensional images has been a major technological goal throughout much of the twentieth century. As early as 1908, Gabriel Lippman invented a method of producing a true three-dimensional image of a scene by exposing a photographic plate through a "fly's eye" sheet of small fixed lenslets. The technique, which became known as "integral photography", displayed the developed image through a similar sheet of fixed lenslets. However, Lippman's method, and the refinements of it developed over the years (for example, U.S. Patent No. 3,878,329), do not provide a technique for images which are easy to produce, usable for moving displays, or capable of reproducing electronically generated images, the dominant image form of the latter half of this century.
Over time, the multiple-image-component approach to three-dimensional imaging has been extended into a variety of refinements, including numerous embodiments of lenticular-strip or grid-plate optical elements for producing stereoscopic images from a single, specially processed image (see, for example, U.S. Patent No. 4,957,311 or U.S. Patent No. 4,729,017, to cite recent relevant embodiments). Most such embodiments suffer from a common series of deficiencies, including severe restrictions on the viewer's physical position relative to the viewing screen, degraded picture quality arising from the division of the generated image brightness between two separate images, and, in most cases, parallax observable in one direction only.
Other prior art for producing true three-dimensional images involves scanning out a physical volume, whether by mechanically scanning a laser beam over a rotating helical screen or a cloud of scattering vapour, by sequentially activating a stack of internal phosphor screens within a cathode ray tube, or by physically deflecting a flexible curved mirror to produce a variable-focus version of a conventional imaging device. All of these techniques have proven cumbersome, being difficult both to manufacture and to view, and on the whole unsuited to widespread application in the consumer market.
In the meantime, a substantial body of viewer-worn technology has appeared, including glasses employing two-colour or orthogonally polarized filters to separate two simultaneously displayed images, and virtual-reality display headsets, all of which relate to the production of a stereoscopic effect, that is, the perception of depth through the fusion of separate left-eye and right-eye images. Some of these can produce stereoscopic images of very high quality, but generally at the cost of viewer comfort and convenience, eye strain, image brightness, and acceptance by those viewers unwilling or unable to view such stereoscopic images. These costs are the subject of a recent body of ophthalmological and neurological research which holds that sustained use of stereoscopic imaging systems, viewer-worn apparatus and the like produces lasting and potentially harmful effects.
Japanese Laid-Open Patent Specification No. 62077794 discloses a flat display device on which an image is presented by discrete pixels, the display having a series of optical elements each aligned in front of a pixel, together with means for individually varying the effective focal length of each optical element, thereby varying the apparent visual distance, from a viewer positioned in front of the display, at which each individual pixel is presented, so as to produce a three-dimensional image.
More particularly, the optical elements of this Japanese specification are lenses formed of nematic liquid crystal, the focal lengths of which are varied by altering an electric field which changes the orientation of the liquid crystal. The system requires transistors and other electrical leads to control each microlens, and special packaging between glass panels is also required. Moreover, the change in effective focal length achieved is small, necessitating additional optical components such as a high-power magnifying lens, which both makes the system unacceptably large and unduly restricts the obtainable lateral viewing angle of the image.
The object of the present invention is to provide an improved three-dimensional imaging apparatus in which the disadvantages of the system described in the aforementioned Japanese specification are overcome.
This object is achieved in that the focal length of each optical element varies progressively along a surface oriented generally parallel to the image, characterized by means for finely displacing, according to the depth required, the position at which light emerges within a pixel, so that there is a corresponding displacement of the position at which that light is incident along the input surface of the optical element, dynamically altering the element's effective focal length and varying the apparent visual distance to the viewer in accordance with the displacement of the light's point of incidence.
In a preferred embodiment the optical elements are formed as one or more lenses, but they may instead be formed as mirrors, or even as combinations of refractive and reflective surfaces.
In its simplest form, the pixels and the overlying optical elements are rectangular, and the focal length of each optical element varies progressively along its length; in this case the point of incidence of the light beam is displaced linearly along that length. Optical elements of other shapes, and other modes of displacement, nevertheless remain within the scope of the invention. For example, the optical element may be circular, with a focal length which varies radially with respect to a central optical axis; in this case the light beam is incident as a radially positioned annular band.
Likewise, although the variation in the optical properties of the pixel-level optical elements is described here as arising from variation in the physical surface profile of the element, we have successfully tested in the laboratory the production of the same variation in optical properties through the use of gradient-index optical materials whose refractive index varies progressively along the optical element.
The relationship between focal length and displacement may be linear or non-linear.
A variety of means may be employed to provide the pixel-level light beams incident upon the array of pixel-level optical elements. In one embodiment of the invention, this light-input means is a cathode ray tube positioned behind the optical array, such that a beam of light may be scanned horizontally behind each row of pixel-level optical elements and, as it passes behind each element, given a minutely differing vertical displacement relative to the scan line. In other embodiments the light-input means may be a flat-panel display device, such as a liquid crystal, electroluminescent or plasma display; electroluminescent devices include LED (light-emitting diode) arrays. In all of these embodiments, moving images are produced by scanning the whole image sequentially, in the same manner employed for conventional two-dimensional moving images. In this manner, moving images may be produced at frame rates limited only by the ability to exercise fine vertical control over the scanning beam at each pixel. This imposes no limiting range on the technique: the embodiments of the invention described here have been operated successfully in our laboratory at frame rates of up to 111 frames per second.
In another preferred embodiment, the pixel-level illumination of the whole image comes from specially prepared motion picture or photographic transparency film, in which each film frame is rear-illuminated in the conventional manner but viewed through pixel-level optical elements of the same type described above. In this embodiment, each transmitted-light pixel within each transparency frame is specifically positioned along the input surface of its optical element, so that its vertical point of incidence produces a point of light at the particular distance from the viewer required for that particular pixel, exactly as in the electronically illuminated embodiments above. Known systems of this conventional type include projecting the three-dimensional image into free space by reflection from a concave mirror or similar image-projection optics. The technique holds a significant advantage over the projection of conventional flat two-dimensional images, in that the projected three-dimensional image floating in free space actually possesses real and visible depth. To date, we have successfully employed concave mirrors having spherical, parabolic and hyperbolic mathematical surfaces, although other concave figures are clearly possible.
These and other objects and advantages of the invention will become clear from the following description taken in conjunction with the accompanying drawings, in which like parts are designated by like reference numerals throughout:
Figure 1(a) is a schematic view, obliquely from the rear, of one embodiment of a pixel-level optical element;
Figure 1(b) is a schematic view of another embodiment of the same kind of pixel-level optical assembly, possessing three optical components;
Figure 2 illustrates the manner in which varying the point of incidence of a collimated light beam on the rear (input) surface of a pixel-level optical element varies the apparent spatial distance of the emerging point of light from the viewer;
Figure 3(a) illustrates how, in a preferred embodiment, this varying incident illumination of the pixel-level optical element is provided by a cathode ray tube;
Figure 3(b) shows another view of this varying incident illumination, and the alignment of the pixel-level optics with the pixels on the phosphor layer of the cathode ray tube;
Figure 3(c) illustrates the relationship between the size and aspect ratio of the collimated incident light beam and those of the pixel-level optical element;
Figure 4(a) illustrates how the pixel-level optics are positioned in front of an illumination source, such as the cathode ray tube of a computer monitor or television, or another generally flat-screened imaging device;
Figure 4(b) shows a second preferred pattern of image pixels which may be employed for this purpose;
Figure 5 illustrates the manner in which the depth signal is added to the horizontal scan lines of a television or computer monitor image;
Figure 6 illustrates how motion picture film, or some other form of illuminated transparency, may be used as the illumination source, varying the specific point at which light is incident upon the pixel-level optical element;
Figure 7 illustrates how, in a three-dimensional motion picture display, an array of pixel-level optical elements may be used to view a continuous motion picture film so as to display its successive frames;
Figure 8 illustrates the method by which the depth component of a recorded scene may be derived by capturing the image with one main imaging lens and one secondary lens;
Figure 9(a) illustrates the process of inferring a depth signal from a conventional two-dimensional image, thereby enabling that image to be displayed in three dimensions on a suitable display device;
Figure 9(b) shows the connection and operation of the image-processing equipment used to add depth to video images according to the process shown in Figure 9(a);
Figure 10 shows the application of the pixel-level depth display techniques derived in these developments to the three-dimensional display of printed images;
Figure 11 illustrates the energy distribution of a conventional NTSC video signal, showing the luminance and chrominance carriers;
Figure 12 illustrates the same NTSC video signal energy distribution, but with the depth signal encoded into the spectrum;
Figure 13(a) illustrates the functional design of the circuitry in a conventional television receiver which conventionally controls the vertical deflection of the scanning electron beam in the cathode ray tube;
Figure 13(b) shows the same circuitry with the addition of the circuits required to decode the depth component from a three-dimensionally encoded video signal and to transform the vertical deflection behaviour of the scanning electron beam appropriately to produce the three-dimensional effect;
Figure 14 shows a preferred embodiment of the television-based electronic circuitry implementing the depth extraction and display functions shown in Figure 13(b);
Figure 15 shows an alternative pixel-level optical structure in which the position of the input light is varied radially rather than linearly;
Figure 16 is similar to Figure 2, but shows an alternative means of varying the apparent visual distance from the viewer of the light emerging from an individual pixel; and
Figure 17 illustrates how the arrangement shown in Figure 16 may be implemented in a practical embodiment.
Figure 1(a) shows, greatly enlarged, one possible embodiment of an optical element 2 used to vary the distance from the viewer at which light incident upon the element appears to converge to a point. For reference, the dimensions of such an optical element may vary considerably, but should match the dimensions of the display pixels; thus, for a television display, dimensions of the order of 1 mm wide by 3 mm high are typical. Optical elements as small as 0.5 mm by 1.5 mm may be used in computer monitors designed for close viewing, while elements as large as 5 mm wide by 15 mm high may be used in large commercial display devices designed for viewing at a considerable distance.
The materials from which these pixel-level optics have been fabricated to date are fused silica glass (refractive index 1.498043) and one of two plastics, namely polymethyl methacrylate (refractive index 1.498) or methyl methacrylate (refractive index 1.558). It is not suggested, however, that these are the only, or even the preferred, optical materials from which such pixel-level optical elements may be made.
In Figure 1(a) the pixel-level optical element is viewed obliquely from the rear, and it will be seen that, while the front surface 1 of the element is convex from top to bottom, the shape of its rear surface varies progressively from convex at the top to concave at the bottom. Both linear and non-linear progressions of this variation in optical properties have been employed successfully. With a collimated beam of light projected onto the element along the direction of optical axis 3, it will be seen that the converging refractive surfaces through which the collimated beam passes change as the point of incidence moves from the top of the element to the bottom.
Although the embodiment shown in Figure 1(a) has one fixed surface and one varying surface, elements may also be designed in which both surfaces vary, or which have more than two optically refracting surfaces. Figure 1(b), for example, shows a second embodiment in which the pixel-level optical element is a compound optical element possessing three optical components. Laboratory testing has indicated that compound pixel-level optical assemblies can provide improved image quality and improved viewing angle over single-component assemblies, and in fact the most successful embodiment of this technology to date employs three-component optics. However, since single-component optics do function effectively in this invention as described herein, the pixel-level optics illustrated in this specification are, for clarity of illustration, depicted as single-component optics.
For clarity of presentation, Figure 2 shows, in compressed form, a viewer's eye 4 at a distance in front of a pixel-level optical element 2. Collimated beams of light are incident at different points on the rear of optical element 2, three such beams being denoted 5, 6 and 7. Because the focal length of element 2 varies with the point of incidence of a beam, Figure 2 illustrates how the resulting point of light is presented to the viewer at a different apparent point in space, 5a, 6a or 7a, corresponding to the specifically numbered positions of the incident beams. Although points 5a, 6a and 7a are in fact vertically displaced from one another, this vertical displacement is not perceived by the viewer, who sees only the apparent displacement in depth.
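The relationship illustrated in Figure 2 between the point of incidence and the apparent distance of the pixel can be sketched numerically. The following fragment is a minimal illustration in C, not taken from the patent: the linear focal-length profile, the phosphor-to-lens spacing and all names are our assumptions, with the thin-lens equation standing in for the real transfer function of optical element 2.

#include <stdio.h>

/* Assumed linear focal-length profile along the element height h. */
static double focal_at(double y, double h, double f_top, double f_bot)
{
    return f_top + (f_bot - f_top) * (y / h);      /* y in [0, h] */
}

/* Thin-lens estimate of the image distance for a light point sitting a
   fixed distance s behind the element; a negative result indicates a
   virtual image, i.e. a point appearing behind the screen surface. */
static double apparent_distance(double f, double s)
{
    return 1.0 / (1.0 / f - 1.0 / s);
}

int main(void)
{
    const double h = 3.0, s = 2.0;                 /* mm, illustrative only */
    double y;
    for (y = 0.0; y <= h; y += 1.0) {
        double f = focal_at(y, h, 4.0, 8.0);       /* assumed profile, mm */
        printf("y=%.1f mm  f=%.1f mm  image at %+.2f mm\n",
               y, f, apparent_distance(f, s));
    }
    return 0;
}

Sliding the point of incidence y, exactly as beams 5, 6 and 7 do, is all that is needed to move the perceived point between positions such as 5a, 6a and 7a.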
Figure 3(a) shows how, in one preferred embodiment of the invention, each individual pixel-level optical element is placed on the surface of a cathode ray tube used as the source of illumination. In this figure, optical element 2 is mounted on the glass front face 8 of the cathode ray tube, behind which lies the usual phosphor layer 9, which emits light when struck by the projected, collimated electron beam, shown in this figure in different positions as beams 5b, 6b and 7b. For these three illustrated electron beam positions, and for any other electron beam position within the spatial extent of this pixel-level optical element, light from a point will be incident upon a unique point on the rear of the pixel-level optical element. The vertical position of the electron beam is, for a given signal, entirely controllable by means of the conventional electromagnetic beam-positioning coils found on conventional cathode ray tubes, although experiments conducted in the laboratory have shown that images displayed at high frame rates, roughly above 100 frames per second, require beam-positioning coils constructed to be more responsive to the higher deflection frequencies associated with such frame rates. The pattern of phosphor deposited on the cathode ray tube must, however, match the placement of the pixel-level optical elements both in extent and in spatial position; that is to say, each optical element must be able to receive light from the phosphor beneath it over the entire linear input surface for which it is designed. Figure 3(b) illustrates this arrangement through an oblique rear view of pixel-level optical elements 2. In this figure, nine adjacent phosphor pixels 35 are drawn, in the three different colours used in a conventional colour cathode ray tube, and of generally rectangular shape. Note that the size and aspect ratio of each phosphor pixel substantially match the size and aspect ratio of the input end of the pixel-level optical element facing it. By observing the phosphor pixel represented with shading, it will be seen that the electron beam scanning this phosphor pixel can be focused at any point along the length of the pixel, illustrated here by the same three representative electron beams 5b, 6b and 7b. The result is that the point of light emission can be finely displaced within this pixel.
Figure 3(c) illustrates the importance of the size and aspect ratio of the light beam incident upon pixel-level optical element 2, shown here from the rear. The visual display of depth in a television picture tube is, in its resolution requirements, more akin to the chrominance, or colour, display than to the luminance, or brightness ("black-and-white"), portion of the video signal. It is accepted that the vast majority of perceptible detail in a video signal is conveyed by the relatively high-resolution luminance component of the image, over which the lower-resolution chrominance component is displayed. Because the eye is considerably less demanding in perceiving colour than in perceiving image detail, it is possible to employ a much lower resolution for chrominance. Research in our laboratory indicates that the eye is similarly undemanding in perceiving depth in a television image.
That said, the visible display of depth is still produced by the physical movement of the beam of light incident upon the linear pixel-level optical element, and it is apparent that the greater the available range of movement of this incident beam, the greater the opportunity to influence the visible depth.
In Figure 3(c), pixel-level optical element 2 is approximately three times as high as it is wide. Collimated incident beam 66a, shown here in cross-section, is circular, with a diameter approximately the width of optical element 2. Collimated incident beam 66b is likewise circular, but with a diameter approximately one-fifth the length of optical element 2. On the one hand, this allows beam 66b a greater range of movement than beam 66a, offering the possibility of a greater range of visible depth in the resulting image; on the other hand, this comes at the cost of the cross-sectional area of the illuminating beam, which is only about 36% of that of beam 66a. To maintain comparable brightness in the resulting image, the intensity of incident beam 66b would have to be approximately 2.7 times that of beam 66a, an increase which is entirely achievable.
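The 36% and 2.7 figures follow directly from the stated geometry: for an element of width w (and height 3w), beam 66a has diameter w while beam 66b has diameter (1/5)(3w) = 0.6w, so

\frac{A_{66b}}{A_{66a}} = \left(\frac{0.6\,w}{w}\right)^{2} = 0.36, \qquad \frac{I_{66b}}{I_{66a}} = \frac{1}{0.36} \approx 2.7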
Beam 66c is as wide as pixel-level optical element 2 but is a horizontal ellipse with the height of beam 66b, that is, one-fifth the height of optical element 2. The resulting elliptical-section illuminating beam is less bright than circular beam 66a, but roughly twice as bright as the smaller circular beam 66b. This design is quite effective, and is inferior only to the optimal rectangular-section illuminating beam 66d, which is in fact the beam cross-section employed in the final and most preferred embodiment of this invention.
Figure 4(a) illustrates how pixel-level optical elements 2, several of which are drawn for illustration, are arranged in rows, and how they are positioned in front of the source of illumination, shown here as cathode ray tube 10 in a preferred embodiment. As the controlled electron beam scans a row of pixel-level optical elements, its vertical displacement is varied independently for each pixel, producing a horizontal scan line, represented for purposes of illustration by line 15, shown both as a dashed line behind the pixel array and, separately and clearly, as a solid line in the ellipse at the left. It will be seen that the horizontal scan line, which in a conventional cathode ray tube display is straight, is minutely displaced from the centre line of the scan for each individual pixel, thereby producing an image which incorporates perceived depth, at substantially the resolution of the display, by varying the apparent distance of each individual pixel from the viewer.
Experience has shown that a minute gap between individual pixel-level optical elements reduces optical "cross-talk" between elements and thereby improves image sharpness, and that this optical isolation of the elements may be further enhanced by filling the gaps between them with a black, opaque material. Gaps of the order of 0.25 mm have proven very successful, but gaps as small as 0.10 mm have also been demonstrated and act well as optical isolators, particularly when injected with the opaque material described above.
Arrays of these pixel-level optical elements have been fabricated by bonding individual optical elements by hand to the surface of a suitable cathode ray tube with an optically neutral adhesive. This process is, of course, laborious, and prone to placement errors owing to the limited precision of manual tooling. Optical element arrays have, however, been manufactured very successfully by first machining a metal "master" of the whole optical element array in negative, and then embossing the working optical array from that master into thermoplastic material to produce a "pressed" replica, which is bonded in its entirety to the surface of the cathode ray tube. The replication of highly detailed surfaces by pressing from a master has developed into a mature art in recent years, driven by the technical demands of replicating high-density, high-information-content carriers such as videodiscs and compact discs, which must typically be replicated in inexpensive plastics at high precision and low cost. It is anticipated that the preferred manufacturing technique for mass-produced pixel-level optical arrays will remain the pressing of thermoplastic materials from a master. We have also successfully produced pixel-level optical arrays in the laboratory by injection moulding. To date, three separate layers of pixel-level optics, each layer representing one optical component, have been successfully aligned to produce a three-component micro-optical array. In some preferred embodiments these layers are bonded together to help maintain alignment, while in others the layers are fastened at the edges and not bonded together.
In mounting the pixel-level optical elements on the surface of a cathode ray tube or other light-emitting device, strict care must be taken to ensure precise alignment between the optics and the pixels beneath them. Vertical misalignment produces a permanent bias in the displayed depth of the resulting image, while horizontal misalignment limits the lateral viewing range afforded by the three-dimensional display. Moreover, the optical coupling between the light-emitting pixels and the pixel-level optics may be improved by minimizing the distance between the illuminating phosphor and the input surface of the optics. In the context of a cathode ray tube, this means that the front-surface glass of the tube to which the optics are attached should be of the minimum thickness consistent with adequate structural integrity. In large cathode ray monitors this front surface may be as thick as 8 mm, but we have successfully employed these optics on specially constructed cathode ray tubes with front-surface thicknesses of 2 mm. One very successful embodiment of the cathode ray tube is constructed with the pixel-level optics actually fabricated from the front surface of the tube itself.
Figures 3(b) and 4(a) show a generally rectangular pattern of picture tube pixels 35 and linear pixel-level optical elements 2; that is, the rows of the array are straight, with pixel-to-pixel alignment from row to row. This pattern of pixels and optics produces very acceptable three-dimensional images, but it should not be assumed to be the only pattern possible within this invention.
Figure 4(b) shows another preferred pattern of pixels 35, in which each horizontal group of three pixels is vertically offset from the groups to its left and right, forming a "tiled" pattern of three-pixel groups. This structure has been built in the laboratory, each three-pixel group comprising one red pixel 35r, one green pixel 35g and one blue pixel 35b. As in a conventional two-dimensional television picture tube, a colour image is formed by the appropriate illumination of these groups, or "triads", of the same three coloured pixels. Different orderings of the three colours within each triad are possible; the ordering shown in Figure 4(b) is that employed in the embodiment currently in use in our laboratory.
Figure 5 illustrates the minute modification, by a depth signal, of the horizontal scan lines of a scanned image such as a conventional television image. In the conventional cathode-ray television or computer monitor tube shown at the upper right of Figure 5, each individual picture in the motion sequence is created by an electron beam scanning the screen horizontally, line by line, working downward; these scan lines are represented by the four representative scan lines 17 in Figure 5. This highly regular scan is governed by the electronics of the television or computer monitor through horizontal scan generator 16, and variations in the luminance or chrominance components of the signal produce no deviation in the regular top-to-bottom progression of the horizontal scan lines.
The present invention imposes a modification upon this regularity, in the form of minute displacements of the straight horizontal scan, in order to produce the depth effect. This variation is effected by means of depth signal generator 18, whose depth signal is added to the straight horizontal lines through adder 19 so as to produce a minute alteration in the vertical position of each horizontal scan line, generating lines typified by line 20. The depth signal generator shown in Figure 5 is representative of the general function: in a television receiver, the depth signal generator is the usual video signal decoder which conventionally extracts luminance, chrominance and timing information from the received video signal, and whose function is now augmented, as described below, to extract the depth information encoded into that signal in an entirely analogous manner. Similarly, in a computer, the depth component generator is the software-driven display card, such as a VGA display card, which conventionally supplies luminance, chrominance and timing information to the computer monitor, and which will likewise supply the software-driven depth information.
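Functionally, adder 19 simply sums a per-pixel depth offset onto the otherwise constant vertical position of each scan line. A minimal sketch in C (the names, the normalization of the depth samples and the scaling constant are our assumptions, not the patent's):

/* Displace an otherwise straight scan line by the decoded depth signal.
   base_y:     nominal vertical position of this scan line;
   depth[]:    decoded depth samples, one per pixel, in [-1, 1];
   max_offset: largest vertical excursion the pixel optics accept. */
void displaced_scanline(int width, double base_y,
                        const double depth[], double max_offset,
                        double y_out[])
{
    int x;
    for (x = 0; x < width; x++)
        y_out[x] = base_y + depth[x] * max_offset;
}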
Figure 6 illustrates the manner in which, in another preferred embodiment of the invention, transparency film 14 is used to provide controlled input illumination to pixel-level optical element 2. In this example, the portion of the film positioned behind the illustrated optical element is opaque except for one transparent point, designed to admit light at the required point of entry into the optical element. The film is rear-illuminated in the conventional manner, but only beam 5c passes through the transparent point in the film and on through optical element 2. It will be seen that this situation is analogous to that of Figure 3, where the controlled electron beam within a cathode ray tube was used to select the position of the illuminating beam. The transparency film used may be of any size; embodiments employing transparencies of approximately 8 inches by 10 inches have been built.
Figure 7 shows the use of pixel-level optical elements 2, twelve of which are drawn, to illustrate the manner of displaying images from specially prepared motion picture film 13. Optical array 11 is held in position by holder 12. The image on film 13 is rear-illuminated in the conventional manner, and the resulting image is focused onto array 11 through a conventional projection lens system, represented here by dashed circle 22, coaxial with film 13 and projection lens 22 on optical axis 23. The resulting three-dimensional image may be viewed directly, or may be used as the image-generating device of a three-dimensional real-image projector of known type. Likewise, the resulting three-dimensional image may be viewed as a still image, or, like a conventional motion picture, as a sequence of true three-dimensional moving pictures at the same frame rate. In this embodiment the individual pixels on film 13 should be much smaller than those employed for television displays, because the resulting pixels undergo projection magnification; the resolution of photographic film, superior to that of television displays, readily accommodates this reduction in pixel size.
Figure 8 shows a scene being captured with two camera lenses in order to determine the depth of every object in the scene, that is, the distance of any object in the scene from the main imaging lens. The objects to be photographed, viewed here from above, are represented by solid rectangle 24, solid square 25 and solid ellipse 26, each at a different distance from main imaging lens 27 and hence each at a different depth within the photographed scene. Main imaging lens 27 captures the principal detail of the image from the aesthetically preferred angle. Secondary lens 28 is positioned at a distance from the first lens and aimed obliquely at the scene, thereby capturing, simultaneously with the main imaging lens, a different view of the same scene. Well-known geometric triangulation techniques are then employed to determine the true distance of every object in the scene from the main imaging lens.
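The "well-known geometric triangulation techniques" referred to reduce, in the simplest rectified two-camera case, to depth from stereo disparity. A sketch in C under the usual pinhole-camera assumptions (focal length expressed in pixels, baseline between the two lenses, disparity between matched image points; the function and parameter names are ours, not the patent's):

#include <math.h>

/* Depth of a scene point from the disparity between its positions in
   the main and secondary images, for rectified pinhole cameras:
   Z = f * B / d. Returns INFINITY as the disparity vanishes. */
double depth_from_disparity(double focal_px, double baseline,
                            double disparity_px)
{
    if (disparity_px == 0.0)
        return INFINITY;          /* point at effectively infinite range */
    return focal_px * baseline / disparity_px;
}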
One preferred manner of making these computations, and of generating the resulting depth signal, is in post-production, where the depth-signal computations are made "off-line": that is, after the images have been captured, generally at a location remote from the place of capture, and at a rate of depth-signal generation unrelated to the real-time image capture rate. Another preferred manner of depth-signal generation is to make the necessary computations in "real time", that is, substantially as the images are generated. The advantage of real-time depth-signal generation is that it produces live three-dimensional images. The computational demands of real-time generation are, however, far greater than those of "off-line" processing, where the process may be slowed so as to exploit lower computing power at lower cost. Experiments conducted in the laboratory indicate that the preferred method of performing the computations required for real-time processing, on the grounds of cost and compactness of electronics, is through the use of digital signal processors (DSPs) devoted to image processing, that is, digital image processors (DIPs), which are specially designed processors of narrow function but high speed.
Since secondary lens 28 merely captures the objects from an angle different from that of the main imaging lens, this secondary lens may generally be of lower image quality, and hence lower cost, than the main imaging lens. Particularly in motion picture applications, where the main imaging lens is expensive and uses expensive film, the secondary lens may be a low-cost lens of either the film or the video type. Thus, in contrast to conventional photographic stereoscopic imaging techniques, in which both lenses are main imaging lenses and each must therefore use expensive 35 mm or 70 mm film, our technique requires only one high-quality, high-cost lens, because there is only one main imaging lens.
Although this comparative analysis of two images of the same scene captured from different angles has proven highly successful, it is also possible to derive the depth signal from a scene by employing active or passive sensors, positioned in front of the scene, which are not imaging sensors. In the laboratory, using an array of commercially available ultrasonic detectors to capture the reflections of ultrasonic radiation with which a scene was irradiated, we have successfully obtained a complete pixel-by-pixel depth profile of a scene, referred to in our laboratory as a "depth map". Similarly, we have successfully employed scanning infrared detectors to sequentially capture the reflections of infrared radiation with which a scene was irradiated. Finally, we have successfully conducted laboratory experiments using microwave radiation as the irradiating source, with microwave detectors capturing the reflected radiation; this technique would be particularly useful for obtaining three-dimensional images in radar systems.
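For the ultrasonic variant, each entry of the depth map follows from round-trip time of flight. A one-line sketch in C (the speed of sound in air at room temperature is our assumed constant):

/* Range from an ultrasonic echo: half the round-trip delay times the
   speed of sound (approximately 343 m/s in air at 20 degrees C). */
double range_from_echo(double round_trip_s)
{
    return 343.0 * round_trip_s / 2.0;
}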
Figure 9(a) illustrates the basic steps in deriving a depth signal from a conventional two-dimensional image, thereby making it possible to process conventional two-dimensional images, both film and video, into three-dimensional images.
In Figure 9(a), the same series of three objects 24, 25 and 26 presented in Figure 8 above is shown, here viewed from the front on a display. On flat display 29 the viewer of course perceives no differences in depth.
In our process for adding a depth component to two-dimensional images, the scene is first digitized on a computer workstation using a video digitizing module. Then, using an object-definition software suite employing well-known edge-detection and other techniques, each individual object in the scene under consideration is defined, so that each object may be treated separately for depth enhancement. Where the software cannot adequately define and separate objects automatically, a human editor may make the requisite judgments, using a mouse, a light pen, a touch screen and stylus, or a similar pointing device to outline and define objects. Once the scene has been separated into individual objects, the human editor instructs the software by assigning to each object in the scene, in turn, its apparent distance from the lens, that is, its apparent depth. This process is entirely manual, and it is apparent that poor judgment on the editor's part will result in a distorted three-dimensional scene.
In the next step of the process, the software scans each pixel of the scene in turn and determines a depth component for that pixel. The result of this processing is represented by depth component scan line 31 on display 30, which represents the depth signal derived from a row of pixels across the middle of display screen 29, the scan line crossing each object on the screen. The top-view positions of these objects shown in Figure 8 correlate with the corresponding depths represented by representative depth-component scan line 31 in Figure 9(a).
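The per-pixel scan described here can be pictured as filling a depth map from the editor's per-object depth assignments. A minimal sketch in C (the label-image representation is our assumption, not the patent's):

/* Fill a pixel-level depth map from an object-label image and the depth
   assigned by the editor to each object label. */
void build_depth_map(long n_pixels,
                     const unsigned char label[],  /* object id per pixel */
                     const double object_depth[],  /* depth per object id */
                     double depth_map[])           /* one entry per pixel */
{
    long i;
    for (i = 0; i < n_pixels; i++)
        depth_map[i] = object_depth[label[i]];
}

A row of the resulting depth_map, taken across the middle of the screen, is exactly what scan line 31 depicts.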
Figure 9(b) shows the connection and operation of the equipment used to add depth to video images according to this method. In this figure, image-processing computer workstation 70, containing video digitizer 71, controls input video tape recorder (VTR) 72 and output video tape recorder 73, as well as video matrix switcher 74 (in Figure 9(b), control is indicated by dashed lines and signal flow by solid lines). On instruction from the workstation, the video digitizer accepts one frame of video from the input VTR through the matrix switcher. This frame is then digitized, and the object-definition process described with reference to Figure 9(a) is applied to the resulting digital scene. After the depth signal has been computed for this frame, the signal for this same frame, accompanied by the computed depth component, is fed to NTSC video signal generator 75, and the depth component is inserted by the NTSC generator into the appropriate place in the video spectrum of the video frame. The resulting depth-encoded video frame is then written out to output VTR 73, and the process begins anew with the next frame.
Several major points concerning this process emerged during its development in the laboratory. The first point is that, since the depth component is inserted by the NTSC generator, which inserts only the depth component and alters no other aspect of the signal, the original image portion of the signal may be written to the output VTR without first digitizing the image. This eliminates the visual degradation that would arise from digitizing the image and converting it back to analog form, so that the only degradation incurred is the generation-to-generation degradation inherent in copying the image, which may be minimized by employing analog VTRs of broadcast "component video" format, such as M-II or Betacam equipment. Of course, as is well known in the imaging industry, by employing all-digital recording equipment, whether computer-based or tape-based, there is no degradation in any generation-to-generation process.
The second point is that, since this is largely a frame-by-frame process, so-called "frame-accurate" VTRs or other recording devices are required for the depth-insertion process. The editor must be able to access each individual frame required, and to cause the processed frame to be written to the correct place on the output tape; only equipment designed to access every individual frame (for example, by SMPTE time code) is suitable for this purpose.
The third point is that the entire process may be placed under computer control, and may consequently be operated from a single computer console, which is considerably more convenient than operation from several separate sets of controls. If computer-controllable, broadcast-level component VTRs or other recording devices, whether analog or digital, are available, certain aspects of this depth-addition process may be semi-automated by employing the computer-VTR link for the time-consuming automatic rewinds and pre-rolls.
The fourth point is that the software may be endowed with certain aspects of what is commonly called "artificial intelligence" or "machine intelligence", to improve the quality of the depth addition at the micro level. For example, we have developed and refined in the laboratory techniques which lend greater realism to depth added to human faces by exploiting the topology of the human face: the fact that the nose protrudes further than the cheeks, that the cheeks slope gradually back toward the ears, and so on, each feature having its own depth signature. In processing the numerous common objects found in film or video (the human face is used here merely as an example), this helps reduce the amount of input demanded of the editor.
The fifth point is that the controlling software may be configured to operate in a semi-automatic fashion. That is, as long as the objects in a scene remain relatively stable, the controlling workstation processes successive frames automatically, without additional input from the editor, thereby helping to simplify and accelerate the process. Of course, if new objects enter the scene, or if the perspective changes irregularly, the process once again requires editorial input. We have developed, and are currently refining in the laboratory, techniques based on the field of artificial intelligence whereby changes in the depth of individual objects are computed automatically from changes in the scene's perspective and in the sizes of objects already known to the software.
The sixth point is that, where the input and output media are still or motion picture film, input VTR 72, output VTR 73 and video matrix switcher 74 may be replaced, respectively, by a high-resolution image scanner, a digital data switch and a high-resolution film printer. The remainder of the process is substantially as described above for the video-processing case. In this case, insertion of the depth signal by the NTSC generator is replaced by the photographic process illustrated in Figure 8.
The seventh point is that, when working in an all-digital recording environment, such as computer-based image storage, input VTR 72, output VTR 73 and video matrix switcher 74 may all be effectively replaced by the computer's mass storage devices. Such mass storage is typically magnetic disk, as on the computer-based editing workstations used in our laboratory, but may be any other form of digital mass storage. In such an all-digital environment, insertion of the depth signal with the NTSC generator may be dispensed with, the depth signal instead being added to a depth map composed of pixel-level elements in the computer's conventional image storage format.
Appendix A attached hereto is a copy of portions of the software listed, used under laboratory conditions to accomplish the enhancements described above with reference to Figures 9(a) and 9(b).
Figure 10 illustrates the application of the pixel-level depth display techniques derived in these developments to the three-dimensional display of printed images. Picture 32 is a conventional two-dimensional photograph or printed image. Substrate 33 of pixel-level microlenses (shown here enlarged for clarity) is placed over the two-dimensional image in such a way that each microlens has a different focal length, thereby presenting its pixel to the viewer's eye at a different apparent depth. In greatly enlarged cross-section 34 it may be seen that each microlens has a specific shape and specific optical properties, so as to present the appropriately perceived depth to the viewer from its particular image pixel. Although the microlenses employed in our laboratory to date have been as small as 1 mm in diameter, experiments with microlenses smaller than 1 mm indicate that lens arrays of this size are entirely feasible and will produce three-dimensional printed images of very high resolution.
In volume production, it is anticipated that the depth-signal generation techniques described herein will be used to produce an embossing master, from which large quantities of low-cost microlens arrays for a given image may be repeatedly embossed in printable or thermoplastic materials, in a manner similar to the embossing of the data-bearing surfaces of compact discs, or of the reflective holograms commonly used for the mass replication of credit cards. This technique promises to enable large-scale, low-cost three-dimensional printed images for use in magazines, newspapers and other printed media. Although microlens substrate 33 is depicted in a rectangular pattern, other patterns of microlenses, such as concentric circles, prove equally effective.
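For the printed-image case, each microlens of substrate 33 must be given the focal length which places its pixel at the depth recorded in the depth map. Reusing the thin-lens approximation from the sketch above (the geometry and names are again our assumptions rather than the patent's):

/* Focal length required for a microlens sitting a distance s above its
   printed pixel to form a virtual image at apparent depth z behind the
   page, from the thin-lens relation 1/f = 1/s - 1/z (consistent units). */
double microlens_focal(double s, double z)
{
    return 1.0 / (1.0 / s - 1.0 / z);
}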
It should be noted that, in a conventional NTSC video signal, the picture, or luminance, carrier occupies a video bandwidth significantly greater than that of the chrominance or depth subcarriers. The luminance component of an NTSC video image is of relatively high definition, and is often characterized as a picture drawn with a "fine pencil". The chrominance signal, by contrast, need carry comparatively little information to produce an acceptable colour component in the television image, and is often characterized as colour "splashed" with a "broad brush" over the high-resolution black-and-white image. The depth signal of the present invention is, in its limited information requirements, much closer in form to the colour signal than to the high-resolution picture carrier.
One of the key elements in managing a video signal is how to encode information into a signal for which that information did not exist when the signal was originally devised, and how to do so without disturbing or disabling the installed base of television receivers. Figure 11 shows the energy distribution of a conventional NTSC video signal, with picture, or luminance, carrier 36 and chrominance, or colour, subcarrier 37. All of the signals in the video spectrum are carriers of energy at discrete frequency intervals, represented here by discrete vertical lines; the remainder of the spectrum is empty and unused. As may be seen from Figure 11, the architects of the colour NTSC video signal succeeded in embedding a substantial amount of additional information (namely colour) into the existing signal structure, using the same idea of concentrating the signal energy at discrete frequency points and then interleaving those points between the energy frequency points of the existing picture carrier, so that the two do not overlap and do not interfere with each other.
In the same manner, the present invention encodes still further additional information, in the form of the required depth signal, into the existing NTSC video signal structure, using the same interleaving approach employed for the chrominance signal. Figure 12 illustrates this approach, again showing the same luminance carrier 36 and chrominance subcarrier 37 as in Figure 11, with depth subcarrier 38 added. For reference purposes, the chrominance subcarrier occupies approximately 1.5 MHz of bandwidth centred on 3.579 MHz, while the depth subcarrier occupies only approximately 0.4 MHz of bandwidth centred on 2.379 MHz. The chrominance and depth subcarriers, each interleaved with the luminance carrier, are thus sufficiently separated that they do not interfere with each other. Although the subcarrier frequencies described, and the bandwidths they occupy, work well, other arrangements are certainly possible. For example, in experiments conducted in the laboratory, we have successfully demonstrated a significant reduction of the approximately 0.4 MHz of bandwidth required by the depth subcarrier, by applying well-known compression techniques to the depth signal before its insertion into the NTSC signal, with decompression of the extracted signal at the playback end immediately before it is used to drive the depth-displaying imaging device. Similarly, analogous methods of adding the depth signal to the PAL and SECAM video formats have been tested in the laboratory, although the structural details and the frequencies involved differ owing to the different characteristics of those video signal structures. In an all-digital environment, such as computer-based image storage, numerous image storage formats exist, and the method of adding the bits used for depth-map storage likewise varies from format to format.
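As an illustration of the interleaving idea only (not of the actual modulator used), the depth samples can be amplitude-modulated onto a 2.379 MHz subcarrier and summed into the composite signal, mirroring the way chrominance rides at 3.579 MHz. A sketch in C, with the sampling rate and modulation depth as our assumptions:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define DEPTH_FC 2.379e6          /* depth subcarrier centre frequency, Hz */

/* Add an amplitude-modulated depth subcarrier to a composite video
   buffer of n samples taken at rate fs; depth[] holds one sample per
   composite sample, in [-1, 1]; m is the modulation depth. */
void insert_depth_subcarrier(double composite[], const double depth[],
                             long n, double fs, double m)
{
    long i;
    for (i = 0; i < n; i++) {
        double t = (double)i / fs;
        composite[i] += m * depth[i] * sin(2.0 * M_PI * DEPTH_FC * t);
    }
}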
Figure 13(a) illustrates, in functional form and using terminology common in the television industry, the circuitry in a conventional television receiver which typically controls the vertical deflection of the scanning electron beam in the cathode ray tube. While certain details may vary with make and model, the key elements are the same.
In this figure, representing the conventional design of a television receiver, the object is to produce a sweep of the scanning electron beam which is continuous and synchronized with the incoming video signal. The signal is acquired by tuner 49 and amplified by picture intermediate-frequency amplifier 50, then passed to video detector 51 to extract the video signal. The output of video detector 51 is amplified by detector output amplifier 52, amplified further by first video amplifier 53, and then passed through delay line 54.
In a conventional video signal there are three principal components: luminance (the brightness, or "black-and-white", portion of the signal); chrominance (the colour portion); and the timing portion of the signal, which ensures that every event occurs at its correct position in the scan. Of these components, the synchronization signals are separated from the amplified signal by sync separator 55, after which the vertical synchronization signal is converted in vertical sync converter 56 and supplied to vertical sweep generator 64. The output of this sweep generator is supplied to the electromagnetic coils of the cathode ray tube known as deflection yoke 65. It is this deflection yoke which causes the scanning electron beam to traverse the screen of the cathode ray tube along a smooth, straight path.
As described earlier, in a three-dimensional television picture tube, minute variations of this straight electron-beam path, acting through the pixel-level optics, produce the three-dimensional effect. Figure 13(b) illustrates, in the same functional form, the additional circuitry which must be added to a conventional television receiver to extract the depth component from a suitably encoded video signal and to convert this depth component of the signal into minute variations in the path of the scanning electron beam. In this figure, the functions outside the dashed outline are those of the conventional television receiver shown in Figure 13(a), while the functions within the dashed outline represent the additions required to extract the depth component and produce the three-dimensional effect.
As shown in Figure 12, the manner in which the depth signal is encoded into the NTSC video signal is essentially the same as that of the chrominance, or colour, signal, but simpler and at a different frequency. Because the encoding process is the same, the depth-bearing component of the signal may be amplified to a level sufficient for extraction by using the same amplifier employed in a conventional television set to amplify the colour signal prior to its extraction, shown here as first colour intermediate-frequency amplifier 57.
This amplified depth component of the signal is extracted from the video signal in the same fashion employed to extract the encoded colour signal from that same signal. In this process, the television receiver generates a reference, or "yardstick", signal at the frequency at which the depth component is expected. This signal is compared with the signal actually present at that frequency, and any difference from the "yardstick" is taken to be the depth signal. The reference signal is produced by depth gating pulse generator 59, and is shaped to the required level by depth gating pulse limiter 58. The resulting clean reference signal is kept synchronized with the incoming encoded depth signal by the same sync separator 55 employed in a conventional television receiver to synchronize the horizontal sweep of the electron beam.
When the amplified encoded depth signal from first colour intermediate-frequency amplifier 57 is brought together for comparison with the reference signal from depth gating pulse limiter 58, the result is amplified by gated depth sync amplifier 63. This amplified signal will contain both colour and depth components, and so only those signals around the 2.379 MHz depth-signal encoding frequency are extracted, by extractor 62. This is the extracted depth signal, which is then amplified to a useful level by X'TAL output amplifier 61.
Having extracted the depth component from the composite video signal, the circuitry must now adjust the smooth horizontal sweep of the electron beam across the television screen so that depth may be displayed in the resulting image. To adjust this horizontal sweep, the extracted, amplified depth signal is added, in depth adder 60, to the standard vertical synchronization signal periodically generated in a conventional television set, as described with reference to Figure 13(a) above. The adjusted vertical synchronization signal output by depth adder 60 is used to generate the vertical sweep of the electron beam in vertical sweep generator 64, which, as in a conventional television receiver, drives deflection yoke 65 controlling the movement of the scanning electron beam. The end result is a scanning electron beam minutely deflected above or below its normal centre line which, by finely varying the point of incidence of light upon the pixel-level optical elements as described above, produces the three-dimensional effect in the video image.
Figure 14 shows a preferred embodiment of the electronic circuitry which implements those additional functions shown within the dashed outline of Figure 13(b).
Figure 15 illustrates an alternative arrangement for varying the position of the light beam incident upon a different form of pixel-level optical structure. In this alternative, pixel-level optical structure 39 possesses an appropriate optical transfer function whose focal length increases radially outward from the optical axis of element 39 and is symmetrical about its optical axis 43. Light collimated into cylindrical form is incident upon this optical structure, the radius of the collimated cylinder varying from zero up to the effective working radius of the optical structure. Three such possible cylinders of collimated light, 40, 41 and 42, are illustrated, producing, respectively, the annular bands of incident light 40a, 41a and 42a seen from the front; according to the specific optical transfer function of this device, each will produce a resulting pixel of light at a different apparent distance from the viewer.
Figure 16 clearly shows, in compressed form, yet another alternative arrangement for varying the apparent visual distance from the viewer of the light emerging from an individual pixel. In this illustration, viewer's eye 4 is positioned at a distance in front of the pixel-level optical elements. Collimated beams of light are incident at different points upon obliquely mounted plane mirror 76, three such beams being illustrated, namely beams 5, 6 and 7. Mirror 76 reflects the incident beams onto an oblique portion of concave mirror 77 which, by virtue of the imaging properties of a concave mirror, presents beams 5a, 6a and 7a at different apparent visual distances from the viewer, corresponding to the aforementioned labelled positions of the input beams. The concave mirror may have any of several conic-section mathematical surfaces; concave mirrors of hyperbolic, parabolic and spherical figure have been employed successfully in our laboratory. In this embodiment, experimental results indicate that both the plane and the curved mirrors should be first-surface mirrors.
Figure 17 illustrates how, in a preferred embodiment of the arrangement shown in Figure 16, the pixel-level combination of plane mirror 76 and concave mirror 77 is mounted on the surface of a cathode ray tube used as the source of illumination. In this figure, concave mirror 77 of one pixel is combined with the plane mirror of the adjacent pixel (directly above) to form combined unit 78, which is mounted on glass front surface 8 of the cathode ray tube, behind which lies conventional phosphor layer 9, which emits light when struck by the projected, collimated electron beam; the electron beam is shown in this figure in different positions as beams 5b, 6b and 7b. For each of these three illustrated positions, and for any other beam position within the spatial limits of this pixel-level optical element, the point beam will be incident upon a single point of the assembly, and will consequently be presented to the viewer at a corresponding single point. As with the refractive embodiments of this invention, light sources other than cathode ray tubes may suitably be employed.
//      3D0105.cpp                                 Appendix A
//      AGENTS OF CHANGE INC.
//      Advanced Technology 3-D Retrofitting Controller Software
//      Employing Touch Screen Graphical User Interface
//      V.01.05
//      Includes the following control elements:
#include<dos.h>
#include<stdio.h>
#include<conio.h>
#include<graphics.h>
#include<stdlib.h>
#include<string.h>
#include<iostream.h>
#define MOUSE           0x33
#define BUT1PRESSED     1
#define BUT2PRESSED     2
#define TRUE            1
#define FALSE        0
void ActivMouse()
{
        //      activate mouse.
        _AX=32;
        geninterrupt(MOUSE);
}

int ResetMouse()
{
        //      mouse reset.
        _AX=0;
        geninterrupt(MOUSE);
        return(_AX);
}

void ShowMouse()
{
        //      turn on mouse cursor.
        _AX=1;
        geninterrupt(MOUSE);
}

void HideMouse()
{
        //      turn off mouse cursor.
        _AX=2;
        geninterrupt(MOUSE);
}

void ReadMouse(int*v,int*h,int*but)
{
        int temp;
        _AX=3;
        geninterrupt(MOUSE);
        //      which button pressed:1=left,2=right,3=both.
        temp=_BX;
        *but=temp;
        //      horizontal coordinates.
        *h=_CX;
        //      vertical coordinates.
        *v=_DX;
}
class Button
//      this class creates screen buttons capable of being displayed raised
//      or depressed. Labels displayed on the buttons change colour when
//      the button is depressed.
{
public:
        int button_centrex,button_centrey,button_width,button_height;
        int left,top,right,bottom,text_size,text_fields,lfont;
        char button_text1[40],button_text2[40];
        unsigned upattern;
//      button_centrex,button_centrey is the centre of the button placement.
//      button_width and button_height are the dimensions of the button in pixels.
//      button_text is the label on the button.
//      text_size is the text size for settextstyle()
        int mouseX,mouseY,mouseButton;
        int oldMouseX,oldMouseY;
        int button1Down,button2Down;
        int pressed;

        Button(int x,int y,int width,int height,int tfields,char*btext1,char*btext2,int tsize,int f)
        //      this constructor initializes the button variables.
                {
                button_centrex=x;
                button_centrey=y;
                button_width=width;
                button_height=height;
                strcpy(button_text1,btext1);
                strcpy(button_text2,btext2);
                text_size=tsize;
                text_fields=tfields;
                lfont=f;
                left=button_centrex-button_width/2;
                top=button_centrey-button_height/2;
                right=button_centrex+button_width/2;
                bottom=button_centrey+button_height/2;
                oldMouseX=0;oldMouseY=0;
                button1Down=FALSE;
                button2Down=FALSE;
                pressed=FALSE;
                }

        void up()
        //      draws a raised button and prints the required label on it.
                {
                setcolor(5);
                setlinestyle(SOLID_LINE,upattern,NORM_WIDTH);
                setfillstyle(SOLID_FILL,LIGHTGRAY);
                bar3d(left,top,right,bottom,0,0);
                setcolor(WHITE);
                setlinestyle(SOLID_LINE,upattern,THICK_WIDTH);
                line(left+2,bottom-1,left+2,top+1);
                line(left+1,top+2,right-1,top+2);
                setcolor(DARKGRAY);
                setlinestyle(SOLID_LINE,upattern,NORM_WIDTH);
                line(left+4,bottom-3,right-1,bottom-3);
                line(left+3,bottom-2,right-1,bottom-2);
                line(left+2,bottom-1,right-1,bottom-1);
                line(right-3,bottom-1,right-3,top+4);
                line(right-2,bottom-1,right-2,top+3);
                line(right-1,bottom-1,right-1,top+2);
                //      put the required text in the button
                setcolor(5);
                settextjustify(CENTER_TEXT,CENTER_TEXT);
                settextstyle(lfont,HORIZ_DIR,text_size);
//              cout<<button_text2<<endl;
                if(text_fields==1)
                        outtextxy(button_centrex,button_centrey-4*(float(button_height)/50),button_text1);
                else
                        {
                        outtextxy(button_centrex,button_centrey-13*(float(button_height)/50),button_text1);
                        outtextxy(button_centrex,button_centrey+9*(float(button_height)/50),button_text2);
                        }
                pressed=FALSE;
                }

        void down()
        //      draws a depressed button and prints the required label on it.
                {
                setcolor(5);
                setlinestyle(SOLID_LINE,upattern,NORM_WIDTH);
                setfillstyle(SOLID_FILL,DARKGRAY);
                bar3d(left,top,right,bottom,0,0);
                setcolor(5);
                setlinestyle(SOLID_LINE,upattern,THICK_WIDTH);
                line(left+2,bottom-1,left+2,top+1);
                line(left+1,top+2,right-1,top+2);
                setcolor(LIGHTGRAY);
                setlinestyle(SOLID_LINE,upattern,NORM_WIDTH);
                line(left+4,bottom-3,right-1,bottom-3);
                line(left+3,bottom-2,right-1,bottom-2);
                line(left+2,bottom-1,right-1,bottom-1);
                line(right-3,bottom-1,right-3,top+4);
                line(right-2,bottom-1,right-2,top+3);
                line(right-1,bottom-1,right-1,top+2);
                //      put the required text in the button.
                setcolor(WHITE);
                settextjustify(CENTER_TEXT,CENTER_TEXT);
                settextstyle(lfont,HORIZ_DIR,text_size);
//              cout<<button_text2<<endl;
                if(text_fields==1)
                        outtextxy(button_centrex,button_centrey-4*(float(button_height)/50.),button_text1);
                else
                        {
                        outtextxy(button_centrex,button_centrey-13*(float(button_height)/50.),button_text1);
                        outtextxy(button_centrex,button_centrey+9*(float(button_height)/50.),button_text2);
                        }
                pressed=TRUE;
                }
        int touched()
        //      determines whether a button has been touched, and returns
        //      TRUE for yes, and FALSE for no. Touching is emulated
        //      by a mouse click.
                {
                int temp;
                _AX=3;
                geninterrupt(MOUSE);
        //      which button pressed:1=left,2=right,3=both.
                temp=_BX;
                mouseButton=temp;
                //      horizontal coordinates.
                mouseX=_CX;
                //      vertical coordinates.
                mouseY=_DX;
                if(mouseButton&BUT1PRESSED)
                {
                        button1Down=TRUE;
                        return 0;
                }
                else if(button1Down)
                //      if button 1 was down and is now up, it was clicked!
                {
                //      check whether the mouse is positioned in the button.
                        if((((mouseX-left)*(mouseX-right))<0)&&(((mouseY-top)*(mouseY-bottom))<0))
                //      if this evaluates as TRUE then do the following.
                        {
                                button1Down=FALSE;
                                return 1;
                        }
                        button1Down=FALSE;
                        return 0;
                }
                return 0;
                }
};
//      XXXXXXXXXXXXXXXXXXX M A I N XXXXXXXXXXXXXXXXXXXXX
void main()
{
//      this is the system main.
int Page_1_flag,Page_2_flag,Page_3_flag,Page_4_flag,Page_5_flag;
int Page_6_flag,Page_7_flag,Page_8_flag,Page_9_flag,Page_10_flag;
char which;
//      initialize the graphics system.
int gdriver=DETECT,gmode,errorcode;
initgraph(&gdriver,&gmode,"c:\\borlandc\\bgi");
//      read the result of initialization.
errorcode=graphresult();
if(errorcode!=grOk){    //an error occurred.
        printf("Graphics error:%s\n",grapherrormsg(errorcode));
        printf("Press any key to halt:");
        getch();
        exit(1);
  }
//if(!ResetMouse())
//{
//      printf("No Mouse Driver");
//}
//      set the current colours and line style.
//      set BLACK(normally palette 0)to palette 5(normally MAGENTA)
//      to correct a colour setting problem innate to C++.
setpalette(5,BLACK);
//      activate the mouse to emulate a touch screen.
//ActivMouse();
//ShowMouse();
//      construct and initialize buttons.
Button logo(getmaxx()/2,100,260,130,1,"(AOCI LOGO)","",4,1);
Button auto_controll(200,400,160,50,2,"AUTO","CONTROL",2,1);
Button manual_controll(400,400,160,50,2,"MANUAL","CONTROL",2,1);
Button mute1(568,440,110,50,1,"MUTE","",4,1);
//Button proceed(getmaxx()/2,440,160,50,1,"PROCEED","",4,1);
Button c_vision(getmaxx()/2,350,450,100,1,"3-D RETRO","",8,1);
Button main_menu(245,20,460,30,1,"MAIN MENU","",2,1);
Button time_date2(245,460,460,30,1,"Date:Time:Elapsed:","",2,1);
Button video_screen(245,217,460,345,1,"","",4,1);
Button video_message1(245,217,160,50,2,"Video Not","Detected",2,1);
Button auto_onoff2(555,20,130,30,2,"AUTO CONTROL","ON/OFF",5,2);
Button manual_control2(555,60,130,30,1,"MANUAL CONTROL","",5,2);
Button name_tags2(555,100,130,30,1,"OBJECT TAGS","",5,2);
Button voice_tags2(555,140,130,30,2,"TRIANGULATE/","DIST.CALC.",5,2);
Button custom_session2(555,180,130,30,1,"CUSTOM SESSION","",5,2);
Button memory_framing2(555,220,130,30,1,"MEMORY FRAMING","",5,2);
Button remote_commands2(555,260,130,30,2,"REMOTE ENDS","COMMANDS",5,2);
Button av_options2(555,300,130,30,2,"AUDIO/VISUAL","OPTIONS",5,2);
Button codec_control2(555,340,130,30,1,"CODEC CONTROL","",5,2);
Button mcu_control2(555,380,130,30,1,"MCU CONTROL","",5,2);
Button dial_connects2(555,420,130,30,2,"DIAL-UP","CONNECTIONS",5,2);
Button mute2(555,460,130,30,1,"MUTE","",5,2);
Button ind_id3(245,20,460,30,1,"PERSONAL IDENTIFICATION","",2,1);
Button frame_cam3(555,20,130,30,1,"FRAME CAMERA","",5,2);
Button cam_preset3(555,60,130,30,1,"CAMERA PRESET","",5,2);
Button autofollow3(555,180,130,30,1,"AUTOFOLLOWING","",5,2);
Button return3(555,420,130,30,2,"RETURN TO","LAST MENU",5,2);
Button touch_face3(130,418,230,35,2,"DEFINE AN OBJECT","AND THEN TOUCH:",5,2);
Button type_id3(308,418,105,35,2,"ACQUIRE","OBJECT",5,2);
Button write_id3(423,418,105,35,2,"LOSE","OBJECT",5,2);
Button cancel3(555,340,130,30,1,"CANCEL CHOICE","",5,2);
Button keyboard(245,375,450,200,1,"(Keyboard)","",2,1);
Button writing_space(245,425,450,100,1,"(Writing Space)","",2,1);
Button typing_done(555,260,130,30,2,"TYPE AND THEN","PRESS HERE",5,2);
Button writing_done(555,260,130,30,2,"WRITE AND THEN","PRESS HERE",5,2);
Button dial_connects6(getmaxx()/2,20,604,30,1,"DIAL-UP CONNECTIONS","",2,1);
Button directory6(getmaxx()/2,60,300,30,1,"DIRECTORY","",2,1);
Button manual_dialing6(57,420,84,30,2,"MANUAL","DIALING",5,2);
Button line_16(151,420,84,30,1,"LINE 1","",5,2);
Button line_26(245,420,84,30,1,"LINE 2","",5,2);
Button dial_tone6(339,420,84,30,1,"DIAL TONE","",5,2);
Button hang_up6(433,420,84,30,1,"HANG UP","",5,2);
Button scroll_up6(104,260,178,30,1,"SCROLL DIRECTORY UP","",5,2);
Button scroll_down6(292,260,178,30,1,"SCROLL DIRECTORY DOWN","",5,2);
Button dial_this6(198,300,84,30,2,"DIAL THIS","NUMBER",5,2);
Button add_entry6(104,340,178,30,1,"ADD AN ENTRY","",5,2);
Button delete_entry6(292,340,178,30,1,"DELETE AN ENTRY","",5,2);
Button keypad6(505,320,230,151,1,"(Keypad)","",2,1);
Page_1:
//      this is the opening screen.
//      set the current fill style and draw the background.
        setfillstyle(INTERLEAVE_FILL,DARKGRAY);
        bar3d(0,0,getmaxx(),getmaxy(),0,0);
        logo.up();
        c_vision.up();
//      proceed.up();
//      auto_controll.up();
//      manual_controll.up();
        mute1.up();
        settextstyle(TRIPLEX_FONT,HORIZ_DIR,2);
        outtextxy(getmaxx()/2,190,"(C)1993-1995 AGENTS OF CHANGE INC.");
        settextstyle(TRIPLEX_FONT,HORIZ_DIR,4);
        outtextxy(getmaxx()/2,235,"WELCOME");
        outtextxy(getmaxx()/2,265,"TO");
        Page_1_flag=TRUE;
while(Page_1_flag)
{
//      temporary keypad substitute for the touch screen.
which=getch();
        if(which=='1')
        {
                if(!c_vision.pressed)
                {
                        c_vision.down();
                        goto Page_2;
                }
                else c_vision.up();
        }
        if(which=='2')
        {
                if(!mute1.pressed)mute1.down();
                else mute1.up();
        }
if(which=='S')Page_1_flag=FALSE;
}
goto pgm_terminate;
Page_2:
//      this is the main menu.
        setfillstyle(INTERLEAVE_FILL,DARKGRAY);
        bar3d(0,0,getmaxx(),getmaxy(),0,0);
        main_menu.up();
        video_screen.up();
        video_message1.down();
        time_date2.up();
        auto_onoff2.up();
        manual_control2.up();
        name_tags2.up();
        voice_tags2.up();
        custom_session2.up();
        memory_framing2.up();
        remote_commands2.up();
        av_options2.up();
        codec_control2.up();
        mcu_control2.up();
        dial_connects2.up();
        mute2.up();
        Page_2_flag=TRUE;
while(Page_2_flag)
{
//      temporary keypad substitute for the touch screen.
which=getch();
        if(which=='1')
        {
                if(!auto_onoff2.pressed)
                {
                        auto_onoff2.down();
                }
                else auto_onoff2.up();
        }
        if(which=='2')
        {
                if(!manual_control2.pressed)manual_control2.down();
                else manual_control2.up();
        }
        if(which=='3')
        {
                if(!name_tags2.pressed)
                {
                        name_tags2.down();
                        goto Page_3;
                }
                else name_tags2.up();
        }
        if(which=='4')
        {
                if(!voice_tags2.pressed)
                {
                        voice_tags2.down();
                        goto Page_3;
                }
                else voice_tags2.up();
        }
        if(which=='5')
        {
                if(!custom_session2.pressed)custom_session2.down();
                else custom_session2.up();
        }
        if(which=='6')
        {
                if(!memory_framing2.pressed)
                {
                        memory_framing2.down();
                        goto Page_3;
                }
                else memory_framing2.up();
        }
        if(which=='7')
        {
                if(!remote_commands2.pressed)remote_commands2.down();
                else remote_commands2.up();
        }
        if(which=='8')
        {
                if(!av_options2.pressed)av_options2.down();
                else av_options2.up();
        }
        if(which=='9')
        {
                if(!codec_control2.pressed)codec_control2.down();
                else codec_control2.up();
        }
        if(which=='a')
        {
                if(!mcu_control2.pressed)mcu_control2.down();
                else mcu_control2.up();
        }
        if(which=='b')
        {
                if(!dial_connects2.pressed)
                {
                        dial_connects2.down();
                        goto Page_6;
                }
                else dial_connects2.up();
        }
        if(which=='c')
        {
                if(!mute2.pressed)mute2.down();
                else mute2.up();
        }
if(which=='S')Page_2_flag=FALSE;
}
goto pgm_terminate;
Page_3:
//      this is the first "individual identification" menu,
//      and includes the step into nametags.
        setfillstyle(INTERLEAVE_FILL,DARKGRAY);
        bar3d(0,0,getmaxx(),getmaxy(),0,0);
        ind_id3.up();
        video_screen.up();
        video_message1.down();
        time_date2.up();
        frame_cam3.up();
        cam_preset3.up();
        name_tags2.up();
        voice_tags2.up();
        autofollow3.up();
        return3.up();
        mute2.up();
        Page_3_flag=TRUE;
while(Page_3_flag)
{
//      temporary keypad substitute for the touch screen.
which=getch();
        if(which=='1')
        {
                if(!frame_cam3.pressed)
                {
                        frame_cam3.down();
                }
                else frame_cam3.up();
        }
        if(which=='2')
        {
                if(!cam_preset3.pressed)cam_preset3.down();
                else cam_preset3.up();
        }
        if(which=='3')
        {
                if(!name_tags2.pressed)
                {
                        name_tags2.down();
                        touch_face3.up();
                        type_id3.up();
                        write_id3.up();
                        cancel3.up();
                type_or_write:
                        which=getch();
                        //the cancel button has been pressed.
                        if(which=='9')goto Page_3;
                        //type nametags.
                        if(which=='x')goto Page_4;
                        //write nametags.
                        if(which=='y')goto Page_5;
                        goto type_or_write;
                }
                else name_tags2.up();
        }
        if(which=='4')
        {
                if(!voice_tags2.pressed)voice_tags2.down();
//              goto Page_4;
                else voice_tags2.up();
        }
        if(which=='5')
        {
                if(!autofollow3.pressed)autofollow3.down();
//              goto Page_4;
                else autofollow3.up();
        }
        if(which=='b')
        {
                if(!return3.pressed)
                {
                        return3.down();
                        goto Page_2;
                }
                else return3.up();
        }
        if(which=='c')
        {
                if(!mute2.pressed)mute2.down();
                else mute2.up();
        }
if(which=='S')Page_3_flag=FALSE;
}
goto pgm_terminate;
Page_4:
//      this is the nametags typing page.
        setfillstyle(INTERLEAVE_FILL,DARKGRAY);
        bar3d(0,0,getmaxx(),getmaxy(),0,0);
        ind_id3.up();
        video_screen.up();
        video_message1.down();
        frame_cam3.up();
        cam_preset3.up();
        name_tags2.down();
        voice_tags2.up();
        autofollow3.up();
        return3.up();
        mute2.up();
        keyboard.up();
        typing_done.up();
        Page_4_flag=TRUE;
while(Page_4_flag)
{
//      temporary keypad substitute for the touch screen.
which=getch();
        if(which=='7')
        {
                if(!typing_done.pressed)
                {
                        typing_done.down();
                        goto Page_3;
                }
                else typing_done.up();
        }
        if(which=='b')
        {
                if(!return3.pressed)
                {
                        return3.down();
                        goto Page_3;
                }
                else return3.up();
        }
        if(which=='c')
        {
                if(!mute2.pressed)mute2.down();
                else mute2.up();
        }
if(which=='S')Page_4_flag=FALSE;
}
goto pgm_terminate;
Page_5:
//      this is the nametags writing page.
        setfillstyle(INTERLEAVE_FILL,DARKGRAY);
        bar3d(0,0,getmaxx(),getmaxy(),0,0);
        ind_id3.up();
        video_screen.up();
        video_message1.down();
        frame_cam3.up();
        cam_preset3.up();
        name_tags2.down();
        voice_tags2.up();
        autofollow3.up();
        return3.up();
        mute2.up();
        writing_space.up();
        writing_done.up();
        Page_5_flag=TRUE;
while(Page_5_flag)
{
//      temporary keypad substitute for the touch screen.
which=getch();
        if(which=='7')
        {
                if(!writing_done.pressed)
                {
                        writing_done.down();
                        goto Page_3;
                }
                else writing_done.up();
        }
        if(which=='b')
        {
                if(!return3.pressed)
                {
                        return3.down();
                        goto Page_3;
                }
                else return3.up();
        }
        if(which=='c')
        {
                if(!mute2.pressed)mute2.down();
                else mute2.up();
        }
if(which=='S')Page_5_flag=FALSE;
}
goto pgm_terminate;
Page_6:
//      this is the connections dialing and directory maintenance page.
        setfillstyle(INTERLEAVE_FILL,DARKGRAY);
        bar3d(0,0,getmaxx(),getmaxy(),0,0);
        dial_connects6.up();
        directory6.up();
        keypad6.up();
        scroll_up6.up();
        scroll_down6.up();
        dial_this6.up();
        add_entry6.up();
        delete_entry6.up();
        manual_dialing6.up();
        line_16.up();
        line_26.up();
        dial_tone6.up();
        hang_up6.up();
        return3.up();
        mute2.up();
        Page_6_flag=TRUE;
while(Page_6_flag)
{
//      temporary keypad substitute for the touch screen.
which=getch();
        if(which=='b')
        {
                if(!return3.pressed)
                {
                        return3.down();
                        goto Page_2;
                }
                else return3.up();
        }
        if(which=='c')
        {
                if(!mute2.pressed)mute2.down();
                else mute2.up();
        }
if(which=='S')Page_6_flag=FALSE;
}
goto pgm_terminate;

pgm_terminate:
getch();
//      this is the closing sequence.
closegraph();
}
/****************************************/
/*            ARPROCES.H                */
/*    Image Processing Header File      */
/*     Area Processing Functions        */
/*       written in Turbo C 2.0         */
/****************************************/
/* Area Process Function Prototypes */
CompletionCode Convolution(BYTE huge *InImage, unsigned Col, unsigned Row,
                           unsigned Width, unsigned Height,
                           short *Kernel, unsigned KernelCols,
                           unsigned KernelRows, unsigned Scale,
                           unsigned Absolute, BYTE huge **OutImageBufPtr);
CompletionCode RealConvolution(BYTE huge *InImage,
                               unsigned Col, unsigned Row,
                               unsigned Width, unsigned Height,
                               double *Kernel, unsigned KernelCols,
                               unsigned KernelRows, unsigned Scale,
                               unsigned Absolute, BYTE huge **OutImageBufPtr);
CompletionCode MedianFilter(BYTE huge *InImage, unsigned Col, unsigned Row,
                            unsigned Width, unsigned Height,
                            unsigned NeighborhoodCols, unsigned NeighborhoodRows,
                            BYTE huge **OutImageBufPtr);
CompletionCode SobelEdgeDet(BYTE huge *InImage,
                            unsigned Col, unsigned Row,
                            unsigned Width, unsigned Height,
                            unsigned Threshold, unsigned Overlay,
                            BYTE huge **OutImageBufPtr);
/****************************************/
/*            ARPROCES.C                */
/*      Image Processing Code           */
/*     Area Processing Functions        */
/*      written in Turbo C 2.0          */
/****************************************/
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <dos.h>
#include <alloc.h>
#include <process.h>
#include <math.h>
#include <graphics.h>
#include "misc.h"
#include "pcx.h"
#include "vga.h"
#include "imagesup.h"
#include "arprocess.h"
/*
Integer Convolution Function
*/
CompletionCode Convolution(BYTE huge *InImage, unsigned Col, unsigned Row,
                           unsigned Width, unsigned Height,
                           short *Kernel, unsigned KernelCols,
                           unsigned KernelRows, unsigned Scale,
                           unsigned Absolute, BYTE huge **OutImageBufPtr)
{
  register unsigned ColExtent, RowExtent;
  register unsigned ImageCol, ImageRow, KernCol, KernRow;
  unsigned ColOffset, RowOffset, TempCol, TempRow;
  BYTE huge *OutputImageBuffer;
  long Sum;
  short *KernelPtr;
  if(ParameterCheckOK(Col,Row,Col+Width,Row+Height,"Convolution"))
  {
  /* Image must be at least the same size as the kernel */
  if(Width >= KernelCols && Height >= KernelRows)
  {
       /* allocate far memory buffer for output image */
       OutputImageBuffer=(BYTE huge *)
                           farcalloc(RASTERSIZE,(unsigned long)sizeof(BYTE));
       if(OutputImageBuffer==NULL)
       {
         restorecrtmode();
         printf("Error Not enough memory for convolution output buffer\n");
         return(ENoMemory);
       }
       /* Store address of output image buffer */
       *OutImageBufPtr=OutputImageBuffer;
       /*
       Clearing the output buffer to white will show the
       border areas not touched by the convolution. It also
       provides a nice white frame for the output image.
       */
       ClearImageArea(OutputImageBuffer,MINCOLNUM,MINROWNUM,
                       MAXCOLS,MAXROWS,WHITE);
       ColOffset=KernelCols/2;
       RowOffset=KernelRows/2;
       /* Compensate for edge effects */
       Col+=ColOffset;
       Row+=RowOffset;
       Width-=(KernelCols-1);
       Height-=(KernelRows-1);
       /* Calculate new range of pixels to act upon */
       ColExtent=Col+Width;
       RowExtent=Row+Height;
       for(ImageRow=Row;ImageRow<RowExtent;ImageRow++)
       {
         TempRow=ImageRow-RowOffset;
         for(ImageCol=Col;ImageCol<ColExtent;ImageCol++)
         {
            TempCol=ImageCol-ColOffset;
            Sum=0L;
            KernelPtr=Kernel;
            for(KernCol=0;KernCol<KernelCols;KernCol++)
                 for(KernRow=0;KernRow<KernelRows;KernRow++)
                   Sum+=(GetPixelFromImage(InImage,
                         TempCol+KernCol,TempRow+KernRow)*
                         (*KernelPtr++));
            /* If absolute value is requested */
            if(Absolute)
                 Sum=labs(Sum);
            /* Summation performed. Scale and range Sum */
            Sum>>=(long)Scale;
            Sum=(Sum<MINSAMPLEVAL)?MINSAMPLEVAL:Sum;
            Sum=(Sum>MAXSAMPLEVAL)?MAXSAMPLEVAL:Sum;
            PutPixelInImage(OutputImageBuffer,ImageCol,ImageRow,(BYTE)Sum);
         }
       }
  }
  else
       return(EKernelSize);
  }
  return(NoError);
}
/*
Real Number Convolution Function. This convolution function is
only used when the kernel entries are floating point numbers
instead of integers. Because of the floating point operations
involved, this function is substantially slower than the already
slow integer version above.
*/
CompletionCode RealConvolution(BYTE huge *InImage,
                             unsigned Col,unsigned Row,
                             unsigned Width,unsigned Height,
                             double *Kernel,unsigned KernelCols,
                             unsigned KernelRows,unsigned Scale,
                             unsigned Absolute,BYTE huge **OutImageBufPtr)
{
  register unsigned ColExtent,RowExtent;
  register unsigned ImageCol,ImageRow,KernCol,KernRow;
  unsigned ColOffset,RowOffset,TempCol,TempRow;
  BYTE huge *OutputImageBuffer;
  double Sum;
  double *KernelPtr;
  if(ParameterCheckOK(Col,Row,Col+Width,Row+Height,"Convolution"))
  {
    /*Image must be at least the same size as the kernel*/
    if(Width>=KernelCols && Height>=KernelRows)
    {
      /*allocate far memory buffer for output image*/
      OutputImageBuffer=(BYTE huge *)
                         farcalloc(RASTERSIZE,(unsigned long)sizeof(BYTE));
      if(OutputImageBuffer==NULL)
      {
        restorecrtmode();
        printf("Error: Not enough memory for convolution output buffer\n");
        return(ENoMemory);
      }
      /*Store address of output image buffer*/
      *OutImageBufPtr=OutputImageBuffer;
      /*
      Clearing the output buffer to white will show the
      border areas not touched by the convolution. It also
      provides a nice white frame for the output image.
      */
      ClearImageArea(OutputImageBuffer,MINCOLNUM,MINROWNUM,
                     MAXCOLS,MAXROWS,WHITE);
      ColOffset=KernelCols/2;
      RowOffset=KernelRows/2;
      /*Compensate for edge effects*/
      Col+=ColOffset;
      Row+=RowOffset;
      Width-=(KernelCols-1);
      Height-=(KernelRows-1);
      /*Calculate new range of pixels to act upon*/
      ColExtent=Col+Width;
      RowExtent=Row+Height;
      for(ImageRow=Row;ImageRow<RowExtent;ImageRow++)
      {
        TempRow=ImageRow-RowOffset;
        for(ImageCol=Col;ImageCol<ColExtent;ImageCol++)
        {
          TempCol=ImageCol-ColOffset;
          Sum=0.0;
          KernelPtr=Kernel;
          for(KernCol=0;KernCol<KernelCols;KernCol++)
            for(KernRow=0;KernRow<KernelRows;KernRow++)
              Sum+=(GetPixelFromImage(InImage,
                    TempCol+KernCol,TempRow+KernRow)*
                    (*KernelPtr++));
          /*If absolute value is requested*/
          if(Absolute)
            Sum=fabs(Sum);
          /*Summation performed. Scale and range Sum*/
          Sum/=(double)(1<<Scale);
          Sum=(Sum<MINSAMPLEVAL)?MINSAMPLEVAL:Sum;
          Sum=(Sum>MAXSAMPLEVAL)?MAXSAMPLEVAL:Sum;
          PutPixelInImage(OutputImageBuffer,ImageCol,ImageRow,(BYTE)Sum);
        }
      }
    }
    else
      return(EKernelSize);
  }
  return(NoError);
}
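/*
Illustrative usage sketch, not part of the original listing: RealConvolution
with a fractional 3x3 kernel whose entries already sum to 1.0, so Scale is 0.
The kernel values and the function name SoftenWholeImage are hypothetical.
*/
static double Soften3x3[]={0.0625,0.125,0.0625,
                           0.125, 0.25, 0.125,
                           0.0625,0.125,0.0625};
CompletionCode SoftenWholeImage(BYTE huge *TheImage)
{
  BYTE huge *OutImage;
  CompletionCode RetCode;
  RetCode=RealConvolution(TheImage,MINCOLNUM,MINROWNUM,MAXCOLS,MAXROWS,
                          Soften3x3,3,3,0,FALSE,&OutImage);
  if(RetCode==NoError)
  {
    CopyImage(OutImage,TheImage);
    farfree((BYTE far *)OutImage);
  }
  return(RetCode);
}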
/*
Byte compare for use with the qsort library function call
in the Median filter function.
*/
int ByteCompare(BYTE *Entry1,BYTE *Entry2)
{
  if(*Entry1<*Entry2)
    return(-1);
  else if(*Entry1>*Entry2)
    return(1);
  else
    return(0);
}
CompletionCode MedianFilter(BYTE huge *InImage,unsigned Col,unsigned Row,
                          unsigned Width,unsigned Height,
                          unsigned NeighborhoodCols,unsigned NeighborhoodRows,
                          BYTE huge **OutImageBufPtr)
{
  register unsigned ColExtent,RowExtent;
  register unsigned ImageCol,ImageRow,NeighborCol,NeighborRow;
  unsigned ColOffset,RowOffset,TempCol,TempRow,PixelIndex;
  unsigned TotalPixels,MedianIndex;
  BYTE huge *OutputImageBuffer;
  BYTE *PixelValues;
  if(ParameterCheckOK(Col,Row,Col+Width,Row+Height,"Median Filter"))
  {
    /*Image must be at least the same size as the neighborhood*/
    if(Width>=NeighborhoodCols && Height>=NeighborhoodRows)
    {
      /*allocate far memory buffer for output image*/
      OutputImageBuffer=(BYTE huge *)
                         farcalloc(RASTERSIZE,(unsigned long)sizeof(BYTE));
      if(OutputImageBuffer==NULL)
      {
        restorecrtmode();
        printf("Error: Not enough memory for median filter output buffer\n");
        return(ENoMemory);
      }
      /*Store address of output image buffer*/
      *OutImageBufPtr=OutputImageBuffer;
      /*
      Clearing the output buffer to white will show the
      border areas not touched by the median filter. It also
      provides a nice white frame for the output image.
      */
      ClearImageArea(OutputImageBuffer,MINCOLNUM,MINROWNUM,
                     MAXCOLS,MAXROWS,WHITE);
      /*Calculate border pixels to miss*/
      ColOffset=NeighborhoodCols/2;
      RowOffset=NeighborhoodRows/2;
      /*Compensate for edge effects*/
      Col+=ColOffset;
      Row+=RowOffset;
      Width-=(NeighborhoodCols-1);
      Height-=(NeighborhoodRows-1);
      /*Calculate new range of pixels to act upon*/
      ColExtent=Col+Width;
      RowExtent=Row+Height;
      TotalPixels=(NeighborhoodCols*NeighborhoodRows);
      MedianIndex=(NeighborhoodCols*NeighborhoodRows)/2;
      /*allocate memory for pixel buffer*/
      PixelValues=(BYTE *)calloc(TotalPixels,(unsigned)sizeof(BYTE));
      if(PixelValues==NULL)
      {
        restorecrtmode();
        printf("Error: Not enough memory for median filter pixel buffer\n");
        return(ENoMemory);
      }
      for(ImageRow=Row;ImageRow<RowExtent;ImageRow++)
      {
        TempRow=ImageRow-RowOffset;
        for(ImageCol=Col;ImageCol<ColExtent;ImageCol++)
        {
          TempCol=ImageCol-ColOffset;
          PixelIndex=0;
          for(NeighborCol=0;NeighborCol<NeighborhoodCols;NeighborCol++)
            for(NeighborRow=0;NeighborRow<NeighborhoodRows;NeighborRow++)
              PixelValues[PixelIndex++]=
                GetPixelFromImage(InImage,TempCol+NeighborCol,
                                          TempRow+NeighborRow);
          /*
          Quick sort the brightness values into ascending order
          and then pick out the median or middle value as
          that for the pixel.
          */
          qsort(PixelValues,TotalPixels,sizeof(BYTE),ByteCompare);
          PutPixelInImage(OutputImageBuffer,ImageCol,ImageRow,
                          PixelValues[MedianIndex]);
        }
      }
    }
    else
      return(EKernelSize);
  }
  free(PixelValues);    /*give up the pixel value buffer*/
  return(NoError);
}
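/*
Illustrative usage sketch, not part of the original listing: a 3x3 median
filter, the usual choice for suppressing salt-and-pepper noise while keeping
edges. The function name DespeckleWholeImage is hypothetical.
*/
CompletionCode DespeckleWholeImage(BYTE huge *TheImage)
{
  BYTE huge *OutImage;
  CompletionCode RetCode;
  RetCode=MedianFilter(TheImage,MINCOLNUM,MINROWNUM,MAXCOLS,MAXROWS,
                       3,3,&OutImage);
  if(RetCode==NoError)
  {
    CopyImage(OutImage,TheImage);
    farfree((BYTE far *)OutImage);
  }
  return(RetCode);
}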
/*
Sobel Edge Detection Function
*/
CompletionCode SobelEdgeDet(BYTE huge *InImage,
                          unsigned Col,unsigned Row,
                          unsigned Width,unsigned Height,
                          unsigned Threshold,unsigned Overlay,
                          BYTE huge **OutImageBufPtr)
{
  register unsigned ColExtent,RowExtent;
  register unsigned ImageCol,ImageRow;
  unsigned PtA,PtB,PtC,PtD,PtE,PtF,PtG,PtH,PtI;
  unsigned LineAEIAveAbove,LineAEIAveBelow,LineAEIMaxDif;
  unsigned LineBEHAveAbove,LineBEHAveBelow,LineBEHMaxDif;
  unsigned LineCEGAveAbove,LineCEGAveBelow,LineCEGMaxDif;
  unsigned LineDEFAveAbove,LineDEFAveBelow,LineDEFMaxDif;
  unsigned MaxDif;
  BYTE huge *OutputImageBuffer;
  if(ParameterCheckOK(Col,Row,Col+Width,Row+Height,"Sobel Edge Detector"))
  {
    /*allocate far memory buffer for output image*/
    OutputImageBuffer=(BYTE huge *)
                       farcalloc(RASTERSIZE,(unsigned long)sizeof(BYTE));
    if(OutputImageBuffer==NULL)
    {
      restorecrtmode();
      printf("Error: Not enough memory for Sobel output buffer\n");
      return(ENoMemory);
    }
    /*Store address of output image buffer*/
    *OutImageBufPtr=OutputImageBuffer;
    /*Clear the output buffer*/
    ClearImageArea(OutputImageBuffer,MINCOLNUM,MINROWNUM,
                   MAXCOLS,MAXROWS,BLACK);
    /*Compensate for edge effects of 3x3 pixel neighborhood*/
    Col+=1;
    Row+=1;
    Width-=2;
    Height-=2;
    /*Calculate new range of pixels to act upon*/
    ColExtent=Col+Width;
    RowExtent=Row+Height;
    for(ImageRow=Row;ImageRow<RowExtent;ImageRow++)
      for(ImageCol=Col;ImageCol<ColExtent;ImageCol++)
      {
        /*Get each pixel in 3x3 neighborhood*/
        PtA=GetPixelFromImage(InImage,ImageCol-1,ImageRow-1);
        PtB=GetPixelFromImage(InImage,ImageCol,ImageRow-1);
        PtC=GetPixelFromImage(InImage,ImageCol+1,ImageRow-1);
        PtD=GetPixelFromImage(InImage,ImageCol-1,ImageRow);
        PtE=GetPixelFromImage(InImage,ImageCol,ImageRow);
        PtF=GetPixelFromImage(InImage,ImageCol+1,ImageRow);
        PtG=GetPixelFromImage(InImage,ImageCol-1,ImageRow+1);
        PtH=GetPixelFromImage(InImage,ImageCol,ImageRow+1);
        PtI=GetPixelFromImage(InImage,ImageCol+1,ImageRow+1);
        /*
        Calculate average above and below the line.
        Take the absolute value of the difference.
        */
        LineAEIAveBelow=(PtD+PtG+PtH)/3;
        LineAEIAveAbove=(PtB+PtC+PtF)/3;
        LineAEIMaxDif=abs(LineAEIAveBelow-LineAEIAveAbove);
        LineBEHAveBelow=(PtA+PtD+PtG)/3;
        LineBEHAveAbove=(PtC+PtF+PtI)/3;
        LineBEHMaxDif=abs(LineBEHAveBelow-LineBEHAveAbove);
        LineCEGAveBelow=(PtF+PtH+PtI)/3;
        LineCEGAveAbove=(PtA+PtB+PtD)/3;
        LineCEGMaxDif=abs(LineCEGAveBelow-LineCEGAveAbove);
        LineDEFAveBelow=(PtG+PtH+PtI)/3;
        LineDEFAveAbove=(PtA+PtB+PtC)/3;
        LineDEFMaxDif=abs(LineDEFAveBelow-LineDEFAveAbove);
        /*
        Find the maximum value of the absolute differences
        from the four possibilities.
        */
        MaxDif=MAX(LineAEIMaxDif,LineBEHMaxDif);
        MaxDif=MAX(LineCEGMaxDif,MaxDif);
        MaxDif=MAX(LineDEFMaxDif,MaxDif);
        /*
        If the maximum difference is above the threshold, set
        the pixel of interest (center pixel) to white. If
        below the threshold, optionally copy the input image
        to the output image. This copying is controlled by
        the parameter Overlay.
        */
        if(MaxDif>=Threshold)
          PutPixelInImage(OutputImageBuffer,ImageCol,ImageRow,WHITE);
        else if(Overlay)
          PutPixelInImage(OutputImageBuffer,ImageCol,ImageRow,PtE);
      }
  }
  return(NoError);
}
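/*
Illustrative usage sketch, not part of the original listing: edge detection
with the result overlaid on the original picture. The threshold of 10 (out
of the 0..63 sample range) and the function name OutlineWholeImage are
hypothetical; a lower threshold marks more, fainter edges.
*/
CompletionCode OutlineWholeImage(BYTE huge *TheImage)
{
  BYTE huge *OutImage;
  CompletionCode RetCode;
  RetCode=SobelEdgeDet(TheImage,MINCOLNUM,MINROWNUM,MAXCOLS,MAXROWS,
                       10,TRUE,&OutImage);
  if(RetCode==NoError)
  {
    CopyImage(OutImage,TheImage);
    farfree((BYTE far *)OutImage);
  }
  return(RetCode);
}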
				
/****************************************/
/*          FRPROCES.H              */
/*    Image Processing Header File   */
/*    Frame Processing Functions     */
/*     written in Turbo C 2.0      */
/****************************************/
/*User defined image combination type*/
typedef enum{And,Or,Xor,Add,Sub,Mult,Div,Min,Max,Ave,Overlay}BitFunction;
/*Frame Process Function Prototypes*/
void CombineImages(BYTE huge *SImage,
                 unsigned SCol,unsigned SRow,
                 unsigned SWidth,unsigned SHeight,
                 BYTE huge *DImage,
                 unsigned DCol,unsigned DRow,
                 enum BitFunction CombineType,
                 short Scale);
/****************************************/
/*        FRPROCES.C            */
/*    Image Processing Code       */
/*   Frame Process Functions      */
/*    written in Turbo C 2.0   */
/****************************************/
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <dos.h>
#include <alloc.h>
#include <process.h>
#include <graphics.h>
#include "misc.h"
#include "pcx.h"
#include "vga.h"
#include "imagesup.h"
#include "frproces.h"
/*Single function performs all image combinations*/
void CombineImages(BYTE huge *SImage,
                 unsigned SCol,unsigned SRow,
                 unsigned SWidth,unsigned SHeight,
                 BYTE huge *DImage,
                 unsigned DCol,unsigned DRow,
                 enum BitFunction CombineType,
                 short Scale)
{
  register unsigned SImageCol,SImageRow,DestCol;
  short SData,DData;
  unsigned SColExtent,SRowExtent;
  if(ParameterCheckOK(SCol,SRow,SCol+SWidth,SRow+SHeight,"CombineImages") &&
     ParameterCheckOK(DCol,DRow,DCol+SWidth,DRow+SHeight,"CombineImages"))
  {
    SColExtent=SCol+SWidth;
    SRowExtent=SRow+SHeight;
    for(SImageRow=SRow;SImageRow<SRowExtent;SImageRow++)
    {
      /*Reset the destination Column count every row*/
      DestCol=DCol;
      for(SImageCol=SCol;SImageCol<SColExtent;SImageCol++)
      {
        /*Get a byte of the source and dest image data*/
        SData=GetPixelFromImage(SImage,SImageCol,SImageRow);
        DData=GetPixelFromImage(DImage,DestCol,DRow);
        /*Combine source and dest data according to parameter*/
        switch(CombineType)
        {
          case And:
               DData&=SData;
               break;
          case Or:
               DData|=SData;
               break;
          case Xor:
               DData^=SData;
               break;
          case Add:
               DData+=SData;
               break;
          case Sub:
               DData-=SData;
               break;
          case Mult:
               DData*=SData;
               break;
          case Div:
               if(SData!=0)
                 DData/=SData;
               break;
          case Min:
               DData=MIN(SData,DData);
               break;
          case Max:
               DData=MAX(SData,DData);
               break;
          case Ave:
               DData=(SData+DData)/2;
               break;
          case Overlay:
               DData=SData;
               break;
        }
        /*
        Scale the resultant data if requested to. A positive
        Scale value shifts the destination data to the right
        thereby dividing it by a power of two. A zero Scale
        value leaves the data untouched. A negative Scale
        value shifts the data left thereby multiplying it by
        a power of two.
        */
        if(Scale<0)
          DData<<=abs(Scale);
        else if(Scale>0)
          DData>>=Scale;
        /*Don't let the pixel data get out of range*/
        DData=(DData<MINSAMPLEVAL)?MINSAMPLEVAL:DData;
        DData=(DData>MAXSAMPLEVAL)?MAXSAMPLEVAL:DData;
        PutPixelInImage(DImage,DestCol++,DRow,DData);
      }
      /*Bump to next row in the destination image*/
      DRow++;
    }
  }
}
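/*
Illustrative usage sketch, not part of the original listing: averaging two
full images pixel by pixel, e.g. to blend two video frames. The buffer names
FrameA and FrameB and the function name BlendFrames are hypothetical;
FrameB receives the blended result.
*/
void BlendFrames(BYTE huge *FrameA,BYTE huge *FrameB)
{
  CombineImages(FrameA,MINCOLNUM,MINROWNUM,MAXCOLS,MAXROWS,
                FrameB,MINCOLNUM,MINROWNUM,Ave,0);
}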
				
/****************************************/
/*          GEPROCES.H              */
/*    Image Processing Header File     */
/*   Geometric Processing Functions    */
/*     written in Turbo C 2.0       */
/****************************************/
/*Misc user defined types*/
typedef enum{HorizMirror,VertMirror}MirrorType;
/*Geometric processes function prototypes*/
void ScaleImage(BYTE huge *InImage,unsigned SCol,unsigned SRow,
               unsigned SWidth,unsigned SHeight,
               double ScaleH,double ScaleV,
               BYTE huge *OutImage,
               unsigned DCol,unsigned DRow,
               unsigned Interpolate);
void SizeImage(BYTE huge *InImage,unsigned SCol,unsigned SRow,
               unsigned SWidth,unsigned SHeight,
               BYTE huge *OutImage,
               unsigned DCol,unsigned DRow,
               unsigned DWidth,unsigned DHeight,
               unsigned Interpolate);
void RotateImage(BYTE huge *InImage,unsigned Col,unsigned Row,
                unsigned Width,unsigned Height,double Angle,
                BYTE huge *OutImage,unsigned Interpolate);
void TranslateImage(BYTE huge *InImage,
                   unsigned SCol,unsigned SRow,
                   unsigned SWidth,unsigned SHeight,
                   BYTE huge *OutImage,
                   unsigned DCol,unsigned DRow,
                   unsigned EraseFlag);
void MirrorImage(BYTE huge *InImage,
                unsigned SCol,unsigned SRow,
                unsigned SWidth,unsigned SHeight,
                enum MirrorType WhichMirror,
                BYTE huge *OutImage,
                unsigned DCol,unsigned DRow);
				
/****************************************/
/*        GEPROCES.C               */
/*    Image Processing Code        */
/*  Geometric Processing Functions    */
/*    written in Turbo C 2.0       */
/****************************************/
#include <stdio.h>
#include <conio.h>
#include <dos.h>
#include <alloc.h>
#include <process.h>
#include <math.h>
#include <graphics.h>
#include "misc.h"
#include "pcx.h"
#include "vga.h"
#include "imagesup.h"
void ScaleImage(BYTE huge *InImage,unsigned SCol,unsigned SRow,
               unsigned SWidth,unsigned SHeight,
               double ScaleH,double ScaleV,
               BYTE huge *OutImage,
               unsigned DCol,unsigned DRow,
               unsigned Interpolate)
{
  unsigned DestWidth,DestHeight;
  unsigned PtA,PtB,PtC,PtD,PixelValue;
  register unsigned SPixelColNum,SPixelRowNum,DestCol,DestRow;
  double SPixelColAddr,SPixelRowAddr;
  double ColDelta,RowDelta;
  double ContribFromAandB,ContribFromCandD;
  DestWidth=ScaleH*SWidth+0.5;
  DestHeight=ScaleV*SHeight+0.5;
  if(ParameterCheckOK(SCol,SRow,SCol+SWidth,SRow+SHeight,"ScaleImage") &&
     ParameterCheckOK(DCol,DRow,DCol+DestWidth,DRow+DestHeight,"ScaleImage"))
  {
    /*Calculations from destination perspective*/
    for(DestRow=0;DestRow<DestHeight;DestRow++)
    {
      SPixelRowAddr=DestRow/ScaleV;
      SPixelRowNum=(unsigned)SPixelRowAddr;
      RowDelta=SPixelRowAddr-SPixelRowNum;
      SPixelRowNum+=SRow;
      for(DestCol=0;DestCol<DestWidth;DestCol++)
      {
        SPixelColAddr=DestCol/ScaleH;
        SPixelColNum=(unsigned)SPixelColAddr;
        ColDelta=SPixelColAddr-SPixelColNum;
        SPixelColNum+=SCol;
        if(Interpolate)
        {
          /*
          SPixelColNum and SPixelRowNum now contain the pixel
          coordinates of the upper left pixel of the targeted
          pixel's (point X) neighborhood. This is point A below:
                           A     B
                              X
                           C     D
          We must retrieve the brightness level of each of the
          four pixels to calculate the value of the pixel put into
          the destination image.
          Get point A brightness as it will always lie within the
          input image area. Check to make sure the other points are
          within also. If so, use their values for the calculations.
          If not, set them all equal to point A's value. This induces
          an error, but only at the edges of an image.
          */
          PtA=GetPixelFromImage(InImage,SPixelColNum,SPixelRowNum);
          if(((SPixelColNum+1)<MAXCOLS)&&((SPixelRowNum+1)<MAXROWS))
          {
            PtB=GetPixelFromImage(InImage,SPixelColNum+1,SPixelRowNum);
            PtC=GetPixelFromImage(InImage,SPixelColNum,SPixelRowNum+1);
            PtD=GetPixelFromImage(InImage,SPixelColNum+1,SPixelRowNum+1);
          }
          else
          {
            /*All points have equal brightness*/
            PtB=PtC=PtD=PtA;
          }
          /*
          Interpolate to find brightness contribution of each pixel
          in neighborhood. Done in both the horizontal and vertical
          directions.
          */
          ContribFromAandB=ColDelta*((double)PtB-PtA)+PtA;
          ContribFromCandD=ColDelta*((double)PtD-PtC)+PtC;
          PixelValue=0.5+ContribFromAandB+
                     (ContribFromCandD-ContribFromAandB)*RowDelta;
        }
        else
          PixelValue=GetPixelFromImage(InImage,SPixelColNum,SPixelRowNum);
        /*Put the pixel into the destination buffer*/
        PutPixelInImage(OutImage,DestCol+DCol,DestRow+DRow,PixelValue);
      }
    }
  }
}
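/*
Illustrative usage sketch, not part of the original listing: enlarging the
central quarter of a picture to fill a full 320x200 destination buffer with
bilinear interpolation enabled. The buffer names and the function name
ZoomCenter are hypothetical.
*/
void ZoomCenter(BYTE huge *InImage,BYTE huge *OutImage)
{
  /*source region is the middle 160x100; scale by 2 in both axes*/
  ScaleImage(InImage,MAXCOLS/4,MAXROWS/4,MAXCOLS/2,MAXROWS/2,
             2.0,2.0,OutImage,MINCOLNUM,MINROWNUM,TRUE);
}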
void SizeImage(BYTE huge *InImage,unsigned SCol,unsigned SRow,
               unsigned SWidth,unsigned SHeight,
               BYTE huge *OutImage,
               unsigned DCol,unsigned DRow,
               unsigned DWidth,unsigned DHeight,
               unsigned Interpolate)
{
  double HScale,VScale;
  /*Check for parameters out of range*/
  if(ParameterCheckOK(SCol,SRow,SCol+SWidth,SRow+SHeight,"SizeImage") &&
     ParameterCheckOK(DCol,DRow,DCol+DWidth,DRow+DHeight,"SizeImage"))
  {
    /*
    Calculate horizontal and vertical scale factors required
    to fit specified portion of input image into specified portion
    of output image.
    */
    HScale=(double)DWidth/(double)SWidth;
    VScale=(double)DHeight/(double)SHeight;
    /*Call ScaleImage to do the actual work*/
    ScaleImage(InImage,SCol,SRow,SWidth,SHeight,HScale,VScale,
               OutImage,DCol,DRow,Interpolate);
  }
}
void RotateImage(BYTE huge *InImage,unsigned Col,unsigned Row,
                unsigned Width,unsigned Height,double Angle,
                BYTE huge *OutImage,unsigned Interpolate)
{
  register unsigned ImageCol,ImageRow;
  unsigned CenterCol,CenterRow,SPixelColNum,SPixelRowNum;
  unsigned ColExtent,RowExtent,PixelValue;
  unsigned PtA,PtB,PtC,PtD;
  double DPixelRelativeColNum,DPixelRelativeRowNum;
  double CosAngle,SinAngle,SPixelColAddr,SPixelRowAddr;
  double ColDelta,RowDelta;
  double ContribFromAandB,ContribFromCandD;
  if(ParameterCheckOK(Col,Row,Col+Width,Row+Height,"RotateImage"))
  {
    /*Angle must be in 0..359.9*/
    while(Angle>=360.0)
      Angle-=360.0;
    /*Convert angle from degrees to radians*/
    Angle*=((double)3.14159/(double)180.0);
    /*Calculate angle values for rotation*/
    CosAngle=cos(Angle);
    SinAngle=sin(Angle);
    /*Center of rotation*/
    CenterCol=Col+Width/2;
    CenterRow=Row+Height/2;
    ColExtent=Col+Width;
    RowExtent=Row+Height;
    /*
    All calculations are performed from the destination image
    perspective. Absolute pixel values must be converted into
    inches of display distance to keep the aspect value
    correct when the image is rotated. After rotation, the
    calculated display distance is converted back to real
    pixel values.
    */
    for(ImageRow=Row;ImageRow<RowExtent;ImageRow++)
    {
      DPixelRelativeRowNum=(double)ImageRow-CenterRow;
      /*Convert row value to display distance from image center*/
      DPixelRelativeRowNum*=LRINCHESPERPIXELVERT;
      for(ImageCol=Col;ImageCol<ColExtent;ImageCol++)
      {
        DPixelRelativeColNum=(double)ImageCol-CenterCol;
        /*Convert col value to display distance from image center*/
        DPixelRelativeColNum*=LRINCHESPERPIXELHORIZ;
        /*
        Calculate source pixel address from destination
        pixel's position.
        */
        SPixelColAddr=DPixelRelativeColNum*CosAngle-
                      DPixelRelativeRowNum*SinAngle;
        SPixelRowAddr=DPixelRelativeColNum*SinAngle+
                      DPixelRelativeRowNum*CosAngle;
        /*
        Convert from coordinates relative to image
        center back into absolute coordinates.
        */
        /*Convert display distance to pixel location*/
        SPixelColAddr*=LRPIXELSPERINCHHORIZ;
        SPixelColAddr+=CenterCol;
        SPixelRowAddr*=LRPIXELSPERINCHVERT;
        SPixelRowAddr+=CenterRow;
        SPixelColNum=(unsigned)SPixelColAddr;
        SPixelRowNum=(unsigned)SPixelRowAddr;
        ColDelta=SPixelColAddr-SPixelColNum;
        RowDelta=SPixelRowAddr-SPixelRowNum;
        if(Interpolate)
        {
          /*
          SPixelColNum and SPixelRowNum now contain the pixel
          coordinates of the upper left pixel of the targeted
          pixel's (point X) neighborhood. This is point A below:
                           A     B
                              X
                           C     D
          We must retrieve the brightness level of each of the
          four pixels to calculate the value of the pixel put into
          the destination image.
          Get point A brightness as it will always lie within the
          input image area. Check to make sure the other points are
          within also. If so, use their values for the calculations.
          If not, set them all equal to point A's value. This induces
          an error, but only at the edges of an image.
          */
          PtA=GetPixelFromImage(InImage,SPixelColNum,SPixelRowNum);
          if(((SPixelColNum+1)<MAXCOLS)&&((SPixelRowNum+1)<MAXROWS))
          {
            PtB=GetPixelFromImage(InImage,SPixelColNum+1,SPixelRowNum);
            PtC=GetPixelFromImage(InImage,SPixelColNum,SPixelRowNum+1);
            PtD=GetPixelFromImage(InImage,SPixelColNum+1,SPixelRowNum+1);
          }
          else
          {
            /*All points have equal brightness*/
            PtB=PtC=PtD=PtA;
          }
          /*
          Interpolate to find brightness contribution of each pixel
          in neighborhood. Done in both the horizontal and vertical
          directions.
          */
          ContribFromAandB=ColDelta*((double)PtB-PtA)+PtA;
          ContribFromCandD=ColDelta*((double)PtD-PtC)+PtC;
          PixelValue=0.5+ContribFromAandB+
                     (ContribFromCandD-ContribFromAandB)*RowDelta;
        }
        else
          PixelValue=GetPixelFromImage(InImage,SPixelColNum,SPixelRowNum);
        /*Put the pixel into the destination buffer*/
        PutPixelInImage(OutImage,ImageCol,ImageRow,PixelValue);
      }
    }
  }
}
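/*
Illustrative usage sketch, not part of the original listing: rotating a full
frame by 30 degrees with interpolation. RotateImage resamples the source for
every destination pixel, so the entire destination area is rewritten; the
buffer names and the function name RotateFrame30 are hypothetical.
*/
void RotateFrame30(BYTE huge *InImage,BYTE huge *OutImage)
{
  RotateImage(InImage,MINCOLNUM,MINROWNUM,MAXCOLS,MAXROWS,
              30.0,OutImage,TRUE);
}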
/*
Caution: images must not overlap
*/
void TranslateImage(BYTE huge *InImage,
                   unsigned SCol,unsigned SRow,
                   unsigned SWidth,unsigned SHeight,
                   BYTE huge *OutImage,
                   unsigned DCol,unsigned DRow,
                   unsigned EraseFlag)
{
  register unsigned SImageCol,SImageRow,DestCol;
  unsigned SColExtent,SRowExtent;
  /*Check for parameters out of range*/
  if(ParameterCheckOK(SCol,SRow,SCol+SWidth,SRow+SHeight,"TranslateImage") &&
     ParameterCheckOK(DCol,DRow,DCol+SWidth,DRow+SHeight,"TranslateImage"))
  {
    SColExtent=SCol+SWidth;
    SRowExtent=SRow+SHeight;
    for(SImageRow=SRow;SImageRow<SRowExtent;SImageRow++)
    {
      /*Reset the destination Column count every row*/
      DestCol=DCol;
      for(SImageCol=SCol;SImageCol<SColExtent;SImageCol++)
      {
        /*Transfer byte of the image data between buffers*/
        PutPixelInImage(OutImage,DestCol++,DRow,
                        GetPixelFromImage(InImage,SImageCol,SImageRow));
      }
      /*Bump to next row in the destination image*/
      DRow++;
    }
    /*If erasure specified, blot out original image*/
    if(EraseFlag)
      ClearImageArea(InImage,SCol,SRow,SWidth,SHeight,BLACK);
  }
}
void MirrorImage(BYTE huge *InImage,
                unsigned SCol,unsigned SRow,
                unsigned SWidth,unsigned SHeight,
                enum MirrorType WhichMirror,
                BYTE huge *OutImage,
                unsigned DCol,unsigned DRow)
{
  register unsigned SImageCol,SImageRow,DestCol;
  unsigned SColExtent,SRowExtent;
  /*Check for parameters out of range*/
  if(ParameterCheckOK(SCol,SRow,SCol+SWidth,SRow+SHeight,"MirrorImage") &&
     ParameterCheckOK(DCol,DRow,DCol+SWidth,DRow+SHeight,"MirrorImage"))
  {
    SColExtent=SCol+SWidth;
    SRowExtent=SRow+SHeight;
    switch(WhichMirror)
    {
      case HorizMirror:
        for(SImageRow=SRow;SImageRow<SRowExtent;SImageRow++)
        {
          /*Reset the destination Column count every row*/
          DestCol=DCol+SWidth;
          for(SImageCol=SCol;SImageCol<SColExtent;SImageCol++)
          {
            /*Transfer byte of the image data between buffers*/
            PutPixelInImage(OutImage,--DestCol,DRow,
                            GetPixelFromImage(InImage,SImageCol,SImageRow));
          }
          /*Bump to next row in the destination image*/
          DRow++;
        }
        break;
      case VertMirror:
        DRow+=(SHeight-1);
        for(SImageRow=SRow;SImageRow<SRowExtent;SImageRow++)
        {
          /*Reset the destination Column count every row*/
          DestCol=DCol;
          for(SImageCol=SCol;SImageCol<SColExtent;SImageCol++)
          {
            /*Transfer byte of the image data between buffers*/
            PutPixelInImage(OutImage,DestCol++,DRow,
                            GetPixelFromImage(InImage,SImageCol,SImageRow));
          }
          /*Bump to next row in the destination image*/
          DRow--;
        }
        break;
    }
  }
}
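/*
Illustrative usage sketch, not part of the original listing: producing a
left-right flipped copy of a whole frame, as would be used for reversed film
transparencies. The buffer names and the function name FlipFrameLeftRight
are hypothetical.
*/
void FlipFrameLeftRight(BYTE huge *InImage,BYTE huge *OutImage)
{
  MirrorImage(InImage,MINCOLNUM,MINROWNUM,MAXCOLS,MAXROWS,
              HorizMirror,OutImage,MINCOLNUM,MINROWNUM);
}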
				
/****************************************/
/*       IMAGESUP.H                 */
/*  Image Processing Header File      */
/* Image Processing Support Functions  */
/*    written in Turbo C 2.0        */
/****************************************/
/*
This file includes the general equates used for all of the
image processing code in part two of this book. Throughout
these equates, a 320x200 256 color image is assumed. If the
resolution of the processed pictures changes, the equates
MAXCOLS and MAXROWS must change accordingly.
*/
/*Pixel Sample Information and Equates*/
#define MAXSAMPLEBITS   6          /*6 bits from digitizer*/
#define MINSAMPLEVAL    0          /*Min sample value=0*/
/*Max num of sample values*/
#define MAXQUANTLEVELS  (1<<MAXSAMPLEBITS)
/*Max sample value=63*/
#define MAXSAMPLEVAL   (MAXQUANTLEVELS-1)
/*Image Resolution Equates*/
#define MINCOLNUM    0          /*Column 0*/
#define MAXCOLS      LRMAXCOLS  /*320 total columns*/
#define MAXCOLNUM   (MAXCOLS-1) /*Last column is 319*/
#define MINROWNUM    0          /*Row 0*/
#define MAXROWS      LRMAXROWS  /*200 total rows*/
#define MAXROWNUM   (MAXROWS-1) /*Last row is 199*/
#define RASTERSIZE  ((long)MAXCOLS*MAXROWS)
#define MAXNUMGRAYCOLORS MAXQUANTLEVELS
/*histogram equates*/
#define HISTOCOL     0
#define HISTOROW     0
#define HISTOWIDTH   134
#define HISTOHEIGHT  84
#define BLACK        0
#define WHITE        63
#define AXISCOL     (HISTOCOL+3)
#define AXISROW     (HISTOROW+HISTOHEIGHT-5)
#define AXISLENGTH  (MAXQUANTLEVELS*2-1)
#define DATACOL      AXISCOL
#define DATAROW     (AXISROW-1)
#define MAXDEFLECTION (HISTOHEIGHT-10)
/*External Function Declarations and Prototypes*/
void CopyImage(BYTE huge *SourceBuf,BYTE huge *DestBuf);
BYTE GetPixelFromImage(BYTE huge *Image,unsigned Col,unsigned Row);
CompletionCode PutPixelInImage(BYTE huge *Image,unsigned Col,
                             unsigned Row,unsigned Color);
CompletionCode DrawHLine(BYTE huge *Image,unsigned Col,unsigned Row,
                       unsigned Length,unsigned Color);
CompletionCode DrawVLine(BYTE huge *Image,unsigned Col,unsigned Row,
                       unsigned Length,unsigned Color);
void ReadImageAreaToBuf(BYTE huge *Image,unsigned Col,unsigned Row,
                       unsigned Width,unsigned Height,
                       BYTE huge *Buffer);
void WriteImageAreaFromBuf(BYTE huge *Buffer,unsigned BufWidth,
                         unsigned BufHeight,BYTE huge *Image,
                         unsigned ImageCol,unsigned ImageRow);
void ClearImageArea(BYTE huge *Image,unsigned Col,unsigned Row,
                  unsigned Width,unsigned Height,
                  unsigned PixelValue);
CompletionCode ParameterCheckOK(unsigned Col,unsigned Row,
              unsigned ColExtent,unsigned RowExtent,
              char *ErrorStr);
				
/****************************************/
/*        IMAGESUP.C                */
/* Image Processing Support Functions  */
/*     written in Turbo C 2.0       */
/****************************************/
#include <stdio.h>
#include <process.h>
#include <conio.h>
#include <dos.h>
#include <alloc.h>
#include <mem.h>
#include <graphics.h>
#include "misc.h"
#include "pcx.h"
#include "vga.h"
#include "imagesup.h"
extern struct PCX_File PCXData;
extern unsigned ImageWidth;
extern unsigned ImageHeight;
/*
Image Processing Support Functions - See text for details.
*/
/*
Copy a complete image from source buffer to destination buffer
*/
void CopyImage(BYTE huge *SourceBuf,BYTE huge *DestBuf)
{
  movedata(FP_SEG(SourceBuf),FP_OFF(SourceBuf),
           FP_SEG(DestBuf),FP_OFF(DestBuf),
           (unsigned)RASTERSIZE);
}
/*
NOTE: to index into the image memory like an array, the index
value must be a long variable type, NOT just cast to long.
*/
BYTE GetPixelFromImage(BYTE huge *Image,unsigned Col,unsigned Row)
{
  unsigned long PixelBufOffset;
  if((Col<ImageWidth)&&(Row<ImageHeight))
  {
    PixelBufOffset=Row;       /*done to prevent overflow*/
    PixelBufOffset*=ImageWidth;
    PixelBufOffset+=Col;
    return(Image[PixelBufOffset]);
  }
  printf("GetPixelFromImage Error: Coordinate out of range\n");
  printf("Col=%d Row=%d\n",Col,Row);
  return(FALSE);
}
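/*
Illustrative sketch, not part of the original listing, of why the offset
above is accumulated in an unsigned long: with 16-bit unsigned arithmetic,
Row*ImageWidth wraps for any row past 204 on a 320-column raster
(205*320=65600>65535). The coordinates and the function name
ShowOffsetOverflow are hypothetical.
*/
void ShowOffsetOverflow(void)
{
  unsigned Row=205,Col=0;
  unsigned Wrapped=Row*320+Col;  /*16-bit product wraps to 64*/
  unsigned long Correct=Row;     /*widen first, as GetPixelFromImage does*/
  Correct*=320;
  Correct+=Col;                  /*65600, the true offset*/
  printf("wrapped=%u correct=%lu\n",Wrapped,Correct);
}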
CompletionCode PutPixelInImage(BYTE huge *Image,unsigned Col,
                             unsigned Row,unsigned Color)
{
  unsigned long PixelBufOffset;
  if((Col<ImageWidth)&&(Row<ImageHeight))
  {
    PixelBufOffset=Row;       /*done to prevent overflow*/
    PixelBufOffset*=ImageWidth;
    PixelBufOffset+=Col;
    Image[PixelBufOffset]=Color;
    return(TRUE);
  }
  else
  {
    printf("PutPixelInImage Error: Coordinate out of range\n");
    printf("Col=%d Row=%d\n",Col,Row);
    return(FALSE);
  }
}
/*
NOTE: A length of 0 is one pixel on. A length of 1 is two pixels
on. That is why length is incremented before being used.
*/
CompletionCode DrawHLine(BYTE huge *Image,unsigned Col,unsigned Row,
                       unsigned Length,unsigned Color)
{
  if((Col<ImageWidth)&&((Col+Length)<=ImageWidth)&&
     (Row<ImageHeight))
  {
    Length++;
    while(Length--)
      PutPixelInImage(Image,Col++,Row,Color);
    return(TRUE);
  }
  else
  {
    printf("DrawHLine Error: Coordinate out of range\n");
    printf("Col=%d Row=%d Length=%d\n",Col,Row,Length);
    return(FALSE);
  }
}
CompletionCode DrawVLine(BYTE huge *Image,unsigned Col,unsigned Row,
                       unsigned Length,unsigned Color)
{
  if((Row<ImageHeight)&&((Row+Length)<=ImageHeight)&&
     (Col<ImageWidth))
  {
    Length++;
    while(Length--)
      PutPixelInImage(Image,Col,Row++,Color);
    return(TRUE);
  }
  else
  {
    printf("DrawVLine Error: Coordinate out of range\n");
    printf("Col=%d Row=%d Length=%d\n",Col,Row,Length);
    return(FALSE);
  }
}
void ReadImageAreaToBuf(BYTE huge *Image,unsigned Col,unsigned Row,
                        unsigned Width,unsigned Height,BYTE huge *Buffer)
{
  unsigned long PixelBufOffset=0L;
  register unsigned ImageCol,ImageRow;
  for(ImageRow=Row;ImageRow<Row+Height;ImageRow++)
    for(ImageCol=Col;ImageCol<Col+Width;ImageCol++)
      Buffer[PixelBufOffset++]=
        GetPixelFromImage(Image,ImageCol,ImageRow);
}
void WriteImageAreaFromBuf(BYTE huge *Buffer,unsigned BufWidth,
                         unsigned BufHeight,BYTE huge *Image,
                         unsigned ImageCol,unsigned ImageRow)
{
  unsigned long PixelBufOffset;
  register unsigned BufCol,BufRow,CurrentImageCol;
  for(BufRow=0;BufRow<BufHeight;BufRow++)
  {
    CurrentImageCol=ImageCol;
    for(BufCol=0;BufCol<BufWidth;BufCol++)
    {
      PixelBufOffset=(unsigned long)BufRow*BufWidth+BufCol;
      PutPixelInImage(Image,CurrentImageCol,ImageRow,Buffer[PixelBufOffset]);
      CurrentImageCol++;
    }
    ImageRow++;
  }
}
void ClearImageArea(BYTE huge *Image,unsigned Col,unsigned Row,
                  unsigned Width,unsigned Height,
                  unsigned PixelValue)
{
  register unsigned BufCol,BufRow;
  for(BufRow=0;BufRow<Height;BufRow++)
    for(BufCol=0;BufCol<Width;BufCol++)
      PutPixelInImage(Image,BufCol+Col,BufRow+Row,PixelValue);
}
/*
This function checks to make sure the parameters passed to
the image processing functions are all within range. If so,
TRUE is returned. If not, an error message is output and
the calling program is terminated.
*/
CompletionCode ParameterCheckOK(unsigned Col,unsigned Row,
                               unsigned ColExtent,unsigned RowExtent,
                               char *FunctionName)
{
  if((Col>MAXCOLNUM)||(Row>MAXROWNUM)||
     (ColExtent>MAXCOLS)||(RowExtent>MAXROWS))
  {
    restorecrtmode();
    printf("Parameter(s) out of range in function: %s\n",FunctionName);
    printf("Col=%d Row=%d ColExtent=%d RowExtent=%d\n",
           Col,Row,ColExtent,RowExtent);
    exit(EBadParms);
  }
  return(TRUE);
}
				
/****************************************/
/*           PTPROCES.H             */
/*     Image Processing Header File   */
/*      Point Processing Functions   */
/*       written in Turbo C 2.0     */
/****************************************/
extern unsigned Histogram[MAXQUANTLEVELS];
/*Function Prototypes for support and histogram functions*/
void InitializeLUT(BYTE *LookUpTable);
void PtTransform(BYTE huge *ImageData,unsigned Col,
                unsigned Row,unsigned Width,
                unsigned Height,BYTE *LookUpTable);
void GenHistogram(BYTE huge *ImageData,unsigned Col,
                unsigned Row,unsigned Width,
                unsigned Height);
void DisplayHist(BYTE huge *ImageData,unsigned Col,
                unsigned Row,unsigned Width,
                unsigned Height);
/*Point transform functions*/
void AdjImageBrightness(BYTE huge *ImageData,short BrightnessFactor,
                unsigned Col,unsigned Row,
                unsigned Width,unsigned Height);
void NegateImage(BYTE huge *ImageData,unsigned Threshold,
                unsigned Col,unsigned Row,
                unsigned Width,unsigned Height);
void ThresholdImage(BYTE huge *ImageData,unsigned Threshold,
                unsigned Col,unsigned Row,
                unsigned Width,unsigned Height);
void StretchImageContrast(BYTE huge *ImageData,unsigned *HistoData,
                          unsigned Threshold,
                          unsigned Col,unsigned Row,
                          unsigned Width,unsigned Height);
				
/****************************************/
/*        PTPROCES.C             */
/*    Image Processing Code       */
/*   Point Process Functions     */
/*    written in Turbo C 2.0     */
/****************************************/
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <dos.h>
#include <alloc.h>
#include <process.h>
#include <graphics.h>
#include "misc.h"
#include "pcx.h"
#include "vga.h"
#include "imagesup.h"
/*Histogram storage location*/
unsigned Histogram[MAXQUANTLEVELS];
/*
Look Up Table (LUT) Functions
Initialize the Look Up Table (LUT) for straight through
mapping. If a point transform is performed on an initialized
LUT, output data will equal input data. This function is
usually called in preparation for modification to a LUT.
*/
void InitializeLUT(BYTE *LookUpTable)
{
  register unsigned Index;
  for(Index=0;Index<MAXQUANTLEVELS;Index++)
    LookUpTable[Index]=Index;
}
/*
This function performs a point transform on the portion of the
image specified by Col, Row, Width and Height. The actual
transform is contained in the Look Up Table whose address
is passed as a parameter.
*/
void PtTransform(BYTE huge *ImageData,unsigned Col,unsigned Row,
                unsigned Width,unsigned Height,BYTE *LookUpTable)
{
  register unsigned ImageCol,ImageRow;
  register unsigned ColExtent,RowExtent;
  ColExtent=Col+Width;
  RowExtent=Row+Height;
  if(ParameterCheckOK(Col,Row,ColExtent,RowExtent,"PtTransform"))
    for(ImageRow=Row;ImageRow<RowExtent;ImageRow++)
      for(ImageCol=Col;ImageCol<ColExtent;ImageCol++)
        PutPixelInImage(ImageData,ImageCol,ImageRow,
               LookUpTable[GetPixelFromImage(ImageData,ImageCol,ImageRow)]);
}
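/*
Illustrative usage sketch, not part of the original listing: a gamma
correction implemented as a point transform. The LUT is built once with
pow() (which requires math.h, not included by this file) and then applied
to every pixel by PtTransform; the gamma of 2.2 and the function name
GammaCorrectImage are hypothetical.
*/
void GammaCorrectImage(BYTE huge *ImageData)
{
  register unsigned Index;
  BYTE LookUpTable[MAXQUANTLEVELS];
  for(Index=0;Index<MAXQUANTLEVELS;Index++)
    LookUpTable[Index]=(BYTE)
      (pow((double)Index/MAXSAMPLEVAL,1.0/2.2)*MAXSAMPLEVAL+0.5);
  PtTransform(ImageData,MINCOLNUM,MINROWNUM,MAXCOLS,MAXROWS,LookUpTable);
}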
/*
Start of histogram functions.
This function calculates the histogram of any portion of an image.
*/
void GenHistogram(BYTE huge *ImageData,unsigned Col,unsigned Row,
                unsigned Width,unsigned Height)
{
  register unsigned ImageRow,ImageCol,RowExtent,ColExtent;
  register unsigned Index;
  /*clear the histogram array*/
  for(Index=0;Index<MAXQUANTLEVELS;Index++)
    Histogram[Index]=0;
  RowExtent=Row+Height;
  ColExtent=Col+Width;
  if(ParameterCheckOK(Col,Row,ColExtent,RowExtent,"GenHistogram"))
  {
    /*calculate the histogram*/
    for(ImageRow=Row;ImageRow<RowExtent;ImageRow++)
      for(ImageCol=Col;ImageCol<ColExtent;ImageCol++)
        Histogram[GetPixelFromImage(ImageData,ImageCol,ImageRow)]+=1;
  }
}
/*
This function calculates and displays the histogram of an image
or partial image. When called, it assumes the VGA is already
in mode 13 hex.
*/
void DisplayHist(BYTE huge *ImageData,unsigned Col,unsigned Row,
                 unsigned Width,unsigned Height)
{
  BYTE huge *Buffer;
  register unsigned Index,LineLength,XPos,YPos;
  unsigned MaxRepeat;
  /*Allocate enough memory to save image under histogram*/
  Buffer=(BYTE huge *)farcalloc((long)HISTOWIDTH*HISTOHEIGHT,sizeof(BYTE));
  if(Buffer==NULL)
  {
    printf("No buffer memory\n");
    exit(ENoMemory);
  }
  /*Save a copy of the image*/
  ReadImageAreaToBuf(ImageData,HISTOCOL,HISTOROW,HISTOWIDTH,HISTOHEIGHT,
                     Buffer);
  /*
  Set VGA color register 65 to red, 66 to green and 67 to
  blue so the histogram can be visually separated from
  the continuous tone image.
  */
  SetAColorReg(65,63,0,0);
  SetAColorReg(66,0,63,0);
  SetAColorReg(67,0,0,63);
  /*Calculate the histogram for the image*/
  GenHistogram(ImageData,Col,Row,Width,Height);
  MaxRepeat=0;
  /*
  Find the pixel value repeated the most. It will be used for
  scaling.
  */
  for(Index=0;Index<MAXQUANTLEVELS;Index++)
    MaxRepeat=(Histogram[Index]>MaxRepeat)?
              Histogram[Index]:MaxRepeat;
  /*Fill background area of histogram graph*/
  ClearImageArea(ImageData,HISTOCOL,HISTOROW,HISTOWIDTH,HISTOHEIGHT,67);
  /*Draw the bounding box for the histogram*/
  DrawVLine(ImageData,HISTOCOL,HISTOROW,HISTOHEIGHT-1,BLACK);
  DrawVLine(ImageData,HISTOCOL+HISTOWIDTH-1,HISTOROW,HISTOHEIGHT-1,BLACK);
  DrawHLine(ImageData,HISTOCOL,HISTOROW+HISTOHEIGHT-1,HISTOWIDTH-1,BLACK);
  DrawHLine(ImageData,HISTOCOL,HISTOROW,HISTOWIDTH-1,BLACK);
  /*Data base line*/
  DrawHLine(ImageData,AXISCOL,AXISROW,AXISLENGTH,WHITE);
  DrawHLine(ImageData,AXISCOL,AXISROW+1,AXISLENGTH,WHITE);
  /*
  Now do the actual histogram rendering into the
  image buffer.
  */
  for(Index=0;Index<MAXQUANTLEVELS;Index++)
  {
    LineLength=(unsigned)(((long)Histogram[Index]*MAXDEFLECTION)/
                          (long)MaxRepeat);
    XPos=DATACOL+Index*2;
    YPos=DATAROW-LineLength;
    DrawVLine(ImageData,XPos,YPos,LineLength,66);
  }
  /*
  Display the image overlaid with the histogram
  */
  DisplayImageInBuf(ImageData,NOVGAINIT,WAITFORKEY);
  /*After display, restore image data under histogram*/
  WriteImageAreaFromBuf(Buffer,HISTOWIDTH,HISTOHEIGHT,ImageData,
                        HISTOCOL,HISTOROW);
  farfree((BYTE far *)Buffer);
}
				
/*Various Point Transformation Functions*/
void AdjImageBrightness(BYTE huge *ImageData,short BrightnessFactor,
                        unsigned Col,unsigned Row,
                        unsigned Width,unsigned Height)
{
  register unsigned Index;
  register short NewLevel;
  BYTE LookUpTable[MAXQUANTLEVELS];
  for(Index=MINSAMPLEVAL;Index<MAXQUANTLEVELS;Index++)
  {
    NewLevel=Index+BrightnessFactor;
    NewLevel=(NewLevel<MINSAMPLEVAL)?MINSAMPLEVAL:NewLevel;
    NewLevel=(NewLevel>MAXSAMPLEVAL)?MAXSAMPLEVAL:NewLevel;
    LookUpTable[Index]=NewLevel;
  }
  PtTransform(ImageData,Col,Row,Width,Height,LookUpTable);
}
/*
This function will negate an image pixel by pixel. Threshold is
the value of image data where the negation begins. If
threshold is 0, all pixel values are negated. That is, pixel value 0
becomes 63 and pixel value 63 becomes 0. If threshold is greater
than 0, the pixel values in the range 0..Threshold-1 are left
alone while pixel values between Threshold..63 are negated.
*/
void NegateImage(BYTE huge *ImageData,unsigned Threshold,
                unsigned Col,unsigned Row,
                unsigned Width,unsigned Height)
{
  register unsigned Index;
  BYTE LookUpTable[MAXQUANTLEVELS];
  /*Straight through mapping initially*/
  InitializeLUT(LookUpTable);
  /*from Threshold onward, negate entry in LUT*/
  for(Index=Threshold;Index<MAXQUANTLEVELS;Index++)
    LookUpTable[Index]=MAXSAMPLEVAL-Index;
  PtTransform(ImageData,Col,Row,Width,Height,LookUpTable);
}
/*
This function converts a gray scale image to a binary image with each
pixel either on (WHITE) or off (BLACK). The pixel level at
which the cut off is made is controlled by Threshold. Pixels
in the range 0..Threshold-1 become black while pixel values
between Threshold..63 become white.
*/
void ThresholdImage(BYTE huge *ImageData,unsigned Threshold,
                  unsigned Col,unsigned Row,
                  unsigned Width,unsigned Height)
{
  register unsigned Index;
  BYTE LookUpTable[MAXQUANTLEVELS];
  for(Index=MINSAMPLEVAL;Index<Threshold;Index++)
    LookUpTable[Index]=BLACK;
  for(Index=Threshold;Index<MAXQUANTLEVELS;Index++)
    LookUpTable[Index]=WHITE;
  PtTransform(ImageData,Col,Row,Width,Height,LookUpTable);
}
void StretchImageContrast(BYTE huge *ImageData,unsigned *HistoData,
                          unsigned Threshold,
                          unsigned Col,unsigned Row,
                          unsigned Width,unsigned Height)
{
  register unsigned Index,NewMin,NewMax;
  double StepSiz,StepVal;
  BYTE LookUpTable[MAXQUANTLEVELS];
  /*
  Search from the low bin towards the high bin for the first one that
  exceeds the threshold
  */
  for(Index=0;Index<MAXQUANTLEVELS;Index++)
    if(HistoData[Index]>Threshold)
      break;
  NewMin=Index;
  /*
  Search from the high bin towards the low bin for the first one that
  exceeds the threshold
  */
  for(Index=MAXSAMPLEVAL;Index>NewMin;Index--)
    if(HistoData[Index]>Threshold)
      break;
  NewMax=Index;
  StepSiz=(double)MAXQUANTLEVELS/(double)(NewMax-NewMin+1);
  StepVal=0.0;
  /*values below new minimum are assigned zero in the LUT*/
  for(Index=0;Index<NewMin;Index++)
    LookUpTable[Index]=MINSAMPLEVAL;
  /*values above new maximum are assigned the max sample value*/
  for(Index=NewMax+1;Index<MAXQUANTLEVELS;Index++)
    LookUpTable[Index]=MAXSAMPLEVAL;
  /*values between the new minimum and new maximum are stretched*/
  for(Index=NewMin;Index<=NewMax;Index++)
  {
    LookUpTable[Index]=StepVal;
    StepVal+=StepSiz;
  }
  /*
  Look Up Table is now prepared to point transform the image data
  */
  PtTransform(ImageData,Col,Row,Width,Height,LookUpTable);
}
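/*
Illustrative usage sketch, not part of the original listing: the intended
pipeline is to build the histogram first and then stretch against it. The
bin threshold of 4 (bins with four or fewer pixels are treated as outliers)
and the function name AutoContrast are hypothetical.
*/
void AutoContrast(BYTE huge *ImageData)
{
  GenHistogram(ImageData,MINCOLNUM,MINROWNUM,MAXCOLS,MAXROWS);
  StretchImageContrast(ImageData,Histogram,4,
                       MINCOLNUM,MINROWNUM,MAXCOLS,MAXROWS);
}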

Claims (30)

1. A flat display device on which an image formed of discrete pixels is presented, the display device having a series of optical elements each aligned in front of a pixel and means for individually varying the effective focal length of each optical element so as to vary the apparent visual distance to a viewer positioned in front of the display device on which each pixel is displayed, thereby producing a three-dimensional image, characterized in that each optical element (2) has a focal length that varies progressively along a surface oriented substantially parallel to the image, and further characterized by means (18, 65) for finely displacing, within a pixel and according to the desired depth, the position (5b, 6b, 7b) at which light exits, so that the light input position (5, 6, 7) along the input surface of the optical element is correspondingly displaced, thereby dynamically varying its effective focal length and varying the apparent visual distance (5a, 6a, 7a) from the viewer in accordance with the displacement of the light input position.
2. The display device of claim 1, characterized in that said optical elements (2) are refractive elements and said input surface is a refracting surface.
3. The display device of claim 2, characterized in that said refracting surface is shaped so as to produce the varying focal length.
4. The display device of claim 2, characterized in that said refractive optical elements (2) are made of a gradient-index optical material whose refractive index varies progressively along the refractive element so as to produce the varying focal length.
5. The display device of claim 2, 3 or 4, characterized in that the relationship between said displacement and said focal length is linear.
6. The display device of claim 2, 3 or 4, characterized in that the relationship between said displacement and said focal length is nonlinear.
7. The display device of any one of claims 2 to 4, characterized in that each refractive optical element (39) has a focal length that varies radially with respect to the optical axis of the element, and said displacing means displaces the light exit position (40a, 41a, 42a) radially within the pixel.
8. The display device of any one of claims 2 to 4, characterized in that each refractive optical element (2) is elongated and has a focal length that varies along its length from one end, and the display device linearly displaces the light exit point within the pixel.
9. The display device of any one of claims 2 to 4, characterized in that the display device comprises, as the light source, one of a liquid crystal display device, an electroluminescent device and a plasma display device.
10. The display device of claim 8, characterized in that the display device comprises a cathode ray tube (10) having thereon a large number of elongated phosphor pixels, and said means for linearly displacing the light exit position within a pixel comprises means (65) for displacing the electron beam along each phosphor pixel.
11. The display device of claim 10, characterized in that the electron beam has a rectangular cross-section (66d).
12. The display device of claim 10, characterized in that the electron beam has an elliptical cross-section (66c).
13. The display device of claim 10, characterized in that the pixels are arranged in rows and the display device is a television receiver having means (58, 59, 61, 62, 63) for extracting a depth component for each pixel from a received signal and means (60) for adding this depth component, pixel by pixel, to the conventional horizontal scan line so as to control the vertical height of the horizontal scan line, thereby obtaining a stepped raster scan line (20).
14. The display device of claim 2, characterized in that minute gaps are provided between said individual optical elements.
15. The display device of claim 14, characterized in that said gaps are filled with a black, opaque material.
16. The display device of claim 2, characterized in that said optical elements are made as a molded sheet of plastic material.
17. The display device of claim 2, characterized in that said optical elements are provided on a layer of injection-molded plastic material.
18. The display device of claim 2, characterized in that each optical element is a composite device containing at least two individual optical members (Figure 1(b)).
19. The display device of claim 18, characterized in that said at least two individual optical members are made as at least two molded sheets of plastic material bonded together.
20. The display device of claim 18, characterized in that said at least two individual optical members are made as at least two molded sheets of plastic material securely joined at their edges.
21. The display device of claim 8, characterized in that the display device is a display device or projector for photographic transparencies (14), and said means for displacing the light exit point comprises a mask applied to each pixel of the transparency to provide a predetermined transmission point (5c).
22. The display device of claim 1, characterized in that said optical elements are mirrors (76, 77) and said input surface is a reflecting surface.
23. The display device of claim 22, characterized in that each optical element comprises a plane mirror (76) and a concave mirror (77).
24. The display device of claim 23, characterized in that each plane mirror (76) is made as one surface of a combined element (78), another surface of which forms a concave mirror for an adjacent pixel.
25. The display device of claim 10, characterized in that the display device is a computer monitor with computer-based video driver electronics, the electronics having means for extracting a depth component for each pixel from data received from the computer and means (19) for adding this depth component, pixel by pixel, to the conventional horizontal scan line, thereby obtaining a stepped raster scan line (20).
26. A method of producing a three-dimensional image from a two-dimensional image display device composed of discrete pixels, comprising providing a series of optical elements each aligned in front of a pixel and varying the effective focal length of each optical element so as to vary the apparent visual distance from a viewer positioned in front of the display device on which each discrete pixel is presented, characterized in that each optical element has a focal length that varies progressively along a surface oriented substantially parallel to the image, and the step of varying the effective focal length of each optical element comprises the steps of instantaneously displacing, within each pixel, the position at which light exits the two-dimensional image and transmitting this exiting light to the optical element, the position at which the exiting light strikes the optical element determining the apparent depth of the pixel.
27. The method of claim 26, characterized in that the optical elements are refractive elements and the light beam is incident on the refracting surface of the associated refractive element.
28. The method of claim 26, characterized in that the optical elements are mirrors and the light beam strikes the reflecting surface of the associated mirror.
29. The method of claim 26, 27 or 28, characterized in that the step of displacing the position at which light exits the two-dimensional image comprises linearly displacing the position of the light exiting the two-dimensional image.
30. The method of claim 26, 27 or 28, characterized in that the step of displacing the position at which light exits the two-dimensional image comprises radially displacing the position of the light exiting the two-dimensional image.
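The focal-length mechanism recited in claim 1 can be illustrated numerically. A minimal sketch, assuming a simple thin-lens model (1/f = 1/u + 1/v, magnitudes only) with a hypothetical fixed pixel-to-element distance u smaller than f: displacing the light exit position within the pixel selects a region of the optical element with a different focal length f, and the virtual image of the pixel then appears at a distance v = uf/(f-u), which the viewer perceives as the pixel's depth. All numbers below are hypothetical.

#include <stdio.h>

int main(void)
{
  double u=2.0;   /*hypothetical pixel-to-element distance, arbitrary units*/
  double f;       /*focal length at the light input position, f>u*/
  for(f=2.5;f<=4.5;f+=0.5)
    printf("f=%.1f -> apparent pixel depth %.1f\n",f,(u*f)/(f-u));
  return 0;
}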
CN95197619A 1995-01-04 1995-12-28 三维成像系统 Expired - Fee Related CN1125362C (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/368,644 US5790086A (en) 1995-01-04 1995-01-04 3-D imaging system
US08/368,644 1995-01-04

Publications (2)

Publication Number Publication Date
CN1175309A CN1175309A (zh) 1998-03-04
CN1125362C true CN1125362C (zh) 2003-10-22

Family

ID=23452130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN95197619A Expired - Fee Related CN1125362C (zh) 1995-01-04 1995-12-28 三维成像系统

Country Status (18)

Country Link
US (1) US5790086A (zh)
EP (3) EP0957386A1 (zh)
JP (4) JP3231330B2 (zh)
KR (1) KR19980701263A (zh)
CN (1) CN1125362C (zh)
AT (1) ATE190410T1 (zh)
AU (1) AU702635B2 (zh)
BR (1) BR9510228A (zh)
CA (1) CA2208711C (zh)
CZ (1) CZ288672B6 (zh)
DE (1) DE69515522T2 (zh)
ES (1) ES2147622T3 (zh)
HK (1) HK1001782A1 (zh)
MX (1) MX9705051A (zh)
NZ (2) NZ297718A (zh)
PL (1) PL181803B1 (zh)
RU (1) RU2168192C2 (zh)
WO (1) WO1996021171A2 (zh)

Families Citing this family (98)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6014259A (en) * 1995-06-07 2000-01-11 Wohlstadter; Jacob N. Three dimensional imaging system
EP0785457A3 (en) * 1996-01-17 1998-10-14 Nippon Telegraph And Telephone Corporation Optical device and three-dimensional display device
US6304263B1 (en) 1996-06-05 2001-10-16 Hyper3D Corp. Three-dimensional display system: apparatus and method
US6259450B1 (en) 1996-06-05 2001-07-10 Hyper3D Corp. Three-dimensional display system apparatus and method
US6414433B1 (en) 1999-04-26 2002-07-02 Chad Byron Moore Plasma displays containing fibers
US6452332B1 (en) 1999-04-26 2002-09-17 Chad Byron Moore Fiber-based plasma addressed liquid crystal display
US7082236B1 (en) * 1997-02-27 2006-07-25 Chad Byron Moore Fiber-based displays containing lenses and methods of making same
US6459200B1 (en) 1997-02-27 2002-10-01 Chad Byron Moore Reflective electro-optic fiber-based displays
US6262694B1 (en) * 1997-03-11 2001-07-17 Fujitsu Limited Image display system
ID27878A (id) * 1997-12-05 2001-05-03 Dynamic Digital Depth Res Pty Konversi image yang ditingkatkan dan teknik mengenkodekan
AU6500999A (en) * 1998-09-28 2000-04-17 Rose Research, L.L.C. Method and apparatus for displaying three-dimensional images
US6611100B1 (en) 1999-04-26 2003-08-26 Chad Byron Moore Reflective electro-optic fiber-based displays with barriers
US6354899B1 (en) 1999-04-26 2002-03-12 Chad Byron Moore Frit-sealing process used in making displays
US6431935B1 (en) 1999-04-26 2002-08-13 Chad Byron Moore Lost glass process used in making display
US6174161B1 (en) * 1999-07-30 2001-01-16 Air Products And Chemical, Inc. Method and apparatus for partial oxidation of black liquor, liquid fuels and slurries
US7068434B2 (en) * 2000-02-22 2006-06-27 3M Innovative Properties Company Sheeting with composite image that floats
US6490092B1 (en) 2000-03-27 2002-12-03 National Graphics, Inc. Multidimensional imaging on a curved surface using lenticular lenses
US6714173B2 (en) * 2000-06-16 2004-03-30 Tdk Corporation Three dimensional screen display
US6570324B1 (en) 2000-07-19 2003-05-27 Eastman Kodak Company Image display device with array of lens-lets
KR20030029649A (ko) * 2000-08-04 2003-04-14 다이나믹 디지탈 텝스 리서치 피티와이 엘티디 화상 변환 및 부호화 기술
US6720961B2 (en) 2000-11-06 2004-04-13 Thomas M. Tracy Method and apparatus for displaying an image in three dimensions
US6985162B1 (en) * 2000-11-17 2006-01-10 Hewlett-Packard Development Company, L.P. Systems and methods for rendering active stereo graphical data as passive stereo
JP2002176660A (ja) * 2000-12-08 2002-06-21 Univ Tokyo 画像表示方法及び画像表示装置
US7061532B2 (en) 2001-03-27 2006-06-13 Hewlett-Packard Development Company, L.P. Single sensor chip digital stereo camera
US20020140133A1 (en) * 2001-03-29 2002-10-03 Moore Chad Byron Bichromal sphere fabrication
US7367885B2 (en) 2001-08-09 2008-05-06 Igt 3-D text in a gaming machine
US8267767B2 (en) 2001-08-09 2012-09-18 Igt 3-D reels and 3-D wheels in a gaming machine
US7901289B2 (en) 2001-08-09 2011-03-08 Igt Transparent objects on a gaming machine
US6887157B2 (en) 2001-08-09 2005-05-03 Igt Virtual cameras and 3-D gaming environments in a gaming machine
US8002623B2 (en) 2001-08-09 2011-08-23 Igt Methods and devices for displaying multiple game elements
US7909696B2 (en) 2001-08-09 2011-03-22 Igt Game interaction in 3-D gaming environments
RU2224273C2 (ru) * 2001-09-11 2004-02-20 Голенко Георгий Георгиевич Устройство голенко для получения объемного изображения объектов
WO2003077758A1 (en) * 2002-03-14 2003-09-25 Netkisr Inc. System and method for analyzing and displaying computed tomography data
US8369607B2 (en) 2002-03-27 2013-02-05 Sanyo Electric Co., Ltd. Method and apparatus for processing three-dimensional images
US7918730B2 (en) 2002-06-27 2011-04-05 Igt Trajectory-based 3-D games of chance for video gaming machines
WO2004038486A1 (ja) * 2002-10-23 2004-05-06 Pioneer Corporation 画像表示装置及び画像表示方法
US20050041163A1 (en) * 2003-05-07 2005-02-24 Bernie Butler-Smith Stereoscopic television signal processing method, transmission system and viewer enhancements
WO2005065085A2 (en) * 2003-12-21 2005-07-21 Kremen Stanley H System and apparatus for recording, transmitting, and projecting digital three-dimensional images
WO2006047487A2 (en) * 2004-10-25 2006-05-04 The Trustees Of Columbia University In The City Of New York Systems and methods for displaying three-dimensional images
US20060238545A1 (en) * 2005-02-17 2006-10-26 Bakin Dmitry V High-resolution autostereoscopic display and method for displaying three-dimensional images
WO2006111893A1 (en) 2005-04-19 2006-10-26 Koninklijke Philips Electronics N.V. Depth perception
JP5213701B2 (ja) * 2005-05-13 2013-06-19 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ レンダリング方法、信号処理システム、ディスプレイ装置及びコンピュータ可読媒体
US20070127909A1 (en) * 2005-08-25 2007-06-07 Craig Mowry System and apparatus for increasing quality and efficiency of film capture and methods of use thereof
JP4712872B2 (ja) 2005-06-03 2011-06-29 メディアポッド リミテッド ライアビリティ カンパニー 多次元画像化システムおよびその方法
CN101203887B (zh) * 2005-06-03 2011-07-27 米迪尔波得股份有限公司 提供用于多维成像的图像的照相机和多维成像系统
EP1938136A2 (en) * 2005-10-16 2008-07-02 Mediapod LLC Apparatus, system and method for increasing quality of digital image capture
WO2007134469A1 (de) * 2006-05-18 2007-11-29 Eth Zurich Anzeigevorrichtung
KR101023262B1 (ko) 2006-09-20 2011-03-21 니폰덴신뎅와 가부시키가이샤 화상 부호화 방법 및 복호 방법, 이들의 장치 및 이들의 프로그램과 프로그램을 기록한 기억매체
US8385628B2 (en) 2006-09-20 2013-02-26 Nippon Telegraph And Telephone Corporation Image encoding and decoding method, apparatuses therefor, programs therefor, and storage media for storing the programs
KR100846498B1 (ko) * 2006-10-18 2008-07-17 삼성전자주식회사 영상 해석 방법 및 장치, 및 동영상 영역 분할 시스템
KR100829581B1 (ko) * 2006-11-28 2008-05-14 삼성전자주식회사 영상 처리 방법, 기록매체 및 장치
US8330801B2 (en) * 2006-12-22 2012-12-11 Qualcomm Incorporated Complexity-adaptive 2D-to-3D video sequence conversion
KR101059178B1 (ko) * 2006-12-28 2011-08-25 니폰덴신뎅와 가부시키가이샤 영상 부호화 방법 및 복호방법, 그들의 장치, 그들의 프로그램을 기록한 기억매체
US8384710B2 (en) 2007-06-07 2013-02-26 Igt Displaying and using 3D graphics on multiple displays provided for gaming environments
TW200910975A (en) * 2007-06-25 2009-03-01 Nippon Telegraph & Telephone Video encoding method and decoding method, apparatuses therefor, programs therefor, and storage media for storing the programs
US7957061B1 (en) 2008-01-16 2011-06-07 Holovisions LLC Device with array of tilting microcolumns to display three-dimensional images
CN101516040B (zh) * 2008-02-20 2011-07-06 华为终端有限公司 视频匹配方法、装置及系统
US8508582B2 (en) * 2008-07-25 2013-08-13 Koninklijke Philips N.V. 3D display handling of subtitles
EP2351377A1 (en) * 2008-10-21 2011-08-03 Koninklijke Philips Electronics N.V. Method and system for processing an input three dimensional video signal
US7889425B1 (en) 2008-12-30 2011-02-15 Holovisions LLC Device with array of spinning microlenses to display three-dimensional images
CA2755164C (en) 2009-03-11 2014-02-25 Sensovation Ag Autofocus method and autofocus device
US7978407B1 (en) 2009-06-27 2011-07-12 Holovisions LLC Holovision (TM) 3D imaging with rotating light-emitting members
US8487836B1 (en) 2009-09-11 2013-07-16 Thomas A. Bodine Multi-dimensional image rendering device
JP2011109294A (ja) * 2009-11-16 2011-06-02 Sony Corp 情報処理装置、情報処理方法、表示制御装置、表示制御方法、およびプログラム
US8587498B2 (en) * 2010-03-01 2013-11-19 Holovisions LLC 3D image display with binocular disparity and motion parallax
EP2843618B1 (en) * 2010-04-18 2016-08-31 Imax Theatres International Limited Double stacked projection
JP6149339B2 (ja) 2010-06-16 2017-06-21 Nikon Corporation Display device
US20130093805A1 (en) * 2010-06-21 2013-04-18 Imax Corporation Double stacked projection
EP2418857A1 (en) * 2010-08-12 2012-02-15 Thomson Licensing Stereoscopic menu control
KR101448411B1 (ko) * 2010-08-19 2014-10-07 Nissan Motor Co., Ltd. Three-dimensional object detection device and three-dimensional object detection method
US10139613B2 (en) 2010-08-20 2018-11-27 Sakura Finetek U.S.A., Inc. Digital microscope and method of sensing an image of a tissue sample
CN101986350B (zh) * 2010-10-22 2012-03-28 Wuhan University Three-dimensional modeling method based on monocular structured light
US8860792B1 (en) 2010-11-02 2014-10-14 Tommy Lee Bolden Two dimensional to three dimensional video display system
RU2010148868A (ru) * 2010-11-30 2012-06-10 Svyatoslav Ivanovich Arsenich (RU) Projection system with end-face projection and a video projector for this system
CN103329165B (zh) * 2011-01-07 2016-08-24 Sony Computer Entertainment America Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene
US9622913B2 (en) * 2011-05-18 2017-04-18 Alcon Lensx, Inc. Imaging-controlled laser surgical system
CN103765869B (zh) 2011-08-16 2017-12-12 Imax Theatres International Limited Hybrid image decomposition and projection
JP6147262B2 (ja) 2011-10-20 2017-06-14 Imax Corporation Invisibility or low perceptibility of image alignment in dual projection systems
WO2013057717A1 (en) 2011-10-20 2013-04-25 Imax Corporation Distortion compensation for image projection
US8553942B2 (en) 2011-10-21 2013-10-08 Navteq B.V. Reimaging based on depthmap information
US9047688B2 (en) * 2011-10-21 2015-06-02 Here Global B.V. Depth cursor and depth measurement in images
CA2857531A1 (en) * 2011-12-08 2013-06-13 Exo U Inc. Method for improving an interaction with a user interface displayed on a 3d touch screen display
US9404764B2 (en) 2011-12-30 2016-08-02 Here Global B.V. Path side imagery
US9024970B2 (en) 2011-12-30 2015-05-05 Here Global B.V. Path side image on map overlay
US8879827B2 (en) * 2012-06-29 2014-11-04 Intel Corporation Analyzing structured light patterns
UA79936U (en) 2012-10-22 2013-05-13 Vasiliy Borisovich Odnorozhenko Autostereoscopic system
RU2515489C1 (ru) * 2013-01-11 2014-05-10 South-Russian State University of Economics and Service (SRSUES) Adaptive video signal filtering device
DE102013103971A1 (de) 2013-04-19 2014-11-06 Sensovation Ag Method for generating an overall image of an object composed of multiple partial images
GB2518019B (en) * 2013-12-13 2015-07-22 Aveva Solutions Ltd Image rendering of laser scan data
US10007102B2 (en) 2013-12-23 2018-06-26 Sakura Finetek U.S.A., Inc. Microscope with slide clamping assembly
CN104360533B (zh) * 2014-12-03 2017-08-29 BOE Technology Group Co., Ltd. 3D display device and display driving method thereof
CN105423170A (zh) * 2015-09-09 2016-03-23 Guangzhou Huike Optoelectronic Technology Co., Ltd. LED naked-eye 3D honeycomb lamp
US11280803B2 (en) 2016-11-22 2022-03-22 Sakura Finetek U.S.A., Inc. Slide management system
EP3425907B1 (en) * 2017-07-03 2022-01-05 Vestel Elektronik Sanayi ve Ticaret A.S. Display device and method for rendering a three-dimensional image
WO2019232768A1 (en) * 2018-06-08 2019-12-12 Chiu Po Hsien Devices for displaying 3d image
CN109765695B (zh) * 2019-03-29 2021-09-24 BOE Technology Group Co., Ltd. Display system and display device
CN111240035B (zh) * 2020-03-31 2022-03-01 Jilin Province Radio and Television Research Institute (Science and Technology Information Center of Jilin Province Radio and Television Bureau) Transmissive zoom scanning naked-eye three-dimensional display method
CN113280754A (zh) * 2021-07-22 2021-08-20 Tsinghua University High-precision depth computation device and method

Family Cites Families (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2961486A (en) * 1951-03-05 1960-11-22 Alvin M Marks Three-dimensional display system
NL260800A (zh) * 1958-09-03
US3555349A (en) * 1968-07-17 1971-01-12 Otto John Munz Three-dimensional television system
US3674921A (en) * 1969-11-12 1972-07-04 Rca Corp Three-dimensional television system
FR2094205A5 (zh) * 1970-03-06 1972-02-04 Anvar
US3878329A (en) * 1973-08-22 1975-04-15 Itt Orthoscopic image tube
JPS5792989A (en) * 1980-12-01 1982-06-09 Kiyoshi Nagata Transmission and receiving system for stereoscopic color television
US4571041A (en) * 1982-01-22 1986-02-18 Gaudyn Tad J Three dimensional projection arrangement
FR2531252B1 (fr) * 1982-07-29 1987-09-25 Guichard Jacques Method for displaying images in relief and device for implementing it
JPS59182688A (ja) * 1983-03-31 1984-10-17 Toshiba Corp Stereo vision processing device
US4925294A (en) * 1986-12-17 1990-05-15 Geshwind David M Method to convert two dimensional motion pictures for three-dimensional systems
US4704627A (en) * 1984-12-17 1987-11-03 Nippon Hoso Kyokai Stereoscopic television picture transmission system
JPS61198896A (ja) * 1985-02-28 1986-09-03 Canon Inc Stereoscopic display method for a stereoscopic display device
JPS61253993A (ja) * 1985-05-07 1986-11-11 Nippon Hoso Kyokai <NHK> Transmission method for stereoscopic television image signals
JPS6277794A (ja) * 1985-09-30 1987-04-09 Sony Corp Three-dimensional display device
US4829365A (en) * 1986-03-07 1989-05-09 Dimension Technologies, Inc. Autostereoscopic display with illuminating lines, light valve and mask
GB8626527D0 (en) * 1986-11-06 1986-12-10 British Broadcasting Corp 3d video transmission
FR2611926B1 (fr) * 1987-03-03 1989-05-26 Thomson Csf Collimated relief display device
US5081530A (en) * 1987-06-26 1992-01-14 Antonio Medina Three dimensional camera and range finder
GB8716369D0 (en) * 1987-07-10 1987-08-19 Travis A R L Three-dimensional display device
GB2210540A (en) * 1987-09-30 1989-06-07 Philips Electronic Associated Method of and arrangement for modifying stored data,and method of and arrangement for generating two-dimensional images
US4878735A (en) * 1988-01-15 1989-11-07 Lookingglass Technology, Inc. Optical imaging system using lenticular tone-plate elements
JPH07101259B2 (ja) * 1988-05-10 1995-11-01 Sharp Corporation Stereoscopic image display device
EP0360903B1 (en) * 1988-09-29 1994-01-26 Kabushiki Kaisha Toshiba Depth information buffer control apparatus
US5159663A (en) * 1988-11-22 1992-10-27 Wake Robert H Imager and process
GB2231750B (en) * 1989-04-27 1993-09-29 Sony Corp Motion dependent video signal processing
US5014126A (en) * 1989-10-23 1991-05-07 Vision Iii Imaging, Inc. Method and apparatus for recording images with a single image receiver for autostereoscopic display
US5220452A (en) * 1990-08-06 1993-06-15 Texas Instruments Incorporated Volume display optical system and method
US5175805A (en) * 1990-10-30 1992-12-29 Sun Microsystems, Inc. Method and apparatus for sequencing composite operations of pixels
US5202793A (en) * 1990-11-23 1993-04-13 John McCarry Three dimensional image display apparatus
JPH0568268A (ja) * 1991-03-04 1993-03-19 Sharp Corp Stereoscopic image creation device and stereoscopic image creation method
US5293467A (en) * 1991-04-03 1994-03-08 Buchner Gregory C Method for resolving priority between a calligraphically-displayed point feature and both raster-displayed faces and other calligraphically-displayed point features in a CIG system
DE4110951A1 (de) * 1991-04-05 1992-10-08 Bundesrep Deutschland Method for reducing the information to be transmitted when processing stereo image pairs
JPH05100623A (ja) * 1991-10-09 1993-04-23 Ricoh Co Ltd Display device
JPH05100204A (ja) * 1991-10-09 1993-04-23 Ricoh Co Ltd Display device
US5363241A (en) * 1992-04-07 1994-11-08 Hughes Aircraft Company Focusable virtual image display
US5325386A (en) * 1992-04-21 1994-06-28 Bandgap Technology Corporation Vertical-cavity surface emitting laser assay display system
US5279912A (en) * 1992-05-11 1994-01-18 Polaroid Corporation Three-dimensional image, and methods for the production thereof
GB9221312D0 (en) * 1992-10-09 1992-11-25 Pilkington Visioncare Inc Improvements in or relating to ophthalmic lens manufacture
JPH0764020A (ja) * 1993-06-15 1995-03-10 Nikon Corp Three-dimensional display and display method using the same
US5614941A (en) * 1993-11-24 1997-03-25 Hines; Stephen P. Multi-image autostereoscopic imaging system
GB9325667D0 (en) * 1993-12-15 1994-02-16 Total Process Containment Ltd Aseptic liquid barrier transfer coupling
US5543964A (en) * 1993-12-28 1996-08-06 Eastman Kodak Company Depth image apparatus and method with angularly changing display information
US5475419A (en) * 1994-06-29 1995-12-12 Carbery Dimensions, Ltd. Apparatus and method for three-dimensional video

Also Published As

Publication number Publication date
BR9510228A (pt) 1997-11-04
CZ207797A3 (cs) 1999-04-14
ATE190410T1 (de) 2000-03-15
ES2147622T3 (es) 2000-09-16
EP0957385A1 (en) 1999-11-17
KR19980701263A (zh) 1998-05-15
JP2002084554A (ja) 2002-03-22
EP0801763A2 (en) 1997-10-22
EP0957386A1 (en) 1999-11-17
DE69515522D1 (de) 2000-04-13
NZ297718A (en) 1999-04-29
DE69515522T2 (de) 2000-11-16
PL321264A1 (en) 1997-11-24
CA2208711A1 (en) 1996-07-11
EP0801763B1 (en) 2000-03-08
MX9705051A (es) 1997-10-31
WO1996021171A3 (en) 1996-09-06
CN1175309A (zh) 1998-03-04
HK1001782A1 (en) 1998-07-10
PL181803B1 (pl) 2001-09-28
NZ334276A (en) 2000-09-29
JP2002049005A (ja) 2002-02-15
US5790086A (en) 1998-08-04
CA2208711C (en) 2002-05-21
WO1996021171A2 (en) 1996-07-11
JP2002044685A (ja) 2002-02-08
RU2168192C2 (ru) 2001-05-27
JPH10512060A (ja) 1998-11-17
AU702635B2 (en) 1999-02-25
JP3231330B2 (ja) 2001-11-19
AU4295396A (en) 1996-07-24
CZ288672B6 (cs) 2001-08-15

Similar Documents

Publication Publication Date Title
CN1125362C (zh) Three-dimensional imaging system
US5036385A (en) Autostereoscopic display with multiple sets of blinking illuminating lines and light valve
EP0570179B1 (en) Directional display
US20080309754A1 (en) Systems and Methods for Displaying Three-Dimensional Images
CN1476730A (zh) Autostereoscopic image display device with viewer tracking system
CN1209704A (zh) Remote approval of cylindrical images
CN1700776A (zh) Three-dimensional image display method, three-dimensional image capturing method, and three-dimensional display device
CN1949166A (zh) Free multi-viewpoint multi-projection three-dimensional display system and method
CN1640153A (zh) Three-dimensional image projection using a retro-reflective screen
CN108919503B (zh) Integral-imaging 360° tabletop 3D display system based on a viewing-angle-guiding layer
US10078228B2 (en) Three-dimensional imaging system
CN1271424A (zh) Apparatus for three-dimensional imaging and recording
CN102209254A (zh) One-dimensional integral imaging method and apparatus
CN102116938A (zh) Panoramic-field-of-view three-dimensional display device based on a cylindrical converging directional screen
JP2000503422A (ja) Still screen for moving images
CN201352301Y (zh) Holographic projection apparatus
CN106896514A (zh) Multi-directional backlight module, and integral imaging display device and display method incorporating such a module
US20030146883A1 (en) 3-D imaging system
Zhou et al. Light field projection for lighting reproduction
CN1482491A (zh) Three-dimensional photography method
JP2003307800A (ja) Apparatus for capturing and displaying stereoscopic images
Swash et al. Omnidirectional Holoscopic 3D content generation using dual orthographic projection
AU3122299A (en) 3-D imaging system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C19 Lapse of patent right due to non-payment of the annual fee
CF01 Termination of patent right due to non-payment of annual fee