Method for processing video pictures for display on a display device
The invention relates to a method for processing video pictures for display on a display device.
More specifically the invention is closely related to a kind of video processing for improving the picture quality of pictures which are displayed on matrix displays like plasma display panels (PDP) or other display devices where the pixel values control the generation of a corresponding number of small lighting pulses for luminance generation on the display.
Background
The Plasma technology now makes it possible to achieve flat colour panels of large size (greater than possible with CRTs) with very limited depth and without any viewing angle constraints.
Referring to the last generation of European TV sets, a lot of work has been done to improve their picture quality. Consequently, a new technology like the Plasma one has to provide a picture quality as good as or better than standard TV technology. On the one hand, the Plasma technology gives the possibility of "unlimited" screen size and of attractive thickness, but on the other hand, it generates new kinds of artefacts which could degrade the picture quality.
Most of these artefacts have a different appearance than those in TV pictures on a CRT screen, and that makes them more visible since people are used to seeing the old TV artefacts unconsciously.
The subject artefact, with which the invention deals, is called "dynamic false contour effect" since it corresponds to disturbances of gray levels and colors in the form of an appearance of colored edges in the picture when an observation point on the PDP screen moves. The effect is most visible when the image has a smooth gradation like skin.
Fig. 1 shows the simulation of such a false contour effect on a natural scene with skin areas. On the arm of the displayed woman two dark lines are shown, which are e.g. caused by this false contour effect. Also in the face of the woman such dark lines occur on the right side.
In addition, the same problem occurs on static images when observers are moving their heads, and that leads to the conclusion that such a failure depends on the human visual perception and happens on the retina.
Some algorithms are known today, which are based on motion estimation in video pictures in order to be able to anticipate the motion of the critical observation points to reduce or suppress this false contour effect. In most cases, these different algorithms are focused on the sub-field coding part without giving detailed information concerning the motion estimators used.
In the past, the motion estimator evolution was mainly focused on flicker reduction for European TV standards (e.g. upconversion from 50 Hz to 100 Hz) or proscan conversion, and on video compression in the scope of MPEG encoding and so on. Nevertheless, the problems which have to be solved for such applications are different from the PDP dynamic false contour issue.
A Plasma Display Panel (PDP) utilizes a matrix array of discharge cells which can only be "ON" or "OFF". Thus, unlike a CRT or LCD in which gray levels are expressed by analog control of the light emission, a PDP controls the gray level by modulating the number of light pulses per frame. This time-modulation will be integrated by the eye over a period corresponding to the eye time response.
When an observation point (eye focus area) on the PDP screen moves, the eye will follow this movement. Consequently, it will no longer integrate the light from the same cell over a frame period (static integration) but will integrate information coming from different cells located on the movement trajectory, and it will mix all these light pulses together, which leads to faulty signal information.
Today, a basic idea to reduce this false contour effect is to detect the movements in the picture (displacement of the eye focus area) and to apply different types of corrections over this displacement in order to be sure the eye will only perceive the correct information through its movement. Such solutions are described e.g. in EP-A-0 980 059 and EP-A-0 978 816, which are published European Patent Applications of the applicant.
As already mentioned, a dynamic false contour effect reduction can be done by making specific corrections on a movement trajectory defined by the motion vectors.
Since a PDP is a matrix array of plasma cells, each kind of correction has to respect this matrix segmentation of the panel. A motion vector will therefore be used to determine in the matrix of pixels a trajectory along which to apply the compensation. For that purpose it is necessary to convert a vector into a discrete trajectory, which can lead to faulty or partial compensations.
Invention
According to the invention a way is proposed to improve the quality of the false contour effect reduction using a standard motion estimator with a post-processing of the estimated motion vectors according to the human visual system.
It can be implemented for each kind of Plasma technology at each level of its development (even if the scanning mode and sub-field distribution are not well defined).
In the method according to this invention, the characteristics of the human visual system are used to implement a post-processing on the vectors coming from the motion estimator. That makes it possible to define a compensation trajectory respecting the human eye in order to improve the global quality of the compensation.
One aspect of the invention is that the estimated motion vectors are converted into a more symmetrical form which allows corrections to be distributed along the vector more symmetrically around it. This better respects the behavior of the human visual system. An advantageous algorithm for this conversion is claimed in claim 2.
This algorithm is relatively simple to implement since it does not require complicated computations.
To further improve the compensation method it is advantageous that the motion vector components estimated for a pixel are rounded down to integer values, irrespective of the rational component value of each vector component, before symmetrization. This has the advantage that over-compensation is reliably avoided. In contrast, this means that under-compensation is accepted instead. Over-compensation has the disadvantage that the false contour changes its appearance. E.g. a different color can occur due to over-compensation. This is very disturbing for the viewer.
A third aspect of the invention concerns a specific rounding process for calculating the positions of corrections on signal level as claimed in claim 6. According to this rounding process, the pixel coordinates for a correction value are rounded down if the rational component of the pixel coordinate is in a lower range; the pixel coordinate is rounded both up and down if the rational component is in a medium range, thus determining two correction positions; and the pixel coordinates for a correction value are rounded up if the rational component of the pixel coordinate is in an upper range.
Drawings
Exemplary embodiments of the invention are illustrated in the drawings and are explained in more detail in the following description.
In the figures:
Fig. 1 shows a video picture in which the false contour effect is simulated;
Fig. 2 shows an illustration for explaining the sub-field organization of a PDP;
Fig. 3 shows an illustration for explaining the false contour effect;
Fig. 4 illustrates the appearance of a dark edge when a display of two frames is being made in the manner shown in Fig. 3;
Fig. 5 shows a possible trajectory for distributing false contour corrections along a motion vector of a critical observation point on a plasma screen;
Fig. 6 shows a model of how the human brain analyses visual information;
Fig. 7 shows a 3D-illustration of a Gabor wavelet used in the model of Fig. 6;
Fig. 8 shows a 2D-illustration of a Gabor wavelet projected on a plasma screen;
Fig. 9 shows a first illustration of distributing corrections along a motion vector which will lead to a non-optimal false contour correction;
Fig. 10 shows a sub-field organisation with 12 sub-fields;
Fig. 11 shows the centers of gravity for the sub-fields of the sub-field organisation shown in Fig.10;
Fig. 12 shows a second illustration of distributing corrections along a motion vector respecting the symmetry of the Gabor function;
Fig. 13 shows a third illustration of distributing corrections along a motion vector not respecting the symmetry of the Gabor function;
Fig. 14 shows an illustration of a special rounding process for distributing corrections on signal amplitude level instead of sub-field level;
Fig. 15 shows an illustration of distributing corrections on signal amplitude level along a motion vector applying a first specific rounding scheme; and
Fig. 16 shows an illustration of distributing corrections on signal amplitude level along a motion vector applying a second specific rounding scheme.
Exemplary embodiments
As previously mentioned, a Plasma Display Panel (PDP) utilizes a matrix array of discharge cells which can only be "ON" or "OFF". The PDP controls the gray level by modulating the number of light pulses per frame. This time modulation will be integrated by the eye over a period corresponding to the human eye time-response.
In TV technology an 8-bit representation of the luminance levels for the RGB colour components is very common. In that case each level will be represented by a combination of the following 8 bits:
1-2-4-8-16-32-64-128
To realize such a coding with the PDP technology, the frame period will be divided into 8 lighting periods (called sub-fields), each one corresponding to a bit. The number of light pulses for the bit "2" is double that for the bit "1" and so on. With these 8 sub-periods, it is possible through combination to build the 256 different luminance levels. Without motion, the eye of the observer will integrate these sub-periods over about a frame period and catch the impression of the right gray level. Fig. 2 represents this decomposition. In this figure the addressing and erasing periods of every sub-field are not shown. The plasma driving principle however also requires these periods. It is well known to the skilled person that during each sub-field a plasma cell needs to be addressed first, in an addressing or scanning period; afterwards the sustain period follows, where the light pulses are generated; and finally, in an erase period, the charge in the plasma cells is quenched.
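The bit-plane decomposition described above can be sketched in a few lines of Python (an illustrative sketch, not part of the claimed method; the function name is chosen freely):

```python
def subfield_code(level):
    """Decompose an 8-bit gray level into the lit sub-field weights.

    A sub-field with weight w produces w times as many light pulses as
    the weight-1 sub-field; the eye integrates the lit sub-fields over
    roughly one frame period into the intended gray level.
    """
    weights = [1, 2, 4, 8, 16, 32, 64, 128]
    return [w for w in weights if level & w]
```

For example, level 127 lights the first seven sub-fields (1+2+4+8+16+32+64 = 127), while level 128 lights only the last one, which is exactly the critical transition studied below.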
This PWM-type light generation introduces new categories of image-quality degradation corresponding to disturbances of gray levels or colors. The name for this effect is dynamic false contour effect since it corresponds to the appearance of colored edges in the picture when an observation point on the PDP screen moves. Such failures in a picture lead to an impression of strong contours appearing on homogeneous areas like skin. The degradation is enhanced when the image has a smooth gradation and also when the light-emission period exceeds several milliseconds. In addition, the same problems occur on static images when observers are moving their heads, and that leads to the conclusion that such a failure depends on the human visual perception.
To understand a basic mechanism of visual perception of moving plasma images, a simple case will be considered. Let us assume a transition between the levels 128 and 127 moving at 5 pixels per frame, and the eye following this movement.
Fig. 3 represents in light gray the lighting sub-fields corresponding to the level 127 and in dark gray those corresponding to the level 128.
The diagonal parallel lines originating from the eye indicate the behavior of the eye integration during a movement. The two outer diagonal eye-integration lines show the borders of the region with faulty perceived luminance. Between them, the eye will perceive a lack of luminance, which leads to the appearance of a dark edge as indicated in the eye stimuli integration curve at the bottom of Fig. 3.
This is also illustrated in Fig. 4 for the same moving transition.
The false contour effect is produced on the eye retina when the eye follows a moving object, since the eye does not integrate the right information at the right time. There are different methods to reduce such an effect, but the most serious ones are based on motion estimation (dynamic methods), which aim to detect the movement of each pixel in a frame in order to anticipate the eye movement and to reduce the failure appearing on the retina through different corrections. In other words, the goal of each dynamic algorithm is to define, for each pixel observed by the eye, the way the eye is following its movement during a frame in order to generate a correction on this trajectory. Such algorithms are described e.g. in EP-A-0 980 059 and EP-A-0 978 816, which are European patent applications of the applicant.
Consequently, for each pixel of the frame N, there is a motion vector V = (Vx;Vy) which describes the complete motion of the pixel from the frame N to the frame N+1. Nevertheless, the goal of a false contour compensation is to apply a compensation on the complete trajectory. In other words, such an algorithm needs a way to convert this vector into a trajectory on a matrix display.
Taking the example of a vector V = (7;3), as shown in Fig. 5, it is evident that the definition of this vector is not enough to determine one trajectory. One trajectory is shown in the figure with a dashed line. There are other possibilities to distribute corrections along the vector.
In Fig. 5, the vector represents the real motion of a pixel, that means the real trajectory the eye will follow when it locks onto this pixel. The dashed line represents a possible trajectory in the matrix array. Yet, there are different trajectories possible, and it is necessary to define a trajectory as near as possible to the eye integration trajectory. According to the invention this is done according to the human visual system, which will be described in more detail hereinafter.
The complete human visual system can be seen as a picture encoder to reduce the information received by the retina to the essential information which could be rapidly interpreted by the brain.
For instance, the pupil can be seen as a low-pass filter which reduces the amount of high spatial frequencies. It is not required here to make a complete exposition of the human visual system, but only to extract some important characteristics from the HVS to explain the ideas included in this invention disclosure.
One key point of the human visual system is the fact that the cortex areas will analyse the incoming picture with a discrete filter bank, as illustrated in Fig. 6.
This figure shows that the signal coming from the eye will be analyzed in preferential directions and that the number of these directions is limited (discrete analysis). The signal for each direction is analysed in a filter bank for different spatial frequencies.
In fact, medical experiments have shown that this decomposition by means of a filter bank can be seen as a mathematical decomposition in Gabor wavelets and describes very well the behaviour of the simple receptive fields of the cortex cells.
The mathematical formula of such a Gabor wavelet is the following one:

f(x,y) = exp(-π[(x-x0)²a² + (y-y0)²b²]) × exp(-2iπ[u0(x-x0) + v0(y-y0)])

where (x0,y0) represents the position of the directional filter modulated by the spatial frequencies (u0,v0) and with an orientation of arctan(v0/u0); a and b are parameters. The value |f| represents the excitation intensity in the brain corresponding to the perception strength.
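For illustration, the wavelet can be evaluated numerically; the following sketch simply transcribes the formula above (parameter names follow the formula; this is not part of the claimed method):

```python
import cmath
import math

def gabor(x, y, x0, y0, u0, v0, a, b):
    """Gabor wavelet: a Gaussian envelope centered at (x0, y0),
    modulated by a complex sinusoid with spatial frequencies (u0, v0)."""
    envelope = math.exp(-math.pi * ((x - x0) ** 2 * a ** 2
                                    + (y - y0) ** 2 * b ** 2))
    carrier = cmath.exp(-2j * math.pi * (u0 * (x - x0) + v0 * (y - y0)))
    return envelope * carrier
```

The magnitude |f| peaks at the filter position (x0, y0) and decays with distance, which is why the area directly around the motion vector dominates the eye integration.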
In Fig. 7 the characteristic of such a Gabor function is illustrated. The left part of Fig. 7 gives a 3D plot of the Gabor function; the right part of Fig. 7 illustrates the same function with a colored 2D plot.
These graphics show how the eye will analyse an object transition or a movement in a spatial direction.
In Fig. 8 the 2D plot of the Gabor function for a moving pixel with motion vector V = (7;3) is shown; the same pixel movement is shown in Fig. 5.
In Fig. 8, the middle area (directly around the vector) represents the most sensitive one for the eye integration. Consequently, each kind of dynamic false contour compensation should be spread over this area, which is important for the definition of the compensation trajectory.
It is a basic aspect of the invention to define a vector post-processing method which enables such an adapted compensation.
In theory, the different motion vectors could have any kind of values and so any kind of direction. For a computer algorithm in plasma displays it is however advantageous to convert the two vector components to integer values.
The aim of each compensation is to reduce, in the right direction taking into account the right amplitude of the movement, the false contour effect.
In fact, since the compensation has to be applied on a matrix array of pixels (discrete positions), the two motion vector components have to be integers in order to apply a correction on a discrete trajectory (defined with integers). In that case, it is necessary to round the vector components coming from the motion estimator. Any kind of rounding could be used. Nevertheless, experiments made on different available motion estimators showed that rounding down can improve the final result. In order to simplify the further explanations, in the following it is presupposed that all motion vectors are rounded down.
It is obvious that a compensation which is based on a lower value of the movement amplitude but which respects the right direction will still provide a gain in the reduction of the false contour. On the other hand, if the compensated motion amplitude is too high, this will generate a false contour effect at the opposite level to the effect one tries to compensate. In addition, jumping from an under-compensation to an over-compensation will suddenly change the color of the false contour, and that makes it more visible, too.
Consequently, according to the invention, for all computations one kind of rounding is precisely defined and in addition under-compensation is used, i.e. the used motion amplitude is lower than the real one.
According to this, the first stage of the new compensation algorithm will convert each vector component to an integer value with a rounding down to the nearest lower integer:
V' = (round↓(Vx); round↓(Vy)).
Before going further, an example of an improper compensation is illustrated in Fig. 9. In this figure a compensation based on 10 corrections numbered from 0 to 9 is illustrated for the motion vector V = (7;3).
The numbers 0 to 9 correspond in each case to one of the ten elements of the compensation. It is evident from Fig. 9, with the 2D plot of the Gabor function included, that this compensation does not respect the symmetry of the human eye integration function and thus leads to a non-optimal false contour correction.
As explained above, the human cerebral cortex decomposes each movement and stimulus in preferential directions. In fact, since the human visual system does not dispose of an infinite number of such directions, those directions can be defined as discrete ones. For that purpose one principle of the new algorithm is to convert the motion vectors to a discrete number of directions (convert all vectors to specific ones, which leads to a more symmetrical compensation).
The vector components are rounded to integers and consequently the direction given by each vector is based on the ratio between the two integer vector components. In order to create a discrete space of directions, a good possibility is to define which integer ratios are allowed for motion vectors. For that purpose, the second stage of the processing will correspond to a modification of the vector components as described below:
- select the smallest vector component: S = min(Vx, Vy);
- compute the ratio R between S and the larger vector component: R = Vl / S, in which Vl = max(Vx, Vy);
- round the ratio R and then update the larger vector component: Vl' = round(R) × S.

For instance the vector V = (7;3) will be converted to the vector V' = (6;3) and the vector V = (2;9) will be converted to V' = (2;8). These two vectors are elements of a discrete space of vectors and their form leads to a better symmetry of the compensation.
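The two-stage conversion can be sketched as follows (an illustrative sketch assuming non-negative vector components; the tie handling for a ratio ending in .5 is inferred from the worked example (2;9) → (2;8) and the document's stated preference for under-compensation):

```python
import math

def symmetrize(vx, vy):
    """Two-stage motion-vector post-processing.

    Stage 1: round both components down to integers (under-compensation).
    Stage 2: round the ratio of the larger to the smaller component and
    rebuild the larger component, restricting vectors to a discrete set
    of directions with a more symmetrical correction trajectory.
    """
    vx, vy = math.floor(vx), math.floor(vy)   # stage 1: round down
    s = min(vx, vy)                           # smallest component S
    vl = max(vx, vy)                          # largest component Vl
    if s == 0:
        return vx, vy                         # axis-aligned: nothing to adjust
    r = vl / s                                # ratio R
    # ties (R ending in .5) round down, as implied by (2;9) -> (2;8)
    r = math.floor(r) if r - math.floor(r) <= 0.5 else math.ceil(r)
    vl = r * s                                # updated larger component
    return (vl, s) if vx >= vy else (s, vl)
```

For example, symmetrize(7, 3) returns (6, 3) and symmetrize(2, 9) returns (2, 8), matching the examples above.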
Now, this processing will be illustrated through an example based on a sub-field shifting algorithm. This algorithm is explained in detail in EP-A-0 980 059, which is a European Patent Application of the applicant. For the disclosure regarding this algorithm it is therefore expressly referred to this patent application.
The complete algorithm can best be explained with a concrete example. For this example, the sub-field organisation shown in Fig. 10 is selected. Considered is a motion vector defined by
V = (7.3;3.7).
Rounding down of the vector components results in the motion vector V' = (7;3).
Conversion of the new vector to a basic one results in V'' = (6;3).
The main idea of the sub-field shifting algorithm is to anticipate the movement in order to position the different bit planes of the moving area on the eye integration trajectory. That means the different bit planes are shifted depending on the eye movement to make sure that the eye receives the right information at the right time. For that purpose centers of gravity have been defined for the sub-fields:
G(n) = Σ(i=1..n-1) Dur(i) + Dur(n)/2
in which G(n) represents the center of gravity location in the frame, n the current sub-field and Dur(n) the duration of the sub-field. This duration includes the addressing time but not the erasing time:

Dur(n) = Tadd + Tn

in which Tadd represents the duration of the addressing period and Tn the duration of the sustain period of the sub-field itself. The erasing period is substantially smaller and is neglected in this embodiment, but it can alternatively also be taken into account in another embodiment.
The resulting centers of gravity for the considered sub-field organisation are shown in Fig. 11.
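The center-of-gravity computation can be sketched as follows (an illustrative sketch; the addressing time and sustain durations are inputs, since the numeric values for the 12 sub-fields of Fig. 10 are not reproduced here, so any values passed to this function are hypothetical):

```python
def centers_of_gravity(t_add, sustain_durations):
    """Temporal center of gravity G(n) of each sub-field in a frame.

    G(n) is the sum of the durations of all preceding sub-fields plus
    half the duration of sub-field n, with Dur(n) = Tadd + Tn (the
    erase period is neglected, as in the embodiment above).
    """
    gravity, elapsed = [], 0.0
    for t_n in sustain_durations:
        dur = t_add + t_n          # Dur(n) = Tadd + Tn
        gravity.append(elapsed + dur / 2)
        elapsed += dur
    return gravity
```

For instance, with a hypothetical addressing time of 1 and sustain periods [1, 3, 5], the sub-field durations are 2, 4 and 6 and the centers of gravity fall at 1, 4 and 9.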
Given a motion vector V = (Vx;Vy) for a current pixel, the entries in the sub-field code word for this pixel will be shifted to new pixel positions, where the shift coordinates are calculated according to the following formulae:

Δx_n = (Vx × G(n)) / Dur(F) and Δy_n = (Vy × G(n)) / Dur(F)

in which Dur(F) represents the complete duration of the frame.
In the above mentioned example where V'' = (6;3), the following results are achieved:
Since a correction can only be applied at distinct pixel positions defined by two coordinates in the matrix of pixels, it is required to round the previously computed values. Different kinds of rounding rules could be applied; for this example a rounding down of each previously computed coordinate has been chosen to achieve under-compensation. However, it is mentioned that some other kind of rounding could also be used here. With rounding down, the following results are achieved.
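Combining the formulae above, the per-sub-field shifts with rounding down can be sketched as follows (illustrative; the gravity values would come from the computation for Fig. 11, so hypothetical numbers are used in the example run):

```python
import math

def subfield_shifts(vx, vy, gravity, frame_duration):
    """Shift coordinates (dx_n, dy_n) for every sub-field of a pixel.

    dx_n = Vx * G(n) / Dur(F) and dy_n = Vy * G(n) / Dur(F), each
    rounded down so the shifted bit planes land on discrete pixel
    positions (under-compensation).
    """
    return [(math.floor(vx * g / frame_duration),
             math.floor(vy * g / frame_duration))
            for g in gravity]
```

With the converted vector V'' = (6;3) and, say, hypothetical centers of gravity [1, 4, 9] in a frame of duration 12, the early sub-fields are not shifted at all while the late ones are shifted furthest, as in Fig. 12.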
In Fig. 12 the resulting compensation is illustrated. For the first 5 sub-fields, there is no shifting at all. The sub-field entries for the sixth and seventh sub-fields are shifted one pixel to the right. The sub-field entry for the 12th sub-field is shifted five pixels to the right and three pixels upwards. All the remaining shifts can easily be seen from Fig. 12.
This compensation shall be compared to a standard compensation based on the rounded motion vector V' = (7;3), but without rounding of the vector component ratio and adjusting of the larger vector component, in the following table and in Fig. 13. In both figures, Fig. 12 and Fig. 13, the motion vector V' = (7;3) is depicted and the compensation results for this vector can be directly compared. Of course, in Fig. 12 the correction results are based on a calculation with the converted vector V'' = (6;3). It is evident from this comparison that the compensation based on V'' = (6;3) respects the symmetry of the human visual system for the motion vector V' = (7;3) better than the one based directly on vector V' = (7;3).
A comparison of Fig. 12 with Fig. 13 shows that the compensation result shown in Fig. 13 does not respect the symmetry of the human visual system. There are five corrections on pixel positions below the motion vector against only one correction on pixel positions above the motion vector. In that case, simulations showed some artefacts produced by the compensation itself, even in areas where no false contour is visible. In addition, when the vector space is discrete, the compensation stays more stable between two successive vector changes.
Today, there are a number of false contour correction methods based on motion vectors which make a correction on the signal amplitude basis for a pixel, not on sub-field level. In that case, one possibility is to add a positive or negative signal amplitude to the original pictures at different positions depending on the motion vectors. There are also some other possibilities known which can be used here, but they will not be further mentioned in this application.
In the considered case, at each matrix position a combination of two or more corrections can be applied. Consequently, given a correction, one or two positions for this correction can be defined in the matrix.
Let N be the number of corrections for the critical moving pixels of a picture. Then a simple way to determine a correction trajectory for the motion vector V = (Vx;Vy) is to compute the position P_i = (Δx_i;Δy_i) for each correction item, with i = 1,...,N, defined by the formula:

Δx_i = i × Vx/N and Δy_i = i × Vy/N.
This is simple but does not respect the symmetry of the Gabor function. In order to respect the symmetry of the Gabor function, a special rounding processing will be applied according to another aspect of the invention. This aims to produce an artificial symmetry in the compensation. In that case, each compensation could be applied at one or two positions as described below and illustrated in Fig. 14. Fig. 14 illustrates the following processing:
If Δε_i = i × Vε/N < round↓(i × Vε/N) + I%, then Δ'ε_i = round↓(i × Vε/N), with ε element of {x, y} and i = 1, ..., N.

If round↓(i × Vε/N) + I% ≤ Δε_i = i × Vε/N ≤ round↓(i × Vε/N) + S%, then Δ'ε_i takes both the values round↓(i × Vε/N) and round↑(i × Vε/N).

If Δε_i = i × Vε/N > round↓(i × Vε/N) + S%, then Δ'ε_i = round↑(i × Vε/N).
Here, round↑ means rounding up of the value in brackets and round↓ means the mathematical operation of rounding down the value in brackets. I% is the programmable border for the rounding-down region and S% is the corresponding border for the rounding-up region. Obviously, the borders I% and S% could have different values depending on the compensation algorithm used. The region in between the borders I% and S% is a region where both rounding down and rounding up of the correction position components are done, thus leading to two correction positions in case only one component is treated in this way and to four correction positions in case both components are treated in this way.
This processing is also illustrated with a simple example in which the vector V = (6;3) and N = 9 are used. First, the case is considered with the borders I% = 40% and S% = 60%:
The following table shows the computation of the positions for the 9 corrections defined by P_i = (Δ'x_i;Δ'y_i).
In that case, the implementation of the correction will look as depicted in Fig. 15.
The numbers used in Fig. 15 denote the correction positions in the pixel matrix. The correction values for these positions need to be calculated. In the easiest case a constant value can be used which adds or subtracts some luminance to the pixels of each correction position depending on the moving transition. More sophisticated correction value distribution algorithms can also be used, e.g. a correction based on an algorithm which makes the correction values increase and decrease with the correction number. Examples of these algorithms are described in the articles:
- "4.3: An Equalizing Pulse Technique for Improving the Gray Scale Capability of Plasma Displays" - Euro Display '96
- "15.3: A Motion Dependent Equalizing-Pulse Technique for Reducing Gray-Scale Disturbances on PDPs" - SID 97 DIGEST.

Therefore, these algorithms need not be described in greater detail here.
The next case considered is the case with other values for the borders I% and S% and the same motion vector and number of corrections. The following table shows the new computation results of the positions for the 9 corrections defined by P_i = (Δ'x_i;Δ'y_i).
Fig. 16 shows what the compensation will look like. Here, there are a lot of correction position components rounded both up and down, thus duplicating the number of corrections. Consequently, a change of the values of I% and S% will have an impact on the density of the corrections on the movement trajectory.
In fact, in the cells containing more than one correction, a combination of these corrections will be necessary, and it will depend on the correction type. One possibility to make a combination is to take the mean value of all the correction values for this position.