CA2696721C - Digital broadcasting system and method of processing data in digital broadcasting system - Google Patents


Info

Publication number
CA2696721C
CA2696721C (application CA2696721A)
Authority
CA
Canada
Prior art keywords
data
field
ensemble
frame
fic
Legal status (assumed; not a legal conclusion): Expired - Fee Related
Application number
CA2696721A
Other languages
French (fr)
Other versions
CA2696721A1 (en)
Inventor
Jae Hyung Song
In Hwan Choi
Jong Yeul Suh
Jin Pil Kim
Choon Lee
Chul Soo Lee
Current Assignee (the listed assignee may be inaccurate)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Application filed by LG Electronics Inc
Publication of CA2696721A1
Application granted
Publication of CA2696721C


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/015 High-definition television systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H40/00 Arrangements specially adapted for receiving broadcast information
    • H04H40/18 Arrangements characterised by circuits or components specially adapted for receiving
    • H04H40/27 Arrangements characterised by circuits or components specially adapted for receiving specially adapted for broadcast systems covered by groups H04H20/53 - H04H20/95
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/438 Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
    • H04N21/4382 Demodulation or channel decoding, e.g. QPSK demodulation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643 Communication protocols
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643 Communication protocols
    • H04N21/64322 IP
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643 Communication protocols
    • H04N21/6437 Real-time Transport Protocol [RTP]

Abstract

A digital broadcast system and a method of processing data are disclosed. A receiving system of the digital broadcast system may include a baseband processor, a management processor, and a presentation processor. The baseband processor receives broadcast signals including mobile service data and main service data. The mobile service data configure an RS frame, and the RS frame includes the mobile service data and at least one type of channel setting information on the mobile service data. The management processor decodes the RS frame to acquire the mobile service data and the at least one type of channel setting information on the mobile service data, and then extracts position information of an SDP message. Herein, the SDP message includes Codec information for each component in the respective virtual channel from the channel setting information. The management processor accesses the SDP message from the extracted position information and gathers SDP message information. The presentation processor decodes mobile service data of a corresponding component based upon the gathered SDP message information.

Description

74420-424
Description
DIGITAL BROADCASTING SYSTEM AND METHOD OF PROCESSING DATA IN DIGITAL BROADCASTING SYSTEM
Technical Field [1] The present invention relates to a digital broadcasting system and a method of processing data in a digital broadcasting system for transmitting and receiving digital broadcast signals.

Background Art [2] The Vestigial Sideband (VSB) transmission mode, which is adopted as the standard for digital broadcasting in North America and the Republic of Korea, is a system using a single carrier method. Therefore, the receiving performance of the digital broadcast receiving system may deteriorate in a poor channel environment. In particular, since portable and/or mobile broadcast receivers require greater resistance to channel changes and noise, the receiving performance may deteriorate even further when mobile service data are transmitted in the VSB transmission mode.

Disclosure of Invention According to an aspect of the present invention, there is provided a method of transmitting broadcast data in a digital broadcast transmitting system, the method comprising: Reed Solomon-Cyclic Redundancy Check (RS-CRC) encoding, by a Reed-Solomon (RS) frame encoder, mobile data to build at least one of a primary RS frame belonging to a primary ensemble and a secondary RS frame belonging to a secondary ensemble; mapping the RS-CRC encoded mobile data into data groups and adding known data sequences, a portion of fast information channel (FIC) data, and transmission parameter channel (TPC) data to each of the data groups, wherein the FIC data includes information for rapid mobile service acquisition, and wherein the TPC data includes version information for indicating an update of the FIC data and a parade identifier to identify a parade which carries at least one of the primary ensemble and the secondary ensemble; multiplexing data in the data groups and main data; and transmitting a transmission frame including the multiplexed data, wherein the FIC data are divided into a plurality of FIC
segment payloads, and each FIC segment including an FIC segment header and one of the plurality of FIC segment payloads is transmitted in each of the data groups, wherein the primary ensemble includes at least one mobile service and a first service map table and the secondary ensemble includes at least one mobile service and a second service map table, wherein the first service map table comprises a first ensemble identifier to identify the primary ensemble and the second service map table comprises a second ensemble identifier to identify the secondary ensemble, and wherein the first and second ensemble identifiers include the parade identifier, respectively.
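The FIC segmentation described in this aspect, in which the FIC data are divided into segment payloads and each data group carries one segment with its own header, can be sketched as follows. This is a minimal illustration: the two-byte header (segment number, last segment number), the payload length, and the zero-padding rule are assumptions, not the standard FIC segment format.

```python
def segment_fic(fic_data: bytes, payload_len: int) -> list[bytes]:
    """Split an FIC chunk into fixed-size segment payloads and prepend a
    minimal segment header (segment number, last-segment number).
    The 2-byte header layout here is illustrative, not the standard one."""
    payloads = [fic_data[i:i + payload_len]
                for i in range(0, len(fic_data), payload_len)]
    last = len(payloads) - 1
    segments = []
    for num, payload in enumerate(payloads):
        # Pad the final payload so every segment has the same length.
        payload = payload.ljust(payload_len, b"\x00")
        header = bytes([num, last])
        segments.append(header + payload)
    return segments

segments = segment_fic(b"FIC-CHUNK-EXAMPLE-DATA", 8)
```

Each resulting segment would then be carried in one data group, and a receiver can reassemble the FIC chunk once segments 0 through `last` have been collected.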

According to another aspect of the present invention, there is provided a digital broadcast transmitting system comprising: a Reed-Solomon (RS) frame encoder for Reed Solomon-Cyclic Redundancy Check (RS-CRC) encoding mobile data to build at least one of a primary RS frame belonging to a primary ensemble and a secondary RS frame belonging to a secondary ensemble; a group formatting means for mapping the RS-CRC encoded mobile data into data groups and adding known data sequences, a portion of fast information channel (FIC) data, and transmission parameter channel (TPC) data to each of the data groups, wherein the FIC data includes information for rapid mobile service acquisition, and wherein the TPC data includes version information for indicating an update of the FIC data and a parade identifier to identify a parade which carries at least one of the primary ensemble and the secondary ensemble; a multiplexing means for multiplexing data in the data groups and main data; and a transmitting means for transmitting a transmission frame including the multiplexed data, wherein the FIC data are divided into a plurality of FIC segment payloads, and each FIC segment including an FIC
segment header and one of the plurality of FIC segment payloads is transmitted in each of the data groups, wherein the primary ensemble includes at least one mobile service and a first service map table and the secondary ensemble includes at least one mobile service and a second service map table, wherein the first service map table comprises a first ensemble identifier to identify the primary ensemble and the second service map table comprises a second ensemble identifier to identify the secondary ensemble, and wherein the first and second ensemble identifiers include the parade identifier, respectively.
[3] Some embodiments may provide a digital broadcasting system and a data processing method that are highly resistant to channel changes and noise.
[4] Another aspect may provide a receiving system and a data processing method that is capable of acquiring session description protocol (SDP) information, when a session description protocol (SDP) message for each virtual channel exists, by receiving position information of the corresponding SDP message via signaling information.
[5] Another aspect may provide a receiving system and a data processing method that is capable of receiving internet protocol (IP) access information and description information corresponding to each component for each respective virtual channel via signaling information.
[6] In another aspect, a receiving system includes a baseband processor, a management processor, and a presentation processor. The baseband processor receives broadcast signals including mobile service data and main service data.
Herein, the mobile service data may configure a Reed-Solomon (RS) frame, and the RS frame may include the mobile service data and at least one type of channel setting information on the mobile service data. The management processor decodes the RS frame so as to acquire the mobile service data and the at least one type of channel setting information on the mobile service data. The management processor then extracts position information of a session description protocol (SDP) message.
Herein, the SDP message includes Codec information for each component in the respective virtual channel from the channel setting information. Accordingly, the management processor accesses the SDP message from the extracted position information and gathers SDP message information. The presentation processor decodes mobile service data of a corresponding component based upon the gathered SDP
message information.
[7] The baseband processor may further include a known sequence detector detecting known data sequences included in at least one data group, the data group configuring the RS frame. Herein, the detected known data sequence may be used for demodulation and channel equalization of the mobile service data.
[8] The channel setting information may correspond to a service map table (SMT), and the SDP position information may be included in the SMT in a descriptor format, so as to be received.
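Since the SDP position information travels inside the SMT in a descriptor format, a receiver walks the SMT's descriptor loop to find it. The sketch below assumes the common MPEG-2 style tag/length/body descriptor layout; the tag value 0xA0 used here for an SDP reference descriptor is a hypothetical placeholder, not a value defined by the standard.

```python
def parse_descriptors(buf: bytes) -> list[tuple[int, bytes]]:
    """Walk an MPEG-2 style descriptor loop: each descriptor is a
    1-byte tag, a 1-byte length, then `length` bytes of body."""
    out, i = [], 0
    while i + 2 <= len(buf):
        tag, length = buf[i], buf[i + 1]
        body = buf[i + 2:i + 2 + length]
        out.append((tag, body))
        i += 2 + length
    return out

# Hypothetical descriptor loop: tag 0xA0 = SDP reference, 0xA1 = other.
loop = bytes([0xA0, 4]) + b"sdp0" + bytes([0xA1, 2]) + b"av"
descs = parse_descriptors(loop)
```

A receiver would then filter `descs` for the SDP reference tag and decode its body into the SDP position fields.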
[9] When an SDP reference type included in the SDP position information indicates that the SDP message is being received in a session announcement protocol (SAP) stream format, the management processor may access an SAP stream so as to gather SDP
message information from the SDP position information.
[10] Alternatively, when an SDP reference type included in the SDP position information indicates that the SDP message is being received in an SDP file format through a file delivery over unidirectional transport (FLUTE) session, the management processor may access a FLUTE session so as to gather SDP message information from the SDP
position information.
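The two SDP reference types above amount to a dispatch on where the SDP message must be fetched from. A schematic version follows, in which the numeric type codes, the dictionary keys, and the returned locator strings are all illustrative assumptions rather than values specified by the descriptor.

```python
# Illustrative SDP_reference_type codes; the actual field values
# carried in the SMT descriptor are not specified here.
SDP_VIA_SAP = 0
SDP_VIA_FLUTE = 1

def gather_sdp(ref_type: int, position: dict) -> str:
    """Dispatch on the SDP reference type to decide where the SDP
    message must be fetched from (sketch only; no real I/O)."""
    if ref_type == SDP_VIA_SAP:
        # Join the announced SAP stream and pull the SDP message from it.
        return f"sap://{position['address']}:{position['port']}"
    if ref_type == SDP_VIA_FLUTE:
        # Fetch the SDP file object from the FLUTE session by its TSI.
        return f"flute://{position['address']}/tsi/{position['tsi']}"
    raise ValueError(f"unknown SDP reference type {ref_type}")
```

In either branch, the management processor ends up with the same SDP message information; only the transport it accesses differs.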
[11] According to another aspect, a method for processing data in a receiving system includes receiving broadcast signals including mobile service data and main service data, wherein the mobile service data is capable of configuring an RS
frame, and wherein the RS frame includes the mobile service data and at least one type of channel setting information on the mobile service data, decoding the RS
frame so as to acquire the mobile service data and the at least one type of channel setting information on the mobile service data, decoding the RS frame so as to acquire the mobile service data and the at least one type of channel setting information on the mobile service data, extracting position information of a session description protocol (SDP) message, the SDP message including Codec information for each component in the respective virtual channel from the channel setting information, thereby accessing the SDP message from the extracted position information and gathering SDP message information, and decoding mobile service data of a corresponding component based upon the gathered SDP message information.
[12] According to another aspect, a receiving system includes a baseband processor, a management processor, and a presentation processor. The baseband processor receives broadcast signals including mobile service data and main service data. Herein, the mobile service data may configure a Reed-Solomon (RS) frame, and the RS frame may include the mobile service data and at least one type of channel setting information on the mobile service data. The management processor decodes the RS frame so as to acquire the mobile service data and the at least one type of channel setting information on the mobile service data. The management processor then extracts Codec information for each component in the respective virtual channel from the channel setting information. The presentation processor decodes mobile service data of a corresponding component based upon the extracted Codec information.
[13] Herein, the channel setting information may correspond to a service map table (SMT), and the Codec information may be included in the SMT in a descriptor format, so as to be received.
[14] According to a further aspect, a method for processing data in a digital broadcast receiving system includes receiving broadcast signals including mobile service data and main service data, wherein the mobile service data is capable of configuring an RS frame, and wherein the RS frame includes the mobile service data and at least one type of channel setting information on the mobile service data, decoding the RS frame so as to acquire the mobile service data and the at least one type of channel setting information on the mobile service data, extracting Codec information for each component in the respective virtual channel from the channel setting information, and decoding mobile service data of a corresponding component based upon the extracted Codec information.
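The last step of this method, decoding a component based on its extracted Codec information, reduces to mapping a codec identifier onto a decoder in the presentation processor. The identifiers and decoder names in this sketch are hypothetical placeholders, not values defined by the SMT.

```python
# Hypothetical mapping from codec identifiers carried in the channel
# setting information to presentation-layer decoders.
DECODERS = {
    "avc": "H.264/AVC video decoder",
    "svc": "SVC enhancement decoder",
    "mp4a": "MPEG-4 AAC audio decoder",
}

def pick_decoder(component_codec: str) -> str:
    """Select the decoder for a component from its codec identifier."""
    try:
        return DECODERS[component_codec]
    except KeyError:
        raise ValueError(f"unsupported codec: {component_codec}") from None
```

A receiver would call `pick_decoder` once per component of the selected virtual channel before routing that component's stream to the chosen decoder.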

[15] Additional advantages and features of some embodiments of the invention may be realized and attained by the structure particularly pointed out in the written description as well as the appended drawings.
[16] The digital broadcasting system and the data processing method according to some embodiments may have the following advantages. By using the SMT, some embodiments may perform channel setting more quickly and efficiently.
Also, either by including an SDP reference descriptor describing position information on an SDP message in the SMT, or by including a session description (SD) descriptor describing IP
access information and description information on each component of the respective virtual channel, so as to be transmitted, some embodiments may expand information associated with channel settings.
[17] Also, some embodiments may reduce the absolute amount of acquisition data for channel setting and IP service access, thereby minimizing bandwidth consumption. For example, when the SDP reference descriptor is included in the SMT and received, the corresponding virtual channel is recognized as a session, and the SDP message of the corresponding session may be received.
Also, when the SD descriptor is included in the SMT and received, the corresponding virtual channel is recognized as a session, thereby enabling access to each IP media component being transmitted through the corresponding session, based upon the access information and media characteristics of that component.
Brief Description of the Drawings [18] The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principle of the invention. In the drawings:
[19] FIG. 1 illustrates a block diagram showing a general structure of a digital broadcasting receiving system according to an embodiment of the present invention;

[20] FIG. 2 illustrates an exemplary structure of a data group according to an embodiment of the present invention;
[21] FIG. 3 illustrates an RS frame according to an embodiment of the present invention;

[22] FIG. 4 illustrates an example of an MH frame structure for transmitting and receiving mobile service data according to an embodiment of the present invention;

[23] FIG. 5 illustrates an example of a general VSB frame structure;

[24] FIG. 6 illustrates an example of mapping positions of the first 4 slots of a sub-frame in a spatial area with respect to a VSB frame;

[25] FIG. 7 illustrates an example of mapping positions of the first 4 slots of a sub-frame in a chronological (or time) area with respect to a VSB frame;

[26] FIG. 8 illustrates an exemplary order of data groups being assigned to one of 5 sub-frames configuring an MH frame according to an embodiment of the present invention;

[27] FIG. 9 illustrates an example of a single parade being assigned to an MH frame according to an embodiment of the present invention;

[28] FIG. 10 illustrates an example of 3 parades being assigned to an MH
frame according to an embodiment of the present invention;

[29] FIG. 11 illustrates an example of the process of assigning 3 parades shown in FIG. 10 being expanded to 5 sub-frames within an MH frame;

[30] FIG. 12 illustrates a data transmission structure according to an embodiment of the present invention, wherein signaling data are included in a data group so as to be transmitted;

[31] FIG. 13 illustrates a hierarchical signaling structure according to an embodiment of the present invention;

[32] FIG. 14 illustrates an exemplary FIC body format according to an embodiment of the present invention;

[33] FIG. 15 illustrates an exemplary bit stream syntax structure with respect to an FIC segment according to an embodiment of the present invention;

[34] FIG. 16 illustrates an exemplary bit stream syntax structure with respect to a payload of an FIC segment according to the present invention, when an FIC type field value is equal to '0';

[35] FIG. 17 illustrates an exemplary bit stream syntax structure of a service map table according to an embodiment of the present invention;

[36] FIG. 18 illustrates an exemplary bit stream syntax structure of an MH
audio descriptor according to an embodiment of the present invention;

[37] FIG. 19 illustrates an exemplary bit stream syntax structure of an MH
RTP payload type descriptor according to an embodiment of the present invention;
[38] FIG. 20 illustrates an exemplary bit stream syntax structure of an MH
current event descriptor according to an embodiment of the present invention;

[39] FIG. 21 illustrates an exemplary bit stream syntax structure of an MH
next event descriptor according to an embodiment of the present invention;

[40] FIG. 22 illustrates an exemplary bit stream syntax structure of an MH
system time descriptor according to an embodiment of the present invention;

[41] FIG. 23 illustrates segmentation and encapsulation processes of a service map table according to an embodiment of the present invention;

[42] FIG. 24 illustrates a flow chart for accessing a virtual channel using FIC
and SMT according to an embodiment of the present invention;

[43] FIG. 25 illustrates an exemplary MH system architecture according to an embodiment of the present invention;

[44] FIG. 26 illustrates a 2-step signaling method using the FIC and SMT
according to an embodiment of the present invention;

[45] FIG. 27 illustrates an exemplary bit stream syntax structure of a service map table (SMT) according to another embodiment of the present invention;

[46] FIG. 28 illustrates an exemplary bit stream syntax structure of an SDP_Reference_Descriptor() according to an embodiment of the present invention;
[47] FIG. 29 illustrates an exemplary bit stream syntax structure of a Session_Description_Descriptor() according to an embodiment of the present invention;

[48] FIG. 30 illustrates an exemplary bit stream syntax structure of an AVC Video_Description_Bytes() according to an embodiment of the present invention;

[49] FIG. 31 illustrates an exemplary bit stream syntax structure of a Hierarchy_Description_Bytes() according to an embodiment of the present invention;
[50] FIG. 32 illustrates an exemplary bit stream syntax structure of an SVC_extension_Description_Bytes() according to an embodiment of the present invention;

[51] FIG. 33 illustrates an exemplary bit stream syntax structure of an MPEG4 Audio_Description_Bytes() according to an embodiment of the present invention; and [52] FIG. 34 to FIG. 36 illustrate a flow chart showing a method for accessing mobile services according to an embodiment of the present invention.

Best Mode for Carrying Out the Invention [53] Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings.
[54]
[55] Definition of the terms used in the present invention [56] In addition, although the terms used in the present invention are selected from generally known and used terms, some of the terms mentioned in the description of the present invention have been selected by the applicant at his or her discretion, the detailed meanings of which are described in relevant parts of the description herein.
Furthermore, the present invention should be understood not simply by the actual terms used, but by the meaning that each term conveys.
[57] Among the terms used in the description of the present invention, main service data correspond to data that can be received by a fixed receiving system and may include audio/video (A/V) data. More specifically, the main service data may include A/V data of high definition (HD) or standard definition (SD) levels and may also include diverse data types required for data broadcasting. Also, the known data correspond to data pre-known in accordance with a pre-arranged agreement between the receiving system and the transmitting system.
[58] Additionally, among the terms used in the present invention, "MH"
corresponds to the initials of "mobile" and "handheld" and represents the opposite concept of a fixed-type system. Furthermore, the MH service data may include at least one of mobile service data and handheld service data, and will also be referred to as "mobile service data" for simplicity. Herein, the mobile service data not only correspond to MH service data but may also include any type of service data with mobile or portable characteristics. Therefore, the mobile service data according to the present invention are not limited only to the MH service data.
[59] The above-described mobile service data may correspond to data having information, such as program execution files, stock information, and so on, and may also correspond to A/V data. Most particularly, the mobile service data may correspond to A/V data having lower resolution and lower data rate as compared to the main service data. For example, if the A/V codec that is used for a conventional main service corresponds to an MPEG-2 codec, an MPEG-4 advanced video coding (AVC) or scalable video coding (SVC) codec having better image compression efficiency may be used as the A/V codec for the mobile service. Furthermore, any type of data may be transmitted as the mobile service data. For example, transport protocol expert group (TPEG) data for broadcasting real-time transportation information may be transmitted as the mobile service data.
[60] Also, a data service using the mobile service data may include weather forecast services, traffic information services, stock information services, viewer participation quiz programs, real-time polls and surveys, interactive education broadcast programs, gaming services, services providing information on synopsis, character, background music, and filming sites of soap operas or series, services providing information on past match scores and player profiles and achievements, and services providing information on product information and programs classified by service, medium, time, and theme enabling purchase orders to be processed. Herein, the present invention is not limited only to the services mentioned above.
[61] In the present invention, the transmitting system provides backward compatibility so that the main service data can be received by the conventional receiving system.
Herein, the main service data and the mobile service data are multiplexed to the same physical channel and then transmitted.
[62] Furthermore, the transmitting system according to the present invention performs additional encoding on the mobile service data and inserts the data already known by the receiving system and transmitting system (e.g., known data), thereby transmitting the processed data.
[63] Therefore, when using the transmitting system according to the present invention, the receiving system may receive the mobile service data during a mobile state and may also receive the mobile service data with stability despite various distortion and noise occurring within the channel.
[64]
[65] Receiving System [66] FIG. 1 illustrates a block diagram showing a general structure of a receiving system according to an embodiment of the present invention. The receiving system according to the present invention includes a baseband processor 100, a management processor 200, and a presentation processor 300.
[67] The baseband processor 100 includes an operation controller 110, a tuner 120, a de-modulator 130, an equalizer 140, a known sequence detector (or known data detector) 150, a block decoder (or mobile handheld block decoder) 160, a primary Reed-Solomon (RS) frame decoder 170, a secondary RS frame decoder 180, and a signaling decoder 190.
[68] The operation controller 110 controls the operation of each block included in the baseband processor 100.
[69] By tuning the receiving system to a specific physical channel frequency, the tuner 120 enables the receiving system to receive main service data, which correspond to broadcast signals for fixed-type broadcast receiving systems, and mobile service data, which correspond to broadcast signals for mobile broadcast receiving systems.
At this point, the tuned frequency of the specific physical channel is down-converted to an intermediate frequency (IF) signal, thereby being outputted to the demodulator 130 and the known sequence detector 150. The passband digital IF signal being outputted from the tuner 120 may only include main service data, or only include mobile service data, or include both main service data and mobile service data.
[70] The demodulator 130 performs self-gain control, carrier recovery, and timing recovery processes on the passband digital IF signal inputted from the tuner 120, thereby translating the IF signal to a baseband signal. Then, the demodulator outputs the baseband signal to the equalizer 140 and the known sequence detector 150.
The demodulator 130 uses the known data symbol sequence inputted from the known sequence detector 150 during the timing and/or carrier recovery, thereby enhancing the demodulating performance.
[71] The equalizer 140 compensates for channel-associated distortion included in the signal demodulated by the demodulator 130. Then, the equalizer 140 outputs the distortion-compensated signal to the block decoder 160. By using a known data symbol sequence inputted from the known sequence detector 150, the equalizer 140 may enhance the equalizing performance. Furthermore, the equalizer 140 may receive feedback on the decoding result from the block decoder 160, thereby enhancing the equalizing performance.
[72] The known sequence detector 150 detects the place (or position) of known data inserted by the transmitting system from the input/output data (i.e., data prior to demodulation or data being processed with partial demodulation). Then, the known sequence detector 150 outputs the detected known data position information, and the known data sequence generated from the detected position information, to the demodulator 130 and the equalizer 140. Additionally, in order to allow the block decoder 160 to identify the mobile service data that have been processed with additional encoding by the transmitting system and the main service data that have not been processed with any additional encoding, the known sequence detector 150 outputs such corresponding information to the block decoder 160.
[73] If the data channel-equalized by the equalizer 140 and inputted to the block decoder 160 correspond to data processed with both block-encoding and trellis-encoding by the transmitting system (i.e., data within the RS frame, signaling data), the block decoder 160 may perform trellis-decoding and block-decoding as inverse processes of the transmitting system. On the other hand, if the data channel-equalized by the equalizer 140 and inputted to the block decoder 160 correspond to data processed only with trellis-encoding and not block-encoding by the transmitting system (i.e., main service data), the block decoder 160 may perform only trellis-decoding.
[74] The signaling decoder 190 decodes signaling data that have been channel-equalized and inputted from the equalizer 140. It is assumed that the signaling data inputted to the signaling decoder 190 correspond to data processed with both block-encoding and trellis-encoding by the transmitting system. Examples of such signaling data include transmission parameter channel (TPC) data and fast information channel (FIC) data. Each type of data will be described in more detail later.
The FIC data decoded by the signaling decoder 190 are outputted to the FIC handler 215.
And, the TPC data decoded by the signaling decoder 190 are outputted to the TPC handler 214.
[75] Meanwhile, according to the present invention, the transmitting system uses RS frames as encoding units. Herein, the RS frame may be divided into a primary RS frame and a secondary RS frame. According to the embodiment of the present invention, the primary RS frame and the secondary RS frame are divided based upon the level of importance of the corresponding data.
[76] The primary RS frame decoder 170 receives the data outputted from the block decoder 160. At this point, according to the embodiment of the present invention, the primary RS frame decoder 170 receives only the mobile service data that have been Reed-Solomon (RS)-encoded and/or cyclic redundancy check (CRC)-encoded from the block decoder 160. Herein, the primary RS frame decoder 170 receives only the mobile service data and not the main service data. The primary RS frame decoder 170 performs inverse processes of an RS frame encoder (not shown) included in the transmitting system, thereby correcting errors existing within the primary RS frame. More specifically, the primary RS frame decoder 170 forms a primary RS frame by grouping a plurality of data groups and then corrects errors in primary RS frame units. In other words, the primary RS frame decoder 170 decodes primary RS frames, which are being transmitted for actual broadcast services.
[77] Additionally, the secondary RS frame decoder 180 receives the data outputted from the block decoder 160. At this point, according to the embodiment of the present invention, the secondary RS frame decoder 180 receives only the mobile service data that have been RS-encoded and/or CRC-encoded from the block decoder 160.
Herein, the secondary RS frame decoder 180 receives only the mobile service data and not the main service data. The secondary RS frame decoder 180 performs inverse processes of an RS frame encoder (not shown) included in the transmitting system, thereby correcting errors existing within the secondary RS frame. More specifically, the secondary RS frame decoder 180 forms a secondary RS frame by grouping a plurality of data groups and then corrects errors in secondary RS frame units. In other words, the secondary RS frame decoder 180 decodes secondary RS frames, which are being transmitted for mobile audio service data, mobile video service data, guide data, and so on.
[78] Meanwhile, the management processor 200 according to an embodiment of the present invention includes an MH physical adaptation processor 210, an IP network stack 220, a streaming handler 230, a system information (SI) handler 240, a file handler 250, a multipurpose internet mail extensions (MIME) type handler 260, an electronic service guide (ESG) handler 270, an ESG decoder 280, and a storage unit 290.
[79] The MH physical adaptation processor 210 includes a primary RS frame handler 211, a secondary RS frame handler 212, an MH transport packet (TP) handler 213, a TPC handler 214, an FIC handler 215, and a physical adaptation control signal handler 216.
[80] The TPC handler 214 receives and processes baseband information required by modules corresponding to the MH physical adaptation processor 210. The baseband information is inputted in the form of TPC data. Herein, the TPC handler 214 uses this information to process the FIC data, which have been sent from the baseband processor 100.
[81] The TPC data are transmitted from the transmitting system to the receiving system via a predetermined region of a data group. The TPC data may include at least one of an MH ensemble ID, an MH sub-frame number, a total number of MH groups (TNoG), an RS frame continuity counter, a column size of RS frame (N), and an FIC version number.
[82] Herein, the MH ensemble ID indicates an identification number of each MH ensemble carried in the corresponding channel.
[83] The MH sub-frame number signifies a number identifying the MH sub-frame, within an MH frame, in which each MH group associated with the corresponding MH ensemble is transmitted.
[84] The TNoG represents the total number of MH groups including all of the MH groups belonging to all MH parades included in an MH sub-frame.
[85] The RS frame continuity counter indicates a number that serves as a continuity counter of the RS frames carrying the corresponding MH ensemble. Herein, the value of the RS frame continuity counter shall be incremented by 1 modulo 16 for each successive RS frame.
[86] N represents the column size of an RS frame belonging to the corresponding MH ensemble. Herein, the value of N determines the size of each MH TP.
[87] Finally, the FIC version number signifies the version number of an FIC carried on the corresponding physical channel.
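The TPC fields enumerated in [81] to [87] may be gathered, by way of a non-limiting illustration, into a small record. The field names below are illustrative assumptions and do not reproduce the actual bitstream syntax; only the mod-16 continuity-counter rule of [85] is taken directly from the text.

```python
from dataclasses import dataclass


@dataclass
class TpcData:
    """Illustrative container for the TPC fields described above."""
    mh_ensemble_id: int               # identifies the MH ensemble on this channel
    mh_subframe_number: int           # which MH sub-frame of the MH frame (0..4)
    tnog: int                         # total number of MH groups in an MH sub-frame
    rs_frame_continuity_counter: int  # advances by 1 modulo 16 per RS frame
    rs_frame_column_size_n: int       # N, determines the size of each MH TP
    fic_version: int                  # version of the FIC on this physical channel


def next_continuity_counter(current: int) -> int:
    """Per [85], the counter is incremented by 1 modulo 16 for each successive RS frame."""
    return (current + 1) % 16
```

For example, a counter value of 15 wraps back to 0 on the next RS frame.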
[88] As described above, diverse TPC data are inputted to the TPC handler 214 via the signaling decoder 190 shown in FIG. 1. Then, the received TPC data are processed by the TPC handler 214. The received TPC data may also be used by the FIC handler 215 in order to process the FIC data.
[89] The FIC handler 215 processes the FIC data by associating the FIC data received from the baseband processor 100 with the TPC data.
[90] The physical adaptation control signal handler 216 collects FIC data received through the FIC handler 215 and SI data received through RS frames. Then, the physical adaptation control signal handler 216 uses the collected FIC data and SI data to configure and process IP datagrams and access information of mobile broadcast services. Thereafter, the physical adaptation control signal handler 216 stores the processed IP datagrams and access information to the storage unit 290.
[91] The primary RS frame handler 211 identifies primary RS frames received from the primary RS frame decoder 170 of the baseband processor 100 for each row unit, so as to configure an MH TP. Thereafter, the primary RS frame handler 211 outputs the configured MH TP to the MH TP handler 213.
[92] The secondary RS frame handler 212 identifies secondary RS frames received from the secondary RS frame decoder 180 of the baseband processor 100 for each row unit, so as to configure an MH TP. Thereafter, the secondary RS frame handler 212 outputs the configured MH TP to the MH TP handler 213.
[93] The MH transport packet (TP) handler 213 extracts a header from each MH TP received from the primary RS frame handler 211 and the secondary RS frame handler 212, thereby determining the data included in the corresponding MH TP. Then, when the determined data correspond to SI data (i.e., SI data that are not encapsulated to IP datagrams), the corresponding data are outputted to the physical adaptation control signal handler 216. Alternatively, when the determined data correspond to an IP datagram, the corresponding data are outputted to the IP network stack 220.
[94] The IP network stack 220 processes broadcast data that are being transmitted in the form of IP datagrams. More specifically, the IP network stack 220 processes data that are inputted via user datagram protocol (UDP), real-time transport protocol (RTP), real-time transport control protocol (RTCP), asynchronous layered coding/layered coding transport (ALC/LCT), file delivery over unidirectional transport (FLUTE), and so on. Herein, when the processed data correspond to streaming data, the cor-responding data are outputted to the streaming handler 230. And, when the processed data correspond to data in a file format, the corresponding data are outputted to the file handler 250. Finally, when the processed data correspond to SI-associated data, the corresponding data are outputted to the SI handler 240.
[95] The SI handler 240 receives and processes SI data having the form of IP datagrams, which are inputted to the IP network stack 220.
[96] When the inputted data associated with SI correspond to MIME-type data, the inputted data are outputted to the MIME-type handler 260.
[97] The MIME-type handler 260 receives the MIME-type SI data outputted from the SI handler 240 and processes the received MIME-type SI data.
[98] The file handler 250 receives data from the IP network stack 220 in an object format in accordance with the ALC/LCT and FLUTE structures. The file handler 250 groups the received data to create a file format. Herein, when the corresponding file includes ESG, the file is outputted to the ESG handler 270. On the other hand, when the cor-responding file includes data for other file-based services, the file is outputted to the presentation controller 330 of the presentation processor 300.
[99] The ESG handler 270 processes the ESG data received from the file handler 250 and stores the processed ESG data to the storage unit 290. Alternatively, the ESG handler 270 may output the processed ESG data to the ESG decoder 280, thereby allowing the ESG data to be used by the ESG decoder 280.
[100] The storage unit 290 stores therein the system information (SI) received from the physical adaptation control signal handler 216 and the ESG data received from the ESG handler 270. Thereafter, the storage unit 290 transmits the stored data to each block.
[101] The ESG decoder 280 either recovers the ESG data and SI data stored in the storage unit 290 or recovers the ESG data transmitted from the ESG handler 270. Then, the ESG decoder 280 outputs the recovered data to the presentation controller 330 in a format that can be outputted to the user.
[102] The streaming handler 230 receives data from the IP network stack 220, wherein the format of the received data is in accordance with RTP and/or RTCP structures.
The streaming handler 230 extracts audio/video streams from the received data, which are then outputted to the audio/video (A/V) decoder 310 of the presentation processor 300.
The audio/video decoder 310 then decodes each of the audio stream and video stream received from the streaming handler 230.
[103] The display module 320 of the presentation processor 300 receives audio and video signals respectively decoded by the A/V decoder 310. Then, the display module 320 provides the received audio and video signals to the user through a speaker and/or a screen.
[104] The presentation controller 330 corresponds to a controller managing modules that output data received by the receiving system to the user.
[105] The channel service manager 340 manages an interface with the user, which enables the user to use channel-based broadcast services, such as channel map management, channel service connection, and so on.
[106] The application manager 350 manages an interface with a user using ESG display or other application services that do not correspond to channel-based services.
[107]
[108] Data Format Structure
[109] Meanwhile, the data structure used in the mobile broadcasting technology according to the embodiment of the present invention may include a data group structure and an RS frame structure, which will now be described in detail.
[110] FIG. 2 illustrates an exemplary structure of a data group according to the present invention.
[111] FIG. 2 shows an example of dividing a data group according to the data structure of the present invention into 10 MH blocks (i.e., MH block 1 (B1) to MH block 10 (B10)). In this example, each MH block has the length of 16 segments. Referring to FIG. 2, only the RS parity data are allocated to portions of the previous 5 segments of MH block 1 (B1) and the next 5 segments of MH block 10 (B10). The RS parity data are excluded in regions A to D of the data group.
[112] More specifically, when it is assumed that one data group is divided into regions A, B, C, and D, each MH block may be included in any one of region A to region D depending upon the characteristic of each MH block within the data group. Herein, the data group is divided into a plurality of regions to be used for different purposes. More specifically, a region having no interference from the main service data, or a very low interference level, may be considered to have a more resistant (or stronger) receiving performance as compared to regions having higher interference levels. Additionally, when using a system inserting and transmitting known data in the data group, wherein the known data are known based upon an agreement between the transmitting system and the receiving system, and when consecutively long known data are to be periodically inserted in the mobile service data, the known data having a predetermined length may be periodically inserted in the region having no interference from the main service data (i.e., a region wherein the main service data are not mixed). However, due to interference from the main service data, it is difficult to periodically insert known data, and also to insert consecutively long known data, in a region having interference from the main service data.
[113] Referring to FIG. 2, MH block 4 (B4) to MH block 7 (B7) within the data group correspond to a region where no interference from the main service data occurs. In this example, a long known data sequence is inserted at both the beginning and end of each MH block. In the description of the present invention, the region including MH block 4 (B4) to MH block 7 (B7) will be referred to as "region A (=B4+B5+B6+B7)". As described above, when the data group includes region A having a long known data sequence inserted at both the beginning and end of each MH block, the receiving system is capable of performing equalization by using the channel information that can be obtained from the known data. Therefore, among regions A to D, the strongest equalizing performance may be yielded (or obtained) from region A.
[114] In the example of the data group shown in FIG. 2, MH block 3 (B3) and MH block 8 (B8) correspond to a region having little interference from the main service data. Herein, a long known data sequence is inserted in only one side of each of MH blocks B3 and B8. More specifically, due to the interference from the main service data, a long known data sequence is inserted at the end of MH block 3 (B3), and another long known data sequence is inserted at the beginning of MH block 8 (B8). In the present invention, the region including MH block 3 (B3) and MH block 8 (B8) will be referred to as "region B (=B3+B8)". As described above, when the data group includes region B having a long known data sequence inserted at only one side (beginning or end) of each MH block, the receiving system is capable of performing equalization by using the channel information that can be obtained from the known data. Therefore, a stronger equalizing performance as compared to region C/D may be yielded (or obtained).
[115] Referring to FIG. 2, MH block 2 (B2) and MH block 9 (B9) correspond to a region having more interference from the main service data as compared to region B. A long known data sequence cannot be inserted in either side of MH block 2 (B2) and MH block 9 (B9). Herein, the region including MH block 2 (B2) and MH block 9 (B9) will be referred to as "region C (=B2+B9)".
[116] Finally, in the example shown in FIG. 2, MH block 1 (B1) and MH block 10 (B10) correspond to a region having more interference from the main service data as compared to region C. Similarly, a long known data sequence cannot be inserted in either side of MH block 1 (B1) and MH block 10 (B10). Herein, the region including MH block 1 (B1) and MH block 10 (B10) will be referred to as "region D (=B1+B10)". Since regions C and D are spaced further apart from the known data sequences, when the channel environment undergoes frequent and abrupt changes, the receiving performance of regions C and D may be deteriorated.
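The block-to-region grouping described in [113] to [116] may be summarized, by way of a non-limiting illustration, as a simple lookup. The constant names are illustrative; the block memberships and the 276-byte signaling-area size come from the surrounding paragraphs.

```python
# Region membership of MH blocks B1..B10, per the data group structure of FIG. 2.
REGION_OF_BLOCK = {
    4: "A", 5: "A", 6: "A", 7: "A",   # region A = B4+B5+B6+B7, no main-service interference
    3: "B", 8: "B",                    # region B = B3+B8, known data on one side only
    2: "C", 9: "C",                    # region C = B2+B9
    1: "D", 10: "D",                   # region D = B1+B10, most main-service interference
}

# Signaling information area in B4: the full 207-byte 1st segment plus the
# first 69 bytes of the 2nd segment (see [119]).
SIGNALING_AREA_BYTES = 207 + 69       # 276 bytes
```

Such a table could be used, for instance, to decide in which region a given MH block's data should be equalized with which known-data support.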
[117] Additionally, the data group includes a signaling information area wherein signaling information is assigned (or allocated).
[118] According to an embodiment of the present invention, the signaling information area for inserting signaling information may start from the 1st segment of the 4th MH block (B4) and extend to a portion of the 2nd segment.
[119] More specifically, 276 (=207+69) bytes of the 4th MH block (B4) in each data group are assigned as the signaling information area. In other words, the signaling information area consists of the 207 bytes of the 1st segment and the first 69 bytes of the 2nd segment of the 4th MH block (B4). The 1st segment of the 4th MH block (B4) corresponds to the 17th or 173rd segment of a VSB field.
[120] Herein, the signaling information may be identified by two different types of signaling channels: a transmission parameter channel (TPC) and a fast information channel (FIC).
[121] Herein, the TPC data may include at least one of an MH ensemble ID, an MH sub-frame number, a total number of MH groups (TNoG), an RS frame continuity counter, a column size of RS frame (N), and an FIC version number. However, the TPC data (or information) presented herein are merely exemplary. And, since the adding or deleting of signaling information included in the TPC data may be easily adjusted and modified by one skilled in the art, the present invention will, therefore, not be limited to the examples set forth herein. Furthermore, the FIC is provided to enable a fast service acquisition by data receivers, and the FIC includes cross-layer information between the physical layer and the upper layer(s).
[122] For example, when the data group includes 6 known data sequences, as shown in FIG. 2, the signaling information area is located between the first known data sequence and the second known data sequence. More specifically, the first known data sequence is inserted in the last 2 segments of the 3rd MH block (B3), and the second known data sequence is inserted in the 2nd and 3rd segments of the 4th MH block (B4).
Furthermore, the 3rd to 6th known data sequences are respectively inserted in the last 2 segments of each of the 4th, 5th, 6th, and 7th MH blocks (B4, B5, B6, and B7).
The 1st and 3rd to 6th known data sequences are spaced apart by 16 segments.
[123] FIG. 3 illustrates an RS frame according to an embodiment of the present invention.
[124] The RS frame shown in FIG. 3 corresponds to a collection of one or more data groups. The RS frame is received for each MH frame in a condition where the receiving system receives the FIC and processes the received FIC and where the receiving system is switched to a time-slicing mode so that the receiving system can receive MH ensembles including ESG entry points. Each RS frame includes IP streams of each service or ESG, and SMT section data may exist in all RS frames.
[125] The RS frame according to the embodiment of the present invention consists of at least one MH transport packet (TP). Herein, the MH TP includes an MH header and an MH payload.
[126] The MH payload may include mobile service data as well as signaling data. More specifically, an MH payload may include only mobile service data, or may include only signaling data, or may include both mobile service data and signaling data.
[127] According to the embodiment of the present invention, the MH header may identify (or distinguish) the data types included in the MH payload. More specifically, when the MH TP includes a first MH header, this indicates that the MH payload includes only the signaling data. Also, when the MH TP includes a second MH header, this indicates that the MH payload includes both the signaling data and the mobile service data. Finally, when the MH TP includes a third MH header, this indicates that the MH payload includes only the mobile service data.
[128] In the example shown in FIG. 3, the RS frame is assigned with IP datagrams (IP datagram 1 and IP datagram 2) for two service types.
[129]
[130] Data Transmission Structure
[131] FIG. 4 illustrates a structure of an MH frame for transmitting and receiving mobile service data according to the present invention. In the example shown in FIG. 4, one MH frame consists of 5 sub-frames, wherein each sub-frame includes 16 slots. In this case, the MH frame according to the present invention includes 5 sub-frames and 80 slots.
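The arithmetic of this transmission structure may be checked with a short sketch. All constants below are taken directly from the surrounding paragraphs; the variable names are illustrative.

```python
# Frame structure constants per [131]-[133].
SUBFRAMES_PER_MH_FRAME = 5
SLOTS_PER_SUBFRAME = 16
PACKETS_PER_SLOT = 156         # 207-byte transport stream packets per slot
SEGMENTS_PER_VSB_FIELD = 312   # data segments, excluding field sync

slots_per_mh_frame = SUBFRAMES_PER_MH_FRAME * SLOTS_PER_SUBFRAME  # 80 slots
vsb_fields_per_slot = PACKETS_PER_SLOT / SEGMENTS_PER_VSB_FIELD   # one half of a VSB field
```

This confirms that one MH frame spans 80 slots and that one slot occupies exactly half a VSB field, as stated in the text.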
[132] Also, at the packet level, one slot is configured of 156 data packets (i.e., transport stream packets), and at the symbol level, one slot is configured of 156 data segments. Herein, the size of one slot corresponds to one half (1/2) of a VSB field.
More specifically, since one 207-byte data packet has the same amount of data as a data segment, a data packet prior to being interleaved may also be used as a data segment.
At this point, two VSB fields are grouped to form a VSB frame.
[133] FIG. 5 illustrates an exemplary structure of a VSB frame, wherein one VSB frame consists of 2 VSB fields (i.e., an odd field and an even field). Herein, each VSB field includes a field synchronization segment and 312 data segments.
[134] The slot corresponds to a basic time unit for multiplexing the mobile service data and the main service data. Herein, one slot may either include the mobile service data or be configured only of the main service data.
[135] If the first 118 data packets within the slot correspond to a data group, the remaining 38 data packets become the main service data packets. In another example, when no data group exists in a slot, the corresponding slot is configured of 156 main service data packets.
[136] Meanwhile, when the slots are assigned to a VSB frame, an off-set exists for each assigned position.
[137] FIG. 6 illustrates a mapping example of the positions to which the first 4 slots of a sub-frame are assigned with respect to a VSB frame in a spatial area. And, FIG. 7 il-lustrates a mapping example of the positions to which the first 4 slots of a sub-frame are assigned with respect to a VSB frame in a chronological (or time) area.
[138] Referring to FIG. 6 and FIG. 7, a 38th data packet (TS packet #37) of a 1st slot (Slot #0) is mapped to the 1st data packet of an odd VSB field. A 38th data packet (TS packet #37) of a 2nd slot (Slot #1) is mapped to the 157th data packet of an odd VSB field. Also, a 38th data packet (TS packet #37) of a 3rd slot (Slot #2) is mapped to the 1st data packet of an even VSB field. And, a 38th data packet (TS packet #37) of a 4th slot (Slot #3) is mapped to the 157th data packet of an even VSB field. Similarly, the remaining 12 slots within the corresponding sub-frame are mapped in the subsequent VSB frames using the same method.
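The mapping of the first 4 slots described above may be sketched as a small helper. The function name is illustrative, and only the four slots explicitly enumerated in the text are covered; extending it to the remaining 12 slots would follow the same pattern in subsequent VSB frames.

```python
def vsb_position_of_slot(slot: int):
    """Return (field, packet_index) for TS packet #37 of slots 0..3,
    per the FIG. 6 / FIG. 7 mapping: odd field for slots 0-1, even field
    for slots 2-3; packet index 0 (1st packet) or 156 (157th packet)."""
    if not 0 <= slot <= 3:
        raise ValueError("only the first 4 slots of a sub-frame are covered here")
    field = "odd" if slot < 2 else "even"
    packet_index = (slot % 2) * 156   # 1st packet -> index 0, 157th packet -> index 156
    return field, packet_index
```

For example, Slot #1 lands at the 157th data packet (index 156) of the odd VSB field.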
[139] FIG. 8 illustrates an exemplary assignment order of data groups being assigned to one of 5 sub-frames, wherein the 5 sub-frames configure an MH frame. For example, the method of assigning data groups may be identically applied to all MH frames or differently applied to each MH frame. Furthermore, the method of assigning data groups may be identically applied to all sub-frames or differently applied to each sub-frame. At this point, when it is assumed that the data groups are assigned using the same method in all sub-frames of the corresponding MH frame, the total number of data groups being assigned to an MH frame is equal to a multiple of '5'.
[140] According to the embodiment of the present invention, a plurality of consecutive data groups is assigned to be spaced as far apart from one another as possible within the sub-frame. Thus, the system can be capable of responding promptly and effectively to any burst error that may occur within a sub-frame.
[141] For example, when it is assumed that 3 data groups are assigned to a sub-frame, the data groups are assigned to a 1st slot (Slot #0), a 5th slot (Slot #4), and a 9th slot (Slot #8) in the sub-frame, respectively. FIG. 8 illustrates an example of assigning 16 data groups in one sub-frame using the above-described pattern (or rule). In other words, each data group is serially assigned to 16 slots corresponding to the following numbers: 0, 8, 4, 12, 1, 9, 5, 13, 2, 10, 6, 14, 3, 11, 7, and 15. Equation 1 below shows the above-described rule (or pattern) for assigning data groups in a sub-frame.
[142]
[143] [Math Figure 1]
[144]
j = (4i + O) mod 16
Herein, O = 0 if i < 4; O = 2 else if i < 8; O = 1 else if i < 12; and O = 3 else.
[145] Herein, j indicates the slot number within a sub-frame. The value of j may range from 0 to 15 (i.e., 0 ≤ j ≤ 15). Also, variable i indicates the data group number. The value of i may range from 0 to 15 (i.e., 0 ≤ i ≤ 15).
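Math Figure 1 may be sketched directly in code. The offsets O = 0, 2, 1, 3 for the four successive ranges of i follow the (partly garbled) equation as printed; the function name is illustrative.

```python
def slot_of_group(i: int) -> int:
    """Slot number j for data group i within a sub-frame, per Math Figure 1."""
    if i < 4:
        o = 0
    elif i < 8:
        o = 2
    elif i < 12:
        o = 1
    else:
        o = 3
    return (4 * i + o) % 16
```

Under this rule the first three data groups land in slots 0, 4, and 8, matching the three-group example of [141], and all 16 groups occupy 16 distinct slots.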
[146] In the present invention, a collection of data groups included in an MH frame will be referred to as a "parade". Based upon the RS frame mode, the parade transmits data of at least one specific RS frame.
[147] The mobile service data within one RS frame may be assigned either to all of regions A/B/C/D within the corresponding data group, or to at least one of regions A/B/C/D. In the embodiment of the present invention, the mobile service data within one RS frame may be assigned either to all of regions A/B/C/D, or to at least one of regions A/B and regions C/D. If the mobile service data are assigned to the latter case (i.e., one of regions A/B and regions C/D), the RS frame being assigned to regions A/B and the RS frame being assigned to regions C/D within the corresponding data group are different from one another. According to the embodiment of the present invention, the RS frame being assigned to regions A/B within the corresponding data group will be referred to as a "primary RS frame", and the RS frame being assigned to regions C/D within the corresponding data group will be referred to as a "secondary RS frame", for simplicity. Also, the primary RS frame and the secondary RS frame form (or configure) one parade. More specifically, when the mobile service data within one RS frame are assigned to all of regions A/B/C/D within the corresponding data group, one parade transmits one RS frame. Conversely, when the mobile service data are assigned to at least one of regions A/B and regions C/D, one parade may transmit up to 2 RS frames.
[148] More specifically, the RS frame mode indicates whether a parade transmits one RS frame or two RS frames. Such an RS frame mode is transmitted as part of the above-described TPC data.
[149] Table 1 below shows an example of the RS frame mode.
[150] Table 1
[Table 1]

RS frame mode (2 bits)  Description
00                      There is only one primary RS frame for all group regions.
01                      There are two separate RS frames:
                        - Primary RS frame for group regions A and B
                        - Secondary RS frame for group regions C and D
10                      Reserved
11                      Reserved

[151] Table 1 illustrates an example of allocating 2 bits in order to indicate the RS frame mode. For example, referring to Table 1, when the RS frame mode value is equal to '00', this indicates that one parade transmits one RS frame. And, when the RS frame mode value is equal to '01', this indicates that one parade transmits two RS frames, i.e., the primary RS frame and the secondary RS frame. More specifically, when the RS frame mode value is equal to '01', data of the primary RS frame for regions A/B are assigned and transmitted to regions A/B of the corresponding data group. Similarly, data of the secondary RS frame for regions C/D are assigned and transmitted to regions C/D of the corresponding data group.
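The 2-bit RS frame mode of Table 1 may be decoded, by way of a non-limiting illustration, with a small helper; the function name is illustrative.

```python
def rs_frame_count(mode_bits: str) -> int:
    """Number of RS frames a parade carries for a given 2-bit RS frame mode,
    per Table 1. Modes '10' and '11' are reserved."""
    if mode_bits == "00":
        return 1   # a single primary RS frame for all group regions
    if mode_bits == "01":
        return 2   # primary (regions A/B) plus secondary (regions C/D)
    raise ValueError("reserved RS frame mode: " + mode_bits)
```

Thus mode '01' tells the receiver to expect both a primary and a secondary RS frame in the parade.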
[152] As described in the assignment of data groups, the parades are also assigned to be spaced as far apart from one another as possible within the sub-frame. Thus, the system can be capable of responding promptly and effectively to any burst error that may occur within a sub-frame.
[153] Furthermore, the method of assigning parades may be identically applied to all MH frames or differently applied to each MH frame. According to the embodiment of the present invention, the parades may be assigned differently for each MH frame and identically for all sub-frames within an MH frame. More specifically, the MH frame structure may vary by MH frame units. Thus, an ensemble rate may be adjusted on a more frequent and flexible basis.
[154] FIG. 9 illustrates an example of multiple data groups of a single parade being assigned (or allocated) to an MH frame. More specifically, FIG. 9 illustrates an example of a plurality of data groups included in a single parade, wherein the number of data groups included in a sub-frame is equal to `3', being allocated to an MH frame.
[155] Referring to FIG. 9, 3 data groups are sequentially assigned to a sub-frame at a cycle period of 4 slots. Accordingly, when this process is equally performed in the 5 sub-frames included in the corresponding MH frame, 15 data groups are assigned to a single MH frame. Herein, the 15 data groups correspond to data groups included in a parade. Therefore, since one sub-frame is configured of 4 VSB frames, and since 3 data groups are included in a sub-frame, the data groups of the corresponding parade are not assigned to one of the 4 VSB frames within a sub-frame.
[156] For example, when it is assumed that one parade transmits one RS frame, and that a RS frame encoder (not shown) included in the transmitting system performs RS-encoding on the corresponding RS frame, thereby adding 24 bytes of parity data to the corresponding RS frame and transmitting the processed RS frame, the parity data occupy approximately 11.37% (=24/(187+24)x100) of the total code word length.
Meanwhile, when one sub-frame includes 3 data groups, and when the data groups included in the parade are assigned, as shown in FIG. 9, a total of 15 data groups form an RS frame. Accordingly, even when an error occurs in an entire data group due to a burst noise within a channel, the percentage is merely 6.67% (=1/15x100).
Therefore, the receiving system may correct all errors by performing an erasure RS
decoding process. More specifically, when the erasure RS decoding is performed, a number of channel errors corresponding to the number of RS parity bytes may be corrected. By doing so, the receiving system may correct the error of at least one data group within one parade. Thus, the minimum burst noise length correctable by an RS frame is over 1 VSB frame.
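The error-margin arithmetic above can be reproduced directly (a sketch; the 187-byte payload and 24 parity bytes are the values from the example):

```python
def parity_ratio(parity_bytes: int = 24, payload_bytes: int = 187) -> float:
    """Share of RS parity bytes in the total code word length, in percent."""
    return parity_bytes / (payload_bytes + parity_bytes) * 100

def lost_group_ratio(lost: int = 1, groups_per_rs_frame: int = 15) -> float:
    """Share of one fully corrupted data group in a 15-group RS frame."""
    return lost / groups_per_rs_frame * 100

# The parity overhead (~11.37%) exceeds the loss of an entire data
# group (~6.67%), so erasure RS decoding can recover the group.
assert round(parity_ratio(), 2) == 11.37
assert round(lost_group_ratio(), 2) == 6.67
assert lost_group_ratio() < parity_ratio()
```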
[157] Meanwhile, when data groups of a parade are assigned as shown in FIG. 9, either main service data may be assigned between each data group, or data groups corresponding to different parades may be assigned between each data group. More specifically, data groups corresponding to multiple parades may be assigned to one MH frame.
[158] Basically, the method of assigning data groups corresponding to multiple parades is very similar to the method of assigning data groups corresponding to a single parade.
In other words, data groups included in other parades that are to be assigned to an MH
frame are also respectively assigned according to a cycle period of 4 slots.
[159] At this point, data groups of a different parade may be sequentially assigned to the respective slots in a circular method. Herein, the data groups are assigned to slots starting from the ones to which data groups of the previous parade have not yet been assigned.
[160] For example, when it is assumed that data groups corresponding to a parade are assigned as shown in FIG. 9, data groups corresponding to the next parade may be assigned to a sub-frame starting from the 12th slot of the sub-frame.
However, this is merely exemplary. In another example, the data groups of the next parade may also be sequentially assigned to a different slot within a sub-frame at a cycle period of 4 slots starting from the 3rd slot.
[161] FIG. 10 illustrates an example of transmitting 3 parades (Parade #0, Parade #1, and Parade #2) to an MH frame. More specifically, FIG. 10 illustrates an example of transmitting parades included in one of 5 sub-frames, wherein the 5 sub-frames configure one MH frame.
[162] When the 1st parade (Parade #0) includes 3 data groups for each sub-frame, the positions of each data group within the sub-frames may be obtained by substituting values `0' to `2' for i in Equation 1. More specifically, the data groups of the 1st parade (Parade #0) are sequentially assigned to the 1st, 5th, and 9th slots (Slot #0, Slot #4, and Slot #8) within the sub-frame.
[163] Also, when the 2nd parade includes 2 data groups for each sub-frame, the positions of each data group within the sub-frames may be obtained by substituting values `3' and `4' for i in Equation 1. More specifically, the data groups of the 2nd parade (Parade #1) are sequentially assigned to the 2nd and 12th slots (Slot #1 and Slot #11) within the sub-frame.
[164] Finally, when the 3rd parade includes 2 data groups for each sub-frame, the positions of each data group within the sub-frames may be obtained by substituting values `5' and `6' for i in Equation 1. More specifically, the data groups of the 3rd parade (Parade #2) are sequentially assigned to the 7th and 11th slots (Slot #6 and Slot #10) within the sub-frame.
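As a small consistency check, the slot positions listed above for the three parades can be verified to be mutually disjoint within a sub-frame (the slot numbers are copied from the text; Equation 1 itself is not reproduced here):

```python
# Slot assignments of FIG. 10, as stated in the text.
parade_slots = {
    "Parade #0": [0, 4, 8],    # i = 0..2
    "Parade #1": [1, 11],      # i = 3, 4
    "Parade #2": [6, 10],      # i = 5, 6
}

all_slots = [s for slots in parade_slots.values() for s in slots]

# No two parades may occupy the same slot within a sub-frame,
# and every slot index must fall inside the 16-slot sub-frame.
assert len(all_slots) == len(set(all_slots))
assert all(0 <= s < 16 for s in all_slots)
```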
[165] As described above, data groups of multiple parades may be assigned to a single MH
frame, and, in each sub-frame, the data groups are serially allocated to a group space having 4 slots from left to right.
[166] Therefore, the number of groups of one parade per sub-frame (NoG) may correspond to any one integer from `1' to `8'. Herein, since one MH frame includes 5 sub-frames, the total number of data groups within a parade that can be allocated to an MH frame may correspond to any one multiple of `5' ranging from `5' to `40'.
[167] FIG. 11 illustrates an example of expanding the assignment process of 3 parades, shown in FIG. 10, to 5 sub-frames within an MH frame.
[168] FIG. 12 illustrates a data transmission structure according to an embodiment of the present invention, wherein signaling data are included in a data group so as to be transmitted.
[169] As described above, an MH frame is divided into 5 sub-frames. Data groups corresponding to a plurality of parades co-exist in each sub-frame. Herein, the data groups corresponding to each parade are grouped by MH frame units, thereby configuring a single parade.
[170] The data structure shown in FIG. 12 includes 3 parades, one ESG
dedicated channel (EDC) parade (i.e., parade with NoG=1), and 2 service parades (i.e., parade with NoG=4 and parade with NoG=3). Also, a predetermined portion of each data group (i.e., 37 bytes/data group) is used for delivering (or sending) FIC
information associated with mobile service data, wherein the FIC information is encoded separately from the RS-encoding process. The FIC region assigned to each data group consists of one FIC segment. Herein, each segment is interleaved by MH sub-frame units, thereby configuring an FIC body, which corresponds to a completed FIC
transmission structure. However, whenever required, each segment may be interleaved by MH
frame units and not by MH sub-frame units, thereby being completed in MH frame units.
[171] Meanwhile, the concept of an MH ensemble is applied in the embodiment of the present invention, thereby defining a collection (or group) of services. Each MH
ensemble carries the same QoS and is coded with the same FEC code. Also, each MH
ensemble has a unique identifier (i.e., ensemble ID) and corresponds to consecutive RS frames.
[172] As shown in FIG. 12, the FIC segment corresponding to each data group describes service information of the MH ensemble to which the corresponding data group belongs. When FIC segments within a sub-frame are grouped and deinterleaved, all service information of the physical channel through which the corresponding FICs are transmitted may be obtained. Therefore, the receiving system may be able to acquire the channel information of the corresponding physical channel, after performing physical channel tuning, during a sub-frame period.
[173] Furthermore, FIG. 12 illustrates a structure further including a separate EDC parade apart from the service parade and wherein electronic service guide (ESG) data are transmitted in the 1st slot of each sub-frame.
[174]
[175] Hierarchical Signaling Structure
[176] FIG. 13 illustrates a hierarchical signaling structure according to an embodiment of the present invention. As shown in FIG. 13, the mobile broadcasting technology according to the embodiment of the present invention adopts a signaling method using FIC and SMT. In the description of the present invention, the signaling structure will be referred to as a hierarchical signaling structure.
[177] Hereinafter, a detailed description on how the receiving system accesses a virtual channel via FIC and SMT will now be given with reference to FIG. 13.
[178] The FIC body defined in an MH transport (M1) identifies the physical location of the data stream for each virtual channel and provides very high level descriptions of each virtual channel.
[179] The service map table (SMT) provides MH ensemble level signaling information. The SMT provides the IP
access information of each virtual channel belonging to the respective MH ensemble within which the SMT is carried. The SMT also provides all IP stream component level in-formation required for the virtual channel service acquisition.
[180] Referring to FIG. 13, each MH ensemble (i.e., Ensemble 0, Ensemble 1, ..., Ensemble K) includes stream information on each associated (or corresponding) virtual channel (e.g., virtual channel 0 IP stream, virtual channel 1 IP
stream, and virtual channel 2 IP stream). For example, Ensemble 0 includes the virtual channel 0 IP stream and the virtual channel 1 IP stream. And, each MH ensemble includes diverse information on the associated virtual channel (i.e., Virtual Channel 0 Table Entry, Virtual Channel 0 Access Info, Virtual Channel 1 Table Entry, Virtual Channel 1 Access Info, Virtual Channel 2 Table Entry, Virtual Channel 2 Access Info, Virtual Channel N Table Entry, Virtual Channel N Access Info, and so on).
[181] The FIC body payload includes information on MH ensembles (e.g., ensemble_id field, and referred to as "ensemble location" in FIG. 13) and information on a virtual channel associated with the corresponding MH ensemble (e.g., when such information corresponds to a major_channel_num field and a minor_channel_num field, the information is expressed as Virtual Channel 0, Virtual Channel 1, ..., Virtual Channel N
in FIG. 13).
[182] The application of the signaling structure in the receiving system will now be described in detail.
[183] When a user selects a channel he or she wishes to view (hereinafter, the user-selected channel will be referred to as "channel 0" for simplicity), the receiving system first parses the received FIC. Then, the receiving system acquires information on an MH
ensemble (i.e., ensemble location), which is associated with the virtual channel corresponding to channel 0 (hereinafter, the corresponding MH ensemble will be referred to as "MH ensemble 0" for simplicity). By acquiring only the slots corresponding to MH ensemble 0 using the time-slicing method, the receiving system configures MH ensemble 0. MH ensemble 0, configured as described above, includes an SMT on the associated virtual channels (including channel 0) and IP streams on the corresponding virtual channels. Therefore, the receiving system uses the SMT included in MH
ensemble 0 in order to acquire various information on channel 0 (e.g., Virtual Channel 0 Table Entry) and stream access information on channel 0 (e.g., Virtual Channel 0 Access Info). The receiving system uses the stream access information on channel 0 to receive only the associated IP streams, thereby providing channel 0 services to the user.
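The acquisition sequence described above can be sketched as follows (a purely illustrative model; the dictionaries, keys, and the `acquire` helper are hypothetical stand-ins for the FIC parser, time-slicing, and SMT handler, and the address string is a placeholder):

```python
# Hypothetical parsed FIC body payload: channel -> ensemble location.
fic = {"channel 0": "MH ensemble 0"}

# Hypothetical ensembles built via time-slicing; each carries its own SMT.
ensembles = {
    "MH ensemble 0": {
        "smt": {
            "channel 0": {
                "table_entry": "Virtual Channel 0 Table Entry",
                "access_info": "ip://placeholder",
            },
        },
    },
}

def acquire(channel: str) -> str:
    ensemble_id = fic[channel]            # step 1: parse FIC, find the ensemble
    smt = ensembles[ensemble_id]["smt"]   # step 2: collect the ensemble, read its SMT
    return smt[channel]["access_info"]    # step 3: stream access info for the channel

assert acquire("channel 0") == "ip://placeholder"
```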
[184]
[185] Fast Information Channel (FIC)
[186] The digital broadcast receiving system according to the present invention adopts the fast information channel (FIC) for faster access to a service that is currently being broadcast.
[187] More specifically, the FIC handler 215 of FIG. 1 parses the FIC body, which corresponds to an FIC transmission structure, and outputs the parsed result to the physical adaptation control signal handler 216.
[188] FIG. 14 illustrates an exemplary FIC body format according to an embodiment of the present invention. According to the embodiment of the present invention, the FIC
format consists of an FIC body header and an FIC body payload.
[189] Meanwhile, according to the embodiment of the present invention, data are transmitted through the FIC body header and the FIC body payload in FIC
segment units. Each FIC segment has the size of 37 bytes, and each FIC segment consists of a 2-byte FIC segment header and a 35-byte FIC segment payload. More specifically, an FIC body configured of an FIC body header and an FIC body payload, is segmented in units of 35 data bytes, which are then carried in at least one FIC segment within the FIC segment payload, so as to be transmitted.
[190] In the description of the present invention, an example of inserting one FIC segment in one data group, which is then transmitted, will be given. In this case, the receiving system receives a slot corresponding to each data group by using a time-slicing method.
[191] The signaling decoder 190 included in the receiving system shown in FIG.
1 collects each FIC segment inserted in each data group. Then, the signaling decoder 190 uses the collected FIC segments to create a single FIC body. Thereafter, the signaling decoder 190 performs a decoding process on the FIC body payload of the created FIC
body, so that the decoded FIC body payload corresponds to an encoded result of a signaling encoder (not shown) included in the transmitting system.
Subsequently, the decoded FIC body payload is outputted to the FIC handler 215. The FIC handler parses the FIC data included in the FIC body payload, and then outputs the parsed FIC
data to the physical adaptation control signal handler 216. The physical adaptation control signal handler 216 uses the inputted FIC data to perform processes associated with MH ensembles, virtual channels, SMTs, and so on.
[192] According to an embodiment of the present invention, when an FIC body is segmented, and when the size of the last segmented portion is smaller than 35 data bytes, it is assumed that the lacking number of data bytes in the FIC segment payload is filled with the same number of stuffing bytes, so that the size of the last FIC segment can be equal to 35 data bytes.
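The segmentation and stuffing rule above can be sketched as follows (the 0x00 stuffing value is an assumption; only the 35-byte payload size is stated in the text):

```python
PAYLOAD_SIZE = 35  # FIC segment payload size in bytes

def segment_fic_body(body: bytes, stuffing: int = 0x00) -> list[bytes]:
    """Cut an FIC body into 35-byte payloads, padding the last one."""
    segments = [body[i:i + PAYLOAD_SIZE]
                for i in range(0, len(body), PAYLOAD_SIZE)]
    last = segments[-1]
    if len(last) < PAYLOAD_SIZE:
        # Complete the last payload with stuffing bytes.
        segments[-1] = last + bytes([stuffing]) * (PAYLOAD_SIZE - len(last))
    return segments

# An 80-byte FIC body yields three payloads, the last padded to 35 bytes.
parts = segment_fic_body(bytes(80))
assert [len(p) for p in parts] == [35, 35, 35]
```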
[193] However, it is apparent that the above-described data byte values (i.e., 37 bytes for the FIC segment, 2 bytes for the FIC segment header, and 35 bytes for the FIC
segment payload) are merely exemplary, and will, therefore, not limit the scope of the present invention.
[194] FIG. 15 illustrates an exemplary bit stream syntax structure with respect to an FIC
segment according to an embodiment of the present invention.
[195] Herein, the FIC segment signifies a unit used for transmitting the FIC
data. The FIC
segment consists of an FIC segment header and an FIC segment payload.
Referring to FIG. 15, the FIC segment payload corresponds to the portion starting from the `for' loop statement. Meanwhile, the FIC segment header may include an FIC_type field, an error_indicator field, an FIC_seg_number field, and an FIC_last_seg_number field. A
detailed description of each field will now be given.
[196] The FIC_type field is a 2-bit field indicating the type of the corresponding FIC.
[197] The error_indicator field is a 1-bit field, which indicates whether or not an error has occurred within the FIC segment during data transmission. If an error has occurred, the value of the error_indicator field is set to `1'. More specifically, when an error that has failed to be recovered still remains during the configuration process of the FIC
segment, the error_indicator field value is set to `1'. The error_indicator field enables the receiving system to recognize the presence of an error within the FIC
data.
[198] The FIC_seg_number field is a 4-bit field. Herein, when a single FIC
body is divided into a plurality of FIC segments and transmitted, the FIC_seg_number field indicates the number of the corresponding FIC segment.
[199] Finally, the FIC_last_seg_number field is also a 4-bit field. The FIC_last_seg_number field indicates the number of the last FIC segment within the corresponding FIC body.
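Given the field widths above (2 + 1 + 4 + 4 = 11 bits of the 2-byte header), a packing sketch might look like the following; the bit ordering and the treatment of the remaining 5 bits are assumptions, not taken from the text:

```python
def pack_fic_segment_header(fic_type: int, error_indicator: int,
                            seg_number: int, last_seg_number: int) -> int:
    """Pack the four FIC segment header fields into a 16-bit value."""
    assert fic_type < 4 and error_indicator < 2
    assert seg_number < 16 and last_seg_number < 16
    header = (fic_type << 14) | (error_indicator << 13)   # 2 + 1 bits
    header |= (seg_number << 9) | (last_seg_number << 5)  # 4 + 4 bits
    return header  # low 5 bits assumed reserved

h = pack_fic_segment_header(0, 1, 2, 9)
assert (h >> 13) & 0x1 == 1   # error_indicator
assert (h >> 9) & 0xF == 2    # FIC_seg_number
assert (h >> 5) & 0xF == 9    # FIC_last_seg_number
```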
[200] FIG. 16 illustrates an exemplary bit stream syntax structure with respect to a payload of an FIC segment according to the present invention, when an FIC_type field value is equal to `0'.
[201] According to the embodiment of the present invention, the payload of the FIC
segment is divided into 3 different regions.
[202] A first region of the FIC segment payload exists only when the FIC_seg_number field value is equal to `0'. Herein, the first region may include a current_next_indicator field, an ESG_version field, and a transport_stream_id field. However, depending upon the embodiment of the present invention, it may be assumed that each of the 3 fields exists regardless of the FIC_seg_number field value.
[203] The current_next_indicator field is a 1-bit field. The current_next_indicator field acts as an indicator identifying whether the corresponding FIC data carry MH
ensemble configuration information of an MH frame including the current FIC segment, or whether the corresponding FIC data carry MH ensemble configuration information of a next MH frame.
[204] The ESG_version field is a 5-bit field indicating ESG version information. Herein, by providing version information on the service guide providing channel of the corresponding ESG, the ESG_version field enables the receiving system to identify whether or not the corresponding ESG has been updated.
[205] Finally, the transport_stream_id field is a 16-bit field acting as a unique identifier of the broadcast stream through which the corresponding FIC segment is being transmitted.
[206] A second region of the FIC segment payload corresponds to an ensemble loop region, which includes an ensemble_id field, an SI_version field, and a num_channel field.
[207] More specifically, the ensemble_id field is an 8-bit field indicating the identifier of the MH ensemble through which MH services are transmitted. The MH services will be described in more detail in a later process. Herein, the ensemble_id field binds the MH
services and the MH ensemble.
[208] The SI_version field is a 4-bit field indicating version information of SI data included in the corresponding ensemble, which is being transmitted within the RS
frame.
[209] Finally, the num_channel field is an 8-bit field indicating the number of virtual channels being transmitted via the corresponding ensemble.
[210] A third region of the FIC segment payload corresponds to a channel loop region, which includes a channel_type field, a channel_activity field, a CA_indicator field, a stand_alone_service_indicator field, a major_channel_num field, and a minor_channel_num field.
[211] The channel_type field is a 5-bit field indicating the service type of the corresponding virtual channel. For example, the channel_type field may indicate an audio/video channel, an audio/video and data channel, an audio-only channel, a data-only channel, a file download channel, an ESG delivery channel, a notification channel, and so on.
[212] The channel_activity field is a 2-bit field indicating activity information of the corresponding virtual channel. More specifically, the channel_activity field may indicate whether the current virtual channel is providing the current service.
[213] The CA_indicator field is a 1-bit field indicating whether or not a conditional access (CA) is applied to the current virtual channel.
[214] The stand_alone_service_indicator field is also a 1-bit field, which indicates whether the service of the corresponding virtual channel corresponds to a stand-alone service.
[215] The major_channel_num field is an 8-bit field indicating a major channel number of the corresponding virtual channel.
[216] Finally, the minor_channel_num field is also an 8-bit field indicating a minor channel number of the corresponding virtual channel.
[217]
[218] Service Map Table
[219] FIG. 17 illustrates an exemplary bit stream syntax structure of a service map table (hereinafter referred to as "SMT") according to the present invention.
[220] According to the embodiment of the present invention, the SMT is configured in an MPEG-2 private section format. However, this will not limit the scope and spirit of the present invention. The SMT according to the embodiment of the present invention includes description information for each virtual channel within a single MH
ensemble.
And, additional information may further be included in each descriptor area.
[221] Herein, the SMT according to the embodiment of the present invention includes at least one field and is transmitted from the transmitting system to the receiving system.
[222] As described in FIG. 3, the SMT section may be transmitted by being included in the MH TP within the RS frame. In this case, each of the RS frame decoders 170 and 180, shown in FIG. 1, decodes the inputted RS frame, respectively. Then, each of the decoded RS frames is outputted to the respective RS frame handler 211 and 212.
Thereafter, each RS frame handler 211 and 212 identifies the inputted RS frame by row units, so as to create an MH TP, thereby outputting the created MH TP to the MH
TP handler 213.
[223] When it is determined that the corresponding MH TP includes an SMT section, based upon the header of each inputted MH TP, the MH TP handler 213 parses the corresponding SMT section, so as to output the SI data within the parsed SMT section to the physical adaptation control signal handler 216. However, this applies only when the SMT is not encapsulated into IP datagrams.
[224] Meanwhile, when the SMT is encapsulated into IP datagrams, and when it is determined that the corresponding MH TP includes an SMT section based upon the header of each inputted MH TP, the MH TP handler 213 outputs the SMT
section to the IP network stack 220. Accordingly, the IP network stack 220 performs IP
and UDP processes on the inputted SMT section and, then, outputs the processed SMT
section to the SI handler 240. The SI handler 240 parses the inputted SMT
section and controls the system so that the parsed SI data can be stored in the storage unit 290.
[225] The following corresponds to examples of the fields that may be transmitted through the SMT.
[226] A table_id field corresponds to an 8-bit unsigned integer number, which indicates the type of the table section. The table_id field allows the corresponding table to be defined as the service map table (SMT).
[227] An ensemble_id field is an 8-bit unsigned integer field, which corresponds to an ID
value associated with the corresponding MH ensemble. Herein, the ensemble_id field may be assigned with a value ranging from `0x00' to `0x3F'. It is preferable that the value of the ensemble_id field is derived from the parade_id of the TPC
data, which is carried from the baseband processor of the MH physical layer subsystem.
When the corresponding MH ensemble is transmitted through (or carried over) the primary RS frame, a value of `0' may be used for the most significant bit (MSB), and the remaining 7 bits are used as the parade-id value of the associated MH parade (i.e., for the least significant 7 bits). Alternatively, when the corresponding MH
ensemble is transmitted through (or carried over) the secondary RS frame, a value of `1' may be used for the most significant bit (MSB).
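The ensemble_id derivation described above (MSB marking the primary or secondary RS frame, low 7 bits carrying the parade_id of the associated MH parade) can be sketched as:

```python
def ensemble_id(parade_id: int, secondary_rs_frame: bool) -> int:
    """Derive an ensemble_id: MSB = RS frame type, low 7 bits = parade_id."""
    assert 0 <= parade_id < 0x80, "parade_id must fit in 7 bits"
    # MSB is '0' for the primary RS frame, '1' for the secondary RS frame.
    return (0x80 if secondary_rs_frame else 0x00) | parade_id

assert ensemble_id(0x15, secondary_rs_frame=False) == 0x15
assert ensemble_id(0x15, secondary_rs_frame=True) == 0x95
```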
[228] A num_channels field is an 8-bit field, which specifies the number of virtual channels in the corresponding SMT section.
[229] Meanwhile, the SMT according to the embodiment of the present invention provides information on a plurality of virtual channels using the `for' loop statement.
[230] A major_channel_num field corresponds to an 8-bit field, which represents the major channel number associated with the corresponding virtual channel. Herein, the major_channel_num field may be assigned with a value ranging from `0x00' to `0xFF'.
[231] A minor_channel_num field corresponds to an 8-bit field, which represents the minor channel number associated with the corresponding virtual channel. Herein, the minor_channel_num field may be assigned with a value ranging from `0x00' to `0xFF'.
[232] A short_channel_name field indicates the short name of the virtual channel. The service_id field is a 16-bit unsigned integer number (or value), which identifies the virtual channel service.
[233] A service_type field is a 6-bit enumerated type field, which designates the type of service carried in the corresponding virtual channel as defined in Table 2 below.
[234] Table 2 [Table 2]

0x00          [Reserved]
0x01          MH_digital_television : the virtual channel carries television programming (audio, video and optional associated data) conforming to ATSC standards.
0x02          MH_audio : the virtual channel carries audio programming (audio service and optional associated data) conforming to ATSC standards.
0x03          MH_data_only_service : the virtual channel carries a data service conforming to ATSC standards, but no video or audio component.
0x04 to 0xFF  [Reserved for future ATSC usage]

[235] A virtual_channel_activity field is a 2-bit enumerated field identifying the activity status of the corresponding virtual channel. When the most significant bit (MSB) of the virtual_channel_activity field is `1', the virtual channel is active, and when the most significant bit (MSB) is `0', the virtual channel is inactive. Also, when the least significant bit (LSB) of the virtual_channel_activity field is `1', the virtual channel is hidden, and when the least significant bit (LSB) is `0', the virtual channel is not hidden.
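The two activity bits can be decoded as follows (a sketch of the MSB/LSB rule stated above):

```python
def decode_activity(bits: int) -> tuple[bool, bool]:
    """Decode the 2-bit virtual_channel_activity field: (active, hidden)."""
    assert 0 <= bits <= 0b11
    active = bool(bits & 0b10)   # MSB: '1' = active, '0' = inactive
    hidden = bool(bits & 0b01)   # LSB: '1' = hidden, '0' = not hidden
    return active, hidden

assert decode_activity(0b10) == (True, False)   # active, not hidden
assert decode_activity(0b01) == (False, True)   # inactive, hidden
```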
[236] A num_components field is a 5-bit field, which specifies the number of IP stream components in the corresponding virtual channel.
[237] An IP_version_flag field corresponds to a 1-bit indicator. More specifically, when the value of the IP_version_flag field is set to `1', this indicates that a source_IP_address field, a virtual_channel_target_IP_address field, and a component_target_IP_address field are IPv6 addresses. Alternatively, when the value of the IP_version_flag field is set to `0', this indicates that the source_IP_address field, the virtual_channel_target_IP_address field, and the component_target_IP_address field are IPv4 addresses.
[238] A source_IP_address_flag field is a 1-bit Boolean flag, which indicates, when set, that a source IP address of the corresponding virtual channel exists for a specific multicast source.
[239] A virtual_channel_target_IP_address_flag field is a 1-bit Boolean flag, which indicates, when set, that a virtual_channel_target_IP_address field is present for the corresponding virtual channel.
[240] The source_IP_address field corresponds to a 32-bit or 128-bit field.
Herein, the source_IP_address field will be significant (or present), when the value of the source_IP_address_flag field is set to `1'. However, when the value of the source_IP_address_flag field is set to `0', the source_IP_address field will become in-significant (or absent). More specifically, when the source_IP_address_flag field value is set to `1', and when the IP_version_flag field value is set to `0', the source_IP_address field indicates a 32-bit IPv4 address, which shows the source of the corresponding virtual channel. Alternatively, when the IP_version_flag field value is set to `1', the source_IP_address field indicates a 128-bit IPv6 address, which shows the source of the corresponding virtual channel.
[241] The virtual_channel_target_IP_address field also corresponds to a 32-bit or 128-bit field. Herein, the virtual_channel_target_IP_address field will be significant (or present), when the value of the virtual_channel_target_IP_address_flag field is set to `1'. However, when the value of the virtual_channel_target_IP_address_flag field is set to `0', the virtual_channel_target_IP_address field will become insignificant (or absent). More specifically, when the virtual_channel_target_IP_address_flag field value is set to `1', and when the IP_version_flag field value is set to `0', the virtual_channel_target_IP_address field indicates a 32-bit target IPv4 address associated with the corresponding virtual channel. Alternatively, when the virtual_channel_target_IP_address_flag field value is set to `1', and when the IP_version_flag field value is set to `1', the virtual_channel_target_IP_address field indicates a 128-bit target IPv6 address associated with the corresponding virtual channel.
If the virtual_channel_target_IP_address field is insignificant (or absent), the component_target_IP_address field within the num_channels loop should become significant (or present). And, in order to enable the receiving system to access the IP
stream component, the component_target_IP_address field should be used.
[242] Meanwhile, the SMT according to the embodiment of the present invention uses a `for' loop statement in order to provide information on a plurality of components.
[243] Herein, an RTP_payload_type field, which is assigned with 7 bits, identifies the encoding format of the component based upon Table 3 shown below. When the IP
stream component is not encapsulated to RTP, the RTP_payload_type field shall be ignored (or deprecated).

[244] Table 3 below shows an example of the RTP_payload_type.
[245] Table 3 [Table 3]

RTP payload type   Meaning
35                 AVC video
36                 MH audio
37 to 72           [Reserved for future ATSC use]

[246] A component_target_IP_address_flag field is a 1-bit Boolean flag, which indicates, when set, that the corresponding IP stream component is delivered through IP
datagrams with target IP addresses different from the virtual_channel_target_IP_address. Furthermore, when the component_target_IP_address_flag is set, the receiving system (or receiver) uses the component_target_IP_address field as the target IP address for accessing the corresponding IP stream component. Accordingly, the receiving system (or receiver) will ignore the virtual_channel_target_IP_address field included in the num_channels loop.
[247] The component_target_IP_address field corresponds to a 32-bit or 128-bit field.
Herein, when the value of the IP_version_flag field is set to `0', the component_target_IP_address field indicates a 32-bit target IPv4 address associated to the corresponding IP stream component. And, when the value of the IP_version_flag field is set to `1', the component_target_IP_address field indicates a 128-bit target IPv6 address associated to the corresponding IP stream component.
[248] A port_num_count field is a 6-bit field, which indicates the number of UDP ports associated with the corresponding IP stream component. A target UDP port number value starts from the target_UDP_port_num field value and increases (or is incremented) by 1. For the RTP stream, the target UDP port number should start from the target_UDP_port_num field value and shall increase (or be incremented) by 2. This is to incorporate RTCP streams associated with the RTP streams.
[249] A target_UDP_port_num field is a 16-bit unsigned integer field, which represents the target UDP port number for the corresponding IP stream component. When used for RTP streams, the value of the target_UDP_port_num field shall correspond to an even number. And, the next higher value shall represent the target UDP port number of the associated RTCP stream.
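The port-numbering rule above can be sketched as follows (the helper name and parameters are illustrative):

```python
def component_udp_ports(target_udp_port_num: int, port_num_count: int,
                        is_rtp: bool) -> list[int]:
    """List the UDP ports of one IP stream component.

    Non-RTP components use consecutive ports; RTP components advance by 2
    so that each even RTP port pairs with the RTCP port immediately above it.
    """
    step = 2 if is_rtp else 1
    if is_rtp:
        assert target_udp_port_num % 2 == 0, "RTP ports shall be even"
    return [target_udp_port_num + step * i for i in range(port_num_count)]

rtp_ports = component_udp_ports(5000, 3, is_rtp=True)
assert rtp_ports == [5000, 5002, 5004]
rtcp_ports = [p + 1 for p in rtp_ports]   # associated RTCP streams
assert rtcp_ports == [5001, 5003, 5005]
```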
[250] A component_level_descriptor() represents zero or more descriptors providing additional information on the corresponding IP stream component.
[251] A virtual_channel_level_descriptor() represents zero or more descriptors providing additional information for the corresponding virtual channel.
[252] An ensemble_level_descriptor() represents zero or more descriptors providing additional information for the MH ensemble, which is described by the corresponding SMT.
[253] FIG. 18 illustrates an exemplary bit stream syntax structure of an MH
audio descriptor according to the present invention.
[254] When at least one audio service is present as a component of the current event, the MH_audio_descriptor() shall be used as a component_level_descriptor of the SMT.
The MH_audio_descriptor() may be capable of informing the system of the audio language type and stereo mode status. If there is no audio service associated with the current event, then it is preferable that the MH_audio_descriptor() is considered to be insignificant (or absent) for the current event.
[255] Each field shown in the bit stream syntax of FIG. 18 will now be described in detail.
[256] A descriptor_tag field is an 8-bit unsigned integer having a TBD value, which indicates that the corresponding descriptor is the MH_audio_descriptor().
[257] A descriptor_length field is also an 8-bit unsigned integer, which indicates the length (in bytes) of the portion immediately following the descriptor_length field up to the end of the MH_audio_descriptor().
[258] A channel_configuration field corresponds to an 8-bit field indicating the number and configuration of audio channels. The values ranging from '1' to '6' respectively indicate the number and configuration of audio channels as given for "Default bit stream index number" in Table 42 of ISO/IEC 13818-7:2006. All other values indicate that the number and configuration of audio channels are undefined.
[259] A sample_rate_code field is a 3-bit field, which indicates the sample rate of the encoded audio data. Herein, the indication may correspond to one specific sample rate, or may correspond to a set of values that include the sample rate of the encoded audio data as defined in Table A3.3 of ATSC A/52B.
[260] A bit_rate_code field corresponds to a 6-bit field. Herein, among the 6 bits, the lower 5 bits indicate a nominal bit rate. More specifically, when the most significant bit (MSB) is '0', the corresponding bit rate is exact. On the other hand, when the most significant bit (MSB) is '1', the bit rate corresponds to an upper limit as defined in Table A3.4 of ATSC A/52B.
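A minimal sketch of how a receiver might split this 6-bit field, assuming (as described above) that the MSB is the exact/upper-limit flag and the remaining lower 5 bits index the nominal bit rate table:

```python
def parse_bit_rate_code(bit_rate_code):
    # MSB of the 6-bit field: 0 -> the nominal bit rate is exact,
    # 1 -> it is an upper limit (per Table A3.4 of ATSC A/52B).
    is_exact = (bit_rate_code & 0b100000) == 0
    nominal_rate_index = bit_rate_code & 0b011111  # lower 5 bits
    return is_exact, nominal_rate_index
```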
[261] An ISO_639_language_code field is a 24-bit (i.e., 3-byte) field indicating the language used for the audio stream component, in conformance with ISO 639.2/B [x]. When a specific language is not present in the corresponding audio stream component, the value of each byte will be set to '0x00'.
[262] FIG. 19 illustrates an exemplary bit stream syntax structure of an MH RTP payload type descriptor according to the present invention.
[263] The MH_RTP_payload_type_descriptor() specifies the RTP payload type. Yet, the MH_RTP_payload_type_descriptor() exists only when the dynamic value of the RTP_payload_type field within the num_components loop of the SMT is in the range of '96' to '127'. The MH_RTP_payload_type_descriptor() is used as a component_level_descriptor of the SMT.
[264] The MH_RTP_payload_type_descriptor() translates (or matches) a dynamic RTP_payload_type field value into (or with) a MIME type. Accordingly, the receiving system (or receiver) may collect (or gather) the encoding format of the IP stream component, which is encapsulated in RTP.
[265] The fields included in the MH_RTP_payload_type_descriptor() will now be described in detail.
[266] A descriptor_tag field corresponds to an 8-bit unsigned integer having the value TBD, which identifies the current descriptor as the MH_RTP_payload_type_descriptor().
[267] A descriptor_length field also corresponds to an 8-bit unsigned integer, which indicates the length (in bytes) of the portion immediately following the descriptor_length field up to the end of the MH_RTP_payload_type_descriptor().
[268] An RTP_payload_type field corresponds to a 7-bit field, which identifies the encoding format of the IP stream component. Herein, the dynamic value of the RTP_payload_type field is in the range of '96' to '127'.
[269] A MIME_type_length field specifies the length (in bytes) of a MIME_type field.
[270] The MIME_type field indicates the MIME type corresponding to the encoding format of the IP stream component, which is described by the MH_RTP_payload_type_descriptor().
[271] FIG. 20 illustrates an exemplary bit stream syntax structure of an MH current event descriptor according to the present invention.
[272] The MH_current_event_descriptor() shall be used as the virtual_channel_level_descriptor() within the SMT. Herein, the MH_current_event_descriptor() provides basic information on the current event (e.g., the start time, duration, and title of the current event, etc.), which is transmitted via the respective virtual channel.
[273] The fields included in the MH_current_event_descriptor() will now be described in detail.
[274] A descriptor_tag field corresponds to an 8-bit unsigned integer having the value TBD, which identifies the current descriptor as the MH_current_event_descriptor().
[275] A descriptor_length field also corresponds to an 8-bit unsigned integer, which indicates the length (in bytes) of the portion immediately following the descriptor_length field up to the end of the MH_current_event_descriptor().
[276] A current_event_start_time field corresponds to a 32-bit unsigned integer quantity. The current_event_start_time field represents the start time of the current event, more specifically, as the number of GPS seconds since 00:00:00 UTC, January 6, 1980.
[277] A current_event_duration field corresponds to a 24-bit field. Herein, the current_event_duration field indicates the duration of the current event in hours, minutes, and seconds (formatted as six 4-bit BCD digits = 24 bits).
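The 24-bit BCD duration above packs two decimal digits each for hours, minutes, and seconds. A hypothetical decoder (illustrative, not part of the patent) could unpack it like this:

```python
def decode_bcd_duration(duration_24bit):
    """Decode a 24-bit, 6-digit BCD duration (hhmmss) into (hours, minutes, seconds)."""
    digits = []
    for shift in range(20, -1, -4):          # six 4-bit BCD digits, MSB first
        digits.append((duration_24bit >> shift) & 0xF)
    hours = digits[0] * 10 + digits[1]
    minutes = digits[2] * 10 + digits[3]
    seconds = digits[4] * 10 + digits[5]
    return hours, minutes, seconds
```

For example, the encoded value 0x013045 decodes to a duration of 1 hour, 30 minutes, 45 seconds.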
[278] A title_length field specifies the length (in bytes) of a title_text field. Herein, the value '0' indicates that no title exists for the corresponding event.
[279] The title_text field indicates the title of the corresponding event, in the format of a multiple string structure as defined in ATSC A/65C [x].
[280] FIG. 21 illustrates an exemplary bit stream syntax structure of an MH next event descriptor according to the present invention.
[281] The optional MH_next_event_descriptor() shall be used as the virtual_channel_level_descriptor() within the SMT. Herein, the MH_next_event_descriptor() provides basic information on the next event (e.g., the start time, duration, and title of the next event, etc.), which is transmitted via the respective virtual channel.
[282] The fields included in the MH_next_event_descriptor() will now be described in detail.
[283] A descriptor_tag field corresponds to an 8-bit unsigned integer having the value TBD, which identifies the current descriptor as the MH_next_event_descriptor().
[284] A descriptor_length field also corresponds to an 8-bit unsigned integer, which indicates the length (in bytes) of the portion immediately following the descriptor_length field up to the end of the MH_next_event_descriptor().
[285] A next_event_start_time field corresponds to a 32-bit unsigned integer quantity. The next_event_start_time field represents the start time of the next event, more specifically, as the number of GPS seconds since 00:00:00 UTC, January 6, 1980.
[286] A next_event_duration field corresponds to a 24-bit field. Herein, the next_event_duration field indicates the duration of the next event in hours, minutes, and seconds (formatted as six 4-bit BCD digits = 24 bits).
[287] A title_length field specifies the length (in bytes) of a title_text field. Herein, the value '0' indicates that no title exists for the corresponding event.
[288] The title_text field indicates the title of the corresponding event, in the format of a multiple string structure as defined in ATSC A/65C [x].
[289] FIG. 22 illustrates an exemplary bit stream syntax structure of an MH system time descriptor according to the present invention.
[290] The MH_system_time_descriptor() shall be used as the ensemble_level_descriptor() within the SMT. Herein, the MH_system_time_descriptor() provides information on the current time and date. The MH_system_time_descriptor() also provides information on the time zone in which the transmitting system (or transmitter) transmitting the corresponding broadcast stream is located, while taking into consideration the mobile/portable characteristics of the MH service data.
[291] The fields included in the MH_system_time_descriptor() will now be described in detail.
[292] A descriptor_tag field corresponds to an 8-bit unsigned integer having the value TBD, which identifies the current descriptor as the MH_system_time_descriptor().
[293] A descriptor_length field also corresponds to an 8-bit unsigned integer, which indicates the length (in bytes) of the portion immediately following the descriptor_length field up to the end of the MH_system_time_descriptor().
[294] A system_time field corresponds to a 32-bit unsigned integer quantity. The system_time field represents the current system time, more specifically, as the number of GPS seconds since 00:00:00 UTC, January 6, 1980.
[295] A GPS_UTC_offset field corresponds to an 8-bit unsigned integer, which defines the current offset in whole seconds between the GPS and UTC time standards. In order to convert GPS time to UTC time, the GPS_UTC_offset is subtracted from GPS time. Whenever the International Bureau of Weights and Measures decides that the current offset is too far in error, an additional leap second may be added (or subtracted). Accordingly, the GPS_UTC_offset field value will reflect the change.
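The GPS-to-UTC conversion described above (GPS epoch of 00:00:00 UTC, January 6, 1980, minus the leap-second offset) can be sketched as follows; this is an illustrative helper, not part of the patent text:

```python
from datetime import datetime, timedelta, timezone

# GPS epoch as referenced by the system_time and *_start_time fields
GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)

def gps_to_utc(gps_seconds, gps_utc_offset):
    # UTC lags GPS by the accumulated leap seconds, so the offset is subtracted.
    return GPS_EPOCH + timedelta(seconds=gps_seconds - gps_utc_offset)
```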
[296] A time_zone_offset_polarity field is a 1-bit field, which indicates whether the time of the time zone in which the broadcast station is located leads (is ahead of) or lags (is behind) the UTC time. When the value of the time_zone_offset_polarity field is equal to '0', this indicates that the time in the current time zone leads the UTC time. Therefore, a time_zone_offset field value is added to the UTC time value. Conversely, when the value of the time_zone_offset_polarity field is equal to '1', this indicates that the time in the current time zone lags behind the UTC time. Therefore, the time_zone_offset field value is subtracted from the UTC time value.
[297] The time_zone_offset field is a 31-bit unsigned integer quantity. More specifically, the time_zone_offset field represents, in GPS seconds, the time offset of the time zone in which the broadcast station is located, when compared to the UTC time.
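Applying the polarity rule above, a receiver would derive local time from UTC as sketched below (illustrative helper working in seconds, not part of the patent text):

```python
def local_time(utc_seconds, polarity, time_zone_offset):
    # polarity 0: the local time zone leads UTC -> add the offset;
    # polarity 1: the local time zone lags UTC  -> subtract the offset.
    if polarity == 0:
        return utc_seconds + time_zone_offset
    return utc_seconds - time_zone_offset
```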
[298] A daylight_savings field corresponds to a 16-bit field providing information on Summer Time (i.e., Daylight Saving Time).
[299] A time_zone field corresponds to a (5x8)-bit field indicating the time zone in which the transmitting system (or transmitter) transmitting the corresponding broadcast stream is located.

[300] FIG. 23 illustrates segmentation and encapsulation processes of a service map table (SMT) according to the present invention.
[301] According to the present invention, the SMT is encapsulated to UDP, while including a target IP address and a target UDP port number within the IP datagram. More specifically, the SMT is first segmented into a predetermined number of sections, then encapsulated to a UDP header, and finally encapsulated to an IP header.
[302] In addition, the SMT section provides signaling information on all virtual channels included in the MH ensemble including the corresponding SMT section. At least one SMT section describing the MH ensemble is included in each RS frame included in the corresponding MH ensemble. Also, each SMT section is identified by an ensemble_id included in each section.
[303] According to the embodiment of the present invention, by informing the receiving system of the target IP address and target UDP port number, the corresponding data (i.e., target IP address and target UDP port number) may be parsed without requiring the receiving system to request additional information.
[304] FIG. 24 illustrates a flow chart for accessing a virtual channel using the FIC and SMT according to the present invention.
[305] More specifically, a physical channel is tuned (S501). And, when it is determined that an MH signal exists in the tuned physical channel (S502), the corresponding MH signal is demodulated (S503). Additionally, FIC segments are grouped from the demodulated MH signal in sub-frame units (S504 and S505).
[306] According to the embodiment of the present invention, an FIC segment is inserted in a data group, so as to be transmitted. More specifically, the FIC segment corresponding to each data group describes service information on the MH ensemble to which the corresponding data group belongs. When the FIC segments are grouped in sub-frame units and then deinterleaved, all service information on the physical channel through which the corresponding FIC segment is transmitted may be acquired. Therefore, after the tuning process, the receiving system may acquire channel information on the corresponding physical channel during a sub-frame period. Once the FIC segments are grouped, in S504 and S505, the broadcast stream through which the corresponding FIC segment is being transmitted is identified (S506). For example, the broadcast stream may be identified by parsing the transport_stream_id field of the FIC body, which is configured by grouping the FIC segments.
[307] Furthermore, an ensemble identifier, a major channel number, a minor channel number, channel type information, and so on, are extracted from the FIC body (S507). And, by using the extracted ensemble information, only the slots corresponding to the designated ensemble are acquired by using the time-slicing method, so as to configure an ensemble (S508).

[308] Subsequently, the RS frame corresponding to the designated ensemble is decoded (S509), and an IP socket is opened for SMT reception (S510).
[309] According to the example given in the embodiment of the present invention, the SMT is encapsulated to UDP, while including a target IP address and a target UDP port number within the IP datagram. More specifically, the SMT is first segmented into a predetermined number of sections, then encapsulated to a UDP header, and finally encapsulated to an IP header. According to the embodiment of the present invention, by informing the receiving system of the target IP address and target UDP port number, the receiving system parses the SMT sections and the descriptors of each SMT section without requesting additional information (S511).
[310] The SMT section provides signaling information on all virtual channels included in the MH ensemble including the corresponding SMT section. At least one SMT section describing the MH ensemble is included in each RS frame included in the corresponding MH ensemble. Also, each SMT section is identified by an ensemble_id included in each section.
[311] Furthermore, each SMT provides IP access information on each virtual channel subordinate to the corresponding MH ensemble including each SMT. Finally, the SMT provides the IP stream component level information required for the servicing of the corresponding virtual channel.
[312] Therefore, by using the information parsed from the SMT, the IP stream component belonging to the virtual channel requested for reception may be accessed (S513). Accordingly, the service associated with the corresponding virtual channel is provided to the user (S514).
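The two-step lookup that steps S501-S514 describe (FIC to find the ensemble, then the decoded RS frame's SMT for IP access information) can be summarized with a hypothetical sketch. All data structures and values here are illustrative stand-ins, not defined by the patent:

```python
def access_virtual_channel(fic, ensembles, channel):
    """Resolve a (major, minor) virtual channel to its IP access information.

    fic:       maps (major, minor) channel numbers -> ensemble_id (from the FIC body)
    ensembles: maps ensemble_id -> decoded RS frame contents, including its SMT
    """
    ensemble_id = fic[channel]            # S506-S507: FIC lookup
    smt = ensembles[ensemble_id]["smt"]   # S508-S511: time-slice, decode RS frame, parse SMT
    return smt[channel]                   # S513: IP access info for the requested channel

# Illustrative (made-up) tables:
fic = {(7, 1): 0x02}
ensembles = {0x02: {"smt": {(7, 1): {"ip": "224.0.0.1", "port": 50000}}}}
```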
[313] Meanwhile, the present invention relates to acquiring access information on an IP-based virtual channel service through an SMT and its respective descriptors. Herein, the terms virtual channel service, mobile service, and MH service are all used with the same meaning.
[314] More specifically, the SMT is included in an RS frame, which transmits mobile service data corresponding to a single MH ensemble, so as to be received. The SMT includes signaling information on the virtual channel and IP-based mobile service, which are included in the MH ensemble through which the corresponding SMT is transmitted (or delivered). Furthermore, the SMT may include a plurality of descriptors.
[315] The SMT according to an embodiment of the present invention may include a session description protocol (SDP) reference descriptor. In this case, the digital broadcast receiving system may recognize (or acknowledge) the corresponding virtual channel as a session and may acquire an SDP message with respect to the corresponding session. More specifically, according to this embodiment, when an SDP message exists for each virtual channel, the position information of the corresponding SDP message is received through the SDP reference descriptor.
[316] The SMT according to another embodiment of the present invention may include a session description (SD) descriptor. In this case, the digital broadcast receiving system may recognize (or acknowledge) the corresponding virtual channel as a session and may acquire IP access information and description information on the corresponding session. More specifically, according to this other embodiment, the IP access information and description information corresponding to each stream component of each respective virtual channel may be received through the SD descriptor. The SD descriptor may provide access information for each respective IP media component being transmitted through the corresponding session, as well as access information based upon the corresponding media characteristic. Furthermore, the SD descriptor may also provide Codec information for each component.
[317] FIG. 25 illustrates an exemplary MH system architecture according to the present invention. Referring to FIG. 25, the system architecture provides IP-based virtual channel service and rich media (RM) services. More specifically, the virtual channel services and RM service IP-packetized in the IP layer are first encapsulated into an MH TP within an RS frame. Thereafter, the encapsulated services are delivered (or transmitted) through a physical layer. At this point, in order to provide and ensure fast channel setting on the IP-based virtual channel service, a 2-step signaling method using FIC and SMT will be used. And, an IP-based signaling method is used for the IP-based services.
[318] FIG. 26 illustrates a 2-step signaling method using the FIC and SMT according to the present invention. More specifically, the FIC provides the receiving system with information on the IP-based virtual channel service, more particularly, on which MH ensemble carries the corresponding IP-based virtual channel service. After receiving the corresponding FIC information, the receiving system decodes the RS frame corresponding to the desired (or requested) MH ensemble. Subsequently, the receiving system acquires access information of the IP-based virtual channel service within the corresponding MH ensemble through an SMT included in the decoded RS frame. Herein, the FIC includes information linking the MH ensemble to the virtual channel service. Furthermore, each row of the RS frame configures an MH TP, as shown in FIG. 3. Each MH TP is configured of any one of IP datagrams, signaling data (such as the SMT), or a combination of IP datagrams and signaling data encapsulated therein. If the SMT exists in a well-known position (or pre-arranged position) within the RS frame, the receiving system may be able to process the SMT first when receiving the RS frame.

[319] FIG. 27 illustrates an exemplary bit stream syntax structure of a service map table (SMT) according to another embodiment of the present invention. The SMT shown in FIG. 27 is configured in an MPEG-2 private section format. However, this will not limit the scope of the present invention. The SMT includes description information for each virtual channel within a single MH ensemble. And, other additional information may be included in the Descriptor field. The SMT includes at least one field and is transmitted from the transmitting system (or transmitter) to the receiving system (or receiver). The difference between the SMT shown in FIG. 17 and the SMT shown in FIG. 27 is the presence of IP access information. More specifically, the SMT of FIG. 17 provides IP access information of virtual channels and/or IP access information of IP stream components in a field format. Alternatively, when IP access information of virtual channels or IP stream components is required, the SMT of FIG. 27 may provide the requested information through descriptors within a virtual channel loop.
[320] Also, as described in FIG. 3, the SMT section may be included in an RS frame in the form of an MH TP, which is then transmitted. In this case, each of the RS frame decoders 170 and 180 (shown in FIG. 1) decodes the inputted RS frame, and the decoded RS frame is outputted to each respective RS frame handler 211 and 212. Also, each RS frame handler 211 and 212 divides the inputted RS frame into row units, thereby configuring an MH TP. Then, each RS frame handler 211 and 212 outputs the configured MH TP to the MH TP handler 213.
[321] When the system determines, based upon the header of each received MH TP, that the corresponding MH TP includes an SMT section, the MH TP handler 213 parses the included SMT section. Then, the MH TP handler 213 outputs the SI data included in the parsed SMT section to the physical adaptation control signal handler 216. However, in this case, the SMT is not encapsulated into IP datagrams.
[322] Meanwhile, when the SMT is encapsulated into IP datagrams, and when the system determines, based upon the header of each received MH TP, that the corresponding MH TP includes an SMT section, the MH TP handler 213 outputs the corresponding SMT section to the IP network stack 220. Accordingly, the IP network stack 220 performs IP and UDP processes on the SMT section and outputs the processed SMT section to the SI handler 240. The SI handler 240 parses the inputted SMT section and controls the system so that the parsed SI data are stored in the storage unit 290.
[323] Examples of the fields that can be transmitted through the service map table (SMT) will now be described.
[324] A table_id field corresponds to an 8-bit unsigned integer number, which indicates the type of the table section being defined in the SMT.
[325] An ensemble_id field corresponds to an 8-bit unsigned integer field, the value of which ranges from '0x00' to '0x3F'. Herein, the value of the ensemble_id field corresponds to an ID value associated with the corresponding MH ensemble. It is preferable that the value of the ensemble_id field is derived from the parade_id carried from the baseband processor of the MH physical layer subsystem. When the corresponding MH ensemble is carried over (or transmitted through) the primary RS frame, the most significant bit (MSB) is set to '0', and the remaining (or least significant) 7 bits are used as the identification value (i.e., parade_id) of the corresponding MH ensemble. On the other hand, when the corresponding MH ensemble is carried over (or transmitted through) the secondary RS frame, the most significant bit (MSB) is set to '1', and the remaining (or least significant) 7 bits are used as the identification value (i.e., parade_id) of the corresponding MH ensemble.
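The ensemble_id derivation above (primary/secondary RS frame selecting the MSB, parade_id in the low 7 bits) can be sketched as follows; this helper is illustrative, not part of the patent text:

```python
def derive_ensemble_id(parade_id, primary_rs_frame):
    # MSB = 0 for an MH ensemble carried over the primary RS frame,
    # MSB = 1 for the secondary RS frame; the low 7 bits carry the parade_id.
    assert 0 <= parade_id <= 0x7F
    msb = 0x00 if primary_rs_frame else 0x80
    return msb | parade_id
```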
[326] A num_channels field corresponds to an 8-bit field, which specifies the number of virtual channels in the corresponding SMT section.
[327] Additionally, the SMT uses a `for' loop statement so as to provide information on a plurality of virtual channels.
[328] A transport_stream_ID field corresponds to a 16-bit field indicating an identification value for distinguishing the corresponding SMT from other SMTs that may be broadcasted via different physical channels.
[329] A major_channel_num field corresponds to an 8-bit unsigned integer field, which represents the major channel number associated with the corresponding virtual channel. Herein, the major_channel_num field is assigned a value ranging from '0x00' to '0xFF'.
[330] A minor_channel_num field corresponds to an 8-bit unsigned integer field, which represents the minor channel number associated with the corresponding virtual channel. Herein, the minor_channel_num field is assigned a value ranging from '0x00' to '0xFF'.
[331] A source_id field corresponds to a 16-bit unsigned integer number, which identifies the programming source associated with the virtual channel. Accordingly, a source corresponds to any one specific source of video, text, data, or audio programs. The source_id field is not assigned the value '0' (i.e., the source_id value zero ('0') is reserved). The source_id field is assigned a value ranging from '0x0001' to '0x0FFF'. Herein, the source_id field value is a unique value, at the regional level, within the physical channel carrying the SMT.
[332] A short_channel_name field indicates a short textual name of the virtual channel.
[333] Furthermore, the 'for' loop statement may further include a descriptors() field. The descriptors() field included in the 'for' loop statement corresponds to a descriptor individually applied to each virtual channel. The SDP reference descriptor or SD descriptor according to the present invention may be received by being included in either the SMT shown in FIG. 17 or the SMT shown in FIG. 27.

[334] FIG. 28 illustrates an exemplary bit stream syntax structure of an SDP_Reference_Descriptor() according to the present invention.
[335] Referring to FIG. 28, a descriptor_tag field is assigned with 8 bits. Herein, the descriptor_tag field indicates that the corresponding descriptor is an SDP_Reference_Descriptor().
[336] A descriptor_length field is an 8-bit field, which indicates the length (in bytes) of the portion immediately following the descriptor_length field up to the end of the SDP_Reference_Descriptor().
[337] An SDP_Reference_type field corresponds to an indicator indicating whether the corresponding SDP message is transmitted in a session announcement protocol (SAP) stream format or in an SDP file format via a file delivery over unidirectional transport (FLUTE) session. More specifically, the SDP message may be received either in a stream format or in a file format. When the SDP message is received in a stream format, the session announcement protocol (SAP) may be used as the transmission protocol. On the other hand, when the SDP message is received in a file format, the file delivery over unidirectional transport (FLUTE) protocol may be used as the transmission protocol.
[338] For example, when the SDP_Reference_type field value indicates the SAP stream, the SDP_Reference_Descriptor() may include an Address_type field, an Address_count field, a Target_IP_address field, a Target_Port_Num field, a Port_Count field, and an SDP_Session_ID field.
[339] The Address_type field represents an indicator indicating whether the corresponding IP address corresponds to an IPv4 address or an IPv6 address.
[340] The Address_count field indicates the number of IP streams that are transmitted through the corresponding session. The address of each IP stream is assigned by incrementing the Target_IP_address field value by '1' per stream, starting from the least significant bit.
[341] The Target_IP_address field either indicates an IP address of the corresponding IP stream or indicates a representative IP address of the corresponding session.
[342] The Target_Port_Num field either indicates a UDP port number of the corresponding IP stream or indicates a representative UDP port number of the corresponding session.
[343] The Port_Count field indicates the number of port numbers that are used by the corresponding session. The UDP port number of each IP stream is assigned by incrementing the Target_Port_Num field value by '1' per stream, starting from the least significant bit.
[344] Finally, the SDP_Session_ID field represents an identifier assigned to the SDP message respective of the corresponding virtual channel.
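The per-stream addressing rule above (addresses and ports each incrementing by 1 from their target values) can be sketched for the SAP case as follows. This is an illustrative helper, not part of the patent text:

```python
import ipaddress

def session_streams(target_ip, address_count, target_port, port_count):
    """Enumerate per-stream IP addresses and UDP ports for a session.

    Each IP stream's address/port is the target value incremented by 1
    per stream (i.e., counting up from the least significant bit).
    """
    base = ipaddress.ip_address(target_ip)  # works for IPv4 and IPv6
    addresses = [str(base + i) for i in range(address_count)]
    ports = [target_port + i for i in range(port_count)]
    return addresses, ports
```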
[345] Meanwhile, when the SDP_Reference_type field value indicates the FLUTE file delivery, the SDP_Reference_Descriptor() may include a TSI_length field, an Address_type field, an Address_count field, a Transport_Session_ID field, a Target_IP_address field, a Target_Port_Num field, a Port_Count field, and an SDP_Session_ID field.
[346] The TSI_length field indicates to which of three options the length of the Transport_Session_ID field corresponds.
[347] The Address_type field represents an indicator indicating whether the corresponding IP address corresponds to an IPv4 address or an IPv6 address.
[348] The Address_count field indicates the number of IP streams that are transmitted through the corresponding session. The address of each IP stream is assigned by incrementing the Target_IP_address field value by '1' per stream, starting from the least significant bit.
[349] The Transport_Session_ID field represents an identifier for an IP address being transmitted (or delivered) to the respective session. Any one of a 16-bit length, a 32-bit length, and a 64-bit length may be optionally assigned as the length of the Transport_Session_ID field.
[350] The Target_IP_address field either indicates an IP address of the corresponding IP stream or indicates a representative IP address of the corresponding session.
[351] The Target_Port_Num field either indicates a UDP port number of the corresponding IP stream or indicates a representative UDP port number of the corresponding session.
[352] The Port_Count field indicates the number of port numbers that are used by the corresponding session. The UDP port number of each IP stream is assigned by incrementing the Target_Port_Num field value by '1' per stream, starting from the least significant bit.
[353] Finally, the SDP_Session_ID field represents an identifier assigned to the SDP message respective of the corresponding virtual channel. More specifically, when an SDP message exists for each virtual channel, the receiving system may be informed of the location information of a corresponding SDP message through the SDP reference descriptor, thereby enabling the receiving system to acquire the SDP message.
[354] FIG. 29 illustrates an exemplary bit stream syntax structure of a Session_Description_Descriptor() according to the present invention.
[355] Referring to FIG. 29, a descriptor_tag field is assigned with 8 bits. Herein, the descriptor_tag field indicates that the corresponding descriptor is a Session_Description_Descriptor() (i.e., SD descriptor).
[356] A descriptor_length field is an 8-bit field, which indicates the length (in bytes) of the portion immediately following the descriptor_length field up to the end of the Session_Description_Descriptor().
[357] A Session_version field indicates the version of the corresponding session.
[358] An Address_type field represents an indicator indicating whether the corresponding IP address corresponds to an IPv4 address or an IPv6 address.
[359] A Target_IP_address field either indicates an IP address of the corresponding IP stream or indicates a representative IP address of the corresponding session.
[360] An Address_count field indicates the number of IP streams that are transmitted through the corresponding session. The address of each IP stream is assigned by incrementing the Target_IP_address field value by '1' per stream, starting from the least significant bit.
[361] A Num_components field indicates the number of components included in the corresponding virtual channel.
[362] The SD descriptor (i.e., Session_Description_Descriptor()) uses a 'for' loop statement so as to provide information on a plurality of components.
[363] Herein, the 'for' loop statement may include a Media_type field, a Num_ports field, a Target_Port_Num field, an RTP_payload_type field, a Codec_type field, and an MPEG4_ES_ID field.
[364] The Media_type field indicates the media type of the corresponding component. For example, the Media_type field indicates whether the component corresponds to audio-type media, video-type media, or data-type media.
[365] The Num_ports field indicates the number of ports transmitting (or delivering) the corresponding component.
[366] The Target_Port_Num field indicates the UDP port number of the corresponding component.
[367] The RTP_payload_type field represents the coding format of the corresponding component. In case the corresponding component has not been encapsulated to RTP, the RTP_payload_type field shall be disregarded.
[368] The Codec_type field indicates to which Codec type the corresponding component has been encoded. For example, if the component corresponds to video-type media, H.264 or SVC may be used as the Codec type of the video component.
[369] The MPEG4_ES_ID field represents an identifier that can identify an elementary stream (ES) of the corresponding component.
[370] When the Media_type field value indicates video-type media, and when the codec_type field value indicates the H.264, the Session_Description_Descriptor() may further include an AVC_Video_Description_Bytes() field.
[371] Also, when the Media_type field value indicates video-type media, and when the Codec_type field value indicates SVC, the Session_Description_Descriptor() may further include an AVC_Video_Description_Bytes() field, a Hierarchy_Description_Bytes() field, and an SVC_extension_Description_Bytes() field.
[372] Moreover, when the Media_type field value indicates audio-type media, the Session_Description_Descriptor() may further include an MPEG4_Audio_Description_Bytes() field.
[373] The above-described AVC_Video_Description_Bytes() field, Hierarchy_Description_Bytes() field, SVC_extension_Description_Bytes() field, and MPEG4_Audio_Description_Bytes() field respectively include parameters that are used when decoding the corresponding component.
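Paragraphs [370]–[372] define which extra description structures follow each component entry. A minimal sketch of that branching logic is given below; the string code points for Media_type and Codec_type are assumed for illustration, since the actual values are defined by the SD descriptor syntax of FIG. 29 (not reproduced here).

```python
# Assumed code points for illustration only; the real values are
# defined by the SD descriptor syntax of FIG. 29.
MEDIA_AUDIO, MEDIA_VIDEO = "audio", "video"
CODEC_H264, CODEC_SVC = "H.264", "SVC"

def description_bytes_for(media_type, codec_type=None):
    """Return the *_Description_Bytes() structures that the
    Session_Description_Descriptor() further includes for a component,
    per paragraphs [370]-[372]."""
    if media_type == MEDIA_VIDEO and codec_type == CODEC_H264:
        return ["AVC_Video_Description_Bytes()"]
    if media_type == MEDIA_VIDEO and codec_type == CODEC_SVC:
        return ["AVC_Video_Description_Bytes()",
                "Hierarchy_Description_Bytes()",
                "SVC_extension_Description_Bytes()"]
    if media_type == MEDIA_AUDIO:
        return ["MPEG4_Audio_Description_Bytes()"]
    return []  # data-type media carries no extra description bytes here
```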
[374] FIG. 30 illustrates an exemplary bit stream syntax structure of an AVC_Video_Description_Bytes() according to the present invention. The AVC_Video_Description_Bytes() may include a profile_idc field, a constraint_set0_flag field, a constraint_set1_flag field, a constraint_set2_flag field, a constraint_set3_flag field, an AVC_compatible_flags field, a level_idc field, an AVC_still_present field, and an AVC_24_hour_picture_flag field.
[375] More specifically, the profile_idc field indicates a profile of the corresponding video. The constraint_set0_flag to constraint_set3_flag fields respectively indicate whether the constraint of the corresponding profile is satisfied.
[376] The level_idc field indicates the level of the corresponding video. For example, the level_idc field defined in ISO/IEC 14496-10 may be used without modification as the level_idc field included in AVC_Video_Description_Bytes() according to the embodiment of the present invention.
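As a rough illustration of how a receiver might unpack these fields, the sketch below assumes a compact 3-byte layout (profile_idc, then the four constraint flags packed with AVC_compatible_flags, then level_idc), modeled loosely on the MPEG-2 systems AVC video descriptor; the actual field widths and order are given by FIG. 30, so this layout is an assumption.

```python
def parse_avc_video_description(data: bytes) -> dict:
    """Hypothetical 3-byte layout for illustration only:
    byte 0: profile_idc
    byte 1: constraint_set0..3_flag (4 MSBs) | AVC_compatible_flags (4 LSBs)
    byte 2: level_idc
    """
    b0, b1, b2 = data[0], data[1], data[2]
    return {
        "profile_idc": b0,
        "constraint_set0_flag": (b1 >> 7) & 1,
        "constraint_set1_flag": (b1 >> 6) & 1,
        "constraint_set2_flag": (b1 >> 5) & 1,
        "constraint_set3_flag": (b1 >> 4) & 1,
        "AVC_compatible_flags": b1 & 0x0F,
        "level_idc": b2,
    }

# Baseline profile (profile_idc 66) with constraint_set0..2 set and level 3.0:
# parse_avc_video_description(bytes([66, 0xE0, 30]))
```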
[377] FIG. 31 illustrates an exemplary bit stream syntax structure of a Hierarchy_Description_Bytes() according to the present invention. The Hierarchy_Description_Bytes() may include a temporal_scalability_flag field, a spatial_scalability_flag field, a quality_scalability_flag field, a hierarchy_type field, a hierarchy_layer_index field, a hierarchy_embedded_layer_index field, and a hierarchy_channel field. More specifically, the temporal_scalability_flag field indicates a temporal scalability status of the corresponding video.
[378] The spatial_scalability_flag field indicates a spatial scalability status of the corresponding video. And, the quality_scalability_flag field indicates a qualitative scalability status of the corresponding video.
[379] FIG. 32 illustrates an exemplary bit stream syntax structure of an SVC_extension_Description_Bytes() according to the present invention. Herein, the SVC_extension_Description_Bytes() may include a profile_idc field, a level_idc field, a width field, a height field, a frame_rate field, an average_bitrate field, a maximum_bitrate field, a dependency_id field, a quality_id_start field, a quality_id_end field, a temporal_id_start field, and a temporal_id_end field.
[380] More specifically, the profile_idc field indicates a profile of the corresponding video.
[381] The level_idc field indicates the level of the corresponding video.
[382] The width field indicates the horizontal size (i.e., width) of the screen on which the corresponding video is to be displayed.
[383] The height field indicates the vertical size (i.e., height) of the screen on which the corresponding video is to be displayed.

[384] The frame_rate field indicates a frame rate of the corresponding video.
[385] The average_bitrate field indicates the average bit transmission rate of the corresponding video.
[386] And, the maximum_bitrate field indicates the maximum bit transmission rate of the corresponding video.
[387] FIG. 33 illustrates an exemplary bit stream syntax structure of an MPEG4_Audio_Description_Bytes() according to the present invention. The MPEG4_Audio_Description_Bytes() may include an MPEG4_audio_profile_and_level field.
[388] Herein, the MPEG4_audio_profile_and_level field indicates a profile and a level value of the corresponding audio.
[389] By receiving the SD descriptor included in the SMT, and by using the received SD descriptor, the receiving system according to the present invention may acquire description information including IP access information and Codec information on each component for each respective virtual channel. If the received SD descriptor is included in the SMT shown in FIG. 17, then the IP access information for a corresponding component may be omitted from the SD descriptor.
[390] As described above, the description information on each component (i.e., Codec information) may be received by being included in the SMT shown in FIG. 17 as the component level descriptor. Alternatively, the description information on each component (i.e., Codec information) may be described in the SD descriptor in a text format, and the SD descriptor may be received by being included either in the SMT shown in FIG. 17 or in the SMT shown in FIG. 27 as the virtual channel level descriptor.
[391] If the description information on each component (i.e., Codec information) is received by being included in the SMT of FIG. 17 as the component level descriptor, the description information would be more effective in describing the Codec information of the component, which is encoded in a Codec pre-decided by a specific standard. On the other hand, if the description information on each component (i.e., Codec information) is described in the SD descriptor in a text format, and if the SD descriptor is received by being included either in the SMT of FIG. 17 or in the SMT of FIG. 27 as the virtual channel level descriptor, the description information would be more effective in describing the Codec information of the component, which is encoded in an undecided Codec.
[392] Since the value of each field is pre-decided, the former case is advantageous in that the descriptor size is small and the processing is simplified. However, the disadvantage of the former case is that only the information of the pre-decided Codec can be described. On the other hand, since the Codec information is provided in a text format via the SD descriptor, the latter case is disadvantageous in that the descriptor size for the Codec information may become larger. However, the latter case has an advantage in expandability, since the information may be described even when the corresponding component is coded in an undecided Codec.
[393] An example of accessing the SDP message by referring to the SDP reference descriptor or the SD descriptor, so as to acquire SDP message information, will now be described in detail. For example, it is assumed that the SMT is encapsulated into IP datagrams, so as to be received. Accordingly, when the system determines, based upon the header of each received MH TP, that the corresponding MH TP includes an SMT section, the MH TP handler 213 outputs the corresponding SMT section to the IP network stack 220. Thereafter, the IP network stack 220 performs IP and UDP processes on the SMT section and outputs the processed SMT section to the SI handler 240. The SI handler 240 parses the inputted SMT section and controls the system so that the parsed SI data are stored in the storage unit 290.
[394] At this point, when an SDP reference descriptor (shown in FIG. 28) is included in the SMT section, the SI handler 240 acquires position information of the corresponding SDP message from the SDP reference descriptor. Among the position information, the SI handler 240 may use the SDP reference type information to determine whether the corresponding SDP message is being received in an SAP stream format, or whether the corresponding SDP message is being received in an SDP file format through a FLUTE session. If it is determined that the SDP message is being received in an SAP stream format, the SI handler 240 parses the value of each of the Address_type field, the Address_count field, the Target_IP_address field, the Target_Port_Num field, the Port_Count field, and the SDP_Session_ID field, which are all included in the descriptor. Then, the SI handler 240 refers to the parsed values to access the corresponding SAP stream, which is then outputted to the MIME handler 260. Thereafter, the MIME handler 260 gathers (or collects) SDP message information from the inputted SAP stream, thereby storing the gathered (or collected) SDP message information in the storage unit 290 through the SI handler 240.
[395] Alternatively, if it is determined that the SDP message is being received in an SDP file format through a FLUTE session, the SI handler 240 parses the value of each of the TSI_length field, the Address_type field, the Address_count field, the Transport_Session_ID field, the Target_IP_address field, the Target_Port_Num field, the Port_Count field, and the SDP_Session_ID field, which are all included in the descriptor. Then, the SI handler 240 refers to the parsed values to access the corresponding FLUTE session, which is then outputted to the FLUTE handler 250. The FLUTE handler 250 extracts the SDP file from the inputted FLUTE session, which is then outputted to the MIME handler 260. The MIME handler 260 gathers SDP message information from the inputted SDP file, thereby storing the gathered SDP message information in the storage unit 290 through the SI handler 240.
[396] Meanwhile, when an SD descriptor (shown in FIG. 29) is included in the SMT section, the SI handler 240 acquires IP access information and description information on each component within the corresponding virtual channel from the SD descriptor. For example, the SI handler 240 extracts media-type information and Codec-type information from the SD descriptor. Then, the SI handler 240 acquires Codec information of the corresponding component based upon the extracted media-type information and Codec-type information. The acquired Codec information is then stored in the storage unit 290. When required, however, the acquired Codec information is outputted to the A/V decoder 310.
[397] If the media type corresponds to a video component, and if the Codec type corresponds to H.264, the SI handler 240 parses the AVC_Video_Description_Bytes(), thereby acquiring the Codec information of the corresponding video component.
[398] Meanwhile, if the media type corresponds to a video component, and if the Codec type corresponds to SVC, the SI handler 240 parses the AVC_Video_Description_Bytes(), the Hierarchy_Description_Bytes(), and the SVC_extension_Description_Bytes(), thereby acquiring the Codec information of the corresponding video component. Furthermore, if the media type corresponds to an audio component, the SI handler 240 parses the MPEG4_Audio_Description_Bytes(), thereby extracting the Codec information of the corresponding audio component.
[399] FIG. 34 to FIG. 36 illustrate flow charts showing a method for accessing a mobile service according to an embodiment of the present invention. FIG. 34 illustrates an example of a method for accessing a mobile service using one of the SDP reference descriptor and the SD descriptor in the receiving system according to the present invention. More specifically, a physical channel is tuned (S701). And, FIC segments are gathered in sub-frame units through an MH sub-frame of the tuned MH signal, so as to be demodulated (S702). According to the embodiment of the present invention, an FIC segment is inserted in a data group, so as to be transmitted. More specifically, the FIC segment corresponding to each data group describes service information on the MH ensemble to which the corresponding data group belongs. When the FIC segments are gathered (or grouped) in sub-frame units and, then, deinterleaved, all service information on the physical channel through which the corresponding FIC segment is transmitted may be acquired. Therefore, after the tuning process, the receiving system may acquire channel information on the corresponding physical channel during a sub-frame period.
[400] In Step 702, when the FIC data are processed, reference may be made to the processed FIC data, so as to locate (or detect) the MH ensemble transmitting the requested mobile service (S703). Then, data groups including the MH ensemble are gathered from the MH frame, so as to configure an RS frame corresponding to the MH ensemble, thereby decoding the configured RS frame (S704). Thereafter, an MH TP, which transmits an SMT from the decoded RS frame, is located (or found) (S705). Each field of the SMT found in Step 705 is parsed, so as to gather descriptive information on each virtual channel (S706).
[401] If the SMT corresponds to the SMT of FIG. 17, the descriptive information may correspond to a major channel number, a minor channel number, a virtual channel short name, service ID, service type, activity status information on the corresponding virtual channel, IP address information, UDP port information, and so on. Alternatively, if the SMT corresponds to the SMT of FIG. 27, the descriptive information may correspond to a transport stream ID, a major channel number, a minor channel number, a source ID, a channel short name, and so on.
[402] Once the descriptive information is gathered, the descriptors within the virtual channel loop of the SMT are gathered and processed (S707). At this point, the system determines whether an SDP reference descriptor (as shown in FIG. 28) or an SD descriptor (as shown in FIG. 29) is included in the descriptors within the virtual channel loop of the SMT (S708). If the system determines, in Step 708, that the SDP reference descriptor is included in the virtual channel loop of the SMT, the process step moves on to the steps shown in FIG. 35, thereby acquiring position information of the corresponding SDP message from the SDP reference descriptor (S709).
[403] Alternatively, if the system determines, in Step 708, that the SD descriptor is included in the virtual channel loop of the SMT, the process step moves on to the steps shown in FIG. 36, thereby acquiring IP access information and description information for each component of the corresponding virtual channel from the SD descriptor (S710). After processing Steps 709 and 710, the system verifies whether any unprocessed virtual channels remain (S711). If the system detects unprocessed virtual channels, Step 706 is repeated so as to gather more information on the corresponding virtual channel. Conversely, if the system does not detect any unprocessed virtual channels, the system prepares to provide the mobile service (S712).
[404] FIG. 35 illustrates a flow chart of a method for acquiring position information of the corresponding SDP message when it is determined in FIG. 34 that an SDP reference descriptor is included in the parsed SMT. More specifically, the receiving system extracts SDP reference type information from the SDP reference descriptor (S801). Then, it is determined whether the corresponding SDP message is being received in an SAP stream format, or whether the corresponding SDP message is being received in an SDP file format through a FLUTE session (S802).
[405] If the system determines, in Step 802, that the SDP message is being received in an SAP stream format, the value of each of the Address_type field, the Address_count field, the Target_IP_address field, the Target_Port_Num field, the Port_Count field, and the SDP_Session_ID field, which are all included in the descriptor, is parsed. And, the system refers to the parsed field values to access the corresponding SAP stream (S803). Then, the system gathers SDP message information from the accessed SAP stream, so as to provide the information to the respective block, thereby moving on to the subsequent process steps shown in FIG. 34 (S804).
[406] On the other hand, if the system determines, in Step 802, that the SDP message is being received in an SDP file format through a FLUTE session, the value of each of the TSI_length field, the Address_type field, the Address_count field, the Transport_Session_ID field, the Target_IP_address field, the Target_Port_Num field, the Port_Count field, and the SDP_Session_ID field, which are all included in the descriptor, is parsed. And, the system refers to the parsed field values to access the corresponding FLUTE session (S805). Then, the system gathers SDP message information from the accessed FLUTE session, so as to provide the information to the respective block, thereby moving on to the subsequent process steps shown in FIG. 34 (S806).
[407] FIG. 36 illustrates a flow chart of a method for acquiring IP access information and description information on each component within the corresponding virtual channel when it is determined in FIG. 34 that an SD descriptor is included in the parsed SMT. More specifically, IP address information is extracted from the SD descriptor (S901). Herein, the IP address information may be acquired by parsing the Session_version field, the Address_type field, the Target_IP_address field, and the Address_count field. Then, with respect to each component within the virtual channel, media type information (Media_type) is extracted from the SD descriptor (S902), and UDP/RTP information is extracted from the SD descriptor (S903). Herein, the UDP/RTP information may be acquired by parsing the Num_Ports field, the Target_Port_Num field, and the RTP_payload_type field.
[408] Subsequently, the receiving system determines whether the media type extracted in Step 902 corresponds to a video component or an audio component (S904). When it is verified in Step 904 that the media type corresponds to a video component, Codec type (codec_type) information is extracted (S905). Thereafter, Codec information of the corresponding video component is acquired based upon the extracted Codec type (S906). For example, when the Codec type indicates H.264, the receiving system parses the AVC_Video_Description_Bytes(). Meanwhile, when the Codec type indicates SVC, the receiving system parses the AVC_Video_Description_Bytes(), the Hierarchy_Description_Bytes(), and the SVC_extension_Description_Bytes(), thereby acquiring Codec information of the corresponding video component (S906). Thereafter, the acquired video Codec information is outputted to the A/V decoder 310.
[409] Alternatively, when it is verified in Step 904 that the media type corresponds to an audio component, Codec type (codec_type) information is extracted (S907). Then, the receiving system parses the MPEG4_Audio_Description_Bytes(), thereby extracting the Codec information of the corresponding audio component (S908). Thereafter, the acquired audio Codec information is also outputted to the A/V decoder 310. Based upon the Codec information of the inputted video and/or audio component(s), the A/V decoder 310 decodes audio and/or video stream(s) outputted from the stream handler 230, thereby outputting the decoded stream(s) to the display module 320.
[410] As described above, the digital broadcasting system and the data processing method according to the present invention have the following advantages. By using the SMT, the present invention may perform channel setting more quickly and efficiently. Also, the present invention may expand the information associated with channel settings, either by including in the SMT an SDP reference descriptor describing position information on an SDP message, or by including in the SMT an SD descriptor describing IP access information and description information on each component of the respective virtual channel, so that the descriptor is transmitted together with the SMT.
[411] Also, the present invention reduces the absolute amount of data that must be acquired for channel setting and IP service access, thereby minimizing bandwidth consumption. For example, when the SDP reference descriptor is included in the SMT and received, the corresponding virtual channel is recognized as a session, and the SDP message of the corresponding session may be received. Also, when the SD descriptor is included in the SMT and received, the corresponding virtual channel is recognized as a session, thereby enabling access based upon the access information and media characteristics of each IP media component, which is being transmitted through the corresponding session.
[412] It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the inventions. Thus, it is intended that the present invention covers the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.
Mode for the Invention [413] Meanwhile, the mode for the embodiment of the present invention is described together with the 'Best Mode' description.
Industrial Applicability [414] The embodiments of the method for transmitting and receiving signals and the apparatus for transmitting and receiving signals according to the present invention can be used in the fields of broadcasting and communication.

Claims (10)

1. A method of transmitting broadcast data in a digital broadcast transmitting system, the method comprising:

Reed Solomon-Cyclic Redundancy Check (RS-CRC) encoding, by a Reed-Solomon (RS) frame encoder, mobile data to build at least one of a primary RS frame belonging to a primary ensemble and a secondary RS frame belonging to a secondary ensemble;

mapping the RS-CRC encoded mobile data into data groups and adding known data sequences, a portion of fast information channel (FIC) data, and transmission parameter channel (TPC) data to each of the data groups, wherein the FIC data includes information for rapid mobile service acquisition, and wherein the TPC data includes version information for indicating an update of the FIC data and a parade identifier to identify a parade which carries at least one of the primary ensemble and the secondary ensemble;

multiplexing data in the data groups and main data; and transmitting a transmission frame including the multiplexed data, wherein the FIC data are divided into a plurality of FIC segment payloads, and each FIC segment including an FIC segment header and one of the plurality of FIC segment payloads is transmitted in each of the data groups, wherein the primary ensemble includes at least one mobile service and a first service map table and the secondary ensemble includes at least one mobile service and a second service map table, wherein the first service map table comprises a first ensemble identifier to identify the primary ensemble and the second service map table comprises a second ensemble identifier to identify the secondary ensemble, and wherein the first and second ensemble identifiers include the parade identifier, respectively.
2. The method of claim 1, wherein a most significant bit of the first ensemble identifier is set to 0.
3. The method of claim 1 or 2, wherein a most significant bit of the second ensemble identifier is set to 1.
4. The method of any one of claims 1 to 3, wherein a least significant 7 bits of the first ensemble identifier correspond to the parade identifier.
5. The method of any one of claims 1 to 3, wherein a least significant 7 bits of the second ensemble identifier correspond to the parade identifier.
6. A digital broadcast transmitting system comprising:

a Reed-Solomon (RS) frame encoder for Reed Solomon-Cyclic Redundancy Check (RS-CRC) encoding mobile data to build at least one of a primary RS frame belonging to a primary ensemble and a secondary RS frame belonging to a secondary ensemble;

a group formatting means for mapping the RS-CRC encoded mobile data into data groups and adding known data sequences, a portion of fast information channel (FIC) data, and transmission parameter channel (TPC) data to each of the data groups, wherein the FIC data includes information for rapid mobile service acquisition, and wherein the TPC data includes version information for indicating an update of the FIC data and a parade identifier to identify a parade which carries at least one of the primary ensemble and the secondary ensemble;

a multiplexing means for multiplexing data in the data groups and main data; and a transmitting means for transmitting a transmission frame including the multiplexed data, wherein the FIC data are divided into a plurality of FIC segment payloads, and each FIC segment including an FIC segment header and one of the plurality of FIC segment payloads is transmitted in each of the data groups, wherein the primary ensemble includes at least one mobile service and a first service map table and the secondary ensemble includes at least one mobile service and a second service map table, wherein the first service map table comprises a first ensemble identifier to identify the primary ensemble and the second service map table comprises a second ensemble identifier to identify the secondary ensemble, and wherein the first and second ensemble identifiers include the parade identifier, respectively.
7. The digital broadcast transmitting system of claim 6, wherein a most significant bit of the first ensemble identifier is set to 0.
8. The digital broadcast transmitting system of claim 6 or 7, wherein a most significant bit of the second ensemble identifier is set to 1.
9. The digital broadcast transmitting system of any one of claims 6 to 8, wherein a least significant 7 bits of the first ensemble identifier correspond to the parade identifier.
10. The digital broadcast transmitting system of any one of claims 6 to 8, wherein a least significant 7 bits of the second ensemble identifier correspond to the parade identifier.
CA2696721A 2007-08-24 2008-08-25 Digital broadcasting system and method of processing data in digital broadcasting system Expired - Fee Related CA2696721C (en)

Applications Claiming Priority (15)

Application Number Priority Date Filing Date Title
US95771407P 2007-08-24 2007-08-24
US60/957,714 2007-08-24
US97408407P 2007-09-21 2007-09-21
US60/974,084 2007-09-21
US97737907P 2007-10-04 2007-10-04
US60/977,379 2007-10-04
US4450408P 2008-04-13 2008-04-13
US61/044,504 2008-04-13
US5981108P 2008-06-09 2008-06-09
US61/059,811 2008-06-09
US7668608P 2008-06-29 2008-06-29
US61/076,686 2008-06-29
KR10-2008-0082929 2008-08-25
PCT/KR2008/004966 WO2009028846A1 (en) 2007-08-24 2008-08-25 Digital broadcasting system and method of processing data in digital broadcasting system
KR1020080082929A KR101599527B1 (en) 2007-08-24 2008-08-25 Digital broadcasting system and method of processing data in digital broadcasting system

Publications (2)

Publication Number Publication Date
CA2696721A1 CA2696721A1 (en) 2009-03-05
CA2696721C true CA2696721C (en) 2012-07-24

Family

ID=40382129

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2696721A Expired - Fee Related CA2696721C (en) 2007-08-24 2008-08-25 Digital broadcasting system and method of processing data in digital broadcasting system

Country Status (6)

Country Link
US (2) US8014333B2 (en)
KR (2) KR101599527B1 (en)
CA (1) CA2696721C (en)
IN (1) IN2010KN00592A (en)
MX (1) MX2010002029A (en)
WO (1) WO2009028846A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2696721C (en) 2007-08-24 2012-07-24 Lg Electronics Inc. Digital broadcasting system and method of processing data in digital broadcasting system
WO2010107167A1 (en) * 2009-03-19 2010-09-23 Lg Electronics Inc. Transmitting/receiving system and method of processing data in the transmitting/receiving system
WO2010120156A2 (en) * 2009-04-17 2010-10-21 엘지전자 주식회사 Transmitting/receiving system and broadcast signal processing method
KR101643616B1 (en) * 2009-11-06 2016-07-29 삼성전자주식회사 Method for receiving of mobile service and receiver of mobile service
KR20110063327A (en) 2009-11-30 2011-06-10 삼성전자주식회사 Digital broadcast transmitter, digital broadcast receiver, methods for constructing and processing streams thereof
US8953478B2 (en) * 2012-01-27 2015-02-10 Intel Corporation Evolved node B and method for coherent coordinated multipoint transmission with per CSI-RS feedback
US9900166B2 (en) * 2013-04-12 2018-02-20 Qualcomm Incorporated Methods for delivery of flows of objects over broadcast/multicast enabled networks
JP2015073197A (en) 2013-10-02 2015-04-16 ソニー株式会社 Transmitter and transmitting method, receiver and receiving method and computer program
KR101861696B1 (en) 2014-08-22 2018-05-28 엘지전자 주식회사 Method for transmitting broadcast signals, apparatus for transmitting broadcast signals, method for receiving broadcast signals and apparatus for receiving broadcast signals
CA3161483A1 (en) * 2015-01-19 2016-07-28 Samsung Electronics Co., Ltd. Method and apparatus for transmitting and receiving multimedia content
EP3249914A4 (en) * 2015-01-21 2018-07-18 LG Electronics Inc. Broadcast signal transmission apparatus, broadcast signal receiving apparatus, broadcast signal transmission method, and broadcast signal receiving method
WO2017030344A1 (en) * 2015-08-17 2017-02-23 엘지전자(주) Apparatus and method for transmitting and receiving broadcast signal
KR101967299B1 (en) * 2017-12-19 2019-04-09 엘지전자 주식회사 Autonomous vehicle for receiving a broadcasting signal and method of Autonomous vehicle for receiving a broadcasting signal

Family Cites Families (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5754651A (en) 1996-05-31 1998-05-19 Thomson Consumer Electronics, Inc. Processing and storage of digital data and program specific information
JPH1169253A (en) 1997-08-22 1999-03-09 Hitachi Ltd Broadcast receiver with general program guide
WO1999063752A1 (en) 1998-05-29 1999-12-09 Sony Corporation Information processing apparatus and method, and providing medium
US6317462B1 (en) 1998-10-22 2001-11-13 Lucent Technologies Inc. Method and apparatus for transmitting MPEG video over the internet
JP3652176B2 (en) 1999-08-13 2005-05-25 株式会社日立製作所 Digital broadcast receiving apparatus and semiconductor device thereof
JP4250832B2 (en) 1999-10-14 2009-04-08 三菱電機株式会社 Data transmission device
JP4631235B2 (en) 2000-08-25 2011-02-16 ソニー株式会社 Digital broadcast transmission method, digital broadcast transmission apparatus, and digital broadcast reception apparatus
GB0119569D0 (en) 2001-08-13 2001-10-03 Radioscape Ltd Data hiding in digital audio broadcasting (DAB)
KR20030030175A (en) 2001-10-09 2003-04-18 주식회사 대우일렉트로닉스 Digital broadcasting receiver by using descriptor
JP2003134117A (en) 2001-10-22 2003-05-09 Hitachi Communication Technologies Ltd Ip telephone set, call manager and method for acquiring ip address for ip telephone set
KR100440687B1 (en) 2001-11-02 2004-07-15 한국전자통신연구원 System for transceiving information of digital cable broadcast and method thereof
US6909753B2 (en) 2001-12-05 2005-06-21 Koninklijke Philips Electronics, N.V. Combined MPEG-4 FGS and modulation algorithm for wireless video transmission
JP3916542B2 (en) 2002-10-07 2007-05-16 沖電気工業株式会社 Address assignment system
KR100920726B1 (en) * 2002-10-08 2009-10-07 삼성전자주식회사 Single carrier transmission system and a method using the same
KR100920723B1 (en) * 2002-10-08 2009-10-07 삼성전자주식회사 Single carrier transmission system capable of acclimating dynamic environment and a method therefore
US20060098937A1 (en) 2002-12-20 2006-05-11 Koninklijke Philips Electronics N.V. Method and apparatus for handling layered media data
BR0318015A (en) 2003-01-21 2005-11-29 Nokia Corp Method, system and apparatus for receiving and transmitting a digital broadband transmission for storing power to the receiver, data processing system, and computer program
EP1463309A1 (en) 2003-03-26 2004-09-29 THOMSON Licensing S.A. Data stream format processing for mobile audio/video reception
GB2406483A (en) 2003-09-29 2005-03-30 Nokia Corp Burst transmission
KR100565646B1 (en) 2003-12-19 2006-03-29 LG Electronics Inc. Method of synchronizing service components in a DMB receiver
KR100565900B1 (en) 2003-12-26 2006-03-31 Electronics and Telecommunications Research Institute Apparatus and method for converting a digital TV broadcasting signal into a digital radio broadcasting signal
FR2864869A1 (en) 2004-01-06 2005-07-08 Thomson Licensing Sa Digital video broadcasting performing process for e.g. Internet protocol network, involves connecting receiver to part of stream conveying description information of digital services to obtain information on services
KR20050072988A (en) * 2004-01-08 2005-07-13 LG Electronics Inc. Apparatus and method for transmitting reference information of broadcasting contents from a digital broadcasting receiver to a mobile information terminal
KR100606827B1 (en) 2004-01-27 2006-08-01 LG Electronics Inc. Data structure of a VCT, method for judging a transmitted stream, and broadcasting receiver
US7626960B2 (en) 2004-04-20 2009-12-01 Nokia Corporation Use of signaling for auto-configuration of modulators and repeaters
KR100552678B1 (en) 2004-06-10 2006-02-20 Electronics and Telecommunications Research Institute Apparatus and method for transmitting and receiving data with reduced data packet setup time
KR100626665B1 (en) 2004-08-03 2006-09-25 Electronics and Telecommunications Research Institute IP-based DMB data translation apparatus and method for a DMB receiving system using the same
KR100666981B1 (en) 2004-08-09 2007-01-10 Samsung Electronics Co., Ltd. Apparatus and method for managing data reception in a digital broadcasting system
KR100651939B1 (en) 2004-08-18 2006-12-06 LG Electronics Inc. Broadcasting receiver and decoding method
JP4828906B2 (en) 2004-10-06 2011-11-30 Samsung Electronics Co., Ltd. Providing and receiving a video service in digital audio broadcasting, and apparatus therefor
KR101080966B1 (en) 2004-11-23 2011-11-08 LG Electronics Inc. Apparatus and method for transmitting/receiving broadcast signal
KR20060066444A (en) 2004-12-13 2006-06-16 Electronics and Telecommunications Research Institute Internet broadcasting system and method thereof
KR100687614B1 (en) 2004-12-21 2007-02-27 LG-Nortel Co., Ltd. Method for dynamic assignment of IP addresses in an IP-based keyphone system
KR100689479B1 (en) 2005-02-15 2007-03-02 Samsung Electronics Co., Ltd. Method for providing an extended electronic program guide for data broadcasting
KR100713481B1 (en) 2005-08-01 2007-04-30 Samsung Electronics Co., Ltd. Digital broadcasting receiving apparatus and method for generating a channel map for switching broadcasting channels
KR100754676B1 (en) 2005-09-21 2007-09-03 Samsung Electronics Co., Ltd. Apparatus and method for managing electronic program guide data in a digital broadcasting reception terminal
JP4643406B2 (en) 2005-09-27 2011-03-02 Toshiba Corporation Broadcast receiver
KR101191181B1 (en) 2005-09-27 2012-10-15 LG Electronics Inc. Transmitting/receiving system for digital broadcasting and data structure
US8320819B2 (en) 2005-11-01 2012-11-27 Nokia Corporation Mobile TV channel and service access filtering
KR101199369B1 (en) 2005-11-25 2012-11-09 LG Electronics Inc. Digital broadcasting system and processing method
KR101191182B1 (en) 2005-11-26 2012-10-15 LG Electronics Inc. Digital broadcasting system and processing method
KR101208504B1 (en) 2005-12-27 2012-12-05 LG Electronics Inc. Digital broadcasting system and processing method
KR20070075549A (en) 2006-01-13 2007-07-24 LG Electronics Inc. Digital broadcasting system and processing method
KR101227487B1 (en) * 2006-01-21 2013-01-29 LG Electronics Inc. Method of transmitting a digital broadcasting signal and method and apparatus for decoding a digital broadcasting signal
KR100771631B1 (en) 2006-05-23 2007-10-31 LG Electronics Inc. Broadcasting system and method of processing data in a broadcasting system
KR101430484B1 (en) * 2007-06-26 2014-08-18 LG Electronics Inc. Digital broadcasting system and method of processing data in digital broadcasting system
US8332896B2 (en) * 2007-07-05 2012-12-11 Coherent Logix, Incorporated Transmission of multimedia streams to mobile devices with cross stream association
CA2696721C (en) 2007-08-24 2012-07-24 Lg Electronics Inc. Digital broadcasting system and method of processing data in digital broadcasting system

Also Published As

Publication number Publication date
KR101599527B1 (en) 2016-03-03
KR20090021120A (en) 2009-02-27
KR20160026963A (en) 2016-03-09
US8014333B2 (en) 2011-09-06
US8149755B2 (en) 2012-04-03
KR101689615B1 (en) 2016-12-26
US20110280343A1 (en) 2011-11-17
MX2010002029A (en) 2010-03-15
US20090052580A1 (en) 2009-02-26
IN2010KN00592A (en) 2015-10-02
CA2696721A1 (en) 2009-03-05
WO2009028846A1 (en) 2009-03-05

Similar Documents

Publication Publication Date Title
CA2696721C (en) Digital broadcasting system and method of processing data in digital broadcasting system
CA2696726C (en) Digital broadcasting system and method of processing data in digital broadcasting system
CA2694704C (en) Digital broadcasting system and method of processing data in digital broadcasting system
CA2695548C (en) Digital broadcasting system and method of processing data in digital broadcasting system
CA2697483C (en) Digital broadcasting receiver and method for controlling the same
US8161511B2 (en) Digital broadcasting system and method of processing data in digital broadcasting system
US20140059633A1 (en) Digital broadcasting receiver and method for controlling the same
US20100322344A1 (en) Digital broadcasting system and method of processing data in digital broadcasting system
US10091009B2 (en) Digital broadcasting system and method of processing data in the digital broadcasting system
US9608766B2 (en) Digital broadcasting system and method of processing data in digital broadcasting system
CA2697481C (en) Digital broadcasting system and method of processing data in digital broadcasting system
US20090080573A1 (en) Digital broadcasting receiver and method for controlling the same
US8533762B2 (en) Digital broadcasting system and method of processing data in digital broadcasting system
US20100211850A1 (en) Digital broadcasting system and method of processing data in digital broadcasting system
US8223787B2 (en) Digital broadcasting system and method of processing data in digital broadcasting system

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed

Effective date: 20180827