WO2014127826A1 - Downloading to a cache - Google Patents

Downloading to a cache

Info

Publication number
WO2014127826A1
WO2014127826A1 (PCT/EP2013/053506)
Authority
WO
WIPO (PCT)
Prior art keywords
cache
indication
request
content
downloading
Application number
PCT/EP2013/053506
Other languages
French (fr)
Inventor
Hannu Flinck
Heikki-Stefan Almay
Janne Einari TUONONEN
Ove Bjorn STRANDBERG
Original Assignee
Nokia Solutions And Networks Oy
Application filed by Nokia Solutions And Networks Oy filed Critical Nokia Solutions And Networks Oy
Priority to PCT/EP2013/053506 priority Critical patent/WO2014127826A1/en
Publication of WO2014127826A1 publication Critical patent/WO2014127826A1/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/14: Multichannel or multilink protocols (under H04L 69/00, network arrangements, protocols or services independent of the application payload)
    • H04L 67/566: Grouping or aggregating service requests, e.g. for unified processing (under H04L 67/56, provisioning of proxy services)
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching (under H04L 67/56, provisioning of proxy services)

Definitions

  • the illustrated parts of the system 100 in Figure 1 are a radio access network 200 with evolved Nodes B (eNB) 210, 210' and an intermediate router 220, a core network 300 with a gateway apparatus 310, and an external network 400, like the Internet, providing access to an original source of a content 500.
  • a connection between eNB 210, 210' and the intermediate router 220 may be a wireless or a wired connection, and the same applies to a connection between the intermediate router 220 and the gateway 310.
  • the hierarchical group of caches contains two first level caches 21, 21', each being coupled or integrated to an evolved Node B (eNB) 210, 210', and a highest level cache 31 coupled to the gateway apparatus 310.
  • the highest level cache 31 may be a so-called out-of-band root cache that does not receive the transferred content, i.e. does not actually cache the content, but receives the requests to receive the content and any control signals that may assist in caching and/or transferring the content.
  • the gateway apparatus 310 (or the highest level cache 31) comprises a packet downstream scheduler adjuster 32 that is configured to take into account requests for prioritizing content downloads, and hence provide dynamic adjustment of downlink resources towards the user apparatuses.
  • the intermediate router 220 comprises a packet downstream scheduler adjuster 22 configured to take into account requests for prioritizing content downloads, and hence provide dynamic adjustment of downlink resources from the intermediate router to eNBs, i.e. towards the user apparatuses.
  • a packet downstream scheduler adjuster 22, 32 may be an integral part of a traffic scheduler, like a packet scheduler or a downlink packet scheduler, or coupled to such a traffic scheduler.
  • some or all of the lower level caches, or at least some or all of the lowest level (first level) caches, or corresponding apparatuses to which a cache is coupled, comprise a collector unit 21-1, 21-1' configured to provide information on the number of different contents to be downloaded to the cache and the content-specific number of requests received for the same content while it is being downloaded. The collector unit 21-1, 21-1' may be configured to provide the requests/information as part of a cache
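The collector unit's bookkeeping can be sketched as follows. This is an illustrative Python sketch, not from the patent; all names (`CollectorUnit`, `record_request`, and so on) are hypothetical. It tracks how many different contents are currently being downloaded to the cache and, per content, how many distinct requests have arrived while each download is in progress.

```python
class CollectorUnit:
    """Hypothetical bookkeeping for a collector unit (21-1, 21-1'):
    tracks contents being downloaded and distinct requesters per content."""

    def __init__(self):
        # content_id -> set of user-apparatus identifiers (e.g. GUTIs)
        self.in_progress = {}

    def record_request(self, content_id, user_id):
        """Record a request for a content currently being downloaded.
        Returns the number of distinct requesters ("n") so far."""
        users = self.in_progress.setdefault(content_id, set())
        users.add(user_id)
        return len(users)

    def downloads_in_progress(self):
        """Number of different contents being downloaded to the cache."""
        return len(self.in_progress)

    def finished(self, content_id):
        """Forget a content once its download to the cache completes."""
        self.in_progress.pop(content_id, None)
```

Tracking requesters as a set also supports the duplicate-request check of steps 208 and 209, since a repeated request from the same user apparatus does not increase "n".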
  • Although hierarchical caching with hierarchical packet downstream scheduler adjusters is illustrated in Figure 1, it should be appreciated that the invention is implementable in a system having only one cache, one collector unit for the cache, and one traffic scheduler with one packet downstream scheduler adjuster somewhere in a downlink towards the cache, provided that at least the collector and the packet downstream scheduler adjuster can communicate with each other.
  • Figure 2 is a flow chart illustrating an exemplary functionality of eNB comprising a cache and a collector unit.
  • In the examples it is assumed that a requesting user apparatus is served by eNB and is allowed to download the requested content.
  • It is further assumed that a cache miss, i.e. the content not being in the cache, causes downloading of the requested content; cache management schemas and policies are used to determine what content will be downloaded to a cache, and when, in case of a cache miss.
  • A sent request for downloading is interpreted to mean that the requested content is being downloaded to the cache.
  • When a request for a specific content, i.e. a request to download a specific content, is received in step 201 from a user apparatus, it is checked in step 202 whether or not the content is already in the cache. If it is, the content is delivered in step 203 from the cache to the user apparatus.
  • If the specific content is not in the cache (step 202), it is checked in step 204 whether or not the specific content is currently being downloaded to the cache. If the specific content is not being downloaded to the cache, a value "n" indicating the number of content requests for the specific content is set to one in step 205, and a request for downloading the content is sent towards the source in step 206. Then the content is delivered in point 207 to the user apparatus while it is being downloaded to the cache.
  • If the content is being downloaded to the cache (step 204), in this example it is checked in step 208 whether or not the same user apparatus has requested the same content earlier.
  • The user apparatus may be identified based on the radio bearer used, or on an identifier identifying the user apparatus to the radio access network, like a globally unique temporary identifier (GUTI). If the same user apparatus has requested the same content earlier (step 208), the request is ignored in step 209 since the user apparatus already receives, or soon will receive, the content.
  • If not, the value of "n" is incremented by one in step 210, and then a request for a higher priority of the stream delivering the content to the cache is sent in step 211.
  • The request for the higher priority, which may also be called a request to prioritize the downlink stream (i.e. downlink session), may indicate the value of "n", or an increment to the value of "n".
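The Figure 2 flow (steps 201 to 211) can be summarized in the following Python sketch. The function and callback names are illustrative assumptions, not from the patent; `cache` is a mapping of content identifiers to cached content.

```python
def handle_request(content_id, user_id, cache, downloading, requesters,
                   send_download_request, send_prioritize_request):
    """Hypothetical sketch of the eNB-side flow of Figure 2.
    `downloading` is the set of contents currently being downloaded to
    the cache; `requesters` maps content_id -> set of requesting users."""
    if content_id in cache:                        # step 202: cache hit?
        return cache[content_id]                   # step 203: serve from cache
    if content_id not in downloading:              # step 204: download ongoing?
        requesters[content_id] = {user_id}         # step 205: n = 1
        downloading.add(content_id)
        send_download_request(content_id)          # step 206: request download
        return "delivering-while-downloading"      # step 207
    if user_id in requesters[content_id]:          # step 208: same user again?
        return "ignored"                           # step 209: duplicate request
    requesters[content_id].add(user_id)            # step 210: n += 1
    n = len(requesters[content_id])
    send_prioritize_request(content_id, n)         # step 211: favor the stream
    return "delivering-while-downloading"          # step 207
```

The two callbacks stand in for the upstream signalling; in the described system they would travel over the GTP-tunneled bearer towards the gateway.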
  • Requests from eNB to the gateway, and vice versa, are transmitted using a tunneled bearer (also called a GTP tunnel) providing a virtual circuit, and the payload of a request is invisible to the intermediate nodes.
  • The request to be forwarded towards the target address may indicate the request for the higher priority when a "Differentiated Services Field" of the upstream encapsulating IP (Internet Protocol) header, like the outer header of the GTP tunnel, is set to a predefined code point while the request is forwarded. Thereby the packet downstream scheduler adjuster receives, in addition to information indicating how many different contents are requested via one eNB, also an indication of how many user apparatuses are requesting the same content via the same eNB. With the eight least significant bits, 2^8 (256) priority values may be expressed. Then the process proceeds to step 207 to deliver the content also to the user apparatus wherefrom the request was received.
  • A traffic scheduler of the cache is requested in step 211 to favor the downstream session in question, since now the same content is delivered to multiple users via the cache.
  • the request is for increasing the quality of service/bandwidth for the session over which the content is downloaded to the cache.
  • A value in the header may be interpreted to indicate another value; for example, the apparatuses may be configured to interpret value 2 in the header to mean 10 requests for the same content.
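As a hypothetical illustration of carrying the indication in the eight bits of the Differentiated Services/ToS byte, including the scaled interpretation just mentioned (e.g. header value 2 meaning 10 requests), the request count could be encoded and decoded as follows. The scaling factor of 5 and the function names are assumptions for illustration only.

```python
SCALE = 5  # assumed scaling: header value v is read as v * SCALE requests

def encode_priority(n_requests, scale=SCALE):
    """Map a request count onto the 8-bit Differentiated Services / ToS
    byte of the outer (tunnel) IP header. Clamped to 255, since eight
    bits can express at most 2**8 = 256 distinct values."""
    return min((n_requests + scale - 1) // scale, 255)

def decode_priority(header_value, scale=SCALE):
    """Inverse mapping, as applied by the packet downstream scheduler
    adjuster when it inspects the outer header."""
    return header_value * scale

# e.g. with scale 5, ten concurrent requests are signalled as value 2
```

The round-up in `encode_priority` ensures that even a single request maps to a non-zero code point.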
  • Figure 3 is a flow chart illustrating another exemplary functionality of eNB comprising a cache and a collector unit. The assumptions described above with Figure 2 also apply to Figure 3.
  • When a request for a specific content is received in step 301 from a user apparatus, it is checked in step 302 whether or not the content is already in the cache. If it is, the content is delivered in step 303 from the cache to the user apparatus.
  • If the specific content is not in the cache (step 302), it is checked in step 304 whether or not the specific content is currently being downloaded to the cache. If the specific content is not being downloaded to the cache, a request for downloading the content is sent towards the source in step 305. Then the content is delivered in point 306 to the user apparatus while it is being downloaded to the cache. If the content is being downloaded to the cache (step 304), in this example a further (i.e. a new) bearer for the content download is requested in step 307. The request for the further bearer is an implicit request for a higher priority, since it requests additional bearer resources for a content already being downloaded over one or more bearers. If the request for the further bearer is accepted (step 308), the further bearer is set up in step 309, and then the process proceeds to step 306 to deliver the content to user apparatuses.
  • If the request for the further bearer is not accepted (step 308), the process proceeds directly to step 306 to deliver the content to user apparatuses.
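The Figure 3 variant, in which a further bearer acts as an implicit priority request, can be sketched analogously. All names are illustrative; `request_further_bearer` stands in for the bearer set-up signalling and returns whether the request was accepted.

```python
def handle_request_fig3(content_id, cache, downloading,
                        send_download_request, request_further_bearer):
    """Hypothetical sketch of the Figure 3 flow (steps 301-309): instead
    of an explicit priority value, a further (new) bearer is requested
    for a content already being downloaded."""
    if content_id in cache:                        # step 302: cache hit?
        return "served-from-cache"                 # step 303
    if content_id not in downloading:              # step 304: download ongoing?
        downloading.add(content_id)
        send_download_request(content_id)          # step 305
        return "delivering-while-downloading"      # step 306
    if request_further_bearer(content_id):         # steps 307-308
        return "further-bearer-set-up"             # step 309, then step 306
    return "delivering-while-downloading"          # step 306 directly
```

Note that delivery towards the user apparatuses proceeds in either case; acceptance of the further bearer only changes how much downlink capacity the ongoing cache download receives.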
  • Figure 4 is a flow chart illustrating an exemplary functionality of an intermediate router in a hierarchical cache system, the intermediate router comprising a cache, a collector unit, a packet downstream scheduler adjuster and a traffic scheduler. Herein it is also assumed, for the sake of clarity, that a cache miss causes downloading of the requested content and that a sent request for downloading is interpreted to mean that the requested content is being downloaded to the cache.
  • When a request relating to a specific content is received in step 401 from eNB, or from another intermediate router that is in the hierarchy below the intermediate router whose exemplary functionality is illustrated in Figure 4, it is checked in step 402 whether or not the specific content is already in the cache.
  • If the specific content is in the cache, the request relating to the specific content may be interpreted to be a first request, or it may indicate to prioritize the request. Therefore it is checked in step 403 whether or not the content is currently being downloaded from the cache. If it is not, the request is interpreted to be the first request (although corresponding content may have been downloaded in the past), and the traffic scheduler settings are set in step 404 to correspond to bearer settings of a bearer set up between the gateway apparatus and the user apparatus wherefrom the request originates. After that the content is delivered in point 405 downlink towards the user apparatus.
  • If the content is currently being downloaded from the cache (step 403), the request may be a prioritize request from eNB or the intermediate router towards which the content is currently being downloaded, or a first request for the specific content from eNB or an intermediate router. Therefore the downlink packet scheduler adjuster updates the traffic scheduler settings in step 406 to correspond to the changed situation, and then the content is delivered in point 405 over the downlink(s) towards the user apparatuses.
  • If the specific content is not in the cache (step 402), it is checked in step 407 whether or not the specific content is currently being downloaded to the cache. If the specific content is not being downloaded to the cache, a value "n" indicating the number of content requests for the specific content is set to one in step 408, and a request for downloading the content is sent towards the source in step 409. Then the content is delivered in point 410 to eNB or the other intermediate node towards the user apparatus while the content is being downloaded to the cache.
  • If the content is being downloaded to the cache (step 407), in this example the value of "n" is incremented by one in step 411, and then it is checked whether or not the updated value "n" exceeds a threshold th.
  • the threshold is used in the example to ensure that requests to favor the download are not sent too frequently to the packet downstream scheduler adjuster adjusting the traffic scheduler responsible for downlink towards the intermediate router, thereby saving both uplink resources over which the requests to favor are sent and the processing resources of the packet downstream scheduler adjuster.
  • the threshold may be a preset value that is configurable via a management system or via the network entity being on a higher hierarchy level, and it may be preset network-specifically or hierarchy-level specifically to be the same in each (corresponding) entity comprising a collector unit, or it may be preset entity-specifically. Examples of possible threshold values include 3, 5, 10 and 20.
  • If the threshold is not exceeded, the content is delivered in point 410 to eNB(s) and/or the other intermediate node(s) towards the user apparatuses while the content is being downloaded to the cache.
  • If the updated value "n" exceeds the threshold th (step 412), the value "n" is set to one in step 413, a request for a higher priority of the stream delivering the content to the cache is sent in step 414, and the content is delivered in point 410 to eNB(s) and/or the other intermediate node(s) towards the user apparatuses while the content is being downloaded to the cache.
  • The threshold may be maintained served-entity-specifically so that a first request for a content, received via a served entity, like eNB or another intermediate router, while the content is downloaded, is sent as a "favor request", i.e. a request to prioritize the download.
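The threshold logic of steps 411 to 414 can be sketched as follows. The class and method names are illustrative assumptions, and the default threshold is one of the example values mentioned above.

```python
class FavorRequestThrottle:
    """Hypothetical sketch of the Figure 4 threshold logic (steps
    411-414): a 'favor' (prioritize) request is sent upstream only once
    per `th` additional requests, saving uplink resources and the
    processing resources of the packet downstream scheduler adjuster."""

    def __init__(self, th=5):
        # example threshold values from the description: 3, 5, 10, 20
        self.th = th
        self.n = {}  # content_id -> request counter "n"

    def on_request_while_downloading(self, content_id, send_favor_request):
        """Called for each request arriving while the content is being
        downloaded to the cache. Returns True if a favor request was sent."""
        self.n[content_id] = self.n.get(content_id, 0) + 1   # step 411
        if self.n[content_id] > self.th:                     # step 412
            self.n[content_id] = 1                           # step 413
            send_favor_request(content_id)                   # step 414
            return True
        return False
```

With `th=3`, for example, every fourth accumulated request triggers one upstream favor request, after which the counter restarts.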
  • FIG. 5 is a flow chart illustrating exemplary functionality of a network apparatus comprising a packet downstream scheduler adjuster.
  • When a prioritization request, i.e. a request that has a value of "n" marking the differentiated services code point in the outer header, is received in step 501, a new (updated) downstream final setting is calculated in step 502.
  • In the calculation, a static setting, like a static scheduling setting for the connection in question provided by a network management, and free downlink capacity are taken into account in addition to a dynamic setting determined on the basis of received requests for different contents, and a corresponding traffic scheduler is adjusted for downstream traffic correspondingly in step 503.
  • Thereby the download sessions serving multiple users are favored, but at the same time other traffic, like time-sensitive traffic, does not suffer because of cache downloading.
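The calculation of step 502 is left open by the description. As one hypothetical way to combine a static setting, free downlink capacity and the dynamic priority requests into per-session shares, proportional weighting could be used; the formula and all names below are illustrative, not prescribed by the patent.

```python
def downstream_final_setting(static_weight, free_capacity, priority_requests):
    """Hypothetical sketch of step 502: combine a static scheduling
    setting (from network management), free downlink capacity, and
    dynamic per-content priority requests into per-session bandwidth
    shares. `priority_requests` maps content_id -> request count "n"."""
    total = sum(priority_requests.values()) or 1  # avoid division by zero
    return {
        content_id: static_weight + free_capacity * (n / total)
        for content_id, n in priority_requests.items()
    }
```

Here a session serving more requesters receives a proportionally larger slice of the free capacity on top of its static share, matching the stated goal that multi-user downloads are favored without starving other traffic.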
  • FIG. 6 is a flow chart illustrating another exemplary functionality of a network apparatus comprising a packet downlink scheduler adjuster, the example being a counterpart to the example illustrated with Figure 3.
  • When the request for a further bearer is received in step 601, it is determined, based on the current downlink load and taking into account that other traffic should not suffer from cache downloading (or that there is at least a balance between different downloads and downlink traffic), whether or not the request for the further bearer is acceptable.
  • the mechanism used for the determining bears no significance to the invention, and any known or future method may be used. If the request for the further bearer is determined to be acceptable, and hence accepted (step 602), the further bearer is set up in step 603, and the downloading to the cache is performed in step 604 using the one or more bearers that already existed and the new further bearer. In other words, the packet downstream scheduler adjuster adjusts the traffic scheduler to schedule packets to a new set of bearers.
  • If the request is not accepted (step 602), the downloading continues in step 605 using the one or more already existing bearers; in other words, the packet downlink scheduler adjuster does not adjust the traffic scheduler.
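The acceptance check of step 602 is explicitly left to any known or future method. A minimal sketch, assuming a fixed headroom reserved for other (e.g. time-sensitive) traffic, could look like the following; the 10% headroom and all names are assumptions for illustration.

```python
def accept_further_bearer(current_load, bearer_demand, capacity, headroom=0.1):
    """Hypothetical admission check for Figure 6 (step 602): accept a
    further bearer only if, after adding its demand, a fraction of the
    downlink capacity (`headroom`) is still left for other traffic."""
    return current_load + bearer_demand <= capacity * (1.0 - headroom)
```

Any load-balancing or admission-control scheme could be substituted here without affecting the rest of the flow.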
  • FIG. 7 is a simplified block diagram illustrating some units for an apparatus 700 configured to be a "cache apparatus", i.e. an apparatus providing at least a collector unit for a cache 701.
  • The apparatus comprises a memory 703 for the cache 701, one or more interfaces (IF) 704 for receiving and transmitting information, including content to be cached, a processor 702 configured to implement at least the collector unit functionality described herein with an exemplary functionality, and memory 703 usable for storing a program code required for the collector unit.
  • the apparatus may be connected to the cache.
  • FIG. 8 is a simplified block diagram illustrating some units for an apparatus 800 configured to be a "downstream controlling apparatus", i.e. an apparatus providing at least the packet downstream scheduler adjuster unit.
  • the apparatus comprises one or more interfaces (IF) 804 for receiving and transmitting information, a processor 802 configured to implement at least the packet downstream scheduler adjuster functionality described herein with an exemplary functionality and memory 803 usable for storing a program code required at least for the packet downstream scheduler adjuster unit.
  • Each apparatus may be a computing device, i.e. any apparatus or device or equipment or network node or network entity configured to perform one or more of the corresponding apparatus functionalities described with an embodiment/example/implementation, and it may be configured to perform functionalities from different embodiments/examples/implementations.
  • the unit(s) described with an apparatus may be separate units, even located in another physical apparatus, the physical apparatuses forming one logical apparatus providing the functionality, or integrated to another unit in the same apparatus.
  • a unit in an apparatus, or part of the unit's functionality may be located in another physical apparatus.
  • the apparatus may be in one physical apparatus or distributed to two or more physical apparatuses acting as one logical apparatus.
  • the units and entities illustrated in Figure 1
  • the units and entities may be software and/or software-hardware and/or firmware components (recorded indelibly on a medium such as read-only-memory or embodied in hard-wired computer circuitry).
  • The techniques described herein may be implemented by various means so that an apparatus implementing one or more functions of a corresponding apparatus/entity described with an embodiment/example/implementation comprises not only prior art means, but also means for implementing the one or more functions of a corresponding apparatus described with an embodiment, and it may comprise separate means for each separate function, or means may be configured to perform two or more functions.
  • these techniques may be implemented in hardware (one or more apparatuses), firmware (one or more apparatuses), software (one or more modules), or combinations thereof.
  • firmware or software implementation can be through modules (e.g., procedures, functions, and so on) that perform the functions described herein.
  • Software codes may be stored in any suitable, processor/computer-readable data storage medium(s) or memory unit(s) or article(s) of manufacture and executed by one or more processors/computers.
  • An apparatus configured to provide the cache apparatus, and/or an apparatus configured to provide the downstream controlling apparatus, and/or an apparatus configured to provide both the cache apparatus and the downstream controlling apparatus, or an apparatus configured to provide one or more corresponding functionalities may generally include a processor, controller, control unit, micro-controller, or the like connected to a memory and to various interfaces of the apparatus.
  • the processor is a central processing unit, but the processor may be an additional operation processor.
  • Each or some or one of the units/entities described herein may be configured as a computer or a processor, or a microprocessor, such as a single-chip computer element, or as a chipset, including at least a memory for providing storage area used for arithmetic operation and an operation processor for executing the arithmetic operation.
  • Each or some or one of the units/entities described above may comprise one or more computer processors, application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), and/or other hardware components that have been programmed in such a way to carry out one or more functions of one or more embodiments/examples/implementations.
  • each or some or one of the units/entities described above may be an element that comprises one or more arithmetic logic units, a number of special registers and control circuits.
  • an apparatus implementing functionality or some functionality according to an embodiment/example/implementation of an apparatus configured to provide the cache apparatus, and/or an apparatus configured to provide the downstream controlling apparatus, and/or an apparatus configured to provide both the cache apparatus and the downstream controlling apparatus, or an apparatus configured to provide one or more corresponding functionalities may generally include volatile and/or non-volatile memory, for example EEPROM, ROM, PROM, RAM, DRAM, SRAM, double floating-gate field effect transistor, firmware, programmable logic, etc. and typically store content, data, or the like.
  • The memory or memories, especially when a cache is provided, may be of any type (different from each other), have any possible storage structure and, if required, be managed by any database/cache management system.
  • The memory may also store computer program code such as software applications (for example, for one or more of the units/entities) or operating systems, information, data, content, or the like for the processor to perform steps associated with operation of the apparatus in accordance with embodiments.
  • the memory may be, for example, random access memory, a hard drive, or other fixed data memory or storage device implemented within the processor/apparatus or external to the processor/apparatus in which case it can be communicatively coupled to the processor/network node via various means as is known in the art.
  • Examples of an external memory include a removable memory detachably connected to the apparatus, a distributed database and a cloud server.
  • an apparatus configured to provide the cache apparatus, and/or an apparatus configured to provide the downstream controlling apparatus, and/or an apparatus configured to provide both the cache apparatus and the downstream controlling apparatus, or an apparatus configured to provide one or more corresponding functionalities may generally comprise different interface units, such as one or more receiving units for receiving user data, control information, requests and responses, for example, and one or more sending units for sending user data, control information, responses and requests, for example.
  • the receiving unit and the transmitting unit each provides an interface in an apparatus, the interface including a transmitter and/or a receiver or any other means for receiving and/or transmitting information, and performing necessary functions so that content and other user data, control information, etc. can be received and/or transmitted.
  • the receiving and sending units may comprise a set of antennas, the number of which is not limited to any particular number.
  • an apparatus implementing functionality or some functionality according to an embodiment/example/implementation of an apparatus configured to provide the cache apparatus, and/or an apparatus configured to provide the downstream controlling apparatus, and/or an apparatus configured to provide both the cache apparatus and the downstream controlling apparatus, or an apparatus configured to provide one or more corresponding functionalities may comprise other units, like a user interface unit for maintenance.
  • steps 210, 211 and 207 may be performed simultaneously or in another order.
  • Other functions can also be executed between the steps or within the steps.
  • the request may be forwarded towards its destination although content is delivered via the cache.
  • Another example is to perform the check relating to whether or not a user apparatus is pumping up its requests, illustrated with steps 208 and 209, before sending a request for a further bearer (step 308), and/or before step 411.
  • Yet another example is to add the steps relating to the threshold to an implementation based on Figure 2 or Figure 3.
  • A further example is to check, prior to adjusting the traffic scheduler in step 503, whether or not the new final setting is different enough from the final setting currently in use, and if the difference is big enough, to perform the adjustment of the traffic scheduler.
  • Still another example is to check, after a prioritizing request is detected in step 501, what the current load situation is and whether there are any possibilities to prioritize the downloading, the possibilities depending on the type and time-criticality of the other traffic using the same downlink resources.
  • the messages are only exemplary and may even comprise several separate messages for transmitting the same information.
  • the messages may also contain other information.
  • A collector unit may be configured to collect, cache-specifically, information on requests relating to two or more caches, and the collector unit may be located in the same network entity as the packet downlink scheduler adjuster, as long as the requests pass the collector unit.

Abstract

To give priority to downloads serving multiple users served by a network entity, when an indication to prioritize a downloading of a specific content to a cache is received, downlink resources for the downloading are adjusted. Sending the indication takes place upon detecting that a specific content requested to be downloaded is currently being downloaded to the cache. In an embodiment, an evolved NodeB sends the indication to an intermediate router or to a gateway.

Description

DESCRIPTION
TITLE
DOWNLOADING TO A CACHE
FIELD
The present invention relates to cache downloads in a communications network, and especially scheduling one or more downloads to a cache.
BACKGROUND ART
The following description of background art may include insights, discoveries, understandings or disclosures, or associations together with disclosures not known to the relevant art prior to the present invention but provided by the invention. Some such contributions of the invention may be specifically pointed out below, whereas other such contributions of the invention will be apparent from their context.

In recent years, the phenomenal growth of mobile Internet services and the proliferation of smart phones and tablets have increased the demand for mobile broadband services. Some of the services, like downloading a video, especially a high definition (HD) video, require quite a lot of downstream bandwidth. Caching, especially transparent caching, is used to avoid downloading multiple copies of the same content over a link by storing the content so that it can be delivered from a cache instead of always retrieving the content from an original, remote source. In transparent caching, content is stored and delivered from the edge of an operator's network to subscribers requesting the content behind the edge. The content may be delivered even while it is downloaded to the cache. While the content is downloaded to the cache, it competes with other traffic sharing the same downstream bandwidth towards the cache.
SUMMARY
A general aspect of the invention relates to collecting and/or receiving information that can be used in adjusting scheduling of downstream downloads. Various aspects of the invention comprise methods, an apparatus, a system and a computer program product as defined in the independent claims. Further embodiments of the invention are disclosed in the dependent claims.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following, embodiments will be described in greater detail with reference to the accompanying drawings, in which

Figure 1 shows a simplified architecture of an exemplary system having schematic block diagrams of exemplary apparatuses;

Figures 2 to 6 are flow charts illustrating different exemplary functionalities; and

Figures 7 and 8 are block diagrams of exemplary apparatuses.

DETAILED DESCRIPTION OF SOME EMBODIMENTS
The following embodiments are exemplary. Although the specification may refer to "an", "one", or "some" embodiment(s) in several locations, this does not necessarily mean that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
The present invention is applicable to any system, radio access network and apparatus that support caching. Examples of such systems include the LTE/SAE (Long Term Evolution/System Architecture Evolution) radio system, Worldwide Interoperability for Microwave Access (WiMAX), Global System for Mobile communications (GSM, 2G), GSM EDGE Radio Access Network (GERAN), General Packet Radio Service (GPRS), Universal Mobile Telecommunication System (UMTS, 3G) based on basic wideband code division multiple access (W-CDMA), high-speed packet access (HSPA), LTE-A (LTE-Advanced), and/or beyond LTE-A, with corresponding core networks (3G, 4G, etc.).
The specifications of different systems and networks, especially in wireless communication, develop rapidly. Such development may require extra changes to an embodiment. Therefore, all words and expressions should be interpreted broadly; they are intended to illustrate, not to restrict, the embodiment.
Below different exemplary embodiments are explained using LTE-A and EPC (Evolved Packet Core) to provide the system for mobile users, without limiting the examples and the invention to such a solution.
A general architecture of an exemplary system 100 is illustrated in Figure 1. Figure 1 is a simplified system architecture showing only some elements and functional entities, all being logical units whose implementation may differ from what is shown. The connections shown in Figure 1 are logical connections; the actual physical connections may be different. It is apparent to a person skilled in the art that the systems also comprise other functions and structures that are not illustrated.
The illustrated parts of the system 100 in Figure 1 are a radio access network 200 with evolved Nodes B (eNB) 210, 210' and an intermediate router 220, a core network 300 with a gateway apparatus 310, and an external network 400, like the Internet, providing access to an original source of a content 500. A connection between an eNB 210, 210' and the intermediate router 220 may be a wireless or a wired connection, and the same applies to a connection between the intermediate router 220 and the gateway 310. In the illustrated example, the hierarchical group of caches contains two first level caches 21, 21', each being coupled or integrated to an evolved Node B (eNB) 210, 210', and a highest level cache 31 coupled to the gateway apparatus 310. However, it should be appreciated that there may be one or more intermediate level caches on one or more intermediate levels, like the intermediate router 220, and each first level cache and/or intermediate level cache may be located at any location of the radio access network hierarchy.
The highest level cache 31 may be a so-called out-of-band root cache that does not receive the transferred content, i.e. does not actually cache the content but receives the requests for the content and any control signals that may assist in the caching and/or transferring of the content. For that purpose the gateway apparatus 310 (or the highest level cache 31) comprises a packet downstream scheduler adjuster 32 that is configured to take into account requests for prioritizing content downloads, and hence provide dynamic adjusting of downlink resources towards the user apparatuses. Further, in the illustrated example also the intermediate router 220 comprises a packet downstream scheduler adjuster 22 configured to take into account requests for prioritizing content downloads, and hence provide dynamic adjusting of downlink resources from the intermediate router to eNBs, i.e. towards the user apparatuses. A packet downstream scheduler adjuster 22, 32 may be an integral part of a traffic scheduler, like a packet scheduler or a downlink packet scheduler, or coupled to such a traffic scheduler. To provide the packet downstream scheduler adjuster 22, 32 with requests or corresponding information for prioritizing, some or all of the lower level caches, or at least some or all of the lowest level (first level) caches, or corresponding apparatuses to which a cache is coupled, comprise a collector unit 21-1, 21-1' configured to provide information on the amount of different contents to be downloaded to the cache and the content-specific amount of requests received for the same content while it is downloaded. The collector unit 21-1, 21-1' may be configured to provide the requests/information as part of a cache management process.
Although hierarchical caching with hierarchical packet downstream scheduler adjusters is illustrated in Figure 1, it should be appreciated that the invention is implementable in a system having only one cache, one collector unit for the cache and one traffic scheduler with one packet downstream scheduler adjuster somewhere in a downlink towards the cache, as long as at least the collector unit and the packet downstream scheduler adjuster can communicate with each other.
Figure 2 is a flow chart illustrating an exemplary functionality of an eNB comprising a cache and a collector unit. In the example it is assumed, for the sake of clarity, that a requesting user apparatus is served by the eNB and is allowed to download the requested content.
Further, since the illustrated functionality relates to downloading to a cache, it is assumed, for the sake of clarity, that a cache miss (i.e. the content not being in the cache) causes downloading of the requested content. However, it should be appreciated that in real-life implementations cache management schemas and policies are used to determine what content will be downloaded to a cache, and when, in case of a cache miss. Further, below it is assumed, for the sake of clarity, that a sent request for downloading is interpreted to mean that the requested content is being downloaded to the cache.
When a request for a specific content, i.e. a request to download a specific content, is received in step 201 from a user apparatus, it is checked in step 202, whether or not the content is already in the cache. If it is, the content is delivered in step 203 from the cache to the user apparatus.
If the specific content is not in the cache (step 202), it is checked in step 204, whether or not the specific content is currently being downloaded to the cache. If the specific content is not being downloaded to the cache, a value "n" indicating the content requests for the specific content is set to be one in step 205 and a request for downloading the content is sent towards the source in step 206. Then the content is delivered in point 207 to the user apparatus while it is being downloaded to the cache.
If the content is being downloaded to the cache (step 204), in this example it is checked in step 208 whether or not the same user apparatus has requested the same content earlier. The user apparatus may be identified based on the radio bearer used, or on an identifier identifying the user apparatus to the radio access network, like a globally unique temporary identifier (GUTI). If the same user apparatus has requested the same content earlier (step 208), the request is ignored in step 209 since the user apparatus already receives, or soon will receive, the content. Hence, by performing the check, a user apparatus, or more precisely a user of the user apparatus, is prevented from pumping up his/her request by sending multiple content requests. However, it should be appreciated that the check in step 208, and hence step 209, may be omitted in another implementation.
If the user apparatus has not requested the content earlier (step 208), the value of "n" is incremented by one in step 210, and then a request for a higher priority of the stream delivering the content to the cache is sent in step 211 to the cache. The request for the higher priority, which may also be called a request to prioritize the downlink stream (i.e. downlink session), may indicate the value of "n", or an incrementation to the value of "n". Typically requests from the eNB to the gateway, and vice versa, are transmitted using a tunneled bearer (also called a GTP tunnel) providing a virtual circuit, and a payload of a request is invisible to the intermediate nodes. The request to be forwarded towards the target address may indicate the request for the higher priority when a "Differentiated Services Field" of the upstream encapsulating IP (Internet Protocol) header, like the outer header of the GTP tunnel, is set to a predefined code point while the request is encapsulated. For example, the eight least significant bits of a differentiated services code point E0 may be used for forwarding the value of "n", an indication that the value of "n" has increased, or corresponding information. Hence, the packet downstream scheduler adjuster receives, in addition to information indicating how many different contents are requested via one eNB, also an indication of how many user apparatuses are requesting the same content via the same eNB. With the eight least significant bits, 2⁸, i.e. 256, priorities may be expressed. Then the process proceeds to step 207 to deliver the content also to the user apparatus wherefrom the request was received.
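The header marking described above can be sketched as follows; this is a simplified illustration in Python, in which the byte offset of the DS field and the clamping policy for large counts are assumptions for illustration, not mandated by the embodiment:

```python
# Sketch: carrying the value of "n" in the Differentiated Services (DS)
# field of the outer, encapsulating IPv4 header of the GTP tunnel.

DS_FIELD_OFFSET = 1  # the DS/ToS byte is the second byte of an IPv4 header


def mark_priority(outer_header: bytearray, n: int) -> None:
    """Encode the number of requests "n" into the outer header."""
    # With eight bits, 2**8 = 256 distinct values can be expressed;
    # larger counts are clamped to the maximum.
    outer_header[DS_FIELD_OFFSET] = min(n, 255)


def read_priority(outer_header: bytes) -> int:
    """Decode "n" at the packet downstream scheduler adjuster."""
    return outer_header[DS_FIELD_OFFSET]
```

Because the value travels in the outer header, the intermediate nodes can read it without inspecting the tunneled payload.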
In other words, a traffic scheduler of the cache is requested in step 211 to favor the downstream session in question, since now the same content is delivered to multiple users via the cache. Actually the request is for increasing the quality of service/bandwidth for the session over which the content is downloaded to the cache.
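Under the assumptions stated above, the decision logic of Figure 2 can be sketched as follows; the class and method names and the returned action labels are invented for illustration:

```python
class CollectorUnit:
    """Sketch of the eNB-side flow of Figure 2 (steps 201-211)."""

    def __init__(self):
        self.cache = set()        # contents already fully in the cache
        self.downloading = {}     # content -> set of users that requested it

    def on_request(self, content, user):
        if content in self.cache:                    # steps 202-203
            return "deliver-from-cache"
        if content not in self.downloading:          # steps 204-206: n = 1
            self.downloading[content] = {user}
            return "request-download"                # then deliver (point 207)
        if user in self.downloading[content]:        # steps 208-209: same user
            return "ignore"                          # cannot pump up requests
        self.downloading[content].add(user)          # step 210: n = n + 1
        n = len(self.downloading[content])
        return ("prioritize", n)                     # step 211: favor the stream
```

For instance, two distinct user apparatuses requesting the same content while it is being downloaded would yield `("prioritize", 2)` for the second request, whereas a repeated request from the first apparatus would be ignored.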
In a further embodiment, a value in the header is interpreted to indicate another value; for example, the apparatuses may be configured to interpret value 2 in the header to mean 10 requests for the same content.

Figure 3 is a flow chart illustrating another exemplary functionality of an eNB comprising a cache and a collector unit. The same assumptions described above with Figure 2 apply also with Figure 3.
Referring to Figure 3, when a request for a specific content is received in step 301 from a user apparatus, it is checked in step 302, whether or not the content is already in the cache. If it is, the content is delivered in step 303 from the cache to the user apparatus.
If the specific content is not in the cache (step 302), it is checked in step 304 whether or not the specific content is currently being downloaded to the cache. If the specific content is not being downloaded to the cache, a request for downloading the content is sent towards the source in step 305. Then the content is delivered in point 306 to the user apparatus while it is being downloaded to the cache. If the content is being downloaded to the cache (step 304), in this example a further (i.e. a new) bearer for the content download is requested in step 307. The request for the further bearer is an implicit request for a higher priority, since it requests additional bearer resources for a content already being downloaded over one or more bearers. If the request for the further bearer is accepted (step 308), the further bearer is set up in step 309, and then the process proceeds to step 306 to deliver the content to the user apparatuses. If the request for the further bearer is not accepted (step 308), the process proceeds directly to step 306 to deliver the content to the user apparatuses.
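A corresponding sketch of the Figure 3 variant, in which a request for a further bearer acts as the implicit prioritization request, might look as follows; the function and callback names are assumptions for illustration:

```python
def on_request_figure3(content, cache, downloading, request_further_bearer):
    """Sketch of the Figure 3 flow (steps 301-309).

    `request_further_bearer` is an assumed callback that returns True
    when the network accepts the further bearer (step 308).
    """
    if content in cache:                        # steps 302-303
        return "deliver-from-cache"
    if content not in downloading:              # steps 304-305
        downloading.add(content)
        return "request-download"               # then deliver (point 306)
    # Steps 307-309: an extra bearer is an implicit priority request;
    # the content is delivered either way (point 306).
    if request_further_bearer(content):
        return "deliver-over-further-bearer"
    return "deliver-over-existing-bearers"
```

Note that delivery to the user apparatus proceeds regardless of whether the further bearer is granted; only the downlink capacity devoted to the download changes.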
Figure 4 is a flow chart illustrating an exemplary functionality of an intermediate router in a hierarchical cache system, the intermediate router comprising a cache, a collector unit, a packet downstream scheduler adjuster and a traffic scheduler. Also herein it is assumed, for the sake of clarity, that a cache miss causes downloading of the requested content and that a sent request for downloading is interpreted to mean that the requested content is being downloaded to the cache. When a request relating to a specific content is received in step 401 from an eNB or from another intermediate router that is in the hierarchy below the intermediate router whose exemplary functionality is illustrated in Figure 4, it is checked in step 402 whether or not the specific content is already in the cache. If it is in the cache, the request relating to the specific content may be interpreted to be a first request, or it may indicate to prioritize the request. Therefore it is checked in step 403 whether or not the content is currently being downloaded from the cache. If it is not, the request is interpreted to be the first request (although corresponding content may have been downloaded in the past), and the traffic scheduler settings are set in step 404 to correspond to bearer settings of a bearer set up between the gateway apparatus and the user apparatus wherefrom the request originates. After that the content is delivered in point 405 downlink towards the user apparatus.
If the content is currently being downloaded from the cache (step 403), the request may be a prioritize request from eNB or the intermediate router towards which the content is currently being downloaded, or a first request for the specific content from eNB or an intermediate router. Therefore the downlink packet scheduler adjuster updates the traffic scheduler settings in step 406 to correspond to the changed situation, and then the content is delivered in point 405 downlink(s) towards the user apparatuses.
If the specific content is not in the cache (step 402), it is checked in step 407, whether or not the specific content is currently being downloaded to the cache. If the specific content is not being downloaded to the cache, a value "n" indicating the content requests for the specific content is set to be one in step 408 and a request for downloading the content is sent towards the source in step 409. Then the content is delivered in point 410 to eNB or the other intermediate node towards the user apparatus while the content is being downloaded to the cache.
If the content is being downloaded to the cache (step 407), in this example the value of "n" is incremented by one in step 411, and then it is checked in step 412 whether or not the updated value "n" exceeds a threshold th. The threshold is used in the example to ensure that requests to favor the download are not sent too frequently to the packet downstream scheduler adjuster adjusting the traffic scheduler responsible for the downlink towards the intermediate router, thereby saving both the uplink resources over which the requests to favor are sent and the processing resources of the packet downstream scheduler adjuster. The threshold may be a preset value that is configurable via a management system or via the network entity on a higher hierarchy level, and it may be preset network-specifically or hierarchy-level-specifically to be the same in each (corresponding) entity comprising a collector unit, or it may be preset entity-specifically. Examples of possible threshold values include 3, 5, 10 and 20.
If the updated value "n" does not exceed the threshold th (step 412), the content is delivered in point 410 to eNB(s) and/or the other intermediate node(s) towards the user apparatuses while the content is being downloaded to the cache.
If the updated value "n" exceeds the threshold th (step 412), the value "n" is set in step 413 to be one, a request for a higher priority of the stream delivering the content to the cache is sent in step 414, and the content is delivered in point 410 to the eNB(s) and/or the other intermediate node(s) towards the user apparatuses while the content is being downloaded to the cache.
In another embodiment, the threshold may be maintained served-entity-specifically, so that a first request for a content, received via a served entity, like an eNB or another intermediate router, while the content is downloaded, is sent as a "favor request", i.e. a request to prioritize the download.
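The counter and threshold handling of steps 407 to 414 can be sketched as follows; the class and method names are invented for illustration, and the default threshold is picked from the example values given above:

```python
class IntermediateCollector:
    """Sketch of the counter/threshold logic of Figure 4 (steps 407-414)."""

    def __init__(self, threshold=5):   # example values: 3, 5, 10 or 20
        self.threshold = threshold
        self.n = {}                    # content -> requests since last favor

    def on_cache_miss_request(self, content):
        if content not in self.n:              # steps 407-409: first request
            self.n[content] = 1
            return "request-download"
        self.n[content] += 1                   # step 411
        if self.n[content] > self.threshold:   # step 412
            self.n[content] = 1                # step 413: reset the counter
            return "send-favor-request"        # step 414
        return "deliver-only"                  # point 410
```

Resetting the counter after each favor request means a new favor request is sent only after a further batch of requests has accumulated, which keeps the signaling load on the packet downstream scheduler adjuster bounded.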
Figure 5 is a flow chart illustrating an exemplary functionality of a network apparatus comprising a packet downstream scheduler adjuster. In response to detecting in step 501 a prioritization request, i.e. a request that has a value of "n" marking the differentiated services code point in the outer header, a new (updated) downstream final setting is calculated in step 502. In the illustrated example, a static setting, like a static scheduling setting for the connection in question provided by a network management, and the free downlink capacity are taken into account in addition to a dynamic setting determined on the basis of received requests for different contents and corresponding "n" values.
For example, the following formula may be used:
final setting = static setting + (free link capacity)*(dynamic setting)
After the new downstream final setting is calculated, a corresponding traffic scheduler is adjusted for downstream traffic correspondingly in step 503. Hence, the download sessions serving multiple users are favored, but at the same time other traffic, like time-sensitive traffic, does not suffer because of cache downloading.
It should be appreciated that also a change in the free link capacity triggers calculating a new downstream final setting. Further, the exact mechanism used to take into account, in the dynamic settings, the amount of requests for the same content via the same cache bears no significance to the invention, and any known or future method, including different proprietary methods, may be used.
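The calculation of step 502 can be written out directly from the formula above; the units, value ranges and the normalisation of the dynamic setting are illustrative assumptions:

```python
def new_final_setting(static_setting: float,
                      free_link_capacity: float,
                      dynamic_setting: float) -> float:
    """Compute the new downstream final setting (Figure 5, step 502).

    `dynamic_setting` is assumed here to be a weight derived from the
    received requests and their "n" values; `free_link_capacity` is the
    currently unused downlink capacity.  Since the result depends on the
    free capacity, a change in the free link capacity also triggers a
    recalculation.
    """
    return static_setting + free_link_capacity * dynamic_setting
```

For example, with a static setting of 10 units, 4 units of free link capacity and a dynamic weight of 0.5, the new final setting becomes 12 units.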
As is evident from the above, implementations based on indicating by means of having "n" at a predetermined point in an outer header transmit the information that there are multiple requests for the same content downloading via the same cache without any extra signaling.

Figure 6 is a flow chart illustrating another exemplary functionality of a network apparatus comprising a packet downlink scheduler adjuster, the example being a counterpart to the example illustrated with Figure 3.
When the request for a further bearer is received in step 601, it is determined whether or not the request for the further bearer is acceptable, based on the current downlink load and taking into account that other traffic should not suffer from a cache downloading, or that at least there is a balance between the different downloads and the downlink traffic. The mechanism used for the determining bears no significance to the invention, and any known or future method may be used. If the request for the further bearer is determined to be acceptable, and hence accepted (step 602), the further bearer is set up in step 603, and the downloading to the cache is performed in step 604 using the one or more bearers that already existed and the new further bearer. In other words, the packet downstream scheduler adjuster adjusts the traffic scheduler to schedule packets to a new set of bearers.
If the request for the further bearer is not accepted (step 602), the downloading continues in step 605 using the one or more already existing bearers. In other words, the packet downlink scheduler adjuster does not adjust the traffic scheduler. An advantage of the implementation utilizing a request for a further bearer as a request to prioritize (a prioritization request) the download session in question is that there is no need to agree on what header bits indicate the request to prioritize, and no need to detect such bits amongst traffic intended only to pass through the entity.
Figure 7 is a simplified block diagram illustrating some units for an apparatus 700 configured to be a "cache apparatus", i.e. an apparatus providing at least a collector unit for a cache 701. In the illustrated example, the apparatus comprises a memory 703 for the cache 701, one or more interfaces (IF) 704 for receiving and transmitting information, including content to be cached, and a processor 702 configured to implement at least the collector unit functionality described herein with an exemplary functionality, the memory 703 also being usable for storing a program code required for the collector unit. However, instead of comprising the cache, the apparatus may be connected to the cache.
Figure 8 is a simplified block diagram illustrating some units for an apparatus 800 configured to be a "downstream controlling apparatus", i.e. an apparatus providing at least the packet downstream scheduler adjuster unit. In the illustrated example the apparatus comprises one or more interfaces (IF) 804 for receiving and transmitting information, a processor 802 configured to implement at least the packet downstream scheduler adjuster functionality described herein with an exemplary functionality and memory 803 usable for storing a program code required at least for the packet downstream scheduler adjuster unit. In other words, an apparatus configured to provide the cache apparatus, and/or an apparatus configured to provide the downstream controlling apparatus, and/or an apparatus configured to provide both the cache apparatus and the downstream controlling apparatus, or an apparatus configured to provide one or more corresponding
functionalities, is a computing device that may be any apparatus or device or equipment or network node or network entity configured to perform one or more of corresponding apparatus functionalities described with an embodiment/example/implementation, and it may be configured to perform functionalities from different
embodiments/examples/implementations. The unit(s) described with an apparatus may be separate units, even located in another physical apparatus, the physical apparatuses forming one logical apparatus providing the functionality, or integrated to another unit in the same apparatus. In other embodiments, a unit in an apparatus, or part of the unit's functionality, may be located in another physical apparatus. It should be appreciated that the apparatus may be in one physical apparatus or distributed to two or more physical apparatuses acting as one logical apparatus. More precisely, the units and entities (illustrated in Figure 1) may be software and/or software-hardware and/or firmware components (recorded indelibly on a medium such as read-only memory or embodied in hard-wired computer circuitry). The techniques described herein may be implemented by various means so that an apparatus implementing one or more functions of a corresponding apparatus/entity described with an embodiment/example/implementation comprises not only prior art means, but also means for implementing the one or more functions of a corresponding apparatus described with an embodiment, and it may comprise separate means for each separate function, or means may be configured to perform two or more functions. For example, these techniques may be implemented in hardware (one or more apparatuses), firmware (one or more apparatuses), software (one or more modules), or combinations thereof. For firmware or software, implementation can be through modules (e.g., procedures, functions, and so on) that perform the functions described herein. Software codes may be stored in any suitable, processor/computer-readable data storage medium(s) or memory unit(s) or article(s) of manufacture and executed by one or more processors/computers. An apparatus configured to provide the cache apparatus, and/or an apparatus configured to provide the downstream controlling apparatus, and/or an apparatus configured to provide both the cache apparatus and the downstream controlling apparatus, or an apparatus configured to provide one or more corresponding functionalities, may generally include a processor, controller, control unit, micro-controller, or the like connected to a memory and to various interfaces of the apparatus. Generally the processor is a central processing unit, but the processor may be an additional operation processor. Each or some or one of the units/entities described herein may be configured as a computer or a processor, or a microprocessor, such as a single-chip computer element, or as a chipset, including at least a memory for providing a storage area used for an arithmetic operation and an operation processor for executing the arithmetic operation.
Each or some or one of the units/entities described above may comprise one or more computer processors, application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), and/or other hardware components that have been programmed in such a way as to carry out one or more functions of one or more embodiments. In other words, each or some or one of the units/entities described above may be an element that comprises one or more arithmetic logic units, a number of special registers and control circuits.
Further, an apparatus implementing functionality or some functionality according to an embodiment/example/implementation of an apparatus configured to provide the cache apparatus, and/or an apparatus configured to provide the downstream controlling apparatus, and/or an apparatus configured to provide both the cache apparatus and the downstream controlling apparatus, or an apparatus configured to provide one or more corresponding functionalities, may generally include volatile and/or non-volatile memory, for example EEPROM, ROM, PROM, RAM, DRAM, SRAM, double floating-gate field effect transistor, firmware, programmable logic, etc., and typically stores content, data, or the like. The memory or memories, especially when a cache is provided, may be of any type (different from each other), have any possible storage structure and, if required, be managed by any database/cache management system. The memory may also store computer program code such as software applications (for example, for one or more of the units/entities) or operating systems, information, data, content, or the like for the processor to perform steps associated with operation of the apparatus in accordance with
embodiments. The memory, or part of it, may be, for example, random access memory, a hard drive, or other fixed data memory or storage device implemented within the processor/apparatus or external to the processor/apparatus in which case it can be communicatively coupled to the processor/network node via various means as is known in the art. Examples of an external memory include a removable memory detachably connected to the apparatus, a distributed database and a cloud server.
An apparatus implementing functionality or some functionality according to an
embodiment/example/implementation of an apparatus configured to provide the cache apparatus, and/or an apparatus configured to provide the downstream controlling apparatus, and/or an apparatus configured to provide both the cache apparatus and the downstream controlling apparatus, or an apparatus configured to provide one or more corresponding functionalities, may generally comprise different interface units, such as one or more receiving units for receiving user data, control information, requests and responses, for example, and one or more sending units for sending user data, control information, responses and requests, for example. The receiving unit and the transmitting unit each provides an interface in an apparatus, the interface including a transmitter and/or a receiver or any other means for receiving and/or transmitting information, and performing necessary functions so that content and other user data, control information, etc. can be received and/or transmitted. The receiving and sending units may comprise a set of antennas, the number of which is not limited to any particular number. Further, an apparatus implementing functionality or some functionality according to an embodiment/example/implementation of an apparatus configured to provide the cache apparatus, and/or an apparatus configured to provide the downstream controlling apparatus, and/or an apparatus configured to provide both the cache apparatus and the downstream controlling apparatus, or an apparatus configured to provide one or more corresponding functionalities, may comprise other units, like a user interface unit for maintenance.
The steps and related functions described above in Figures 2 to 6 are in no absolute chronological order, and some of the steps may be performed simultaneously or in an order differing from the given one. For example, steps 210, 211 and 207 may be performed simultaneously or in another order. Other functions can also be executed between the steps or within the steps. For example, the request may be forwarded towards its destination although the content is delivered via the cache. Another example is to perform the check relating to whether or not a user apparatus is pumping up its requests, illustrated with steps 208 and 209, before sending a request for a further bearer (step 308), and/or before step 411. Yet another example is to add the steps relating to the threshold to an implementation based on Figure 2 or Figure 3. A further example is to check, prior to adjusting the traffic scheduler in step 503, whether or not the new final setting is different enough from the final setting currently in use, and if the difference is big enough, to perform the adjustment of the traffic scheduler. Still another example is to check, after a prioritizing request is detected in step 501, what the current load situation is and whether there are any possibilities to prioritize the downloading, the possibilities depending on the type and time-criticality of the other traffic using the same downlink resources. Some of the steps or parts of the steps can also be left out or replaced by a corresponding step or part of a step. For example, steps 408, 411, 412 and 413 may be left out, or only steps 412 and 413. The messages (requests, responses, etc.) are only exemplary and may even comprise several separate messages for transmitting the same information. In addition, the messages may also contain other information.
Although in the above the different examples are illustrated assuming that each cache has a dedicated collector unit, a collector unit may be configured to collect, cache-specifically, information on requests relating to two or more caches, and the collector unit may be located in the same network entity as the packet downlink scheduler adjuster, as long as the requests pass through the collector unit.
It will be obvious to a person skilled in the art that, as technology advances, the inventive concept can be implemented in various ways. The invention and its embodiments are not limited to the examples described above but may vary within the scope of the claims.

Claims

1. A method comprising: receiving a request to download a specific content; detecting that the specific content is currently being downloaded to a cache; and sending an indication to prioritize the downloading to the cache.
2. A method as claimed in claim 1, further comprising: checking whether the request originates from a user apparatus wherefrom a corresponding request has been received earlier; and sending the indication to prioritize only if a corresponding request has not been received earlier from the user apparatus.
3. A method as claimed in claim 1 or 2, further comprising: maintaining information indicating the amount of received requests for the specific content; and sending the indication when the amount exceeds a preset threshold.
4. A method as claimed in claim 1 , 2 or 3, further comprising: setting a differentiated services field of an upstream encapsulating header of the request to a predefined code point as the indication.
5. A method as claimed in claim 4, wherein eight less significance bits of a differentiated service code point are used for the indication.
6. A method as claimed in claim 1 , 2 or 3, further comprising sending as the indication a request for a further bearer for the download .
7. A method comprising: receiving an indication to prioritize a downloading of a specific content to a cache; and, in response to the indication, adjusting downlink resources for the downloading.
8. A method as claimed in claim 7, further comprising: calculating a new downstream setting by taking into account the indication; and adjusting the downlink resources by updating a downstream setting in a traffic scheduler responsible for the downlink resources to correspond to the new downstream setting.
9. A method as claimed in claim 7 or 8, further comprising: checking whether or not it is acceptable to prioritize the downloading; and performing the adjusting in response to a checking result indicating acceptance.
10. A method as claimed in claim 7, 8, or 9, further comprising: receiving the indication in the form of a differentiated services field of an upstream encapsulating header being set to a predefined code point.
11. A method as claimed in claim 10, wherein the eight least significant bits of a differentiated services code point are used for the indication.
12. A method as claimed in claim 7, 8, or 9, further comprising receiving the indication as a request for a further bearer for the downloading.
13. A computer program product comprising program instructions adapted to perform any of the steps of a method as claimed in any one of claims 1 to 12 when the computer program is run.
14. An apparatus comprising means for implementing a method as claimed in any of claims 1 to 12.
15. An apparatus as claimed in claim 14, the apparatus comprising at least one processor; and one memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, provide the means for implementing.
16. A system comprising at least: a cache; a first apparatus comprising means for performing any of the steps of a method as claimed in any one of claims 1 to 6; and a second apparatus comprising means for performing any of the steps of a method as claimed in any one of claims 7 to 11.
17. A system as claimed in claim 16, wherein the system is a mobile communications system, the first apparatus is an evolved node B and the second apparatus is an intermediate router.
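For illustration only, the differentiated-services indication of claims 4 and 10 can be sketched as follows. The code point value is a hypothetical choice, and the sketch follows the conventional 8-bit DS field layout in which the code point occupies the six most significant bits; the bit allocation recited in claims 5 and 11 may differ:

```python
# Hypothetical predefined code point used as the prioritize indication.
PRIORITIZE_DSCP = 0b101110

def mark_ds_field(ds_field: int, dscp: int) -> int:
    """Set the code point part of an 8-bit DS field, preserving the two
    least significant (ECN) bits of the existing field."""
    return ((dscp & 0x3F) << 2) | (ds_field & 0x03)

def is_prioritize_indication(ds_field: int) -> bool:
    """Check whether an upstream encapsulating header carries the indication."""
    return (ds_field >> 2) == PRIORITIZE_DSCP
```

In this sketch, the first apparatus would apply `mark_ds_field` to the upstream encapsulating header of the request, and the second apparatus would use `is_prioritize_indication` to decide whether to adjust the downlink resources.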
PCT/EP2013/053506 2013-02-22 2013-02-22 Downloading to a cache WO2014127826A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2013/053506 WO2014127826A1 (en) 2013-02-22 2013-02-22 Downloading to a cache

Publications (1)

Publication Number Publication Date
WO2014127826A1 (en)

Family

ID=47754484

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020042828A1 (en) * 2000-10-05 2002-04-11 Christopher Peiffer Connection management system and method
US20120265856A1 (en) * 2011-04-18 2012-10-18 Cisco Technology, Inc. System and method for data streaming in a computer network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Clark, R. J. et al., "Providing scalable Web services using multicast communication", Computer Networks and ISDN Systems, North-Holland, Amsterdam, NL, vol. 29, no. 7, 1 August 1997, pages 841-858, XP004096541, ISSN: 0169-7552, DOI: 10.1016/S0169-7552(97)00005-6 *
Gevros, P. et al., "Analysis of a method for differential TCP service", 1999 IEEE Global Telecommunications Conference (GLOBECOM '99), Rio de Janeiro, Brazil, 5-9 December 1999, New York, NY: IEEE, pages 1699-1708, XP001003902, ISBN: 978-0-7803-5797-6, DOI: 10.1109/GLOCOM.1999.832453 *
Nossenson, Ronit, "Base Station Application Optimizer", Proceedings of the 2010 International Conference on Data Communication Networking (DCNET), IEEE, 26 July 2010, pages 1-6, XP031936309 *

Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 13706494; Country of ref document: EP; Kind code of ref document: A1)

NENP: Non-entry into the national phase (Ref country code: DE)

122 EP: PCT application non-entry in European phase (Ref document number: 13706494; Country of ref document: EP; Kind code of ref document: A1)