WO2014112250A1 - Node and method for routing packets based on measured workload of node in low-power and lossy network - Google Patents

Node and method for routing packets based on measured workload of node in low-power and lossy network

Info

Publication number
WO2014112250A1
WO2014112250A1 (PCT/JP2013/083146; JP2013083146W)
Authority
WO
WIPO (PCT)
Prior art keywords
node
packets
nodes
workload
time
Prior art date
Application number
PCT/JP2013/083146
Other languages
French (fr)
Inventor
Jianlin Guo
Xinxin LIU
Ghulam Bhatti
Philip Orlik
Kieran Parsons
Original Assignee
Mitsubishi Electric Corporation
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Publication of WO2014112250A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 - Network traffic management; Network resource management
    • H04W28/02 - Traffic management, e.g. flow control or congestion control
    • H04W28/08 - Load balancing or load distribution
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/12 - Avoiding congestion; Recovering from congestion
    • H04L47/125 - Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 - Network traffic management; Network resource management
    • H04W28/02 - Traffic management, e.g. flow control or congestion control
    • H04W28/0231 - Traffic management, e.g. flow control or congestion control based on communication conditions
    • H04W28/0236 - Traffic management, e.g. flow control or congestion control based on communication conditions radio quality, e.g. interference, losses or delay
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W40/00 - Communication routing or communication path finding
    • H04W40/02 - Communication route or path selection, e.g. power-based or shortest path routing
    • H04W40/04 - Communication route or path selection, e.g. power-based or shortest path routing based on wireless node resources
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 - Arrangements for monitoring or testing data switching networks
    • H04L43/08 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0852 - Delays
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 - Routing or path finding of packets in data switching networks
    • H04L45/24 - Multipath
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 - Routing or path finding of packets in data switching networks
    • H04L45/26 - Route discovery packet
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W40/00 - Communication routing or communication path finding
    • H04W40/02 - Communication route or path selection, e.g. power-based or shortest path routing
    • H04W40/12 - Communication route or path selection, e.g. power-based or shortest path routing based on transmission quality or channel quality
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/70 - Reducing energy consumption in communication networks in wireless communication networks
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 - INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S - SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S40/00 - Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them

Definitions

  • This invention relates generally to routing packets in wireless networks, and particularly to load balanced routing for low power and lossy networks.
  • nodes and communication links are constrained.
  • Nodes in the LLN typically operate with resource constraints on processing power, memory, power consumption, lifetime, rate of activity, and physical size.
  • the communication links between the nodes can be characterized by high loss rate, low data rate, instability, low transmission power, and short transmission range. There can be from a few dozen up to thousands of nodes within a practical LLN. Examples of LLN include a smart meter network, and a wireless sensor network for building monitoring.
  • the LLN can also have constrained traffic pattern.
  • Multipoint-to-point traffic, e.g., from nodes inside the LLN towards a central control or data concentrator node, is dominant.
  • Point-to-multipoint traffic, e.g., from a central control point to a subset of nodes inside the LLN, is less common.
  • Point-to-point traffic, e.g., between nodes in the LLN, is rare.
  • the control node in LLNs usually acts as data sink and collects data from all other nodes in LLNs.
  • LLN applications typically involve uneven node deployment and require a high packet delivery rate, which can result in uneven workload of the nodes, referred to as load unbalanced routing.
  • Load unbalanced routing can result in packet loss by LLN nodes due to small buffer sizes.
  • routing overhead can increase the workload of LLN nodes, and therefore can result in extra packet loss.
  • Conventional routing methods are not designed for the workload balancing and routing overhead minimization required in LLNs.
  • load balanced routing in LLN can increase bandwidth, improve reliability, reduce interference and transmission delay.
  • The Internet Engineering Task Force (IETF) has developed RPL, an IPv6 routing protocol for low-power and lossy networks.
  • RPL is a multi-path routing protocol
  • routes are discovered based on a predefined metric, such as hop count or expected transmission count.
  • a node transmits all packets to a single node, called a preferred parent node that is one "hop" away.
  • Packets can be data, control or management packets.
  • Fig. 1 shows an example of unbalanced routing in a smart meter network, including a concentrator node C 110, and a set of smart meter nodes (M).
  • the smart meter nodes e.g., a node 130, transmit their metering data packets along certain routes, e.g., a route 140, to the concentrator C.
  • the smart meter network can include a subset of smart meter nodes, e.g., a set 120, that can be densely deployed.
  • Based on a shortest path routing protocol, all these smart meter nodes transmit their metering data packets to the concentrator C via node M1 125. Such a transmission scheme can overload the node 125 when compared with other nodes, such as nodes M4 150 and M5 155.
  • U.S. Patent 7,936,704 describes a method of configuring the topology of a communication network as a forest structure comprising trees and subtrees.
  • that method is a single path routing method, because after discovery of the route, all packets have to be sent to a selected next hop node. Also, the method increases communication overhead, which must be minimized in LLNs.
  • U.S. 7,633,940 B1 describes an adaptive load-balanced routing method for interconnection networks. Approximate global congestion is sensed as a function of channel queues, with routing methods selected in accordance with the sensed congestion. However, that usage of the channel queue can result in packet loss by a LLN node due to limited storage of the nodes.
  • U.S. 7,366,100 describes an architecture that allows multipath packets to be distributed over multiple paths using a hash function. However, that method is not a load balanced routing method, but a multipath routing method.
  • U.S. Publication 2008/0112326 describes a method for load-balancing routes in multi-hop ad-hoc wireless networks.
  • the node waits before retransmitting the message, where the amount of time that the node waits is based on a value of a load metric at the node, which is independent of metrics of any other nodes in the network.
  • a load metric at the node which is independent of metrics of any other nodes in the network.
  • That method also establishes a single parent node route.
  • Load balanced routing provides numerous benefits to LLNs, such as improving network throughput, increasing energy efficiency, prolonging network operating duration, especially for battery-powered networks, and reducing communication overhead. Therefore, load balancing under a non-uniform node distribution is critical. It is desirable to develop a load balanced routing method for LLNs to deliver data packets and control packets reliably, minimizing packet loss without shortening network operability.
  • Various embodiments of the invention are based on a general realization that load balancing can be improved by transmitting packets from a node to multiple neighboring nodes, as contrasted with transmission to a single parent node. Moreover, if the allocation of the packets transmitted to the neighboring nodes is based on current workloads of those neighboring nodes, the overall workloads of the neighboring nodes are balanced.
  • Some embodiments of the invention are based on another realization that for allocating packets according to the workloads of neighboring nodes, sometimes it is not necessary to determine the actual workloads of those nodes, because relative comparison of the workloads can be sufficient for the allocation. For example, packets can be transmitted more often to a neighboring node having a lower workload than packets transmitted to a neighboring node having a higher workload. Specifically, one exemplary embodiment determines a ratio of workloads indicating the relative values of the workloads and allocates transmission of the packets to the neighboring nodes according to that ratio.
  • Some embodiments determine the ratio of the workloads based on local information collected by the node, e.g., by comparing times of receiving packets from neighboring nodes.
  • the local information collected during an operation of the LLN can include information indicative of transmission itself, as contrasted with global information including content transmitted as part of the packets. Such approach allows for comparison of the workloads without submitting additional information. Examples of the collected information include time of transmission, a number and/or rate of received packets.
  • a node including a receiver for receiving a first packet from a first node at a first time and a second packet from a second node at a second time, a processor for comparing the first time with the second time to produce a ratio of workloads of the first node and the second node, and a transmitter for transmitting packets to the first and the second nodes based on the ratio.
  • This embodiment allows allocating transmission of the packets to multiple nodes based on workloads of those nodes to achieve a balanced routing.
  • the node also includes a timer for delaying transmission of the packets to implicitly indicate the workload to other nodes. In those embodiments, the nodes can influence the allocation of the packets.
  • a node can determine a first node having a workload less than a workload of a second node based on time of receiving packets from the first and the second nodes, and generate a command to transmit a set of data packets to the first and the second nodes, such that more packets from the set are transmitted to the first node than to the second node.
  • a ratio of the packets transmitted to the first and the second nodes can be determined as a function of the ratio of the workloads. For example, in one embodiment, the processor updates the ratio of the transmitted packets based on quality of links connecting the node with the first node and with the second node. Considering quality of links in determining the ratio can advantageously use the load balancing to reduce the loss rate of the packets during upward transmission.
  • Some embodiments of the invention provide a delay timer calculation method to signal the workload of a node.
  • A node uses a time delay mechanism to signal the workload such that a node with a small workload has a shorter time delay in transmitting the route discovery packet, and a node with a large workload has a longer time delay.
  • The timer value can be proportional to the workload.
  • the magnitude of the delay is set such that the route discovery packet transmission backoff delay incurred by lower layers does not affect workload signaling.
  • A node can record the time of receiving a route discovery packet from each potential parent node. An earlier time indicates that the route discovery packet transmitter has a smaller workload and a later time indicates that it has a larger workload. Periodic route discovery allows the workloads to be updated dynamically.
  • Fig. 1 is schematic of a smart meter network with a conventional load unbalanced routing
  • Fig. 2A is a block diagram of a method suitable for routing packets by a node in a low-power and lossy network (LLN) according to some embodiments of an invention
  • Fig. 2B is a schematic of a structure of the node performing the method of Fig. 2A according to some embodiments of an invention
  • Fig. 3 is an example of load balanced routing for smart meter networks of Fig. 1;
  • Fig. 4A is a schematic of LLN in which some embodiments of the invention can operate
  • Fig. 4B is a schematic of routing discovery packet propagation in a LLN
  • Fig. 5 is a schematic of a queuing model for relay-based networks
  • Fig. 6 is a timing diagram of a relationship between workload and delay period according to some embodiments of an invention.
  • Fig. 7A is a schematic of a network ready for priority order assignment
  • Fig. 7B is a table illustrating an example of priority order assignment according to some embodiments of an invention.
  • Fig. 8 is an example of parent priority order assignment variation according to some embodiments of an invention.
  • Fig. 9A is a schematic of routing method for upward transmission according to some embodiments of an invention.
  • Fig. 9B is a schematic of routing method for parent selection and upward transmission according to some embodiments of an invention.
  • Fig. 10 is a schematic of routing method for downward transmission according to some embodiments of an invention.
  • Fig. 11 is a table illustrating packet loss distribution based on buffer size.
  • Fig. 2A shows a block diagram of a method for routing packets by a node 200 in a low-power and lossy network (LLN).
  • Fig. 2B shows schematically a structure of the node 200.
  • the load balancing is achieved by transmitting the packets to the multiple nodes neighboring the node 200, e.g., at a rate proportional to the workloads of those neighboring nodes. Such transmittal allows balancing the load over multiple neighboring nodes, instead of transmitting all packets to a single parent node in an unbalanced manner.
  • the allocation of the packets transmitted to the neighboring nodes is based on current workloads of those nodes to balance the overall workload of the neighboring nodes.
  • the comparison of the workloads is performed without increasing communication overhead of the LLN.
  • a first packet 212 is received 210 by the node 200 from a first node at a first time 214
  • a second packet 216 is received 210 by the node 200 from a second node at a second time 218.
  • the first and the second nodes are not shown on the Figures, but can have structures and functionalities similar to the structure and the functionality of the node 200.
  • the node 200, and the first and the second nodes can form at least a part of the LLN.
  • the first and the second nodes are the neighboring nodes for the node 200, i.e., are within one hop from the node 200.
  • the node 200 includes a receiver 250 for receiving the packets.
  • the node 200 also includes a processor 270 for comparing the workloads of the neighboring nodes.
  • the node 200 also includes a transmitter 260 for transmitting packets to the neighboring nodes.
  • the first packet and the second packet include a route discovery packet 440 as shown in Fig. 4B, and the transmitted packets include data packets.
  • the processor determines that a first node has a workload less than a workload of a second node based on packets received from the first and the second nodes, then the processor generates a command to the transmitter to transmit a set of packets to the first and the second nodes, such that more packets from the set are transmitted to the first node than to the second node.
  • the transmitted packets can include data packets, and control packets.
  • the processor determines 215 the first time 214 and the second time 218.
  • the processor determines 220 the workloads of the first and the second nodes as a function of the first time 214 and the second time 218. Based on the workloads 220, the processor determines 225 the ratio of packets to be transmitted to the first node and the second node.
  • the transmitter transmits 230 packets to the first and the second nodes based on the ratio 225.
  • the ratio 235 is updated dynamically.
  • the processor updates 240 the ratio of the transmitted packets based on workloads 220 and quality of links 245 connecting the node with the first node and with the second node.
  • An example of the transmission ratio is given in Equation (8). Considering quality of the links in determining the ratio can advantageously use the load balancing to reduce the loss rate of the packets during upward transmission.
  • This embodiment can further update the ratio of transmission based on statistics collected during the operation of the network without transmitting any additional information.
  • this embodiment can be advantageous during downward packets transmission to multiple child nodes based on workload of the child nodes to achieve load balancing.
  • the node also includes a timer 280 for delaying transmission of the packets to implicitly indicate the workload.
  • the node 200 and the neighboring nodes can influence the allocation of the packets.
  • workload is determined in a distributed manner such that each node can determine its workload independently. Several criteria can be used to calculate workload. According to one embodiment, the average or total number of packets queued in buffer of a memory 290 of the node within a certain time period indicate the workload of the node.
  • the processor multiplies the workload of the node with a delay coefficient 285 to determine a period of the delay. Also, the processor updates the delay coefficient during an operation of the node.
  • The delay coefficient can be selected based on the structure, density and application of the LLN, such that the packets of nodes with smaller workloads are received earlier than the packets of nodes with heavier workloads despite the natural delays in the LLN.
  • Fig. 3 shows load balanced routing for the smart meter network of Fig. 1.
  • The workload is distributed by the smart meters forwarding packets to nodes M4 150, M5 155, and M1 125.
  • Some nodes, such as nodes 310 and 320, transmit packets to multiple nodes.
  • The nodes 310 and 320 transmit to the node 125 as before, but also transmit 330 to the node 150.
  • Because the allocation is based on the relative workloads of the nodes 125 and 150, the workloads of the nodes 125 and 150 are balanced.
  • Fig. 4A shows a schematic of a LLN in which embodiments of the invention can operate.
  • The LLN includes a set of nodes 410 and a set of sinks 420.
  • the LLN nodes and sinks are organized in a two-tier hierarchy. Each sink is responsible for communicating with a sub-set of nodes. Sinks can transmit control and management packets to other LLN nodes.
  • All nodes can be data sources.
  • the LLN typically has non-uniform node distribution. Nodes and sinks form a mesh topology and communicate using wireless links 430. To ensure connectivity, nodes are arranged such that each node has a non-empty set of neighboring nodes. Each node in a LLN transmits data through one or multiple hops to one of the data sinks. A node may also need to forward packets received from neighbors towards data sinks.
  • Fig. 4B shows a schematic of route discovery packet propagation in a LLN in which embodiments of the invention can operate.
  • Transmission of route discovery packet 440 is initiated by a sink node 420.
  • the first hop neighboring nodes of sink node 420 receive the route discovery packet 440, select routes to sink node, update the route discovery packet and transmit the updated route discovery packets.
  • The second hop neighboring nodes of sink node 420 receive the route discovery packets transmitted by the first hop nodes, select routes to the sink node, update the route discovery packets and transmit the updated route discovery packets. This process continues until all nodes 410 receive route discovery packets and discover routes to the sink node.
  • A size of a buffer in a node in a LLN is relatively small. Therefore, a LLN node can only buffer a relatively small number of packets. When the buffer is full, a LLN node has to ignore subsequent packets or delete already buffered packets. In a multi-hop LLN, most of the nodes need to relay packets, especially nodes near the sinks. Unbalanced routing can cause the buffers of the nodes with large workload to fill much faster than the buffers of the nodes with smaller workload.
  • Fig. 5 shows an example of unbalanced routing, in which four source nodes 510 are S1, S2, S3 and S4, and three relay nodes 520 are R1, R2 and R3. All four source nodes forward their packets to relay node R2 instead of distributing packets among the three relay nodes. As a result, R2's buffer is full 530, and R2 starts losing packets.
  • The packet loss can be caused by an inadequate buffer size. Assuming the total number of packets generated by all source nodes follows a Markov process and the processing of packets at a relay node i is also a Markov process, the system can be modeled as an M/M/1/q_i queue, where q_i denotes the buffer size of relay node i. According to finite queue analysis, packets are lost when the number of arriving packets exceeds the buffer size of the relay node. The probability of packet loss by node i is
  • P_i = ((1 - ρ_i) * ρ_i^q_i) / (1 - ρ_i^(q_i + 1)), where ρ_i = λ_i / μ_i, λ_i is the packet arrival rate at node i, and μ_i is the service rate at node i.
  • the number of packet sources has a critical role in determining the ratio of the packet arrival and departure rates.
  • the buffer size at the relay node significantly affects the packet loss rate. Therefore, the buffer size is an important factor to be considered in balanced routing.
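As an illustration of the finite-queue analysis above, the following Python sketch (not part of the patent) evaluates the M/M/1/q loss probability; the function and variable names are assumptions made for the example.

```python
def loss_probability(arrival_rate: float, service_rate: float, buffer_size: int) -> float:
    """Packet loss probability of an M/M/1/q queue with a finite buffer.

    arrival_rate: packet arrival rate at relay node i (lambda_i)
    service_rate: packet service (forwarding) rate of the relay (mu_i)
    buffer_size:  buffer size q_i in packets
    """
    rho = arrival_rate / service_rate
    if rho == 1.0:
        # Degenerate case: all q+1 queue states are equally likely.
        return 1.0 / (buffer_size + 1)
    return (1.0 - rho) * rho**buffer_size / (1.0 - rho**(buffer_size + 1))


# Unbalanced routing (Fig. 5): four sources feed relay R2 alone, versus a
# balanced split over three relays; the small buffer amplifies the difference.
print(loss_probability(4.0, 3.0, 5))        # all traffic on one relay
print(loss_probability(4.0 / 3, 3.0, 5))    # traffic spread over three relays
```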
  • the commonly used metrics are packet delivery rate, end-to-end delay, routing packet overhead, etc. Some embodiments use packet delivery rate as the metric to develop a load balanced routing model.
  • The probability of node i successfully sending its packet to the next hop node is denoted p_i, the packet delivery rate at node i.
  • A node decides which workload distribution can result in the maximum packet delivery rate. The set of workload distributions is denoted using a distribution matrix, where S is the set of workload distribution matrices and N is the set of nodes in the LLN.
  • The embodiments of the invention provide a load balancing method that aims to mitigate the workload imbalance problem for routing in LLNs and, therefore, maximize the packet delivery rate.
  • The load balancing of some embodiments has the following features:
  • Non-intrusive: although there are existing solutions for collecting node information to strategically select better routing paths, the periodic information collection and control messages make them unsuitable for LLNs. A better strategy is to detect and signal workload imbalance in a non-intrusive way.
  • RPL: IPv6 Routing Protocol for Low-Power and Lossy Networks; DAG: Directed Acyclic Graph; DODAG: Destination Oriented Directed Acyclic Graph; DIO: DODAG Information Object.
  • The root configures the DODAG parameters such as DODAG Version Number, DODAGID, and Root Rank and advertises these parameters in DIO messages.
  • a node selects a set of DIO message senders as parents on the routes towards the root and computes its own rank. It also selects a preferred parent as next hop for upward traffic.
  • Upon joining a DODAG, a node transmits DIO messages to advertise the DODAG parameters.
  • the Rank of the nodes must monotonically decrease as the DODAG Version is followed towards the DODAG root. Downward routes, from the root to other destinations, are provided by these destination nodes transmitting the Destination Advertisement Object (DAO) messages.
  • Some embodiments of the invention use RPL for load balanced routing in LLNs. For simplicity, some embodiments use one data sink to describe the load balanced method. However, the load balanced method can be applied to other routing protocols and to LLNs with multiple data sinks.
  • When establishing a DODAG for data collection, each node selects a set of parent nodes, referred to as the parent set, towards the data sink.
  • One of the members in the parent set is selected as the preferred parent for upward data traffic.
  • routing metrics typically belong to a specific layer. For example, hop count indicates network layer distance, ETX represents the aggregated link layer communication quality, RSSI (Received Signal Strength Indicator) captures the physical layer signal quality for communication.
  • RPL does not consider load balancing.
  • a node forwards all packets to its preferred parent.
  • For load balanced packet routing, not only the channel condition but also the resource limitations of the parents should be considered.
  • Challenges for distributed load balanced routing are how to determine workload imbalance, and how to implicitly signal workload imbalance without increasing the communication overhead.
  • The root of a DODAG initiates the DODAG formation process by transmitting the DIO message containing RPLInstanceID, DODAG identifier, DODAG version number, rank, and other parameters.
  • the node decides whether to join the DODAG or not. If the node decides to join the advertised DODAG, the node selects a subset of DIO message transmitters as its DIO parents and computes a rank value for itself. After joining DODAG, the node transmits a DIO message with Rank field set to its rank value. This process continues until DODAG formation is complete.
  • the DIO parents are potential next hop nodes for transmitting upward data to DODAG root.
  • In RPL, only one parent, called the preferred parent, is used for upward data transmission.
  • Other parents are used as backup parents.
  • For downward data transmission, a node selects a subset of DIO parents as its DAO parents and transmits DAO messages to them so that downward routes are discovered.
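To make the DODAG formation step above concrete, here is a minimal, non-normative Python sketch of how a node might process a received DIO; the joining policy, the data structures, and the rank increase of one per hop are assumptions made for illustration, not details taken from RPL or from the patent.

```python
def handle_dio(node: dict, dio: dict, rank_step: int = 1) -> None:
    """Process one received DIO during DODAG formation (illustrative only).

    node: state of this node, e.g. {"version": ..., "rank": ..., "parents": {}}
    dio:  received message with keys "sender", "rank", "version", "dodag_id"
    """
    if dio["version"] > node.get("version", -1):
        # A new DODAG version starts a new formation period: reset the parents.
        node.update(version=dio["version"], dodag_id=dio["dodag_id"], parents={})
    parents = node.setdefault("parents", {})
    if dio["rank"] < node.get("rank", float("inf")):
        # Only senders closer to the root are kept as candidate DIO parents.
        parents[dio["sender"]] = dio["rank"]
        node["rank"] = min(parents.values()) + rank_step
    # The node's own DIO is then scheduled after a workload-proportional
    # delay (see the delay-timer discussion below), not sent immediately.
```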
  • In order to achieve load balanced routing, the nodes according to some embodiments determine their workload. It is impractical to calculate a network-wide workload globally in large-scale LLNs. Therefore, the workload calculation is done locally, in a distributed fashion, such that each node determines its workload independently.
  • the node includes a memory having a buffer for storing the packets, and the processor of the node determines the workload based on a number of packets stored in the buffer. Packets queued include packets received by a node from its neighbor nodes for forwarding and packets generated by a node itself for transmission.
  • The average number of packets queued in the buffer within a certain time period is defined as the workload of a node.
  • Alternatively, the total number of packets that have been stored in the buffer within a certain time period is defined as the workload of a node.
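A minimal sketch of the buffer-occupancy workload measurement described in the two bullets above; the class and method names are illustrative assumptions, not taken from the patent.

```python
class WorkloadMonitor:
    """Tracks a node's buffer occupancy over one route-discovery period."""

    def __init__(self) -> None:
        self.samples = []        # queued-packet counts sampled during the period
        self.total_buffered = 0  # total packets stored in the buffer this period

    def on_enqueue(self, queue_length_after: int) -> None:
        # Called whenever a packet (received for forwarding or locally
        # generated) is placed in the buffer.
        self.total_buffered += 1
        self.samples.append(queue_length_after)

    def average_workload(self) -> float:
        # Workload as the average number of packets queued in the period.
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

    def total_workload(self) -> int:
        # Alternative workload: total number of packets buffered in the period.
        return self.total_buffered

    def reset(self) -> None:
        # Start a new period, e.g. when a new DODAG version is advertised.
        self.samples.clear()
        self.total_buffered = 0
```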
  • After a node determines its workload, it signals the extent of its workload to the neighboring nodes so that they can decide whether or not to forward their packets to the node.
  • One way is to explicitly transmit a workload announcement packet or to incorporate the workload explicitly in a packet to be transmitted. However, that method transmits more data and therefore increases overhead in the network. The increased overhead increases bandwidth usage, which must be minimized in LLN routing.
  • Instead, the embodiments of the invention signal the workload implicitly, by delaying transmission of the route discovery packet.
  • Fig. 6 shows a schematic of the method for indicating the workload of the node by delaying transmission of the packets.
  • the node starts 620 a delay timer 610 with a delay period 640 proportional to its workload 650 such that the delay period is long when its workload is large and delay period is short if its workload is small.
  • the delay length 640 can be determined by multiplying the workload 650 with a delay coefficient 285.
  • The workload can be expressed as a number of packets and the delay coefficient in seconds per packet.
  • The product of the workload and the delay coefficient then gives the delay period expressed in seconds.
  • When the delay timer expires, the node starts DIO message transmission.
  • the node determines the workload, but instead of transmitting the workload explicitly and increasing the overhead of the network, the node delays a transmission of a packet proportionally to that workload to signal the workload implicitly.
  • buffer storage limitation of a node significantly affects the probability of packet loss.
  • The workload is used in the delay period calculation. With the workload W_i of node i, the delay timer value at the node can be calculated as delay_i = T_0 * W_i, where T_0 is a delay coefficient for the timer calculation.
  • T_0 is chosen such that the backoff delay of DIO message transmission does not affect the delivery time of the DIO message. That is, if two or more nodes start their DIO delay timers at about the same time, then the DIO message with a longer delay timer must be received later than the DIO message with a shorter delay timer, even though the DIO message with the shorter delay may have a longer backoff time than the DIO message with the longer delay. This calculation results in a delay that is proportional to the workload of the node and can be used for signaling the workload.
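The delay-timer calculation reduces to a single multiplication; the sketch below assumes delay = T_0 * workload as stated above, with T_0 large enough to dominate lower-layer backoff. Function and parameter names are illustrative.

```python
def dio_delay(workload_packets: float, t0_seconds_per_packet: float) -> float:
    """Delay before transmitting the DIO (route discovery) packet.

    The delay is proportional to the node's workload in the previous period,
    so a lightly loaded node implicitly signals its state by transmitting
    earlier.  t0_seconds_per_packet should exceed the worst-case MAC backoff
    so that backoff jitter cannot reorder the implicit signal.
    """
    return workload_packets * t0_seconds_per_packet


# Example: 12 packets of workload and T_0 = 0.5 s/packet give a 6 s delay.
print(dio_delay(12, 0.5))
```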
  • Some embodiments of the invention compare the workloads of next hop parent nodes.
  • a node receives multiple copies of the DIO messages from its parent nodes. The node records the time at which it receives the DIO message from each parent. Depending on the time the DIO messages are received from these parent nodes, the node implicitly determines the workload of parents. An earlier time indicates a smaller workload and a later time indicates a larger workload.
  • the packets received by the node and used for determining the workloads include a route discovery packet, e.g., DIO, wherein the packets transmitted by the node include data packets.
  • Figs. 7A and 7B show the workload allocation based on the DIO receiving time.
  • Fig. 7A shows that node N6 710 selects three parents N3, N4 and N5 based on the corresponding DIO messages 720.
  • Fig. 7B shows that node N6 received the DIO message from the node N5 at 10:00 am, from the node N3 at 10:01 am and from the node N4 at 10:02 am.
  • These time records indicate that node N5 had a small workload, node N3 a medium workload, and node N4 a large workload. Therefore, node N6 assigns a high priority to parent N5, a medium priority to parent N3, and a low priority to parent N4.
  • The node N6 forwards its packets to its parents by distributing packets according to priority, for example, 50% of the packets to parent N5, 30% to parent N3 and 20% to parent N4. In this way, more packets are forwarded to the parent node with the lower workload and fewer packets are forwarded to the parent node with the larger workload.
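For concreteness, the following Python sketch reproduces the Fig. 7B example: parents are ranked by DIO reception time and forwarding shares are derived from that ranking. The linear weighting used here is only one possible way to obtain a split close to the 50%/30%/20% example; it is an assumption, not the patent's prescribed rule.

```python
from datetime import datetime

# DIO reception times recorded by node N6 (the Fig. 7B example).
dio_times = {
    "N5": datetime(2013, 1, 1, 10, 0),   # earliest -> smallest workload
    "N3": datetime(2013, 1, 1, 10, 1),
    "N4": datetime(2013, 1, 1, 10, 2),   # latest -> largest workload
}

# Earlier DIO reception means higher parent priority.
ranked = sorted(dio_times, key=dio_times.get)            # ['N5', 'N3', 'N4']

# Turn the priority order into forwarding shares (illustrative weighting).
weights = {parent: len(ranked) - i for i, parent in enumerate(ranked)}
total = sum(weights.values())
shares = {parent: w / total for parent, w in weights.items()}
print(ranked, shares)   # N5 ~0.50, N3 ~0.33, N4 ~0.17
```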
  • The comparison of workloads is performed for each time period.
  • Parent selection and priorities can vary from one time period to another. Even for the same parents, the priority can vary.
  • Fig. 8 shows an example of priority assignment variation at different periods, in which the nodes N1 820 and N2 830 received the DIO message from the root node 810.
  • Node N3 840 receives the DIO messages from nodes N1 and N2.
  • Node N3 also has its children 850.
  • N1 transmits its DIO message 825 without delay due to its small workload in period I-1.
  • Node N2 delays the DIO message transmission 835 due to its large workload in period I-1.
  • Therefore, node N3 840 assigns node N1 a high priority order and assigns node N2 a low priority order. As a result, N3 forwards more packets 845 to node N1 in period I.
  • Node N1 delayed its DIO message transmission due to its large workload in period I, and node N2 did not delay its DIO message transmission due to its small workload in period I. Therefore, in period I+1, node N3 assigns node N1 a low priority order and assigns node N2 a high priority order, and N3 forwards more packets 845 to node N2 and fewer packets to node N1 in period I+1.
  • the root of a DODAG acts as data sink and initiates DIO message transmission.
  • the root transmits the DIO with new DODAG version periodically or non-periodically. Therefore, the DIO period may vary in length.
  • a new DODAG version indicates a new DODAG formation.
  • the node decides whether to join the new DODAG or not.
  • the node re-selects parents and computes a rank value.
  • the rank of a node can be computed according to routing metrics, such as the hop count distance to the root, or other metrics, according to RPL.
  • the node does not transmit a DIO with the new version number immediately. Instead, the node initializes a delay timer that is proportional to its workload in the previous period, and transmits the DIO message after the delay timer expires. This DIO transmission procedure continues at each increased level of the DODAG until the DIO message propagates to all nodes in network.
  • This delay timer mechanism allows nodes with large workload to signal neighboring nodes the workloads in a previous period. As a result, nodes with large workload in previous period may have fewer children or have lower parent priority. Therefore, workload based route discovery balances workload among nodes.
  • some embodiments of the invention select k top priority parent nodes as potential next hop nodes for upward data forwarding.
  • The link quality between a node and a parent is also used as a parameter for upward data packet forwarding.
  • The probability of a node i forwarding a data packet to a particular parent node j is calculated from e_ij, the loss probability due to the channel condition between node i and node j, and f_j, the priority factor, which is defined such that
  • a higher priority has a larger value and a lower priority has a smaller value.
  • The packets forwarded by a node i are thus distributed among the top k parent nodes based on both the pairwise link qualities and the workloads of the parents.
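The exact forwarding formula is not fully legible in this text, so the sketch below uses one natural reading of it: the share of traffic sent to parent j is proportional to the product of the link success probability (1 - e_ij) and the priority factor f_j, normalized over the top-k parents. Treat this as an assumed reconstruction, not a verbatim reproduction of the patent's equation.

```python
def forwarding_distribution(parents: dict) -> dict:
    """Distribute upward traffic of node i over its top-k parents.

    parents: parent id -> (loss_prob, priority_factor), where loss_prob is
    the channel loss probability e_ij towards that parent and the priority
    factor f_j grows with the parent's priority (i.e. shrinks with workload).
    """
    weights = {j: (1.0 - e_ij) * f_j for j, (e_ij, f_j) in parents.items()}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}


# Node N3 of Fig. 9A: parent N1 has both the better link and the higher priority.
print(forwarding_distribution({"N1": (0.1, 2.0), "N2": (0.3, 1.0)}))
```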
  • When a parent node becomes heavily loaded, that parent node increases the delay of its DIO message transmission in the next period, which may result in a priority that is not in the top k in the parent table of the child node.
  • In that case, this node is not used as a next hop for data forwarding.
  • the value of k depends on the applications, network environment and other factors.
  • the priority can be dynamically obtained using information of successful or failed network-layer transmissions via a media access control (MAC) layer feedback mechanism.
  • After successfully receiving a unicast packet at the MAC layer, the receiver replies with an acknowledgement packet to the MAC layer of the sender.
  • The MAC layer of the sender passes this information to the network routing layer.
  • The sender then knows the transmission was successful. If the sender does not receive the acknowledgement and the transmission reaches the maximum retry limit specified by the IEEE 802.11 MAC layer protocol, then the MAC layer failure is reported to the network layer.
  • The top k priority parent nodes can be verified dynamically. After a top-k parent node is detected as unreachable, this parent node is not used as the next hop node. In this case, the remaining top k-1 parent nodes can be used as next hop nodes, or another parent node can be selected to replace the unreachable parent node. However, such an unreachable parent node can be reused as a next hop node when it becomes reachable again.
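A short sketch of the dynamic top-k maintenance described above, driven by MAC-layer acknowledgement feedback; the class and its interface are assumptions made for illustration.

```python
class ParentTable:
    """Keeps the top-k parents and reacts to MAC-layer transmission feedback."""

    def __init__(self, ranked_parents, k: int) -> None:
        self.ranked = list(ranked_parents)   # parents in priority order
        self.k = k
        self.unreachable = set()

    def next_hops(self):
        # Top-k parents that are currently believed to be reachable.
        usable = [p for p in self.ranked if p not in self.unreachable]
        return usable[: self.k]

    def on_mac_feedback(self, parent, delivered: bool) -> None:
        # Called with the MAC-layer result: an acknowledgement was received,
        # or the retry limit was exhausted without one.
        if delivered:
            self.unreachable.discard(parent)   # reusable once reachable again
        else:
            self.unreachable.add(parent)


table = ParentTable(["N5", "N3", "N4"], k=2)
table.on_mac_feedback("N5", delivered=False)
print(table.next_hops())   # ['N3', 'N4'] while N5 is considered unreachable
```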
  • Fig. 9A shows an example of load balanced upward data forwarding, in which node N3 has two parent nodes N1 and N2.
  • Node N1 signaled a small workload by using a short DIO message delay time and node N2 signaled a large workload by using a long DIO message delay time.
  • Node N3 also detects a better link quality 910 to node N1 than the link quality 920 to node N2. Therefore, node N3 forwards more of its packets to node N1 and fewer packets to node N2 to achieve load balance.
  • Fig. 9B shows an example of parent selection and load balanced upward data forwarding, in which, based on the wireless links 430, node N0 930 has six neighbors N1 820, N2 830, N3 840, N4 940, N5 950 and N6 960. N0 selects three neighbors N1, N2 and N3 as its parent nodes. Therefore, node N0 forwards its packets to the three parent nodes to achieve load balance.
  • Each router node stores downward routes to the nodes in its sub-DODAG.
  • the root stores downward routes to all nodes.
  • DODAG structure allows multipath routing. Therefore, multiple downward routes to a destination node are also possible, and load balanced downward data forwarding can be necessary.
  • a node To achieve load balanced downward data routing, a node records the number of upward data packets the node receives from each of child nodes in a current period. If there are multiple downward routes to a destination, then the forwarding node distributes downward packets to its children on the multiple routes based on the upward workload records of the children. The forwarding node sends more downward packets to a child with smaller upward workload and fewer downward packets to a child with a larger upward workload.
  • the forwarding node can be any router node in storing mode and is the root only in non-storing mode.
  • the node e.g., the processor of the node, determines the ratio of the transmitted packets based on a number of packets received from the children nodes.
  • the processor updates the ratio of the transmitted packets based on a number of packets received from the first and the second nodes.
  • Fig. 10 shows an example of load balanced downward data routing 1010, as contrasted with upward data routing 1020.
  • The node N1 1030 transmits downward packets destined to node N4 1040.
  • Node N1 forwards more downward packets to node N2 1050 and fewer packets to node N3 1060, because the upward workload 1055 of node N2 is smaller than the upward workload 1065 of node N3.
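The downward split of Fig. 10 can be written directly from the two-node rule of the description (Z1 = (R2/(R1 + R2))*Z); the helper below is a sketch for exactly two child routes, with illustrative names and numbers.

```python
def downward_split(total_packets: int, upward_counts: dict) -> dict:
    """Split downward packets between two child routes (N2 and N3 in Fig. 10).

    upward_counts: child -> number of upward packets received from that child
    in the current period (its observed upward workload).  The child with the
    smaller upward workload receives the larger downward share.
    """
    (c1, r1), (c2, r2) = upward_counts.items()
    z1 = round(total_packets * r2 / (r1 + r2))
    return {c1: z1, c2: total_packets - z1}


# Node N1 forwards 100 downward packets; N2 relayed 40 upward packets, N3 60.
print(downward_split(100, {"N2": 40, "N3": 60}))   # {'N2': 60, 'N3': 40}
```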
  • Fig. 11 shows a table illustrating a number of lost packets based on buffer size 1110 in LLN nodes.
  • When the buffer size 1110 at each node is smaller, more packets are lost due to the buffer limitation. However, nodes using the non-balanced routing method 1120 lose many more packets. When the buffer is larger, the nodes using balanced routing 1130 do not lose packets.
  • the embodiments can be implemented in any of numerous ways.
  • the embodiments may be implemented using hardware, software or a combination thereof.
  • the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
  • processors may be implemented as integrated circuits, with one or more processors in an integrated circuit component.
  • a processor may be implemented using circuitry in any suitable format.
  • the processor can be connected to memory, transceiver, and input/output interfaces as known in the art.
  • the various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms.
  • the invention may be embodied as a computer readable medium other than a computer-readable storage medium, such as signals.
  • program or "software” are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present invention as discussed above.

Abstract

A node includes a receiver for receiving a first packet from a first node at a first time and a second packet from a second node at a second time, a processor for determining the first time and the second time and for comparing the first time with the second time to produce a ratio of workloads of the first node and the second node, and a transmitter for transmitting packets to the first and the second nodes based on the ratio.

Description

[DESCRIPTION]
[Title of Invention]
NODE AND METHOD FOR ROUTING PACKETS BASED ON MEASURED WORKLOAD OF NODE IN LOW-POWER AND LOSSY NETWORK
[Technical Field]
[0001]
This invention relates generally to routing packets in wireless networks, and particularly to load balanced routing for low power and lossy networks.
[Background Art]
[0002]
In low-power and lossy networks (LLNs), nodes and communication links are constrained. Nodes in the LLN typically operate with resource constraints on processing power, memory, power consumption, lifetime, rate of activity, and physical size. The communication links between the nodes can be characterized by high loss rate, low data rate, instability, low transmission power, and short transmission range. There can be from a few dozen up to thousands of nodes within a practical LLN. Examples of LLNs include a smart meter network, and a wireless sensor network for building monitoring.
[0003]
In contrast with other networks, the LLN can also have a constrained traffic pattern. Multipoint-to-point traffic, e.g., from nodes inside the LLN towards a central control or data concentrator node, is dominant. Point-to-multipoint traffic, e.g., from a central control point to a subset of nodes inside the LLN, is less common. Point-to-point traffic, e.g., between nodes in the LLN, is rare. The control node in LLNs usually acts as a data sink and collects data from all other nodes in the LLN.
[0004] LLN applications typically involve uneven node deployment and require a high packet delivery rate, which can result in uneven workload of the nodes, referred to as load unbalanced routing. Load unbalanced routing can result in packet loss by LLN nodes due to small buffer sizes. Also, routing overhead can increase the workload of LLN nodes, and therefore can result in extra packet loss. Unfortunately, conventional routing methods are not designed for the workload balancing and routing overhead minimization required in LLNs. However, load balanced routing in LLNs can increase available bandwidth, improve reliability, and reduce interference and transmission delay.
[0005]
For example, the internet engineering task force (IETF) has developed an IPv6 routing protocol for low-power and lossy networks (RPL). Even though RPL is a multi-path routing protocol, routes are discovered based on a predefined metric, such as hop count or expected transmission count. After the routes are discovered, a node transmits all packets to a single node, called a preferred parent node that is one "hop" away. As a result, the preferred parent node might have a larger workload than other nodes. Packets can be data, control or management packets.
[0006]
Fig. 1 shows an example of unbalanced routing in a smart meter network, including a concentrator node C 110, and a set of smart meter nodes (M). The smart meter nodes, e.g., a node 130, transmit their metering data packets along certain routes, e.g., a route 140, to the concentrator C. The smart meter network can include a subset of smart meter nodes, e.g., a set 120, that can be densely deployed.
[0007]
Based on a shortest path routing protocol, all these smart meter nodes transmit their metering data packets to the concentrator C via node M1 125. Such a transmission scheme can overload the node 125 when compared with other nodes, such as nodes M4 150 and M5 155.
[0008]
Accordingly, it is desired to provide load balanced routing suitable for LLNs. Several conventional routing methods have been developed for load balancing. However, those conventional routing methods are not optimal for LLNs.
[0009]
For example, U.S. Patent 7,936,704 describes a method of configuring the topology of a communication network as a forest structure comprising trees and subtrees. However, that method is a single path routing method, because after discovery of the route, all packets have to be sent to a selected next hop node. Also, the method increases communication overhead, which must be minimized in LLNs.
[0010]
U.S. 7,633,940 B1 describes an adaptive load-balanced routing method for interconnection networks. Approximate global congestion is sensed as a function of channel queues, with routing methods selected in accordance with the sensed congestion. However, that usage of the channel queue can result in packet loss by a LLN node due to limited storage of the nodes.
[0011]
U.S. 7,366,100 describes an architecture that allows multipath packets to be distributed over multiple paths using a hash function. However, that method is not a load balanced routing method, but a multipath routing method.
[0012]
U.S. Publication 2008/0112326 describes a method for load-balancing routes in multi-hop ad-hoc wireless networks. In accordance with that method, when a node receives a routing-protocol message, the node waits before retransmitting the message, where the amount of time that the node waits is based on a value of a load metric at the node, which is independent of metrics of any other nodes in the network. As a result, a node that has a larger load will wait longer to transmit a routing-protocol message, and consequently, the node is less likely to be selected for inclusion in the new route. That method also establishes a single parent node route.
[0013]
Load balanced routing provides numerous benefits to LLNs, such as improving network throughput, increasing energy efficiency, prolonging network operating duration, especially for battery-powered networks, and reducing communication overhead. Therefore, load balancing under a non-uniform node distribution is critical. It is desirable to develop a load balanced routing method for LLNs to deliver data packets and control packets reliably, minimizing packet loss without shortening network operability.
[Summary of Invention]
[0014]
Various embodiments of the invention are based on a general realization that load balancing can be improved by transmitting packets from a node to multiple neighboring nodes, as contrasted with transmission to a single parent node. Moreover, if the allocation of the packets transmitted to the neighboring nodes is based on current workloads of those neighboring nodes, the overall workloads of the neighboring nodes are balanced.
[0015]
Some embodiments of the invention are based on another realization that for allocating packets according to the workloads of neighboring nodes, sometimes it is not necessary to determine the actual workloads of those nodes, because relative comparison of the workloads can be sufficient for the allocation. For example, packets can be transmitted more often to a neighboring node having a lower workload than packets transmitted to a neighboring node having a higher workload. Specifically, one exemplary embodiment determines a ratio of workloads indicating the relative values of the workloads and allocates transmission of the packets to the neighboring nodes according to that ratio.
[0016]
Some embodiments determine the ratio of the workloads based on local information collected by the node, e.g., by comparing times of receiving packets from neighboring nodes. The local information collected during an operation of the LLN can include information indicative of transmission itself, as contrasted with global information including content transmitted as part of the packets. Such approach allows for comparison of the workloads without submitting additional information. Examples of the collected information include time of transmission, a number and/or rate of received packets.
[0017]
Accordingly one embodiment discloses a node including a receiver for receiving a first packet from a first node at a first time and a second packet from a second node at a second time, a processor for comparing the first time with the second time to produce a ratio of workloads of the first node and the second node, and a transmitter for transmitting packets to the first and the second nodes based on the ratio. This embodiment allows allocating transmission of the packets to multiple nodes based on workloads of those nodes to achieve a balanced routing. In some embodiments, the node also includes a timer for delaying transmission of the packets to implicitly indicate the workload to other nodes. In those embodiments, the nodes can influence the allocation of the packets.
[0018]
For example, to achieve load balanced routing, a node can determine a first node having a workload less than a workload of a second node based on time of receiving packets from the first and the second nodes, and generate a command to transmit a set of data packets to the first and the second nodes, such that more packets from the set are transmitted to the first node than to the second node.
[0019]
A ratio of the packets transmitted to the first and the second nodes can be determined as a function of the ratio of the workloads. For example, in one embodiment, the processor updates the ratio of the transmitted packets based on quality of links connecting the node with the first node and with the second node. Considering quality of links in determining the ratio can advantageously use the load balancing to reduce the loss rate of the packets during upward transmission.
[0020]
Some embodiments of the invention provide a delay timer calculation method to signal the workload of a node. A node uses a time delay mechanism to signal the workload such that a node with a small workload has a shorter time delay in transmitting the route discovery packet, and a node with a large workload has a longer time delay. The timer value can be proportional to the workload. In one embodiment, the magnitude of the delay is set such that the route discovery packet transmission backoff delay incurred by lower layers does not affect the workload signaling.
[0021]
A node can record the time of receiving a route discovery packet from each potential parent node. An earlier time indicates that the route discovery packet transmitter has a smaller workload and a later time indicates that it has a larger workload. Periodic route discovery allows the workloads to be updated dynamically.
[Brief Description of the Drawings]
[0022]
[Fig. 1]
Fig. 1 is a schematic of a smart meter network with conventional load unbalanced routing;
[Fig. 2A]
Fig. 2A is a block diagram of a method suitable for routing packets by a node in a low-power and lossy network (LLN) according to some embodiments of the invention;
[Fig. 2B]
Fig. 2B is a schematic of a structure of the node performing the method of Fig. 2A according to some embodiments of the invention;
[Fig. 3]
Fig. 3 is an example of load balanced routing for the smart meter network of Fig. 1;
[Fig. 4A]
Fig. 4A is a schematic of a LLN in which some embodiments of the invention can operate;
[Fig. 4B]
Fig. 4B is a schematic of route discovery packet propagation in a LLN;
[Fig. 5]
Fig. 5 is a schematic of a queuing model for relay-based networks;
[Fig. 6]
Fig. 6 is a timing diagram of a relationship between workload and delay period according to some embodiments of the invention;
[Fig. 7A]
Fig. 7A is a schematic of a network ready for priority order assignment;
[Fig. 7B]
Fig. 7B is a table illustrating an example of priority order assignment according to some embodiments of the invention;
[Fig. 8]
Fig. 8 is an example of parent priority order assignment variation according to some embodiments of the invention;
[Fig. 9A]
Fig. 9A is a schematic of a routing method for upward transmission according to some embodiments of the invention;
[Fig. 9B]
Fig. 9B is a schematic of a routing method for parent selection and upward transmission according to some embodiments of the invention;
[Fig. 10]
Fig. 10 is a schematic of a routing method for downward transmission according to some embodiments of the invention; and
[Fig. 11]
Fig. 11 is a table illustrating packet loss distribution based on buffer size.
[Description of Embodiments]
[0023]
Fig. 2A shows a block diagram of a method for routing packets by a node 200 in a low-power and lossy network (LLN). Fig. 2B shows schematically a structure of the node 200. In various embodiments, the load balancing is achieved by transmitting the packets to the multiple nodes neighboring the node 200, e.g., at a rate proportional to the workloads of those neighboring nodes. Such transmittal allows balancing the load over multiple neighboring nodes, instead of transmitting all packets to a single parent node in an unbalanced manner. Moreover, the allocation of the packets transmitted to the neighboring nodes is based on current workloads of those nodes to balance the overall workload of the neighboring nodes. In various embodiments, the comparison of the workloads is performed without increasing communication overhead of the LLN.
[0024]
For example, a first packet 212 is received 210 by the node 200 from a first node at a first time 214, and a second packet 216 is received 210 by the node 200 from a second node at a second time 218. The first and the second nodes are not shown on the Figures, but can have structures and functionalities similar to the structure and the functionality of the node 200. The node 200, and the first and the second nodes can form at least a part of the LLN. The first and the second nodes are the neighboring nodes for the node 200, i.e., are within one hop from the node 200.
[0025]
The node 200 includes a receiver 250 for receiving the packets. The node 200 also includes a processor 270 for comparing the workloads of the neighboring nodes. The node 200 also includes a transmitter 260 for transmitting packets to the neighboring nodes. In some embodiments, the first packet and the second packet include a route discovery packet 440 as shown in Fig. 4B, and the transmitted packets include data packets.
[0026]
If the processor determines that a first node has a workload less than a workload of a second node based on packets received from the first and the second nodes, then the processor generates a command to the transmitter to transmit a set of packets to the first and the second nodes, such that more packets from the set are transmitted to the first node than to the second node. The transmitted packets can include data packets, and control packets.
[0027]
In some embodiments, the processor determines 215 the first time 214 and the second time 218. The processor then determines 220 the workloads of the first and the second nodes as a function of the first time 214 and the second time 218. Based on the workloads 220, the processor determines 225 the ratio of packets to be transmitted to the first node and the second node. The transmitter transmits 230 packets to the first and the second nodes based on the ratio 225. For example, in one embodiment, the processor determines a number of packets Z1 for transmitting to the first node according to Z1 = (T2/(T1+T2))*Z, wherein the processor determines a number of packets Z2 for transmitting to the second node according to Z2 = (T1/(T1+T2))*Z, wherein T1 is the first time, T2 is the second time, and Z is a total number of the packets transmitted to the first and the second nodes.
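For illustration, the allocation of this embodiment can be sketched as follows (the helper name and timing values are illustrative): the node whose packet arrived earlier, and therefore has the smaller workload, receives the larger share of the Z packets.

# Minimal sketch of the time-based allocation in paragraph [0027];
# helper name and timing units are illustrative, not from the original text.
def split_by_receive_time(t1: float, t2: float, z: int) -> tuple:
    """Split z packets so that the node whose packet arrived earlier gets more."""
    total = t1 + t2
    if total == 0:
        return z // 2, z - z // 2           # both packets arrived instantly: split evenly
    z1 = round((t2 / total) * z)            # Z1 = (T2/(T1+T2))*Z
    return z1, z - z1                       # Z2 = Z - Z1 = (T1/(T1+T2))*Z (up to rounding)

# Example: packet from the first node after 2 s, from the second node after 8 s.
print(split_by_receive_time(2.0, 8.0, 100)) # (80, 20): the lighter-loaded node gets more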
[0028]
In alternative embodiments, the ratio 235 is updated dynamically. For example, in one embodiment, the processor updates 240 the ratio of the transmitted packets based on the workloads 220 and the quality of links 245 connecting the node with the first node and with the second node. An example of the ratio of transmission is given in Equation (8). Considering the quality of the links in determining the ratio advantageously allows the load balancing to also reduce the loss rate of the packets during upward transmission.
[0029]
In an alternative embodiment, the processor updates 240 the ratio of the transmitted packets based on a number of packets 247 received from the first and the second nodes. For example, in one embodiment, the processor determines a number of packets Z1 for transmitting to the first node according to Z1 = (R2/(R1+R2))*Z, wherein the processor determines a number of packets Z2 for transmitting to the second node according to Z2 = (R1/(R1+R2))*Z, wherein R1 is the number of packets received from the first node, R2 is the number of packets received from the second node, and Z is a total number of the packets transmitted to the first and the second nodes. This embodiment can further update the ratio of transmission based on statistics collected during the operation of the network without transmitting any additional information. Also, this embodiment can be advantageous during downward packet transmission to multiple child nodes based on the workload of the child nodes to achieve load balancing.
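A brief sketch of this counter-based update (class and method names are illustrative): the node counts the packets received from each neighbor during the current period and recomputes the split from those counters.

# Sketch of the received-count based update of paragraph [0029]; the counters
# are reset each period and the class name is illustrative.
from collections import defaultdict

class ReceiveCounters:
    def __init__(self):
        self.counts = defaultdict(int)      # neighbor id -> packets received this period

    def on_packet(self, neighbor_id):
        self.counts[neighbor_id] += 1       # accumulates R1, R2, ...

    def split(self, first, second, z):
        r1, r2 = self.counts[first], self.counts[second]
        if r1 + r2 == 0:
            return z // 2, z - z // 2
        z1 = round((r2 / (r1 + r2)) * z)    # Z1 = (R2/(R1+R2))*Z: busier neighbor gets fewer
        return z1, z - z1

counters = ReceiveCounters()
for _ in range(30):
    counters.on_packet("first")             # the first node relayed 30 packets
for _ in range(10):
    counters.on_packet("second")            # the second node relayed 10 packets
print(counters.split("first", "second", 100))   # (25, 75)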
[0030]
In some embodiments, the node also includes a timer 280 for delaying transmission of the packets to implicitly indicate the workload. In those embodiments, the node 200 and the neighboring nodes can influence the allocation of the packets. In some embodiments, the workload is determined in a distributed manner such that each node can determine its workload independently. Several criteria can be used to calculate the workload. According to one embodiment, the average or total number of packets queued in the buffer of a memory 290 of the node within a certain time period indicates the workload of the node.
[0031]
In some embodiments, the processor multiplies the workload of the node by a delay coefficient 285 to determine a period of the delay. Also, the processor can update the delay coefficient during an operation of the node. The delay coefficient can be selected based on the structure, density and application of the LLN, such that the packets of a node with a smaller workload are received earlier than the packets of a node with a heavier workload despite the natural delays in the LLN.
[0032]
Fig. 3 shows load balanced routing for the smart meter network of Fig. 1. According to some embodiments of the invention, the workload is distributed by smart meters forwarding packets to nodes M4 150, M5 155, and M1 125. For example, some nodes, such as nodes 310 and 320, transmit packets to multiple nodes. Specifically, in this example, the nodes 310 and 320 transmit to the node 125 as before, but also transmit 330 to the node 150. As a result, because the allocation is based on the relative workload of the nodes 125 and 150, the workloads of the nodes 125 and 150 are balanced.
[0033]
Low-power and lossy network
Fig. 4A shows a schematic of a LLN in which embodiments of the invention can operate. The LLN includes a set of nodes 410 and a set of sinks 420. The LLN nodes and sinks are organized in a two-tier hierarchy. Each sink is responsible for communicating with a subset of nodes. Sinks can transmit control and management packets to other LLN nodes.
[0034]
All nodes can be data sources. The LLN typically has non-uniform node distribution. Nodes and sinks form a mesh topology and communicate using wireless links 430. To ensure connectivity, nodes are arranged such that each node has a non-empty set of neighboring nodes. Each node in a LLN transmits data through one or multiple hops to one of the data sinks. A node may also need to forward packets received from neighbors towards data sinks.
[0035]
Fig. 4B shows a schematic of route discovery packet propagation in a LLN in which embodiments of the invention can operate. Transmission of the route discovery packet 440 is initiated by a sink node 420. The first hop neighboring nodes of the sink node 420 receive the route discovery packet 440, select routes to the sink node, update the route discovery packet and transmit the updated route discovery packets. The second hop neighboring nodes of the sink node 420 receive the route discovery packets transmitted by the first hop nodes, select routes to the sink node, update the route discovery packets and transmit the updated route discovery packets. This process continues until all nodes 410 receive route discovery packets and discover routes to the sink node.
[0036]
Load balanced routing
A size of a buffer in a node in a LLN is relatively small. Therefore, a LLN node can only buffer a relatively small number of packets. When the buffer is full, a LLN node has to ignore subsequent packets or delete already buffered packets. In a multi-hop LLN, most of the nodes need to relay packets, especially nodes near the sinks. Unbalanced routing can cause the buffers of nodes with a large workload to fill much faster than the buffers of nodes with a smaller workload.
[0037]
Fig. 5 shows an example of unbalanced routing, in which the four source nodes 510 are S1, S2, S3 and S4, and the three relay nodes 520 are R1, R2 and R3. All four source nodes forward their packets to relay node R2 instead of distributing packets among the three relay nodes. As a result, R2's buffer is full 530, and R2 starts losing packets.
[0038]
The packet loss can be caused by an inadequate buffer size. Assuming that the total number of packets generated by all source nodes follows a Markov process and that the processing of packets at a relay node i is also a Markov process, the system can be modeled as an M/M/1/q_i queue, where q_i denotes the buffer size of relay node i. According to finite queue analysis, packets are lost when the number of arriving packets exceeds the buffer size of the relay node. The probability of packet loss by node i is

p_i^b = ((1 - ρ) ρ^q_i) / (1 - ρ^(q_i+1))    (1)

[0039]
where ρ = λ/μ, λ is the packet arrival rate at node i, and μ is the service rate at node i. Based on Equation (1), the number of packet sources has a critical role in determining the ratio of the packet arrival and departure rates. In addition, the buffer size at the relay node significantly affects the packet loss rate. Therefore, the buffer size is an important factor to be considered in balanced routing.
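For concreteness, the finite-queue loss probability of Equation (1) can be evaluated as follows (the traffic parameters are illustrative); the example contrasts the unbalanced case of Fig. 5, where all four sources feed relay R2, with a balanced split of the same traffic over the three relays.

# Loss probability of an M/M/1/q queue as in Equation (1); parameter values
# below are illustrative only.
def buffer_loss_probability(arrival_rate, service_rate, buffer_size):
    rho = arrival_rate / service_rate
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (buffer_size + 1)      # limiting case rho -> 1
    return ((1.0 - rho) * rho ** buffer_size) / (1.0 - rho ** (buffer_size + 1))

# Four sources feeding a single relay (as in Fig. 5) versus a balanced split
# of the same traffic over three relays.
print(buffer_loss_probability(4.0, 5.0, 8))         # unbalanced relay: roughly 4% loss
print(buffer_loss_probability(4.0 / 3.0, 5.0, 8))   # balanced relay: negligible loss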
[0040]
There are many metrics that can be used to evaluate the performance of routing methods. The commonly used metrics are packet delivery rate, end-to-end delay, routing packet overhead, etc. Some embodiments use the packet delivery rate as the metric to develop a load balanced routing model.
[0041]
For a node i in a LLN, denote the packet loss rate caused by channel condition as p_i^c. Using Equation (1), the probability of successfully relaying a packet to the next hop node by node i is

p_i^r = (1 - p_i^b)(1 - p_i^c)    (2)
[0042]
For a node i, the probability of successfully sending its own packet to the next hop node is
[0043]
Therefore, the sum of these two probabilities is the packet delivery rate p_i at node i.
[0044]
For each node, there may be multiple nodes that can be used as the next hop nodes. A node decides which workload distribution can result in the maximum packet delivery rate. Denote the set of workload distributions using a distribution matrix as
S = [ s_11  ...  s_1n
       ...  ...  ...
      s_n1  ...  s_nn ]    (4)

where s_ij denotes the number of packets forwarded from node i to node j. The optimal workload distribution for the overall LLN is given by

S_opt = arg max_S Σ_{i ∈ N} p_i    (5)

where the maximum is taken over the set of workload distribution matrices S, and N is the set of nodes in the LLN.
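The system-level optimization of Equations (4)-(5) can be illustrated by a small brute-force search; the per-node delivery rate model below simply combines Equations (1) and (2) under assumed service rates, buffer sizes and channel loss rates, and is not prescribed by the embodiments.

# Illustrative brute-force search over workload splits, in the spirit of
# Equations (4)-(5); all numeric parameters are assumed for the example.
from itertools import product

def blocking(rho, q):                                   # Equation (1)
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (q + 1)
    return (1.0 - rho) * rho ** q / (1.0 - rho ** (q + 1))

def delivery_rate(load, service_rate=5.0, buffer_size=8, channel_loss=0.05):
    return (1.0 - blocking(load / service_rate, buffer_size)) * (1.0 - channel_loss)   # Equation (2)

# Twelve units of traffic to be split over three relays; pick the split that
# maximizes the sum of per-relay delivery rates, cf. Equation (5).
candidates = [s for s in product(range(13), repeat=3) if sum(s) == 12]
best = max(candidates, key=lambda s: sum(delivery_rate(load) for load in s))
print(best)                                             # the balanced split (4, 4, 4)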
[0045]
The optimization process for other performance metrics can be developed similarly.
[0046]
Load balanced routing for LLNs
Usually, given a sink, for any routing method, there are only a small number of non-zero elements in each row of the workload distribution matrix given by Equation (4). For example, for a single path routing method such as ad hoc on-demand distance vector (AODV) routing, there is only one non-zero element in each row. For a two-path routing method, there are at most two non-zero elements in each row. In general, for a multipath routing protocol such as RPL, the number of non-zero elements in a row is less than or equal to the number of next hop candidates of the corresponding node.
[0047]
In large scale LLNs, it can be impractical to obtain system level optimization as in Equation (5) because of a lack of system level information. A practical approach for routing in large scale LLNs is to let the nodes make decisions in a distributed fashion. Based on the load balanced routing model given by Equation (5), the embodiments of the invention provide a load balancing method that aims to mitigate the workload imbalance problem for routing in LLNs and, therefore, maximize the packet delivery rate.
[0048]
To improve the workload balance in LLNs, the load balancing of some embodiments has the following features:
a. Distributed: Due to the size of the LLNs and the resource limitations of the LLN nodes, it is difficult to obtain global information about the communication status of each node. Therefore, a distributed method is preferred.
b. Non-intrusive: Although there are existing solutions for collecting node information to strategically select better routing paths, the periodic information collection and control messages make them unsuitable for LLNs. A better strategy is to detect and signal workload imbalance in a non-intrusive way.
c. Reliability: In order to balance workload among nodes, some data traffic may be relayed through a path with imperfect communication link quality. To maintain reliability, both workload balance and communication link quality need to be considered.
[0049]
To provide a routing protocol for LLNs, the Internet Engineering Task Force (IETF) developed the IPv6 Routing Protocol for LLNs (RPL). Based on routing metrics, such as hop count or expected transmission count (ETX), RPL builds a Directed Acyclic Graph (DAG) topology to establish bidirectional routes for LLNs. RPL routes are optimized for traffic to or from one or more roots that act as data sinks. A DAG is partitioned into one or more Destination Oriented DAGs (DODAGs), one DODAG per sink. Therefore, the DODAG is the basic logical structure in RPL. The traffic of LLNs flows along the edges of the DODAG, either upwards to the root or downwards from the root. Upward routes, having the root as destination, are provided by the DODAG construction mechanism using the DODAG Information Object (DIO) messages. The root configures the DODAG parameters such as DODAG Version Number, DODAGID, and Root Rank and advertises these parameters in DIO messages. To join a DODAG, a node selects a set of DIO message senders as parents on the routes towards the root and computes its own rank. It also selects a preferred parent as the next hop for upward traffic. Upon joining a DODAG, a node transmits DIO messages to advertise the DODAG parameters. The Rank of the nodes must monotonically decrease as the DODAG Version is followed towards the DODAG root. Downward routes, from the root to other destinations, are provided by these destination nodes transmitting the Destination Advertisement Object (DAO) messages.
[0050]
Some embodiments of the invention use RPL for load balanced routing in LLNs. For simplicity, some embodiments use one data sink to describe the load balanced method. However, the load balanced method can be applied to other routing protocols and to LLNs with multiple data sinks.
[0051]
In RPL, when establishing a DODAG for data collection, each node selects a set of parent nodes, referred to as the parent set, towards the data sink. One of the members in the parent set is selected as the preferred parent for upward data traffic. Depending on the routing metrics used, the selection of the preferred parent may be different. Most commonly used routing metrics typically belong to a specific layer. For example, hop count indicates the network layer distance, ETX represents the aggregated link layer communication quality, and RSSI (Received Signal Strength Indicator) captures the physical layer signal quality for communication.
[0052]
RPL does not consider load balancing. A node forwards all packets to its preferred parent. In practice, for load balanced packet routing, not only the channel condition but also the resource limitations of the parents should be considered. The challenges for distributed load balanced routing are how to determine workload imbalance, and how to implicitly signal workload imbalance without increasing the communication overhead.
[0053]
According to RPL, the root of a DODAG initiates the DODAG formation process by transmitting the DIO message containing the RPLInstanceID, DODAG identifier, DODAG version number, rank, and other parameters. After a node receives DIO messages, the node decides whether to join the DODAG or not. If the node decides to join the advertised DODAG, the node selects a subset of DIO message transmitters as its DIO parents and computes a rank value for itself. After joining the DODAG, the node transmits a DIO message with the Rank field set to its rank value. This process continues until the DODAG formation is complete.
[0054]
The DIO parents are potential next hop nodes for transmitting upward data to DODAG root. In RPL, only one parent, called the preferred parent, is used for upward data transmission. Other parents are used as backup parents.
[0055]
For downward data transmission, a node selects a subset of DIO parents as its DAO parents and transmits DAO messages to DAO parents so that downward routes are discovered.
[0056]
Workload determination
In order to achieve load balanced routing, the nodes according to some embodiments determine their workload. It is impractical to globally calculate network-wide workload in large scale LLNs. Therefore, workload calculation is locally done in a distributed fashion such that each node determines its workload independently.
[0057]
Several criteria can be used to calculate workload. For example, in some embodiments the node includes a memory having a buffer for storing the packets, and the processor of the node determines the workload based on a number of packets stored in the buffer. Packets queued include packets received by a node from its neighbor nodes for forwarding and packets generated by a node itself for transmission.
[0058]
Other criteria for determining the workloads can also be used. For example, according to one embodiment, the average number of packets queued in buffer within a certain time period is defined as the workload of a node. According to another embodiment, the total number of packets that have been stored in the buffer within a certain time period is defined as workload of a node.
[0059]
Non-intrusive workload signaling
After a node determines its workload, the node signals to the neighboring nodes the extent of its workload so that the neighboring nodes can decide whether or not to forward their packets to the node. One way is to explicitly transmit a workload announcement packet or to incorporate the workload explicitly in a packet to be transmitted. However, that method transmits more data and therefore increases the overhead in the network. The increased overhead consumes additional bandwidth, which must be minimized in LLN routing. The embodiments of the invention use an implicit way to signal the workload, without transmitting the workload as a part of the content of the packets.
[0060]
Fig. 6 shows a schematic of the method for indicating the workload of the node by delaying transmission of the packets. For example, after a node joins a DODAG, the node starts 620 a delay timer 610 with a delay period 640 proportional to its workload 650, such that the delay period is long when its workload is large and the delay period is short when its workload is small. The delay period 640 can be determined by multiplying the workload 650 by a delay coefficient 285. For example, the workload can be determined as a number of packets and the delay coefficient can be in seconds. Thus, the product of the workload and the delay coefficient results in the delay period expressed in seconds. After the delay timer expires 630, the node starts the DIO message transmission. Thus, in some embodiments, the node determines the workload, but instead of transmitting the workload explicitly and increasing the overhead of the network, the node delays a transmission of a packet proportionally to that workload to signal the workload implicitly.
[0061]
For example, as shown in Equation (1), the buffer storage limitation of a node significantly affects the probability of packet loss. In some embodiments, the workload is used in the delay period calculation. With the workload, the following is a method to calculate a delay timer value at a node i:
T_i = T_0 x Workload_i,    (6)
where T_0 is a delay coefficient for the timer calculation. T_0 is used in a way such that the back-off delay of the DIO message transmission does not affect the delivery order of the DIO messages. That is, if two or more nodes start their DIO delay timers at about the same time, then the DIO message with a longer delay must be received later than the DIO message with a shorter delay timer, even though the DIO message with the shorter delay may have a longer back-off time than the DIO message with the longer delay. This calculation results in a delay that is proportional to the workload of the node and can be used for signaling the workload.
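A minimal sketch of Equation (6), assuming the workload is measured as a number of queued packets and the coefficient T_0 is given in seconds (the numeric values are illustrative):

# Delay timer of Equation (6): the DIO transmission of a heavily loaded node
# is deferred longer. The coefficient value is illustrative; it should dominate
# typical MAC back-off jitter so that DIO arrival order reflects workloads.
DELAY_COEFFICIENT_S = 0.5                   # T_0, in seconds (assumed value)

def dio_delay_seconds(queued_packets, coefficient=DELAY_COEFFICIENT_S):
    return coefficient * queued_packets     # T_i = T_0 x Workload_i

print(dio_delay_seconds(3))                 # lightly loaded node: 1.5 s
print(dio_delay_seconds(20))                # heavily loaded node: 10.0 s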
[0062]
Non-intrusive workload detection
Some embodiments of the invention compare the workloads of next hop parent nodes. A node receives multiple copies of the DIO messages from its parent nodes. The node records the time at which it receives the DIO message from each parent. Depending on the time the DIO messages are received from these parent nodes, the node implicitly determines the workload of parents. An earlier time indicates a smaller workload and a later time indicates a larger workload. In some embodiments, the packets received by the node and used for determining the workloads include a route discovery packet, e.g., DIO, wherein the packets transmitted by the node include data packets.
[0063]
Figs. 7A and 7B show the workload allocation based on DIO receiving times. Fig. 7A shows that node N6 710 selects three parents N3, N4 and N5 based on the corresponding DIO messages 720. Fig. 7B shows that node N6 received the DIO message from the node N5 at 10:00 am, from the node N3 at 10:01 am and from the node N4 at 10:02 am. These time records indicate that the node N5 had a small workload, node N3 had a medium workload, and node N4 had a large workload. Therefore, node N6 assigns a high priority to parent N5, a medium priority to parent N3, and a low priority to parent N4. As a result of the priority assignment, the node N6 forwards its packets to its parents by distributing the packets according to priority, for example, 50% of the packets to parent N5, 30% to parent N3 and 20% to parent N4. In this way, more packets are forwarded to the parent node with the lower workload and fewer packets are forwarded to the parent node with the larger workload.
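The priority assignment of this example can be sketched as follows (arrival times and the 50/30/20 split mirror Fig. 7B; the helper code is illustrative):

# Rank parents by DIO arrival time (earliest = lightest workload = highest
# priority) and assign packet shares in that order, mirroring Fig. 7B.
from datetime import datetime

dio_times = {                               # illustrative arrival times at node N6
    "N5": datetime(2013, 1, 21, 10, 0),
    "N3": datetime(2013, 1, 21, 10, 1),
    "N4": datetime(2013, 1, 21, 10, 2),
}
shares = [0.5, 0.3, 0.2]                    # high, medium, low priority

ranked = sorted(dio_times, key=dio_times.get)   # ['N5', 'N3', 'N4']
print(dict(zip(ranked, shares)))                # {'N5': 0.5, 'N3': 0.3, 'N4': 0.2}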
[0064]
In some embodiments, the comparison of workloads is performed for each time period. Parent selection and priority can vary over the time periods. Even for the same parents, the priority can vary.
[0065]
Fig. 8 shows an example of priority assignment variation at different periods, in which the nodes N1 820 and N2 830 received the DIO message from the root node 810. Node N3 840 receives the DIO messages from nodes N1 and N2. Node N3 also has its children 850. In period I, upon receiving the DIO message from the root 810, N1 transmits its DIO message 825 without delay due to its small workload in period I-1. On the other hand, upon receiving the DIO message from the root, node N2 delays the DIO message transmission 835 due to its large workload in period I-1. Therefore, upon receiving the DIO messages from nodes N1 and N2, node N3 840 assigns node N1 a high priority order and assigns node N2 a low priority order. As a result, N3 forwards more packets 845 to node N1 in period I.
[0066]
In period I+1, node N1 delayed its DIO message transmission due to its large workload in period I, and node N2 did not delay its DIO message transmission due to its small workload in period I. Therefore, in period I+1, node N3 assigns node N1 a low priority order and assigns node N2 a high priority order. Therefore, N3 forwards more packets 845 to node N2 and fewer packets to node N1 in period I+1.
[0067]
Workload based route discovery
According to RPL, the root of a DODAG acts as data sink and initiates DIO message transmission. The root transmits the DIO with new DODAG version periodically or non-periodically. Therefore, the DIO period may vary in length. A new DODAG version indicates a new DODAG formation. After a node receives DIO message with new version number, the node decides whether to join the new DODAG or not. The node re-selects parents and computes a rank value. The rank of a node can be computed according to routing metrics, such as the hop count distance to the root, or other metrics, according to RPL.
[0068]
Different from RPL, to perform workload imbalance detection and signaling according to some embodiments of the invention, the node does not transmit a DIO with the new version number immediately. Instead, the node initializes a delay timer that is proportional to its workload in the previous period, and transmits the DIO message after the delay timer expires. This DIO transmission procedure continues at each increased level of the DODAG until the DIO message propagates to all nodes in network. This delay timer mechanism allows nodes with large workload to signal neighboring nodes the workloads in a previous period. As a result, nodes with large workload in previous period may have fewer children or have lower parent priority. Therefore, workload based route discovery balances workload among nodes.
[0069]
Load balanced upward data forwarding
Unlike RPL, where a node forwards all its packets to its preferred parent, some embodiments of the invention select, based on the parent priority, the k top priority parent nodes as potential next hop nodes for upward data forwarding. Moreover, the link quality between a node and a parent is also used as a parameter for upward data packet forwarding. The probability of a node i forwarding a data packet to a particular parent node j is calculated as

P_ij = ((1 - p_ij^c) * f_j) / ( Σ_m (1 - p_im^c) * f_m ),    (8)

where the sum is taken over the k selected parent nodes m, p_ij^c is the packet loss probability due to channel condition between node i and node j, and f_j is a priority factor defined as a function of the priority of parent node j. A higher priority has a larger value and a lower priority has a smaller value.
[0070]
As a result, the packets forwarded by a node i are distributed among the top k parent nodes based on both the pairwise link qualities and the workloads of the parents. Moreover, when one of the parent nodes has a large workload during a current data collection period, that parent node increases the delay of its DIO message transmission in the next period, which may result in a priority that is not in the top k in the parent table of the child node. As a result, this node is not used as a next hop for data forwarding. Hence, workload imbalance can be reduced. For example, in Fig. 7A, if node N6 selects the top 2 (k = 2) parents, node N6 will not forward packets to parent node N4.
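For illustration, a minimal sketch of such a weighted next-hop choice, assuming the forwarding probability is proportional to (1 - p_ij^c) * f_j over the top k parents (identifiers and values below are illustrative):

# Choose a next-hop parent among the top-k candidates with probability
# proportional to (1 - channel_loss) * priority_factor; values are illustrative.
import random

def choose_next_hop(parents, rng=random):
    # parents: list of (parent_id, channel_loss, priority_factor)
    weights = [(1.0 - loss) * factor for _, loss, factor in parents]
    return rng.choices([pid for pid, _, _ in parents], weights=weights, k=1)[0]

top_k = [("N1", 0.05, 3.0),                 # good link, high priority factor
         ("N2", 0.20, 1.0)]                 # weaker link, low priority factor
print(choose_next_hop(top_k))               # usually 'N1', occasionally 'N2'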
[0071]
In Equation (8), the value of k depends on the application, the network environment and other factors. The packet loss probability p_ij^c can be obtained dynamically using information about successful or failed network-layer transmissions via a media access control (MAC) layer feedback mechanism. For example, in networks according to the IEEE 802.11 standard, after successfully receiving a unicast packet at the MAC layer, the receiver replies with an acknowledgement packet to the MAC layer of the sender. The MAC layer of the sender passes this information to the network routing layer, and hence the sender knows that the transmission was successful. If the sender does not receive the acknowledgement and the transmission reaches the maximum retry limit specified by the IEEE 802.11 MAC layer protocol, then the MAC layer failure is reported to the network layer.
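One possible realization of such feedback-driven estimation, assuming the routing layer receives a per-transmission success/failure callback from the MAC layer (the callback interface and smoothing factor below are assumptions), is an exponentially weighted moving average of the link loss:

# Exponentially weighted estimate of the per-link loss probability, updated
# from MAC-layer success/failure feedback; the callback interface is assumed.
class LinkLossEstimator:
    def __init__(self, alpha=0.1, initial_loss=0.1):
        self.alpha = alpha                  # smoothing factor
        self.initial_loss = initial_loss
        self.loss = {}                      # neighbor id -> estimated loss probability

    def on_mac_feedback(self, neighbor_id, delivered):
        sample = 0.0 if delivered else 1.0
        current = self.loss.get(neighbor_id, self.initial_loss)
        self.loss[neighbor_id] = (1.0 - self.alpha) * current + self.alpha * sample

est = LinkLossEstimator()
for delivered in [True, True, False, True]:
    est.on_mac_feedback("N1", delivered)
print(round(est.loss["N1"], 3))             # running estimate of the loss toward N1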
[0072]
To acquire short-term wireless channel variation, the reachability of top k priority parent nodes can be verified dynamically. After a top k parent node is detected unreachable, this parent node is not used as the next hop node. In this case, the top k-1 parent nodes can be used as next hop nodes or another parent node can be selected to replace the unreachable parent node. However, such unreachable parent node can be reused as next hop node when the node becomes reachable.
[0073]
Fig. 9A shows an example of load balanced upward data forwarding, in which node N3 has two parent nodes N1 and N2. Node N1 signaled a small workload by using a short DIO message delay time and node N2 signaled a large workload by using a long DIO message delay time. Also, node N3 detects a better link quality 910 to node N1 than the link quality 920 to node N2. Therefore, node N3 forwards more of its packets to node N1 and fewer packets to node N2 to achieve load balance.
[0074]
Fig. 9B shows an example of parent selection and load balanced upward data forwarding, in which, based on wireless links 430, node N0 930 has six neighbors N1 820, N2 830, N3 840, N4 940, N5 950 and N6 960. N0 selects three neighbors N1, N2 and N3 as its parent nodes. Therefore, node N0 forwards its packets to the three parent nodes to achieve load balance.
[0075]
Two methods below provide examples for implementing load balanced upward data forwarding.
[0076]
Method 1: Node Initialization Procedure
1: Initialize parent set and buffer utilization counter
2: Update latest received DODAG version number
3: Insert selected DIO message sources to parent set according to the message arrival time
4: Calculate rank value
5: Set timer value T_i according to Equation (6)
6: Generate a DIO message with its own rank number and the latest DODAG version number
7: When timer T_i expires, broadcast a DIO packet with current rank and DODAG version number
[0077]
Method 2: Load Balanced Routing for RPL
1: A node listens to the radio channel
2: Once a message M arrives, check the type of the message
3: if M is a DIO message then
4: if New version of DIO then
5: Invoke Sensor Node Initialization Procedure
6: else
7: if Current DIO version then
8: if Rank value carried in the message is less than current node's rank then
9: Insert the DIO message source to parent set according to message arrival time
10: end if
11: else
12: Discard this message
13: end if
14: end if
15: else
16: if M is a DAO message then
17: Process it according to RPL
18: end if
19: else
20: if M is data message then
21: Update buffer utilization counter
22: Forward this message by choosing the first k parent nodes from parent table, and selecting one as next hop with probability Equation (8)
23: end if
24: end if
[0078]
Load balanced downward data forwarding
According to RPL, downward routing depends on the mode of operation. In storing mode, each router node stores downward routes to the nodes in its sub-DODAG. In non-storing mode, the root stores downward routes to all nodes. The DODAG structure allows multipath routing. Therefore, multiple downward routes to a destination node are also possible, and load balanced downward data forwarding can be necessary.
[0079]
To achieve load balanced downward data routing, a node records the number of upward data packets the node receives from each of child nodes in a current period. If there are multiple downward routes to a destination, then the forwarding node distributes downward packets to its children on the multiple routes based on the upward workload records of the children. The forwarding node sends more downward packets to a child with smaller upward workload and fewer downward packets to a child with a larger upward workload. The forwarding node can be any router node in storing mode and is the root only in non-storing mode.
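For two children, the rule of paragraph [0029] applies directly; the following sketch generalizes it to any number of children by weighting each child inversely to its upward packet count (the helper name and weighting are illustrative):

# Distribute downward packets over children on alternative routes; the weight
# of each child is inversely related to its recorded upward packet count.
def downward_split(upward_counts, total_packets):
    inverse = {c: 1.0 / (n + 1) for c, n in upward_counts.items()}  # +1 avoids division by zero
    norm = sum(inverse.values())
    return {c: round(total_packets * w / norm) for c, w in inverse.items()}

# Child N2 relayed fewer upward packets than N3, so N2 receives more downward packets.
print(downward_split({"N2": 10, "N3": 40}, 100))    # {'N2': 79, 'N3': 21}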
[0080]
Accordingly, in one embodiment, the node, e.g., the processor of the node, determines the ratio of the transmitted packets based on a number of packets received from the child nodes. In another embodiment, the processor updates the ratio of the transmitted packets based on a number of packets received from the first and the second nodes.
[0081]
Fig. 10 shows an example of load balanced downward data routing 1010, as contrasted with upward data routing 1020. The node N1 1030 transmits downward packets destined to node N4 1040. Node N1 forwards more downward packets to node N2 1050 and fewer packets to node N3 1060, because the upward workload 1055 of the node N2 is smaller than the upward workload 1065 of the node N3.
[0082]
Performance comparison
Fig. 11 shows a table illustrating the number of lost packets based on buffer size 1110 in LLN nodes. When the buffer size at each node is smaller, more packets are lost due to the buffer limitation. However, nodes using the non-balanced routing method 1120 lose many more packets. When the buffer is larger, the nodes using balanced routing 1130 do not lose packets.
[0083]
The above-described embodiments of the present invention can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. Such processors may be implemented as integrated circuits, with one or more processors in an integrated circuit component. Though, a processor may be implemented using circuitry in any suitable format. The processor can be connected to memory, transceiver, and input/output interfaces as known in the art.
[0084]
Also, the various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Alternatively or additionally, the invention may be embodied as a computer readable medium other than a computer-readable storage medium, such as signals.
[0085]
The terms "program" or "software" are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present invention as discussed above.

Claims

[CLAIMS]
[Claim 1]
A node, comprising:
a receiver for receiving a first packet from a first node at a first time and a second packet from a second node at a second time;
a processor for determining the first time and the second time and for comparing the first time with the second time to produce a ratio of workloads of the first node and the second node; and
a transmitter for transmitting packets to the first and the second nodes based on the ratio of workloads.
[Claim 2]
The node of claim 1, wherein the first packet and the second packet include a route discovery packet, wherein the transmitted packets include data packets, and wherein the first time is earlier than the second time, such that data packets transmitted to the first node are transmitted more often than data packets transmitted to the second node.
[Claim 3]
The node of claim 1, wherein the processor determines a ratio of the packets transmitted to the first and the second nodes based on the ratio of the workloads.
[Claim 4]
The node of claim 3, wherein the processor updates the ratio of the transmitted packets based on quality of links connecting the node with the first node and with the second node.
[Claim 5]
The node of claim 3, wherein the processor updates the ratio of the transmitted packets based on a number of packets received from the first and the second nodes.
[Claim 6]
The node of claim 1, wherein the processor determines a number of packets Z1 for transmitting to the first node according to
Z1 = (T2/(T1+T2))*Z,
wherein the processor determines a number of packets Z2 for transmitting to the second node according to
Z2 = (T1/(T1+T2))*Z,
wherein T1 is the first time, T2 is the second time, and Z is a total number of the packets transmitted to the first and the second nodes.
[Claim 7]
The node of claim 1, wherein the processor determines a workload of the node, the node further comprising:
a timer operatively connected to the processor and to the transmitter for delaying the transmitting of a route discovery packet based on the workload.
[Claim 8]
The node of claim 7, wherein the processor multiplies the workload of the node with a delay coefficient to determine an extent of the delaying.
[Claim 9]
The node of claim 8, wherein the processor updates the delay coefficient during an operation of the node.
[Claim 10]
The node of claim 7, further comprising:
a memory having a buffer for storing the packets, wherein the processor determines the workload based on a number of packets stored in the buffer.
[Claim 11]
The node of claim 10, wherein the processor multiplies the number of packets with a delay coefficient to determine an extent of the delaying.
[Claim 12]
The node of claim 1, wherein the node, the first node and the second node form a part of a low-power and lossy network.
[Claim 13]
A method for routing packets by a node in a low-power and lossy network, comprising:
determining a first node having a workload less than a workload of a second node based on packets received from the first and the second nodes; and
generating a command to transmit a set of data packets to the first and the second nodes, such that more packets from the set are transmitted to the first node than to the second node, wherein steps of the method are performed by a processor of the node.
[Claim 14]
The method of claim 13, wherein the determining comprises:
comparing a first time of receiving a route discovery packet from the first node with a second time of receiving a route discovery packet from the second node.
[Claim 15]
The method of claim 14, further comprising:
determining a number of packets Z scheduled for transmission;
determining a number of packets Z1 for transmitting to the first node according to
Z1 = (T2/(T1+T2))*Z;
determining a number of packets Z2 for transmitting to the second node according to
Z2 = (T1/(T1+T2))*Z,
wherein T1 is the first time, T2 is the second time.
[Claim 16]
The method of claim 13, wherein the determining comprises:
comparing a number of packets received from the first node with a number of packets received from the second node.
[Claim 17]
The method of claim 14, further comprising:
multiplying the workload with a delay coefficient to determine the delay.
[Claim 18]
A node, comprising:
a receiver for receiving route discovery packets from neighboring nodes; a processor for comparing workloads of the neighboring nodes in response to the receiving to produce a ratio of workloads; and
a transmitter for transmitting data packets to the neighboring nodes according to the ratio.
[Claim 19]
The node of claim 18, wherein the processor determines the workloads independently from the neighboring nodes based on times of the receiving the route discovery packets.
PCT/JP2013/083146 2013-01-21 2013-12-04 Node and method for routing packets based on measured workload of node in low-power and lossy network WO2014112250A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/746,173 2013-01-21
US13/746,173 US20140204759A1 (en) 2013-01-21 2013-01-21 Load Balanced Routing for Low Power and Lossy Networks

Publications (1)

Publication Number Publication Date
WO2014112250A1 true WO2014112250A1 (en) 2014-07-24

Family

ID=49920572

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/083146 WO2014112250A1 (en) 2013-01-21 2013-12-04 Node and method for routing packets based on measured workload of node in low-power and lossy network

Country Status (2)

Country Link
US (1) US20140204759A1 (en)
WO (1) WO2014112250A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7898957B2 (en) * 2005-10-03 2011-03-01 The Hong Kong University Of Science And Technology Non-blocking destination-based routing networks
US7688788B2 (en) * 2005-10-11 2010-03-30 Microsoft Corporation Congestion level and signal quality based estimator for bit-rate and automated load balancing for WLANS
US8001365B2 (en) * 2007-12-13 2011-08-16 Telefonaktiebolaget L M Ericsson (Publ) Exchange of processing metric information between nodes
JP5368812B2 (en) * 2008-07-15 2013-12-18 京セラ株式会社 Wireless terminal and communication terminal
US8767557B1 (en) * 2012-09-14 2014-07-01 Sprint Spectrum L.P. Determining a data flow metric
US9667536B2 (en) * 2012-10-16 2017-05-30 Cisco Technology, Inc. Network traffic shaping for Low power and Lossy Networks
US9154370B2 (en) * 2012-11-05 2015-10-06 Cisco Technology, Inc. Seamless multipath retransmission using source-routed tunnels

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DANIELE PUCCINELLI ET AL: "Lifetime Benefits through Load Balancing in Homogeneous Sensor Networks", WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, 2009. WCNC 2009. IEEE, IEEE, PISCATAWAY, NJ, USA, 5 April 2009 (2009-04-05), pages 1 - 6, XP031454274, ISBN: 978-1-4244-2947-9 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108496391A (en) * 2015-11-20 2018-09-04 布鲁无线科技有限公司 The routing of wireless mesh communication network
CN108496391B (en) * 2015-11-20 2022-05-10 布鲁无线科技有限公司 Routing for wireless mesh communication networks

Also Published As

Publication number Publication date
US20140204759A1 (en) 2014-07-24

Similar Documents

Publication Publication Date Title
US20140204759A1 (en) Load Balanced Routing for Low Power and Lossy Networks
US8432820B2 (en) Radio and bandwidth aware routing metric for multi-radio multi-channel multi-hop wireless networks
JP6538849B2 (en) Scheduling algorithm and method for time slotted channel hopping (TSCH) MAC
EP2899930B1 (en) Gateways and routing in software-defined manets
Liu et al. Load balanced routing for low power and lossy networks
EP2899931B1 (en) Service-oriented routing in software-defined manets
CN101932062B (en) Multipath routing method in Ad Hoc network environment
Zhang et al. Energy-efficient duty cycle assignment for receiver-based convergecast in wireless sensor networks
Tekaya et al. Multipath routing with load balancing and QoS in ad hoc network
Farooq et al. RPL-based routing protocols for multi-sink wireless sensor networks
Jaiswal et al. An optimal QoS-aware multipath routing protocol for IoT based wireless sensor networks
Ma et al. Opportunistic communications in WSN using UAV
Le Multipath routing design for wireless mesh networks
Wayong et al. A scheduling scheme for channel hopping in Wi-SUN fan systems toward data throughput enhancement
Guo et al. Resource aware routing protocol in heterogeneous wireless machine-to-machine networks
Lutz et al. ATLAS: Adaptive topology-and load-aware scheduling
Devi et al. EESOR: Energy efficient selective opportunistic routing in wireless sensor networks
Sharma et al. Performance evaluation of MANETs with Variation in transmission power using ad-hoc on-demand multipath distance vector routing protocol
Ashwini et al. RPRDC: Reliable proliferation routing with low duty-cycle in wireless sensor networks
Pease et al. Cross-layer signalling and middleware: A survey for inelastic soft real-time applications in MANETs
Gurung et al. A survey of multipath routing schemes of wireless mesh networks
Gan et al. Cross-layer optimization of OLSR with a clustered MAC
KR100928897B1 (en) Method and apparatus for transmitting data packet in wireless multihop network considering hop count and wireless multihop network system
Sharma et al. Congestion Aware Link Cost Routing for MANETS
KV et al. MEGOR: Multi-constrained Energy efficient Geographic Opportunistic Routing in Wireless Sensor Network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13818467

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13818467

Country of ref document: EP

Kind code of ref document: A1