CHAPTER 2

RELATED WORK

Power management in battery-operated mobile devices has been an active area of research, and there are several proposals that attempt to reduce the power consumption of the various components of a mobile device. In this chapter, we review a range of techniques available in the literature, primarily focusing on the display, network and processor components.

2.1 LCD Power Conservation

Most commonly, Liquid-Crystal (LC) cells form the pixels of Liquid-Crystal Displays (LCDs). These cells react to the modulation of electric fields and change the polarization direction of light passing through them in response to an applied voltage. A Thin Film Transistor LCD (TFT LCD) has a sandwich-like structure with LC material filled between two glass plates, as shown in Figure 2.1. The TFT glass has as many TFTs as the number of pixels displayed, while the Colour Filter glass carries the colour filters that generate colour. The LCs move according to the difference in voltage between the Colour Filter glass and the TFT glass. These LCs are aligned electronically to form a pattern or image; based on this pattern, the backlight is passed or blocked towards the outer (transmissive) layer to create the image.

Figure 2.1 Structure of a Transmissive TFT LCD

Most of the LCD power is consumed by its backlight, and there are several works that aim to reduce the backlight power consumption. Dynamically dimming the backlight is considered an effective method to save the energy consumed by mobile device displays; the resulting reduction in brightness can be compensated by image enhancement techniques such as scaling up the pixel luminance. Traditional LCD displays used Cold Cathode Fluorescent Lamps (CCFL) for the backlight, while modern displays use Light Emitting Diode (LED) arrays instead. The dynamic dimming techniques described below can be applied to both CCFL and LED based backlights. However, as LEDs consume less power than CCFLs, LED based LCDs are more power efficient than CCFL based LCDs [44]. A 3-in-1 RGB LED can also obtain a wider colour gamut and better pre-mixed colours. Various techniques in the literature for conserving LCD backlight energy are discussed below.

Figure 2.2 Visibility of the Image in a Transmissive TFT in some Environment Luminance Conditions [2]

Backlight Auto-regulation. In Gatti et al [2], a technique called backlight auto-regulation was devised which regulates the backlight based on ambient lighting levels. The relation between the required backlight luminance and the ambient lighting is shown in Figure 2.2. This technique employs an on-board environment luminance sensor to determine the appropriate backlight level for the current ambient light condition; the authors adjusted the input voltage of the backlight driver to modify the luminance. The power saving becomes more significant, reaching up to 74%, as the environment becomes darker and the backlight luminance becomes lower. This is a very simple backlight dimming technique and it is already implemented, with better calibration, in almost all modern smartphones. However, this technique can be complemented with image enhancement techniques to save more energy.
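The control loop behind such auto-regulation can be summarised with a small sketch. The lux breakpoints, duty-cycle levels and hysteresis margin below are illustrative assumptions, not the values used in Gatti et al [2].

```python
# Illustrative sketch of sensor-driven backlight auto-regulation.
# The breakpoint table and hysteresis margin are assumed values.

AMBIENT_TO_DUTY = [            # (upper ambient lux bound, backlight duty cycle)
    (10, 0.15),                # very dark room: dim the backlight strongly
    (100, 0.35),
    (1000, 0.60),
    (float("inf"), 1.00),      # bright daylight: full backlight
]
HYSTERESIS = 0.05              # ignore small changes to avoid visible flicker


def target_duty(ambient_lux: float) -> float:
    """Map an ambient light reading to a backlight duty cycle."""
    for max_lux, duty in AMBIENT_TO_DUTY:
        if ambient_lux <= max_lux:
            return duty
    return 1.0


def regulate(current_duty: float, ambient_lux: float) -> float:
    """Return the new duty cycle, changing it only beyond the hysteresis band."""
    desired = target_duty(ambient_lux)
    return current_duty if abs(desired - current_duty) < HYSTERESIS else desired


if __name__ == "__main__":
    duty = 1.0
    for lux in [800, 90, 85, 5]:           # simulated sensor readings
        duty = regulate(duty, lux)
        print(f"ambient={lux:6.1f} lux -> duty cycle={duty:.2f}")
```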
Figure 2.3 Image and its Discrete Histogram

DLS - Dynamic Backlight Luminance Scaling. The principle of DLS is to save power by dimming the backlight while restoring the brightness of the image through appropriate image compensation; this scheme is generally known as Dynamic Backlight Luminance Scaling. In Chang et al [33], a dynamic backlight luminance scaling scheme is proposed. Based on different scenarios, three compensation strategies are discussed: brightness compensation, image enhancement, and context processing. Brightness compensation results in image distortion; hence, the amount of increase (transformation) applied to the pixel luminance is controlled by a threshold TH (shown in Figure 2.3), which is determined using the distortion ratio D_i given in Equation 2.1:

D_i = \frac{\sum_{j=TH}^{2^n-1} H_j(M_i)}{\sum_{j=0}^{2^n-1} H_j(M_i)}    (2.1)

where M_i represents the image or frame as a matrix of pixels, H is the histogram function, and H_j is the histogram value of a particular colour level j.

Brightness compensation allows a significant degree of backlight dimming while keeping the distortion ratio reasonable, as long as the image has a continuous histogram (adjacent histogram values are close to each other) that is not severely skewed towards bright areas. Brightness compensation is not efficient for images with discrete histograms. For discrete histograms, image enhancement techniques such as histogram stretching and histogram equalization are proposed; the transformation of the pixels is controlled by a lower threshold TL and an upper threshold TH, as shown in Figure 2.3. However, some minor colours may be merged into each other and are thus no longer distinguishable after histogram equalization. This may cause small parts of an image whose colour is similar to the background to become indistinguishable from it; hence, a context based processing is proposed. However, their calculation of the distortion does not consider that the clipped pixel values do not contribute equally to the quality distortion.
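The distortion ratio of Equation 2.1 can be computed directly from the image histogram, and the most aggressive admissible clipping threshold can be found by scanning candidate TH values. This is only a sketch of the idea behind brightness compensation in Chang et al [33]; the 8-bit depth and the 5% distortion budget are assumptions.

```python
# Sketch of brightness compensation in DLS: pick the smallest threshold TH
# (i.e. the most dimming headroom) whose distortion ratio (Equation 2.1)
# stays within a budget. Assumes an 8-bit histogram and a 5% budget.

def distortion_ratio(hist: list[int], th: int) -> float:
    """D_i: fraction of pixels at or above TH, which would saturate after scaling."""
    total = sum(hist)
    return sum(hist[th:]) / total if total else 0.0


def max_dimming_threshold(hist: list[int], budget: float = 0.05) -> int:
    """Smallest TH whose distortion ratio stays within the budget."""
    for th in range(len(hist)):
        if distortion_ratio(hist, th) <= budget:
            return th
    return len(hist) - 1


if __name__ == "__main__":
    # Toy histogram: mostly mid-tones plus a handful of bright pixels.
    hist = [0] * 256
    for level in range(60, 180):
        hist[level] = 100
    hist[250] = 30
    th = max_dimming_threshold(hist)
    print("chosen TH:", th, "distortion:", round(distortion_ratio(hist, th), 3))
```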
CBCS - Concurrent Brightness and Contrast Scaling. In Cheng et al [34], a similar method, namely Concurrent Brightness and Contrast Scaling (CBCS), is proposed. CBCS aims at conserving power by reducing the backlight illumination while retaining the image fidelity through preservation of the image contrast. The authors defined a contrast fidelity function f_c(x) for measuring the image fidelity after backlight scaling. They also added logic that controls the distribution of output voltages of the original voltage divider in order to implement a scaling function of the LCD's transmissivity, which eliminates the pixel-by-pixel manipulation of the image. A voltage divider is a specific piece of hardware designed to produce the fixed voltages required by a source driver for setting a certain level of LCD transmissivity. Cheng et al [34] modelled the observed luminance of a transmissive object as the product of the backlight luminance and the transmissivity of the TFT-LCD. The modified voltage divider was used to implement a programmable LCD reference driver (PLRD) which takes two input arguments, a lower bound and an upper bound, as guidance in modifying the voltage to control the transmissivity of the LCD panel. The relation between the luminance function bt(x) and these two bounds is shown in Figure 2.4, where x is the pixel value, t is the transmissivity for a pixel value, and b is the backlight factor. The luminance function consists of three regions: the undershot region [0, g_l], the linear region [g_l, g_u], and the overshot region [g_u, 1]. In other words, the lower bound g_l and the upper bound g_u are the darkest and brightest pixel values that can be displayed without contrast distortion (overshooting or undershooting) after applying CBCS. The contrast fidelity function is defined as the derivative of bt(x), as shown in Equation 2.2:

f_c(x) = \begin{cases} 0, & x < g_l \\ c, & g_l \le x \le g_u \\ 0, & g_u < x \le 1 \end{cases}    (2.2)

where c is limited between 0 and 1. If c > 1, the contrast increases and deviates from that of the original image, and the dynamic range [g_l, g_u] shrinks.

Figure 2.4 Luminance as a Function of Backlight and Transmissivity

The principle of CBCS is to scale the brightness and the contrast simultaneously in order to balance the contrast loss and the number of saturated pixels. As shown in Figure 2.5, scaling both brightness and contrast simultaneously gives better image fidelity than scaling just one of these attributes. The goal is to find the optimal bounds at which the overall contrast fidelity reaches its maximum; an overall contrast fidelity close to 1 (the contrast of the original image) is better. The authors of CBCS found the optimal bounds through a number of experiments, and claimed that CBCS can achieve a significant power saving of more than 50% with small contrast distortion for still images. However, CBCS cannot maximize the potential of a dynamic backlight scaling scheme in saving power due to its overestimation in measuring distortion [3]: CBCS only maximizes the number of preserved pixel values, or equivalently minimizes the number of saturated pixels.

Figure 2.5 Visual Effects of Adjusting Brightness (b), Contrast (c), and Both (d) when the Backlight is Dimmed to 50%: (a) Original, (b) 50% Contrast, (c) 50% Brightness, (d) 50% CBCS
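The search for the optimal bounds can be illustrated with a small brute-force sketch over the normalised pixel range. The luminance model, the candidate grid and the fidelity score below (the slope ratio c times the fraction of non-saturated pixels) are simplifying assumptions for illustration, not the exact optimization procedure of Cheng et al [34].

```python
# Brute-force sketch of the CBCS idea: for a given backlight factor b,
# pick bounds (gl, gu) that maximise an overall contrast fidelity score.
# The score (slope ratio times fraction of pixels inside [gl, gu]) is a
# simplifying assumption used only for illustration.

import numpy as np


def overall_fidelity(pixels: np.ndarray, b: float, gl: float, gu: float) -> float:
    """Pixels are normalised to [0, 1]; b is the backlight scaling factor."""
    if gu <= gl:
        return 0.0
    c = min(1.0, b / (gu - gl))          # slope of b*t(x) in the linear region
    inside = np.mean((pixels >= gl) & (pixels <= gu))
    return c * inside                    # saturated pixels contribute nothing


def best_bounds(pixels: np.ndarray, b: float, steps: int = 20):
    grid = np.linspace(0.0, 1.0, steps + 1)
    best = (0.0, 1.0, 0.0)
    for gl in grid:
        for gu in grid:
            f = overall_fidelity(pixels, b, gl, gu)
            if f > best[2]:
                best = (gl, gu, f)
    return best


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pixels = rng.beta(2.0, 2.0, size=10_000)    # toy mid-tone-heavy image
    gl, gu, fid = best_bounds(pixels, b=0.5)    # backlight dimmed to 50%
    print(f"gl={gl:.2f} gu={gu:.2f} fidelity={fid:.2f}")
```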
HEBS - Histogram Equalization for Backlight Scaling. Iranli et al [3] proposed a histogram equalization technique for backlight scaling with a pre-defined distortion level. HEBS tries to find an appropriate pixel transformation function for each displayed image. The authors argued that image distortion should be treated as a complex function of visual perception, and that it should be measured by combining the mathematical differences between pixel values (histograms) with the characteristics of the Human Visual System (HVS). HEBS works on an image histogram and a transformation function which transforms the original histogram into a new, uniformly distributed one with a specified minimum dynamic range; the dynamic range is the ratio, or range, between the brightest and the darkest available pixel values in an image. The new histogram should differ minimally from that of the backlight-scaled image. After obtaining the histogram transformation function, a dynamic linear function is defined to transform the original image into the desired resultant one. In practice, since it is difficult to measure the degree of distortion, this technique needed a number of experiments on benchmark images to build a mapping table from dynamic ranges to distortion ratios; an example of a mapping result is shown in Figure 2.6. The results are then used to specify the minimum dynamic range of the new uniformly distributed histogram. The HEBS method results in about 45% power saving at an effective distortion rate of 5%, and 65% power saving for a 20% distortion rate. This is a significantly higher power saving compared to previously reported ambient-independent backlight dimming approaches.

Figure 2.6 Luminance as a function of Backlight and Transmissivity [3]

HVS Based Dynamic Tone Mapping. All of the aforementioned techniques rely on the luminance values of the pixels of the displayed image as their optimization variables; they do not consider the quality perceived by the HVS. The luminance value of a light source is not the same as its perceived brightness. Figure 2.7 shows the relation between luminance and perceived brightness. The slope of each curve represents the luminance contrast sensitivity of the human eye, that is, the sensitivity of the HVS brightness perception to changes in luminance. As the luminance adaptation level of the eye decreases (each curve represents a different luminance adaptation level), the luminance contrast sensitivity decreases. It is also observable from the figure that the HVS exhibits higher sensitivity to changes in luminance in the darker regions of an image. Iranli et al [36] considered these HVS characteristics and proposed a more efficient way of backlight scaling using tone mapping, known as the Dynamic Tone Mapping (DTM) method. Tone mapping is a classic photographic task of mapping the potentially high dynamic range of real-world luminance values to the low dynamic range of the photographic print. The success of photography has shown that it is possible to produce images with limited dynamic range that convey the appearance of realistic scenes; this is fundamentally possible because the human eye is sensitive to relative, rather than absolute, luminance values. Many researchers have worked on automatically ...

[...]

In a server-client wireless network environment, data packets are transmitted as discrete bursts. Yong Wei et al [5] have proposed a power saving scheme based on the observation that the length of a no-data interval bears statistical correlation to previously observed no-data interval lengths. Their prediction scheme is based on the standard linear prediction model shown in Equation 2.6:

\hat{x}(n) = \sum_{i=1}^{p} a_i x(n-i)    (2.6)

where \hat{x}(n) is the estimate of the next no-data period, x(n-i) are the previous actual measurements, and a_i are the predictor coefficients. The prediction error is modelled as shown in Equation 2.7:

e(n) = x(n) - \hat{x}(n)    (2.7)

The error can be used as feedback to dynamically adjust the coefficient values and reduce the error. This statistical linear prediction-based approach is shown to yield more accurate predictions of sleep interval lengths than a typical history-based prediction approach, which predicts the current sleep interval length as a simple average of previously observed sleep interval lengths. Figure 2.20 compares the two in a RealPlayer environment.

Figure 2.20 Drop Rate vs Energy Metric (RealPlayer format at 512 Kbps) [5]
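The prediction and error-feedback loop of Equations 2.6 and 2.7 can be sketched as a normalised least-mean-squares update of the coefficients. The order p, the step size and the synthetic interval trace below are assumptions for illustration; Yong Wei et al [5] do not necessarily adapt the coefficients in exactly this way.

```python
# Sketch of linear prediction of no-data (sleep) intervals, Equations 2.6-2.7.
# Coefficients are adapted with a normalised LMS rule; the order p and the
# step size mu are assumed values.

class SleepIntervalPredictor:
    def __init__(self, p: int = 4, mu: float = 0.1):
        self.coeffs = [1.0 / p] * p          # a_i, start as a simple average
        self.history = [0.0] * p             # x(n-1) ... x(n-p)
        self.mu = mu

    def predict(self) -> float:
        """x_hat(n) = sum_i a_i * x(n - i)   (Equation 2.6)"""
        return sum(a * x for a, x in zip(self.coeffs, self.history))

    def update(self, observed: float) -> None:
        """Use the error e(n) = x(n) - x_hat(n) (Equation 2.7) as feedback."""
        error = observed - self.predict()
        norm = sum(x * x for x in self.history) + 1e-9
        self.coeffs = [a + self.mu * error * x / norm
                       for a, x in zip(self.coeffs, self.history)]
        self.history = [observed] + self.history[:-1]   # shift in newest sample


if __name__ == "__main__":
    predictor = SleepIntervalPredictor()
    trace = [30, 32, 28, 31, 60, 62, 58, 61, 30, 29]     # no-data intervals (ms)
    for interval in trace:
        print(f"predicted={predictor.predict():6.1f}  actual={interval}")
        predictor.update(interval)
```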
Energy management for 802.11n Multiple-Input-Multiple-Output (MIMO) is qualitatively different from that of its predecessor standards. 802.11n devices use multiple antennas to increase throughput; hence, 802.11n provides additional power states (shown in Table 2.1). Beyond a low-power sleep mode, 802.11n MIMO technology offers the possibility of selectively disabling one or more RF front-ends (RF chains) associated with its antennas, thereby saving energy. Jang et al [70] propose an energy management technique called Snooze for 802.11n, in which the AP monitors traffic on the WLAN and directs client sleep times and durations as well as antenna configurations, without significantly affecting throughput or delay. Snooze achieves 30-85% energy savings over CAM across workloads ranging from VoIP and video streaming to file downloads and chats.

Table 2.1 Power consumption (in Watts) for various modes of Intel (3x3 MIMO) and Atheros (2x2 MIMO) NICs

Proxy Based Approaches. Another common approach is to use proxy-controlled communication with the wireless client for power efficient communication. In Armstrong et al [71], the polling responsibility of applications is shifted from the mobile device to a network-based proxy, which then aggregates and sends the poll responses in a batch. Anastasi et al [40] have proposed a proxy based architecture and protocol for multimedia streaming. The paper has two goals, reducing power in the wireless part and smoothing traffic in the wired part, with the architecture shown in Figure 2.21. The main idea of this paper is to provide congestion control by adding Transmission Control Protocol (TCP)-Friendly Rate Control (TFRC) information [72] to the Real Time Protocol (RTP) and Real Time Control Protocol (RTCP) packets.

Figure 2.21 Split Communication for Multimedia Streaming

This smoothes the traffic but reduces the peak transmission rate, resulting in increased delay, which conflicts with the power saving objective. Hence, the authors have split the communication protocols as shown in Figure 2.21. To save power at the mobile client, the authors designed a new protocol known as RTPS (Real Time and Power Saving). According to the RTPS protocol, the proxy schedules packets to the client in an on-off fashion: during a streaming session, the time can be subdivided, from the wireless network standpoint, into transmission periods (on-periods) and non-transmission periods (off-periods). During on-periods the proxy transmits frames to the mobile host at the highest possible rate, whereas during off-periods the mobile host sets the WNIC to sleep mode. The traffic shaping proposed by the authors exploits a priori knowledge of the frame lengths, knowledge of the client buffer size, and an estimate of the currently available bandwidth on the wireless link, which corresponds to the maximum throughput the proxy can exploit for transmission. The RTPS protocol is used to exchange this information between the client and the AP in order to shape the traffic. During an on-period, the proxy decides whether to stop transmitting depending on the available bandwidth: when the bandwidth is high, it continues transmitting until it fills up the client buffer; when it is low, it stops transmitting in order to avoid increasing congestion and transfer delay. The duration of an off-period is decided by the proxy when it stops transmitting and depends on the client buffer level: an off-period ends when the client buffer level falls below a dynamic threshold, the low water level, which warns the proxy about the risk of playback starvation. The low water level also depends on the available bandwidth: when the currently available bandwidth is low, the low water level is high in order to preserve a large supply for the playback process; when the currently available bandwidth is high, the low water level is lower, since the proxy can quickly feed the buffer and the risk of underflow is very low. Finally, in order to avoid bandwidth wastage, only frames that are expected to arrive in time for their playback are delivered to the client, whereas the others are discarded; the computation of frame arrival times makes use of the estimate of the available throughput on the wireless link.
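The proxy's off-period decision can be summarised with the sketch below. The playback rate, the bandwidth threshold and the rule mapping bandwidth to the low water level are assumptions used only to illustrate the behaviour described for RTPS; they are not the exact formulas of Anastasi et al [40].

```python
# Sketch of an RTPS-style proxy decision: choose the low water level from the
# available bandwidth, then derive how long the client WNIC may sleep.
# All constants (playback rate, thresholds) are assumed for illustration.

PLAYBACK_RATE = 25.0          # frames consumed per second by the client
LOW_BW_THRESHOLD = 500_000    # bits/s below which bandwidth is considered "low"


def low_water_level(buffer_capacity: int, available_bw: float) -> int:
    """Keep a larger reserve when the wireless bandwidth is low."""
    fraction = 0.5 if available_bw < LOW_BW_THRESHOLD else 0.2
    return int(buffer_capacity * fraction)


def off_period(buffer_level: int, buffer_capacity: int, available_bw: float) -> float:
    """Seconds the client may keep the WNIC asleep before the buffer
    drains down to the low water level."""
    lwm = low_water_level(buffer_capacity, available_bw)
    drainable = max(0, buffer_level - lwm)
    return drainable / PLAYBACK_RATE


if __name__ == "__main__":
    capacity = 200                               # client buffer size in frames
    for level, bw in [(200, 1_000_000), (200, 200_000), (80, 200_000)]:
        print(f"buffer={level:3d} bw={bw:>9} -> sleep {off_period(level, capacity, bw):5.1f} s")
```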
Chandra and Vahdat [73] use a proxy to batch packets from various streaming applications in a manner coordinated with the mobile device, so that the device can sleep between the receptions of batched packets. In PSM-throttling [74], the authors use traffic shaping for energy reduction without a proxy, by increasing the burstiness of traffic from a streaming server using targeted zero-sized TCP receive window messages. The packet arrival time of a video stream is variable; though the client can predict packet arrival times based on past history, it is hard to predict them accurately, resulting in packet losses. Shenoy and Radkov [75] propose a proxy assisted approach where, at the end of each burst, the proxy sends a control packet that indicates the transmission time of the next set of frames. The client uses this control information to switch its interface to the active mode at the specified instant (and uses the passive mode in the interim). Specifically, if t denotes the instant at which the proxy will begin transmitting the next set of frames and d denotes the link delay, then the client switches its interface to the active mode at t + d + x, where x accounts for variations in the link delay.
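The client-side wake-up rule is simple enough to sketch in a few lines. Estimating the jitter allowance x from the spread of recent delay samples is an assumption; Shenoy and Radkov [75] only require that x covers variations in the link delay.

```python
# Sketch of the client wake-up rule: switch the interface to active mode at
# t + d + x, where t is the transmission instant announced by the proxy,
# d the link delay and x a jitter allowance. Deriving x from the standard
# deviation of recent delay samples is an assumed choice.

import statistics


def wake_up_time(t: float, delay_samples: list[float], k: float = 2.0) -> float:
    d = statistics.mean(delay_samples)                 # estimated link delay
    x = k * statistics.pstdev(delay_samples)           # allowance for jitter
    return t + d + x


if __name__ == "__main__":
    delays = [0.012, 0.015, 0.011, 0.014, 0.013]       # seconds
    t_next = 10.000                                    # announced by the proxy
    print(f"switch to active mode at {wake_up_time(t_next, delays):.3f} s")
```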
Secondary-Channel Information. Given the high energy costs of WiFi scanning and CAM operation, several systems have been proposed that turn on the WiFi radio based on some secondary-channel information. Shih et al [76] propose Wake-on-Wireless, which uses a low power radio to turn on WiFi. In Cell2Notify, Agarwal et al [77] use an incoming ring over the Global System for Mobile Communication (GSM) channel as a wake-up call for WiFi for a VoIP session over WiFi. A unique caller-ID of the wake-up call helps the smartphone distinguish between a wake-up ring and a regular incoming phone call over the GSM interface; upon reception of a wake-up ring, the smartphone powers on the Wi-Fi interface and then receives the actual incoming VoIP call. The Cell2Notify protocol is depicted in Figure 2.22. In addition, the use of context information, such as the current location derived from cellular towers or the usage history, for energy efficient data transfer is proposed by Ahmad and Lin [78]. Ganesh and Ion [79] use Bluetooth contact patterns and cell tower information to avoid unnecessary WiFi scanning, allowing the device to intelligently switch the Wi-Fi interface on only when Wi-Fi connectivity is available.

Figure 2.22 Cell2Notify Protocol

AP Assisted Power Management. A set of approaches allows the AP to help its clients save power. In Centralised PSM (CPSM) [80], the AP selects certain parameters for its clients, such as the beacon interval, listen interval and contention window size, and reduces simultaneous wake-ups of clients in order to save energy; CPSM is able to maximize the total energy efficiency over all clients. Lin et al [81] propose wake-up scheduling to reduce the probability of collision. In addition, the AP advertises a subset of PSM clients in the beacon, and clients use the information in the beacons to determine their polling sequence in order to help avoid client contention. Lee et al [82] use heuristics to approximate the global optimum in power savings over all PSM clients in a generalized PSM setting. All of these approaches require both client and AP modifications. Rozner et al [83] found that competing background traffic in some PSM implementations results in a significant increase, up to 300%, in a client's energy consumption, a decrease in wireless network capacity due to unnecessary retransmissions, and unfairness. The authors proposed Network-Assisted Power Management (NAPman), which leverages AP virtualization and a new energy-aware fair scheduling algorithm to minimize client energy consumption and unnecessary retransmissions while ensuring fairness among competing traffic. NAPman reduces the energy consumption of clients by up to 70%.

Application Assisted Power Management. Anand et al [39] propose a Self-Tuning Power Management (STPM) protocol that exploits hints provided by network applications. The hints describe the near-future network activities of the applications and are used to save energy by managing the states of the WNIC. The authors show that, though PSM saves significant energy for latency-tolerant applications, it substantially degrades performance and may even increase overall energy usage when used with latency-sensitive applications. In contrast to the 'one size fits all' approach of PSM, STPM tunes the energy saving functions to the requirements of the application. Davide Bertozzi et al [41] recommend a mechanism to save power while a streaming application is in progress, based on the client's playback buffer. Playback buffers are often investigated by researchers who focus on bandwidth and delay; this paper instead shows how to precisely calculate the time to turn off the WNIC based on the playback buffer size and the Low-Water Mark level (LWM). Bertozzi's approach also differs from previous works, which turn off the WNIC only when there is no active network traffic (no traffic from or to the application). Because the approach is based on system-level knowledge (buffer size and LWM), it precisely calculates the turn-off time for the WNIC without affecting the user experience. The transaction, being client-controlled, prevents incoming packets of the stream from incurring additional delay due to buffering at the access point (AP). The authors provide an estimate of the minimum playback buffer size (25 media units, MUs) needed to enjoy the benefits of the scheme, assuming 8 kB of data per MU. With this recommended buffer size, the scheme consumes up to 25% less power than the standard IEEE 802.11 mechanism.
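The turn-off time computation can be sketched from the buffer occupancy, the playback rate and the low-water mark. The media-unit consumption rate, the LWM value and the wake-up latency below are assumptions; Bertozzi et al [41] derive the exact instants from their buffer model.

```python
# Sketch of a client-controlled WNIC shutdown based on the playback buffer:
# the WNIC may stay off until the buffer drains from its current level down
# to the low-water mark (LWM), minus the time needed to wake the WNIC back up.
# Consumption rate, LWM and wake-up latency are assumed values; the 25-MU
# minimum buffer (8 kB per MU) follows the text above.

PLAYBACK_RATE_MU = 2.0           # media units consumed per second
WAKEUP_LATENCY = 0.1             # seconds needed to bring the WNIC back up


def wnic_off_duration(buffer_mu: float, lwm_mu: float) -> float:
    """Seconds the WNIC may remain off without risking playback starvation."""
    drain_time = max(0.0, buffer_mu - lwm_mu) / PLAYBACK_RATE_MU
    return max(0.0, drain_time - WAKEUP_LATENCY)


if __name__ == "__main__":
    # Recommended minimum buffer of 25 MUs, LWM assumed at 5 MUs.
    print(f"WNIC may sleep for {wnic_off_duration(25, 5):.1f} s")
```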
Another common way of saving power is to transcode the video such that it requires less power for decoding. Poellabauer and Schwan [84] propose a selection mechanism that chooses the most appropriate transcoder and transcoder parameters according to the global system state. There are many useful transcoders [85] [86], as well as selection algorithms for choosing suitable transcoder parameters [87]. By leveraging such results, the authors can focus on maintaining a client's desired Quality of Service (QoS) characteristics, using different transcoders that result in varying energy savings, depending on the QoS specifications, the device and transcoder characteristics, and the data content.

Power-aware State Dissemination for Games. All of the aforementioned techniques are applicable to latency-tolerant applications. Mobile games are highly time sensitive: packets must reach the client within a bounded latency, otherwise they become useless. Hence, buffering packets or sending consolidated data from a group of packets does not help much. Packets should also arrive at consistent intervals; high jitter makes the game unplayable. The following techniques focus on energy conservation for mobile game applications. All of these techniques use application-level information to calculate the sleep duration precisely, avoiding any degradation of the user experience.

Interactive digital entertainment (game), simulation and process monitoring applications can modify the dead reckoning (extrapolation) threshold based on the power situation to increase the battery lifetime. A higher threshold leads to a lower number of 'update packets' per unit of time, which in turn results in power savings. Though a higher threshold helps in saving power, it should be limited to an acceptable level of state consistency error; the threshold can be intelligently adjusted at run-time by keeping the state consistency error within a user-acceptable range. In their paper, Weidong and Kalyan [31] show that their modified power-aware dead reckoning algorithm, which optimizes the trade-off between state consistency and power consumption, achieves significant power savings. In their experiments on a simple mobile game they implemented, they show a 69% power saving with a threshold value of 16 pixels, and only a minimal power saving (31%) when the threshold is reduced. The algorithm triggers an update at d', which is 75% of the actual dead-reckoning threshold d; the difference between d and d' is used to calculate the WNIC suspension period h, as shown in Figure 2.23. The suspension interval h is increased linearly at each iteration when there is no update. The rationale is that when there is no update (no transmission), the deviation or error is within the threshold and hence the WNIC can be suspended for a longer time. When there is no acknowledgement by the time the error reaches d, h is recalculated (reduced, to check for acknowledgements more frequently) and a wake-up trigger is sent to the other hosts. The value of h is dynamically updated in relation to the suspension periods of the other hosts in the game, which results in aggressive power reduction without degrading consistency. The algorithm assumes reliable transport, but most of the games on the market use unreliable transport. In fast-paced games like Quake III, the algorithm cannot save significant power as there are multiple extrapolated variables. The results are based on simulation and ignore the mode switch penalty and mode switch latency costs of the wireless interface card.

Figure 2.23 Suspension Period

Dead-Reckoning Based Power Management for Games. Harvey et al [27] use a scheme similar to the previous one (Weidong and Kalyan [31]), but they dynamically change the sleep period based on the dead-reckoning threshold error. In dead reckoning, future locations of objects in a game are estimated based on their current locations and velocities; the difference between the extrapolated and true locations is known as the dead reckoning error. Typical game interactions with the wireless network interface are shown in Figure 2.24. As can be seen in the figure, quiet periods exist in which the device is not transmitting updates, and the wireless interface can be put into a low power sleep mode during these periods. The intuition behind their algorithm is that the interval for which the wireless interface is not needed is longer when the dead reckoning error is far from the threshold. Through simulations they show that up to 36% of WiFi energy can be saved. However, the results are again based on simulation and ignore the mode switch penalty and mode switch latency costs of the wireless interface card.

Figure 2.24 Suspension Period
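The adaptation of the suspension interval h can be sketched as follows. The 75% trigger point d' follows the description above; the growth step, the bounds on h and the simulated error samples are assumed values.

```python
# Sketch of the power-aware dead-reckoning suspension logic described above:
# trigger an update when the error reaches d' = 0.75 * d, grow the WNIC
# suspension interval h while no update is needed, and shrink it when the
# error approaches the threshold. Step sizes and bounds are assumed values.

D = 16.0                  # dead-reckoning threshold (pixels)
D_PRIME = 0.75 * D        # early trigger point d'
H_MIN, H_MAX = 0.05, 1.0  # bounds on the suspension interval (seconds)
GROW_STEP = 0.05


def next_suspension(h: float, error: float) -> tuple[float, bool]:
    """Return (new h, send_update) for the current extrapolation error."""
    if error >= D_PRIME:
        # Error close to the threshold: send an update and probe frequently.
        return H_MIN, True
    # Error comfortably below the threshold: sleep a little longer next time.
    return min(H_MAX, h + GROW_STEP), False


if __name__ == "__main__":
    h = H_MIN
    for error in [1.0, 2.5, 4.0, 7.0, 12.5, 3.0]:    # simulated error samples
        h, update = next_suspension(h, error)
        print(f"error={error:5.1f}  suspend={h:.2f}s  update={update}")
```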
2.4 Processing Unit Power Conservation

There are several works on power conservation at the CPU level. Dynamic Voltage and Frequency Scaling (DVFS) is the most popular scheme for CPU power management, as most modern mobile CPUs support multiple voltage and frequency levels. Accurate predictions of task run-times are key to computing the frequencies and voltages that ensure meeting the real-time constraints of all tasks. There are many proposals for DVFS; they basically vary in the methodology used for predicting the application workload. One approach is to capture the memory overheads experienced by memory-bound tasks and then use these overheads to better predict task run-times, as described by Poellabauer et al [88]. The method dynamically monitors a task's cache miss rate, which determines the memory access rate experienced by that task; the dynamically monitored cache miss rates are used as feedback input by the DVFS algorithm, which then computes the frequencies and voltages to be used for task execution. Another prediction approach, proposed by Chung et al [89], requires the content provider (media server) to supply information on the execution time variations together with the content; this gives better results than prediction by the client side alone. Yuan and Nahrstedt [90] discuss GRACE-OS, which integrates DVFS into a soft real-time CPU scheduling algorithm. The major goal of GRACE-OS is to support application quality of service and save energy. To achieve this goal, GRACE-OS integrates dynamic voltage scaling into soft real-time scheduling and decides how fast to execute applications, in addition to when and how long to execute them. GRACE-OS makes such scheduling decisions based on the probability distribution of application cycle demands, obtained via online profiling and estimation. Yuan and Nahrstedt [91] have also designed and implemented the Practical Voltage Scaling (PDVS) algorithm, which extends traditional real-time scheduling by deciding when to execute which applications and at what execution speed. PDVS also differs significantly from previous DVS algorithms in that it minimizes the total energy of the whole device, rather than only the CPU energy, while meeting multimedia timing requirements. Another proposal, by Huang et al [92], eases the work of the prediction algorithms by using metadata. The paper suggests performing offline bitstream analysis of multimedia files and adding metadata describing the computational demand that will be generated when decoding the file; such bitstream analysis and metadata insertion can be done when the multimedia file is downloaded onto the portable device from a desktop computer. Gu and Chakraborty [93] [94] [95] propose DVFS techniques for 3D games. They predict the game workload and scale the voltage using Proportional Integral Derivative (PID) controllers, where the PID gain values had to be hand-tuned; in other words, the proportional, integral and derivative gain values had to be carefully chosen in order to maximize both the power savings and the quality of the game play (measured by the number of frame deadline misses). Dietrich et al [96] propose a Least Mean Squares (LMS) linear predictor for game workload prediction, which has lower complexity than the PID based approaches while providing comparable power savings and game quality.
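A common thread in these proposals is choosing the lowest frequency that still meets the task deadline for a predicted cycle demand. The frequency table, the deadline and the cache-miss stall adjustment in the sketch below are illustrative assumptions rather than the specific models used in [88]-[96].

```python
# Sketch of deadline-driven DVFS: predict the cycle demand of the next task
# instance (optionally inflated by memory stalls, as in cache-miss-aware
# approaches), then choose the lowest frequency that still meets the deadline.
# The frequency table, deadline and stall model are assumed values.

FREQS_MHZ = [300, 600, 900, 1200]      # available CPU frequency levels


def predicted_cycles(base_cycles: float, cache_misses: float,
                     stall_cycles_per_miss: float = 50.0) -> float:
    """Inflate the compute-only estimate with memory stall cycles."""
    return base_cycles + cache_misses * stall_cycles_per_miss


def pick_frequency(cycles: float, deadline_s: float) -> int:
    """Lowest frequency (MHz) whose execution time fits within the deadline."""
    for f in FREQS_MHZ:
        if cycles / (f * 1e6) <= deadline_s:
            return f
    return FREQS_MHZ[-1]               # cannot meet the deadline: run flat out


if __name__ == "__main__":
    cycles = predicted_cycles(base_cycles=8e6, cache_misses=20_000)
    print("selected frequency:", pick_frequency(cycles, deadline_s=0.033), "MHz")
```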
2.5 Summary

In general, the works described above for LCDs linearly increase the brightness of the image to compensate for the LCD dimming. However, linear techniques cannot save large amounts of energy, as they start suffering from clipping or saturation effects which distort the image heavily. In addition, they do not consider the resource limitations of mobile devices. Our work focuses on building a non-linear tone mapping technique for energy saving that is computationally efficient enough for strict real-time applications on mobile devices. Details of our techniques are presented in Chapter three.

The works on OLED displays described above change the colours of web pages without considering the brand identity and the legibility of the contents. In addition, the image manipulation techniques do not consider the non-linearities of the HVS in sensing colours and luminance levels. Our work presented in Chapter four retains brand identity while transforming the colours of the text. For images, we exploit the characteristics of the HVS to save a higher amount of energy while preserving the visual quality of the images.

Existing works on wireless interface energy saving trade off latency for energy savings; they are not suitable for highly real-time applications such as games. Dead reckoning based approaches do not consider the game state of the client. To the best of our knowledge, our work is the first to use the client's game state as the key parameter in determining the sleep periods of the wireless interface. Our approaches use both visibility and distance based techniques to estimate the client's game state and conserve energy. These techniques are discussed in detail in Chapter five.
