Friday, October 21, 2011

A summary of: B-MAC: Versatile Low Power Media Access for Wireless Sensor Networks


J. Polastre, J. Hill, D. Culler


In Proceedings of the Second ACM Conference on Embedded Networked Sensor Systems (SenSys), November 3-5, 2004.

    This paper presents a MAC protocol for wireless sensor networks named B-MAC. The design goals are low power operation, effective collision avoidance, simple and predictable behavior, small code size and RAM usage, tolerance of changing RF/networking conditions, and scalability to large numbers of nodes. So it is, in effect, a configurable MAC protocol for wireless sensor networks with a small core that is energy efficient.
    After reading this paper I found several interesting parts of B-MAC. One is the problem of channel assessment. A MAC must accurately determine whether the channel is clear, which means it needs to tell noise apart from a signal; since ambient noise changes with the environment, this is hard to do with a fixed threshold. B-MAC's solution is software automatic gain control: signal strength samples are taken at times when the channel is assumed to be free, and the samples go into a FIFO queue to estimate the noise floor. Once the noise floor is established, a transmit request starts monitoring RSSI from the radio. But from my point of view, when is a suitable time to assume the channel is free? Is it possible to fix a certain time and make the sampling periodic? Also, what is a good estimate for establishing the noise floor, and how should it be determined? Maybe a fixed value works in an invariant environment, or maybe it should change dynamically with the environment. Another smart thing B-MAC does to solve this problem is to search for outliers in the RSSI samples: if any sample has significantly lower energy than the noise floor during the sampling period, the channel is declared clear. Compared with comparing a single sample against the noise floor, this avoids a large number of false negatives. As I mentioned before, power is the main issue B-MAC tries to solve, so it lets each node periodically wake up to minimize the cost of idle listening. The basic principle is that the node periodically wakes up, turns the radio on, and checks the channel; the wakeup time is fixed and the "check time" is variable. If energy is detected, the node powers up in order to receive the packet, and it goes back to sleep after a packet is received or after a timeout. So I am just wondering, does the fixed wakeup time perform well in most cases? We would need some practical tests to get the answer.
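The outlier-based channel assessment can be sketched as follows. This is only an illustration of the idea, not the TinyOS implementation; the FIFO length, the simple averaging of the noise floor, and the 3 dB outlier margin are my assumptions.

```python
from collections import deque

class ClearChannelAssessment:
    """Sketch of B-MAC-style CCA: estimate a noise floor from RSSI samples
    taken when the channel is assumed free, then declare the channel clear
    if any sample during the check period is an outlier well below it."""

    def __init__(self, history=10, outlier_margin_db=3.0):
        self.free_samples = deque(maxlen=history)  # FIFO of noise samples
        self.outlier_margin_db = outlier_margin_db  # assumed threshold

    def record_free_sample(self, rssi_dbm):
        """Feed an RSSI sample taken when the channel is assumed free."""
        self.free_samples.append(rssi_dbm)

    @property
    def noise_floor(self):
        # Simple average over the FIFO; the real estimator differs.
        return sum(self.free_samples) / len(self.free_samples)

    def channel_clear(self, check_samples):
        """Clear if ANY sample is significantly below the noise floor
        (an outlier), which avoids the false negatives of comparing a
        single sample against the noise floor."""
        floor = self.noise_floor
        return any(s < floor - self.outlier_margin_db for s in check_samples)
```

Note how a single busy-looking sample does not block the channel as long as one quiet outlier is seen during the check period.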
The paper uses low power listening (LPL) with a configurable check interval, and it supplies test results showing that if the check interval is too small, energy is wasted on idle listening, whereas if it is too large, energy is wasted on transmissions (longer preambles). In general, though, it is better to have larger preambles than to check more often. So the problem becomes how to find the best check interval. The paper's approach is to set the reporting rate and estimate the neighborhood size; the best choice is the check interval that gives the lowest effective duty cycle.
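The trade-off behind the check interval can be sketched with a toy energy model (all parameter values here are illustrative assumptions, not numbers from the paper): listening cost falls as the check interval grows, while the preamble, which must span the whole interval, costs more to transmit.

```python
import math

def total_power(check_interval_s, sample_cost_j=1e-5, msg_rate_hz=0.1,
                tx_power_w=0.05):
    """Toy model of the LPL trade-off: power spent sampling the channel
    falls with a longer check interval, but the preamble must span the
    whole interval, so power spent transmitting rises with it."""
    listen = sample_cost_j / check_interval_s            # W on channel checks
    transmit = msg_rate_hz * check_interval_s * tx_power_w  # W on preambles
    return listen + transmit

# A function of the form a/t + b*t is minimized at t = sqrt(a/b),
# i.e. where listening power equals preamble power.
best = math.sqrt(1e-5 / (0.1 * 0.05))
```

This matches the paper's qualitative finding: the optimal interval depends on the reporting rate, which is why B-MAC exposes the check interval as a tunable parameter instead of hard-coding it.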
    The paper gives experimental results on the throughput of B-MAC, indicating that B-MAC is about 4.5 times faster than S-MAC. But there are some interesting phenomena. B-MAC is not as fast when ACKs or RTS/CTS are used, and the differences become less pronounced as the number of nodes increases. Another point is that B-MAC has CCA, so it backs off less frequently, and perhaps its backoff timer is faster. But what about hidden terminals without RTS/CTS? So the conclusion is that B-MAC appears better than S-MAC, but its main issue is the long preamble, and we don't know if that is always good. So I googled it and found a paper about another protocol called X-MAC: http://www.cs.colorado.edu/~rhan/Papers/xmac_sensys06.pdf
    B-MAC and X-MAC are both asynchronous protocols for wireless sensor networks whose primary objective is data transfer across duty-cycled nodes. The main difference between them is that B-MAC sends a long continuous preamble, while X-MAC sends short strobes with the target ID encoded in each strobe. Comparing the two, B-MAC requires more energy than X-MAC at any sleep time, and the energy required grows with increased sleep time, due to the overhead of listening to B-MAC's long preamble. B-MAC also requires more time to transfer packets from source to destination, because the entire preamble always has to be sent, even when the receiver is already awake; X-MAC saves this time. Another observation is that the energy required to transfer data grows faster for B-MAC than for X-MAC, which is a benefit of the strobed preamble. So the conclusion is that X-MAC outperforms B-MAC in both energy consumed and end-to-end packet latency, and it offers better duty cycling opportunities, meaning nodes can sleep for more of the time.
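A toy sender-side model shows why the strobed preamble wins (this is my own simplification; the strobe and gap durations are assumptions, not values from the X-MAC paper): B-MAC must always transmit a preamble spanning the whole sleep interval, while X-MAC stops strobing as soon as the target acknowledges, on average halfway through the interval.

```python
def bmac_preamble_time(sleep_interval_s):
    """B-MAC: the full preamble always spans the sleep interval,
    even if the receiver wakes up early."""
    return sleep_interval_s

def xmac_preamble_time(sleep_interval_s, strobe_s=0.005, ack_gap_s=0.005):
    """X-MAC: short strobes separated by gaps in which the receiver can
    send an early ACK; on average the receiver wakes halfway through
    the interval, and only the strobes themselves cost TX energy."""
    expected_wait = sleep_interval_s / 2
    n_strobes = expected_wait / (strobe_s + ack_gap_s)
    return n_strobes * strobe_s
```

Under this model X-MAC's radio-on transmit time is always shorter, and the gap between the two protocols widens as the sleep interval grows, matching the observation that B-MAC's energy cost increases faster with sleep time.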

Tuesday, October 4, 2011

A review of: Understanding Packet Delivery Performance In Dense Wireless Sensor Networks


Jerry Zhao & Ramesh Govindan
The First ACM Conference on Embedded Networked Sensor Systems (SenSys'03), November 2003.

    This paper reports the results of several experiments in three different kinds of environments, an indoor office building, a habitat with moderate foliage, and an open parking lot, in order to address the problem of packet delivery performance in dense wireless sensor networks. The key point the authors try to establish is an understanding of packet delivery in dense sensor networks through measurements of packet transmission at the physical and MAC layers.
    The motivation of this paper is that wireless sensor networks can be deployed in harsh environments using low-power radios (which means not much frequency diversity) and can be densely deployed. Hence, the authors try to get a quantitative understanding of packet delivery through physical-layer and MAC-layer measurements. So why do we need to focus on packet delivery? In my opinion, the main reason is that it is the most basic element of a wireless sensor network. Moreover, the packet delivery ratio determines energy efficiency and network lifetime, which is important for real-world design. By studying packet delivery we can avoid poor packet delivery situations and thereby improve the performance of applications. It is also very important for evaluating almost all communication protocols. As the paper mentions, using low-power RF transceivers over multiple short hops is more energy efficient than a single hop over a long range, so with these experimental results people can experimentally verify WSN design principles.
The authors use Mica motes running experiment software to test packet delivery performance in the three environments: the indoor office building, the natural habitat, and the empty parking lot. The key result from these experiments is the heavy-tailed distribution of packet losses. For example, in an indoor setting, half of the links experience more than 10% packet loss, and a third suffer more than 30% loss. There are also interesting observations at different layers. At the physical layer, receivers suffer choppy packet reception; in some cases the gray area is a third of the communication range. At the MAC layer, packet loss is heavy-tailed, and 50%-80% of communication energy is wasted overcoming packet collisions and environmental effects. About 10% of links exhibit asymmetric packet loss. These phenomena can be caused by several wireless communication problems.
In my view, one cause may be the hidden node problem: say node A transmits to B, node C cannot hear it and also transmits to B, so the two transmissions collide at node B. Another may be multipath: a radio signal is reflected by obstacles, and parts of the signal take different paths to the sink, confusing the receiver.
It may also be caused by signal attenuation, etc. From the large amount of test data, I can summarize what I learned from these experiments in two points. First, selecting a shortest path based simply on geographic distance or hop count is not sufficient. Second, nodes need to carefully select neighbors based on the measured packet delivery performance. Another important thing we can learn from the experimental results is that there is no definite relationship between signal strength and packet delivery performance. In other words, signal strength by itself cannot estimate link quality; other conditions need to be considered. As for the gray area, is it possible for sophisticated physical-layer coding to mask it? According to their tests, not necessarily, because SECDED has the lowest effective bandwidth.
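The second lesson, selecting neighbors by measured delivery rather than distance or hop count, can be sketched like this (the 90% PRR threshold is an arbitrary illustrative choice, not a value from the paper):

```python
def packet_reception_ratio(received, sent):
    """Measured link quality: fraction of sent packets actually received."""
    return received / sent if sent else 0.0

def select_neighbors(link_stats, min_prr=0.9):
    """Keep only neighbors whose measured delivery is good, instead of
    picking by geographic distance or hop count alone.
    link_stats maps neighbor id -> (packets received, packets sent)."""
    return [n for n, (rx, tx) in link_stats.items()
            if packet_reception_ratio(rx, tx) >= min_prr]
```

For example, `select_neighbors({'A': (98, 100), 'B': (60, 100), 'C': (95, 100)})` keeps only `'A'` and `'C'`, dropping the lossy link to `'B'` even if `'B'` happens to be the geographically closest node.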
    In a word, this paper performed experiments to understand packet delivery performance in dense sensor network deployments and quantified the prevalence of the gray area. But many causes of the observed phenomena are still not certain; most of the explanations are conjectures and guesses.

A summary of: System Architecture Directions for Networked Sensors


Jason Hill, Robert Szewczyk, Alec Woo, Seth Hollar, David Culler, Kristofer Pister
April 27, 2000

This paper presents the design of a tiny event-driven operating system, called TinyOS, to address the problem of managing and operating small devices and sensors. The motivation is that sensor networks are a new kind of computing environment in the post-PC era, different from traditional desktop and server environments. Several trends are working together to enable networked sensors: system-on-a-chip designs, micro-electromechanical sensors (MEMS), and integrated low-power communication. Today, sensors exist at the scale of a square inch in size and a fraction of a watt in power. In the future, it may be possible to reduce sensors to the size of a cubic millimeter, or smaller. The key missing technology for these devices is system software to manage and operate them efficiently.
   The objectives of this project are to create a prototype of a current-generation sensor constructed from off-the-shelf components; to identify the key requirements that an operating system for such a sensor must satisfy; to build an operating system that meets these requirements; and, last, to evaluate the operating system's performance while running a real application. First, consider the hardware features of the device. It has three kinds of sleep modes, which can shut off the processor and other components not required for wakeup. The radio can transfer data at up to 19 Kbps, with no data buffering. The sensor consumes about 19.5 mA at peak load and 10 uA when inactive; a 575 mAh battery can power it for 30 hours at peak load, or for over a year when inactive. The paper lists several requirements for the operating system. Because of the small physical size and low power consumption, the devices have limited memory and power resources. The OS must support concurrency-intensive operation, meaning it needs to be able to service packets on the fly in real time. The hardware has limited parallelism and a simple controller hierarchy: a limited number of controllers of limited capability, and an unsophisticated processor-memory-switch level interconnect. To satisfy the diversity in design and usage, the OS should provide a high degree of software modularity for application-specific sensors. And of course it needs robust operation: the OS should be reliable and assist applications in surviving individual device failures.
To satisfy these requirements, the authors designed an OS called TinyOS. It is a microthreaded OS that draws on previous work on lightweight thread support and efficient network interfaces. It has a two-level scheduling structure, so that long-running tasks can be interrupted by hardware events, and a small, tightly integrated design that allows crossover of software components into hardware. A TinyOS component has three basic parts: a frame that serves as storage, tasks for computation, and a command and event interface. To facilitate modularity, each component declares the commands it uses and the events it signals. Statically allocated, fixed-size frames let the memory requirements of a component be known at compile time and avoid the overhead associated with dynamic allocation. Tasks perform the primary computation work; they are atomic with respect to other tasks and run to completion, but can be preempted by events. This allows the OS to allocate a single stack, assigned to the currently executing task. Tasks can call lower-level commands, signal higher-level events, schedule other tasks within a component, and simulate concurrency using events. Commands are non-blocking requests to lower-level components: a command deposits its request parameters into the component's frame and posts a task for later execution. A command can invoke lower-level commands, but cannot block, and, to avoid cycles, commands cannot signal events; a command also returns status to its caller. Event handlers deal with hardware events (interrupts) directly or indirectly; they likewise deposit information into a frame, and they can post tasks, signal higher-level events, and call lower-level commands.
To examine the composition and interaction within a complete configuration, the authors developed a networked sensor application consisting of a number of sensors distributed within a localized area; they monitor temperature and light conditions and periodically transmit measurements to a base station. The task scheduler is a simple FIFO scheduler, which puts the processor to sleep when the task queue is empty; the peripherals keep operating and can wake the processor up. Communication across components takes the form of function calls, which results in low overhead and allows compile-time type checking. The sensors can forward data for other sensors that are out of range of the base station and can dynamically determine the correct routing topology for the network: the base station periodically broadcasts route updates, and any sensor in range of a broadcast records the identity of the base station and rebroadcasts the update. Each sensor remembers the first update received in an era and uses the source of that update as the destination for routing data back to the base station. Finally, the authors use measured data to evaluate the system's concurrency-intensive operation, modularity, robustness, etc.
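The two-level structure described above, a FIFO task queue run to completion plus handlers that post tasks, can be modeled with a toy sketch (this only illustrates the scheduling idea; real TinyOS runs on the mote hardware and actually halts the processor when the queue empties):

```python
from collections import deque

class TinyScheduler:
    """Toy model of TinyOS's two-level scheduling: a FIFO task queue whose
    tasks run to completion, plus commands/event handlers that post tasks.
    Sleep-on-empty is modeled as a flag rather than a real processor halt."""

    def __init__(self):
        self.tasks = deque()
        self.asleep = False

    def post(self, task):
        """Commands and event handlers post tasks for later execution."""
        self.tasks.append(task)
        self.asleep = False  # a pending task wakes the processor

    def run(self):
        """Run tasks in FIFO order, each to completion; sleep when the
        queue drains."""
        while self.tasks:
            self.tasks.popleft()()
        self.asleep = True

log = []
sched = TinyScheduler()
sched.post(lambda: log.append("sample sensor"))
sched.post(lambda: log.append("send packet"))
sched.run()
```

Because each task runs to completion before the next starts, a single stack suffices, which is exactly the property the paper exploits on memory-constrained motes.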
In conclusion, TinyOS is a highly modular software environment tailored to the requirements of networked sensors, stressing efficiency, modularity, and concurrency. TinyOS should be able to support new sensor devices as they evolve, and running an application on TinyOS can help reveal the impact of architectural changes in the underlying hardware, making it easier to design hardware that is optimized for a particular application.

Thursday, September 29, 2011

Wavelet Transform

1. Time-Frequency Localization
    Fourier analysis provides great flexibility for signal and image processing and offers insight into signal characteristics in the frequency domain.
    The uncertainty principle dictates that Fourier analysis is unable to simultaneously capture both time and frequency localization.
    One solution to this problem is the short-time (dynamic) Fourier transform, where signals are divided into many small windows.
    Wavelet transform is able to capture both time and frequency localization through dynamic bases.
2. 1D Continuous Wavelet Transform
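For a mother wavelet ψ, scale a > 0, and translation b, the 1D continuous wavelet transform of a signal f is defined as:

```latex
W_f(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(t)\,
           \psi^{*}\!\left(\frac{t-b}{a}\right)\, dt
```

The dilated and shifted copies of ψ are the "dynamic bases" mentioned above: small scales a localize in time, large scales localize in frequency.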

Discrete Cosine Transform

The discrete cosine transform (DCT) is another linear and orthonormal transform, closely related to the DFT.
Periodic extension is needed for the DCT, and it requires that the extension be an even function.
One major difference between the DCT and the DFT is that the DCT generates all real coefficients when the input is real (true for all common images).
The DCT has excellent energy compaction properties for practical images, making it well suited to image and video compression.
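Both claims, all-real coefficients for real input and strong energy compaction on smooth data, can be checked numerically. The sketch below builds the orthonormal DCT-II matrix directly (my own construction, equivalent to the usual `norm='ortho'` convention in numerical libraries):

```python
import numpy as np

def dct2_matrix(N):
    """Orthonormal DCT-II basis: C[k, n] = sqrt(2/N) cos(pi (n+1/2) k / N),
    with row k = 0 scaled by 1/sqrt(2) so that C @ C.T = I."""
    n = np.arange(N)
    k = n.reshape(-1, 1)
    C = np.cos(np.pi * (n + 0.5) * k / N) * np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)
    return C

x = np.linspace(0, 1, 32) ** 2   # a smooth "image line"
C = dct2_matrix(32)
coeffs = C @ x                   # all real, since C and x are real

# Energy compaction: for this smooth signal, the first few coefficients
# carry almost all of the energy.
energy_frac = np.sum(coeffs[:4] ** 2) / np.sum(coeffs ** 2)
```

Because the transform is orthonormal, total energy is preserved (compare Parseval's theorem for the DFT below the fold), yet nearly all of it concentrates in the low-index coefficients, which is exactly what compression exploits.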

Discrete Fourier Transform

1. From continuous to discrete
    When proper samples are obtained, one can usually treat them as an ordered discrete signal without considering sampling intervals in the time or spatial domain, making them true digital signals.
2. 1D Discrete Fourier Transform
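The 1D DFT, X[k] = Σₙ x[n] e^(−2πi k n / N), can be written directly as an O(N²) matrix product and checked against NumPy's FFT, which computes the same transform (the fast algorithm of section 3):

```python
import numpy as np

def dft(x):
    """Direct O(N^2) implementation of the 1D DFT:
    X[k] = sum_n x[n] * exp(-2j * pi * k * n / N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.exp(-2j * np.pi * k * n / N) @ x

x = np.random.default_rng(0).normal(size=64)
X = dft(x)
```

The matrix form makes the linearity of the transform explicit; the FFT gets the identical result in O(N log N) by factoring this matrix.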

3. Fast Fourier Transform
    The 1D DFT can be implemented by a fast algorithm called the Fast Fourier Transform (FFT), which reduces the cost from O(N^2) to O(N log N).
4. From 1D DFT to 2D DFT

5. 2D DFT Properties
    The 2D DFT can be implemented by two separable 1D DFTs because the exponential basis is separable.




    The purple-colored summation inside the bracket is a 1D DFT that can be applied to the mth image line in a line-by-line fashion.
    The second 1D DFT can then be applied to the results of the first 1D DFT in a column-by-column fashion.
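This row-then-column computation can be verified numerically against a direct 2D FFT:

```python
import numpy as np

img = np.random.default_rng(1).normal(size=(8, 8))

# First 1D DFT applied line by line (the summation inside the bracket)...
rows = np.fft.fft(img, axis=1)
# ...then a second 1D DFT applied column by column to those results.
sep = np.fft.fft(rows, axis=0)
```

`sep` matches `np.fft.fft2(img)` exactly, confirming the separability property.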
6. DFT Periodicity and DFT Spectrum

7. DFT Pairs of Image Patterns
http://www.cs.unm.edu/~brayer/vision/fourier.html

Important Implications of FT

1. The energy of a signal f(x) can be defined as:



    This is called Parseval's Theorem; it directly relates the energy of a signal in the time domain to the energy of the signal in the Fourier transform domain.
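In discrete form, with NumPy's unnormalized FFT convention, Parseval's theorem reads Σ |x[n]|² = (1/N) Σ |X[k]|², which is easy to verify:

```python
import numpy as np

x = np.random.default_rng(2).normal(size=128)
X = np.fft.fft(x)

time_energy = np.sum(np.abs(x) ** 2)
# The 1/N factor compensates for numpy's unnormalized forward FFT.
freq_energy = np.sum(np.abs(X) ** 2) / len(x)
```

The two quantities agree to machine precision, mirroring the continuous statement that the energy of f(x) equals the energy of its Fourier transform.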

2. Time-Frequency Uncertainty Principle

3. Graphic Representation of FT

http://research.opt.indiana.edu/library/fourierbook/ch12.html

4. Time-Frequency Localization
    It is important to localize signal properties either in the time (spatial) domain or the frequency domain for the convenience of analysis.

5. Time-Frequency Uncertainty Principles
 
6. Sampling Theorem for Images