Wireless embedded smart camera networks have received considerable attention from academia and industry because the cameras are small, easy to deploy, and enable a multitude of attractive applications. They can be used as embedded single units, or networked for multi-camera applications.
We first analyze three different operation scenarios for a wireless vision sensor network, wherein different levels of local processing are performed. A detailed quantitative comparison of the three scenarios is presented in terms of energy consumption and latency. This analysis provides the motivation for performing high-level processing and decision making locally at the embedded sensor level.
Then, we present a multi-camera tracking application wherein the amount of data exchanged between cameras affects the tracking accuracy, the energy consumption of the camera nodes, and the latency. We analyze the tradeoffs among these parameters under different scenarios.
Finally, we present a lightweight algorithm to perform fall detection for elder care using wearable embedded smart cameras. This method uses image analysis for a single-unit application, and has low computational cost and fast response time. The proposed lightweight fall detection algorithm uses spatial and temporal derivatives, and a moving-sum approach. The experimental results show the success of the algorithm in detecting actual falls and in differentiating falls from most regular daily actions. All experiments have been performed with actual wireless embedded smart cameras; we employed CITRIC motes.
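The core idea above can be sketched as follows. This is only an illustrative sketch, not the dissertation's implementation: it assumes the temporal derivative is approximated by frame differencing and that a fall is flagged when a moving sum of recent motion energy exceeds a threshold; the function name, window size, and threshold are all hypothetical choices.

```python
import numpy as np

def detect_fall(frames, window=5, threshold=50.0):
    """Flag a possible fall when the moving sum of mean absolute
    temporal derivatives (frame differences) exceeds a threshold.

    NOTE: parameter names and values are illustrative only; they
    are not taken from the dissertation.
    """
    frames = [np.asarray(f, dtype=np.float32) for f in frames]
    # Temporal derivative: mean absolute difference between consecutive frames
    diffs = [np.abs(b - a).mean() for a, b in zip(frames, frames[1:])]
    # Moving sum over a short window of recent derivatives
    for i in range(len(diffs) - window + 1):
        if sum(diffs[i:i + window]) > threshold:
            return True  # sudden, sustained global change -> possible fall
    return False
```

A wearable camera undergoes a rapid, global image change during a fall, which is what the moving sum of frame differences is meant to capture while smoothing over isolated noisy frames.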
Adviser: Senem Velipasalar