Friday, November 14, 2014

Step 2.2: Literature Review

Camera network optimization is an important problem in computer vision and has been explored by many researchers. Most of the early works considered a single camera focused on a static object, where the problem was to find the camera position that maximizes the quality of the features observed on the object [1, 2]. Later, Chen and Davis in [3] proposed a metric that evaluates the quality of multiple camera network configurations; it assesses configurations based on their resolution and occlusion characteristics, and the configuration is optimized based on this metric so that occlusion is minimized while a certain resolution is guaranteed. Mittal and Davis in [4] suggested a probabilistic approach for visibility analysis: the probability that an object is visible from at least one camera is computed, a cost function is defined that maps the sensor parameters to this probability, and simulated annealing is performed to minimize the cost function.
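To make the last idea concrete, the sketch below shows a generic simulated-annealing loop of the kind used in [4], but it is not their exact formulation: the camera parameterization (x, y, pan) and the toy `visibility_cost` surrogate are placeholders I have assumed for illustration.

```python
import math
import random

def visibility_cost(params):
    """Hypothetical cost: maps camera parameters (x, y, pan) to a scalar,
    e.g. 1 - probability that a target is visible from at least one camera.
    Here a toy surrogate stands in for the real visibility model."""
    x, y, pan = params
    return (x - 3.0) ** 2 + (y - 2.0) ** 2 + 0.1 * math.cos(pan)

def neighbor(params, step=0.25):
    """Randomly perturb one camera parameter to propose a new configuration."""
    p = list(params)
    i = random.randrange(len(p))
    p[i] += random.uniform(-step, step)
    return tuple(p)

def simulated_annealing(initial, t0=1.0, t_min=1e-3, alpha=0.95, iters=100):
    current = best = initial
    t = t0
    while t > t_min:
        for _ in range(iters):
            cand = neighbor(current)
            delta = visibility_cost(cand) - visibility_cost(current)
            # Always accept improvements; accept worse moves with prob. exp(-delta/T).
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = cand
                if visibility_cost(current) < visibility_cost(best):
                    best = current
        t *= alpha  # geometric cooling schedule
    return best

print(simulated_annealing((0.0, 0.0, 0.0)))
```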

Erdem and Sclaroff in [5] suggested a binary optimization approach for the camera placement problem. The polygon representing the space is fragmented into an occupancy grid, and the algorithm tries to minimize the set of cameras while maintaining a specified spatial resolution. Horster and Lienhart in [6, 7, 8] proposed a linear programming approach that determines the calibration of each camera in the network so as to maximize coverage of the space while assuring a certain resolution. Ram et al. in [9] proposed a performance metric that evaluates the probability of accomplishing a task as a function of the set of cameras and their placement. The metric allows the camera system to be evaluated in a "directional aware" sense, i.e. it recognizes that only images obtained from a certain direction (e.g. a frontal image of a person) are useful, and it is maximized to estimate the camera configuration. Bodor et al. in [10] proposed a method whose goal is to maximize the aggregate observability across multiple cameras: an objective function is defined that measures the image resolution and the object motions (trajectories) in the scene, and a variant of the hill-climbing method is used to maximize it.
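The occupancy-grid formulation in [5] is essentially a binary covering problem: choose the smallest set of candidate camera poses such that every grid cell is seen at the required resolution. The sketch below uses a simple greedy set-cover heuristic rather than the exact binary program from the paper, and the candidate-to-cells mapping is a made-up toy example.

```python
def greedy_camera_cover(cells, candidates):
    """Pick candidate cameras greedily until every grid cell is covered.

    cells      -- set of grid-cell ids that must be observed at sufficient resolution
    candidates -- dict: candidate camera pose id -> set of cell ids it covers
    """
    uncovered = set(cells)
    chosen = []
    while uncovered:
        # Pick the candidate that covers the most still-uncovered cells.
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            raise ValueError("remaining cells cannot be covered by any candidate")
        chosen.append(best)
        uncovered -= gain
    return chosen

# Toy example: 6 grid cells, 4 candidate camera poses.
cells = {0, 1, 2, 3, 4, 5}
candidates = {
    "cam_A": {0, 1, 2},
    "cam_B": {2, 3},
    "cam_C": {3, 4, 5},
    "cam_D": {1, 4},
}
print(greedy_camera_cover(cells, candidates))  # e.g. ['cam_A', 'cam_C']
```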

Murray et al. in [12] applied coverage optimization combined with visibility analysis to address this problem. For each camera location, the coverage is calculated using visibility analysis, and the maximal covering location problem (MCLP) and the backup coverage location problem (BCLP) are used to model the optimal camera combinations and locations. Malik and Bajcsy in [13] suggested a method for optimizing the placement of multiple stereo cameras for 3D reconstruction: an optimization framework is defined using an error-based objective function that quantifies the stereo localization error along with some constraints, and genetic algorithms are used to generate a preliminary solution that is later refined using gradient descent. Kim and Murray in [14] also employed BCLP to solve the camera coverage problem, suggesting an enhanced representation of the coverage area as a continuous variable as opposed to a discrete variable representing the whole area. The works in [15, 16] also employed a combination of MCLP and BCLP for solving the optimal camera coverage problem: the former takes into consideration the 3D geometry of the environment and supplements the MCLP/BCLP problem with a minimal localization error variable for both monoscopic and stereoscopic cameras, solving the optimization using simulated annealing, while the latter supplements the MCLP/BCLP problem with visibility analysis. Huang et al. in [17] proposed a 2-approximation algorithm: the first part solves the minimum watchman tour problem and places cameras along the estimated tour, while the second part solves the art gallery problem and adds extra cameras to connect the guards.
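For reference, the standard MCLP underlying [12, 14, 15, 16] can be written as the integer program below. The notation here is generic rather than copied from any of the papers: y_i marks whether demand point i is covered, x_j whether a camera is placed at candidate site j, a_i is the weight of point i, N_i is the set of sites that can see point i, and p is the number of cameras available. BCLP extends this kind of model with an additional variable that rewards points covered by more than one camera.

```latex
\begin{align}
\max \quad & \sum_{i} a_i \, y_i \\
\text{s.t.} \quad & \sum_{j \in N_i} x_j \;\ge\; y_i && \forall i \\
& \sum_{j} x_j \;=\; p \\
& x_j \in \{0,1\}, \quad y_i \in \{0,1\} && \forall i, j
\end{align}
```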

Considering the 3D geometry of the environment is of significant value for the camera coverage optimization problem. This work deals with indoor scenarios, and a complete 3D model of the environment where the camera network will be deployed is designed. To the best of our knowledge, this is the first work that does not need any observations of human activity in the scenario for designing an optimal camera network; the only input to the model is the 3D geometry of the environment. In [10, 18], the observed human activity (trajectories) was used to find an optimal camera position; in contrast, in the proposed work the human trajectories are simulated in order to identify areas with a high volume of human activity. Furthermore, in [9] the camera position is optimized to maximize the frontal view of humans, which again requires observations, whereas the proposed work does not require any training to maximize the frontal view: the directional information of the simulated trajectories is used for this purpose. Finally, the human behavior in a given scenario is influenced by the 3D geometry of that environment, and to the best of our knowledge this is the first work that incorporates this information to optimize the camera network locations for video surveillance.
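As a rough illustration only, and not the actual pipeline of this work, the sketch below shows one way simulated trajectories could be turned into per-cell activity weights that a weighted coverage objective such as MCLP could consume; the grid size, trajectory format, and counting scheme are all assumptions made for the example.

```python
from collections import Counter

def activity_weights(trajectories, cell_size=0.5):
    """Accumulate how often simulated trajectories visit each grid cell.

    trajectories -- list of trajectories, each a list of (x, y) points in metres
    cell_size    -- side length of a square grid cell in metres (assumed)
    Returns a dict mapping (col, row) cells to visit counts, usable as the
    demand weights a_i in a weighted coverage objective.
    """
    counts = Counter()
    for traj in trajectories:
        for x, y in traj:
            cell = (int(x // cell_size), int(y // cell_size))
            counts[cell] += 1
    return dict(counts)

# Toy example: two short simulated trajectories through a small room.
trajs = [
    [(0.2, 0.2), (0.7, 0.6), (1.3, 1.1)],
    [(1.2, 1.2), (0.8, 0.7), (0.3, 0.3)],
]
print(activity_weights(trajs))
```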


Update: this is not the first work to use a 3D model; see "Automated camera placement for large scale surveillance networks".

References

[1] K. Tarabanis, P. Allen, and R. Tsai, "A survey of sensor planning in computer vision," Robotics and Automation, IEEE Transactions on, vol. 11, pp. 86-104, Feb 1995.
[2] S. Fleishman, D. Cohen-Or, and D. Lischinski, "Automatic camera placement for image-based modeling," in Computer Graphics and Applications, 1999. Proceedings. Seventh Pacific Conference on, pp. 12-20, 315, 1999.
[3] X. Chen and J. Davis, "Camera placement considering occlusion for robust motion capture," tech. rep., 2000.
[4] A. Mittal and L. Davis, "Visibility analysis and sensor planning in dynamic environments," in Computer Vision - ECCV 2004 (T. Pajdla and J. Matas, eds.), vol. 3021 of Lecture Notes in Computer Science, pp. 175-189, Springer Berlin Heidelberg, 2004.
[5] U. M. Erdem and S. Sclaroff, "Optimal placement of cameras in floorplans to satisfy task requirements and cost constraints," in Proc. of OMNIVIS Workshop, 2004.
[6] E. Horster and R. Lienhart, "Calibrating and optimizing poses of visual sensors in distributed platforms," Multimedia Systems, vol. 12, no. 3, pp. 195-210, 2006.
[7] E. Horster and R. Lienhart, "Approximating optimal visual sensor placement," in Multimedia and Expo, 2006 IEEE International Conference on, pp. 1257-1260, July 2006.
[8] E. Horster and R. Lienhart, "On the optimal placement of multiple visual sensors," in Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks, VSSN '06, (New York, NY, USA), pp. 111-120, ACM, 2006.
[9] S. Ram, K. R. Ramakrishnan, P. K. Atrey, V. K. Singh, and M. S. Kankanhalli, "A design methodology for selection and placement of sensors in multimedia surveillance systems," in Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks, VSSN '06, (New York, NY, USA), pp. 121-130, ACM, 2006.
[10] R. Bodor, A. Drenner, P. Schrater, and N. Papanikolopoulos, "Optimal camera placement for automated surveillance tasks," Journal of Intelligent and Robotic Systems, vol. 50, no. 3, pp. 257-295, 2007.
[11] F. Janoos, R. Machiraju, R. Parent, J. W. Davis, and A. Murray, "Sensor configuration for coverage optimization for surveillance applications," 2007.
[12] A. T. Murray, K. Kim, J. W. Davis, R. Machiraju, and R. Parent, "Coverage optimization to support security monitoring," Computers, Environment and Urban Systems, vol. 31, no. 2, pp. 133-147, 2007.
[13] R. Malik and P. Bajcsy, "Automated Placement of Multiple Stereo Cameras," in The 8th Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras - OMNIVIS, (Marseille, France), Rahul Swaminathan and Vincenzo Caglioti and Antonis Argyros, 2008.
[14] K. Kim and A. T. Murray, "Enhancing spatial representation in primary and secondary coverage location modeling," Journal of Regional Science, vol. 48, no. 4, pp. 745-768, 2008.
[15] K. Yabuta and H. Kitazawa, "Optimum camera placement considering camera specification for security monitoring," in Circuits and Systems, 2008. ISCAS 2008. IEEE International Symposium on, pp. 2114-2117, May 2008.
[16] B. Debaque, R. Jedidi, and D. Prevost, "Optimal video camera network deployment to support security monitoring," in Information Fusion, 2009. FUSION '09. 12th International Conference on, pp. 1730-1736, July 2009.
[17] H. Huang, C.-C. Ni, X. Ban, J. Gao, A. Schneider, and S. Lin, "Connected wireless camera network deployment with visibility coverage," in INFOCOM, 2014 Proceedings IEEE, pp. 1204-1212, April 2014.
[18] F. Janoos, R. Machiraju, R. Parent, J. W. Davis, and A. Murray, "Sensor configuration for coverage optimization for surveillance applications," 2007.
