Biblio

Keyword: video surveillance
2019-08-12
Benzer, R., Yildiz, M. C..  2018.  YOLO Approach in Digital Object Definition in Military Systems. 2018 International Congress on Big Data, Deep Learning and Fighting Cyber Terrorism (IBIGDELFT). :35–37.

Today, as surveillance systems are widely used for indoor and outdoor monitoring applications, there is growing interest in real-time detection, which has many different applications. Two-dimensional video is used in multimedia content-based indexing, information retrieval, visual surveillance, distributed cross-camera surveillance, human tracking, traffic monitoring and similar applications. Following a moving target is of great importance for the development of national-security systems within the scope of military applications. In this research, a more efficient solution is proposed in addition to the existing methods: we present YOLO, a new approach to object detection for military applications.
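Detectors in the YOLO family are conventionally scored by the intersection-over-union (IoU) overlap between predicted and ground-truth boxes. A minimal, self-contained sketch of that overlap measure (illustrative, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (empty if boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A predicted box is typically counted as a correct detection when its IoU with a ground-truth box exceeds a threshold such as 0.5.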

Eetha, S., Agrawal, S., Neelam, S..  2018.  Zynq FPGA Based System Design for Video Surveillance with Sobel Edge Detection. 2018 IEEE International Symposium on Smart Electronic Systems (iSES) (Formerly iNiS). :76–79.

Advances in the semiconductor domain have enabled numerous video surveillance applications using computer vision and deep learning: industrial automation, security, ADAS, live traffic analysis, etc., where image understanding improves efficiency. Image understanding requires input data with high precision, which depends on image resolution and camera placement. The data of interest can be a thermal image or a live feed from various sensors. Composite video (CVBS) is a popular interface capable of streaming up to HD (1920x1080) quality. Unlike high-speed serial interfaces such as HDMI or MIPI CSI, the analog composite interface is a single-wire standard that supports longer cable runs. Image understanding requires edge detection and classification for further processing, and the Sobel filter is one of the most widely used edge detection filters that can be embedded into a live stream. This paper proposes a Zynq FPGA based system design for video surveillance with Sobel edge detection: the composite input video is decoded (analog CVBS input to YCbCr digital output), processed in hardware, and streamed to an HDMI display while simultaneously being stored on SD memory for later processing. The hardware design is scalable from VGA to Full HD at 60 fps, and to 4K at 24 fps. The system is built on the Xilinx ZC702 platform with a TVP5146 decoder to showcase the functional path.
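The Sobel filter embedded in the hardware pipeline is easy to state in software. A pure-Python sketch of the |Gx| + |Gy| gradient magnitude (the FPGA works on streamed YCbCr data, but the arithmetic is the same):

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| using 3x3 Sobel kernels.

    img: 2-D list of grayscale values. Border pixels are left at 0,
    mirroring a hardware pipeline that skips the frame edge.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal gradient: right column minus left column.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            # Vertical gradient: bottom row minus top row.
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out
```

A vertical step edge produces a strong |Gx| response along the edge column and zero response in flat regions, which is exactly what the hardware streams to the display.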

Liu, Y., Yang, Y., Shi, A., Jigang, P., Haowei, L..  2019.  Intelligent monitoring of indoor surveillance video based on deep learning. 2019 21st International Conference on Advanced Communication Technology (ICACT). :648–653.

With the rapid development of information technology, video surveillance systems have become a key part of the security and protection systems of modern cities. Especially in prisons, surveillance cameras can be found almost everywhere. However, with the continuous expansion of the surveillance network, surveillance cameras not only bring convenience, but also produce a massive amount of monitoring data, which poses huge challenges to storage, analytics and retrieval. A smart monitoring system equipped with intelligent video analytics can monitor as well as pre-alarm abnormal events or behaviours, which is a hot research direction in the field of surveillance. This paper combines deep learning methods, using the state-of-the-art framework for instance segmentation, Mask R-CNN, to fine-tune the network on our datasets, which can efficiently detect objects in a video image while simultaneously generating a high-quality segmentation mask for each instance. The experiments show that our network is simple to train, generalizes easily to other datasets, and reaches a mask average precision of nearly 98.5% on our own datasets.

2019-01-31
Seetanadi, Gautham Nayak, Oliveira, Luis, Almeida, Luis, Arzén, Karl-Erik, Maggio, Martina.  2018.  Game-Theoretic Network Bandwidth Distribution for Self-Adaptive Cameras. SIGBED Rev.. 15:31–36.

Devices sharing a network compete for bandwidth and can transmit only a limited amount of data. This is, for example, the case with a network of cameras that record and transmit video streams to a monitor node for video surveillance. Adaptive cameras can reduce the quality of their video, thereby increasing frame compression, to limit network congestion. In this paper, we exploit our experience with computing capacity allocation to design and implement a network bandwidth allocation strategy based on game theory that accommodates multiple adaptive streams with convergence guarantees. We conduct experiments with our implementation and discuss the results, together with some conclusions and future challenges.
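The paper's game-theoretic scheme is not reproduced here, but the underlying resource constraint can be illustrated with a simple max-min fair ("waterfilling") allocation: no camera receives more than it requests, and leftover capacity is redistributed among the still-unsatisfied cameras. Names and the allocation rule are illustrative, not the paper's algorithm:

```python
def allocate_bandwidth(demands, capacity):
    """Max-min fair bandwidth split: demands is {camera: requested_rate}."""
    alloc = {cam: 0.0 for cam in demands}
    active = set(demands)          # cameras whose demand is not yet met
    remaining = float(capacity)
    while active and remaining > 1e-9:
        share = remaining / len(active)   # equal share this round
        remaining = 0.0
        for cam in list(active):
            need = demands[cam] - alloc[cam]
            give = min(share, need)
            alloc[cam] += give
            remaining += share - give     # unused share goes back in the pool
            if demands[cam] - alloc[cam] <= 1e-9:
                active.remove(cam)        # fully satisfied camera drops out
    return alloc
```

With demands of 4, 2 and 10 units against a capacity of 9, the modest camera is fully served and the remainder is split evenly between the other two.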

Ouyang, Deqiang, Shao, Jie, Zhang, Yonghui, Yang, Yang, Shen, Heng Tao.  2018.  Video-Based Person Re-Identification via Self-Paced Learning and Deep Reinforcement Learning Framework. Proceedings of the 26th ACM International Conference on Multimedia. :1562–1570.

Person re-identification is an important task in video surveillance, focusing on finding the same person across different cameras. However, most existing methods for video-based person re-identification still have limitations (e.g., the lack of an effective deep learning framework, limited model robustness, and identical treatment of all video frames) that prevent better recognition performance. In this paper, we propose a novel self-paced learning algorithm for video-based person re-identification, which gradually learns from simple to complex samples to obtain a mature and stable model. Self-paced learning is employed to enhance video-based person re-identification based on a deep neural network, so that the deep neural network and self-paced learning are unified into one framework. Then, based on the self-paced trained model, we employ deep reinforcement learning to discard misleading and confounding frames and find the most representative frames from video pairs. With the advantage of deep reinforcement learning, our method can learn strategies to select the optimal frame groups. Experiments show that the proposed framework outperforms existing methods on the iLIDS-VID, PRID-2011 and MARS datasets.
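The self-paced curriculum can be sketched independently of the re-identification model: each round admits only samples whose current loss falls below an "age" parameter, which is then relaxed so harder samples enter later. A toy schedule (parameter names are hypothetical; the model update between rounds is elided):

```python
def self_paced_schedule(sample_losses, lam, growth, rounds):
    """Return, per round, the indices of samples admitted to training.

    sample_losses: current per-sample loss values (held fixed in this toy).
    lam: age parameter; samples with loss < lam count as "easy".
    growth: factor by which lam is relaxed each round (> 1).
    """
    history = []
    for _ in range(rounds):
        selected = [i for i, loss in enumerate(sample_losses) if loss < lam]
        history.append(selected)
        lam *= growth  # let harder samples in next round
    return history
```

Easy samples dominate early rounds and the full dataset is used once the model has matured, which is the intuition behind the "simple to complex" training described above.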

Wang, Siqi, Zeng, Yijie, Liu, Qiang, Zhu, Chengzhang, Zhu, En, Yin, Jianping.  2018.  Detecting Abnormality Without Knowing Normality: A Two-Stage Approach for Unsupervised Video Abnormal Event Detection. Proceedings of the 26th ACM International Conference on Multimedia. :636–644.

Abnormal event detection in video surveillance is a valuable but challenging problem. Most methods adopt a supervised setting that requires collecting videos with only normal events for training. However, very few attempts have been made under an unsupervised setting, which detects abnormality without prior knowledge of normal events. Existing unsupervised methods detect drastic local changes as abnormality, which overlooks the global spatio-temporal context. This paper proposes a novel unsupervised approach, which not only avoids manually specifying normality for training as supervised methods do, but also takes the whole spatio-temporal context into consideration. Our approach consists of two stages: first, the normality estimation stage trains an autoencoder and estimates normal events globally from the entire set of unlabeled videos by a self-adaptive reconstruction loss thresholding scheme; second, the normality modeling stage feeds the estimated normal events from the previous stage into a one-class support vector machine to build a refined normality model, which can further exclude abnormal events and enhance abnormality detection performance. Experiments on various benchmark datasets reveal that our method not only outperforms existing unsupervised methods by a large margin (up to 14.2% AUC gain), but also yields comparable or even superior performance to state-of-the-art supervised methods.
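The first stage's self-adaptive thresholding can be illustrated with a simple mean-plus-k-standard-deviations rule over reconstruction errors; events below the threshold are treated as normal and passed to the one-class SVM stage. The rule here is a hypothetical stand-in, not the paper's exact scheme:

```python
def split_by_reconstruction_loss(errors, k=1.0):
    """Split event indices into (normal, abnormal) by an adaptive threshold.

    errors: per-event autoencoder reconstruction errors.
    k: how many standard deviations above the mean still counts as normal.
    """
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / n
    thresh = mean + k * var ** 0.5
    normal = [i for i, e in enumerate(errors) if e <= thresh]
    abnormal = [i for i, e in enumerate(errors) if e > thresh]
    return normal, abnormal, thresh
```

Because the threshold is derived from the data itself, no labeled "normal" set is needed, which is the point of the unsupervised setting.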

Bisagno, Niccoló, Conci, Nicola, Rinner, Bernhard.  2018.  Dynamic Camera Network Reconfiguration for Crowd Surveillance. Proceedings of the 12th International Conference on Distributed Smart Cameras. :4:1–4:6.

Crowd surveillance will play a fundamental role in the coming generation of video surveillance systems, in particular for improving public safety and security. However, traditional camera networks are mostly unable to closely survey the entire monitored area due to limitations in coverage, resolution and analytics performance. A smart camera network, on the other hand, offers the ability to reconfigure the sensing infrastructure by incorporating active devices such as pan-tilt-zoom (PTZ) cameras and UAV-based cameras, which enable the adaptation of coverage and target resolution over time. This paper proposes a novel decentralized approach for dynamic network reconfiguration, where cameras locally control their PTZ parameters and position to optimally cover the entire scene. For crowded scenes, cameras must deal with a trade-off between global coverage and target resolution to effectively perform crowd analysis. We evaluate our approach in a simulated environment surveyed with fixed, PTZ, and UAV-based cameras.

Sandifort, Maguell L. T. L., Liu, Jianquan, Nishimura, Shoji, Hürst, Wolfgang.  2018.  An Entropy Model for Loiterer Retrieval Across Multiple Surveillance Cameras. Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval. :309–317.

Loitering is a suspicious behavior that often leads to criminal actions, such as pickpocketing and illegal entry. Tracking methods can determine suspicious behavior based on trajectory, but require continuous appearance and are difficult to scale up to multi-camera systems. Using the duration of appearance of features works on multiple cameras, but does not consider major aspects of loitering behavior, such as repeated appearance and the trajectory of candidates. We introduce an entropy model that maps the location of a person's features on a heatmap. It can be used as an abstraction of trajectory tracking across multiple surveillance cameras. We evaluate our method over several datasets and compare it to other loitering detection methods. The results show that our approach performs similarly to the state of the art, but can provide additional interesting candidates.
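The entropy model can be illustrated as Shannon entropy over a person's location histogram: low entropy means the person lingers in one cell, while sustained presence spread across many cells suggests wandering, a loitering cue. A minimal sketch (the heatmap construction itself is elided):

```python
import math

def location_entropy(heatmap):
    """Shannon entropy (bits) of a person's location histogram.

    heatmap: per-cell counts of where the person's features appeared.
    0.0 means the person stayed in a single cell the whole time.
    """
    total = sum(heatmap)
    probs = [c / total for c in heatmap if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

For four cells, a person pinned to one cell scores 0 bits, while a person seen equally often in all four scores the maximum of 2 bits.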

Wang, Jiabao, Miao, Zhuang, Zhang, Yanshuo, Li, Yang.  2018.  An Effective Framework for Person Re-Identification in Video Surveillance. Proceedings of the 3rd International Conference on Multimedia Systems and Signal Processing. :24–28.

Although deep learning effectively improves person re-identification (re-ID) in video surveillance, an efficient framework for practical use is still lacking, especially in terms of computational cost, which usually requires GPU support. This paper therefore addresses actual running performance and proposes an effective person re-ID framework. A tiny network is designed for person detection, and a triplet network is adopted for training the feature extraction network. Motion detection and person detection are combined to speed up the whole process. The proposed framework is tested in practice, and the results show that it can run in real time on an ordinary PC. The accuracy reaches 91.6% on a real-world dataset, providing good guidance for person re-ID in practical applications.

Xu, Ke, Li, Yu, Huang, Bo, Liu, Xiangkai, Wang, Hong, Wu, Zhuoyan, Yan, Zhanpeng, Tu, Xueying, Wu, Tongqing, Zeng, Daibing.  2018.  A Low-Power 4096x2160@30Fps H.265/HEVC Video Encoder for Smart Video Surveillance. Proceedings of the International Symposium on Low Power Electronics and Design. :38:1–38:6.

This paper presents the design and VLSI implementation of a low-power HEVC main profile encoder, which can process up to 4096x2160@30fps 4:2:0 encoding in real time with a five-stage pipeline architecture. A pyramid ME (motion estimation) engine is employed to reduce search complexity. To compensate for video sequences with fast-moving objects, GME (global motion estimation) is introduced to alleviate the effect of the limited search range. We also implement an alternative 5x5 search along with 3x3 to boost video quality. For intra mode decision, original pixels instead of reconstructed ones are used to reduce pipeline stalls. The encoder supports DVFS (dynamic voltage and frequency scaling) and features three operating modes, which helps reduce power consumption by 25%. Scalable quality, which trades encoding quality for power by reducing the search range and the number of intra prediction candidates, achieves 11.4% power reduction with 3.5% quality degradation. Furthermore, a lossless frame buffer compression is proposed, which reduces DDR bandwidth by 49.1% and power consumption by 13.6%. The entire video surveillance SoC is fabricated in TSMC 28nm technology with a 1.96 mm2 area; it consumes 2.88M logic gates and 117KB of SRAM. The measured power consumption is 103mW at 350MHz for 4K encoding in high-quality mode. The 0.39nJ/pixel energy efficiency of this work, which achieves a 42% to 97% power reduction compared with reference designs, makes it ideal for real-time low-power smart video surveillance applications.
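The reported 0.39 nJ/pixel figure follows directly from the measured numbers: power divided by pixel throughput. A quick sanity check:

```python
def energy_per_pixel_nj(power_w, width, height, fps):
    """Energy per encoded pixel in nanojoules: power / (pixels per second)."""
    return power_w / (width * height * fps) * 1e9

# 103 mW measured for 4096x2160 at 30 fps.
e = energy_per_pixel_nj(0.103, 4096, 2160, 30)  # ~0.39 nJ/pixel
```

0.103 W spread over roughly 265 million pixels per second indeed comes out at about 0.39 nJ per pixel, matching the paper's headline efficiency number.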

Sandifort, Maguell L.T.L., Liu, Jianquan, Nishimura, Shoji, Hürst, Wolfgang.  2018.  VisLoiter+: An Entropy Model-Based Loiterer Retrieval System with User-Friendly Interfaces. Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval. :505–508.

It is very difficult to fully automate the detection of loitering behavior in video surveillance, therefore humans are often required for monitoring. Alternatively, we could provide a list of potential loiterer candidates for a final yes/no judgment of a human operator. Our system, VisLoiter+, realizes this idea with a unique, user-friendly interface and by employing an entropy model for improved loitering analysis. Rather than using only frequency of appearance, we expand the loiter analysis with new methods measuring the amount of person movements across multiple camera views. The interface gives an overview of loiterer candidates to show their behavior at a glance, complemented by a lightweight video playback for further details about why a candidate was selected. We demonstrate that our system outperforms state-of-the-art solutions using real-life data sets.

Zhang, Caixia, Liu, Xiaoxiao, Xu, Qingyang.  2018.  Mixed Gaussian Model Based Video Foreground Object Detection. Proceedings of the 2Nd International Conference on Computer Science and Application Engineering. :112:1–112:5.

Video surveillance has been applied to various industries and businesses, but current video surveillance mainly provides video capture and display services and does not consider video content. In some applications, users want to monitor the video content and implement automatic prompting and alarming functions. Therefore, this paper uses a Gaussian mixture model (GMM) to separate the foreground and background of the video, and experiments validate the effectiveness of the algorithm.
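A full GMM keeps several Gaussians per pixel; the update logic can be illustrated with a single-Gaussian-per-pixel simplification, a running mean and variance with a k-sigma foreground test. This is a sketch of the general technique, not the paper's implementation:

```python
def update_background(mean, var, pixel, alpha=0.05, k=2.5):
    """One update step of a single-Gaussian background model for one pixel.

    Returns (new_mean, new_var, is_foreground). A pixel within k standard
    deviations of the running mean is background and adapts the model;
    otherwise it is flagged as foreground and the model is left unchanged.
    """
    std = var ** 0.5
    if abs(pixel - mean) <= k * std:
        mean = (1 - alpha) * mean + alpha * pixel
        var = (1 - alpha) * var + alpha * (pixel - mean) ** 2
        return mean, var, False   # background: model slowly tracks the scene
    return mean, var, True        # foreground: large deviation from the model
```

Calling this per pixel per frame yields the foreground mask that downstream prompting and alarming logic would consume; the mixture version simply keeps several (mean, var, weight) triples and matches against each.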

Grambow, Martin, Hasenburg, Jonathan, Bermbach, David.  2018.  Public Video Surveillance: Using the Fog to Increase Privacy. Proceedings of the 5th Workshop on Middleware and Applications for the Internet of Things. :11–14.

In public video surveillance, there is an inherent conflict between public safety goals and privacy needs of citizens. Generally, societies tend to decide on middleground solutions that sacrifice neither safety nor privacy goals completely. In this paper, we propose an alternative to existing approaches that rely on cloud-based video analysis. Our approach leverages the inherent geo-distribution of fog computing to preserve privacy of citizens while still supporting camera-based digital manhunts of law enforcement agencies.

2018-04-04
Xie, D., Wang, Y..  2017.  High definition wide dynamic video surveillance system based on FPGA. 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). :2403–2407.

A high definition (HD) wide dynamic video surveillance system is designed and implemented based on a Field Programmable Gate Array (FPGA). The system is composed of three subsystems: video capture, wide dynamic processing and video display. Images are captured directly through a camera configured to alternate long exposure in odd frames and short exposure in even frames. The video data stream is buffered in DDR2 SDRAM to obtain two adjacent frames; the image data fusion is then completed by fusing the long exposure image with the short exposure image, pixel by pixel. The display subsystem outputs the image through an HDMI interface. The system is designed on a Lattice ECP3-70EA FPGA with a Panasonic MN34229 sensor as the camera. The experimental results show that this system can expand the dynamic range of 1920x1080 HD video at 30 frames per second through real-time wide dynamic range (WDR) processing, and has high practical value.
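The pixel-by-pixel fusion step can be sketched with a simple rule: fall back to the gain-compensated short exposure wherever the long exposure saturates. This rule, the gain parameter and the 12-bit saturation level are illustrative; the actual fusion curve in the FPGA design may differ:

```python
def fuse_wdr(long_px, short_px, gain, sat=4095):
    """Fuse one pixel from a long/short exposure pair (12-bit values).

    gain: exposure ratio between the long and short frames, used to bring
    the short exposure onto the same brightness scale.
    """
    if long_px >= sat:
        # Long exposure clipped: recover highlights from the short frame.
        return min(short_px * gain, sat)
    # Long exposure valid: keep its better shadow detail.
    return long_px
```

Shadows come from the long exposure and blown-out highlights are recovered from the short one, which is what extends the dynamic range of the fused frame.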

Bao, D., Yang, F., Jiang, Q., Li, S., He, X..  2017.  Block RLS algorithm for surveillance video processing based on image sparse representation. 2017 29th Chinese Control And Decision Conference (CCDC). :2195–2200.

A block recursive least squares (BRLS) algorithm for dictionary learning in a compressed sensing system is developed for surveillance video processing. The new method uses image blocks directly and iteratively to train dictionaries via the BRLS algorithm, unlike classical methods that require transforming blocks into columns first and supplying all training blocks at once. Since the background in surveillance video is almost fixed, the foreground residual can be represented sparsely and reconstructed directly with background subtraction. The new method and framework are applied to real image and surveillance video processing. Simulation results show that the new method achieves better representation performance than classical ones on both images and surveillance video.

Nawaratne, R., Bandaragoda, T., Adikari, A., Alahakoon, D., Silva, D. De, Yu, X..  2017.  Incremental knowledge acquisition and self-learning for autonomous video surveillance. IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society. :4790–4795.

The world is witnessing a remarkable increase in the usage of video surveillance systems. Besides fulfilling an imperative security and safety purpose, it also contributes towards operations monitoring, hazard detection and facility management in industry/smart factory settings. Most existing surveillance techniques use hand-crafted features analyzed using standard machine learning pipelines for action recognition and event detection. A key shortcoming of such techniques is the inability to learn from unlabeled video streams. The entire video stream is unlabeled when the requirement is to detect irregular, unforeseen and abnormal behaviors, i.e., anomalies. Recent developments in intelligent high-level video analysis have been successful in identifying individual elements in a video frame. However, the detection of anomalies in an entire video feed requires incremental and unsupervised machine learning. This paper presents a novel approach that incorporates high-level video analysis outcomes with incremental knowledge acquisition and self-learning for autonomous video surveillance. The proposed approach is capable of detecting changes that occur over time and separating irregularities from re-occurrences, without the prerequisite of a labeled dataset. We demonstrate the proposed approach using a benchmark video dataset and the results confirm its validity and usability for autonomous video surveillance.

Parchami, M., Bashbaghi, S., Granger, E..  2017.  CNNs with cross-correlation matching for face recognition in video surveillance using a single training sample per person. 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). :1–6.

In video surveillance, face recognition (FR) systems seek to detect individuals of interest appearing over a distributed network of cameras. Still-to-video FR systems match faces captured in videos under challenging conditions against facial models, often designed using one reference still per individual. Although CNNs can achieve among the highest levels of accuracy in many real-world FR applications, state-of-the-art CNNs that are suitable for still-to-video FR, like trunk-branch ensemble (TBE) CNNs, represent complex solutions for real-time applications. In this paper, an efficient CNN architecture is proposed for accurate still-to-video FR from a single reference still. The CCM-CNN is based on new cross-correlation matching (CCM) and triplet-loss optimization methods that provide discriminant face representations. The matching pipeline exploits a matrix Hadamard product followed by a fully connected layer inspired by adaptive weighted cross-correlation. A triplet-based training approach is proposed to optimize the CCM-CNN parameters such that the inter-class variations are increased, while enhancing robustness to intra-class variations. To further improve robustness, the network is fine-tuned using synthetically-generated faces based on still and videos of non-target individuals. Experiments on videos from the COX Face and Chokepoint datasets indicate that the CCM-CNN can achieve a high level of accuracy that is comparable to TBE-CNN and HaarNet, but with a significantly lower time and memory complexity. It may therefore represent the better trade-off between accuracy and complexity for real-time video surveillance applications.

Gajjar, V., Khandhediya, Y., Gurnani, A..  2017.  Human Detection and Tracking for Video Surveillance: A Cognitive Science Approach. 2017 IEEE International Conference on Computer Vision Workshops (ICCVW). :2805–2809.

With crimes on the rise all around the world, video surveillance is becoming more important day by day. Due to the lack of human resources to monitor this increasing number of cameras manually, new computer vision algorithms to perform lower and higher level tasks are being developed. We have developed a new method incorporating the widely acclaimed Histograms of Oriented Gradients (HOG), the theory of visual saliency and the saliency prediction model Deep Multi-Level Network to detect human beings in video sequences. Furthermore, we implemented the k-means algorithm to cluster the HOG feature vectors of the positively detected windows and determined the path followed by a person in the video. We achieved a detection precision of 83.11% and a recall of 41.27%, obtaining these results 76.866 times faster than classification on normal images.
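The clustering step is plain Lloyd's k-means. A sketch over 2-D detection-window centres (the paper clusters HOG feature vectors, which are higher-dimensional, but the update rule is identical):

```python
def kmeans(points, centers, iters=10):
    """Lloyd's k-means on (x, y) points with given initial centers."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        groups = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                + (p[1] - centers[i][1]) ** 2)
            groups[j].append(p)
        # Update step: each center moves to its group's mean.
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else c
            for g, c in zip(groups, centers)
        ]
    return centers
```

Tracking the cluster centres over successive frames is one simple way to recover the path a detected person follows through the scene.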

Jin, Y., Eriksson, J..  2017.  Fully Automatic, Real-Time Vehicle Tracking for Surveillance Video. 2017 14th Conference on Computer and Robot Vision (CRV). :147–154.

We present an object tracking framework which fuses multiple unstable video-based methods and supports automatic tracker initialization and termination. To evaluate our system, we collected a large dataset of hand-annotated 5-minute traffic surveillance videos, which we are releasing to the community. To the best of our knowledge, this is the first publicly available dataset of such long videos, providing a diverse range of real-world object variation, scale change, interaction, different resolutions and illumination conditions. In our comprehensive evaluation using this dataset, we show that our automatic object tracking system often outperforms state-of-the-art trackers, even when these are provided with proper manual initialization. We also demonstrate tracking throughput improvements of 5× or more vs. the competition.

Rupasinghe, R. A. A., Padmasiri, D. A., Senanayake, S. G. M. P., Godaliyadda, G. M. R. I., Ekanayake, M. P. B., Wijayakulasooriya, J. V..  2017.  Dynamic clustering for event detection and anomaly identification in video surveillance. 2017 IEEE International Conference on Industrial and Information Systems (ICIIS). :1–6.

This work introduces concepts and algorithms, along with a case study validating them, to enhance event detection, pattern recognition and anomaly identification in real-life video surveillance. The motivation for the work lies in the observation that human behavioral patterns continuously evolve and adapt with time, rather than being static. First, limitations in existing work with respect to this phenomenon are identified. Accordingly, the notion and algorithms of Dynamic Clustering are introduced to overcome these drawbacks. Correspondingly, we propose maintaining two separate sets of data in parallel, the Normal Plane and the Anomaly Plane, to successfully achieve continuous learning. The practicability of the proposed algorithms in a real-life scenario is demonstrated through a case study. From the analysis presented in this work, it is evident that a more comprehensive analysis, closely following human perception, can be accomplished by incorporating the proposed notions and algorithms in a video surveillance setting.

Markosyan, M. V., Safin, R. T., Artyukhin, V. V., Satimova, E. G..  2017.  Determination of the Eb/N0 ratio and calculation of the probability of an error in the digital communication channel of the IP-video surveillance system. 2017 Computer Science and Information Technologies (CSIT). :173–176.

Due to the transition from analog to digital formats, it is possible to use the IP protocol for video surveillance systems. In addition, wireless access, color systems with higher resolution, biometrics, intelligent sensors, and software for video analytics are becoming increasingly widespread. This paper considers only the calculation of the error probability (BER, bit error rate) as a function of the realized S/N value.
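For a concrete instance of the BER-versus-Eb/N0 relationship, coherent BPSK over an AWGN channel gives Pb = Q(sqrt(2·Eb/N0)) = 0.5·erfc(sqrt(Eb/N0)). The modulation choice here is illustrative, since the paper works from the realized S/N of the IP video link:

```python
import math

def ber_bpsk(ebno_db):
    """Bit error probability of coherent BPSK over AWGN, Eb/N0 given in dB."""
    ebno = 10 ** (ebno_db / 10)          # dB -> linear ratio
    return 0.5 * math.erfc(math.sqrt(ebno))
```

At Eb/N0 = 0 dB the error rate is about 7.9%, and it falls off steeply as the ratio improves, which is why a modest S/N margin matters so much for digital video links.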

Babiker, M., Khalifa, O. O., Htike, K. K., Hassan, A., Zaharadeen, M..  2017.  Automated daily human activity recognition for video surveillance using neural network. 2017 IEEE 4th International Conference on Smart Instrumentation, Measurement and Application (ICSIMA). :1–5.

Surveillance video systems are gaining increasing attention in the field of computer vision due to users' demand for security. It is promising to observe human movement and predict such movements. The need arises to develop a surveillance system capable of overcoming the shortcomings of relying on human operators to keep monitoring, observing normal and suspicious events at all times without lapses of attention, and of facilitating control of huge surveillance networks. In this paper, an intelligent human activity recognition system is developed. A series of digital image processing techniques is used at each stage of the proposed system, such as background subtraction, binarization, and morphological operations. A robust neural network was built on a database of human activity features extracted from the frame sequences. A multi-layer feed-forward perceptron network is used to classify the activity models in the dataset. The classification results show high performance in all of the training, testing and validation stages, leading to a promising activity recognition rate.
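Two of the pipeline stages named above, binarization and a morphological operation, are easy to sketch directly; 3x3 erosion removes isolated noise pixels from the binary foreground mask before features are extracted:

```python
def binarize(img, thresh):
    """Threshold a grayscale frame into a 0/1 foreground mask."""
    return [[1 if px > thresh else 0 for px in row] for row in img]

def erode(mask):
    """3x3 morphological erosion: a pixel survives only if its whole
    neighbourhood is set. Border pixels are cleared for simplicity."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out
```

A solid blob shrinks by one pixel all round, while a lone noise pixel vanishes entirely, which cleans the mask fed to the feature extractor.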

Nguyen-Meidine, L. T., Granger, E., Kiran, M., Blais-Morin, L. A..  2017.  A comparison of CNN-based face and head detectors for real-time video surveillance applications. 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA). :1–7.

Detecting faces and heads appearing in video feeds are challenging tasks in real-world video surveillance applications due to variations in appearance, occlusions and complex backgrounds. Recently, several CNN architectures have been proposed to increase the accuracy of detectors, although their computational complexity can be an issue, especially for real-time applications, where faces and heads must be detected live using high-resolution cameras. This paper compares the accuracy and complexity of state-of-the-art CNN architectures that are suitable for face and head detection. Single pass and region-based architectures are reviewed and compared empirically to baseline techniques according to accuracy and to time and memory complexity on images from several challenging datasets. The viability of these architectures is analyzed with real-time video surveillance applications in mind. Results suggest that, although CNN architectures can achieve a very high level of accuracy compared to traditional detectors, their computational cost can represent a limitation for many practical real-time applications.

2018-02-28
Boyarinov, K., Hunter, A..  2017.  Security and trust for surveillance cameras. 2017 IEEE Conference on Communications and Network Security (CNS). :384–385.

We address security and trust in the context of a commercial IP camera. We take a hands-on approach, as we not only define abstract vulnerabilities, but we actually implement the attacks on a real camera. We then discuss the nature of the attacks and the root cause; we propose a formal model of trust that can be used to address the vulnerabilities by explicitly constraining compositionality for trust relationships.

2018-02-06
Müller, W., Kuwertz, A., Mühlenberg, D., Sander, J..  2017.  Semantic Information Fusion to Enhance Situational Awareness in Surveillance Scenarios. 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). :397–402.

In recent years, the usage of unmanned aircraft systems (UAS) for security-related purposes has increased, ranging from military applications to different areas of civil protection. The deployment of UAS can support security forces in achieving an enhanced situational awareness. However, in order to provide useful input to a situational picture, sensor data provided by UAS has to be integrated with information about the area and objects of interest from other sources. The aim of this study is to design a high-level data fusion component combining probabilistic information processing with logical and probabilistic reasoning, to support human operators' situational awareness and improve their capabilities for making efficient and effective decisions. To this end, a fusion component based on the ISR (Intelligence, Surveillance and Reconnaissance) Analytics Architecture (ISR-AA) [1] is presented, incorporating an object-oriented world model (OOWM) for information integration, an expressive knowledge model and a reasoning component for detection of critical events. Approaches for translating the information contained in the OOWM into either an ontology for logical reasoning or a Markov logic network for probabilistic reasoning are presented.