Biblio

Found 1525 results

Filters: Keyword is human factors
Kuze, N., Ishikura, S., Yagi, T., Chiba, D., Murata, M.  2016.  Detection of vulnerability scanning using features of collective accesses based on information collected from multiple honeypots. NOMS 2016 - 2016 IEEE/IFIP Network Operations and Management Symposium. :1067–1072.

Attacks against websites are increasing rapidly with the expansion of web services. The growing number and diversity of web services make such attacks difficult to prevent, given the many known vulnerabilities in websites. To overcome this problem, it is necessary to collect the most recent attacks using decoy web honeypots and to implement countermeasures against malicious threats. Web honeypots collect not only malicious accesses by attackers but also benign accesses such as those by web search crawlers. Thus, it is essential to automatically identify malicious accesses in collected data that mixes malicious and benign accesses. In particular, detecting vulnerability scanning, a preliminary step of an attack, is important for prevention. In this study, we focus on classifying accesses as web crawling or vulnerability scanning, since the two are too similar to distinguish individually. We propose a feature vector of collective-access features, e.g., intervals of request arrivals and the dispersion of source port numbers, obtained from multiple honeypots deployed in different networks. Through evaluation on data collected from 37 honeypots in a real network, we show that collective-access features are effective for distinguishing vulnerability scanning from crawling.
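The two collective-access features named in the abstract lend themselves to a compact illustration. The sketch below is not the authors' code; the log format and example values are invented. It computes request inter-arrival intervals and source-port dispersion for accesses from one source, merged across honeypots:

```python
# Sketch of two collective-access features: inter-arrival intervals and
# source-port dispersion. Log format and thresholds are illustrative.
from statistics import mean, pstdev

def collective_features(accesses):
    """accesses: list of (timestamp_seconds, source_port) tuples
    from the same source, merged across multiple honeypots."""
    times = sorted(t for t, _ in accesses)
    ports = [p for _, p in accesses]
    intervals = [b - a for a, b in zip(times, times[1:])]
    return {
        "mean_interval": mean(intervals) if intervals else 0.0,
        "port_spread": pstdev(ports) if len(ports) > 1 else 0.0,
    }

# Scanners often reuse a narrow port range and fire requests in rapid bursts;
# crawlers tend to pace requests and draw ephemeral ports more widely.
scanner = collective_features([(0.0, 40001), (0.1, 40002), (0.2, 40001)])
crawler = collective_features([(0.0, 51200), (30.0, 34872), (60.0, 60103)])
```

A classifier would consume such feature vectors; the thresholds separating the two classes are what the paper learns from the 37-honeypot dataset.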

Chopade, P., Zhan, J., Bikdash, M.  2016.  Micro-Community detection and vulnerability identification for large critical networks. 2016 IEEE Symposium on Technologies for Homeland Security (HST). :1–7.

In this work we put forward a novel approach using graph partitioning and micro-community detection techniques. We first use the algebraic connectivity (Fiedler eigenvector) and spectral partitioning for community detection. We then use modularity maximization and micro-level clustering, together with the concept of community energy, to detect micro-communities. Running the micro-community clustering algorithm recursively with modularity maximization helps us identify dense, deep, and hidden community structures. We evaluated our Micro-Community Clustering (MCC) algorithm on various types of complex technological and social community networks: directed weighted, directed unweighted, undirected weighted, and undirected unweighted. The algorithm is also scalable.
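The first step the abstract describes, spectral partitioning via the Fiedler eigenvector of the graph Laplacian, can be sketched as follows. This is a minimal illustration assuming NumPy; the recursive micro-community refinement and the community-energy concept are not reproduced:

```python
# Spectral bisection using the Fiedler (second-smallest-eigenvalue)
# eigenvector of the graph Laplacian L = D - A.
import numpy as np

def fiedler_bisection(n, edges):
    """Split an undirected, unweighted graph on nodes 0..n-1 into two parts."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]                 # Fiedler eigenvector
    return ({i for i in range(n) if fiedler[i] >= 0},
            {i for i in range(n) if fiedler[i] < 0})

# Two triangles joined by a single bridge edge split cleanly in two.
part_a, part_b = fiedler_bisection(
    6, [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
```

The sign pattern of the Fiedler eigenvector yields the bisection; the paper's approach then refines such partitions recursively via modularity maximization.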

Holm, H., Sommestad, T.  2016.  SVED: Scanning, Vulnerabilities, Exploits and Detection. MILCOM 2016 - 2016 IEEE Military Communications Conference. :976–981.

This paper presents the Scanning, Vulnerabilities, Exploits and Detection tool (SVED). SVED facilitates reliable and repeatable cyber security experiments by providing a means to design, execute and log malicious actions, such as software exploits, as well as the alerts provided by intrusion detection systems. Due to its distributed architecture, it is able to support large experiments with thousands of attackers, sensors and targets. SVED is automatically updated with threat intelligence information from various services.

Kim, S. S., Lee, D. E., Hong, C. S.  2016.  Vulnerability detection mechanism based on open API for multi-user's convenience. 2016 International Conference on Information Networking (ICOIN). :458–462.

Vulnerability Detection Tools (VDTs) have been researched and developed to prevent security problems. Such tools identify vulnerabilities that exist on a server in advance, and administrators can use them to protect their servers from attacks. Different tools, however, produce different results because their detection methods differ. For this reason, it is recommended to gather results from many tools rather than from a single one, but installing all of the tools incurs significant overhead. In this paper, we propose a novel vulnerability detection mechanism using Open API and use OpenVAS for actual testing.

Mohammadi, M., Chu, B., Lipford, H. R., Murphy-Hill, E.  2016.  Automatic Web Security Unit Testing: XSS Vulnerability Detection. 2016 IEEE/ACM 11th International Workshop on Automation of Software Test (AST). :78–84.

Integrating security testing into the workflow of software developers not only saves resources otherwise spent on separate security testing but also reduces the cost of fixing security vulnerabilities by detecting them early in the development cycle. We present an automatic testing approach that detects a common type of Cross-Site Scripting (XSS) vulnerability caused by improper encoding of untrusted data. We automatically extract the encoding functions a web application uses to sanitize untrusted inputs and then evaluate their effectiveness by automatically generating XSS attack strings. Our evaluations show that this technique can detect 0-day XSS vulnerabilities that cannot be found by static analysis tools. We also show that our approach efficiently covers a common type of XSS vulnerability. The approach can be generalized to test input validation against other types of injection, such as command-line injection.
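The core check, evaluating whether an extracted encoding function neutralizes generated attack strings, might look roughly like the hedged sketch below. The attack strings and "dangerous marker" heuristic are illustrative, not the paper's generated corpus:

```python
# Feed XSS attack strings through a candidate encoding (sanitizing) function
# and flag it as unsafe if any raw tag injection survives un-encoded.
import html

ATTACK_STRINGS = [
    '<script>alert(1)</script>',
    '"><img src=x onerror=alert(1)>',
    "'><svg onload=alert(1)>",
]
# Raw tag openings that must never survive encoding (illustrative list).
DANGEROUS = ['<script', '<img', '<svg']

def encoder_is_effective(encode):
    for attack in ATTACK_STRINGS:
        out = encode(attack).lower()
        if any(marker in out for marker in DANGEROUS):
            return False
    return True
```

For example, `html.escape` passes this check (it rewrites `<` to `&lt;`), while the identity function fails it; the paper's contribution is extracting the application's real encoders and generating the attack strings automatically.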

Halevi, Tzipora, Memon, Nasir, Lewis, James, Kumaraguru, Ponnurangam, Arora, Sumit, Dagar, Nikita, Aloul, Fadi, Chen, Jay.  2016.  Cultural and Psychological Factors in Cyber-security. Proceedings of the 18th International Conference on Information Integration and Web-based Applications and Services. :318–324.

Increasing cyber-security presents an ongoing challenge to security professionals. Research continuously suggests that online users are a weak link in information security. This research explores the relationship between cyber-security and cultural, personality and demographic variables. The study was conducted in four different countries and presents a multi-cultural view of cyber-security. In particular, it looks at how behavior, self-efficacy and privacy attitude are affected by culture compared to other psychological and demographic variables (such as gender and computer expertise). It also examines what kind of data people tend to share online and how culture affects these choices. This work supports the idea of developing personality-based UI design to increase users' cyber-security. Its results show that certain personality traits affect users' cyber-security-related behavior across different cultures, which further reinforces their contribution compared to cultural effects.

Mallikarjunan, K. N., Muthupriya, K., Shalinie, S. M.  2016.  A survey of distributed denial of service attack. 2016 10th International Conference on Intelligent Systems and Control (ISCO). :1–6.

Information security deals with a large number of subjects such as spoofed message detection, audio processing, video surveillance, and cyber-attack detection. The biggest threat to homeland security, however, is cyber-attacks, and the Distributed Denial of Service (DDoS) attack is one of them. Interconnected systems such as database servers, web servers, and cloud computing servers are now under threat from network attackers. Denial of service is a common attack on the Internet that causes problems for both users and service providers. In a DDoS attack, distributed attack sources are used to enlarge the attack and magnify its effect. DDoS attacks aim to exhaust the communication and computational power of the network by flooding it with packets and generating malicious traffic. To keep a service effective, a DDoS attack must be detected and mitigated quickly, before legitimate users are cut off from the target. The group of systems used to perform such an attack is known as a botnet. This paper introduces an overview of the state of the art in DDoS attack detection strategies.

Du, H., Jung, T., Jian, X., Hu, Y., Hou, J., Li, X. Y.  2016.  User-Demand-Oriented Privacy-Preservation in Video Delivering. 2016 12th International Conference on Mobile Ad-Hoc and Sensor Networks (MSN). :145–151.

This paper presents a framework for a privacy-preserving video delivery system that fulfills users' privacy demands. The proposed framework leverages the inference channels in sensitive behavior prediction and object tracking in a video surveillance system to protect sequence privacy. For this goal, we need to capture the different pieces of evidence used to infer identity. Temporal, spatial, and context features are extracted from the surveillance video as observations to perceive privacy demands and their correlations. By quantifying the various kinds of evidence and utility, we let users subscribe to videos with a viewer-dependent pattern. We implement a prototype system for off-line and on-line requirements in two typical monitoring scenarios and conduct extensive experiments. The evaluation results show that our system can efficiently satisfy users' privacy demands while saving over 25% more video information compared to traditional video privacy protection schemes.

Yap, B. L., Baskaran, V. M.  2016.  Active surveillance using depth sensing technology - Part I: Intrusion detection. 2016 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW). :1–2.

In part I of a three-part series on active surveillance using depth-sensing technology, this paper proposes an algorithm to identify outdoor intrusion activities by monitoring skeletal positions from a Microsoft Kinect sensor in real time. The algorithm implements three techniques to identify a premise intrusion. The first technique observes a boundary line along the wall (or fence) of a surveilled premise to detect skeletal trespassing. The second technique observes the duration of a skeletal object within a region of a surveilled premise to detect loitering. The third technique analyzes differences in skeletal height to identify wall climbing. Experimental results suggest that the proposed algorithm detects trespassing, loitering, and wall climbing at rates of 70%, 85%, and 80%, respectively.
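The first two techniques reduce to simple geometric tests on the tracked skeleton's ground position. A sketch, with made-up thresholds rather than the paper's parameters:

```python
# Trespass: which side of a boundary line a tracked point falls on,
# via the 2-D cross product. Loitering: how long consecutive sightings
# inside a region span. Thresholds are illustrative assumptions.

def crossed_boundary(p, line_a, line_b):
    """True if point p lies on the inner side of the directed line a->b."""
    (ax, ay), (bx, by), (px, py) = line_a, line_b, p
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax) > 0

def is_loitering(timestamps_in_region, threshold_s=30.0):
    """True if sightings inside the region span at least the threshold."""
    return bool(timestamps_in_region) and \
        (max(timestamps_in_region) - min(timestamps_in_region)) >= threshold_s
```

Each frame, the Kinect skeleton's floor-plane coordinates would be fed to `crossed_boundary`, and per-region sighting times to `is_loitering`; the wall-climbing check on skeletal height differences is analogous.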

Li, H., He, Y., Sun, L., Cheng, X., Yu, J..  2016.  Side-channel information leakage of encrypted video stream in video surveillance systems. IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications. :1–9.

Video surveillance has been widely adopted to ensure home security in recent years. Most video encoding standards such as H.264 and MPEG-4 compress the temporal redundancy in a video stream using difference coding, which only encodes the residual image between a frame and its reference frame. Difference coding can efficiently compress a video stream, but it causes side-channel information leakage even though the video stream is encrypted, as reported in this paper. Particularly, we observe that the traffic patterns of an encrypted video stream are different when a user conducts different basic activities of daily living, which must be kept private from third parties as obliged by HIPAA regulations. We also observe that by exploiting this side-channel information leakage, attackers can readily infer a user's basic activities of daily living based on only the traffic size data of an encrypted video stream. We validate such an attack using two off-the-shelf cameras, and the results indicate that the user's basic activities of daily living can be recognized with a high accuracy.
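The inference the paper warns about can be illustrated with a toy nearest-profile matcher over per-second traffic sizes; the byte-count profiles below are fabricated for illustration, not measurements from the paper:

```python
# Match a window of per-second byte counts from an encrypted stream
# against previously profiled activity signatures by Euclidean distance.

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

PROFILES = {
    # bytes/sec while the camera films each activity (invented numbers):
    # difference coding makes static scenes compress far better.
    "idle":    [2000, 2100, 2050, 1990, 2010],
    "walking": [9000, 15000, 12000, 14000, 8000],
}

def infer_activity(window):
    """Return the profiled activity closest to the observed traffic window."""
    return min(PROFILES, key=lambda name: euclid(PROFILES[name], window))
```

Encryption hides frame contents but not frame sizes, which is why this side channel survives; traffic padding or shaping would be the corresponding defense.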

Shahrak, M. Z., Ye, M., Swaminathan, V., Wei, S.  2016.  Two-way real time multimedia stream authentication using physical unclonable functions. 2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP). :1–4.

Multimedia authentication is an integral part of multimedia signal processing in many real-time and security-sensitive applications, such as video surveillance. In such applications, a full-fledged video digital rights management (DRM) mechanism is not applicable due to the real-time requirement and the difficulties in incorporating complicated license/key management strategies. This paper investigates the potential of multimedia authentication from a brand new angle, by employing hardware-based security primitives such as physical unclonable functions (PUFs). We show that the hardware security approach is not only capable of authenticating both the hardware device and the multimedia stream but, more importantly, introduces minimal performance, resource, and power overhead. We justify our approach using a prototype PUF implementation on Xilinx FPGA boards. Our experimental results on real hardware demonstrate the high security and low overhead that hardware security approaches bring to multimedia authentication.

Aqel, S., Aarab, A., Sabri, M. A.  2016.  Shadow detection and removal for traffic sequences. 2016 International Conference on Electrical and Information Technologies (ICEIT). :168–173.

This paper addresses the problem of shadow detection and removal in traffic vision analysis. Shadows are nearly always present in traffic sequences, leading to errors at the segmentation stage, where they are often misclassified as part of an object region or as a moving object. This paper presents a shadow removal method based on both color and texture features, aiming to efficiently retrieve moving objects whose detection is usually affected by cast shadows. Additionally, to obtain a shadow-free foreground segmentation image, a morphological reconstruction algorithm is used to recover the foreground disturbed by shadow removal. Once shadows are detected, an automatic shadow removal model is applied based on information retrieved from the histogram shape. Experimental results on a real traffic sequence are presented to test the proposed approach and to validate the algorithm's performance.

Wei, Zhuo, Yan, Zheng, Wu, Yongdong, Deng, Robert Huijie.  2016.  Trustworthy Authentication on Scalable Surveillance Video with Background Model Support. ACM Trans. Multimedia Comput. Commun. Appl. 12:64:1–64:20.

H.264/SVC (Scalable Video Coding) codestreams, which consist of a single base layer and multiple enhancement layers, are designed for quality, spatial, and temporal scalabilities. They can be transmitted over networks of different bandwidths and seamlessly accessed by various terminal devices. With a huge amount of video surveillance and various devices becoming an integral part of the security infrastructure, the industry is currently starting to use the SVC standard to process digital video for surveillance applications such that clients with different network bandwidth connections and display capabilities can seamlessly access various SVC surveillance (sub)codestreams. In order to guarantee the trustworthiness and integrity of received SVC codestreams, engineers and researchers have proposed several authentication schemes to protect video data. However, existing algorithms cannot simultaneously satisfy both efficiency and robustness for SVC surveillance codestreams. Hence, in this article, a highly efficient and robust authentication scheme, named TrustSSV (Trust Scalable Surveillance Video), is proposed. Based on quality/spatial scalable characteristics of SVC codestreams, TrustSSV combines cryptographic and content-based authentication techniques to authenticate the base layer and enhancement layers, respectively. Based on temporal scalable characteristics of surveillance codestreams, TrustSSV extracts, updates, and authenticates foreground features for each access unit dynamically with background model support. Using SVC test sequences, our experimental results indicate that the scheme is able to distinguish between content-preserving and content-changing manipulations and to pinpoint tampered locations. Compared with existing schemes, the proposed scheme incurs very small computation and communication costs.

Costin, Andrei.  2016.  Security of CCTV and Video Surveillance Systems: Threats, Vulnerabilities, Attacks, and Mitigations. Proceedings of the 6th International Workshop on Trustworthy Embedded Devices. :45–54.

Video surveillance, closed-circuit TV and IP-camera systems became virtually omnipresent and indispensable for many organizations, businesses, and users. Their main purpose is to provide physical security, increase safety, and prevent crime. They also became increasingly complex, comprising many communication means, embedded hardware and non-trivial firmware. However, most research to date focused mainly on the privacy aspects of such systems, and did not fully address their issues related to cyber-security in general, and visual layer (i.e., imagery semantics) attacks in particular. In this paper, we conduct a systematic review of existing and novel threats in video surveillance, closed-circuit TV and IP-camera systems based on publicly available data. The insights can then be used to better understand and identify the security and the privacy risks associated with the development, deployment and use of these systems. We study existing and novel threats, along with their existing or possible countermeasures, and summarize this knowledge into a comprehensive table that can be used in a practical way as a security checklist when assessing cyber-security level of existing or new CCTV designs and deployments. We also provide a set of recommendations and mitigations that can help improve the security and privacy levels provided by the hardware, the firmware, the network communications and the operation of video surveillance systems. We hope the findings in this paper will provide a valuable knowledge of the threat landscape that such systems are exposed to, as well as promote further research and widen the scope of this field beyond its current boundaries.

Liu, Junbin, Sridharan, Sridha, Fookes, Clinton.  2016.  Recent Advances in Camera Planning for Large Area Surveillance: A Comprehensive Review. ACM Comput. Surv. 49:6:1–6:37.

With recent advances in consumer electronics and the increasingly urgent need for public security, camera networks have evolved from their early role of providing simple and static monitoring to current complex systems capable of obtaining extensive video information for intelligent processing, such as target localization, identification, and tracking. In all cases, it is of vital importance that the optimal camera configuration (i.e., optimal location, orientation, etc.) is determined before cameras are deployed, as a suboptimal placement solution will adversely affect intelligent video surveillance and video analytic algorithms. The optimal configuration may also provide substantial savings on the total number of cameras required to achieve the same level of utility. In this article, we examine most, if not all, of the recent approaches (post 2000) addressing camera placement in a structured manner. We believe that our work can serve as a first point of entry for readers wishing to start researching this area or engineers who need to design a camera system in practice. To this end, we attempt to provide a complete study of relevant formulation strategies and brief introductions to the optimization techniques most commonly used by researchers in this field. We hope our work will inspire new ideas in the field.

Saito, Susumu, Nakano, Teppei, Akabane, Makoto, Kobayashi, Tetsunori.  2016.  Evaluation of Collaborative Video Surveillance Platform: Prototype Development of Abandoned Object Detection. Proceedings of the 10th International Conference on Distributed Smart Camera. :172–177.

This paper evaluates a new video surveillance platform, presented in a previous study, through an abandoned-object detection task. The proposed platform provides automated detection and alerting, which remains a major challenge for machine algorithms due to the recall-precision tradeoff. To achieve both high recall and high precision simultaneously, a hybrid approach that applies crowdsourcing after image analysis is proposed. It is still unclear, however, to what extent this approach can improve detection accuracy and how quickly it can raise alerts. In this paper, an experiment is conducted on abandoned object detection, one of the most common surveillance tasks. The results show that detection accuracy improved from 50% (without crowdsourcing) to a stable 95-100% (with crowdsourcing) using a majority vote of 7 crowdworkers per task. In contrast, the alert-time issue remains open to further discussion, since at least 7+ minutes are required to reach the best performance.
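The crowdsourced verification step described above, a simple majority vote over 7 worker labels per machine-flagged detection, can be sketched as follows (label names are invented for illustration):

```python
# Combine crowdworker labels for one machine-flagged detection by majority
# vote; an alert is raised only if the winning label confirms the detection.
from collections import Counter

def majority_vote(labels):
    """labels: one label per crowdworker, e.g. 'abandoned'/'not_abandoned'."""
    winner, _count = Counter(labels).most_common(1)[0]
    return winner

def should_alert(labels, positive_label="abandoned"):
    return majority_vote(labels) == positive_label
```

With 7 workers per task the vote cannot tie, which is one practical reason for choosing an odd panel size; the cost is the multi-minute latency the paper reports.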

Zulkarnine, A. T., Frank, R., Monk, B., Mitchell, J., Davies, G.  2016.  Surfacing collaborated networks in dark web to find illicit and criminal content. 2016 IEEE Conference on Intelligence and Security Informatics (ISI). :109–114.

The Tor Network, a hidden part of the Internet, is becoming an ideal hosting ground for illegal activities and services, including large drug markets, financial fraud, espionage, and child sexual abuse. Researchers and law enforcement rely on manual investigations, which are both time-consuming and ultimately inefficient. The first part of this paper explores illicit and criminal content identified by prominent researchers in the dark web. We previously developed a web crawler that automatically searched websites on the internet based on pre-defined keywords and followed the hyperlinks in order to create a map of the network. This crawler previously demonstrated success in locating and extracting data on child exploitation images, videos, keywords, and linkages on the public internet. However, because Tor functions differently at the TCP level and uses socket connections, further technical challenges arise when crawling Tor. Other inherent challenges for advanced Tor crawling include scalability, content selection tradeoffs, and social obligation. We discuss these challenges and the measures taken to meet them. Our modified web crawler for Tor, termed the “Dark Crawler”, has been able to access Tor while simultaneously accessing the public internet. We present initial findings regarding what extremist and terrorist content is present in Tor and how this content is connected in a mapped network that facilitates dark web crimes. Our results so far indicate that the most popular websites in the dark web act as catalysts for dark web expansion by providing the necessary knowledge base, support, and services to build Tor hidden services and onion websites.

Park, A. J., Beck, B., Fletche, D., Lam, P., Tsang, H. H.  2016.  Temporal analysis of radical dark web forum users. 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). :880–883.

Extremist groups have turned to the Internet and social media sites as a means of sharing information amongst one another. This research study analyzes forum posts and finds people who show radical tendencies through the use of natural language processing and sentiment analysis. The forum data being used are from six Islamic forums on the Dark Web which are made available for security research. This research project uses a POS tagger to isolate keywords and nouns that can be utilized with the sentiment analysis program. Then the sentiment analysis program determines the polarity of the post. The post is scored as either positive or negative. These scores are then divided into monthly radical scores for each user. Once these time clusters are mapped, the change in opinions of the users over time may be interpreted as rising or falling levels of radicalism. Each user is then compared on a timeline to other radical users and events to determine possible connections or relationships. The ability to analyze a forum for an overall change in attitude can be an indicator of unrest and possible radical actions or terrorism.
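The aggregation step, turning per-post polarity scores into monthly per-user radical scores, might be sketched as follows; the upstream POS-tagging and sentiment-scoring pipeline is assumed to exist and is not shown:

```python
# Average per-post polarity scores into a monthly score per forum user.
from collections import defaultdict
from statistics import mean

def monthly_scores(posts):
    """posts: iterable of (user, 'YYYY-MM', polarity), polarity in [-1, 1]
    as produced by an upstream sentiment-analysis step (assumed)."""
    buckets = defaultdict(list)
    for user, month, polarity in posts:
        buckets[(user, month)].append(polarity)
    return {key: mean(vals) for key, vals in buckets.items()}
```

Plotting each user's monthly scores over time gives the rising or falling radicalism trend the study interprets, and aligning those timelines across users supports the relationship analysis described above.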

Iliou, C., Kalpakis, G., Tsikrika, T., Vrochidis, S., Kompatsiaris, I.  2016.  Hybrid Focused Crawling for Homemade Explosives Discovery on Surface and Dark Web. 2016 11th International Conference on Availability, Reliability and Security (ARES). :229–234.

This work proposes a generic focused crawling framework for discovering resources on any given topic that reside on the Surface or the Dark Web. The proposed crawler is able to seamlessly traverse the Surface Web and several darknets present in the Dark Web (i.e. Tor, I2P and Freenet) during a single crawl by automatically adapting its crawling behavior and its classifier-guided hyperlink selection strategy based on the network type. This hybrid focused crawler is demonstrated for the discovery of Web resources containing recipes for producing homemade explosives. The evaluation experiments indicate the effectiveness of the proposed approach both for the Surface and the Dark Web.

Baravalle, A., Lopez, M. S., Lee, S. W.  2016.  Mining the Dark Web: Drugs and Fake Ids. 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). :350–356.

In recent years, governmental bodies have been futilely trying to fight dark web marketplaces. Shortly after the closing of "The Silk Road" by the FBI and Europol in 2013, new successors were established. Through the combination of cryptocurrencies and non-standard communication protocols and tools, agents can anonymously trade in a marketplace for illegal items without leaving any record. This paper presents research carried out to gain insights into the products and services sold within Agora, one of the larger marketplaces for drugs, fake ids and weapons on the Internet. Our work sheds light on the nature of the market: there is a clear preponderance of drugs, which account for nearly 80% of the total items on sale. The ready availability of counterfeit documents, while they make up a much smaller percentage of the market, raises concerns. Finally, the role of organized crime within Agora is discussed.

Preotiuc-Pietro, Daniel, Carpenter, Jordan, Giorgi, Salvatore, Ungar, Lyle.  2016.  Studying the Dark Triad of Personality Through Twitter Behavior. Proceedings of the 25th ACM International Conference on Information and Knowledge Management. :761–770.

Research into the darker traits of human nature is growing in interest especially in the context of increased social media usage. This allows users to express themselves to a wider online audience. We study the extent to which the standard model of dark personality – the dark triad – consisting of narcissism, psychopathy and Machiavellianism, is related to observable Twitter behavior such as platform usage, posted text and profile image choice. Our results show that we can map various behaviors to psychological theory and study new aspects related to social media usage. Finally, we build a machine learning algorithm that predicts the dark triad of personality in out-of-sample users with reliable accuracy.

Collarana, Diego, Lange, Christoph, Auer, Sören.  2016.  FuhSen: A Platform for Federated, RDF-based Hybrid Search. Proceedings of the 25th International Conference Companion on World Wide Web. :171–174.

The increasing amount of structured and semi-structured information available on the Web and in distributed information systems, as well as the Web's diversification into different segments such as the Social Web, the Deep Web, or the Dark Web, requires new methods for horizontal search. FuhSen is a federated, RDF-based, hybrid search platform that searches, integrates and summarizes information about entities from distributed heterogeneous information sources using Linked Data. As a use case, we present scenarios where law enforcement institutions search and integrate data spread across these different Web segments to identify cases of organized crime. We present the architecture and implementation of FuhSen and explain the queries that can be addressed with this new approach.

Truvé, Staffan.  2016.  Temporal Analytics for Predictive Cyber Threat Intelligence. Proceedings of the 25th International Conference Companion on World Wide Web. :867–868.

Recorded Future has developed its Temporal Analytics Engine as a general purpose platform for harvesting and analyzing unstructured text from the open, deep, and dark web, and for transforming that content into a structured representation suitable for different analyses. In this paper we present some of the key components of our system, and show how it has been adapted to the increasingly important domain of cyber threat intelligence. We also describe how our data can be used for predictive analytics, e.g. to predict the likelihood of a product vulnerability being exploited or to assess the maliciousness of an IP address.

Knote, Robin, Baraki, Harun, Söllner, Matthias, Geihs, Kurt, Leimeister, Jan Marco.  2016.  From Requirement to Design Patterns for Ubiquitous Computing Applications. Proceedings of the 21st European Conference on Pattern Languages of Programs. :26:1–26:11.

Ubiquitous Computing describes a concept in which computing appears around us at any time and in any location. Such systems rely on context-sensitivity and adaptability: they constantly collect data about the user and his context in order to adapt their functionality to the situation at hand. Hence, the development of Ubiquitous Computing systems is not only a technical issue; it must also be considered from privacy, legal, and usability perspectives. This calls for experts from different disciplines to participate in the development process, contributing requirements and evaluating design alternatives. To capture the knowledge of these interdisciplinary teams and make it reusable for similar problems, a pattern logic can be applied. In the early phase of a development project, requirement patterns describe recurring requirements for similar problems, whereas in a more advanced development phase, design patterns are deployed to find a suitable design for recurring requirements. Existing literature, however, does not give sufficient insight into how the two concepts are related and how design patterns are derived from requirements (patterns) in practice. In our work, we show how trust-related requirements for Ubiquitous Computing applications evolve into interdisciplinary design patterns, elaborating a six-step process using an example requirement pattern. With this contribution, we shed light on the relation between interdisciplinary requirement and design patterns and provide practitioners and scholars working on UC application development with a way to use patterns systematically and effectively.

Zhang, Chenwei, Xie, Sihong, Li, Yaliang, Gao, Jing, Fan, Wei, Yu, Philip S..  2016.  Multi-source Hierarchical Prediction Consolidation. Proceedings of the 25th ACM International Conference on Information and Knowledge Management. :2251–2256.

In big data applications such as healthcare data mining, privacy concerns make it necessary to collect predictions from multiple information sources for the same instance, with raw features being discarded or withheld when aggregating multiple predictions. Besides, crowd-sourced labels need to be aggregated to estimate the ground truth of the data. Due to the imperfection of predictive models and human crowdsourcing workers, noisy and conflicting information is ubiquitous and inevitable. Although state-of-the-art aggregation methods have been proposed to handle label spaces with flat structures, as label spaces become more and more complicated, aggregation under a hierarchical label structure becomes necessary but has been largely ignored. These label hierarchies can be quite informative, as they are usually created by domain experts to make sense of highly complex label correlations such as protein functionality interactions or disease relationships. We propose a novel multi-source hierarchical prediction consolidation method that effectively exploits complicated hierarchical label structures to resolve the noisy and conflicting information that inherently originates from multiple imperfect sources. We formulate the problem as an optimization problem with a closed-form solution. The consolidation result is inferred in a fully unsupervised, iterative fashion. Experimental results on both synthetic and real-world data sets show the effectiveness of the proposed method over existing alternatives.