Biblio

Found 287 results

Filters: First Letter Of Last Name is J
J
J. Brynielsson, R. Sharma.  2015.  "Detectability of low-rate HTTP server DoS attacks using spectral analysis". 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). :954-961.

Denial-of-Service (DoS) attacks pose a threat to any service provider on the internet. While traditional DoS flooding attacks require the attacker to control at least as many resources as the service provider in order to be effective, so-called low-rate DoS attacks can exploit weaknesses in careless design to effectively deny a service using minimal amounts of network traffic. This paper investigates one such weakness found within version 2.2 of the popular Apache HTTP Server software. The weakness concerns how the server handles the persistent connection feature in HTTP 1.1. An attack simulator exploiting this weakness has been developed and shown to be effective. The attack was then studied with spectral analysis for the purpose of examining how well the attack could be detected. Similar to other papers on spectral analysis of low-rate DoS attacks, the results show that disproportionate amounts of energy in the lower frequencies can be detected when the attack is present. However, by randomizing the attack pattern, an attacker can efficiently reduce this disproportion to a degree where it might be impossible to correctly identify an attack in a real-world scenario.
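
For readers who want a feel for the detection technique, here is a minimal sketch of the kind of spectral check described above, not the paper's implementation; the sampling rate, cutoff frequency, and traffic series are illustrative assumptions:

```python
import numpy as np

def low_frequency_energy_ratio(packet_counts, sample_rate_hz=100, cutoff_hz=10):
    """Fraction of spectral energy below cutoff_hz in a packets-per-interval series."""
    x = np.asarray(packet_counts, dtype=float)
    x = x - x.mean()                              # drop the DC component
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
    total = spectrum.sum()
    return 0.0 if total == 0 else spectrum[freqs < cutoff_hz].sum() / total

rng = np.random.default_rng(0)
normal = rng.poisson(20, 1024)                      # featureless background traffic
attack = normal + 40 * (np.arange(1024) % 100 < 5)  # periodic on/off bursts
print(low_frequency_energy_ratio(normal), low_frequency_energy_ratio(attack))
```

The periodic burst pattern concentrates energy in the low band, which is exactly the disproportion the paper reports; randomizing the burst timing would flatten it.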

J. Chen, C. S. Gates, Z. Jorgensen, W. Yang.  2015.  Effective risk communication for end users: A multi-granularity approach. Women in CyberSecurity (WiCyS) Conference.

We proposed a multi-granularity approach to present the risk information of mobile apps to end users. Within this approach, the highest level is a summary risk index, which allows quick and easy comparison among multiple apps that provide similar functionality. We have developed several types of risk index, such as text saying “High Risk” or a number of filled circles (Gates, Chen, Li, & Proctor, 2014). Through both online and in-lab studies, we found that when presented with an interface including the summary risk index, participants made more secure app-selection decisions. Subsequent research showed that the framing of the summary risk information affects users’ app-selection decisions, and that positive framing in terms of safety has an advantage over negative framing in terms of risk (Chen, Gates, Li, & Proctor, 2014).

In addition to the summary risk index, some users may also want more detailed risk information for the apps. We have been developing an intermediate-level risk display that presents only the major risk categories. As a first step, we conducted user studies to have expert users identify the major risk categories (personal privacy, monetary loss, and device stability) and validated the categories on typical users (Jorgensen, Chen, Gates, Li, Proctor, & Yu, 2015). In a subsequent study, we are developing a graphical display that incorporates these risk categories into the current app interface and testing its effectiveness.

This multi-granularity approach can be applied to risk communication in other contexts. For example, in the context of communicating the potential risk associated with phishing attacks, an effective warning should be designed to include both higher-level and lower-level risk information: a higher-level index indicating how likely an email message or website is to be a phishing attempt should be presented to users to inform them of the potential risk in an easy-to-comprehend manner, while a more detailed explanation should also be available for users who want to know more about the warning and the index. We have completed a pilot study in this area and are initiating a full study to investigate the effectiveness of such an interface in preventing users from being phished successfully.

J. Choi, C. Choi, H. M. Lynn, P. Kim.  2015.  "Ontology Based APT Attack Behavior Analysis in Cloud Computing". 2015 10th International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA). :375-379.

The economic damage and the leakage of personal and confidential information caused by APT attacks have recently become a serious social problem, and a great deal of research has been done to solve it. APT attacks combine traditional hacking techniques with sophisticated attack techniques, such as exploiting zero-day vulnerabilities, to increase the success rate of attacks and to evade state-of-the-art detection and security measures. In this paper, we design an ontology of the APT attack behaviors that occur during operation on the target system, define inference rules over it so that malicious attack behavior can be inferred, and propose a method by which intelligent APT attacks can be detected.
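
The abstract does not give the ontology or its rules, but the inference step it describes amounts to forward chaining over observed behaviors. A hedged sketch, with all behavior names and rules invented for illustration:

```python
# Toy forward-chaining inference over observed behaviors; the real system
# works over an ontology, and these facts/rules are illustrative only.
OBSERVED = {"spawned_shell", "connected_c2", "read_credential_store"}

RULES = [
    # (required behaviors, inferred higher-level behavior)
    ({"spawned_shell", "connected_c2"}, "remote_control"),
    ({"read_credential_store"}, "credential_theft"),
    ({"remote_control", "credential_theft"}, "apt_attack_behavior"),
]

def infer(facts, rules):
    """Apply rules to a fixed point, returning all derived facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

print("apt_attack_behavior" in infer(OBSERVED, RULES))  # True
```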

J. J. Li, P. Abbate, B. Vega.  2015.  "Detecting Security Threats Using Mobile Devices". 2015 IEEE International Conference on Software Quality, Reliability and Security - Companion. :40-45.

In our previous work [1], we presented a study of using performance escalation to automatically detect Distributed Denial of Service (DDoS) types of attacks. We propose to enhance security threat detection by using mobile phones as detectors that identify outliers from normal traffic patterns as threats. The mobile solution makes detection portable to any service. This paper also shows that the same detection method works for advanced persistent threats.
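
As a rough illustration of flagging outliers from normal traffic patterns (the paper's actual detector and features are not specified in the abstract), a simple z-score check might look like this; the feature and threshold are assumptions:

```python
import statistics

def is_threat(baseline_rates, observed_rate, z_threshold=3.0):
    """Flag an observation that deviates strongly from the learned baseline."""
    mean = statistics.fmean(baseline_rates)
    stdev = statistics.stdev(baseline_rates)
    if stdev == 0:
        return observed_rate != mean
    return abs(observed_rate - mean) / stdev > z_threshold

normal_rates = [95, 102, 99, 101, 98, 103, 97, 100]   # requests/sec baseline
print(is_threat(normal_rates, 104), is_threat(normal_rates, 400))  # False True
```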

J. Kim, I. Moon, K. Lee, S. C. Suh, I. Kim.  2015.  "Scalable Security Event Aggregation for Situation Analysis". 2015 IEEE First International Conference on Big Data Computing Service and Applications. :14-23.

Cyber-attacks have evolved to become more sophisticated, employing combinations of attack methodologies with greater impact. For instance, Advanced Persistent Threats (APTs) employ a set of stealthy hacking processes running over a long period of time, making them much harder to detect. With this trend, the importance of big-data security analytics has attracted greater attention, since identifying such latest attacks requires large-scale data processing and analysis. In this paper, we present SEAS-MR (Security Event Aggregation System over MapReduce), which facilitates scalable security event aggregation for comprehensive situation analysis. The introduced system provides the following three core functions: (i) periodic aggregation, (ii) on-demand aggregation, and (iii) query support for effective analysis. We describe our design and implementation of the system over MapReduce and high-level query languages, and report experimental results collected on a Hadoop cluster under extensive settings for performance evaluation and design impacts.
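
The periodic aggregation function has a natural MapReduce shape. A plain-Python sketch of that shape (SEAS-MR itself runs on Hadoop; the event fields and window size here are assumptions):

```python
from collections import Counter

def mapper(event, window_secs=300):
    """Emit one count per (source IP, event type, time window) key."""
    window = event["ts"] - event["ts"] % window_secs
    yield (event["src_ip"], event["type"], window), 1

def aggregate(events):
    """Stand-in for the reduce phase: sum the counts per key."""
    counts = Counter()
    for event in events:
        for key, n in mapper(event):
            counts[key] += n
    return counts

events = [
    {"ts": 1000, "src_ip": "10.0.0.5", "type": "port_scan"},
    {"ts": 1050, "src_ip": "10.0.0.5", "type": "port_scan"},
    {"ts": 1400, "src_ip": "10.0.0.9", "type": "login_fail"},
]
print(aggregate(events))
```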

J. Knight, J. Xiang, K. Sullivan.  2015.  Real-World Types and Their Applications. The International Conference on Computer Safety, Reliability, and Security.
J. Pan, R. Jain, S. Paul.  2015.  "Enhanced Evaluation of the Interdomain Routing System for Balanced Routing Scalability and New Internet Architecture Deployments". IEEE Systems Journal. 9:892-903.

The Internet is facing many challenges that cannot be solved easily through ad hoc patches. To address these challenges, many research programs and projects have been initiated and many solutions are being proposed. However, before we can have a new architecture that Internet service providers (ISPs) are motivated to deploy and evolve, we need to address two issues: 1) know the current status better by appropriately evaluating the existing Internet; and 2) find out how various incentives and strategies will affect the deployment of the new architecture. For the first issue, we define a series of quantitative metrics that can potentially unify results from several measurement projects using different approaches and can be an intrinsic part of future Internet architecture (FIA) for monitoring and evaluation. Using these metrics, we systematically evaluate the current interdomain routing system and reveal many “autonomous-system-level” observations and key lessons for new Internet architectures. In particular, the evaluation results reveal the imbalance underlying the interdomain routing system and how the deployment of FIAs can benefit from these findings. With these findings, for the second issue, appropriate deployment strategies for future architecture changes can be formed with balanced incentives for both customers and ISPs. The results can be used to shape the short- and long-term goals for new architectures that are simple evolutions of the current Internet (so-called dirty-slate architectures) and, to some extent, for clean-slate architectures.

J. Ponniah, Y. C. Hu, P. R. Kumar.  2015.  "A clean slate design for secure wireless ad-hoc networks — Part 2: Open unsynchronized networks". 2015 13th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt). :183-190.

We build upon the clean-slate, holistic approach to the design of secure protocols for wireless ad-hoc networks proposed in part one. We consider the case when the nodes are not synchronized, but instead have local clocks that are relatively affine. In addition, the network is open in that nodes can enter at arbitrary times. To account for this new behavior, we make substantial revisions to the protocol in part one. We define a game between protocols for open, unsynchronized nodes and the strategies of adversarial nodes. We show that the same guarantees in part one also apply in this game: the protocol not only achieves the max-min utility, but the min-max utility as well. That is, there is a saddle point in the game, and furthermore, the adversarial nodes are effectively limited to either jamming or conforming with the protocol.

J. Qadir, O. Hasan.  2015.  "Applying Formal Methods to Networking: Theory, Techniques, and Applications". IEEE Communications Surveys Tutorials. 17:256-291.

Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, thus requiring a new protocol built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, in particular the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and present a survey of its applications to networking.

J. Shen, S. Ji, J. Shen, Z. Fu, J. Wang.  2015.  "Auditing Protocols for Cloud Storage: A Survey". 2015 First International Conference on Computational Intelligence Theory, Systems and Applications (CCITSA). :222-227.

Cloud storage has by now been accepted by an increasing number of people and is no longer a fresh notion. It brings cloud users a lot of conveniences, such as relief from local storage and location-independent access. Nevertheless, the correctness and completeness as well as the privacy of outsourced data are what worry cloud users. As a result, most people are unwilling to store data in the cloud, for fear that sensitive information concerning something important may be disclosed. Only when people feel worry-free can they accept cloud storage more easily. Certainly, many experts have taken this problem into consideration and tried to solve it. In this paper, we survey the solutions to the problems concerning auditing in cloud computing and give a comparison of them. The methods and performances as well as the pros and cons are discussed for the state-of-the-art auditing protocols.

J. Vukalović, D. Delija.  2015.  "Advanced Persistent Threats - detection and defense". 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). :1324-1330.

The term “Advanced Persistent Threat” refers to a well-organized, malicious group of people who launch stealthy attacks against computer systems of specific targets, such as governments, companies or the military. The attacks themselves are long-lasting, difficult to expose and often use very advanced hacking techniques. Since they are advanced in nature, prolonged and persistent, the organizations behind them have to possess a high level of knowledge, advanced tools and competent personnel to execute them. The attacks are usually performed in several phases - reconnaissance, preparation, execution, gaining access, information gathering and connection maintenance. In each of the phases attacks can be detected with different probabilities. There are several ways to increase the level of security of an organization in order to counter these incidents. First and foremost, it is necessary to educate users and system administrators on different attack vectors and provide them with knowledge and protection so that the attacks are unsuccessful. Second, implement strict security policies. That includes access control and restrictions (to information or network), protecting information by encrypting it and installing the latest security upgrades. Finally, it is possible to use software IDS tools to detect such anomalies (e.g. Snort, OSSEC, Sguil).

J. Zhang.  2015.  "Semantic-Based Searchable Encryption in Cloud: Issues and Challenges". 2015 First International Conference on Computational Intelligence Theory, Systems and Applications (CCITSA). :163-165.

Searchable encryption is a newly developing information security technique that enables users to search over encrypted data through keywords without having to decrypt it first. In the last decade, many researchers have engaged in the field of searchable encryption and proposed a series of efficient search schemes over encrypted cloud data. It is time to survey this field and distill a comprehensive framework by analyzing individual contributions. This paper focuses on searchable encryption schemes in the cloud. We first summarize the general model and threat model of searchable encryption schemes, and then present the privacy-preserving issues in these schemes. In addition, we compare the efficiency and security of semantic search and preferred search in detail. Finally, some open issues and research challenges for the future are proposed.
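
To make the general model concrete, here is a toy keyword-token index in the flavor of searchable symmetric encryption; it is not any scheme from the survey, and in a real scheme the stored document identifiers would be encrypted as well:

```python
import hashlib
import hmac

KEY = b"client-secret-key"      # held only by the data owner

def token(keyword):
    """Deterministic keyed token; the server never sees the plaintext keyword."""
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

# Client-side index build: keyword token -> matching document ids.
index = {}
documents = {"doc1": "invoice budget", "doc2": "budget meeting notes"}
for doc_id, text in documents.items():
    for word in text.split():
        index.setdefault(token(word), []).append(doc_id)

# Search: the client sends only the trapdoor token("budget") to the server,
# which can match it against the index without learning the keyword.
print(index.get(token("budget"), []))   # ['doc1', 'doc2']
```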

J.Y.V., Manoj Kumar, Swain, Ayas Kanta, Kumar, Sudeendra, Sahoo, Sauvagya Ranjan, Mahapatra, Kamalakanta.  2018.  Run Time Mitigation of Performance Degradation Hardware Trojan Attacks in Network on Chip. 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). :738—743.

Globalization of semiconductor design and manufacturing has led to several hardware security issues. The problem of Hardware Trojans (HT) is one such security issue discussed widely in industry and academia. An adversarial design engineer can insert an HT to leak confidential data, cause a denial-of-service attack, or pursue any other intention specific to the design. HTs in cryptographic modules and processors are widely discussed. HTs in Multi-Processor Systems-on-Chip (MPSoC) are also catastrophic, as most military applications use MPSoCs. Networks-on-Chip (NoC) are the standard communication infrastructure in modern-day MPSoCs. In this paper, we present a novel hardware Trojan which is capable of inducing performance degradation and denial-of-service attacks in a NoC. The presence of the hardware Trojan in a NoC can compromise the crucial details of packets communicated through the NoC. The proposed Trojan is triggered by a particular complex bit pattern from input messages and tries to mislead packets away from their destined addresses. A mitigation method based on a bit-shuffling mechanism inside the router, with a key directly extracted from the input message, is proposed to limit the adverse effects of the Trojan. The performance of a 4×4 NoC is evaluated under uniform traffic with the proposed Trojan and mitigation method. Simulation results show that the proposed mitigation scheme is useful in limiting the malicious effect of the hardware Trojan.
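
The bit-shuffling idea can be pictured as a keyed, invertible permutation of payload bits. A hedged software sketch (the key derivation, word width, and permutation are assumptions; the paper extracts the key from the input message in hardware):

```python
import random

def permutation(key, width=32):
    """Keyed bit permutation (illustrative: a seeded Fisher-Yates shuffle)."""
    rng = random.Random(key)
    order = list(range(width))
    rng.shuffle(order)
    return order

def shuffle_bits(word, key, width=32):
    order = permutation(key, width)
    out = 0
    for dst, src in enumerate(order):
        out |= ((word >> src) & 1) << dst
    return out

def unshuffle_bits(word, key, width=32):
    order = permutation(key, width)
    out = 0
    for dst, src in enumerate(order):
        out |= ((word >> dst) & 1) << src
    return out

payload = 0xDEADBEEF                 # flit body a Trojan might pattern-match on
key = 0x3C                           # toy key; the paper derives it from the message
scrambled = shuffle_bits(payload, key)
assert unshuffle_bits(scrambled, key) == payload
print(hex(payload), hex(scrambled))  # the trigger pattern is no longer on the wire
```
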
Jaatun, M. G., Moe, M. E. Gaup, Nordbø, P. E.  2018.  Cyber Security Considerations for Self-healing Smart Grid Networks. 2018 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1–7.

Fault Location, Isolation and System Restoration (FLISR) mechanisms allow for rapid restoration of power to customers that are not directly implicated by distribution network failures. However, depending on where the logic for the FLISR system is located, deployment may have security implications for the distribution network. This paper discusses alternative FLISR placements in terms of cyber security considerations, concluding that there is a case for both local and centralized FLISR solutions.
Jabeen, Gul, Ping, Luo.  2019.  A Unified Measurable Software Trustworthy Model Based on Vulnerability Loss Speed Index. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :18—25.

As trust becomes increasingly important in the software domain, its complex, composite nature poses great challenges, especially in today's dynamic and constantly changing internet technology. In addition, measuring software trustworthiness correctly and effectively plays a significant role in gaining users' trust when choosing among different software. In the context of security, trust has previously been measured based on vulnerability occurrence times to predict the total number of vulnerabilities or their future occurrence times. In this study, we propose a new unified index, called the "loss speed index", that integrates the most important variables of software security, such as vulnerability occurrence time, number, and severity loss, to evaluate the overall software trust measurement. Based on this new definition, a new model called the software trustworthy security growth model (STSGM) is proposed. This paper also aims at filling the gap by addressing the severity of vulnerabilities and proposes a vulnerability severity prediction model; the results are further evaluated by the STSGM to estimate the future loss speed index. Our work has several features: (1) it is used to predict the vulnerability severity/type in the future; (2) unlike traditional evaluation methods such as expert scoring, our model uses historical data to predict the future loss speed of software; (3) the loss metric value is used to evaluate the risk associated with different software, which has a direct impact on software trustworthiness. Experiments were performed on real software vulnerability datasets, and the results are analyzed to check the correctness and effectiveness of the proposed model.
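
The abstract does not give the exact formula of the loss speed index, but its general shape, severity loss accumulated per unit of exposure time, can be sketched as follows; the data and horizon are invented:

```python
def loss_speed_index(vulns, horizon_days):
    """Severity loss accumulated per day over the horizon.

    vulns: list of (disclosure_day, severity_score) pairs.
    """
    total_loss = sum(sev for day, sev in vulns if day <= horizon_days)
    return total_loss / horizon_days

history = [(30, 7.5), (90, 9.8), (200, 4.3)]   # day of disclosure, CVSS-like severity
print(loss_speed_index(history, 365))           # higher value = faster loss, less trust
```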

Jackson, K. A., Bennett, B. T.  2018.  Locating SQL Injection Vulnerabilities in Java Byte Code Using Natural Language Techniques. SoutheastCon 2018. :1-5.

With so much of our daily lives relying on digital devices like personal computers and cell phones, there is a growing demand for code that not only functions properly, but is secure and keeps user data safe. However, ensuring this is not such an easy task, and many developers do not have the required skills or resources to ensure their code is secure. Many code analysis tools have been written to find vulnerabilities in newly developed code, but this technology tends to produce many false positives and is still not able to identify all of the problems. Other methods of finding software vulnerabilities automatically are required. This proof-of-concept study applied natural language processing to Java byte code to locate SQL injection vulnerabilities in a Java program. Preliminary findings show that, due to the high number of terms in the dataset, using single decision trees will not produce a suitable model for locating SQL injection vulnerabilities, while random forest structures proved more promising. Still, further work is needed to determine the best classification tool.
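
The pipeline the study implies, treating byte-code token sequences as text and classifying with a random forest, can be sketched with scikit-learn; the token strings and labels below are fabricated placeholders, not the paper's dataset:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

samples = [
    # string concatenation feeding executeQuery: the injectable pattern
    "ldc 'SELECT * FROM users WHERE id=' invokevirtual StringBuilder.append "
    "invokeinterface Statement.executeQuery",
    # parameterized query: the safe pattern
    "ldc 'SELECT * FROM users WHERE id=?' invokeinterface Connection.prepareStatement "
    "invokeinterface PreparedStatement.executeQuery",
]
labels = [1, 0]   # 1 = likely SQL injection, 0 = safe

vec = CountVectorizer(token_pattern=r"\S+")      # keep byte-code tokens whole
X = vec.fit_transform(samples)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(vec.transform([samples[0]])))  # [1]
```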

Jacob, C., Rekha, V. R.  2017.  Secured and Reliable File Sharing System with De-Duplication Using Erasure Correction Code. 2017 International Conference on Networks Advances in Computational Technologies (NetACT). :221–228.

Effective storage and management of file systems is essential nowadays to avoid wasting the storage space provided by cloud providers. Data de-duplication is a widely used technique that stores only a single copy of a file and thus avoids duplication of files in cloud storage servers. It helps to reduce the amount of storage space and saves bandwidth of the cloud service, and thus yields high cost savings for cloud service subscribers. Today, the data we need to store is in encrypted format to ensure security. However, data encryption by data owners with their own keys makes de-duplication impossible for the cloud service subscriber, since encryption with a key converts data into an unidentifiable format called cipher text; thus, encrypting even the same data with different keys may result in different cipher texts. But de-duplication and encryption need to work hand in hand to ensure secure, authorized and optimized storage. In this paper, we propose a scheme for file-level de-duplication of encrypted files such as text, images and even video files stored in the cloud, based on the user's privilege set and the file privilege set. This paper proposes a de-duplication system which distributes the files across different servers. The system uses an Erasure Correcting Code technique to reconstruct the files even if parts of the files are lost by attacking any server. Thus the proposed system can ensure both the security and the reliability of encrypted files.
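
Content-hash de-duplication plus erasure coding can be pictured together in a few lines. In this hedged sketch a single XOR parity shard stands in for the paper's Erasure Correcting Code, and the two-shard split is an assumption:

```python
import hashlib

store = {}                                   # content hash -> list of shards

def split_with_parity(data, k=2):
    """Split into k shards plus one XOR parity shard (toy erasure code)."""
    size = -(-len(data) // k)                # ceiling division
    shards = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = bytes(a ^ b for a, b in zip(*shards))
    return shards + [parity]

def put(data):
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:                  # de-duplication: store each file once
        store[digest] = split_with_parity(data)
    return digest

def recover_shard(shards, lost):
    """Rebuild one of the two data shards from the survivor and the parity."""
    survivor = shards[1 - lost]
    return bytes(a ^ b for a, b in zip(survivor, shards[2]))

h = put(b"same file uploaded twice")
put(b"same file uploaded twice")             # duplicate upload stores nothing new
print(len(store), recover_shard(store[h], lost=0))
```
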
Jacobsen, Hans-Arno, Sadoghi, Mohammad, Tabatabaei, Mohammad Hossein, Vitenberg, Roman, Zhang, Kaiwen.  2018.  Blockchain Landscape and AI Renaissance: The Bright Path Forward. Proceedings of the 19th International Middleware Conference Tutorials. :2:1–2:1.

Known for powering cryptocurrencies such as Bitcoin and Ethereum, blockchain is seen as a disruptive technology capable of revolutionizing a wide variety of domains, ranging from finance to governance, by offering superior security, reliability, and transparency founded upon a decentralized and democratic computational model. In this tutorial, we first present the original Bitcoin design, along with Ethereum and Hyperledger, and reflect on their design choices through the academic lens. We further provide an overview of potential applications and associated research challenges, as well as a survey of ongoing research directions related to byzantine fault-tolerant consensus protocols. We highlight the new opportunities blockchain creates for building the next generation of secure middleware platforms and explore the possible interplay between AI and blockchains, or more specifically, how blockchain technology can enable the notion of "decentralized intelligence." We conclude with a walkthrough demonstrating the process of developing a decentralized application using a popular Smart Contract language (Solidity) over the Ethereum platform.
Jacomme, Charlie, Kremer, Steve.  2018.  An Extensive Formal Analysis of Multi-factor Authentication Protocols. 2018 IEEE 31st Computer Security Foundations Symposium (CSF). :1–15.

Passwords are still the most widespread means for authenticating users, even though they have been shown to create huge security problems. This motivated the use of additional authentication mechanisms in so-called multi-factor authentication protocols. In this paper we define a detailed threat model for this kind of protocol: while in classical protocol analysis attackers control the communication network, we take into account that many communications are performed over TLS channels, that computers may be infected by different kinds of malware, that attackers could perform phishing, and that humans may omit some actions. We formalize this model in the applied pi calculus and perform an extensive analysis and comparison of several widely used protocols - variants of Google 2-step and FIDO's U2F. The analysis is completely automated, generating systematically all combinations of threat scenarios for each of the protocols and using the ProVerif tool for automated protocol analysis. Our analysis highlights weaknesses and strengths of the different protocols, and allows us to suggest several small modifications of the existing protocols which are easy to implement, yet improve their security in several threat scenarios.
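
The systematic generation of threat-scenario combinations is easy to picture. A sketch of the combinatorial step only (the threat dimensions are paraphrased from the model described above; the actual protocol analysis is done by ProVerif):

```python
import itertools

# Threat dimensions paraphrased from the paper's model; values are simplified.
threats = {
    "network": ["honest", "attacker-controlled"],
    "tls_channel": ["intact", "compromised"],
    "malware": ["none", "display-only", "full-control"],
    "human": ["attentive", "omits-comparison"],
}

scenarios = list(itertools.product(*threats.values()))
print(len(scenarios))                    # 2 * 2 * 3 * 2 = 24 scenarios to analyze
for scenario in scenarios[:3]:           # each would be handed to ProVerif
    print(dict(zip(threats, scenario)))
```
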
Jacq, Olivier, Brosset, David, Kermarrec, Yvon, Simonin, Jacques.  2019.  Cyber Attacks Real Time Detection: Towards a Cyber Situational Awareness for Naval Systems. 2019 International Conference on Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–2.

Over the last years, the maritime sector has seen an important increase in digital systems on board. Whether used for platform management, navigation, logistics or office tasks, a modern ship can be seen as a fully featured, complex and moving information system. Meanwhile, cyber threats to the sector are real and, for instance, the year 2018 saw a number of harmful public ransomware attacks impacting shore and offshore assets. Gaining cyber situation recognition, comprehension and projection through Maritime Cyber Situational Awareness is therefore a challenging but essential task for the sector. However, its elaboration has to face a number of issues, such as the collection and fusion of real-time data coming from ships and the efficient visualization and sharing of the situation across maritime actors. In this paper, we describe our current work and results for maritime cyber situational awareness elaboration. Although its development is still ongoing, the first operational feedback is very encouraging.
Jadhao, Ankita R., Agrawal, Avinash J..  2016.  A Digital Forensics Investigation Model for Social Networking Site. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :130:1–130:4.

Social networking is fundamentally shifting the way we communicate, share ideas and form opinions. People from every age group are involved in social media sites or e-commerce sites and use them for their needs. Nowadays, almost every illegal activity happens using social networks and instant messages, which means that present systems are not capable of finding all suspicious words. In this paper, we provide a brief description of the problem and a review of the different frameworks developed so far, and we propose a better system which can identify criminal activity through social networking more efficiently. We use an Ontology Based Information Extraction (OBIE) technique to identify the domain of a word and Association Rule mining to generate rules. A heuristic method checks the user database for malicious users according to predefined elements, and a Naïve Bayes method is used to identify the context behind a message or post. The experimental result is used for further action on the victim by the cyber crime department.
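
The Naïve Bayes step for classifying the context of a post can be sketched with scikit-learn; the training posts and labels are invented examples, not the paper's data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

posts = [
    "selling cheap phones meet me tonight cash only",
    "happy birthday hope you have a great day",
    "need fake passport urgent contact me",
    "great match last night what a goal",
]
labels = [1, 0, 1, 0]        # 1 = suspicious context, 0 = benign

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(posts), labels)
print(model.predict(vec.transform(["urgent passport needed pay cash"])))  # [1]
```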

Jadhav, S., Dutia, S., Calangutkar, K., Oh, T., Kim, Y. H., Kim, J. N.  2015.  Cloud-based Android botnet malware detection system. 2015 17th International Conference on Advanced Communication Technology (ICACT). :347–352.

The increased use of Android devices and their open-source development framework have attracted many digital crime groups to use Android devices as one of the key attack surfaces. Due to their extensive connectivity and multiple sources of network connections, Android devices are particularly exposed to botnet-based malware attacks. This research focuses on developing a cloud-based Android botnet malware detection system. A prototype of the proposed system is deployed which provides runtime Android malware analysis. The paper explains the architectural implementation of the developed system using a botnet-detection learning dataset and a multi-layered algorithm used to predict the botnet family of a particular application.

Jadidi, Mahya Soleimani, Zaborski, Mariusz, Kidney, Brian, Anderson, Jonathan.  2019.  CapExec: Towards Transparently-Sandboxed Services. 2019 15th International Conference on Network and Service Management (CNSM). :1–5.

Network services are among the riskiest programs executed by production systems. Such services execute large quantities of complex code and process data from arbitrary — and untrusted — network sources, often with high levels of system privilege. It is desirable to confine system services to a least-privileged environment so that the potential damage from a malicious attacker can be limited, but existing mechanisms for sandboxing services require invasive and system-specific code changes and are insufficient to confine broad classes of network services. Rather than sandboxing one service at a time, we propose that the best place to add sandboxing to network services is in the service manager that starts those services. As a first step towards this vision, we propose CapExec, a process supervisor that can execute a single service within a sandbox based on a service declaration file specifying the required resources, limited access to which is supported by Casper services. Using the Capsicum compartmentalization framework and its Casper service framework, CapExec provides robust application sandboxing without requiring any modifications to the application itself. We believe that this is the first step towards ubiquitous sandboxing of network services without the costs of virtualization.
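
Capsicum and Casper are FreeBSD facilities, so as a loose, portable analogy only, the service-declaration idea can be sketched with POSIX resource limits; the declaration format, fields, and limits are invented:

```python
import json
import resource
import subprocess

# Hypothetical declaration format; CapExec's actual file format is not shown here.
declaration = json.loads("""
{
  "command": ["/usr/bin/env", "true"],
  "max_open_files": 32,
  "max_cpu_seconds": 5
}
""")

def apply_limits():
    """Runs in the child just before exec, like a (much weaker) sandbox setup."""
    resource.setrlimit(resource.RLIMIT_NOFILE, (declaration["max_open_files"],) * 2)
    resource.setrlimit(resource.RLIMIT_CPU, (declaration["max_cpu_seconds"],) * 2)

subprocess.run(declaration["command"], preexec_fn=apply_limits, check=True)
```
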
Jae Min Cho, Kiyoung Choi.  2014.  An FPGA implementation of high-throughput key-value store using Bloom filter. VLSI Design, Automation and Test (VLSI-DAT), 2014 International Symposium on. :1-4.

This paper presents an efficient implementation of a key-value store using Bloom filters on an FPGA. The Bloom filters are used to reduce the number of unnecessary accesses to the hash tables, thereby improving the performance. Additionally, for better hash table utilization, we use a modified cuckoo hashing algorithm for the implementation. Both are implemented on the FPGA to further improve the performance. Experimental results show significant performance improvement over existing approaches.
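
A Bloom-filter front end of the kind described is compact enough to sketch in software (the FPGA design itself is hardware; the sizes and hash count here are arbitrary):

```python
import hashlib

class BloomFilter:
    """Membership filter: 'no' is definite, 'yes' may be a false positive."""

    def __init__(self, m_bits=1024, k_hashes=3):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0

    def _positions(self, key):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

table = {"alpha": 1}          # stand-in for the hash tables
bf = BloomFilter()
bf.add("alpha")
for key in ("alpha", "beta"):
    # a definite miss in the filter skips the (expensive) table access
    print(key, table.get(key) if bf.might_contain(key) else "lookup skipped")
```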

Jaeger, D., Cheng, F., Meinel, C.  2018.  Accelerating Event Processing for Security Analytics on a Distributed In-Memory Platform. 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech). :634-643.

The analysis of security-related event logs is an important step in the investigation of cyber-attacks. It allows tracing malicious activities and lets a security operator find out what has happened. However, since IT landscapes are growing in size and diversity, the amount of events and their highly different representations are becoming a Big Data challenge. Unfortunately, current solutions for the analysis of security-related events, so-called Security Information and Event Management (SIEM) systems, are not able to keep up with the load. In this work, we propose a distributed SIEM platform that makes use of highly efficient distributed normalization and persists event data into an in-memory database. We implement the normalization on common distribution frameworks, i.e., Spark, Storm, Trident and Heron, and compare their performance with our custom-built distribution solution. Additionally, different tuning options are introduced and their speed advantages are presented. In the end, we show how the writing into an in-memory database can be tuned to achieve optimal persistence speed. Using the proposed approach, we are able to not only fully normalize, but also persist more than 20 billion events per day with relatively small client hardware. Therefore, we are confident that our approach can handle the load of events in even very large IT landscapes.
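
The normalization step, mapping each raw event onto a common schema, is the part that parallelizes across frameworks like Spark or Storm. A single-record sketch (the raw log format and target schema fields are invented examples):

```python
import re

RAW = '203.0.113.7 - - [10/Oct/2018:13:55:36 +0000] "GET /admin HTTP/1.1" 403'
PATTERN = re.compile(
    r'(?P<src_ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" (?P<status>\d+)'
)

def normalize(line):
    """Map one raw log line onto a common event schema (None if unparseable)."""
    m = PATTERN.match(line)
    if m is None:
        return None
    return {**m.groupdict(), "status": int(m.group("status"))}

print(normalize(RAW))
```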