Biblio

Filters: First Letter Of Title is Q
Q
Asanjarani, Azam.  2016.  QBD Modelling of a Finite State Controller for Queueing Systems with Unobservable Markovian Environments. Proceedings of the 11th International Conference on Queueing Theory and Network Applications. :20:1–20:4.

We address the problem of stabilizing control for complex queueing systems with known parameters but an unobservable Markovian random environment. In such systems, the controller needs to assign servers to queues without having full information about the servers' states. A control challenge is to devise a policy that matches servers to queues in a way that takes state estimates into account. Maximally attainable stability regions are non-trivial. To handle these situations, we model the system under given decision rules. The model uses a Quasi-Birth-and-Death (QBD) structure to find a matrix-analytic expression for the stability bound. We use this formulation to illustrate how the stability region grows as the number of controller belief states increases.
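
For orientation, the matrix-analytic stability bound rests on the standard mean-drift condition for QBD processes; the sketch below uses the generic block notation and does not reproduce the paper's specific controller blocks.

```latex
% Standard QBD mean-drift stability condition (illustrative; the concrete A_i
% blocks of the controller model are not reproduced here).
% A_0: level-up transitions, A_1: local transitions, A_2: level-down transitions.
\[
  A = A_0 + A_1 + A_2, \qquad \pi A = 0, \qquad \pi \mathbf{1} = 1,
\]
\[
  \text{the QBD is stable} \iff \pi A_0 \mathbf{1} < \pi A_2 \mathbf{1}.
\]
```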

Phan, Trung V., Islam, Syed Tasnimul, Nguyen, Tri Gia, Bauschert, Thomas.  2019.  Q-DATA: Enhanced Traffic Flow Monitoring in Software-Defined Networks applying Q-learning. 2019 15th International Conference on Network and Service Management (CNSM). :1–9.
Software-Defined Networking (SDN) introduces centralized network control and management by separating the data plane from the control plane, which facilitates traffic flow monitoring, security analysis and policy formulation. However, it is challenging to choose a proper degree of traffic flow handling granularity while proactively protecting forwarding devices from getting overloaded. In this paper, we propose a novel traffic flow matching control framework called Q-DATA that applies reinforcement learning in order to enhance the traffic flow monitoring performance in SDN-based networks and prevent traffic forwarding performance degradation. We first describe and analyse an SDN-based traffic flow matching control system that applies a reinforcement learning approach based on the Q-learning algorithm in order to maximize the traffic flow granularity. It also considers the forwarding performance status of the SDN switches derived from a Support Vector Machine based algorithm. Next, we outline the Q-DATA framework that incorporates the optimal traffic flow matching policy derived from the traffic flow matching control system to efficiently provide the most detailed traffic flow information that other mechanisms require. Our novel approach is realized as a REST SDN application and evaluated in an SDN environment. Through comprehensive experiments, the results show that, compared to the default behavior of common SDN controllers and to our previous DATA mechanism, the new Q-DATA framework yields a remarkable improvement in protecting SDN switches from traffic forwarding performance degradation while still providing the most detailed traffic flow information on demand.
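
To illustrate the Q-learning component described above (not the authors' implementation; the state, action and reward definitions here are simplified assumptions), a tabular update that trades flow-matching granularity against switch load might look like this:

```python
import random
from collections import defaultdict

# Hypothetical simplification: states are coarse switch-load levels reported by
# the SVM-based monitor, actions are flow-matching granularities to install.
STATES = ["load_low", "load_medium", "load_high"]
ACTIONS = ["match_5tuple", "match_dst_only", "match_wildcard"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over flow-matching granularities."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward(granularity, overloaded):
    """Assumed reward: prefer fine-grained monitoring unless the switch degrades."""
    gain = {"match_5tuple": 1.0, "match_dst_only": 0.5, "match_wildcard": 0.1}[granularity]
    return gain - (2.0 if overloaded else 0.0)

def q_update(state, action, r, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

# One interaction step with illustrative values.
s = "load_medium"
a = choose_action(s)
q_update(s, a, reward(a, overloaded=False), "load_low")
```
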
Wang, Xiaolan, Meliou, Alexandra, Wu, Eugene.  2016.  QFix: Demonstrating Error Diagnosis in Query Histories. Proceedings of the 2016 International Conference on Management of Data. :2177–2180.

An increasing number of applications in all aspects of society rely on data. Despite a long line of research on data cleaning and repair, data correctness has been an elusive goal. Errors in the data can be extremely disruptive, and are detrimental to the effectiveness and proper function of data-driven applications. Even when data is cleaned, new errors can be introduced by applications and users who interact with the data. Subsequent valid updates can obscure these errors and propagate them through the dataset, causing more discrepancies. Any discovered errors tend to be corrected superficially, on a case-by-case basis, further obscuring the true underlying cause, and making detection of the remaining errors harder. In this demo proposal, we outline the design of QFix, a query-centric framework that derives explanations and repairs for discrepancies in relational data based on potential errors in the queries that operated on the data. This is a marked departure from traditional data-centric techniques that directly fix the data. We then describe how users will use QFix in a demonstration scenario. Participants will be able to select from a number of transactional benchmarks, introduce errors into the queries that are executed, and compare the fixes to the queries proposed by QFix as well as existing alternative algorithms such as decision trees.

Nikravesh, Ashkan, Hong, David Ke, Chen, Qi Alfred, Madhyastha, Harsha V., Mao, Z. Morley.  2016.  QoE Inference Without Application Control. Proceedings of the 2016 Workshop on QoE-based Analysis and Management of Data Communication Networks. :19–24.
Network quality-of-service (QoS) does not always directly translate to users' quality-of-experience (QoE), e.g., changes in a video streaming app's frame rate in reaction to changes in packet loss rate depend on various factors such as the adaptation strategy used by the app and the app's use of forward error correction (FEC) codes. Therefore, knowledge of user QoE is desirable in several scenarios that have traditionally operated on QoS information. Examples include traffic management by ISPs and resource allocation by the operating system (OS). However, today, entities such as ISPs and OSes that implement these optimizations typically do not have a convenient way of obtaining input from applications on user QoE. To address this problem, we propose offline generation of per-application models mapping application-independent QoS metrics to corresponding application-specific QoE metrics, thereby enabling entities (such as ISPs and OSes) that can observe a user's network traffic to infer the user's QoE, in the absence of direct input. In this paper, we describe how such models can be generated and present our results from two popular video applications with significantly different QoE metrics. We also showcase the use of these models for ISPs to perform QoE-aware traffic management and for the OS to offer an efficient QoE diagnosis service.
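
A minimal sketch of the offline per-application model idea, assuming the QoS metric is packet loss rate and the QoE metric is frame rate (the actual metrics, measurement procedure and model family used by the authors are not specified here):

```python
import numpy as np

# Hypothetical training data collected offline: packet loss rate (%) versus the
# frame rate (fps) observed for one video application under controlled conditions.
loss_rate = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
frame_rate = np.array([30.0, 29.0, 27.0, 22.0, 15.0, 8.0])

# Fit a simple per-application model mapping QoS -> QoE (quadratic fit as an example).
coeffs = np.polyfit(loss_rate, frame_rate, deg=2)
qoe_model = np.poly1d(coeffs)

# An ISP or OS that can only observe QoS can now infer QoE without app cooperation.
observed_loss = 3.0
print(f"predicted frame rate at {observed_loss}% loss: {qoe_model(observed_loss):.1f} fps")
```
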
Murudkar, Chetana V., Gitlin, Richard D..  2019.  QoE-Driven Anomaly Detection in Self-Organizing Mobile Networks Using Machine Learning. 2019 Wireless Telecommunications Symposium (WTS). :1–5.
Current procedures for anomaly detection in self-organizing mobile communication networks use network-centric approaches to identify dysfunctional serving nodes. In this paper, a user-centric approach and a novel methodology for anomaly detection are proposed, where the Quality of Experience (QoE) metric is used to evaluate the end-user experience. The system model demonstrates how dysfunctional serving eNodeBs are successfully detected by implementing a parametric QoE model that uses machine learning to predict user QoE in a network scenario created with the ns-3 network simulator. This approach can play a vital role in future ultra-dense and green mobile communication networks, which are expected to be both self-organizing and self-healing.
Arifeen, F. U., Ali, M., Ashraf, S..  2016.  QoS and security in VOIP networks through admission control mechanism. 2016 13th International Bhurban Conference on Applied Sciences and Technology (IBCAST). :373–380.

With the growing understanding of information security and digital assets, IT technology has placed tremendous importance on network admission control (NAC). In NAC architectures, admission decisions and resource reservations are taken at edge devices, rather than at resources or individual routers within the network. The NAC architecture enables resilient resource reservation, maintaining reservations even after failures and intra-domain rerouting. The future of admission control in IP networks rests on meeting the security and Quality of Service (QoS) demands of real-time multimedia applications via advance resource reservation techniques. To meet these security and QoS demands in real-time networks, an admission control algorithm decides whether a new traffic flow can be admitted to the network or not. Securely allocating peers for multimedia traffic flows with the required performance is a great challenge in resource reservation schemes. In this paper, we propose a model for VoIP networks that achieves security services along with QoS, where admission control decisions are taken at edge routers. We analyze and argue that measurement-based admission control should be performed at edge routers, employing on-demand probing in parallel from both edge routers to secure the source and destination nodes respectively. In order to achieve security and QoS for a new call, we choose various probe packet sizes for voice and video calls respectively. Similarly, a security-aware approach for selecting an admission control threshold is attained through our proposed admission control algorithm. All results are obtained from NS2-based simulations evaluating the network performance of edge-router-based network admission control for VoIP traffic.
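
The measurement-based admission decision at an edge router can be sketched as follows; the thresholds, per-call demands, probe sizes and the authentication check are illustrative assumptions, not the paper's exact algorithm:

```python
# Illustrative measurement-based admission control at an edge router.
LINK_CAPACITY_KBPS = 10_000
ADMISSION_THRESHOLD = 0.8          # utilisation above which new calls are rejected
PROBE_SIZE_BYTES = {"voice": 160, "video": 1200}   # assumed per-call probe packet sizes

def probe_path(call_type, measured_load_kbps, probe_loss_rate):
    """Combine the on-demand probe result with the currently measured load."""
    demand_kbps = {"voice": 64, "video": 512}[call_type]   # assumed per-call demand
    projected = (measured_load_kbps + demand_kbps) / LINK_CAPACITY_KBPS
    return projected, probe_loss_rate

def admit_call(call_type, measured_load_kbps, probe_loss_rate, peer_authenticated):
    """Admit only if the peer is authenticated and QoS headroom remains after admission."""
    projected, loss = probe_path(call_type, measured_load_kbps, probe_loss_rate)
    return peer_authenticated and projected <= ADMISSION_THRESHOLD and loss < 0.01

print(admit_call("voice", measured_load_kbps=7_200, probe_loss_rate=0.002,
                 peer_authenticated=True))
```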

Khelifi, Hakima, Luo, Senlin, Nour, Boubakr, Moungla, Hassine.  2019.  A QoS-Aware Cache Replacement Policy for Vehicular Named Data Networks. 2019 IEEE Global Communications Conference (GLOBECOM). :1–6.

Vehicular Named Data Network (VNDN) uses Named Data Network (NDN) as a communication enabler. Communication is achieved using content names instead of host addresses. NDN integrates content caching at the network level rather than the application level; hence, the network becomes aware of content caching and delivery. Content caching is a fundamental element of VNDN communication. However, due to the limited size of the cache store, only the most used content should be cached while the less used content should be evicted. Traditional cache replacement policies may not work efficiently in VNDN due to the large and diverse exchanged content. To solve this issue, we propose an efficient cache replacement policy that takes quality of service into consideration. The idea consists of classifying the traffic into different classes and splitting the cache store into a set of sub-cache stores according to the defined traffic classes, with storage capacities set according to the network requirements. Each content is assigned a popularity-density value that balances the content's popularity with its size. Content with the highest popularity-density value is cached, while content with the lowest value is evicted. Simulation results prove the efficiency of the proposed solution in enhancing the overall network quality of service.
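
A sketch of the replacement logic as described: per-class sub-cache stores and eviction of the content with the lowest popularity-density value. The scoring formula and class capacities below are assumptions for illustration, not the paper's exact parameters.

```python
from dataclasses import dataclass, field

@dataclass
class Content:
    name: str
    size: int          # bytes
    popularity: int    # e.g., request count observed at the router

    @property
    def popularity_density(self):
        # Assumed scoring: balance popularity against size (larger content scores lower).
        return self.popularity / self.size

@dataclass
class SubCache:
    capacity: int                      # bytes reserved for one traffic class
    items: list = field(default_factory=list)

    def used(self):
        return sum(c.size for c in self.items)

    def insert(self, content: Content):
        if content.size > self.capacity:
            return False
        # Evict the lowest popularity-density content until the new item fits.
        while self.used() + content.size > self.capacity and self.items:
            victim = min(self.items, key=lambda c: c.popularity_density)
            if victim.popularity_density >= content.popularity_density:
                return False            # new content is the least valuable: do not cache
            self.items.remove(victim)
        self.items.append(content)
        return True

# One sub-cache store per QoS traffic class, with class-specific capacities (assumed).
cache = {"safety": SubCache(50_000), "infotainment": SubCache(20_000)}
cache["safety"].insert(Content("accident-alert", size=2_000, popularity=120))
```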

Roumeliotis, Anargyros J., Panagopoulos, Athanasios D..  2016.  QoS-Based Allocation Cooperative Mechanism for Spectrum Leasing in Overlay Cognitive Radio Networks. Proceedings of the 20th Pan-Hellenic Conference on Informatics. :49:1–49:6.

The cooperative spectrum leasing process between the primary user (PU) and the secondary user (SU) in a cognitive radio network is studied under the overlay approach and the decode-and-forward (DF) cooperative protocol. Considering the Quality of Service (QoS) provisioning of both users, which participate in a three-phase leasing process, we investigate the maximization of the PU's effective capacity subject to an average energy constraint for the SU under a heuristic power and time allocation mechanism. The proposed scheme builds on basic concepts from convex optimization theory and, as shown by simulations, outperforms a baseline allocation mechanism. Finally, important remarks on the PU's and the SU's performance are extracted for different system parameters.
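
For reference, the effective-capacity quantity being maximized is the standard link-layer notion below; the paper's specific three-phase expressions are not reproduced here.

```latex
% Standard definition of effective capacity for QoS exponent \theta, where S(t)
% is the cumulative service offered to the primary user up to time t.
\[
  E_C(\theta) \;=\; -\lim_{t \to \infty} \frac{1}{\theta t}
  \ln \mathbb{E}\!\left[ e^{-\theta S(t)} \right]
\]
```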

Mao, Huajian, Chi, Chenyang, Yu, Jinghui, Yang, Peixiang, Qian, Cheng, Zhao, Dongsheng.  2019.  QRStream: A Secure and Convenient Method for Text Healthcare Data Transferring. 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). :3458–3462.
With increasing health awareness, users are becoming more and more interested in their daily health information and in the results of healthcare activities from healthcare organizations, and they often try to collect these data together for better usage. Traditionally, healthcare data are delivered by healthcare organizations in paper format, which is neither easy nor convenient for data usage and management. Users have to transcribe these paper records into digital form, which can introduce mistakes into the data. A secure and convenient method for transferring electronic health data between users and healthcare organizations would therefore be valuable. However, because of security and privacy concerns, almost no healthcare organization provides a stable and full service for health data delivery. In this paper, we propose a secure and convenient method, QRStream, which splits the original health data and loads the pieces onto a stream of QR code frames for data transfer. The results show that QRStream can transfer text health data smoothly with acceptable performance, for example transferring 10K of data in 10 seconds.
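
A minimal sketch of the split-and-stream idea, assuming a simple `index/total|payload` frame header and leaving the actual QR rendering to a library such as `qrcode`; the frame size, header format and absence of integrity checks are illustrative assumptions, not QRStream's actual protocol:

```python
import math

FRAME_PAYLOAD = 256   # assumed number of characters carried per QR frame

def split_into_frames(text: str):
    """Split a health record into ordered frames, each small enough for one QR code."""
    total = math.ceil(len(text) / FRAME_PAYLOAD)
    return [f"{i + 1}/{total}|{text[i * FRAME_PAYLOAD:(i + 1) * FRAME_PAYLOAD]}"
            for i in range(total)]

def reassemble(frames):
    """Receiver side: order frames by index and concatenate payloads."""
    parsed = []
    for frame in frames:
        header, payload = frame.split("|", 1)
        index, _total = map(int, header.split("/"))
        parsed.append((index, payload))
    return "".join(p for _, p in sorted(parsed))

record = "blood pressure: 120/80; glucose: 5.4 mmol/L; heart rate: 72 bpm; " * 20
frames = split_into_frames(record)
assert reassemble(frames) == record
# Each frame string would then be rendered as one QR code, e.g. qrcode.make(frames[0]).
```
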
Chen, S., Xi, F., Liu, Z., Bao, B..  2015.  Quadrature compressive sampling of multiband radar signals at sub-Landau rate. 2015 IEEE International Conference on Digital Signal Processing (DSP). :234–238.

Sampling multiband radar signals is an essential issue for multiband/multifunction radar. This paper proposes a multiband quadrature compressive sampling (MQCS) system to perform the sampling at a sub-Landau rate. The MQCS system randomly projects the multiband signal into a compressive multiband one by modulating each subband signal with a low-pass signal, and then samples the compressive multiband signal at the Landau rate, outputting compressive measurements. The compressive inphase and quadrature (I/Q) components of each subband are extracted from the compressive measurements and are exploited to recover the baseband I/Q components. As the effective bandwidth of the compressive multiband signal is much less than that of the received multiband signal, the sampling rate is much less than the Landau rate of the received signal. Simulation results validate that the proposed MQCS system can effectively acquire and reconstruct the baseband I/Q components of multiband signals.
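
As a reminder of the quantities involved (not the paper's derivation), the baseband I/Q decomposition being recovered and the Landau-rate benchmark against which the sampling rate is compared are the standard ones:

```latex
% Each subband signal in its quadrature (I/Q) representation around carrier f_c:
\[
  x(t) \;=\; I(t)\cos(2\pi f_c t) \;-\; Q(t)\sin(2\pi f_c t)
\]
% Landau's lower bound on the average sampling rate for a multiband signal
% with spectral support \mathcal{F} (its total occupied bandwidth):
\[
  f_s \;\ge\; \mu(\mathcal{F}),
\]
% so "sub-Landau" sampling operates below the total occupied bandwidth of the
% received multiband signal, which the compressive projection makes possible.
```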

Luo, Linghui, Bodden, Eric, Späth, Johannes.  2019.  A Qualitative Analysis of Android Taint-Analysis Results. 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE). :102–114.
In the past, researchers have developed a number of popular taint-analysis approaches, particularly in the context of Android applications. Numerous studies have shown that automated code analyses are adopted by developers only if they yield a good "signal to noise ratio", i.e., high precision. Many previous studies have reported analysis precision quantitatively, but this gives little insight into what can and should be done to increase precision further. To guide future research on increasing precision, we present a comprehensive study that evaluates static Android taint-analysis results on a qualitative level. To unravel the exact nature of taint flows, we have designed COVA, an analysis tool to compute partial path constraints that inform about the circumstances under which taint flows may actually occur in practice. We have conducted a qualitative study on the taint flows reported by FlowDroid in 1,022 real-world Android applications. Our results reveal several key findings: Many taint flows occur only under specific conditions, e.g., environment settings, user interaction, I/O. Taint analyses should consider the application context to discern such situations. COVA shows that few taint flows are guarded by multiple different kinds of conditions simultaneously, so tools that seek to confirm true positives dynamically can concentrate on one kind at a time, e.g., only simulating user interactions. Lastly, many false positives arise due to a too liberal source/sink configuration. Taint analyses must be more carefully configured, and their configuration could benefit from better tool assistance.
Koehler, Henning, Link, Sebastian.  2016.  Qualitative Cleaning of Uncertain Data. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :2269–2274.

We propose a new view on data cleaning: not the data itself but the degrees of uncertainty attributed to the data are dirty. Applying possibility theory, tuples are assigned degrees of possibility with which they occur, and constraints are assigned degrees of certainty that say to which tuples they apply. Classical data cleaning modifies some minimal set of tuples. Instead, we marginally reduce their degrees of possibility. This reduction leads to a new qualitative version of the vertex cover problem. Qualitative vertex cover can be mapped to a linear-weighted constraint satisfaction problem. However, off-the-shelf solvers cannot solve the problem more efficiently than classical vertex cover. Instead, we utilize the degrees of possibility and certainty to develop a dedicated algorithm that is fixed-parameter tractable in the size of the qualitative vertex cover. Experiments show that our algorithm is faster than solvers for the classical vertex cover problem by several orders of magnitude, and that performance improves with higher numbers of uncertainty degrees.

Zaidan, Firas, Hannebauer, Christoph, Gruhn, Volker.  2016.  Quality Attestation: An Open Source Pattern. Proceedings of the 21st European Conference on Pattern Languages of Programs. :2:1–2:7.

A number of small Open Source projects let independent providers measure different aspects of their quality that would otherwise be hard to see. This paper describes this observation as the pattern Quality Attestation. Quality Attestation belongs to a family of Open Source patterns written by various authors.

Liang, Danwei, An, Jian, Cheng, Jindong, Yang, He, Gui, Ruowei.  2018.  The Quality Control in Crowdsensing Based on Twice Consensuses of Blockchain. Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers. :630–635.
In most crowdsensing systems, the quality of the collected data varies and is difficult to evaluate, while existing crowdsensing quality control methods are mostly based on a central platform, which is not completely trusted in reality and leads to fraud and other problems. To address these problems, a novel crowdsensing quality control model is proposed in this paper. First, the idea of blockchain is introduced into this model. A credit-based verifier selection mechanism and twice consensuses are proposed to realize the non-repudiation and non-tampering of information in crowdsensing. Then, quality grading evaluation (QGE) is put forward, in which the method of truth discovery and ideas from fuzzy theory are combined to evaluate the quality of sensing data, and a garbled circuit is used to ensure that evaluation criteria cannot be leaked. Finally, experiments show that our model is feasible in terms of time and effective in quality evaluation.
Prechelt, Lutz, Schmeisky, Holger, Zieris, Franz.  2016.  Quality Experience: A Grounded Theory of Successful Agile Projects Without Dedicated Testers. Proceedings of the 38th International Conference on Software Engineering. :1017–1027.

Context: While successful conventional software development regularly employs separate testing staff, there are successful agile teams with as well as without separate testers. Question: How does successful agile development work without separate testers? What are advantages and disadvantages? Method: A case study, based on Grounded Theory evaluation of interviews and direct observation of three agile teams; one having separate testers, two without. All teams perform long-term development of parts of e-business web portals. Results: Teams without testers use a quality experience work mode centered around a tight field-use feedback loop, driven by a feeling of responsibility, supported by test automation, resulting in frequent deployments. Conclusion: In the given domain, hand-overs to separate testers appear to hamper the feedback loop more than they contribute to quality, so working without testers is preferred. However, Quality Experience is achievable only with modular architectures and in suitable domains.

Mukherjee, M., Edwards, J., Kwon, H., Porta, T. F. L..  2015.  Quality of information-aware real-time traffic flow analysis and reporting. 2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops). :69–74.

In this paper we present a framework for Quality of Information (QoI)-aware networking. QoI quantifies how useful a piece of information is for a given query or application. Herein, we present a general QoI model, as well as a specific example instantiation that carries through the rest of the paper. In this model, we focus on the tradeoffs between precision and accuracy. As a motivating example, we look at traffic video analysis. We present simple algorithms for deriving various traffic metrics from video, such as vehicle count and average speed. We implement these algorithms both on a desktop workstation and on a less-capable mobile device. We then show how QoI-awareness enables end devices to make intelligent decisions about how to process queries and form responses, such that huge bandwidth savings are realized.

Koul, Ajay, Kaur, Harinder.  2017.  Quality of Service Oriented Secure Routing Model for Mobile Ad Hoc Networks. Proceedings of the 2017 International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence. :88–92.

Mobile Ad hoc Networks (MANETs) always bring challenges to designers in terms of security deployment due to their dynamic and infrastructure-less nature. In the past few years, different researchers have proposed different solutions for providing security in MANETs. In most cases, however, the solution either prevents only a particular attack or provides security at the cost of sacrificing QoS. In this paper we introduce a model that deploys security in MANETs and takes care of Quality of Service issues to some extent. We have adopted the concept of analyzing the behavior of nodes, as we believe that if nodes behave properly and in a coordinated fashion, the insecurity level drops drastically. Our methodology derives its advantage from this approach.

Kumar, S. Dinesh, Thapliyal, Himanshu.  2016.  QUALPUF: A Novel Quasi-Adiabatic Logic Based Physical Unclonable Function. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :24:1–24:4.

In recent years, silicon-based Physical Unclonable Functions (PUFs) have evolved as one of the popular hardware security primitives. PUFs are a class of circuits that use the inherent variations in the Integrated Circuit (IC) manufacturing process to create unique and unclonable IDs. There are various security-related applications of PUFs such as IC counterfeit and piracy detection, secure key management, etc. In this paper, we present a novel QUasi-Adiabatic Logic based PUF (QUALPUF) which is designed using an energy recovery technique. To the best of our knowledge, this is the first work on the hardware design of a PUF using adiabatic logic. The proposed design is energy efficient compared to recent hardware PUF designs proposed in the literature. Further, we propose a novel bit extraction method for our PUF which improves the space of challenge-response pairs. QUALPUF is evaluated on security metrics including reliability, uniqueness, uniformity and bit-aliasing. The power and area of QUALPUF are also presented. SPICE simulations show that QUALPUF consumes 0.39 μW of power to generate a response bit.
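
The reliability, uniqueness and uniformity figures mentioned can be computed with the standard PUF metrics; a small sketch over hypothetical response bit-strings (not measurement data from QUALPUF):

```python
import itertools

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def uniformity(response):
    """Percentage of 1s in one chip's response (ideal: 50%)."""
    return 100.0 * sum(response) / len(response)

def uniqueness(responses):
    """Average pairwise inter-chip Hamming distance as a percentage (ideal: 50%)."""
    n = len(responses[0])
    pairs = list(itertools.combinations(responses, 2))
    return 100.0 * sum(hamming_distance(a, b) for a, b in pairs) / (len(pairs) * n)

def reliability(reference, noisy_samples):
    """100% minus the average intra-chip Hamming distance across repeated readouts."""
    n = len(reference)
    avg_hd = sum(hamming_distance(reference, s) for s in noisy_samples) / len(noisy_samples)
    return 100.0 * (1 - avg_hd / n)

# Hypothetical 8-bit responses from three PUF instances to the same challenge.
chips = [[0, 1, 1, 0, 1, 0, 0, 1],
         [1, 1, 0, 0, 1, 1, 0, 0],
         [0, 0, 1, 1, 0, 1, 1, 0]]
print(uniformity(chips[0]), uniqueness(chips), reliability(chips[0], [chips[0]]))
```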

Fei, Y., Ning, J., Jiang, W..  2018.  A quantifiable Attack-Defense Trees model for APT attack. 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). :2303–2306.
In order to deal with APT (Advanced Persistent Threat) attacks, this paper proposes a quantifiable Attack-Defense Tree model. First, the model gives both attack and defense leaf nodes a variety of security attributes, and then quantifies the nodes through the Analytic Hierarchy Process. Finally, it analyzes the impact of the defense measures on the attack behavior. Through the application of the model, we can see that the quantifiable Attack-Defense Tree model can describe the impact of defense measures on attack behavior well.
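
The Analytic Hierarchy Process step can be illustrated as follows: given a pairwise-comparison matrix over a leaf node's security attributes, the attribute weights are the normalized principal eigenvector. The comparison values and attribute names below are made up for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical pairwise comparisons of three leaf-node attributes
# (e.g. cost, difficulty, detection probability) on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()                  # normalized attribute weights

# Consistency check (random index 0.58 for a 3x3 matrix); CR < 0.1 is acceptable.
lam_max = eigvals.real[principal]
ci = (lam_max - len(A)) / (len(A) - 1)
cr = ci / 0.58
print(weights, cr)
```
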
Fink, G.A., Griswold, R.L., Beech, Z.W..  2014.  Quantifying cyber-resilience against resource-exhaustion attacks. 2014 7th International Symposium on Resilient Control Systems (ISRCS). :1–8.

Resilience in the information sciences is notoriously difficult to define, much less to measure. But in mechanical engineering, the resilience of a substance is mathematically well-defined as an area under the stress-strain curve. We combined inspiration from the mechanics of materials and axioms from queuing theory in an attempt to define resilience precisely for information systems. We first examine the meaning of resilience in linguistic and engineering terms and then translate these definitions to the information sciences. As a general assessment of our approach's fitness, we quantify how resilience may be measured in a simple queuing system. By using a very simple model we allow clear application of established theory while remaining flexible enough to apply to many other engineering contexts in information science and cyber security. We tested our definitions of resilience via simulation and analysis of networked queuing systems. We conclude with a discussion of the results and make recommendations for future work.
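
A toy version of the measurement idea, assuming resilience is taken as the area under a normalized service-quality curve while a resource-exhaustion load is applied to a single queue; the paper's exact queuing model, axioms and metric are not reproduced here.

```python
import random

random.seed(1)

def simulate_queue(arrival_rates, service_rate=1.0, steps_per_unit=100):
    """Discrete-time approximation of a single queue under a time-varying load."""
    backlog, quality = 0.0, []
    for lam in arrival_rates:
        for _ in range(steps_per_unit):
            backlog += random.expovariate(1.0) * lam / steps_per_unit
            backlog = max(0.0, backlog - service_rate / steps_per_unit)
        # Assumed mapping: service quality degrades as the backlog grows.
        quality.append(1.0 / (1.0 + backlog))
    return quality

# Baseline load, then a resource-exhaustion burst, then recovery.
load = [0.5] * 20 + [3.0] * 10 + [0.5] * 20
quality = simulate_queue(load)

# Resilience taken as the normalized area under the quality curve (trapezoidal rule).
area = sum((quality[i] + quality[i + 1]) / 2 for i in range(len(quality) - 1))
resilience = area / (len(quality) - 1)
print(f"resilience score: {resilience:.3f}")   # 1.0 would mean no degradation at all
```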
 

Pfister, J., Gomes, M. A. C., Vilela, J. P., Harrison, W. K..  2017.  Quantifying equivocation for finite blocklength wiretap codes. 2017 IEEE International Conference on Communications (ICC). :1–6.

This paper presents a new technique for the analysis and comparison of wiretap codes in the small blocklength regime over the binary erasure wiretap channel. A major result is the development of Monte Carlo strategies for quantifying a code's equivocation, which mirrors techniques used to analyze forward error correcting codes. For this paper, we limit our analysis to coset-based wiretap codes, and give strategies for calculating and/or estimating the equivocation, in order of preference. We also make several comparisons of different code families. Our results indicate that there are security advantages to using algebraic codes for applications that require small to medium blocklengths.
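
For coset-based wiretap codes over the binary erasure wiretap channel, the eavesdropper's equivocation for a given erasure pattern equals the GF(2) rank of the parity-check columns at the erased positions, so a Monte Carlo estimate reduces to rank computations. The sketch below uses the [7,4] Hamming parity-check matrix purely as an example; the codes evaluated in the paper are not reproduced.

```python
import random

def gf2_rank(vectors):
    """Rank over GF(2) of bit-vectors (ints), via elimination on leading bits."""
    basis = {}                        # highest set bit -> basis vector with that leading bit
    for v in vectors:
        while v:
            msb = v.bit_length() - 1
            if msb not in basis:
                basis[msb] = v
                break
            v ^= basis[msb]
    return len(basis)

# Example secrecy code: cosets of the [7,4] Hamming code, so messages are 3-bit
# syndromes and H has 7 nonzero 3-bit columns (encoded here as integers).
H_COLUMNS = [0b001, 0b010, 0b011, 0b100, 0b101, 0b110, 0b111]

def monte_carlo_equivocation(erasure_prob, trials=20_000):
    """Average eavesdropper equivocation (bits) over random BEC erasure patterns."""
    total = 0
    for _ in range(trials):
        erased = [c for c in H_COLUMNS if random.random() < erasure_prob]
        total += gf2_rank(erased)
    return total / trials

for eps in (0.3, 0.5, 0.7):
    print(eps, round(monte_carlo_equivocation(eps), 3))
```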

Jurado, Mireya, Smith, Geoffrey.  2019.  Quantifying Information Leakage of Deterministic Encryption. Proceedings of the 2019 ACM SIGSAC Conference on Cloud Computing Security Workshop. :129–139.
In order to protect user data while maintaining application functionality, encrypted databases can use specialized cryptography such as property-revealing encryption, which allows a property of the underlying plaintext values to be computed from the ciphertext. One example is deterministic encryption which ensures that the same plaintext encrypted under the same key will produce the same ciphertext. This technology enables clients to make queries on sensitive data hosted in a cloud server and has considerable potential to protect data. However, the security implications of deterministic encryption are not well understood. We provide a leakage analysis of deterministic encryption through the application of the framework of quantitative information flow. A key insight from this framework is that there is no single "right" measure by which leakage can be quantified: information flow depends on the operational scenario and different operational scenarios require different leakage measures. We evaluate leakage under three operational scenarios, modeled using three different gain functions, under a variety of prior distributions in order to bring clarity to this problem.
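
The quantitative-information-flow machinery referred to can be illustrated with the simplest gain function, Bayes vulnerability. The channel matrix below is a toy stand-in with two plaintexts colliding on one observation, not the paper's deterministic-encryption model or its gain functions.

```python
import numpy as np

# Toy channel matrix C[x][y] = P(observation y | secret x). For deterministic
# encryption, each row would put all of its mass on a single column.
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0]])
prior = np.array([0.5, 0.3, 0.2])

prior_vulnerability = prior.max()                  # adversary's best blind guess
joint = prior[:, None] * C                         # P(x, y)
posterior_vulnerability = joint.max(axis=0).sum()  # expected best guess after observing y

multiplicative_leakage = posterior_vulnerability / prior_vulnerability
print(prior_vulnerability, posterior_vulnerability, multiplicative_leakage)
```
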
Lyu, Minzhao, Sherratt, Daniel, Sivanathan, Arunan, Gharakheili, Hassan Habibi, Radford, Adam, Sivaraman, Vijay.  2017.  Quantifying the Reflective DDoS Attack Capability of Household IoT Devices. Proceedings of the 10th ACM Conference on Security and Privacy in Wireless and Mobile Networks. :46–51.

Distributed Denial-of-Service (DDoS) attacks are increasing in frequency and volume on the Internet, and there is evidence that cyber-criminals are turning to Internet-of-Things (IoT) devices such as cameras and vending machines as easy launchpads for large-scale attacks. This paper quantifies the capability of consumer IoT devices to participate in reflective DDoS attacks. We first show that household devices can be exposed to Internet reflection even if they are secured behind home gateways. We then evaluate eight household devices available on the market today, including lightbulbs, webcams, and printers, and experimentally profile their reflective capability, amplification factor, duration, and intensity rate for TCP, SNMP, and SSDP based attacks. Lastly, we demonstrate reflection attacks in a real-world setting involving three IoT-equipped smart-homes, emphasising the imminent need to address this problem before it becomes widespread.
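
The amplification factor profiled per device is the usual ratio of reflected bytes to attacker-sent bytes; a small sketch over assumed per-protocol byte counts (the numbers are placeholders, not measurements from the paper):

```python
# Placeholder byte counts for one IoT device acting as a reflector, given as
# (bytes sent by the spoofing attacker, bytes reflected toward the victim).
observed = {
    "SSDP": (90, 2800),
    "SNMP": (64, 1500),
    "TCP-SYN": (40, 120),
}

for proto, (request_bytes, response_bytes) in observed.items():
    baf = response_bytes / request_bytes      # bandwidth amplification factor
    print(f"{proto}: amplification factor {baf:.1f}x")
```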

Chen, Huashan, Cho, Jin-Hee, Xu, Shouhuai.  2018.  Quantifying the Security Effectiveness of Firewalls and DMZs. Proceedings of the 5th Annual Symposium and Bootcamp on Hot Topics in the Science of Security. :9:1–9:11.

Firewalls and Demilitarized Zones (DMZs) are two mechanisms that have been widely employed to secure enterprise networks. Despite this, their security effectiveness has not been systematically quantified. In this paper, we make a first step towards filling this void by presenting a representational framework for investigating their security effectiveness in protecting enterprise networks. Through simulation experiments, we draw useful insights into the security effectiveness of firewalls and DMZs. To the best of our knowledge, these insights were not reported in the literature until now.

Blenn, Norbert, Ghiëtte, Vincent, Doerr, Christian.  2017.  Quantifying the Spectrum of Denial-of-Service Attacks Through Internet Backscatter. Proceedings of the 12th International Conference on Availability, Reliability and Security. :21:1–21:10.
Denial of Service (DoS) attacks are a major threat currently observable in computer networks and especially the Internet. In such an attack a malicious party tries to either break a service running on a server or exhaust the capacity or bandwidth of the victim to hinder customers from effectively using the service. Recent reports show that the total number of Distributed Denial of Service (DDoS) attacks is steadily growing, with "mega-attacks" peaking at hundreds of gigabits per second (Gbps). In this paper, we provide a quantification of DDoS attacks in size and duration beyond the outliers reported in the media. We find that these mega attacks do exist, but the bulk of attacks is in practice only a fraction of these frequently reported values. We further show that it is feasible to collect meaningful backscatter traces using surprisingly small telescopes, thereby enabling a broader audience to perform attack intelligence research.
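
The telescope-based quantification rests on the usual backscatter scaling argument: if attackers spoof source addresses uniformly at random over IPv4, a telescope of n addresses observes roughly a fraction n/2^32 of a victim's response traffic. A sketch, with the telescope size and packet count as hypothetical values:

```python
IPV4_SPACE = 2 ** 32
TELESCOPE_ADDRESSES = 2 ** 12          # e.g., a /20 darknet (hypothetical size)

def estimate_attack_rate(backscatter_pps):
    """Scale observed backscatter packets/s up to the victim's total response rate,
    assuming uniformly random spoofed source addresses."""
    return backscatter_pps * IPV4_SPACE / TELESCOPE_ADDRESSES

# 5 backscatter packets/s seen at the telescope imply roughly this victim response rate:
print(f"{estimate_attack_rate(5):,.0f} packets/s")
```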