Biblio

Filters: Keyword is Planning  [Clear All Filters]
2021-04-08
Al-Dhaqm, A., Razak, S. A., Dampier, D. A., Choo, K. R., Siddique, K., Ikuesan, R. A., Alqarni, A., Kebande, V. R..  2020.  Categorization and Organization of Database Forensic Investigation Processes. IEEE Access. 8:112846–112858.
Database forensic investigation (DBFI) is an important area of research within digital forensics. Its importance is growing as digital data becomes more extensive and commonplace. The challenges associated with DBFI are numerous, and one of them is the lack of a harmonized DBFI process for investigators to follow. In this paper, therefore, we conduct a survey of the existing literature with the aim of understanding the body of work already accomplished. Furthermore, we build on the existing literature to present a harmonized DBFI process using the design science research methodology. This harmonized DBFI process has been developed around three key categories: (i) planning, preparation and pre-response; (ii) acquisition and preservation; and (iii) analysis and reconstruction. The process has been designed to avoid confusion and ambiguity, as well as to provide practitioners with a systematic method of performing DBFI with a higher degree of certainty.
2021-02-16
Jin, Z., Yu, P., Guo, S. Y., Feng, L., Zhou, F., Tao, M., Li, W., Qiu, X., Shi, L..  2020.  Cyber-Physical Risk Driven Routing Planning with Deep Reinforcement-Learning in Smart Grid Communication Networks. 2020 International Wireless Communications and Mobile Computing (IWCMC). :1278–1283.
Modern grid systems are typical cyber-physical systems (CPS), in which the information space and the physical space are closely related. Once a communication link is interrupted, great damage is done to the power system, and if service paths are too concentrated, the risk is greatly increased. To address this problem, this paper constructs a route planning algorithm that combines node load pressure, link load balance and service delay risk. Because existing intelligent algorithms easily fall into local optima, we choose a deep reinforcement learning (DRL) algorithm. First, we build a risk assessment model: a node risk index is established from the node load pressure, and a link risk index is established from the average service communication delay and the link balance degree. The route planning problem is then solved by a DRL-based route planning algorithm. Finally, experiments are carried out in a simulated power grid scenario. The results show that our method finds lower-risk paths than both the original Dijkstra algorithm and the Constraint-Dijkstra algorithm.
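The composite risk weight described in this abstract can be illustrated with a plain shortest-path sketch (the weighting coefficients, graph, and risk scores below are hypothetical, and classical Dijkstra stands in for the paper's DRL solver):

```python
import heapq

def risk_weight(node_load, link_delay, link_balance, a=0.4, b=0.3, c=0.3):
    """Combine node load pressure, service delay risk and link load
    balance into one edge weight (coefficients are illustrative)."""
    return a * node_load + b * link_delay + c * (1.0 - link_balance)

def lowest_risk_path(graph, src, dst):
    """Plain Dijkstra over risk-weighted edges.
    graph: {node: [(neighbor, weight), ...]}"""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst            # walk predecessors back to the source
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]

# Toy topology: B is heavily loaded, so the planner should route via C.
g = {
    "A": [("B", risk_weight(0.9, 0.2, 0.5)), ("C", risk_weight(0.2, 0.3, 0.8))],
    "B": [("D", risk_weight(0.1, 0.1, 0.9))],
    "C": [("D", risk_weight(0.3, 0.2, 0.7))],
}
path, risk = lowest_risk_path(g, "A", "D")
```

The DRL solver in the paper replaces this exact search with a learned policy, which matters once the network and constraint set grow beyond what exhaustive search handles well.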
2021-03-01
Said, S., Bouloiz, H., Gallab, M..  2020.  Identification and Assessment of Risks Affecting Sociotechnical Systems Resilience. 2020 IEEE 6th International Conference on Optimization and Applications (ICOA). :1–10.
Resilience is regarded nowadays as the ideal means for sociotechnical systems to cope with potential threats and crises. That said, gaining and maintaining this ability is not always easy, given the multitude of risks driving adverse and challenging events. This paper proposes a method dedicated to the assessment of risks that directly affect resilience, conducted within the framework of risk assessment and resilience engineering approaches. A 5×5 matrix, dedicated to the identification and assessment of risk factors that threaten system resilience, has been elaborated. The matrix has two axes: the impact on resilience metrics, and the availability and effectiveness of resilience planning. Checklists for collecting information about these two attributes are established, and a case study is undertaken. The analysis of these risks must be given priority to make the system more resilient to shocks.
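The 5×5 matrix idea can be sketched as a simple lookup (the 1-5 scoring scale and the risk-class thresholds below are assumptions for illustration, not the paper's calibration):

```python
def risk_level(impact, planning_gap):
    """Map the two matrix axes to a qualitative risk class.
    impact: impact on resilience metrics, scored 1 (negligible) to 5 (severe).
    planning_gap: availability/effectiveness of resilience planning,
    scored 1 (strong planning) to 5 (planning absent)."""
    assert 1 <= impact <= 5 and 1 <= planning_gap <= 5
    score = impact * planning_gap          # 1 .. 25
    if score <= 4:
        return "low"
    if score <= 12:
        return "medium"
    return "high"
```

In practice the checklists mentioned in the abstract would feed the two scores; the multiplication and cut-offs here are one conventional way to partition a 5×5 grid.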
2021-01-28
Collins, B. C., Brown, P. N..  2020.  Exploiting an Adversary’s Intentions in Graphical Coordination Games. 2020 American Control Conference (ACC). :4638–4643.

How does information regarding an adversary's intentions affect optimal system design? This paper addresses this question in the context of graphical coordination games where an adversary can indirectly influence the behavior of agents by modifying their payoffs. We study a situation in which a system operator must select a graph topology in anticipation of the action of an unknown adversary. The designer can limit her worst-case losses by playing a security strategy, effectively planning for an adversary which intends maximum harm. However, fine-grained information regarding the adversary's intention may help the system operator to fine-tune the defenses and obtain better system performance. In a simple model of adversarial behavior, this paper asks how much a system operator can gain by fine-tuning a defense for known adversarial intent. We find that if the adversary is weak, a security strategy is approximately optimal for any adversary type; however, for moderately strong adversaries, security strategies are far from optimal.
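The gap between a security strategy and a defense tuned to known adversarial intent can be illustrated on a toy payoff table (the topology names and payoff values are invented for illustration, not taken from the paper's model):

```python
# Hypothetical system welfare for each (topology, adversary type) pair.
payoffs = {
    "ring": {"weak": 8, "strong": 3},
    "star": {"weak": 10, "strong": 1},
}

def security_strategy(payoffs):
    """Topology maximizing worst-case welfare (maximin): plan for the
    adversary that intends maximum harm."""
    return max(payoffs, key=lambda t: min(payoffs[t].values()))

def tuned_strategy(payoffs, adversary):
    """Topology maximizing welfare against a known adversary type."""
    return max(payoffs, key=lambda t: payoffs[t][adversary])
```

Here the maximin choice is "ring" (worst case 3 vs. 1), but an operator who knows the adversary is weak does better with "star": that difference is exactly the value of intent information the paper quantifies.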

2020-05-18
Sharma, Sarika, Kumar, Deepak.  2019.  Agile Release Planning Using Natural Language Processing Algorithm. 2019 Amity International Conference on Artificial Intelligence (AICAI). :934–938.
Once requirements are gathered in agile, they are broken down into a smaller pre-defined format called user stories. These user stories are then scoped into various sprint releases and delivered accordingly. Release planning in Agile becomes challenging when the number of user stories runs into the hundreds; in such scenarios it is very difficult to manually identify similar user stories and package them together into a release. Hence, this paper suggests applying natural language processing algorithms to identify similar user stories and then scope them into a release. The approach builds a word corpus for every release identified in the project and then converts each user story into a string vector, using a Java utility to extract the three words from the project corpus that occur most often in the story. Once all user stories are represented as vector arrays, the RV coefficient NLP algorithm clusters them into the various releases of the software project. Using the proposed approach, release planning for large and complex software engineering projects can be simplified, resulting in efficient planning in less time. Commercial tools like JIRA and Rally can be enhanced to include the suggested algorithms for managing release planning in Agile.
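The top-3-words representation can be sketched as follows (a simplification of the paper's Java utility; the tokenization and tie-breaking rules here are assumptions):

```python
from collections import Counter
import re

def top3_vector(story, corpus_counts):
    """Represent a user story by the 3 of its tokens that occur most
    frequently in the release corpus (ties broken alphabetically)."""
    tokens = re.findall(r"[a-z]+", story.lower())
    tokens = [t for t in tokens if t in corpus_counts]
    tokens.sort(key=lambda t: (-corpus_counts[t], t))
    seen, top = set(), []
    for t in tokens:
        if t not in seen:
            seen.add(t)
            top.append(t)
        if len(top) == 3:
            break
    return top

# Hypothetical release corpus built from previously scoped stories.
corpus = Counter("login user password reset login user profile login".split())
vec = top3_vector("As a user I want to reset my password after login", corpus)
```

Stories sharing most of their top-3 vectors would then be grouped into the same release by the RV-coefficient clustering step described in the abstract.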
2020-03-02
Zhao, Min, Li, Shunxin, Xiao, Dong, Zhao, Guoliang, Li, Bo, Liu, Li, Chen, Xiangyu, Yang, Min.  2019.  Consumption Ability Estimation of Distribution System Interconnected with Microgrids. 2019 IEEE International Conference on Energy Internet (ICEI). :345–350.
With the fast development of distributed generation, storage and control techniques, a growing number of microgrids are interconnected with distribution networks. The microgrid capacity that a local distribution system can afford is important to distribution network planning and to the well-organized integration of microgrids. Therefore, this paper focuses on estimating the consumption ability of a distribution system interconnected with microgrids. A method to judge the rationality of a microgrid access plan is put forward, and an index system covering operation security, power quality and energy management is proposed. A consumption ability estimation procedure based on rationality evaluation and interactions is then built up, and requirements on multi-scenario simulation are presented. A case study on a practical distribution system design with multiple microgrids confirms the validity and reasonableness of the proposed method and process. The results also indicate construction and reinforcement directions for the distribution network.
2020-06-08
van den Berg, Eric, Robertson, Seth.  2019.  Game-Theoretic Planning to Counter DDoS in NEMESIS. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :1–6.
NEMESIS provides powerful and cost-effective defenses against extreme Distributed Denial of Service (DDoS) attacks through a number of network maneuvers. However, selecting which maneuvers to deploy, when, and with what parameters requires great care to achieve optimal outcomes in the face of an overwhelming attack. Analytical wargaming allows game-theoretically optimal Courses of Action (COA) to be created in real time during live operations, orders of magnitude faster than packet-level simulation and with outcomes equivalent to even expert human hand-crafted COAs.
2020-09-08
Chen, Yu-Cheng, Gieseking, Tim, Campbell, Dustin, Mooney, Vincent, Grijalva, Santiago.  2019.  A Hybrid Attack Model for Cyber-Physical Security Assessment in Electricity Grid. 2019 IEEE Texas Power and Energy Conference (TPEC). :1–6.
A detailed model of an attack on the power grid involves both a preparation stage and an execution stage. This paper introduces a novel Hybrid Attack Model (HAM) that combines the Probabilistic Learning Attacker, Dynamic Defender (PLADD) model and a Markov Chain model to simulate the planning and execution stages of a bad-data injection attack on the power grid. We discuss the advantages and limitations of the prior models and of our proposed Hybrid Attack Model, and show that HAM is more effective than the individual PLADD or Markov Chain models.
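The execution stage modeled by the Markov Chain component can be sketched as a small absorbing chain (the states and transition probabilities below are illustrative, not taken from the paper):

```python
import random
from collections import Counter

# Hypothetical execution-stage chain for a bad-data injection attack.
transitions = {
    "recon":    {"access": 0.6, "recon": 0.4},
    "access":   {"inject": 0.5, "detected": 0.2, "access": 0.3},
    "inject":   {"inject": 1.0},     # absorbing: attack succeeded
    "detected": {"detected": 1.0},   # absorbing: attack failed
}

def simulate(start="recon", steps=50, rng=None):
    """Run one trajectory of the chain via inverse-CDF sampling."""
    rng = rng or random.Random()
    state = start
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for nxt, p in transitions[state].items():
            acc += p
            if r < acc:
                state = nxt
                break
    return state

# Monte Carlo estimate of attack outcome frequencies over 1000 runs.
outcomes = Counter(simulate(rng=random.Random(i)) for i in range(1000))
```

The paper's HAM couples such a chain with the PLADD game so that the preparation stage shapes the transition probabilities; here they are simply fixed constants.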
2020-07-16
Ciupe, Aurelia, Mititica, Doru Florin, Meza, Serban, Orza, Bogdan.  2019.  Learning Agile with Intelligent Conversational Agents. 2019 IEEE Global Engineering Education Conference (EDUCON). :1100–1107.

Conversational agents complement traditional teaching-learning instruments by proposing new designs for knowledge creation and learning analysis across organizational environments. Means of building a common educational background in both industry and academia are of interest for ensuring educational effectiveness and consistency. Such a context requires transferable practices and becomes the basis for Agile adoption in Higher Education, at both the curriculum and operational levels. The current work proposes a model for delivering Agile Scrum training through an assistive web-based conversational service, where analytics are collected to provide an overview of learners' knowledge paths. Besides its specific applicability in the Software Engineering (SE) industry, the model is intended to assist the academic SE curriculum. A user-acceptance test has been carried out among 200 undergraduate students, and patterns of interaction have been depicted for 2 conversational strategies.

2020-06-04
Briggs, Shannon, Perrone, Michael, Peveler, Matthew, Drozdal, Jaimie, Balagyozyan, Lilit, Su, Hui.  2019.  Multimodal, Multiuser Immersive Brainstorming and Scenario Planning for Intelligence Analysis. 2019 IEEE International Symposium on Technologies for Homeland Security (HST). :1–4.

This paper discusses two pieces of software designed for intelligence analysis, the brainstorming tool and the Scenario Planning Advisor. These tools were developed in the Cognitive Immersive Systems Lab (CISL) in conjunction with IBM. We discuss the immersive environment the tools are situated in, and the proposed benefit for intelligence analysis.

2020-03-02
Hamadah, Siham, Aqel, Darah.  2019.  A Proposed Virtual Private Cloud-Based Disaster Recovery Strategy. 2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT). :469–473.

A disaster is an unexpected event in a system's lifetime, caused by nature or even by human error. Disaster recovery of information technology is an area of information security concerned with protecting data against unsatisfactory events. It involves a set of procedures and tools for returning an organization to a state of normality after a disastrous event, so organizations need to have a good disaster recovery plan in place. There are many strategies for traditional disaster recovery and also for cloud-based disaster recovery. This paper focuses on using cloud-based disaster recovery strategies instead of traditional techniques, since cloud-based disaster recovery has proved its efficiency in restoring the continuity of services faster and at lower cost than traditional approaches. The paper introduces a proposed model for virtual private disaster recovery in the cloud based on two metrics: the recovery time objective and the recovery point objective. The proposed model has been evaluated by experts in the field of information technology, and the results show that the model addresses security and business continuity concerns and enables faster recovery from a disaster that could face an organization. The paper also highlights cloud computing services and illustrates the main benefits of cloud-based disaster recovery.
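The two metrics can be sketched as a simple compliance check (the durations below are hypothetical; RTO bounds acceptable downtime, RPO bounds acceptable data loss measured back to the last usable backup):

```python
from datetime import timedelta

def meets_objectives(downtime, data_loss_window, rto, rpo):
    """Check a recovery outcome against the two planning metrics:
    the recovery time objective (RTO) caps downtime, and the
    recovery point objective (RPO) caps the data-loss window."""
    return downtime <= rto and data_loss_window <= rpo

# Hypothetical drill result against hypothetical objectives.
ok = meets_objectives(
    downtime=timedelta(minutes=45),
    data_loss_window=timedelta(minutes=10),
    rto=timedelta(hours=1),
    rpo=timedelta(minutes=15),
)
```

Cloud-based recovery tends to improve both numbers at once, since standby virtual infrastructure shortens restore time and frequent replication shortens the data-loss window.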

2019-11-25
Pham, Dinh-Lam, Ahn, Hyun, Kim, Kwanghoon.  2019.  A Temporal Work Transference Event Log Trace Classification Algorithm and Its Experimental Analysis. 2019 21st International Conference on Advanced Communication Technology (ICACT). :692–696.

In the field of process mining, much of the information about what happened inside an information system has been exploited and has yielded significant results. However, information about the relationships between performers is utilized and evaluated only in certain aspects. In this paper, we propose an algorithm to classify temporal work transference from workflow enactment event logs. The result may be used to reduce system memory and increase computation speed. Furthermore, it can be used as one of the factors for evaluating performers and the active role of resources in the information system.
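A minimal sketch of extracting temporal work-transference relations from an enactment log, assuming a trace of (timestamp, activity, performer) events (the paper's classification rules are more elaborate than this adjacency heuristic):

```python
def transfer_pairs(trace):
    """Extract temporal work-transference pairs (from_performer,
    to_performer) from one workflow enactment trace: work transfers
    whenever consecutive activities are done by different performers.
    trace: list of (timestamp, activity, performer) tuples."""
    ordered = sorted(trace)                      # order events by timestamp
    pairs = []
    for (_, _, a), (_, _, b) in zip(ordered, ordered[1:]):
        if a != b:
            pairs.append((a, b))
    return pairs

# Hypothetical single-case trace.
trace = [(1, "register", "alice"), (2, "review", "bob"),
         (3, "approve", "bob"), (4, "archive", "carol")]
pairs = transfer_pairs(trace)
```

Aggregating such pairs over all traces yields the performer-to-performer relation the abstract refers to, which can then be classified or used to score each resource's active role.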

2020-11-16
Yu, J., Ding, F., Zhao, X., Wang, Y..  2018.  An Resilient Cloud Architecture for Mission Assurance. 2018 IEEE 4th Information Technology and Mechatronics Engineering Conference (ITOEC). :343–346.
In view of the demand for continuous guarantee capability of information systems under diversified tasks and complex cyber threat environments, a dual-loop architecture of a resilient cloud environment for mission assurance is proposed. First, the general technical architecture of the cloud environment is briefly introduced. Drawing on the idea of software definition, a resilient dual-loop architecture based on "perception - analysis - planning - adjustment" is constructed. Then, the deployment mechanism of the core mission assurance system is designed using the idea of distributed control. Finally, the core mission assurance system is designed in detail; it consists of functional modules including a mission and environment awareness network, intelligent anomaly analysis and prediction, mission and resource situation generation, mission and resource planning, and adaptive optimization and adjustment. The design of the dual-loop architecture of the resilient cloud environment for mission assurance will further enhance the fast adaptability of information systems in complex cyber-physical environments.
2019-09-05
Gryzunov, V. V., Bondarenko, I. Y..  2018.  A Social Engineer in Terms of Control Theory. 2018 Third International Conference on Human Factors in Complex Technical Systems and Environments (ERGO). :202–204.

Problem: Today, many methods of influencing personnel in the communication process are available to social engineers and information security specialists, but in practice it is difficult to say which method is appropriate to use and why; criteria and indicators of effective communication are not formalized. Purpose: to formalize the concept of effective communication, to offer a tool for combining existing methods and means of communication, and to formalize the purpose of communication. Methods: use of the terminal model of a control system for a non-stochastic communication object. Results: two examples demonstrate the possibility of using the terminal model of the communication control system, which allows one to connect tools and methods of communication, justify requirements for the structure and feedback of communication, and select the necessary communication algorithms depending on the observed response of the communication object. Practical significance: the results of the research can be used in planning and conducting effective communication in the process of information protection, in business, in private relationships and in other areas of human activity.

2018-12-03
Matta, R. de, Miller, T..  2018.  A Strategic Manufacturing Capacity and Supply Chain Network Design Contingency Planning Approach. 2018 IEEE Technology and Engineering Management Conference (TEMSCON). :1–6.

We develop a contingency planning methodology for how a firm would build a global supply chain network with reserve manufacturing capacity that can be strategically deployed in the event that actual demand exceeds the forecast. The contingency planning approach comprises: (1) a strategic network design model for finding the profit-maximizing plant locations, manufacturing capacity and inventory investments, production levels and product distribution; and (2) a scenario planning and risk assessment scheme to analyze the costs and benefits of alternative levels of manufacturing capacity and inventory investment. We develop an efficient heuristic procedure to solve the model and show numerically how a firm would use our approach to explore and weigh the potential upside benefits and downside risks of alternative strategies.

2018-06-07
Hinojosa, V..  2017.  A generalized stochastic N-m security-constrained generation expansion planning methodology using partial transmission distribution factors. 2017 IEEE Power Energy Society General Meeting. :1–5.

This study proposes an efficient formulation to solve the stochastic security-constrained generation capacity expansion planning (GCEP) problem, using an improved method to directly compute the generalized generation distribution factors (GGDF) and the line outage distribution factors (LODF) in order to model the pre- and post-contingency constraints based solely on the partial transmission distribution factors (PTDF). The classical DC-based formulation has been reformulated to include the security criteria, solving both pre- and post-contingency constraints simultaneously. The methodology also takes load uncertainty into account using a two-stage multi-period model, and a clustering technique is used to reduce the number of load scenarios (stochastic problem). The main advantage of this methodology is the ability to quickly compute the LODF, especially with multiple-line outages (N-m). This idea could speed up contingency analyses and significantly improve the security-constrained analyses applied to GCEP problems. It is worth mentioning that this approach is carried out without sacrificing optimality.

Hinojosa, V., Gonzalez-Longatt, F..  2017.  Stochastic security-constrained generation expansion planning methodology based on a generalized line outage distribution factors. 2017 IEEE Manchester PowerTech. :1–6.

In this study, an efficient formulation is proposed for solving the stochastic security-constrained generation capacity expansion planning (SC-GCEP) problem. The main idea is to directly compute the line outage distribution factors (LODF), which can be applied to model the N-m post-contingency analysis. In addition, the post-contingency power flows are modeled based on the LODF and the partial transmission distribution factors (PTDF). The post-contingency constraints have been reformulated using the linear distribution factors (PTDF and LODF) so that both the pre- and post-contingency constraints are modeled simultaneously in the SC-GCEP problem. In the stochastic formulation, load uncertainty is incorporated using a two-stage multi-period framework, and a K-means clustering technique is implemented to decrease the number of load scenarios. The main advantage of this methodology is the ability to quickly compute the post-contingency factors, especially with multiple-line outages (N-m). This concept improves the security-constraint analysis by quickly modeling the outage of m transmission lines in the stochastic SC-GCEP problem. Several experiments using two electrical power systems are carried out to validate the performance of the proposed formulation.
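The direct LODF computation that both expansion-planning papers rely on follows a standard DC power-flow relation, sketched here on a toy three-bus loop (signs depend on the assumed line orientations, and radial lines, whose self-PTDF equals 1, are excluded):

```python
def lodf_from_ptdf(ptdf_lk):
    """Single-line-outage LODFs from line-to-line PTDFs.
    ptdf_lk[l][k]: flow change on line l when 1 MW is shifted across
    the terminals of line k. Standard DC relation:
        LODF[l][k] = ptdf_lk[l][k] / (1 - ptdf_lk[k][k]),  l != k
    (undefined for radial lines, where ptdf_lk[k][k] == 1)."""
    n = len(ptdf_lk)
    lodf = [[0.0] * n for _ in range(n)]
    for k in range(n):
        denom = 1.0 - ptdf_lk[k][k]
        for l in range(n):
            if l == k:
                lodf[l][k] = -1.0   # the outaged line loses all its flow
            else:
                lodf[l][k] = ptdf_lk[l][k] / denom
    return lodf

# Toy 3-bus, 3-line loop with equal reactances: shifting 1 MW across any
# line sends 2/3 over that line and 1/3 over the two-line parallel path.
ptdf = [[ 2/3, -1/3, 1/3],
        [-1/3,  2/3, 1/3],
        [ 1/3,  1/3, 2/3]]
lodf = lodf_from_ptdf(ptdf)
```

In this triangle, losing one line shifts its entire flow onto the remaining two-line path, so the off-diagonal LODF magnitudes come out as 1, which is the expected result for a single loop.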

2018-02-28
Kaelbling, L. P., Lozano-Pérez, T..  2017.  Learning composable models of parameterized skills. 2017 IEEE International Conference on Robotics and Automation (ICRA). :886–893.

There has been a great deal of work on learning new robot skills, but very little consideration of how these newly acquired skills can be integrated into an overall intelligent system. A key aspect of such a system is compositionality: newly learned abilities have to be characterized in a form that will allow them to be flexibly combined with existing abilities, affording a (good!) combinatorial explosion in the robot's abilities. In this paper, we focus on learning models of the preconditions and effects of new parameterized skills, in a form that allows those actions to be combined with existing abilities by a generative planning and execution system.

2018-05-02
Clifford, J., Garfield, K., Towhidnejad, M., Neighbors, J., Miller, M., Verenich, E., Staskevich, G..  2017.  Multi-layer model of swarm intelligence for resilient autonomous systems. 2017 IEEE/AIAA 36th Digital Avionics Systems Conference (DASC). :1–4.

Embry-Riddle Aeronautical University (ERAU) is working with the Air Force Research Lab (AFRL) to develop a distributed multi-layer autonomous UAS planning and control technology for gathering intelligence in Anti-Access Area Denial (A2/AD) environments populated by intelligent adaptive adversaries. These resilient autonomous systems are able to navigate through hostile environments while performing Intelligence, Surveillance, and Reconnaissance (ISR) tasks, and minimizing the loss of assets. Our approach incorporates artificial life concepts, with a high-level architecture divided into three biologically inspired layers: cyber-physical, reactive, and deliberative. Each layer has a dynamic level of influence over the behavior of the agent. Algorithms within the layers act on a filtered view of reality, abstracted in the layer immediately below. Each layer takes input from the layer below, provides output to the layer above, and provides direction to the layer below. Fast-reactive control systems in lower layers ensure a stable environment supporting cognitive function on higher layers. The cyber-physical layer represents the central nervous system of the individual, consisting of elements of the vehicle that cannot be changed such as sensors, power plant, and physical configuration. On the reactive layer, the system uses an artificial life paradigm, where each agent interacts with the environment using a set of simple rules regarding wants and needs. Information is communicated explicitly via message passing and implicitly via observation and recognition of behavior. In the deliberative layer, individual agents look outward to the group, deliberating on efficient resource management and cooperation with other agents. Strategies at all layers are developed using machine learning techniques such as genetic algorithms (GA) or neural networks (NN) applied to system training that takes place prior to the mission.

2015-11-12
Xia, Weiyi, Kantarcioglu, Murat, Wan, Zhiyu, Heatherly, Raymond, Vorobeychik, Yevgeniy, Malin, Bradley.  2015.  Process-Driven Data Privacy. Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. :1021–1030.

The quantity of personal data gathered by service providers via our daily activities continues to grow at a rapid pace. The sharing, and subsequent analysis, of such data can support a wide range of activities, but concerns around privacy often prompt an organization to transform the data to meet certain protection models (e.g., k-anonymity or ε-differential privacy). These models, however, are based on simplistic adversarial frameworks, which can lead to both under- and over-protection. For instance, such models often assume that an adversary attacks a protected record exactly once. We introduce a principled approach that explicitly models the attack process as a series of steps. Specifically, we engineer a factored Markov decision process (FMDP) to optimally plan an attack from the adversary's perspective and assess the privacy risk accordingly. The FMDP captures the uncertainty in the adversary's belief (e.g., the number of identified individuals that match the de-identified data) and enables the analysis of various real-world deterrence mechanisms beyond a traditional protection model, such as a penalty for committing an attack. We present an algorithm to solve the FMDP and illustrate its efficiency by simulating an attack on publicly accessible U.S. census records against a real identified resource of over 500,000 individuals in a voter registry. Our results demonstrate that while traditional privacy models commonly expect an adversary to attack exactly once per record, an optimal attack in our model may involve exploiting none, one, or more individuals in the pool of candidates, depending on context.
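The deterrence analysis the FMDP enables (e.g., a penalty for attacking) can be sketched with a drastically reduced, unfactored MDP (the states, rewards, and probabilities are invented for illustration):

```python
# Toy adversary MDP: in state "candidate" the adversary either attacks a
# protected record (risking a penalty on detection) or stops; "done" and
# "caught" are terminal states with zero continuation value.
GAMMA = 0.95

def value_iteration(p_success=0.6, gain=10.0, penalty=-20.0, iters=200):
    v = {"candidate": 0.0, "caught": 0.0, "done": 0.0}
    for _ in range(iters):
        attack = (p_success * (gain + GAMMA * v["done"])
                  + (1 - p_success) * (penalty + GAMMA * v["caught"]))
        stop = 0.0 + GAMMA * v["done"]
        v["candidate"] = max(attack, stop)   # Bellman backup
    return v

v = value_iteration()
```

With the default penalty the expected value of attacking is negative, so the optimal plan is to attack no one, which mirrors the paper's finding that an optimal attack may exploit none of the candidates; weakening the penalty flips the decision.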

2015-05-05
Tombaz, S., Sang-wook Han, Ki Won Sung, Zander, J..  2014.  Energy Efficient Network Deployment With Cell DTX. Communications Letters, IEEE. 18:977-980.

Cell discontinuous transmission (DTX) is a new feature that enables sleep mode operations at base station (BS) side during the transmission time intervals when there is no traffic. In this letter, we analyze the maximum achievable energy saving of the cell DTX. We incorporate the cell DTX with a clean-slate network deployment and obtain optimal BS density for lowest energy consumption satisfying a certain quality of service requirement considering daily traffic variation. The numerical result indicates that the fast traffic adaptation capability of cell DTX favors dense network deployment with lightly loaded cells, which brings about considerable improvement in energy saving.
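The energy-saving mechanism can be sketched with a simple load-proportional model (the power figures and daily load profile are hypothetical; the letter's model is more detailed):

```python
def daily_energy_kwh(p_active_w, p_sleep_w, load_profile):
    """Daily base-station energy with cell DTX: the BS transmits only
    during the loaded fraction of each hour and sleeps otherwise.
    load_profile: 24 hourly load fractions in [0, 1]."""
    wh = sum(load * p_active_w + (1 - load) * p_sleep_w
             for load in load_profile)
    return wh / 1000.0

# Hypothetical daily traffic variation: quiet night, busy day, mild evening.
profile = [0.1] * 8 + [0.6] * 12 + [0.3] * 4
with_dtx = daily_energy_kwh(1000.0, 100.0, profile)
without_dtx = daily_energy_kwh(1000.0, 1000.0, profile)  # no sleep mode
```

The gap between the two totals grows as cells become more lightly loaded, which is why the letter finds that DTX favors dense deployments of lightly loaded cells.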
 

Sourlas, V., Tassiulas, L..  2014.  Replication management and cache-aware routing in information-centric networks. Network Operations and Management Symposium (NOMS), 2014 IEEE. :1-7.

Content distribution in the Internet places content providers in a dominant position, with delivery happening directly between two end-points, that is, from content providers to consumers. Information-Centrism has been proposed as a paradigm shift from the host-to-host Internet to a host-to-content one, or in other words from an end-to-end communication system to a native distribution network. This trend has attracted the attention of the research community, which has argued that content, instead of end-points, must be at the center stage of attention. Given this emergence of information-centric solutions, the relevant management needs in terms of performance have not been adequately addressed, yet they are absolutely essential for relevant network operations and crucial for the information-centric approaches to succeed. Performance management and traffic engineering approaches are also required to control routing, to configure the logic for replacement policies in caches and to control decisions where to cache, for instance. Therefore, there is an urgent need to manage information-centric resources and in fact to constitute their missing management and control plane which is essential for their success as clean-slate technologies. In this thesis we aim to provide solutions to crucial problems that remain, such as the management of information-centric approaches which has not yet been addressed, focusing on the key aspect of route and cache management.
 

2015-05-01
Chen, R.L.-Y., Cohn, A., Neng Fan, Pinar, A..  2014.  Contingency-Risk Informed Power System Design. Power Systems, IEEE Transactions on. 29:2087-2096.

We consider the problem of designing (or augmenting) an electric power system at a minimum cost such that it satisfies the N-k-ε survivability criterion. This survivability criterion is a generalization of the well-known N-k criterion, and it requires that at least a (1-ε_j) fraction of the steady-state demand be met after failures of j components, for j=0,1,...,k. The network design problem adds another level of complexity to the notoriously hard contingency analysis problem, since contingency analysis is only one of the requirements for the design optimization problem. We present a mixed-integer programming formulation of this problem that takes into account both transmission and generation expansion. We propose an algorithm that can avoid combinatorial explosion in the number of contingencies, by seeking vulnerabilities in intermediary solutions and constraining the design space accordingly. Our approach is built on our ability to identify such system vulnerabilities quickly. Our empirical studies on modified instances of the IEEE 30-bus and IEEE 57-bus systems show the effectiveness of our methods. We were able to solve the transmission and generation expansion problems for k=4 in approximately 30 min, while other approaches failed to provide a solution at the end of 2 h.
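The N-k-ε criterion can be sketched as a brute-force feasibility check on a single-bus toy system (the capacities and ε values are illustrative; the paper of course accounts for network constraints and specifically avoids enumerating contingencies like this):

```python
from itertools import combinations

def survivable(capacity, demand, k, eps):
    """Check the N-k-epsilon criterion, ignoring the network: after every
    failure of j components (j = 0..k), surviving generation must cover
    at least a (1 - eps[j]) fraction of demand.
    capacity: generator capacities; eps: list of length k+1."""
    for j in range(k + 1):
        for failed in combinations(range(len(capacity)), j):
            surviving = sum(c for i, c in enumerate(capacity)
                            if i not in failed)
            if surviving < (1 - eps[j]) * demand:
                return False
    return True

# Hypothetical fleet: must serve 100% with all units, 90% after any single
# outage, and 65% after any double outage.
ok = survivable([50, 50, 40, 30], demand=100, k=2, eps=[0.0, 0.1, 0.35])
```

The exponential number of `combinations` calls here is exactly the combinatorial explosion the paper's algorithm sidesteps by generating only the vulnerabilities that bind on intermediary designs.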