Biblio
V
Eltayesh, Faryed, Bentahar, Jamal.  2017.  Verifiable Outsourced Database in the Cloud Using Game Theory. Proceedings of the Symposium on Applied Computing. :370–377.

In the verifiable database (VDB) model, a computationally weak client (the database owner) delegates his database management to a database service provider in the cloud, which is considered an untrusted third party, while users can query the data and verify the integrity of query results. Since the process can be computationally costly and has limited support for sophisticated query types such as aggregated queries, we propose in this paper a framework that helps bridge the gap between security and practicality. The proposed framework remodels the verifiable database problem as a Stackelberg security game. In the new model, the database owner creates and uploads to the database service provider the database and its authentication structure (AS). Next, the game is played between the defender (verifier), who is a party trusted by the database owner and runs scheduled randomized verifications using a Stackelberg mixed strategy, and the database service provider. The idea is to randomize the verification schedule in an optimized way that grants the optimal payoff to the verifier while making it extremely hard for the database service provider or any attacker to figure out which part of the database is being verified next. We have implemented the proposed model and compared its performance with a uniform randomization model. Simulation results show that the proposed model outperforms the uniform randomization model. Furthermore, we have evaluated the efficiency of the proposed model against different cost metrics.
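
As an illustration of the randomized verification scheduling idea (a minimal sketch under an assumed zero-sum payoff structure, not the paper's algorithm), a Stackelberg mixed strategy can be approximated by choosing coverage probabilities that equalize the attacker's expected payoff across database partitions, then sampling the next partition to verify from that distribution:

    import random

    def coverage(values, budget=1.0):
        """Coverage probabilities that equalize the attacker's expected
        payoff value * (1 - coverage) across targets (water-filling)."""
        lo, hi = 0.0, max(values)
        for _ in range(60):              # binary search on the payoff level
            mid = (lo + hi) / 2
            if sum(max(0.0, 1 - mid / v) for v in values) > budget:
                lo = mid
            else:
                hi = mid
        return [max(0.0, 1 - hi / v) for v in values]

    # hypothetical payoffs: what an attacker gains by tampering with
    # each partition while it goes unverified
    partitions = ["P1", "P2", "P3", "P4"]
    c = coverage([8.0, 4.0, 2.0, 1.0])
    next_to_verify = random.choices(partitions, weights=c)[0]

Because the verifier samples from c rather than following a fixed rotation, the provider cannot predict which partition is checked next, which is the property the abstract emphasizes.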

Ren, Yanli, Ding, Ning, Zhang, Xinpeng, Lu, Haining, Gu, Dawu.  2016.  Verifiable Outsourcing Algorithms for Modular Exponentiations with Improved Checkability. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :293–303.

The problem of securely outsourcing computation has received widespread attention due to the development of cloud computing and mobile devices. In this paper, we first propose a secure verifiable outsourcing algorithm for single modular exponentiation based on the one-malicious model of two untrusted servers. The outsourcer can detect any failure with probability 1 if one of the servers misbehaves. We also present a verifiable outsourcing algorithm for multiple modular exponentiations based on the same model. Compared with the state-of-the-art algorithms, the proposed algorithms improve both checkability and efficiency for the outsourcer. Finally, we utilize the proposed algorithms as two subroutines to achieve outsource-secure polynomial evaluation and a ciphertext-policy attribute-based encryption (CP-ABE) scheme with verifiable outsourced encryption and decryption.
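
The one-malicious model can be pictured with a toy consistency check (our sketch, not the paper's algorithm, which additionally splits and blinds the computation so the servers learn nothing about the base or exponent): if the same subtask is posed to both servers and at most one of them lies, any lie shows up as a disagreement, so misbehaviour is detected with probability 1.

    def outsourced_pow(server_a, server_b, base, exp, mod):
        """Delegate base^exp mod mod to two servers; under the
        one-malicious model, any disagreement exposes the cheat."""
        r_a = server_a(base, exp, mod)
        r_b = server_b(base, exp, mod)
        if r_a != r_b:
            raise RuntimeError("server misbehaviour detected")
        return r_a

    honest = lambda b, e, m: pow(b, e, m)
    cheat = lambda b, e, m: pow(b, e, m) ^ 1     # stand-in for a malicious server

    outsourced_pow(honest, honest, 7, 65537, 2**127 - 1)    # succeeds
    # outsourced_pow(honest, cheat, 7, 65537, 2**127 - 1)   # raises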

Dai, H., Zhu, X., Yang, G., Yi, X..  2017.  A Verifiable Single Keyword Top-k Search Scheme against Insider Attacks over Cloud Data. 2017 3rd International Conference on Big Data Computing and Communications (BIGCOM). :111–116.

With the development of cloud computing and its economic benefits, more and more companies and individuals outsource their data and computation to clouds. Meanwhile, this way of outsourcing resources takes the data out of its owner's control and results in many security issues. Existing secure keyword search methods assume that cloud servers are honest-but-curious or partially honest, which leaves them powerless against the deliberately falsified or fabricated results of insider attacks. In this paper, we propose a verifiable single keyword top-k search scheme against insider attacks which can verify the integrity of search results. Data owners generate verification codes (VCs) for the corresponding files, which embed the ordered sequence information of the relevance scores between files and keywords. The files and corresponding VCs are then outsourced to cloud servers. When a data user performs a keyword search on the cloud servers, the qualifying result files are determined according to the relevance scores between the files and the keyword of interest and then returned to the data user together with a VC. The integrity of the result files is verified by data users through reconstructing a new VC over the received files and comparing it with the received one. Performance evaluations have been conducted to demonstrate the efficiency and result redundancy of the proposed scheme.
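
The VC mechanism can be sketched as follows (a hedged illustration assuming a keyed hash shared between data owner and users; the paper's actual encoding of the ordered score sequence may differ): the owner MACs the keyword together with the ordered (file id, score) pairs, and the user recomputes the code over the returned top-k list.

    import hmac, hashlib

    def make_vc(key, keyword, ranked):
        """ranked: list of (file_id, score) sorted by descending relevance."""
        mac = hmac.new(key, keyword.encode(), hashlib.sha256)
        for fid, score in ranked:
            mac.update(f"{fid}:{score}".encode())
        return mac.hexdigest()

    def verify(key, keyword, returned, vc):
        """Detects dropped, reordered, or substituted result files."""
        return hmac.compare_digest(make_vc(key, keyword, returned), vc)

    key = b"owner-user shared secret"     # assumed key distribution
    vc = make_vc(key, "cloud", [("f7", 0.91), ("f2", 0.54)])
    assert verify(key, "cloud", [("f7", 0.91), ("f2", 0.54)], vc)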

Kang, E. Y., Mu, D., Huang, L., Lan, Q..  2017.  Verification and Validation of a Cyber-Physical System in the Automotive Domain. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :326–333.

Software development for Cyber-Physical Systems (CPS), e.g., autonomous vehicles, requires both functional and non-functional quality assurance to guarantee that the CPS operates safely and effectively. EAST-ADL is a domain-specific architectural language dedicated to safety-critical automotive embedded system design. We have previously modified EAST-ADL to include energy constraints and transformed energy-aware real-time (ERT) behaviors modeled in EAST-ADL/Stateflow into UPPAAL models amenable to formal verification. In this paper, previous work is extended by including support for Simulink and an integration of Simulink/Stateflow (S/S) within the same toolchain. S/S models are transformed, based on the extended ERT constraints with probability parameters, into verifiable UPPAAL-SMC models, and the translation is integrated with formal statistical analysis techniques: the probabilistic extension of EAST-ADL constraints is defined as a semantic denotation, and a set of mapping rules is proposed to facilitate a faithful translation. Formal analysis of both functional and non-functional properties is performed using Simulink Design Verifier and UPPAAL-SMC. Our approach is demonstrated on an autonomous traffic sign recognition vehicle case study.
Zhang, T., Zheng, H., Zhang, L..  2018.  Verification CAPTCHA Based on Deep Learning. 2018 37th Chinese Control Conference (CCC). :9056–9060.

At present, CAPTCHAs are widely used on the Internet. This paper introduces a method of CAPTCHA recognition using convolutional neural networks. For CAPTCHAs that are easy to segment, a simply trained convolutional neural network model is applied after segmentation, with a network structure established by imitating the VGGNet model; a correct recognition rate of more than 90% is reached. For CAPTCHAs that are more difficult to segment, an end-to-end approach can be used to train on the CAPTCHA as a whole; in this way, a recognition rate of about 85% is reached.
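
For orientation, a VGG-style end-to-end classifier of the kind described can be sketched in a few lines of PyTorch (our sketch; the 60×160 input size, 4-character code length, and 36-symbol alphabet are assumptions, not figures from the paper):

    import torch
    import torch.nn as nn

    NUM_CHARS, NUM_CLASSES = 4, 36        # assumed code length and alphabet

    class CaptchaNet(nn.Module):
        """VGG-like conv stacks with one classification head per character."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.LazyLinear(NUM_CHARS * NUM_CLASSES)

        def forward(self, x):
            x = self.features(x).flatten(1)
            return self.head(x).view(-1, NUM_CHARS, NUM_CLASSES)

    logits = CaptchaNet()(torch.randn(8, 1, 60, 160))   # shape: (8, 4, 36)

Training would minimize the sum of per-character cross-entropy losses, which is the usual end-to-end alternative to segmenting first.
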
Long, T., Yao, G..  2015.  Verification for Security-Relevant Properties and Hyperproperties. 2015 IEEE 12th Intl Conf on Ubiquitous Intelligence and Computing and 2015 IEEE 12th Intl Conf on Autonomic and Trusted Computing and 2015 IEEE 15th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom). :490–497.

Privacy analysis is essential in society; data privacy preservation for access control and guaranteed service in wireless sensor networks are important parts of it. In program verification, we consider not only these kinds of safety and liveness properties but also security policies such as noninterference and observational determinism, which have been proposed as hyperproperties. Fairness is widely applied in verification for concurrent systems, wireless sensor networks, and embedded systems. This paper studies verification and analysis for proving security-relevant properties and hyperproperties by proposing deductive proof rules under fairness requirements (constraints).

Mikami, Kei, Ando, Daisuke, Kaneko, Kunitake, Teraoka, Fumio.  2016.  Verification of a Multi-Domain Authentication and Authorization Infrastructure Yamata-no-Orochi. Proceedings of the 11th International Conference on Future Internet Technologies. :69–75.

Yamata-no-Orochi is an authentication and authorization infrastructure across multiple service domains and provides Internet services with unified authentication and authorization mechanisms. In this paper, Yamata-no-Orochi is incorporated into a video distribution system to verify its general versatility as a multi-domain authentication and authorization infrastructure for Internet services. This paper also reduces the authorization time of Yamata-no-Orochi to fulfill the processing time constraints of the video distribution system. The evaluation results show that all the authentication and authorization processes work correctly and that the performance of Yamata-no-Orochi is practical for the video distribution system.

Sharma, Mohit, Strathman, Hunter J., Walker, Ross M..  2020.  Verification of a Rapidly Multiplexed Circuit for Scalable Action Potential Recording. 2020 IEEE International Symposium on Circuits and Systems (ISCAS). :1–1.

This report presents characterizations of in vivo neural recordings performed with a CMOS multichannel chip that uses rapid multiplexing directly at the electrodes, without any pre-amplification or buffering. Neural recordings were taken from a 16-channel microwire array implanted in rodent cortex, with comparison to a gold-standard commercial bench-top recording system. We were able to record well-isolated threshold crossings from 10 multiplexed electrodes and typical local field potential waveforms from 16, with strong agreement with the standard system (average SNR = 2.59 and 3.07, respectively). For 10 electrodes, the circuit achieves an effective area per channel of 0.0077 mm2, which is >5× smaller than typical multichannel chips. Extensive characterizations of noise and signal quality are presented and compared to fundamental theory, as well as results from in vivo and in vitro experiments. By demonstrating the validation of rapid multiplexing directly at the electrodes, this report confirms it as a promising approach for reducing circuit area in massively-multichannel neural recording systems, which is crucial for scaling recording site density and achieving large-scale sensing of brain activity with high spatiotemporal resolution.
Kammuller, F..  2014.  Verification of DNSsec Delegation Signatures. 2014 21st International Conference on Telecommunications (ICT). :298–392.

In this paper, we present a formal model for the verification of the DNSsec protocol in the interactive theorem prover Isabelle/HOL. Relying on the inductive approach to security protocol verification, this formal analysis provides a more expressive representation than the widely accepted model checking analysis. Our mechanized model allows us to represent the protocol, all its possible traces, and the attacker and his knowledge. The fine-grained model allows us to show origin authentication and replay attack prevention. Most prominently, we succeed in expressing delegation signatures and proving their authenticity formally.

Chadha, R., Sistla, A. P., Viswanathan, M..  2017.  Verification of Randomized Security Protocols. 2017 32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). :1–12.

We consider the problem of verifying the security of finitely many sessions of a protocol that tosses coins in addition to standard cryptographic primitives against a Dolev-Yao adversary. Two properties are investigated here: secrecy, which asks if no adversary interacting with a protocol P can determine a secret sec with probability > 1 - p; and indistinguishability, which asks if the probability of observing any sequence $\overline{o}$ in P1 is the same as that of observing $\overline{o}$ in P2, under the same adversary. Both secrecy and indistinguishability are known to be coNP-complete for non-randomized protocols. In contrast, we show that, for randomized protocols, secrecy and indistinguishability are both decidable in coNEXPTIME. We also prove a matching lower bound for the secrecy problem by reducing the non-satisfiability problem of monadic first order logic without equality.

Gouglidis, Antonios, Hu, Vincent C., Busby, Jeremy S., Hutchison, David.  2017.  Verification of Resilience Policies That Assist Attribute Based Access Control. Proceedings of the 2nd ACM Workshop on Attribute-Based Access Control. :43–52.

Access control offers mechanisms to control and limit the actions or operations that are performed by a user on a set of resources in a system. Many access control models exist that are able to support this basic requirement. One of the properties examined in the context of these models is their ability to successfully restrict access to resources. Nevertheless, considering only restriction of access may not be enough in some environments, such as critical infrastructures. The protection of systems in this type of environment requires a new line of enquiry. It is essential to ensure that appropriate access is always possible, even when users and resources are subjected to challenges of various sorts. Resilience in access control is conceived as the ability of a system not merely to restrict but rather to ensure access to resources. In order to demonstrate the application of resilience in access control, we formally define an attribute-based access control (ABAC) model based on guidelines provided by the National Institute of Standards and Technology (NIST). We examine how ABAC-based resilience policies can be specified in temporal logic and how these can be formally verified. The verification of resilience is done using an automated model checking technique, which eventually may reduce the overall complexity required for the verification of resilience policies and serve as a valuable tool for administrators.
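
To convey the flavour of such a specification (our illustrative formula, not one taken from the paper), a resilience requirement can be phrased in CTL as

    \mathbf{AG}\,\big(\mathit{qualified}(u, r) \rightarrow \mathbf{EF}\,\mathit{access}(u, r)\big)

read: in every reachable state, whenever user u holds the attributes that qualify him for resource r, some continuation of the system exists in which u actually obtains access to r. A model checker can then verify the ABAC policy model against formulas of this shape.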

St-Martin, Michel, Felty, Amy P..  2016.  A Verified Algorithm for Detecting Conflicts in XACML Access Control Rules. Proceedings of the 5th ACM SIGPLAN Conference on Certified Programs and Proofs. :166–175.

We describe the formalization of a correctness proof for a conflict detection algorithm for XACML (eXtensible Access Control Markup Language). XACML is a standardized declarative access control policy language that is increasingly used in industry. In practice it is common for rule sets to grow large, and contain unintended errors, often due to conflicting rules. A conflict occurs in a policy when one rule permits a request and another denies that same request. Such errors can lead to serious risks involving both allowing access to an unauthorized user as well as denying access to someone who needs it. Removing conflicts is thus an important aspect of debugging policies, and the use of a verified algorithm provides the highest assurance in a domain where security is important. In this paper, we focus on several complex XACML constructs, including time ranges and integer intervals, as well as ways to combine any number of functions using the boolean operators and, or, and not. The latter are the most complex, and add significant expressive power to the language. We propose an algorithm to find conflicts and then use the Coq Proof Assistant to prove the algorithm correct. We develop a library of tactics to help automate the proof.
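
The core conflict test is easy to state concretely. A hedged sketch for the interval-valued attributes mentioned above (our Python illustration, not the verified Coq algorithm): two rules conflict exactly when their effects differ and every attribute range overlaps, so some request matches both.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        effect: str          # "Permit" or "Deny"
        hours: tuple         # inclusive time range, e.g. (9, 17)
        ids: tuple           # inclusive integer interval of user ids

    def overlap(a, b):
        return max(a[0], b[0]) <= min(a[1], b[1])

    def conflicts(r1, r2):
        """A request matching both rules exists iff all ranges overlap;
        the pair is a conflict iff, in addition, the effects differ."""
        return (r1.effect != r2.effect
                and overlap(r1.hours, r2.hours)
                and overlap(r1.ids, r2.ids))

    rules = [Rule("Permit", (9, 17), (100, 200)),
             Rule("Deny", (16, 23), (150, 160))]
    print(conflicts(rules[0], rules[1]))    # True: hours 16-17, ids 150-160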

Ye, Katherine Q., Green, Matthew, Sanguansin, Naphat, Beringer, Lennart, Petcher, Adam, Appel, Andrew W..  2017.  Verified Correctness and Security of mbedTLS HMAC-DRBG. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :2007–2020.

We have formalized the functional specification of HMAC-DRBG (NIST 800-90A), and we have proved its cryptographic security, namely that its output is pseudorandom, using a hybrid game-based proof. We have also proved that the mbedTLS implementation (C program) correctly implements this functional specification. That proof composes with an existing C compiler correctness proof to guarantee, end-to-end, that the machine language program gives strong pseudorandomness. All proofs (hybrid games, C program verification, compiler, and their composition) are machine-checked in the Coq proof assistant. Our proofs are modular: the hybrid game proof holds on any implementation of HMAC-DRBG that satisfies our functional specification. Therefore, our functional specification can serve as a high-assurance reference.
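
For orientation, the HMAC-DRBG core that the specification pins down is short. Below is our own plain-Python rendering of the instantiate/update/generate functions of NIST SP 800-90A (omitting reseed counters, security-strength checks, and additional input), not the verified mbedTLS code:

    import hmac, hashlib

    def H(key, *parts):
        return hmac.new(key, b"".join(parts), hashlib.sha256).digest()

    class HmacDrbg:
        def __init__(self, seed_material):       # Instantiate (SP 800-90A 10.1.2.3)
            self.K, self.V = b"\x00" * 32, b"\x01" * 32
            self._update(seed_material)

        def _update(self, data=b""):             # Update (10.1.2.2)
            self.K = H(self.K, self.V, b"\x00", data)
            self.V = H(self.K, self.V)
            if data:
                self.K = H(self.K, self.V, b"\x01", data)
                self.V = H(self.K, self.V)

        def generate(self, n):                   # Generate (10.1.2.5)
            out = b""
            while len(out) < n:
                self.V = H(self.K, self.V)
                out += self.V
            self._update()
            return out[:n]

    drbg = HmacDrbg(b"entropy input || nonce")   # placeholder seed material
    block = drbg.generate(48)
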
Fournet, Cédric.  2016.  Verified Secure Implementations for the HTTPS Ecosystem: Invited Talk. Proceedings of the 2016 ACM Workshop on Programming Languages and Analysis for Security. :89–89.

The HTTPS ecosystem, including the SSL/TLS protocol, the X.509 public-key infrastructure, and their cryptographic libraries, is the standardized foundation of Internet Security. Despite 20 years of progress and extensions, however, its practical security remains controversial, as witnessed by recent efforts to improve its design and implementations, as well as recent disclosures of attacks against its deployments. The Everest project is a collaboration between Microsoft Research, INRIA, and the community at large that aims at modelling, programming, and verifying the main HTTPS components with strong machine-checked security guarantees, down to core system and cryptographic assumptions. Although HTTPS involves a relatively small amount of code, it requires efficient low-level programming and intricate proofs of functional correctness and security. To this end, we are also improving our verification tools (F*, Dafny, Lean, Z3) and developing new ones. In my talk, I will present our project, review our experience with miTLS, a verified reference implementation of TLS coded in F*, and describe current work towards verified, secure, efficient HTTPS.

Wilcox, James R., Flanagan, Cormac, Freund, Stephen N..  2018.  VerifiedFT: A Verified, High-Performance Precise Dynamic Race Detector. Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. :354–367.

Dynamic data race detectors are valuable tools for testing and validating concurrent software, but to achieve good performance they are typically implemented using sophisticated concurrent algorithms. Thus, they are ironically prone to the exact same kind of concurrency bugs they are designed to detect. To address these problems, we have developed VerifiedFT, a clean-slate redesign of the FastTrack race detector [19]. The VerifiedFT analysis provides the same precision guarantee as FastTrack, but is simpler to implement correctly and efficiently, enabling us to mechanically verify an implementation of its core algorithm using CIVL [27]. Moreover, VerifiedFT provides these correctness guarantees without sacrificing any performance over current state-of-the-art (but complex and unverified) FastTrack implementations for Java.

Khurshid, Ahmed, Zhou, Wenxuan, Caesar, Matthew, Godfrey, P. Brighten (University of Illinois at Urbana-Champaign).  2012.  VeriFlow: Verifying Network-Wide Invariants in Real Time. First Workshop on Hot Topics in Software Defined Networks (HotSDN 2012).

Networks are complex and prone to bugs. Existing tools that check configuration files and data-plane state operate offline at timescales of seconds to hours, and cannot detect or prevent bugs as they arise. Is it possible to check network-wide invariants in real time, as the network state evolves? The key challenge here is to achieve extremely low latency during the checks so that network performance is not affected. In this paper, we present a preliminary design, VeriFlow, which suggests that this goal is achievable. VeriFlow is a layer between a software-defined networking controller and network devices that checks for network-wide invariant violations dynamically as each forwarding rule is inserted. Based on an implementation using a Mininet OpenFlow network and Route Views trace data, we find that VeriFlow can perform rigorous checking within hundreds of microseconds per rule insertion.
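
A toy version of the per-insertion check conveys the idea (our sketch; VeriFlow itself slices the rule space into equivalence classes with a trie and re-checks only the classes an inserted rule affects): after each rule change, walk the longest-prefix-match forwarding graph for an affected packet class and report loops or black holes.

    import ipaddress

    def lookup(fib, addr):
        """Longest-prefix match in one device's FIB, {prefix: next_hop}."""
        best = None
        for prefix, nh in fib.items():
            net = ipaddress.ip_network(prefix)
            if addr in net and (best is None or net.prefixlen > best[0]):
                best = (net.prefixlen, nh)
        return best[1] if best else None         # None = black hole

    def check(fibs, addr, src):
        """Follow next hops for one packet class until exit, loop, or drop."""
        seen, node = set(), src
        while node is not None and node != "EXIT":
            if node in seen:
                return "loop"
            seen.add(node)
            node = lookup(fibs[node], addr)
        return "black hole" if node is None else "ok"

    fibs = {"A": {"10.0.0.0/8": "B"},
            "B": {"10.1.0.0/16": "A", "10.0.0.0/8": "EXIT"}}
    print(check(fibs, ipaddress.ip_address("10.1.2.3"), "A"))   # "loop"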

Ngo, V. C., Dehesa-Azuara, M., Fredrikson, M., Hoffmann, J..  2017.  Verifying and Synthesizing Constant-Resource Implementations with Types. 2017 IEEE Symposium on Security and Privacy (SP). :710–728.

Side channel attacks have been used to extract critical data such as encryption keys and confidential user data in a variety of adversarial settings. In practice, this threat is addressed by adhering to a constant-time programming discipline, which imposes strict constraints on the way in which programs are written. This introduces an additional hurdle for programmers faced with the already difficult task of writing secure code, highlighting the need for solutions that give the same source-level guarantees while supporting more natural programming models. We propose a novel type system for verifying that programs correctly implement constant-resource behavior. Our type system extends recent work on automatic amortized resource analysis (AARA), a set of techniques that automatically derive provable upper bounds on the resource consumption of programs. We devise new techniques that build on the potential method to achieve compositionality, precision, and automation. A strict global requirement that a program always maintains constant resource usage is too restrictive for most practical applications. It is sufficient to require that the program's resource behavior remain constant with respect to an attacker who is only allowed to observe part of the program's state and behavior. To account for this, our type system incorporates information flow tracking into its resource analysis. This allows our system to certify programs that need to violate the constant-time requirement in certain cases, as long as doing so does not leak confidential information to attackers. We formalize this guarantee by defining a new notion of resource-aware noninterference, and prove that our system enforces it. Finally, we show how our type inference algorithm can be used to synthesize a constant-time implementation from one that cannot be verified as secure, effectively repairing insecure programs automatically. We also show how a second novel AARA system that computes lower bounds on resource usage can be used to derive quantitative bounds on the amount of information that a program leaks through its resource use. We implemented each of these systems in Resource Aware ML, and show that it can be applied to verify constant-time behavior in a number of applications including encryption and decryption routines, database queries, and other resource-aware functionality.
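
The constant-time discipline the paper sets out to relax is easy to illustrate (our toy example, unrelated to the paper's Resource Aware ML implementation): the first comparison below returns at the first mismatching byte, so its running time leaks the length of the matching prefix, while the second touches every byte regardless of the secret.

    def leaky_equal(secret: bytes, guess: bytes) -> bool:
        """Early exit: timing reveals how many leading bytes matched."""
        if len(secret) != len(guess):
            return False
        for s, g in zip(secret, guess):
            if s != g:
                return False
        return True

    def constant_time_equal(secret: bytes, guess: bytes) -> bool:
        """Accumulate the differences so every byte is always inspected
        (in production code, prefer hmac.compare_digest)."""
        if len(secret) != len(guess):
            return False
        diff = 0
        for s, g in zip(secret, guess):
            diff |= s ^ g
        return diff == 0

A type system of the kind described certifies the second style automatically, and its resource-aware noninterference refinement additionally permits resource variation that depends only on public data.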

Hill, Z., Nichols, W. M., Papa, M., Hale, J. C., Hawrylak, P. J..  2017.  Verifying Attack Graphs through Simulation. 2017 Resilience Week (RWS). :64–67.

Verifying attacks against cyber physical systems can be a costly and time-consuming process. By using a simulated environment, attacks can be verified quickly and accurately. By combining the simulation of a cyber physical system with a hybrid attack graph, the effects of a series of exploits can be accurately analysed. Furthermore, the use of a simulated environment to verify attacks may uncover new information about the nature of the attacks.

Finkbeiner, Bernd, Müller, Christian, Seidl, Helmut, Zălinescu, Eugen.  2017.  Verifying Security Policies in Multi-Agent Workflows with Loops. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :633–645.

We consider the automatic verification of information flow security policies of web-based workflows, such as conference submission systems like EasyChair. Our workflow description language allows for loops, non-deterministic choice, and an unbounded number of participating agents. The information flow policies are specified in a temporal logic for hyperproperties. We show that the verification problem can be reduced to the satisfiability of a formula of first-order linear-time temporal logic, and provide decidability results for relevant classes of workflows and specifications. We report on experimental results obtained with an implementation of our approach on a series of benchmarks.

Schaerer, Jakob, Zumbrunn, Severin, Braun, Torsten.  2020.  Veritaa - The Graph of Trust. 2020 2nd Conference on Blockchain Research & Applications for Innovative Networks and Services (BRAINS). :168–175.

Today the integrity of digital documents and the authenticity of their origin is often hard to verify. Existing Public Key Infrastructures (PKIs) are capable of certifying digital identities but do not provide solutions to immutably store signatures, and the process of certification is often not transparent. In this work we propose Veritaa, a Distributed Public Key Infrastructure and Signature Store (DPKISS). The major innovation of Veritaa is the Graph of Trust, a directed graph that uses relations between identity claims to certify the identities and stores signed relations to digital document identifiers. The distributed architecture of Veritaa and the Graph of Trust enables a transparent certification process. To ensure non-repudiation and immutability of all actions that have been signed on the Graph of Trust, an application specific Distributed Ledger Technology (DLT) is used as secure storage. In this work a reference implementation of the proposed architecture was designed and implemented. Furthermore, a testbed was created and used for the evaluation of Veritaa. The evaluation of Veritaa shows the benefits and the high performance of the proposed architecture.

Deshmukh, Vaibhavi, Deshmukh, Swarnima, Deosatwar, Shivani, Sarda, Reva, Kulkarni, Lalit.  2020.  Versatile CAPTCHA Generation Using Machine Learning and Image Processing.

Due to the significant increase in the size of the Internet and the number of users on this platform, there has been a tremendous increase in load on various websites and web-based applications. This load originates at the user end and causes unforeseen conditions that lead to unacceptable consequences such as crashes or data loss at the web-server end. Therefore, there is a need to reduce the load on the server as well as the chances of network attacks, both of which grow with the user base. These undue consequences have two main causes: an overload of users, and an increased number of automated programs or robots. This scenario can be mitigated by introducing a delay in the operation speed at the user end through a CAPTCHA mechanism. Most classical approaches use a single method for CAPTCHA generation; to overcome this, the proposed model uses a versatile image CAPTCHA generation mechanism. We introduce a system that utilizes manual-based, face-detection-based, colour-based, and random-object-insertion techniques to generate four different random types of CAPTCHA. The proposed methodology implements region-of-interest extraction and convolutional neural networks to generate the CAPTCHA effectively.
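
The random dispatch over several generators can be sketched as follows (a hedged Pillow-based illustration; the face-detection, colour-based, and object-insertion generators from the paper are only stubbed here as placeholders):

    import random, string
    from PIL import Image, ImageDraw

    def text_captcha(size=(160, 60), length=5):
        """Illustrative generator: random code drawn over noise lines."""
        code = "".join(random.choices(string.ascii_uppercase + string.digits,
                                      k=length))
        img = Image.new("RGB", size, "white")
        draw = ImageDraw.Draw(img)
        for _ in range(8):                       # distracting strokes
            pts = [(random.randint(0, size[0]), random.randint(0, size[1]))
                   for _ in range(2)]
            draw.line(pts, fill=(random.randrange(200),) * 3, width=2)
        draw.text((15, 20), " ".join(code), fill="black")
        return img, code

    # a real system would register the face/colour/object-insertion
    # variants alongside the text generator
    GENERATORS = [text_captcha]

    def random_captcha():
        """Pick one of the registered generation techniques at random."""
        return random.choice(GENERATORS)()

    image, answer = random_captcha()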

Brenner, Bernhard, Weippl, Edgar, Ekelhart, Andreas.  2019.  A Versatile Security Layer for AutomationML. 2019 IEEE 17th International Conference on Industrial Informatics (INDIN). 1:358–364.

The XML-based data format AutomationML enables vendor-independent exchange of design data between discipline-specific design tools. It is based on Computer Aided Engineering Exchange (CAEX) and hence compatible with the W3C standards XMLEnc (XML encryption) and XMLDsig (XML signatures). However, despite the importance of protecting engineering data, so far no concept has been presented to ensure and control, on a fine-grained level, the confidentiality, authenticity, and accessibility of information stored in AutomationML files. In this paper, we introduce a basic access control scheme for AutomationML that enables user read and write access to be defined for each component. Furthermore, the scheme supports non-repudiation based on a change history and so-called "signature chains". It is also capable of supporting views and restricted access to components. The scheme is based on cryptographic measures, i.e. cryptographic hashing, symmetric encryption, signatures, and asymmetric encryption, and enforces its access control mechanisms through encryption to protect against unauthorized reading, and through signature chains to protect against unauthorized manipulation and to ensure non-repudiation. This approach has the benefit of being independent of the underlying file and operating system, storage location, etc., and it keeps full CAEX-conformity by extending AutomationML. This concept can serve as a basis for software tools that support AutomationML and want to integrate access control features directly into AutomationML.
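
The "signature chain" idea for a tamper-evident change history can be pictured with a bare hash chain (our sketch; the paper's scheme uses real XMLDsig signatures and per-component access control on top of CAEX):

    import hashlib, json

    def digest(record):
        return hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()

    def append_revision(chain, author, change):
        """Each entry binds itself to its predecessor's digest, so any
        later tampering breaks every subsequent link."""
        prev = chain[-1]["digest"] if chain else "0" * 64
        record = {"author": author, "change": change, "prev": prev}
        record["digest"] = digest(record)
        chain.append(record)

    def verify_chain(chain):
        prev = "0" * 64
        for rec in chain:
            body = {k: rec[k] for k in ("author", "change", "prev")}
            if rec["prev"] != prev or rec["digest"] != digest(body):
                return False
            prev = rec["digest"]
        return True

    history = []
    append_revision(history, "alice", "added robot axis component")
    append_revision(history, "bob", "changed drive parameters")
    assert verify_chain(history)
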
Guo, W., Atthanayake, I., Thomas, P..  2020.  Vertical Underwater Molecular Communications via Buoyancy: Gaussian Velocity Distribution of Signal. ICC 2020 - 2020 IEEE International Conference on Communications (ICC). :1–6.

Underwater communication is vital for a variety of defence and scientific purposes. Current optical and sonar-based carriers can deliver high data rates, but their range and reliability are hampered by heavy propagation loss. A vertical Molecular Communication via Buoyancy (MCvB) channel is experimentally investigated here, where the dominant propagation force is buoyancy. Sequential puffs representing modulated symbols are injected, and after the initial loss of momentum the signal is driven by buoyancy forces, which apply to both upward and downward channels. Coupled with the complex interaction of turbulent and viscous diffusion, we experimentally demonstrate that sequential symbols exhibit a Gaussian spatial velocity distribution. Our experimental results use Particle Image Velocimetry (PIV) to trace molecular clusters and infer statistical characteristics of their velocity profile. We believe the results of this experimental paper can be the basis for long-range underwater vertical communication between a deep-sea vehicle and a surface buoy, establishing a covert and reliable delay-tolerant data link. The statistical distribution found in this paper is akin to an antenna pattern, and this knowledge can be used to improve physical security.
Shao, Hang, Japkowicz, N., Abielmona, R., Falcon, R..  2014.  Vessel track correlation and association using fuzzy logic and Echo State Networks. 2014 IEEE Congress on Evolutionary Computation (CEC). :2322–2329.

Tracking moving objects is a task of the utmost importance to the defence community. As this task requires high accuracy, rather than employing a single detector, it has become common to use multiple ones. In such cases, the tracks produced by these detectors need to be correlated (if they belong to the same sensing modality) or associated (if they were produced by different sensing modalities). In this work, we introduce Computational-Intelligence-based methods for correlating and associating various contacts and tracks pertaining to maritime vessels in an area of interest. Fuzzy k-Nearest Neighbours will be used to conduct track correlation and Fuzzy C-Means clustering will be applied for association. In that way, the uncertainty of the track correlation and association is handled through fuzzy logic. To better model the state of the moving target, the traditional Kalman Filter will be extended using an Echo State Network. Experimental results on five different types of sensing systems will be discussed to justify the choices made in the development of our approach. In particular, we will demonstrate the judiciousness of using Fuzzy k-Nearest Neighbours and Fuzzy C-Means on our tracking system and show how the extension of the traditional Kalman Filter by a recurrent neural network is superior to its extension by other methods.
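
The fuzzy k-NN step can be sketched in a few lines of numpy (our illustration of the classic Keller et al. rule with assumed parameters k and m; the paper's feature set and tuning are not reproduced): a new contact's membership in each existing track is the distance-weighted average of its neighbours' memberships, so correlation uncertainty stays graded rather than being forced into a hard assignment.

    import numpy as np

    def fuzzy_knn(x, contacts, memberships, k=3, m=2.0):
        """contacts: (n, d) features of past contacts; memberships: (n, c)
        fuzzy membership of each contact in c candidate tracks.
        Returns the membership vector of the new contact x."""
        d = np.linalg.norm(contacts - x, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / np.maximum(d[idx], 1e-9) ** (2.0 / (m - 1.0))
        return (memberships[idx] * w[:, None]).sum(axis=0) / w.sum()

    contacts = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
    tracks = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
    print(fuzzy_knn(np.array([0.5, 0.5]), contacts, tracks))  # mostly track 0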