Biblio

Filters: Keyword is Neural Network
2021-05-20
Mheisn, Alaa, Shurman, Mohammad, Al-Ma’aytah, Abdallah.  2020.  WSNB: Wearable Sensors with Neural Networks Located in a Base Station for IoT Environment. 2020 7th International Conference on Internet of Things: Systems, Management and Security (IOTSMS). :1—4.
The Internet of Things (IoT) is a recently introduced system paradigm that includes different smart devices and applications, especially in smart cities, e.g., manufacturing, homes, and offices. To improve their awareness capabilities, it is attractive to add more sensors to their framework. In this paper, we propose adding a new wearable sensor connected wirelessly to a neural network located at the base station (WSNB). WSNB enables the added sensor to refine its labels through active learning. The new sensors achieve an average accuracy of 93.81%, which is 4.5% higher than the existing method, removing the need for human support and increasing the life cycle of the sensors by using a neural network approach at the base station.
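A minimal sketch of the base-station idea described above, assuming uncertainty-based active learning; the dataset, feature dimensions, query budget, and labeling rule are illustrative stand-ins, not the paper's setup.

```python
# Hypothetical sketch: a base-station neural network refines labels for a newly
# added wearable sensor via uncertainty-based active learning. All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(200, 8))          # readings from existing sensors
y_labeled = (X_labeled[:, 0] > 0).astype(int)  # stand-in activity labels
X_new = rng.normal(size=(1000, 8))             # unlabeled data from the new sensor

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)

for round_ in range(5):
    clf.fit(X_labeled, y_labeled)
    proba = clf.predict_proba(X_new)
    uncertainty = 1.0 - proba.max(axis=1)      # query the least certain samples
    query = np.argsort(uncertainty)[-20:]
    # In the real system these labels would come from an oracle / other sensors;
    # here they are simulated with the same stand-in rule.
    y_query = (X_new[query, 0] > 0).astype(int)
    X_labeled = np.vstack([X_labeled, X_new[query]])
    y_labeled = np.concatenate([y_labeled, y_query])
    X_new = np.delete(X_new, query, axis=0)
```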
2021-05-13
Li, Yizhi.  2020.  Research on Application of Convolutional Neural Network in Intrusion Detection. 2020 7th International Forum on Electrical Engineering and Automation (IFEEA). :720–723.
At present, our lives are almost inseparable from the network, which provides a great deal of convenience. However, a variety of network security incidents occur very frequently. In recent years, with the continuous development of neural network technology, more and more researchers have applied neural networks to intrusion detection, which has developed into a new research direction in intrusion detection. As long as the neural network is provided with input data including network data packets, through a process of self-learning it can separate abnormal data features and effectively detect abnormal data. Therefore, this article proposes an intrusion detection method based on deep convolutional neural networks (CNN), which is tested on public data sets. The results show that the model has a higher accuracy rate and a lower false negative rate than traditional intrusion detection methods.
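A minimal sketch of a 1-D CNN over flow/packet feature vectors of the kind used for intrusion detection; the feature count (41, as in NSL-KDD-style records) and layer sizes are assumptions, not the paper's architecture.

```python
# Hedged sketch: a small 1-D CNN classifying feature vectors as normal vs. intrusion.
import torch
import torch.nn as nn

class IntrusionCNN(nn.Module):
    def __init__(self, n_features=41, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                 # x: (batch, n_features)
        return self.net(x.unsqueeze(1))   # add a channel dimension

model = IntrusionCNN()
logits = model(torch.randn(8, 41))        # 8 dummy feature vectors
print(logits.shape)                       # torch.Size([8, 2])
```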
2021-03-30
Ashiku, L., Dagli, C..  2020.  Agent Based Cybersecurity Model for Business Entity Risk Assessment. 2020 IEEE International Symposium on Systems Engineering (ISSE). :1—6.

Computer networks and surging advancements of innovative information technology construct a critical infrastructure for network transactions of business entities. Information exchange and data access through such infrastructure are scrutinized by adversaries for vulnerabilities that lead to cyber-attacks. This paper presents an agent-based system modelling approach to conceptualize and extract the explicit and latent structure of complex enterprise systems, as well as human interactions within the system, to determine common vulnerabilities of the entity. The model captures emergent behavior resulting from interactions of multiple network agents, including the number of workstations; regular, administrator and third-party users; external and internal attacks; defense mechanisms for the network setting; and many other parameters. A risk-based approach to modelling the cybersecurity of a business entity is utilized to derive the rate of attacks. A neural network model will generalize the type of attack based on network traffic features, allowing dynamic state changes. Rules of engagement that generate self-organizing behavior will be leveraged to appoint a defense mechanism suitable for the attack-state of the model. The effectiveness of the model will be depicted by a time-state chart that shows the number of affected assets for the different types of attacks triggered by the entity risk, and the time it takes to revert to the normal state. The model will also associate a relevant cost per incident occurrence that drives the need for enhancement of security solutions.

2021-03-09
Rojas-Dueñas, G., Riba, J., Kahalerras, K., Moreno-Eguilaz, M., Kadechkar, A., Gomez-Pau, A..  2020.  Black-Box Modelling of a DC-DC Buck Converter Based on a Recurrent Neural Network. 2020 IEEE International Conference on Industrial Technology (ICIT). :456–461.
Artificial neural networks allow the identification of black-box models. This paper proposes a method aimed at replicating the static and dynamic behavior of a DC-DC power converter based on a recurrent nonlinear autoregressive exogenous neural network. The method proposed in this work applies an algorithm that trains a neural network based on the inputs and outputs (currents and voltages) of a Buck converter. The approach is validated by means of simulated data of a realistic nonsynchronous Buck converter model programmed in Simulink and by means of experimental results. The predictions made by the neural network are compared to the actual outputs of the system, to determine the accuracy of the method, thus validating the proposed approach. Both simulation and experimental results show the feasibility and accuracy of the proposed black-box approach.
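A hedged sketch of the NARX (nonlinear autoregressive exogenous) structure named above: the next output sample is predicted from lagged outputs and lagged inputs. The converter waveforms here are synthetic stand-ins; the paper trains on measured Buck converter currents and voltages, and its network architecture differs.

```python
# NARX-style black-box model: y(t) = f(y(t-1..t-n_y), u(t-1..t-n_u)), fitted with an MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(u, y, n_u=3, n_y=3):
    """Build NARX regressors [y(t-1..t-n_y), u(t-1..t-n_u)] -> y(t)."""
    X, T = [], []
    for t in range(max(n_u, n_y), len(y)):
        X.append(np.r_[y[t - n_y:t], u[t - n_u:t]])
        T.append(y[t])
    return np.array(X), np.array(T)

t = np.linspace(0, 1, 2000)
u = 0.5 + 0.1 * np.sign(np.sin(2 * np.pi * 50 * t))   # stand-in switching input
y = np.convolve(u, np.ones(20) / 20, mode="same")      # stand-in filtered output voltage

X, T = make_lagged(u, y)
narx = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
narx.fit(X, T)
print("one-step-ahead R^2:", narx.score(X, T))
```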
MATSUNAGA, Y., AOKI, N., DOBASHI, Y., KOJIMA, T..  2020.  A Black Box Modeling Technique for Distortion Stomp Boxes Using LSTM Neural Networks. 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC). :653–656.
This paper describes experimental results of modeling distortion-effect stomp boxes using a machine learning approach. Our proposed technique models a distortion stomp box as a neural network consisting of LSTM layers. In this approach, the neural network is employed to learn the nonlinear behavior of the distortion stomp boxes. All the parameters for replicating the distortion sound are estimated through the training process, using input and output signals obtained from some commercial stomp boxes. The experimental results indicate that the proposed technique is capable of replicating the distortion sound using the well-trained neural networks.
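A minimal sketch of the idea, assuming a sample-level LSTM that maps the clean signal to the distorted output; the layer sizes, sequence length, and tanh "distortion" target are illustrative guesses, not the paper's configuration.

```python
# Hedged sketch: LSTM regression from clean audio frames to distorted audio frames.
import torch
import torch.nn as nn

class StompBoxLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2,
                            batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, samples, 1) clean signal
        h, _ = self.lstm(x)
        return self.out(h)                # (batch, samples, 1) predicted distorted signal

model = StompBoxLSTM()
clean = torch.randn(4, 1024, 1)           # dummy input frames
target = torch.tanh(3 * clean)            # stand-in "distorted" target
loss = nn.functional.mse_loss(model(clean), target)
loss.backward()                           # gradients for one training step
print(float(loss))
```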
Muñoz, C. M. Blanco, Cruz, F. Gómez, Valero, J. S. Jimenez.  2020.  Software architecture for the application of facial recognition techniques through IoT devices. 2020 Congreso Internacional de Innovación y Tendencias en Ingeniería (CONIITI). :1–5.

Facial recognition is gaining importance over time due to its wide range of applications, but it remains challenging when facing large variations in the characteristics of the biometric data used in the process, especially with respect to transporting information over the internet in the Internet of Things context. Based on the systematic review and rigorous study that supports the extraction of the most relevant information on this topic [1], a software architecture proposal was generated which contains the basic security requirements necessary for the treatment of the data involved in the application of facial recognition techniques, oriented to an IoT environment. We conclude that the security and privacy considerations for the information registered in IoT devices represent a challenge, and that it is a priority to guarantee that the data circulating on the network are accessible only to the users for whom they were intended.

2021-03-04
Nugraha, B., Nambiar, A., Bauschert, T..  2020.  Performance Evaluation of Botnet Detection using Deep Learning Techniques. 2020 11th International Conference on Network of the Future (NoF). :141—149.

Botnets are one of the major threats on the Internet. They are used for malicious activities to compromise the basic network security goals, namely Confidentiality, Integrity, and Availability. For reliable botnet detection and defense, deep learning-based approaches were recently proposed. In this paper, four different deep learning models, namely Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), hybrid CNN-LSTM, and Multi-layer Perceptron (MLP), are applied for botnet detection, and simulation studies are carried out using the CTU-13 botnet traffic dataset. We use several performance metrics such as accuracy, sensitivity, specificity, precision, and F1 score to evaluate the performance of each model on classifying both known and unknown (zero-day) botnet traffic patterns. The results show that our deep learning models can accurately and reliably detect both known and unknown botnet traffic, and show better performance than other deep learning models.
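For reference, the evaluation metrics named above computed from a binary confusion matrix; the labels below are dummies, whereas in the paper they come from CTU-13 traffic.

```python
# Accuracy, sensitivity, specificity, precision and F1 score from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])   # 1 = botnet, 0 = benign
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)          # recall / true positive rate
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
f1          = 2 * precision * sensitivity / (precision + sensitivity)
print(accuracy, sensitivity, specificity, precision, f1)
```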

Carlini, N., Farid, H..  2020.  Evading Deepfake-Image Detectors with White- and Black-Box Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2804—2813.

It is now possible to synthesize highly realistic images of people who do not exist. Such content has, for example, been implicated in the creation of fraudulent social media profiles responsible for disinformation campaigns. Significant efforts are, therefore, being deployed to detect synthetically-generated content. One popular forensic approach trains a neural network to distinguish real from synthetic content. We show that such forensic classifiers are vulnerable to a range of attacks that reduce the classifier to near-0% accuracy. We develop five attack case studies on a state-of-the-art classifier that achieves an area under the ROC curve (AUC) of 0.95 on almost all existing image generators, when only trained on one generator. With full access to the classifier, we can flip the lowest bit of each pixel in an image to reduce the classifier's AUC to 0.0005; perturb 1% of the image area to reduce the classifier's AUC to 0.08; or add a single noise pattern in the synthesizer's latent space to reduce the classifier's AUC to 0.17. We also develop a black-box attack that, with no access to the target classifier, reduces the AUC to 0.22. These attacks reveal significant vulnerabilities of certain image-forensic classifiers.
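A small illustration of the first white-box attack mentioned above, flipping the least-significant bit of every 8-bit pixel; in the paper the direction of each flip is chosen using the classifier's gradient, which is not reproduced here — this only shows how small the resulting perturbation is.

```python
# Flip the lowest bit of every pixel of an 8-bit image (perturbation magnitude <= 1).
import numpy as np

image = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
attacked = image ^ 1                    # XOR with 1 flips the least-significant bit
print(np.abs(attacked.astype(int) - image.astype(int)).max())   # == 1
```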

2021-02-23
Al-Emadi, S., Al-Mohannadi, A., Al-Senaid, F..  2020.  Using Deep Learning Techniques for Network Intrusion Detection. 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT). :171—176.
In recent years, there has been a significant increase in network intrusion attacks, which raises great concern from the privacy and security aspects. Due to the advancement of technology, cyber-security attacks are becoming very complex, such that the current detection systems are not sufficient to address this issue. Therefore, an implementation of an intelligent and effective network intrusion detection system would be crucial to solve this problem. In this paper, we use deep learning techniques, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), to design an intelligent detection system which is able to detect different network intrusions. Additionally, we evaluate the performance of the proposed solution using different evaluation metrics, and we present a comparison between the results of our proposed solutions to find the best model for the network intrusion detection system.
Liu, J., Xiao, K., Luo, L., Li, Y., Chen, L..  2020.  An intrusion detection system integrating network-level intrusion detection and host-level intrusion detection. 2020 IEEE 20th International Conference on Software Quality, Reliability and Security (QRS). :122—129.
With the rapid development of the Internet, the issue of cyber security has increasingly gained more attention. An Intrusion Detection System (IDS) is an effective technique to defend against cyber-attacks and reduce security losses. However, the challenge of IDS lies in the diversity of cyber-attackers and the frequently-changing data, requiring a flexible and efficient solution. To address this problem, machine learning approaches are being applied in the IDS field. In this paper, we propose an efficient, scalable neural-network-based hybrid IDS framework that combines Host-level IDS (HIDS) and Network-level IDS (NIDS). We apply autoencoders (AE) to the NIDS and design the HIDS using word embedding and a convolutional neural network. To evaluate the IDS, many experiments are performed on the public datasets NSL-KDD and ADFA. The framework can detect many attacks and reduce the security risk with high efficiency and excellent scalability.
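A hedged sketch of the NIDS half of such a hybrid framework: an autoencoder trained on (mostly normal) flow features, with reconstruction error used as the anomaly score. Feature count, network size, and threshold are illustrative assumptions, not the paper's settings.

```python
# Autoencoder-based anomaly scoring for network flows (synthetic data).
import torch
import torch.nn as nn

ae = nn.Sequential(
    nn.Linear(41, 16), nn.ReLU(),
    nn.Linear(16, 8),  nn.ReLU(),     # bottleneck
    nn.Linear(8, 16),  nn.ReLU(),
    nn.Linear(16, 41),
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

normal = torch.randn(512, 41)                      # stand-in normal traffic features
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(normal), normal)
    loss.backward()
    opt.step()

test = torch.randn(8, 41) + 3.0                    # stand-in anomalous traffic
score = ((ae(test) - test) ** 2).mean(dim=1)       # per-flow reconstruction error
print(score > loss.item() * 5)                      # flag flows with unusually high error
```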
2021-02-22
Haile, J., Havens, S..  2020.  Identifying Ubiquitous Third-Party Libraries in Compiled Executables Using Annotated and Translated Disassembled Code with Supervised Machine Learning. 2020 IEEE Security and Privacy Workshops (SPW). :157–162.
The size and complexity of the software ecosystem is a major challenge for vendors, asset owners and cybersecurity professionals who need to understand the security posture of these systems. Annotated and Translated Disassembled Code is a graph-based datastore designed to organize firmware and software analysis data across builds, packages and systems, providing a highly scalable platform enabling automated binary software analysis tasks, including corpora construction and storage for machine learning. This paper describes an approach for the identification of ubiquitous third-party libraries in firmware and software using Annotated and Translated Disassembled Code and supervised machine learning. Annotated and Translated Disassembled Code provides matched libraries, function names and addresses of previously unidentified code in software as it is being automatically analyzed. This data can be ingested by other software analysis tools to improve accuracy and save time. Defenders can add the identified libraries to their vulnerability searches and add effective detection and mitigation into their operating environment.
2021-02-08
Wang, R., Li, L., Hong, W., Yang, N..  2009.  A THz Image Edge Detection Method Based on Wavelet and Neural Network. 2009 Ninth International Conference on Hybrid Intelligent Systems. 3:420—424.

A THz image edge detection approach based on wavelets and a neural network is proposed in this paper. First, the source image is decomposed by wavelet transform. On the coarsest level of the decomposition, the edges in the low-frequency sub-image are detected using a neural network method, and the edges in the high-frequency sub-images are detected using a wavelet transform method. The two edge images are fused according to some fusion rules to obtain the edge image of this level, which is then projected to the next level. Afterwards, the final edge image of the L-1 level is obtained according to some fusion rule. This process is repeated until level 0 is reached, giving the final integrated and clear edge image. The experimental results show that our approach based on the fusion technique is superior to the Canny operator method and the wavelet transform method alone.
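A rough sketch of the fusion idea on a single wavelet level, with a Canny detector standing in for the (unspecified) neural-network edge detector applied to the low-frequency sub-image; the image, wavelet, threshold, and fusion rule are all illustrative assumptions.

```python
# One-level wavelet decomposition, edge detection on low- and high-frequency parts, and fusion.
import numpy as np
import pywt
from skimage.feature import canny

img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0                         # synthetic image with clear edges

cA, (cH, cV, cD) = pywt.dwt2(img, "haar")       # one-level wavelet decomposition
edges_low = canny(cA, sigma=1.0)                # stand-in for the NN edge detector
detail = np.sqrt(cH**2 + cV**2 + cD**2)         # high-frequency edge strength
edges_high = detail > 0.5 * detail.max()
fused = np.logical_or(edges_low, edges_high)    # simple fusion rule
print(fused.sum(), "edge pixels at the coarse level")
```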

2021-02-01
Ye, H., Liu, W., Huang, S..  2020.  Method of Image Style Transfer Based on Edge Detection. 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). 1:1635–1639.
In order to overcome the problem of edge information loss during neural network processing, a method of neural network style transfer based on edge detection is presented. The edge information of the content image is extracted, and the edge information image is processed in the neural network together with the content image and the style image to constrain the edge information of the content image. Compared with the Gatys algorithm and the Markov random field neural network algorithm, the edge structure of the content image is successfully retained after style transfer.
2021-01-18
Molek, V., Hurtik, P..  2020.  Training Neural Network Over Encrypted Data. 2020 IEEE Third International Conference on Data Stream Mining Processing (DSMP). :23–27.
We answer the question of whether systems with a convolutional neural network classifier trained over plain and over encrypted data keep the same ordering according to accuracy. Our motivation is the need to design convolutional neural network classifiers when data in their plain form are not accessible because of private company policy or because they are sensitive data gathered by police. We propose to use a combination of a fully connected autoencoder together with a convolutional neural network classifier. The autoencoder transforms the data into a form that allows the convolutional classifier to be trained. We present three experiments that show the ordering of systems over plain and encrypted data. The results show that the systems indeed keep the ordering, and thus a NN designer can select an appropriate architecture over encrypted data and later let the data owner train or fine-tune the system/CNN classifier on the plain data.
2021-01-15
Yadav, D., Salmani, S..  2019.  Deepfake: A Survey on Facial Forgery Technique Using Generative Adversarial Network. 2019 International Conference on Intelligent Computing and Control Systems (ICCS). :852—857.
"Deepfake" it is an incipiently emerging face video forgery technique predicated on AI technology which is used for creating the fake video. It takes images and video as source and it coalesces these to make a new video using the generative adversarial network and the output is very convincing. This technique is utilized for generating the unauthentic spurious video and it is capable of making it possible to generate an unauthentic spurious video of authentic people verbally expressing and doing things that they never did by swapping the face of the person in the video. Deepfake can create disputes in countries by influencing their election process by defaming the character of the politician. This technique is now being used for character defamation of celebrities and high-profile politician just by swapping the face with someone else. If it is utilized in unethical ways, this could lead to a serious problem. Someone can use this technique for taking revenge from the person by swapping face in video and then posting it to a social media platform. In this paper, working of Deepfake technique along with how it can swap faces with maximum precision in the video has been presented. Further explained are the different ways through which we can identify if the video is generated by Deepfake and its advantages and drawback have been listed.
2020-12-17
Maram, S. S., Vishnoi, T., Pandey, S..  2019.  Neural Network and ROS based Threat Detection and Patrolling Assistance. 2019 Second International Conference on Advanced Computational and Communication Paradigms (ICACCP). :1—5.

In bringing about a uniform development platform which seamlessly combines hardware components and software architectures from developers across the globe, and in reducing the complexity of producing robots which help people in their daily ergonomics, ROS has come out to be a game changer. It is disappointing to see the lack of penetration of this technology in verticals which involve protection, defense and security. By leveraging the power of ROS in the field of robotic automation and computer vision, this research paves the path for identification of suspicious activity with autonomously moving bots which run on ROS. The research paper proposes and validates a flow where ROS and computer vision algorithms like YOLO can fall in sync with each other to provide smarter and more accurate methods for indoor and limited outdoor patrolling. Identification of age, gender, weapons and other elements which can disturb public harmony is an integral part of the research and development process. The simulation and testing reflect the efficiency and speed of the designed software architecture.

2020-12-14
Arjoune, Y., Salahdine, F., Islam, M. S., Ghribi, E., Kaabouch, N..  2020.  A Novel Jamming Attacks Detection Approach Based on Machine Learning for Wireless Communication. 2020 International Conference on Information Networking (ICOIN). :459–464.
Jamming attacks target a wireless network, creating an unwanted denial of service. 5G is vulnerable to these attacks despite its resilience prompted by the use of millimeter wave bands. Over the last decade, several types of jamming detection techniques have been proposed, including fuzzy logic, game theory, channel surfing, and time series. Most of these techniques are inefficient in detecting smart jammers. Thus, there is a great need for efficient and fast jamming detection techniques with high accuracy. In this paper, we compare the efficiency of several machine learning models in detecting jamming signals. We investigated the types of signal features that identify jamming signals, and generated a large dataset using these parameters. Using this dataset, the machine learning algorithms were trained, evaluated, and tested. These algorithms are random forest, support vector machine, and neural network. The performance of these algorithms was evaluated and compared using the probability of detection, probability of false alarm, probability of missed detection, and accuracy. The simulation results show that the random-forest-based jamming detection algorithm can detect jammers with high accuracy, a high detection probability and a low probability of false alarm.
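A minimal sketch of the comparison described above: train the three models on signal features and report the probability of detection and of false alarm. The features are synthetic stand-ins for the paper's generated dataset, and the model hyperparameters are defaults, not the paper's.

```python
# Compare random forest, SVM and a neural network on (synthetic) jamming detection features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)                            # 1 = jamming present
X = np.c_[rng.normal(y * 2, 1, n),                   # e.g. received signal strength feature
          rng.normal(y, 1, n),                       # e.g. noise / packet error feature
          rng.normal(size=n)]                        # uninformative feature
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"random forest": RandomForestClassifier(random_state=0),
          "SVM": SVC(),
          "neural network": MLPClassifier(max_iter=1000, random_state=0)}
for name, m in models.items():
    m.fit(Xtr, ytr)
    tn, fp, fn, tp = confusion_matrix(yte, m.predict(Xte)).ravel()
    print(f"{name}: P_detection={tp/(tp+fn):.2f}  P_false_alarm={fp/(fp+tn):.2f}")
```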
2020-12-11
Peng, M., Wu, Q..  2019.  Enhanced Style Transfer in Real-Time with Histogram-Matched Instance Normalization. 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). :2001—2006.

Since neural networks are utilized to extract information from an image, Gatys et al. found that they could separate the content and style of images and recombine them into another image, which is called Style Transfer. Moreover, many feed-forward neural networks have been suggested to speed up the original method and make Style Transfer a practical application. However, this comes at a price: these feed-forward networks are unchangeable because of their fixed parameters, which means we cannot transfer arbitrary styles but only a single one in real time. Some coordinated approaches have been offered to relieve this dilemma, such as a style-swap layer and an adaptive instance normalization layer (AdaIN). It is worth mentioning that the AdaIN layer only aligns the means and variances of the content feature maps with those of the style feature maps. Our method is aimed at presenting an operational approach that enables arbitrary style transfer in real time, preserving more statistical information through histogram matching and providing more reliable texture clarity and more humane user control. We achieve better performance than existing approaches without adding computational complexity, at a speed comparable to the fastest Style Transfer methods. Our method provides more flexible user control and trustworthy quality and stability.
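For reference, the AdaIN operation referred to above: the per-channel mean and standard deviation of the content feature maps are aligned with those of the style feature maps. The histogram-matching extension proposed in the paper is not reproduced here.

```python
# Adaptive instance normalization (AdaIN) on feature maps of shape (batch, channels, H, W).
import torch

def adain(content, style, eps=1e-5):
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

out = adain(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(out.shape)   # torch.Size([1, 64, 32, 32])
```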

Cao, Y., Tang, Y..  2019.  Development of Real-Time Style Transfer for Video System. 2019 3rd International Conference on Circuits, System and Simulation (ICCSS). :183—187.

Re-drawing an image in a certain artistic style is considered to be a complicated task for a computer. On the contrary, humans can easily master the method to compose and describe the style between different images. In the past, many researchers studying deep neural networks found an appropriate representation of artistic style using perceptual loss and style reconstruction loss. In previous works, Gatys et al. proposed an artificial system based on convolutional neural networks that creates artistic images of high perceptual quality. However, in terms of running speed it was relatively time-consuming, and thus it cannot be applied to video style transfer. Recently, a feed-forward CNN approach has shown the potential of fast style transformation, which is an end-to-end system without hundreds of iterations during transfer. We combined the benefits of both approaches, optimized the feed-forward network and defined a time loss function to make it possible to implement style transfer on video in real time. In contrast to past methods, our method runs in real time at higher resolution while creating competitive, visually pleasing and temporally consistent experimental results.

2020-12-07
Chang, R., Chang, C., Way, D., Shih, Z..  2018.  An improved style transfer approach for videos. 2018 International Workshop on Advanced Image Technology (IWAIT). :1–2.

In this paper, we present an improved approach to transfer style for videos based on semantic segmentation. We segment foreground objects and background, and then apply different styles respectively. A fully convolutional neural network is used to perform semantic segmentation. We increase the reliability of the segmentation, and use the information of segmentation and the relationship between foreground objects and background to improve segmentation iteratively. We also use segmentation to improve optical flow, and apply different motion estimation methods between foreground objects and background. This improves the motion boundaries of optical flow, and solves the problems of incorrect and discontinuous segmentation caused by occlusion and shape deformation.

2020-11-23
Dong, C., Liu, Y., Zhang, Y., Shi, P., Shao, X., Ma, C..  2018.  Abnormal Bus Data Detection of Intelligent and Connected Vehicle Based on Neural Network. 2018 IEEE International Conference on Computational Science and Engineering (CSE). :171–176.
In this paper, our research on abnormal bus data analysis of intelligent and connected vehicles aims to rapidly and accurately detect the abnormal data generated by hackers who send malicious commands to attack vehicles through three patterns: remote non-contact, short-range non-contact and contact. The research routine is as follows: take the bus data of 10 different brands of intelligent and connected vehicles from real vehicle experiments as the research foundation; set up the optimized neural network; and collect 1000 sets of normal bus data covering 15 kinds of driving scenarios, plus another 300 groups of abnormal bus data generated by attacking the three systems which are most common in intelligent and connected vehicles, as the training set. In the end, after repeated amendments, an intrusion detection system taking 0.5 seconds per detection has been attained, in which for the controlling system abnormal bus data is detected at an accuracy rate of 96% and normal data at 90%; for the body system the rates are 87% and 80%; and for the entertainment system, 80% and 65%.
2020-11-17
Zhou, Z., Qian, L., Xu, H..  2019.  Intelligent Decentralized Dynamic Power Allocation in MANET at Tactical Edge based on Mean-Field Game Theory. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :604—609.

In this paper, the decentralized dynamic power allocation problem is investigated for a mobile ad hoc network (MANET) at the tactical edge. Due to the mobility and self-organizing features of MANETs and the environmental uncertainties of the battlefield, many existing optimal power allocation algorithms are neither efficient nor practical. Furthermore, the continuously increasing scale of the wireless connection population in the emerging Internet of Battlefield Things (IoBT) introduces additional challenges for optimal power allocation due to the "Curse of Dimensionality". In order to address these challenges, a novel Actor-Critic-Mass algorithm is proposed by integrating the emerging Mean Field game theory with online reinforcement learning. The proposed approach is able not only to learn the optimal power allocation for IoBT in a decentralized manner, but also to effectively handle uncertainties from the harsh environment at the tactical edge. In the developed scheme, each agent in the IoBT has three neural networks (NN): 1) a Critic NN learns the optimal cost function that minimizes the signal-to-interference-plus-noise ratio (SINR), 2) an Actor NN estimates the optimal transmitter power adjustment rate, and 3) a Mass NN learns the probability density function of all agents' transmitting power in the IoBT. The three NNs are tuned based on the Fokker-Planck-Kolmogorov (FPK) and Hamilton-Jacobi-Bellman (HJB) equations given by Mean Field game theory. An IoBT wireless network has been simulated to evaluate the effectiveness of the proposed algorithm. The results demonstrate that the Actor-Critic-Mass algorithm can effectively approximate the probability distribution of all agents' transmission power and converge to the target SINR. Moreover, the optimal decentralized power allocation is obtained by integrating mean-field game theory with reinforcement learning.

2020-11-04
Liang, Y., He, D., Chen, D..  2019.  Poisoning Attack on Load Forecasting. 2019 IEEE Innovative Smart Grid Technologies - Asia (ISGT Asia). :1230—1235.

Short-term load forecasting systems for power grids have demonstrated high accuracy and have been widely employed for commercial use. However, classic load forecasting systems, which are based on statistical methods, are subject to vulnerability from training data poisoning. In this paper, we demonstrate a data poisoning strategy that effectively corrupts the forecasting model even in the presence of outlier detection. To the best of our knowledge, a poisoning attack on short-term load forecasting with outlier detection has not been studied in previous works. Our method applies to several forecasting models, including the most widely adopted and best-performing ones, such as multiple linear regression (MLR) and neural network (NN) models. Starting with the MLR model, we develop a novel closed-form solution to quickly estimate the new MLR model after a round of data poisoning without retraining. We then employ line search and simulated annealing to find the poisoning attack solution. Furthermore, we use the MLR attacking solution to generate a numerical solution for other models, such as NN. The effectiveness of our algorithm has been tested on the Global Energy Forecasting Competition (GEFCom2012) data set with the presence of outlier detection.
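A small illustration of the MLR piece: the forecasting model has the closed-form least-squares solution, so the effect of an adversarially chosen training point on the fitted coefficients can be seen directly. The paper derives an update that avoids refitting; this sketch simply refits on synthetic stand-in data.

```python
# Effect of one poisoned training point on a closed-form MLR load forecast model.
import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(200), rng.normal(size=(200, 3))]     # stand-in temperature/calendar features
beta_true = np.array([50.0, 3.0, -2.0, 1.5])
y = X @ beta_true + rng.normal(0, 1, 200)               # stand-in historical load

beta_clean, *_ = np.linalg.lstsq(X, y, rcond=None)      # beta = argmin ||X beta - y||^2

# One adversarially chosen point appended to the training set.
x_poison = np.array([[1.0, 10.0, -10.0, 10.0]])
y_poison = np.array([500.0])
Xp, yp = np.vstack([X, x_poison]), np.concatenate([y, y_poison])
beta_poisoned, *_ = np.linalg.lstsq(Xp, yp, rcond=None)

print("clean   :", np.round(beta_clean, 2))
print("poisoned:", np.round(beta_poisoned, 2))
```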

Rahman, S., Aburub, H., Mekonnen, Y., Sarwat, A. I..  2018.  A Study of EV BMS Cyber Security Based on Neural Network SOC Prediction. 2018 IEEE/PES Transmission and Distribution Conference and Exposition (T D). :1—5.

Recent changes to greenhouse gas emission policies are catalyzing the electric vehicle (EV) market, making it readily accessible to consumers. While there are challenges that arise with dense deployment of EVs, one of the major future concerns is the cyber security threat. In this paper, cyber security threats in the form of tampering with the EV battery's State of Charge (SOC) were explored. A Back Propagation (BP) Neural Network (NN) was trained and tested based on experimental data to estimate the SOC of a battery under normal operation and cyber-attack scenarios. NeuralWare software was used to run the scenarios. Different statistical metrics of the predicted values were compared against the actual values of the specific battery tested to measure the stability and accuracy of the proposed BP network under different operating conditions. The results showed that the BP NN was able to capture and detect false entries due to a cyber-attack on its network.
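A hedged sketch of the general idea: a backpropagation-trained network estimates SOC from measured quantities, and large deviations between reported and predicted SOC flag possibly tampered entries. The features, stand-in SOC relation, and threshold are illustrative assumptions, not the experimental setup or software used in the paper.

```python
# SOC estimation with a small backpropagation-trained network, used to flag tampered entries.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
current = rng.uniform(-2, 2, 1000)                    # charge/discharge current (A)
voltage = 3.0 + 0.2 * rng.random(1000)                # terminal voltage (V)
soc = np.clip((voltage - 3.0) / 0.2 - 0.02 * current, 0, 1)   # stand-in "true" SOC

X = np.c_[current, voltage]
nn = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
nn.fit(X, soc)

reported = soc.copy()
reported[:10] += 0.3                                  # simulated tampered SOC entries
residual = np.abs(reported - nn.predict(X))           # deviation from the NN estimate
print("flagged entries:", np.where(residual > 0.15)[0][:10])
```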

2020-10-12
Rudd-Orthner, Richard N M, Mihaylova, Lyudmilla.  2019.  An Algebraic Expert System with Neural Network Concepts for Cyber, Big Data and Data Migration. 2019 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). :1–6.

This paper describes a machine assistance approach to grading decisions for values that might be missing or need validation, using a mathematical algebraic form of an Expert System instead of the traditional textual or logic forms, and builds a neural network computational graph structure. This Expert System approach is also structured into a neural-network-like format of input, hidden and output layers that provide a structured approach to the knowledge-base organization; this provides a useful abstraction for reuse in data migration applications in big data, Cyber and relational databases. The approach is further enhanced with a Bayesian probability tree approach to grade the confidences of value probabilities, instead of the traditional grading of rule probabilities, and it estimates the most probable value in light of all the evidence presented. This is groundwork for a Machine Learning (ML) expert system approach in a form that is closer to a Neural Network node structure.