Biblio

Filters: Keyword is Training data
2021-09-21
Jin, Xiang, Xing, Xiaofei, Elahi, Haroon, Wang, Guojun, Jiang, Hai.  2020.  A Malware Detection Approach Using Malware Images and Autoencoders. 2020 IEEE 17th International Conference on Mobile Ad Hoc and Sensor Systems (MASS). :1–6.
Most machine learning-based malware detection systems use various supervised learning methods to classify software instances as benign or malicious. This approach provides no information about the behavioral characteristics of malware. It also requires a large amount of training data, is prone to labeling difficulties, and can lose accuracy due to redundant training data. Therefore, we propose a deep learning-based malware detection method that uses malware images and a set of autoencoders. The idea is to design an autoencoder to learn the functional characteristics of malware and then observe its reconstruction error to classify software as malicious or benign. The proposed approach achieves 93% accuracy and comparatively better F1-score values while detecting malware, and it needs little training data compared with traditional malware detection systems.
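
A minimal sketch of the reconstruction-error idea described above, using PyTorch. The architecture, the 64x64 image size, and the thresholding rule are illustrative assumptions, not the paper's exact design.

```python
# Autoencoder trained on grayscale "software images"; a large reconstruction
# error at test time marks the sample as anomalous (malware-like).
import torch
import torch.nn as nn

class ImageAutoencoder(nn.Module):
    def __init__(self, dim=64 * 64, hidden=256, code=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, code), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(code, hidden), nn.ReLU(),
                                     nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fit(model, images, epochs=20, lr=1e-3):
    # images: tensor of shape (N, 64*64), pixel values in [0, 1]
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), images)
        loss.backward()
        opt.step()

def reconstruction_error(model, x):
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)  # per-sample MSE

# Classify: error above a threshold chosen on held-out data => malware.
```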
2021-08-17
Ouchi, Yumo, Okudera, Ryosuke, Shiomi, Yuya, Uehara, Kota, Sugimoto, Ayaka, Ohki, Tetsushi, Nishigaki, Masakatsu.  2020.  Study on Possibility of Estimating Smartphone Inputs from Tap Sounds. 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). :1425–1429.
Smartphone keystrokes are subject to side-channel attacks in which the input can be inferred from tapping sounds. Ilia et al. reported that keystrokes can be predicted with 61% accuracy from tapping sounds captured by the built-in microphone of a legitimate user's device. Li et al. reported that by emitting sonar sounds from an attacker's smartphone speaker and analyzing the waves reflected from a legitimate user's finger during tap input, keystrokes can be estimated with 90% accuracy. However, the method of Ilia et al. requires prior penetration of the target smartphone, which makes the attack scenario implausible: if the attacker can penetrate the smartphone, a keylogger can directly capture the legitimate user's keystrokes. The method of Li et al. is a side-channel attack in which the attacker actively interferes with the legitimate user's terminal, i.e., an active attack scenario. Herein, we analyze the extent to which a user's keystrokes leak to an attacker in a passive attack scenario, where the attacker wiretaps the sounds of the legitimate user's keystrokes using an external microphone. First, we limit the keystrokes to personal identification number input. We then represent the mel-frequency cepstral coefficients of the tapping sound data as image data. Using a convolutional neural network to estimate the key input, we find that the input can be discriminated with high accuracy.
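
An illustrative sketch of the preprocessing step the abstract describes: turning a recorded tap sound into an MFCC "image" suitable for a CNN classifier over the ten PIN digits. The file name, sample rate, and MFCC settings are assumptions.

```python
# Convert one tap-sound clip into a normalized MFCC matrix that can be fed
# to a CNN as a single-channel image.
import librosa
import numpy as np

def tap_to_mfcc_image(wav_path, sr=44100, n_mfcc=40):
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Min-max normalize to [0, 1] so the coefficients behave like pixels.
    mfcc = (mfcc - mfcc.min()) / (mfcc.max() - mfcc.min() + 1e-9)
    return mfcc[np.newaxis, ...]  # shape (1, n_mfcc, frames)
```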
2021-06-30
Wang, Zhaoyuan, Wang, Dan, Duan, Qing, Sha, Guanglin, Ma, Chunyan, Zhao, Caihong.  2020.  Missing Load Situation Reconstruction Based on Generative Adversarial Networks. 2020 IEEE/IAS Industrial and Commercial Power System Asia (I&CPS Asia). :1528–1534.
The completion and correction of measurement data are the foundation of the construction of the ubiquitous power internet of things. However, data may go missing during transport. Therefore, a model of missing load situation reconstruction based on generative adversarial networks is proposed in this paper to overcome the conventional methods' dependence on data from other relevant factors. Through unsupervised training, the proposed model automatically learns complex load features that are difficult to model explicitly and fills in incomplete load data without using other relevant data. Meanwhile, an online correction method is put forward to improve the robustness of the reconstruction model in different scenarios. The proposed method is fully data-driven and contains no explicit modeling process. The test results indicate that the proposed algorithm is well suited to a variety of scenarios, including discontinuous and continuous missing load reconstruction, even with massive data missing. Specifically, the reconstruction error rate of the proposed algorithm stays within 4% when 50% of the load data is absent.
Wang, Chenguang, Tindemans, Simon, Pan, Kaikai, Palensky, Peter.  2020.  Detection of False Data Injection Attacks Using the Autoencoder Approach. 2020 International Conference on Probabilistic Methods Applied to Power Systems (PMAPS). :1–6.
State estimation is of considerable significance for power system operation and control. However, well-designed false data injection attacks can exploit blind spots in conventional residual-based bad data detection methods to manipulate measurements in a coordinated manner, thereby affecting the secure operation and economic dispatch of grids. In this paper, we propose a detection approach based on an autoencoder neural network. By training the network on the dependencies intrinsic to 'normal' operation data, it effectively overcomes the challenge of unbalanced training data that is inherent in power system attack detection. To evaluate the detection performance of the proposed mechanism, we conduct a series of experiments on the IEEE 118-bus power system. The experiments demonstrate that the proposed autoencoder detector displays robust detection performance under a variety of attack scenarios.
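
A small numpy illustration of the blind spot the paper targets: in DC state estimation with z = Hx + e, a coordinated injection a = Hc leaves the least-squares residual unchanged, so residual-based bad data detection cannot flag it. The toy dimensions are arbitrary.

```python
# Coordinated false data injection a = H c is invisible to the residual test.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(20, 5))            # toy measurement matrix: 20 meas., 5 states
x = rng.normal(size=5)
z = H @ x + 0.01 * rng.normal(size=20)  # measurements with small noise

def residual_norm(z, H):
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)  # weighting omitted for brevity
    return np.linalg.norm(z - H @ x_hat)

c = rng.normal(size=5)                  # attacker's chosen state perturbation
a = H @ c                               # coordinated injection
print(residual_norm(z, H), residual_norm(z + a, H))  # residuals are equal
```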
2021-06-24
Lee, Dongseop, Kim, Hyunjin, Ryou, Jaecheol.  2020.  Poisoning Attack on Show and Tell Model and Defense Using Autoencoder in Electric Factory. 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). :538–541.
Recently, deep neural network technology has been developed and used in various fields. Image recognition models can be used for automatic safety checks in an electric factory. However, as deep neural networks develop, the importance of security increases. A poisoning attack is one such security problem: it breaks the model down by inserting malicious data into its training data set. This paper generates adversarial data that shifts feature values toward different targets by manipulating only a small number of RGB values. It then mounts a poisoning attack on one of the image recognition models, the Show and Tell model, and uses an autoencoder to defend against the adversarial data.
2021-06-02
Shi, Jie, Foggo, Brandon, Kong, Xianghao, Cheng, Yuanbin, Yu, Nanpeng, Yamashita, Koji.  2020.  Online Event Detection in Synchrophasor Data with Graph Signal Processing. 2020 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). :1–7.
Online detection of anomalies is crucial to enhancing the reliability and resiliency of power systems. We propose a novel data-driven online event detection algorithm for synchrophasor data using graph signal processing. In addition to being extremely scalable, our proposed algorithm can accurately capture and leverage the spatio-temporal correlations of streaming PMU data. This paper also develops a general technique to decouple spatial and temporal correlations in multiple time series. Finally, we develop a unique framework to construct a weighted adjacency matrix and graph Laplacian for the product graph. Case studies with real-world, large-scale synchrophasor data demonstrate the scalability and accuracy of our proposed event detection algorithm. Compared to the state-of-the-art benchmark, the proposed method not only achieves higher detection accuracy but also yields higher computational efficiency.
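
A hedged sketch of one building block the abstract mentions: a weighted adjacency matrix built from pairwise correlations of PMU channels, its graph Laplacian, and the Laplacian of a Cartesian product graph. The correlation-thresholding rule is an assumption standing in for the paper's construction.

```python
# Build a correlation-based graph over PMU channels and combine a spatial
# and a temporal graph via the Cartesian product Laplacian.
import numpy as np

def correlation_graph(X, threshold=0.5):
    """X: (n_channels, n_samples) window of streaming PMU data."""
    C = np.corrcoef(X)
    A = np.abs(C) * (np.abs(C) >= threshold)   # weighted adjacency
    np.fill_diagonal(A, 0.0)
    L = np.diag(A.sum(axis=1)) - A             # combinatorial Laplacian L = D - A
    return A, L

def cartesian_product_laplacian(L_spatial, L_temporal):
    # For the Cartesian product graph: L = L_s (x) I + I (x) L_t
    n, m = L_spatial.shape[0], L_temporal.shape[0]
    return np.kron(L_spatial, np.eye(m)) + np.kron(np.eye(n), L_temporal)
```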
2021-05-18
Ogawa, Yuji, Kimura, Tomotaka, Cheng, Jun.  2020.  Vulnerability Assessment for Machine Learning Based Network Anomaly Detection System. 2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan). :1–2.
In this paper, we assess the vulnerability of network anomaly detection systems that use machine learning methods. Although these systems outperform detection methods that do not use machine learning, the vulnerability of machine learning methods themselves has become a growing concern, notably among researchers in image processing. If the vulnerabilities of the machine learning used in network anomaly detection methods are exploited by attackers, large security threats are likely to emerge in the near future. Therefore, in this paper we clarify how attacks on the vulnerabilities of machine learning-based network anomaly detection methods affect their performance.
2021-05-13
Fernandes, Steven, Raj, Sunny, Ewetz, Rickard, Pannu, Jodh Singh, Kumar Jha, Sumit, Ortiz, Eddy, Vintila, Iustina, Salter, Margaret.  2020.  Detecting Deepfake Videos using Attribution-Based Confidence Metric. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :1250–1259.
Recent advances in generative adversarial networks have made detecting fake videos a challenging task. In this paper, we propose applying the state-of-the-art attribution-based confidence (ABC) metric to detecting deepfake videos. The ABC metric requires neither access to the training data nor training a calibration model on validation data; it can be used to draw inferences even when only the trained model is available. Here, we utilize the ABC metric to characterize whether a video is original or fake. The deep learning model is trained only on original videos, and the ABC metric uses the trained model to generate confidence values. For original videos, the confidence values are greater than 0.94.
2021-03-30
Ganfure, G. O., Wu, C.-F., Chang, Y.-H., Shih, W.-K..  2020.  DeepGuard: Deep Generative User-behavior Analytics for Ransomware Detection. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1–6.

In the last couple of years, the move to cyberspace has provided a fertile environment for ransomware criminals like never before. Notably, since the introduction of WannaCry, numerous ransomware detection solutions have been proposed. However, ransomware incidence reports show that most organizations impacted by ransomware were running state-of-the-art detection tools. Hence, an alternative solution is urgently required, as the existing detection models are not sufficient to spot emerging ransomware threats. With this motivation, our work proposes "DeepGuard," a novel concept of modeling user behavior for ransomware detection. The main idea is to log the file-interaction pattern of typical user activity and pass it through a deep generative autoencoder architecture to recreate the input. With sufficient training data, the model learns to reconstruct typical user activity (the input) with minimal reconstruction error. Hence, by applying the three-sigma limit rule to the model's output, DeepGuard can distinguish ransomware activity from user activity. The experimental results show that DeepGuard effectively detects a variant class of ransomware with minimal false-positive rates. Overall, modeling attack detection around user behavior gives the proposed strategy deep visibility into various ransomware families.
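
A minimal sketch of the three-sigma limit rule DeepGuard applies to the autoencoder's output: reconstruction errors on typical user activity define the normal band, and anything beyond the mean plus three standard deviations is flagged.

```python
# Three-sigma decision rule over autoencoder reconstruction errors.
import numpy as np

def three_sigma_threshold(errors_on_normal_activity):
    mu = errors_on_normal_activity.mean()
    sigma = errors_on_normal_activity.std()
    return mu + 3.0 * sigma

def is_ransomware_like(reconstruction_error, threshold):
    return reconstruction_error > threshold
```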

Li, Y., Ji, X., Li, C., Xu, X., Yan, W., Yan, X., Chen, Y., Xu, W..  2020.  Cross-domain Anomaly Detection for Power Industrial Control System. 2020 IEEE 10th International Conference on Electronics Information and Emergency Communication (ICEIEC). :383–386.

In recent years, artificial intelligence has been widely used in the field of network security, significantly improving network security analysis and detection. However, because power industrial control systems suffer from a shortage of attack data, directly deploying an artificial intelligence-based network intrusion detection system runs into a lack of data, low precision, and a high false alarm rate. To solve this problem, we propose an anomalous traffic detection method based on cross-domain knowledge transfer. By using the TrAdaBoost algorithm, we achieve a lower error rate than using LSTM alone.
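
A compact, hedged sketch of the instance-transfer idea (TrAdaBoost, Dai et al., 2007) the paper builds on: abundant source-domain data is re-weighted each round so that the scarce target-domain (power ICS) data comes to dominate training. The weak learner, binary 0/1 labels, and round count are illustrative assumptions.

```python
# Simplified TrAdaBoost: shrink weights of misfit source instances, boost
# weights of misfit target instances, vote over the later weak learners.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost(Xs, ys, Xt, yt, n_rounds=10):
    n, m = len(Xs), len(Xt)
    X, y = np.vstack([Xs, Xt]), np.concatenate([ys, yt])
    w = np.ones(n + m)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / n_rounds))
    learners, betas = [], []
    for _ in range(n_rounds):
        p = w / w.sum()
        h = DecisionTreeClassifier(max_depth=3).fit(X, y, sample_weight=p)
        err = np.abs(h.predict(X) - y)               # 0/1 loss per instance
        eps = np.sum(p[n:] * err[n:]) / p[n:].sum()  # weighted error on target
        eps = min(max(eps, 1e-10), 0.499)
        beta_t = eps / (1.0 - eps)
        w[:n] *= beta_src ** err[:n]                 # down-weight misfit source
        w[n:] *= beta_t ** (-err[n:])                # up-weight misfit target
        learners.append(h)
        betas.append(beta_t)
    return learners, betas

def predict(learners, betas, X):
    start = len(learners) // 2                       # vote over later half
    score = sum(-np.log(b) * h.predict(X)
                for h, b in zip(learners[start:], betas[start:]))
    thresh = 0.5 * sum(-np.log(b) for b in betas[start:])
    return (score >= thresh).astype(int)
```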

2021-03-29
Begaj, S., Topal, A. O., Ali, M..  2020.  Emotion Recognition Based on Facial Expressions Using Convolutional Neural Network (CNN). 2020 International Conference on Computing, Networking, Telecommunications Engineering Sciences Applications (CoNTESA). :58–63.

Over the last few years, there has been an increasing number of studies on facial emotion recognition because of its importance and impact on human-computer interaction. With the growing number of challenging datasets, the application of deep learning techniques has become necessary. In this paper, we study the challenges of emotion recognition datasets, and we also try different parameters and architectures of Convolutional Neural Networks (CNNs) in order to detect the seven emotions in human faces: anger, fear, disgust, contempt, happiness, sadness, and surprise. We have chosen iCV MEFED (Multi-Emotion Facial Expression Dataset), which is relatively new, interesting, and very challenging, as the main dataset for our study.
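
An illustrative Keras baseline only (the paper explores several CNN parameters and architectures): a small CNN over 48x48 grayscale faces with the seven emotion classes. The input size and layer sizes are assumptions, not the iCV MEFED specification.

```python
# Minimal 7-class emotion CNN baseline in Keras.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),  # anger .. surprise
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```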

2021-03-09
Kamilin, M. H. B., Yamaguchi, S..  2020.  White-Hat Worm Launcher Based on Deep Learning in Botnet Defense System. 2020 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia). :1–2.

This paper proposes a deep learning-based white-hat worm launcher for the Botnet Defense System (BDS). BDS uses white-hat botnets to defend an IoT system against malicious botnets, and the white-hat worm launcher literally launches white-hat worms to create white-hat botnets according to the strategy decided by BDS. The proposed launcher uses deep learning to learn where white-hat worms should be placed to successfully drive out malicious botnets. Given a system situation invaded by malicious botnets, it predicts the worms' placement from the learned model and launches them. We confirmed the effect of the proposed launcher through simulation-based evaluation.

2021-03-04
Kalin, J., Ciolino, M., Noever, D., Dozier, G..  2020.  Black Box to White Box: Discover Model Characteristics Based on Strategic Probing. 2020 Third International Conference on Artificial Intelligence for Industries (AI4I). :60–63.

In machine learning, white box adversarial attacks rely on knowing underlying attributes of the model. This work focuses on discovering two distinct pieces of model information: the underlying architecture and the primary training dataset. With the process in this paper, a structured set of input probes and the corresponding model outputs become the training data for a deep classifier. Two subdomains of machine learning are explored: image-based classifiers and text transformers with GPT-2. For image classification, the focus is on commonly deployed architectures and datasets available in popular public libraries. For text generation, a single transformer architecture with multiple parameter counts is fine-tuned on different datasets. Each dataset explored in the image and text domains is distinguishable from the others. The diversity of text transformer outputs implies that further research is needed to successfully classify architecture attribution in the text domain.

2021-02-23
Ratti, R., Singh, S. R., Nandi, S..  2020.  Towards implementing fast and scalable Network Intrusion Detection System using Entropy based Discretization Technique. 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–7.

With the advent of networking technologies and increasing network attacks, intrusion detection systems are clearly needed to stop attacks and malicious activities. Various frameworks and techniques have been developed for intrusion detection, yet new frameworks are still needed to cope with the challenging scale of data and the evolving nature of attacks. Current IDSs struggle to sustain the throughput demanded by high-speed networks. In this paper we address the high computational overhead of anomaly-based IDSs and propose a solution that uses discretization as a data preprocessing step, which can drastically reduce the computational overhead. We propose a method for near-real-time detection of attacks using only basic flow-level features that can easily be extracted from network packets.
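
A hedged sketch of entropy-based discretization as a preprocessing step: choose the cut point on a continuous flow feature that minimizes the weighted class entropy of the two resulting bins. A single best cut is shown; recursive splitting per interval (as in MDLP-style methods) would repeat it.

```python
# Find the entropy-minimizing cut point for one continuous feature.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_cut(values, labels):
    """values, labels: 1-D numpy arrays of equal length."""
    order = np.argsort(values)
    v, y = values[order], labels[order]
    best, best_score = None, np.inf
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue
        left, right = y[:i], y[i:]
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if score < best_score:
            best, best_score = (v[i - 1] + v[i]) / 2.0, score
    return best  # cut point with minimum weighted class entropy
```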

2021-02-01
Rutard, F., Sigaud, O., Chetouani, M..  2020.  TIRL: Enriching Actor-Critic RL with non-expert human teachers and a Trust Model. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :604–611.
Reinforcement learning (RL) algorithms have been demonstrated to be very attractive tools for training agents to achieve sequential tasks. However, these algorithms require too much training data to converge to be efficiently applied to physical robots. By using a human teacher, the learning process can be made faster and more robust, but the overall performance heavily depends on the quality and availability of teacher demonstrations or instructions. In particular, when these teaching signals are inadequate, the agent may fail to learn an optimal policy. In this paper, we introduce a trust-based interactive task learning approach. We propose an RL architecture able to learn both from environment rewards and from various sparse teaching signals provided by non-expert teachers, using an actor-critic agent, a human model, and a trust model. We evaluate the performance of this architecture on four different setups in a maze environment with different simulated teachers and show the benefits of the trust model.
2021-01-22
Alghamdi, A. A., Reger, G..  2020.  Pattern Extraction for Behaviours of Multi-Stage Threats via Unsupervised Learning. 2020 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA). :1–8.
Detection of multi-stage threats such as Advanced Persistent Threats (APT) is extremely challenging due to their deceptive approaches. Sequential events of threats might look benign when performed individually or from different addresses. We propose a new unsupervised framework to identify patterns and correlations of malicious behaviours by analysing heterogeneous log-files. The framework consists of two main phases of data analysis to extract inner-behaviours of log-files and then the patterns of those behaviours over analysed files. To evaluate the framework we have produced a (publicly available) labelled version of the SotM43 dataset. Our results demonstrate that the framework can (i) efficiently cluster inner-behaviours of log-files with high accuracy and (ii) extract patterns of malicious behaviour and correlations between those patterns from real-world data.
2020-12-14
Deng, M., Wu, X., Feng, P., Zeng, W..  2020.  Sparse Support Vector Machine for Network Behavior Anomaly Detection. 2020 IEEE 8th International Conference on Information, Communication and Networks (ICICN). :199–204.
Network behavior anomaly detection (NBAD) requires fast mechanisms for learning from large-scale data. However, the training speed of general machine learning approaches is largely limited by training weights for all features in the NBAD. We notice, however, that the weights matched to NBAD features are sparse, so it is not necessary to retain all of them. Hence, in this paper we consider an efficient support vector machine (SVM) approach for NBAD that imposes an ℓ1-norm penalty. Essentially, we propose a sparse SVM (S-SVM), where sparsity in the model, i.e., in the weights, induces feature selection, so that feature selection and classification are achieved together efficiently.
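
The sparse-SVM idea in a few lines of scikit-learn: an ℓ1 penalty on a linear SVM drives most feature weights to zero, performing feature selection and classification jointly. This mirrors the paper's idea but is not its exact solver.

```python
# L1-penalized linear SVM; zero coefficients correspond to dropped features.
from sklearn.svm import LinearSVC

s_svm = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=1.0)
# After s_svm.fit(X_train, y_train), the nonzero entries of s_svm.coef_
# mark the NBAD features the model actually uses.
```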
Habibi, G., Surantha, N..  2020.  XSS Attack Detection With Machine Learning and n-Gram Methods. 2020 International Conference on Information Management and Technology (ICIMTech). :516–520.

Cross-Site Scripting (XSS) is an attack in which malicious scripts are injected into a website, most often to take the user to a webpage specifically designed to steal user sessions and cookies. Nearly 68% of websites are vulnerable to XSS attacks. In this study, the authors evaluate several machine learning methods, namely Support Vector Machine (SVM), K-Nearest Neighbour (KNN), and Naïve Bayes (NB). Each machine learning algorithm is then equipped with the n-gram method applied to each script feature to improve the detection performance for XSS attacks. The simulation results show that SVM with the n-gram method achieves the highest accuracy, at 98%.
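
A hedged sketch of the n-gram plus SVM pipeline the abstract describes, using scikit-learn. The character n-gram range and variable names are assumptions.

```python
# Character n-gram features over script strings feeding a linear SVM.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

xss_clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 3)),  # char 2- and 3-grams
    LinearSVC(),
)
# xss_clf.fit(scripts_train, labels_train)   # labels: 1 = XSS payload, 0 = benign
# xss_clf.predict(scripts_test)
```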

2020-11-04
Zhang, J., Chen, J., Wu, D., Chen, B., Yu, S..  2019.  Poisoning Attack in Federated Learning using Generative Adversarial Nets. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :374–380.

Federated learning is a novel distributed learning framework in which a deep learning model is trained collaboratively among thousands of participants. Only model parameters are shared between the server and participants, which prevents the server from directly accessing the private training data. However, we notice that the federated learning architecture is vulnerable to an active attack from insider participants, called a poisoning attack, in which the attacker acts as a benign participant and uploads poisoned updates to the server, easily affecting the performance of the global model. In this work, we study and evaluate a poisoning attack on a federated learning system based on generative adversarial nets (GAN). That is, the attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, which do not belong to the attacker. These generated samples are fully controlled by the attacker and used to generate the poisoning updates, and the global model is compromised when the attacker uploads the scaled poisoning updates to the server. In our evaluation, we show that the attacker in our construction can successfully generate samples of other benign participants using the GAN, and the global model attains more than 80% accuracy on both the poisoning tasks and the main tasks.
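
A minimal sketch of the "scaled poisoning updates" step: the attacker boosts its poisoned model delta so that it survives server-side averaging over many participants. The scaling factor and variable names are illustrative assumptions.

```python
# Scale the poisoned delta so it dominates FedAvg aggregation.
def scaled_poisoning_update(w_global, w_poisoned, n_participants):
    # With averaging over n participants, scaling the delta by roughly n
    # lets the attacker approximately impose the poisoned model on the
    # aggregated global model. Weights are dicts of numpy arrays.
    return {k: w_global[k] + n_participants * (w_poisoned[k] - w_global[k])
            for k in w_global}
```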

Liang, Y., He, D., Chen, D..  2019.  Poisoning Attack on Load Forecasting. 2019 IEEE Innovative Smart Grid Technologies - Asia (ISGT Asia). :1230–1235.

Short-term load forecasting systems for power grids have demonstrated high accuracy and have been widely employed for commercial use. However, classic load forecasting systems, which are based on statistical methods, are vulnerable to training data poisoning. In this paper, we demonstrate a data poisoning strategy that effectively corrupts the forecasting model even in the presence of outlier detection. To the best of our knowledge, poisoning attacks on short-term load forecasting with outlier detection have not been studied in previous works. Our method applies to several forecasting models, including the most widely adopted and best-performing ones, such as multiple linear regression (MLR) and neural network (NN) models. Starting with the MLR model, we develop a novel closed-form solution to quickly estimate the new MLR model after a round of data poisoning without retraining. We then employ line search and simulated annealing to find the poisoning attack solution. Furthermore, we use the MLR attack solution to generate a numerical solution for other models, such as the NN. The effectiveness of our algorithm has been tested on the Global Energy Forecasting Competition (GEFCom2012) data set in the presence of outlier detection.
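
A plausible sketch of the closed-form update idea, assuming a Sherman-Morrison rank-one update of (X^T X)^{-1} when one poisoned point (x_p, y_p) is injected; the authors' exact derivation may differ.

```python
# Update MLR coefficients after adding one poisoned point, without retraining.
import numpy as np

def mlr_after_one_poison(XtX_inv, Xty, x_p, y_p):
    """XtX_inv: (X^T X)^{-1} of the clean data; Xty: X^T y of the clean data."""
    x_p = x_p.reshape(-1, 1)
    denom = 1.0 + (x_p.T @ XtX_inv @ x_p).item()
    # Sherman-Morrison: (A + x x^T)^{-1} = A^{-1} - A^{-1} x x^T A^{-1} / denom
    XtX_inv_new = XtX_inv - (XtX_inv @ x_p @ x_p.T @ XtX_inv) / denom
    Xty_new = Xty + y_p * x_p.ravel()
    return XtX_inv_new @ Xty_new   # updated regression coefficients
```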

2020-09-28
Oya, Simon, Troncoso, Carmela, Pérez-González, Fernando.  2019.  Rethinking Location Privacy for Unknown Mobility Behaviors. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :416–431.
Location Privacy-Preserving Mechanisms (LPPMs) in the literature largely assume that the users' data available for training wholly characterizes their mobility patterns. Thus, they hardwire this information in their designs and evaluate their privacy properties on these same data. In this paper, we aim to understand the impact of this decision on the level of privacy these LPPMs may offer in real life, when the users' mobility data may differ from the data used in the design phase. Our results show that, in many cases, training data does not capture users' behavior accurately and, thus, the level of privacy provided by the LPPM is often overestimated. To address this gap between theory and practice, we propose to use blank-slate models for LPPM design. Contrary to the hardwired approach, which assumes known user behavior, blank-slate models learn the users' behavior from the queries to the service provider. We leverage this blank-slate approach to develop a new family of LPPMs, which we call Profile Estimation-Based LPPMs. Using real data, we empirically show that our proposal outperforms optimal state-of-the-art mechanisms designed on sporadic hardwired models. In non-sporadic location privacy scenarios, our method is only better if the usage of the location privacy service is not continuous. It is our hope that eliminating the need to bootstrap the mechanisms with training data, and ensuring that the mechanisms are lightweight and easy to compute, will help foster the integration of location privacy protections in deployed systems.
2020-09-08
Isnan Imran, Muh. Ikhdar, Putrada, Aji Gautama, Abdurohman, Maman.  2019.  Detection of Near Field Communication (NFC) Relay Attack Anomalies in Electronic Payment Cases using Markov Chain. 2019 Fourth International Conference on Informatics and Computing (ICIC). :1–4.
Near Field Communication (NFC) is a short-range wireless communication technology that supports several features, one of which is electronic payment. NFC works over a limited distance to exchange information. In terms of security, NFC technology leaves a gap for attackers to carry out attacks by forwarding information illegally over the target's NFC link. One such attack is the relay attack, in which an attacker steals data by exploiting NFC's close-range communication. Relay attacks can cause substantial material losses, so countermeasures are needed for electronic payments built on NFC technology. Detecting anomalous data is one approach: during an attack, several anomalies appear that can be detected and used to prevent the attack. The Markov chain is one method that can be used to detect relay attacks on electronic payments over NFC. The results show that the Markov chain can detect anomalies from relay attacks in electronic payment cases.
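
A hedged sketch of the Markov chain detector: estimate a transition matrix from normal NFC transaction event sequences, then flag a new sequence whose per-step log-likelihood falls below a threshold as a possible relay attack. The event encoding and threshold are assumptions.

```python
# First-order Markov chain over transaction event states, with Laplace
# smoothing; low-likelihood sequences are treated as anomalous.
import numpy as np

def fit_transitions(sequences, n_states, alpha=1.0):
    counts = np.full((n_states, n_states), alpha)   # Laplace smoothing
    for seq in sequences:                           # seq: list of state indices
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, P):
    return sum(np.log(P[a, b]) for a, b in zip(seq[:-1], seq[1:]))

# Flag a sequence as anomalous if log_likelihood(seq, P) / (len(seq) - 1)
# falls below a threshold chosen on held-out normal sequences.
```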
2020-09-04
Jing, Huiyun, Meng, Chengrui, He, Xin, Wei, Wei.  2019.  Black Box Explanation Guided Decision-Based Adversarial Attacks. 2019 IEEE 5th International Conference on Computer and Communications (ICCC). :1592–1596.
Adversarial attacks have become a hot research field in artificial intelligence security. Decision-based black-box adversarial attacks are the most appropriate in real-world scenarios, where only the final decisions of the targeted deep neural networks are accessible. However, since there is no available guidance for searching for the imperceptible adversarial perturbation, the boundary attack, one of the best performing decision-based black-box attacks, carries out a computationally expensive search. To improve attack efficiency, we propose a novel black box explanation guided decision-based black-box adversarial attack. First, the problem of decision-based adversarial attacks is modeled as a derivative-free, constrained optimization problem. To solve this optimization problem, a black box explanation guided constrained random search method is proposed to find the imperceptible adversarial example more quickly. The insights into the targeted deep neural networks explored by the black box explanation are fully used to accelerate the computationally expensive random search. Experimental results demonstrate that our proposed attack improves attack efficiency by 64% compared with the boundary attack.
2020-08-10
Kwon, Hyun, Yoon, Hyunsoo, Park, Ki-Woong.  2019.  Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error. 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE). :136–139.

Deep neural networks (DNNs) provide good performance in image recognition, speech recognition, and pattern recognition. However, poisoning attacks are a serious threat to DNN security. A poisoning attack reduces the accuracy of a DNN by adding malicious training data during the training process. In some situations, such as military applications, it may be necessary to degrade the accuracy of only a chosen class in the model. For example, to prevent only nuclear facilities from being recognized, it may be necessary to intentionally keep a UAV from correctly recognizing nuclear-related facilities. In this paper, we propose a selective poisoning attack that reduces the accuracy of only the chosen class in the model. The proposed method trains on malicious data corresponding to the chosen class, reducing that class's accuracy while maintaining the accuracy of the remaining classes. For the experiments, we used TensorFlow as the machine learning library and MNIST and CIFAR-10 as the datasets. Experimental results show that the proposed method can reduce the accuracy of the chosen class to 43.2% on MNIST and 55.3% on CIFAR-10, while maintaining the accuracy of the remaining classes.
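
An illustrative sketch of preparing "malicious training data corresponding to a chosen class": relabel a fraction of the chosen class's samples so that only its accuracy drops while the other classes stay intact. The random-wrong-label strategy is an assumption.

```python
# Corrupt labels of one chosen class only, leaving the rest of the set clean.
import numpy as np

def selective_poison(X, y, chosen_class, n_classes, ratio=0.5, seed=0):
    rng = np.random.default_rng(seed)
    idx = np.where(y == chosen_class)[0]
    idx = rng.choice(idx, size=int(ratio * len(idx)), replace=False)
    y_poisoned = y.copy()
    # Shift each poisoned label by a nonzero offset, guaranteeing a wrong label.
    y_poisoned[idx] = (y[idx] + rng.integers(1, n_classes, size=len(idx))) % n_classes
    return X, y_poisoned
```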

2020-07-03
Cai, Guang-Wei, Fang, Zhi, Chen, Yue-Feng.  2019.  Estimating the Number of Hidden Nodes of the Single-Hidden-Layer Feedforward Neural Networks. 2019 15th International Conference on Computational Intelligence and Security (CIS). :172–176.

To address the lack of an effective means of finding the optimal number of hidden nodes in a single-hidden-layer feedforward neural network, this paper introduces a method based on singular value decomposition. First, the training data are strictly normalized via attribute-based and sample-based data normalization. Then, the normalized data are decomposed by singular value decomposition, and the number of hidden nodes is determined according to the dominant eigenvalues. Experimental results on the MNIST and APS data sets show that the resulting feedforward neural network attains satisfactory performance on classification tasks.
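
A hedged sketch of the SVD step: count how many singular values of the normalized training data are needed to capture most of its energy, and use that count as the number of hidden nodes. The 95% energy criterion is an assumed stand-in for the paper's dominant-eigenvalue rule.

```python
# Estimate the hidden-node count from the singular value spectrum.
import numpy as np

def estimate_hidden_nodes(X_normalized, energy=0.95):
    s = np.linalg.svd(X_normalized, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)   # cumulative energy ratio
    return int(np.searchsorted(cum, energy) + 1)
```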