Design of a Test Framework for the Evaluation of Transfer Learning Algorithms


Date Issued:
2017
Summary:
A traditional machine learning environment is characterized by the training and testing data being drawn from the same domain and therefore having similar distribution characteristics. In contrast, a transfer learning environment is characterized by the training data having different distribution characteristics from the testing data. Previous research on transfer learning has focused on the development and evaluation of transfer learning algorithms using real-world datasets. Testing with real-world datasets exposes an algorithm to a limited number of data distribution differences and does not exercise an algorithm's full capability and boundary limitations. In this research, we define, implement, and deploy a transfer learning test framework to test machine learning algorithms. The transfer learning test framework is designed to create a wide range of distribution differences that are typically encountered in a transfer learning environment. By testing with many different distribution differences, an algorithm's strong and weak points can be discovered and evaluated against other algorithms. This research additionally performs case studies that use the transfer learning test framework. The first case study focuses on measuring the impact of exposing algorithms to the Domain Class Imbalance distortion profile. The next case study uses the entire transfer learning test framework to evaluate both transfer learning and traditional machine learning algorithms. The final case study uses the transfer learning test framework in conjunction with real-world datasets to measure the impact of the base traditional learner on the performance of transfer learning algorithms. Two additional experiments are performed that focus on unique real-world datasets. The first experiment uses transfer learning techniques to predict fraudulent Medicare claims. The second experiment uses a heterogeneous transfer learning method to predict phishing webpages.
These case studies will be of interest to researchers who develop and improve transfer learning algorithms. This research will also be of benefit to machine learning practitioners in the selection of high-performing transfer learning algorithms.
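The distribution difference the summary describes can be illustrated with a toy sketch (not taken from the dissertation): a simple nearest-centroid classifier is trained on a source domain, then evaluated both on held-out data from the same domain (the traditional setting) and on a target domain whose class-conditional means have shifted (a transfer setting). All class means and sample sizes below are illustrative assumptions.

```python
import random

random.seed(0)

def sample(n_per_class, mean0, mean1):
    """Draw n_per_class 1-D points per class from unit-variance Gaussians."""
    data = [(random.gauss(mean0, 1.0), 0) for _ in range(n_per_class)]
    data += [(random.gauss(mean1, 1.0), 1) for _ in range(n_per_class)]
    return data

def nearest_centroid(train):
    """Fit a nearest-centroid classifier; return its predict function."""
    c0 = [x for x, y in train if y == 0]
    c1 = [x for x, y in train if y == 1]
    m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
    return lambda x: 0 if abs(x - m0) < abs(x - m1) else 1

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

# Traditional setting: train and test data come from the same domain.
source_train = sample(500, mean0=0.0, mean1=2.0)
source_test  = sample(500, mean0=0.0, mean1=2.0)

# Transfer setting: the target domain's class-conditional means have
# shifted, so the source-trained decision boundary is poorly placed.
target_test  = sample(500, mean0=1.5, mean1=3.5)

predict = nearest_centroid(source_train)
in_domain = accuracy(predict, source_test)
cross_domain = accuracy(predict, target_test)
print(f"in-domain accuracy:    {in_domain:.2f}")
print(f"cross-domain accuracy: {cross_domain:.2f}")
```

The accuracy gap between the two evaluations is the kind of degradation a controlled test framework can induce systematically by varying the size and type of the distribution difference, rather than relying on whatever differences a particular real-world dataset happens to contain.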
Title: Design of a Test Framework for the Evaluation of Transfer Learning Algorithms.
Name(s): Weiss, Karl Robert, author
Khoshgoftaar, Taghi M., Thesis advisor
Florida Atlantic University, Degree grantor
College of Engineering and Computer Science
Department of Computer and Electrical Engineering and Computer Science
Type of Resource: text
Genre: Electronic Thesis Or Dissertation
Date Created: 2017
Publisher: Florida Atlantic University
Place of Publication: Boca Raton, Fla.
Physical Form: application/pdf
Extent: 186 p.
Language(s): English
Identifier: FA00005925 (IID)
Degree granted: Dissertation (Ph.D.)--Florida Atlantic University, 2017.
Collection: FAU Electronic Theses and Dissertations Collection
Note(s): Includes bibliography.
Subject(s): Dissertations, Academic -- Florida Atlantic University
Machine learning.
Algorithms.
Machine learning -- Development.
Held by: Florida Atlantic University Libraries
Sublocation: Digital Library
Persistent Link to This Record: http://purl.flvc.org/fau/fd/FA00005925
Use and Reproduction: Copyright © is held by the author, with permission granted to Florida Atlantic University to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Use and Reproduction: http://rightsstatements.org/vocab/InC/1.0/
Host Institution: FAU
Is Part of Series: Florida Atlantic University Digital Library Collections.