A Priori Synthetic Over-Sampling Methods For Increasing Classification Sensitivity In Imbalanced Data Sets

Keywords

Class imbalance; Classification; OUPS; SMOTE

Abstract

Building accurate classifiers for predicting group membership is difficult when the data are skewed or imbalanced, which is typical of real-world data sets. As a result, the classifier tends to be biased towards the over-represented, or majority, group. Re-sampling techniques offer simple approaches that can minimize this effect. Over-sampling methods aim to combat class imbalance by increasing the number of samples from the under-represented, or minority, group. Over the last decade, SMOTE-based methods have been used and extended to overcome this problem, yet there has been little emphasis on improvements that consider data-intrinsic properties beyond class imbalance alone. In this paper we introduce modifications to the a priori based methods Safe-Level-OUPS and OUPS that improve sensitivity measures over competing SMOTE-based approaches such as the Local neighborhood extension to SMOTE (LN-SMOTE), Borderline-SMOTE and Safe-Level-SMOTE.
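To make the over-sampling idea concrete, the following is a minimal sketch of SMOTE-style interpolation between a minority sample and one of its nearest minority-class neighbours. It illustrates only the generic synthetic over-sampling principle the abstract refers to; it is not the paper's OUPS or Safe-Level-OUPS procedure, and the function name and parameters are invented for this example.

```python
import numpy as np

def smote_like_oversample(minority, n_new, k=5, rng=None):
    """Generate n_new synthetic points by interpolating randomly chosen
    minority samples toward one of their k nearest minority-class neighbours.
    (Hypothetical helper for illustration only.)"""
    rng = np.random.default_rng(rng)
    minority = np.asarray(minority, dtype=float)
    n = len(minority)
    k = min(k, n - 1)
    synthetic = np.empty((n_new, minority.shape[1]))
    for i in range(n_new):
        idx = rng.integers(n)                     # pick a minority sample
        x = minority[idx]
        d = np.linalg.norm(minority - x, axis=1)  # distances to the rest
        d[idx] = np.inf                           # exclude the sample itself
        neighbours = np.argsort(d)[:k]            # k nearest minority neighbours
        xn = minority[rng.choice(neighbours)]     # pick one neighbour
        gap = rng.random()                        # interpolation factor in [0, 1)
        synthetic[i] = x + gap * (xn - x)         # new point on the segment
    return synthetic

if __name__ == "__main__":
    # Example: double a toy minority class of 10 two-dimensional points.
    toy_minority = np.random.default_rng(0).normal(size=(10, 2))
    new_points = smote_like_oversample(toy_minority, n_new=10, k=3, rng=1)
    print(new_points.shape)  # (10, 2)
```

Methods such as Borderline-SMOTE, Safe-Level-SMOTE and the OUPS variants differ mainly in how they choose which minority samples to expand and how far along the segment to place the synthetic point, rather than in this basic interpolation step.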

Publication Date

12-30-2016

Publication Title

Expert Systems with Applications

Volume

66

Number of Pages

124-135

Document Type

Article

Personal Identifier

scopus

DOI Link

https://doi.org/10.1016/j.eswa.2016.09.010

Scopus ID

84987968822 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/84987968822

