NUMERIC ENCODING OPTIONS WITH AUTOMUNGE

Abstract

Mainstream practice in machine learning with tabular data may take for granted that any feature engineering beyond scaling for numeric sets is superfluous in the context of deep neural networks. This paper offers arguments for the potential benefits of extended encodings of numeric streams in deep learning by way of a survey of options for numeric transformations as available in the Automunge open source python library platform for tabular data pipelines, where transformations may be applied to distinct columns in "family tree" sets with generations and branches of derivations. Automunge transformation options include normalization, binning, noise injection, derivatives, and more. The aggregation of these methods into family tree sets of transformations is demonstrated for presenting numeric features to machine learning in multiple configurations of varying information content, as may be applied to encode numeric sets of unknown interpretation. Experiments demonstrate the realization of a novel generalized solution to data augmentation by noise injection for tabular learning, as may materially benefit model performance in applications with underserved training data.
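The noise injection approach referenced above can be illustrated with a minimal sketch, independent of the Automunge API: Gaussian noise is applied to a z-score normalized feature, with injection limited to a sampled subset of entries. The function name `noise_inject` and the parameter defaults here are illustrative assumptions, not the library's interface.

```python
import numpy as np

def noise_inject(feature, flip_prob=0.03, sigma=0.06, seed=0):
    """Illustrative sketch of noise injection data augmentation.

    The feature is z-score normalized so the noise scale is
    comparable across features; only a random subset of entries
    (ratio flip_prob) receives Gaussian perturbations of scale sigma.
    Names and defaults are assumptions for illustration only.
    """
    rng = np.random.default_rng(seed)
    feature = np.asarray(feature, dtype=float)
    # normalize to zero mean and unit variance
    normalized = (feature - feature.mean()) / feature.std()
    # boolean mask selecting which entries receive noise
    mask = rng.random(feature.shape) < flip_prob
    noise = rng.normal(0.0, sigma, feature.shape)
    return normalized + mask * noise
```

An augmented copy of a training feature prepared this way retains the original distribution's gross structure while presenting the model with perturbed samples, which is the mechanism by which noise injection serves as tabular data augmentation.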

1. INTRODUCTION

Of the various modalities of machine learning application (e.g. images, language, audio, etc.), tabular data, aka structured data, as may comprise tables of feature set columns and collected sample rows, in my experience does not command as much attention from the research community. I speculate this may be partly attributed to the general non-uniformity across manifestations, which precludes the conventions of most other modalities for representative benchmarks and the availability of pre-trained architectures that could be adapted with fine-tuning to practical applications. That is not to say that tabular data lacks points of uniformity across data sets, for at its core the various feature sets can at a high level be grouped into just two primary types: numeric and categoric. It was the focus of a recent paper by this author (Author, 2020) to explore methods of preparing categoric sets for machine learning as are available in the Automunge open source python library platform for tabular data pipelines. This paper will give similar treatment to methods for preparing numeric feature sets for machine learning. Of course it would be an oversimplification to characterize "numeric feature sets" as a sufficient descriptor alone to represent the wide diversity that may be found between different such instances. Numeric could refer to integers, floats, or combinations thereof. The set of entries could be bounded, the potential range of entries could be bounded on the left, right, or both sides, and the distribution of values could be thin or fat tailed, single or multi-modal. The order of samples could be independent or sequential. In some cases the values could themselves be an encoded representation of a categoric feature. Beyond the potential diversity found within our numeric features, another source of diversity arises from relationships between multiple feature sets. 
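The feature properties enumerated above (integer versus float entries, boundedness, tail weight, modality, possible categoric encoding) can each be probed with simple summary statistics. The following sketch, which is not part of the Automunge API, illustrates one way such a profile might be gathered for a numeric column; the function name `profile_numeric` and the chosen statistics are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def profile_numeric(series):
    """Summarize properties of a numeric feature that may inform
    the choice of encoding. Illustrative only, not a library API."""
    s = pd.to_numeric(series, errors="coerce").dropna()
    return {
        # all-integer entries may hint at counts or categoric encodings
        "all_integer": bool(np.all(np.mod(s, 1) == 0)),
        # observed bounds of the received entries
        "min": float(s.min()),
        "max": float(s.max()),
        # asymmetry of the distribution
        "skew": float(s.skew()),
        # positive excess kurtosis suggests fat tails
        "excess_kurtosis": float(s.kurt()),
        # a low unique ratio hints at an integer-encoded categoric
        "unique_ratio": s.nunique() / len(s),
    }
```

A profile of this kind could motivate, for instance, a binning transform for a fat-tailed feature or a categoric treatment for an integer column with a low unique ratio, consistent with the encoding options surveyed in this paper.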
For example, one feature could be independent of the others, could contain full or partial redundancy with one or more other variables by correlation, or, in the case of sequential data, there could even be causal relationships between variables across time steps. The primary focus of transformations to be discussed in this paper will not take into account variable interdependencies, instead operating under the assumption that the training operation of a downstream learning algorithm may be more suitable for the efficient interpretation of such interdependencies, as the convention for Automunge is that data transformations (and in some cases

