dataPreparation

2018-05-11

This vignette introduces the dataPreparation package (v0.2): what it offers and how simple it is to use.

1 Introduction

1.1 Package presentation

Built on the data.table package, dataPreparation lets you do most of the painful data preparation for a data science project with a minimal amount of code.

This package is

• fast (uses data.table and exponential search)
• RAM efficient (performs operations by reference and column-wise to avoid copying data)
• stable (most exceptions are handled)
• verbose (logs a lot)

data.table and other dependencies are handled at installation.

1.2 Main preparation steps

Before using any machine learning (ML) algorithm, one needs to prepare the data. Preparing a data set for a data science project can be long and tricky. The main steps are the following:

• Read: load the data set (this package doesn't cover this step; for csv files we recommend data.table::fread)
• Correct: most of the time there are some mistakes after reading (wrong formats, ...) that one has to correct
• Transform: create new features from date, categorical, character... columns, in order to have information usable by an ML algorithm (i.e. numeric or categorical)
• Filter: get rid of useless information in order to speed up computation
• Handle NA: replace missing values
• Pre-model transformation: specific manipulations for the chosen model (handling NAs, discretization, one-hot encoding, scaling, ...)
• Shape: put your data set in a nice shape usable by an ML algorithm

Here are the functions available in this package to tackle those issues:

• Correct: unFactor, findAndTransformDates, findAndTransformNumerics, setColAsCharacter, setColAsNumeric, setColAsDate, setColAsFactor
• Transform: generateDateDiffs, generateFactorFromDate, aggregateByKey, generateFromFactor, generateFromCharacter, fastRound
• Filter: fastFilterVariables, whichAreConstant, whichAreInDouble, whichAreBijection
• Pre-model manipulation: fastHandleNa, fastDiscretization, fastScale, one_hot_encoder
• Shape: shapeSet, sameShape, setAsNumericMatrix

All of those functions are integrated in the full pipeline function prepareSet.

In this tutorial we will detail all those steps and how to handle them with this package, using an example data set.

1.3 Tutorial data

For this tutorial, we are going to use a messy version of the adult database.
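The loading step is not shown above; a minimal sketch, assuming the messy_adult data object shipped with the package:

```r
library(dataPreparation)

# Load the messy version of the adult data set shipped with the package
data("messy_adult")

# Peek at the first rows (messy_adult is a data.table)
print(head(messy_adult, n = 4))
```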

#        date1      date2        date3              date4    num1   num2
# 1:      <NA> 1510441200  24-Mar-2017     26-march, 2017  1.9309 0,0864
# 2: 2017-26-9 1490482800  01-Feb-2017  03-february, 2017 -0.4273 0,6345
# 3:      <NA> 1510614000  18-Sep-2017 20-september, 2017  0.6093 1,8958
# 4:  2017-6-1         NA  25-Jun-2017      27-june, 2017 -0.5138 0,4505
#    constant                             mail    num3 age    type_employer
# 1:        1          pierre.caroline@aol.com  1,9309  39        State-gov
# 2:        1           pierre.lucas@yahoo.com -0,4273  50 Self-emp-not-inc
# 3:        1 caroline.caroline@protonmail.com  0,6093  38          Private
# 4:        1         marie.caroline@gmail.com -0,5138  53          Private
#    fnlwgt education education_num            marital        occupation
# 1:  77516 Bachelors            13      Never-married      Adm-clerical
# 2:  83311 Bachelors            13 Married-civ-spouse   Exec-managerial
# 3: 215646   HS-grad             9           Divorced Handlers-cleaners
# 4: 234721      11th             7 Married-civ-spouse Handlers-cleaners
#     relationship  race  sex capital_gain capital_loss hr_per_week
# 1: Not-in-family White Male         2174            0          40
# 2:       Husband White Male            0            0          13
# 3: Not-in-family White Male            0            0          40
# 4:       Husband Black Male            0            0          40
#          country income
# 1: United-States  <=50K
# 2: United-States  <=50K
# 3: United-States  <=50K
# 4: United-States  <=50K

We added 9 really ugly columns to the data set:

• 4 date columns with various formats, timestamps, and NAs
• 1 constant column
• 3 numeric columns with different decimal separators
• 1 email address column

The same info can be contained in two different columns.

2 Correct functions

2.1 Identifying factors that shouldn't be

It often happens when reading a data set that R puts strings into factors even when it shouldn't. In this tutorial data set, mail is a factor but shouldn't be. Such columns are automatically detected by the unFactor function:

print(class(messy_adult$mail))
# "factor"
messy_adult <- unFactor(messy_adult)
# "unFactor: I will identify variable that are factor but shouldn't be."
# "unFactor: I unfactor mail."
# "unFactor: It took me 0.16s to unfactor 1 column(s)."
print(class(messy_adult$mail))
# "character"

2.2 Identifying and transforming date columns

The next thing to do is to identify which columns are dates (the first 4) and transform them.
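The call producing the log below is not shown; a minimal sketch, assuming the default auto-detection of date columns:

```r
# Auto-detect date columns and convert them to a Date/POSIXct format
messy_adult <- findAndTransformDates(messy_adult)
```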

# "findAndTransformDates: It took me 0.8s to identify formats"
# "findAndTransformDates: It took me 0.11s to transform 4 columns to a Date format."
Let's have a look at the transformation performed on those 4 columns:
date1_prev  date2_prev  date3_prev   date4_prev          transfo  date1       date2                date3       date4
NA          1510441200  24-Mar-2017  26-march, 2017      =>       NA          2017-11-12 00:00:00  2017-03-24  2017-03-26
2017-26-9   1490482800  01-Feb-2017  03-february, 2017   =>       2017-09-26  2017-03-26 00:00:00  2017-02-01  2017-02-03
NA          1510614000  18-Sep-2017  20-september, 2017  =>       NA          2017-11-14 00:00:00  2017-09-18  2017-09-20
2017-6-1    NA          25-Jun-2017  27-june, 2017       =>       2017-01-06  NA                   2017-06-25  2017-06-27
NA          1494457200  26-Jan-2017  28-january, 2017    =>       NA          2017-05-11 01:00:00  2017-01-26  2017-01-28
2017-18-7   1494370800  04-Apr-2017  06-april, 2017      =>       2017-07-18  2017-05-10 01:00:00  2017-04-04  2017-04-06

As one can see, even though the formats were different and somewhat ugly, they were all handled.

2.3 Identifying and transforming numeric columns

And now the same thing with numerics:
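Again, the call itself is omitted; a sketch, assuming the default auto-detection:

```r
# Auto-detect character columns that hold numbers (with "." or "," as the
# decimal separator) and convert them to numeric
messy_adult <- findAndTransformNumerics(messy_adult)
```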

# "findAndTransformNumerics: It took me 0.21s to identify 3 numerics column(s), i will set them as numerics"
# "setColAsNumeric: I will set some columns as numeric"
# "setColAsNumeric: I am doing the column num1."
# "setColAsNumeric: 0 NA have been created due to transformation to numeric."
# "setColAsNumeric: I will set some columns as numeric"
# "setColAsNumeric: I am doing the column num2."
# "setColAsNumeric: 0 NA have been created due to transformation to numeric."
# "setColAsNumeric: I am doing the column num3."
# "setColAsNumeric: 0 NA have been created due to transformation to numeric."
# "findAndTransformNumerics: It took me 0.04s to transform 3 column(s) to a numeric format."
num1_prev  num2_prev  num3_prev  transfo  num1     num2     num3
1.9309     0,0864     1,9309     =>       1.9309   0.0864   1.9309
-0.4273    0,6345     -0,4273    =>       -0.4273  0.6345   -0.4273
0.6093     1,8958     0,6093     =>       0.6093   1.8958   0.6093
-0.5138    0,4505     -0,5138    =>       -0.5138  0.4505   -0.5138
1.0563     1,342      1,0563     =>       1.0563   1.3420   1.0563
-0.9377    -0,0421    -0,9377    =>       -0.9377  -0.0421  -0.9377

So now our data set is a bit less ugly.

3 Filter functions

The idea now is to identify useless columns:

• constant columns: they take the same value on every row,
• double columns: they are exact copies of another column in the data set,
• bijection columns: another column contains exactly the same information (but maybe coded differently), for example col1: Men/Women, col2: M/W.

3.1 Look for constant variables
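The call producing the log below is not shown; a minimal sketch:

```r
# Returns the indexes of constant columns (here: the "constant" column)
constant_cols <- whichAreConstant(messy_adult)
```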

# "whichAreConstant: constant is constant."
# "whichAreConstant: it took me 0s to identify 1 constant column(s)"

3.2 Look for columns in double
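A sketch of the corresponding call:

```r
# Returns the indexes of columns that are exact copies of an earlier column
double_cols <- whichAreInDouble(messy_adult)
```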

# "whichAreInDouble: num3 is exactly equal to num1. I put it in drop list."
# "whichAreInDouble: it took me 0.15s to identify 1 column(s) to drop."

3.3 Look for columns that are bijections of one another
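A sketch of the corresponding call:

```r
# Returns the indexes of columns that are bijections of another column
bijection_cols <- whichAreBijection(messy_adult)
```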

# "whichAreBijection: date4 is a bijection of date3. I put it in drop list."
# "whichAreBijection: num3 is a bijection of num1. I put it in drop list."
# "whichAreBijection: education_num is a bijection of education. I put it in drop list."
# "whichAreBijection: it took me 0.19s to identify 3 column(s) to drop."
To verify this, let's have a look at the columns concerned:
constant  date3       date4       num1     num3     education  education_num
1         2017-03-24  2017-03-26  1.9309   1.9309   Bachelors  13
1         2017-02-01  2017-02-03  -0.4273  -0.4273  Bachelors  13
1         2017-09-18  2017-09-20  0.6093   0.6093   HS-grad    9
1         2017-06-25  2017-06-27  -0.5138  -0.5138  11th       7
1         2017-01-26  2017-01-28  1.0563   1.0563   Bachelors  13
1         2017-04-04  2017-04-06  -0.9377  -0.9377  Masters    14

Indeed:

• constant was built constant: it contains only 1s,
• num1 and num3 are equal,
• date3 and date4 are always two days apart: date4 doesn't carry any new information for an ML algorithm,
• education and education_num contain the same information, one as a numeric key and the other as the corresponding character label. whichAreBijection keeps the character column.

3.4 Filter them all

To directly filter all of them:
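The filtering call itself is omitted above; a sketch (the ncols variable used in the print statement below is an assumption, stored before filtering):

```r
ncols <- ncol(messy_adult)  # remember the number of columns before filtering
# Drop constant, double and bijection columns in one pass
messy_adult <- fastFilterVariables(messy_adult)
```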

print(paste0("messy_adult now have ", ncol(messy_adult), " columns; so ", ncols - ncol(messy_adult), " less than before."))
# "fastFilterVariables: I check for constant columns."
# "fastFilterVariables: I delete 1 constant column(s) in dataSet."
# "fastFilterVariables: I check for columns in double."
# "fastFilterVariables: I delete 1 column(s) that are in double in dataSet."
# "fastFilterVariables: I check for columns that are bijections of another column."
# "fastFilterVariables: I delete 2 column(s) that are bijections of another column in dataSet."
# "messy_adult now have 20 columns; so 4 less than before."

4 useless columns have been dropped. Without those useless columns, your machine learning algorithm will at least be faster and might even give better results.

4 Transform functions

Before sending this to a machine learning algorithm, a few transformations should be performed.

The idea behind the functions presented here is to perform those transformations in a RAM-efficient way.

4.1 Dates differences

Since no machine learning algorithm handles Dates, one needs to transform or drop them. One way to transform dates is to compute the differences between every pair of dates.

We can also add an analysis date to compare each date against the date your data is from. For example, if you have a birth date, you may want to compute the age by performing today - birth date.

messy_adult <- generateDateDiffs(messy_adult, cols = "auto", analysisDate = as.Date("2018-01-01"), units = "days")
# "generateDateDiffs: I will generate difference between dates."
# "generateDateDiffs: It took me 0s to create 6 column(s)."
date1.Minus.date3  date1.Minus.analysisDate  date2.Minus.date3  date2.Minus.analysisDate  date3.Minus.analysisDate
NA                 NA                        232.95833          -50                       -282.9583
237                -96.95833                 52.95833           -281                      -333.9583
NA                 NA                        56.95833           -48                       -104.9583
-170               -359.95833                NA                 NA                        -189.9583
NA                 NA                        104.95833          -235                      -339.9583
105                -166.95833                35.95833           -236                      -271.9583

4.2 Transforming dates into aggregates

Another way to work around dates is to aggregate them at some level. This time, drop is set to TRUE in order to drop the date columns once they are transformed.

messy_adult <- generateFactorFromDate(messy_adult, cols = "auto", type = "quarter", drop = TRUE)
# "generateFactorFromDate: I will create a factor column from each date column."
# "generateFactorFromDate: It took me 0.16s to transform 3 column(s)."
date1.quarter  date2.quarter  date3.quarter
QNA            Q4             Q1
Q3             Q1             Q1
QNA            Q4             Q3
Q1             QNA            Q2
QNA            Q2             Q1
Q3             Q2             Q2

4.3 Generate features from character columns

Character columns are not handled by any machine learning algorithm; one should transform them. The function generateFromCharacter builds some new features from them and can then drop them.

messy_adult <- generateFromCharacter(messy_adult, cols = "auto", drop = TRUE)
# "generateFromCharacter: it took me: 0.03s to transform 1 character columns into, 3 new columns."
mail.notnull  mail.num  mail.order
FALSE         200       1
FALSE         200       1
FALSE         200       1
FALSE         200       1
FALSE         200       1
FALSE         200       1

4.4 Aggregate according to a key

To model something by country, one would want to compute an aggregation of this table in order to have one row per country.
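The aggregation call producing the log below is not shown; a sketch (the agg_adult result name, and the summary statistics described in the comment, are assumptions):

```r
# Build one row per country; qualitative columns become counts per level,
# quantitative columns become summary statistics (mean, sd, min, max, ...)
agg_adult <- aggregateByKey(messy_adult, key = "country")
```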

# "aggregateByKey: I start to aggregate"
# "aggregateByKey: 139 columns have been constructed. It took 0.41 seconds. "
country   max.age  type_employer.Without-pay  education.Assoc-acdm  marital.Married-AF-spouse
?         90       0                          10                    0
Cambodia  65       0                          0                     0
Canada    80       0                          1                     0
China     75       0                          0                     0
Columbia  75       0                          4                     0
Cuba      82       0                          3                     0

Whenever you have more than one row per individual, this function can be pretty handy.

4.5 Rounding

One might want to round numeric variables in order to save some RAM, or for algorithmic reasons:
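The rounding call is omitted above; a sketch (digits = 2 matches the two-decimal output below):

```r
# Round every numeric column to 2 decimal places (performed by reference)
messy_adult <- fastRound(messy_adult, digits = 2)
```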

num1   num2   age  type_employer  fnlwgt  education
0.59   -0.50  60   Private        173960  Bachelors
NA     -0.60  25   Private        371987  Bachelors
NA     0.48   26   Private        94936   Assoc-acdm
0.02   2.83   28   Private        166481  7th-8th
-0.87  -0.39  45   Self-emp-inc   197332  Some-college
1.20   -0.74  31   Private        244147  HS-grad

5 Handling NA values

Then, let's handle the NAs:
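The call producing the output below is not shown; a sketch (the default fill values described in the comment are an assumption, consistent with num1's NAs becoming 0 below):

```r
# Replace NAs with defaults: numerics get 0, logicals FALSE, characters ""
messy_adult <- fastHandleNa(messy_adult)
```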

#    num1  num2 age type_employer   ...       country income
# 1: 0.59 -0.50  60       Private   ... United-States  <=50K
# 2: 0.00 -0.60  25       Private   ... United-States  <=50K
# 3: 0.00  0.48  26       Private   ... United-States  <=50K
# 4: 0.02  2.83  28       Private   ...   Puerto-Rico  <=50K
#    date1.Minus.date2 date1.Minus.date3 date1.Minus.analysisDate
# 1:           -173.96                65                  -293.96
# 2:             23.04              -117                  -334.96
# 3:            -73.96               -33                  -138.96
# 4:           -234.96              -228                  -336.96
#    date2.Minus.date3 date2.Minus.analysisDate date3.Minus.analysisDate
# 1:            238.96                     -120                  -358.96
# 2:           -140.04                     -358                  -217.96
# 3:             40.96                      -65                  -105.96
# 4:              6.96                     -102                  -108.96
#    date1.quarter date2.quarter date3.quarter mail.notnull mail.num
# 1:            Q1            Q3            Q1        FALSE      200
# 2:            Q1            Q1            Q2        FALSE      200
# 3:            Q3            Q4            Q3        FALSE      200
# 4:            Q1            Q3            Q3        FALSE      200
#    mail.order
# 1:          1
# 2:          1
# 3:          1
# 4:          1

It sets default values in place of NAs. If you want to put in some specific values (constants, or even a function, for example the mean of the values), you should check the fastHandleNa documentation.

6 Shape functions

There are two types of machine learning algorithms in R: those which accept data.tables and factors, and those which only accept numeric matrices.

Transforming a data set into something acceptable for a machine learning algorithm could be tricky.

The shapeSet function does it for you; you just have to choose whether you want a data.table or a numerical_matrix.

First with data.table:

clean_adult = shapeSet(copy(messy_adult), finalForm = "data.table", verbose = FALSE)
# "setColAsFactor: num1 has more than 10 values, i don't transform it."
# "setColAsFactor: num2 has more than 10 values, i don't transform it."
# "setColAsFactor: age has more than 10 values, i don't transform it."
# "setColAsFactor: fnlwgt has more than 10 values, i don't transform it."
# "setColAsFactor: capital_gain has more than 10 values, i don't transform it."
# "setColAsFactor: capital_loss has more than 10 values, i don't transform it."
# "setColAsFactor: hr_per_week has more than 10 values, i don't transform it."
# "setColAsFactor: date1.Minus.date2 has more than 10 values, i don't transform it."
# "setColAsFactor: date1.Minus.date3 has more than 10 values, i don't transform it."
# "setColAsFactor: date1.Minus.analysisDate has more than 10 values, i don't transform it."
# "setColAsFactor: date2.Minus.date3 has more than 10 values, i don't transform it."
# "setColAsFactor: date2.Minus.analysisDate has more than 10 values, i don't transform it."
# "setColAsFactor: date3.Minus.analysisDate has more than 10 values, i don't transform it."
# "setColAsFactor: mail.num has more than 10 values, i don't transform it."
# "setColAsFactor: mail.order has more than 10 values, i don't transform it."
#
#  factor integer numeric
#      12       1      15

As one can see, there are only numeric and factor columns left.

Now with numerical_matrix:

clean_adult <- shapeSet(copy(messy_adult), finalForm = "numerical_matrix", verbose = FALSE)
# "setColAsFactor: num1 has more than 10 values, i don't transform it."
# "setColAsFactor: num2 has more than 10 values, i don't transform it."
# "setColAsFactor: age has more than 10 values, i don't transform it."
# "setColAsFactor: fnlwgt has more than 10 values, i don't transform it."
# "setColAsFactor: capital_gain has more than 10 values, i don't transform it."
# "setColAsFactor: capital_loss has more than 10 values, i don't transform it."
# "setColAsFactor: hr_per_week has more than 10 values, i don't transform it."
# "setColAsFactor: date1.Minus.date2 has more than 10 values, i don't transform it."
# "setColAsFactor: date1.Minus.date3 has more than 10 values, i don't transform it."
# "setColAsFactor: date1.Minus.analysisDate has more than 10 values, i don't transform it."
# "setColAsFactor: date2.Minus.date3 has more than 10 values, i don't transform it."
# "setColAsFactor: date2.Minus.analysisDate has more than 10 values, i don't transform it."
# "setColAsFactor: date3.Minus.analysisDate has more than 10 values, i don't transform it."
# "setColAsFactor: mail.num has more than 10 values, i don't transform it."
# "setColAsFactor: mail.order has more than 10 values, i don't transform it."
num1   num2   age  type_employer?  type_employerFederal-gov  type_employerLocal-gov
0.59   -0.50  60   0               0                         0
0.00   -0.60  25   0               0                         0
0.00   0.48   26   0               0                         0
0.02   2.83   28   0               0                         0
-0.87  -0.39  45   0               0                         0
1.20   -0.74  31   0               0                         0

As one can see, with finalForm = "numerical_matrix" every character and factor column has been binarized.

7 Full pipeline

Doing it all with one function is possible.

To do that, we reload the ugly data set and also perform the aggregation:

agg_adult <- prepareSet(messy_adult, finalForm = "data.table", key = "country", analysisDate = Sys.Date(), digits = 2)
# "prepareSet: step one: correcting mistakes."
# "fastFilterVariables: I check for constant columns."
# "fastFilterVariables: I delete 1 constant column(s) in dataSet."
# "fastFilterVariables: I check for columns in double."
# "fastFilterVariables: I check for columns that are bijections of another column."
# "fastFilterVariables: I delete 3 column(s) that are bijections of another column in dataSet."
# "unFactor: I will identify variable that are factor but shouldn't be."
# "unFactor: I unfactor mail."
# "unFactor: It took me 0.14s to unfactor 1 column(s)."
# "findAndTransformNumerics: It took me 0.14s to identify 2 numerics column(s), i will set them as numerics"
# "setColAsNumeric: I will set some columns as numeric"
# "setColAsNumeric: I will set some columns as numeric"
# "setColAsNumeric: I am doing the column num2."
# "setColAsNumeric: 0 NA have been created due to transformation to numeric."
# "setColAsNumeric: I am doing the column num3."
# "setColAsNumeric: 0 NA have been created due to transformation to numeric."
# "findAndTransformNumerics: It took me 0.07s to transform 2 column(s) to a numeric format."
# "findAndTransformDates: It took me 0.65s to identify formats"
# "findAndTransformDates: It took me 0.08s to transform 3 columns to a Date format."
# "prepareSet: step two: transforming dataSet."
# "generateDateDiffs: I will generate difference between dates."
# "generateDateDiffs: It took me 0s to create 6 column(s)."
# "generateFactorFromDate: I will create a factor column from each date column."
# "generateFactorFromDate: It took me 0.52s to transform 3 column(s)."
# "generateFromCharacter: it took me: 0.01s to transform 1 character columns into, 3 new columns."
# "aggregateByKey: I start to aggregate"
# "aggregateByKey: 164 columns have been constructed. It took 0.28 seconds. "
# "prepareSet: step three: filtering dataSet."
# "fastFilterVariables: I check for constant columns."
# "fastFilterVariables: I delete 2 constant column(s) in result."
# "fastFilterVariables: I check for columns in double."
# "fastFilterVariables: I delete 1 column(s) that are in double in result."
# "fastFilterVariables: I check for columns that are bijections of another column."
# "fastFilterVariables: I delete 35 column(s) that are bijections of another column in result."
# "prepareSet: step four: handling NA."
# "prepareSet: step five: shaping result."
# "setColAsFactor: I will set some columns to factor."
# "setColAsFactor: it took me: 0s to transform 0 column(s) to factor."
# "Transforming numerical variables into factors when length(unique(col)) <= 10."
# "setColAsFactor: nbrLines has more than 10 values, i don't transform it."
# "setColAsFactor: max.age has more than 10 values, i don't transform it."
# "setColAsFactor: type_employer.? has more than 10 values, i don't transform it."
# "setColAsFactor: type_employer.Local-gov has more than 10 values, i don't transform it."
# "setColAsFactor: type_employer.Private has more than 10 values, i don't transform it."
# "setColAsFactor: type_employer.Self-emp-not-inc has more than 10 values, i don't transform it."
# "setColAsFactor: education.11th has more than 10 values, i don't transform it."
# "setColAsFactor: education.5th-6th has more than 10 values, i don't transform it."
# "setColAsFactor: education.7th-8th has more than 10 values, i don't transform it."
# "setColAsFactor: education.Bachelors has more than 10 values, i don't transform it."
# "setColAsFactor: education.HS-grad has more than 10 values, i don't transform it."
# "setColAsFactor: education.Masters has more than 10 values, i don't transform it."
# "setColAsFactor: education.Some-college has more than 10 values, i don't transform it."
# "setColAsFactor: marital.Divorced has more than 10 values, i don't transform it."
# "setColAsFactor: marital.Married-civ-spouse has more than 10 values, i don't transform it."
# "setColAsFactor: marital.Married-spouse-absent has more than 10 values, i don't transform it."
# "setColAsFactor: marital.Never-married has more than 10 values, i don't transform it."
# "setColAsFactor: marital.Separated has more than 10 values, i don't transform it."
# "setColAsFactor: occupation.Adm-clerical has more than 10 values, i don't transform it."
# "setColAsFactor: occupation.Craft-repair has more than 10 values, i don't transform it."
# "setColAsFactor: occupation.Exec-managerial has more than 10 values, i don't transform it."
# "setColAsFactor: occupation.Handlers-cleaners has more than 10 values, i don't transform it."
# "setColAsFactor: occupation.Machine-op-inspct has more than 10 values, i don't transform it."
# "setColAsFactor: occupation.Other-service has more than 10 values, i don't transform it."
# "setColAsFactor: occupation.Prof-specialty has more than 10 values, i don't transform it."
# "setColAsFactor: occupation.Sales has more than 10 values, i don't transform it."
# "setColAsFactor: occupation.Transport-moving has more than 10 values, i don't transform it."
# "setColAsFactor: relationship.Husband has more than 10 values, i don't transform it."
# "setColAsFactor: relationship.Not-in-family has more than 10 values, i don't transform it."
# "setColAsFactor: relationship.Other-relative has more than 10 values, i don't transform it."
# "setColAsFactor: relationship.Own-child has more than 10 values, i don't transform it."
# "setColAsFactor: relationship.Unmarried has more than 10 values, i don't transform it."
# "setColAsFactor: relationship.Wife has more than 10 values, i don't transform it."
# "setColAsFactor: race.Asian-Pac-Islander has more than 10 values, i don't transform it."
# "setColAsFactor: race.Black has more than 10 values, i don't transform it."
# "setColAsFactor: race.Other has more than 10 values, i don't transform it."
# "setColAsFactor: race.White has more than 10 values, i don't transform it."
# "setColAsFactor: sex.Female has more than 10 values, i don't transform it."
# "setColAsFactor: sex.Male has more than 10 values, i don't transform it."
# "setColAsFactor: mean.capital_gain has more than 10 values, i don't transform it."
# "setColAsFactor: max.capital_gain has more than 10 values, i don't transform it."
# "setColAsFactor: mean.capital_loss has more than 10 values, i don't transform it."
# "setColAsFactor: max.capital_loss has more than 10 values, i don't transform it."
# "setColAsFactor: sd.capital_loss has more than 10 values, i don't transform it."
# "setColAsFactor: min.hr_per_week has more than 10 values, i don't transform it."
# "setColAsFactor: max.hr_per_week has more than 10 values, i don't transform it."
# "setColAsFactor: income.<=50K has more than 10 values, i don't transform it."
# "setColAsFactor: income.>50K has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.2017 Apr has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.2017 Aug has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.2017 Dec has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.2017 Feb has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.2017 Jan has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.2017 Jul has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.2017 Jun has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.2017 Mar has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.2017 May has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.2017 Nov has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.2017 Oct has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.2017 Sep has more than 10 values, i don't transform it."
# "setColAsFactor: date1.yearmonth.NA has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.2017 Apr has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.2017 Aug has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.2017 Dec has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.2017 Feb has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.2017 Jan has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.2017 Jul has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.2017 Jun has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.2017 Mar has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.2017 May has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.2017 Nov has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.2017 Oct has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.2017 Sep has more than 10 values, i don't transform it."
# "setColAsFactor: date2.yearmonth.NA has more than 10 values, i don't transform it."
# "setColAsFactor: date4.yearmonth.2017 Apr has more than 10 values, i don't transform it."
# "setColAsFactor: date4.yearmonth.2017 Aug has more than 10 values, i don't transform it."
# "setColAsFactor: date4.yearmonth.2017 Dec has more than 10 values, i don't transform it."
# "setColAsFactor: date4.yearmonth.2017 Feb has more than 10 values, i don't transform it."
# "setColAsFactor: date4.yearmonth.2017 Jan has more than 10 values, i don't transform it."
# "setColAsFactor: date4.yearmonth.2017 Jul has more than 10 values, i don't transform it."
# "setColAsFactor: date4.yearmonth.2017 Jun has more than 10 values, i don't transform it."
# "setColAsFactor: date4.yearmonth.2017 Mar has more than 10 values, i don't transform it."
# "setColAsFactor: date4.yearmonth.2017 May has more than 10 values, i don't transform it."
# "setColAsFactor: date4.yearmonth.2017 Nov has more than 10 values, i don't transform it."
# "setColAsFactor: date4.yearmonth.2017 Oct has more than 10 values, i don't transform it."
# "setColAsFactor: date4.yearmonth.2017 Sep has more than 10 values, i don't transform it."
# "setColAsFactor: max.mail.num has more than 10 values, i don't transform it."
# "setColAsFactor: min.mail.order has more than 10 values, i don't transform it."
# "setColAsFactor: max.mail.order has more than 10 values, i don't transform it."
# "Previous distribution of column types:"
# col_class_init
#  factor numeric
#       1     125
# "Current distribution of column types:"
# col_class_end
#  factor numeric
#      37      89

As one can see, every previously described step has been performed.

Let's have a look at the result:

# "126 columns have been built; for 42 countries."
country   nbrLines  mean.num2  sd.num2  mean.num3  sd.num3  min.age
?         529       0          0        0          0        17
Cambodia  16        0.08       0.78     0          0        25
Canada    108       0          0        0          0        17
China     67        0          0        0          0        22
Columbia  53        0          0        0          0        21
Cuba      88        0          0        0          0        21

8 Description

Finally, to generate a description file from this data set, the description function is available.

It will describe the set and its variables. Here we set level = 0 to get some global descriptions:

description(agg_adult, level = 0)
# "dataSet is a data.table-data.frame"
# [1] "dataSet contains 42 rows and 126 cols."
# [1] "Columns are of the following classes:"
#
#  factor numeric
#      37      89

9 Conclusion

We presented some of the functions of the dataPreparation package. There are a few more available, and they have some parameters to make them easier to use. So if you liked it, please check the package documentation (by installing it or on CRAN).

We hope that this package is helpful and that it helps you prepare your data faster.

If you would like to give us some feedback, report issues, or add features to this package, please tell us on GitHub. Also, if you want to contribute, please don't hesitate to contact us.