## Overview

SparkR is an R package that provides a light-weight frontend to use Apache Spark from R. With Spark 2.3.0, SparkR provides a distributed data frame implementation that supports data processing operations such as selection, filtering, and aggregation, as well as distributed machine learning using MLlib.

## Getting Started

We begin with an example running on the local machine and provide an overview of the use of SparkR: data ingestion, data processing and machine learning.

First, let's load and attach the package.

library(SparkR)


SparkSession is the entry point into SparkR; it connects your R program to a Spark cluster. You can create a SparkSession using sparkR.session and pass in options such as the application name, any Spark packages depended on, etc.

We use the default settings, under which SparkR runs in local mode. It automatically downloads the Spark package in the background if no previous installation is found. For more details about setup, see Spark Session.

sparkR.session()
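
For instance, a sketch of starting a session with a few common options (appName, master, and sparkConfig are standard sparkR.session arguments):

sparkR.session(appName = "SparkR-vignette", master = "local[2]",
               sparkConfig = list(spark.driver.memory = "1g"))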


The operations in SparkR are centered around an R class called SparkDataFrame. It is a distributed collection of data organized into named columns, which is conceptually equivalent to a table in a relational database or a data frame in R, but with richer optimizations under the hood.

A SparkDataFrame can be constructed from a wide array of sources, such as structured data files, tables in Hive, external databases, or existing local R data frames. For example, we create a SparkDataFrame from a local R data frame:

cars <- cbind(model = rownames(mtcars), mtcars)
carsDF <- createDataFrame(cars)


We can view the first few rows of the SparkDataFrame with the head or showDF function.

head(carsDF)

##               model  mpg cyl disp  hp drat    wt  qsec vs am gear carb
## 1         Mazda RX4 21.0   6  160 110 3.90 2.620 16.46  0  1    4    4
## 2     Mazda RX4 Wag 21.0   6  160 110 3.90 2.875 17.02  0  1    4    4
## 3        Datsun 710 22.8   4  108  93 3.85 2.320 18.61  1  1    4    1
## 4    Hornet 4 Drive 21.4   6  258 110 3.08 3.215 19.44  1  0    3    1
## 5 Hornet Sportabout 18.7   8  360 175 3.15 3.440 17.02  0  0    3    2
## 6           Valiant 18.1   6  225 105 2.76 3.460 20.22  1  0    3    1
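
showDF prints the rows in a tabular form instead; for example (numRows is its standard argument):

showDF(carsDF, numRows = 3)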


Common data processing operations such as filter and select are supported on the SparkDataFrame.

carsSubDF <- select(carsDF, "model", "mpg", "hp")
carsSubDF <- filter(carsSubDF, carsSubDF$hp >= 200)
head(carsSubDF)

##                 model  mpg  hp
## 1          Duster 360 14.3 245
## 2  Cadillac Fleetwood 10.4 205
## 3 Lincoln Continental 10.4 215
## 4   Chrysler Imperial 14.7 230
## 5          Camaro Z28 13.3 245
## 6      Ford Pantera L 15.8 264

SparkR can use many common aggregation functions after grouping.

carsGPDF <- summarize(groupBy(carsDF, carsDF$gear), count = n(carsDF$gear))
head(carsGPDF)

##   gear count
## 1    4    12
## 2    3    15
## 3    5     5

The results carsDF and carsSubDF are SparkDataFrame objects. To convert back to an R data.frame, we can use collect. Caution: this can cause your interactive environment to run out of memory, because collect() fetches the entire distributed DataFrame to your client, which is acting as a Spark driver.

carsGP <- collect(carsGPDF)
class(carsGP)

## [1] "data.frame"

SparkR supports a number of commonly used machine learning algorithms. Under the hood, SparkR uses MLlib to train the model. Users can call summary to print a summary of the fitted model, predict to make predictions on new data, and write.ml/read.ml to save/load fitted models.

SparkR supports a subset of R formula operators for model fitting, including ~, ., :, + and -. We use linear regression as an example.

model <- spark.glm(carsDF, mpg ~ wt + cyl)

The result matches that returned by the R glm function applied to the corresponding data.frame mtcars of carsDF. In fact, for the Generalized Linear Model, we specifically expose glm for SparkDataFrame as well, so that the above is equivalent to model <- glm(mpg ~ wt + cyl, data = carsDF).

summary(model)

##
## Deviance Residuals:
## (Note: These are approximate quantiles with relative error <= 0.01)
##     Min       1Q   Median       3Q      Max
## -4.2893  -1.7085  -0.4713   1.5729   6.1004
##
## Coefficients:
##              Estimate  Std. Error  t value    Pr(>|t|)
## (Intercept)  39.6863     1.71498   23.1409  0.00000000
## wt           -3.1910     0.75691   -4.2158  0.00022202
## cyl          -1.5078     0.41469   -3.6360  0.00106428
##
## (Dispersion parameter for gaussian family taken to be 6.592137)
##
##     Null deviance: 1126.05  on 31  degrees of freedom
## Residual deviance:  191.17  on 29  degrees of freedom
## AIC: 156
##
## Number of Fisher Scoring iterations: 1

The model can be saved with write.ml and loaded back using read.ml.

write.ml(model, path = "/HOME/tmp/mlModel/glmModel")

In the end, we can stop the Spark session by running

sparkR.session.stop()

## Setup

### Installation

Different from many other R packages, to use SparkR you need an additional installation of Apache Spark. The Spark installation will be used to run a backend process that compiles and executes SparkR programs.

After installing the SparkR package, you can call sparkR.session as explained in the previous section to start, and it will check for the Spark installation. If you are working with SparkR from an interactive shell (e.g., R, RStudio), Spark is downloaded and cached automatically if it is not found. Alternatively, we provide an easy-to-use function install.spark for running this manually. If you don't have Spark installed on the computer, you may download it from the Apache Spark website.

install.spark()

If you already have Spark installed, you don't have to install it again; you can pass the sparkHome argument to sparkR.session to let SparkR know where the existing Spark installation is.

sparkR.session(sparkHome = "/HOME/spark")

### Spark Session {#SetupSparkSession}

In addition to sparkHome, many other options can be specified in sparkR.session. For a complete list, see Starting up: SparkSession and the SparkR API doc.
In particular, the following Spark driver properties can be set in sparkConfig.

| Property Name                 | Property group         | spark-submit equivalent |
|-------------------------------|------------------------|-------------------------|
| spark.driver.memory           | Application Properties | --driver-memory         |
| spark.driver.extraClassPath   | Runtime Environment    | --driver-class-path     |
| spark.driver.extraJavaOptions | Runtime Environment    | --driver-java-options   |
| spark.driver.extraLibraryPath | Runtime Environment    | --driver-library-path   |
| spark.yarn.keytab             | Application Properties | --keytab                |
| spark.yarn.principal          | Application Properties | --principal             |

For Windows users: due to different file prefixes across operating systems, to avoid the issue of a potentially wrong prefix, a current workaround is to specify spark.sql.warehouse.dir when starting the SparkSession.

spark_warehouse_path <- file.path(path.expand('~'), "spark-warehouse")
sparkR.session(spark.sql.warehouse.dir = spark_warehouse_path)

#### Cluster Mode

SparkR can connect to remote Spark clusters. Cluster Mode Overview is a good introduction to the different Spark cluster modes.

When connecting SparkR to a remote Spark cluster, make sure that the Spark version and Hadoop version on the machine match the corresponding versions on the cluster. The current SparkR package is compatible with

## [1] "Spark 2.3.0"

It should be used both on the local computer and on the remote cluster.

To connect, pass the URL of the master node to sparkR.session. A complete list can be seen in Spark Master URLs. For example, to connect to a local standalone Spark master, we can call

sparkR.session(master = "spark://local:7077")

For a YARN cluster, SparkR supports the client mode with the master set as "yarn".

sparkR.session(master = "yarn")

YARN cluster mode is not supported in the current version.

## Data Import

### Local Data Frame

The simplest way is to convert a local R data frame into a SparkDataFrame. Specifically, we can use as.DataFrame or createDataFrame and pass in the local R data frame to create a SparkDataFrame. As an example, the following creates a SparkDataFrame using the faithful dataset from R.

df <- as.DataFrame(faithful)
head(df)

##   eruptions waiting
## 1     3.600      79
## 2     1.800      54
## 3     3.333      74
## 4     2.283      62
## 5     4.533      85
## 6     2.883      55

### Data Sources

SparkR supports operating on a variety of data sources through the SparkDataFrame interface. You can check the Spark SQL Programming Guide for more specific options that are available for the built-in data sources.

The general method for creating a SparkDataFrame from data sources is read.df. This method takes in the path of the file to load and the type of data source, and the currently active Spark session will be used automatically. SparkR supports reading CSV, JSON and Parquet files natively, and through Spark Packages you can find data source connectors for popular file formats like Avro. These packages can be added with the sparkPackages parameter when initializing the SparkSession using sparkR.session.

sparkR.session(sparkPackages = "com.databricks:spark-avro_2.11:3.0.0")

We can see how to use data sources using an example CSV input file. For more information please refer to the SparkR read.df API documentation.

df <- read.df(csvPath, "csv", header = "true", inferSchema = "true", na.strings = "NA")

The data sources API natively supports JSON-formatted input files. Note that the file that is used here is not a typical JSON file. Each line in the file must contain a separate, self-contained valid JSON object. As a consequence, a regular multi-line JSON file will most often fail.
Let's take a look at the first two lines of the raw JSON file used here.

filePath <- paste0(sparkR.conf("spark.home"), "/examples/src/main/resources/people.json")
readLines(filePath, n = 2L)

## [1] "{\"name\":\"Michael\"}"          "{\"name\":\"Andy\", \"age\":30}"

We use read.df to read that into a SparkDataFrame.

people <- read.df(filePath, "json")
count(people)

## [1] 3

head(people)

##   age    name
## 1  NA Michael
## 2  30    Andy
## 3  19  Justin

SparkR automatically infers the schema from the JSON file.

printSchema(people)

## root
##  |-- age: long (nullable = true)
##  |-- name: string (nullable = true)

If we want to read multiple JSON files, read.json can be used.

people <- read.json(paste0(Sys.getenv("SPARK_HOME"),
                           c("/examples/src/main/resources/people.json",
                             "/examples/src/main/resources/people.json")))
count(people)

## [1] 6

The data sources API can also be used to save out SparkDataFrames into multiple file formats. For example, we can save the SparkDataFrame from the previous example to a Parquet file using write.df.

write.df(people, path = "people.parquet", source = "parquet", mode = "overwrite")

### Hive Tables

You can also create SparkDataFrames from Hive tables. To do this we will need to create a SparkSession with Hive support, which can access tables in the Hive MetaStore. Note that Spark should have been built with Hive support, and more details can be found in the SQL Programming Guide. In SparkR, by default it will attempt to create a SparkSession with Hive support enabled (enableHiveSupport = TRUE).

sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")

txtPath <- paste0(sparkR.conf("spark.home"), "/examples/src/main/resources/kv1.txt")
sqlCMD <- sprintf("LOAD DATA LOCAL INPATH '%s' INTO TABLE src", txtPath)
sql(sqlCMD)

results <- sql("FROM src SELECT key, value")

# results is now a SparkDataFrame
head(results)

## Data Processing

To dplyr users: SparkR has a similar interface to dplyr for data processing. However, some noticeable differences are worth mentioning up front. We use df to represent a SparkDataFrame and col to represent the name of a column here.

1. Indicate columns. SparkR uses either a character string of the column name or a Column object constructed with $ to indicate a column. For example, to select col in df, we can write select(df, "col") or select(df, df$col).

2. Describe conditions. In SparkR, the Column object representation can be inserted into the condition directly, or we can use a character string to describe the condition, without referring to the SparkDataFrame. For example, to select rows with value > 1, we can write filter(df, df$col > 1) or filter(df, "col > 1").

Here are more concrete examples.

| dplyr                              | SparkR                                           |
|------------------------------------|--------------------------------------------------|
| select(mtcars, mpg, hp)            | select(carsDF, "mpg", "hp")                      |
| filter(mtcars, mpg > 20, hp > 100) | filter(carsDF, carsDF$mpg > 20, carsDF$hp > 100) |

Other differences will be mentioned in the specific methods.

We use the SparkDataFrame carsDF created above. We can get basic information about the SparkDataFrame.

carsDF

## SparkDataFrame[model:string, mpg:double, cyl:double, disp:double, hp:double, drat:double, wt:double, qsec:double, vs:double, am:double, gear:double, carb:double]


Print out the schema in tree format.

printSchema(carsDF)

## root
##  |-- model: string (nullable = true)
##  |-- mpg: double (nullable = true)
##  |-- cyl: double (nullable = true)
##  |-- disp: double (nullable = true)
##  |-- hp: double (nullable = true)
##  |-- drat: double (nullable = true)
##  |-- wt: double (nullable = true)
##  |-- qsec: double (nullable = true)
##  |-- vs: double (nullable = true)
##  |-- am: double (nullable = true)
##  |-- gear: double (nullable = true)
##  |-- carb: double (nullable = true)


### SparkDataFrame Operations

#### Selecting rows, columns

SparkDataFrames support a number of functions to do structured data processing. Here we include some basic examples; a complete list can be found in the API docs.

You can select columns by passing in column names as strings.

head(select(carsDF, "mpg"))

##    mpg
## 1 21.0
## 2 21.0
## 3 22.8
## 4 21.4
## 5 18.7
## 6 18.1


Filter the SparkDataFrame to only retain rows with mpg less than 20 miles/gallon.
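
For example:

head(filter(carsDF, carsDF$mpg < 20))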

### User-Defined Function

In SparkR, we support several kinds of user-defined functions (UDFs).

#### Apply by Partition

dapply can apply a function to each partition of a SparkDataFrame. The function to be applied to each partition should have only one parameter, a data.frame corresponding to a partition, and the output should be a data.frame as well. The schema specifies the row format of the resulting SparkDataFrame. It must match the data types of the returned value. See here for the mapping between R and Spark.

We convert mpg to kmpg (kilometers per gallon). carsSubDF is a SparkDataFrame with a subset of carsDF columns.

carsSubDF <- select(carsDF, "model", "mpg")
schema <- "model STRING, mpg DOUBLE, kmpg DOUBLE"
out <- dapply(carsSubDF, function(x) { x <- cbind(x, x$mpg * 1.61) }, schema)
head(collect(out))

##               model  mpg   kmpg
## 1         Mazda RX4 21.0 33.810
## 2     Mazda RX4 Wag 21.0 33.810
## 3        Datsun 710 22.8 36.708
## 4    Hornet 4 Drive 21.4 34.454
## 5 Hornet Sportabout 18.7 30.107
## 6           Valiant 18.1 29.141

Like dapply, dapplyCollect can apply a function to each partition of a SparkDataFrame and collect the result back. The output of the function should be a data.frame, but no schema is required in this case. Note that dapplyCollect can fail if the output of the UDF on all partitions cannot be pulled into the driver's memory.

out <- dapplyCollect(
         carsSubDF,
         function(x) {
           x <- cbind(x, "kmpg" = x$mpg * 1.61)
         })
head(out, 3)

##           model  mpg   kmpg
## 1     Mazda RX4 21.0 33.810
## 2 Mazda RX4 Wag 21.0 33.810
## 3    Datsun 710 22.8 36.708


#### Apply by Group

gapply can apply a function to each group of a SparkDataFrame. The function is applied to each group of the SparkDataFrame and should have only two parameters: the grouping key and an R data.frame corresponding to that key. The groups are chosen from the SparkDataFrame's column(s). The output of the function should be a data.frame. The schema specifies the row format of the resulting SparkDataFrame. It must represent the R function's output schema on the basis of Spark data types. The column names of the returned data.frame are set by the user. See here for the mapping between R and Spark.

schema <- structType(structField("cyl", "double"), structField("max_mpg", "double"))
result <- gapply(
    carsDF,
    "cyl",
    function(key, x) {
        y <- data.frame(key, max(x$mpg))
    },
    schema)
head(arrange(result, "max_mpg", decreasing = TRUE))

##   cyl max_mpg
## 1   4    33.9
## 2   6    21.4
## 3   8    19.2

Like gapply, gapplyCollect can apply a function to each group of a SparkDataFrame and collect the result back to an R data.frame. The output of the function should be a data.frame, but no schema is required in this case. Note that gapplyCollect can fail if the output of the UDF on all partitions cannot be pulled into the driver's memory.

result <- gapplyCollect(
    carsDF,
    "cyl",
    function(key, x) {
        y <- data.frame(key, max(x$mpg))
        colnames(y) <- c("cyl", "max_mpg")
        y
    })
head(result[order(result$max_mpg, decreasing = TRUE), ])

##   cyl max_mpg
## 2   4    33.9
## 3   6    21.4
## 1   8    19.2

#### Distribute Local Functions

Similar to lapply in native R, spark.lapply runs a function over a list of elements and distributes the computations with Spark. It works in a manner similar to applying doParallel or lapply to the elements of a list. The results of all the computations should fit on a single machine. If that is not the case, you can do something like df <- createDataFrame(list) and then use dapply.

We use svm in package e1071 as an example. We use all default settings except for varying costs of constraints violation. spark.lapply can train those different models in parallel.

costs <- exp(seq(from = log(1), to = log(1000), length.out = 5))
train <- function(cost) {
  stopifnot(requireNamespace("e1071", quietly = TRUE))
  model <- e1071::svm(Species ~ ., data = iris, cost = cost)
  summary(model)
}

This returns a list of model summaries.

model.summaries <- spark.lapply(costs, train)

class(model.summaries)

## [1] "list"

To avoid a lengthy display, we only present the partial result of the second fitted model. You are free to inspect other models as well.

print(model.summaries[[2]])

## $call
## svm(formula = Species ~ ., data = iris, cost = cost)
##
## $type
## [1] 0
##
## $kernel
## [1] 2
##
## $cost
## [1] 5.623413
##
## $degree
## [1] 3
##
## $gamma
## [1] 0.25
##
## $coef0
## [1] 0
##
## $nu
## [1] 0.5
##
## $epsilon
## [1] 0.1
##
## $sparse
## [1] FALSE
##
## $scaled
## [1] TRUE TRUE TRUE TRUE
##
## $x.scale
## $x.scale$scaled:center
## Sepal.Length  Sepal.Width Petal.Length  Petal.Width
##     5.843333     3.057333     3.758000     1.199333
##
## $x.scale$scaled:scale
## Sepal.Length  Sepal.Width Petal.Length  Petal.Width
##    0.8280661    0.4358663    1.7652982    0.7622377
##
##
## $y.scale
## NULL
##
## $nclasses
## [1] 3
##
## $levels
## [1] "setosa"     "versicolor" "virginica"
##
## $tot.nSV
## [1] 35
##
## $nSV
## [1]  6 15 14
##
## $labels
## [1] 1 2 3
##
## $SV
##     Sepal.Length Sepal.Width Petal.Length Petal.Width
## 14   -1.86378030 -0.13153881   -1.5056946  -1.4422448
## 16   -0.17309407  3.08045544   -1.2791040  -1.0486668
## 21   -0.53538397  0.78617383   -1.1658087  -1.3110521
## 23   -1.50149039  1.24503015   -1.5623422  -1.3110521
## 24   -0.89767388  0.55674567   -1.1658087  -0.9174741
## 42   -1.62225369 -1.73753594   -1.3923993  -1.1798595
## 51    1.39682886  0.32731751    0.5336209   0.2632600
## 53    1.27606556  0.09788935    0.6469162   0.3944526
## 54   -0.41462067 -1.73753594    0.1370873   0.1320673
## 55    0.79301235 -0.59039513    0.4769732   0.3944526
##  [ reached getOption("max.print") -- omitted 25 rows ]
##
## $index
##  [1]  14  16  21  23  24  42  51  53  54  55  58  61  69  71  73  78  79
## [18]  84  85  86  99 107 111 119 120 124 127 128 130 132 134 135 139 149
## [35] 150
##
## $rho
## [1] -0.10346530  0.12160294 -0.09540346
##
## $compprob
## [1] FALSE
##
## $probA
## NULL
##
## $probB
## NULL
##
## $sigma
## NULL
##
## $coefs
##              [,1]       [,2]
##  [1,]  0.00000000 0.06561739
##  [2,]  0.76813720 0.93378721
##  [3,]  0.00000000 0.12123270
##  [4,]  0.00000000 0.31170741
##  [5,]  1.11614066 0.46397392
##  [6,]  1.88141600 1.10392128
##  [7,] -0.55872622 0.00000000
##  [8,]  0.00000000 5.62341325
##  [9,]  0.00000000 0.27711792
## [10,]  0.00000000 5.28440007
## [11,] -1.06596713 0.00000000
## [12,] -0.57076709 1.09019756
## [13,] -0.03365904 5.62341325
## [14,]  0.00000000 5.62341325
## [15,]  0.00000000 5.62341325
## [16,]  0.00000000 5.62341325
## [17,]  0.00000000 4.70398738
## [18,]  0.00000000 5.62341325
## [19,]  0.00000000 4.97981371
## [20,] -0.77497987 0.00000000
##  [ reached getOption("max.print") -- omitted 15 rows ]
##
## $na.action
## NULL
##
## $fitted
##      1      2      3      4      5      6      7      8      9     10
## setosa setosa setosa setosa setosa setosa setosa setosa setosa setosa
##     11     12     13     14     15     16     17     18     19     20
## setosa setosa setosa setosa setosa setosa setosa setosa setosa setosa
##     21     22     23     24     25     26     27     28     29     30
## setosa setosa setosa setosa setosa setosa setosa setosa setosa setosa
##     31     32     33     34     35     36     37     38     39     40
## setosa setosa setosa setosa setosa setosa setosa setosa setosa setosa
##  [ reached getOption("max.print") -- omitted 110 entries ]
## Levels: setosa versicolor virginica
##
## $decision.values
##     setosa/versicolor setosa/virginica versicolor/virginica
## 1           1.1911739        1.0908424            1.1275805
## 2           1.1336557        1.0619543            1.3260964
## 3           1.2085065        1.0698101            1.0511345
## 4           1.1646153        1.0505915            1.0806874
## 5           1.1880814        1.0950348            0.9542815
## 6           1.0990761        1.0984626            0.9326361
## 7           1.1573474        1.0343287            0.9726843
## 8           1.1851598        1.0815750            1.2206802
## 9           1.1673499        1.0406734            0.8837945
## 10          1.1629911        1.0560925            1.2430067
## 11          1.1339282        1.0803946            1.0338357
## 12          1.1724182        1.0641469            1.1190423
## 13          1.1827355        1.0667956            1.1414844
##  [ reached getOption("max.print") -- omitted 137 rows ]
##
## $terms
## Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width
## attr(,"variables")
## list(Species, Sepal.Length, Sepal.Width, Petal.Length, Petal.Width)
## attr(,"factors")
##              Sepal.Length Sepal.Width Petal.Length Petal.Width
## Species                 0           0            0           0
## Sepal.Length            1           0            0           0
## Sepal.Width             0           1            0           0
## Petal.Length            0           0            1           0
## Petal.Width             0           0            0           1
## attr(,"term.labels")
## [1] "Sepal.Length" "Sepal.Width"  "Petal.Length" "Petal.Width"
## attr(,"order")
## [1] 1 1 1 1
## attr(,"intercept")
## [1] 0
## attr(,"response")
## [1] 1
## attr(,".Environment")
## <environment: 0x7fd4895d9b18>
## attr(,"predvars")
## list(Species, Sepal.Length, Sepal.Width, Petal.Length, Petal.Width)
## attr(,"dataClasses")
##      Species Sepal.Length  Sepal.Width Petal.Length  Petal.Width
##     "factor"    "numeric"    "numeric"    "numeric"    "numeric"
##
## attr(,"class")
## [1] "summary.svm"

### SQL Queries

A SparkDataFrame can also be registered as a temporary view in Spark SQL so that one can run SQL queries over its data. The sql function enables applications to run SQL queries programmatically and returns the result as a SparkDataFrame.

people <- read.df(paste0(sparkR.conf("spark.home"),
                         "/examples/src/main/resources/people.json"), "json")

Register this SparkDataFrame as a temporary view.

createOrReplaceTempView(people, "people")

SQL statements can be run using the sql method.

teenagers <- sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
head(teenagers)

##     name
## 1 Justin

## Machine Learning

SparkR supports the following machine learning models and algorithms.

#### Classification

• Linear Support Vector Machine (SVM) Classifier

• Logistic Regression

• Multilayer Perceptron (MLP)

• Naive Bayes

#### Regression

• Accelerated Failure Time (AFT) Survival Model

• Generalized Linear Model (GLM)

• Isotonic Regression

#### Tree - Classification and Regression

• Decision Tree

• Gradient-Boosted Trees (GBT)

• Random Forest

#### Clustering

• Bisecting $$k$$-means

• Gaussian Mixture Model (GMM)

• $$k$$-means Clustering

• Latent Dirichlet Allocation (LDA)

#### Collaborative Filtering

• Alternating Least Squares (ALS)

#### Frequent Pattern Mining

• FP-growth

#### Statistics

• Kolmogorov-Smirnov Test

### R Formula

For most of the algorithms above, SparkR supports R formula operators, including ~, ., :, + and -, for model fitting. This makes it a similar experience to using R functions.

### Training and Test Sets

We can easily split a SparkDataFrame into random training and test sets with the randomSplit function. It returns a list of split SparkDataFrames with the provided weights. We use carsDF as an example and want about 70% of the data for training and 30% for testing.
splitDF_list <- randomSplit(carsDF, c(0.7, 0.3), seed = 0)
carsDF_train <- splitDF_list[[1]]
carsDF_test <- splitDF_list[[2]]

count(carsDF_train)

## [1] 21

head(carsDF_train)

##                model  mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## 1 Cadillac Fleetwood 10.4   8 472.0 205 2.93 5.250 17.98  0  0    3    4
## 2         Camaro Z28 13.3   8 350.0 245 3.73 3.840 15.41  0  0    3    4
## 3         Duster 360 14.3   8 360.0 245 3.21 3.570 15.84  0  0    3    4
## 4           Fiat 128 32.4   4  78.7  66 4.08 2.200 19.47  1  1    4    1
## 5          Fiat X1-9 27.3   4  79.0  66 4.08 1.935 18.90  1  1    4    1
## 6     Ford Pantera L 15.8   8 351.0 264 4.22 3.170 14.50  0  1    5    4

count(carsDF_test)

## [1] 11

head(carsDF_test)

##               model  mpg cyl disp  hp drat    wt  qsec vs am gear carb
## 1       AMC Javelin 15.2   8  304 150 3.15 3.435 17.30  0  0    3    2
## 2 Chrysler Imperial 14.7   8  440 230 3.23 5.345 17.42  0  0    3    4
## 3        Datsun 710 22.8   4  108  93 3.85 2.320 18.61  1  1    4    1
## 4  Dodge Challenger 15.5   8  318 150 2.76 3.520 16.87  0  0    3    2
## 5      Ferrari Dino 19.7   6  145 175 3.62 2.770 15.50  0  1    5    6
## 6     Mazda RX4 Wag 21.0   6  160 110 3.90 2.875 17.02  0  1    4    4

### Models and Algorithms

#### Linear Support Vector Machine (SVM) Classifier

The linear Support Vector Machine (SVM) classifier is an SVM classifier with linear kernels; it is a binary classifier. We use a simple example to show how to use spark.svmLinear for binary classification.

# load training data and create a DataFrame
t <- as.data.frame(Titanic)
training <- createDataFrame(t)
# fit a Linear SVM classifier model
model <- spark.svmLinear(training, Survived ~ ., regParam = 0.01, maxIter = 10)
summary(model)

## $coefficients
##                 Estimate
## (Intercept)  0.155016898
## Class_Crew   0.616590877
## Class_1st    0.000000000
## Class_3rd    0.148622548
## Sex_Male     0.210290445
## Freq        -0.004794042
##
## $numClasses
## [1] 2
##
## $numFeatures
## [1] 6


Predict values on training data.

prediction <- predict(model, training)


#### Logistic Regression

Logistic regression is a widely-used model when the response is categorical. It can be seen as a special case of the Generalized Linear Predictive Model. We provide spark.logit on top of spark.glm to support logistic regression with advanced hyper-parameters. It supports both binary and multiclass classification with elastic-net regularization and feature standardization, similar to glmnet.

We use a simple example to demonstrate spark.logit usage. In general, there are three steps when using spark.logit: 1) create a SparkDataFrame from a proper data source; 2) fit a logistic regression model using spark.logit with proper parameter settings; and 3) obtain the coefficient matrix of the fitted model using summary, and use the model for prediction with predict.

Binomial logistic regression

t <- as.data.frame(Titanic)
training <- createDataFrame(t)
model <- spark.logit(training, Survived ~ ., regParam = 0.04741301)
summary(model)

## $coefficients
##                 Estimate
## (Intercept) -0.108748837
## Class_Crew   0.152970953
## Class_1st   -0.018199615
## Class_3rd    0.116011205
## Sex_Male     0.201436637
## Age_Adult    0.326629954
## Freq        -0.003310102

Predict values on training data.

fitted <- predict(model, training)

Multinomial logistic regression against three classes.

t <- as.data.frame(Titanic)
training <- createDataFrame(t)
# Note in this case, Spark infers it is multinomial logistic regression, so family = "multinomial" is optional.
model <- spark.logit(training, Class ~ ., regParam = 0.07815179)
summary(model)

## $coefficients
##                     Crew          1st          3rd          2nd
## (Intercept)  0.055749216 -0.036372587  0.020150962 -0.039527591
## Sex_Male    -0.131326302  0.088032755 -0.059233656  0.102527203
## Age_Adult   -0.208864050  0.141940085 -0.102563703  0.169487668
## Survived_No -0.081293328  0.052721700 -0.029409091  0.057980720
## Freq         0.002222453 -0.001555913  0.001303827 -0.001970366


#### Multilayer Perceptron

Multilayer perceptron classifier (MLPC) is a classifier based on the feedforward artificial neural network. MLPC consists of multiple layers of nodes. Each layer is fully connected to the next layer in the network. Nodes in the input layer represent the input data. All other nodes map inputs to outputs by a linear combination of the inputs with the nodes weights $$w$$ and bias $$b$$ and applying an activation function. This can be written in matrix form for MLPC with $$K+1$$ layers as follows: $y(x)=f_K(\ldots f_2(w_2^T f_1(w_1^T x + b_1) + b_2) \ldots + b_K).$

Nodes in intermediate layers use sigmoid (logistic) function: $f(z_i) = \frac{1}{1+e^{-z_i}}.$

Nodes in the output layer use softmax function: $f(z_i) = \frac{e^{z_i}}{\sum_{k=1}^N e^{z_k}}.$

The number of nodes $$N$$ in the output layer corresponds to the number of classes.

MLPC employs backpropagation for learning the model. We use the logistic loss function for optimization and L-BFGS as an optimization routine.

spark.mlp requires at least two columns in data: one named "label" and the other named "features". The "features" column should be in libSVM format.

We use Titanic data set to show how to use spark.mlp in classification.

t <- as.data.frame(Titanic)
training <- createDataFrame(t)
# fit a Multilayer Perceptron Classification Model
model <- spark.mlp(training, Survived ~ Age + Sex, blockSize = 128, layers = c(2, 3),
                   solver = "l-bfgs", maxIter = 100, tol = 0.5, stepSize = 1, seed = 1,
                   initialWeights = c(0, 0, 0, 5, 5, 5, 9, 9, 9))


To avoid lengthy display, we only present partial results of the model summary. You can check the full result from your sparkR shell.

# check the summary of the fitted model
summary(model)

## $numOfInputs
## [1] 2
##
## $numOfOutputs
## [1] 3
##
## $layers
## [1] 2 3
##
## $weights
## $weights[[1]]
## [1] 0
##
## $weights[[2]]
## [1] 0
##
## $weights[[3]]
## [1] 0
##
## $weights[[4]]
## [1] 5
##
## $weights[[5]]
## [1] 5
##
## [ reached getOption("max.print") -- omitted 4 entries ]

# make predictions using the fitted model
predictions <- predict(model, training)
head(select(predictions, predictions$prediction))

##   prediction
## 1         No
## 2         No
## 3         No
## 4         No
## 5         No
## 6         No


#### Naive Bayes

The naive Bayes model assumes independence among the features. spark.naiveBayes fits a Bernoulli naive Bayes model against a SparkDataFrame. The data should be all categorical. These models are often used for document classification.

titanic <- as.data.frame(Titanic)
titanicDF <- createDataFrame(titanic[titanic$Freq > 0, -5])
naiveBayesModel <- spark.naiveBayes(titanicDF, Survived ~ Class + Sex + Age)
summary(naiveBayesModel)

## $apriori
##            Yes        No
## [1,] 0.5769231 0.4230769
##
## $tables
##     Class_3rd Class_1st Class_2nd Sex_Male Age_Adult
## Yes 0.3125    0.3125    0.3125    0.5      0.5625
## No  0.4166667 0.25      0.25      0.5      0.75

naiveBayesPrediction <- predict(naiveBayesModel, titanicDF)
head(select(naiveBayesPrediction, "Class", "Sex", "Age", "Survived", "prediction"))

##   Class    Sex   Age Survived prediction
## 1   3rd   Male Child       No        Yes
## 2   3rd Female Child       No        Yes
## 3   1st   Male Adult       No        Yes
## 4   2nd   Male Adult       No        Yes
## 5   3rd   Male Adult       No         No
## 6  Crew   Male Adult       No        Yes

#### Accelerated Failure Time Survival Model

Survival analysis studies the expected duration of time until an event happens, and often the relationship with risk factors or treatment taken on the subject. In contrast to standard regression analysis, survival modeling has to deal with special characteristics of the data, including non-negative survival time and censoring.

The Accelerated Failure Time (AFT) model is a parametric survival model for censored data that assumes the effect of a covariate is to accelerate or decelerate the life course of an event by some constant. For more information, refer to the Wikipedia page AFT Model and the references there. Different from a Proportional Hazards Model designed for the same purpose, the AFT model is easier to parallelize because each instance contributes to the objective function independently.

library(survival)
ovarianDF <- createDataFrame(ovarian)
aftModel <- spark.survreg(ovarianDF, Surv(futime, fustat) ~ ecog_ps + rx)
summary(aftModel)

## $coefficients
##                  Value
## (Intercept)  6.8966930
## ecog_ps     -0.3850426
## rx           0.5286457
## Log(scale)  -0.1234418

aftPredictions <- predict(aftModel, ovarianDF)
head(aftPredictions)

##   futime fustat     age resid_ds rx ecog_ps label prediction
## 1     59      1 72.3315        2  1       1    59  1141.7256
## 2    115      1 74.4932        2  1       1   115  1141.7256
## 3    156      1 66.4658        2  1       2   156   776.8548
## 4    421      0 53.3644        2  2       1   421  1937.0893
## 5    431      1 50.3397        2  1       1   431  1141.7256
## 6    448      0 56.4301        1  1       2   448   776.8548


#### Generalized Linear Model

The main function is spark.glm. The following families and link functions are supported. The default is gaussian.

| Family   | Link functions                                 |
|----------|------------------------------------------------|
| gaussian | identity, log, inverse                         |
| binomial | logit, probit, cloglog (complementary log-log) |
| poisson  | log, identity, sqrt                            |
| gamma    | inverse, identity, log                         |

There are three ways to specify the family argument.

• Family name as a character string, e.g. family = "gaussian".

• Family function, e.g. family = binomial.

• Result returned by a family function, e.g. family = poisson(link = log).

• Note that there are two ways to specify the tweedie family: a) Set family = "tweedie" and specify the var.power and link.power b) When package statmod is loaded, the tweedie family is specified using the family definition therein, i.e., tweedie().
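
As a quick sketch of the first three options (reusing carsDF from above; vs in mtcars is binary, so it can serve as a binomial response, and carb is a count):

m1 <- spark.glm(carsDF, mpg ~ wt, family = "gaussian")
m2 <- spark.glm(carsDF, vs ~ wt, family = binomial)
m3 <- spark.glm(carsDF, carb ~ wt, family = poisson(link = log))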

We use the mtcars dataset as an illustration. The corresponding SparkDataFrame is carsDF. After fitting the model, we print out a summary and see the fitted values by making predictions on the original dataset. We can also pass in a new SparkDataFrame with the same schema to predict on new data.

gaussianGLM <- spark.glm(carsDF, mpg ~ wt + hp)
summary(gaussianGLM)

##
## Deviance Residuals:
## (Note: These are approximate quantiles with relative error <= 0.01)
##     Min       1Q   Median       3Q      Max
## -3.9410  -1.6499  -0.3267   1.0373   5.8538
##
## Coefficients:
##               Estimate  Std. Error  t value    Pr(>|t|)
## (Intercept)  37.227270   1.5987875  23.2847  0.0000e+00
## wt           -3.877831   0.6327335  -6.1287  1.1196e-06
## hp           -0.031773   0.0090297  -3.5187  1.4512e-03
##
## (Dispersion parameter for gaussian family taken to be 6.725785)
##
##     Null deviance: 1126.05  on 31  degrees of freedom
## Residual deviance:  195.05  on 29  degrees of freedom
## AIC: 156.7
##
## Number of Fisher Scoring iterations: 1


When doing prediction, a new column called prediction will be appended. Let's look at only a subset of columns here.

gaussianFitted <- predict(gaussianGLM, carsDF)
head(select(gaussianFitted, "model", "prediction", "mpg", "wt", "hp"))

##               model prediction  mpg    wt  hp
## 1         Mazda RX4   23.57233 21.0 2.620 110
## 2     Mazda RX4 Wag   22.58348 21.0 2.875 110
## 3        Datsun 710   25.27582 22.8 2.320  93
## 4    Hornet 4 Drive   21.26502 21.4 3.215 110
## 5 Hornet Sportabout   18.32727 18.7 3.440 175
## 6           Valiant   20.47382 18.1 3.460 105


The following is the same fit using the tweedie family:

tweedieGLM1 <- spark.glm(carsDF, mpg ~ wt + hp, family = "tweedie", var.power = 0.0)
summary(tweedieGLM1)

##
## Deviance Residuals:
## (Note: These are approximate quantiles with relative error <= 0.01)
##     Min       1Q   Median       3Q      Max
## -3.9410  -1.6499  -0.3267   1.0373   5.8538
##
## Coefficients:
##               Estimate  Std. Error  t value    Pr(>|t|)
## (Intercept)  37.227270   1.5987875  23.2847  0.0000e+00
## wt           -3.877831   0.6327335  -6.1287  1.1196e-06
## hp           -0.031773   0.0090297  -3.5187  1.4512e-03
##
## (Dispersion parameter for tweedie family taken to be 6.725785)
##
##     Null deviance: 1126.05  on 31  degrees of freedom
## Residual deviance:  195.05  on 29  degrees of freedom
## AIC: 156.7
##
## Number of Fisher Scoring iterations: 1


We can try other distributions in the tweedie family, for example, a compound Poisson distribution with a log link:

tweedieGLM2 <- spark.glm(carsDF, mpg ~ wt + hp, family = "tweedie",
var.power = 1.2, link.power = 0.0)
summary(tweedieGLM2)

##
## Deviance Residuals:
## (Note: These are approximate quantiles with relative error <= 0.01)
##      Min        1Q    Median        3Q       Max
## -0.58074  -0.25335  -0.09892   0.18608   0.82717
##
## Coefficients:
##                Estimate  Std. Error  t value    Pr(>|t|)
## (Intercept)   3.8500849  0.06698272  57.4788  0.0000e+00
## wt           -0.2018426  0.02897283  -6.9666  1.1691e-07
## hp           -0.0016248  0.00041603  -3.9054  5.1697e-04
##
## (Dispersion parameter for tweedie family taken to be 0.1340111)
##
##     Null deviance: 29.8820  on 31  degrees of freedom
## Residual deviance:  3.7739  on 29  degrees of freedom
## AIC: NA
##
## Number of Fisher Scoring iterations: 4


#### Isotonic Regression

spark.isoreg fits an Isotonic Regression model against a SparkDataFrame. It solves a weighted univariate regression problem under a complete order constraint. Specifically, given a set of real observed responses $$y_1, \ldots, y_n$$, corresponding real features $$x_1, \ldots, x_n$$, and optionally positive weights $$w_1, \ldots, w_n$$, we want to find a monotone (piecewise linear) function $$f$$ to minimize $\ell(f) = \sum_{i=1}^n w_i (y_i - f(x_i))^2.$

There are a few more arguments that may be useful.

• weightCol: a character string specifying the weight column.

• isotonic: logical value indicating whether the output sequence should be isotonic/increasing (TRUE) or antitonic/decreasing (FALSE).

• featureIndex: the index of the feature on the right hand side of the formula if it is a vector column (default: 0), no effect otherwise.

We use an artificial example to show the use.

y <- c(3.0, 6.0, 8.0, 5.0, 7.0)
x <- c(1.0, 2.0, 3.5, 3.0, 4.0)
w <- rep(1.0, 5)
data <- data.frame(y = y, x = x, w = w)
df <- createDataFrame(data)
isoregModel <- spark.isoreg(df, y ~ x, weightCol = "w")
isoregFitted <- predict(isoregModel, df)
head(isoregFitted)

##     x y prediction
## 1 1.0 3        3.0
## 2 2.0 6        5.5
## 3 3.5 8        7.5
## 4 3.0 5        5.5
## 5 4.0 7        7.5


In the prediction stage, based on the fitted monotone piecewise function, the rules are:

• If the prediction input exactly matches a training feature, the associated prediction is returned. If there are multiple predictions with the same feature, one of them is returned; which one is undefined.

• If the prediction input is lower or higher than all training features, the prediction with the lowest or highest feature is returned, respectively. If there are multiple predictions with the same feature, the lowest or highest is returned, respectively.

• If the prediction input falls between two training features, the prediction is treated as a piecewise linear function and the interpolated value is calculated from the predictions of the two closest features. If there are multiple values with the same feature, the same rules as in the previous point apply.

For example, when the input is $$3.2$$, the two closest feature values are $$3.0$$ and $$3.5$$, so the predicted value is a linear interpolation between the predicted values at $$3.0$$ and $$3.5$$.
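
With the fitted values above ($$f(3.0) = 5.5$$ and $$f(3.5) = 7.5$$), this gives $5.5 + (7.5 - 5.5) \cdot \frac{3.2 - 3.0}{3.5 - 3.0} = 6.3,$ matching the prediction below.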

newDF <- createDataFrame(data.frame(x = c(1.5, 3.2)))
head(predict(isoregModel, newDF))

##     x prediction
## 1 1.5       4.25
## 2 3.2       6.30


#### Decision Tree

spark.decisionTree fits a decision tree classification or regression model on a SparkDataFrame. Users can call summary to get a summary of the fitted model, predict to make predictions, and write.ml/read.ml to save/load fitted models.

We use the Titanic dataset to train a decision tree and make predictions:

t <- as.data.frame(Titanic)
df <- createDataFrame(t)
dtModel <- spark.decisionTree(df, Survived ~ ., type = "classification", maxDepth = 2)
summary(dtModel)

## Formula:  Survived ~ .
## Number of features:  6
## Features:  Class_Crew Class_1st Class_3rd Sex_Male Age_Adult Freq
## Feature importances:  (6,[4,5],[0.0821114369501465,0.9178885630498534])
## Max Depth:  2
##  DecisionTreeClassificationModel (uid=dtc_b2f6b40224dc) of depth 2 with 7 nodes
##   If (feature 5 <= 4.5)
##    If (feature 4 in {1.0})
##     Predict: 0.0
##    Else (feature 4 not in {1.0})
##     Predict: 0.0
##   Else (feature 5 > 4.5)
##    If (feature 5 <= 84.5)
##     Predict: 1.0
##    Else (feature 5 > 84.5)
##     Predict: 0.0
##

predictions <- predict(dtModel, df)


#### Gradient-Boosted Trees

spark.gbt fits a gradient-boosted tree classification or regression model on a SparkDataFrame. Users can call summary to get a summary of the fitted model, predict to make predictions, and write.ml/read.ml to save/load fitted models.

We use the Titanic dataset to train a gradient-boosted tree and make predictions:

t <- as.data.frame(Titanic)
df <- createDataFrame(t)
gbtModel <- spark.gbt(df, Survived ~ ., type = "classification", maxDepth = 2, maxIter = 2)
summary(gbtModel)

## Formula:  Survived ~ .
## Number of features:  6
## Features:  Class_Crew Class_1st Class_3rd Sex_Male Age_Adult Freq
## Feature importances:  (6,[0,4,5],[0.13232868918568455,0.04105571847507324,0.8266155923392422])
## Max Depth:  2
## Number of trees:  2
## Tree weights:  1 0.1
##  GBTClassificationModel (uid=gbtc_c57a2a2d102f) with 2 trees
##   Tree 0 (weight 1.0):
##     If (feature 5 <= 4.5)
##      If (feature 4 in {1.0})
##       Predict: -1.0
##      Else (feature 4 not in {1.0})
##       Predict: -0.3333333333333333
##     Else (feature 5 > 4.5)
##      If (feature 5 <= 84.5)
##       Predict: 0.5714285714285714
##      Else (feature 5 > 84.5)
##       Predict: -0.42857142857142855
##   Tree 1 (weight 0.1):
##     If (feature 0 in {0.0})
##      If (feature 5 <= 0.5)
##       Predict: -1.3569745249367313
##      Else (feature 5 > 0.5)
##       Predict: 0.03904410748693565
##     Else (feature 0 not in {0.0})
##      If (feature 5 <= 289.5)
##       Predict: 0.8386754830442805
##      Else (feature 5 > 289.5)
##       Predict: -1.1917465204842816
##

predictions <- predict(gbtModel, df)


#### Random Forest

spark.randomForest fits a random forest classification or regression model on a SparkDataFrame. Users can call summary to get a summary of the fitted model, predict to make predictions, and write.ml/read.ml to save/load fitted models.

In the following example, we use the Titanic dataset to train a random forest and make predictions:

t <- as.data.frame(Titanic)
df <- createDataFrame(t)
rfModel <- spark.randomForest(df, Survived ~ ., type = "classification", maxDepth = 2, numTrees = 2)
summary(rfModel)

## Formula:  Survived ~ .
## Number of features:  6
## Features:  Class_Crew Class_1st Class_3rd Sex_Male Age_Adult Freq
## Feature importances:  (6,[1,3,5],[0.3283972370801142,0.005455825413586564,0.6661469375062993])
## Max Depth:  2
## Number of trees:  2
## Tree weights:  1 1
##  RandomForestClassificationModel (uid=rfc_60715e79bd90) with 2 trees
##   Tree 0 (weight 1.0):
##     If (feature 5 <= 0.5)
##      If (feature 3 in {0.0})
##       Predict: 0.0
##      Else (feature 3 not in {0.0})
##       Predict: 0.0
##     Else (feature 5 > 0.5)
##      If (feature 1 in {1.0})
##       Predict: 1.0
##      Else (feature 1 not in {1.0})
##       Predict: 0.0
##   Tree 1 (weight 1.0):
##     If (feature 5 <= 84.5)
##      If (feature 1 in {1.0})
##       Predict: 0.0
##      Else (feature 1 not in {1.0})
##       Predict: 1.0
##     Else (feature 5 > 84.5)
##      Predict: 0.0
##

predictions <- predict(rfModel, df)


#### Bisecting k-Means

spark.bisectingKmeans fits a bisecting $$k$$-means model, a kind of hierarchical clustering that uses a divisive (or "top-down") approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.

t <- as.data.frame(Titanic)
training <- createDataFrame(t)
model <- spark.bisectingKmeans(training, Class ~ Survived, k = 4)
summary(model)

## $k
## [1] 4
##
## $coefficients
##   Survived_No
## 1 0
## 2 1
## 3 0
## 4 1
##
## $size ##$size[[1]]
## [1] 16
##
## $size[[2]]
## [1] 16
##
## $size[[3]]
## [1] 0
##
## $size[[4]]
## [1] 0
##
##
## $cluster
## SparkDataFrame[prediction:int]
##
## $is.loaded
## [1] FALSE

fitted <- predict(model, training)
head(select(fitted, "Class", "prediction"))

##   Class prediction
## 1   1st          1
## 2   2nd          1
## 3   3rd          1
## 4  Crew          1
## 5   1st          1
## 6   2nd          1

#### Gaussian Mixture Model

spark.gaussianMixture fits a multivariate Gaussian Mixture Model (GMM) against a SparkDataFrame. Expectation-Maximization (EM) is used to approximate the maximum likelihood estimator (MLE) of the model.

We use a simulated example to demonstrate the usage.

X1 <- data.frame(V1 = rnorm(4), V2 = rnorm(4))
X2 <- data.frame(V1 = rnorm(6, 3), V2 = rnorm(6, 4))
data <- rbind(X1, X2)
df <- createDataFrame(data)
gmmModel <- spark.gaussianMixture(df, ~ V1 + V2, k = 2)
summary(gmmModel)

## $lambda
## [1] 0.5536208 0.4463792
##
## $mu ##$mu[[1]]
## [1] 1.429689 1.569165
##
## $mu[[2]]
## [1] 2.413790 2.296108
##
##
## $sigma
## $sigma[[1]]
##          [,1]     [,2]
## [1,] 1.67633  3.335639
## [2,] 3.335639 7.063168
##
## $sigma[[2]]
##      [,1]     [,2]
## [1,] 5.279463 2.719007
## [2,] 2.719007 1.486211
##
##
## $loglik
## [1] -31.77761
##
## $posterior
## SparkDataFrame[posterior:array<double>]
##
## $is.loaded
## [1] FALSE

gmmFitted <- predict(gmmModel, df)
head(select(gmmFitted, "V1", "V2", "prediction"))

##            V1         V2 prediction
## 1  0.87896096  0.4153526          0
## 2 -0.04033495 -0.3142507          0
## 3  0.14300284 -2.0576358          0
## 4 -1.45542199  0.3614593          1
## 5  4.88710614  3.9506910          1
## 6  2.77808414  2.4819867          1

#### k-Means Clustering

spark.kmeans fits a $$k$$-means clustering model against a SparkDataFrame. As an unsupervised learning method, we don't need a response variable. Hence, the left hand side of the R formula should be left blank. The clustering is based only on the variables on the right hand side.

kmeansModel <- spark.kmeans(carsDF, ~ mpg + hp + wt, k = 3)
summary(kmeansModel)

## $k
## [1] 3
##
## $coefficients
##        mpg        hp     wt
## 1 23.28947  99.47368 2.6920
## 2 15.45000 205.75000 4.0195
## 3 15.00000 335.00000 3.5700
##
## $size
## $size[[1]]
## [1] 19
##
## $size[[2]]
## [1] 12
##
## $size[[3]]
## [1] 1
##
##
## $cluster
## SparkDataFrame[prediction:int]
##
## $is.loaded
## [1] FALSE
##
## $clusterSize
## [1] 3

kmeansPredictions <- predict(kmeansModel, carsDF)
head(select(kmeansPredictions, "model", "mpg", "hp", "wt", "prediction"), n = 20L)

##                  model  mpg  hp    wt prediction
## 1            Mazda RX4 21.0 110 2.620          0
## 2        Mazda RX4 Wag 21.0 110 2.875          0
## 3           Datsun 710 22.8  93 2.320          0
## 4       Hornet 4 Drive 21.4 110 3.215          0
## 5    Hornet Sportabout 18.7 175 3.440          1
## 6              Valiant 18.1 105 3.460          0
## 7           Duster 360 14.3 245 3.570          1
## 8            Merc 240D 24.4  62 3.190          0
## 9             Merc 230 22.8  95 3.150          0
## 10            Merc 280 19.2 123 3.440          0
## 11           Merc 280C 17.8 123 3.440          0
## 12          Merc 450SE 16.4 180 4.070          1
## 13          Merc 450SL 17.3 180 3.730          1
## 14         Merc 450SLC 15.2 180 3.780          1
## 15  Cadillac Fleetwood 10.4 205 5.250          1
## 16 Lincoln Continental 10.4 215 5.424          1
## 17   Chrysler Imperial 14.7 230 5.345          1
## 18            Fiat 128 32.4  66 2.200          0
## 19         Honda Civic 30.4  52 1.615          0
## 20      Toyota Corolla 33.9  65 1.835          0


#### Latent Dirichlet Allocation

spark.lda fits a Latent Dirichlet Allocation model on a SparkDataFrame. It is often used in topic modeling in which topics are inferred from a collection of text documents. LDA can be thought of as a clustering algorithm as follows:

• Topics correspond to cluster centers, and documents correspond to examples (rows) in a dataset.

• Topics and documents both exist in a feature space, where feature vectors are vectors of word counts (bag of words).

• Rather than clustering using a traditional distance, LDA uses a function based on a statistical model of how text documents are generated.

To use LDA, we need to specify a features column in data where each entry represents a document. There are two options for the column:

• character string: This can be a string of the whole document. It will be parsed automatically. Additional stop words can be added in customizedStopWords.

• libSVM: Each entry is a collection of words and will be processed directly.

Two more functions are provided for the fitted model.

• spark.posterior returns a SparkDataFrame containing a column of posterior probabilities vectors named “topicDistribution”.

• spark.perplexity returns the log perplexity of a given SparkDataFrame, or the log perplexity of the training data if the argument data is missing.

For more information, see the help document ?spark.lda.

Let's look at an artificial example.

corpus <- data.frame(features = c(
"1 2 6 0 2 3 1 1 0 0 3",
"1 3 0 1 3 0 0 2 0 0 1",
"1 4 1 0 0 4 9 0 1 2 0",
"2 1 0 3 0 0 5 0 2 3 9",
"3 1 1 9 3 0 2 0 0 1 3",
"4 2 0 3 4 5 1 1 1 4 0",
"2 1 0 3 0 0 5 0 2 2 9",
"1 1 1 9 2 1 2 0 0 1 3",
"4 4 0 3 4 2 1 3 0 0 0",
"2 8 2 0 3 0 2 0 2 7 2",
"1 1 1 9 0 2 2 0 0 3 3",
"4 1 0 0 4 5 1 3 0 1 0"))
corpusDF <- createDataFrame(corpus)
model <- spark.lda(data = corpusDF, k = 5, optimizer = "em")
summary(model)

## $docConcentration
## [1] 11 11 11 11 11
##
## $topicConcentration
## [1] 1.1
##
## $logLikelihood
## [1] -353.2948
##
## $logPerplexity
## [1] 2.676476
##
## $isDistributed
## [1] TRUE
##
## $vocabSize
## [1] 10
##
## $topics
## SparkDataFrame[topic:int, term:array<string>, termWeights:array<double>]
##
## $vocabulary
##  [1] "0" "1" "2" "3" "4" "9" "5" "8" "7" "6"
##
## $trainingLogLikelihood
## [1] -239.5629
##
## $logPrior
## [1] -980.2974

posterior <- spark.posterior(model, corpusDF)
head(posterior)

##                features
## 1 1 2 6 0 2 3 1 1 0 0 3
## 2 1 3 0 1 3 0 0 2 0 0 1
## 3 1 4 1 0 0 4 9 0 1 2 0
## 4 2 1 0 3 0 0 5 0 2 3 9
## 5 3 1 1 9 3 0 2 0 0 1 3
## 6 4 2 0 3 4 5 1 1 1 4 0
##                                       topicDistribution
## 1 0.1972168, 0.1986637, 0.2022009, 0.2006592, 0.2012594
## 2 0.1989981, 0.1988759, 0.2016026, 0.2006369, 0.1998866
## 3 0.2020589, 0.2026099, 0.1968845, 0.1987297, 0.1997171
## 4 0.2004073, 0.1981928, 0.2013019, 0.2006331, 0.1994649
## 5 0.1971471, 0.1983961, 0.2023595, 0.2011584, 0.2009389
## 6 0.2020246, 0.2041815, 0.1955392, 0.1997228, 0.1985320

perplexity <- spark.perplexity(model, corpusDF)
perplexity

## [1] 2.676476


#### Alternating Least Squares

spark.als learns latent factors in collaborative filtering via alternating least squares.

There are multiple options that can be configured in spark.als, including rank, reg, and nonnegative. For a complete list, refer to the help file.

ratings <- list(list(0, 0, 4.0), list(0, 1, 2.0), list(1, 1, 3.0), list(1, 2, 4.0),
list(2, 1, 1.0), list(2, 2, 5.0))
df <- createDataFrame(ratings, c("user", "item", "rating"))
model <- spark.als(df, "rating", "user", "item", rank = 10, reg = 0.1, nonnegative = TRUE)


Extract latent factors.

stats <- summary(model)
userFactors <- stats$userFactors
itemFactors <- stats$itemFactors


Make predictions.

predicted <- predict(model, df)


#### FP-growth

spark.fpGrowth executes the FP-growth algorithm to mine frequent itemsets on a SparkDataFrame. itemsCol should be an array of values.

df <- selectExpr(createDataFrame(data.frame(rawItems = c(
"T,R,U", "T,S", "V,R", "R,U,T,V", "R,S", "V,S,U", "U,R", "S,T", "V,R", "V,U,S",
"T,V,U", "R,V", "T,S", "T,S", "S,T", "S,U", "T,R", "V,R", "S,V", "T,S,U"
))), "split(rawItems, ',') AS items")

fpm <- spark.fpGrowth(df, minSupport = 0.2, minConfidence = 0.5)


The spark.freqItemsets method can be used to retrieve a SparkDataFrame with the frequent itemsets.

head(spark.freqItemsets(fpm))

##   items freq
## 1     R    9
## 2     U    8
## 3  U, T    4
## 4  U, V    4
## 5  U, S    4
## 6     T   10


spark.associationRules returns a SparkDataFrame with the association rules.

head(spark.associationRules(fpm))

##   antecedent consequent confidence
## 1          V          R  0.5555556
## 2          S          T  0.5454545
## 3          T          S  0.6000000
## 4          R          V  0.5555556
## 5          U          T  0.5000000
## 6          U          V  0.5000000


We can make predictions based on the antecedent.

head(predict(fpm, df))

##        items prediction
## 1    T, R, U       S, V
## 2       T, S       NULL
## 3       V, R       NULL
## 4 R, U, T, V          S
## 5       R, S       T, V
## 6    V, S, U       R, T


#### Kolmogorov-Smirnov Test

spark.kstest runs a two-sided, one-sample Kolmogorov-Smirnov (KS) test. Given a SparkDataFrame, the test compares continuous data in a given column testCol with the theoretical distribution specified by parameter nullHypothesis. Users can call summary to get a summary of the test results.

In the following example, we test whether the Titanic dataset's Freq column follows a normal distribution. We set the parameters of the normal distribution using the mean and standard deviation of the sample.

t <- as.data.frame(Titanic)
df <- createDataFrame(t)
freqStats <- head(select(df, mean(df$Freq), sd(df$Freq)))
freqMean <- freqStats[1]
freqStd <- freqStats[2]

test <- spark.kstest(df, "Freq", "norm", c(freqMean, freqStd))
testSummary <- summary(test)
testSummary

## Kolmogorov-Smirnov test summary:
## degrees of freedom = 0
## statistic = 0.3065126710255011
## pValue = 0.0036336792155329256
## Very strong presumption against null hypothesis: Sample follows theoretical distribution.


### Model Persistence

The following example shows how to save/load an ML model in SparkR.

t <- as.data.frame(Titanic)
training <- createDataFrame(t)
gaussianGLM <- spark.glm(training, Freq ~ Sex + Age, family = "gaussian")

# Save and then load a fitted MLlib model
modelPath <- tempfile(pattern = "ml", fileext = ".tmp")
write.ml(gaussianGLM, modelPath)
gaussianGLM2 <- read.ml(modelPath)

# Check model summary
summary(gaussianGLM2)

##
## Saved-loaded model does not support output 'Deviance Residuals'.
##
## Coefficients:
##              Estimate  Std. Error   t value   Pr(>|t|)
## (Intercept)   -32.594      35.994  -0.90553  0.3726477
## Sex_Male       78.812      41.562   1.89624  0.0679311
## Age_Adult     123.938      41.562   2.98196  0.0057522
##
## (Dispersion parameter for gaussian family taken to be 13819.52)
##
##     Null deviance: 573341  on 31  degrees of freedom
## Residual deviance: 400766  on 29  degrees of freedom
## AIC: 400.7
##
## Number of Fisher Scoring iterations: 1

# Check model prediction
gaussianPredictions <- predict(gaussianGLM2, training)
head(gaussianPredictions)

##   Class    Sex   Age Survived Freq label prediction
## 1   1st   Male Child       No    0     0   46.21875
## 2   2nd   Male Child       No    0     0   46.21875
## 3   3rd   Male Child       No   35    35   46.21875
## 4  Crew   Male Child       No    0     0   46.21875
## 5   1st Female Child       No    0     0  -32.59375
## 6   2nd Female Child       No    0     0  -32.59375

unlink(modelPath)


## Structured Streaming

SparkR supports the Structured Streaming API.

You can check the Structured Streaming Programming Guide for an introduction to its programming model and basic concepts.

### Simple Source and Sink

Spark has a few built-in input sources. As an example, to test with a socket source reading text into words and displaying the computed word counts:

# Create DataFrame representing the stream of input lines from connection
lines <- read.stream("socket", host = hostname, port = port)

# Split the lines into words
words <- selectExpr(lines, "explode(split(value, ' ')) as word")

# Generate running word count
wordCounts <- count(groupBy(words, "word"))

# Start running the query that prints the running counts to the console
query <- write.stream(wordCounts, "console", outputMode = "complete")


### Kafka Source

It is simple to read data from Kafka. For more information, see Input Sources supported by Structured Streaming.

topic <- read.stream("kafka",
kafka.bootstrap.servers = "host1:port1,host2:port2",
subscribe = "topic1")
keyvalue <- selectExpr(topic, "CAST(key AS STRING)", "CAST(value AS STRING)")


### Operations and Sinks

Most of the common operations on SparkDataFrame are supported for streaming, including selection, projection, and aggregation. Once you have defined the final result, to start the streaming computation, you will call the write.stream method setting a sink and outputMode.

A streaming SparkDataFrame can be written for debugging to the console, to a temporary in-memory table, or for further processing in a fault-tolerant manner to a File Sink in different formats.

noAggDF <- select(where(deviceDataStreamingDf, "signal > 10"), "device")

# Print new data to console
write.stream(noAggDF, "console")

# Write new data to Parquet files
write.stream(noAggDF,
"parquet",
path = "path/to/destination/dir",
checkpointLocation = "path/to/checkpoint/dir")

# Aggregate
aggDF <- count(groupBy(noAggDF, "device"))

# Print updated aggregations to console
write.stream(aggDF, "console", outputMode = "complete")

# Have all the aggregates in an in-memory table. The query name will be the table name.
write.stream(aggDF, "memory", queryName = "aggregates", outputMode = "complete")



## Advanced Topics

### SparkR Object Classes

There are three main object classes in SparkR you may be working with.

• SparkDataFrame: the central component of SparkR. It is an S4 class representing a distributed collection of data organized into named columns, which is conceptually equivalent to a table in a relational database or a data frame in R. It has two slots, sdf and env.

• sdf stores a reference to the corresponding Spark Dataset in the Spark JVM backend.
• env saves the meta-information of the object such as isCached.

It can be created by the data import methods or by transforming an existing SparkDataFrame. We can manipulate a SparkDataFrame with numerous data processing functions and feed the result into machine learning algorithms.

• Column: an S4 class representing a column of SparkDataFrame. The slot jc saves a reference to the corresponding Column object in the Spark JVM backend.

It can be obtained from a SparkDataFrame with the $ operator, e.g., df$col. More often, it is used together with other functions, for example, with select to select particular columns, with filter and constructed conditions to select rows, and with aggregation functions to compute aggregate statistics for each group.

• GroupedData: an S4 class representing grouped data created by groupBy or by transforming other GroupedData. Its sgd slot saves a reference to a RelationalGroupedDataset object in the backend.

This is often an intermediate object with group information and followed up by aggregation operations.
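
These classes are easy to inspect interactively; a quick sketch, reusing carsDF from earlier:

class(carsDF)                  # "SparkDataFrame"
class(carsDF$mpg)              # "Column"
class(groupBy(carsDF, "cyl"))  # "GroupedData"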

### Architecture

A complete description of architecture can be seen in the references, in particular the paper SparkR: Scaling R Programs with Spark.

Under the hood, SparkR uses the Spark SQL engine. This avoids the overhead of running interpreted R code, and the optimized SQL execution engine in Spark uses structural information about data and computation flow to perform optimizations that speed up the computation.

The main method calls of actual computation happen in the Spark JVM of the driver. We have a socket-based SparkR API that allows us to invoke functions on the JVM from R. We use a SparkR JVM backend that listens on a Netty-based socket server.

Two kinds of RPCs are supported in the SparkR JVM backend: method invocation and creating new objects. Method invocation can be done in two ways.

• sparkR.callJMethod takes a reference to an existing Java object and a list of arguments to be passed on to the method.

• sparkR.callJStatic takes a class name for static method and a list of arguments to be passed on to the method.

The arguments are serialized using our custom wire format which is then deserialized on the JVM side. We then use Java reflection to invoke the appropriate method.

To create objects, sparkR.newJObject is used and then similarly the appropriate constructor is invoked with provided arguments.

Finally, we use a new R class jobj that refers to a Java object existing in the backend. These references are tracked on the Java side and are automatically garbage collected when they go out of scope on the R side.
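
A minimal sketch of these calls (an experimental, low-level API; the Java standard library classes here are used only for illustration):

jarr <- sparkR.newJObject("java.util.ArrayList")             # invoke a constructor; returns a jobj
sparkR.callJMethod(jarr, "add", 42L)                         # invoke an instance method
sparkR.callJMethod(jarr, "size")                             # returns 1
sparkR.callJStatic("java.lang.System", "currentTimeMillis")  # invoke a static method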

## Appendix

### R and Spark Data Types {#DataTypes}

| R         | Spark     |
|-----------|-----------|
| byte      | byte      |
| integer   | integer   |
| float     | float     |
| double    | double    |
| numeric   | double    |
| character | string    |
| string    | string    |
| binary    | binary    |
| raw       | binary    |
| logical   | boolean   |
| POSIXct   | timestamp |
| POSIXlt   | timestamp |
| Date      | date      |
| array     | array     |
| list      | array     |
| env       | map       |
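
As a quick check of a few of these mappings (a sketch), the inferred schema of a SparkDataFrame created from a local R data frame follows the table above:

rdf <- data.frame(i = 1L, d = 2.5, s = "a", b = TRUE, t = Sys.time(),
                  stringsAsFactors = FALSE)
printSchema(createDataFrame(rdf))
# expected: i integer, d double, s string, b boolean, t timestamp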