copy_to.spark_connection {sparklyr}    R Documentation
Copy an R data.frame to Spark

Description

Copy an R data.frame to Spark, and return a reference to the generated Spark DataFrame as a tbl_spark. The returned object will act as a dplyr-compatible interface to the underlying Spark table.
Usage

## S3 method for class 'spark_connection'
copy_to(dest, df, name = deparse(substitute(df)),
  memory = TRUE, repartition = 0L, overwrite = FALSE, ...)
Arguments

dest	A spark_connection.

df	An R data.frame.

name	The name to assign to the copied table in Spark.

memory	Boolean; should the table be cached into memory?

repartition	The number of partitions to use when distributing the table across the Spark cluster. The default (0) can be used to avoid partitioning.

overwrite	Boolean; overwrite a pre-existing table with the name name?

...	Optional arguments; currently unused.
Value

A tbl_spark, representing a dplyr-compatible interface to a Spark DataFrame.
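Examples

A minimal sketch of typical usage, assuming a local Spark installation is available via spark_connect(master = "local"); the table name "iris_spark" is illustrative. Note that sparklyr normalizes column names when copying, so iris's Petal.Length becomes Petal_Length in the Spark table.

```r
library(sparklyr)
library(dplyr)

# Connect to a local Spark instance (assumes Spark is installed locally)
sc <- spark_connect(master = "local")

# Copy the built-in iris data.frame to Spark; returns a tbl_spark.
# overwrite = TRUE replaces any existing table named "iris_spark".
iris_tbl <- copy_to(sc, iris, name = "iris_spark", overwrite = TRUE)

# The returned tbl_spark works with ordinary dplyr verbs, which are
# translated to Spark SQL and executed on the cluster
iris_tbl %>%
  group_by(Species) %>%
  summarise(mean_petal = mean(Petal_Length, na.rm = TRUE))

spark_disconnect(sc)
```

Because memory defaults to TRUE, the copied table is cached in Spark memory; pass memory = FALSE for large tables that are only read once.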