| title | Analyze Data in Local Compute Context \| Microsoft Docs |
|---|---|
| ms.custom | |
| ms.date | 05/18/2017 |
| ms.prod | sql-server-2016 |
| ms.reviewer | |
| ms.suite | |
| ms.technology | |
| ms.tgt_pltfrm | |
| ms.topic | article |
| applies_to | |
| dev_langs | |
| ms.assetid | 787bb526-4a13-40fa-9343-75d3bf5ba6a2 |
| caps.latest.revision | 13 |
| author | jeannt |
| ms.author | jeannt |
| manager | jhubbard |
Although complex R code often runs faster in the server compute context, sometimes it is simply more convenient to get your data out of [!INCLUDEssNoVersion] and analyze it on your private workstation. In this section, you'll learn how to switch back to a local compute context, and how to move data between contexts to optimize performance.
- Change the compute context to do all your work locally:

  ```R
  rxSetComputeContext("local")
  ```

- When extracting data from [!INCLUDEssNoVersion], you can often get better performance by increasing the number of rows extracted for each read. To do this, increase the value of the *rowsPerRead* parameter on the data source. Previously, *rowsPerRead* was set to 5000; here it is increased to 10000.

  ```R
  sqlServerDS1 <- RxSqlServerData(
      connectionString = sqlConnString,
      table = sqlFraudTable,
      colInfo = ccColInfo,
      rowsPerRead = 10000)
  ```

- Now, call rxSummary on the new data source:

  ```R
  rxSummary(formula = ~gender + balance + numTrans + numIntlTrans + creditLine,
      data = sqlServerDS1)
  ```
The results should be the same as when you ran rxSummary in the [!INCLUDEssNoVersion] compute context. However, the operation might be faster or slower, depending largely on the connection to your database, because the data must be transferred to your local computer for analysis.
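To see which compute context is faster for your workload, you can time the same call in both contexts. The following is a minimal sketch, not part of the original tutorial: it assumes the objects defined above (`sqlConnString`, `sqlServerDS1`) are available, and that `sqlCompute` is an `RxInSqlServer` compute context object created earlier in this tutorial (recreate it with `RxInSqlServer()` if you did not keep it).

```R
# Time the summary in the local compute context.
rxSetComputeContext("local")
localTime <- system.time(
    rxSummary(formula = ~gender + balance + numTrans + numIntlTrans + creditLine,
              data = sqlServerDS1)
)

# Switch back to the server context and time the same call.
# sqlCompute is assumed to be an RxInSqlServer object created earlier.
rxSetComputeContext(sqlCompute)
serverTime <- system.time(
    rxSummary(formula = ~gender + balance + numTrans + numIntlTrans + creditLine,
              data = sqlServerDS1)
)

# Return to the local context and compare elapsed times.
rxSetComputeContext("local")
print(rbind(local = localTime, server = serverTime))
```

Because the local run must pull every row over the network while the server run computes next to the data, the comparison mainly measures your connection to the database.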