Databricks - Apache Spark Quickstart - HelloWorld
Quick Start Using Python

Using a Databricks notebook to showcase DataFrame operations in Python.

Reference: http://spark.apache.org/docs/latest/q...

Take a look at the file system:

display(dbutils.fs.ls("/databricks-datasets/samples/docs/"))

DataFrames have transformations, which return pointers to new DataFrames, and actions, which return values.

A transformation:

textFile = spark.read.text("/databricks-datasets/samples/docs/README.md")

An action:

textFile.count()
Out[7]: 65

Output the first line from the text file:

textFile.first()
Out[8]: Row(value=u'Welcome to the Spark documentation!')

Now we use a filter transformation to return a new DataFrame containing a subset of the lines in the file. Filter the lines within the DataFrame:

linesWithSpark = textFile.filter(textFile.value.contains("Spark"))

Notice that this completes quickly: it is a transformation, and no action has been triggered yet. Only when an action such as count or take is performed below does Spark actually execute the work.

Perform a count (an action):

linesWithSpark.count()
Out[11]: 12

Output the first five rows:

linesWithSpark.take(5)
Out[12]: [Row(value=u'Welcome to the Spark documentation!'), Row(value=u'This readme will walk you through navigating and building the Spark documentation, which is included'), Row(value=u'here with the Spark source code. You can also find documentation specific to release versions of'), Row(value=u'Spark at http://spark.apache.org/documentation..., Row(value=u'whichever version of Spark you currently have checked out of revision control.')]
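Spark's transformation/action split mirrors Python's own lazy iterators: building a generator does no work, while consuming it does. A minimal stdlib-only sketch of the same filter-then-count flow, with a few stand-in lines in place of the README contents (the sample strings here are hypothetical, not the real file):

```python
# Stand-in lines, mimicking the README.md sample (hypothetical contents).
readme_lines = [
    "Welcome to the Spark documentation!",
    "This readme will walk you through building the docs.",
    "Spark source code is included here.",
]

# "Transformation": building the generator is lazy -- no line is scanned yet,
# just as linesWithSpark = textFile.filter(...) returns immediately.
lines_with_spark = (line for line in readme_lines if "Spark" in line)

# "Action": consuming the generator forces evaluation, like count() or take().
matches = list(lines_with_spark)
print(len(matches))
```

The same principle lets Spark chain many transformations and optimize them as a whole before any action triggers execution.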