This is an example of how to build an AWS Lambda data loader for the Snowflake database. (Snowflake is a cloud data platform suited to working with large amounts of data for data warehousing and analysis. AWS Lambda provides serverless compute, or really on-demand compute: the function runs whenever a file lands in the S3 bucket and loads it into Snowflake.)

Note:
--------
A very important point: COPY INTO does not re-copy files that were loaded earlier, so the destination table does not end up with duplicate records. Unless you explicitly specify FORCE = TRUE as one of the copy options, the command ignores staged data files that were already loaded into the table. To reload the data, you must either specify FORCE = TRUE or modify the file and stage it again, which generates a new checksum. (A minimal forced-reload sketch appears after the Python code below.) For details, refer to this documentation: https://docs.snowflake.com/en/sql-ref... (check the Reloading Files part).

Environment setup:
Windows 10 Bash & Linux Subsystem Setup

Installing the packages (the -t . flag installs each package into the current directory so it can be zipped into the Lambda deployment package; see the packaging sketch at the end):

python3 --version
python3 -m pip install --upgrade pip
sudo apt-get install -y libssl-dev libffi-dev
pip3 install -r https://raw.githubusercontent.com/sno... -t .
pip3 install snowflake-connector-python==2.3.10 -t .

Snowflake Code:

--drop database if required
drop database ramu;

--Create Database
create database if not exists ramu;

--use the database
use ramu;

--Table Creation
create table if not exists video
(sepal_length number(10,4),
 sepal_width number(10,4),
 petal_length number(10,4),
 petal_width number(10,4),
 species varchar(200));

--file-format creation
create or replace file format my_csv_format
  type = csv
  field_delimiter = ','
  skip_header = 1
  field_optionally_enclosed_by = '"'
  null_if = ('NULL', 'null')
  empty_field_as_null = true;

--stage creation (fill in the S3 URL and AWS credentials)
create or replace stage ramu.PUBLIC.snow_simple
  url="s3://"
  credentials=(aws_key_id='' aws_secret_key='')
  file_format = my_csv_format;

--check the data present in S3
list @ramu.PUBLIC.snow_simple;

--Execute the query
select * from video;

Python Code:

import snowflake.connector as sf

def run_query(conn, query):
    # Execute a single statement and release the cursor.
    cursor = conn.cursor()
    cursor.execute(query)
    cursor.close()

def lambda_handler(event, context):
    # Key of the S3 object that fired the trigger (see the event sketch below).
    s3_file_key = event['Records'][0]['s3']['object']['key']

    # Connection settings (credentials left blank here; an environment-variable
    # sketch follows the code). schema is kept for completeness but unused below.
    user = ""
    password = ""
    account = ""
    database = "RAMU"
    warehouse = "COMPUTE_WH"
    schema = "PUBLIC"
    role = "SYSADMIN"

    conn = sf.connect(user=user, password=password, account=account)

    # Set the session context before running the load.
    statement_1 = 'use warehouse ' + warehouse
    statement3 = 'use database ' + database
    statement4 = 'use role ' + role
    run_query(conn, statement_1)
    run_query(conn, statement3)
    run_query(conn, statement4)

    # Load all not-yet-loaded files from the stage into the target table.
    sql_query = "copy into ramu.PUBLIC.video from @ramu.PUBLIC.snow_simple FILE_FORMAT=(FORMAT_NAME=my_csv_format);"
    run_query(conn, sql_query)
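For a quick local test of the trigger wiring, it helps to see the shape of the event S3 delivers. A minimal sketch, assuming a standard S3 put notification (the bucket and key names here are hypothetical); only the fields the handler reads are shown:

# Hypothetical S3 put-event, trimmed to the fields lambda_handler reads.
sample_event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "my-bucket"},       # assumed bucket name
                "object": {"key": "input/iris.csv"},   # assumed object key
            }
        }
    ]
}

# The same extraction the handler performs on the first line of its body.
s3_file_key = sample_event["Records"][0]["s3"]["object"]["key"]
print(s3_file_key)  # prints: input/iris.csv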
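The handler extracts s3_file_key, but the COPY statement above loads the whole stage. COPY INTO also accepts a FILES option, so the load can be scoped to just the file that fired the trigger. A sketch of a drop-in replacement for sql_query inside lambda_handler, assuming the stage URL points at the bucket root so the object key matches the stage-relative path:

# Hypothetical variant: load only the file named in the triggering event,
# instead of every not-yet-loaded file in the stage.
sql_query = (
    "copy into ramu.PUBLIC.video from @ramu.PUBLIC.snow_simple "
    f"FILES=('{s3_file_key}') "
    "FILE_FORMAT=(FORMAT_NAME=my_csv_format);"
)
run_query(conn, sql_query)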
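As the note above says, files that were already loaded are skipped by default. If you deliberately need to re-ingest them, FORCE = TRUE overrides the load history. A minimal sketch, reusing run_query and conn from the handler; expect duplicate rows afterwards:

# FORCE = TRUE makes COPY INTO ignore the load history and re-ingest every
# staged file, so the target table can end up with duplicate records.
reload_query = (
    "copy into ramu.PUBLIC.video from @ramu.PUBLIC.snow_simple "
    "FILE_FORMAT=(FORMAT_NAME=my_csv_format) FORCE=TRUE;"
)
run_query(conn, reload_query)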
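The blank user/password/account strings in the handler are placeholders for demonstration. A common alternative is to read them from Lambda environment variables; a minimal sketch, with hypothetical variable names:

import os
import snowflake.connector as sf

# SNOWFLAKE_USER / SNOWFLAKE_PASSWORD / SNOWFLAKE_ACCOUNT are assumed names;
# set them in the Lambda function's configuration, not in source control.
conn = sf.connect(
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    account=os.environ["SNOWFLAKE_ACCOUNT"],
)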
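Finally, the -t . installs above put the dependencies next to the handler so everything can be zipped into one deployment package for upload to AWS. The usual route is the zip command line tool; as a Python sketch (the archive name is hypothetical):

import shutil

# Zip the current directory (handler plus the packages installed with
# `pip3 install ... -t .`) into lambda_deployment.zip for upload to AWS.
shutil.make_archive("lambda_deployment", "zip", root_dir=".")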