Become Amazon Certified with updated AWS-DEA-C01 exam questions and correct answers
A company uses Amazon S3 as a data lake. The company sets up a data warehouse by using a multi-node Amazon Redshift cluster. The company organizes the data files in the data lake based on the data source of each data file. The company loads all the data files into one table in the Redshift cluster by using a separate COPY command for each data file location. This approach takes a long time to load all the data files into the table. The company must increase the speed of the data ingestion. The company does not want to increase the cost of the process. Which solution will meet these requirements?
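For context, the Python sketch below (boto3) illustrates one way the separate per-location COPY commands could be consolidated: build a single manifest that lists every data file, then issue one COPY against that manifest so Redshift loads the files in parallel. All names here (bucket, prefixes, table, cluster, IAM role) are hypothetical placeholders, and the manifest approach is shown only as one possible option, not as the confirmed answer.

    import json
    import boto3

    # Bucket, prefixes, table, cluster, and role names below are placeholders.
    s3 = boto3.client("s3")
    redshift_data = boto3.client("redshift-data")

    BUCKET = "example-data-lake"
    SOURCE_PREFIXES = ["source_a/", "source_b/", "source_c/"]

    # Build one manifest that lists every data file across all source prefixes.
    entries = []
    paginator = s3.get_paginator("list_objects_v2")
    for prefix in SOURCE_PREFIXES:
        for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
            for obj in page.get("Contents", []):
                entries.append({"url": f"s3://{BUCKET}/{obj['Key']}", "mandatory": True})

    manifest_key = "manifests/all_sources.manifest"
    s3.put_object(Bucket=BUCKET, Key=manifest_key, Body=json.dumps({"entries": entries}))

    # Issue a single COPY against the manifest so the cluster ingests the files in parallel.
    copy_sql = f"""
        COPY sales_data
        FROM 's3://{BUCKET}/{manifest_key}'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
        MANIFEST
        FORMAT AS CSV;
    """
    redshift_data.execute_statement(
        ClusterIdentifier="example-cluster",
        Database="dev",
        DbUser="awsuser",
        Sql=copy_sql,
    )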
A sales company uses AWS Glue ETL to collect, process, and ingest data into an Amazon S3 bucket. The AWS Glue pipeline creates a new file in the S3 bucket every hour. File sizes vary from 200 KB to 300 KB. The company wants to build a sales prediction model by using data from the previous 5 years. The historic data includes 44,000 files. The company builds a second AWS Glue ETL pipeline by using the smallest worker type. The second pipeline retrieves the historic files from the S3 bucket and processes the files for downstream analysis. The company notices significant performance issues with the second ETL pipeline. The company needs to improve the performance of the second pipeline. Which solution will meet this requirement MOST cost-effectively?
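As a point of reference for how small-file overhead is often addressed in AWS Glue, the sketch below reads the historic files with S3 file grouping (the groupFiles and groupSize connection options) so many small objects are combined into fewer, larger read tasks. The bucket path, file format, and group size are assumptions for illustration only, not the confirmed answer.

    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glue_context = GlueContext(SparkContext.getOrCreate())

    # Read the ~44,000 small files with S3 file grouping enabled so that many
    # 200-300 KB objects are combined into larger Spark tasks. Path and format
    # are hypothetical.
    historic_sales = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        format="json",
        connection_options={
            "paths": ["s3://example-sales-bucket/hourly/"],
            "recurse": True,
            "groupFiles": "inPartition",
            "groupSize": "134217728",  # target roughly 128 MB per group
        },
    )

    print(f"Loaded {historic_sales.count()} records for downstream analysis")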
A data engineer has two datasets that contain sales information for multiple cities and states. One dataset is named reference, and the other dataset is named primary. The data engineer needs a solution to determine whether a specific set of values in the city and state columns of the primary dataset exactly match the same specific values in the reference dataset. The data engineer wants to use Data Quality Definition Language (DQDL) rules in an AWS Glue Data Quality job. Which rule will meet these requirements?
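For context, the sketch below shows how a DQDL ruleset might be registered with boto3 for the primary dataset. The specific rule shown (a ReferentialIntegrity check comparing the city and state columns against an aliased reference dataset) is an illustrative assumption of what such a rule can look like, not the confirmed answer; the database and table names are placeholders.

    import boto3

    glue = boto3.client("glue")

    # Hypothetical DQDL ruleset: ReferentialIntegrity compares the city and state
    # columns of the primary dataset against the aliased "reference" dataset and
    # requires a 100% match. Treat the exact rule as an assumption.
    ruleset = """
    Rules = [
        ReferentialIntegrity "city,state" "reference.{city,state}" = 1.0
    ]
    """

    # Catalog database and table names are placeholders for the primary dataset.
    glue.create_data_quality_ruleset(
        Name="primary-city-state-match",
        Description="Verify city/state values in primary exactly match the reference dataset",
        Ruleset=ruleset,
        TargetTable={
            "DatabaseName": "sales_db",
            "TableName": "primary",
        },
    )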
A company needs to load customer data that comes from a third party into an Amazon Redshift data warehouse. The company stores order data and product data in the same data warehouse. The company wants to use the combined dataset to identify potential new customers. A data engineer notices that one of the fields in the source data includes values that are in JSON format. How should the data engineer load the JSON data into the data warehouse with the LEAST effort?
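One low-effort pattern for JSON-valued fields in Amazon Redshift is the SUPER data type, sketched below with the Redshift Data API. The table, bucket, cluster, and IAM role names are placeholders, and the assumption that the third-party files are JSON documents (so COPY can use FORMAT AS JSON 'auto') is illustrative rather than confirmed.

    import boto3

    redshift_data = boto3.client("redshift-data")

    # Placeholder cluster, database, bucket, and role names. The customer_attributes
    # column uses Redshift's SUPER type so the JSON-formatted field can be stored
    # as-is and queried later with PartiQL, without pre-flattening the source data.
    statements = [
        """
        CREATE TABLE IF NOT EXISTS customers (
            customer_id BIGINT,
            customer_name VARCHAR(256),
            customer_attributes SUPER
        );
        """,
        """
        COPY customers
        FROM 's3://example-third-party-bucket/customers/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
        FORMAT AS JSON 'auto';
        """,
    ]

    for sql in statements:
        redshift_data.execute_statement(
            ClusterIdentifier="example-cluster",
            Database="dev",
            DbUser="awsuser",
            Sql=sql,
        )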