AWS Certified Big Data - Specialty
Question#1

A data engineer in a manufacturing company is designing a data processing platform that receives a large volume of unstructured data. The data engineer must populate a well-structured star schema in Amazon Redshift.
What is the most efficient architecture strategy for this purpose?

  • A. Transform the unstructured data using Amazon EMR and generate CSV data. COPY the CSV data into the analysis schema within Redshift.
  • B. Load the unstructured data into Redshift, and use string parsing functions to extract structured data for inserting into the analysis schema.
  • C. When the data is saved to Amazon S3, use S3 Event Notifications and AWS Lambda to transform the file contents. Insert the data into the analysis schema on Redshift.
  • D. Normalize the data using an AWS Marketplace ETL tool, persist the results to Amazon S3, and use AWS Lambda to INSERT the data into Redshift.
Answer: A
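
For context on answer A, the load step is a standard Redshift COPY of the EMR-generated CSV files from Amazon S3. A minimal sketch using the Amazon Redshift Data API through boto3 (the cluster, database, user, schema, bucket, and IAM role identifiers are hypothetical, not from the question):

    import boto3

    client = boto3.client("redshift-data")

    copy_sql = """
        COPY analysis.fact_production
        FROM 's3://example-bucket/emr-output/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS CSV;
    """

    # COPY loads the EMR-generated CSV files from S3 in parallel
    # across the cluster's slices.
    response = client.execute_statement(
        ClusterIdentifier="example-cluster",
        Database="warehouse",
        DbUser="etl_user",
        Sql=copy_sql,
    )
    print(response["Id"])  # poll with describe_statement to track completion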

Question#2

A new algorithm has been written in Python to identify SPAM e-mails. The algorithm analyzes the free text contained within a sample set of 1 million e-mails stored on Amazon S3. The algorithm must be scaled across a production dataset of 5 PB, which also resides in Amazon S3 storage.
Which AWS service strategy is best for this use case?

  • A. Copy the data into Amazon ElastiCache to perform text analysis on the in-memory data and export the results of the model into Amazon Machine Learning.
  • B. Use Amazon EMR to parallelize the text analysis tasks across the cluster using a streaming program step.
  • C. Use Amazon Elasticsearch Service to store the text and then use the Python Elasticsearch Client to run analysis against the text index.
  • D. Initiate a Python job from AWS Data Pipeline to run directly against the Amazon S3 text files.
Answer: C
Reference: https://aws.amazon.com/blogs/database/indexing-metadata-in-amazon-elasticsearch-service-using-aws-lambda-and-python/
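
A minimal sketch of the technique named in answer C, querying a full-text index with the Python Elasticsearch Client (the domain endpoint, index name, and query terms are hypothetical):

    from elasticsearch import Elasticsearch

    # Hypothetical Amazon Elasticsearch Service domain endpoint and index.
    es = Elasticsearch(
        "https://search-example-domain.us-east-1.es.amazonaws.com:443"
    )

    # Full-text query against the indexed e-mail bodies; terms are illustrative.
    result = es.search(
        index="emails",
        body={"query": {"match": {"body": "wire transfer urgent"}}},
        size=10,
    )

    for hit in result["hits"]["hits"]:
        print(hit["_id"], hit["_score"])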

Question#3

A data engineer chooses Amazon DynamoDB as a data store for a regulated application. This application must be submitted to regulators for review. The data engineer needs to provide a control framework that lists the security controls, from the process for adding new users down to the physical controls of the data center, including items such as security guards and cameras.
How should this control mapping be achieved using AWS?

  • A. Request AWS third-party audit reports and/or the AWS quality addendum and map the AWS responsibilities to the controls that must be provided.
  • B. Request temporary auditor access to an AWS data center to verify the control mapping.
  • C. Request relevant SLAs and security guidelines for Amazon DynamoDB and define these guidelines within the applications architecture to map to the control framework.
  • D. Request Amazon DynamoDB system architecture designs to determine how to map the AWS responsibilities to the controls that must be provided.
Answer: A

Question#4

An administrator needs to design a distribution strategy for a star schema in a Redshift cluster. The administrator needs to determine the optimal distribution style for the tables in the Redshift schema.
In which three circumstances would choosing Key-based distribution be most appropriate? (Select three.)

  • A. When the administrator needs to optimize a large, slowly changing dimension table.
  • B. When the administrator needs to reduce cross-node traffic.
  • C. When the administrator needs to optimize the fact table for parity with the number of slices.
  • D. When the administrator needs to balance data distribution with data collocation.
  • E. When the administrator needs to take advantage of data locality on a local node for joins and aggregates.
Answer: ACD
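
To illustrate key-based distribution: giving the fact table and a large dimension table the same DISTKEY collocates matching rows on the same slice, so joins and aggregates avoid cross-node traffic. A minimal sketch via the Redshift Data API (table and column names are hypothetical):

    import boto3

    client = boto3.client("redshift-data")

    # Fact and dimension share a DISTKEY, so rows that join together
    # are stored on the same slice.
    fact_ddl = """
        CREATE TABLE sales_fact (
            customer_id INTEGER,
            amount      DECIMAL(10,2)
        ) DISTSTYLE KEY DISTKEY (customer_id);
    """
    dim_ddl = """
        CREATE TABLE customer_dim (
            customer_id INTEGER,
            name        VARCHAR(100)
        ) DISTSTYLE KEY DISTKEY (customer_id);
    """

    client.batch_execute_statement(
        ClusterIdentifier="example-cluster",
        Database="warehouse",
        DbUser="admin_user",
        Sqls=[fact_ddl, dim_ddl],
    )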

Question#5

Managers in a company need access to the human resources database that runs on Amazon Redshift to run reports about their employees. Managers must only see information about their direct reports.
Which technique should be used to address this requirement with Amazon Redshift?

  • A. Define an IAM group for each manager with each employee as an IAM user in that group, and use that to limit the access.
  • B. Use Amazon Redshift snapshot to create one cluster per manager. Allow the manager to access only their designated clusters.
  • C. Define a key for each manager in AWS KMS and encrypt the data for their employees with their private keys.
  • D. Define a view that uses the employee’s manager name to filter the records based on current user names.
Answer: A
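
For reference, option D describes a common Redshift row-filtering pattern: a view that compares a manager column against current_user. A minimal sketch of that technique only, via the Redshift Data API (schema, table, and column names are hypothetical):

    import boto3

    client = boto3.client("redshift-data")

    # The view filters rows by comparing a manager column to the
    # logged-in database user (current_user).
    view_sql = """
        CREATE VIEW hr.my_direct_reports AS
        SELECT employee_id, employee_name, salary
        FROM hr.employees
        WHERE manager_username = current_user;
    """

    client.execute_statement(
        ClusterIdentifier="example-cluster",
        Database="hr_db",
        DbUser="admin_user",
        Sql=view_sql,
    )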

Question#6

A company is building a new application in AWS. The architect needs to design a system to collect application log events. The design should be a repeatable pattern that minimizes data loss if an application instance fails, and keeps a durable copy of the log data for at least 30 days.
What is the simplest architecture that will allow the architect to analyze the logs?

  • A. Write them directly to a Kinesis Firehose. Configure Kinesis Firehose to load the events into an Amazon Redshift cluster for analysis.
  • B. Write them to a file on Amazon Simple Storage Service (S3). Write an AWS Lambda function that runs in response to the S3 event to load the events into Amazon Elasticsearch Service for analysis.
  • C. Write them to the local disk and configure the Amazon CloudWatch Logs agent to load the data into CloudWatch Logs and subsequently into Amazon Elasticsearch Service.
  • D. Write them to CloudWatch Logs and use an AWS Lambda function to load them into HDFS on an Amazon Elastic MapReduce (EMR) cluster for analysis.
Answer: B
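
A minimal sketch of answer B's Lambda function, assuming the elasticsearch Python client; the S3 bucket keeps the durable 30-day copy (via a lifecycle rule), and the domain endpoint and index name are hypothetical:

    import urllib.parse

    import boto3
    from elasticsearch import Elasticsearch, helpers

    s3 = boto3.client("s3")
    # Hypothetical Amazon Elasticsearch Service domain endpoint.
    es = Elasticsearch(
        "https://search-log-domain.us-east-1.es.amazonaws.com:443"
    )

    def handler(event, context):
        """Fires on s3:ObjectCreated:*; indexes each log line for analysis."""
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            actions = (
                {"_index": "app-logs", "_source": {"message": line}}
                for line in body.decode("utf-8").splitlines()
                if line
            )
            helpers.bulk(es, actions)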

Question#7

An organization uses a custom MapReduce application to build monthly reports based on many small data files in an Amazon S3 bucket. The data is submitted from various business units on a frequent but unpredictable schedule. As the dataset continues to grow, it becomes increasingly difficult to process all of the data in one day. The organization has scaled up its Amazon EMR cluster, but other optimizations could still improve performance.
The organization needs to improve performance with minimal changes to existing processes and applications.
What action should the organization take?

  • A. Use Amazon S3 Event Notifications and AWS Lambda to create a quick search file index in DynamoDB.
  • B. Add Spark to the Amazon EMR cluster and utilize Resilient Distributed Datasets in-memory.
  • C. Use Amazon S3 Event Notifications and AWS Lambda to index each file into an Amazon Elasticsearch Service cluster.
  • D. Schedule a daily AWS Data Pipeline process that aggregates content into larger files using S3DistCp.
  • E. Have business units submit data via Amazon Kinesis Firehose to aggregate data hourly into Amazon S3.
Answer: B
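
A minimal PySpark sketch of answer B: read the many small S3 files into a Resilient Distributed Dataset once, cache it in memory, and run multiple aggregations against the cached data (bucket, prefix, and record layout are hypothetical):

    from pyspark import SparkContext

    sc = SparkContext(appName="monthly-report")

    # Read the small S3 files once, then cache the RDD in memory so
    # each subsequent aggregation reuses it instead of re-reading S3.
    events = sc.textFile("s3://example-bucket/business-units/*/*.csv").cache()

    total = events.count()
    by_unit = (
        events.map(lambda line: (line.split(",")[0], 1))
        .reduceByKey(lambda a, b: a + b)
        .collect()
    )
    print(total, by_unit)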

Question#8

An administrator is processing events in near real-time using Kinesis streams and Lambda. Lambda intermittently fails to process batches from one of the shards due to a 5-minute time limit.
What is a possible solution for this problem?

  • A. Add more Lambda functions to improve concurrent batch processing.
  • B. Reduce the batch size that Lambda is reading from the stream.
  • C. Ignore and skip events that are older than 5 minutes and put them in a Dead Letter Queue (DLQ).
  • D. Configure Lambda to read from fewer shards in parallel.
Answer: D
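
Options B and D both come down to reconfiguring how Lambda reads from the stream, which is controlled by the event source mapping. A minimal boto3 sketch, shown here reducing the batch size (the mapping UUID and value are hypothetical):

    import boto3

    lambda_client = boto3.client("lambda")

    # Smaller batches give each invocation less work, helping it finish
    # inside the time limit. The mapping UUID is hypothetical.
    lambda_client.update_event_source_mapping(
        UUID="11111111-2222-3333-4444-555555555555",
        BatchSize=50,
    )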

Question#9

An organization uses Amazon Elastic MapReduce (EMR) to process a series of extract-transform-load (ETL) steps that run in sequence. The output of each step must be fully processed in subsequent steps but will not be retained.
Which of the following techniques will meet this requirement most efficiently?

  • A. Use the EMR File System (EMRFS) to store the outputs from each step as objects in Amazon Simple Storage Service (S3).
  • B. Use the s3n URI to store the data to be processed as objects in Amazon S3.
  • C. Define the ETL steps as separate AWS Data Pipeline activities.
  • D. Load the data to be processed into HDFS, and then write the final output to Amazon S3.
Answer: B
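
A minimal boto3 sketch of answer B: an EMR streaming step whose input and output are s3n:// URIs, so each step's intermediate objects live in S3 rather than HDFS (the cluster ID, script locations, and paths are hypothetical):

    import boto3

    emr = boto3.client("emr")

    # A streaming step reading from and writing to s3n:// URIs.
    # All names and paths below are hypothetical.
    emr.add_job_flow_steps(
        JobFlowId="j-EXAMPLE12345",
        Steps=[{
            "Name": "transform-step-1",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "hadoop-streaming",
                    "-files", "s3://example-bucket/scripts/mapper.py,"
                              "s3://example-bucket/scripts/reducer.py",
                    "-input", "s3n://example-bucket/step0-output/",
                    "-output", "s3n://example-bucket/step1-output/",
                    "-mapper", "mapper.py",
                    "-reducer", "reducer.py",
                ],
            },
        }],
    )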

Question#10

The department of transportation for a major metropolitan area has placed sensors on roads at key locations around the city. The goal is to analyze the flow of traffic and notifications from emergency services to identify potential issues and to help planners correct trouble spots.
A data engineer needs a scalable and fault-tolerant solution that allows planners to respond to issues within 30 seconds of their occurrence.
Which solution should the data engineer choose?

  • A. Collect the sensor data with Amazon Kinesis Firehose and store it in Amazon Redshift for analysis. Collect emergency services events with Amazon SQS and store them in Amazon DynamoDB for analysis.
  • B. Collect the sensor data with Amazon SQS and store in Amazon DynamoDB for analysis. Collect emergency services events with Amazon Kinesis Firehose and store in Amazon Redshift for analysis.
  • C. Collect both sensor data and emergency services events with Amazon Kinesis Streams and use DynamoDB for analysis.
  • D. Collect both sensor data and emergency services events with Amazon Kinesis Firehose and use Amazon Redshift for analysis.
Answer: A
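
A minimal boto3 sketch of the two ingestion paths in answer A (the delivery stream name and queue URL are hypothetical):

    import json

    import boto3

    firehose = boto3.client("firehose")
    sqs = boto3.client("sqs")

    def publish_sensor_reading(reading: dict) -> None:
        """Sensor data goes to Kinesis Firehose, which delivers to Redshift."""
        firehose.put_record(
            DeliveryStreamName="traffic-sensors",  # hypothetical stream
            Record={"Data": (json.dumps(reading) + "\n").encode("utf-8")},
        )

    def publish_emergency_event(event: dict) -> None:
        """Emergency-services events go to SQS, then into DynamoDB."""
        sqs.send_message(
            QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/emergency-events",
            MessageBody=json.dumps(event),
        )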
