AWS Certified Big Data - Specialty
Questions 11-20 of 81
Question#11

A telecommunications company needs to predict customer churn (i.e., customers who decide to switch to a competitor). The company has historical records for each customer, including monthly consumption patterns, calls to customer service, and whether the customer ultimately quit the service. All of this data is stored in Amazon S3. The company needs to know which customers are likely to churn soon so that it can win back their loyalty.
What is the optimal approach to meet these requirements?

  • A. Use the Amazon Machine Learning service to build a binary classification model based on the dataset stored in Amazon S3. The model will be used regularly to predict the churn attribute for existing customers.
  • B. Use Amazon QuickSight to connect to the data stored in Amazon S3 and obtain the necessary business insight. Plot the churn trend graph to extrapolate churn likelihood for existing customers.
  • C. Use Amazon EMR to run Hive queries to build a profile of a churning customer. Apply the profile to existing customers to determine the likelihood of churn.
  • D. Use an Amazon Redshift cluster to COPY the data from Amazon S3. Create a user-defined function in Redshift that computes the likelihood of churn.
Answer: B
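
Option A references the Amazon Machine Learning workflow. The following is a minimal, hedged boto3 sketch of that flow; the IDs, S3 paths, and schema location are illustrative assumptions, and Amazon ML is a legacy service that is no longer open to new users.

```python
import boto3

# Sketch only: assumes the historical churn records and a JSON schema
# describing them already exist in S3, and that the legacy Amazon ML
# service is available to the account.
ml = boto3.client("machinelearning")

ml.create_data_source_from_s3(
    DataSourceId="churn-training-ds",  # illustrative ID
    DataSpec={
        "DataLocationS3": "s3://example-bucket/churn/history.csv",
        "DataSchemaLocationS3": "s3://example-bucket/churn/history.csv.schema",
    },
    ComputeStatistics=True,
)

# Binary classification: the target attribute in the schema is the
# yes/no "churned" label recorded for each historical customer.
ml.create_ml_model(
    MLModelId="churn-model",
    MLModelType="BINARY",
    TrainingDataSourceId="churn-training-ds",
)

# Batch-predict churn likelihood for current customers on a schedule.
ml.create_batch_prediction(
    BatchPredictionId="churn-predictions",
    MLModelId="churn-model",
    BatchPredictionDataSourceId="current-customers-ds",  # assumed to exist
    OutputUri="s3://example-bucket/churn/predictions/",
)
```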

Question#12

A system needs to collect on-premises application spool files into a persistent storage layer in AWS. Each spool file is 2 KB. The application generates 1 million files per hour. Each source file is automatically deleted from the local server after an hour.
What is the most cost-efficient option to meet these requirements?

  • A. Write file contents to an Amazon DynamoDB table.
  • B. Copy files to Amazon S3 Standard Storage.
  • C. Write file contents to Amazon ElastiCache.
  • D. Copy files to Amazon S3 Infrequent Access storage.
Answer: C
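
As a rough sizing check, the stated rate works out as follows. The write-capacity figure uses DynamoDB's documented rule that one standard write capacity unit covers one write per second of an item up to 1 KB; everything else is plain arithmetic from the numbers in the question.

```python
# Back-of-the-envelope sizing for 1 million 2 KB files per hour.
files_per_hour = 1_000_000
file_size_kb = 2

files_per_second = files_per_hour / 3600                         # ~278 writes/sec
data_per_hour_gb = files_per_hour * file_size_kb / 1024 / 1024   # ~1.9 GB/hour

# DynamoDB standard writes: 1 WCU per 1 KB (rounded up), so a 2 KB item
# consumes 2 WCUs per write.
wcus_needed = files_per_second * 2                                # ~556 WCU sustained

print(f"{files_per_second:.0f} writes/sec, "
      f"{data_per_hour_gb:.1f} GB/hour, ~{wcus_needed:.0f} WCU")
```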

Question#13

An administrator receives about 100 files per hour into Amazon S3 and will be loading the files into Amazon Redshift. Customers who analyze the data within Redshift gain significant value when they receive data as quickly as possible. The customers have agreed to a maximum loading interval of 5 minutes.
Which loading approach should the administrator use to meet this objective?

  • A. Load each file as it arrives, because getting data into the cluster as quickly as possible is the priority.
  • B. Load the cluster as soon as the administrator has the same number of files as nodes in the cluster.
  • C. Load the cluster when the administrator has an even multiple of files relative to the cluster slice count, or 5 minutes have elapsed, whichever comes first.
  • D. Load the cluster when the number of files is less than the Cluster Slice Count.
Answer: C
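
A hedged sketch of the micro-batching approach in option C: accumulate incoming S3 keys, and once the batch reaches a multiple of the cluster's slice count (or the 5-minute window elapses), write a COPY manifest and load everything in one pass. The bucket names, table, slice count, and IAM role are placeholders.

```python
import json

import boto3
import psycopg2  # any Redshift-compatible PostgreSQL driver works

SLICE_COUNT = 4  # assumed: total number of slices in the target cluster
s3 = boto3.client("s3")

def load_batch(keys, conn):
    """Write a COPY manifest for the pending files and load them in one pass."""
    manifest = {
        "entries": [
            {"url": f"s3://example-incoming/{key}", "mandatory": True}
            for key in keys
        ]
    }
    s3.put_object(
        Bucket="example-staging",
        Key="manifests/batch.manifest",
        Body=json.dumps(manifest),
    )
    with conn.cursor() as cur:
        cur.execute(
            "COPY analytics.events "
            "FROM 's3://example-staging/manifests/batch.manifest' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
            "MANIFEST;"
        )
    conn.commit()

# The caller accumulates keys and invokes load_batch(keys, conn) whenever
# len(keys) % SLICE_COUNT == 0 or the 5-minute timer fires.
```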

Question#14

An enterprise customer is migrating to Redshift and is considering using dense storage nodes in its Redshift cluster. The customer wants to migrate 50 TB of data. The customer's query patterns involve performing many joins with thousands of rows.
The customer needs to know how many nodes are needed in its target Redshift cluster. The customer has a limited budget and needs to avoid performing tests unless absolutely needed.
Which approach should this customer use?

  • A. Start with many small nodes.
  • B. Start with fewer large nodes.
  • C. Have two separate clusters with a mix of a small and large nodes.
  • D. Insist on performing multiple tests to determine the optimal configuration.
Answer: A
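
For a sense of scale, a rough node-count estimate for 50 TB on dense storage (ds2) nodes is shown below. The per-node capacities (about 2 TB for ds2.xlarge and 16 TB for ds2.8xlarge) are approximate published figures and should be treated as assumptions for illustration only.

```python
import math

# Rough illustration only: approximate dense-storage capacity per node.
DATA_TB = 50
DS2_XLARGE_TB = 2      # assumed ~2 TB HDD per ds2.xlarge node
DS2_8XLARGE_TB = 16    # assumed ~16 TB HDD per ds2.8xlarge node

small_nodes = math.ceil(DATA_TB / DS2_XLARGE_TB)    # ~25 small nodes
large_nodes = math.ceil(DATA_TB / DS2_8XLARGE_TB)   # ~4 large nodes

print(f"ds2.xlarge: {small_nodes} nodes, ds2.8xlarge: {large_nodes} nodes")
```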

Question#15

A company is centralizing a large number of unencrypted small files from multiple Amazon S3 buckets. The company needs to verify that the files contain the same data after centralization.
Which method meets the requirements?

  • A. Compare the S3 ETags from the source and destination objects.
  • B. Call the S3 CompareObjects API for the source and destination objects.
  • C. Place a HEAD request against the source and destination objects comparing SIG v4.
  • D. Compare the size of the source and destination objects.
Answer: A
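
A minimal boto3 sketch of option A, with placeholder bucket and key names. Note that an ETag is a plain MD5 digest only for objects uploaded in a single part without SSE-KMS encryption; multipart uploads produce composite ETags, which is why the question stresses small, unencrypted files.

```python
import boto3

s3 = boto3.client("s3")

def etags_match(src_bucket, src_key, dst_bucket, dst_key):
    """Compare the ETags of a source and destination object.

    For small objects uploaded in a single part without SSE-KMS, the ETag
    is the MD5 of the object data, so matching ETags imply matching content.
    """
    src = s3.head_object(Bucket=src_bucket, Key=src_key)
    dst = s3.head_object(Bucket=dst_bucket, Key=dst_key)
    return src["ETag"] == dst["ETag"]

# Example (hypothetical names):
# etags_match("source-bucket", "file.csv", "central-bucket", "file.csv")
```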

Question#16

An online gaming company uses DynamoDB to store user activity logs and is experiencing throttled writes on the company's DynamoDB table. The company is NOT consuming close to the provisioned capacity. The table contains a large number of items and is partitioned on user and sorted by date. The table is 200 GB and is currently provisioned at 10K WCU and 20K RCU.
Which two additional pieces of information are required to determine the cause of the throttling? (Choose two.)

  • A. The structure of any GSIs that have been defined on the table
  • B. CloudWatch data showing consumed and provisioned write capacity when writes are being throttled
  • C. Application-level metrics showing the average item size and peak update rates for each attribute
  • D. The structure of any LSIs that have been defined on the table
  • E. The maximum historical WCU and RCU for the table
Answer: AD
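
Options A and D refer to the table's index structures, which can be inspected with DescribeTable. A minimal boto3 sketch follows; the table name is a placeholder.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Inspect the index structures referenced in options A and D.
# "activity-logs" is a placeholder table name.
desc = dynamodb.describe_table(TableName="activity-logs")["Table"]

for gsi in desc.get("GlobalSecondaryIndexes", []):
    # A GSI has its own partition key and its own provisioned write capacity,
    # so a hot or under-provisioned GSI can throttle writes to the base table.
    print("GSI:", gsi["IndexName"], gsi["KeySchema"],
          gsi.get("ProvisionedThroughput", {}).get("WriteCapacityUnits"))

for lsi in desc.get("LocalSecondaryIndexes", []):
    # LSIs share the table's partition key and capacity, but add per-item
    # write amplification and a 10 GB size limit per partition key value.
    print("LSI:", lsi["IndexName"], lsi["KeySchema"])
```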

Question#17

A city has been collecting data on its public bicycle share program for the past three years. The 5 PB dataset currently resides on Amazon S3. The data contains the following data points:
  • Bicycle origination points
  • Bicycle destination points
  • Mileage between the points
  • Number of bicycle slots available at the station (which is variable based on the station location)
  • Number of slots available and taken at a given time
The program has received additional funds to increase the number of bicycle stations available. All data is regularly archived to Amazon Glacier.
The new bicycle stations must be located to provide the most riders access to bicycles.
How should this task be performed?

  • A. Move the data from Amazon S3 into Amazon EBS-backed volumes and use an EC2-based Hadoop cluster with spot instances to run a Spark job that performs a stochastic gradient descent optimization.
  • B. Use the Amazon Redshift COPY command to move the data from Amazon S3 into Redshift and perform a SQL query that outputs the most popular bicycle stations.
  • C. Persist the data on Amazon S3 and use a transient EMR cluster with spot instances to run a Spark streaming job that will move the data into Amazon Kinesis.
  • D. Keep the data on Amazon S3 and use an Amazon EMR-based Hadoop cluster with spot instances to run a Spark job that performs a stochastic gradient descent optimization over EMRFS.
Answer: B
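
Option B comes down to loading the trip records into Redshift and ranking station popularity in SQL. A hedged sketch is below; the cluster endpoint, credentials, table, and column names are illustrative assumptions, and the trip history is assumed to have already been loaded with a COPY command.

```python
import psycopg2  # any Redshift-compatible PostgreSQL driver

# Placeholder connection details, table, and column names.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="bikeshare",
    user="admin",
    password="example-password",
)

with conn.cursor() as cur:
    # Rank origination stations by ride volume to see where demand is highest.
    cur.execute(
        "SELECT origin_station, COUNT(*) AS rides "
        "FROM trips "
        "GROUP BY origin_station "
        "ORDER BY rides DESC "
        "LIMIT 20;"
    )
    for station, rides in cur.fetchall():
        print(station, rides)

conn.close()
```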

Question#18

An administrator tries to use the Amazon Machine Learning service to classify social media posts that mention the administrator's company into posts that require a response and posts that do not. The training dataset of 10,000 posts contains the details of each post, including the timestamp, author, and full text of the post. The administrator is missing the target labels that are required for training.
Which Amazon Machine Learning model is the most appropriate for the task?

  • A. Binary classification model, where the target class is the require-response post
  • B. Binary classification model, where the two classes are the require-response post and does-not-require-response
  • C. Multi-class prediction model, with two classes: require-response post and does-not-require-response
  • D. Regression model where the predicted value is the probability that the post requires a response
Answer: A

Question#19

A medical record filing system for a government medical fund uses an Amazon S3 bucket to archive documents related to patients. Every patient visit to a physician creates a new file, which can add up to millions of files each month. Collection of these files from each physician is handled via a batch process that runs every night using AWS Data Pipeline. This is sensitive data, so the data and any associated metadata must be encrypted at rest.
Auditors review some files on a quarterly basis to see whether the records are maintained according to regulations. Auditors must be able to locate any physical file in the S3 bucket for a given date, patient, or physician. Auditors currently spend a significant amount of time locating such files.
What is the most cost- and time-efficient collection methodology in this situation?

  • A. Use Amazon Kinesis to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
  • B. Use Amazon API Gateway to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
  • C. Use Amazon S3 event notification to populate an Amazon DynamoDB table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.
  • D. Use Amazon S3 event notification to populate an Amazon Redshift table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.
Answer: A
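
Options C and D both hinge on S3 event notifications populating a metadata index that auditors can query by date, patient, or physician. A minimal Lambda handler sketch of that mechanism follows; the table name and the object-key naming convention are assumptions, not part of the question.

```python
import urllib.parse

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("medical-file-index")  # placeholder table name

def handler(event, context):
    """S3 ObjectCreated notification -> metadata row for auditor lookups.

    Assumes keys follow a convention like physician-id/patient-id/2017-06-14.pdf;
    adjust the parsing to whatever naming scheme is actually used.
    """
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        physician, patient, filename = key.split("/", 2)
        visit_date = filename.rsplit(".", 1)[0]

        table.put_item(Item={
            "physician": physician,   # partition/sort key design is illustrative
            "visit_date": visit_date,
            "patient": patient,
            "s3_uri": f"s3://{bucket}/{key}",
        })
```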

Question#20

A clinical trial will rely on medical sensors to remotely assess patient health. Each physician who participates in the trial requires visual reports each morning. The reports are built from aggregations of all the sensor data taken each minute.
What is the most cost-effective solution for creating this visualization each day?

  • A. Use Kinesis Aggregators Library to generate reports for reviewing the patient sensor data and generate a QuickSight visualization on the new data each morning for the physician to review.
  • B. Use a transient EMR cluster that shuts down after use to aggregate the patient sensor data each night and generate a QuickSight visualization on the new data each morning for the physician to review.
  • C. Use Spark Streaming on EMR to aggregate the patient sensor data every 15 minutes and generate a QuickSight visualization on the new data each morning for the physician to review.
  • D. Use an EMR cluster to aggregate the patient sensor data each night and provide Zeppelin notebooks that look at the new data residing on the cluster each morning for the physician to review.
Answer: D
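
Options B and D both rely on a nightly EMR aggregation, and option B specifically calls for a transient cluster. Below is a hedged boto3 sketch of launching such a cluster so that it terminates itself once its step completes; the cluster name, release label, instance types, and script location are placeholders.

```python
import boto3

emr = boto3.client("emr")

# Launch a transient cluster: with KeepJobFlowAliveWhenNoSteps=False the
# cluster shuts down automatically once the aggregation step finishes.
emr.run_job_flow(
    Name="nightly-sensor-aggregation",   # placeholder name
    ReleaseLabel="emr-5.36.0",           # assumed release
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[{
        "Name": "aggregate-sensor-readings",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/jobs/aggregate_sensors.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```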
