AWS Certified Machine Learning - Specialty: AWS Certified Machine Learning - Specialty (MLS-C01)
Question#11

A manufacturing company has a large set of labeled historical sales data. The manufacturer would like to predict how many units of a particular part should be produced each quarter.
Which machine learning approach should be used to solve this problem?

  • A. Logistic regression
  • B. Random Cut Forest (RCF)
  • C. Principal component analysis (PCA)
  • D. Linear regression
Answer: D
Predicting how many units to produce is a continuous-value prediction on labeled data, which is a regression problem, so linear regression fits. Random Cut Forest is an unsupervised anomaly-detection algorithm, logistic regression is for classification, and PCA is for dimensionality reduction.
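Framing the forecast as a linear regression can be sketched with scikit-learn. The feature layout and all numbers below are invented for illustration only:

```python
from sklearn.linear_model import LinearRegression
import numpy as np

# Hypothetical historical sales data: each row is
# [quarter index, units sold that quarter]; the target is
# the number of units needed the following quarter.
X = np.array([[1, 500], [2, 520], [3, 560], [4, 600],
              [5, 640], [6, 660], [7, 700], [8, 740]])
y = np.array([520, 560, 600, 640, 660, 700, 740, 780])

model = LinearRegression().fit(X, y)

# Forecast production for the next quarter.
pred = model.predict([[9, 780]])
```

Because the target is a count of units rather than a class label, a regression model like this is the natural fit.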

Question#12

A financial services company is building a robust serverless data lake on Amazon S3. The data lake should be flexible and meet the following requirements:
✑ Support querying old and new data on Amazon S3 through Amazon Athena and Amazon Redshift Spectrum
✑ Support event-driven ETL pipelines
✑ Provide a quick and easy way to understand metadata
Which approach meets these requirements?

  • A. Use an AWS Glue crawler to crawl S3 data, an AWS Lambda function to trigger an AWS Glue ETL job, and an AWS Glue Data catalog to search and discover metadata.
  • B. Use an AWS Glue crawler to crawl S3 data, an AWS Lambda function to trigger an AWS Batch job, and an external Apache Hive metastore to search and discover metadata.
  • C. Use an AWS Glue crawler to crawl S3 data, an Amazon CloudWatch alarm to trigger an AWS Batch job, and an AWS Glue Data Catalog to search and discover metadata.
  • D. Use an AWS Glue crawler to crawl S3 data, an Amazon CloudWatch alarm to trigger an AWS Glue ETL job, and an external Apache Hive metastore to search and discover metadata.
Answer: A

Question#13

A company's Machine Learning Specialist needs to improve the training speed of a time-series forecasting model using TensorFlow. The training is currently implemented on a single-GPU machine and takes approximately 23 hours to complete. The training needs to be run daily.
The model accuracy is acceptable, but the company anticipates a continuous increase in the size of the training data and a need to update the model on an hourly, rather than a daily, basis. The company also wants to minimize coding effort and infrastructure changes.
What should the Machine Learning Specialist do to the training solution to allow it to scale for future demand?

  • A. Do not change the TensorFlow code. Change the machine to one with a more powerful GPU to speed up the training.
  • B. Change the TensorFlow code to implement a Horovod distributed framework supported by Amazon SageMaker. Parallelize the training to as many machines as needed to achieve the business goals.
  • C. Switch to using a built-in Amazon SageMaker DeepAR model. Parallelize the training to as many machines as needed to achieve the business goals.
  • D. Move the training to Amazon EMR and distribute the workload to as many machines as needed to achieve the business goals.
Answer: B

Question#14

Which of the following metrics should a Machine Learning Specialist generally use to compare/evaluate machine learning classification models against each other?

  • A. Recall
  • B. Misclassification rate
  • C. Mean absolute percentage error (MAPE)
  • D. Area Under the ROC Curve (AUC)
Answer: D
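AUC is threshold-independent, which is what makes it suitable for comparing classifiers head-to-head. A minimal comparison with scikit-learn, using invented labels and scores for two hypothetical classifiers:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical ground-truth labels and predicted probabilities
# from two classifiers being compared.
y_true = [0, 0, 1, 1, 0, 1, 1, 0]
scores_a = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.3]
scores_b = [0.5, 0.6, 0.2, 0.4, 0.7, 0.3, 0.8, 0.1]

# AUC summarizes ranking quality across all thresholds:
# the higher value indicates the better-ranking classifier.
auc_a = roc_auc_score(y_true, scores_a)
auc_b = roc_auc_score(y_true, scores_b)
```

Metrics such as recall or misclassification rate depend on a chosen decision threshold, and MAPE is a regression metric, so neither compares classifiers as directly.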

Question#15

A company is running a machine learning prediction service that generates 100 TB of predictions every day. A Machine Learning Specialist must generate a visualization of the daily precision-recall curve from the predictions, and forward a read-only version to the Business team.
Which solution requires the LEAST coding effort?

  • A. Run a daily Amazon EMR workflow to generate precision-recall data, and save the results in Amazon S3. Give the Business team read-only access to S3.
  • B. Generate daily precision-recall data in Amazon QuickSight, and publish the results in a dashboard shared with the Business team.
  • C. Run a daily Amazon EMR workflow to generate precision-recall data, and save the results in Amazon S3. Visualize the arrays in Amazon QuickSight, and publish them in a dashboard shared with the Business team.
  • D. Generate daily precision-recall data in Amazon ES, and publish the results in a dashboard shared with the Business team.
Answer: C
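The precision-recall arrays that the EMR workflow would write to S3 (and QuickSight would then visualize) can be computed with scikit-learn. The labels and scores below are invented stand-ins for the aggregated daily predictions:

```python
from sklearn.metrics import precision_recall_curve

# Hypothetical daily labels and model scores; in practice these
# would be aggregated from the day's predictions before being
# saved to S3 and plotted in QuickSight.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
scores = [0.2, 0.8, 0.6, 0.4, 0.9, 0.1, 0.7, 0.3]

# One (precision, recall) pair per threshold; plotting
# recall against precision gives the daily curve.
precision, recall, thresholds = precision_recall_curve(y_true, scores)
```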

Question#16

A Machine Learning Specialist is preparing data for training on Amazon SageMaker. The Specialist is using one of the SageMaker built-in algorithms for the training. The dataset is stored in .CSV format and is transformed into a numpy.array, which appears to be negatively affecting the speed of the training.
What should the Specialist do to optimize the data for training on SageMaker?

  • A. Use the SageMaker batch transform feature to transform the training data into a DataFrame.
  • B. Use AWS Glue to compress the data into the Apache Parquet format.
  • C. Transform the dataset into the RecordIO protobuf format.
  • D. Use the SageMaker hyperparameter optimization feature to automatically optimize the data.
Answer: C

Question#17

A Machine Learning Specialist is required to build a supervised image-recognition model to identify a cat. The ML Specialist performs some tests and records the following results for a neural network-based image classifier:
Total number of images available = 1,000
Test set images = 100 (constant test set)
The ML Specialist notices that, in over 75% of the misclassified images, the cats were held upside down by their owners.
Which techniques can be used by the ML Specialist to improve this specific test error?

  • A. Increase the training data by adding variation in rotation for training images.
  • B. Increase the number of epochs for model training
  • C. Increase the number of layers for the neural network.
  • D. Increase the dropout rate for the second-to-last layer.
Answer: A
The errors are concentrated in upside-down cats, so augmenting the training set with rotated images directly targets that failure mode. Adding epochs, layers, or dropout cannot teach the model orientations it has never seen in training.
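The rotation-augmentation idea from option A can be sketched with NumPy; real pipelines would typically use a framework's augmentation utilities, and the tiny array below stands in for an actual image:

```python
import numpy as np

def augment_with_rotations(images):
    """Return the original images plus their 90/180/270-degree
    rotations, so the classifier also sees cats in unusual
    orientations (e.g. held upside down)."""
    rotated = [np.rot90(img, k) for img in images for k in (1, 2, 3)]
    return list(images) + rotated

# Stand-in for a single 3x3 grayscale image.
batch = [np.arange(9).reshape(3, 3)]
augmented = augment_with_rotations(batch)
```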

Question#18

A Machine Learning Specialist needs to be able to ingest streaming data and store it in Apache Parquet files for exploration and analysis.
Which of the following services would both ingest and store this data in the correct format?

  • A. AWS DMS
  • B. Amazon Kinesis Data Streams
  • C. Amazon Kinesis Data Firehose
  • D. Amazon Kinesis Data Analytics
Answer: C

Question#19

A data scientist has explored and sanitized a dataset in preparation for the modeling phase of a supervised learning task. The statistical dispersion can vary widely between features, sometimes by several orders of magnitude. Before moving on to the modeling phase, the data scientist wants to ensure that the prediction performance on the production data is as accurate as possible.
Which sequence of steps should the data scientist take to meet these requirements?

  • A. Apply random sampling to the dataset. Then split the dataset into training, validation, and test sets.
  • B. Split the dataset into training, validation, and test sets. Then rescale the training set and apply the same scaling to the validation and test sets.
  • C. Rescale the dataset. Then split the dataset into training, validation, and test sets.
  • D. Split the dataset into training, validation, and test sets. Then rescale the training set, the validation set, and the test set independently.
Answer: B
Scaling parameters must be learned from the training set only and then reused unchanged on the validation and test sets. Rescaling before splitting leaks test-set statistics into training, and rescaling each split independently puts the splits on inconsistent scales.
Reference:
https://www.kdnuggets.com/2018/12/six-steps-master-machine-learning-data-preparation.html
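The split-then-scale sequence from option B looks like this with scikit-learn, using synthetic data whose two features differ by orders of magnitude, as in the question:

```python
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import numpy as np

# Synthetic data: two features with very different dispersion.
X = np.random.RandomState(0).normal(loc=[0, 1000],
                                    scale=[1, 250],
                                    size=(100, 2))

# Split first, so no test-set statistics influence the scaler.
X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)  # statistics from training data only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)     # same statistics reused: no leakage
```

At inference time the production data is transformed with these same stored training-set statistics, which is what keeps the measured performance honest.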

Question#20

A Machine Learning Specialist is assigned a TensorFlow project using Amazon SageMaker for training, and needs to continue working for an extended period with no Wi-Fi access.
Which approach should the Specialist use to continue working?

  • A. Install Python 3 and boto3 on their laptop and continue the code development using that environment.
  • B. Download the TensorFlow Docker container used in Amazon SageMaker from GitHub to their local environment, and use the Amazon SageMaker Python SDK to test the code.
  • C. Download TensorFlow from tensorflow.org to emulate the TensorFlow kernel in the SageMaker environment.
  • D. Download the SageMaker notebook to their local environment, then install Jupyter Notebooks on their laptop and continue the development in a local notebook.
Answer: B
