AWS Certified Machine Learning - Specialty (MLS-C01)
Page 6 of 21 (Questions 51-60 of 203)
Question#51

A manufacturer of car engines collects data from cars as they are being driven. The data collected includes timestamp, engine temperature, rotations per minute (RPM), and other sensor readings. The company wants to predict when an engine is going to have a problem, so it can notify drivers in advance to get engine maintenance. The engine data is loaded into a data lake for training.
Which is the MOST suitable predictive model that can be deployed into production?

  • A. Add labels over time to indicate which engine faults occur at what time in the future to turn this into a supervised learning problem. Use a recurrent neural network (RNN) to train the model to recognize when an engine might need maintenance for a certain fault.
  • B. This data requires an unsupervised learning algorithm. Use Amazon SageMaker k-means to cluster the data.
  • C. Add labels over time to indicate which engine faults occur at what time in the future to turn this into a supervised learning problem. Use a convolutional neural network (CNN) to train the model to recognize when an engine might need maintenance for a certain fault.
  • D. This data is already formulated as a time series. Use Amazon SageMaker seq2seq to model the time series.
Answer: A
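For context, here is a minimal Keras sketch of the supervised sequence-modeling approach described in option A. The window length, feature count, layer sizes, and placeholder data are illustrative assumptions, not values from the question.

```python
# Illustrative sketch only: an LSTM classifier over fixed-length windows of
# engine sensor readings. Window length, feature count, layer sizes, and the
# placeholder data are assumptions for demonstration.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

TIMESTEPS = 60   # assumed: 60 consecutive sensor readings per window
N_FEATURES = 4   # assumed: temperature, RPM, and two other sensor channels

model = keras.Sequential([
    keras.Input(shape=(TIMESTEPS, N_FEATURES)),
    layers.LSTM(64),                        # recurrent layer over the time dimension
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability that a fault occurs within the horizon
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# X: (n_windows, TIMESTEPS, N_FEATURES) sensor windows from the data lake
# y: labels added over time (1 = fault within the horizon, 0 = healthy)
X = np.random.rand(128, TIMESTEPS, N_FEATURES).astype("float32")  # placeholder data
y = np.random.randint(0, 2, size=128).astype("float32")           # placeholder labels
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```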

Question#52

A company wants to predict the sale prices of houses based on available historical sales data. The target variable in the company's dataset is the sale price. The features include parameters such as the lot size, living area measurements, non-living area measurements, number of bedrooms, number of bathrooms, year built, and postal code. The company wants to use multi-variable linear regression to predict house sale prices.
Which step should a machine learning specialist take to remove features that are irrelevant for the analysis and reduce the model's complexity?

  • A. Plot a histogram of the features and compute their standard deviation. Remove features with high variance.
  • B. Plot a histogram of the features and compute their standard deviation. Remove features with low variance.
  • C. Build a heatmap showing the correlation of the dataset against itself. Remove features with low mutual correlation scores.
  • D. Run a correlation check of all features against the target variable. Remove features with low target variable correlation scores.
Answer: D
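A minimal pandas sketch of the correlation check in option D follows. The file name, target column name, and the 0.1 cutoff are illustrative assumptions.

```python
# Illustrative sketch: rank features by absolute Pearson correlation with the
# target and drop weakly correlated ones. The file name, target column, and
# the 0.1 cutoff are assumptions for demonstration.
import pandas as pd

df = pd.read_csv("house_sales.csv")   # assumed dataset with numeric features
target = "sale_price"                 # assumed target column name

correlations = df.corr(numeric_only=True)[target].drop(target)
weak_features = correlations[correlations.abs() < 0.1].index.tolist()

print(correlations.sort_values(ascending=False))
print("Candidates to remove:", weak_features)

# Reduced feature set for the multi-variable linear regression
df_reduced = df.drop(columns=weak_features)
```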

Question#53

A company wants to classify user behavior as either fraudulent or normal. Based on internal research, a machine learning specialist will build a binary classifier using two features: age of account, denoted by x, and transaction month, denoted by y. The class distributions are illustrated in the provided figure. The positive class is portrayed in red, while the negative class is portrayed in black.

Which model would have the HIGHEST accuracy?

  • A. Linear support vector machine (SVM)
  • B. Decision tree
  • C. Support vector machine (SVM) with a radial basis function kernel
  • D. Single perceptron with a Tanh activation function
Answer: C
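To illustrate why option C handles classes that are not linearly separable, here is a short scikit-learn sketch; the synthetic circular data is an assumption standing in for the figure.

```python
# Illustrative sketch: an RBF-kernel SVM separates classes that are not
# linearly separable (e.g., one class enclosed by the other), which a linear
# SVM or a single perceptron cannot do. The synthetic circular data is an
# assumption standing in for the figure.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear_svm = SVC(kernel="linear").fit(X_train, y_train)
rbf_svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

print("Linear SVM accuracy:", linear_svm.score(X_test, y_test))
print("RBF SVM accuracy:   ", rbf_svm.score(X_test, y_test))
```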

Question#54

A health care company is planning to use neural networks to classify its X-ray images into normal and abnormal classes. The labeled data is divided into a training set of 1,000 images and a test set of 200 images. The initial training of a neural network model with 50 hidden layers yielded 99% accuracy on the training set, but only 55% accuracy on the test set.
What changes should the Specialist consider to solve this issue? (Choose three.)

  • A. Choose a higher number of layers
  • B. Choose a lower number of layers
  • C. Choose a smaller learning rate
  • D. Enable dropout
  • E. Include all the images from the test set in the training set
  • F. Enable early stopping
Answer: BDF
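A minimal Keras sketch combining the selected techniques: a shallower network (B) with dropout (D), trained with early stopping on validation loss (F). The input shape, layer sizes, and dropout rate are illustrative assumptions.

```python
# Illustrative sketch: a shallower network (B) with dropout (D), trained with
# early stopping on validation loss (F). Input shape, layer sizes, and the
# dropout rate are assumptions for demonstration.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(224, 224, 1)),        # assumed X-ray image size
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                     # dropout regularization
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # normal vs. abnormal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)
# model.fit(X_train, y_train, validation_split=0.2, epochs=100, callbacks=[early_stop])
```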

Question#55

This graph shows the training and validation loss against the epochs for a neural network.
The network being trained is as follows:
  • Two dense layers, one output neuron
  • 100 neurons in each layer
  • 100 epochs
  • Random initialization of weights


Which technique can be used to improve model performance in terms of accuracy in the validation set?

  • A. Early stopping
  • B. Random initialization of weights with appropriate seed
  • C. Increasing the number of epochs
  • D. Adding another layer with 100 neurons
Answer: C

Question#56

A Machine Learning Specialist is using an Amazon SageMaker notebook instance in a private subnet of a corporate VPC. The ML Specialist has important data stored on the Amazon SageMaker notebook instance's Amazon EBS volume, and needs to take a snapshot of that EBS volume. However, the ML Specialist cannot find the Amazon SageMaker notebook instance's EBS volume or Amazon EC2 instance within the VPC.
Why is the ML Specialist unable to see the instance in the VPC?

  • A. Amazon SageMaker notebook instances are based on the EC2 instances within the customer account, but they run outside of VPCs.
  • B. Amazon SageMaker notebook instances are based on the Amazon ECS service within customer accounts.
  • C. Amazon SageMaker notebook instances are based on EC2 instances running within AWS service accounts.
  • D. Amazon SageMaker notebook instances are based on AWS ECS instances running within AWS service accounts.
Answer: C
Reference: https://docs.aws.amazon.com/sagemaker/latest/dg/gs-setup-working-env.html

Question#57

A Machine Learning Specialist is building a model that will perform time series forecasting using Amazon SageMaker. The Specialist has finished training the model and is now planning to perform load testing on the endpoint so they can configure Auto Scaling for the model variant.
Which approach will allow the Specialist to review the latency, memory utilization, and CPU utilization during the load test?

  • A. Review SageMaker logs that have been written to Amazon S3 by leveraging Amazon Athena and Amazon QuickSight to visualize logs as they are being produced.
  • B. Generate an Amazon CloudWatch dashboard to create a single view for the latency, memory utilization, and CPU utilization metrics that are outputted by Amazon SageMaker.
  • C. Build custom Amazon CloudWatch Logs and then leverage Amazon ES and Kibana to query and visualize the log data as it is generated by Amazon SageMaker.
  • D. Send Amazon CloudWatch Logs that were generated by Amazon SageMaker to Amazon ES and use Kibana to query and visualize the log data.
Answer: B
Reference: https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html
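A boto3 sketch of retrieving the metrics that would feed such a CloudWatch dashboard during the load test. The endpoint and variant names are placeholders; the metric names and namespaces follow the SageMaker CloudWatch documentation linked above.

```python
# Illustrative sketch: pull endpoint latency and resource metrics from Amazon
# CloudWatch during the load test. Endpoint/variant names and the time window
# are placeholders; metric names follow the SageMaker CloudWatch documentation.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)
dimensions = [
    {"Name": "EndpointName", "Value": "forecast-endpoint"},   # placeholder
    {"Name": "VariantName", "Value": "AllTraffic"},           # placeholder
]

def get_average(namespace, metric):
    # Retrieve one-minute averages for the given metric over the test window.
    resp = cloudwatch.get_metric_statistics(
        Namespace=namespace, MetricName=metric, Dimensions=dimensions,
        StartTime=start, EndTime=end, Period=60, Statistics=["Average"],
    )
    return sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])

latency = get_average("AWS/SageMaker", "ModelLatency")             # microseconds
cpu = get_average("/aws/sagemaker/Endpoints", "CPUUtilization")    # percent
memory = get_average("/aws/sagemaker/Endpoints", "MemoryUtilization")
print(len(latency), len(cpu), len(memory), "datapoints retrieved")
```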

Question#58

A manufacturing company has structured and unstructured data stored in an Amazon S3 bucket. A Machine Learning Specialist wants to use SQL to run queries on this data.
Which solution requires the LEAST effort to be able to query this data?

  • A. Use AWS Data Pipeline to transform the data and Amazon RDS to run queries.
  • B. Use AWS Glue to catalogue the data and Amazon Athena to run queries.
  • C. Use AWS Batch to run ETL on the data and Amazon Aurora to run the queries.
  • D. Use AWS Lambda to transform the data and Amazon Kinesis Data Analytics to run queries.
Answer: B
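A boto3 sketch of querying the Glue-catalogued data with Athena follows. The database, table, column, and result-bucket names are placeholders.

```python
# Illustrative sketch: query S3 data through Athena once AWS Glue has
# catalogued it. Database, table, column, and result-bucket names are placeholders.
import time
import boto3

athena = boto3.client("athena")

query = athena.start_query_execution(
    QueryString="SELECT sensor_id, AVG(reading) FROM measurements GROUP BY sensor_id",
    QueryExecutionContext={"Database": "manufacturing_db"},                 # placeholder
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
)
query_id = query["QueryExecutionId"]

# Poll until the query finishes, then fetch the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Returned {len(rows)} rows (including header)")
```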

Question#59

A Machine Learning Specialist is developing a custom video recommendation model for an application. The dataset used to train this model is very large with millions of data points and is hosted in an Amazon S3 bucket. The Specialist wants to avoid loading all of this data onto an Amazon SageMaker notebook instance because it would take hours to move and will exceed the attached 5 GB Amazon EBS volume on the notebook instance.
Which approach allows the Specialist to use all the data to train the model?

  • A. Load a smaller subset of the data into the SageMaker notebook and train locally. Confirm that the training code is executing and the model parameters seem reasonable. Initiate a SageMaker training job using the full dataset from the S3 bucket using Pipe input mode.
  • B. Launch an Amazon EC2 instance with an AWS Deep Learning AMI and attach the S3 bucket to the instance. Train on a small amount of the data to verify the training code and hyperparameters. Go back to Amazon SageMaker and train using the full dataset.
  • C. Use AWS Glue to train a model using a small subset of the data to confirm that the data will be compatible with Amazon SageMaker. Initiate a SageMaker training job using the full dataset from the S3 bucket using Pipe input mode.
  • D. Load a smaller subset of the data into the SageMaker notebook and train locally. Confirm that the training code is executing and the model parameters seem reasonable. Launch an Amazon EC2 instance with an AWS Deep Learning AMI and attach the S3 bucket to train the full dataset.
Answer: A
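A SageMaker Python SDK sketch of the final step in option A, launching the training job with Pipe input mode. The container image, IAM role, instance type, and S3 paths are placeholders.

```python
# Illustrative sketch: launch a SageMaker training job that streams the full
# dataset from S3 with Pipe input mode instead of downloading it first.
# The container image, IAM role, instance type, and S3 URIs are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
estimator = Estimator(
    image_uri="<training-image-uri>",               # placeholder
    role="<execution-role-arn>",                    # placeholder
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    input_mode="Pipe",                              # stream from S3 rather than File mode
    output_path="s3://my-bucket/model-artifacts/",  # placeholder
    sagemaker_session=session,
)

# The full dataset never touches the notebook's EBS volume; it streams from
# the S3 prefix directly into the training container.
estimator.fit({"train": "s3://my-bucket/training-data/"})
```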

Question#60

A Machine Learning Specialist has completed a proof of concept for a company using a small data sample, and now the Specialist is ready to implement an end-to-end solution in AWS using Amazon SageMaker. The historical training data is stored in Amazon RDS.
Which approach should the Specialist use for training a model using that data?

  • A. Write a direct connection to the SQL database within the notebook and pull data in
  • B. Push the data from Microsoft SQL Server to Amazon S3 using an AWS Data Pipeline and provide the S3 location within the notebook.
  • C. Move the data to Amazon DynamoDB and set up a connection to DynamoDB within the notebook to pull data in.
  • D. Move the data to Amazon ElastiCache using AWS DMS and set up a connection within the notebook to pull data in for fast access.
Answer: B
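Once AWS Data Pipeline has exported the RDS data to Amazon S3, the notebook only needs the S3 location. A minimal sketch follows, with the bucket and key as placeholders (reading an s3:// path with pandas assumes the s3fs package is installed).

```python
# Illustrative sketch: read the exported training data directly from S3 inside
# the SageMaker notebook. Bucket and key names are placeholders; reading an
# s3:// path with pandas requires the s3fs package.
import pandas as pd

s3_uri = "s3://my-training-bucket/exports/historical_data.csv"   # placeholder
df = pd.read_csv(s3_uri)
print(df.shape)

# The same S3 prefix can then be passed to a SageMaker training job as its
# input data channel.
```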
