AWS Certified Solutions Architect - Associate SAA-C02
Questions 381-390
Question#381

A user owns a MySQL database that is accessed by various clients who expect, at most, 100 ms latency on requests. Once a record is stored in the database, it is rarely changed. Clients only access one record at a time.
Database access has been increasing exponentially due to increased client demand. The resultant load will soon exceed the capacity of the most expensive hardware available for purchase. The user wants to migrate to AWS, and is willing to change database systems.
Which service would alleviate the database load issue and offer virtually unlimited scalability for the future?

  • A. Amazon RDS
  • B. Amazon DynamoDB
  • C. Amazon Redshift
  • D. AWS Data Pipeline
Answer: B
Reference:
https://aws.amazon.com/blogs/big-data/near-zero-downtime-migration-from-mysql-to-dynamodb/
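
The access pattern in the question maps directly onto a key-value store. As an illustration (not part of the original exam item), here is a minimal boto3 sketch of single-record reads and writes against DynamoDB; the table name, key attribute, and item values are hypothetical, and the table is assumed to already exist with on-demand capacity.

    import boto3

    dynamodb = boto3.resource("dynamodb")

    # Hypothetical table keyed on a single partition key, matching the
    # "one record at a time, rarely changed" access pattern in the question.
    table = dynamodb.Table("ProductRecords")

    # Store a record once; it is rarely updated afterward.
    table.put_item(Item={"record_id": "rec-0001", "payload": "example data"})

    # Clients fetch exactly one record per request by its key. DynamoDB keeps
    # such lookups in the single-digit-millisecond range and scales without a
    # hardware ceiling.
    response = table.get_item(Key={"record_id": "rec-0001"})
    print(response.get("Item"))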

Question#382

A company designs a mobile app for its customers to upload photos to a website. The app needs a secure login with multi-factor authentication (MFA). The company wants to limit the initial build time and the maintenance of the solution.
Which solution should a solutions architect recommend to meet these requirements?

  • A. Use Amazon Cognito Identity with SMS-based MFA.
  • B. Edit IAM policies to require MFA for all users.
  • C. Federate IAM against the corporate Active Directory that requires MFA.
  • D. Use Amazon API Gateway and require server-side encryption (SSE) for photos.
Answer: A
Reference:
https://aws.amazon.com/cognito/
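
For reference only, a hedged boto3 sketch of creating a Cognito user pool that enforces SMS-based MFA; the pool name and the SNS caller role ARN are placeholders, and a production setup would also define an app client and sign-in flows.

    import boto3

    cognito = boto3.client("cognito-idp")

    # Create a user pool that requires SMS-based MFA at sign-in. The pool name
    # and the SNS role ARN (used by Cognito to send SMS messages) are placeholders.
    response = cognito.create_user_pool(
        PoolName="photo-app-users",
        MfaConfiguration="ON",                    # MFA required for all users
        AutoVerifiedAttributes=["phone_number"],  # verify the phone number used for MFA
        SmsConfiguration={
            "SnsCallerArn": "arn:aws:iam::111122223333:role/cognito-sms-role"
        },
    )
    print(response["UserPool"]["Id"])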

Question#383

A company has an application that uses overnight digital images of products on store shelves to analyze inventory data. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB) and obtains the images from an Amazon S3 bucket; the image metadata must then be processed by worker nodes for analysis. A solutions architect needs to ensure that every image is processed by the worker nodes.
What should the solutions architect do to meet this requirement in the MOST cost-efficient way?

  • A. Send the image metadata from the application directly to a second ALB for the worker nodes that use an Auto Scaling group of EC2 Spot Instances as the target group.
  • B. Process the image metadata by sending it directly to EC2 Reserved Instances in an Auto Scaling group. With a dynamic scaling policy, use an Amazon CloudWatch metric for average CPU utilization of the Auto Scaling group as soon as the front-end application obtains the images.
  • C. Write messages to Amazon Simple Queue Service (Amazon SQS) when the front-end application obtains an image. Process the images with EC2 On-Demand Instances in an Auto Scaling group with instance scale-in protection and a fixed number of instances with periodic health checks.
  • D. Write messages to Amazon Simple Queue Service (Amazon SQS) when the application obtains an image. Process the images with EC2 Spot Instances in an Auto Scaling group with instance scale-in protection and a dynamic scaling policy using a custom Amazon CloudWatch metric for the current number of messages in the queue.
Answer: D
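
To illustrate why the queue decouples ingestion from processing, here is a hedged boto3 sketch: the front end writes one SQS message per image, and the worker Auto Scaling group scales on queue depth instead of CPU. The queue URL, queue name, group name, and target value are placeholders, and a production setup would often publish a "backlog per instance" custom metric rather than tracking the raw queue length.

    import boto3

    sqs = boto3.client("sqs")
    autoscaling = boto3.client("autoscaling")

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/image-metadata"

    # One message per image: if a Spot worker is interrupted mid-processing,
    # the message becomes visible again after the visibility timeout, so no
    # image is lost.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='{"s3_key": "shelf-001.jpg"}')

    # Scale the worker fleet on the queue backlog rather than CPU utilization.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="image-workers",
        PolicyName="scale-on-queue-backlog",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "CustomizedMetricSpecification": {
                "Namespace": "AWS/SQS",
                "MetricName": "ApproximateNumberOfMessagesVisible",
                "Dimensions": [{"Name": "QueueName", "Value": "image-metadata"}],
                "Statistic": "Average",
            },
            "TargetValue": 10.0,  # roughly ten queued messages per instance
        },
    )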

Question#384

A solutions architect needs to host a high performance computing (HPC) workload in the AWS Cloud. The workload will run on hundreds of Amazon EC2 instances and will require parallel access to a shared file system to enable distributed processing of large datasets. Datasets will be accessed across multiple instances simultaneously. The workload requires access latency within 1 ms. After processing has completed, engineers will need access to the dataset for manual postprocessing.
Which solution will meet these requirements?

  • A. Use Amazon Elastic File System (Amazon EFS) as a shared file system. Access the dataset from Amazon EFS.
  • B. Mount an Amazon S3 bucket to serve as the shared file system. Perform postprocessing directly from the S3 bucket.
  • C. Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for postprocessing.
  • D. Configure AWS Resource Access Manager to share an Amazon S3 bucket so that it can be mounted to all instances for processing and postprocessing.
Answer: C
Reference:
https://jayendrapatil.com/aws-fsx-for-lustre/
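
As a non-authoritative sketch of this option, the boto3 call below creates a scratch FSx for Lustre file system linked to an S3 bucket; the subnet ID, bucket name, and storage capacity are placeholders.

    import boto3

    fsx = boto3.client("fsx")

    # Scratch Lustre file system linked to S3: ImportPath lazily loads the
    # dataset for low-latency parallel access across hundreds of instances,
    # and ExportPath lets results be written back to S3, where engineers can
    # pick them up for manual postprocessing.
    response = fsx.create_file_system(
        FileSystemType="LUSTRE",
        StorageCapacity=1200,                    # GiB, placeholder size
        SubnetIds=["subnet-0abc123de456f7890"],  # placeholder subnet
        LustreConfiguration={
            "DeploymentType": "SCRATCH_2",
            "ImportPath": "s3://hpc-dataset-bucket",
            "ExportPath": "s3://hpc-dataset-bucket/results",
        },
    )
    print(response["FileSystem"]["FileSystemId"])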

Question#385

A company is using Amazon Route 53 latency-based routing to route requests to its UDP-based application for users around the world. The application is hosted on redundant servers in the company's on-premises data centers in the United States, Asia, and Europe. The company's compliance requirements state that the application must be hosted on premises. The company wants to improve the performance and availability of the application.
What should a solutions architect do to meet these requirements?

  • A. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
  • B. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
  • C. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three NLBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
  • D. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three ALBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
Answer: A
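
A hedged boto3 sketch of the accelerator setup follows. Global Accelerator listeners support UDP, which CloudFront and ALBs do not; the accelerator name, port, Region, and NLB ARN are placeholders, and one endpoint group would be created per Region. The application's DNS record would then be a CNAME pointing at the accelerator's DNS name.

    import boto3

    # The Global Accelerator control-plane API is served from us-west-2,
    # regardless of where the accelerated resources are located.
    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    accelerator = ga.create_accelerator(Name="udp-app-accelerator", Enabled=True)
    accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

    # UDP listener on the accelerator's static anycast IP addresses.
    listener = ga.create_listener(
        AcceleratorArn=accelerator_arn,
        Protocol="UDP",
        PortRanges=[{"FromPort": 5000, "ToPort": 5000}],
    )

    # One endpoint group per Region, each pointing at the NLB that forwards
    # traffic to the on-premises servers for that geography.
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="us-east-1",
        EndpointConfigurations=[
            {
                "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/udp-app/0123456789abcdef",
                "Weight": 128,
            }
        ],
    )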

Question#386

A company manages its own Amazon EC2 instances that run MySQL databases. The company is manually managing replication and scaling as demand increases or decreases. The company needs a new solution that simplifies the process of adding or removing compute capacity to or from its database tier as needed. The solution also must offer improved performance, scaling, and durability with minimal effort from operations.
Which solution meets these requirements?

  • A. Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.
  • B. Migrate the databases to Amazon Aurora Serverless for Aurora PostgreSQL.
  • C. Combine the databases into one larger MySQL database. Run the larger database on larger EC2 instances.
  • D. Create an EC2 Auto Scaling group for the database tier. Migrate the existing databases to the new environment.
Answer: A
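
A minimal boto3 sketch, assuming Aurora Serverless v1 with the MySQL-compatible engine; the cluster identifier, credentials, and capacity range are placeholders, and the existing data would be migrated separately (for example with AWS DMS or a native MySQL dump).

    import boto3

    rds = boto3.client("rds")

    # Aurora MySQL-compatible cluster in serverless mode: compute capacity
    # scales automatically within the configured ACU range, and storage is
    # replicated six ways across three Availability Zones for durability.
    rds.create_db_cluster(
        DBClusterIdentifier="inventory-aurora-serverless",
        Engine="aurora-mysql",
        EngineMode="serverless",
        MasterUsername="admin",
        MasterUserPassword="REPLACE_WITH_SECRET",  # placeholder credential
        ScalingConfiguration={
            "MinCapacity": 1,
            "MaxCapacity": 16,
            "AutoPause": True,                     # pause when idle to cut cost
            "SecondsUntilAutoPause": 300,
        },
    )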

Question#387

A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure.
Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?

  • A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
  • B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
  • C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
  • D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
Answer: A
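
As a hedged sketch of this option, the calls below create a direct-put Kinesis Data Firehose delivery stream into S3 and a lifecycle rule that moves objects to S3 Glacier after 14 days; the stream name, bucket, role ARN, and buffering values are placeholders.

    import boto3

    firehose = boto3.client("firehose")
    s3 = boto3.client("s3")

    # Fully managed ingestion: devices (or a thin producer) put records to
    # Firehose, which batches the ~2 KB alerts and delivers them to S3.
    firehose.create_delivery_stream(
        DeliveryStreamName="edge-device-alerts",
        DeliveryStreamType="DirectPut",
        ExtendedS3DestinationConfiguration={
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::device-alert-archive",
            "BufferingHints": {"IntervalInSeconds": 300, "SizeInMBs": 64},
        },
    )

    # Keep 14 days in S3 Standard for immediate analysis, then archive.
    s3.put_bucket_lifecycle_configuration(
        Bucket="device-alert-archive",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-after-14-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )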

Question#388

A company has two AWS accounts: Production and Development. There are code changes ready in the Development account to push to the Production account.
In the alpha phase, only two senior developers on the development team need access to the Production account. In the beta phase, more developers might need access to perform testing as well.
What should a solutions architect recommend?

  • A. Create two policy documents using the AWS Management Console in each account. Assign the policy to developers who need access.
  • B. Create an IAM role in the Development account. Give one IAM role access to the Production account. Allow developers to assume the role.
  • C. Create an IAM role in the Production account with the trust policy that specifies the Development account. Allow developers to assume the role.
  • D. Create an IAM group in the Production account and add it as a principal in the trust policy that specifies the Production account. Add developers to the group.
Answer: C
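
To make the cross-account pattern concrete, here is a hedged boto3 sketch; the account IDs, role name, and session name are placeholders. In the beta phase, access widens simply by allowing more developers in the Development account to call sts:AssumeRole on this role.

    import json
    import boto3

    iam = boto3.client("iam")  # run with credentials from the Production account

    DEV_ACCOUNT_ID = "222233334444"  # placeholder Development account ID

    # Trust policy: the Development account is the trusted principal, so IAM
    # users or roles there can assume this role if their own policies allow it.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{DEV_ACCOUNT_ID}:root"},
                "Action": "sts:AssumeRole",
            }
        ],
    }

    iam.create_role(
        RoleName="ProdDeploymentAccess",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
        Description="Assumed by approved developers from the Development account",
    )

    # From the Development account, an authorized developer assumes the role.
    sts = boto3.client("sts")
    credentials = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/ProdDeploymentAccess",
        RoleSessionName="alpha-phase-deploy",
    )["Credentials"]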

Question#389

A company is using an Amazon S3 bucket to store data uploaded by different departments from multiple locations. During an AWS Well-Architected review, the financial manager notices that 10 TB of S3 Standard storage data has been charged each month. However, selecting all files and folders in the Amazon S3 console shows a total size of only 5 TB.
What are the possible causes for this difference? (Choose two.)

  • A. Some files are stored with deduplication.
  • B. The S3 bucket has versioning enabled.
  • C. There are incomplete S3 multipart uploads.
  • D. The S3 bucket has AWS Key Management Service (AWS KMS) enabled.
  • E. The S3 bucket has Intelligent-Tiering enabled.
Answer: B, C
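
Both causes can be confirmed and cleaned up through the S3 API. The sketch below is illustrative only: the bucket name and retention values are placeholders, and the single-page listing calls are a simplification (a real audit would paginate).

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "department-uploads"  # placeholder bucket name

    # Noncurrent object versions and abandoned multipart upload parts are
    # billed as storage but are not included in the console's "select all"
    # size calculation. (First page of results only, for brevity.)
    versions = s3.list_object_versions(Bucket=BUCKET)
    noncurrent = [v for v in versions.get("Versions", []) if not v["IsLatest"]]
    print("noncurrent versions:", len(noncurrent))

    uploads = s3.list_multipart_uploads(Bucket=BUCKET)
    print("incomplete multipart uploads:", len(uploads.get("Uploads", [])))

    # A lifecycle rule can expire old versions and abort stalled uploads.
    s3.put_bucket_lifecycle_configuration(
        Bucket=BUCKET,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "cleanup-hidden-storage",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
                    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
                }
            ]
        },
    )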

Question#390

A company is using a centralized AWS account to store log data in various Amazon S3 buckets. A solutions architect needs to ensure that the data is encrypted at rest before the data is uploaded to the S3 buckets. The data also must be encrypted in transit.
Which solution meets these requirements?

  • A. Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets.
  • B. Use server-side encryption to encrypt the data that is being uploaded to the S3 buckets.
  • C. Create bucket policies that require the use of server-side encryption with S3 managed encryption keys (SSE-S3) for S3 uploads.
  • D. Enable the security option to encrypt the S3 buckets through the use of a default AWS Key Management Service (AWS KMS) key.
Answer: A
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingEncryption.html
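
For illustration, a minimal client-side encryption sketch: the object is encrypted locally before upload, so it is already ciphertext when it reaches the bucket, and boto3's HTTPS endpoints cover encryption in transit. The bucket name and object key are placeholders, the symmetric key handling is deliberately simplified, and a production setup would typically source the data key from AWS KMS.

    import boto3
    from cryptography.fernet import Fernet  # third-party 'cryptography' package

    s3 = boto3.client("s3")  # boto3 talks to S3 over HTTPS by default

    # Encrypt on the client before the data ever leaves the machine.
    key = Fernet.generate_key()  # simplified key handling (placeholder)
    cipher = Fernet(key)
    log_data = b"2023-01-01T00:00:00Z service=api status=200"
    ciphertext = cipher.encrypt(log_data)

    # Upload the ciphertext; the bucket only ever stores encrypted bytes.
    s3.put_object(
        Bucket="central-log-archive",      # placeholder bucket name
        Key="logs/2023/01/01/api.log.enc",
        Body=ciphertext,
    )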
