AWS Certified Solutions Architect - Professional
Questions 41-50
Question#41

A company has an application that is deployed on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are part of an Auto Scaling group. The application has unpredictable workloads and frequently scales out and in. The company's development team wants to analyze application logs to find ways to improve the application's performance. However, the logs are no longer available after instances scale in.

Which solution will give the development team the ability to view the application logs after a scale-in event?

  • A. Enable access logs for the ALB. Store the logs in an Amazon S3 bucket.
  • B. Configure the EC2 instances to publish logs to Amazon CloudWatch Logs by using the unified CloudWatch agent.
  • C. Modify the Auto Scaling group to use a step scaling policy.
  • D. Instrument the application with AWS X-Ray tracing.
Answer: B
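
CloudWatch Logs retains log data independently of the instance lifecycle, so logs published by the unified agent remain queryable after scale-in; ALB access logs (option A) capture only request metadata, and X-Ray (option D) captures traces rather than application logs. Below is a minimal sketch of a unified agent configuration, generated as JSON from Python; the log file path and log group name are illustrative assumptions:

```python
import json

# Hypothetical application log path and log group; adjust for the real workload.
agent_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/app/application.log",
                        "log_group_name": "/myapp/application",
                        "log_stream_name": "{instance_id}",
                        "timezone": "UTC",
                    }
                ]
            }
        }
    }
}

# The unified agent reads its configuration from this well-known path on Linux.
path = "/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json"
with open(path, "w") as f:
    json.dump(agent_config, f, indent=2)
```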

Question#42

A company is deploying a new cluster for big data analytics on AWS. The cluster will run across many Linux Amazon EC2 instances that are spread across multiple Availability Zones.

All of the nodes in the cluster must have read and write access to common underlying file storage. The file storage must be highly available, must be resilient, must be compatible with the Portable Operating System Interface (POSIX), and must accommodate high levels of throughput.

Which storage solution will meet these requirements?

  • A. Provision an AWS Storage Gateway file gateway NFS file share that is attached to an Amazon S3 bucket. Mount the NFS file share on each EC2 instance in the cluster.
  • B. Provision a new Amazon Elastic File System (Amazon EFS) file system that uses General Purpose performance mode. Mount the EFS file system on each EC2 instance in the cluster.
  • C. Provision a new Amazon Elastic Block Store (Amazon EBS) volume that uses the io2 volume type. Attach the EBS volume to all of the EC2 instances in the cluster.
  • D. Provision a new Amazon Elastic File System (Amazon EFS) file system that uses Max I/O performance mode. Mount the EFS file system on each EC2 instance in the cluster.
Answer: D
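
Max I/O performance mode is built for highly parallelized workloads: it scales to higher aggregate throughput and operations per second across many concurrent clients, at the cost of slightly higher per-operation latency, which suits this multi-AZ analytics cluster better than General Purpose mode. A minimal boto3 sketch, with placeholder subnet IDs:

```python
import boto3

efs = boto3.client("efs")

# Max I/O mode scales aggregate throughput and IOPS across many concurrent
# clients, trading slightly higher per-operation latency.
fs = efs.create_file_system(
    CreationToken="analytics-cluster-fs",  # hypothetical idempotency token
    PerformanceMode="maxIO",
    Encrypted=True,
)

# One mount target per Availability Zone so every node mounts locally.
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:  # placeholder subnets
    efs.create_mount_target(FileSystemId=fs["FileSystemId"], SubnetId=subnet_id)

# Each instance then mounts the file system with the EFS mount helper, e.g.:
#   sudo mount -t efs <FileSystemId>:/ /mnt/data
```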

Question#43

A company had a third-party audit of its AWS environment. The auditor identified secrets in developer documentation and found secrets that were hardcoded into AWS CloudFormation templates throughout the environment. The auditor also identified security groups that allowed inbound traffic from the internet and outbound traffic to all destinations on the internet.

A solutions architect must design a solution that will encrypt all secrets and rotate the secrets every 90 days. Additionally, the solutions architect must configure the security groups to prevent resources from being accessible from the internet.

Which solution will meet these requirements?

  • A. Use AWS Secrets Manager to create, store, and access secrets. Create new secrets in AWS CloudFormation by using the AWS::SecretsManager::Secret resource type. Reference the secrets in other templates by using Secrets Manager dynamic references. Configure automatic rotation in Secrets Manager to rotate the secrets every 90 days. Use AWS Firewall Manager to create a policy that identifies all security groups that allow inbound or outbound communications for any protocols to 0.0.0.0/0. Whenever the policy flags a security group in violation, remove the noncompliant rule from security groups.
  • B. Use AWS Systems Manager Parameter Store to create, store, and access secrets. Create new Parameter Store items in AWS CloudFormation by using the AWS::SSM::Parameter resource type. Access these items by using the AWS CLI or AWS APIs. Configure automatic rotation in Parameter Store to rotate the secrets every 90 days. Use AWS Firewall Manager to create a policy that identifies all security groups that allow inbound or outbound communications for any protocols to 0.0.0.0/0. Whenever the policy flags a security group in violation, remove the noncompliant rule from security groups.
  • C. Use AWS Secrets Manager to create, store, and access secrets. Create new secrets in AWS CloudFormation by using the AWS::SecretsManager::Secret resource type. Reference the secrets in other templates by using Secrets Manager dynamic references. Configure automatic rotation in Secrets Manager to rotate the secrets every 90 days. Use AWS Firewall Manager to create a policy that enforces a requirement for all security groups to explicitly deny inbound and outbound communications for all protocols to 0.0.0.0/0.
  • D. Use AWS Systems Manager Parameter Store to create, store, and access secrets. Create new Parameter Store items in AWS CloudFormation by using the AWS::SSM::Parameter resource type. Reference the items in other templates by using Systems Manager dynamic references. Configure automatic rotation in Parameter Store to rotate the secrets every 90 days. Use AWS Firewall Manager to create a policy that enforces a requirement for all security groups to explicitly deny inbound and outbound communications for all protocols to 0.0.0.0/0.
Answer: A
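
Option A is the only workable choice: Secrets Manager offers built-in automatic rotation (Parameter Store does not), and security groups have no deny rules, so the "explicit deny" policies in options C and D cannot exist. A minimal boto3 sketch of the rotation setup; the secret name and rotation Lambda ARN are placeholders:

```python
import boto3

sm = boto3.client("secretsmanager")

# Store the secret once; CloudFormation templates can then reference it with a
# dynamic reference such as {{resolve:secretsmanager:prod/app/db-credentials}}.
secret = sm.create_secret(
    Name="prod/app/db-credentials",  # hypothetical secret name
    SecretString='{"username": "app", "password": "CHANGE_ME"}',
)

# Attach a rotation Lambda and rotate automatically every 90 days.
sm.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN=(
        "arn:aws:lambda:us-east-1:111122223333:function:rotate-db"  # placeholder
    ),
    RotationRules={"AutomaticallyAfterDays": 90},
)
```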

Question#44

A company is running an application on Amazon EC2 instances in the AWS Cloud. The application is using a MongoDB database with a replica set as its data tier. The MongoDB database is installed on systems in the company's on-premises data center and is accessible through an AWS Direct Connect connection to the data center environment.

A solutions architect must migrate the on-premises MongoDB database to Amazon DocumentDB (with MongoDB compatibility).

Which strategy should the solutions architect choose to perform this migration?

  • A. Create a fleet of EC2 instances. Install MongoDB Community Edition on the EC2 instances, and create a database. Configure continuous synchronous replication with the database that is running in the on-premises data center.
  • B. Create an AWS Database Migration Service (AWS DMS) replication instance. Create a source endpoint for the on-premises MongoDB database by using change data capture (CDC). Create a target endpoint for the Amazon DocumentDB database. Create and run a DMS migration task.
  • C. Create a data migration pipeline by using AWS Data Pipeline. Define data nodes for the on-premises MongoDB database and the Amazon DocumentDB database. Create a scheduled task to run the data pipeline.
  • D. Create a source endpoint for the on-premises MongoDB database by using AWS Glue crawlers. Configure continuous asynchronous replication between the MongoDB database and the Amazon DocumentDB database.
Answer: B
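
AWS DMS supports MongoDB as a source and Amazon DocumentDB as a target, including ongoing replication with change data capture (CDC), whereas AWS Data Pipeline and AWS Glue crawlers have no such migration capability. A minimal boto3 sketch with placeholder hosts and ARNs (authentication settings omitted):

```python
import json

import boto3

dms = boto3.client("dms")

source = dms.create_endpoint(
    EndpointIdentifier="mongo-source",
    EndpointType="source",
    EngineName="mongodb",
    ServerName="mongo.onprem.example.com",  # placeholder on-premises host
    Port=27017,
    DatabaseName="appdb",
)["Endpoint"]

target = dms.create_endpoint(
    EndpointIdentifier="docdb-target",
    EndpointType="target",
    EngineName="docdb",
    ServerName="docdb.cluster-xxxx.us-east-1.docdb.amazonaws.com",  # placeholder
    Port=27017,
    DatabaseName="appdb",
)["Endpoint"]

# Full load plus CDC keeps the target in sync until cutover.
dms.create_replication_task(
    ReplicationTaskIdentifier="mongo-to-docdb",
    SourceEndpointArn=source["EndpointArn"],
    TargetEndpointArn=target["EndpointArn"],
    ReplicationInstanceArn=(
        "arn:aws:dms:us-east-1:111122223333:rep:EXAMPLE"  # placeholder instance
    ),
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection", "rule-id": "1", "rule-name": "all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```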

Question#45

A company is deploying a new API to AWS. The API uses Amazon API Gateway with a Regional API endpoint and an AWS Lambda function for hosting. The API retrieves data from an external vendor API, stores data in an Amazon DynamoDB global table, and retrieves data from the DynamoDB global table. The API key for the vendor's API is stored in AWS Secrets Manager and is encrypted with a customer managed key in AWS Key Management Service (AWS KMS). The company has deployed its own API into a single AWS Region.

A solutions architect needs to change the API components of the company's API to ensure that the components can run across multiple Regions in an active-active configuration.

Which combination of changes will meet this requirement with the LEAST operational overhead? (Choose three.)

  • A. Deploy the API to multiple Regions. Configure Amazon Route 53 with custom domain names that route traffic to each Regional API endpoint. Implement a Route 53 multivalue answer routing policy.
  • B. Create a new KMS multi-Region customer managed key. Create a new KMS customer managed replica key in each in-scope Region.
  • C. Replicate the existing Secrets Manager secret to other Regions. For each in-scope Region's replicated secret, select the appropriate KMS key.
  • D. Create a new AWS managed KMS key in each in-scope Region. Convert an existing key to a multi-Region key. Use the multi-Region key in other Regions.
  • E. Create a new Secrets Manager secret in each in-scope Region. Copy the secret value from the existing Region to the new secret in each in-scope Region.
  • F. Modify the deployment process for the Lambda function to repeat the deployment across in-scope Regions. Turn on the multi-Region option for the existing API. Select the Lambda function that is deployed in each Region as the backend for the multi-Region API.
Answer: ABC
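
Options A, B, and C fit together: Route 53 routes traffic to each Regional endpoint, a multi-Region KMS key with replica keys gives every Region local decryption, and the replicated Secrets Manager secret pairs with those replica keys. An existing single-Region key cannot be converted to multi-Region (option D), and API Gateway has no "multi-Region option" (option F). A minimal boto3 sketch of the key and secret replication, with placeholder names and Regions:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# A multi-Region primary key; replica keys share the same key material and ID.
key = kms.create_key(MultiRegion=True, Description="vendor-api-key-cmk")
key_id = key["KeyMetadata"]["KeyId"]
kms.replicate_key(KeyId=key_id, ReplicaRegion="eu-west-1")  # placeholder Region

sm = boto3.client("secretsmanager", region_name="us-east-1")

# Replicate the existing secret, pairing it with the replica key in each Region.
sm.replicate_secret_to_regions(
    SecretId="prod/vendor/api-key",  # placeholder secret name
    AddReplicaRegions=[{"Region": "eu-west-1", "KmsKeyId": key_id}],
)
```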

Question#46

A company deploys workloads in multiple AWS accounts. Each account has a VPC with VPC flow logs published in text log format to a centralized Amazon S3 bucket. Each log file is compressed with gzip compression. The company must retain the log files indefinitely.

A security engineer occasionally analyzes the logs by using Amazon Athena to query the VPC flow logs. The query performance is degrading over time as the number of ingested logs is growing. A solutions architect must improve the performance of the log analysis and reduce the storage space that the VPC flow logs use.

Which solution will meet these requirements with the LARGEST performance improvement?

  • A. Create an AWS Lambda function to decompress the gzip files and to compress the files with bzip2 compression. Subscribe the Lambda function to an s3:ObjectCreated:Put S3 event notification for the S3 bucket.
  • B. Enable S3 Transfer Acceleration for the S3 bucket. Create an S3 Lifecycle configuration to move files to the S3 Intelligent-Tiering storage class as soon as the files are uploaded.
  • C. Update the VPC flow log configuration to store the files in Apache Parquet format. Specify hourly partitions for the log files.
  • D. Create a new Athena workgroup without data usage control limits. Use Athena engine version 2.
Answer: C
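
Storing flow logs natively in Apache Parquet gives Athena a columnar, compressed format to scan, and hourly Hive-compatible partitions let queries prune by time; re-compressing row-oriented text (option A) or changing the storage class (option B) does little for query performance. A minimal boto3 sketch with a placeholder VPC ID and bucket:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::central-flow-logs",  # placeholder bucket ARN
    DestinationOptions={
        "FileFormat": "parquet",           # columnar format Athena scans efficiently
        "HiveCompatiblePartitions": True,  # partition layout Athena can prune
        "PerHourPartition": True,          # hourly partitions per the requirement
    },
)
```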

Question#47

A company's solutions architect is managing a learning platform that supports more than 1 million students. The company's business reporting team is experiencing slow performance while extracting large datasets from the database. The learning application is based on PHP and runs on Amazon EC2 instances that are in an Amazon EC2 Auto Scaling group behind an Application Load Balancer (ALB). Application data is stored in an Amazon S3 bucket and in an Amazon RDS for MySQL database. The ALB is the origin of an Amazon CloudFront distribution.

The solutions architect observes that slow read operations for SELECT queries are affecting the RDS for MySQL DB instance's CPU utilization. The solutions architect must find a scalable solution to improve the slow website performance with near-zero downtime. The solution also must provide automatic failover with no data loss.

Which solution will meet these requirements?

  • A. Create an incremental database backup by using Percona XtraBackup. Compress the backup files. Synchronize the backup files to Amazon S3. Restore the backup files from Amazon S3 to Amazon Aurora MySQL. Direct the application endpoint to the new Aurora DB instance.
  • B. Convert the DB instance to a Multi-AZ deployment. Set the query_cache_type parameter on the database to zero. Increase the CloudFront caching TTL to reduce application server CPU utilization.
  • C. Create an Amazon Aurora read replica from the DB instance. Wait until the read replica is synchronized with the source DB instance. Promote the read replica to a standalone DB cluster. Direct the application endpoint to the new Aurora DB instance.
  • D. Create a read replica cluster on the DB instance. Use a Multi-AZ deployment. Synchronize the read replica with the primary DB instance. Promote the read replica as the primary DB instance.
Answer: C
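
An Aurora read replica of the RDS for MySQL instance replicates continuously; once replica lag reaches zero, promoting it to a standalone cluster allows a near-zero-downtime cutover, and Aurora then provides automatic failover. A minimal boto3 sketch; the source instance ARN and identifiers below are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora MySQL cluster that replicates from the RDS for MySQL instance.
rds.create_db_cluster(
    DBClusterIdentifier="learning-aurora",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:db:learning-mysql"  # placeholder ARN
    ),
)
rds.create_db_instance(
    DBInstanceIdentifier="learning-aurora-1",
    DBClusterIdentifier="learning-aurora",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)

# After replica lag reaches zero, break replication and cut the application over.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="learning-aurora")
```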

Question#48

A company is using IoT devices on its manufacturing equipment. Data from the devices travels to the AWS Cloud through a connection to AWS IoT Core. An Amazon Kinesis data stream sends the data from AWS IoT Core to the company's processing application. The processing application stores data in Amazon S3.

A new requirement states that the company also must send the raw data to a third-party system by using an HTTP API.

Which solution will meet these requirements with the LEAST amount of development work?

  • A. Create a custom AWS Lambda function to consume records from the Kinesis data stream. Configure the Lambda function to call the third-party HTTP API.
  • B. Create an S3 event notification with Amazon EventBridge (Amazon CloudWatch Events) as the event destination. Create an EventBridge (CloudWatch Events) API destination for the third-party HTTP API.
  • C. Create an Amazon Kinesis Data Firehose delivery stream. Configure an HTTP endpoint destination that targets the third-party HTTP API. Configure the Kinesis data stream to send data to the Kinesis Data Firehose delivery stream.
  • D. Create an S3 event notification with an Amazon Simple Queue Service (Amazon SQS) queue as the event destination. Configure the SQS queue to invoke a custom AWS Lambda function. Configure the Lambda function to call the third-party HTTP API.
Answer: C
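
Kinesis Data Firehose can consume the existing Kinesis data stream and deliver records to an HTTP endpoint with built-in retries and S3 backup, so no consumer code has to be written or operated, unlike options A and D. A minimal boto3 sketch with placeholder ARNs and a placeholder partner URL:

```python
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="raw-iot-to-partner",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": (
            "arn:aws:kinesis:us-east-1:111122223333:stream/iot-raw"  # placeholder
        ),
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-read",   # placeholder
    },
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {
            "Url": "https://api.partner.example.com/ingest",  # placeholder URL
            "Name": "partner-api",
        },
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery",  # placeholder
        "S3BackupMode": "FailedDataOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery",
            "BucketARN": "arn:aws:s3:::firehose-failed-records",  # placeholder
        },
    },
)
```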

Question#49

A solutions architect is deploying a web application that consists of a web tier, an application tier, and a database tier. The infrastructure must be highly available across two Availability Zones. The solution must minimize single points of failure and must be resilient.

Which combination of steps should the solutions architect take to meet these requirements? (Choose three.)

  • A. Deploy an Application Load Balancer (ALB) that is mapped to a public subnet in each Availability Zone for the web tier. Deploy Amazon EC2 instances as web servers in each of the private subnets. Configure the web server instances as the target group for the ALB. Use Amazon EC2 Auto Scaling for the web server instances.
  • B. Deploy an Application Load Balancer (ALB) that is mapped to a public subnet in each Availability Zone for the web tier. Deploy Amazon EC2 instances as web servers in each of the public subnets. Configure the web server instances as the target group for the ALB. Use Amazon EC2 Auto Scaling for the web server instances.
  • C. Deploy a new Application Load Balancer (ALB) to a private subnet in each Availability Zone for the application tier. Deploy Amazon EC2 instances as application servers in each of the private subnets. Configure the application server instances as targets for the new ALB. Configure the web server instances to forward traffic to the new ALB. Use Amazon EC2 Auto Scaling for the application server instances.
  • D. Deploy a new Application Load Balancer (ALB) to a private subnet in each Availability Zone for the application tier. Deploy Amazon EC2 instances as application servers in each of the private subnets. Configure the web server instances to forward traffic to the application server instances. Use Amazon EC2 Auto Scaling for the application server instances.
  • E. Deploy an Amazon RDS Multi-AZ DB instance. Configure the application to target the DB instance.
  • F. Deploy an Amazon RDS Single-AZ DB instance with a read replica in another Availability Zone. Configure the application to target the primary DB instance.
Answer: ACE
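
The resilient pattern is an internet-facing ALB in front of web servers in private subnets, an internal ALB in front of the application tier, and a Multi-AZ RDS instance; having web servers forward directly to application instances (option D) removes load balancing and health checks from that hop. A minimal boto3 sketch of the three pieces, with placeholder subnet IDs and an illustrative engine choice:

```python
import boto3

elbv2 = boto3.client("elbv2")
rds = boto3.client("rds")

# Internet-facing ALB spanning a public subnet in each AZ (web tier).
elbv2.create_load_balancer(
    Name="web-alb",
    Type="application",
    Scheme="internet-facing",
    Subnets=["subnet-pub-a", "subnet-pub-b"],  # placeholder public subnets
)

# Internal ALB spanning a private subnet in each AZ (application tier).
elbv2.create_load_balancer(
    Name="app-alb",
    Type="application",
    Scheme="internal",
    Subnets=["subnet-priv-a", "subnet-priv-b"],  # placeholder private subnets
)

# Multi-AZ RDS instance for automatic database failover.
rds.create_db_instance(
    DBInstanceIdentifier="web-app-db",
    Engine="mysql",                   # engine choice is illustrative
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="CHANGE_ME",   # placeholder; use Secrets Manager in practice
    MultiAZ=True,
)
```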

Question#50

How can an EBS volume that is currently attached to an EC2 instance be migrated from one Availability Zone to another?

  • A. Detach the volume and attach it to another EC2 instance in the other AZ.
  • B. Simply create a new volume in the other AZ and specify the original volume as the source.
  • C. Create a snapshot of the volume, and create a new volume from the snapshot in the other AZ.
  • D. Detach the volume, then use the ec2-migrate-volume command to move it to another AZ.
Answer: C
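
EBS volumes are bound to a single Availability Zone, but EBS snapshots are regional, so the snapshot-and-restore path in option C is the supported approach. A minimal boto3 sketch with a placeholder volume ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshots are regional, so the volume can be re-created in any AZ in the Region.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    Description="migrate to us-east-1b",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Create the new volume from the snapshot in the target Availability Zone.
ec2.create_volume(SnapshotId=snap["SnapshotId"], AvailabilityZone="us-east-1b")
```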
