AWS DevOps Engineer Professional: AWS DevOps Engineer - Professional (DOP-C01)
Page 2 of 11 (Questions 11-20 of 105)
Question#11

An n-tier application requires a table in an Amazon RDS MySQL DB instance to be dropped and repopulated at each deployment. This process can take several minutes and the web tier cannot come online until the process is complete. Currently, the web tier is configured in an Amazon EC2 Auto Scaling group, with instances being terminated and replaced at each deployment. The MySQL table is populated by running a SQL query through an AWS CodeBuild job.
What should be done to ensure that the web tier does not come online before the database is completely configured?

  • A. Use Amazon Aurora as a drop-in replacement for RDS MySQL. Use snapshots to populate the table with the correct data.
  • B. Modify the launch configuration of the Auto Scaling group to pause user data execution for 600 seconds, allowing the table to be populated.
  • C. Use AWS Step Functions to monitor and maintain the state of data population. Mark the database in service before continuing with the deployment.
  • D. Use an EC2 Auto Scaling lifecycle hook to pause the configuration of the web tier until the table is populated.
Answer: D
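
As an illustration of option D, a launch lifecycle hook holds new web-tier instances in the Pending:Wait state until the CodeBuild job signals that the table has been repopulated. Below is a minimal boto3 sketch, run as the final step of the CodeBuild job; the hook name wait-for-db-seed and group name web-tier-asg are hypothetical.

    import boto3

    ASG_NAME = "web-tier-asg"       # hypothetical Auto Scaling group name
    HOOK_NAME = "wait-for-db-seed"  # hypothetical launch lifecycle hook

    autoscaling = boto3.client("autoscaling")

    # Release every instance held in Pending:Wait now that the
    # CodeBuild job has finished repopulating the MySQL table.
    groups = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME]
    )
    for instance in groups["AutoScalingGroups"][0]["Instances"]:
        if instance["LifecycleState"] == "Pending:Wait":
            autoscaling.complete_lifecycle_action(
                LifecycleHookName=HOOK_NAME,
                AutoScalingGroupName=ASG_NAME,
                LifecycleActionResult="CONTINUE",
                InstanceId=instance["InstanceId"],
            )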

Question#12

A highly regulated company has a policy that DevOps Engineers should not log in to their Amazon EC2 instances except in emergencies. If a DevOps Engineer does log in, the Security team must be notified within 15 minutes of the occurrence.
Which solution will meet these requirements?

  • A. Install the Amazon Inspector agent on each EC2 instance. Subscribe to Amazon CloudWatch Events notifications. Trigger an AWS Lambda function to check if a message is about user logins. If it is, send a notification to the Security team using Amazon SNS.
  • B. Install the Amazon CloudWatch agent on each EC2 instance. Configure the agent to push all logs to Amazon CloudWatch Logs and set up a CloudWatch metric filter that searches for user logins. If a login is found, send a notification to the Security team using Amazon SNS.
  • C. Set up AWS CloudTrail with Amazon CloudWatch Logs. Subscribe CloudWatch Logs to Amazon Kinesis. Attach AWS Lambda to Kinesis to parse and determine if a log contains a user login. If it does, send a notification to the Security team using Amazon SNS.
  • D. Set up a script on each Amazon EC2 instance to push all logs to Amazon S3. Set up an S3 event to trigger an AWS Lambda function, which triggers an Amazon Athena query to run. The Athena query checks for logins and sends the output to the Security team using Amazon SNS.
Answer: B
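
For option B, the metric filter and alarm can be created with boto3. A minimal sketch, assuming the CloudWatch agent ships /var/log/secure to a hypothetical log group and that an SNS topic for the Security team already exists (the topic ARN below is a placeholder).

    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")

    # Hypothetical log group the CloudWatch agent pushes SSH logs to.
    LOG_GROUP = "/ec2/var/log/secure"

    # Count successful SSH logins as a custom metric.
    logs.put_metric_filter(
        logGroupName=LOG_GROUP,
        filterName="ssh-logins",
        filterPattern='"Accepted publickey"',
        metricTransformations=[{
            "metricName": "UserLogins",
            "metricNamespace": "Security",
            "metricValue": "1",
        }],
    )

    # Alarm on any login; a 5-minute period keeps notification
    # comfortably inside the 15-minute requirement.
    cloudwatch.put_metric_alarm(
        AlarmName="ec2-user-login",
        Namespace="Security",
        MetricName="UserLogins",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-team"],
    )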

Question#13

A DevOps engineer has automated a web service deployment by using AWS CodePipeline with the following steps:
1. An AWS CodeBuild project compiles the deployment artifact and runs unit tests.
2. An AWS CodeDeploy deployment group deploys the web service to Amazon EC2 instances in the staging environment.
3. A CodeDeploy deployment group deploys the web service to EC2 instances in the production environment.
The quality assurance (QA) team requests permission to inspect the build artifact before the deployment to the production environment occurs. The QA team wants to run an internal penetration testing tool to conduct manual tests. The tool will be invoked by a REST API call.
Which combination of actions should the DevOps engineer take to fulfill this request? (Choose two.)

  • A. Insert a manual approval action between the test actions and deployment actions of the pipeline.
  • B. Modify the buildspec.yml file for the compilation stage to require manual approval before completion.
  • C. Update the CodeDeploy deployment groups so that they require manual approval to proceed.
  • D. Update the pipeline to directly call the REST API for the penetration testing tool.
  • E. Update the pipeline to invoke a Lambda function that calls the REST API for the penetration testing tool.
Answer: AE
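
For the Lambda action in option E, the function receives the CodePipeline job, calls the penetration-testing tool's REST API, and reports the result back so the pipeline can proceed or fail. A minimal sketch; the endpoint URL and request payload are hypothetical.

    import json
    import urllib.request

    import boto3

    codepipeline = boto3.client("codepipeline")

    # Hypothetical endpoint of the internal penetration-testing tool.
    PENTEST_API = "https://pentest.internal.example.com/scan"

    def handler(event, context):
        job_id = event["CodePipeline.job"]["id"]
        try:
            request = urllib.request.Request(
                PENTEST_API,
                data=json.dumps({"target": "staging"}).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(request, timeout=30)
            # Tell the pipeline the action succeeded.
            codepipeline.put_job_success_result(jobId=job_id)
        except Exception as exc:
            codepipeline.put_job_failure_result(
                jobId=job_id,
                failureDetails={"type": "JobFailed", "message": str(exc)},
            )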

Question#14

A DevOps Engineer manages a large commercial website that runs on Amazon EC2. The website uses Amazon Kinesis Data Streams to collect and process web logs. The DevOps Engineer manages the Kinesis consumer application, which also runs on Amazon EC2.
Sudden increases of data cause the Kinesis consumer application to fall behind, and the Kinesis data streams drop records before the records can be processed.
The DevOps Engineer must implement a solution to improve stream handling.
Which solution meets these requirements with the MOST operational efficiency?

  • A. Modify the Kinesis consumer application to store the logs durably in Amazon S3. Use Amazon EMR to process the data directly on Amazon S3 to derive customer insights. Store the results in Amazon S3.
  • B. Horizontally scale the Kinesis consumer application by adding more EC2 instances based on the Amazon CloudWatch GetRecords.IteratorAgeMilliseconds metric. Increase the retention period of the Kinesis Data Streams.
  • C. Convert the Kinesis consumer application to run as an AWS Lambda function. Configure the Kinesis Data Streams as the event source for the Lambda function to process the data streams.
  • D. Increase the number of shards in the Kinesis Data Streams to increase the overall throughput so that the consumer application processes data faster.
Answer: B
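
A sketch of option B in boto3: raise the stream's retention period so records survive a processing lag, and scale the consumer fleet out when GetRecords.IteratorAgeMilliseconds shows it is falling behind. The stream name, group name, and thresholds are hypothetical.

    import boto3

    kinesis = boto3.client("kinesis")
    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    STREAM = "web-logs"                 # hypothetical stream name
    CONSUMER_ASG = "kinesis-consumers"  # hypothetical consumer fleet

    # Keep records longer than the 24-hour default so a lagging
    # consumer no longer loses them.
    kinesis.increase_stream_retention_period(
        StreamName=STREAM, RetentionPeriodHours=72
    )

    # Add consumer instances whenever the iterator age grows.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName=CONSUMER_ASG,
        PolicyName="scale-on-iterator-age",
        PolicyType="SimpleScaling",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=2,
    )
    cloudwatch.put_metric_alarm(
        AlarmName="consumer-falling-behind",
        Namespace="AWS/Kinesis",
        MetricName="GetRecords.IteratorAgeMilliseconds",
        Dimensions=[{"Name": "StreamName", "Value": STREAM}],
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=3,
        Threshold=60000,  # one minute behind
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )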

Question#15

A company uses AWS Organizations to manage multiple accounts. Information security policies require that all unencrypted Amazon EBS volumes be marked as non-compliant. A DevOps engineer needs to automatically deploy the solution and ensure that this compliance check is always present.
Which solution will accomplish this?

  • A. Create an AWS CloudFormation template that defines an AWS Inspector rule to check whether EBS encryption is enabled. Save the template to an Amazon S3 bucket that has been shared with all accounts within the company. Update the account creation script pointing to the CloudFormation template in Amazon S3.
  • B. Create an AWS Config organizational rule to check whether EBS encryption is enabled and deploy the rule using the AWS CLI. Create and apply an SCP to prohibit stopping and deleting AWS Config across the organization.
  • C. Create an SCP in Organizations. Set the policy to prevent the launch of Amazon EC2 instances without encryption on the EBS volumes using a conditional expression. Apply the SCP to all AWS accounts. Use Amazon Athena to analyze the AWS CloudTrail output, looking for events that deny an ec2:RunInstances action.
  • D. Deploy an IAM role to all accounts from a single trusted account. Build a pipeline with AWS CodePipeline with a stage in AWS Lambda to assume the IAM role, and list all EBS volumes in the account. Publish a report to Amazon S3.
Answer: B
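
Option B can be scripted from the organization's management account: deploy the managed ENCRYPTED_VOLUMES rule as an organization Config rule, then attach an SCP that denies tampering with AWS Config. A minimal sketch; the rule name, policy name, and the exact set of denied actions are illustrative.

    import json

    import boto3

    config = boto3.client("config")  # management account credentials
    orgs = boto3.client("organizations")

    # Deploy the managed rule to every account in the organization.
    config.put_organization_config_rule(
        OrganizationConfigRuleName="ebs-volumes-encrypted",
        OrganizationManagedRuleMetadata={
            "RuleIdentifier": "ENCRYPTED_VOLUMES",
            "Description": "Marks unencrypted EBS volumes as NON_COMPLIANT",
        },
    )

    # SCP that keeps member accounts from disabling the check.
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": [
                "config:StopConfigurationRecorder",
                "config:DeleteConfigurationRecorder",
                "config:DeleteConfigRule",
                "config:DeleteOrganizationConfigRule",
            ],
            "Resource": "*",
        }],
    }
    orgs.create_policy(
        Name="deny-config-tampering",
        Description="Prevent stopping or deleting AWS Config",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )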

Question#16

A company develops and maintains a web application using Amazon EC2 instances and an Amazon RDS for SQL Server DB instance in a single Availability Zone. The resources need to run only when new deployments are being tested using AWS CodePipeline. Testing occurs one or more times a week and each test takes 2-3 hours to run. A DevOps engineer wants a solution that does not change the architecture components.
Which solution will meet these requirements in the MOST cost-effective manner?

  • A. Convert the RDS database to an Amazon Aurora Serverless database. Use an AWS Lambda function to start and stop the EC2 instances before and after tests.
  • B. Put the EC2 instances into an Auto Scaling group. Schedule scaling to run at the start of the deployment tests.
  • C. Replace the EC2 instances with EC2 Spot Instances and the RDS database with an RDS Reserved Instance.
  • D. Subscribe Amazon CloudWatch Events to CodePipeline to trigger AWS Systems Manager Automation documents that start and stop all EC2 and RDS instances before and after deployment tests.
Answer: D
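
A sketch of option D's start/stop handlers, built on the AWS-managed Automation runbooks for EC2 and RDS. The functions would be wired to a CloudWatch Events rule on the pipeline's state changes; all resource IDs are hypothetical.

    import boto3

    ssm = boto3.client("ssm")

    EC2_IDS = ["i-0123456789abcdef0"]  # hypothetical web-tier instance IDs
    RDS_ID = "test-sqlserver"          # hypothetical DB instance identifier

    def start_test_environment():
        # Bring the environment up just before a deployment test.
        for instance_id in EC2_IDS:
            ssm.start_automation_execution(
                DocumentName="AWS-StartEC2Instance",
                Parameters={"InstanceId": [instance_id]},
            )
        ssm.start_automation_execution(
            DocumentName="AWS-StartRdsInstance",
            Parameters={"InstanceId": [RDS_ID]},
        )

    def stop_test_environment():
        # Shut everything back down once the test finishes.
        for instance_id in EC2_IDS:
            ssm.start_automation_execution(
                DocumentName="AWS-StopEC2Instance",
                Parameters={"InstanceId": [instance_id]},
            )
        ssm.start_automation_execution(
            DocumentName="AWS-StopRdsInstance",
            Parameters={"InstanceId": [RDS_ID]},
        )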

Question#17

The Development team has grown substantially in recent months and so has the number of projects that use separate code repositories. The current process involves configuring AWS CodePipeline manually. There have been service limit alerts regarding the number of Amazon S3 buckets that exist.
Which pipeline option will reduce S3 bucket sprawl alerts?

  • A. Combine the multiple separate code repositories into a single one, and deploy using an AWS CodePipeline that has logic for each project.
  • B. Create new pipelines by using the AWS API or AWS CLI, and configure them to use a single S3 bucket with separate prefixes for each project.
  • C. Create a new pipeline in a different region for each project to bypass the service limits for S3 buckets in a single region.
  • D. Create a new pipeline and S3 bucket for each project by using the AWS API or AWS CLI to bypass the service limits for S3 buckets in a single account.
Answer: B
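
A sketch of option B: create each new pipeline programmatically and point its artifact store at one shared S3 bucket; CodePipeline keys each pipeline's artifacts under its own prefix within that bucket. The bucket name is hypothetical, and the stage definitions are passed in by the caller.

    import boto3

    codepipeline = boto3.client("codepipeline")

    # One shared artifact bucket reused by every project's pipeline.
    SHARED_BUCKET = "company-pipeline-artifacts"  # hypothetical

    def create_project_pipeline(project_name, role_arn, stages):
        codepipeline.create_pipeline(
            pipeline={
                "name": f"{project_name}-pipeline",
                "roleArn": role_arn,
                "artifactStore": {"type": "S3", "location": SHARED_BUCKET},
                "stages": stages,
            }
        )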

Question#18

A company runs several applications across multiple AWS accounts in an organization in AWS Organizations. Some of the resources are not tagged properly and the company's finance team cannot determine which costs are associated with which applications. A DevOps engineer must remediate this issue and prevent this issue from happening in the future.
Which combination of actions should the DevOps engineer take to meet these requirements? (Choose two.)

  • A. Activate the user-defined cost allocation tags in each AWS account.
  • B. Create and attach an SCP that requires a specific tag.
  • C. Define each line of business (LOB) in AWS Budgets. Assign the required tag to each resource.
  • D. Scan all accounts with Tag Editor. Assign the required tag to each resource.
  • E. Use the budget report to find untagged resources. Assign the required tag to each resource.
Answer: BD
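
A sketch of the two chosen actions: an SCP that denies launching untagged EC2 instances (prevention), and a scan for resources missing the tag via the Resource Groups Tagging API, which is what Tag Editor is built on (remediation). The tag key cost-center is illustrative.

    import json

    import boto3

    orgs = boto3.client("organizations")
    tagging = boto3.client("resourcegroupstaggingapi")

    # Deny EC2 launches that do not carry the cost-allocation tag.
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"Null": {"aws:RequestTag/cost-center": "true"}},
        }],
    }
    orgs.create_policy(
        Name="require-cost-center-tag",
        Description="Deny untagged EC2 launches",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )

    # List existing resources that still lack the tag.
    paginator = tagging.get_paginator("get_resources")
    for page in paginator.paginate():
        for resource in page["ResourceTagMappingList"]:
            tags = {t["Key"] for t in resource.get("Tags", [])}
            if "cost-center" not in tags:
                print("Untagged:", resource["ResourceARN"])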

Question#19

An online company uses Amazon EC2 Auto Scaling extensively to provide an excellent customer experience while minimizing the number of running EC2 instances. The company's self-hosted Puppet environment in the application layer manages the configuration of the instances. The IT manager wants the lowest licensing costs and wants to ensure that whenever the EC2 Auto Scaling group scales down, removed EC2 instances are deregistered from the Puppet master as soon as possible.
How can the requirement be met?

  • A. At instance launch time, use EC2 user data to deploy the AWS CodeDeploy agent. Use CodeDeploy to install the Puppet agent. When the Auto Scaling group scales out, run a script to register the newly deployed instances to the Puppet master. When the Auto Scaling group scales in, use the EC2 Auto Scaling EC2_INSTANCE_TERMINATING lifecycle hook to trigger de-registration from the Puppet master.
  • B. Bake the AWS CodeDeploy agent into the base AMI. When the Auto Scaling group scales out, use CodeDeploy to install the Puppet agent, and execute a script to register the newly deployed instances to the Puppet master. When the Auto Scaling group scales in, use the CodeDeploy ApplicationStop lifecycle hook to run a script to de-register the instance from the Puppet master.
  • C. At instance launch time, use EC2 user data to deploy the AWS CodeDeploy agent. When the Auto Scaling group scales out, use CodeDeploy to install the Puppet agent, and run a script to register the newly deployed instances to the Puppet master. When the Auto Scaling group scales in, use the EC2 user data instance stop script to run a script to de-register the instance from the Puppet master.
  • D. Bake the AWS Systems Manager agent into the base AMI. When the Auto Scaling group scales out, use the AWS Systems Manager to install the Puppet agent, and run a script to register the newly deployed instances to the Puppet master. When the Auto Scaling group scales in, use the Systems Manager instance stop lifecycle hook to run a script to de-register the instance from the Puppet master.
Answer: A
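
A sketch of option A's scale-in side: a Lambda function triggered by the EC2_INSTANCE_TERMINATING lifecycle hook event deregisters the node on the Puppet master (here via Run Command) and then releases the instance for termination. It assumes certnames match EC2 instance IDs; the master's instance ID and the exact Puppet commands may differ in your environment.

    import boto3

    autoscaling = boto3.client("autoscaling")
    ssm = boto3.client("ssm")

    PUPPET_MASTER_ID = "i-0abc123def456789a"  # hypothetical master instance

    def handler(event, context):
        # Triggered by the EC2_INSTANCE_TERMINATING lifecycle hook event.
        detail = event["detail"]
        certname = detail["EC2InstanceId"]  # assumes certname == instance ID

        # Deactivate and clean the node on the Puppet master.
        ssm.send_command(
            InstanceIds=[PUPPET_MASTER_ID],
            DocumentName="AWS-RunShellScript",
            Parameters={"commands": [
                f"puppet node deactivate {certname}",
                f"puppetserver ca clean --certname {certname}",
            ]},
        )

        # Let the Auto Scaling group finish terminating the instance.
        autoscaling.complete_lifecycle_action(
            LifecycleHookName=detail["LifecycleHookName"],
            AutoScalingGroupName=detail["AutoScalingGroupName"],
            LifecycleActionToken=detail["LifecycleActionToken"],
            LifecycleActionResult="CONTINUE",
        )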

Question#20

A company uses a series of individual Amazon CloudFormation templates to deploy its multi-Region applications. These templates must be deployed in a specific order. The company is making more changes to the templates than previously expected and wants to deploy new templates more efficiently. Additionally, the data engineering team must be notified of all changes to the templates.
What should the company do to accomplish these goals?

  • A. Create an AWS Lambda function to deploy the CloudFormation templates in the required order. Use stack policies to alert the data engineering team.
  • B. Host the CloudFormation templates in Amazon S3. Use Amazon S3 events to directly trigger CloudFormation updates and Amazon SNS notifications.
  • C. Implement CloudFormation StackSets and use drift detection to trigger update alerts to the data engineering team.
  • D. Leverage CloudFormation nested stacks and stack sets for deployments. Use Amazon SNS to notify the data engineering team.
Answer: D
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html
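
A sketch of option D: a parent template declares the child templates as AWS::CloudFormation::Stack resources, so one deployment drives every template in order, and an SNS topic attached to the stack notifies the data engineering team of every stack event. The template URL and topic ARN are hypothetical; multi-Region rollout of the same parent would use stack sets.

    import boto3

    cloudformation = boto3.client("cloudformation")

    # Hypothetical parent template (which nests the child templates)
    # and notification topic for the data engineering team.
    PARENT_TEMPLATE_URL = "https://s3.amazonaws.com/templates/parent.yaml"
    SNS_TOPIC = "arn:aws:sns:us-east-1:123456789012:data-engineering"

    cloudformation.create_stack(
        StackName="multi-region-app",
        TemplateURL=PARENT_TEMPLATE_URL,
        NotificationARNs=[SNS_TOPIC],
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )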
