AWS DevOps Engineer Professional: AWS DevOps Engineer - Professional (DOP-C01)
Questions 41-50 of 105
Question#41

An Engineering team manages a Node.js e-commerce application. The current environment consists of the following components:
✑ Amazon S3 buckets for storing content
✑ Amazon EC2 for the front-end web servers
✑ AWS Lambda for image processing
✑ Amazon DynamoDB for storing session-related data
The team expects a significant increase in traffic to the site. The application should handle the additional load without interruption. The team ran initial tests by adding new servers to the EC2 front-end to handle the larger load, but the instances took up to 20 minutes to become fully configured. The team wants to reduce this configuration time.
What changes will the Engineering team need to implement to make the solution the MOST resilient and highly available while meeting the expected increase in demand?

  • A. Use AWS OpsWorks to automatically configure each new EC2 instance as it is launched. Configure the EC2 instances by using an Auto Scaling group behind an Application Load Balancer across multiple Availability Zones. Implement Amazon DynamoDB Auto Scaling. Use Amazon Route 53 to point the application DNS record to the Application Load Balancer.
  • B. Deploy a fleet of EC2 instances, doubling the current capacity, and place them behind an Application Load Balancer. Increase the Amazon DynamoDB read and write capacity units. Add an alias record that contains the Application Load Balancer endpoint to the existing Amazon Route 53 DNS record that points to the application.
  • C. Configure Amazon CloudFront and have its origin point to Amazon S3 to host the web application. Implement Amazon DynamoDB Auto Scaling. Use Amazon Route 53 to point the application DNS record to the CloudFront DNS name.
  • D. Use AWS Elastic Beanstalk with a custom AMI including all web components. Deploy the platform by using an Auto Scaling group behind an Application Load Balancer across multiple Availability Zones. Implement Amazon DynamoDB Auto Scaling. Use Amazon Route 53 to point the application DNS record to the Elastic Beanstalk load balancer.
Answer: D
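For reference, the "Implement Amazon DynamoDB Auto Scaling" step that options A, C, and D share is configured through the Application Auto Scaling service. A minimal boto3 sketch, where the table name and capacity bounds are illustrative assumptions:

```python
import boto3

# DynamoDB Auto Scaling is driven by Application Auto Scaling.
autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target
# ("sessions" and the capacity bounds are placeholder values).
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/sessions",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Track 70% read-capacity utilization; DynamoDB scales in and out around it.
autoscaling.put_scaling_policy(
    PolicyName="sessions-read-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/sessions",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```

The same pair of calls with the WriteCapacityUnits dimension covers write throughput.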

Question#42

A company's application development team uses Linux-based Amazon EC2 instances as bastion hosts. Inbound SSH access to the bastion hosts is restricted to specific IP addresses, as defined in the associated security groups. The company's security team wants to receive a notification if the security group rules are modified to allow SSH access from any IP address.
What should a DevOps engineer do to meet this requirement?

  • A. Create an Amazon EventBridge (Amazon CloudWatch Events) rule with a source of aws.cloudtrail and the event name AuthorizeSecurityGroupIngress. Define an Amazon Simple Notification Service (Amazon SNS) topic as the target.
  • B. Enable Amazon GuardDuty and check the security group findings in AWS Security Hub. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule with a custom pattern that matches GuardDuty events with an output of NON_COMPLIANT. Define an Amazon Simple Notification Service (Amazon SNS) topic as the target.
  • C. Create an AWS Config rule by using the restricted-ssh managed rule to check whether security groups disallow unrestricted incoming SSH traffic. Configure automatic remediation to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.
  • D. Enable Amazon Inspector. Include the Common Vulnerabilities and Exposures-1.1 rules package to check the security groups that are associated with the bastion hosts. Configure Amazon Inspector to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.
Answer: C
Reference:
https://docs.aws.amazon.com/config/latest/developerguide/restricted-ssh.html
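For reference, the restricted-ssh managed rule (source identifier INCOMING_SSH_DISABLED) and an SNS-publishing remediation can be wired up with boto3 roughly as follows; the rule name, role ARN, and topic ARN are placeholders:

```python
import boto3

config = boto3.client("config")

# Managed rule that flags security groups allowing unrestricted inbound SSH.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "bastion-restricted-ssh",
        "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::SecurityGroup"]},
    }
)

# Automatic remediation: publish to an SNS topic via the
# AWS-PublishSNSNotification automation runbook.
config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": "bastion-restricted-ssh",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "AWS-PublishSNSNotification",
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            "Parameters": {
                "AutomationAssumeRole": {
                    "StaticValue": {
                        "Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"]
                    }
                },
                "TopicArn": {
                    "StaticValue": {
                        "Values": ["arn:aws:sns:us-east-1:111122223333:sg-alerts"]
                    }
                },
                "Message": {
                    "StaticValue": {
                        "Values": ["SSH is open to 0.0.0.0/0 on a bastion security group"]
                    }
                },
            },
        }
    ]
)
```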

Question#43

A company is using AWS Organizations to create separate AWS accounts for each of its departments. The company needs to automate the following tasks:
✑ Update the Linux AMIs with new patches periodically and generate a golden image
✑ Install the new version of the Chef agent in the golden image, if one is available
✑ Provide the newly generated AMIs to the department's accounts
Which solution meets these requirements with the LEAST management overhead?

  • A. Write a script to launch an Amazon EC2 instance from the previous golden image. Apply the patch updates. Install the new version of the Chef agent, generate a new golden image, and then modify the AMI permissions to share only the new image with the department's accounts.
  • B. Use Amazon EC2 Image Builder to create an image pipeline that consists of the base Linux AMI and components to install the Chef agent. Use AWS Resource Access Manager to share EC2 Image Builder images with the department's accounts.
  • C. Use an AWS Systems Manager Automation runbook to update the Linux AMI by using the previous image. Provide the URL for the script that will update the Chef agent. Use AWS Organizations to replace the previous golden image in the department's accounts.
  • D. Use Amazon EC2 Image Builder to create an image pipeline that consists of the base Linux AMI and components to install the Chef agent. Create a parameter in AWS Systems Manager Parameter Store to store the new AMI ID that can be referenced by the department's accounts.
Answer: A
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html
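For reference, the explicit AMI sharing described in the linked page (the final step of option A) is a single API call; the image ID and account ID below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Grant launch permission on the new golden AMI to one department account
# (placeholder IDs; repeat or loop for each department account).
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",
    LaunchPermission={"Add": [{"UserId": "111122223333"}]},
)
```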

Question#44

A company has an application that runs on 12 Amazon EC2 instances. The instances run in an Amazon EC2 Auto Scaling group across three Availability Zones.
On a typical day, each EC2 instance has 30% CPU utilization during business hours and 10% CPU utilization after business hours. The CPU utilization increases suddenly in the first few minutes of business hours each day. Other increases in CPU utilization are gradual. A DevOps engineer needs to optimize costs while maintaining or improving the application's reliability.
Which solution meets these requirements?

  • A. Configure a target tracking scaling policy that is based on the Auto Scaling group's average CPU utilization, and set a target of 75%. Create a scheduled action for the Auto Scaling group to adjust the desired capacity to six instances just before business hours begin.
  • B. Configure the Auto Scaling group with two scheduled actions for Amazon EC2 Auto Scaling. Configure one action to start nine EC2 instances at the start of business hours. Configure the other action to stop nine instances at the end of business hours.
  • C. Change to an AWS Application Auto Scaling group. Configure a target tracking scaling policy that is based on the Auto Scaling group's average CPU utilization, and set a target of 75%. Create a scheduled action for the Auto Scaling group to adjust the minimum number of instances to three instances at the end of business hours and to reset the number to six instances before business hours begin.
  • D. Change to an AWS Application Auto Scaling group. Configure a target tracking scaling policy that is based on the Auto Scaling group's average CPU utilization, and set a target of 75%. Create a scheduled action to terminate nine instances each evening at the end of business hours.
Answer: D
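For reference, the two mechanisms these options combine, a target tracking policy on average CPU and a scheduled capacity change around business hours, look roughly like this in boto3; the group name and schedule are illustrative assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: keep the group's average CPU near 75%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",  # placeholder group name
    PolicyName="cpu-target-75",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 75.0,
    },
)

# Scheduled action: raise desired capacity shortly before business hours
# (08:45 UTC on weekdays is an assumed schedule).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="app-asg",
    ScheduledActionName="pre-business-hours",
    Recurrence="45 8 * * MON-FRI",
    DesiredCapacity=6,
)
```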

Question#45

A company runs an application in a hybrid configuration that spans Amazon EC2 instances and on-premises servers. A DevOps Engineer needs to standardize patching across both environments. Company policy dictates that patching happens only during non-business hours.
Which combination of actions will meet these requirements? (Choose three.)

  • A. Add the physical machines into AWS Systems Manager using Systems Manager Hybrid Activations.
  • B. Attach an IAM role to the EC2 instances, allowing them to be managed by AWS Systems Manager.
  • C. Create IAM access keys for the on-premises machines to interact with AWS Systems Manager.
  • D. Execute an AWS Systems Manager Automation document to patch the systems every hour.
  • E. Use Amazon CloudWatch Events scheduled events to schedule a patch window.
  • F. Use AWS Systems Manager Maintenance Windows to schedule a patch window.
Answer: A, B, F
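For reference, the hybrid activation (choice A) and the maintenance window (choice F) can be sketched with boto3; the role name and schedule are assumptions:

```python
import boto3

ssm = boto3.client("ssm")

# Hybrid activation: the returned activation code and ID are used to
# register the on-premises machines with Systems Manager (choice A).
activation = ssm.create_activation(
    Description="On-premises patching fleet",
    IamRole="SSMServiceRole",  # assumed service role name
    RegistrationLimit=20,
)

# Maintenance window during non-business hours, e.g. 02:00 UTC daily (choice F).
window = ssm.create_maintenance_window(
    Name="non-business-hours-patching",
    Schedule="cron(0 2 ? * * *)",
    Duration=3,   # window length in hours
    Cutoff=1,     # stop starting new tasks 1 hour before the window ends
    AllowUnassociatedTargets=False,
)
```

A Run Command task that invokes AWS-RunPatchBaseline would then be registered against the window's targets.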

Question#46

A company has many applications. Different teams in the company developed the applications by using multiple languages and frameworks. The applications run on premises and on different servers with different operating systems. Each team has its own release protocol and process. The company wants to reduce the complexity of the release and maintenance of these applications.
The company is migrating its technology stacks, including these applications, to AWS. The company wants centralized control of source code, a consistent and automatic delivery pipeline, and as few maintenance tasks as possible on the underlying infrastructure.
What should a DevOps engineer do to meet these requirements?

  • A. Create one AWS CodeCommit repository for all applications. Put each application's code in a different branch. Merge the branches, and use AWS CodeBuild to build the applications. Use AWS CodeDeploy to deploy the applications to one centralized application server.
  • B. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time. Use AWS CodeDeploy to deploy the applications to one centralized application server.
  • C. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time to create one AMI for each server. Use AWS CloudFormation StackSets to automatically provision and decommission Amazon EC2 fleets by using these AMIs.
  • D. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build one Docker image for each application in Amazon Elastic Container Registry (Amazon ECR). Use AWS CodeDeploy to deploy the applications to Amazon Elastic Container Service (Amazon ECS) on infrastructure that AWS Fargate manages.
Answer: B
Reference:
https://towardsdatascience.com/ci-cd-logical-and-practical-approach-to-build-four-step-pipeline-on-aws-3f54183068ec
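For reference, the per-application repositories that options B through D start from amount to a simple loop in boto3; the application names are illustrative:

```python
import boto3

codecommit = boto3.client("codecommit")

# One CodeCommit repository per application (names are placeholders).
for app in ("orders", "catalog", "billing"):
    codecommit.create_repository(
        repositoryName=f"{app}-app",
        repositoryDescription=f"Source for the {app} application",
    )
```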

Question#47

A DevOps engineer is developing an application for a company. The application persists files to Amazon S3 and must upload files with different security classifications that the company defines: confidential, private, and public. Files that have a confidential classification must not be viewable by anyone other than the user who uploaded them. The application uses the IAM role of the user to call the S3 API operations.
The DevOps engineer has modified the application to add a DataClassification tag with the value of confidential and an Owner tag with the uploading user's ID to each confidential object that is uploaded to Amazon S3.
Which set of additional steps must the DevOps engineer take to meet the company's requirements?

  • A. Modify the S3 bucket's ACL to grant bucket-owner-read access to the uploading user's IAM role. Create an IAM policy that grants s3:GetObject operations on the S3 bucket when aws:ResourceTag/DataClassification equals confidential, and s3:ExistingObjectTag/Owner equals ${aws:userid}. Attach the policy to the IAM roles for users who require access to the S3 bucket.
  • B. Modify the S3 bucket policy to allow the s3:GetObject action when aws:ResourceTag/DataClassification equals confidential, and s3:ExistingObjectTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
  • C. Modify the S3 bucket policy to allow the s3:GetObject action when aws:ResourceTag/DataClassification equals confidential, and aws:RequestTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
  • D. Modify the S3 bucket's ACL to grant authenticated-read access when aws:ResourceTag/DataClassification equals confidential, and s3:ExistingObjectTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
Answer: B
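For reference, a minimal sketch of the bucket policy statement from option B. Object tags are matched with the s3:ExistingObjectTag/&lt;key&gt; condition key, so the sketch uses it for both the classification and the owner check; the bucket name and account ID are placeholders:

```python
import json

import boto3

s3 = boto3.client("s3")

BUCKET = "classified-uploads"  # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OwnerOnlyConfidentialReads",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # placeholder
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringEquals": {
                    # Only confidential objects owned by the caller are readable.
                    "s3:ExistingObjectTag/DataClassification": "confidential",
                    "s3:ExistingObjectTag/Owner": "${aws:userid}",
                }
            },
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```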

Question#48

A company has developed an AWS Lambda function that handles orders received through an API. The company is using AWS CodeDeploy to deploy the Lambda function as the final stage of a CI/CD pipeline.
A DevOps Engineer has noticed there are intermittent failures of the ordering API for a few seconds after deployment. After some investigation, the DevOps Engineer believes the failures are due to database changes not having fully propagated before the Lambda function begins executing.
How should the DevOps Engineer overcome this?

  • A. Add a BeforeAllowTraffic hook to the AppSpec file that tests and waits for any necessary database changes before traffic can flow to the new version of the Lambda function
  • B. Add an AfterAllowTraffic hook to the AppSpec file that forces traffic to wait for any pending database changes before allowing the new version of the Lambda function to respond
  • C. Add a BeforeInstall hook to the AppSpec file that tests and waits for any necessary database changes before deploying the new version of the Lambda function
  • D. Add a ValidateService hook to the AppSpec file that inspects incoming traffic and rejects the payload if dependent services, such as the database, are not yet ready
Answer: A
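For reference, a BeforeAllowTraffic hook is itself a Lambda function named in the AppSpec file's Hooks section; it reports back to CodeDeploy with PutLifecycleEventHookExecutionStatus. A minimal sketch, where the readiness check is a placeholder:

```python
import boto3

codedeploy = boto3.client("codedeploy")

def database_is_ready():
    # Placeholder: poll a schema-version table or run a smoke query here.
    return True

def handler(event, context):
    """BeforeAllowTraffic hook: block traffic shifting to the new Lambda
    version until the database changes have fully propagated."""
    status = "Succeeded" if database_is_ready() else "Failed"
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status=status,
    )
```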

Question#49

A development team is building a full-stack serverless web application. The serverless application will consist of a backend REST API and a front end that is built with a single-page application (SPA) framework.
The team wants to use a Git-based workflow to develop and deploy the application. The team has created an AWS CodeCommit repository to store the application code. The team wants to use multiple development branches to test new features. In addition, the team wants to ensure that code changes on the development branches are deployed to the different development environments. Code changes to the main branch must be released automatically to production.
The development deployments must be available as a subdomain of the main application website, which is hosted in an Amazon Route 53 public hosted zone.
What should a DevOps engineer do to meet these requirements?

  • A. Create an application in the AWS Amplify console, and connect the CodeCommit repository. Create a feature branch deployment for each of the environments. Connect the Route 53 domain to the application. Activate the automatic creation of subdomains.
  • B. Create a single AWS CodePipeline pipeline that uses the CodeCommit repository as a source. Configure the pipeline so that it deploys to different environments based on the changed branch. Create an AWS Lambda function that creates a new subdomain based on the source branch name. Invoke the Lambda function in the deployment workflow.
  • C. Create an application in AWS Elastic Beanstalk that uses the CodeCommit repository as a source. Configure Elastic Beanstalk so that it creates a new application environment based on the changed branch. Connect the Route 53 domain to the application. Activate the automatic creation of subdomains.
  • D. Create multiple AWS CodePipeline pipelines that use the CodeCommit repository as a source. Configure each pipeline so that it deploys to a specific environment based on the configured branch. Configure an AWS CodeDeploy step in the pipeline to deploy the application components and to create the Route 53 public hosted zone.
Answer: D
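For reference, a hypothetical helper for the subdomain requirement: upserting a branch-named record in the existing Route 53 public hosted zone. The domain, zone ID, and endpoint are all assumptions:

```python
import boto3

route53 = boto3.client("route53")

def create_branch_subdomain(branch, target_dns, hosted_zone_id):
    """Upsert e.g. feature-x.example.com pointing at a deployment endpoint
    (domain name, zone ID, and endpoint are placeholders)."""
    route53.change_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": f"{branch}.example.com",
                        "Type": "CNAME",
                        "TTL": 300,
                        "ResourceRecords": [{"Value": target_dns}],
                    },
                }
            ]
        },
    )
```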

Question#50

A company is using AWS CodePipeline to deploy an application. According to a new guideline, a member of the company's security team must sign off on any application changes before the changes are deployed into production. The approval must be recorded and retained.
Which combination of actions will meet these requirements? (Choose two.)

  • A. Configure CodePipeline to write actions to Amazon CloudWatch Logs.
  • B. Configure CodePipeline to write actions to an Amazon S3 bucket at the end of each pipeline stage.
  • C. Create an AWS CloudTrail trail to deliver logs to Amazon S3.
  • D. Create a CodePipeline custom action to invoke an AWS Lambda function for approval. Create a policy that gives the security team access to manage CodePipeline custom actions.
  • E. Create a CodePipeline manual approval action before the deployment step. Create a policy that grants the security team access to approve manual approval stages.
Answer: C, E
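For reference, the security reviewer's sign-off on a manual approval action (choice E) is the PutApprovalResult API call, which CloudTrail (choice C) records and delivers to S3 for retention. A sketch with placeholder names; the token comes from GetPipelineState:

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Approve the manual approval action (pipeline, stage, and action names
# are placeholders; the token is read from get_pipeline_state).
codepipeline.put_approval_result(
    pipelineName="orders-pipeline",
    stageName="SecurityApproval",
    actionName="SecuritySignOff",
    result={"summary": "Change reviewed and approved.", "status": "Approved"},
    token="approval-token-from-get-pipeline-state",
)
```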
