AWS Certified Solutions Architect - Professional
Question#51

After launching an instance that you intend to serve as a NAT (Network Address Translation) device in a public subnet, you modify your route tables so that the NAT device is the target of internet-bound traffic from your private subnet. When you try to make an outbound connection to the internet from an instance in the private subnet, you are not successful.
Which of the following steps could resolve the issue?

  • A. Disabling the Source/Destination Check attribute on the NAT instance
  • B. Attaching an Elastic IP address to the instance in the private subnet
  • C. Attaching a second Elastic Network Interface (ENI) to the NAT instance, and placing it in the private subnet
  • D. Attaching a second Elastic Network Interface (ENI) to the instance in the private subnet, and placing it in the public subnet
Answer: A
Reference:
http://docs.aws.amazon.com/workspaces/latest/adminguide/gsg_create_vpc.html
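The Source/Destination Check behind answer A can be illustrated with a toy model (this is not an AWS API call; the IP addresses are made up): an instance with the check enabled drops any packet it is neither the source nor the destination of, which is exactly what a NAT instance must not do, since it forwards traffic on behalf of other instances.

```python
def nat_forwards(packet_src, packet_dst, nat_ip, source_dest_check):
    """Return True if the NAT instance will forward the packet.

    Toy model of the EC2 Source/Destination Check attribute: with the
    check enabled, the instance only handles traffic that it is itself
    the source or destination of.
    """
    if source_dest_check:
        return nat_ip in (packet_src, packet_dst)
    return True  # Check disabled: instance may forward third-party traffic.

# A private-subnet host (10.0.1.5) reaching the internet (93.184.216.34)
# through a NAT instance at 10.0.0.10:
print(nat_forwards("10.0.1.5", "93.184.216.34", "10.0.0.10", True))   # False: dropped
print(nat_forwards("10.0.1.5", "93.184.216.34", "10.0.0.10", False))  # True: forwarded
```

This is why disabling the check (option A) is required: the NAT instance's route-table entry alone is not enough if the instance itself refuses to forward packets addressed to other hosts.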

Question#52

Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of spot EC2 instances. Files submitted by your premium customers must be transformed with the highest priority.
How should you implement such a system?

  • A. Use a DynamoDB table with an attribute defining the priority level. Transformation instances will scan the table for tasks, sorting the results by priority level.
  • B. Use Route 53 latency-based routing to send high priority tasks to the closest transformation instances.
  • C. Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue.
  • D. Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.
Answer: C
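The polling order in answer C can be sketched with plain in-memory deques standing in for the two SQS queues (queue names and file names are illustrative, not part of any real API):

```python
from collections import deque

# Stand-ins for the two SQS queues from answer C.
high_priority = deque()     # premium customers' tasks
default_priority = deque()  # everyone else's tasks

def poll_next_task():
    """Poll the high-priority queue first; fall back to the default queue."""
    if high_priority:
        return high_priority.popleft()
    if default_priority:
        return default_priority.popleft()
    return None  # both queues are empty

default_priority.append("standard-file.csv")
high_priority.append("premium-file.csv")

print(poll_next_task())  # premium-file.csv (always served first)
print(poll_next_task())  # standard-file.csv
```

With real SQS, each "poll" would be a `ReceiveMessage` call against the high-priority queue's URL, falling back to the default queue on an empty response; the ordering logic is the same.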

Question#53

Which of the following are characteristics of Amazon VPC subnets? (Choose two.)

  • A. Each subnet spans at least 2 Availability Zones to provide a high-availability environment.
  • B. Each subnet maps to a single Availability Zone.
  • C. CIDR block mask of /25 is the smallest range supported.
  • D. By default, all subnets can route between each other, whether they are private or public.
  • E. Instances in a private subnet can communicate with the Internet only if they have an Elastic IP.
Answer: BD

Question#54

In AWS, which security aspects are the customer's responsibility? (Choose four.)

  • A. Security Group and ACL (Access Control List) settings
  • B. Decommissioning storage devices
  • C. Patch management on the EC2 instance's operating system
  • D. Life-cycle management of IAM credentials
  • E. Controlling physical access to compute resources
  • F. Encryption of EBS (Elastic Block Storage) volumes
Answer: ACDF

Question#55

When you put objects in Amazon S3, what is the indication that an object was successfully stored?

  • A. An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.
  • B. Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted.
  • C. A success code is inserted into the S3 object metadata.
  • D. Each S3 account has a special bucket named _s3_logs. Success codes are written to this bucket with a timestamp and checksum.
Answer: A
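The client-side check behind answer A can be sketched as follows. The `response` dict below simulates an S3 `PutObject` result for illustration; for a simple (non-multipart) PUT, the ETag S3 returns is the hex MD5 digest of the object bytes, so comparing it with a locally computed digest confirms the stored data matches what was sent.

```python
import hashlib

payload = b"data to transform"
local_md5 = hashlib.md5(payload).hexdigest()

# Simulated S3 PutObject response (a real call would go through an SDK).
response = {
    "HTTPStatusCode": 200,
    "ETag": hashlib.md5(payload).hexdigest(),
}

# Upload is confirmed when the status code is 200 AND the ETag matches
# the MD5 checksum computed before the upload.
upload_ok = (response["HTTPStatusCode"] == 200
             and response["ETag"] == local_md5)
print(upload_ok)  # True
```

Note that multipart uploads use a different ETag format, so this simple comparison only applies to single-part PUTs.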

Question#56

Within the IAM service, a GROUP is regarded as:

  • A. A collection of AWS accounts
  • B. It's the group of EC2 machines that gain the permissions specified in the GROUP.
  • C. There's no GROUP in IAM, but only USERS and RESOURCES.
  • D. A collection of users.
Answer: D
Use groups to assign permissions to IAM users
Instead of defining permissions for individual IAM users, it's usually more convenient to create groups that relate to job functions (administrators, developers, accounting, etc.), define the relevant permissions for each group, and then assign IAM users to those groups. All the users in an IAM group inherit the permissions assigned to the group. That way, you can make changes for everyone in a group in just one place. As people move around in your company, you can simply change what IAM group their IAM user belongs to.
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#use-groups-for-permissions
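The inheritance described above can be modeled in a few lines: a user's effective permissions are the union of the permissions attached to every group they belong to. Group and action names below are illustrative only.

```python
# Toy model of IAM group inheritance (not a real IAM policy evaluation).
group_permissions = {
    "developers": {"ec2:StartInstances", "ec2:StopInstances"},
    "accounting": {"billing:ViewBilling"},
}

def effective_permissions(user_groups):
    """A user inherits the union of the permissions of all their groups."""
    perms = set()
    for group in user_groups:
        perms |= group_permissions.get(group, set())
    return perms

# Moving a user between groups changes their permissions in one place:
print(sorted(effective_permissions(["developers"])))
print(sorted(effective_permissions(["developers", "accounting"])))
```

This is the practical payoff of answer D: changing a group's policy, or a user's group membership, updates every affected user at once.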

Question#57

Amazon EC2 provides a repository of public data sets that can be seamlessly integrated into AWS cloud-based applications.
What is the monthly charge for using the public data sets?

  • A. A one-time charge of $10 for all the datasets.
  • B. $1 per dataset per month
  • C. $10 per month for all the datasets
  • D. There is no charge for using the public data sets
Answer: D

Question#58

In the Amazon RDS Oracle DB engine, the Database Diagnostic Pack and the Database Tuning Pack are only available with __________.

  • A. Oracle Standard Edition
  • B. Oracle Express Edition
  • C. Oracle Enterprise Edition
  • D. None of these
Answer: C
Reference:
https://blog.pythian.com/a-most-simple-cloud-is-amazon-rds-for-oracle-right-for-you/

Question#59

A 3-tier e-commerce web application is currently deployed on-premises, and will be migrated to AWS for greater scalability and elasticity. The web tier currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes.
Which AWS storage and database architecture meets the requirements of the application?

  • A. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
  • B. Web servers: store read-only data in an EC2 NFS server, mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
  • C. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
  • D. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
Answer: A
Amazon Glacier doesn't suit all storage situations. Listed following are a few storage needs for which you should consider other AWS storage options instead of Amazon Glacier. Data that must be updated very frequently might be better served by a storage solution with lower read/write latencies, such as Amazon EBS, Amazon RDS, Amazon DynamoDB, or relational databases running on EC2.
Reference:
https://d0.awsstatic.com/whitepapers/Storage/AWS%20Storage%20Services%20Whitepaper-v9.pdf

Question#60

A user is running a batch process on EBS-backed EC2 instances. The batch process launches a few EC2 instances to process Hadoop MapReduce jobs, which can run between 50 and 600 minutes, or sometimes even longer. The user wants a configuration that can terminate the instance only when the process is completed.
How can the user configure this with CloudWatch?

  • A. Configure a job which terminates all instances after 600 minutes
  • B. It is not possible to terminate instances automatically
  • C. Configure the CloudWatch action to terminate the instance when the CPU utilization falls below 5%
  • D. Set up the CloudWatch with Auto Scaling to terminate all the instances
Answer: C
An Amazon CloudWatch alarm watches a single metric over a time period that the user specifies and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The user can set up an action that terminates the instances when their CPU utilization stays below a certain threshold for a certain period of time. The EC2 alarm action can either stop or terminate the instance.
Reference:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/UsingAlarmActions.html
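The alarm behavior described above can be modeled with a small simulation (the threshold of 5% comes from answer C; the evaluation-period count below is an assumed example value, not something the question specifies):

```python
def should_terminate(cpu_samples, threshold=5.0, periods=3):
    """Toy model of a CloudWatch alarm with an EC2 terminate action.

    The alarm fires only after `periods` CONSECUTIVE samples fall below
    `threshold`, so a momentary dip during the batch job does not kill
    the instance prematurely.
    """
    consecutive = 0
    for cpu in cpu_samples:
        consecutive = consecutive + 1 if cpu < threshold else 0
        if consecutive >= periods:
            return True
    return False

# Busy MapReduce job, then sustained idle once processing completes:
print(should_terminate([80, 95, 60, 2, 1, 1]))   # True: 3 idle periods in a row
print(should_terminate([80, 2, 90, 2, 85, 3]))   # False: never 3 in a row
```

Requiring several consecutive breaching periods is what makes this safe for jobs of unpredictable length: the instance is terminated only once the sustained low CPU shows the batch process has actually finished.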
