AWS-SysOps: AWS Certified SysOps Administrator
Question#421

What are characteristics of Amazon S3? (Choose two.)

  • A. Objects are directly accessible via a URL
  • B. S3 should be used to host a relational database
  • C. S3 allows you to store objects of virtually unlimited size
  • D. S3 allows you to store virtually unlimited amounts of data
  • E. S3 offers Provisioned IOPS
Answer: AD
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
Reference:
https://aws.amazon.com/s3/faqs/
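To make the multipart guidance concrete, here is a minimal boto3 sketch (the bucket name, key, and file path are hypothetical, and the threshold and concurrency values are illustrative). TransferConfig tells upload_file to switch to the Multipart Upload API automatically for objects above the configured threshold:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Illustrative settings: switch to multipart upload above 100 MB,
# sending 64 MB parts with up to 8 threads in parallel.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=8,
)

# Hypothetical bucket, key, and local file path.
s3.upload_file("backup.tar", "example-bucket", "backups/backup.tar", Config=config)
```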

Question#422

You receive a frantic call from a new DBA who accidentally dropped a table containing all your customers.
Which Amazon RDS feature will allow you to reliably restore your database to within 5 minutes of when the mistake was made?

  • A. Multi-AZ RDS
  • B. RDS snapshots
  • C. RDS read replicas
  • D. RDS automated backup
Answer: D
Automated backups enable point-in-time recovery of the DB instance to any second within the retention period, typically up to the last five minutes.
Reference:
https://aws.amazon.com/rds/details/#ha
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIT.html
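As a minimal sketch of point-in-time recovery with boto3 (the instance identifiers are hypothetical), UseLatestRestorableTime restores to the most recent restorable point, which is typically within the last five minutes; passing RestoreTime with a datetime instead targets a specific moment:

```python
import boto3

rds = boto3.client("rds")

# Restore into a new instance; RDS does not restore in place.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="customers-db",           # hypothetical name
    TargetDBInstanceIdentifier="customers-db-restored",  # hypothetical name
    UseLatestRestorableTime=True,
)
```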

Question#423

A media company produces new video files on-premises every day, with a total size of around 100 GB after compression. All files have a size of 1-2 GB and need to be uploaded to Amazon S3 every night in a fixed time window between 3 am and 5 am. The current upload takes almost 3 hours, although less than half of the available bandwidth is used.
What step(s) would ensure that the file uploads are able to complete in the allotted time window?

  • A. Increase your network bandwidth to provide faster throughput to S3
  • B. Upload the files in parallel to S3
  • C. Pack all files into a single archive, upload it to S3, then extract the files in AWS
  • D. Use AWS Import/Export to transfer the video files
Answer: B
Uploading the files in parallel (and using multipart upload within each file) makes use of the idle bandwidth without requiring additional network capacity or extra transfer steps.
Reference:
https://aws.amazon.com/blogs/aws/amazon-s3-multipart-upload/
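A minimal sketch of parallel uploads with boto3, assuming the files sit in a hypothetical local directory; each 1-2 GB file is uploaded on its own thread so the transfers share the otherwise idle bandwidth:

```python
import boto3
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

s3 = boto3.client("s3")  # boto3 clients are safe to share across threads

def upload(path: Path) -> None:
    # Hypothetical bucket name and key prefix.
    s3.upload_file(str(path), "example-video-bucket", f"videos/{path.name}")

files = list(Path("/data/videos").glob("*"))  # hypothetical source directory
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(upload, files))
```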

Question#424

You are running a web application on AWS consisting of the following components: an Elastic Load Balancer (ELB), an Auto Scaling group of EC2 instances running Linux/PHP/Apache, and a Relational Database Service (RDS) MySQL instance.
Which security measures fall into AWS's responsibility?

  • A. Protect the EC2 instances against unsolicited access by enforcing the principle of least-privilege access
  • B. Protect against IP spoofing or packet sniffing
  • C. Assure all communication between EC2 instances and ELB is encrypted
  • D. Install the latest security patches on the ELB, RDS, and EC2 instances
Answer: B
Under the AWS shared responsibility model, AWS secures the underlying network infrastructure, which includes protection against IP spoofing and packet sniffing. Least-privilege access control, encryption of traffic between instances and the ELB, and patching of EC2 instances remain the customer's responsibility.
Reference:
https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf

Question#425

You use S3 to store critical data for your company. Several users within your group currently have full permissions to your S3 buckets. You need to come up with a solution that does not impact your users and also protects against the accidental deletion of objects.
Which two options will address this issue? (Choose two.)

  • A. Enable versioning on your S3 Buckets
  • B. Configure your S3 Buckets with MFA delete
  • C. Create a Bucket policy and only allow read only permissions to all users at the bucket level
  • D. Enable object life cycle policies and configure the data older than 3 months to be archived in Glacier
Answer: AB
Versioning allows easy recovery of previous file versions, and MFA Delete requires additional MFA authentication before objects can be deleted. Neither measure impacts the users' current access.
Reference:
http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
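A minimal boto3 sketch of both measures (the bucket name, MFA device ARN, and token are hypothetical). Note that enabling MFA Delete must be done by the root account using its MFA device, and both settings are applied through the same put_bucket_versioning call:

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning so overwritten or deleted objects can be recovered.
s3.put_bucket_versioning(
    Bucket="example-critical-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Enable MFA Delete; the MFA argument is the device serial/ARN,
# a space, and the current token value.
s3.put_bucket_versioning(
    Bucket="example-critical-bucket",
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```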

Question#426

An organization's security policy requires multiple copies of all critical data to be replicated across at least a primary and backup data center. The organization has decided to store some critical data on Amazon S3.
Which option should you implement to ensure this requirement is met?

  • A. Use the S3 copy API to replicate data between two S3 buckets in different regions
  • B. You do not need to implement anything since S3 data is automatically replicated between regions
  • C. Use the S3 copy API to replicate data between two S3 buckets in different facilities within an AWS Region
  • D. You do not need to implement anything since S3 data is automatically replicated between multiple facilities within an AWS Region
Answer: D
You specify a region when you create your Amazon S3 bucket. Within that region, your objects are redundantly stored on multiple devices across multiple facilities. Please refer to Regional Products and Services for details of Amazon S3 service availability by region.
Reference:
https://aws.amazon.com/s3/faqs/

Question#427

You are tasked with setting up a cluster of EC2 instances for a NoSQL database. The database requires random read I/O disk performance of up to 100,000 IOPS at a 4 KB block size per node.
Which of the following EC2 instances will perform the best for this workload?

  • A. A High-Memory Quadruple Extra Large (m2.4xlarge) with EBS-Optimized set to true and a PIOPs EBS volume
  • B. A Cluster Compute Eight Extra Large (cc2.8xlarge) using instance storage
  • C. High I/O Quadruple Extra Large (hi1.4xlarge) using instance storage
  • D. A Cluster GPU Quadruple Extra Large (cg1.4xlarge) using four separate 4000 PIOPS EBS volumes in a RAID 0 configuration
Answer: C
The SSD storage is local to the instance. Using PV virtualization, you can expect 120,000 random read IOPS (Input/Output Operations Per Second) and between 10,000 and 85,000 random write IOPS, both with 4 KB blocks. For HVM and Windows AMIs, you can expect 90,000 random read IOPS and 9,000 to 75,000 random write IOPS.
Reference:
https://aws.amazon.com/blogs/aws/new-high-io-ec2-instance-type-hi14xlarge/

Question#428

When an EC2 EBS-backed (EBS root) instance is stopped, what happens to the data on any ephemeral store volumes?

  • A. Data will be deleted and will no longer be accessible
  • B. Data is automatically saved in an EBS volume.
  • C. Data is automatically saved as an EBS snapshot
  • D. Data is unavailable until the instance is restarted
Answer: A
Data in the instance store is lost under the following circumstances:
  • The underlying disk drive fails
  • The instance stops
  • The instance terminates
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-lifetime

Question#429

Your team is excited about the use of AWS because now they have access to "programmable infrastructure." You have been asked to manage your AWS infrastructure in a manner similar to the way you might manage application code. You want to be able to deploy exact copies of different versions of your infrastructure, stage changes into different environments, revert back to previous versions, and identify what versions are running at any particular time (development, test, QA, production).
Which approach addresses this requirement?

  • A. Use cost allocation reports and AWS OpsWorks to deploy and manage your infrastructure.
  • B. Use AWS CloudWatch metrics and alerts along with resource tagging to deploy and manage your infrastructure.
  • C. Use AWS Elastic Beanstalk and a version control system like Git to deploy and manage your infrastructure.
  • D. Use AWS CloudFormation and a version control system like Git to deploy and manage your infrastructure.
Answer: D
CloudFormation templates are text files that describe your infrastructure as code. Keeping them in a version control system such as Git lets you deploy exact copies of different versions of your infrastructure, stage changes through different environments, revert to previous versions, and identify which version is running at any particular time.
Reference:
https://aws.amazon.com/cloudformation/
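As a minimal sketch of treating infrastructure like application code (the stack name, template file, and parameter are hypothetical), the template lives in version control and each deployment creates a stack from a specific committed revision:

```python
import boto3

# The template file would be tracked in Git alongside application code.
with open("infrastructure.yaml") as f:
    template_body = f.read()

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="web-app-qa",  # hypothetical stack name per environment
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "qa"}],
)
```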

Question#430

You have a server with a 500 GB Amazon EBS data volume. The volume is 80% full. You need to back up the volume at regular intervals and be able to re-create the volume in a new Availability Zone in the shortest time possible. All applications using the volume can be paused for a period of a few minutes with no discernible user impact.
Which of the following backup methods will best fulfill your requirements?

  • A. Take periodic snapshots of the EBS volume
  • B. Use a third party Incremental backup application to back up to Amazon Glacier
  • C. Periodically back up all data to a single compressed archive and archive to Amazon S3 using a parallelized multi-part upload
  • D. Create another EBS volume in a second Availability Zone, attach it to the Amazon EC2 instance, and use a disk manager to mirror the two disks
Answer: A
EBS snapshots are stored regionally (in Amazon S3), so a new volume can be created from a snapshot in any Availability Zone within the region. EBS volumes themselves can only be attached to EC2 instances within the same Availability Zone, which rules out mirroring across AZs.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-restoring-volume.html
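A minimal boto3 sketch of the snapshot-and-restore flow (the volume ID and target Availability Zone are hypothetical); because snapshots are regional, the new volume can be created in any AZ in the region:

```python
import boto3

ec2 = boto3.client("ec2")

# Take a point-in-time snapshot while the applications are briefly paused.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # hypothetical volume ID
    Description="nightly backup",
)

# Wait for the snapshot to complete, then re-create the volume in another AZ.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-1b",  # hypothetical target AZ
)
```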
